My school work has finally produced something worth showing off. For a "Program Analysis" course, Pavel Kalvoda and I have made a new proof-of-concept program analysis tool for Haskell. I will not go into how exactly the tool works, but rather how you can use it.

## Patterns and background

At a very high level, Haskell's argument patterns let us define the value of function applications for different inputs. Take the following function as an example:

```
f :: Bool -> Bool -> Int
f False True  = 1
f False False = 2
f False _     = 3
```

The result of this function, when given input of the form `f False True`, is 1. On the other hand, if the input is of the form `f False False`, then the result is 2.

First, notice that the last clause, namely `f False _ = 3`, is *redundant*. It will never be selected, and removing it will not change the semantics of the function.

Second, notice that these patterns are *not exhaustive*: there are inputs for which no clause can be selected. In this case, `f True False` would be one such example.
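To see the non-exhaustiveness at runtime, we can trigger and catch the resulting `PatternMatchFail` (a small sketch, repeating the function so the snippet is self-contained):

```haskell
import Control.Exception (PatternMatchFail, evaluate, try)

f :: Bool -> Bool -> Int
f False True  = 1
f False False = 2
f False _     = 3

main :: IO ()
main = do
  print (f False True)  -- covered by the first clause: prints 1
  -- No clause matches f True False, so evaluation throws PatternMatchFail:
  r <- try (evaluate (f True False)) :: IO (Either PatternMatchFail Int)
  case r of
    Left _  -> putStrLn "no clause matched f True False"
    Right n -> print n
```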

Next, observe the following example:

```
g :: Bool -> Bool -> Int
g _ True    = 1
g True True = 2
g _ False   = 3
```

Now, notice that the second clause will never be selected for evaluation, because anything that would be selected by the second clause will already have been selected by the first clause. However, removing the second clause would change the semantics of the function, so it is not redundant. In particular, the evaluation of `g undefined False` will diverge, while without the second clause it would not. We say that the second clause has an *inaccessible right-hand side*.
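A small sketch makes the difference observable (`g'` is a hypothetical variant of `g` with the second clause removed):

```haskell
import Control.Exception (SomeException, evaluate, try)

g :: Bool -> Bool -> Int
g _ True    = 1
g True True = 2   -- inaccessible right-hand side
g _ False   = 3

-- The same function with the second clause removed.
g' :: Bool -> Bool -> Int
g' _ True  = 1
g' _ False = 3

main :: IO ()
main = do
  -- The second clause of g forces the first argument, so this diverges:
  r <- try (evaluate (g undefined False)) :: IO (Either SomeException Int)
  putStrLn (either (const "diverged") show r)
  -- No clause of g' forces the first argument, so this prints 3:
  print (g' undefined False)
```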

Non-exhaustiveness, redundancy and inaccessible right-hand sides are three issues that usually indicate a problem with the code. Non-exhaustiveness and redundancy were already addressed by GHC (with `-Wall`), but recently a paper appeared ("GADTs Meet Their Match") that addresses all three of these problems *and* immediately covers GADTs and guards as well.

Their approach is parametric in a so-called *oracle* that must be able to decide whether certain constraints are satisfiable. These constraints can be divided into term constraints and type constraints. The authors used a non-trivial type-constraint oracle, namely the GHC type checker, but only a near-trivial term-constraint oracle. This meant that, even though their algorithm allows for the analysis of guards, in practice there was still not much improvement on that front.
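To make the oracle idea concrete, here is a deliberately tiny term-constraint oracle for a single `Int` variable, sketched under invented names (`Cmp`, `Constraint`, `satisfiable`); the oracles in the paper and in our tool are, of course, far more general:

```haskell
-- A toy term constraint on one Int variable: the variable compared to a
-- literal, possibly negated. E.g. "not (a > 0)" is represented as Not (Gt 0).
data Cmp        = Lt Int | Gt Int | Eq Int
data Constraint = Pos Cmp | Not Cmp

eval :: Int -> Cmp -> Bool
eval x (Lt n) = x < n
eval x (Gt n) = x > n
eval x (Eq n) = x == n

holds :: Int -> Constraint -> Bool
holds x (Pos c) = eval x c
holds x (Not c) = not (eval x c)

-- A naive oracle: brute-force search of a small range for a model.
satisfiable :: [Constraint] -> Maybe Int
satisfiable cs = case [ x | x <- [-100 .. 100], all (holds x) cs ] of
  (m:_) -> Just m   -- satisfiable, with a concrete witness
  []    -> Nothing  -- no model in the searched range
```

For instance, `satisfiable [Not (Gt 0), Not (Lt 0)]` yields `Just 0`: the constraint set is satisfiable and `0` is a witness.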

This is where we come into the picture. We reproduced their algorithm and implemented a non-trivial term-constraint oracle to perform a more precise analysis of guards. The resulting code can be found on GitHub. In what follows, I will discuss the tool we built and our results.

## Exhaustiveness, Redundancy, Inaccessibility

First of all, our tool still handles the same problems that GHC 7.10.3 could.

For example, in the case of the following function (the first example from earlier) ...

```
f :: Bool -> Bool -> Int
f False True  = 1
f False False = 2
f False _     = 3
```

... our tool would output the following recommendations:

```
In function f:
The patterns may not be exhaustive, the following clauses are missing:
f True _
The following clause is redundant:
f False _
```

However, our tool can now also do more than that.

For example, consider the following (erroneous) implementation of an absolute value function:

```
abs :: Int -> Int
abs x
  | x < 0 = -x
  | x > 0 = x
```

Our tool will identify the problem, show you under which circumstances it manifests itself, and give an example of such a situation.

```
In function abs:
The patterns may not be exhaustive, the following clauses are missing:
abs a
  Constraints: not (a > 0)
               not (a < 0)
  Satisfiable. Model: a = 0 :: Int
```
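Guided by that witness, one possible fix is to cover the missing case with a catch-all guard (renamed `abs'` here to avoid clashing with the Prelude):

```haskell
abs' :: Int -> Int
abs' x
  | x < 0     = -x
  | otherwise = x   -- `otherwise` also covers the forgotten x == 0 case
```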

It can also figure out if guards are redundant. Consider the following function:

```
bguard :: Bool -> Int
bguard x
  | x         = 0
  | not x     = 1
  | otherwise = 2
```

The tool will figure out that the last guard is redundant:

```
The following clause is redundant:
bguard x | otherwise
```
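Since the guards `x` and `not x` already cover both booleans, the cleaned-up function (renamed here for illustration) can simply drop the last guard:

```haskell
bguard' :: Bool -> Int
bguard' x
  | x     = 0   -- selected for True
  | not x = 1   -- selected for False; together the guards are exhaustive
```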

## Evaluatedness

There is another nice consequence of the way this analysis works: other related analyses become a lot simpler.

We built a proof-of-concept implementation of a tool that tells you how, when and how deeply arguments to a function will be evaluated during pattern-matching.

As an example, consider the `fst` function:

```
fst :: (a, b) -> a
fst (a, _) = a
```

The evaluation of `fst undefined` will diverge, while the evaluation of `fst (undefined, undefined)` will not diverge during pattern matching. This is because the `fst` function only evaluates the topmost constructor (`(,)`) during pattern matching.

Our tool will produce the following output:

```
Evaluatedness of 'fst'
fst a
  a: a
```

This should be read as: "When `fst` is evaluated with an arbitrary argument that we will call `a`, the first constructor of `a` will be evaluated."
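This behaviour is easy to observe directly (a sketch using `Control.Exception`; `fst'` is a local copy of `fst`):

```haskell
import Control.Exception (SomeException, evaluate, try)

fst' :: (a, b) -> a
fst' (a, _) = a

main :: IO ()
main = do
  -- Matching (undefined, undefined) against a pair pattern only needs the
  -- (,) constructor, which is present, so the match itself succeeds:
  ok <- try (evaluate (case ((undefined, undefined) :: (Int, Int)) of
                         (_, _) -> "matched"))
          :: IO (Either SomeException String)
  putStrLn (either (const "diverged") id ok)
  -- Matching undefined against the pair pattern must force the pair itself:
  bad <- try (evaluate (fst' (undefined :: (Int, Int))))
           :: IO (Either SomeException Int)
  putStrLn (either (const "diverged during matching") show bad)
```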

Let's take a more complicated example. Consider the following contrived implementation of a `length` function:

```
length :: [a] -> Int
length [] = 0
length [_] = 1
length (_:_:xs) = 2 + length xs
```

Our tool will compute the following Evaluatedness:

```
length a
  a: a
length []
  []: []
length (b:c)
  (b:c): (_:c)
length [b]
  [b]: [_]
```

This shows that it figures out that the list constructors will always be evaluated up to two levels deep, but that the contents of the list will not be evaluated. Indeed, `length [undefined, undefined]` will simply be evaluated to `2`.
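This is easy to check (a sketch with a local copy `length'`, since `length` clashes with the Prelude):

```haskell
length' :: [a] -> Int
length' []       = 0
length' [_]      = 1
length' (_:_:xs) = 2 + length' xs

main :: IO ()
main =
  -- Only the (:) and [] constructors are forced; the elements never are.
  print (length' [undefined, undefined])  -- prints 2
```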

## Conclusion

This is by no means a production-ready tool, but it was really fun to build. We hope it can nevertheless be of some use, either academically or practically.