I've been diving into Haskell more lately from the Clojure/Script world, and it has a lot to admire. There's so much about it that's daunting though.
Is there an analogy for what this pulls off that the poor common-fp-programmer can understand? Sounds like heterogeneous metaprogramming would let subsets of Haskell (e.g. more restrictive languages) also join in metaprogramming, but I'm not clear on why or how.
Definitely looking forward to learning more from the Haskell community and their amazing projects!
I suppose the really interesting part isn't the final result (the metaprogramming part), but the Coq model of System FC1 and the linking to GHC's Core passes.
Coq is a functional programming language, similar to Haskell, but with a much more powerful ("dependent") type system. These powerful types let us specify all kinds of properties of our code, e.g. instead of "an integer" we might say "a prime between MIN and MAX whose binary representation is a palindrome". Roughly speaking, if you can compute something with a pure fold, you can use it in your types.
Having the compiler check our code in such detail gives us a lot of confidence that our pre/post conditions are satisfied, that tricky calculations have been implemented correctly and that edge-cases are eliminated. This doesn't really matter for CRUD Web apps, but it's exactly what we want for compiler optimisations.
> Roughly speaking, if you can compute something with a pure fold, you can use it in your types.
I don't really understand that statement but it makes me really curious. Can you expand it a bit? (Background: I can read Haskell but I have only a vague idea of dependent types.)
Just like Haskell types can depend on other types, like "Maybe Int" (which contains an Int or "Nothing") or "Either String Bool" (which contains a String or a Bool), dependent types can depend on any value.
This lets us specify very precise types; commonly-used examples are "Fin n", a type which contains "n" members (i.e. "Fin 2" ~= Bool); and "Vect n t", which is a list of exactly "n" elements of type "t" (i.e. "Vect 2 t" ~= (t, t)).
This extra precision comes at the price of verbosity, but it forces us to use our code in correct ways. For example, Haskell's "!!" function looks up elements in lists. Its arguments are of type [a] and Int, so we can easily end up with an Int that's bigger than the list (or even negative!). If we use "Fin n" and "Vect n a" instead, we're guaranteed that there will be an element at that position; otherwise we get a compile error.
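This idea can even be sketched in Haskell itself, which simulates these dependent types at the type level with the DataKinds and GADTs extensions (a minimal illustration, not how Coq or Idris actually define them):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers, promoted to the kind level by DataKinds.
data Nat = Z | S Nat

-- "Vect n a": a list whose length n is tracked in its type.
data Vect (n :: Nat) a where
  VNil  :: Vect 'Z a
  VCons :: a -> Vect n a -> Vect ('S n) a

-- "Fin n": a number strictly less than n (so "Fin 'Z" has no values).
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- A total lookup: the index can never exceed the vector's length, so
-- there is no runtime bounds check and no error case to handle.
index :: Fin n -> Vect n a -> a
index FZ     (VCons x _)  = x
index (FS i) (VCons _ xs) = index i xs

-- index FZ (VCons "a" (VCons "b" VNil))        gives "a"
-- index (FS (FS FZ)) (VCons "a" (VCons "b" VNil))
--   is a compile error: "Fin 2" has no third member.
```

Compare this with "!!", where an out-of-range Int only fails at runtime; here the bad index is unrepresentable.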
We can do this with any pure value, and in fact there is a feature called "type providers" which uses IO to generate types (e.g. reading in database schemas).
One limitation with type-systems is that they can be easily fooled by infinite loops, for example:
loop :: a
loop = loop
This type-checks in Haskell, but it has a completely-general type: we can use it anywhere without the type-checker complaining. This undermines our confidence: we might think we understand what some code will do, based on its type, but the author may have been lazy and just written "loop"!
To prevent this, most dependently-typed languages don't allow infinite loops, at least at compile-time. Hence we can't write any old value. There are various ways to achieve this (syntactically-guarded recursion, structural recursion, termination proofs, co-recursion, etc.) but a good rule of thumb is that if we can implement something using "fold" then we can use it in our types.
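To see why the fold rule of thumb works: a fold consumes exactly one constructor of its (finite) input per step, so it must eventually hit the base case. A small Haskell sketch of the contrast:

```haskell
-- Fold-shaped ("structurally recursive") definitions always terminate
-- on finite input, because each step strips one constructor:
lengthF :: [a] -> Int
lengthF = foldr (\_ n -> 1 + n) 0

-- Even non-trivial checks can be fold-shaped, hence total
-- (reverse is itself a fold):
isPalindrome :: Eq a => [a] -> Bool
isPalindrome xs = xs == reverse xs

-- General recursion carries no such guarantee: nothing structural
-- shrinks here, so a termination checker would reject it.
loop :: a
loop = loop
```

A total language accepts the first two definitions but not the third, which is exactly what lets it trust types as evidence.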
It's generally dangerous to allow non-terminating computation into your types since it means you can no longer trust the logic of your type system (Turing completeness implies logical inconsistency).
One way to limit the completeness of a language is to replace fixed-point recursion (as embodied in the Y combinator, and available in every Turing-complete recursive language) with induction/coinduction. These two concepts "divide general recursion in half", so you can get a lot of work done without actually having completeness/inconsistency.
Haskell for instance has induction via folds and coinduction via unfolds, but since we're allowed to "cross the streams" in folds and unfolds you end up with general recursion and inconsistency.
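A concrete Haskell sketch of that last point: folds consume, unfolds produce, and composing an unfold into a fold ("crossing the streams") recovers general recursion, divergence included:

```haskell
import Data.List (unfoldr)

-- Induction: a fold consumes a finite structure, one constructor at a time.
total :: [Int] -> Int
total = foldr (+) 0

-- Coinduction: an unfold produces a (possibly infinite) structure on demand.
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))

-- Crossing the streams: unfold then fold. Fine here, because countdown
-- happens to produce a finite list...
sumTo :: Int -> Int
sumTo = total . countdown   -- sumTo 5 = 5 + 4 + 3 + 2 + 1

-- ...but nothing in the types prevents an infinite unfold, in which
-- case the fold never finishes. This is the inconsistency:
spin :: Int
spin = total (unfoldr (\n -> Just (n, n)) 1)   -- diverges if evaluated
```

Total languages keep the two halves apart: folds may only consume inductive data, unfolds may only produce coinductive data, and terms like spin are rejected.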