Indeed. Type signatures mean that the code is naturally self-documenting.
My stance on the static/dynamic controversy is that static typing becomes a win as the amount of time spent reading others' code increases. A single programmer is probably going to be more productive in Lisp than in ML. (I don't know enough about Haskell to speak for it.) On the other hand, if I were choosing a base language for a team of 10 people who would all be editing the same code, I'd most likely choose one that's statically typed. Static typing enforces interfaces and allows the compiler to detect a decent share of bugs as well as most code breaks.
(Note: I have only read a fair amount about Haskell; I am by no means an expert.)
In my hobby work with Python, I have absolutely no choice but to read the documentation for a function to know what it does. With dynamic typing, the function's signature tells me nothing. Consider:
def get_console_type(console):
Does this function take a console object? The string name of the console? Its IP address?
In my professional work with C# (well, in this case C), the type signature tells me quite a bit more information.
ConsoleType GetConsoleType(IXboxConsole*)
One step further: does GetConsoleType have any side effects? By naming convention it shouldn't. But it turns out it does: it updates a cache, which can have a major effect on something totally unrelated.
Haskell, being a pure language, has to declare the presence of side effects in the type. I'd have known this right away, without having to look at any documentation (there isn't any!). Furthermore, what if the function weren't actually called GetConsoleType? What if it were called GetXboxType?
I really wished I could have searched for "All functions that return ConsoleType"...
Not all ML/Haskell functions are semantically transparent based on the type signature, but type signatures are really useful.
You might wonder what function you use to find the length of a list in ML. The function obviously has to exist, but there are a number of things it could be called (length? count? size?). At the toploop, you type "module L = List;;" and the toploop prints the module's signature. You see that a function called length exists with type signature 'a list -> int, and you know that's the function you're looking for.
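A quick OCaml sketch of that workflow (the comment shows the relevant line the toploop would print):

```ocaml
(* Aliasing the module at the toploop prints its full signature,
   which includes:  val length : 'a list -> int  *)
module L = List

(* That one signature is enough to identify the function we want. *)
let () = assert (L.length [10; 20; 30] = 3)
```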
Other functions' type signatures give a great indication of what the functions do. For example,
List.map : ('a -> 'b) -> 'a list -> 'b list
List.filter : ('a -> bool) -> 'a list -> 'a list
List.filter_map : ('a -> 'b option) -> 'a list -> 'b list
All of these do the most intuitive thing that a function with that type signature should do. Obviously, not all functions can indicate their semantics through their type signature. For example, you might have one called partition with the following signature:
partition : ('a -> bool) -> 'a list -> ('a list) * ('a list)
You don't know whether the "true" list appears first or second in the returned tuple, but you can easily check this at the toploop.
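A one-line check at the toploop settles it; in OCaml's standard library the satisfying elements come first:

```ocaml
(* The first list holds the elements the predicate accepts,
   the second holds the rest. *)
let evens, odds = List.partition (fun x -> x mod 2 = 0) [1; 2; 3; 4]
(* evens = [2; 4], odds = [1; 3] *)
```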
When you're reading other peoples' code, static typing can be a huge win. If the writer of the code used the type system properly, this cuts your read-work by 80%, even without any other documentation.