Well, yes. What happens in Haskell is that type errors get caught at compile time, and they stop you from running anything at all.
The real question is “how useful is a half-working program?”.
Is a program that works 90% of the time and fails on certain data or values more or less useful than a program that works 0% of the time because it won’t compile?
For some applications, sure. Failure at runtime is such a bad thing that you’d rather have no program at all than one which blows up at runtime.
For other applications, the reverse might be true.
There’s no single answer that suits all applications. That’s why we have different languages that make different trade-offs.
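To make the trade-off concrete, here’s a minimal Haskell sketch (the function names are mine, purely illustrative): a partial function that type-checks but can still blow up at runtime on certain values, next to a total version whose type forces every caller to handle the failure case before the program can run at all.

```haskell
-- Hypothetical illustration: both functions compile, but only one
-- can fail at runtime.

-- Partial: type-checks fine, yet crashes on certain values (the empty list).
firstItem :: [a] -> a
firstItem (x:_) = x
firstItem []    = error "firstItem: empty list"

-- Total: the Maybe in the type forces callers to deal with failure
-- up front -- the "fix it now" cost that static typing imposes.
safeFirst :: [a] -> Maybe a
safeFirst (x:_) = Just x
safeFirst []    = Nothing

main :: IO ()
main = do
  print (safeFirst [1, 2, 3 :: Int])  -- Just 1
  print (safeFirst ([] :: [Int]))     -- Nothing
```

Note that even Haskell lets `firstItem` through: the compiler rules out a class of bugs, not all of them, so the question is always which failures you pay for at compile time and which at runtime.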
So the real question about “should I write my server in Haskell or Clojure?” is not “How many lines of code is it?”
It’s “what is its failure profile? How catastrophic or costly are certain kinds of bugs, and how much is it worth paying the extra cost of being forced to fix all of them before I can get any part of the program working?”
The trade-off between static and dynamic typing should be weighed like the other trade-offs, optimizations and so on. In a sense, static typing is like premature optimization: it forces you to fix certain bugs before you may actually need to.
That’s the hidden cost on the flip side of the “if it compiles, it works” claim. Sure, but it won’t be compiling for a while yet. The cost of dynamic typing shows up as runtime bugs. The cost of static typing shows up as the opportunity cost of code that never reached a state where it could run at all.