Why do you prefer Clojure over Haskell?


#21

To be honest, I did not find it hard to set up either Clojure or Haskell for development. CIDER and Intero cover pretty much all my REPL-driven needs and more.

I have been studying Haskell in my spare time while doing Clojure/ClojureScript for work during the day. I can’t say I really prefer one over the other right now. I love how I can easily switch from CLJ to CLJS without losing context, because it always feels like the code is “the same”. My next target, when I am confident enough with Haskell, is to approach PureScript and see how it compares to ClojureScript in terms of a comfortable programming context.

While I dig Lisp, I am really enjoying Haskell’s syntax and I am starting to see the benefits of types. However, until I do a real project with Haskell (i.e. a web app that can mirror the beauty of re-frame/bidi/http-kit/etc.), it’s hard for me to compare the languages.


#22

I don’t see this problem with Clojure at all.


#23

What do you think about the Haskell Book?


#24

That’s the book I am studying. I am near the end of Chapter 10, and so far it’s one of the best IT books I have bought: clear, rich in detail and exercises, wonderfully written.


#25

I like it. It takes you very slowly through the progression of abstractions in Haskell and shows how they really fit together.


#26

On the topic of comparing Haskell and Clojure, I thought this blog post from Eric Normand, concentrating on static types and what programming in each language is like, was pretty insightful. He has worked professionally with both languages, whereas I get the impression that many people have a lot of experience in one language and not much in the other (I myself am in this category).

https://lispcast.com/clojure-and-types/


#27

Which book is the Haskell book?


#28

Thanks for the link @timgilbert I’ll check it out


#29

Great link, a recommended read.

I love the puzzle argument. It aligns perfectly with the research on the productivity of languages and language paradigms I’ve read about. It was found that people using dynamically typed languages took less time, but when interviewed, the people using statically typed languages felt like the type system made them more productive, and they wanted to keep using static types, while the dynamic-language users felt either neutral or felt they were slowed down by having to chase down type errors.

This means static typing often makes you feel smart and productive, and I think it’s that puzzle solving: it’s rewarding and fun, but it’s not progress towards your real problem. In a dynamic world, you always feel a little more lost; you’ve got to explore and think harder about what is what, but you’re always focused on the real problem. You might feel confused, but if measured, you are still more productive and turn out equally good software, i.e., an equal number of defects in the resulting product.

And that’s my biggest issue with static vs dynamic: no one is being practical and honest about the benefits. Many people say it’s safety, to have fewer bugs. Alright, but that’s never held up in empirical studies. In fact, Clojure outperformed many statically typed languages, ranking in the top 3 for lowest defect rate.

Here are things that empirically demonstrate real defect-reduction value:

  • Runtime type checking
  • Strong typing
  • Functional programming
  • Unit tests
  • Integration tests
  • Performance tests
  • Generative tests
  • Fuzzing tests
  • Declarative programming
  • Automated memory management (like GC)
  • Continuous integration, delivery and deployment
  • Code reviews
  • Immutability
  • Formal verification

Clojure offers all of these, except the last one. Though formal verification is often orthogonal, and proved in a parallel language.
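To make the generative-testing bullet concrete, here is a minimal sketch using Clojure’s test.check library (the property shown is my own toy example, not from the discussion above):

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; Property: sorting is idempotent -- sorting an already-sorted
;; vector changes nothing. test.check generates the inputs for us.
(def sort-idempotent
  (prop/for-all [v (gen/vector gen/small-integer)]
    (= (sort v) (sort (sort v)))))

;; Try the property against 100 randomly generated vectors;
;; on failure, test.check shrinks to a minimal counterexample.
(tc/quick-check 100 sort-idempotent)
```

The interesting part is the shrinking: instead of a single hand-picked example, a failing run hands you a near-minimal input that violates the property.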

You’re better off adding one of these to your development process than going for static typing, if safer software is what you’re after.

Alright, so what else are static types beneficial for? Some say they make things easier to read, but I’ve always found this a fallacious argument. How easy a program’s code is to read and understand is not a relevant metric. All that matters is speed of delivery, cost, reliability, safety, performance and scale. No one pays you to have an easier time. What matters is how quickly you can add features to an acceptable level of quality, at an acceptable price, for the given usage, where quality means the metrics above: reliability, availability, safety, performance, scale, completeness and user acceptance.

In that respect, my search for empirical data also only showed me that static types make you less productive, and the only quality metric they affect is performance. Clojure has static types for performance, in the form of type hints.
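For readers who haven’t seen them, Clojure’s type hints look like this; they exist to avoid reflection (i.e. for performance), not to enforce correctness:

```clojure
;; Ask the compiler to warn whenever it falls back to reflection.
(set! *warn-on-reflection* true)

;; Without ^String, the .toUpperCase interop call would be resolved
;; reflectively at runtime; the hint lets the compiler emit a direct
;; method call instead. Passing a non-string still only fails at runtime.
(defn shout ^String [^String s]
  (.toUpperCase s))
```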

This is the puzzle thing again. You personally might feel like a code base is messy, hard to follow and understand. But how much time (not mental effort) did it really take you to add the feature, or fix the reported issue, and does it work?

Again, I feel that with static types you have a false impression that you follow along more easily. But it’s because you quickly figure out what types things are and how they connect. Yet you haven’t understood anything of the architecture, logic and behaviour of the code apart from its type interactions. This feeds into your “easy” impression, but probably doesn’t affect your productivity at all.

I’m not against static types, but I’m honest about their value. They are fun to play with, and I love learning about their cleverness and beauty. I’m unsure they add anything more, and if they did, it must be a very small benefit, almost within the margin of error.

Now Rust is king for me among the statically typed languages. Statically proving proper memory management, while avoiding the overhead of a runtime GC, is a real concrete benefit. Proving the absence of data races in parallel code is another. This kind of defect prevention is much more practical and beneficial from my perspective than realizing 10 minutes earlier that you passed a string to a function expecting an int.


#30

While I agree with the general gist of your comment, I totally disagree with this statement. The general consensus is that you read code four times as often as you write it. For me personally (having worked on one product for more than 7 years now), the number is even higher.
It is very important to be able to read and understand code as fast as possible, especially in the case of bugfixes in large codebases. I often encounter a bug where I debug and trace through hundreds of lines of code until I understand exactly what is going on; I then write a test, and the fix usually turns out to be less than 10 lines of code.

But again, I don’t think static typing is of much help there. Good abstractions and interfaces are more important.


#31

Only if it results in the bug being fixed more quickly.

Imagine a chef cooking a rack of lamb. His way of doing it is way harder than my way of doing it, and his rack of lamb will turn out way better than mine, yet my way of cooking it is really easy. He’ll probably still cook his faster than I cook mine. Another beginner cook like me might find he can only follow my way of cooking it, and so he can assist me in doing so. The chef needs equally talented assistants; my beginner cook assistant would just be lost, and slow the chef down.

I believe this is true in software engineering as well. There is worse and better code, but it is not made worse because it is harder for you to understand it given your level of understanding.

Now, say you have a business, a fast-food chain: you don’t hire a chef but a bunch of teenagers, you make your recipes easy, throw a lot of them in the kitchen, and you’ve got a pretty great business model. Some companies do business like that, and it’s fine. In that case, you could argue “easy” is a legitimate metric, because you need to be able to hire easily, have engineers ramp up quickly, and keep costs down. But in other circumstances, I wouldn’t say easy is a good metric.

Easy can be an indirect influencer of productivity, even for an expert. But my point is: why not measure the real metric, how productive you are? That you found it hard to understand does not mean it took you longer to fix the bug, or add the feature you needed.

And static types don’t really make it easier to understand the architecture, logic and behavior of a piece of code, only easier to understand the types. My claim is that this does not result in any productivity gains, even if you feel like it does. If you measured yourself, it might even slow you down, because you focused more on the types than on the design or algorithm at hand.

That’s basically my criticism of “easy to read”. I know it’s controversial, but for now I stand by it. I welcome further thought into it, though.

EDIT: Also, I’m talking about “easy”, not about “simple”. Simple is definitely better; I’d expect the chef’s cooking to be simpler than mine. I might throw too many spices at it, try stupid things that don’t work out, decide to stuff my rack of lamb with jam, or something like that. I’ll add a bunch of incidental complexity, but not the chef; he/she knows what matters, and how to combine things to perfection. :wink:


#32

Clojure makes programming fun again!

Everyone I share this with inevitably conflates ‘fun’ with ‘toy language’, but that isn’t the case - I use it for serious work stuff.


#33

http://haskellbook.com/


#34

Your comparison does not really make sense. Yes, an experienced chef will cook better food than an inexperienced chef.
Yes, an experienced developer will write better code than an inexperienced developer.
But that is tangential to the readability of code.

(defn register-user [email password db]
  (let [user {:email email :password password}
        email-valid (validate email)
        password-valid (validate password)]
    (if (and email-valid password-valid)
      (do (jdbc/insert db :user_table ["insert into ...."])
          (session/put :email email)
          (response/response {:ok :ok}))
      (response/response {:error "invalid password/email"}))))

This is a bit of a contrived example, but compare it to this function:

(defn register-user [email password db]
  (let [user (create-user-map email password)]
    (if-let [invalid-message (validate user)]
      (response/response {:error invalid-message})
      (do (insert-user-into-table db user)
          (login-to-session user)
          (response/response {:ok :ok})))))

For me the second version is more readable than the first one. The point is that whenever I write some code, I try to pack it into a function with a name that makes sense. So whenever I read the function name, I know what it does, given the context. I don’t care how it does it, only that it happens.
(session/put :email email) vs (login-to-session user)
The first function call has to be parsed and understood by my brain. It’s two steps: understand what happens, then put it into context.
The second function tells me what it does, not how or where.
It might seem obvious for this simple case, but in large codebases this is actually a hard task to achieve, and it is so worth it.
Good code can be read like a book, even by an inexperienced programmer, as long as they know the idioms of the programming language.

Also, I said in my last paragraph that types don’t help there, so we agree about types, but not about the importance of readability.


#35

I am wondering if people here have tried core.typed? I think that besides the purity-through-IO-monads aspect of Haskell, a lot of the type-safety aspects can be captured by it. Yet Clojurists do not even seem to consider adding optional typing to their libraries. I am guilty there as well, but I think there could be a core part of the language and libraries that can be type checked à la carte.

While I like what I have seen from Haskell so far in general (I did some toy exercises), I do not like that it is a top-down, all-or-nothing proposition. E.g. pattern matching and typing are very much mixed in with the syntax, currying and so on. I find it much more difficult to understand exactly how complex expressions compose than with Clojure’s S-expressions. The syntax is definitely not hard, but for me Haskell is very opinionated about a lot of things that you have to accept.
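For anyone who has not tried it, core.typed annotations are opt-in per var and checked on demand, roughly like this (a sketch against the clojure.core.typed API; the namespace and function names are mine, and exact annotation syntax varies between versions):

```clojure
(ns example.typed
  (:require [clojure.core.typed :as t]))

;; Annotate only the vars you care about...
(t/ann greeting [t/Str -> t/Str])
(defn greeting [name]
  (str "Hello, " name))

;; ...then run the checker a la carte, only on namespaces that opt in:
(comment
  (t/check-ns 'example.typed))
```

Unannotated code elsewhere keeps running as plain Clojure, which is exactly the gradual, bottom-up story Haskell does not offer.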


#36

I think my issue is that by labeling something as “easier to read”, I do not know what benefit that actually brings me, apart from making me feel less exhausted after having read it. If there was a bug in your code, would I really fix it more quickly in the second example? Does writing code like this lower the defect rate?

What makes code “easier to read”, is one of two things:

  1. That it uses techniques, paradigms and constructs I’m more familiar with.
  2. That it better contextualizes what the code does, within the problem domain it operates in.

You can claim #1 makes you more productive: it saves you ramp-up time, and you don’t have to learn and practice as many things. That’s Go’s selling point. The time you save by not having to learn as many techniques, you put towards a real-world problem instead. You just finished learning monads; I wrote a web server. The other side says that learning more techniques is a good ROI, as it will make you so much more productive that it makes up for the time spent learning them. I fall in the latter camp, but I do not have hard data on this.

For #2, the claim is that this context is necessary to know so that you can add features or fix bugs later on. Static type annotations fall into this one. The time it would take you to figure out that context without any hint from the code, in the form of type annotations, variable names, comments, documentation, etc., would be longer than the time it took to add it in the first place. Thus it’s a worthwhile investment, and will make you more productive eventually. I think one challenge here is that you don’t know which context will really help, and which is not as useful. Maybe contextualizing the types helps? Maybe it’s not really helpful?

Now I feel your example doesn’t fully fit within either of these. Maybe a little in #2, since you’ve given more context about the creation of a user map and the validation of a user. I think your example has changed more than just how easy it is to read, though. You’ve restructured the logic; it’s modeled differently now. You grouped the validation of a user together, and the creation of a user map together. That’s just better design. I’m not sure better design should count as “easier to read”. Maybe it makes solutions more obvious. So yeah, I guess you could count it, but if you meant better design, that’s not what I meant when I said “easier to read”. I was talking about #1 or #2 purely.

Thus, within my frame of #1 and #2, I’ve never been able to find empirical proof for or against them. That’s why I tend to go with a more middle-ground approach. Not everything needs to be commented, named or annotated. Having “map” in the name of a parameter to let you know it’s a map is useful context sometimes. Having to type everything might be over-contextualizing; it doesn’t seem worth the effort. Naming things a, b, c, … is probably too little context. Etc.

When it comes to design, I value that immensely, though that’s another topic that lacks a lot of empirical data. When it comes to code factoring, I thus also tend to go for a compromise strategy: it’s spaghetti code versus lasagna code versus ravioli code. I’d probably land somewhere near your second example.

For those interested: http://wiki.c2.com/?PastaCode


#37

In order to explain why I prefer Clojure over Haskell, I’ll have to give a bit of background…

I started programming in the 70’s, via programmable calculators, and Algol 60 (via a correspondence course!), then BASIC, Pascal, assembler (various types). At university in the early 80’s I learned about a dozen languages because I enjoyed seeing how different languages approached the same problem. That dozen included traditional ones like FORTRAN and COBOL as well as more esoteric ones like APL and Prolog. My final year undergraduate project was to develop an APL interpreter from scratch in Pascal. My best friend wrote a Lisp interpreter from scratch and I did a lot of testing for him – my first real exposure to Lisp.

I went on to do three years of Ph.D. research on “the design and implementation of functional programming languages”. I first built a LispKit interpreter in Pascal, then designed and implemented an ML-like language on top of that (inspired by W.H.Burge’s book). I experimented with pattern matching, type inference (at one point using a type inference engine written in Prolog!). I worked with ML, Miranda, SASL, and various other languages that were the precursors to Haskell.

When the committee got together and created Haskell, I was very hopeful that it would quickly come to dominate the industry – but it was clear that academia was the focus and no serious effort was likely to be made to productize and package Haskell in a way that industry could really leverage.

My first exposure to C++ was early in '92 and that’s what I spent most of a decade using in the industry. I was an active member of X3J16 (the ANSI C++ Standards Committee) for most of that decade as well. Then I switched to Java (starting in '97).

I kept an eye on Haskell and kept tinkering with it over the years but it was always a fragile and frustrating creature with quirky tooling and a mish-mash of libraries, so it never seemed realistic to build production systems with.

That situation has changed somewhat “lately”. Tooling has improved (but is still somewhat archaic). The library ecosystem has also improved (but is still a very distant way behind either the JVM or the JS ecosystems).

By a quirk of employment, I became a ColdFusion (CFML) programmer for a while as well – I worked at Macromedia when they bought Allaire – so I was mostly a Java + CFML user there. I quit after Adobe bought Macromedia and found myself using Groovy + CFML for a while at another company, then joined my current employer and initially tried Scala + CFML.

But then I discovered Clojure – Amit Rathore (Clojure in Action) offered a Saturday workshop nearby for $200 and it was money well-spent: I was hooked. I introduced Clojure at work and cross-trained my team. They didn’t like/understand the Scala code but they found Clojure to be a lot of fun.

And that’s how, today, we are a pure Clojure-on-the-back-end company. Clojure makes programming fun!

So what about Haskell? I still find its tooling, its library ecosystem, and its type-system to be immensely frustrating if I actually want to build something serious. To be honest, I experienced some of that frustration with Scala too. And Java. Looking back over the decades, I really think that, deep down, I just don’t enjoy working with static type systems and “fussy” compilers.

Clojure has such a raw immediacy with the REPL, and such an extremely polymorphic approach to data, that it mostly just gets out of my way and lets me “get shit done”.

I gave my background so you can see that I’ve had a long exposure to lots of different languages – and I’ve been a language designer (and also a compiler writer – I co-developed one of the first ANSI-validated C compilers and I’ve also written most of a C++ compiler front end, as well as various “fringe” languages over the decades). I really admire Haskell’s design. I would really like to love Haskell and enjoy using it. But it disappoints me every time I try :cry:


#38

Hello @seancorfield,

Thank you for sharing your years of experience and your point of view on this matter.
To be honest, I have always found statically typed languages to be more like a stone on the road than any kind of help. I really hate the clutter and visual noise they introduce. Over the years, however, I have envied some of those compiler errors whenever I just get an unhandled exception in my JS code.
For that reason I really liked Haskell when I first saw it: the type inference seems very nice and very light, and even when using type annotations you still get polymorphism and type checking. It seems like a winning combination.
However, most of the safety of types seems to be achievable with monads and functors, so maybe it’s not so necessary.

I started to look at typed functional languages when I started to become more functional in my JavaScript. Function composition and currying are amazing, until some step in your pipeline returns an unexpected shape, and then you are totally busted. It’s not easy to understand at which point in the pipeline the error starts, or why. That’s another reason why Haskell appealed to me at first.
What is your experience with this? Do you have any tips for debugging functional code apart from using ugly trace functions?

Regards


#39

My workflow probably helps me avoid some of the sharp edges that it sounds like you’re running into…? The REPL is always running. I write code in my source file, usually inside a (comment ...) form so I can leave it in the finished product to show both the evolution of the code and my thinking as well as examples of use, and evaluate it into the REPL. As I have an implementation of each piece, I move it from the comment to a top-level (public) function, leaving a call to it in the comment. Rinse and repeat. So pretty much every piece of code has actually been run and tested before becoming an actual function, and example calls to each function call chain are typically close by in a comment.
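As a concrete (and entirely hypothetical) illustration of that workflow, a source file might end up looking like this, with the scratch calls left behind in a comment form:

```clojure
(ns example.price
  (:require [clojure.string :as str]))

(defn parse-price
  "Parse a price string like \"$12.50\" into a whole number of cents."
  [s]
  (-> s
      (str/replace #"[^0-9.]" "")
      (Double/parseDouble)
      (* 100)
      long))

(comment
  ;; Scratch calls evaluated in the REPL while developing, left in
  ;; the file as examples of use and as a mini regression suite.
  (parse-price "$12.50")
  (parse-price "$3.00")
  )
```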

This even allows for extensive refactoring since I have a mini-suite of “tests” right there in the comments to evaluate as I work on the code. And, yes, some of those become actual tests as well, to help guard against regressions in the future.

Yesterday I refactored a fairly complex set of call chains so that a low-level function returned a hash map instead of just a sequence, that was consumed by a higher-level function, so I could pass additional information up and down the call chain. It was a pretty massive set of changes and (almost) everything “just worked”. The only breakage – which was discovered by my test suite – was that I’d missed changing -> to ->> in two locations (I’d exercised all the other locations via the REPL as I was working).