I have to admit, I was already aware of all the arguments you put forward before I wrote my opinions in this thread. So unfortunately, you haven’t changed my opinion on static types.
One thing I believe you misread, I’m advocating for learning advanced programming constructs like Monads, not against.
Now, I’m not going to pretend that my reasons for preferring Clojure over Haskell are absolute. They are both very, very good programming languages, and they actually have a lot in common, given their heavy FP bias. At the end of the day, it’s about personal preference. And as with many tools, how good the wielder is at using the tool matters more than the tool itself. You’re only going to get good with a tool that you like and feel a good symbiosis with.
Many people really struggle without static types, they need them to understand the data model and flow of a program, and to catch mistakes they make in integrating various functions together. Similarly, many people struggle with types, never being able to figure out how to define them properly for it to work, never realizing how to leverage them to enable flexible yet safe structure, and they either end up with overspecialized or underspecialized types.
Now, some people, like me, are good at both, yet can still prefer one over the other simply based on sheer fun. I have more fun with Clojure; it causes me no additional friction, and it’s simply more fun.
Now, static types are just one element. This thread isn’t about static vs. dynamic types. That’s really important. I don’t prefer Python/Ruby/Perl over Haskell; I’d rather use Haskell. I only prefer Clojure. I also tend to prefer Rust, and I can’t decide whether I like OCaml better or worse than Haskell; I’d say they’re even.
That’s because of other aspects as well: using persistent collections to model data by default, having the JVM at my disposal, being able to break purity at will, being able to use structural editing on my code, having the power to extend the language with macros, having an almost instant feedback loop with the REPL, the live programming aspect, the simplicity of serializing Clojure data, being able to use spec to model data that leaves my process, being able to use logic programming when I need it, being able to use CSP when I need it, etc.
I love these things; they tip me over into the Clojure camp.
Now, I’ll end with an observation, and correct me if I’m wrong about you. I am better at writing Clojure than Haskell, I prefer Clojure. You seem better at writing Haskell than Clojure, you prefer Haskell.
Your posts in this discussion were short, and so is your latest reply. To keep it short you use a very simple strategy: not taking your discussion partner seriously. Either I must be too inexperienced (Common Lisp since 2003 and Clojure full time from 2008-2014), or I might know my stuff but only want to make provocative statements without being interested in a discussion. Neither of these is true. But with this assumption it’s easy to escape a true reply.
But I was honest in my assessment of exploratory programming. In some statically typed languages you can do a kind of dynamic typing. In Haskell, for example, there’s the type Dynamic, and any value can be wrapped in it; the actual types are checked at runtime.
But still, I wouldn’t use it when trying to explore things. I actually find it pretty nice to work on a Haskell Genetic Programming system which is then in turn used for quite exploratory things. Very possible. Being able to make substantial changes in code beyond a few thousand lines (and you can say a lot in a few thousand lines of Haskell) can be advantageous.
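To make that concrete, here is a minimal sketch (the values list and firstInt are invented for illustration; toDyn and fromDynamic are the actual Data.Dynamic API):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- Heterogeneous values, all hidden behind Dynamic.
values :: [Dynamic]
values = [toDyn (42 :: Int), toDyn "hello", toDyn True]

-- fromDynamic checks the type at runtime and returns Nothing on mismatch.
firstInt :: [Dynamic] -> Maybe Int
firstInt ds = case [x | Just x <- map fromDynamic ds] of
  (x:_) -> Just x
  []    -> Nothing

main :: IO ()
main = print (firstInt values)  -- Just 42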
I agree, this kind of coupling develops implicitly. It is, so to say, the goal of static typing: at least tight coupling at the type level. When I have a record with a few fields, two of which are records themselves, and I have a small function working on the outer record, then you can’t do anything useful with this function if I post it here, unless I also give you the declarations of the (say) three records. It’s probably fair to call this coupling. If in Clojure my little function just took a hashmap, then 2-3 comments in my post about the keys it should have would be enough. You just need my function and can play with it in the REPL; the corresponding data type (the hashmap) you already have on your side. Even though in Haskell one could also just use a hashmap, no Haskeller would do that. One would go with records instead.
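A hypothetical sketch of what I mean (the record names are made up): the function itself is one line, but a reader needs all three declarations to do anything with it.

```haskell
-- Hypothetical records, purely for illustration.
data Address = Address { city :: String, zipCode :: String }
data Company = Company { companyName :: String, hq :: Address }
data Person  = Person  { personName :: String, employer :: Company }

-- A small function on the outer record: posting it alone tells a
-- reader nothing; the three declarations above must come along.
workCity :: Person -> String
workCity = city . hq . employer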
There is then of course coupling in other ways. For example when you make concrete calls to specific functions instead of programming against an interface. You can do this in Clojure and in Haskell as well.
One kind of coupling is related to types, and this may or may not be desirable. The other kind of coupling is more connected to an "unfortunate" (bad) programming style.
I don’t know how much experience you have with Haskell. If you know statically typed systems such as Java, then you also have coupling via subclassing. The subclass is coupled to the base class, but there is no relation in the opposite direction. This can lead to challenges when trying to refactor the code.
If you have concrete examples at hand I would be happy to hear about them.
My line of thinking is specific to the Haskell type system. The one that we have available today. Not "statically typed languages" in general. And I don’t think that it is always easy peasy. But I think that it pretty much always is easier than in dynamically typed languages (and here I probably mean all of them). I would like to learn about examples/cases where it would be easy to refactor in Clojure but more complicated in Haskell.
I can’t talk this away. I still believe, though, that this is very rare. It could be the case that deep down in some function call stack you suddenly need IO, and in principle you would have to change the type signatures of the previous eight functions in the call chain. And then their callers in other parts of the code may require other changes too. This could happen; it’s not unthinkable. The good thing, though, is that this can in principle be automated by tools, at least to some extent. Also, you can’t accidentally forget a place: after the refactoring you can be confident that the new version will still work. Besides that, such sudden needs for IO occur more often than not because debugging/printing is required, and here I would be pragmatic and use tracing functions which allow me to temporarily get IO without having to touch anything else.
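For example, Debug.Trace lets you log from pure code without touching any of the signatures in the call chain (step and pipeline are invented for illustration; trace is the real API):

```haskell
import Debug.Trace (trace)

-- step stays pure as far as the type system is concerned; trace prints
-- its message to stderr without forcing IO into any signature.
step :: Int -> Int
step n = trace ("step called with " ++ show n) (n * 2)

pipeline :: Int -> Int
pipeline = step . step . step

main :: IO ()
main = print (pipeline 1)  -- traces three calls, then prints 8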
One really important aspect for me though is to use powerful abstractions and make use of type classes. In Haskell the situation is really pretty nice. A lot of code is built around Functors, Applicatives and Monads. Those are really useful abstractions. For example, think about calling bar on an argument in a function foo.
(defn foo1 [x]
  (bar x))
But what if you don’t know whether you actually get an x (it could be nil)?
(defn foo2 [x]
  (when (some? x)
    (bar x)))
What if x is a list and you want to apply bar to each element?
(defn foo3 [x]
  (map bar x))
What if x is a function that only returns the value that you need to put into bar?
(defn foo4 [x]
  (comp bar x))
What if x is a function that can throw an exception?
(defn foo5 [x]
  (try
    (bar (x))
    (catch Exception e e)))
What if x is a hashmap and you want to apply bar to every value and create a new hashmap where the keys are the same?
(defn foo6 [x]
  (zipmap (keys x) (map bar (vals x))))
You see that each case needs a different implementation.
In Haskell our function would look like this:
foo = fmap bar
And this one single foo would replace all implementations above. It would also work for tons more situations.
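For instance, assuming bar is (+ 1), the single definition handles Maybe, lists, Either, Maps, and even functions:

```haskell
import qualified Data.Map as Map

bar :: Int -> Int
bar = (+ 1)

-- One definition covers every Functor.
foo :: Functor f => f Int -> f Int
foo = fmap bar

main :: IO ()
main = do
  print (foo (Just 41))                        -- Just 42
  print (foo [1, 2, 3])                        -- [2,3,4]
  print (foo (Right 41 :: Either String Int))  -- Right 42
  print (foo (Map.fromList [("a", 1)]))        -- fromList [("a",2)]
  print (foo (subtract 1) 43)                  -- 43 (the function Functor)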
Btw, the same can be done in Clojure if people actually used Functors. The reality, however, is that most don’t. So the interesting thing here is that the typical solution in the extremely statically typed language Haskell is shorter and more reusable than the six implementations that are required in a dynamically typed language. And this is just Functors, which are less useful than Monads.
Thanks to static typing it’s always totally clear what will happen when you call foo.
Yes, totally agree. In JS we have a weakly typed system, which can reduce the boilerplate of explicit type conversions but which can introduce subtle and hard-to-find bugs. Stronger typing (e.g. Clojure’s) helps here because we see errors earlier. And Clojure also took steps in the direction of purity by adopting functional data structures where mutation is explicit. Those are already powerful steps against bugs. No wonder that productivity goes up as debugging time goes down. This purity aspect motivates you to structure your code differently, often in a cleaner way. So constraints can help us make better design decisions.
Especially when we talk about a type system such as Haskell’s, where you can express quite a bit with it. For example web pages that would in principle allow XSS attacks at runtime will be compile-time errors. Or sending emails inside a DB transaction can be an error (so customers won’t get notified that something worked and then a rollback happens, ugh…).
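A pared-down sketch of the XSS idea (real HTML libraries such as blaze-html or lucid are far more elaborate; Html, escape, and greeting here are invented): untrusted strings cannot flow into a page without passing through the escaping function, so forgetting to escape becomes a type error rather than a runtime hole.

```haskell
-- Raw Strings and Html are distinct types; the constructor is the
-- only way in, and library users would only be given escape.
newtype Html = Html { render :: String }

escape :: String -> Html
escape = Html . concatMap esc
  where
    esc '<' = "&lt;"
    esc '>' = "&gt;"
    esc '&' = "&amp;"
    esc '"' = "&quot;"
    esc c   = [c]

-- The page builder accepts only Html, never a raw String.
greeting :: Html -> Html
greeting name = Html ("<p>Hello, " ++ render name ++ "</p>")

main :: IO ()
main = putStrLn (render (greeting (escape "<script>alert(1)</script>")))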
I hope so. Maybe if you have examples, let me know. I showed you how generic and reusable Functors can be. The people who have been studying abstractions for decades at their universities have identified some pretty remarkable things. Understanding that mapping a function over a list shares structure with running computations that can fail is a pretty powerful discovery. Now we can program against such interfaces that live on a much higher level. And when you take monads into consideration, that upgrades the game again. For some years now, libraries have been using Profunctors (bifunctors that are contravariant in their first argument) and gain some nice qualities for SQL programming, etc. That’s pretty nice (imo).
I don’t understand this part. I mean even if you don’t use type hints every value in Clojure always has a type. And you certainly need to know this type to work with your data.
It’s true. Before us there were thousands of such debates, and in the end it’s not really clear that they can "change" much. However: we can learn things. If I have to argue with/against you, then you may force me to think of interesting examples. We are users of "esoteric programming languages" and as such are likely to be deeply interested in topics like this.
It took me years and years of experience to discover that I seem to be from that camp. In principle I used mostly dynamically typed langs in the past 15 years, basically Common Lisp and Clojure.
I totally agree. There is not one pair of shoes that fits everyone extremely well. Why should there be one programming language that everyone loves, even though the mind is a more complex thing than feet?
And I don’t want to argue against people preferring Clojure over Haskell. I am one of the earliest adopters of Clojure. But still I love being honest. Hickey himself said that Clojure is not the silver bullet. When I started studying Haskell I discovered how many of its concepts have influenced Hickey and are now part of Clojure.
About the first three words I could say: this is exactly the same situation in Haskell. But it’s true that one would totally not use just them to model data, and certainly not as default. One would always try to be more specific and communicate to the system what the developer (oneself) is thinking. Be explicit. That’s the desire.
Frege. Eta Lang.
This is constantly done in Haskell. The only difference is that you need to be explicit about this.
Parts of this idea are also in Clojure. There is swap!. There is persistent!. Those have side effects. But the exclamation mark is no accident. Hickey wants to be explicit when purity of data/state is given up. Haskell only goes a step further and ensures this in all cases.
As soon as you want to break purity you can always do it: change the type signature in your Haskell code and you have the finest imperative programming language available (to say it in the words of Simon Peyton Jones).
This is important, indeed.
It’s called Template Haskell.
I always have my Haskell REPL open. In one Emacs buffer I edit the file and immediately try my functions out in the REPL.
Probably nothing can beat Lisps in this aspect. This is just dramatically much better in Clojure than in Haskell.
I prefer algebraic data types. They spec things nicely, and if I make a sudden change, not only do I find out that something is wrong, I even know exactly which lines in which files I need to update.
You can serialize Clojure data also in Haskell.
Possibly you talk about the Mini-Kanren implementation. Also available for Haskell: http://minikanren.org/
Besides that: when you do advanced type programming then you switch to a programming style that is similar to Prolog.
Available in Haskell as well, together with several other abstractions. In this aspect Haskell is +/- best in class.
Thanks for the nice post. Your assessment is nearly correct. I still am better at writing Clojure than Haskell. But I indeed already prefer it.
I tend to prefer Clojure over Haskell (and, to be clear, that’s spoiled for choice; if we could use Haskell at work, I’d be thrilled) for the following reasons:
Haskell’s way of handling side effects and state makes normal things (like writing to a terminal or communicating over a network) hard for the sake of purity. While this is a good trade in many areas, I tend to believe that many real-world use cases suffer overall.
Homoiconicity and macros.
Clojure adds the parts I like best about Haskell to the language (immutable by default, functional emphasis and strong concurrency) without the parts about which I am ambivalent (strong type system).
Libraries. Most functional-first languages struggle with libraries. F# and Clojure are probably the best, due to their status as languages hosted on popular runtimes.
They’re both great languages though. You’ll be a better developer for knowing them.
On the theory side, my first exposure to a language in the style of Haskell was using Turner’s Miranda in the 80s, during which era I read with great interest the research done by Friedman, Henderson, Darlington, Hudak, Hughes, Jones, Wadler, &c. The academic background and the motivations of the trade-offs embodied in these languages are not news to me.
On the practical side – in the decades since that time – I’ve shipped substantial codebases in many programming languages, including Haskell and OCaml. I have also built operating systems and language runtimes as part of my professional life.
Now, why might I think you were trolling? When I said that I prefer prefix notation you replied that:
In Haskell you can express very much of your code in prefix
notation, including operators such as +.
This is such a spectacularly bizarre response that it made me doubt your sincerity. Haskell’s syntax was designed from the start to resemble the kind of algebraic notation used to teach maths. These decisions were in specific opposition to the prefix notation used in Lisps. That there exist operators that can be applied from prefix position in no way makes Haskell a prefix notation language. So… what was the purpose of your speech act here?
I go on to say that:
I do not like starting from type signatures as much of what I do
involves exploratory programming in which I discover the types
To which you reply:
You always start with the types. Otherwise you could not use values
in many function calls.
You seem to have mistaken me, a human being, for my language runtime, a computer program. The runtime knows the types of all values at every moment. Me? Not always. Often I must interrogate data to learn how it is configured, and I prefer to have the full power of the computer at my disposal while I do this. This process is assisted by writing “riskier” code that I’m perfectly happy to see fail interactively at runtime in order to learn things about the problem.
You seem to sorta-kinda sideways acknowledge this in your later message to @didibus where they mention that they prefer Clojure for “the live programming aspect” and you say:
Probably nothing can beat Lisps in this aspect. This is just
dramatically much better in Clojure than in Haskell.
Well, yes, and in the context of exploratory programming, “live coding” is – at least in my experience – by far the best way to go, which was the point of my previous comment.
The absence of type signatures makes code more susceptible to run time errors and the presence of type signatures makes the code more susceptible to compiler errors. The likelihood of experiencing a runtime error in the former case is probabilistic, whereas in the latter case one has proven that class of error impossible in exchange for doing some up-front work wrestling with the compiler.
In the early stages of development I am almost always more willing to risk runtime errors for the reasons mentioned above. Furthermore, I have found that the kind of errors I encounter in my code – even in large code bases in dynamic languages! – are very, very rarely type errors, which leads me to have an even greater tolerance for this particular risk in practice.
All of that said, there are situations where I prefer the compiler to offer me greater certainty. This is why I like gradual typing systems and hope to see continued research in that area.
Lastly, with important things that actually are hard to get right, what I really want is a theorem prover. For example, using Teapot to verify a protocol’s invariants really are invariant, or using Coq to know more about one’s program than type signatures alone can provide.
I don’t need to spend any energy naming types. I spend energy naming keywords (and providing specs for them). That keeps the code flexible enough so I can access :bar whenever something looks like it has a :bar.
Naming things that are not important to the code at hand leads to bloat and abominations like XMLFactoryBuilderInstance.
I’d like to stay in the kingdom of verbs as much as possible.
And when I have a better idea of what the shape of my data looks like, and if I need to pass it around to external consumers, then yes, I’d define a record or a protocol or both, and use type hints.
It’s exploratory programming, because very often you don’t know your types yet, you don’t know exactly how they relate.
It’s a different way to work a problem.
It ain’t superior or inferior: It’s different and uses a different mindset.
If you’d like to juggle around and refactor types because you realize they need change: That’s cool too.
Enjoy your added compile-time bonus, while I enjoy the runtime spec checks (which go beyond type theory).
I don’t care about static or dynamic types, nor about FP, LP, or OO. To me, they are overly complex, unreliable, and unscientific. I think they are very bad, and they upset me.
The production methods and business management ideas of large industries are also more mature than FP&OO. I have used them as programming ideas.
I think that the RDBMS is the simplest and most reliable thing in theory and practice, and it has withstood the most rigorous, long-term, high-stress testing in critical situations.
Before using clojure, I was a Foxpro programmer. I used clojure as a super Foxpro, and I also used it successfully in the field of WebApp and R language mixed programming. I will continue to apply this routine to the AI field in the future.
My main goal when developing with Clojure is writing to the database. The development ideas actually come from databases, not from FP.
Clojure -> DBMS, SuperFoxpro
STM -> Transaction, MVCC
persistent collections -> db, table, col
Watch -> trigger
Spec -> constraint
Core API -> SQL, Built-in function
function -> Stored Procedure
The naming itself is probably not a real time killer that reduces productivity. But yes, having many types and thinking about how you want to model data – this can reduce initial productivity. In Clojure, where you want to pass in a UserID, you will expect an Int or a String. In Haskell you would possibly create a new type for this instead.
Keywords, however, are problematic because they are just like strings. You can easily have a typo that goes unnoticed for a while and may even make it into production. And as soon as you realize that a set of keywords needs to change because you want to remove one or add one (for example, you may model a game with keywords such as :fire and :water, and later you want to add :aether or whatever), then you don’t know which specs you have to update and which functions now need to handle the new keyword. When I instead have my type and now make it data Attack = Fire | Water | Aether, then Emacs tells me immediately all the places I need to update. For sure I will not overlook a single one, and I am protected against typos too.
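Spelled out as a sketch (damage is an invented function; the warning behaviour is standard GHC):

```haskell
data Attack = Fire | Water | Aether deriving (Eq, Show)

damage :: Attack -> Int
damage Fire   = 10
damage Water  = 7
damage Aether = 12

-- If a constructor (say Air) is added without extending damage,
-- GHC's -Wincomplete-patterns reports this function at compile time;
-- with -Werror it refuses to build. A typo like Fier is simply not
-- a constructor and won't compile at all.
main :: IO ()
main = print (map damage [Fire, Water, Aether])  -- [10,7,12]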
I also like the flexibility so I try to program against interfaces and make use of polymorphic types. By using abstractions such as Semigroups, Monoids and Functors it’s possible that my code is even more flexible and reusable.
Yes, totally agree. And we had to use a Config object to create this Factory. But we couldn’t simply use the Config but first had to create a ConfigFactory. Ugh.
Do you have an example? I am not sure I understand.
Also true. Sometimes we first need to just try something out and see it. Only then do we understand better what we really want. It’s an iterative process. I am one of those people who, after 15 years of experience with Lisp (and 11 years of full-time development with Common Lisp and Clojure (5)), came to believe that with a system like Haskell’s I can do even more exploratory programming. I do something and see that I fail. I see how my intuition was wrong, now know what I really want, and start to refactor. And I can do this quickly and confidently in some cases, if my upfront design wasn’t too bad. The refactoring is only so easy because GHC supports me there. And if I start with a really crappy design, then no language can help me.
I am also not sure what that means. Can you perhaps explain a bit more about this?
From what you wrote I couldn’t know that you had seen Haskell before. So I wanted to make you aware that you can indeed use prefix notation in most places. All binary operators can be used in prefix notation. All function calls are in prefix notation anyway. Type definitions, though, and some type-related parts have their own syntax, just like Clojure’s macros such as defrecord, defn, or proxy, and Clojure’s syntax for destructuring.
It’s not just that some operators support this: all do. Where do we care about prefix notation? I would tend to believe that it is mostly in function calls. All of these are prefix in Haskell, with the exception of infix operators, which you can also immediately use in prefix form.
This is also potentially true for some statically typed languages, such as Java, where you can only see at runtime what an Object-declared thing is. In Haskell you should typically know most of the structure of your data before runtime, unless you start using dynamic typing via Dynamic. The situation with live-inspecting data is often easier in Clojure than in Haskell. In Haskell, when I inspect data at the REPL, I see more visual noise because more types are written down. Nested structures which print beautifully in Python or Clojure are difficult to read when just using the default print representations. It would require some extra work to write a pretty-printer that makes them look like EDN or JSON.
Yes, it’s true. One thing is the pretty-printing situation. Another is that by default many things are not showable in Haskell. You need to turn this on. And then there can be unprintable things in a bigger data structure and that whole thing doesn’t print because of this and you first need to go back to the code and implement something for this. In small projects and scripts this could take too much time to be worthwhile. In production software I would hope that it doesn’t matter if in the work of a few years in the end two days were spent to implement a pretty printer for certain data structures.
Hmm, I think “wrestling” is too strong a term. It’s very rare that I sit in front of a program believing it is totally correct and then have to change big amounts of code to make it work. But there certainly are situations where some boilerplate is required that wouldn’t be there when using Ruby, for example.
I have to totally disagree. In the past two years I have looked intensely at bad bugs that cost hours of work to fix. In very many cases they would have been compile-time errors in Haskell. Not necessarily type errors. It could be that in an if/cond block a certain case was forgotten; in such a case GHC would have complained. It can be a typo in a keyword. A good number of bugs were due to nil (NPE on the JVM).
Just think about functions that, instead of throwing an exception, just return nil. This means you sign a contract to test against nil when calling such a function (like a map lookup, or calling first on a vector). Just look at some random code and see if every single lookup in a hashmap first tests whether the key is present. In my experience it very often happens that you just assume the key is there and directly continue to work with the value. This may even work, but in the future other devs can change things, and such assumptions no longer hold.
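In Haskell the contract is in the type: Data.Map.lookup returns a Maybe, so the compiler forces every caller to handle the missing-key case (scores and report are invented for illustration):

```haskell
import qualified Data.Map as Map

scores :: Map.Map String Int
scores = Map.fromList [("alice", 3), ("bob", 5)]

-- The Maybe forces us to decide what happens when the key is absent;
-- there is no way to "just assume" it is there and get a nil later.
report :: String -> String
report name = case Map.lookup name scores of
  Just n  -> name ++ " scored " ++ show n
  Nothing -> "no score for " ++ name

main :: IO ()
main = do
  putStrLn (report "alice")  -- alice scored 3
  putStrLn (report "eve")    -- no score for eve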
I was so very excited when core.typed came out. This was before I started studying Haskell. It was the coolest step forward in Clojure in my opinion. But then I tried it and it turned out to work badly. Perhaps in the future it will become more usable but I had bad experiences with it. And this was not due to a lack of trying.
For me a program is communication with the computer. Why should I not be more thorough and make clear what I want to do? If I know certain properties about data and functions then why shouldn’t I communicate this to the computer explicitly?
In principle I think each and everyone of us would like to know about programming errors. We all make them, constantly. Who really is against knowing defects or even just potential defects in our software?
Agda and Idris are really cool, Coq also. I am sure I will spend time in the future learning at least one of them, probably both Agda and Idris. However, in the Haskell world there is still research going on too. It already supports several features of dependent typing. We can use type-safe heterogeneous lists, or vectors where a get call always returns a value and is never out of bounds. But even without those, Haskell is already pretty powerful. With Rank-N types and Phantom Types we can create proofs, for example, that keys are present in a hashmap and that a lookup will certainly be successful. You will get an error at compile time if you write code that would look up a key at runtime that isn’t present.
So, some interesting properties can already be expressed. In maybe 5 years Haskell will have Dependent Types and Linear Types. LT will not only improve correctness (by rejecting more bad programs) but also improve performance, because it gives us automatic memory management without GC.
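The full key-presence proof needs more machinery than fits here, but a pared-down phantom-type example shows the core trick (all names invented): the type parameter records a compile-time fact that costs nothing at runtime.

```haskell
-- Phantom tags: these types have no values; they exist only to mark.
data Validated
data Unvalidated

-- The status parameter appears nowhere on the right-hand side.
newtype Email status = Email String

-- The only way to obtain an Email Validated is through this check.
validate :: Email Unvalidated -> Maybe (Email Validated)
validate (Email s)
  | '@' `elem` s = Just (Email s)
  | otherwise    = Nothing

-- Passing an unchecked Email here is a compile-time error.
send :: Email Validated -> IO ()
send (Email s) = putStrLn ("sending to " ++ s)

main :: IO ()
main = case validate (Email "user@example.com") of
  Just ok -> send ok
  Nothing -> putStrLn "invalid address"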
Well, yes, sure. The same is true for any language. If Clojure’s records no longer had a Map implementation, I couldn’t say (:my-key my-rec) anymore. The thing about Functors and fmap is that you only write this code once. Without it you always have to handle exceptions explicitly via try/catch; you can’t trivially abstract that away into a function without ending up with something very close to fmap. So provided we have a defprotocol or some interface against which we can implement, the code is likely to be more reusable, I reckon.
I don’t want to start a comparison between both languages, that is not my intention, I want feelings, subjective opinions, all kind of answers.
Lisp answered my criticisms of mainstream programming, so I used Lisps. (Most important criticism: how can programmers use an automation tool that’s hard to automate?)
Haskell answered only a few of my criticisms of Lisp, and was worse in many dimensions I value. So I rejected Haskell.
Clojure answered many of my criticisms of other Lisps. So I adopted Clojure, as my default tool. (I use other languages when Clojure’s unsuitable, but none are Haskell.)
Maybe spending more time with Haskell will “make me” a better programmer. But that applies to virtually anything. I’d rather spend it on: philosophy, science, anthropology, logic, exercising, games, sleep… All these things typically give me more powers than studying another programming language, except when I have very specific reasons.
When evaluating Haskell, I read what its creators said, as well as Haskell-experienced programmers I trust. I generally ignore typical language advocates and quarrelers, since their claims are almost always too glib to be worth investigating in depth.
The Haskell Committee wanted expressions to look as much like mathematics as possible, and thus from day one we bought into the idea that Haskell would have infix operators. [This is in contrast to the Scheme designers, who consistently used prefix application of functions and binary operators (for example, (+ x y)), instead of adopting mathematical convention.]
Richard Bird [resigned] from the committee in mid-1988, much to our loss. At the time he wrote, “… We are urged to return to the mind-numbing syntax of Lisp (a language that held back the pursuit of functional programming for over a decade)…”
When I’m programming, I want a UI for maximum programmer power. Not for the comfort of the conventionally-trained. Haskell’s choice seems correct for Haskell’s aims. But I look upon their UI decision with similar contempt that Richard Bird had for Lisp’s.
I’m very impressed with the telepathic powers that enable you to disagree with me regarding the kinds of bugs I observe in my own code. If what you’re trying to say is that Haskell helps you avoid the mistakes that you make when you program in Clojure, and that you thus prefer it, well, by all means! Use the tools you like. What I’m telling you is that my experience does not match yours and thus neither do my preferences.
What?!? How could I have a preference between them without knowing both?
You have – in this very thread – repeatedly explained Haskell basics to people with professional experience using it, even after you’ve been given information about the backgrounds of those people. This suggests that you believe that those who have a different preference from yours can only do so in ignorance. I hope, for your sake, that you mature past this at some stage, lest you spend your entire life like that one dude at university who wants everyone to agree that Rush is objectively the best band.
It’s not telepathic powers but practical experience. A statement that some users of dynamically typed languages love to make is that type errors are extremely rare. This is such an outlandish claim that I have difficulty accepting it. I don’t want to rule out that a genius coder may make far fewer of them. Also, I don’t think only about type errors, but also about typos and missing cases when checking for values. And I have in mind that Haskell can guard against more errors via its type system. When talking to my fellow Lispers, I see tons of errors they make, and they are not clearly aware that these are things Haskell would have caught. People’s ideas of what a type error (or compile-time error) is often don’t go far enough.
I accept, of course, that you have a different experience here. It just contradicts what I see so much that it simply is not my default assumption.
Especially when it comes from you, it doesn’t seem very obvious. You might be a much better Haskeller than I am, and you might know a whole bunch more about it than I do. Yet what you say is not too solid. For example, your idea about gradual typing: what in Haskell prevents you from starting out with Dynamic? A little extra boilerplate, yes, but safer dynamic typing is the result.
As soon as you use core.typed and start to respect its messages, the style in which you program changes. As soon as you begin to take it seriously, you end up with a language that is different from Clojure, even if you adopt it gradually.
Besides that: we both are welcome to have our preferences.
core.typed is about five years old at this point. Some of the people you’re arguing with in this thread were working with Haskell’s predecessors in the 80’s and have worked with Haskell on and off since it appeared over 25 years ago. Just sayin’…
When people state that their experience is that the sort of errors they run into wouldn’t have been caught by a type system, you can’t categorically deny that since it is their experience. As @jackrusher said, it may be your experience that most of the errors you encounter in Clojure are ones that Haskell would prevent for you – and we wouldn’t deny your experience.
Some people like typed systems because that matches how they think and/or provides more support for the sort of errors that they would otherwise make. Some people like dynamic systems because that matches how they think and/or provides more support for the type of exploratory and/or evolutionary programming that they focus on.
You joined Clojureverse not all that long ago and you immediately came in and dumped all over this thread, arguing with pretty much everyone who said they prefer Clojure over Haskell. That attitude/approach isn’t very productive. Especially when you don’t have all that much experience with Haskell in the first place. It’s why several people might think you’re a troll – and would discount your opinion because of your actions.
Hi Sean. I like some of your libs. I even used lein-fregec.
And I’m glad that it is still actively being worked on. Using it makes Clojure a different language, and it showcases nicely how different people can have different preferences. As you can imagine, I like static typing. Yet this specific type system doesn’t feel good to me. I guess that if it were some Hindley-Milner kind of system I would indeed like it more, and more or less write Haskell in Clojure.
And I absolutely don’t do this. I believe that others can have different projects and skills, so they really do run into fewer type errors. I just like to express that it doesn’t match what I have seen and thus comes as a surprise. Plus, I don’t want to concentrate only on the classical type errors but in general on things that can be caught at compile time. Because I am counting those in as well, my stats may be skewed in a certain direction, since many other people would not want to count them.
I didn’t say that. It’s highly subjective what "much experience" means. What I can tell you, though, is that because I have 15 years of experience in the Lisp world and was a super early adopter of Clojure, I still feel more skilled there.