Does it make sense to use clojurescript on front-end and haskell on back-end?

Can you connect your editor to a REPL in a live, running Haskell application and evaluate code from your editor directly into that running program, with the same compilation semantics as if you’d run the Haskell program the “normal” way?

In Clojure, the REPL is central to the workflow: you connect your editor to it, and you start your app up in the REPL and develop against that live, running application. You can also ask an arbitrary Clojure program to start a REPL at startup via JVM options and then connect your editor to that and modify it live, while it is running.

Now, not everyone uses their Clojure REPL like that – they’re missing out on a big piece of productivity – and it’s why people like Stu Halloway, Eric Normand, and many others (myself included) put out videos showing that fast, integrated “REPL-Driven Development” workflow: to encourage more people to work that way, because that’s the unique sweet spot for Clojure development.

5 Likes

People who don’t want to use the JVM should consider ClojureScript for the back-end.

2 Likes

No. I’ve seen it done in Common Lisp, but not Haskell. When I code in Haskell, the REPL loads the code and byte-compiles it. You can edit the code in the code window and load it, or experiment in the REPL – basically, the Python model.
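For the curious, that edit/load cycle looks roughly like this in GHCi (`Main.hs` and `greet` are hypothetical names, just for illustration):

```shell
$ ghci Main.hs        # load and byte-compile the module
ghci> greet "world"   # experiment at the prompt
ghci> :reload         # pick up edits made in your editor
ghci> :type greet     # inspect the inferred type
```

Note that `:reload` re-loads the module into a fresh session inside GHCi; it doesn’t patch a separately running production process, which is the distinction the question above is getting at.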

Where are your videos, please?

I’ve found https://www.youtube.com/watch?v=gIoadGfm5T8. But it’s not a beginner’s video, and a little bit long.

I have some shorter videos on my YT channel – Sean Corfield - YouTube – but I don’t know that they’re beginner-friendly either? Those show my workflow with Atom + Chlorine (Socket REPL integration) + Cognitect’s REBL. I’ve since switched to VS Code + Calva (with nREPL disabled) + Clover (Chlorine for VS Code: Socket REPL integration) + Reveal – but the workflow is essentially identical.

I use Clover/Chlorine so that I can rely on “just” a Socket REPL – which is built into Clojure itself and can be triggered at startup (for any Clojure program) just by specifying a JVM option. We run Socket REPLs in several of our QA and production processes so that we can connect to them – either via our editor or via a command-line tool (e.g., telnet or nc)** – and perform troubleshooting or, in rare cases, actually apply a hotfix via redefining functions in the live, running program.
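For reference, that JVM option is just a system property – here’s a sketch of the startup and connection commands (the port number and `app.jar` name are arbitrary examples):

```shell
# Start a Socket REPL inside any Clojure program, no code changes needed.
# With the Clojure CLI:
clj -J-Dclojure.server.repl='{:port 50505 :accept clojure.core.server/repl}'

# Or for a pre-built jar:
java -Dclojure.server.repl='{:port 50505 :accept clojure.core.server/repl}' -jar app.jar

# Then, from another terminal, connect with a plain TCP client:
nc localhost 50505
```

Because it’s a plain TCP socket speaking the standard REPL, anything from `nc` to a full editor integration can talk to it.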

This means that my workflow is “the same” for both local development and remote debugging/patching. Locally, I start the REPL from the command-line, with a Socket REPL started, and then start the app in the REPL (and I can start multiple apps inside a single REPL since they all use different ports). In both cases, I connect my editor to the Socket REPL running inside the REPL/program and have the full power of Clojure’s semantics available. I use the add-lib3 branch of tools.deps.alpha locally as a source (git) dependency when developing so I can add new dependencies to my running REPL/program without a restart (I show that in the (long) London Clojurians talk/demo). My local REPLs run for weeks, typically, sometimes for months – which is why I don’t start my REPL from my editor, because I have to restart my editor much more often to pick up extension updates.

**Note: for QA/production, we still need to connect via VPN and then set up an ssh tunnel: the Socket REPLs are only open on the loopback address on the server!
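Concretely, that tunnel setup might look like this (hostnames and port numbers are illustrative):

```shell
# Forward a local port to the Socket REPL bound to loopback on the server
# (run after connecting to the VPN):
ssh -N -L 50505:127.0.0.1:50505 user@qa-host

# In another terminal, the remote REPL now looks like a local one:
nc localhost 50505
```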

2 Likes

I prefer not to promote other languages in a Clojure forum, but if you like functional static type systems, you should know, if you don’t already, that OCaml has js_of_ocaml, which compiles to JavaScript, as well as an evolving set of OCaml-plus-alternate-syntax projects – Reason/ReasonML/ReScript – that compile to JavaScript. OCaml programming is a bit different from Haskell programming, of course, partly because it’s not at all lazy (even lazy sequences require special libraries that are not very convenient), and the typing is more flexible than Haskell’s, in that creating side effects doesn’t involve a separate monadic sub-language. But, like Haskell, the compiler does type inference, so you’re not required to clutter your code with declarations if you don’t want them (and if you do, they usually go in a separate file). Still, if you love Haskell, you might not like OCaml; it’s a different world.

(None of this is intended as a reply to arguments above for Clojure. I love Clojure, and one small part of that is that I like dynamic typing. But I also like static typing, as long as it comes with serious type inference.)

Clojure has spec and optional typing.
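To give a flavor of spec, here’s a minimal sketch – the `::port` spec and its bounds are made up for illustration:

```clojure
(require '[clojure.spec.alpha :as s])

;; A spec for a hypothetical ::port value: an int in the valid TCP range
(s/def ::port (s/and int? #(< 0 % 65536)))

(s/valid? ::port 8080)    ;; => true
(s/valid? ::port "8080")  ;; => false

;; Ask spec to explain a failure instead of just answering yes/no
(s/explain-str ::port -1)
```

Unlike a static type checker, these checks run at runtime (or in generative tests), so you choose where and when to pay for them.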

1 Like

Well, yes. What happens in Haskell is that errors get caught at compile time and stop you from running anything.

The real question is “how useful is a half working program?”.

Is a program that works 90% of the time and fails 10% of the time, on certain data / values, more or less useful than a program which works 0% of the time because it won’t compile?

For some applications, sure. Failure at runtime is such a bad thing that you’d rather have no program at all than one which blows up at runtime.

For other applications, the reverse might be true.

There’s no single answer that suits all applications. That’s why we have different languages that make different trade-offs.

So the real question about “should I write my server in Haskell or Clojure?” is not “How many lines of code is it?”

It’s “what’s its profile in terms of failure? How catastrophic / costly are certain types of bugs, and how much is it worth paying the extra cost of being forced to fix all my bugs before I can get any part of it working?”

The trade-off between static and dynamic typing should be thought about like other trade-offs, such as optimizations. In a sense, static typing is like premature optimization: it forces you to fix certain bugs before you may really have to.

That’s the hidden cost on the flip side of the “I know that if it compiles, it’s working” claim. Yeah, but it won’t be compiling for a while longer. The cost of dynamic typing turns up in the form of runtime bugs. The cost of static typing turns up as the opportunity cost of code that never got to a state where it could run at all.

7 Likes

Great explanation.

That’s an interesting dimension you’re adding that I think isn’t discussed enough.

For example, I brought up the point that, for a traditional backend service, a type error in a strict language will throw an error and crash, or at least set off your alarms, quite quickly. That means this kind of error tends to be caught early: most often at development time as you call your functions in the REPL; if not, then during testing when you run your unit or integration tests; if not, then while your changes are baking in a staging environment; and if not, then on a one-box or some other staged rollout to production. Even if it escapes all of that – maybe because the code path that triggers it is rarely used and you had no tests covering it – then at least once your alarm fires, the fix is very quick most of the time.

Since static type checking primarily prevents this type of bug, I think, like you said, it matters to evaluate the impact and occurrence rate of such bugs for your use case.

For me personally, as I said, I feel a backend service will very rarely have such a bug escape all the way to prod. If one does, it’s most likely a rare occurrence requiring a seldom-taken code path – which is why it escaped to prod in the first place and wasn’t caught during all the QA where you ran the service many times. So it’s not a common production bug. And when one does escape to prod, a backend service will generally catch the runtime type error, fail the request, log the error, and finally publish a failure metric that cuts an issue to the team, where the on-call can quickly debug the cause from the log and make a quick patch. That makes it relatively low impact as well.

Now, does this occur a lot during QA? If every code change going through CI broke the pipeline, broke the build, etc., it would also cause development pain and slow the team down. But personally that hasn’t happened for us either; honestly, type errors of that sort are most often caught in the REPL while coding, or in a local dev environment when running unit or integration tests from your machine.

Now I’d be curious: what other types of bugs would a type checker catch, if any?

I think using closed ADTs over data structures, plus a type checker, can help catch typos in key lookups, or accessing a key that could never exist on a map. This kind of error is more common in development, and maybe a bigger pain point when it comes to slowing the team down. Such errors similarly tend to get caught pretty early and rarely make it to prod, but when they do, they have a higher impact, because they won’t be strictly validated at runtime either: you’ll get a nil, and the code might act as if the value really is nil, or nil punning kicks in and everything “does the right thing”, and maybe there won’t even be an exception or error anywhere. That makes the bug silent, and only the user might start to suspect something is broken and have to report an issue to the team manually. So I’d personally consider this the bigger fish to fry if we wanted to focus on preventing some new kind of defect in Clojure.
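To illustrate the silent-nil failure mode (the map and keys here are made up):

```clojure
(def user {:name "Ada" :email "ada@example.com"})

;; A typo'd key lookup doesn't throw - it just returns nil:
(:naem user)  ;; => nil

;; ...and nil punning lets the nil flow on quietly:
(or (:naem user) "anonymous")  ;; => "anonymous"

;; A type checker with a closed record type would reject :naem at compile time.
```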

What other kinds of defects can it catch? And I think we should contrast that with the defects a REPL can catch, or a code base with fewer LOC, or Spec, etc. Static type checking won’t be your only tool for catching defects, so you have to contrast the occurrence and impact of defects caught by a static type checker against the occurrence and impact of defects caught by unit tests, integration tests, REPL-driven development, lower LOC, simpler language constructs(?), code reviews, manual QA, baking, generative spec testing, spec runtime assertions, immutability, dev training, better-rested devs, etc. Not that static type checkers are mutually exclusive with all of those, but they could affect how much, how quickly, or how simply you can do any one of the others, so it still matters.

Finally, what you really made me think about is that the cost of a bug in production also depends on the use case, from the point of view of the user.

For example, in a hardware driver, if there is a type error and it throws an exception and crashes, that really doesn’t help me as a user. My computer just becomes unusable – maybe so unusable that even patching the driver is non-trivial, and I have to boot in safe mode or something like that to do it. So drivers really shouldn’t have type errors or anything else that makes them panic and crash. On the other hand, like I said, with a backend service, from the user’s point of view you get a few 500 error codes in some rare circumstances, and a few hours later (depending on the service’s SLA) it suddenly works again – no need to patch, update, or do anything on the user’s side.

But what about in application code? Like frontend or local apps like a command line, a text editor, a game, etc?

I think in those cases it depends. A type error would be annoying: some error would show up, a piece of functionality would be broken, and as a user I would need to report it manually to the devs (unless they have telemetry in the app), wait for a new patched release to be made available, then manually perform the update, etc.

On the other hand, the language might have hot-code reloading, or be source-based with readable source the user can edit themselves, or have a config file that allows them to hot-patch things – in which case users might be able to fix bugs on their own, without waiting for the devs to release a fix. For a type error you’d probably still expect the devs to deal with it quickly, but for other kinds of errors, especially ones related to your particular setup and environment, having that ability as a (savvy) user is great. Emacs is a good example of this, and I’ve also seen Greasemonkey scripts used by users to fix bugs in web apps. In Emacs, when I encounter a bug, I can just patch my config and the bug is gone. For tech-savvy users that’s awesome.

Similarly, if you were to write software for a remote robot sent to Mars, you probably don’t want panics that crash everything, but you would also benefit from remote hot-patch functionality. Even if you used Idris and had a Coq proof that everything was bug-free, and spent 200 months testing everything, you could still encounter a bug – and for a million-dollar robot at the end of a year-long space flight to Mars, the ability to remote-debug and hot-patch it would be a huge benefit.

Anyway, I really like that angle; it brings concrete requirements into the equation, and that makes a lot of sense.

1 Like

I have been eyeballing F# and the Fable JS compiler. It’s like a cousin of OCaml (I actually grew up in FP doing F# in the early days; I really like the language, and its semantics are very close to OCaml’s, but with better syntax IMO). Just an aside :slight_smile:

2 Likes

I’m curious about this:

Can anyone provide me with some background as to why people don’t want to use the JVM, and which platforms they consider better together with their reasoning?

1 Like

Sometimes people prefer Node.js or other JavaScript runtime environments to the JVM for the back-end.

1 Like

Perceptions of the garbage collector and the JVM’s memory overhead range from fictional bogeymen living on FUD to actual deal breakers for some use cases. The JVM likes memory (although, as pointed out, you can coax it into using much less than it does by default). At a technological level, it lacks things like value types (structs – types where the user controls memory layout – and complex primitive types), so everything user-defined lives on the heap (more memory). There are proposals in the works to fix this, and to bring other features to the JVM like lightweight fibers. If you want to deliver native binaries, you “can” now with native-image, but there are substantial restrictions that require a “closed-world assumption” to do so (reflection causes problems, and anything that dynamically loads or generates classes at runtime is not supported).

The JVM isn’t (to my knowledge) able to dump the heap along with the native code that’s been JIT-compiled. There is a style of development (image-based development), popularized by Smalltalk and various Lisps (currently Common Lisp), that lets you define new functionality in a running “image” and dump that state to a file (or executable) for future sessions or for efficient application delivery: load up the image with all the stuff your app needs, define an entry point, and save the image to an executable that invokes the entry point and starts more or less instantly, with no runtime load/compilation on the Lisp side. I think the V8 JS engine allows something like this too.

3 Likes

I did a bit of looking around, and it seems there have been experiments around checkpoint and restore for the JVM.

Noteworthy seems to be this: Checkpointing Java from outside of Java | Red Hat Developer which appears actually usable in some circumstances.

It also appears that OpenJ9 is working on adding exactly that feature as part of their JVM: Everyone wants fast startup: introducing JVM snapshot+restore – Thoughts on Managed Runtimes – Dan Heidinga. Eclipse OpenJ9 project lead and JVM developer The project is tracked here: Snapshot+Restore · GitHub

It also looks like various people on the OpenJDK front are investigating this as well – it seems even Amazon is looking into it: Call for Discussion: New Project: CRaC

Finally, it seems Azul’s embedded JVM already has such a feature: Faster Startup For Embedded Java Applications: Azul Systems Inc.

One interesting thing all of these bring up is that some state can’t be restored so easily: file handles and other acquired OS resources, random seeds, precise clock timings, etc. And some things might need to be excluded from the snapshot because they’d be a security risk, such as encryption keys. I wonder how Common Lisp deals with those?

1 Like

Very good question. Same query for Smalltalk. From a security standpoint, I think no guarantees are made, and you probably have to develop some kind of serialization mechanism of your own (or intentionally leave those things out of the running context until needed). Probably the same for file handles and other resources.

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.