Dynamic types: where is the discussion?

All the studies I’ve seen come to similar conclusions. Some of the newer ones use GitHub commit messages to try to estimate defects per LOC. The difference between typed and untyped is just so minor, and there are so many unaccounted-for variables, that you really can’t conclude anything.

I have a saying which I use a lot at work: “when two smart senior software engineers disagree and have heated arguments about a software-related topic, you know it’s because neither option is better than the other, and both are probably good enough for the task at hand.”

If one were obviously better, we wouldn’t still be debating static vs dynamic typing.

For example, types themselves are a great idea, but JavaScript is loosely typed. That’s one reason TypeScript is popular: you cannot change the runtime to be strictly typed, but a static layer of strict typing can help.

I don’t think anyone would argue anymore for not having types or for loosely typed languages. Because types are clearly a good idea.

Having those types exist statically at compile time and stay static at runtime, though, isn’t obviously better or worse than having them exist at runtime and be dynamic, and that’s why there’s no agreement.

There’s one thing that static types are obviously better at, and everyone, even the Clojure community, is in favor of it: static types help with performance and efficiency. That’s why Clojure has type hints, for example, or why it has vector-of.

Other than that, I think static vs dynamic is an ergonomic difference, and that inherently means it’s a personal preference: what better fits your mental and physical frame. But you can’t take just the static or dynamic nature of types in isolation when it comes to ergonomics; you have to consider the whole language package. That’s why I both prefer statically typed languages over dynamically typed languages, and yet prefer Clojure over the statically typed languages I know.

I think this is an illusion. Dynamically typed languages, I believe, are in reality multiple times more popular than statically typed ones. Even though almost every big company seems to push for static ones, dynamic languages still prevail time and time again in popularity: Ruby, Python, JavaScript, Excel, SQL, Bash, Lua, PowerShell, etc.

Also interesting to think that mathematics tends to be dynamically typed as well…

But why bother? If this is subjective, as both @didibus and I say, and studies seem to corroborate, what do you hope to gain by trying to convince someone else their preference is wrong?

1 Like

More like defending dynamic types against people who firmly claim they are inferior because of xyz (especially project managers)

Er, good luck… but maybe time to find another team/project to work on if you feel strongly enough about “dynamic vs static” to want to argue a purely subjective issue with your colleagues and managers?

1 Like

Any ‘argument’ can be pretty much summarised with these memes:


Personally, (1) holds more sway for me than (2), and I dare say for almost everyone out there. At least the guy in (2) is still “correct”. So arguments usually favour static typing.

Dynamically typed languages ARE inferior in the sense that they are missing compiler features. These compiler features are solving valid pain points for a lot of people and teams - and so are being adopted. Rust for example has a borrow checker and a programming model to produce code in a non-gc runtime. It’s awesome.

And then you get articles like this from people that have bought into it, used it for a long time and developed their view of it:


Being “correct” in a programming sense is not the equivalent of passing compiler checks. Sometimes the “correct” thing to do might be something that compiler does not allow. Or maybe the compiler is always right. This is something that’s really hard to justify without looking at the specifics of the situation.

However, the people who write the rules don’t necessarily use the same rules for themselves. Idris 2 (a more-Haskell-than-Haskell language) is no longer written in Haskell; it is self-hosted and compiles via Chez Scheme. Julia is backed by a Lisp. It’s worth asking why this is.

4 Likes

This happened at work. There was code like this:

function foo() {
    return someThing === someValue
}

function bar() {
    // Bug: this tests the function object `foo`, which is always
    // truthy, instead of calling it as `foo()`.
    if (foo) {
        // do something
    }
}

This had been in the code base for a long time. They switched to TypeScript, which caught the error. When they fixed it, several things broke: other parts of the code had worked around the fact that bar was broken, and those failed when bar worked correctly.

It’s hard to argue against using types when stuff like this is extremely common in JS code bases. Every JS code base I’ve converted to TypeScript had plenty of subtle bugs like this lying around.
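
To make the failure mode concrete, here is a minimal runnable sketch (TypeScript, simplified from the snippet above; `foo` here just returns false). Note that TypeScript with strictNullChecks reports an error along the lines of “This condition will always return true since this function is always defined. Did you mean to call it instead?” for the `if (foo)` pattern.

```typescript
// Simplified stand-in for the snippet above: foo() returns false,
// but the function object `foo` itself is always truthy.
function foo(): boolean {
  return false;
}

const buggy = Boolean(foo); // tests the function object, not its result
const intended = foo();     // the actual comparison

console.log(buggy);    // true
console.log(intended); // false
```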

The thing I’ll run into with languages like Lisp or Clojure, which isn’t as bad as JS, is the shape of the data in maps changing. When some feature is added, the contents of a map no longer look the same. Finding all the places in the code where that map is used and is now broken is tedious, compared to having a strongly typed language and the compiler telling you everywhere that’s broken.

The number-of-bugs studies are flawed because of things like my first example. Dynamic typing caused the code to be broken, but it had been hacked around and still worked. So the number of bugs is still the same, but the code is worse because of the dynamic types.

Whether to use dynamic or static typing is really a personal preference, but you’ll be hard pressed to find arguments against static typing that hold up to large scale projects with a wide range of developers working on them.

Ask yourself this: if dynamic is better, why do type hints and spec exist for clojure?

2 Likes

I think this is where “gradual typing” enters the picture, and I feel Clojure has matured past “untyped” to “gradually typed”, which is what TypeScript is, as is spec.

So maybe the argument is for dynamic typing at the beginning (both the beginning of learning and the beginning of projects), and the point worth making is the ability to start fast and add custom specifications as needed.
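
That “start dynamic, tighten later” workflow is easy to sketch in TypeScript, the gradual-typing example already mentioned in this thread. The names below are illustrative: `any` marks the still-untyped region, and annotations can be added file by file as the project matures.

```typescript
// Typed region: this function has committed to its types.
function parseAge(s: string): number {
  return parseInt(s, 10);
}

// Untyped region: `any` opts out of checking, like early-project code.
const rawInput: any = "41";

// The call crosses from untyped into typed code without ceremony.
const age = parseAge(rawInput) + 1;
console.log(age); // 42
```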

I wish I knew some studies, or at least good essays, putting this to well-reasoned words. If they aren’t out there yet, maybe I need to make them…

2 Likes

In your example, can you explain what exactly the issue is? Is it that foo is used as an object in the condition, instead of being called as a function?

How does TypeScript catch this as an error? What if the condition actually wants to check that the function is defined?

Is the code really broken if the program runs without defects?

It’s hard to find arguments against guard rails in general. Can you really argue against more stop signs? More pedestrian crosswalks? More protection gear when cycling? Argue against more padding around sharp edges?

The one good argument against in my opinion is the one Paul Graham makes here: Beating the Averages

At my work we’re one of the only Clojure teams, and our team has a reputation for being one of the most reliable at delivering on business impact and value. Stakeholders like working with us, because we get things done. Other teams are using Java or Scala or Kotlin. We own twice as many services as the average team as well, without needing double the engineers.

We also have some of the lowest defect rates: we work on live services with great availability and uptime, and the number of support issues and incidents due to an actual code bug is minimal, less than a handful per year. All that gives us a pretty good internal reputation for exceeding expectations.

Now, I wouldn’t switch to JavaScript from Clojure and expect the same. That’s why I think all conversation about type checker on their own is irrelevant if you don’t consider the rest of the language as a whole, because the end result is the whole package.

Anyone who comes from JavaScript and proclaims static typing is better has a skewed bias, maybe it is better for JavaScript given everything else about JavaScript. That might not mean it would be better with Clojure, especially if it means sacrificing other things and thus restricting other features or properties of Clojure in the process.

I insist on this point, because it is a very common fallacy in these discussions. Clojure, from the studies I’ve seen, has a much lower defect rate than Java, but Java has a much lower defect rate than JavaScript. Yet I’ll see people generalize to say that static types prevent defects, except that Clojure has fewer defects than many statically typed languages. What gives?

My team also maintains a few Java and Scala services, and they have measurably more support issues and incidents that actually are due to a code bug. They also often have more functional bugs, as in they fail to really meet the spec even when the code has no bug; these are insidious bugs. I think Clojure catches those because the REPL means you’re more in touch with the actual runtime behavior, and you more quickly realize: wait, is this really the behavior that makes sense for the use case?

Again, I wouldn’t be surprised if we switched to JavaScript that it would have even more defects and bugs of all kinds.

That’s why personally talking in the general sense of static type checkers or not doesn’t seem as useful as discussing the actual languages.

If you ask if Clojure is a safe language that leads to low-defect programs, or if it is prone to accidental bugs, that’s a much more interesting discussion. For that, we find no evidence to the contrary in the research analyses, in anecdotes in the wild, or in my own experience: it actually seems like Clojure is on the safer side of the scale, and tends to produce quite low-defect programs.

Ask the same for JavaScript and you might have a very different answer.

That’s a misunderstanding of types I believe.

The casual language has gotten a bit confusing so it’s no wonder people misunderstand types.

A static type is when a variable or container of values cannot change the type of values it contains at runtime.

A dynamic type is when a variable or container of values can change the type of values it contains at runtime.

In both cases types exist.

An untyped variable is one where the type of values it contains, and can contain, is unknown; really it’s just a pointer to a memory location, with no knowledge at all of what the thing it points to is.
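
To make the distinction concrete, here is a hedged sketch in TypeScript, where `any` behaves dynamically and an annotated binding behaves statically (the truly untyped “raw pointer” case has no TypeScript analogue):

```typescript
// Dynamic type: the binding may hold values of different types over time.
let dyn: any = 42;
dyn = "now a string"; // allowed: the contained type changed at runtime

// Static type: the declared type fixes what the binding may ever hold.
let stat: number = 42;
// stat = "nope"; // rejected at compile time:
//                // Type 'string' is not assignable to type 'number'.

console.log(typeof dyn, typeof stat); // string number
```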

Now, it turns out that if you want to build a validator, often called a type checker, that runs at compile time and infers from the source code alone that no value of the wrong type will ever be assigned to a variable of the wrong type at any point while the code runs, then allowing variables to contain values of different types at different times makes it impossible, or at least not currently known how, to implement a type checker that can truly reason at that level of dynamism.

Thus, to build a type checker that can reason about runtime type correctness from source code only, what is casually called a “static type checker”, you must also enforce a runtime that does not allow dynamic types, but forces all variables or value containers to have static types, meaning the type of values they can contain cannot change. (Or else your compiler must refuse to compile such programs.)

On top of that, there are other challenges to building such a source-based type checker. Allowing a variable or value container to hold a variant of possible types makes things a lot harder as well, so supporting heterogeneous variables or containers tends to be trickier; if you really want such a type checker, you might also have to enforce a runtime that doesn’t allow that, such as forcing a List to contain only values of a homogeneous type.

The distinction I’m making is that on one hand you have a program that can reason about type correctness given only source code and some constraints/properties to prove will hold of the running program. This has nothing inherently to do with static or dynamic types, except that it’s impossible, or very very hard, to implement one for a program that allows dynamic types. And due to that complexity or impossibility, you accidentally have to restrict the runtime types to be static.

So much so that we even started calling languages with such a type checker statically typed languages, and those without, dynamically typed. In theory it’s not necessarily the case: you could have a statically typed language that doesn’t have a source-based type checker, but nobody does, because why would you impose such a restriction without some other reason? The trade-off only makes sense if you gain something else in doing so.

Now that I’ve made this important distinction clear: if you look at Spec, you realize that Spec doesn’t actually make types static. In fact, Spec is able to validate dynamic types, because it too is dynamic in its validation.

Thus in no way is Spec a form of static typing or forcing static typing on the program.

Type hints, on the other hand, do enforce static types, allowing the memory layout to use primitive types, which are more compact and efficient, or allowing the compiler to hard-code the dispatch directly to the method of that exact type without needing dynamic type inspection for the dispatch, which again is more performant.

That use of static types is clearly better if you care about performance and memory efficiency; I’ve never seen someone claim otherwise, to be honest. Some people can claim it’s annoying to force it (if you don’t mind the performance impact, why force it?), but having it as an option is an all-around positive in my opinion. I see no trade-offs.

This is another slight misunderstanding of nuance that I sometimes get pedantic over, haha. Spec is not like gradual typing; Spec in fact doesn’t deal with types at all. Spec deals with values directly, and uses predicates as constraints. If a type checker could reason about source code from those predicates it would be awesome, but it’s a very different approach.
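
The predicate idea is easy to sketch outside Clojure too. Here is a hypothetical, minimal TypeScript analogue (not Spec’s actual API): validators are plain runtime predicates over values, composed loosely like s/and.

```typescript
type Pred = (v: unknown) => boolean;

// Plain predicates over values; no type declarations involved.
const isInt: Pred = (v) => typeof v === "number" && Number.isInteger(v);
const isPositive: Pred = (v) => typeof v === "number" && v > 0;

// Compose predicates, loosely like Spec's s/and.
const allOf = (...ps: Pred[]): Pred => (v) => ps.every((p) => p(v));

const posInt = allOf(isInt, isPositive);

console.log(posInt(3));   // true
console.log(posInt(-1));  // false
console.log(posInt(2.5)); // false
```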

Where my mental model is still fuzzy is around dependent types. Do they allow reasoning about dynamic types from the source code? Do they allow a correspondence between predicates and types?

Gradual typing is an interesting approach, let the programmer decide which trade off to make when and where between the two.

I think the downside has been that it reduces quite a bit what the type checker can prove, and so in practice you might find that the added effort of annotating types in the source isn’t worth it if it doesn’t even let the type checker guarantee that the entire code base works. Especially because we use so much library code nowadays: if that code isn’t annotated, all code that depends on it can’t be reasoned about, and that’s like 90% of your application.

Edit:

Ok, this is already very long, but I’d like to also bring up the consequences to Clojure of all this.

You know the pattern of having a single Atom with a heterogeneous Map to represent all of your application state? That’s very dynamic in type. The values this Atom is allowed to contain can change over time, the values this map can contain can change over time, and it can contain values of different types.

The entire data-orientedness seems hard to reason about from the source. There are quite a few patterns of structuring values and code, I think, that we’d need to get rid of because they wouldn’t allow a type checker to reason about them.

I think that’s the struggle a type checker for Clojure faces: how do you retain the current look and feel of Clojure, keep its current idioms and patterns, the same ergonomics, but also allow a type checker to reason about its types at runtime, without just forcing Classes back into the language and going back to homogeneous collections?

Even Haskell feels more OO in a lot of ways, because it defines all these “types”; half the ergonomics of an OO language is having to write a definition of a static structure to contain values of types that can’t change.

Point {
  x : int
  y : int
}

That’s technically not OOP, but it sure starts to feel like it.

Now you want to put a long in there? Sorry, why don’t you define a PointLong type?

PointLong {
  x : long
  y : long
}

Well that’s annoying? Okay, why not spend an extra 12 months of man effort to add generics?

Point<T> {
  x : T
  y : T
}

Wait, you want a Point where x is an int and y is a long? Err… damn you… Maybe add a runtime cast? :joy:

Ok fine, let’s spend another 12 month man effort to add union types:

Point {
  x : int | long
  y : int | long
}

What is that? You’re worried this allows invalid points in places that want a Point with x and y both int, and that nowhere in your program are Points with a long x and an int y allowed? I’m sorry, I’m out of ideas.
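
For comparison, TypeScript (already discussed in this thread) can express each step of that progression, at the cost of exactly the kind of type bookkeeping being described. A sketch with illustrative names, using number and string to stand in for the two field types:

```typescript
// Fixed field types:
type Point = { x: number; y: number };

// One type parameter forces x and y to be the same type:
type HomogeneousPoint<T> = { x: T; y: T };

// Two parameters allow mixed field types (the "x int, y long" case):
type MixedPoint<X, Y> = { x: X; y: Y };

// Union types for the last step:
type LoosePoint = { x: number | string; y: number | string };

const p: MixedPoint<number, string> = { x: 1, y: "two" };
console.log(typeof p.x, typeof p.y); // number string
```

Either way the underlying point stands: every new shape still needs a named type or another type parameter, which is the ergonomic cost being described.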

Is this really the feel you want for Clojure? I’m personally not sure if it would bring me the same joy if it was the case.

6 Likes

This isn’t really true – but that static checker can get very, very complicated. This was the kind of problem the company that I used to work for dealt with. We wrote static source code analysis tools for FORTRAN, C, C++, and later for Java. The FORTRAN analyzer wasn’t too complicated, the C one was originally a C compiler front end adapted and enhanced for semantic analysis, but the C++ one was really pretty gnarly. As your value system becomes more sophisticated, your static checker also has to become much more sophisticated. I wrote the C++ analyzer (from scratch) and I had co-written the C compiler at an earlier company, where we also developed a C-to-MALPAS translator so we could run C programs through the MALPAS “analyzer” (see MALPAS Software Static Analysis Toolset - Wikipedia). That ended up being a three-pass “compiler” system that needed to analyze the entire program and then make multiple passes over the C code as all globals were lifted into the function call chain across the entire code base (MALPAS had no global variables).

Of course, such systems are complex, cumbersome, and often quite slow – so the compromise is to add restrictions to the language itself, narrowing the scope of values and therefore narrowing the scope of the analysis needed, and ultimately making a compile-time type checker run fast enough that people are willing to put up with the edit-compile-test cycle that most compiled languages require.

If you’ve ever tried to use Typed Clojure, you’ll have a really good feeling for what I’m talking about here: it’s an incredibly sophisticated piece of software that is not particularly fast at its job and often still needs some assistance with annotations in the code (although its ability to do unassisted inference has improved a lot over the years). These are hard research problems – and type inference and static source code analysis research has been going on for about as long as we’ve had programming languages (part of my PhD research in the early '80s was around type inference and how language design affected that).

I guess a TL;DR of both @didibus 's post and mine is that dynamically typed languages are typically more expressive than statically typed languages because the languages have fewer constraints on them. That sort of thing is always a trade-off – you may get something built a lot faster in a dynamic language but you may end up with bugs in the system that static analysis could have caught. That doesn’t mean the system is automatically buggier than something built with a static language (as seen in the various studies): you may just have a different class of bugs in the system. Some people make the trade-off in one direction because they prefer the expressivity and fluidity (and, potentially, the speed of development). Some people make the trade-off in the other direction because they prefer the feeling of safety and working with a more restrictive medium. They are both valid trade-offs and, again, which you choose is likely to be entirely subjective.

I’ve written production software in everything from assembler and C to Scala – which runs the scale from weakly-typed to strongly-typed – and I prefer something a lot stronger than C, but I’ve also used a variety of dynamic and static languages and prefer something on the more dynamic end of that scale. I like types, I just prefer them to be checked at runtime :slight_smile:

6 Likes

I think it would be fine to talk about it more. Academia in general has tilted so heavily to the side of statically typed languages over the last generation that dynamically typed langs are starved for attention, despite the widespread use of them in actually getting stuff done with no readily apparent difference in quality.

Static typing advocates argue that their programs are better at:

  1. Preventing bugs. Last I checked, the issue trackers of libraries in statically typed langs were just as full (perhaps fuller) of bugs as those in dynamic langs, despite all their types and tests. Bugs are not a result of a lack of types; they are the result of confusing/conflicting/changing requirements, domain complexity, insufficient design, concurrency, etc. While statically typed programs prevent certain classes of bugs, those bugs are also trivial to find and fix, and static analyzers for dynamic languages can do that too. Taken to an extreme, static lang fans will suggest things about proving correctness; that capability does not seem to be either easy or widespread to me, and I have doubts that it requires static types to get there.

  2. Refactoring. I will grant that static langs provide more support for mechanically altering your program but why do you need to refactor things? Most “refactoring” that I’ve seen is actually adaptation to changing requirements.

For example, adding a field to the UI, that flows into the data, that flows into the business logic that flows into the database. Clojure’s approach here is simple - just add more attributes to your map. You’re still going to need to make changes in certain places, but how many incidental changes do you need to make? My own experience has been that dynamic langs respond to change better, and this is the kind of thing I do most frequently. I hear the static folks starting their “row type” printing presses already.

  3. Ease of development. The contention here is that static langs are more amenable to making tools that provide you with better suggestions and completion at development time. But I seem to have a lot of pretty great static analysis of dynamic langs too (and not just in Clojure). I will see your intellisense and raise you a repl for interactive verification and testing. If I had to pick one (I don’t), it would be the latter.

  4. Runtime performance. This assertion comes from being able to use more information about your program to better compile your code. I just don’t see that much evidence of this: there are counter-arguments to make about dynamic runtime optimization (where static does not have that info), and about shared intermediate code between dynamic and static langs in most popular runtimes. But if you want to boil it down, in almost all cases dynamic code is easily “fast enough”.

With Clojure, we can easily build correct, fast programs that respond well to change, at least as well as other statically typed languages. I think there’s more argument to be made that Clojure is particularly good at responding to change over time, certainly that has been my experience.

15 Likes

Great call-outs, Alex. I especially think #2 and #3 are the most important ones to people, in my experience.

Every time I’ve tried to influence others to pick Clojure it came down to #3 as the first criteria, and #2 as the second criteria.

Now, to be honest, I think Clojure could improve when it comes to ease of development and refactoring. I think it is true that in Clojure it’s harder to follow along with what keys are available where, what entities exist, what data they contain, where they are used, etc. I think it’s also true people struggle to figure out simple things: How do I run code? How do I set up and use a REPL? How can I set up tooling to have proper auto-complete? How do I know my syntax is correct? Where is this function used? Where is this piece of data used? I’ve seen people struggle with things like “How do I loop?” even. For refactoring, people have similar issues: how do I find all the things that depend on the thing I just changed?

The tooling has definitely improved, but I think its discoverability and the getting-started/learning curve are still a struggle. And some of the other challenges are inherent to how the language is today. It’s hard to answer the question “What depended on the thing I just changed?” when dependencies are sometimes dynamic or generally difficult for tooling to keep track of.

For me the keyword here is “ease”: those things are harder in Clojure, but they can be done, and you can learn to manage them by becoming better at using Clojure.

That said, that lack of “ease” I’m pretty sure is responsible for almost all people who try Clojure and decide not to use it, or who hesitate to learn or work with it.

I think it would be interesting to discuss more what could be done around this, and whether there is a way to make those things easier without trading away other good properties of Clojure.

Static types can be one solution that makes some of this easier, but it also seems to trade too much of what makes Clojure what it is away. Gradual Typing might be a middle ground, but it’s no easy task to add such a layer and find a way to do it that complements Clojure nicely and doesn’t just add friction. I’m curious what else could be done here as well.

Edit:

Some of the things I think have improved on those in recent years are:

  • Schema/Spec/Malli
  • clj-kondo
  • Clojure CLI and tools.deps / tools.build
  • The Orchard with nRepl / middlewares / Calva / Cider / Cursive, etc.
  • REBL / Portal / Reveal
  • Shadow-cljs
  • Figwheel-main
  • Babashka / nbb / sci
3 Likes

A dynamic type escapes type checking at compile time; instead, it resolves types at run time. But that runtime could be an external program, run say on file save, or when you eval an expression. That’s how LSP, clj-kondo, and Clojure spec cover a lot of the same ground you find in many static languages, but do it à la carte.

What matters is the fit of the PL to the problem you are working on, and that’s mostly based on how close to the metal you need to get. Beyond that, it’s community size.

I think languages like Haskell are interesting because they encode ideas with very strict terminology. However, it’s more “natural” for me to think in data structures and algorithms, so I’m not inclined to go too far down that road. I don’t fault anyone who does, though. It’s like learning another language; the benefit would be mostly to talk to cool people in the community.

2 Likes

Before even adding to the discussion, I wish we could raise people’s awareness to the following facts: most of the opinions we see about the static vs dynamic debate are based on insanely insufficient empirical evidence.

Whenever I see someone saying things like “in my experience, dynamic typing doesn’t scale to large systems”, my BS alarm starts ringing (which is hard to express in a diplomatic manner), and I know this person is really saying something like “I struggled to scale PHP / JavaScript / SomeOtherDynLang in my projects.”

We have not AT ALL exhausted the possibilities of either static or dynamic typing, and are therefore in no position to make general conclusions about their respective viability. Still, experience reports from state-of-the-art, well-thought-out languages like Clojure and Haskell are the signal that gets diluted in waaaaaay too much noise in this discussion.

5 Likes

Great points! So, what kind of problems does dynamic/loose typing NOT fit well? (since fitting anything is kind of the MO of dynamic typing)

Yes, I think that is a great reminder to wrap around to the original question: what are some well-studied resources that contribute to this discussion? I think you’re right that gut-feelings are a too-common and unreliable leader in these discussions.

Have you seen this summary? Literature review on the benefits of static types

1 Like

Took me a while to figure out the date of that article – November 2014. I think I’ve run across it a few times over the years and it certainly matches my feelings about several of the studies I’ve seen (all of which he tackles there, I believe). As noted in that article, this is not a “well-studied” field – it is just a “well-discussed” field :slight_smile: and nearly all of the studies seem to be either flawed or inconclusive (or both).

The few comments about static analysis (without adding types to a language) ring true to me – and matches my comments about this in the thread above: static code analysis can find bugs in both dynamic and static type languages that can be hard to detect via testing (and inherently get past the type checker in the static type language case). Many people in both camps find value in linting tools of varying sophistication so this should not be a surprising observation.

2 Likes

That is almost exactly what I was hoping for when I posted this question! Thanks for that, and for @seancorfield’s response pinning down its date. The only drawback is its age, but as has been pointed out, there really hasn’t been much added to the conversation in the last decade(s). Just tides of fashion.

One thing I haven’t seen pointed out so much is the difference for beginners. My intuition is that beginning/learning developers may much prefer dynamic typing because it alleviates one of the (at that stage) incidental complexities of development, but it also suggests that managers might wish for a static type system to rein in the presumably beginner developers they don’t trust to catch their own mistakes. I have been in that supervisor role and can understand the temptation to call for the static-type “babysitting” provided by a Java-esque type system, knowing this babysitting is exactly what infuriated a veteran developer like Rich into defining certain features of Clojure.

On the usefulness of discussion itself: as long as it’s dialectic (and not eristic) it’s useful, as the Greeks knew when they named them such, and before empiricism was a thing. On the main discussion, I haven’t tried here to be equivocal about the benefits of static types, but to talk about their hidden costs through a few (admittedly contrived) “facts”.

  • Phone chargers suffer breaking changes, but your brain never does. This difference between “information” and “mechanism” is related to the intuition that programs with “few types, many functions” are malleable, and those with “many types, few functions” are brittle. Brittleness is a big problem for those who pay us, as they expect software to act like humans and grow and adapt to their needs. Programming languages which idiomatically encourage the creation of many new types as a selling point will suffer death by specificity and will be brittle in a manner we created for ourselves, not essential to the problem.

  • There are many types of types. Martin Odersky found the need to project through whole static typing landscapes to hit upon those idioms most useful for the programmer, some of which have found their way into Typescript too. And yet Java designer Joshua Bloch felt that the relatively familiar java generics failed to preserve “Java as a blue-collar language, not PhD material but a language for a job”. I have a PhD and love a challenge, but agree that figuring out the right type system from the many options can be its own parallel puzzle, more distracting than beneficial to solving the original problem.

  • TypeScript [may not be] fully coexpressive with idiomatic JavaScript. This was a view expressed in The Typescript Tax that chimed with my experience. In Clojure it’s common to pipeline a map through functions like (defn add-z [{:keys [x y] :as m}] (assoc m :z (zfn x y))), essentially adding new information to a map based on what’s already in it using some zfn. It took me ages to work out that this has no idiomatic equivalent in Java, requiring as it does intersection types on both the input and the return type, with a “pseudo-code type signature” of type add-z = (x & y) -> (x & y & z). In other words, with all those type options available, there were useful approaches you simply couldn’t express in a fully typed way, making conscientious developers think they must be missing something.
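
For what it’s worth, that pseudo-code signature is directly expressible in TypeScript with a generic bound plus an intersection type; it’s Java that lacks it. A hedged sketch, with `zfn` as a hypothetical stand-in combining function:

```typescript
// Hypothetical combining function, standing in for zfn:
const zfn = (x: number, y: number): number => x + y;

// "Take any map that has x and y, return the same map plus z":
// the generic M preserves whatever other keys came in.
function addZ<M extends { x: number; y: number }>(m: M): M & { z: number } {
  return { ...m, z: zfn(m.x, m.y) };
}

const out = addZ({ x: 1, y: 2, label: "p" });
console.log(out.z, out.label); // 3 p
```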

Some final thoughts on this. Firstly Uncle Bob (entertainingly as ever) reminds us that the debate started decades ago, face-to-face in offices, by men (mostly) wearing ties. Secondly, Rich Hickey, when asked about Typescript relatively recently, reminds us that static vs dynamic typing is probably not the biggest thing - that being state management in imperative coding. I guess it’s obvious that I am won over by Rich’s reasoning around the costs of static types and encourage anyone looking for discussion to simply study his talks. I’ll leave the last word though to Mark Bastian who gets to the heart of designing “data first” over “api first” and poses the question “why are you doing this” of the latter.

9 Likes

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.