There was a time when there were hardly any abstractions: people wrote in assembly code, as close to the concrete machine as you could get. It was painful and complicated, and doing anything was tedious, effortful, and slow.
Abstractions were clearly needed. Higher-level languages abstracting over the machine's lower-level details were needed.
Then there was a time when abstractions themselves were very simple: branching and looping were all just done with "goto". It was error-prone and confusing, and it made working with other people's code bases difficult. Abstractions were clearly needed, something to abstract over the lower-level details of branching and looping, and the memory management tied to them.
Fast forward to Java. Now we already started with quite a lot of abstraction, yet there were still times when things were more tedious than they needed to be, and more abstraction was still needed. That led to the addition of interfaces, the development of frameworks like Spring, the creation of template languages like JSP, and the arrival of code-generation tools like Lombok or API generation like OpenAPI.
Once again, more abstraction proved hugely beneficial, delivering real productivity boosts while making things clearer, not more obfuscated. It is true that with each layer it becomes harder to understand how all these abstractions reduce back to some concrete instance at the end of it all. But if you can trust them, you need not worry about that: a good abstraction lets you forget and ignore the complex details underneath it, freeing you to focus on higher-level concerns, progressively closer to your real domain problem and away from the concerns of the machine.
Finally, enterprise software reached a point where managing complexity became difficult, so people promoted the best practices they had learned: basically, ways to fit more abstractions into certain situations, ways that had benefited them greatly. There was a big push to advocate for "design patterns" and other judicious uses of abstraction.
Lots of people, often mid-level developers, including me at the time, went seeking advice. We didn't understand the why: what need drove this advice, what use benefits from it. Still, we took it all to heart: SOLID principles, GRASP, YAGNI, inheritance, interfaces, composition. We took it at face value and tried to apply our limited understanding of it everywhere we could, religiously and indiscriminately.
This frivolous misuse of abstractions yielded the over-engineered, obfuscated, puzzle-like code bases that plague a lot of enterprise software. Where the hell is the actual code doing the actual thing?
This led more senior engineers to once again push a new "best practice", a new commandment to atone for the misunderstanding of the prior ones: "fewer abstractions is generally better". In other words: just use the abstractions more experienced people have already put in place, stick to your popular framework, follow its existing patterns, stick to simple usage of your programming language, and don't try to be smarter than you are, a.k.a. too clever.
Arbitrarily rejecting all abstractions because a best practice said to avoid them isn't ideal either; abstractions have clearly been used throughout software engineering to tame complexity and enable development at larger scale.
Good abstractions are really awesome; by definition, what makes them "good" is that they actually help rather than hurt. There are countless examples of good abstractions throughout the history of software development. There are even more minor abstractions that everyone creates on a daily basis without realizing it, like simply choosing what a function or method will be, and what its arguments and return value will be. Or choosing where the data will live. Etc.
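Even a plain method signature is an abstraction of this everyday kind: callers depend on the name, parameters, and return type, never on the computation behind them. A minimal sketch (the names here are made up for illustration):

```java
public class Invoice {
    // Hypothetical example: choosing this name, this argument, and this
    // return type is itself an act of abstraction. Callers see "a total",
    // not the concrete loop that produces it.
    static int totalCents(int[] lineItemCents) {
        int total = 0;
        for (int cents : lineItemCents) {
            total += cents; // concrete detail the caller never sees
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalCents(new int[] {199, 250, 51})); // prints 500
    }
}
```

The loop could later be replaced by a stream, or the items fetched from a database, and every caller of `totalCents` would be unaffected.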
Understanding what an abstraction is is difficult, as the concept is abstract in itself.
For a computer application, the concrete is the machine: live memory in RAM, on disk, and in buffers inside network cards or the GPU, alongside an ordered list of CPU/GPU/other-hardware instructions.
Machine code is the first abstraction, using the CPU's own instructions to abstract over multiple possible concrete executions, often dependent on inputs or computed results available only at runtime.
Abstractions build on abstractions: higher-level languages abstract over this machine code; Clojure abstracts over Java bytecode, which abstracts over machine code and other things.
Abstractions are everywhere in programming. Good ones pay dividends; bad ones obfuscate what's really happening, don't play well with others, can't extend to unplanned behavior, and so on.
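As a sketch of what "paying dividends" can look like, consider an interface that hides where its data lives. The types below are hypothetical, not from any real framework:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical good abstraction: callers program against the interface
// and never learn whether the data lives in memory, on disk, or remotely.
interface UserStore {
    Optional<String> findName(int userId);
}

// One concrete instance; a database-backed implementation could be
// swapped in later without touching any caller.
class InMemoryUserStore implements UserStore {
    private final Map<Integer, String> users = new HashMap<>();

    void add(int userId, String name) {
        users.put(userId, name);
    }

    @Override
    public Optional<String> findName(int userId) {
        return Optional.ofNullable(users.get(userId));
    }
}

public class Demo {
    public static void main(String[] args) {
        InMemoryUserStore store = new InMemoryUserStore();
        store.add(1, "Ada");
        UserStore abstractView = store; // callers only see the interface
        System.out.println(abstractView.findName(1).orElse("unknown")); // prints Ada
        System.out.println(abstractView.findName(2).orElse("unknown")); // prints unknown
    }
}
```

A bad version of the same idea would leak its storage, say by returning the raw `HashMap`, coupling every caller to one concrete representation.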
Developing the skill of understanding what abstractions are, what they abstract, how they do it, and what it takes to make them concrete is very beneficial. And getting better at knowing when to use more abstraction or less, which constructs to use to model it, and how to model it: that's part of the journey to software mastery.
Food for thought.