Polylith: A software architecture based on lego-like blocks


#1

Take a look at our awesome new architecture: https://polylith.gitbook.io.

  • Develop faster. Work with all your code in a single REPL.
  • Compose systems out of decoupled building blocks.
  • Test and build incrementally and get a faster feedback loop.

Make it simple. Be fast. Have fun.

/Joakim Tengstrand


#2

This looks really interesting, thank you. The written documentation is great, and I like the metaphors. You have convinced me to give it a shot on one of my projects.

Have you had any areas where the polylith approach did not work as well as expected?


#3

Hi,
I’m glad you like it!
No, we only have good experiences so far (the first two years!) :slight_smile:


#4

Have you tried integrating it with the reloaded workflow, e.g. using Stuart Sierra's component library? My projects all have a system map at the core, which can be restarted from the REPL, effectively resetting what is running in the REPL to whatever is stored in the namespaces.
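For context, the reloaded workflow mentioned above looks roughly like this with Stuart Sierra's component library — the `Database` record, its connection details, and the `user` helper functions are all illustrative, not part of any real project:

```clojure
(ns user
  (:require [com.stuartsierra.component :as component]))

;; A hypothetical stateful component implementing the Lifecycle protocol.
(defrecord Database [uri conn]
  component/Lifecycle
  (start [this]
    ;; In a real system this would open a connection to `uri`.
    (assoc this :conn {:connected-to uri}))
  (stop [this]
    (assoc this :conn nil)))

;; The system map ties components together; restarting it resets all state.
(defn new-system []
  (component/system-map
    :db (->Database "jdbc:postgresql://localhost/app" nil)))

(defonce system (atom nil))

(defn reset-system! []
  (when @system (component/stop @system))
  (reset! system (component/start (new-system))))
```

Calling `reset-system!` from the REPL stops the running system and starts a fresh one built from whatever is currently loaded in the namespaces.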

Edit:

I am asking because the RealWorld example does not use the reloaded workflow, and I cannot quite wrap my head around how it would work in the context of environments and systems in a polylith.


#5

Interesting, the focus seems to be on structuring the code in terms of deployment. Specifically, making your components individual projects. The guide is well done. I haven't yet looked at the example app, but there are a few things I think would be great to talk about.

How does it work with version control and build systems?

How is state handled?

How is versioning handled?

Also, it seems to approach an OOP model, what distinguishes it from one?


#6

Hi,

You don’t need Stuart Sierra’s component library when you use Polylith.

The reason is that you can run all your components and systems that you have in the development environment in a single REPL (thanks to the symbolic links).

Think of it like this.

The development environment is the place where you edit and test your components, bases and systems. The reason I say systems is that when you execute a function in a base from the development environment, it will delegate all its calls to the components you have in that environment, which is equivalent to testing a real system (the development environment can only contain one component per interface).

Systems are just a place where you put one base and a number of components together, so that you can build a deployable artefact, e.g. a service.


#7

Hi,

I’m glad that you like the documentation!

  • Version control: a workspace is stored in a single repository (e.g. git). That allows you to make atomic commits that run the ‘build’ command of the plugin on the server.
  • Build systems: you can read about continuous integration here.
  • State: the bases and components are stateless. If you need to handle state, you can do it the same way you would in a non Polylith system, by having atoms and the like.
  • Versioning: You don’t need to version the components and bases, because they are just code. You probably version your systems, the same way you probably do with non-Polylith systems.
  • OOP: One important difference between Polylith and OO is that Polylith is stateless and therefore simpler, because there are fewer paths the code can take, which makes it easier to reason about. Components don’t support inheritance, which is a good thing. Polylith adds context to FP, which gives a feeling similar to working with OO.

#8

Thanks for the answers, some more questions.

Does that mean for multi-team you’d recommend using a mono-repo?

So where would that code live, if not in a base or a component?

Also, does that mean your bases and components never access any datastore? Or require session management?

But they both have an interface which could change no? And since each component is its own project and artifact, I’m not sure I understand how it’s managed without versions?

Thanks


#9

Version control
Does that mean for multi-team you’d recommend using a mono-repo?

If any kind of sharing and/or communication between the systems is going on, then you will benefit from having all the code in a single repo (workspace). A workspace should always live in its own repo.

If more than one system is using a component, then it will help you get rid of code duplication and encourage reuse.

Because Polylith is an architecture that focuses on giving you, as a developer, the best possible development experience, it allows you to have a different setup locally than in production. If you want, you can run all your systems as a single monolith in your local development environment, executed by a single REPL. This removes the need for mocking and simplifies how you set up your local development environment. To be exact, no setup is needed at all, everything is just code running in a single REPL!

State
So where would that code live, if not in a base or a component?
Also, does that mean your bases and components never accesses any datastore? Or require session management?

I think I was unclear here. What I tried to say was that each component and base is just a collection of code that is later put together to form one or several systems. Let’s give an example. Say we have the variable (def v (atom nil)) and the function all-cities living in the component c. When called for the first time, all-cities checks whether v is initialised, and if not, calls the database (via the database component) and reset!s the atom with the values retrieved from the database. The next time all-cities is called, it can use the state of v, which is now already initialised. That’s an example of how you can handle state in a component or base.
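A sketch of that all-cities example in code — the namespace names and the database/get-cities call are stand-ins for whatever the real database component's interface exposes:

```clojure
(ns c.core
  ;; Hypothetical interface namespace of the database component.
  (:require [database.interface :as database]))

;; Component-local state, initialised lazily on first use.
(def v (atom nil))

(defn all-cities []
  ;; On the first call the atom is nil, so fetch from the database
  ;; component and cache the result; later calls hit the cache.
  (when (nil? @v)
    (reset! v (database/get-cities)))
  @v)
```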

Version control
But they both have an interface which could change no? And since each component is its own project and artifact, I’m not sure I understand how it’s managed without versions?

At least in Clojure, we don’t build single artefacts of our components and bases, we just ship systems. Each component and base that has changed since the last successful test or build will be AOT compiled against the workspace interfaces but that is just to ensure they conform to all interfaces.

It’s possible that in a future release of the Leiningen plugin, we will support AOT compiled components and bases that will need version numbers and be built and stored as JARs and used as libraries. If you implement Polylith for e.g. Java, this would be the way to do it.

Because all the code lives in the same workspace and repo, all the code will always be in sync. The reason is that every time you run the build command from the plugin, all affected interfaces, components, bases and systems are compiled and their tests are executed. The test command will give you the same level of confidence, but will not build the systems.

Every time you deploy, the plugin will know which systems have changed since the last successful build, and will build those systems based on what is currently in the workspace. So if system s has the base b and the components c1, c2 and c3, and only c2 has changed since the last successful build, the whole system s will be marked as changed and, in the end, be built and deployed (only c2 will be compiled and tested, not b, c1 and c3).


#10

Hi, I read your website and still don’t understand the advantages of using the Polylith framework over using loosely coupled local libraries and language level polymorphism?

I would normally put my logic into clearly defined libraries that can be re-used by different application projects (web api, CLI app, etc.), and dynamically load config by environment.


#11

Hi,

The advantages are several.
One of the clearest advantages compared to the library approach that you suggest is the development experience. All components are just code that you can work with from one place. This gives you a faster feedback loop compared to having to compile some of them to libraries first. It’s a totally different feeling and something we call ‘development nirvana’ especially if your language supports a REPL.


#12

With clj and deps.edn, your dependencies can just be local directories on the file system (or versioned artifacts in a repository or SHAs for git repositories) so there’s no need for the compile-to-library step at all.
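For example, a deps.edn can point straight at sibling directories on disk — the library names and paths here are illustrative, not from any real project:

```clojure
;; deps.edn: local directories, git SHAs and Maven artifacts side by side
{:deps {myorg/user-component  {:local/root "../components/user"}
        myorg/email-component {:local/root "../components/email"}
        org.clojure/clojure   {:mvn/version "1.11.1"}}}
```

With this in place, `clj` puts all the local subprojects on the classpath of a single REPL, with no compile-to-library step.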

At work we have a monorepo and we moved from Leiningen to Boot partly because of that (and also because we wanted a more “programmatic” build pipeline that was easier to extend using “just code”). As of today, we’ve essentially moved off Boot and onto clj / deps.edn – still with a monorepo – and are able to work in a single REPL with access to all of our subprojects (libraries, components, whatever).

I haven’t yet fully read the polylith architecture website but wanted to clarify that tooling already exists that addresses this particular issue without needing an “architecture”.