Polylith: A software architecture based on lego-like blocks

Take a look at our awesome new architecture: https://polylith.gitbook.io.

  • Develop faster. Work with all your code in a single REPL.
  • Compose systems out of decoupled building blocks.
  • Test and build incrementally and get a faster feedback loop.

Make it simple. Be fast. Have fun.

/Joakim Tengstrand


This looks really interesting, thank you. The written documentation is great, and I like the metaphors. You have convinced me to give it a shot on one of my projects.

Have you had any areas where the polylith approach did not work as well as expected?

I’m glad you like it!
No, we only have good experiences so far (the first two years!) :slight_smile:

Have you tried integrating it with the reloaded workflow, e.g. using Stuart Sierra’s component library? My projects all have a system-map at the core, which can be restarted from the REPL, effectively resetting what is running in the REPL to whatever is stored in the namespaces.


I am asking because the RealWorld example does not use the reloaded workflow, and I cannot quite wrap my head around how it would work in the context of environments and systems in a polylith.


Interesting, the focus seems to be on structuring the code in terms of deployment, specifically making your components individual projects. The guide is well done. I haven’t yet looked at the example app, but there are a few things I think would be great to talk about.

How does it work with version control and build systems?

How is state handled?

How is versioning handled?

Also, it seems to approach an OOP model; what distinguishes it from one?



You don’t need Stuart Sierra’s component library when you use Polylith.

The reason is that you can run all your components and systems that you have in the development environment in a single REPL (thanks to the symbolic links).

Think of it like this.

The development environment is the place where you edit and test your components, bases and systems. The reason I say systems is that when you execute a function in a base from the development environment, it will delegate all its calls to the components you have in that environment, which is equivalent to testing a real system (the development environment can only contain one component per interface).

Systems are just a place where you put one base and a number of components together, so that you can build a deployable artefact, e.g. a service.


I’m glad that you like the documentation!

  • Version control: a workspace is stored in a single repository (e.g. git). That allows you to make atomic commits, which run the plugin’s ‘build’ command on the server.
  • Build systems: you can read about continuous integration here.
  • State: the bases and components are stateless. If you need to handle state, you can do it the same way you would in a non-Polylith system, by having atoms and the like.
  • Versioning: you don’t need to version the components and bases, because they are just code. You probably version your systems, in the same way you probably do with non-Polylith systems.
  • OOP: one important difference between Polylith and OO is that Polylith is stateless and therefore simpler: there are fewer paths the code can take through your codebase, which makes it easier to reason about. Components don’t support inheritance, which is a good thing. Polylith adds context to FP, which gives a feeling similar to working with OO.

Thanks for the answers, some more questions.

Does that mean for multi-team you’d recommend using a mono-repo?

So where would that code live, if not in a base or a component?

Also, does that mean your bases and components never access any datastore? Or require session management?

But they both have an interface which could change no? And since each component is its own project and artifact, I’m not sure I understand how it’s managed without versions?


Version control
Does that mean for multi-team you’d recommend using a mono-repo?

If any kind of sharing and/or communication between the systems is going on, then you will benefit from having all the code in a single repo (workspace). A workspace should always live in its own repo.

If more than one system is using a component, then it will help you get rid of code duplication and encourage reuse.

Because Polylith is an architecture that focuses on giving you as a developer the best possible development experience, it allows you to have a different setup locally than in production. If you want, you can run all your systems as a single monolith in your local development environment, executed by a single REPL. This removes the need for mocking and simplifies how you set up your local development environment. To be exact, no setup is needed at all; everything is just code running in a single REPL!

So where would that code live, if not in a base or a component?
Also, does that mean your bases and components never accesses any datastore? Or require session management?

I think I was unclear here. What I tried to say is that each component and base is just a collection of code that is later put together to form one or several systems. Let’s give an example. Say the component c contains a variable (def v (atom nil)) and a function all-cities. When called for the first time, all-cities checks whether v is initialised and, if not, calls the database (via the database component) and reset!s the atom with the values retrieved from the database. The next time all-cities is called, it can use the state of v that is now already initialised. That’s an example of how you can handle state in a component or base.
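The pattern above can be sketched in a few lines of Clojure. The namespace `c.core` and the function `fetch-cities-from-db` are hypothetical; in a real component the fetch would go through the `database` component’s interface:

```clojure
(ns c.core)

;; component-local state, nil until first use
(def v (atom nil))

(defn- fetch-cities-from-db []
  ;; stand-in for a call to the `database` component's interface
  ["London" "Stockholm" "Oslo"])

(defn all-cities []
  ;; initialise v lazily on the first call, then reuse the cached value
  (when (nil? @v)
    (reset! v (fetch-cities-from-db)))
  @v)
```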

Version control
But they both have an interface which could change no? And since each component is its own project and artifact, I’m not sure I understand how it’s managed without versions?

At least in Clojure, we don’t build single artefacts of our components and bases, we just ship systems. Each component and base that has changed since the last successful test or build will be AOT compiled against the workspace interfaces but that is just to ensure they conform to all interfaces.

It’s possible that in a future release of the Leiningen plugin, we will support AOT compiled components and bases that will need version numbers and be built and stored as JARs and used as libraries. If you implement Polylith for e.g. Java, this would be the way to do it.

Because all the code lives in the same workspace and repo, all the code will always be in sync. The reason is that every time you run the build command from the plugin, all affected interfaces, components, bases and systems are compiled and their tests are executed. The test command will give you the same level of confidence, but will not build the systems.

Every time you deploy, the plugin will know which systems have changed since the last successful build, and will build those systems based on what is currently in the workspace. So if system s has the base b and the components c1, c2 and c3, and only c2 has changed since the last successful build, the whole system s will be marked as changed and, in the end, be built and deployed (only c2 will be compiled and tested, not b, c1 and c3).
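The change-propagation rule described above can be illustrated with a tiny sketch (this is not the plugin’s actual implementation): a system is marked as changed if any of its parts changed, even though only the changed parts themselves are recompiled and retested.

```clojure
(defn system-changed?
  "True if any part (base or component) of the system has changed
  since the last successful build."
  [system-parts changed-parts]
  (boolean (some (set changed-parts) system-parts)))

;; system s = base b + components c1, c2, c3; only c2 changed
(system-changed? [:b :c1 :c2 :c3] [:c2])  ;; => true
(system-changed? [:b :c1 :c3]     [:c2])  ;; => false
```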


Hi, I read your website and still don’t understand the advantages of using the Polylith framework over using loosely coupled local libraries and language level polymorphism?

I would normally put my logic into clearly defined libraries that can be re-used by different application projects (web api, CLI app, etc.), and dynamically load config by environment.


The advantages are several.
One of the clearest advantages compared to the library approach that you suggest is the development experience. All components are just code that you can work with from one place. This gives you a faster feedback loop compared to having to compile some of them to libraries first. It’s a totally different feeling, something we call ‘development nirvana’, especially if your language supports a REPL.

With clj and deps.edn, your dependencies can just be local directories on the file system (or versioned artifacts in a repository or SHAs for git repositories) so there’s no need for the compile-to-library step at all.
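For reference, a minimal deps.edn along these lines might look as follows. The library names, paths, git URL and SHA are purely illustrative:

```clojure
;; deps.edn -- illustrative names and paths
{:paths ["src"]
 :deps  {;; local directories on the file system
         mycompany/user  {:local/root "../components/user"}
         mycompany/email {:local/root "../components/email"}
         ;; or pin a dependency to a git SHA
         io.github.someone/somelib
         {:git/url "https://github.com/someone/somelib"
          :sha     "0000000000000000000000000000000000000000"}}}
```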

At work we have a monorepo and we moved from Leiningen to Boot partly because of that (and also because we wanted a more “programmatic” build pipeline that was easier to extend using “just code”). As of today, we’ve essentially moved off Boot and onto clj / deps.edn – still with a monorepo – and able to work in a single REPL with access to all of our subprojects (libraries, components, whatever).

I haven’t yet fully read the polylith architecture website but wanted to clarify that tooling already exists that addresses this particular issue without needing an “architecture”.



Thanks for your feedback!

clj and deps.edn are great stuff. We haven’t tried them yet, but we understand the ideas. They could be very useful in Polylith if we decide to add support for sharing components across workspaces, or just any code.

Polylith is a lot more than just a way of being able to work in a monorepo. The components are encapsulated building blocks that only expose an interface, telling you what you can do with them but hiding how they are implemented, which is classic encapsulation.
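In Clojure, this encapsulation is just a pair of namespaces. A hypothetical `email` component might look like this (the namespace and function names are illustrative, not Polylith’s prescribed API):

```clojure
;; email/core.clj -- the hidden implementation;
;; never required directly by other components
(ns email.core)

(defn send-message [to subject body]
  ;; a real implementation would talk to an SMTP server or email API
  {:status :sent, :to to, :subject subject})

;; email/interface.clj -- the only namespace other components may
;; require; it just delegates, so the implementation behind it can
;; change freely without affecting callers
(ns email.interface
  (:require [email.core :as core]))

(defn send-message [to subject body]
  (core/send-message to subject body))
```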

The idea of separating what to do from how to do it is also present when it comes to how you work with environments and systems in Polylith.

In Polylith, the idea is to have a super fast development environment where everything is just code in one place. So if component user “talks” with a component email, it will be in the form of a direct function call executed in RAM without any calls over the network. This represents what to do.

How user should call email in production is seen as an implementation detail, e.g. via a direct call “user -> email” or maybe over the network like “user -> email-api -> email-lambda -> email” where email-api conforms to the interface email and where email-lambda runs on a separate machine. This represents how to do it.

Environments and systems serve different needs. Environments serve developer needs, while systems serve end user needs. The idea of only exposing interfaces within a workspace combined with the flexibility to set up environments and systems differently is a simple idea but turns out to be extremely powerful.

The symbolic links are a simple but powerful way of configuring how the code is assembled in environments and systems. You don’t need to look in config files to see what an environment or a system looks like, you can just look in the file system. It also allows navigation of the code from your development environment without extra support from the IDE/editor.

In the implementation of the Polylith Tool for Clojure, the systems are assembled by using symbolic links to code (components + bases). When a workspace is built, changed components and bases are AOT compiled together with the empty workspace interfaces as a way to guarantee isolation (that they don’t depend on implementation details in components). They are not compiled to libraries!

The standardised way of structuring a Polylith workspace into well-defined directories (interfaces, components, bases, systems, environments) not only helps you as a developer to find and reason about the codebase, it also helps the Polylith Tool to make incremental builds which can speed up the build time significantly, both locally and on the CI server.

It’s worth mentioning that Polylith is not only meant to target Clojure but other languages as well. The symbolic links are a simple way of implementing it, which gives you most of the benefits (except the incremental builds) without the need for any extra tooling.


I appreciate that – and I have (now) read the website in detail. I just wanted to point out that your claims about Polylith in connection with a monorepo are really nothing to do with the architecture and are much more about the tooling and, in your case, the “trick” of constructing a “dev project” built using symlinks to the subprojects.

Having read the Polylith website in detail, I will say that the approach feels very much like “OOP done the hard way”, at least as far as Clojure is concerned. I’d be interested to see what Polylith looks like in other languages (since you keep emphasizing that it is not Clojure-specific) and what the tooling would look like outside of Leiningen (particularly since we migrated from Leiningen to Boot back in 2015 and we just migrated from Boot to clj/deps.edn this month).

Hi Sean,

If you’ve come away from the documentation with the understanding that Polylith feels like “OOP done the hard way”, that probably means that we need to explain it more clearly! :slight_smile:

Do you feel that way because Polylith components have interfaces? Or is there another issue you have with the approach?

As we note in the documentation, Polylith is actually three things: a metaphor, an architecture, and a tool. The architecture part of Polylith is partially shaped by the symbolic link “trick”, but also by the high-level structure that components and bases give your code. These are decoupled and composable building blocks that are easy to reason about, reuse, test, and share.

Polylith doesn’t have any of the negative aspects of mixing state and behaviour that you get with objects. Instead of “OOP done the hard way”, I would describe it more like “Microservices done the right way”! :slight_smile:

Hi, I hope you’ll be able to share with all of us some success stories about Polylith in the future (I hope I just haven’t missed those!).

By the way, I wanted to ask: is Polylith something you would eventually grow into, or something you would want to start with right away?

I’ll try it out with my next project.
The only question remaining for me is: how does ClojureScript fit into this?
Do you have any recommendation about how to set up the local dev environment for this?



You can read about our experience with Polylith in production here.

Polylith works equally well for greenfield projects as for existing systems. Every line of code needs to live in a base or a component. These constraints help you in the design process, and it just takes seconds to create a new component. You will probably start with a base, and very soon you will also have a few components.


I’m glad to hear that you want to try Polylith in your next project. It will be exciting for us to follow how it goes.

Polylith has been developed primarily for backend systems. It could be interesting to share workspace interfaces between Clojure and ClojureScript in some situations, but we haven’t tried it out yet.
