Namespace inheritance à la Elixir use/using, is this madness?

@Markus_Agwin asked if I wouldn’t mind posting a response I wrote in the #biff channel on Slack (he brought this thread to my attention over there)–here it is for what it’s worth:

Interesting. I just read through it. My immediate thought is that I’m not really seeing what the benefit of namespace inheritance would be. Didibus mentioned that it’s used in Phoenix to make defining models/views/controllers more convenient. Although Biff doesn’t really follow MVC as far as I’m aware, the equivalents (schema, ring handlers, ui code) seem pretty easy to define already? If I were more familiar with Phoenix/Elixir I might have a clearer understanding of why it’s helpful there and whether it might be helpful in Biff.

I think namespace inheritance is possibly a separate issue from whether or not the framework actually provides inversion of control (as was correctly pointed out, Biff still follows the standard Clojure practice of keeping the “framework” code in your app rather than in an external library). Again, at this stage it’s not obvious to me that there would be any benefits to having Biff’s feature maps work more like plugins. The very first release of Biff did use a plugin system, actually–you annotated namespaces with ^:biff metadata and then Biff discovered them at runtime. But later I removed it because it didn’t really provide any value over the current explicit approach.

I went and found the commit where I removed the plugin system by the way. You can search for ^:biff example.core and :example/core (both in the same file) to get the gist of it–on startup, Biff would search all the clj files on the classpath for any with ^:biff-annotated namespaces, then do a requiring-resolve on the components vars from those namespaces.
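That discovery step can be sketched in a few lines. This is a minimal, hypothetical reconstruction (the names `biff-namespaces` and `discovered-components` are illustrative, not Biff’s actual API), which simulates an annotated namespace in-process rather than scanning the classpath:

```clojure
;; Simulate a ^:biff-annotated feature namespace with a `components` var:
(def feature-ns (create-ns 'example.core))
(alter-meta! feature-ns assoc :biff true)
(intern feature-ns 'components [:example/use-db :example/use-server])

(defn biff-namespaces
  "All loaded namespaces annotated with ^:biff metadata."
  []
  (filter #(:biff (meta %)) (all-ns)))

(defn discovered-components
  "Concatenate the `components` vars found in every ^:biff namespace."
  []
  (mapcat #(some-> (ns-resolve % 'components) deref)
          (biff-namespaces)))

(discovered-components)
;;=> (:example/use-db :example/use-server)
```

The real version resolved vars from files on the classpath, but the shape is the same: find annotated namespaces, pull a well-known var out of each.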

Now we just have users pass in the vector of components themselves, and similarly users also pass in the vector of feature maps. It seems to work well enough :man_shrugging:. And it still allows you to have “feature namespaces” with only your domain-specific code, and no framework code intermingled.

All in all my feeling so far is that the explicit approach can be made sufficiently easy so that it’s not a bottleneck on adoption*. My opinion on that may or may not change as Biff grows. I have been thinking about wrapping more of the stuff in that file (i.e. the template top-level namespace where all the framework code resides) in helper functions. It mainly depends on “what is the probability that people will want to change the framework code.” If we pull more of it into an external library, then it places less of a cognitive load on people at first. But if they do want to change the framework code, it would be less immediately obvious how to do so.

(*IMO social factors probably have a bigger effect on adoption than technical factors, which is why I’ve started doing meetups and things for Biff. But the technical factors probably have a larger effect on whether people still like the framework after they’ve been using it a few years/after they’ve inherited a legacy project written with it :slightly_smiling_face:)


I like the Spacemacs approach, where the whole configuration is in one 700-line .spacemacs file.
There are docstrings for every function and comments like:

;; Do not write anything past this comment. This is where Emacs will
;; auto-generate custom variable definitions.

Whenever I want to upgrade, I look at the changes between my current and the latest official .spacemacs file. Mostly I do not understand the changes, but I am reassured that my custom code is in a safe place that is not affected. I take the new .spacemacs and copy my custom lines into it, and thus upgrade the framework.


Thanks for joining the discussion. I don’t know yet what is better or not; at this stage I’m still exploring the problem space and the solution space.

I think that file is maybe a good example of where the Elixir pattern comes in handy. Or the Next.js pattern, or some other more inheritance-based pattern. Let me brainstorm a bit here.

In your case, how do you update that file for users? Like, if I created a webapp with Biff 1.1, and now in Biff 1.6 that file has drastically evolved?

Am I expected to manually copy/paste the new one over it in my project? Or do some kind of pull merge?

What if I had customized something in that file? I want the new functionality of that file from 1.6, but I don’t want to lose my customization and would still like it applied to 1.6’s version of it.

That’s one area where I think this pattern could help, in that the user’s example.clj would be empty except for inheriting from the framework’s app.clj or whatever it is called. And if the user wants to customize things, they just redef whatever var or function they want in their example.clj. When you upgrade Biff, you’d always have the most up-to-date version of example.clj, with all your re-defs automatically applied back.

Another aspect is in your features. Now, I don’t know if it’s as useful there, but if someone did:

(ns myfeature
  (:require [inheritance :refer [inherit]]))

(inherit biff.feature)

They could have all the requires available, and without doing anything more, this could be a fully functioning feature. They could try visiting url:port/myfeature and see a “Hello World”.

The macro would have done:

(require '[com.biffweb :as biff])
(require '[com.myapp.middleware :as mid])
(require '[com.myapp.ui :as ui])
(require '[com.myapp.util :as util])

(defn myfeature [sys]
  [:html
   [:head
    [:script {:src "https://unpkg.com/htmx.org@1.6.1"}]]
   [:body
    [:div "Welcome to your myfeature page!"]]])

(def features
  {:routes ["" {:middleware [mid/wrap-default]}
            ["/" {:get myfeature}]]})

Also, one thing Phoenix does is double up the inheritance: it scaffolds people’s projects to inherit from a user-owned base, which in turn inherits from the framework. So it’s more like:

(ns com.myapp
  (:require [inheritance :refer [inherit]]))

(inherit biff.app)

;; User overrides go below

(ns com.myapp.feature
  (:require [inheritance :refer [inherit binded-quote]]))

(defmacro __inherit__ [& {:as opts}]
  (binded-quote
   [opts opts]
   (inherit biff.feature opts)
   ;; User templates go below here
   ))

(ns com.myapp.features.home
  (:require [inheritance :refer [inherit]]))

(inherit com.myapp.feature {})

So the user can also add common functionality of their own into all of their features by adding to their com.myapp.feature inherit template.

And the opts map can be used to customize the template.

I don’t think any of this is necessary, but I think it can be used to add a certain level of productivity to get started, and ease of use. It basically lets you templatize the common namespaces you think your users are going to be writing themselves, where there’s a nice path to “upgrading” the template, and a nice path to overriding only parts of it.

P.S.: Really like what I saw of Biff otherwise. I think the secret sauce of Biff is the choice of XTDB and HTMX. Those two choices considerably simplify the challenges, because they work so well with Clojure semantics. With anything using a traditional RDBMS like MySQL, you face the big challenges of migrations, mapping tables to entities and back, connection pooling, and all that. And if you go down the JS path, now you’ve got another sub-project for your ClojureScript, React, all that stuff. But also, how you pulled things together seems quite nice, and I like that you included deploy scripts, hot-reload, and all that. It looks like a pretty good start.

I took a more in-depth look into Phoenix. It’s pretty awesome and by no means just a Rails equivalent.

The main advantage of Rails/Django is the ORM and simplifying data access.

The main advantage of Phoenix is Erlang’s actor mesh. The use case is to build highly scalable realtime networks that take care of message delivery/fanout. It’s much easier to build a scalable live chatroom application with Phoenix than with anything else, and I would guess that is why people are using it.


@jacob Isn’t using XTDB a bit of overkill for Platypub if it’s just doing static site generation? Would SQLite suffice, or are you using XTDB for versioning on the posts?

Well, in theory yes, but that’s just got to do with running on the BEAM VM and OTP. In practice, most Phoenix users are Rails refugees using it for small-scale web apps or websites. Both Elixir and Phoenix were designed by an ex-Rubyist and Rails user, and the syntax is Ruby-inspired, even though the semantics are a mix between Clojure and Erlang.

So while it does have a good cluster story, I think the popularity of Phoenix specifically (Elixir could have had the same problem as Clojure and not had any main framework of choice, or gone the library route instead) has more to do with the other qualities that they mention:

Build rich, interactive web applications quickly, with less code and fewer moving parts

Phoenix is a rock-solid web framework that improves the tried and true Model-View-Controller (MVC) architecture with a fresh set of functional ideas. Phoenix puts the focus on your business domain, bringing you immediate productivity and long-term code maintainability

Phoenix is a web development framework written in Elixir which implements the server-side Model View Controller (MVC) pattern. Many of its components and concepts will seem familiar to those of us with experience in other web frameworks like Ruby on Rails or Python’s Django

I may not be getting it, but it seems like some of the features you’re talking about are implicit in the old :use / :refer :all style of requiring that we got away from because of ambiguity.

And to prevent forcing a user to have to require from 50 namespaces, the library author will usually provide a convenience API namespace that consolidates most of the relevant namespaces into a single one, so that consumers can require from just one.
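That consolidation is usually just def-forwarding. Here is a sketch with hypothetical namespaces (`mylib.internal.auth` and `mylib.api` are made up for illustration):

```clojure
;; Internal namespace with the actual implementation:
(ns mylib.internal.auth)

(defn login [user]
  {:user user :session-id "abc123"})

;; Public API namespace; consumers only ever require this one.
(ns mylib.api)

;; Forward the internal var under the public namespace:
(def login mylib.internal.auth/login)

(mylib.api/login "rich")
;;=> {:user "rich", :session-id "abc123"}
```

Libraries like potemkin automate this with import-vars, which also carries over docstrings and arglists metadata.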

I’m just not understanding the major benefits being described. I understand that providing magic can sometimes make things seem easier at first, but we’ve learned that those kinds of magics can lead to more difficulty down the road. But maybe I’m just not understanding these implicit inheritance features.

One place where I think inheritance is helpful is when you have to build hundreds or thousands of components (like in a UI) that are all very similar but only slightly different. In these cases, you don’t want to concretize prematurely, so being able to reuse implementations can save a lot of space, in code and cognitively.

I made a lib, af.fect, for functional inheritance, where instead of doing that (inherit bar :name "Jane Doe") stuff, you would instead do:

(ns bar
  (:require [af.fect :as af]))

(def baz
  (af/fect
   {:as ::baz
    :op-env (fn [{:keys [name]}]
              (println "Hello" name))}))

(ns app
  (:require [bar :as b]))

(def baz
  (b/baz 
   {:as ::baz
    :name "Jane Doe"}))

(baz)
;=> Hello Jane Doe

The inheritance machinery follows affects around implicitly, but contained within the functions themselves, not as implicit require macros.

You can add behaviors upstream that affect the function’s environment, which run at compile (inheritance) time or run (execution) time.

The benefit here is that you can get the inversion of control of a framework, with lots of behaviors built into your functions, but you’re also able to extend the framework behaviors yourself, downstream, just by updating the environment of the affects you’re using with further inheritance. That’s what I’ve been doing with comp.el.

SQL is greatly overlooked. For example, if you create a stored procedure/function in the database, then any client can call that function on a db connection, making it easier to onboard teams (choose any language you want). Functions in the database have access to all the tables, are composable, are ACID, and can directly short-circuit without worrying about rollback. For more complicated scenarios where there could be a check on 4 or 5 tables, a write on 3 or 4 tables, and a select on 2 or 3, it’s probably better to do all of that on the database itself, because it runs synchronously and is all-or-nothing, meaning no possible chance of data corruption.

From my experience, SQL is extremely compact. Something like 10K lines of SQL will be the equivalent of around 50K lines of Clojure code, with the advantage of being extremely maintainable (well, if maintaining SQL is your thing anyway).

So you can model the database in the same way as you’d model a swap! call on an atom. Functions in the database can succeed or fail when called and will atomically swap the state of the database to a new state.
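The atom analogy can be made concrete with an in-memory sketch (plain Clojure data standing in for the database; no real database code here):

```clojure
;; The "database": one atom holding the entire state.
(def db (atom {:accounts {"a" 100 "b" 50}}))

(defn transfer
  "Behaves like a stored procedure: validates and writes as one
  all-or-nothing state transition. Throws without partial writes."
  [state from to amount]
  (let [balance (get-in state [:accounts from])]
    (if (< balance amount)
      (throw (ex-info "insufficient funds" {:from from :balance balance}))
      (-> state
          (update-in [:accounts from] - amount)
          (update-in [:accounts to] + amount)))))

;; Like calling the stored procedure: either the whole swap happens or none of it.
(swap! db transfer "a" "b" 30)
;;=> {:accounts {"a" 70, "b" 80}}
```

If `transfer` throws, `swap!` never installs a new value, which mirrors a database function rolling back atomically.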

The problem with this approach is that SQL has pretty awful syntax… but that’s another matter entirely.


Anyways… as to your other points:

  • Mapping tables to entities is ORM stuff. You don’t have to use it; you can just write SQL.
  • Migration is definitely an issue, but it’s like that on schemaless databases as well. Plus, relational schemas are much more powerful and consistent than type systems/malli/spec, because you can create arbitrary graph relationships starting from any table whilst having certain type guarantees on those graphs depending on your schema. It’s much more bang per LOC.
  • Connection pooling is not really an issue if a client is directly calling the database.

The team behind Phoenix is really smart. They are basically saying to the Ruby people:

Everyone knows that Ruby is slow… well, here is something that’s close enough to Ruby, but it comes with a lot more bang for your time commitment. You already have the Rails skillset – now you can apply that skillset to realtime systems as well.

It’s basically how Clojure was sold to Java devs, except the emphasis was on immutability and building more maintainable systems. Really cool project.

That’s an interesting observation, but the big difference is that use in Clojure does not make the vars you referred available to other namespaces that use your namespace.

A framework calls you. So it needs something to call. You could imagine something like:

(defn invoke [opts]
  ;; Set up a bunch of stuff
  (render opts) ; Call the render fn of this namespace
  ;; Do some framework-related cleanup stuff
  )

The framework might call invoke on namespaces that are supposed to be Views, for example. But by default you might want to do some pre- or post-processing in it, and maybe there are other things the framework might call, like a render-static and whatnot.

With Clojure’s use, you can get an invoke alias, but it isn’t itself callable from something that is using your own namespace, and it also would not be calling the render function inside your own namespace, but instead the one inside its own namespace.

Maybe a combination of use and potemkin’s import-vars would be a bit more similar, but even in that scenario, the “imported” invoke will not be calling your own render function.

Let’s try it:

(ns view)

(defn render []
  "Hello World!")

(defn invoke []
  (println "Calling render inside view")
  (println (render)))

(ns myview
  (:require [view :refer :all]))

(defn render []
  "Welcome to my view!")

(ns framework)

(myview/invoke)
;;=> clojure.lang.Compiler$CompilerException: Syntax error compiling at (1:1)
;;=> java.lang.RuntimeException: No such var: myview/invoke

So it didn’t really inherit invoke; myview is just able to use it internally without qualifying it. And even if we try to call invoke from inside myview:

(ns myview
  (:require [view :refer :all]))

(defn render []
  "Welcome to my view!")

(invoke)
;;=> Calling render inside view
;;=> Hello World!
;;=> nil

We don’t get the correct behavior: invoke has not called our “overridden” render function, but is instead calling view/render. So we see “Hello World!” instead of our own render message.

Whereas:

(ns view
  (:require [inheritance :refer [binded-quote]]))

(defmacro __inherit__ [& {:as opts}]
  (binded-quote
    [current-ns-name (str *ns*)
     debug (or (:debug opts) false)]
    (do
      (defn render []
        "Hello World!")

      (defn invoke []
        (when debug
          (println "Calling render inside" current-ns-name))
        (println (render))))))

(ns myview
  (:require [inheritance :refer [inherit]]))

(inherit view :debug true)

(defn render []
  "Welcome to my view!")

(ns framework)

(myview/invoke)
;;=> Calling render inside myview
;;=> Welcome to my view!
;;=> nil

Here it properly inherited invoke, and the inherited invoke correctly calls the render of myview rather than that of view. On top of that, the macro allows a few cool things: using the myview namespace name dynamically to template the code, which is how the framework can automatically print the name of each view as it invokes it. It also could have gone and registered this as a view with the framework so the framework knows to call it, if you wanted, and it lets you customize the behavior with options, such as turning on debug for this view.

That’s interesting. Inheritance, but at the level of the function body. Maybe there are use cases for that as well, though I’m not as sure for a framework. I guess it could let you extend an existing function in some ways; could this be used instead of middleware?


Okay, I see. But if you try to avoid :refer :all, then one would think you’d want to doubly avoid this :refer :all-on-steroids, right?

The external invocation from a framework seems orthogonal. We def do that with test frameworks, giving test namespace names a suffix of -test and then letting our test framework call functions. We also do the def-forwarding thing often when trying to wrap some other large library, enumerating it’s defs (or providing a list, say of java classes or methods, in the interop case) and then redefing them with a macro, just to avoid having to do it all by hand.

Also, we’ll often inject values into atoms that are managed by an upstream library.

I guess I see the point, but I just think it’d get messy, for the same reasons we generally avoid :refer :all in most cases.

That’s interesting. Inheritance, but at the level of the function body. Maybe there are use cases for that as well, though I’m not as sure for a framework. I guess it could let you extend an existing function in some ways; could this be used instead of middleware?

Yeah, not so frameworky at the namespace level, like you’re describing. Just the function level. It is kind of a function middleware thing. Also kinda like interceptor chains for both the inheritance step and the invocation step, which can be updated upon inheritance. An application or web framework could lean on it for overloading behaviors and letting downstream users lean on them and update them though.

In my experience in another web tech community where the programming skill is typically more on the beginner side, shall we say, what folks are looking for in “web frameworks” is convention and lack of boilerplate. They want to be able to easily write handlers and views and have the framework figure out how to map from requests to handlers to views, with each being optional.

In that community I wrote one of the most popular MVC frameworks and eventually ported it to Clojure – but I sunsetted it because it didn’t really add a lot over basic Ring with a few libraries (although it did remove mapping and rendering boilerplate).

Essentially it mapped /foo/bar to calling <app>.controllers.foo/bar (if such function exists) passing in an enhanced Ring request hash map, and then it would render (using Selmer) <app>/views/foo/bar.html and then wrap that in <app>/layouts/foo/bar.html and walk up the tree wrapping it in additional layouts. Again, all views and layouts were optional. In addition, it would call before and after functions in that controllers.foo ns if present. If you wanted to return data, you’d add a key to response hash map and the framework would render the specified data in the specified format instead of rendering HTML views. There were a few more “request lifecycle” hook functions but that was about it.
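The convention-based dispatch at the heart of that mapping can be sketched like so (hypothetical namespace names; this is not FW/1’s actual implementation, and it fakes the controller namespace in-process rather than loading it from disk):

```clojure
(require '[clojure.string :as str])

;; Stand-in for an <app>.controllers.foo namespace with a bar handler:
(let [ctrl (create-ns 'myapp.controllers.foo)]
  (intern ctrl 'bar (fn [req] {:status 200 :body "bar page"})))

(defn resolve-handler
  "Map a URI like \"/foo/bar\" to myapp.controllers.foo/bar,
  returning the handler fn if it exists, else nil."
  [uri]
  (let [[section item] (remove str/blank? (str/split uri #"/"))
        target-ns (symbol (str "myapp.controllers." section))]
    (some-> (find-ns target-ns)
            (ns-resolve (symbol item))
            deref)))

((resolve-handler "/foo/bar") {:uri "/foo/bar"})
;;=> {:status 200, :body "bar page"}
```

The “all optional” part falls out naturally: when `resolve-handler` returns nil, the framework just skips that step and moves on to rendering the view, if any.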

There were a couple of simple helper nses that controllers could :require to help process the data but it was deliberately very simple. There was no sense of inheritance for shared functionality (which is considered “bad” in a lot of the OOP world) – but your before function could add keys to the request map that provided functionality to the handler function (and there were global before/after functions too).

The original framework is still very popular today, but since I moved to the Clojure community and handed off maintenance to others it has declined in popularity somewhat – not bad for something I designed on a napkin one lunchtime in 2009 and released a few weeks later with only 400 lines of code.

If anyone’s curious, framework-one/fw1-clj: A port of Framework One (FW/1) from CFML to Clojure (github.com).


Okay, that also provides more color to the use-case.

I wouldn’t think it’s out of the norm in Clojure for an upstream lib to provide before and after fns that take a fn impl from a user and store it in an atom, to then be used in the same way. That gets around having to come at it from invoking externally at the namespace level.
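A minimal sketch of that idiom, with all names hypothetical: the library exposes a setter backed by an atom and calls whatever fn is stored there at the right point in its own pipeline.

```clojure
;; Library side: a registry for the user's before hook, defaulting to identity.
(defonce before-fn (atom identity))

(defn set-before!
  "Called by the user to install their before hook."
  [f]
  (reset! before-fn f))

(defn lib-handle-request
  "Library side: runs the stored before fn on the request, then the handler."
  [handler request]
  (handler (@before-fn request)))

;; User side: inject behavior without the library ever touching our namespaces.
(set-before! #(assoc % :user "admin"))

(lib-handle-request :user {:uri "/"})
;;=> "admin"
```

The inversion of control is the same as in the namespace-level scheme, but the contract is a single fn in an atom rather than a naming convention.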

Totally! Actually the reason I didn’t post my original response to begin with is I didn’t want to sound like I was poo-pooing any experimentation with inversion of control.

Yeah, easier upgrades could be a nice advantage of pulling the framework code into the Biff library one way or another. It currently works as you’ve guessed: whenever I make changes to the framework code, in the release notes I include a link to the relevant commit and say “if you want this in an existing project, make the same changes in this commit manually.”

I actually started a new Biff project yesterday, and I experimented with pulling some of the framework code into a disaggregate helper fn:


(def features
  [app/features
   auth/features
   home/features
   worker/features])

(def code (util/disaggregate features))
(def handler (:handler code))
(def on-tx (:on-tx code))
(def tasks (:tasks code))
(def static-pages (:static-pages code))

(defn start []
  (biff/start-system
   {:biff/after-refresh `start
    :biff/handler #'handler
    :biff/malli-opts #'malli-opts
    :biff/static-pages #'static-pages
    :biff.beholder/on-save #'util/on-save
    :biff.xtdb/on-tx #'on-tx
    :biff.chime/tasks tasks
    :biff/config "config.edn"
    :biff/components util/default-components})
  (util/generate-assets! @biff/system)
  (log/info "Go to" (:biff/base-url @biff/system)))

(I also replaced the :biff/components vector with a new default-components value)

It could be taken a step further by putting the code var in the system map and having components pull things out of that:

(def features
  [app/features
   auth/features
   home/features
   worker/features])

(def code (util/disaggregate features))

(defn start []
  (biff/start-system
   {:biff/code code
    :biff/after-refresh `start
    :biff/malli-opts #'malli-opts
    :biff.beholder/on-save #'util/on-save
    :biff/config "config.edn"
    :biff/components util/default-components})
  (util/generate-assets! @biff/system)
  (log/info "Go to" (:biff/base-url @biff/system)))

Then e.g. the use-jetty component would do this to get the handler fn:

(defn use-jetty [sys]
  (let [;; old way:
        ;handler (:biff/handler sys)
        ;; new way:
        handler (fn [request]
                  ((-> sys :biff/code :handler) request))
        server (jetty/run-jetty handler ...)]
    ...))

This way we’d still be able to redefine the code var and have the changes take effect without needing to restart the system.

Then we’d even be able to add new keys to the feature maps (and update the components to use them) without requiring any manual upgrades. I’d still include manual upgrade instructions for anyone who has replaced disaggregate or default-components with their own code though.

(As for whether I’ll actually end up putting this into Biff–:man_shrugging:)


I’ve thought about trying to pull the feature map code out into external libraries. I think it would work just as well to use a plain function:

(ns com.myapp
  (:require [some-lib.core :as some-lib]))

(def features (some-lib/features {}))

And then any overrides/changes can be accomplished by passing things in via the options map.
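A sketch of how such a features function might honor overrides through its options map (the parameter names here are hypothetical, not an actual some-lib API):

```clojure
(defn features
  "Build a feature map, letting callers override the signin route's
  path and handler via the options map."
  [{:keys [signin-path signin-handler]
    :or {signin-path "/signin"
         signin-handler (fn [_] {:status 200 :body "Sign in"})}}]
  {:routes [[signin-path {:get signin-handler}]]})

;; Default path:
(ffirst (:routes (features {})))
;;=> "/signin"

;; Overridden path:
(ffirst (:routes (features {:signin-path "/auth/signin"})))
;;=> "/auth/signin"
```

Since everything is plain data and plain functions, overrides compose without any macro machinery: callers can also post-process the returned map with assoc/update before handing it to Biff.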

(Specifically I think the authentication code could be a good candidate for this)

Thanks! Agreed about XTDB and HTMX.


XTDB can use the filesystem as a storage backend, so it isn’t any less convenient to set up or use than SQLite. It makes a great one-size-fits-all/most DB imo. Also, re: Platypub specifically–it’s designed to be a regular multi-tenant web service, akin to e.g. Substack, so it has the same database needs as just about any other app.


sweet. that makes sense.


I’m only citing the beginning of your post here, but it really applies to your entire experiment: I’m curious what your experience using it is, once you have applied it some. In theory it sounds like a great idea, because of how it isolates changes in the framework from application code. It was something akin to this that I was toying with in my head, maybe even taken a step further by using a register-feature function callable by features to register themselves with Biff; but thinking about it, that extra register fn might be a bit too much magic, compared to what it accomplishes.

So cool to see you experimenting.


I don’t know enough about this to be meaningfully helpful. I just wanted to say that part of the reason I love this community is that y’all post questions that contain the word “madness”.

