Passing dependencies into ring handlers ... on request map, or `partial`ed into the handler?

I’m curious if anyone has strong opinions one way or another on the (air quotes) best way to provide dependencies like a database connection into a Ring request handler, when working with one of the dependency-injection libraries like component, clip, or integrant. (I’m leaving mount out since it doesn’t quite work the same way).

The two major options I’ve seen are:

  1. Use a middleware (eg wrap-db) and at system start, assoc the db connection into the request map and write arity-1 handlers that know what key to use:
(defn wrap-db [handler db]
  (fn [request] (handler (assoc request :db db))))

(defn ring-handler-1 [request]
  (let [db (:db request)] ,,,))
  2. Form up the router as a component (eg with reitit or compojure), inject the db connection into that, and use partial or anonymous functions to close over the connection and turn handlers that take multiple arguments back into handlers that take only a request:
(defn ring-handler-2 [db request]
  ,,,)

(defn make-handler [db]
  (reitit.ring/ring-handler
    (reitit.ring/router
    [""
     ["/a-route" (fn [request] (ring-handler-2 db request))]
     ,,,])))

The first one:

  • has the advantage of being pretty simple
  • spreads assumptions about the shape of the request everywhere
  • makes it less clear which handlers actually use a given dependency

The second:

  • is very explicit; it’s easy to see which handlers use what
  • makes it easy to test individual handlers from the REPL
  • can be tedious in practice. (Example: it took me a while to notice that if you use partial you’ll close over not just the dependency but also the handler, which forced me to restart the system to pick up changes to the handler. If I set it up as above, or with the fn shorthand, the handler is resolved while processing every request, so I can just redefine the handler itself; see the sketch below.)
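
To make that concrete, here’s a small sketch reusing ring-handler-2 and assuming a db value is in scope (and that direct linking is off, which is the default):

(def frozen-handler
  ;; partial captures the current value of ring-handler-2, so redefining
  ;; the handler at the REPL has no effect until the system restarts
  (partial ring-handler-2 db))

(def live-handler
  ;; the fn shorthand goes through the ring-handler-2 var on every call,
  ;; so a REPL redefinition is picked up on the next request
  (fn [request] (ring-handler-2 db request)))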

In bigger route trees I lean towards the second. How do others feel?


I’m by no means an expert on this, and I’m using a different stack (reitit with interceptors). I took the first approach, that is, I have an interceptor that assocs the DB handle into the request, and the handlers that use that (or any other of the configuration-related things I inject with interceptors) have the required parameters made explicit with destructuring, like so:

(defn some-handler [{:keys [db ,,,] :as req}]
  ,,,)
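
The interceptor itself is roughly this (a sieppari/reitit-http style context map is assumed, and the names are mine):

(defn db-interceptor [db]
  {:name  ::db
   :enter (fn [ctx] (assoc-in ctx [:request :db] db))})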

I’ve recently faced an issue with this (using Integrant): at some point some of the DB connections needed to be restarted (e.g. Cassandra sessions, or XTDB), so I had to add a layer of indirection. Rather than assoc'ing a DB reference into the request, I pass a fn that gets you the actual handle. It’s not so neat from an FP standpoint, but it does solve the problem, and all further calls in my data layer and elsewhere take the connection as an explicit argument as they did before.
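
A sketch of that indirection, with illustrative names: the interceptor assocs a zero-argument fn, and code further down calls it whenever it needs the live handle. Here get-db could be something like (fn [] @db-atom), where the Integrant halt/resume code swaps the atom.

(defn db-fn-interceptor [get-db]
  {:name  ::db
   :enter (fn [ctx] (assoc-in ctx [:request :get-db] get-db))})

(defn some-handler [{:keys [get-db] :as req}]
  (let [db (get-db)]   ; fetch the current handle at the point of use
    ,,,))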

Testing handlers in the REPL is a bit cumbersome, since you need to create a “fake” request map, but in general you still have to extract the request parameters somewhere, so it may as well be there.

I like to distinguish one-time setup from the request. It clarifies that some variability is once-and-for-all, while other variability actually varies. Better still, the clarification is not just a comment, but it is in program notation, which in combination with immutable data gives some peace of mind that there will not be bugs with varying parameters that ought not vary.

Ring itself provides some handlers that are configured ahead of time, not through the request. Routing is one. For another example, take ring.middleware.resource/wrap-resource. Instead of putting a root-path and class loader into every request, the program configures a handler once and it holds the root path in closure. It certainly could have been done the other way. But then the problem space, over which to ensure the resource path was correct, would be much larger.
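
For instance (a minimal sketch; my-routes stands in for whatever handler you already have):

(require '[ring.middleware.resource :refer [wrap-resource]])

(def app
  (-> my-routes
      (wrap-resource "public")))   ; the root path is fixed once and held in the closure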

About the REPL, and making sure not to calcify the operative portion of the handler at compile time: As I recall, Ring makes some provision there, too, allowing a “handler” to be either a function or a var. If it is a function, it is naturally fixed at compile time; if it is a var, it is dereferenced at run time.
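
For example, handing the adapter a var instead of the current function value (a sketch, assuming the Jetty adapter):

(require '[ring.adapter.jetty :refer [run-jetty]])

;; #'app is looked up on each request, so redefining app at the REPL
;; takes effect without restarting the server
(run-jetty #'app {:port 3000 :join? false})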

We have code that uses both approaches at work. Most of our apps inject the (Application) Component via middleware. A couple of our apps use the second approach, or a variant of it (that has a component for each handler, essentially). My preference is #1, my teammate’s preference is #2. Both approaches have pros and cons – and you summarized those pretty well.

Over time, as maintenance has extended the number of Components needed in some handlers, the second approach does indeed become tedious, especially when you have to trace down the call chain to see whether a Component is available or not. But it does at least make you think a bit more about whether you really should be using that new Component or whether restructuring your code would be a better idea.

Overall, I still prefer the simplicity of the first approach because the “wiring” is in just one place – the middleware – so it’s easier to reason about and it’s easier to do maintenance when a new Component is needed by something deep in the call tree (although you lose the “prod” about refactoring to keep Component use closer to the edges of your code). Our applications are very database-heavy, so the Database Component tends to be widely used throughout the call tree – which is a pragmatic decision. We did, at one point, fully separate read/process/write but with the high level of DB writes in our system and the complexity of several of the updates, the code became very monadic in style and we found it harder to maintain and to reason about.


I guess I don’t mind that approach: a consistent convention in the args vector is a sort of social solution instead of a technical one.

I’m intrigued by the interceptor pattern. I’ve been waiting for sieppari to mature and to see where Ring 2 ends up. I might give it a try on some smaller prototypes. So far my experience with the pattern in re-frame has been okay; I haven’t really perceived that I’ve needed the power.

True. In effect this is what the wrap-db in the demo is doing, just in a different spot (the middleware stack instead of the route tree). In either case, my taste is drifting away from closing over free vars while manufacturing a function. I wish the runtime visibility/inspectability into functions were better.

So I just discovered the other day that reitit, at least, can’t take a var (eg #'my-handler instead of my-handler). The route tree compilation has an expansion protocol that has been extended to functions but not to clojure.lang.Var. On consideration, that’s probably appropriate, since that late-binding runtime lookup seems at odds with reitit’s promises of routing speed. I thought about trying to extend the protocol but then decided #(my-handler %) is clear enough while developing.

Yeah, this pragmatism is what I tend to like in re-frame: it’s a web app. 99%+ of the time, one database is fine, there is definitely a database (or app-db), and it’s not worth adding it as an explicit argument to practically every function.

I’m not sure about Sieppari’s status (it’s been in “alpha” for a couple of years), but then again, Metosin’s stuff tends to “just work” really well. I like the data-oriented nature of interceptors better than the function composition in Ring-style handlers, but both work. @plexus had some nice write-ups about the pattern some time back: here and here

FYI, you can use (partial #'handler db) and then you will get the latest code when redefining the fn in the REPL, without restarting the app.
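
For example, in the route tree from the original post, partialing over the var instead of the function value:

(defn make-handler [db]
  (reitit.ring/ring-handler
   (reitit.ring/router
    [""
     ["/a-route" (partial #'ring-handler-2 db)]
     ,,,])))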


I never want to miss a call for a strong opinion…. Despite the fact that I am wrong all the time :wink:

I am a strong proponent of the second approach for any initialization-time resources or state, and those objects should always come from the integrant or component init code. This makes sure that the request map is only talking about request-related state and context, not global state. The issue with not doing this is that global state in the request creates a hidden dependency between the routing/middleware and the eventual handler. Soon several problems develop: all the global state might meander into every request so every path has everything it might need, or alternative handler implementations might need state that it’s hard to tell whether the init code provides. Then different system configurations start including the kitchen sink to make sure every handler has everything it could ever need.

My favorite way to implement this currently is to make an integrant/component object with the web server and router, plus a protocol with a method “mount” that allows any other component to add a route to the current router state. Mount needs to pass the path+method and handler. The server component implements mount by adding the handler alongside the others via whatever router is being used (I like reitit). Now handlers come from individual components that each mount their handlers at init time onto the web server, the same way Unix mounts disks at init time. Each handler’s component only depends on the state that it requires, and removing a component from a system automatically removes its route.

Often the options to the mount method must include which middleware to put on the route (public routes, authenticated routes, and static resources often need different middleware). The web server should organize these and maintain their state (usually with other components), and refuse the route if, for example, it doesn’t do authenticated routes. This flags config errors at init time.

Requests then are about requests, and handlers hide the details of what they use to fulfill requests in the opaque function/partial of the handler, which exactly matches the ring handler signature that simply takes a request and returns a response. Each handler comes from a component that identifies a specific implementation of a route. Maybe today that uses a db, and tomorrow memcached?
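
A rough sketch of what I mean, assuming Integrant and reitit; all of the names here are illustrative rather than a published library:

(require '[integrant.core :as ig])

(defprotocol Mountable
  (mount [server path method handler]
    "Register a Ring handler for path+method on this server's router."))

(defmethod ig/init-key ::server [_ _opts]
  (let [routes (atom [])]               ; the current router state
    (reify Mountable
      (mount [_ path method handler]
        ;; a real implementation would rebuild the reitit router here and
        ;; swap it into the running web server
        (swap! routes conj [path {method {:handler handler}}])))))

;; each feature component depends only on what its handlers actually need,
;; and mounts its routes at init time
(defmethod ig/init-key ::users [_ {:keys [server db]}]
  (mount server "/users" :get
         (fn [request] {:status 200 :body (str "users via " db)})))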

Nice question to contribute!

Cheers,
John


Oh of course, thanks. (I only recently grokked that vars implement IFn and resolve to the current function value when invoked).

This is a neat idea. I’ve done something similar where I (defmethod my.route-table/route-data ::the-reitit-route-key) and use a custom handler for reitit.core/expand to attach the handler as route data.

An advantage to the multimethod is that we can use the same mechanism to attach different data to a route key on the frontend and backend. On the frontend (for some of them): controllers, re-frame root components. On the backend: a ring handler for when someone lands on that route directly from a URL, so the app starts up properly. If the front-end route lacks data, the front-end router knows to navigate with a page change instead of using pushstate.

(It does mean my.route-table is a leaf namespace imported everywhere, but in practice this has been OK).
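
Roughly, the shape is something like this (illustrative names; the real my.route-table and :expand wiring differ a bit):

(require '[reitit.core :as r])

;; in my.route-table, shared by frontend and backend
(defmulti route-data
  "Extra route data keyed by route name."
  identity)

(defmethod route-data :default [_] nil)

;; in a backend namespace
(defmethod route-data ::users [_]
  {:get {:handler (fn [request] {:status 200 :body "users"})}})

;; build the router, merging the multimethod's data into each route
(def router
  (r/router
   [["/users" ::users]]
   {:expand (fn [data opts]
              (merge (r/expand data opts)
                     (route-data data)))}))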

I have tried the every-handler-a-component pattern.

(I like the theory, since it keeps me from having to invent a second injection system. I’m realizing that’s really the slightly goofy question I’ve implicitly asked: when I invent a second system of injecting dependencies over and above just using component/integrant/clip, which one is better… adding to request or partialing the handlers?)

I find the major downside is that the system config map, as I’ve done it, gets massive and hard to scan.

It’s a bit less of a problem with clip, but with integrant the number of (ig/init-key ,,,) calls and system.edn just feels overwhelming to me and I lose the forest for the trees.

I think I saw somewhere that Integrant in the large can involve composing multiple subsystems, so maybe the problem is that I set up a single-level table instead of a two-level tree. Am I right about that? If so, I’d love to be pointed to good examples.
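
The simplest version of that I can picture is just building the config out of smaller maps and merging them before init (the keys here are illustrative), but I’d still love to see how people structure it for real:

(require '[integrant.core :as ig])

(def db-config
  {::db {:jdbc-url "jdbc:postgresql://localhost/app"}})

(def web-config
  {::server {:port 3000
             :db   (ig/ref ::db)}})

(def system
  (ig/init (merge db-config web-config)))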
