Clojurians-log: working on a relaunch + cloud sponsor Exoscale

As many of you have noticed, clojurians-log has been having some issues. The server went down about a month ago, and it took quite a while before I realized (thanks @dustingetz) and tried to get it going again.

It’s up-ish right now, but the server I provisioned turned out to be too small memory-wise, so whenever I tried to import the history it would OOM and the app would restart. After spending a day that I had really hoped to spend on client work, I threw in the towel and decided to take a step back.

@martinklepsch, some other people, and I have been having conversations lately about how in open source you constantly need to bring new people in, because people eventually move on, so you have to keep training your successors. I mentioned on Clojurians that I could use some help with the ops, and two people came forward.

Around the same time I reached out to Exoscale to see if they would be interested in sponsoring us by providing some instances to run ClojureVerse and clojurians-log. Exoscale is a Swiss cloud provider that uses a lot of Clojure and ClojureScript internally, and they’ve been very supportive of various Clojure community events in the past.

So far Lambda Island (i.e. me) has been picking up the tab for these things, which is fine: I consider it part of Lambda Island’s mission to support and help develop the Clojure community. On the other hand, Lambda Island doesn’t make that much either, so getting a bigger player involved who’s closer to the source seemed like a good idea.

Long story short, Exoscale has been very receptive, we’ve received a pile of credits to start with, and we’ll be relaunching clojurians-log on their platform. Once that’s done we’ll also look at migrating ClojureVerse itself there.

I had a call with @lispyclouds and @victorb last weekend. They are both much more knowledgeable about devops than I am, and together we’re going to do things a bit more thoroughly this time, making sure everything is properly automated and that we have things like metrics and downtime alerts.

This might take some time, but that’s just how it is. Getting it up and running as quickly as possible so I could get on with other stuff is how we got into this mess. On the plus side, we do already have Ansible playbooks for a lot of this stuff, so maybe it won’t be too bad.
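To give a rough idea of what that automation looks like, here’s a minimal sketch of the kind of Ansible playbook involved. This is purely illustrative: the host group, user, and service names are made up, and the real playbooks may be structured quite differently.

```yaml
# Hypothetical sketch; the actual clojurians-log playbooks may differ.
- name: Provision clojurians-log app server
  hosts: clojurians_log        # assumed inventory group name
  become: true
  tasks:
    - name: Install a JRE for the app
      apt:
        name: openjdk-11-jre-headless
        state: present
        update_cache: true

    - name: Create a system user to run the app
      user:
        name: clojurians-log
        system: true

    - name: Install systemd unit for the app
      template:
        src: clojurians-log.service.j2   # hypothetical template
        dest: /etc/systemd/system/clojurians-log.service
      notify: restart clojurians-log

  handlers:
    - name: restart clojurians-log
      systemd:
        name: clojurians-log
        state: restarted
        daemon_reload: true
```

The point of having this in version-controlled playbooks is that rebuilding the server on a new (and bigger) instance becomes a matter of re-running them, rather than reconstructing a hand-configured box from memory.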

I am sorry to say we did lose some logs in the process, so there will be a gap from 2019-03-06 to 2019-03-26. Maybe there’s a way to backfill this later from Zulip; we’ll see. In any case, since 2019-03-26 we’re logging again, so new messages will eventually show up.

I’m very sorry for the inconvenience, please bear with us.
