Arjen Wiersma

A blog on Emacs, self-hosting, Clojure and other nerdy things

How Vibe Coding Fails

{{< admonition type="tip" title="Up to now" >}} The video I am commenting on below is part of a series called Vibe-coding in het onderwijs (Vibe coding in education). So far, the series has been excellent! It shows teachers how they can create small tools for their classes using AI tools such as ChatGPT and bolt.new. The projects featured had very little actual logic or complexity, and the use of AI was spot-on! {{< /admonition >}}

Now, take a look at the following video. For those who don't speak Dutch: Tom is using bolt.new to create an AI chatbot that simulates a difficult HR conversation. How this relates to education isn't relevant here; the point is that he wants to demonstrate the use of a model with a frontend.

{{< rawhtml >}} {{< /rawhtml >}}

The video goes well until around the 11:20 mark. Bolt has created a frontend for him, and then he wants to safely store the API key for Mistral. Bolt suggests using Supabase with edge functions, but Tom has heard that using .env files is a safe way to store keys.

Tom isn't wrong in general, but the technology in use can't keep .env files secret: a purely client-side application has no server to hold the key, so anything it needs ends up in the shipped JavaScript. In this case, if he had used the edge functions, there wouldn't have been a problem. The point, however, is that the complexity of the application increases significantly once you need to explicitly secure data. This requires just enough coding knowledge to make good decisions.

Anyway, Tom asks Bolt to use a .env file, and Bolt complies: it creates a .env file and inlines its contents into the JavaScript files during the build process. He then published the application, and anyone can try it out. As a small security courtesy, I'm not linking to it here.

In the DevTools (F12 in most browsers), you can now inspect all interactions with Mistral, including the Bearer authorization with the API key.

An exposed API key
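What you see in the Network tab corresponds roughly to the following sketch. The key, model name, and payload here are made up for illustration; the point is that once the build tool inlines the .env value, the "secret" is an ordinary string in the shipped JavaScript.

```javascript
// Hypothetical sketch: a client-side chat request after the build process has
// inlined the "secret" from the .env file (illustrative value, not a real key).
const apiKey = "sk-example-not-a-real-key";

const request = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Anyone can read this header in the DevTools Network tab.
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: "mistral-small-latest",
    messages: [{ role: "user", content: "Start the HR conversation" }],
  }),
};

console.log(request.headers.Authorization);
```

The safe variant keeps the key on a server or an edge function that proxies the request, so the browser only ever talks to your own backend and never sees the key.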

{{< admonition type="danger" >}} The API key has been disabled at this time. If you find one in an application on the internet, please notify its creator of the problem instead of using their AI budget for yourself. {{< /admonition >}}

I discuss these risks in my talk “Vibe-coding your way into a security nightmare”. You can watch it below:

{{< rawhtml >}} {{< /rawhtml >}}

Will software development change? Yes, of course. Will we stop making software? No, we'll still be creating software, just not in the same way as before.

For the last few months, a lingering question in our industry has been: is there still room for developers in this AI-driven world? My answer is yes, but we won't be developing in the same way we have for the past 30 years.

My career dates back to my first professional coding job in 1996. Back then, we created software that had to be physically shipped to customers on some form of media. My most ambitious project was the {{< backlink deployment-anxiety "work I did when the Dutch ISP Freeler was created" >}}. We wrote software and then put it on a CD-ROM to ship to customers. Later, the delivery medium became the web, which transformed all our distribution challenges. Programming languages evolved too, shifting from those focused on single platforms and distribution methods to more web-friendly languages.

And now there's AI—technology that can think faster than us and has access to more accumulated knowledge. You can ask it to write an implementation of Dijkstra's Algorithm and within seconds have one in your preferred language. All you need to do is verify it works for your use case and provide guidance to integrate it into your codebase. This verification and integration process exemplifies the symbiosis I see developing between human developers and AI tools—where machines generate code rapidly, but human expertise ensures its proper implementation.

If you follow AI developments closely, you might have heard about “vibe coding,” a state in which you surrender to the rhythm of coding with AI and allow it to make all the changes you want. This term was first coined by Andrej Karpathy and has gained significant traction. I have colleagues who have fully embraced this paradigm and produced some remarkable {{< sidenote projects >}}As far as I know, none of these projects have made it to production though{{< /sidenote >}}.


But there's another side to this coin. Software development isn't just about creating software—it's primarily about understanding software. There's a clear distinction: the art of creating software—writing code—becomes valuable when you understand what you're building, and especially when you comprehend all the potential pitfalls. This generally marks the difference between junior and senior developers. Now, even someone who previously struggled to learn coding can create software with AI assistance, but they often lack the fundamental understanding of the software they've created. Without this deeper knowledge, they can't effectively maintain, debug, or enhance their code when inevitable challenges arise.

These knowledge gaps manifest in concerning ways, for example when a developer does not know that API keys should never be shared.


Or when they create products without proper security considerations, allowing users to access features in unintended ways.


For me, this paints a clear picture: AI can create beautiful and complex things, but it lacks the insights into software development needed to create something of sufficient quality. This is where current software developers come in—we provide the guidance and expertise to build good software.

I believe the role of software developers going forward will be to collaborate with our new AI tools to build better software. As Adarsh Gupta says, “Even if you built 100 projects and add them to your resume, your resume of 'built 100 vibe apps' means nothing if you can't understand the fundamentals.” While the creation process might become accessible to many more people, the actual process of making software work correctly will require even more specialized knowledge.


I am leaving NOVI. Yes, I know, it is sad news. For almost 6 years I have been building and maintaining an organisation that provides the best cybersecurity and software development (Bachelor) education in The Netherlands. In that time I have done amazing things:

  • Created a short course format for people who want to switch careers. By my back-of-the-napkin calculations, over 2,500 students have passed through one of the programs.
  • Led a team of quality assurance specialists, educational developers, EduTech developers and teachers to build an awesome EduTech tool and provide top-notch education.
  • Started and hosted the {{< backlink "resigning-as-htb-ambassador" "Hack The Box NL meetups" >}} for 4 years.
  • Became part of the management team and helped the organisation through an M&A process.

It has been a wild ride, but like all things that begin, it must end.

I was listening to the Application Security podcast with Jim Routh, in which he talks about the 15-year cycles of his career. At first it did not make much sense to me, but after reflecting on that statement for a while I found that I have a 10-year cycle of my own.

My career has moved in blocks of roughly 10 years in which my interests change and re-align.

The first cycle was Software Development. I started as a software engineer (programming professionally) in 1996, working in the Dutch industry and then at Personify in San Francisco. When I moved back to The Netherlands I worked at Tiscali, where I transitioned from software development into architecture.

Cycle two was all about architecture. From Tiscali I moved on to eBuddy in 2006 as the lead architect for our chat platform. It was a beautiful combination of coding and designing, and I look back on it with great love. From the insane time at an internet startup I moved into an architecture role at Infomedics, where I combined my coding and architecture skills to build out an amazing platform that serves millions of people on a daily basis. During my time at Infomedics I achieved a Bachelor's degree in ICT and started making cybersecurity a more prominent part of my career, from obtaining certifications such as CEH and OSCP to managing ISO 27001 and ISAE 3402 certifications for our tech teams.

The third cycle is education. From 2016 onward I have worked in education. First as a side project: outside my work hours I gave classes at the Hogeschool van Amsterdam in Infrastructure Security, Forensics and Software Security, and guided students through their projects and final exams. I even created a completely new Associate Degree in Cybersecurity. In 2019 I made a full switch to education by joining NOVI. During my time at NOVI I attained my {{< backlink "master-of-science" "Master's Degree" >}} and transitioned into a more strategic role, where I managed a large part of the organisation next to giving classes and building curricula.

And now my fourth cycle has appeared. Each cycle up to now has brought me amazing challenges, wonderful people and a lot of knowledge, so this is an exciting time to look forward to.

#career

When I tell people that I like to code in {{< backlink "clojure" "Clojure" >}} the common response is “wut?”. Clojure is not known as a language in which you create big systems, but as all Clojure people know, that reputation is undeserved. There are many systems written in Clojure. Let me show you some that are very actively maintained.

First there is Lipas, a Finnish platform that shows you information about sports facilities. I use the structure and techniques in this code base as a reference implementation for my own ClojureScript + Clojure systems. A screenshot of the application is shown here:

Lipas

Next, there is Metabase, a business intelligence platform. The gif below shows some of its features.

Metabase

There is a great talk from Conj 2024 about supporting 50,000 users on Metabase. You can watch it over on YouTube.

Finally, also found on the Conj 2024 streams, there is the Cisco Threat Intelligence API. This is a full threat intelligence service and data model built using Clojure. Link to the repository. The talk about the project can be seen on YouTube.

There are plenty of other projects using Clojure. If you know of more that I should add to my list, do let me know!

#clojure #web #programming

Observability in cloud-native applications is crucial for managing complex systems and ensuring reliability (Chakraborty & Kundan, 2021; Kosińska et al., 2023). It enables continuous generation of actionable insights based on system signals, helping teams deliver excellent customer experiences despite underlying complexities (Hausenblas, 2023; Chakraborty & Kundan, 2021). In essence, adding proper observability to your system allows you to find and diagnose issues without having to dig through tons of unstructured log files.

The running project

In {{< backlink "20250107-clojure-reitit-server" "my previous post on reitit" >}} we built a simple endpoint using {{< backlink "clojure" "Clojure" >}} and reitit. The complete code for the small project was:

(ns core
  (:require
   [reitit.ring :as ring]
   [ring.adapter.jetty :as jetty]))

(defn handler [request]
  {:status 200
   :body (str "Hello world!")})

(def router (ring/router
             ["/hello" {:get #'handler}]))

(def app (ring/ring-handler router
                            (ring/create-default-handler)))

Nice and easy, eh? That simplicity is what I truly love about {{< backlink "clojure" "Clojure" >}}. That, and the fact that there is awesome interoperability with the Java ecosystem of libraries.

Adding observability

In {{< backlink "clojure" "Clojure" >}} you can add observability through the wonderful clj-otel library by Steffan Westcott. It implements the OpenTelemetry standard, which makes it integrate nicely with products such as Honeycomb.io and Jaeger.

The library has a great tutorial that you can follow here. Applying the knowledge from the tutorial to our reitit application is trivial. To show the power of observability, a JDBC connection will be added to the application. We won't mess with any tables or such; we'll just open a connection to a Postgres database and query a value from it.

First, let's look at the updated deps.edn file.

{:deps {ring/ring-jetty-adapter {:mvn/version "1.13.0"}
        metosin/reitit {:mvn/version "0.7.2"}

        ;; Observability
        com.github.steffan-westcott/clj-otel-api {:mvn/version "0.2.7"}
        
        ;; Database access
        com.github.seancorfield/next.jdbc {:mvn/version "1.3.981"}
        org.postgresql/postgresql {:mvn/version "42.7.4"}
        com.zaxxer/HikariCP {:mvn/version "6.2.1"}}

 :aliases {:otel {:jvm-opts ["-javaagent:opentelemetry-javaagent.jar"
                             "-Dotel.resource.attributes=service.name=blog-service"
                             "-Dotel.metrics.exporter=none"
                             ]}}}

You will notice some new dependencies, as well as an alias you can use to start the REPL with. If you, like me, use Emacs, you can codify this in a .dir-locals.el file for your project.

((nil . ((cider-clojure-cli-aliases . ":otel"))))

Now, whenever CIDER creates a new REPL it will use the otel alias as well.

The agent listed as javaagent can be downloaded from the OpenTelemetry Java Instrumentation page. It immediately brings a slew of default instrumentations into the project. Give it a try with the starter project; you will notice that all Jetty requests show up in your Jaeger instance (you did look at the tutorial, right?).

Finally, here is the updated project for you to play with.

(ns core
  (:require
   [next.jdbc :as jdbc]
   [reitit.ring :as ring]
   [ring.adapter.jetty :as jetty]
   [ring.util.response :as response]
   [steffan-westcott.clj-otel.api.trace.http :as trace-http]
   [steffan-westcott.clj-otel.api.trace.span :as span]))

(def counter (atom 0))

;; add your database configuration here
(def db {:jdbcUrl "jdbc:postgresql://localhost:5432/db-name?user=db-user&password=db-pass"})

(def ds (jdbc/get-datasource db))

(defn wrap-db
  [handler db]
  (fn [req]
    (handler (assoc req :db db))))

(defn wrap-exception [handler]
  (fn [request]
    (try
      (handler request)
      (catch Throwable e
        (span/add-exception! e {:escaping? false})
        (let [resp (response/response (ex-message e))]
          (response/status resp 500))))))

(defn db->value [db]
  (let [current @counter]
    (span/with-span! "Incrementing counter"
      (span/add-span-data! {:attributes {:service.counter/count current}})
      (swap! counter inc))
    (:value (first (jdbc/execute! db [(str "select " current " as value")])))))

(defn handler [request]
  (let [db (:db request)
        dbval (db->value db)]
    (span/add-span-data! {:attributes {:service.counter/count dbval}})
    {:status 200
     :body (str "Hello world: " dbval)}))

(def router (ring/router
             ["/hello" {:get (-> #'handler
                                 (wrap-db ds)
                                 wrap-exception
                                 trace-http/wrap-server-span)}]))
                                 
(def app (ring/ring-handler router
                            (ring/create-default-handler)))

(def server (jetty/run-jetty #'app {:port 3000, :join? false}))
;; (.stop server)

There are several interesting bits to be aware of. First, the handler is wrapped in several middleware functions: one to pass in the database connection, one to wrap exceptions (as in the tutorial), and finally one to wrap the server request in a span. The db->value function creates its own span to keep track of its activity.

After making several requests you will see that Jaeger contains the same number of traces. A normal trace will show 3 bars, each of which you can expand and explore.

A trace in Jaeger

If you take the database offline (that is why we used Postgres), you will notice that the exception is neatly logged.

Exceptions in Jaeger

Observability gives you great insight into how your application is running in production, and with the clj-otel library it is a breeze to add to your own application.

#clojure #web #observability #programming

{{< admonition type="warning" >}} Currently, only use Postgres 14 on the Digital Ocean application platform for development databases. {{< /admonition >}}

While following the book {{< backlink "zero2prod" "Zero2Prod" >}} you will learn how to deploy a {{< backlink "rust" "Rust" >}} application to Digital Ocean through a continuous deployment pipeline. This is hardly anything new for me (I even teach a course in DevOps), but to not stray from the path of the book I followed its instructions.

The spec for Digital Ocean looks like this (abbreviated for your reading pleasure):

name: zero2prod
region: fra
services:
    - name: zero2prod
      dockerfile_path: Dockerfile
      source_dir: .
      github:
        branch: main
        deploy_on_push: true
        repo: credmp/zero2prod
      health_check:
        http_path: /health_check
      http_port: 8000
      instance_count: 1
      instance_size_slug: basic-xxs
      routes:
      - path: /
databases:
  - name: newsletter
    engine: PG
    db_name: newsletter
    db_user: newsletter
    num_nodes: 1
    size: db-s-dev-database
    version: "16"

Actually, the book says to use version 12, but that version is no longer available. The latest supported version is 16, and I chose that. There is one small hiccup here: since Postgres 15 (released in 2022) there has been a breaking change in how databases are created. Notably, a best practice that followed a 2018 CVE (CVE-2018-1058) was made the default: ordinary users no longer have creation rights, and as an administrator you have to explicitly grant rights to your users.
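For illustration, this is the kind of statement an administrator would run to restore the pre-15 behaviour for the application user. The user and schema names follow the spec above; treat the exact commands as a sketch, not Digital Ocean's documented fix.

```sql
-- Run as an administrative user. Since Postgres 15, ordinary users no longer
-- have CREATE rights on the public schema by default (the CVE-2018-1058 hardening).
GRANT CREATE ON SCHEMA public TO newsletter;

-- Alternatively, give the application user a schema of its own:
CREATE SCHEMA IF NOT EXISTS newsletter AUTHORIZATION newsletter;
```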

Although this has been best practice since 2018, it is the change in Postgres 15 that actually confronts users with it. To my surprise, Digital Ocean seemed unaware of this change until now.

The development database created in the application platform using the spec above creates a user (newsletter) with the following rights:

Role name | Attributes
------------------+------------------------------------------------------------
_doadmin_managed | Cannot login
_doadmin_monitor |
_dodb | Superuser, Replication
doadmin | Create role, Create DB, Replication, Bypass RLS
doadmin_group | Cannot login
newsletter |
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS

You read that correctly: none. At the moment you can still create a Postgres 14 database with Digital Ocean, which does grant rights to the user, and then upgrade it to the latest version while keeping those rights. But that is a workaround.

After determining the cause of the error I decided to email Digital Ocean support about the issue. Timeline:

  • December 30th: the answer is that I am using a development database; if I just upgraded to a managed cluster I would have full access to the database. I politely responded, explaining the problem again.
  • December 30th: a quick response from the same agent, saying that based on the information provided I am trying to do things with the doadmin user, again not reading the actual question (or not understanding the problem). I answered with a full log of the creation of the database and the rights given to the users.
  • December 31st: another agent responds, telling me that my spec will create a database and that I can connect using the data from the control panel. This is exactly the information I already sent; the agent did not actually look at the problem (no rights). I once again explained the issue.
  • December 31st: another agent answers the ticket, asking how I create the database. I once again answer with the spec (already in the ticket twice by now) and the steps I use (doctl from the command line).
  • December 31st: another agent responds with some general information about creating databases, again not actually reading or understanding the issue.
  • January 1st: a standard follow-up email asking if I am happy with the service. I respond that the problem is not solved, and that given the interactions so far I fear it will not be.
  • January 2nd: another agent responds that they are discussing it internally.
  • January 2nd: a senior agent called Nate appears in the thread, actually asking questions that explore the issue. I promptly respond.
  • January 2nd: Nate acknowledges the issue and Digital Ocean starts working on a fix for their database provisioning. He provides the workaround of first using version 13 or 14 and then upgrading.
  • January 9th: still working on it.
  • January 15th: still working on it.
  • January 21st: another update that the provisioning process is quite complex and they are still working on a solution.

The process of getting something so trivial through the support channel is quite painful. I do realize I do not have paid support, and I am willing to wait it out because of that, but the first five interactions did nothing but destroy my confidence in the Digital Ocean support system. Luckily, Nate picked up the ticket.

When a solution eventually comes around I will update this post.

#development #database #programming

In July 2023, I installed NixOS as my daily operating system. NixOS is a Linux distribution that emphasizes a declarative approach to system management. This means you define your desired operating system configuration in a file (e.g., KDE with Emacs 30 and Firefox), and the Nix package manager uses that file to create your OS. Every change generates a new version, allowing you to revert to a previous version if anything goes wrong.

Prior to NixOS, I used various Ubuntu and Debian-based distributions, with Pop!_OS being my favorite. I often encountered package conflicts or misconfigurations during updates. NixOS has resolved these issues for me.

Since switching in 2023, I've experienced zero problems with upgrades or stability. While experimenting with different desktop environments posed some challenges, the ability to reboot into a prior OS version (or “generation”) has provided a safety net I didn't realize I needed.

My NixOS configuration primarily revolves around three files: /etc/nixos/configuration.nix, created during installation and tailored to my chosen desktop (currently KDE for my work laptop); /etc/nixos/shared.nix, which contains shared services and settings for my laptop, desktop, and work laptop, encompassing everything from Bluetooth to sound configurations. This setup ensures I have a consistent and functional desktop environment across all my systems.

The last file I manage is ~/.config/home-manager/home.nix, which contains all the programs I want, such as Emacs, wl-clipboard, and Firefox, along with user services like the Emacs daemon. Essentially, I only need to edit home.nix as a user and run home-manager switch to deploy new programs on my system.
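As an illustration, a minimal home.nix along these lines looks like the sketch below. The attribute names are standard Home Manager options, but treat the snippet as an approximation of my setup rather than my literal config.

```nix
{ pkgs, ... }:
{
  # Programs I want available in my user profile.
  home.packages = with pkgs; [
    emacs
    wl-clipboard
    firefox
  ];

  # User services, such as the Emacs daemon.
  services.emacs.enable = true;
}
```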

During the biannual update cycle in May and November, I update the nixos and home-manager channels and run sudo nixos-rebuild switch --upgrade for a system upgrade. While there can be occasional breaking changes, Nix alerts me to these. I can easily run upgrades before important meetings, confident it will work smoothly, and if issues arise, I can simply reboot into a previous generation.
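The upgrade itself boils down to a few commands; this is a sketch assuming the standard nixos and home-manager channels are configured.

```shell
# Refresh the channels (system-wide and for the current user).
sudo nix-channel --update
nix-channel --update

# Rebuild the system against the updated channel and switch to it.
sudo nixos-rebuild switch --upgrade

# Rebuild the user environment from home.nix.
home-manager switch
```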

It's a delightful experience! Although there's a learning curve for newcomers, I highly recommend investing time in a VM to grasp the basics; it's well worth it over time.

In my home.nix, I include only the essential programs I use regularly, like Emacs. For my development projects, I rely on nix-direnv, which manages project-specific dependencies, such as compilers. Each {{< backlink “clojure” “Clojure”>}} project, for instance, contains a flake.nix file in the root that specifies its dependencies.

{
  description = "A basic flake with a shell";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";

  outputs = { nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in
      {
        devShells.default = pkgs.mkShell {
          packages = [ 
            pkgs.clojure
            pkgs.clojure-lsp
            pkgs.clj-kondo
            pkgs.cljfmt
            pkgs.nodejs
            pkgs.jdk23
            pkgs.unzip
          ];
        };
      });
}

The packages list above establishes a complete development environment. When I share a project with other NixOS (nix-direnv) users, it works seamlessly for them, as it has no external dependencies. For my {{< backlink "rust" "Rust" >}} projects, like hed, I use a similar flake.nix specific to that project. Moving a project to a new machine and entering the directory automatically builds a complete development environment via nix-direnv, allowing me to dive right in. 🏝️
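The glue between a project's flake.nix and the shell is a one-line .envrc file in the project root, which nix-direnv picks up when you enter the directory:

```shell
# .envrc — tells nix-direnv to load the dev shell from ./flake.nix
use flake
```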

#linux #nixos #programming #operatingSystems

In {{< backlink "a-new-theme" "my previous post" >}} I highlighted that I have set myself the goal of creating a self-hosted comic book collection tool. Before that, in {{< backlink "choose-your-tools" "a post about tooling" >}}, I reiterated my ❤️ for {{< backlink "clojure" "Clojure" >}} as a language. So this is the start of a series of articles detailing how development is going, as well as an introduction to the various parts of the tech stack.

Clojure is special to me in that there are hardly any big frameworks in its ecosystem. Clojure is more like Lego: there are countless building blocks of various shapes and sizes, and it is up to you as the developer to stick the blocks together into something useful. You might guess that I also ❤️ Lego.

{{< admonition type="tip" >}} On YouTube you will find various series that detail the creation of Clojure apps. Check out:

If you would like to be added to this list, send me a message: @credmp@fosstodon.org {{< /admonition >}}

So, today I am starting with the first component of my tech stack: Metosin's Reitit.

What is reitit?

Metosin's Reitit is a highly performant and extensible routing library for Clojure and ClojureScript applications. It provides a declarative way to define routes for web servers. Reitit integrates seamlessly with Ring, enabling middleware and handler chaining, and offers robust features like path parameters, route coercion, and schema validation.

It is easy to get started with, but is flexible enough to provide everything we need in any type of API. In this post I am going to show you the essentials to get a workflow up and running.

{{< admonition type="tip" >}} The reitit documentation is extensive and very valuable; read it here. {{< /admonition >}}

A very simple API

There are many ways to start building an API, and pretty much all of them are fine. I like to start from the handler and then work my way down to the HTTP server.

A handler

A handler is the code that, well, handles the request. Let's create a Hello World handler; its only task is to return a map with a :status key and a :body key.

The :status represents the HTTP status code that should be returned, in this case 200 – all is good. The :body will be a string for now. In a later post it will become JSON, but to get started a string is fine.

(defn handler [request]
  (println "Handling" request)
  {:status 200
   :body "hello world!"})

That was quite easy, right? The handler is a plain function, so it can be called in the REPL. As you would expect, it returns a map with the data.

(handler {})
;; => {:status 200, :body "hello world!"}

In the application the handler has to be connected to a URL endpoint, a so-called route.

The router

The router connects routes to handlers. Routes are defined using vectors ([]). The handler defined earlier is a greeting, so an endpoint for such a thing might be /hello (or /greet, but it is always /greet...). The endpoint becomes a route when it is combined with a method to reach it.

In HTTP there are several methods: POST, GET, PUT, DELETE, and a bunch more. These methods are how HTTP tells the server to create, read, update or delete something.

In this case the handler is only asked to return some information, so a GET method is the right choice here.

(ns blogpost
  (:require
   ;; add these
   [reitit.ring :as ring]
   [reitit.core :as r]))
   
(def router (ring/router
             ["/hello" {:get #'handler}]
             ))

{{< admonition type="note" >}} I am using #'handler here, which is the same as (var handler), to refer to the var named handler. It references the var itself instead of its value.

During development this means that the var's value can be updated and the result will immediately be available in the web server, with no need to restart the server. This helps greatly in the development experience. {{< /admonition >}}
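A tiny REPL experiment (with hypothetical names, just for illustration) shows the difference between capturing a function's value and referencing its var:

```clojure
(defn greet [] "hi")

(def by-value greet)   ;; captures the current function value
(def by-var   #'greet) ;; references the var itself

;; Redefine the function, as you would during development:
(defn greet [] "hello")

(by-value) ;; still returns "hi" - the old value was captured
(by-var)   ;; returns "hello" - the var lookup sees the new definition
```

This is exactly why passing #'handler to the router keeps the running server in sync with your REPL edits.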

With the router created, it can be queried to ensure everything is as expected. This is a good way to check what kind of middleware or interceptors are applied to the routes. Currently there is none of that magic going on, but later on it might be necessary to confirm that the configuration is correct.

An interesting fact: when a route is created for a GET, reitit will also create an OPTIONS route. This satisfies browsers and frontend tooling that request some metadata (options) before calling a potentially time-consuming method.

;; return all routes in the router
(r/routes router)
;; => [["/hello" {:get {:handler #'core/handler}}]]

;; retrieve the path within the router
(r/match-by-path router "/hello")
;; => {:template "/hello",
;;     :data {:get {:handler #'core/handler}},
;;     :result
;;     {:get
;;      {:data {:handler #'core/handler},
;;       :handler #'core/handler,
;;       :path "/hello",
;;       :method :get,
;;       :middleware []},
;;      :head nil,
;;      :post nil,
;;      :put nil,
;;      :delete nil,
;;      :connect nil,
;;      :options
;;      {:data
;;       {:no-doc true,
;;        :handler #function[reitit.ring/fn--14482/fn--14491]},
;;       :handler #function[reitit.ring/fn--14482/fn--14491],
;;       :path "/hello",
;;       :method :options,
;;       :middleware []},
;;      :trace nil,
;;      :patch nil},
;;     :path-params {},
;;     :path "/hello"}

With a router defined, the ring handler can be constructed. It is confusing that there are multiple handlers now, so let's refer to the ring handler as the app (or application handler): a fully wired-up application that can process requests.

The application handler

Constructing the app makes it possible to take a request map (the thing the webserver receives from a client) and route it to the handler. The handler then processes the request and returns a result, which the app returns to the client.

For now the ring-handler can be constructed with the router that was created earlier and the ring/create-default-handler. The default handler ensures more correct error responses are created. It differentiates :not-found (no route matched), :method-not-allowed (no method matched) and :not-acceptable (handler returned nil).

(def app 
  (ring/ring-handler 
    router 
    (ring/create-default-handler)))

The ring/ring-handler returns a function. That function can be called with a request map to test it out. Passing a request to the app for an endpoint that does not exist should return a 404, HTTP's way of saying “I have no idea what you want from me”.

(app {:request-method :get, :uri "/clojure"})
;; => {:status 404, :body "", :headers {}}

But calling the route that was defined earlier should yield a very welcoming message.

(app {:request-method :get, :uri "/hello"})
;; => {:status 200, :body "hello world!"}

It works as expected! The final step is to actually connect it to a webserver.

Making it available as a service

The Jetty server is a battle-tested HTTP server, and it is very easy to use through the ring adapter. By calling run-jetty and passing in our app (again as a var reference for easy development), the endpoint finally becomes available online (on our system).

There are two important parameters passed to Jetty: :port and :join?. The port tells Jetty on which port the server should bind; anything above 1024 is good here.

Setting :join? to false tells Jetty not to block the calling thread, so the REPL can keep accepting commands. If it were not passed, the REPL would have to be restarted to stop the server. The result of run-jetty is stored in server.

;; add a require
[ring.adapter.jetty :as jetty]

(def server 
  (jetty/run-jetty #'app 
                   {:port 3000, :join? false}))

Using a tool such as curl, it is now possible to query the API. You can also use a browser, of course!

$ curl -v localhost:3000/hello
* Host localhost:3000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:3000...
* Connected to localhost (::1) port 3000
> GET /hello HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Date: Tue, 07 Jan 2025 21:26:36 GMT
< Transfer-Encoding: chunked
< Server: Jetty(11.0.24)
<
* Connection #0 to host localhost left intact
hello world!%

From the result (which is verbose due to -v) it is clear that the Jetty server is responding (note the Jetty(11.0.24) line in the headers). Also, there is the very welcoming hello world message at the bottom.

In the REPL it is possible to make changes to the handler. After re-evaluating it, the API should immediately return the updated message.
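
For example, redefining the handler with a new body (a sketch; handler here stands in for however the original handler was defined) is picked up without a restart, because the router holds a var reference to it:

```clojure
;; redefine the handler and evaluate it in the REPL
(defn handler [_request]
  {:status 200, :body "hello again, world!"})

;; the running app sees the new definition immediately
(app {:request-method :get, :uri "/hello"})
;; => {:status 200, :body "hello again, world!"}
```

This works because the route was registered with #'handler (the var) rather than the function value, so every request looks up the current definition.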

To stop the webserver, either close the REPL or call .stop on the server var.

(.stop server)

This is a first small step toward a new API. Reitit has much more to offer; I recommend checking out the docs and the examples.

#clojure #web #programming

So, a new year, a new theme! I switched my blog to the Today I Learned theme. It has a great feature where it also maintains a collection of notes and shows a graph of related notes, which is very similar to how I use org-roam.

I will not be transferring all my notes over, but I thought it would be very nice to share some of them with you. This year I am focusing on {{< backlink “choose-your-tools” “Clojure”>}} and {{< backlink “rust” “Rust”>}}, and as a result I will be posting notes on the new things I learn.

I set myself the goal of creating a “self-hosted comic book collection tool”. It will be a nice opportunity to apply the insights from {{< backlink “zero2prod” “Zero 2 Production”>}}. My blog will serve as a development log along the way.

#blog

{{< admonition type=“note” >}} Originally posted on 2024-09-30 (Monday). It was updated in January of 2025. {{< /admonition >}}

I ❤️ to build software. Sadly, I do not have a lot of time next to my daily work to spend on side projects, so I have to be disciplined about where I invest it. I wish I could spend endless time exploring new technologies, but I simply do not have it. In writing, this discipline is sometimes called “killing your darlings”.

Sir Arthur Quiller-Couch wrote in his 1916 book On the Art of Writing: “If you here require a practical rule of me, I will present you with this: ‘Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.’”

Luckily for me, I just finished my latest round of education, so I now do have time to spend on building some of the ideas that have been floating around in my head for the last three years. And I did start writing: some in {{< backlink “rust” “Rust”>}}, some in Go, and some in Clojure.

Like many programmers I love to explore new languages; I think you always learn something from them. Clojure, for instance, really taught me about functional programming when all I knew were imperative languages. In the end, after a summer of not working on my studies, I have zero projects completed, but I do have four versions of them.

So, I decided to step back and evaluate. I killed my darlings of different programming languages and focused solely on Clojure again. Development in Clojure conforms to Rule 6 for me. While working out a problem I love the interactive development style. I actually like the parentheses, I know... weirdo me 🤗.

Update 2025: during the holiday season I got the book Zero 2 Prod, which is about making a Rust project production-worthy, experience I already have in Java and Clojure. This sparked Rule 6 for me for the {{< backlink “rust” “Rust”>}} language again. The experience following the book has been quite smooth, but the real proof is, of course, creating something yourself. I know, I am like a {{< sidenote “puppy” >}}I love puppies!{{< /sidenote >}} puppy chasing its tail... Let's see where this goes.

From reading the book I can already see lots of improvements for my Hed tool.

You might even remember that I used to do a live-streaming series in Clojure. I still don't have a lot of time to continue that one, but who knows... I might drop some videos later again.

Since the summer I have been somewhat involved in Biff, a rapid-prototyping web framework in Clojure. It provides a set of sensible defaults to get started quickly, while letting you easily change all of its bits. I have been building my latest project on top of it, and with a bit of luck it might even make it to production.

#clojure #development #rust #emacs