Arjen Wiersma

A blog on Emacs, self-hosting, programming and other nerdy things

{{< admonition type="tip" >}} This article was first published as part of a Substack experiment; I have reproduced it here. {{< /admonition >}}

Hey everyone,

Let's be honest. This new wave of generative AI is moving incredibly fast. One minute we're asking it to write a poem, and the next, AI “agents” are being built to act on their own.

I've been working in tech for a long time, but some of the security risks I'm seeing are… different. They're strange, new, and frankly, a little scary.

Your old cybersecurity playbook? It’s not going to cut it. Trying to use old security methods on these new AIs is like trying to put a bike lock on a cloud. The problems are just in a different dimension.

What is not from another dimension? Receiving this entire 12-post series in your mailbox…

So, I decided to put together a guide. For the next three weeks, I'm going to walk you through the security risks of this new AI world. I'll look at the real threats and, more importantly, how to deal with them. On Monday, Wednesday, Friday and Saturday a bite-sized newsletter drops that gets you up to speed on a single topic. A quick read and plenty of discussion at the coffee machine (or in Slack if you are working from home)!

Here’s what you can expect in the coming 12(!) posts:

  • Week 1: Getting a Handle on the Basics. Why is securing an AI so different from a regular app? We'll jump right into the most common weak spots, like tricking an AI into doing something it shouldn't (Prompt Injection) or making it spill secrets it's supposed to keep. Then we'll talk about AI agents: what happens when AI starts doing things on its own?
  • Week 2: When Things Get Weird. This is where it gets really interesting. We'll look at what happens when AIs team up and their problems multiply. We'll cover AI Hallucinations (what happens when an AI just makes stuff up) and how that can cause a total mess. We'll also dig into scary stuff like an AI's goals being hijacked by a bad actor.
  • Week 3: Building a Defense That Actually Works. It's not all doom and gloom! We'll spend this week focused on solutions. I'll show you how to protect your data when working with AI and what "Red Teaming" an AI looks like. (Hint: it's about trying to break your own stuff to find the flaws first.) We'll also look at some cool new tools and frameworks designed to keep AI systems safe.

This series is for you if you're a developer, a security pro, or just curious about what's really going on under the hood of AI.

If you’ve been looking for a straightforward guide to the real security challenges of AI, this is it.

The first post is coming this Monday. If you know anyone who should be part of this conversation, now would be a great time to share this with them.

Software Engineering

In my feed, the opening talk by DHH at Rails World 2024 popped up, most notably due to his stance on reducing the complexity of running an online business. He promotes running your own (virtual) hardware, trimming down build pipelines and not using Platform as a Service providers (#nopaas). Watch it below.

{{< rawhtml >}} {{< /rawhtml >}}

It really interested me. I don't have a lot of time for my hobby projects, so I would like the experience to be as smooth as butter. Years ago I wrote Rails-based web applications, so the release of Rails 8 with this introduction made me curious about what Rails development is like nowadays. I spent a weekend working on a small project, and it is pretty darn good, I must say.

AI Stuff

Threats and stupidity

Tim Bray talked about AI Angst [2] and how the world seems to struggle with using AI and feels threatened by it. At the same time, we are fully in the era of AI agents, with cool projects to track their effectiveness. As it is still possible to leak private data using AI agents (EchoLeak) [25], and AI agents are wiping your computer when stuff becomes too hard [6], it seems we are still some way off from the safe application of AI agents. Most AI applications seem to be some type of "fraud" as well, such as calorie-counting apps [7]. Just because you stick AI into it doesn't make it better.

I highly recommend reading Neil Madden's review of the AI-written code in Cloudflare's new OAuth library [12]. The process they used is well documented, so we can see exactly where the AI stopped being able to generate the required code and needed human interaction. The most interesting point of this review is that Neil specializes in security, this is a security library and, shocker, the AI failed at applying security safely. Luckily the humans at Cloudflare are excellent coders and know their stuff!

There are good applications as well, of course, such as Honeycomb finding that computers can work faster than humans [16], or experienced developers using AI to do something new, such as building an iOS app [14].

Apple, in the meantime, dropped a major paper, "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" (Shojaee, Mirzadeh, Alizadeh et al., 2025) [11], which argues that current reasoning models build up their "thoughts" from patterns seen in the past and are not really reasoning. This resulted in a lot of discussion [13], but the paper seems to hold.

A new repository was launched, vibesec [23], which holds AI rules for various programming languages and models.

Closing

I really should get a better workflow going. Currently my reading goes into Zotero, and then on Sunday I categorize the items correctly. Perhaps I can make something that builds this post during the week, as I read... how do you do it?

The complete list

{{< rawhtml >}}

[1]
Adding Sign Up to the Rails 8 Authentication Generator. https://robrace.dev/blog/rails-8-authentication-sign-up/, 2024. Accessed: Jun. 13, 2025. [Online]. Available: https://robrace.dev/blog/rails-8-authentication-sign-up/
[2]
T. Bray, AI Angst. 2025.
[3]
AI Coding Agents. https://aavetis.github.io/ai-pr-watcher/. Accessed: Jun. 09, 2025. [Online]. Available: https://aavetis.github.io/ai-pr-watcher/
[4]
J. Arinze, Why Senior Developers Google Basic Syntax. https://faun.pub/why-senior-developers-google-basic-syntax-fa56445e355f, 2025. Accessed: Jun. 10, 2025. [Online]. Available: https://faun.pub/why-senior-developers-google-basic-syntax-fa56445e355f
[5]
Marco M. Beurer-Kellner, GitHub MCP Exploited: Accessing Private Repositories via MCP. https://invariantlabs.ai/blog/mcp-github-vulnerability, 2025. Accessed: Jun. 05, 2025. [Online]. Available: https://invariantlabs.ai/blog/mcp-github-vulnerability
[6]
Cursor YOLO Deleted Everything in My Computer – Bug Reports. https://forum.cursor.com/t/cursor-yolo-deleted-everything-in-my-computer/103131, 2025. Accessed: Jun. 14, 2025. [Online]. Available: https://forum.cursor.com/t/cursor-yolo-deleted-everything-in-my-computer/103131
[7]
M. Dietz, I Used AI-Powered Calorie Counting Apps, and They Were Even Worse Than I Expected. https://lifehacker.com/health/ai-powered-calorie-counting-apps-worse-than-expected, 2025. Accessed: Jun. 10, 2025. [Online]. Available: https://lifehacker.com/health/ai-powered-calorie-counting-apps-worse-than-expected
[8]
The Gentle Singularity. https://blog.samaltman.com/the-gentle-singularity. Accessed: Jun. 12, 2025. [Online]. Available: https://blog.samaltman.com/the-gentle-singularity
[9]
GitHub – Gbrayhan/Hexagonal-Architecture-Clojure: DDD Hexagonal Architecture Using Clojure. https://github.com/gbrayhan/hexagonal-architecture-clojure/tree/main. Accessed: Jun. 08, 2025. [Online]. Available: https://github.com/gbrayhan/hexagonal-architecture-clojure/tree/main
[10]
J. G. Herrero, “Localhost Tracking” Explained. It Could Cost Meta 32 Billion. https://www.zeropartydata.es/p/localhost-tracking-explained-it-could, 2025. Accessed: Jun. 11, 2025. [Online]. Available: https://www.zeropartydata.es/p/localhost-tracking-explained-it-could
[11]
P. Shojaee, I. Mirzadeh, K. Alizadeh, M. Horton, S. Bengio, and M. Farajtabar, The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. https://arxiv.org/abs/2506.06941v1, 2025. Accessed: Jun. 15, 2025. [Online]. Available: https://arxiv.org/abs/2506.06941v1
[12]
N. Madden, A Look at Cloudflare's AI-Coded OAuth Library. 2025.
[13]
G. Marcus, Seven Replies to the Viral Apple Reasoning Paper – and Why They Fall Short. 2025.
[14]
My First Attempt at iOS App Development. https://mgx.me/my-first-attempt-at-ios-app-development, 2025. Accessed: Jun. 09, 2025. [Online]. Available: https://mgx.me/my-first-attempt-at-ios-app-development
[16]
A. Parker, It’s The End Of Observability As We Know It (And I Feel Fine). 2025.
[17]
Ruby on Rails, Rails World 2024 Opening Keynote – David Heinemeier Hansson. 2024.
[18]
J. Searls, Why Agents Are Bad Pair Programmers. https://justin.searls.co/posts/why-agents-are-bad-pair-programmers/, 2025. Accessed: Jun. 10, 2025. [Online]. Available: https://justin.searls.co/posts/why-agents-are-bad-pair-programmers/
[19]
Self-Host & Tech Independence: The Joy of Building Your Own. https://www.ssp.sh/blog/self-host-self-independence/, 2025. Accessed: Jun. 08, 2025. [Online]. Available: https://www.ssp.sh/blog/self-host-self-independence/
[20]
N. Sobo, The Case for Software Craftsmanship in the Era of Vibes – Zed Blog. https://zed.dev/blog/software-craftsmanship-in-the-era-of-vibes, 2025. Accessed: Jun. 13, 2025. [Online]. Available: https://zed.dev/blog/software-craftsmanship-in-the-era-of-vibes
[21]
Software Is About Promises. https://www.bramadams.dev/software-is-about-promises/, 2025. Accessed: Jun. 10, 2025. [Online]. Available: https://www.bramadams.dev/software-is-about-promises/
[22]
N. C. Team, NIS2 Cyber | Comprehensive Guide to EU Cybersecurity Directive. https://www.nis2-cyber.com/. Accessed: Jun. 13, 2025. [Online]. Available: https://www.nis2-cyber.com/
[23]
Untamed Theory, Untamed-Theory/Vibesec. 2025.
[24]
J. Westenberg, Smart People Don’t Chase Goals; They Create Limits. https://www.joanwestenberg.com/smart-people-dont-chase-goals-they-create-limits/, 2025. Accessed: Jun. 10, 2025. [Online]. Available: https://www.joanwestenberg.com/smart-people-dont-chase-goals-they-create-limits/
[25]
S. Willison, Breaking down `EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot. https://simonwillison.net/2025/Jun/11/echoleak/. Accessed: Jun. 12, 2025. [Online]. Available: https://simonwillison.net/2025/Jun/11/echoleak/
[26]
S. Willison, Design Patterns for Securing LLM Agents against Prompt Injections. https://simonwillison.net/2025/Jun/13/prompt-injection-design-patterns/. Accessed: Jun. 13, 2025. [Online]. Available: https://simonwillison.net/2025/Jun/13/prompt-injection-design-patterns/

{{< /rawhtml >}}

Tech in general

I learned that most of the layoffs in the US are not so much about AI taking jobs. Sure, some people are no longer employed because their jobs were easily replaced by a system, but there is more than meets the eye. "The hidden time bomb in the tax code that's fueling mass tech layoffs" explores Section 174, a tax rule changed under the first Trump administration, which basically no longer allows companies to write off R&D effort in the current fiscal year.

Security in general

Some really neat attacks or attack vectors:

AI

New models of interest

General News

Lauren Weinstein reported that OpenAI was ordered to store logs of all conversations with ChatGPT, even private chats and "do not use for training" data. The original article was by Ars Technica.

Antirez wrote a nice opinion post on why they think humans are still better than AI at coding.

In the same light, Cloudflare released an OAuth library "mostly" written by AI. Max Mitchell went through the GitHub history and found that without human involvement we would not have this library. Granted, 95% of the code seems generated, but it would not work without humans.

A note by Cloudflare: "To emphasize, this is not 'vibe coded'. Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong."

As AI can digest complex papers much more easily than humans, Reuven Cohen posted that he used Perplexity AI to read a paper on secretly tracking human movement through walls using standard WiFi routers (Geng et al.). It took the AI less than an hour to implement the paper in an application.

Sonia Mishra wrote a very nice piece, "The Rise of Vibe Coding: Innovation at the Cost of Security". I highly recommend that anyone thinking about {{< backlink "vibe-coding" "vibe-coding" >}} check it out.

Security issues

  • AI-hallucinated code dependencies become new supply chain risk by Bill Toulas
  • Claude seems to have learned how to bypass restrictions set by the Cursor IDE. It was not allowed to use mv and rm, so it wrote a shell script that did the work and executed that instead.
  • VectorSmuggle :: A comprehensive proof-of-concept demonstrating vector-based data exfiltration techniques in AI/ML environments. This project illustrates potential risks in RAG systems and provides tools and concepts for defensive analysis.

Model Context Protocol

The world is going nuts about Model Context Protocol.

Attacks

A list of interesting attack vectors or stories:

Defense

A new company, Spawn Systems, announced itself by promoting its product MCP Defender, a firewall-type system to shield you from MCP abuse. There is very little information available. The GitHub history shows that the first commit was on May 28th, and the entire thing seems to be {{< backlink "vibe-coding" "vibe coded" >}}; I would not yet trust this project.

How Vibe Coding Fails

{{< admonition type="tip" title="Up to now" >}} The video I am commenting on below is part of a series called Vibe-coding in het onderwijs ("Vibe-coding in education"). So far, the series has been excellent! It shows teachers how they can create small tools for their classes using AI tools such as ChatGPT and bolt.new. The projects featured had very little actual logic or complexity, and the use of AI was spot-on! {{< /admonition >}}

Now, take a look at the following video. If you don't know any Dutch, Tom is using bolt.new to create an AI chatbot that simulates a difficult HR conversation. How this relates to education isn't relevant here; the point is that he wants to demonstrate the use of a model with a frontend.

{{< rawhtml >}} {{< /rawhtml >}}

The video goes well until around the 11:20 mark. Bolt has created a frontend for him, and then he wants to safely store the API key for Mistral. Bolt suggests using Supabase with edge functions, but Tom has heard that using .env files is a safe way to store keys.

Tom isn't wrong in general, but the technology in use can't read .env files at runtime: there is no server here to keep them secret. If he had used the edge functions, there wouldn't have been a problem. The point, however, is that the complexity of the application increases significantly once you need to explicitly secure data. This requires just enough coding knowledge to make good decisions.

Anyway, Tom asks Bolt to use a .env file, and Bolt complies. It creates a .env file and inlines its values into the JavaScript files during the build process. He published the application, and anyone can try it out. As a small security measure, I'm not linking to it here.

In the DevTools (F12 in most browsers), you can now inspect all interactions with Mistral, including the Bearer authorization with the API key.
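You don't even need Bolt to see this failure mode. The sketch below (the key and file names are made up) mimics what a client-side build does with a .env value: it becomes a string literal in the shipped bundle, and anyone with the bundle, which is every visitor, can pull it out with a simple search.

```shell
# Hypothetical reproduction: a client-side build inlines .env values.
# The key below is fake; real bundlers do the substitution for you.
echo 'MISTRAL_API_KEY=sk-demo-not-a-real-key' > .env

# Simulate the "built" bundle: the key is now a string literal in the JS.
key=$(cut -d= -f2 .env)
printf 'fetch(api, {headers: {Authorization: "Bearer %s"}});\n' "$key" > bundle.js

# Anyone who receives the bundle can extract the key:
grep -o 'sk-[a-z-]*' bundle.js
```

The only real fix is to keep the key on something server-side (like the edge functions Bolt originally suggested) and have the browser talk to that, never to Mistral directly.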

An exposed API key

{{< admonition type="danger" >}} The API key has been disabled at this time. If you find one in an application on the internet, please notify its creator of the problem instead of using their AI budget for yourself. {{< /admonition >}}

I discuss these risks in my talk “Vibe-coding your way into a security nightmare”. You can watch it below:

{{< rawhtml >}} {{< /rawhtml >}}

Will software development change? Yes, of course. Will we stop making software? No, we'll still be creating software, just not in the same way as before.

For the last few months, a lingering question in our industry has been: is there still room for developers in this AI-driven world? My answer is yes, but we won't be developing in the same way we have for the past 30 years.

My career dates back to my first professional coding job in 1996. Back then, we created software that had to be physically shipped to customers on some form of media. My most ambitious project was the {{< backlink deployment-anxiety "work I did when the Dutch ISP Freeler was created" >}}. We wrote software and then put it on a CD-ROM to ship to customers. Later, the delivery medium became the web, which transformed all our distribution challenges. Programming languages evolved too, shifting from those focused on single platforms and distribution methods to more web-friendly languages.

And now there's AI—technology that can think faster than us and has access to more accumulated knowledge. You can ask it to write an implementation of Dijkstra's Algorithm and within seconds have one in your preferred language. All you need to do is verify it works for your use case and provide guidance to integrate it into your codebase. This verification and integration process exemplifies the symbiosis I see developing between human developers and AI tools—where machines generate code rapidly, but human expertise ensures its proper implementation.

If you follow AI developments closely, you might have heard about “vibe coding,” a state in which you surrender to the rhythm of coding with AI and allow it to make all the changes you want. This term was first coined by Andrej Karpathy and has gained significant traction. I have colleagues who have fully embraced this paradigm and produced some remarkable {{< sidenote projects >}}As far as I know, none of these projects have made it to production though{{< /sidenote >}}.


But there's another side to this coin. Software development isn't just about creating software—it's primarily about understanding software. There's a clear distinction: the art of creating software—writing code—becomes valuable when you understand what you're building, and especially when you comprehend all the potential pitfalls. This generally marks the difference between junior and senior developers. Now, even someone who previously struggled to learn coding can create software with AI assistance, but they often lack the fundamental understanding of the software they've created. Without this deeper knowledge, they can't effectively maintain, debug, or enhance their code when inevitable challenges arise.

These knowledge gaps manifest in concerning ways. For example, problems frequently emerge when a developer doesn't know that API keys should never be shared.


Or when they create products without proper security considerations, allowing users to access features in unintended ways.


For me, this paints a clear picture: AI can create beautiful and complex things, but it lacks the insights into software development needed to create something of sufficient quality. This is where current software developers come in—we provide the guidance and expertise to build good software.

I believe the role of software developers going forward will be to collaborate with our new AI tools to build better software. As Adarsh Gupta says, "Even if you built 100 projects and add them to your resume, your resume of 'built 100 vibe apps' means nothing if you can't understand the fundamentals." While the creation process might become accessible to many new people, the actual process of making software work correctly will require even more specialized knowledge.


I am leaving NOVI. Yes, I know, it is sad news. For almost 6 years I have been building and maintaining an organisation that provides the best cybersecurity and software development (Bachelor) education in The Netherlands. In that time I have done amazing things:

  • I created a short course format for people who want to switch careers. By some back-of-the-napkin calculations, over 2,500 students have passed through one of the programs.
  • I led a team of quality assurance, educational development, EduTech developers and teachers to build an awesome EduTech tool and provide top-notch education.
  • I started and hosted the {{< backlink "resigning-as-htb-ambassador" "Hack The Box NL meetups" >}} for 4 years.
  • I became part of the management team and helped the organisation through an M&A process.

It has been a wild ride, but like all things that begin, it must end.

I was listening to the Application Security Podcast episode with Jim Routh, in which he talks about the 15-year cycle of his career. At first I thought it did not make much sense, but after reflecting on that statement for a while, I found that I have a 10-year cycle myself.

In my career there have been blocks of 10 years in which my interests and career change and re-align.

The first cycle was Software Development. I started as a software engineer (programming professionally) in 1996, working in the Dutch industry and then at Personify in San Francisco. When I moved back to The Netherlands I worked at Tiscali where I transitioned from software development into architecture.

Cycle two was all about architecture. From Tiscali I moved on to eBuddy in 2006 as the lead architect for our chat platform. It was a beautiful combination of coding and designing, and I look back at it with great love. From the insane time at an internet startup I moved into an architecture role at Infomedics, where I combined my coding and architecture skills to build out an amazing platform that serves millions of people on a daily basis. During my time at Infomedics I achieved a Bachelor's degree in ICT and made cybersecurity a more prominent part of my career, from obtaining certifications such as CEH and OSCP to managing ISO 27001 and ISAE 3402 certifications for our tech teams.

The third cycle is education. From 2016 onward I have worked in education. First as a hobby project: outside my work hours I gave classes at the Hogeschool van Amsterdam in Infrastructure Security, Forensics and Software Security, and guided students in their projects and final exams. I even created a completely new Associate Degree in Cybersecurity. In 2019 I made a full switch to education by joining NOVI. During my time at NOVI I attained my {{< backlink "master-of-science" "Master's Degree" >}} and transitioned into a more strategic role where I managed a large part of the organisation, next to giving classes and building curricula.

And now my fourth cycle has appeared. Each cycle up to now has brought me amazing challenges, wonderful people and a lot of knowledge. So it is an exciting time to look forward to.

#career

When I tell people that I like to code in {{< backlink "clojure" "Clojure" >}}, the common response is "wut?". Clojure is not known as a programming language in which you create big systems. As all Clojure people know, this is not true: there are many systems written in Clojure. Let me show you some that are very actively maintained.

First there is Lipas, a Finnish platform that shows you information about sports clubs. I use the structure and techniques in this code base as a reference implementation for my own ClojureScript + Clojure systems. A screenshot of the application is shown here:

Lipas

Next, there is Metabase, a business intelligence platform. The below gif shows you some of the features it has.

Metabase

There is a great talk from Conj 2024 about supporting 50,000 users on Metabase. You can watch it over on YouTube.

Finally, also found in the Conj 2024 streams, there is the Cisco Threat Intelligence API. This is a full threat intelligence service and data model built using Clojure. Link to the repository. The talk about the project can be seen on YouTube.

There are plenty of other projects using Clojure, if you know of more that I should add to my list, do let me know!

#clojure #web #programming

Observability in cloud-native applications is crucial for managing complex systems and ensuring reliability (Chakraborty & Kundan, 2021; Kosińska et al., 2023). It enables continuous generation of actionable insights based on system signals, helping teams deliver excellent customer experiences despite underlying complexities (Hausenblas, 2023; Chakraborty & Kundan, 2021). In essence, adding proper observability to your system allows you to find and diagnose issues without having to dig through tons of unstructured log files.

The running project

In {{< backlink "20250107-clojure-reitit-server" "my previous post on reitit" >}} we built a simple endpoint using {{< backlink "clojure" "Clojure" >}} and reitit. The complete code for the small project was:

(ns core
  (:require
   [reitit.ring :as ring]
   [ring.adapter.jetty :as jetty]))

(defn handler [request]
  {:status 200
   :body (str "Hello world!")})

(def router (ring/router
             ["/hello" {:get #'handler}]))

(def app (ring/ring-handler router
                            (ring/create-default-handler)))

Nice and easy, eh? That simplicity is what I truly love about {{< backlink "clojure" "Clojure" >}}. That, and the fact that there is awesome interoperability with the Java ecosystem of libraries.

Adding observability

In {{< backlink "clojure" "Clojure" >}} you can add observability through the wonderful clj-otel library by Steffan Westcott. It implements the OpenTelemetry standard, which makes it integrate nicely with products such as Honeycomb.io and Jaeger.

The library has a great tutorial that you can follow here. Applying the knowledge from that tutorial to our reitit application is trivial. To show the power of observability, a JDBC connection will be added to the application. It is not necessary to mess with any tables; we will just open a connection to a Postgres database and query a value from it.

First, let's look at the updated deps.edn file.

{:deps {ring/ring-jetty-adapter {:mvn/version "1.13.0"}
        metosin/reitit {:mvn/version "0.7.2"}

        ;; Observability
        com.github.steffan-westcott/clj-otel-api {:mvn/version "0.2.7"}
        
        ;; Database access
        com.github.seancorfield/next.jdbc {:mvn/version "1.3.981"}
        org.postgresql/postgresql {:mvn/version "42.7.4"}
        com.zaxxer/HikariCP {:mvn/version "6.2.1"}}

 :aliases {:otel {:jvm-opts ["-javaagent:opentelemetry-javaagent.jar"
                             "-Dotel.resource.attributes=service.name=blog-service"
                             "-Dotel.metrics.exporter=none"
                             ]}}}

You will notice some new dependencies, as well as an alias that you can use to start the repl with. If you, like me, use Emacs you can codify this into a .dir-locals.el file for your project.

((nil . ((cider-clojure-cli-aliases . ":otel"))))

Now, whenever CIDER creates a new REPL it will use the :otel alias as well.

The agent listed as javaagent can be downloaded from the OpenTelemetry Java Instrumentation page. It immediately brings in a slew of default instrumentations to the project. Give it a try with the starter project; you will notice that all the Jetty requests show up in your Jaeger instance (you did look at the tutorial, right?).

Finally, here is the updated project for you to play with.

(ns core
  (:require
   [next.jdbc :as jdbc]
   [reitit.ring :as ring]
   [ring.adapter.jetty :as jetty]
   [ring.util.response :as response]
   [steffan-westcott.clj-otel.api.trace.http :as trace-http]
   [steffan-westcott.clj-otel.api.trace.span :as span]))

(def counter (atom 0))

;; add your database configuration here
(def db {:jdbcUrl "jdbc:postgresql://localhost:5432/db-name?user=db-user&password=db-pass"})

(def ds (jdbc/get-datasource db))

(defn wrap-db
  [handler db]
  (fn [req]
    (handler (assoc req :db db))))

(defn wrap-exception [handler]
  (fn [request]
    (try
      (handler request)
      (catch Throwable e
        (span/add-exception! e {:escaping? false})
        (let [resp (response/response (ex-message e))]
          (response/status resp 500))))))

(defn db->value [db]
  (let [current @counter]
    (span/with-span! "Incrementing counter"
      (span/add-span-data! {:attributes {:service.counter/count current}})
      (swap! counter inc))
    (:value (first (jdbc/execute! db [(str "select " current " as value")])))))

(defn handler [request]
  (let [db (:db request)
        dbval (db->value db)]
    (span/add-span-data! {:attributes {:service.counter/count dbval}})
    {:status 200
     :body (str "Hello world: " dbval)}))

(def router (ring/router
             ["/hello" {:get (-> #'handler
                                 (wrap-db ds)
                                 wrap-exception
                                 trace-http/wrap-server-span)}]))
                                 
(def app (ring/ring-handler router
                            (ring/create-default-handler)))

(def server (jetty/run-jetty #'app {:port 3000, :join? false}))
;; (.stop server)

There are several interesting bits to be aware of. First, the handler is wrapped in several middleware functions: one to pass the database connection, one to wrap exceptions (as in the tutorial) and finally one to wrap the server request in a span. The db->value function creates its own span to keep track of its activity.

After making several requests you will see that Jaeger contains the same number of traces. A normal trace shows 3 bars, each of which you can expand and explore.

A trace in Jaeger

If you take the database offline (that is why we used Postgres), you will notice that the exception is neatly logged.

Exceptions in Jaeger

Observability gives you great insight into how your application is running in production. With the clj-otel library it is a breeze to enhance your own application.

#clojure #web #observability #programming

{{< admonition type="warning" >}} Currently, only use Postgres 14 on the DigitalOcean App Platform for development databases. {{< /admonition >}}

While following the book {{< backlink "zero2prod" "Zero2Prod" >}} you will learn how to deploy a {{< backlink "rust" "Rust" >}} application to DigitalOcean through a continuous deployment pipeline. This is hardly anything new for me (I even teach a course on DevOps), but so as not to stray from the path of the book, I followed its instructions.

The spec for DigitalOcean looks like this (abbreviated for your reading pleasure):

name: zero2prod
region: fra
services:
    - name: zero2prod
      dockerfile_path: Dockerfile
      source_dir: .
      github:
        branch: main
        deploy_on_push: true
        repo: credmp/zero2prod
      health_check:
        http_path: /health_check
      http_port: 8000
      instance_count: 1
      instance_size_slug: basic-xxs
      routes:
      - path: /
databases:
  - name: newsletter
    engine: PG
    db_name: newsletter
    db_user: newsletter
    num_nodes: 1
    size: db-s-dev-database
    version: "16"

Actually, the book says to use version 12, but that version is no longer available. The latest supported version is 16, so I chose that. There is only a small hiccup here: since Postgres 15 (released in 2022) there has been a breaking change in how databases are set up. Notably, a best practice that followed a 2018 CVE (CVE-2018-1058) has been made the default: ordinary users no longer have creation rights, and as an administrator you have to explicitly grant rights to your users.

Although this has been best practice since 2018, the change in Postgres 15 finally confronts users with it. To my surprise, DigitalOcean seems to have been unaware of this change until now.

The development database created on the App Platform using the spec above creates a user (newsletter) with the following rights:

```
    Role name     |                       Attributes
------------------+------------------------------------------------------------
 _doadmin_managed | Cannot login
 _doadmin_monitor |
 _dodb            | Superuser, Replication
 doadmin          | Create role, Create DB, Replication, Bypass RLS
 doadmin_group    | Cannot login
 newsletter       |
 postgres         | Superuser, Create role, Create DB, Replication, Bypass RLS
```

You read that correctly: none. At the moment you can still create a Postgres 14 database with Digital Ocean, which does grant rights to the user, and then upgrade it to the latest version while keeping those rights. But that is a workaround.
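Once you do have an admin connection, the missing permission can be granted in a single statement. A sketch, assuming the newsletter user from the spec and the default public schema (on a managed cluster you would run this as doadmin):

```sql
-- Restore the pre-Postgres-15 behaviour for this user:
GRANT CREATE ON SCHEMA public TO newsletter;

-- If the user also needs to create databases (e.g. for test runs):
ALTER ROLE newsletter CREATEDB;
```

The catch with the development database is exactly that no such admin connection is exposed, which is why this has to be fixed on Digital Ocean's side.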

After determining the cause of the error I decided to email Digital Ocean support about the issue. Timeline:

  • December 30th: the answer is that I am using a development database, and that if I only upgraded to a managed cluster I would have full access to the database. I politely responded, explaining the problem again.
  • December 30th: a quick response from the same agent, saying that based on the information provided I am trying to do things with the doadmin user, again not reading the actual question (or not understanding the problem). I answer again with a full log of the creation of the database and the rights given to the users.
  • December 31st: another agent responds, telling me that my spec will create a database and that I can connect using the data from the control panel. This is exactly the information I already sent; the agent does not actually look at the problem (no rights). I once again explain the issue.
  • December 31st: another agent answers the ticket, asking how I create the database. I once again answer with the spec (already in the ticket twice by now) and the steps I use (doctl from the command line).
  • December 31st: another agent responds with some general information about creating databases, again without actually reading or understanding the issue.
  • January 1st: a standard follow-up email asking if I am happy with the service. I respond that the problem is not solved, and that, given the interaction so far, I fear it will not be.
  • January 2nd: another agent responds that they are discussing the issue internally.
  • January 2nd: a senior agent called Nate appears in the thread, actually asking questions that explore the issue. I promptly respond.
  • January 2nd: Nate acknowledges the issue and Digital Ocean starts working on a fix for their database provisioning. He provides the workaround of first using version 13 or 14 and then upgrading.
  • January 9th: still working on it.
  • January 15th: still working on it.
  • January 21st: another update that the provisioning process is quite complex and they are still working on a solution.

The process of getting something so trivial through the support channel is quite painful. I do realize I do not have paid support, and because of that I am willing to wait it out, but the first five interactions did nothing but erode my confidence in Digital Ocean's support system. Luckily, Nate picked up the ticket.

When a solution eventually comes around I will update this post.

#development #database #programming

In July 2023, I installed NixOS as my daily operating system. NixOS is a Linux distribution that emphasizes a declarative approach to system management. This means you define your desired operating system configuration in a file (e.g., KDE with Emacs 30 and Firefox), and the Nix package manager uses that file to create your OS. Every change generates a new version, allowing you to revert to a previous version if anything goes wrong.

Prior to NixOS, I used various Ubuntu and Debian-based distributions, with Pop!_OS being my favorite. I often encountered package conflicts or misconfigurations during updates. NixOS has resolved these issues for me.

Since switching in 2023, I've experienced zero problems with upgrades or stability. While experimenting with different desktop environments posed some challenges, the ability to reboot into a prior OS version (or “generation”) has provided a safety net I didn't realize I needed.

My NixOS configuration primarily revolves around three files. The first is /etc/nixos/configuration.nix, created during installation and tailored to the desktop of each machine (currently KDE on my work laptop). The second is /etc/nixos/shared.nix, which contains the services and settings shared between my laptop, desktop, and work laptop, encompassing everything from Bluetooth to sound configuration. This setup ensures I have a consistent and functional desktop environment across all my systems.

The last file I manage is ~/.config/home-manager/home.nix, which contains all the programs I want, such as Emacs, wl-clipboard, and Firefox, along with user services like the Emacs daemon. Essentially, I only need to edit home.nix as a user and run home-manager switch to deploy new programs on my system.
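For illustration, a minimal home.nix along those lines might look as follows. This is a sketch, not my actual configuration; the username, home directory, and state version are placeholders:

```nix
{ config, pkgs, ... }:

{
  # Placeholder values; adjust for your own user
  home.username = "arjen";
  home.homeDirectory = "/home/arjen";
  home.stateVersion = "24.05";

  # Programs installed into the user profile
  home.packages = with pkgs; [
    emacs
    wl-clipboard
    firefox
  ];

  # User services, such as the Emacs daemon
  services.emacs.enable = true;

  # Let home-manager manage itself
  programs.home-manager.enable = true;
}
```

After editing this file, `home-manager switch` activates the new user generation.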

During the biannual update cycle in May and November, I update the nixos and home-manager channels and run sudo nixos-rebuild switch --upgrade for a system upgrade. While there can be occasional breaking changes, Nix alerts me to these. I can easily run upgrades before important meetings, confident it will work smoothly, and if issues arise, I can simply reboot into a previous generation.
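The commands involved are short. A sketch of the upgrade-and-rollback cycle, using standard NixOS tooling:

```shell
# Update the channels and rebuild the system in one go
sudo nix-channel --update
sudo nixos-rebuild switch --upgrade

# If something breaks, switch back to the previous generation
sudo nixos-rebuild switch --rollback

# Inspect the available system generations
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
```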

It's a delightful experience! Although there's a learning curve for newcomers, I highly recommend investing time in a VM to grasp the basics; it's well worth it over time.

In my home.nix, I include only the essential programs I use regularly, like Emacs. For my development projects, I rely on nix-direnv, which manages project-specific dependencies, such as compilers. Each {{< backlink “clojure” “Clojure”>}} project, for instance, contains a flake.nix file in the root that specifies its dependencies.

```nix
{
  description = "A basic flake with a shell";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";

  outputs = { nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in
      {
        devShells.default = pkgs.mkShell {
          packages = [
            pkgs.clojure
            pkgs.clojure-lsp
            pkgs.clj-kondo
            pkgs.cljfmt
            pkgs.nodejs
            pkgs.jdk23
            pkgs.unzip
          ];
        };
      });
}
```

The packages list above establishes a complete development environment. When I share a project with other NixOS (nix-direnv) users, it works for them out of the box, as it has no external dependencies. For my {{< backlink “rust” “Rust”>}} projects, like hed, I use a similar flake.nix specific to each project. Moving a project to a new machine and entering the directory automatically builds a complete development environment via nix-direnv, letting me dive right in. 🏝️
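Wiring a flake like this into nix-direnv takes one line in the project's `.envrc` (assuming direnv and nix-direnv are already installed and hooked into your shell):

```shell
# .envrc in the project root
use flake
```

After a one-time `direnv allow`, entering the directory drops you into the dev shell automatically.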

#linux #nixos #programming #operatingSystems