I am leaving NOVI. Yes, I know, it is sad news. For almost 6 years I have been building and maintaining an organisation that provides the best cybersecurity and software development (Bachelor) education in The Netherlands. In that time I have done amazing things:
Created a short course format for people who want to switch careers. Some back-of-the-napkin calculations suggest that over 2,500 students have passed through one of the programs.
I led a team of quality assurance specialists, educational developers, EduTech developers and teachers to build an awesome EduTech tool and provide top-notch education.
Started and hosted the {{< backlink "resigning-as-htb-ambassador" "Hack The Box NL meetups" >}} for 4 years.
I became part of the management team and helped the organisation through an M&A process.
It has been a wild ride, but like all things that begin, it must end.
I was listening to the Application Security podcast with Jim Routh, in which he talks about the 15-year cycle of his career. At first I thought that it did not make much sense, but after reflecting on that statement for a while I found that I myself have a 10-year cycle.
In my career there have been blocks of 10 years in which my interests and career change and re-align.
The first cycle was Software Development. I started as a software engineer (programming professionally) in 1996, working in the Dutch industry and then at Personify in San Francisco. When I moved back to The Netherlands I worked at Tiscali where I transitioned from software development into architecture.
Cycle two was all about architecture. From Tiscali I moved on to eBuddy in 2006 as the lead architect for our chat platform. It was a beautiful combination of coding and designing and I look back at it with great love. From the insane time at an internet startup I moved into an architecture role at Infomedics. I combined my coding and architecture skills to build out an amazing platform that serves millions of people on a daily basis. During my time at Infomedics I achieved a Bachelor's degree in ICT and I started making cybersecurity a more prominent part of my career, from obtaining certifications such as CEH and OSCP to managing ISO 27001 and ISAE 3402 certifications for our tech teams.
The third cycle is education. From 2016 onward I have worked in education. First as a hobby project: outside my work hours I gave classes at the Hogeschool van Amsterdam in Infrastructure Security, Forensics and Software Security, and guided students in their projects and final exams. I even created a completely new Associate Degree in Cybersecurity. In 2019 I made a full switch to education by joining NOVI. During my time at NOVI I attained my {{< backlink "master-of-science" "Master's Degree" >}} and transitioned into a more strategic role where I managed a large part of the organisation next to giving classes and building curricula.
And now my fourth cycle has appeared. Each cycle up to now has brought me amazing challenges, wonderful people and a lot of knowledge. So it is an exciting time to look forward to.
When I tell people that I like to code in {{< backlink "clojure" "Clojure">}} the common response is “wut?”. Clojure is not known as a programming language in which you create big systems. As all Clojure people know, this is not true. There are many systems written in Clojure. Let me show you some that are very actively maintained.
First there is Lipas, a Finnish platform that shows you information about sports clubs. I use the structure and techniques of this code base as a reference implementation for my own ClojureScript + Clojure systems. A screenshot of the application is shown here:
Next, there is Metabase, a business intelligence platform. The below gif shows you some of the features it has.
There is a great talk at Conj 2024 about supporting 50,000 users on Metabase. You can watch it over on YouTube.
Finally, also found on the Conj 2024 streams, there is the Cisco Threat Intelligence API. This is a full threat intelligence service and data model built using Clojure. Link to the repository. The talk about the project can be seen on YouTube.
There are plenty of other projects using Clojure, if you know of more that I should add to my list, do let me know!
Observability in cloud-native applications is crucial for managing complex systems and ensuring reliability (Chakraborty & Kundan, 2021; Kosińska et al., 2023). It enables continuous generation of actionable insights based on system signals, helping teams deliver excellent customer experiences despite underlying complexities (Hausenblas, 2023; Chakraborty & Kundan, 2021). In essence, adding proper observability to your system allows you to find and diagnose issues without having to dig through tons of unstructured log files.
The running project
In {{< backlink "20250107-clojure-reitit-server" "my previous post on reitit">}} we built a simple endpoint using {{< backlink "clojure" "Clojure">}} and reitit. The complete code for the small project was:
Nice and easy, eh? That simplicity is what I truly love about {{< backlink "clojure" "Clojure">}}. That, and the awesome interoperability with the Java ecosystem of libraries.
The clj-otel library has a great tutorial that you can follow here. Applying the knowledge from that tutorial to our reitit application is trivial too. To show the power of observability, a JDBC connection will be added to the application. There is no need to mess with any tables or such; it will just open a connection to a Postgres database and query a value from it.
You will notice some new dependencies, as well as an alias that you can use to start the repl with. If you, like me, use Emacs you can codify this into a .dir-locals.el file for your project.
((nil . ((cider-clojure-cli-aliases . ":otel"))))
Now, whenever cider creates a new repl it will use the otel alias as well.
The agent that is listed as javaagent can be downloaded from the OpenTelemetry Java Instrumentation page. This immediately brings a slew of default instrumentations into the project. Give it a try with the starter project: you will notice that all the Jetty requests show up in your Jaeger instance (you did look at the tutorial, right?).
Finally, here is the updated project for you to play with.
There are several interesting bits to be aware of. First, the handler is wrapped in several middleware functions: one to pass the database connection, one to wrap exceptions (as in the tutorial) and finally one to wrap a server request. The db->value function creates its own span to keep track of its activity.
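As a sketch of that last part, a db->value along these lines could wrap its query in its own span with clj-otel's with-span! macro (the helper name and option keys here are my assumption; check the clj-otel docs for the exact API):

```clojure
(require '[steffan-westcott.clj-otel.api.trace.span :as span]
         '[next.jdbc :as jdbc])

;; Hypothetical query helper: the database round-trip gets its own
;; span, so it shows up as a separate bar in the Jaeger trace.
(defn db->value [db]
  (span/with-span! {:name "db->value"}
    (jdbc/execute-one! db ["SELECT 1 AS value"])))
```

If the query throws (for instance, with the database offline), the span records the exception before it propagates up to the exception-wrapping middleware.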
After making several requests you will see that Jaeger contains the same number of traces. A normal trace will show 3 bars, each of which you can expand and explore.
If you take the database offline (that is why we used Postgres), you will notice that the exception is neatly logged.
Observability gives you great insight into how your application is running in production. With the clj-otel library it is a breeze to enhance your own application.
{{< admonition type="warning" >}}
Currently, use only Postgres 14 on the Digital Ocean application platform for development databases.
{{< /admonition >}}
While following the book {{< backlink "zero2prod" "Zero2Prod">}} you will learn how to deploy a {{< backlink "rust" "Rust">}} application to Digital Ocean through a Continuous Deployment pipeline. This is hardly anything new for me, I even teach a course in DevOps, but to not stray from the path of the book I followed its instructions.
The spec for Digital Ocean looks like this (abbreviated for your reading pleasure):
Actually, the book says to use version 12, but that version is no longer available. The latest supported version is 16, so I chose that. There is only a small hiccup here: since Postgres 15 (released in 2022) there has been a breaking change in how databases are created. Notably, a best practice that followed a 2018 CVE (CVE-2018-1058) has been made the default: users no longer get creation rights by default, and as an administrator you have to explicitly grant rights to your users.
Although this has been best practice since 2018, the change in Postgres 15 is what actually confronts users with it. To my surprise, Digital Ocean seems to have been unaware of this change until now.
The development database created in the application platform using the spec above creates a user (newsletter) with the following rights:
You read that correctly: none. At the moment you can still create a Postgres 14 database with Digital Ocean, which does grant rights to the user, and then upgrade it to the latest version, keeping the rights. But that is a workaround.
After determining the cause of the error I decided to mail Digital Ocean support with the issue. Timeline:
December 30th: the answer is that I am using a development database, and that if I upgraded to a managed cluster I would have full access to the database. I politely responded, explaining the problem again.
December 30th: a quick response from the same agent, saying that based on the information provided I am trying to do things with the doadmin user, again not reading the actual question (or not understanding the problem). I again answer with a full log of the creation of the database and the rights given to the users.
December 31st: another agent responds, telling me that using my spec it will create a database and that I can connect using the data from the control panel. This is exactly the information I already sent, but the agent does not actually look at the problem (no rights). I once again explain the issue.
December 31st: another agent answers the ticket, asking how I create the database. I once again answer with the spec (which is already in the ticket 2 times now) and the steps I use (doctl from the command line).
December 31st: another agent responds with some general information about creating databases, again not actually reading or understanding the issue.
January 1st: a standard follow-up email asking if I am happy with the service. I respond that the problem is not solved, and that given the interaction so far I fear it will not be.
January 2nd: another agent responds that they are discussing the issue internally.
January 2nd: a senior agent called Nate appears in the thread, actually asking questions that explore the issue. I promptly respond.
January 2nd: Nate acknowledges the issue and Digital Ocean starts working on a fix for their database provisioning. He provides the workaround of first using version 13 or 14 and then upgrading.
January 9th: still working on it.
January 15th: still working on it.
January 21st: another update that the provisioning process is quite complex and they are still working on a solution.
The process of getting something so trivial through the support channel is quite painful. I do realize I do not have paid support, and because of that I am willing to wait it out, but the first five interactions did nothing but destroy my confidence in the Digital Ocean support system. Luckily Nate picked up the ticket.
When a solution eventually comes around I will update this post.
In July 2023, I installed NixOS as my daily operating system. NixOS is a Linux distribution that emphasizes a declarative approach to system management. This means you define your desired operating system configuration in a file (e.g., KDE with Emacs 30 and Firefox), and the Nix package manager uses that file to create your OS. Every change generates a new version, allowing you to revert to a previous version if anything goes wrong.
Prior to NixOS, I used various Ubuntu and Debian-based distributions, with Pop!_OS being my favorite. I often encountered package conflicts or misconfigurations during updates. NixOS has resolved these issues for me.
Since switching in 2023, I've experienced zero problems with upgrades or stability. While experimenting with different desktop environments posed some challenges, the ability to reboot into a prior OS version (or “generation”) has provided a safety net I didn't realize I needed.
My NixOS configuration primarily revolves around three files: /etc/nixos/configuration.nix, created during installation and tailored to my chosen desktop (currently KDE for my work laptop); /etc/nixos/shared.nix, which contains shared services and settings for my laptop, desktop, and work laptop, encompassing everything from Bluetooth to sound configurations. This setup ensures I have a consistent and functional desktop environment across all my systems.
The last file I manage is ~/.config/home-manager/home.nix, which contains all the programs I want, such as Emacs, wl-clipboard, and Firefox, along with user services like the Emacs daemon. Essentially, I only need to edit home.nix as a user and run home-manager switch to deploy new programs on my system.
During the biannual update cycle in May and November, I update the nixos and home-manager channels and run sudo nixos-rebuild switch --upgrade for a system upgrade. While there can be occasional breaking changes, Nix alerts me to these. I can easily run upgrades before important meetings, confident it will work smoothly, and if issues arise, I can simply reboot into a previous generation.
It's a delightful experience! Although there's a learning curve for newcomers, I highly recommend investing time in a VM to grasp the basics; it's well worth it over time.
In my home.nix, I include only the essential programs I use regularly, like Emacs. For my development projects, I rely on nix-direnv, which manages project-specific dependencies, such as compilers. Each {{< backlink "clojure" "Clojure">}} project, for instance, contains a flake.nix file in the root that specifies its dependencies.
{
description = "A basic flake with a shell";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { nixpkgs, flake-utils, ... }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
in
{
devShells.default = pkgs.mkShell {
packages = [
pkgs.clojure
pkgs.clojure-lsp
pkgs.clj-kondo
pkgs.cljfmt
pkgs.nodejs
pkgs.jdk23
pkgs.unzip
];
};
});
}
The packages list above establishes a complete development environment for users. When I share my project with others using NixOS (nix-direnv), it seamlessly works for them, as it has no external dependencies. For my {{< backlink "rust" "Rust">}} projects, like hed, I utilize a similar flake.nix specific to that project. Moving it to a new machine and entering the directory automatically builds a new (complete) development environment via nix-direnv, allowing me to dive right in. 🏝️
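The glue between a project's flake.nix and the shell is a one-line .envrc file in the project root, which tells nix-direnv to load the flake's dev shell whenever you enter the directory:

```
# .envrc — picked up by direnv; the "use flake" directive comes from nix-direnv
use flake
```

After an initial `direnv allow`, entering the directory drops you straight into the dev shell defined by the flake.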
In {{< backlink "a-new-theme" "my previous post">}} I highlighted that I set myself the goal of creating a self-hosted comic book collection tool. Before that, in {{< backlink "choose-your-tools" "a post about tooling">}}, I reiterated my ❤️ for {{< backlink "clojure" "Clojure">}} as a language. So, this is the start of a series of articles detailing how the development is going, and also an introduction to the various parts of the tech stack.
Clojure is special to me in that there are hardly any big frameworks in the ecosystem. Clojure is more like Lego: there are countless building blocks of various shapes and sizes, and it is up to you as the developer to stick the blocks together into something useful. You might guess that I also ❤️ Lego.
{{< admonition type="tip" >}}
On YouTube you will find various series that detail the creation of Clojure apps. Check out:
🤯 Andrey Fadeev's playlist for building production apps from scratch
🏝️ Lambda Island has many interesting Clojure videos on different topics
If you would like to be added to this list, send me a message: @credmp@fosstodon.org
{{< /admonition >}}
So, today I am starting with the first component of my tech stack: Metosin's Reitit.
What is reitit?
Metosin's Reitit is a highly performant and extensible routing library for Clojure and ClojureScript applications. It provides a declarative way to define routes for web servers. Reitit integrates seamlessly with Ring, enabling middleware and handler chaining, and offers robust features like path parameters, route coercion, and schema validation.
It is easy to get started with, but is flexible enough to provide everything we need in any type of API. In this post I am going to show you the essentials to get a workflow up and running.
{{< admonition type="tip" >}}
The reitit documentation is extensive and very valuable, read it here.
{{< /admonition >}}
A very simple API
There are many ways to start building an API, and pretty much everything is okay. I like to start from the handler and then work my way all the way down to the HTTP server.
A handler
A handler is the code that, well, handles the request. Let's create a Hello World handler, its only task is to return a map which has a :status key and a :body key.
The :status represents the HTTP status code that should be returned, in this case 200 – all is good. The :body will be a string for now. In a later post it will become JSON, but to get started a string is fine.
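Such a handler is just a function from a request map to a response map; a minimal sketch (the exact names in the project may differ):

```clojure
;; A handler is a plain function: request map in, response map out.
;; The request is ignored for now, hence the underscore.
(defn handler [_request]
  {:status 200
   :body   "hello world!"})
```

Evaluating (handler {}) in the REPL returns the response map directly, no server required.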
In the application the handler has to be connected to a URL endpoint, a so-called route.
The router
The router connects routes to handlers. The routes are defined using vectors ([]). The handler that was defined earlier is a greeting, an endpoint for such a thing might be /hello (or /greet, but it is always /greet...). The endpoint becomes a route when it is combined with a method to get there.
In HTTP there are several methods: POST, GET, PUT, DELETE, and a bunch more. These methods are the way HTTP tells the server to create, read, update and delete something on the server.
In this case the handler is only asked to return some information, so a GET method is the right choice here.
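Wiring the route up with reitit might look like this (a sketch, assuming the handler from above; the route data format is reitit's):

```clojure
(require '[reitit.ring :as ring])

;; Routes are data: a vector of path plus a map from HTTP method
;; to handler. #'handler references the var, so redefining the
;; handler is picked up without rebuilding the router.
(def router
  (ring/router
   ["/hello" {:get {:handler #'handler}}]))
```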
{{< admonition type="note" >}}
I am using #'handler here, which is the same as (var handler) to refer to the var named handler. It is used to reference the var itself instead of its value.
During development this means that the var's value can be updated and the result will immediately be available in the web server, with no need to restart the server. This helps greatly in the development experience.
{{< /admonition >}}
With the router created, it can be queried to ensure everything is as expected. This is a good way to check what kind of middleware or interceptors are applied to the routes. Currently there is none of that magic going on, but later on it might be necessary to confirm that the configuration is correct.
An interesting fact: when a GET route is created, reitit will also create an OPTIONS route. This is to satisfy browsers and frontend tooling that request some metadata (options) before calling a potentially expensive (in time) method.
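Querying the router from the REPL might look like this (a sketch using reitit.core, assuming the router defined earlier; the exact match data depends on your routes):

```clojure
(require '[reitit.core :as r])

;; Ask the router what it would do for a given path.
;; Returns a Match record, or nil when nothing matches.
(r/match-by-path router "/hello")
(r/match-by-path router "/nope") ;; nil: no such route
```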
With a router defined, the ring handler can be constructed. It is confusing that there are multiple handlers now, so let's refer to the ring handler as the app (or application handler): basically a fully wired-up application that can process requests.
The application handler
Constructing the app makes it possible to take a request map, the thing the webserver will receive from a client, and route it to the handler. The handler will then process the request and will return a result. The app will return the result to the client.
For now the ring-handler can be constructed with the router that was created earlier and the ring/create-default-handler. The default handler ensures more correct error responses are created. It differentiates :not-found (no route matched), :method-not-allowed (no method matched) and :not-acceptable (handler returned nil).
The ring/ring-handler returns a function. That function can be called with a request map to test it out. Passing a request to the app for an endpoint that does not exist should return a 404, HTTP's way of saying “I have no idea what you want from me”.
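Putting the two together and poking the result from the REPL looks roughly like this (a sketch, assuming the router var from before):

```clojure
(require '[reitit.ring :as ring])

;; The "app": the router plus a default handler for the error cases.
(def app
  (ring/ring-handler
   router
   (ring/create-default-handler)))

;; A request map only needs a method and a URI to be routed.
(:status (app {:request-method :get :uri "/hello"}))   ;; 200
(:status (app {:request-method :get :uri "/missing"})) ;; 404
```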
It works as expected! The final step is to actually connect it to a webserver.
Making it available as a service
The Jetty server is a battle-tested HTTP server. It is very easy to use through the ring adapter. By calling run-jetty, and passing in our app (again as a var reference for easy development), the endpoint will finally become available online (on our system).
There are 2 important parameters that are passed to Jetty: :port and :join?. The port tells Jetty which port the server should bind to; anything above 1024 is good here.
Setting :join? to false tells run-jetty not to block the calling thread, which allows the REPL to accept other commands. If it was not passed, the REPL would have to be restarted to stop the server. The result of run-jetty is stored in server.
;; add a require
[ring.adapter.jetty :as jetty]
(def server
(jetty/run-jetty #'app
{:port 3000, :join? false}))
Using a tool such as curl it is now possible to query the API. You can also use the browser of course!
$ curl -v localhost:3000/hello
* Host localhost:3000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:3000...
* Connected to localhost (::1) port 3000
> GET /hello HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Date: Tue, 07 Jan 2025 21:26:36 GMT
< Transfer-Encoding: chunked
< Server: Jetty(11.0.24)
<
* Connection #0 to host localhost left intact
hello world!%
From the result (which is verbose due to -v) it is clear that the Jetty server is responding (note the Jetty(11.0.24) line in the headers). Also, there is the very welcoming hello world message at the bottom.
In the repl it is possible to make changes to the handler. After evaluation the API should immediately return the updated message.
To stop the webserver either close the repl, or call .stop on the server var.
(.stop server)
This is a first small step to a new API. Reitit has many things to offer, I would recommend checking out the docs and the examples.
So, a new year, a new theme! I switched my blog to use the Today I Learned Theme. This theme has a great feature where it also maintains a collection of notes and shows a graph with related notes. This is very similar to how I use org-roam.
I will not be transferring all my notes over, but I thought it would be a very nice feature to share some of my notes with you. This year I am focusing on {{< backlink "choose-your-tools" "Clojure">}} and {{< backlink "rust" "Rust">}}, and as a result I will be posting my notes on the new things I learn.
I set myself a goal of creating a “self hosted comic book collection tool”. It should be very nice to create this using the insights from {{< backlink "zero2prod" "Zero 2 Production">}}. My blog will be a sort of development log along the way.
{{< admonition type="note" >}}
Originally posted on 2024-09-30 (Monday). It was updated in January of 2025.
{{< /admonition >}}
I ❤️ to build software. Sadly, I do not have a lot of time next to my daily work to spend on my side projects, so I have to be disciplined in where I invest it. I wish I could spend endless amounts of time exploring new technologies, but I simply do not have that time. In writing this is sometimes referred to as “killing your darlings”.
Sir Arthur Quiller-Couch wrote in his 1916 book On the Art of Writing: “If you here require a practical rule of me, I will present you with this: ‘Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.’”
Luckily for me, I just finished my latest round of education, so I now do have time to spend on building some of the ideas that have been floating around in my head for the last 3 years. And I did start out writing stuff. Some in {{< backlink "rust" "Rust">}}, some in Go and others in Clojure.
Like many programmers I love to explore new languages; I think you always learn something new from them. Clojure, for example, really taught me about functional programming when all I knew were imperative languages. In the end, after a summer of not working on my studies, I have 0 projects completed, but I do have 4 versions of them.
So, I decided to step back and evaluate. I decided to kill my darlings of different programming languages and focus solely on Clojure again. Development in Clojure conforms to Rule 6 for me. While working out a problem I love the interactive development style. I actually like the parentheses, I know... weirdo me 🤗.
Update 2025: during the holiday season I got the book Zero 2 Prod, a book about making a Rust project production-worthy, experience I already have in Java and Clojure. This sparked Rule 6 for me for the {{< backlink "rust" "Rust">}} language again. The experience following the book has been quite smooth, but the real proof is, of course, creating something yourself. I know, I am like a {{< sidenote "puppy" >}}I love puppies!{{< /sidenote >}} puppy chasing his tail... Let's see where this goes.
From reading the book I already see lots of improvements for my hed tool.
You might even remember that I used to do a live-streaming series in Clojure. I still don't have a lot of time to continue that one, but who knows... I might drop some videos later again.
Since the summer I have been somewhat involved in Biff, a rapid prototyping web framework in Clojure. It provides a set of sensible defaults to just get started, and it allows you to easily change all its bits. I have been building my latest project on top of it, which, with a bit of luck, might even make it to production.
It is December again and that means it is time for the Advent of Code. Due to my workload and family obligations I will probably not be able to get very far this year, but still I wanted to write a post about it.
This year I am using Java, together with my students. My goal is to write the most modern Java possible, which means using streams and new language constructs where I can.
In day 1 we are parsing 2 lists of numbers, with the lists printed vertically. This means each line has 2 numbers, one for list one and the other for list two. To parse these data structures I used a very nice stream where I map each line onto a String[] using split.
To be sure that the input is valid, the peek method allows me to check that each line splits into exactly two parts; otherwise an exception terminates everything. From there I map the String[] into a Pair record which holds the 2 numbers. Streaming over the resulting pairs, the left and right lists can be extracted quite easily.
I loved this approach, it is very straightforward and does not have a lot of control flow.
@Override
public List<List<Integer>> parseInput(List<String> input) {
var pairs = input.stream()
.map(s -> s.split("\\s+"))
.peek(parts -> {
if (parts.length != 2)
throw new IllegalArgumentException("Invalid input format");
})
.map(parts -> new Pair<>(Integer.parseInt(parts[0]), Integer.parseInt(parts[1])))
.collect(Collectors.toList());
var left = pairs.stream()
.map(Pair::left)
.collect(Collectors.toList());
var right = pairs.stream()
.map(Pair::right)
.collect(Collectors.toList());
return List.of(left, right);
}
Solving the problem with these lists was quite easy. In part 2 there was a need for a frequency table of a list. I also found a very nice solution to that problem using the groupingBy method from Collectors.
input.get(1).stream()
.collect(Collectors.groupingBy(n -> n, Collectors.counting()));
I really liked day 2, the first part was quite straightforward. You have to identify increment or decrement only lists and apply some conditions to them.
Part 2 was much more interesting, here you have to account for fault tolerance. In the Python implementations that were posted the common solution is to concatenate 2 parts of the array and then rerun the validation logic.
Using streams we can do something very similar. First we use an IntStream to iterate over every int[] (report). Then for every int in that report, we construct a new array by filtering out the index of the current item. After that it is a simple case of determining increment or decrement and applying the conditional logic.
Suppose you have a list of [1,2,4,7], while iterating it will first hit index 0, the filter will prevent that entry from continuing. Next 1 through 3 will continue and as a result of toArray() a new array will be constructed with only those items.
   x == 0
     |  map ------.
     v            v
[1, 2, 4, 7]   [2, 4, 7]
input.stream()
// Loop over the list
.filter(in -> IntStream.range(0, in.length) // take a report
// for every entry in that int[]
.anyMatch(x -> {
// create a new list, excluding the one we are on now
int[] c = IntStream.range(0, in.length)
.filter(i -> i != x)
.map(i -> in[i])
.toArray();
boolean allInc = IntStream.range(0, c.length - 1)
.allMatch(i -> c[i] <= c[i + 1]);
boolean allDec = IntStream.range(0, c.length - 1)
.allMatch(i -> c[i] >= c[i + 1]);
boolean good = IntStream.range(0, c.length - 1)
.allMatch(i -> Math.abs(c[i] - c[i + 1]) >= 1 &&
Math.abs(c[i] - c[i + 1]) <= 3);
// matching the condition
return (allInc || allDec) && good;
})
)
.count();
My first solution was nothing like this, but after refining it I am very happy with how clean it came out.
This was the traditional easy puzzle after a more complicated one: basically simple parsing, for which I used regular expressions. Nothing special; on to day 4.
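As a flavour of that kind of parsing, here is a generic stream-plus-regex sketch (made-up input and names, not the actual puzzle solution):

```java
import java.util.List;
import java.util.regex.Pattern;

public class RegexParseSketch {
    // Hypothetical example: pull every integer out of each line.
    static final Pattern NUM = Pattern.compile("\\d+");

    static List<Integer> numbers(List<String> lines) {
        return lines.stream()
                // Matcher.results() streams every match in the line
                .flatMap(line -> NUM.matcher(line).results())
                .map(m -> Integer.parseInt(m.group()))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(numbers(List.of("a12b", "3,45"))); // prints [12, 3, 45]
    }
}
```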
For day 4 I solved the first part with an over engineered path finding solution, which turned out to be quite the overkill, but extremely fun to program.
I really like the pattern in use, below is some of the code of it. First you create a Deque that holds the work, then you load it up with the initial starting points. In the case of the puzzle these are the location of the X characters.
From there you just loop over the work, taking a partial solution and seeing if any cells around it will lead to another partial solution, so from XM to XMA and on the next iteration to XMAS. The dx is a collection of Coord that indicate valid movements across the board.
Deque<Path> work = new ArrayDeque<>();
// Load initial points
for (int y = 0; y < input.length; y++) {
for (int x = 0; x < input[y].length; x++) {
if (input[y][x] == 'X') {
work.add( new Path(List.of(new Coord(x,y)), "X", null) );
}
}
}
// Process each outstanding point..
while (!work.isEmpty()) {
var path = work.pop();
for (Coord d : dx) {
if (path.dir() != null && path.dir() != d) {
continue;
}
// lastStep is the most recent coordinate on the path (elided above)
var newCoord = lastStep.add(d);
// Ensure this is a valid point on the grid
if (newCoord.x() >= 0 && newCoord.x() < input[0].length &&
newCoord.y() >= 0 && newCoord.y() < input.length) {
// ... create new paths and string based on location
// Check if we have an end case, else add it to the work
if (target.equals(xmas)) {
matches.add(newPath);
} else if (target.startsWith(xmas)) {
work.add(newPath);
}
}
}
}
A more straightforward approach, which was actually needed for part 2, is to just try to solve it in one step. First you iterate over both y and x coordinates looking for an X, just as above. When you find one, iterate over [-1, 0, 1] on both the x and y axes, using dy and dx for the direction. If both directions are 0 we skip, as that would give the current position. The beauty of this approach is that you can move outward in steps: x + 3 * dx gives you a value 3 cells in the given direction. From there it is a simple matter of checking if we are in bounds and if the letters spell MAS.
for (int y = 0; y < input.length; y++) {
for (int x = 0; x < input[y].length; x++) {
if (input[y][x] != 'X') continue;
for (int dy = -1; dy <= 1; dy++) {
for (int dx = -1; dx <= 1; dx++) {
if (dy == dx && dx == 0) continue;
if (!(0 <= y + 3 * dy && y + 3 * dy < input.length &&
0 <= x + 3 * dx && x + 3 * dx < input[y].length)) continue;
if (input[y+1*dy][x+1*dx] == 'M' &&
input[y+2*dy][x+2*dx] == 'A' &&
input[y+3*dy][x+3*dx] == 'S') {
matches++;
}
}
}
}
}
For part 2 a similar approach can be used; however, the order is not important. So I chose to create a List and then check it against a target list with the containsAll method, which does not care about order.
var x1 = List.of(input[y-1][x-1], input[y][x], input[y+1][x+1]);
var x2 = List.of(input[y-1][x+1], input[y][x], input[y+1][x-1]);
if (x1.containsAll( target ) && x2.containsAll( target )){
matches++;
}
Another reminder to not over engineer at the start.
Another fun puzzle; when I initially read it my mind jumped to graphs. The input has two parts, the first part being a list of rules: numbers that are only valid when they are placed in front of other numbers.
The second part of the input is a list of report structures. The first task was to validate the reports and find only the valid ones.
My first attempt, in part 1, was to take the rules for a number (a List<Integer>) and see if there is an anyMatch of the sublist before it using order::contains. Basically, if the pages are 75,97,47,61,53 and the rule is 97|75 (97 should be before 75), then the loop iterates over the pages and checks whether [75], the sublist before 97, appears in the list of rules for 97.
boolean isValid(List<Integer> pages, Instructions input) {
var valid = true;
for (int i = 0; i < pages.size(); i++) {
var order = input.order().get(pages.get(i));
if (order != null) {
var hasAny = pages.subList(0,i+1).stream().anyMatch(order::contains);
if (hasAny) {
valid = false;
}
}
}
return valid;
}
Part 2 had us fixing the broken pages. After some initial magic with arrays I figured out it is a basic sorting problem. In Java you can use Comparator implementations to create custom sorting rules, as long as the comparator returns -1, 0 or 1 for to-the-left, the-same or to-the-right. So the lambda Comparator takes a left-hand side and a right-hand side value, retrieves the rules for the left-hand side (if null, they compare equal: 0) and checks whether the right-hand side is in the ruleset (-1). If all checks fail, the value goes to the right-hand side (1).
var answer = 0L;
for (var pages : input.pages()) {
var valid = isValid(pages, input);
var work = new ArrayList<>(pages); // pages is immutable
if (!valid) {
Collections.sort(work, (lhs, rhs) -> {
var order = input.order().get(lhs);
if (order == null) return 0;
if (order.contains(rhs)) return -1;
return 1;
});
answer += work.get(work.size()/2);
}
}
A surprisingly easy solution to a messy problem when you want to implement it yourself.
Traditionally the Friday puzzles seem to be somewhat more challenging, this Friday is no exception. We are given a challenge similar to sliding puzzle games.
Instead of sliding over ice we are to map the movements of a guard to ensure we can move safely through the area. For part 1 there was nothing too exciting: just move the guard over the floor and track the places visited. Depending on your loop you might accidentally avoid an edge case that will show up in part 2.
Let's take a look at the loop:
while (inBounds) {
visited.add(start);
var next = start.add(delta.get(sign));
if (!next.inBound(0, input[0].length, 0, input.length)) {
inBounds = false;
continue;
}
if (input[next.y()][next.x()] == '#') {
sign = turns.get(sign);
continue;
}
start = next;
}
While we are *in bounds* we keep moving, adding each step to the visited collection. We get the next position by retrieving the delta, a lookup table of coordinates such as (-1,0), which indicates that the guard moves -1 on the x-axis and 0 on the y-axis. If the next position is out of bounds, we flip the flag and break out of the while loop; if it is an obstacle, #, we set the sign to its 90-degree turned version (another lookup table) and rerun the loop. If, for some reason, you continue checking and moving at this point you might miss the edge case that turning can leave you facing another wall. When all the conditions pass, simply set the start variable to the next coordinates and move on.
Part 2 becomes much more interesting; we are to find infinite loops by placing exactly 1 extra obstacle. Intuitively you will remark that the obstacle can only be placed on one of the cells that were visited in part 1. This already eliminates part of the board. From here you can loop over the list of coordinates, place an obstacle and let the guard run its route. When you visit a coordinate twice in *the same direction* you know you are in a loop.
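That state check can be sketched as follows. This is a minimal, self-contained version where the direction tables and the Coord type of my real solution are replaced by plain arrays and ints, and each (cell, direction) state is packed into one int:

```java
import java.util.HashSet;
import java.util.Set;

public class LoopDetect {
    // Direction deltas: 0 = north, 1 = east, 2 = south, 3 = west.
    static final int[] DX = {0, 1, 0, -1};
    static final int[] DY = {-1, 0, 1, 0};

    // True when the guard revisits a (position, direction) state, i.e. loops;
    // false when it walks off the grid.
    static boolean loops(char[][] grid, int x, int y, int dir) {
        Set<Integer> seen = new HashSet<>();
        while (true) {
            int state = ((y * grid[0].length + x) << 2) | dir; // pack cell + facing
            if (!seen.add(state)) return true;                 // same cell, same direction
            int nx = x + DX[dir], ny = y + DY[dir];
            if (nx < 0 || nx >= grid[0].length || ny < 0 || ny >= grid.length) return false;
            if (grid[ny][nx] == '#') { dir = (dir + 1) % 4; continue; } // turn right
            x = nx; y = ny;
        }
    }

    public static void main(String[] args) {
        char[][] looping = {
            ".#..".toCharArray(),
            "...#".toCharArray(),
            "#...".toCharArray(),
            "..#.".toCharArray(),
        };
        System.out.println(loops(looping, 1, 2, 0)); // guard at (1,2) facing north: true
    }
}
```

Part 2 then boils down to dropping an obstacle on every cell visited in part 1 and counting how often such a check reports a loop.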
I looked for a “smart” solution, but the brute force finishes in less than 2 seconds. So I will leave it at this, though I somehow feel there might be more optimizations possible.
The end of week 1, and easier than the Friday puzzle. We are given a list of numbers per line that we need to either add or multiply to reach a target number. I chose to use recursion to solve this problem. Each level of the recursion reduces the array of numbers using one of the operations.
In the end the list of numbers will be reduced to either the target number, or something else. So the base case checks to see if it was successful.
If the base case is not hit, the first recursive case adds the numbers. A trick here is to use a LongStream to range over 1 to the end, mapping the numbers. When index 1 is mapped, we add the number at position 0 to it, which reduces the array by one element.
The second case applies the multiplication in the same way.
The third case (part 2) is to concatenate the numbers. This is easily done through number + "" + number in Java, coercing the numbers into a String, and then using Long.valueOf() to read the value back.
boolean isValid(long target, long[] numbers, boolean third) {
if (numbers.length == 1) return target == numbers[0];
if (isValid(target, LongStream.range(1, numbers.length)
.map(i -> {
if (i == 1) return numbers[0] + numbers[1];
return numbers[(int)i];
})
.toArray(), third)) return true;
if (isValid(target, LongStream.range(1, numbers.length)
.map(i -> {
if (i == 1) return numbers[0] * numbers[1];
return numbers[(int)i];
})
.toArray(), third)) return true;
if (third && isValid(target, LongStream.range(1, numbers.length)
.map(i -> {
if (i == 1) return Long.valueOf(numbers[0] + "" + numbers[1]);
return numbers[(int)i];
})
.toArray(), third)) return true;
return false;
}
One last trick is to use the Stream feature to filter the list, mapping each object to a long value and summing.
@Override public Long solver1(List<Calibration> input) {
return input.stream().filter(i -> isValid(i.target, i.numbers, false)).mapToLong(cal -> cal.target).sum();
}
Day 8 has us back in history staring at antennas. The description was quite cryptic, but reading it carefully you learn that the necessary step is to find the difference between a pair of coordinates and then extrapolate the path inside the bounds of the grid.
To read the grid into a structure I used a simple nested loop, adding a new list to the map if it is absent, then adding the new coordinate for the antenna.
for (int r = 0; r < gridH; r++) {
for (int c = 0; c < gridW; c++) {
char ch = input.get(r).charAt(c);
if (ch != '.') {
antennas.computeIfAbsent(ch, k -> new ArrayList<>()).add(new Coord(c, r));
}
}
}
We then have to find the antinodes, which for a pair of antennas are just a single difference step beyond each antenna. Interestingly, we need to count the *unique* antinodes. Whenever you get such a requirement, always think about using a Set for storage.
Getting the pairs is straightforward, and we have done it earlier in the series already. The outer loop starts at 0 and ends one element before the end, size - 1. The inner loop starts at the current position + 1 and ends at the size of the list.
for (int p = 0; p < coords.size() - 1; p++) {
for (int n = p + 1; n < coords.size(); n++) {
Then just compute the difference and add the antinode when it is in bounds.
var antinode1 = cur.add(cur.diff(next));
var antinode2 = next.add(next.diff(cur));
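Put together, the pair loop with the bounds check and Set storage could look like the sketch below. The Coord helpers are assumptions modeled on the snippets above, with a simplified two-argument inBound:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Antinodes {
    record Coord(int x, int y) {
        Coord add(Coord o) { return new Coord(x + o.x, y + o.y); }
        Coord diff(Coord o) { return new Coord(x - o.x, y - o.y); }
        boolean inBound(int w, int h) { return x >= 0 && x < w && y >= 0 && y < h; }
    }

    // For every pair of same-frequency antennas, step one difference beyond
    // each antenna and keep the antinodes that fall on the grid.
    static Set<Coord> antinodes(List<Coord> coords, int w, int h) {
        Set<Coord> result = new HashSet<>();
        for (int p = 0; p < coords.size() - 1; p++) {
            for (int n = p + 1; n < coords.size(); n++) {
                Coord cur = coords.get(p), next = coords.get(n);
                Coord a1 = cur.add(cur.diff(next));
                Coord a2 = next.add(next.diff(cur));
                if (a1.inBound(w, h)) result.add(a1);
                if (a2.inBound(w, h)) result.add(a2);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Antennas at (4,3) and (5,5) give antinodes at (3,1) and (6,7).
        System.out.println(antinodes(List.of(new Coord(4, 3), new Coord(5, 5)), 10, 10).size()); // 2
    }
}
```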
In part 2 the path needs to be extrapolated until it goes out of bounds. This can easily be wrapped in its own method.
void addAntinodesInDirection(Set<Coord> antinodes, Coord start, Coord diff) {
var current = start;
while (true) {
var next = current.add(diff);
if (!next.inBound(0, gridW, 0, gridH)) {
break;
}
antinodes.add(next);
current = next;
}
}
A pretty straightforward problem to solve, on to tomorrow!
For me this was a hard day. We are given a list of digits that alternately describe file sizes and free space on a fragmented disk. The first part of the puzzle was quite straightforward: create a list that holds the file ids and spaces and just follow the rules.
for (int i = 0; i < input.length(); i++) {
for (int j = 0; j < input.charAt(i)-'0'; j++) {
if (i%2 == 0) {
disk.add(id);
} else {
disk.add(null);
}
}
if (i%2 == 0) {
id++;
}
}
From there just create 2 pointers, one on the left and one on the right. The left tracks the empty space and the right tracks the file ids that we want to move into the empty space. Since this loop only uses index-based get and set calls, a plain ArrayList is the right fit in Java; a LinkedList would turn every get(i) into an O(n) walk.
var l = 0;
var r = disk.size() - 1;
while (l < r) {
if (disk.get(l) != null) {
l++;
continue;
}
if (disk.get(r) == null) {
r--;
continue;
}
disk.set(l, disk.get(r));
disk.set(r, null);
l++;
r--;
}
Part 2 became much harder: we now have to find space for whole files instead of fragments. I first tried the same approach, but it took forever. I then saw the error of my ways and decided to use a lookup table for the empty spaces. This table maps the empty runs of size N, in the below example 1 and 2, to a list of start/end coordinates.
1 = [1,1] [2,2]
2 = [3,4] [6,7]
The list of coordinates needs to remain sorted, so I used a PriorityQueue for it. Then it is just a matter of determining the size of the file under the r pointer by looping over it until we hit another id, and then looking up the leftmost candidate among the empty spaces.
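Building that table can be sketched as below. As a simplification I only store the start index per run (the length is already the key into the table), and free run lengths in this puzzle never exceed 9 since they come from single digits:

```java
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class FreeTable {
    // Scan the disk for runs of empty cells (null) and index their start
    // positions by run length; the PriorityQueue keeps the leftmost start on top.
    static PriorityQueue<Integer>[] buildFreeTable(List<Integer> disk) {
        @SuppressWarnings("unchecked")
        PriorityQueue<Integer>[] free = new PriorityQueue[10];
        for (int i = 0; i < free.length; i++) free[i] = new PriorityQueue<>();
        int i = 0;
        while (i < disk.size()) {
            if (disk.get(i) != null) { i++; continue; }
            int start = i;
            while (i < disk.size() && disk.get(i) == null) i++;
            int len = i - start;
            if (len <= 9) free[len].add(start); // run lengths come from single digits
        }
        return free;
    }

    public static void main(String[] args) {
        // Disk: 0 0 . . . 1 1 1 . .  -> a 3-run at index 2 and a 2-run at index 8.
        var disk = Arrays.asList(0, 0, null, null, null, 1, 1, 1, null, null);
        var free = buildFreeTable(disk);
        System.out.println(free[3].peek() + " " + free[2].peek()); // 2 8
    }
}
```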
// Gets all candidates that will fit the file
for (int i = bs; i < 10; i++) {
var earliest = free[i].peek();
if (earliest != null && earliest < r) {
candidates.add(new Candidate(i, earliest));
}
}
if (candidates.isEmpty()) {
return null;
}
// Sort based on the index (most left first)
candidates.sort((lhs, rhs) -> {
return lhs.idx() - rhs.idx();
});
var can = candidates.getFirst();
free[can.size()].remove();
The final solution runs in a matter of milliseconds, so I am quite happy with that.
Finally, a depth-first / breadth-first path seeker! We need to identify a trail that leads to a summit, or rather all trails that lead to a summit. Part 1 asks for the score (how many summits a trailhead can reach) and part 2 for its rating (how many distinct trails reach a summit). This is pretty straightforward in the sense that you create a Queue and put the start of the trail in, then for each direction you construct a more complete path.
As always it is important to check the bounds and that the path is incremental (the business rule). If the new path has reached the summit, add it to the finished paths. If it has not, try to complete it further.
Trail solve(char[][] grid, Coord zero) {
var q = new ArrayDeque<List<Coord>>();
q.add(List.of(zero));
List<List<Coord>> paths = new ArrayList<>();
while (!q.isEmpty()) {
var current = q.removeFirst();
for (Coord d : zero.directNeighbors()) {
var last = current.getLast();
var nc = last.add(d);
if (!nc.inBound(0,grid[0].length, 0, grid.length)) continue;
if (grid[nc.y()][nc.x()] != grid[last.y()][last.x()] + 1) continue;
var newPath = new ArrayList<>(current);
newPath.add(nc);
if (grid[nc.y()][nc.x()] == '9') {
paths.add(newPath);
} else {
q.addLast(newPath);
}
}
}
var score = paths.stream().map(l->l.getLast()).collect(Collectors.toSet()).size();
var rating = paths.size();
return new Trail(score, rating);
}
Day 11 – one to remember {#day-11-one-to-remember}
For the last couple of days the discussion forums have been full of memes about brute forcing the answer. Up to now you could actually get away with it: I have a colleague who wrote a nice brute force for Day 6 that took several minutes, but it did work. Personally I am not a fan of the brute force approach; I like to make it more elegant when possible.
Today is this year's first Lanternfish type of problem, one where the problem space becomes so large that your computer cannot brute force it due to memory constraints. It calls for a more elegant solution.
Part 1 and part 2 are basically the same; the difference is the number of iterations. In this case we have some rules by which rocks change and split up. We are tasked to find the number of rocks after 25 and 75 iterations. The first is doable with a brute force approach, the second is not.
The rules are straightforward, but the size of the problem space might not be. The trick is to use something called memoization.
In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls to pure functions and returning the cached result when the same inputs occur again.
So, basically, we store the results of method calls. Let's first look at my solution. It is a recursive function that takes a number and the number of iterations to apply to it.
if (iterations == 0) return 1;
var cache = new Cache(number,iterations);
if (memo.containsKey(cache)) {
return memo.get(cache);
}
if (number == 0) {
long l = ways(1, iterations - 1, memo);
memo.put(cache, l);
return l;
}
String s = "" + number;
int l = s.length();
if (l % 2 == 0) {
long c = ways(Long.parseLong(s.substring(0,l/2)), iterations-1, memo) + ways(Long.parseLong(s.substring(l/2,l)), iterations-1, memo);
memo.put(cache, c);
return c;
}
long c = ways(number * 2024, iterations-1, memo);
memo.put(cache, c);
return c;
For each call the result is stored in memo, a simple HashMap that keys on the method arguments via the Cache record and stores the count returned by the recursive call. The point is that there will be many identical calls to this method, such as (9, 23): the number 9 with 23 iterations to go. By computing the result once and storing it, we save doing the same calculation many times over.
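To see the whole thing end-to-end, here is a compact, self-contained rewrite of the same recursion. The Cache record and method names are my own; the example input 125 17 comes from the puzzle description, which states it grows to 55312 stones after 25 blinks:

```java
import java.util.HashMap;
import java.util.Map;

public class Stones {
    record Cache(long number, long iterations) {}

    // Count how many stones a single stone becomes after the given number of
    // blinks, memoizing on the (number, iterations) pair.
    static long ways(long number, long iterations, Map<Cache, Long> memo) {
        if (iterations == 0) return 1;
        var key = new Cache(number, iterations);
        var cached = memo.get(key);
        if (cached != null) return cached;
        long result;
        String s = Long.toString(number);
        if (number == 0) {
            result = ways(1, iterations - 1, memo);              // rule 1: 0 becomes 1
        } else if (s.length() % 2 == 0) {                        // rule 2: split even digit counts
            int half = s.length() / 2;
            result = ways(Long.parseLong(s.substring(0, half)), iterations - 1, memo)
                   + ways(Long.parseLong(s.substring(half)), iterations - 1, memo);
        } else {
            result = ways(number * 2024, iterations - 1, memo);  // rule 3: multiply by 2024
        }
        memo.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        var memo = new HashMap<Cache, Long>();
        long total = ways(125, 25, memo) + ways(17, 25, memo);
        System.out.println(total); // 55312
    }
}
```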
This was a really fun and relatively quick challenge, greatly enjoyed it!
This day had me stumped for quite some time. The puzzle starts off with a simple question: group the garden (grid) into areas of the same plant type and count the fences required. A flood fill type of solution works very well here.
Whenever a neighbor is not the same type (or we are at the edge), a fence is required. If it is the same type, add it to the queue for further processing.
for (int r = 0; r < input.length; r++) {
for (int c = 0; c < input[r].length; c++) {
var start = new Coord(c, r);
if (!seen.add(start)) {
continue;
}
var queue = new ArrayDeque<Coord>();
queue.add(start);
var cells = new HashSet<Coord>();
int perimeter = 0;
char fenceType = input[r][c];
while (!queue.isEmpty()) {
Coord cell = queue.poll();
cells.add(cell);
for (Coord neighbor : Coord.directNeighbors()) {
Coord next = cell.add(neighbor);
if (!next.inBound(0, input[0].length, 0, input.length) ||
input[next.y()][next.x()] != fenceType) {
perimeter++;
} else if (seen.add(next)) {
queue.add(next);
}
}
}
fences.add(new Fence(perimeter, cells));
}
}
The next part had me going for a little bit. Instead of the area or the fences, the elves need the sides counted. This turns out to be quite a thing, until it becomes clear that counting corners also works: a region has exactly as many sides as it has corners.
Let's say the below map is our grid. When looking at the A in cell 0,2 it is possible to check if it is an outside corner by checking that the cell above it and the cell to its right are both a different type; the same check is repeated for the other three diagonal directions. There is also the inside corner: both orthogonal neighbors are the same type, but the diagonal cell between them is not.
{{< figure src="/ox-hugo/corners.png" >}}
A neat trick to find the sides of an area. The code turned out to be relatively easy:
for (var cell : fence.cells()) {
char curr = input[cell.y()][cell.x()];
for (int ud : new int[]{-1, 1}) {
for (int lr : new int[]{-1, 1}) {
int ny = cell.y() + ud;
int nx = cell.x() + lr;
boolean outOfBoundsY = ny < 0 || ny >= input.length;
boolean outOfBoundsX = nx < 0 || nx >= input[0].length;
if ((outOfBoundsY || input[ny][cell.x()] != curr) &&
(outOfBoundsX || input[cell.y()][nx] != curr)) {
corners++;
} else if (!outOfBoundsY && !outOfBoundsX &&
input[ny][cell.x()] == curr &&
input[cell.y()][nx] == curr &&
input[ny][nx] != curr) {
corners++;
}
}
}
}
On to Friday the 13th!
Day 13 – Claw Contraption {#day-13-claw-contraption}
Today was quite something. We have claw machines with 2 buttons. Each button moves the claw a set amount along the x and y coordinates. The buttons have different costs, and we are tasked to find the cheapest way to reach a prize at some distant x and y location.
My initial solution was naive and used dynamic programming to solve it. It did not adhere to the rule that every problem has a solution completing in at most 15 seconds on ten-year-old hardware. So eventually I found a solution (thanks Hyperneutrino!) that uses math to solve this problem.
Basically we are trying to find how many presses of the A button (indicated by S) and the B button (indicated by T) are needed to reach the prize's px and py values.
axS + bxT = px
ayS + byT = py
We can make these equations comparable by multiplying the first with by and the second with bx. We do this so we can remove the B button term from the equations and solve A.
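Written out, multiplying the first equation by by and the second by bx gives:

axbyS + bxbyT = pxby
aybxS + bxbyT = pybx

Subtracting the second from the first makes the T terms cancel:

(axby - aybx)S = pxby - pybx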
Now we can divide by axby - aybx and get our solution for S, the number of A presses. We have to ensure the denominator is never 0 to prevent division by zero; in the code we can check that ax * by == ay * bx never occurs.
S = pxby - pybx
-----------
axby - aybx
As we now have the A button value, we can also solve the B button.
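Substituting S back into the first equation and rearranging gives the number of B presses:

T = px - axS
    --------
       bx

This matches the cb computation in the code below.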
In code the solution looks very simple. Notice the ca%1==0 && cb%1==0 check to ensure we do not allow for fractional steps.
private long solve(double ax, double ay, double bx, double by, long px, long py) {
long answer = 0L;
double ca = (px * by - py * bx) / (ax * by - ay * bx);
double cb = (px - ax * ca) / bx;
if (ca % 1 == 0 && cb % 1 == 0) {
answer += (long) (ca*3) + cb;
}
return answer;
}
Day 14 – Restroom redoubt {#day-14-restroom-redoubt}
Today we are back at Easter Bunny HQ, looking for a restroom. We are given a collection of robots, their current position on a grid and their velocity. The question becomes, where are they after 100 iterations (seconds)? An interesting part of this question is that the robots wrap around the grid.
This mechanic allows for very easy calculation of the final coordinates, as (x + vx * 100) % width will give us the final position, instead of having to go through all the calculations.
Interestingly, Java's modulo operator keeps the sign of the dividend, so it does not play nice with negative numbers. For example, -102 % 11 yields -3, while it should yield 8 to be useful in our case. So, when the number is negative, just add the width to it (Math.floorMod would do the same in one call).
for (var robot : robots) {
int newX = (robot.start.x() + robot.vel.x() * 100) % width;
if (newX < 0) newX += width;
int newY = (robot.start.y() + robot.vel.y() * 100) % height;
if (newY < 0) newY += height;
positions.merge(new Coord(newX, newY), 1, Integer::sum);
}
Part 2 was a horrible puzzle. There was no clue what to do in order to get this:
{{< figure src="/ox-hugo/day14.png" >}}
I finally solved it by rendering the field at the first second when all robots are on a unique position. I have also seen solutions where the minimum safety value is found. The problem description was:
During the bathroom break, someone notices that these robots seem awfully similar to ones built and used at the North Pole. If they're the same type of robots, they should have a hard-coded Easter egg: very rarely, most of the robots should arrange themselves into a picture of a Christmas tree.
What is the fewest number of seconds that must elapse for the robots to display the Easter egg?
Maybe if it had said “very rarely, but when all the robots arrange themselves”, but then again, how are you supposed to know that it means non-overlapping?
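The uniqueness check I used can be sketched like this (the Robot record and example values are my own; Math.floorMod handles the negative wrap-around):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UniqueSecond {
    record Robot(int x, int y, int vx, int vy) {}

    // Advance all robots second by second; the first second where no two
    // robots share a cell is the candidate for the Easter egg frame.
    static int firstUniqueSecond(List<Robot> robots, int width, int height) {
        for (int t = 1; ; t++) {
            Set<Long> positions = new HashSet<>();
            boolean unique = true;
            for (Robot r : robots) {
                long x = Math.floorMod(r.x + (long) r.vx * t, width);
                long y = Math.floorMod(r.y + (long) r.vy * t, height);
                if (!positions.add(y * width + x)) { unique = false; break; }
            }
            if (unique) return t;
        }
    }

    public static void main(String[] args) {
        // Two robots collide head-on at t = 1 and are unique again at t = 2.
        var robots = List.of(new Robot(0, 0, 1, 0), new Robot(2, 0, -1, 0));
        System.out.println(firstUniqueSecond(robots, 5, 5)); // 2
    }
}
```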
Yay, the Lanternfish have made an appearance! Sadly this puzzle had me quite stumped for a while; I had to rewrite my solution twice in order to get it right.
First, let me explain. We are still not finding the historian (who I think is just Eric in a costume). Instead we are on a side-quest helping our fishy friends with robots in their warehouses. The first puzzle is straightforward: move the player around, pushing the objects that you run into.
When we have that sorted we are sent to a second warehouse. This time the boxes we move are twice as wide, but the robot is still the same size. This means we get into situations like the following example:
######
# #
# [] #
# @ #
######
Here the player can move up, but we are only hitting one side of the box. We have to take into account that we need to move the other part along as well. Even more complicated, we can get into the following situation.
######
#[] #
#[][]#
# [] #
# @ #
######
In this situation we cannot move, even though the 2nd box on the middle layer might think we can, as it has a free space above it.
I worked on arrays for a while, but eventually went for a more “Java” solution and created the factory as objects. Using the objects it is possible to attack the problem more in a “game engine” type of way, by making each object react to the interaction.
My code is horribly unoptimized, I apologize for that right away, but it gets the job done :D
Firstly, I split the process into two segments: first check whether we can move, then actually move. Side note: I should really use a lookup table from coordinate to object instead of looping over all of them.
The objects are simple POJOs, all extending the aptly named Thing.
class Player extends Thing {
public Player(Coord start, Coord end) {
super(start, end);
}
}
class Wall extends Thing {
public Wall(Coord start, Coord end) {
super(start, end);
this.canMove = false;
}
}
class Box extends Thing {
public Box(Coord start, Coord end) {
super(start, end);
}
}
Thing has all the logic, with canMove() and move() basically doing the same thing, except that move() actually moves the object to a new coordinate. Only if we have free space as a neighbor, or if *all* of the neighbors can move, do we move the current object.
boolean move(List<Thing> factory, Coord direction) {
if (!canMove)
return false;
Coord nsp = start.add(direction);
Coord nep = end.add(direction);
Set<Thing> hits = new HashSet<>();
for (var t : factory) {
if (t == this)
continue;
if (t.collidesWith(nsp))
hits.add(t);
if (t.collidesWith(nep))
hits.add(t);
}
if (hits.size() == 0) { // space, so we can move ourself
this.start = this.start.add(direction);
this.end = this.end.add(direction);
return true;
}
if (hits.stream().allMatch(t -> t.move(factory, direction))) {
this.start = this.start.add(direction);
this.end = this.end.add(direction);
return true;
}
return false;
}
Collision is checked against both the left-hand and the right-hand side of the object, meaning that we can easily handle boxes of width 2.
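collidesWith itself can then be as simple as comparing the probed coordinate against the two cells the Thing occupies; a sketch, with an assumed minimal Coord record:

```java
public class Collision {
    record Coord(int x, int y) {}

    // A Thing occupies two cells, start and end: equal for 1-wide objects
    // like the player, adjacent for the wide boxes.
    static class Thing {
        Coord start, end;
        Thing(Coord start, Coord end) { this.start = start; this.end = end; }

        boolean collidesWith(Coord c) {
            return c.equals(start) || c.equals(end);
        }
    }

    public static void main(String[] args) {
        var box = new Thing(new Coord(2, 1), new Coord(3, 1)); // a wide box: []
        System.out.println(box.collidesWith(new Coord(3, 1))); // true
        System.out.println(box.collidesWith(new Coord(4, 1))); // false
    }
}
```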
My old laptop, now almost 6 years old, has seen it all: conferences, lectures, travels to distant places and trips to the library. I did a lot of work on it while writing my thesis, and it has endured countless hours of compile time.
Sadly the battery started to die, to the point that heavier loads would only run for a short while. Luckily, unlike certain hardware (looking at you, Apple), it is easy to fix: all it needs is a new battery. I found that iFixit had the right parts and a very useful kit with all the right tools to do the job.
It only took 10 minutes, and now my trusty old laptop lasts 7 to 8 hours again! Gotta love the ability to repair things. I hope it lasts another 5 to 6 years!