The Tale of the Time Millionaire


There’s a story going around right now that goes something like this: AI makes us so efficient that we’ll soon have more free time than we know what to do with. We’ll work four-hour days, take Fridays off, pick up hobbies, learn instruments. We become time millionaires — rich in hours, finally free from the grind.

It’s a nice story. I don’t think it’s what’s going to happen.

In 1865, the economist William Stanley Jevons noticed something counterintuitive. James Watt’s steam engine had made the use of coal dramatically more efficient: a given task required far less coal than before. You’d expect coal consumption to drop. Instead, it soared. Cheaper energy per unit meant people found entirely new uses for it. Industries that couldn’t justify coal before suddenly could. Factories spread. Railroads expanded. Efficiency didn’t reduce demand — it unlocked it.

This is known as the Jevons paradox, and it keeps showing up everywhere.

More fuel-efficient cars didn’t reduce total miles driven. People just drove more — longer commutes became tolerable, road trips got cheaper, households added a second car. The savings per mile got spent on more miles.

Or take laundry. Washing machines became dramatically more efficient over the decades — faster, cheaper to run, easier to use. Did people do the same amount of laundry in less time and enjoy the free hours? No. They raised their standards. Shirts that used to be worn three or four times before washing now go in the hamper after one wear. We wash towels, sheets, and gym clothes at a frequency our grandparents would find absurd. The machines got better, and we just did a lot more laundry.

Every time we make a resource cheaper to use, we use more of it.

I think we’re watching this happen to software right now.

Agentic engineering has made writing code dramatically cheaper. A feature that took a day takes an hour. A prototype that took a week ships in an afternoon. The cost per unit of software has dropped through the floor.

And the response hasn’t been “great, now we need fewer engineers.” The response has been “great, now we can build more things.” Projects that were too small to justify get started. Side features that sat in the backlog for months get built in a day. Internal tools that nobody would have staffed a team for suddenly exist.

I see this in my own work. I don’t write less code than I did a year ago. I write significantly more. The bar for “worth building” has dropped. Ideas that I would have filed away as “nice to have, maybe someday” — I just build them now. The tool I wrote this week to generate banner images for my newsletter? A year ago, that would have stayed on a sticky note. Now it exists, it works, and it took an afternoon.

The same thing is happening at companies. Teams aren’t shrinking. Backlogs aren’t getting shorter. Instead, the definition of what’s worth building is expanding. More experiments, more internal tools, more automation, more custom solutions for problems that were previously solved with spreadsheets and manual processes.

Jevons would have predicted exactly this. When the cost of producing something drops, you don’t produce the same amount more cheaply. You produce more. The efficiency gains get absorbed by increased demand.

This has implications for the time millionaire fantasy. If you’re thinking about AI as a way to do the same work in less time and then go home early, history suggests otherwise. The more likely outcome is the same people doing a lot more work. The bottleneck shifts from execution to deciding what to build — and then to maintaining everything you’ve built.

We’re not going to become time millionaires. We’re going to become people who build, create, and produce at a pace that would have seemed impossible a few years ago. Whether that’s a better outcome depends on how well we manage the abundance — and whether we’re intentional about protecting the time that actually matters.

The Two-Cent Whistle


My newsletter started out covering 3D printing in 2024. I moved on to other topics, but a story on The Verge pulled me right back.

Across the US, a loose network of about 40 people with 3D printers has shipped over 200,000 whistles to 48 states. Communities use them to alert neighbors when ICE agents are nearby. The cost to print one: roughly two cents.

One of the makers, Kaleb Lutterman in Minneapolis, prints on a Bambu Lab P1S, exactly the same machine I own. I’m just glad there’s no need for these whistles where I live. He fits 100 whistles on a single plate, runs a batch in about seven and a half hours, and gets 800 whistles out of $15 worth of filament. Most of the people involved don’t know each other’s real names. They coordinate loosely and ship for free.

Whatever your stance on the politics, the technology angle is hard to ignore. A few hundred dollars’ worth of equipment in someone’s living room can produce functional objects at scale, overnight, for nearly nothing. No factory, no supply chain, no lead time. Hard to shut down.

When I was writing about 3D printing two years ago, I was mostly thinking about replacement parts, household gadgets, and fidget spinners. I didn’t expect it to show up in this context. But the same properties apply: it’s fast, it’s cheap, and it’s decentralized. A distributed network of hobbyist printers can produce hundreds of thousands of identical objects in weeks, without any coordination overhead worth mentioning.

A good reminder that 3D printing has quietly arrived in enough homes to matter.

The Teacher Who Came Home Educated


Last week I visited tudock.de to give a two-day training on agentic engineering. I expected to teach. I ended up learning quite a bit myself.

When you work with a technology daily, you naturally gravitate toward the parts that interest you most. You develop blind spots. There are corners of the tooling you skip because something else caught your attention. I knew this about myself, but preparing for the workshop made it pretty clear. Suddenly, I had to cover everything. Not just the hippest stuff, but the edge cases, the boring configuration details, the parts I had often glossed over. Preparing the material took me into corners I had been avoiding.

The workshop itself surprised me too. Two things happened that I didn’t expect.

First, the energy in the room. The team at Tudock finally had dedicated time to experiment with agentic tools without the usual interruptions. No Slack pings, no “quick questions,” no context switching. Just two full days of exploration. People started connecting the dots between what they’d heard about and what they could actually do.

Second, my own energy. Watching people pick up concepts quickly is satisfying. But what pushed me further was being questioned. Being challenged. The team didn’t just absorb information — they pushed back. “Why does this work this way?” “What happens if we do it differently?” “Doesn’t this contradict what you said earlier?”

These questions made me think deeper than I would have on my own. Even if I had blocked two days to explore by myself, I wouldn’t have gone as far. Being put on the spot, having to justify your assumptions — it does something that reading alone doesn’t.

I’ve given talks before, but this was different. A two-day workshop creates a feedback loop that a 45-minute conference slot can’t. By the end, I had a list of topics I need to revisit and assumptions I want to question.

If you work with a technology and feel comfortable with it, try teaching it. You might notice how much you’ve been skipping.

Don't Migrate, Harmonize: A Practical Setup for Claude + Codex Skills


Over the last few weeks, I have found myself switching between Claude Code and Codex. With Claude Code, it’s great fun to start a project. When you go deeper into the trenches, Codex has the sharper axe. It’s much slower than Claude Code, but often the first solution from Codex just works, even when Claude Code has tried a few times already.

I also wanted to try out these lines from a Go expert in my AGENTS.md file, on both agents. I would especially appreciate the part about no longer leaving build artifacts behind.

I found some information about migrating skills and other central files from Claude to Codex, but I couldn’t find a good way to harmonize them. What follows is the result of a coding session: a description of the architecture I ended up with and the small set of rules that make it work.

Please note that my skills usually consist of a Markdown file and a bash script. The Markdown file establishes the correct parameters to send to the script; these depend heavily on the environment we’re running in and what we actually want to do. I find this a much nicer interface than having to come up with CLI parameters myself. I also make the agents responsible for supervising the script and checking that the output is as intended, which is helpful and much faster than doing it myself.

The Architecture

Single source of truth per skill:

skills/
  treeos-release-version/
    SKILL.md
    run.sh

  • SKILL.md is the canonical instruction file.
  • run.sh is the canonical script (short name, always in the same place).

Claude entry point (symlink):

.claude/commands/treeos-release-version.md -> ../../skills/treeos-release-version/SKILL.md

Claude still sees its command file, but it’s just a symlink to the real skill file. No duplication.
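Reproducing that entry point is a one-line symlink from the repo root. A minimal sketch, using the example skill name from above (adjust paths to your own repo):

```shell
# Link Claude's command file to the canonical skill file.
# A relative target keeps the link valid if the repo is moved or re-cloned.
mkdir -p .claude/commands
ln -sf ../../skills/treeos-release-version/SKILL.md \
  .claude/commands/treeos-release-version.md
```

The `-f` flag makes the command safe to re-run after the skill file changes location.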

Codex entry point (symlink):

~/.codex/skills/treeos-release-version -> /path/to/repo/skills/treeos-release-version

Codex loads from its own central skills directory. So the solution is the same: symlink the skill folder.

Why This Works

  • No drift. You only edit skills/<name>/SKILL.md.
  • Both agents stay in sync. One file, two entry points.
  • Short script names. Everything is run.sh, which keeps paths readable.
  • Easy to audit. Every skill is a folder; you can scan it with your eyes.

A Few Tricks That Matter

1) Use SKILL.md as the canonical file. Codex is strict about frontmatter: it expects name and description in YAML at the top. Missing or malformed frontmatter is the most common failure mode.

2) Don’t duplicate scripts. If you have scripts inside .claude/commands, they’ll drift. Move them into skills/<name>/run.sh and reference that in the instructions.

3) Claude supports symlinks. You don’t need a generator script. Symlink the .md files and keep the content in one place.

4) The macOS file system is case-insensitive by default. You cannot have both skill.md and SKILL.md in the same folder. Pick SKILL.md and stick with it.
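To illustrate trick 1, here’s a minimal SKILL.md header that satisfies the frontmatter requirement (the description text is my own illustration, not taken from the repo):

```markdown
---
name: treeos-release-version
description: Release a new TreeOS version; establishes parameters for and supervises run.sh.
---

Instructions for the agent follow below the frontmatter…
```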

Installing Skills for Codex

Codex only loads from its own directory, so you need to link your skills into it. I added a tiny helper:

./scripts/install-codex-skills.sh

It walks through skills/* and symlinks each one into ~/.codex/skills.

That’s the only “install step” you need. Re-run it when you add a new skill.
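For reference, the helper can be as small as this sketch. The function name and layout assumptions are mine; the actual script in the repo may differ:

```shell
#!/usr/bin/env bash
set -euo pipefail

# install_codex_skills <repo_skills_dir> <codex_skills_dir>
# Walks every skill folder in the repo and symlinks it into Codex's
# central skills directory, replacing stale links so re-runs are idempotent.
install_codex_skills() {
  local repo_skills="$1" codex_skills="$2"
  mkdir -p "$codex_skills"
  local dir name link
  for dir in "$repo_skills"/*/; do
    name="$(basename "$dir")"
    link="$codex_skills/$name"
    [ -L "$link" ] && rm "$link"
    ln -s "${dir%/}" "$link"
    echo "linked $name"
  done
}

# Typical invocation from the repo root:
# install_codex_skills "$PWD/skills" "$HOME/.codex/skills"
```

Because stale links are removed before relinking, re-running after adding a skill is all the maintenance the setup needs.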

The Result

  • Claude sees the skills through .claude/commands symlinks.
  • Codex sees the same skills through ~/.codex/skills symlinks.
  • You edit one file, everything stays consistent.

I’ve implemented this setup fully in the TreeOS repo — you can see the exact structure, symlinks, and helper script here: ontree-co/treeos.

Just Ralph it, baby!


This year started with a lot of beef. I gave a talk last year titled ‘The Claude Code Wars’, and it seems that we have now entered the second episode.

It all began roughly six months ago, when Geoffrey Huntley wrote a blog post about how Ralphing works. It’s based on a simple idea: pick a prompt, then let coding agents run on that prompt for as long as necessary, or until you are happy with the result. The approach became a meme, sidestepping the “human-in-the-loop” problem that agents have today by brute force and burning tokens.
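The core of Ralphing can be sketched as a shell loop. The function name and the bounded run count are my additions for illustration; the original idea is simply an unbounded loop feeding the same prompt to an agent CLI:

```shell
#!/usr/bin/env bash
set -euo pipefail

# ralph_loop <prompt_file> <max_runs> <agent_command...>
# Feeds the same prompt to the agent over and over, until max_runs is
# reached (or until you stop it because you're happy with the result).
ralph_loop() {
  local prompt_file="$1" max_runs="$2"; shift 2
  local i
  for ((i = 1; i <= max_runs; i++)); do
    echo "--- ralph run $i ---"
    "$@" "$(cat "$prompt_file")"
  done
}

# Example with an agent CLI that takes the prompt as an argument:
# ralph_loop PROMPT.md 50 claude -p
```

No state is carried between runs by the loop itself; the agent rediscovers progress from the repository on each pass, which is exactly what makes the approach so token-hungry.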

Fast-forward to January, and Steve Yegge releases Gas Town. It’s a multi-agent orchestration system with the following architecture diagram:

Gas Town Architecture

If that sounds complicated, that’s because it is. You have one main agent to communicate with: the mayor. The mayor then commands a team of different roles to build your project. Apparently, people are using multiple Claude Max $200 plans simultaneously with this approach. It instantly became very popular.

Gas Town was probably built using a similar approach. You can see in the Commits graph on GitHub that an absurd number of commits were created in a short amount of time.

Gas Town Commit Graph

Although Gas Town is more elaborate than Ralphing, the promise is the same: building software doesn’t require any skill anymore. It can be completely replaced by brute-forcing it with average agents. Give them enough time and your project will surpass any project written by humans.

On the other side of the arena are people with a deep commitment to software engineering who are trying to combine the power of their trained brains with that of agents. Peter’s post, Shipping at Inference-Speed, is an excellent summary and illustrates the extent to which standard software engineering practices have diverged in the last year.

Armin’s post delves deeply into the topic and is well researched. For example, Beads, an issue tracker from the makers of Gas Town, uses 240k lines of code to track issues in a simple format.

Highly entertaining popcorn time 🍿. Am I just watching everything from the sidelines? I don’t think so.

We’re seeing very different ways of using agents to assist with software project development. Unless you take the totally agnostic stance of not using agents at all, it’s impossible not to position yourself on some side here.

I’m investing my time and money in enhancing my existing software development skills with agentic tools like Claude Code or Codex. I don’t want to be completely replaced by brute force approaches. Fingers crossed! 🤞