While waiting for Micro.blog's next chapter, I'm playing with RSS feed display strategies. This view is called "Journal". Built using Claude Code and hosted on Vercel.

Building a custom branded Ghost theme for my main website with the help of Claude Code seems like an achievable goal, right?
It’s progressing well… 📺 👀
The speed at which Anthropic is adding new stuff to Claude and Claude Code on the desktop is impressive. Is OpenAI even competing?
This is getting out of hand… 🤣 but it's so fun! 🤷🏻‍♂️

Early this morning, using Craft Agents, I created a new skill that enables me to save my Micro.blog Bookmarks into a Craft collection. The agent figured out the Micro.blog API, the new collection schema and how to move things around. So cool. Any questions?
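For the curious, here's a minimal Python sketch of what the skill's plumbing might look like. The Micro.blog side uses the documented `GET /posts/bookmarks` endpoint of its JSON API; the Craft collection schema below is purely a hypothetical stand-in, since whatever schema the agent generated isn't shown here.

```python
# Sketch: pull Micro.blog bookmarks and reshape them for a Craft collection.
# The Micro.blog endpoint is real (its JSON API returns a JSON Feed); the
# "Craft row" shape below is a hypothetical placeholder schema.
import json
import urllib.request

MICROBLOG_BOOKMARKS_URL = "https://micro.blog/posts/bookmarks"

def fetch_bookmarks(token: str) -> list[dict]:
    """Fetch the signed-in user's bookmarks as JSON Feed items."""
    req = urllib.request.Request(
        MICROBLOG_BOOKMARKS_URL,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        feed = json.load(resp)
    return feed.get("items", [])

def to_craft_rows(items: list[dict]) -> list[dict]:
    """Map JSON Feed items onto a (hypothetical) Craft collection schema."""
    return [
        {
            "Title": item.get("title") or item.get("url", ""),
            "URL": item.get("url", ""),
            "Saved": item.get("date_published", ""),
        }
        for item in items
    ]
```

The last hop, writing those rows into the Craft collection, is left abstract here: that's the part the agent handled through Craft's own tooling.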
At this point, I have to admit, the only reason I’m keeping ChatGPT is its image-generation and analysis capabilities.
We're Making a Big Mistake
I believe that IT workers who are also passionate about gen AI are making a major misjudgment. We wrongly assume that the advances we observe in our field, such as the autonomous or semi-autonomous development of applications, also translate to sectors like medicine or law. This is a false generalization.
The field of IT heavily relies on strict formalism: the raw material consumed by LLMs. In the legal field, for example, this is not the case; it is much more complex. Laws, regulations, and judgments are generally written and presented in standardized forms, but their content is far from being as digestible as lines of code written in a programming language. In my opinion, we should remember that when we share our enthusiasm for gen AI. We must be lucid while also setting the right expectations for decision-makers and lawmakers.
Matt Shumer writes in “Something Big is Happening”:
The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first.
Clever. Exciting. But scary, too.
The Rise of Cognitive Debt
Margaret-Anne Storey introduces “cognitive debt” as a concept that may be more threatening than technical debt in AI-augmented development. Unlike technical debt (which lives in code), cognitive debt is the erosion of shared understanding that resides in developers’ minds. Drawing on Peter Naur’s concept of a program as a “theory” distributed across teams, the article argues that as AI and agentic tools push for development velocity, teams risk losing their collective understanding of why systems work the way they do. Even if AI generates technically clean code, teams can become paralyzed when no one can explain design decisions or anticipate the consequences of changes. The author calls for intentional slowdowns, collaborative practices, and serious research into measuring and mitigating this growing challenge.
“As generative and agentic AI accelerate development, protecting that shared theory of what the software does and how it can change may matter more for long-term software health than any single metric of speed or output.”