Today, I tested Ollama and Locally AI more extensively on my M2 MacBook Air, and it was quite demanding on the hardware. It’s no surprise that a serious local AI setup calls for an Apple M5 Pro or M5 Pro Max with at least 16 GB, and ideally 24 GB, of RAM. My M4 Pro Mac mini has 24 GB, and I could use it remotely through an SSH session. This experimentation puts any plans to replace my aging M2 MacBook Air into perspective.
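For what it’s worth, pointing a client at the Mac mini over an SSH tunnel is simple enough. Here’s a minimal sketch using the ollama Python package, assuming a tunnel like `ssh -N -L 11434:localhost:11434 user@mac-mini.local` is already running; the hostname and model name are placeholders.

```python
# Query Ollama running on the remote Mac mini through an SSH tunnel.
# Assumes: ssh -N -L 11434:localhost:11434 user@mac-mini.local  (placeholders)
from ollama import Client

client = Client(host="http://localhost:11434")  # local end of the tunnel
response = client.chat(
    model="llama3.2",  # placeholder model name
    messages=[{"role": "user", "content": "How much RAM do local LLMs really need?"}],
)
print(response["message"]["content"])
```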

Anthropic Rebuilds Claude Code Desktop App Around Parallel Sessions:

Anthropic has released a redesigned Claude Code experience for its Claude desktop app, bringing in a new sidebar for managing multiple sessions, a drag-and-drop layout for arranging the workspace, and more.

I’ve been testing the new fat client and found it familiar yet overwhelming; it’s taking the shape of a full IDE. Monolithic clients apparently aren’t in favor these days, but I prefer this approach over juggling separate apps.

Creating an email summarizer was simpler than expected. However, Claude AI struggled significantly with email decoding and data extraction. While one might assume these processes are well-documented and easy, that doesn’t seem to be the case. Additionally, setting up a new Gmail account proved to be unreliable. I encountered numerous errors at various stages, making me question whether the process actually succeeded. You’d expect Google to handle this smoothly, but unfortunately, I wasn’t so lucky.
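To give a sense of the fiddly part: Gmail API message bodies arrive as base64url-encoded MIME parts, often nested. Here’s a minimal sketch of the extraction step, assuming an authorized googleapiclient `service` handle already exists; `MESSAGE_ID` is a placeholder.

```python
# Walk a Gmail message's MIME tree and decode the first text/plain body.
# Assumes `service` is an authorized Gmail API client; MESSAGE_ID is a placeholder.
import base64

def extract_plain_text(payload):
    """Return the first text/plain body found in the MIME tree, or None."""
    if payload.get("mimeType") == "text/plain" and payload.get("body", {}).get("data"):
        return base64.urlsafe_b64decode(payload["body"]["data"]).decode("utf-8", "replace")
    for part in payload.get("parts", []):
        text = extract_plain_text(part)
        if text:
            return text
    return None

msg = service.users().messages().get(userId="me", id=MESSAGE_ID, format="full").execute()
print(extract_plain_text(msg["payload"]))
```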

Here’s the overall workflow: an email lands in the dedicated Gmail inbox, an n8n workflow picks it up, Claude AI writes the summary, and the result is posted to Discord.
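In Python terms, the chain looks roughly like the sketch below. This is purely illustrative; the real thing is a set of n8n nodes, and the webhook URL and model name are placeholders.

```python
# Illustrative Python version of the n8n chain: email in, Claude summary out,
# summary posted to Discord. WEBHOOK_URL and the model name are placeholders.
import anthropic
import requests

WEBHOOK_URL = "https://discord.com/api/webhooks/..."  # placeholder

def summarize_and_post(email_text: str) -> None:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Summarize this email in a few bullet points:\n\n{email_text}",
        }],
    )
    requests.post(WEBHOOK_URL, json={"content": reply.content[0].text}, timeout=10)
```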

With my blog’s new custom design in place, I’m considering my next project: an email summarizer built with Gmail, n8n workflows, Claude AI, and Discord. I don’t plan to use it often, but I do get some emails that I wish I could summarize just by sending them to a dedicated email address.

Inoreader announced support for third-party AI providers for article summarization, with Anthropic and OpenAI among those supported. Just enter your own API key and voilà! But there is a big catch: even on the Pro plan, you need an add-on upgrade to enable it. That I don’t understand, because in this scenario Inoreader is delegating the LLM calls, so they incur no additional costs. I find it perplexing, to say the least. Or I might be missing something. I hope someone at Inoreader catches this comment.

Via Hacker News, the article “Ransomware Is Growing Three Times Faster Than the Spending Meant to Stop It”:

Ransomware leak-site claims surged 30.7% year-over-year in 2025 to reach 7,760 incidents, significantly outpacing the 10.1% growth in worldwide security spending ($213 billion), indicating a widening gap between observable threat volume and security investment.

The article doesn’t say what is causing this surge, or whether AI has anything to do with it. I would bet that AI is helping hacker groups.

Even though I use an automation (via n8n) to summarize my Micro.blog timeline, I still come here to read the timeline itself, although less frequently than before. It’s a good way to gauge how well, or how poorly, an LLM can summarize this kind of content. ☝🏻

Quoting Bryan Cantrill — Simon Willison:

Work costs nothing to an LLM, and LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage.

Until recently, cheap hardware meant human developers were too lazy to optimize their software stacks, leading to bloated software that requires far more memory than necessary. Rising chip prices now create an incentive to rethink how development time is allocated, but the work will probably be delegated to LLMs… we’re doomed.

With the blog redesign, I decided to remove the /archive page. I had an issue with Micro.blog / Hugo’s handling of long lists of blog posts. Because I couldn’t figure out how to fix it (nor could Claude AI, apparently), the page had to go. I bet nobody really looked at it anyway¹.


1. Except AI bots.