In today’s world, with LLMs, Claude Code, and the like, is Apple’s Swift Playgrounds still relevant, even for younger aspiring coders? A few years ago, it seemed we heard much more about it than we do today.
I’m building a Dashboard. 😎
Apparently, people are barely using Stack Overflow to ask questions anymore, thanks to LLMs and AI. I expect a similar trend in a community like this one on Micro.blog. Some questions would be super easy to answer by asking ChatGPT or the like. I do understand that many people still want the human touch, though.
Something is about to happen. Again. 🤗🫣
Updated my n8n instance from v2.0.3 to v2.2.4. Super easy to do (I’m using the Docker Compose installation provided by the DigitalOcean 1-click install droplet). I took a droplet snapshot beforehand, just in case something went wrong. So far, so good. Of course, Claude helped me out on this. I’m not a Linux or Docker expert. 😅
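For the curious, the update boils down to a few commands. This is a sketch, not my exact session: the directory path is an assumption (it depends on how the 1-click droplet laid things out), and you’d pin the new version tag in your `.env` or `docker-compose.yml` first.

```shell
# Assumed location of the compose file on the droplet — adjust to yours.
cd /opt/n8n-docker-caddy

# After bumping the n8n version tag in .env or docker-compose.yml:
docker compose pull     # fetch the new n8n image
docker compose down     # stop the old container
docker compose up -d    # recreate it with the new image

docker compose logs -f  # watch startup logs to confirm it came up cleanly
```

The droplet snapshot taken beforehand is the real safety net here: if the upgrade misbehaves, you can restore the whole machine instead of debugging Docker.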
Skills are probably the most intriguing aspect of Claude Code and Claude AI. I’m not so sure yet how to take advantage of them. My understanding from this excellent video is that you have to be an expert at something to create those SKILL.md files.
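For what it’s worth, my understanding is that a skill is essentially a folder containing a SKILL.md file: YAML frontmatter telling Claude when to use the skill, followed by the instructions themselves. A minimal sketch (the skill name and wording below are mine, purely illustrative):

```markdown
---
name: blog-post-reviewer
description: Review draft blog posts for tone, typos, and broken links. Use when the user asks for a post review.
---

# Blog post reviewer

When reviewing a draft:

1. Check spelling and grammar, but keep the author's voice.
2. Flag sentences longer than about 30 words.
3. Verify that every Markdown link has a non-empty URL.
```

The "expert" part is the body: the frontmatter is trivial, but the instructions are only as good as your knowledge of the task.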
An Important Lesson
When I started my studies in computer science over 40 years ago, we learned to read functional specifications and translate them into machine instructions (COBOL, FORTRAN, Pascal, etc.). That was a programmer’s training. I knew that one day I could become the person who writes the functional specifications. In the end, I didn’t become a programmer, nor did I work in the development world.
Through my recent experience with Claude AI, Claude Code, and Vercel to create custom applications, I realize that I have become the one who writes functional specifications, only now for processing by artificial intelligence. What does this tell me about the software developer’s profession? The need to write specifications remains essential, perhaps more than ever, even with powerful tools like AI. I think that’s a valuable lesson.
Added a new and much-needed feature to my Micro.blog front end. See my prompt below.
On OpenRouter.ai
I just finished reading about openrouter.ai. I was curious to understand the purpose of the service as well as its business model. I’ve seen several instances of it being used in n8n workflows. The problem I see is that it makes response quality even less predictable: from one call to the next, a request could be routed to models with different characteristics and performance. I’ll pass on it, but I still learned something tonight.
My latest n8n workflow automates summarizing my Micro.blog timeline (via its private RSS feed) and sends the results to my Discord server every hour. Pretty cool, right?
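Stripped of the n8n nodes, the pipeline is simple: fetch the feed, condense it, post to a webhook. Here’s a rough Python sketch of the same shape. The feed and webhook URLs are placeholders, not my real endpoints, and the LLM summarization step is replaced by a plain title digest for illustration.

```python
# Sketch of the workflow in plain Python: fetch a private RSS feed,
# build a digest message, and post it to a Discord webhook.
import json
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://micro.blog/hypothetical-private-feed.xml"  # placeholder
WEBHOOK_URL = "https://discord.com/api/webhooks/ID/TOKEN"      # placeholder

def parse_feed(rss_xml: str) -> list[dict]:
    """Extract title/link pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        {"title": item.findtext("title", default="(untitled)"),
         "link": item.findtext("link", default="")}
        for item in root.iter("item")
    ]

def build_discord_payload(items: list[dict], limit: int = 5) -> dict:
    """Format the newest items as a single Discord message body."""
    lines = [f"- {it['title']} ({it['link']})" for it in items[:limit]]
    return {"content": "Timeline digest:\n" + "\n".join(lines)}

def post_digest() -> None:
    """Fetch the feed and push the digest to Discord (runs hourly via cron/n8n)."""
    with urllib.request.urlopen(FEED_URL) as resp:
        items = parse_feed(resp.read().decode("utf-8"))
    body = json.dumps(build_discord_payload(items)).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

In n8n, each function above maps to a node (HTTP Request, a transform, and a Discord node), and the hourly schedule replaces calling `post_digest()` yourself.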
