People are so quick to say “I don’t want him to win but I also don’t want him to die” as if there’s some prize for not wishing death on someone who does more than wish us dead. If someone in power actively attempts to do unthinkable damage to the world and people around you, just a reminder: you get absolutely nothing in exchange for kindness toward them.

Some food for thought.

My next trip to the US is to see a Sting show in New York in October. After that, I’m not planning to visit the US for at least five years. I refuse to visit totalitarian states. Fuck #Trump. Fuck those who support this buffoon and clown.

Pressure Might Be Mounting on Apple with Apple Intelligence in Unexpected Ways

Warning: unsettled thoughts ahead. I think many tech pundits are overestimating Apple Intelligence’s capabilities and potential influence. If Apple fails to deliver, even slightly, it might trigger a crash like the dot-com crash. Some tech pundits are quick to cast Apple as the gateway to generative AI legitimacy. By that logic, if Apple fails, AI fails too. I might be oversimplifying things here.

Could Generative AI Content Usage Be THE Biggest Problem?

I’m wondering if the way someone elects to use content generated by generative AI models is potentially far more problematic, and far more debatable, than anything related to training models on content from the open web.

Also: generative AI content used to train generative AI models is a source of concern to me. I call that process “knowledge disinfection” or “knowledge toxification” or, even better, “knowledge asphyxiation”. Or should I replace “knowledge” with “intelligence”?

One more thing: the more I think about generative AI training, the less I think it should be considered plagiarism. More on that one soon.

Maybe We Should Stop Crying Foul: We’ve Been Trained Ourselves!

Thinking out loud about generative models training.

In a way, we’ve all been trained throughout our lives by the books we read, the movies we saw, the music we listened to. Some people have been trained on very specific bodies of knowledge, in very specific fields. People use this accumulated “training”, which also forms “culture”, to create new things and produce new content. Some people might be trained in a specific music style or dancing style. We’ve been trained by teachers. As “trained” creators, do we ask for permission when writing something new or composing music using our training data? Now, because it happens at a large scale, by large (and “nasty”?) corporations building products, we cry foul?! Where is the line to be drawn here? I don’t know.

Comparing generative AI model capabilities to a school study level is plain stupid (OpenAI claims GPT-5 is “PhD level”). AI is not about intelligence, just as intelligence is not about having a PhD. The degree is a mere indicator, or proxy, at best.

We are starting to see some cracks in the AI bubble castle… The AI stock market will probably go through the same scenario as the tech bubble of the early 2000s. You read it here first.

Claim of the moment: Perplexity AI ignores robots.txt files and crawls websites even when the site owner says no. rknight.me/blog/perplexity-ai-

Woah, that is not cool, at all. Even if I don’t care much whether AI bots crawl and ingest my content, I would expect them to respect the authors and site owners who decide otherwise. It’s not the best way to build trust.
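For context, robots.txt is an honor-system convention: a plain-text file at the site root listing which paths each crawler (identified by its user-agent string) may fetch. Here is a minimal sketch of how a well-behaved crawler is expected to check it, using Python’s standard urllib.robotparser; the rules and the bot names (“ExampleBot”, “OtherBot”) are hypothetical, not Perplexity’s actual user agents.

```python
# Sketch: how a compliant crawler consults robots.txt before fetching a page.
# Uses only the Python standard library (urllib.robotparser).
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content a site owner might publish to block one bot
# from the /blog/ section. In practice a crawler would download this file
# from https://example.com/robots.txt before crawling.
rules = """\
User-agent: ExampleBot
Disallow: /blog/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# ExampleBot must skip anything under /blog/ but may fetch other pages.
blocked = parser.can_fetch("ExampleBot", "https://example.com/blog/post")   # False
allowed = parser.can_fetch("ExampleBot", "https://example.com/about")       # True

# A bot with no matching User-agent rule is allowed everywhere by default.
other = parser.can_fetch("OtherBot", "https://example.com/blog/post")       # True

print(blocked, allowed, other)
```

The whole point of the complaint above is that nothing technically enforces this check: `can_fetch` only tells the crawler what the site owner asked for, and ignoring it is trivially easy, which is why respecting it is a matter of trust.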