Pressure Might Be Mounting on Apple with Apple Intelligence in Unexpected Ways

Warning: unsettled thoughts: I think many tech pundits are overestimating Apple Intelligence's capabilities and potential influence. If Apple fails to deliver, even slightly, it might trigger a crash like the dot-com crash. Some tech pundits are quick to cast Apple as the gateway to generative AI legitimacy. By this logic, if Apple fails, AI will fail too. I might be oversimplifying things here.

Could Generative AI Content Usage Be THE Biggest Problem?

I’m wondering if the way someone elects to use content generated by generative AI models is potentially far more problematic, and more subject to debate, than anything related to training models on content from the open web.

Also: generative AI content used to train generative AI models is another source of concern for me. I call that process “knowledge disinfection” or “knowledge toxification” or, even better, “knowledge asphyxiation”. Or should I replace “knowledge” with “intelligence”?

One more thing: the more I think about generative AI training, the less I think it should be considered plagiarism. More on that one soon.

Maybe We Should Stop Crying Foul: We’ve Been Trained Ourselves!

Thinking out loud about generative models training.

In a way, we’ve all been trained throughout our lives by the books we read, the movies we saw, the music we listened to. Some people have been trained on very specific bodies of knowledge, in very specific fields. People use this accumulated “training”, which also forms “culture”, to create new things and produce new content. Some people might be trained in a specific music style or dancing style. We’ve been trained by teachers. As “trained” creators, do we ask permission when writing something new or composing music using our training data? Now, because it happens at large scale, by large (and “nasty”?) corporations, to create products, we cry foul?! Where is the line to be drawn here? I don’t know.

Comparing generative AI model capabilities to an academic level is plain stupid (OpenAI claims GPT-5 is PhD-level). AI is not about intelligence, just as having a PhD is not. The latter is a mere indicator, or proxy, at best.

We are starting to see some cracks in the AI bubble castle… The AI stock market will probably go through the same scenario as the tech bubble of the early 2000s. You probably read it here first.

Claim of the moment: Perplexity AI ignores robots.txt files and crawls websites even when the site owner says no. rknight.me/blog/perplexity-ai-

Woah, that is not cool, at all. Even if I don’t care too much about AI bots crawling and ingesting my content, I would expect them to respect those authors and site owners who decide otherwise. It’s not the best way to build trust.
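Worth remembering: robots.txt is purely advisory. A site owner can only publish the rules and hope crawlers check them before fetching. A minimal sketch of that check in Python, using the standard library’s `urllib.robotparser` (the user-agent string and the rules here are hypothetical, for illustration only):

```python
from urllib import robotparser

# Hypothetical robots.txt: block one AI crawler, allow everyone else.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler asks before fetching any URL:
print(parser.can_fetch("PerplexityBot", "https://example.com/blog/post"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/blog/post"))   # True
```

Nothing enforces the `False` answer, of course. That is the whole point of the claim above: the protocol works only as long as crawlers choose to honor it.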

Two Highly Different Approaches

Microsoft is recalling “Recall” after all, and this makes them look rather bad. It happened in the same week as Apple’s reveal of Apple Intelligence, which received a more positive set of reactions.

We are witnessing two different approaches to the challenge of intelligently integrating generative AI prowess into the base operating system. These two events couldn’t be more evocative of how different Apple’s and Microsoft’s strategies and cultures are. Guess which approach I prefer? I’m excited for Apple Intelligence, and I appreciate the time it will take to get it right.

Referring to this post from MacStories’ Viticci: I might be living on, or coming from, a different planet, but I do not want to block any of my sites from AI bot crawlers, none of them, even if it is Google, OpenAI, Apple or even Meta. I want to embrace this new era while staying critical of what is happening. More to come soon.

It seems that we communicate less and less with words… more and more with images and videos. I find it fascinating that GenAI and ChatGPT force us (for now) to return to the written word in order to communicate and interact with GenAI tools1. Just a thought.


  1. It will probably transition to mostly voice one day, who knows? ↩︎