People are excited about generative AI, and that excitement is legitimate, but it overshadows a question we should ask more often about genAI (and about any new tech trend): what do we lose in return? My guess is that we lose a lot, but we’re not sure what exactly. When we start to learn about the losses through experience and studies, we will, hopefully, question our use of AI in general. That’s my hope.

In my opinion, the best place for Liquid Glass is on the Apple Watch, because the overall graphic content is much more standardized and controlled. You rarely get weird backgrounds under UI elements.

Yesterday I showed my iPhone running iOS 26 to my son. I was scrolling through my notifications on the lock screen. The first thing he said was: woah… that is so… much harder to read.

Not cool. Not impressed. Just harder to read. He is 21. 🤷🏻‍♂️

Using an external display with the iPadOS 26 beta 4 is still very problematic, especially when invoking Control Center or Notification Center, and when dragging windows from the iPad to the external display.

Liquid Glass disillusionment here: Dear Apple, if an app constantly needs to darken or lighten the background, or adjust control transparency, just to make the user interface barely usable, perhaps that indicates a flawed approach to user interface design? 😵‍💫

The Windows PC is a powerful platform for gaming, with high frame rates, gorgeous graphics and animation, and so on and on… so why does a modern pro laptop running the latest OS, Windows 11, still struggle when resizing windows? It’s 2025. But, hey, we have AI everywhere, right? 🤷🏻‍♂️😒