Why would "hard to bypass" be sufficient?
It depends on the technology used to connect the two phones. Bypassing the process can range from "easy" to "quite complicated", but it remains possible. And once the security is compromised, the entire network loses its core value, since a single successful interaction is enough to establish a permanent connection.
> Meanwhile my religious-FAANG friend has 4 kids, lives in a community where everyone knows each other, lives much closer to family (intentional choice) and just overall sees his life, both the ups and the downs, as part of something purposeful and meaningful.
I'm a committed atheist living in Switzerland. The community is strong, the neighborhood too, and the city (Geneva) is a charm. 3 kids, coding, and spending my time contemplating humans at their best: having fun and reaching higher ground. I don't have an answer regarding the bigger picture, but I will surely think about it and get back to you.
EDIT
As I wrote in another comment: confronting the truth (whatever spirituality lies behind it) doesn't in itself make someone unhappy; it's the sense of losing one's footing that does. In many ways, America was built along those lines.
Must be some ironic comment that I didn't get, but a simple script (curl calling the Steam API and sending a notification when the price hits the set threshold) plus cron handles the job perfectly.
EDIT: Giving the keys to an agent for such trivial work is ... I got your sarcasm, I think ^^.
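For the curious, a minimal sketch of that cron job in Python instead of curl. It assumes Steam's public `appdetails` storefront endpoint and its JSON shape; the app id, target price, and function names are placeholders I made up for illustration:

```python
# Minimal sketch of a Steam price watcher meant to run from cron.
# Assumes the public appdetails endpoint; app id and target are examples.
import json
import urllib.request

APP_ID = "620"        # example app id (placeholder)
TARGET_CENTS = 500    # notify when the final price is <= this (in cents)

def price_below_target(payload: str, app_id: str, target_cents: int) -> bool:
    """Parse an appdetails JSON payload and compare final price to target."""
    data = json.loads(payload)
    entry = data.get(app_id, {})
    if not entry.get("success"):
        return False
    overview = entry.get("data", {}).get("price_overview")
    if overview is None:  # e.g. free or unlisted: no price_overview field
        return False
    return overview["final"] <= target_cents

def check() -> None:
    """Fetch the current price and print a notification line if it matches."""
    url = ("https://store.steampowered.com/api/appdetails"
           f"?appids={APP_ID}&cc=us&filters=price_overview")
    with urllib.request.urlopen(url, timeout=10) as resp:
        if price_below_target(resp.read().decode(), APP_ID, TARGET_CENTS):
            print(f"App {APP_ID} hit the target price")

# Call check() from the cron entry, e.g. every 30 minutes:
#   */30 * * * * /usr/bin/python3 /path/to/check_price.py
```

Swap the `print` for `notify-send`, an email, or whatever notification channel you prefer.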
Tangent and admittedly off-topic but I've come to see LLM-assisted coding as a kind of teleportation.
With LLMs, you glimpse a distant mountain. In the next instant, you're standing on its summit. Blink, and you are halfway down a ridge you never climbed. A moment later, you're flung onto another peak with no trail behind you, no sense of direction, no memory of the ascent. The landscape keeps shifting beneath your feet, but you never quite see the panorama. Before you know it, you're back near the base, disoriented, as if the journey never happened. Yet, confident, you tell people you stood on the summit.
Manual coding feels entirely different. You spot the mountain, you study its slopes, trace a route, pack your gear. You begin the climb. Each step is earned steadily and deliberately. You feel the strain, adjust your path, learn the terrain. And when you finally reach the summit, the view unfolds with meaning. You know exactly where you are, because you've crossed every meter to get there. The satisfaction isn't just in arriving, nor in saying you were there: it is in having truly climbed.
The thing is, with manual coding, you spot a view in the distance, you trek your way for a few hours, and you realize when you get there that the view isn’t as great as you thought it was.
With LLM-assisted coding, you skip the trek and you instantly know that’s not it.
The economic hypothesis that has dominated the past hundred years is that economic growth is infinite because resources are infinite and (almost) free. We all know this is unrealistic and disconnected from our human condition.
Regarding "innovation", I agree with your idea. I even think the major innovation will be running models locally, on reduced infrastructure that is still sufficient for the majority of use cases.
... and I have this little idea in the back of my mind: when companies can no longer keep up with demand and people have (albeit more limited) local capacity, minds will start focusing on more modest techniques to keep part of the system running locally, without dependency.
I know it may sound ridiculous, but it could actually become a way to break away from the business models developed over the past few decades. Broadly speaking, this amounts to saying that the biggest victims of AI could be the companies that bet on AI as a service.
I know my vision is far too idealistic, but I'm coming to imagine that a human brain, although less efficient in the long run, remains a reliable way to control the resulting costs and could even turn out to be more advantageous and more readily available than its silicon-based counterpart.
The human brain is incredibly efficient (approximately 20 W of power draw¹). These AI systems use orders of magnitude more energy than their human equivalents.
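To make "orders of magnitude" concrete, a back-of-envelope comparison. Every hardware figure here is an illustrative assumption I'm supplying (per-GPU draw, server size, overhead factor), not a measurement from the source:

```python
# Back-of-envelope power comparison: human brain vs. one GPU inference server.
# All hardware numbers below are rough illustrative assumptions.
BRAIN_WATTS = 20        # widely cited estimate for the human brain

GPU_WATTS = 700         # assumed per-accelerator draw under load
GPUS_PER_SERVER = 8     # assumed server configuration
OVERHEAD = 1.5          # assumed cooling/power-delivery overhead factor

server_watts = GPU_WATTS * GPUS_PER_SERVER * OVERHEAD  # 8400 W
ratio = server_watts / BRAIN_WATTS                     # 420x one brain

print(f"One server: ~{server_watts:.0f} W, ~{ratio:.0f}x a human brain")
```

A single server is already a few hundred brains' worth of power under these assumptions; a cluster of thousands of such servers is where the "orders of magnitude" claim comes from.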
A further bit of a tangent, but anyway: what really strikes me is the choice of such an image to represent whatever they're trying to convey. It feels bland, and there's a kind of underlying sadness to it... the books, the small sculpture, the shelf, the desk... it all drags me down.
I'm pretty sure the "fakeness" is intentional. The image seems designed to appeal to a specific target audience (when I look at their 'AI erase/replace tool' example I get a clear idea).
This is not an article, but a link to Apple Maps where you can see that all the locations have been removed. For now it is hard to conclude anything, but this looks like a bug (I know I'm being optimistic).