Hacker News | gipp's comments

Well, let's start by confronting and acknowledging the very strong case that we -- "we" here being the tech world in general, and the audience of this site -- bear a heavy burden of responsibility for it.

It could be argued that it was all inevitable given the development of the Internet: the rise of social media, the movement online of commerce and other activities that used to heavily involve "incidental" socialization, etc. And maybe it was. But "we" are still the ones who built it. So are "we" really the right ones to solve it, through the same old Silicon Valley playbook?

The usual thought process of trying to push local "community groups," hobby-based organizations, etc. is not bad, but I think it misses an important piece of the puzzle: we've started a kind of death spiral, a positive feedback loop suppressing IRL interaction. People started to move online because it was easier and more immediate than "IRL." But as more people, and a greater fraction of our social interaction, move online, "IRL" in turn becomes even more featureless. There are fewer community groups, fewer friends at the bar or the movies, fewer people open to spontaneous interaction. This, then, drives even more of culture online.

What use is trying to get "back out into the real world," when everyone else has left it too, while you were gone?


Not everyone has left the real world, only the people who got really sucked into the online/social media world. This can maybe seem like the whole world though, if you're in that bubble.

Bars are still packed on the weekends, people still gather at churches, or gyms, or bowling leagues, or book clubs, or any number of other "IRL" activities of all kinds that are going on. You do have to make the effort to go out and get involved, though; nobody is going to come and rescue you.


Many of these activities have gotten extremely expensive or just died out in the majority of places. There are 2,600 bowling alleys in the US, compared to 4,000 twenty years ago. That's a decline of 35% without even accounting for population growth. Last time I went to a bowling alley, the price for a lane without drinks was 100 dollars for an hour and a half, and this isn't even in a very high cost of living area. Besides, a place like the gym can hardly be considered a place where people gather; most people just see it as something functional and would rather not be disturbed (I believe this has gotten 'worse').

I agree that it's exaggerated online, but when you see these kinds of numbers across the vast majority of activities that were affordable for most Americans not too long ago, it's not solely to be blamed on individuals.

I believe that in the majority of the country things will only get worse, given how little value is placed on being involved in things for the sake of 'community'. People have gotten more anti-social because of social media (and just media in general).

Most tech workers won't be as impacted by this, I assume; they can afford to pay 200 dollars for bowling without thinking twice, same as many others in the upper middle class.


People were absolutely giving attitude towards people in Teslas in general, and Cybertrucks in particular, around the peak of all the DOGE nonsense.

Still are, for Cybertrucks


Nonsense?

Yeah, you're right, the US Federal government is a peak engine of efficiency and it's nonsense to think massive sums of money are wasted.


If I told you I could save you money on fuel by making your car more efficient, then removed its engine, you would still call that nonsense no matter how much of a gas guzzler it was before or how little fuel gets put in it now.

You just made a massive non sequitur. The government does have waste, as does any large organization, including in the private sector. Whether or not DOGE saved money needs an independent analysis, not numbers which DOGE itself produces.

Musk and Trump cut a large number of jobs and declared, without any evidence, that it was all fraud and waste. For example, they dismissed everyone who was in a probationary period, claiming these were all low-performing people. In fact, every person hired or promoted was automatically in a probation status. In many cases the fired people turned out to be critical and the government asked them to come back.

Think about this: when Enron exploded, it took a team of forensic accountants months to untangle the bookkeeping. Musk came in with a team of mostly teenage hacker types to siphon all the data from all the agencies he could and in less than 48 hours declared he had found hundreds of billions of dollars of waste and fraud. It beggars belief that Elon Musk just happens to be an accounting expert and could process terabytes of data and make sense of it in a day or two.

Another thing you should know: the founder of Gumroad, a man in his 30s who joined DOGE in a good-faith effort to help make the government more efficient, found that things were not at all what he expected. Even if you don't believe him, he was closer to the action than Musk, has more technical knowledge than Musk, and, if nothing else, offers a counter-narrative to the one you apparently have bought:

https://www.npr.org/2025/06/02/nx-s1-5417994/former-doge-eng...

After expressing his opinions he was quickly sacked by DOGE. Transparency indeed.

Oh, and many (hundreds?) of thousands of people will die each year due to loss of international aid. Meanwhile Musk was dancing around on stage like an idiot with a chainsaw thinking he was the coolest guy.


Nonsense in how they approached things. In the Clinton era we had government cutbacks all over the place, done according to a plan and according to the law.

This was just a hatchet job, aimed at cutting and gutting any and every agency they thought they could get away with.


And NYC. Just saw one this morning

Glad to hear it and I wish them luck! I was just clearing up the current status of self-driving vehicles re: the headline. For now they do not work in areas where snow covers the road surface and shifts where the edges appear to be. They do not work "Around the US".

> This feels forced, there are obvious and good reasons for running that experiment. Namely, learning how it fails and to generate some potentially viral content for investor relationship. The second one seems like an extremely good business move. It is also a great business move from WSJ, get access to some of that investor money in an obviously sponsored content bit that could go viral.

That's... exactly what the author said in the post. But with the argument that those are cynical and terrible reasons. I think it's pretty clear the "you" in "why would you want an AI" vending machine is supposed to be "an actual user of a vending machine."


I think you’re overstating your own interpretation of what the author wrote. If we’re going to take your use of the word “exactly” (with emphasis) for real then I’d argue that the author offers no charitable reasons for why the experiment took place.

The closest that I think he even gets to one is:

> At first glance, it is funny and it looks like journalists doing their job criticising the AI industry.

Which arguably assumes that journalists ought to be critical of AI in the same way as him...


> that the author offers no charitable reasons for why the experiment took place.

Right, and neither did the GP. They both offered the exact same two reasons; the GP just apparently doesn't find them as repugnant as the author does.


Are you sure? The entire post treats the event incredulously. I can’t pick out a single line that affords the issue with the level of consideration that the GP comment does.

The two reasons I believe you may be referring to from above are:

1) "learning how it fails" 2) "to generate some potentially viral content for investor relationship."

The whole of Ploum’s argument may be summarized in his own words as:

> But what appears to be journalism is, in fact, pure advertising. [...] What this video is really doing is normalising the fact that “even if it is completely stupid, AI will be everywhere, get used to it!” [...] So the whole thing is advertising a world where chatbots will be everywhere and where world-class workers will do long queue just to get a free soda. And the best advice about it is that you should probably prepare for that world.

I hate to be pedantic...but my growing disdain for modern blog posts compels me to do so in defense of literacy and clear arguments.

Whether the GP and the author offer the “exact same two reasons” is a matter of interpretation that becomes the duty of readers like us to figure out.

If we take Ploum’s words at face value... the most he does is presuppose (and I hope I’m using that word correctly) that the reader is already keen on the two reasons that `TrainedMonkey makes explicit and, like the author, finds them to be stupid. While he does say that the video is not journalism, that it is advertising, and that it does show how the AI failed at the task it was assigned, he does not give any credence as to why this is the case from a position other than his own.

Maybe I’m misunderstanding the concept of a “charitable interpretation” too. But I don’t think that there is one present in this post that we’re responding to. `TrainedMonkey’s comment leads off by telling us that this is what (I think) he’s about to offer in the remarks that follow when he says “there are obvious and good reasons for running that experiment”.

So my gripe is that you’re making it sound like there’s a clear counterargument entertained in this post when there isn’t, because you overstated your interpretation of the GP comment in what looks like an attempt to make Ploum’s argument appear more appealing than it ought to be. Even though both `TrainedMonkey and I have expressed agreement with the point he’s trying to make in general, perhaps we’re less inclined toward pugnaciousness without a well-thought-out warrant.


Those are completely deterministic systems, of bounded scope. They can be ~completely solved, in the sense that all possible inputs fall within the understood and always correctly handled bounds of the system's specifications.

There's no need for ongoing, consistent human verification at runtime. Any problems with the implementation can wait for a skilled human to do whatever research is necessary to develop the specific system understanding needed to fix it. This is really not a valid comparison.


There are enormous microcode, firmware, and driver blobs everywhere on any pathway. Even with the very privileged access of someone at Intel or NVIDIA, the ability to exercise a reasonable level of deterministic control over systems involving a CPU/GPU/LAN has been gone for almost a decade now.


I think we're using very different senses of "deterministic," and I'm not sure the one you're using is relevant to the discussion.

Those proprietary blobs are either correct or not. If there are bugs, they fail in the same way for the same input every time. There's still no sense in which ongoing human verification of routine usage is a requirement for operating the thing.


Sure, but how many LLM streaming clients are out there?

Namespacing, sure. But is "We use gh:someguy/openai/llm-streaming-client to talk to the backend" (x50 similarly cumbersome names in any architecture discussion) really better than "We use Pegasus as our LLM streaming client"?


Nobody says "gh:someguy/openai/llm-streaming-client" in conversation. You say "the streaming client" or "llm-stream" the same way you'd say "Pegasus." But when someone new joins or you're reading code, "llm-stream" is self-documenting. "Pegasus" requires looking it up every single time until you memorize an arbitrary mapping.


This sounds awful: now you'll be reading some documentation or a comment about llm-stream where they didn't mention the full namespace, so you have no idea which of the 50 different llm-stream tools they're talking about, and on top of that you can't even search for it online.


> You say "the streaming client"

"Which one?! There are seven popular projects with this exact name on GitHub that have >100K stars; which particular one do you use?"


I promise you, names are not self documenting. Not in any meaningful way.

This is one of those classic examples where things you've already learned are "obvious and intuitive" and new things are "opaque and indistinct".

We can go back and forth with specific examples all day: cat, ls, grep, etc. are all famously inscrutable; PowerShell tried to give everything a self-documenting name and the results are impossible to memorize. "llm-stream" tells me absolutely nothing without context, and if it had context, "Pegasus" would be equally understandable.


Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more about incremental, carefully measured changes to mature, complex software stacks, done within the Google ecosystem, which diverges heavily from the OSS-focused world of startups, where most training data comes from.


That is the problem.

AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.

I think long-running, long-term agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI's generating time, correcting time, and engineer review time. To put it another way, a 10-minute sync with a human is necessary; otherwise it will go astray.

Then it just makes software engineering into a bothersome supervising job. Yes, I typed less, but I didn’t feel the thrill of doing so.


> it just makes software engineering into a bothersome supervising job.

I'm pretty sure this is the entire C-level enthusiasm for AI in a nutshell. Until AI, software engineering resisted being mashed into a replaceable-cog job that they don't have to think or care about. AI is the magic beans that are just tantalizingly out of reach, and boy do they want it.


But every version of AI for almost a century had this property, right down from the first vocoders that were going to replace entire call centers to the convolutional AI that was going to give us self-driving cars. Yes, a century: vocoders were 1930s technology, but they could essentially only read the time aloud.

... except they didn't. In fact, most AI tech was good for a nice demo and little else.

In some cases, really unfairly. For instance, convnet map matching got written off not because it doesn't work well, but because you can't explain to humans when it won't work well. It's unpredictable, like a human. If you ask a human to map a building in heavy fog, they may come back with "sorry". SLAM with lidar is "better", except no, it's a LOT worse; but when it fails it's very clear why it fails, because it's a very visual algorithm.

People expect AIs to be able to replace humans, but that doesn't work, because people also demand that AIs never say no and never fail, like the Star Trek computer (the only problem the Star Trek computer ever has is that it is misunderstood or follows policy too well). If you have a delivery person, occasionally they will radically modify the process, or refuse to deliver. No CEO is ever going to allow an AI drone to change the process, and no CEO will ever accept "no" from an AI drone. More generally, no business person seems to ever accept a 99% AI solution, and all AI solutions are 99%, or actually mostly less.

AI winters. I get the impression another one is coming, and I can feel it's going to be a cold one. But in 10 years, LLMs will be in a lot of stuff, like with every other AI winter. A lot of stuff ... but a lot less than CEOs are declaring it will be in today.


Luckily for us, technologies like SQL made similar promises (for more limited domains) and C suites couldn't be bothered to learn that stuff either.

Ultimately they are mostly just clueless, so we will either end up with legions of way shittier companies than we have today (because we let them get away with offloading a bunch of work to tools they don't understand and accepting low-quality output) or we will eventually realize the continued importance of human expertise.


There are plenty of good tasks left, but they're often one-off/internal tooling.

Last one at work: "Hey, here are the symptoms for a bug, they appeared in <release XYZ> - go figure out the CL range and which 10 CLs I should inspect first to see if they're the cause"

(Well suited to AI, because worst case I've looked at 10 CLs in vain, and best case it saved me from manually scanning through several 1000 CLs - the EV is net positive)

It works for code generation as well, but not in a "just do my job" way, more in a "find which haystack the needle is in, and what the rough shape of the new needle is". Blind vibecoding is a non-starter. But... it's a non-starter for greenfields too, it's just that the FO of FAFO is a bit more delayed.


My internal mnemonic for targeting AI correctly is 'It's easier to change a problem into something AI is good at, than it is to change AI into something that fits every problem.'

But unfortunately the nuances in the former require understanding strengths and weaknesses of current AI systems, which is a conversation the industry doesn't want to have while it's still riding the froth of a hype cycle.

Aka 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'


> 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'

I see we've met the same product people :)


I had a VP of a revenue cycle team tell me that his expectation was that they could fling their spreadsheets and Word docs on how to do calculations at an AI powered vendor, and AI would be able to (and I direct quote) "just figure it all out."

That's when I realized how far down the rabbit hole marketing to non-technical folks on this was.


I think it’s a fair point that Google has more stakeholders with a serious investment in some flubbed AI-generated code not tanking their share value, but I’m not sure the rest of it is all that different from what an engineer at $SOME_STARTUP does after the first ~8 months the company is around. Maybe some folks throwing shit at a wall to find PMF are really getting a lot out of this, but most of us are maintaining and augmenting something we don’t want to break.


Yeah but Google won’t expect you to use AI tools developed outside Google and trained on primarily OSS code. It would expect you to use the Google internal AI tools trained on google3, no?


I feel like none of these discussions can ever go anywhere, if they don't start from a place of recognizing that "AI is a massive bubble" and "AI is a very interesting and useful technology that will continue to increase its impact" are not mutually exclusive statements


I personally am very sympathetic to "AI is a very interesting and useful technology that will continue to increase its impact"

However, it's a bit of a non-statement: isn't it true for all technology ever? Therefore it seems like a retreating point, spouted while moving away from the now untenable position of "AI will revolutionize everything". But that's just my impression.


I think the OP meant something far simpler (and perhaps less interesting), which is that you simply cannot encounter key errors due to missing fields, since all fields are always initialized with a default value when deserializing. That's distinct from what a "required" field is in protobuf


Depending on the language/library, you can get exactly the same behavior with JSON.
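For instance, here's a minimal Python sketch of getting proto3-like behavior out of plain JSON (the Config fields are made up for illustration): every field carries a default, so a missing key never raises a KeyError, it just comes back as its zero value.

    import json
    from dataclasses import dataclass, field, fields

    # Hypothetical message shape, purely for illustration; every field has a
    # default, mirroring proto3 semantics where an absent field deserializes
    # to its zero value rather than raising.
    @dataclass
    class Config:
        timeout_ms: int = 0
        endpoint: str = ""
        retries: list = field(default_factory=list)

    def from_json(raw):
        data = json.loads(raw)
        known = {f.name for f in fields(Config)}
        # Unknown keys are dropped; missing keys keep their defaults.
        return Config(**{k: v for k, v in data.items() if k in known})

    cfg = from_json('{"endpoint": "https://example.test"}')
    print(cfg.timeout_ms)  # 0 -- no KeyError, just the default value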


Yes, at about 1% of this scale. OpenAI's obligations are not something they can just run to daddy VC to pay for; he can't afford it either

