No I'm not. Because the people who need this mentality shift are also people who won't listen anyway and have a negative attitude. And the people who already understand this don't need to see this article.
> code generation today is the worst that it ever will be, and it's only going to improve from here.
I'm also of the mindset that even if this is not true — that is, even if the current state of LLMs is the best they will ever be — AI would still be helpful. It is already great at writing self-contained scripts, and efficiency with large codebases has already improved.
> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.
Yes, this is worrisome. Though it's ironic: almost every serious software engineer, at some point early in their childhood or career when programming was more for fun than work, thought about how cool it would be for a computer program to write a computer program. And now that we have that capability in front of our eyes, we're afraid of it.
But one thing humans are really good at is adaptability. We adapt to circumstances, good or bad. Even if the worst happens and people lose jobs, in the short term it will be hard on their families; over time, though, humans will adapt to the situation, learn to coexist with AI, and find the next endeavour to conquer.
Rejecting AI is not the solution. Using it like any other tool is. A tool that, used correctly by the right person, can indeed produce faster results.
I mean, some are good at adaptability, while others get completely left in the dust. Look at the rust belt: jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US, and when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads AI is making into the workforce, this could be the first restructuring where we see massive job losses.
Exactly, and it doesn't help with agentic use cases, which tend to solve a problem in one shot. For example, there is zero requirement for a model to be conversational when it is triaging a support question into preset categories.
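To make the point concrete, here is a minimal sketch of that kind of one-shot triage. The category names, prompt wording, and the idea of parsing the reply down to a bare label are all illustrative assumptions, not any particular vendor's API; the model call itself is left out, and only the non-conversational framing and the parsing are shown.

```python
# Sketch of one-shot triage: force the model's reply into a fixed label set,
# so no conversational back-and-forth is ever needed. (Category names and
# prompt wording are hypothetical.)

CATEGORIES = ["billing", "bug-report", "feature-request", "other"]

def triage_prompt(ticket: str) -> str:
    """Build a prompt that leaves no room for chat: one label, nothing else."""
    return (
        "Classify the support ticket into exactly one category.\n"
        f"Categories: {', '.join(CATEGORIES)}\n"
        "Reply with the category name only, no explanation.\n\n"
        f"Ticket: {ticket}"
    )

def parse_label(reply: str) -> str:
    """Map the raw model reply onto a preset category, discarding chat filler."""
    cleaned = reply.strip().lower().rstrip(".")
    for cat in CATEGORIES:
        if cat in cleaned:
            return cat
    return "other"  # fall back deterministically instead of re-prompting

# Even a chatty reply collapses to a single label:
print(parse_label("Sure! I'd say this is a billing issue."))  # billing
```

The point is that everything "conversational" gets stripped at the boundary: the prompt forbids it and the parser ignores it, so the model's personality is irrelevant to this workload.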
> We heard clearly from users that great AI should not only be smart, but also enjoyable to talk to.
That is what most people asked for. There's no way to know if that is true, but if it is, then from a business point of view it makes sense for them to make their model meet users' expectations. It's extremely hard to make everyone happy. Personally, I don't like it and would prefer a more robotic response by default rather than having to set its tone explicitly.
> There's no way to know if that is true, but if it is, then from a business point of view it makes sense for them to make their model meet users' expectations.
It makes sense if your target is the general public talking to an AI girlfriend.
I don't know if that will fill their pockets enough to become profitable given the spending they announced. But isn't this like admitting that all the AGI, we-cure-cancer stuff was just bullshitting? And if it was bullshitting, aren't they overvalued? Sex sells, but will it sell enough?
> I don't like it and would prefer a more robotic response by default rather than having to set its tone explicitly.
AI interfaces are going the same way the public internet did: initially its audience was a subset of educated Westerners; now it's the general public.
One suggestion that possibly is not covered: you can clearly document how AI-generated PRs will be handled, make it easy for contributors to discover, and, if/when such a PR shows up, point to the documented section to save yourself time.
> if you can find the logic and the will to do it.
This is important. Both logic and will are required; if only one of the two exists, the impact will be limited, if there is any at all. Broadly speaking, most people have the logic but not the will, in the sense that the latter gets diluted by factors like ego, seniority, and organizational lag.
I think that is where current AI chat interfaces like ChatGPT beat other digital interfaces. You ask a question and get just an answer back, in more or less the same format and grammar. No ads. No distractions. Clean.
Though it will be tough for AI chat providers to keep it that way for long if revenue from subscriptions/APIs does not offset the exorbitant compute costs.
Youtube premium. $12 CAD/month. No ads + videos can play in the background.
On the other hand, it wasn't worth it for us to spend time/money on Netflix/Amazon Prime (streaming), so we just killed those subscriptions and channeled the money to YouTube.
Once upon a time (about 10 years ago), I could turn off my iPad’s screen and still hear YouTube playing without paying Google for the privilege. Used to be HN would never accept such a thing, but here it is on a list of things we are happy to pay for. Amazing how times change.
It makes way more sense to do value analysis purely on what you gain vs. what it costs, rather than trying to factor in the cost of making or implementing said item.
I don't care if the bill of materials is really high; that's no reason for a consumer to be any more sympathetic to a price. Similarly, costing almost nothing is no reason to deride a price. That's the company's problem.
It's optimal to just focus on what you get for what you pay.
Yeah, YouTube Premium is where I put my money where my mouth is. I always said I would prefer to pay for content outright rather than be advertised to. Well, this is it.
But that doesn't work on all platforms; PC only, sure. Premium (formerly Red) works on smart TVs, phones, tablets, Xbox, computers...
And you get YouTube Music with it.
It costs me about 20 minutes' worth of work once a month to remove ads from my primary media consumption site. Totally worth it, and I don't have to mess with third-party BS. It completely changes the YouTube experience.