They need to be more worried about creating a viable economic model for the current AI craze. Right now there's no clear path to making any of this insanity profitable. Yes, NVIDIA is killing it, but with money pumped in from highly upside-down sources.
Things will regulate themselves pretty quickly when the financial music stops.
Do you mean that they need to find better ways to create value by using AI, or that they need better ways to extract value from end-users of AI?
I'd argue that "value creation" is already in a decent position, considering generative AI and the use case of "interactive search engine" alone.
Regarding "value extraction": advertising should always be an option here, just as it was for radio, television, and online content in general in the past.
Preventing smaller entities (or private persons even) from just doing their own thing and making their own models seems like the biggest difficulty long term to me (from the perspective of the "rent seeking" tech giant).
> I'd argue that "value creation" is already in a decent position, considering generative AI and the use case of "interactive search engine" alone.
> Regarding "value extraction": advertising should always be an option here, just as it was for radio, television, and online content in general in the past.
Not at the actual price it's going to cost, though. The cost of an "interactive search" (an LLM query) is an order of magnitude higher than a "traditional search" (a Google query). People tolerate ads to pay Google for the service, but imagine how many ads ChatGPT would need, or how much it would have to cost, to compensate for, say, a 10x difference. Last time I read about this a few months ago, OpenAI was losing money on ChatGPT's paid tier because the people paying for it were using it so much.
It's more likely that ChatGPT will just sprinkle sponsored ads into its responses (you ask for a headphone comparison, and it recommends the sponsored brand, from a sponsored vendor, with an affiliate link) and hope that's enough.
> Not at the actual price it's going to cost though.
But we don't know that price point yet; current prices are inflated by the gold-rush situation, and there are lots of ways to trim marginal costs. At worst, high, un-optimizable long-term costs will decrease use and adoption a bit, but I don't think even that is going to happen.
Just compare the situation with video hosting: it was not profitable at first, but hardware (and bandwidth) got predictably cheaper, technology more optimized, and monetization more effective, and now it's a good chunk of Google's total revenue.
You could have made the same arguments about video hosting in 2005 (too expensive, nobody pays for this, where's the revenue?), but in hindsight they would have led to extremely bad business decisions.
Not to mention, most arguments about costs of AI inference are plain inane.
AI search being 10x more expensive than a Google query? That's a silly, meaningless number, especially considering that a good AI response easily saves the user 5+ search queries to get the same results, and the AI query itself can issue the equivalent of 10-20 search queries plus the compute spent analyzing them.
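The point above is that the fair comparison is cost per answered question, not cost per raw query. A back-of-envelope sketch, where every dollar figure and query count is a hypothetical placeholder rather than a measured number:

```python
# Compare cost per *answered question*, not per raw query.
# All figures below are hypothetical placeholders for illustration.

google_cost_per_query = 0.001   # assumed cost of one traditional search
llm_cost_per_response = 0.010   # assumed 10x higher LLM inference cost

# Assume one good AI response replaces several manual searches.
searches_per_answer = 5

cost_google_answer = google_cost_per_query * searches_per_answer
cost_llm_answer = llm_cost_per_response  # one response answers the question

ratio = cost_llm_answer / cost_google_answer
print(f"Effective cost ratio: {ratio:.1f}x")  # prints "Effective cost ratio: 2.0x"
```

Under these assumed numbers, the headline "10x per query" shrinks to 2x per answered question; different placeholder values would of course give a different ratio.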
The "well, just pay for it with ads" concept is financially flawed. Serving all this up costs 30+x more than traditional search, and the online-ad space isn't going to suddenly start paying 30x more for ads. There simply aren't enough advertising dollars out there to pay for it all.
You might be thinking of old models like banner ads or keyword results at the top of a search page, not of asking ChatGPT the best way to clean something up and having it suggest Dawn™ Dish Soap!
I don't disagree that AI is then of "value." The issue at the moment is that the whole thing is being kept alive by hype and circular financing.
There's not anywhere near enough money coming in from outside (i.e. consumers and businesses buying this stuff) to remotely support the amount being spent. Not even close. Not even "we just need to scale more." It's presently one big, spectacular burning pile of cash with no obvious way forward other than throwing more cash on the pile.
This is crazy to me, given how inaccurate Google's AI summaries are. They've basically just added a chunk of lies to the top of every search page that I have to scroll past.
The music is just getting started. The way it is going, AI will be inevitable. Companies are CONVINCED it’s adopt AI or die, whether it is effective or not.
The race is to be the first to make a self-improving model (and have the infrastructure it will demand).
This is a winner-takes-all game, one that stands a real chance of being the last winner-takes-all game humans ever play. Given that, the only two choices are to throw everything you can at becoming the winner, or to sit out and hope no one wins.
The labs know that substantial losses will be had, they aren't investing in this to get a return, they are investing in it to be the winner. The losers will all be financially obliterated (and whoever sat out will be irrelevant).
I doubt they are sweating too hard, though, because it seems overwhelmingly likely that most people would pay >$75/mo for LLM inference (similar to cell phone costs), and at that rate, without going hard on training, the models are absolute money printers.
There is zero evidence that the current approach will ever lead to a self-improving model, or that current GPU/TPU infrastructure is even capable of running self-improving models.