Viewing OpenAI as a consulting firm whose services Microsoft is willing to pay tens of billions for, then yes. With the transaction spread across Azure credits, buying equity at above-market rates, and maybe even direct payments from the GitHub/Office orgs.
Narrowing the discussion to marginal profit per token on the API, I reckon it’s possible they’re breaking even, but it’s hard to tell.
The average profit per token, once you allow for R&D, is almost certainly negative though, since AI engineers and model training are extremely expensive. That’s hinted at by the fact that they keep raising and consuming more capital in recent fundraising rounds.
Not everyone, I personally won't pay for it in its current state. Reasons I won't pay:
* OpenAI's charter is a load of nonsense; everyone should read it before paying for the tool.
* Ultimately, ChatGPT is incredibly expensive to run. It's so expensive that OpenAI won't tell us how much, claiming it's safer for us not to know... lol. That means it's a contributor to climate change, probably a huge one. People and animals are suffering immeasurably while we experiment with building synthetic brains at scale? Sorry, but this is bullshit. I get the idea that we "might" use ChatGPT to solve climate change, but we have no proof that will happen, so at this point it's just an excuse.
* Ultimately, they've put a price tag on people's intellectual work without referencing or compensating them; IMO that's theft. I believe not having open access to the sources behind a given answer is wrong. Submitting information to OpenAI's product should be "opt-in", not "opt-out if you can prove your work is being used", which is impossible.
* ChatGPT-4 has become more convincing, not necessarily more accurate. More capable, maybe, but please RTFM.
* The system is not an objective, factual "thinker"; it's biased, partly by design (because "ethics") and partly because of the training data. I don't like systems with built-in bias, especially when used in scientific and engineering applications. There is no place for it there.
* There's no clear way to verify its accuracy; it's a black box. Even if it had a "confidence meter" I wouldn't blindly trust it.
So it might be part of yours, but I think a lot of users are still aware of the pitfalls and failure modes, the legal hurdles ahead, and the ethical concerns. I'm still unconvinced that this is a technology that will, by default, improve our lives.
I have definitely tried it and experimented with it, but I'm not interested in paying for what is, ultimately, the work of nearly all mankind up to this point. You could say that about anything, sure, but of all the technologies I've seen in my life, ChatGPT + DALL-E are some of the greatest instances of IP theft we've ever seen, all sold under the guise of an absolute, unfettered, critical need for progress.
I'm excited at the prospects, but this isn't the way to go about it.
I could agree with everything you said, but despite all the cons, humanity will never stop developing a technology because of the potential harm it could cause, unless that harm is well proven and documented.
Not the commenter, but I've used it to build a search engine that filters out results containing ads and SEO junk and gives summaries instead of click-bait descriptions.
I published it on aisearch.vip (only 1 search is free because I don't want to go bankrupt)
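I don't know how the commenter actually built it, but here is a minimal sketch of how that kind of wrapper might look, assuming the official `openai` Python package (v1-style client with an `OPENAI_API_KEY` in the environment) and a `raw_results` list already fetched from whatever search API you use; the ad/SEO filter and `looks_like_seo_junk` heuristic are purely illustrative, not the real product's logic.

```python
# Sketch: filter ad/SEO-looking search results, then replace each remaining
# result's click-bait snippet with a short model-written summary.
# Assumes the official `openai` package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

AD_MARKERS = ("sponsored", "affiliate", "buy now", "top 10 best")


def looks_like_seo_junk(result: dict) -> bool:
    """Very rough heuristic: drop results whose title/snippet smell like ads."""
    text = (result["title"] + " " + result["snippet"]).lower()
    return any(marker in text for marker in AD_MARKERS)


def summarize(result: dict) -> str:
    """Ask the model for a neutral one-sentence summary of a result."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the page in one neutral sentence. No hype."},
            {"role": "user",
             "content": f"Title: {result['title']}\nSnippet: {result['snippet']}"},
        ],
    )
    return response.choices[0].message.content.strip()


def search(raw_results: list[dict]) -> list[dict]:
    """Filter junk, then pair each kept URL with a model-generated summary."""
    kept = [r for r in raw_results if not looks_like_seo_junk(r)]
    return [{"url": r["url"], "summary": summarize(r)} for r in kept]
```

The per-result summarization call is also where the "only 1 search is free" economics come from: every query fans out into several chat-completion requests, so the API cost scales with the number of results you keep.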