throwaway277432's comments | Hacker News

Unironically yes.

I predict that costs will grow to 80% of what it would cost a human, across the board for everything AI can do.

"It's still cheaper than a human" they'll say. Loudly here on HN too.

Of course this will happen slowly, very slowly. Let's meet again in 10-20 years.


If openai / anthropic / google were the only game in town then yeah, we’d already be paying 5x as much as we do. But local models are so close to SOTA that it just isn’t going to happen. If I’m a lawyer getting billed $500k/yr on $600k profit, I’d rather buy a chonky server, run a model that’s 90% as good, and get my money back in 2 years, then pay just $5k in electricity on $600k profit.

Nobody will successfully lobby for banning local models either, it just isn’t going to happen when the rest of the world will happily avoid paying 80% of their profits to some US bigco for the privilege of existing.


Could you really build something sophisticated with a local model? Let's say a linux kernel.

I'm using Codex with the Linux kernel and I discard maybe 80% of what it produces. This isn't an area which the top models have solved.

> "It's still cheaper than a human" they'll say.

The question is how much friction there will be for people to switch over to Gemini, GPT or maybe even DeepSeek or Mistral or whatever. Even if price hikes are inevitable across the board, the moat any single org has is somewhat limited, so prices definitely will be a factor they'll compete on with one another at least a bit.


> the moat any single org has is somewhat limited

I disagree. The models are going to become commodities (we're already almost there), but the tooling and integrations will be the moat. Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started.

Anyone can implement an AI chatbot. But few will be able to provide AI that's deeply integrated into our daily lives.


> Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started.

They're one org with presumably some specific direction. As the actual models get better, expect a large part of the dev community iterating on tools way more easily, sometimes ones that Anthropic doesn't quite have an equivalent to - for example, just recently Cline released their Kanban solution to dish out tasks to agents (https://cline.bot/kanban), OpenCode has been around for a while for the agentic stuff (https://opencode.ai/) and now has a desktop and web version as well, alongside dozens of others. Cline and KiloCode also have decent browser automation.

I will admit that everyone working on everything at the same time definitely means limitless reinvention of the wheel and some genuinely good initiatives dying off along the way (I personally liked RooCode more than both the Cline and KiloCode for Visual Studio Code, sad to see them go), but I doubt we're gonna see a lack of software. Maybe a lack of good software, though; not like Anthropic or any org has any moat there either, since they're under the additional pressure of having to do a shitload of PR and release new models and keep up appearances, compared to your average dev just pushing to GitHub (unless they want corporate money, in which case they do need some polish).


I don't think costs will grow on either side in the long term. In the short term, yes, but once they get the infrastructure in place to support AI, costs will go down. Right now, they're on borrowed infra.

> I predict that costs will grow to 80% of what it would cost a human, across the board for everything AI can do.

80% of a human's price varies greatly by region. 80% of the lowest-priced human effort in this space right now will probably not be sustainable for the sellers.


This is assuming there will be no competition. But why wouldn't there be? Especially since you can use open source models, which are not too far from frontier models (from now).

Kimi and GLM 5.1 are already capable of handling a good chunk of my tasks. They're about to lose the leverage that would allow them to drastically increase prices - enough models are 6-12 months away from being good enough for large proportions of their customers' uses.

It's not 20 years. It's now. Nvidia has already said that tokens cost more than humans.

https://finance.yahoo.com/sectors/technology/articles/cost-c...


So? Everyone is saying to just look at the LLM outputs for PRs etc. and just ignore how it was created. We should apply that standard right here too.

This is Anthropic's initial response, which they walked back ONLY because of the HN outrage. Without HN, that would've been the official answer.

I'll judge them on that, thank you.


Having just worked my behind off for the last few months to deliver on an impossible deadline, successfully: more bodies definitely would have helped.

Even just to keep the fluff off my back and to allow me to fully concentrate on what's important.

The situation will repeat itself in 6 months and I'm not going to do that again. Hiring now would fix that.


>tell me if I earned your star

Since you asked: Not in a million years, no.

A bug of this type is either an honest typo or a sign that the author(s) don't take security seriously. Even if it were a typo, any serious author would've put a large FIXME right there when adding that line disabling verification. I know I would. In any case a huge red flag for a mitm tool.

Seeing that it's vibe coded leads me to believe it's due to AI slop, not a simple typo from debugging.


I love the real feedback tbh, I am still learning, and want to learn as much as possible. Would love if you can review it and tell me bluntly either in the repo or here the things that should be improved. I would love to learn more from you and get better :D


I'm not going to review it in full, sorry. Reviewing is so much more effort compared to producing something with AI. But don't let me deter you, keep on learning and keep on building.

I wish I had the possibilities to learn and build on such a large scale when I started out. AI is a blessing and a curse I guess.

My own early projects were most definitely crap, and I made the exact same mistakes in the past. Honestly my first attempts were surely worse. But my projects were also tiny and incomplete, so I never published them.

However: What little parts I did publish as open source or PRs were meticulously reviewed before ever hitting send, and I knew these inside and out and they were as good as I could make them.

Vibe-coded software is complete but never as good as you could make it, so the effort in reviewing it is mostly wasted.

I guess what I'm trying to say is I'm a bit tired of seeing student-level projects on HN / Github cosplaying as production ready software built by an experienced engineer. It used to be possible to distinguish these from the README or other cues, but nowadays they all look professional and are unintentionally polluting the software space when I'm actually looking for something.

Please understand that this is not specifically directed at you, it's pent up frustration from reading HN projects over the last months. Old guy yelling at clouds.


The README is really annoying.

You used to be able to tell so easily what was a good well looked after repo by viewing the effort and detail that had gone into the README.

Now it's too easy to slop up a README.


it is incredible that people pointed out very specifically what's wrong and you fell back to weaponized incompetence to shift the intellectual and mental burden of reviewing the code to outsiders instead of thinking for yourself. this is the problem with relying on LLMs: instead of thinking for yourself you just ask LLMs, and now other real people, "idk just fix it for me make it work". do you really not see the problem with this?


I appreciate that attitude. Keep it up.


unlikely to get that from a throwaway


You can always try right?


Only if you don’t care about your reputation.

“Give me your time for free” is not the kind of request that earns respect.


I got a major reprimand because I answered too many questions posted in the public channel. All in my area of expertise, mostly after hours.

At first they said it was "great". But it soon turned sour and resulted in "it seems like you spend too much time answering questions", and I should "focus" and "free up" that time to work on my assigned tasks.

Well, I don't answer anything anymore. In fact nobody does. It used to be that you got precise technical answers from someone directly working on the tool or problem you asked about. The previous CEO would sometimes even answer themselves. Not anymore.

Now people ask, but nobody answers. The rest has devolved into LinkedIn style self-promotions and announcements.


But now the layer of management above you can justify organizing meetings to "get aligned" and "communicate". Structurally, I see how it happens.


Lots of people haven't had to actually restore their data. Somehow it has good marketing. I used it for a while and was not impressed. Random Python errors, requires too much scripting, and at least on my data terrible restore speed.

I followed development on Github and what I saw in terms of fixes and commits gave me pause. Not how I like my critical backup software written.

I now use restic and sleep much better.


Don't ever travel, never change anything related to billing except to update your cards before they expire. Don't change your name, email addresses or lose access to your phone number, and as we know now also don't ask support.

Then don't use any uncommon tools, e.g. ones associated with 'hacking', or store any copyrighted files in their cloud.

If there's any issue or error with logins etc., don't retry too quickly or too often or that in itself will be suspicious. Wait a day between requests, and double-check everything before retrying. Do not retry from a different IP or worse a VPN, or that will also be suspicious.

That should just about cover the bases for most providers.

Yes, it's insane and obviously you still need a backup of all your stuff just in case.


> Don't change your name, email addresses or lose access to your phone number, and as we know now also don't ask support.

This reads like a list of instructions from the film Brazil.


That’s the only movie to have truly disturbed me. It made me feel awful. And the feeling lingered a long time.


> "What do you think about the comments of user XYZ"

Wow that is really scary. Never did I ever think someone would actually go through all my old comments, analyze them in detail and then judge me based on them (my real account, not this throwaway).

Yes I knew it would be theoretically possible, but you'd have to be a total stalker and real creep to actually do it. Now anyone with an LLM can just do it without a second thought.

And it'll only get worse from here on. I'm sure there is at least 1 comment somewhere on the internet by me where I wasn't too nice, or a like / upvote on a questionable opinion or something.

If it's in any way connectable to me future AI tech is going to find it. Probably even across accounts, matching writing styles and whatnot.

I seriously think I'm going to stop posting on the internet for good.


Wouldn’t surprise me if some throwaways could be linked to real accounts, and if real accounts could be linked to other real accounts, both ones on HN and elsewhere on the internet, from Reddit to Usenet.

I suspect doxing with AI would be quite easy too, judging the way accounts talk in the same way things like gait recognition can work: link the accounts, narrow down the person, build a profile. Suddenly it becomes: user abc123 is linked to (list of 30 accounts from Discord to FlyerTalk), and based on these posts about flying on US Airways a lot in 2015, these posts about Las Vegas, these about a morning flight, and this picture from a linked Twitter account, the person worked in this industry, lived in this location from this time to that time, and is likely this person on LinkedIn.

Anonymity is dead. Historically as well as in the future. But HN still thinks government is the problem and the GDPR is bad because it disincentivises holding onto data.


> Wouldn’t surprise me if some throwaways could be linked to real accounts

"Reproducing Hacker News writing style fingerprinting" - https://antirez.com/news/150

It's not entirely accurate but some people have found their own alt accounts via this apparently.


> I seriously think I'm going to stop posting on the internet for good.

I had similar thoughts, but it would probably not make a difference, at this stage. What is there stays there - either online, as in the case of HN, or as part of some collected dataset.

In hindsight: the world changed in so many ways, from the world I knew some twenty years ago, and I am not even talking about politics or technology: the attitudes and perception of people seems to have changed in many ways. Back then I thought it would be of benefit to be open and upfront about things. Now that is no longer a common perception.

Enough said.


>chore: change to MIT license

What does "chore" mean in this context? Is the license just leftover from some MS open source template? If so there is perhaps some leeway, and the author maybe just didn't realize he needed to use the original MIT license file including the notices and not just a template one grabbed from the internet.

Any other explanation for such a "relicensing" would be extremely worrisome.


"chore" is a common conventional commit message type, see https://www.conventionalcommits.org/en/v1.0.0/
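For illustration, the type prefix goes before a colon and a short description; these examples are made up, not from any particular repo:

```
feat: add dark mode toggle
fix: handle empty config file
docs: update installation instructions
chore: bump dependency versions
chore: change to MIT license
```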


"chore" just denotes the type of change: as opposed to a fix, a feature, or refactoring, there are some things that you have to do in the repo that can be called "chores".


I'd say, in this case "chore" means "boring, nothing to see here".


It's interesting, because "chore" to me has strong connotations of "tedious, unpleasant".


Right. It derives from the idea that programmers are supposed to find "solving interesting problems" pleasant. On the other hand, boring, repetitive tasks are called "chores".


I don’t find it appropriate nor useful to place such a sentiment in a commit message, much less as a standard tag.


It's a nerdy colloquialism, i.e., it's not that serious.


That’s part of the reason why I’d object to it in a commit message, in a professional setting.


Some organizations strongly encourage marking all commits as one of a list of categories such as "feature/fix/chore/...". The tags are then bound to lose all meaning (literal or figurative) very soon.

Unless there was some "conspiracy" to violate the license (my original comment was an attempt at playfully hinting at that possibility, though I don't find it very likely), I'm sure the person who wrote that commit message thought about it for less than three seconds.


The easiest way to do it is to add your own copyright line above the original LICENSE copyright line.

That way anyone touching the project can just add their own line on top.

Done.

EDIT: Example: https://github.com/go-gitea/gitea/blob/main/LICENSE
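As a sketch of that stacking, with invented names and years, the top of the LICENSE file could look like this:

```
The MIT License (MIT)

Copyright (c) 2025 New Maintainer
Copyright (c) 2018 Original Author

Permission is hereby granted, free of charge, to any person obtaining a copy ...
```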

A more complicated way to do it is to add a folder that contains the original LICENSE file or files. Sometimes there is more than one license, or the license texts differ. In that case, you must preserve all the different variants, even if they all call themselves MIT.

Then, you can optionally add your own additional LICENSE file *only if* it is compatible with all existing LICENSES. In the case of the MIT license, you may relicense, sublicense, or use a different license in addition, provided it is MIT-compatible. With e.g. the GPL you can't. Note that you still have to preserve all the original LICENSE files in the repo.

