
Besides the fact that this article is obviously AI generated (and not even well; why are there mismatches in British/American English? I can only assume the few parts in British English are the human author's writing or edits), yes, "overutilization" is not a real thing. There is a level of utilization at every price point. If something is "overutilized" that actually means it's just being offered at a low price, which is good for consumers. It's a nice scare word though, and there's endless appetite at the moment for AI-doomer articles.


Author here, I mix up American and British English all the time. It's pretty common for us Brits to do that imo.

See also how all (?) Brits pronounce Gen Z in the American way (ie zee, not zed).


Brit here… I say Gen Zed!


Sorry, but it's highly suspect to spell the same word multiple different ways across paragraphs. You switch between centre/center and utilization/utilisation? That's a very weird mistake for a human to make.


I mix British and American English all the time. Subconsciously I type in British English, but since I work in American English my spell checkers are usually configured for en-US, and that usually means a weird mix of half and half by the time I've fixed the red squiggles I notice.


Yes exactly!


I dunno, I switch between grey and gray all the time; comes with having worked in so many different countries.


> why are there mismatches in British/American English

You sometimes see this with real live humans who have lived in multiple counties.


> You sometimes see this with real live humans who have lived in multiple counties.

Also very common with... most Canadians. We officially use an English closer to British English (zed not zee, honour not honor), but geographically and culturally we're very close to the US.

At school you learn "X, Y, Zed". The toy you buy your toddler is made for the US and Canadian market and sings "X, Y, Zee" as does practically every show on TV. The dictionary says it's spelled "colour" but most of the books you read will spell it "color". Most people we communicate with are either from Canada or the US, so much of our personal communication is with US English.

But also there are a number of British shows that air here, so some particularly British phrases do sneak into a lot of people's lexicons.

See a similar thing in the way we measure things.

We use Celsius for temperature, but most of our thermostats default to Fahrenheit, and most cookbooks are primarily in imperial measures and units because they're from the US. The store sells everything in grams and kilograms, but most recipes are still in tablespoons/cups/etc.

Most things are sold in metric, but when you buy lumber it's sold in feet, and any construction site is going to be working primarily in feet and inches.

If anything I expect any AI-written content would be more consistent about this than I usually am.


For Canadian units I always like this handy flow chart: https://www.reddit.com/r/HelloInternet/comments/d1hwpx/canad...


> multiple counties

Pay no attention to those fopheads from Kent. We speak proper British English here in Essex


I do this because I'm a non-native English speaker. My preference varies from word to word. I write color, but I also write aluminium.


> why are there mismatches in British/American English

Some people are not from the USA or England.


One of my least favorite things to come from AI is labelling any writing someone doesn't like as "obviously AI generated". I've read 3 of these kinds of comments on HN just today.


As a non-native English speaker I mix British and American English all the time, and you should hear me speaking; I mix in some novel accent too. Anyway, the author answered in a sibling reply.


By this logic, loss leaders to drive out competition are good for the consumer, no?


To be honest it doesn't feel manually edited.

Bullet point hell, a table that feels like it came straight out of Grok.


They don't stream to your phone when taking a video or picture. The data is on-device and transferred later. It also uses Wi-Fi Direct, not BLE. It seems many, many people on HN have absolutely no clue how the Meta glasses work lol; there's barely any accurate information in this thread.


Like I mentioned in the text, I haven't looked into Wi-Fi yet. The picture/video -> transfer-through-the-app flow is correct, and it's why an alternative method for detecting actual recording is necessary, but I'd expect most events like battery status updates to go over directed BLE, since the initial boot + battery status is broadcast. And likely Bluetooth Classic (BTC) for streaming audio. I'm unfamiliar with Wi-Fi Direct specifically; are you familiar with the process of scanning for active Wi-Fi Direct services?
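For anyone curious, the passive BLE side is straightforward to poke at; here's a minimal sketch using Python's bleak library. The name filter is a placeholder, since I haven't confirmed what identifiers the glasses actually broadcast:

```python
# Minimal sketch: passively scan for BLE advertisements and log candidates.
# Assumption: the glasses broadcast something identifiable (a local name or
# manufacturer data) after boot / with battery status; TARGET_NAME_HINT is a
# placeholder, not a confirmed identifier.
import asyncio
from bleak import BleakScanner

TARGET_NAME_HINT = "glasses"  # hypothetical substring, replace once known

async def scan(seconds: float = 10.0):
    devices = await BleakScanner.discover(timeout=seconds, return_adv=True)
    for address, (device, adv) in devices.items():
        name = adv.local_name or ""
        if TARGET_NAME_HINT in name.lower():
            # Dump RSSI and manufacturer data for later fingerprinting
            print(address, name, adv.rssi, dict(adv.manufacturer_data))

if __name__ == "__main__":
    asyncio.run(scan())
```

Wi-Fi Direct service discovery would be a separate (and more platform-specific) path, which is the part I'd need help with.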


Sorry, I don't mean to demean your effort; I read the GH post and like the hacker spirit :D. It's the rest of the people in the HN comments with zero clue.

I like my glasses and don't really agree with your goals (nor see the point of letting you know when someone's wearing them; in my city your device would be beeping constantly), so I'm not interested in helping, unfortunately. But I do wish you luck; as I said, I like the spirit.


It's truly astonishing to me that your account has existed since 2008 and you decided to pull this.

As a troll job for the lulz it is some amazing work. Hats off.


I work on rec systems.

The ideal here will be a multi-tiered approach: the LLM first identifies that a book should be recommended, a traditional recommendation system chooses the best book for the user (from a bank of books that are part of an ads campaign), and finally the LLM weaves that into the final response via prompt suggestion. All of this is individually well tested for efficacy within the social media industry.
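To make the shape of that concrete, here's a rough sketch of the tiers; every function name and the campaign "bank" below are hypothetical stand-ins for illustration, not any real system:

```python
# Rough sketch of the multi-tiered flow described above. llm_classify /
# llm_rewrite stand in for calls to whatever LLM is in use, and
# rank_candidates stands in for the traditional recommender scoring items
# from an ads-campaign bank.
from dataclasses import dataclass

@dataclass
class Book:
    title: str
    campaign_id: str

CAMPAIGN_BANK = [Book("Example Title A", "cmp1"), Book("Example Title B", "cmp2")]

def llm_classify(conversation: str) -> bool:
    """Tier 1: the LLM decides whether a book recommendation fits here."""
    return "looking for something to read" in conversation.lower()  # stub

def rank_candidates(user_id: str, candidates: list) -> Book:
    """Tier 2: the traditional rec system picks the best candidate for this user."""
    return candidates[0]  # stub: a real system would score on user features

def llm_rewrite(conversation: str, book: Book) -> str:
    """Tier 3: the LLM weaves the chosen book into its reply via the prompt."""
    return f"You might enjoy '{book.title}'."  # stub

def respond(user_id: str, conversation: str) -> str:
    if llm_classify(conversation):
        book = rank_candidates(user_id, CAMPAIGN_BANK)
        return llm_rewrite(conversation, book)
    return "..."  # normal, un-sponsored reply
```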

I'll probably get comments calling this dystopian but I'm just addressing the claim that LLMs don't do good recommendations right now, which is not fundamental to the chatbot system.


All this would imply that the core value derives from better rec systems and not LLMs, which will merely embed the recommendation into their polite fluff.

Rec systems are in use right now everywhere, and they're not exactly mind-blowing in practice. If we take my example of books with certain plotlines, it would need some super-high-quality feature extraction from books (which would be even more valuable, imo, than having better algorithms working on worse data). LLMs can certainly help with that, but that's just one domain.
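For what it's worth, that feature-extraction piece is the part LLMs plausibly help with; a minimal sketch, where call_llm and the tag prompt are placeholders rather than any real API:

```python
# Sketch of LLM-assisted feature extraction for books: turn free text into
# structured plot tags a conventional rec system can index. call_llm is a
# placeholder for whatever model/API is actually used.
import json

PROMPT = (
    "List the major plotlines in this book synopsis as short tags, "
    "returned as a JSON array of strings.\n\nSynopsis:\n{synopsis}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real LLM call here")

def extract_plot_tags(synopsis: str) -> list:
    raw = call_llm(PROMPT.format(synopsis=synopsis))
    try:
        return [t.strip().lower() for t in json.loads(raw)]
    except (json.JSONDecodeError, TypeError):
        return []  # a real pipeline would retry or log the failure
```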

And that would be a bespoke solution for just books, which, if it worked, would work with a standard search bar, no LLM needed in the final product.

We would need people to solve recommendation for every domain, whereas a group of knowledgeable humans can give you great tips in every domain they're familiar with: what to read, what to watch, what to buy to fix your leaky roof, etc.

So in essence, what you suggest would amount to giving up on LLMs (except as helpers for data curation and feature extraction) and going back to things we know work.


For one, in a mortgage the loan is secured by the house. But more importantly: you can get simple loans for startups too! Banks provide loans that are personally guaranteed (i.e. if the business goes under, the founder is still on the hook). But if you want more money, or something that is limited in liability, then your pool of people willing to give money is much smaller, and they usually want a stake in the business as a condition.


The opposite argument would be that a default should then behave like the house: ownership switches over entirely to the person providing the loan, instead of passing on part ownership forever.

But that's obviously less desirable to the person providing the money, and they've obviously got all the cards... Hence the argument of this post.

I wouldn't call it evil myself, unless I wanted to classify capitalism as evil in its entirety - which would feel disingenuous to me, considering the alternatives were always worse in hindsight.


You aren’t thinking this through. If a startup defaults, it is because they have no money left (which is because they do not have a viable business yet). So there is nothing of value to repossess.

This is the same reason the bank asks for an independent valuation of a house (and requires the buyer to maintain insurance) before releasing the money to pay for it: The value of the collateral needs to plausibly match the value of the loan, so that the value of the loan can be recovered in case of default.

The only way this works is for the founder to personally guarantee the loan. Which means the founder needs to have sufficient personal assets to keep the bank happy. It also means the founder risks personal bankruptcy if those assets are not enough to cover the loan if the startup defaults.


Naw, you're making a claim that's just not true.

The company will have some value left on default. E.g. if it's a software company it will have the IP for the software, and so on.

Now whether that's enough for anyone to be willing to take that risk with the loan is another story, and thus I could now quote my previous comment in its entirety.


Meanwhile, back here in reality…

Failed startups don’t have value left at the end. They go until they run out of money. Then they liquidate the office furniture to make the last payroll. Sometimes they don’t wrap up early enough and the founder and board members are personally liable for it.

Nobody wants to buy the custom software needed to run “The pets.com for GenAI” because it would be cheaper to start from scratch than to understand the codebase and make it do what you want.

Companies like 23andMe that accumulate valuable data while going bankrupt are the rare exception… but banks/VCs do not know a priori which ones will be that exception! If they did, they just wouldn't make the bad loans in the first place!

> Now whether that's enough for anyone to be willing to take that risk

Well, but it clearly isn’t, right? So everything else you wrote is sort of irrelevant.

I mean, why don’t you lend a startup $1000 on the condition they pay you back $1500 in two years[1] if they succeed and nothing if they fail? Pass the hat around your neighborhood and I bet you could fund a few real startups!

Except that... oh... when it's your money on the line, suddenly you realize those are very stupid terms. You lose the whole $1000 90% of the time, break even 5% of the time, and make a +$500 profit 5% of the time. The math isn't mathing here.
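Spelled out, using the same assumed 90/5/5 odds:

```python
# Expected value of the $1000 "neighborhood loan" under the assumed odds:
# 90% total loss, 5% break even (get $1000 back), 5% repaid $1500.
stake = 1000
ev_repayment = 0.90 * 0 + 0.05 * 1000 + 0.05 * 1500  # = 125
ev_profit = ev_repayment - stake                      # = -875
print(ev_profit)  # you expect to lose $875 per $1000 lent
```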

So you’ll want to very carefully vet the founders and their plan. Be very picky about who you fund. Maybe you’ll ask them to personally guarantee some fraction of the loan. Suddenly, your highly moral terms look exactly like the business loans that approximately 0% of startups use because VCs offer them a better deal.

[1] Any more than that would be usury, which is immoral, right?


Take a deep breath and reread my comment please; you're interpreting things into it that I never said, and I'm not sure how you could've gotten the impression they were in there. I merely pointed out that this behavior is core to capitalism, because the people with money own said money - and are in no way responsible for creating a "fair" (from the perspective of the person receiving money) playing field.


The residual value in the failed startup will be such a small fraction of the funding. You aren't making a convincing argument.


Did either of you actually read my comment?

I acknowledged as much... in both comments, even.


Stated without a shred of evidence and getting no pushback. Classic for a nonsense claim about a big tech company HN doesn't like, lol.




Stated with no more evidence than the figure of $100M of compensation, which was stated by Sam Altman on his brother's podcast. But surprisingly, everyone seems to be entirely fine with this wild claim and not asking for proof.


> On top of that, Meta, like many major tech companies, has been shifting its focus toward LLM-based AI, moving away from more traditional PyTorch use cases.

This is very wrong. Meta is at the forefront of recommendation algorithms, and that's all done with traditional ML models built using PyTorch.
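For anyone wondering what "traditional ML models" means here, think embedding-based rankers roughly along these lines; this is a textbook toy example, obviously not Meta's actual stack:

```python
# Toy matrix-factorization-style ranker in PyTorch: user and item embeddings
# whose dot product predicts engagement. Illustrative only; production
# recommenders are far larger and far more feature-rich.
import torch
import torch.nn as nn

class DotProductRecommender(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # Score = dot product of user and item embeddings
        return (self.user_emb(user_ids) * self.item_emb(item_ids)).sum(dim=-1)

model = DotProductRecommender(n_users=1000, n_items=5000)
scores = model(torch.tensor([1, 2]), torch.tensor([10, 20]))
loss = nn.functional.binary_cross_entropy_with_logits(scores, torch.tensor([1.0, 0.0]))
loss.backward()
```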


Meta is definitely at the forefront of recommendation algorithms. However, the leadership team has likely shifted focus to LLMs.


Some recommendations are uncanny, except that I don't want any of them in my Facebook news feed and no matter how often I select "never show me this feed again," it keeps trying.


Slide threads are an age-old tactic for pushing views onto others. It doesn't work quite as well on a voted board, but as long as one of your threads sticks you've achieved your goal.


Eventually English textbooks are going to start including the "this isn't..., it's..." pattern because it's so prevalent in AI slop. I close anything I read now at the first sign of it.


I mean, any time a service even 1/100th the size of AWS goes down, you have people crawling out of the woodwork giving armchair advice while having no domain-relevant experience. It's barely even worth taking the time to respond. The people with opinions of value are already giving them internally.


> The people with opinions of value are already giving them internally.

Interesting take, in light of all the brain drain that AWS has experienced over the last few years. Some outside opinions might be useful - but perhaps the brain drain is so extreme that those remaining don't realize it's occurring?

