Hacker News

Same here. I really hope that in the near future local models will be good enough, and hardware fast enough to run them, for them to become viable for most use cases.



No need to hope; it is inevitable.

Is it inevitable, though? Open-weight models large enough to come close to an API model are insanely expensive for con-/prosumers to run. I'd put the "expensive" bar at ≥24 GB of VRAM, since that's already well into four digits; that money buys quite a few months of a subscription, not counting the power bill for >400 W of continuous draw.

Color me pessimistic, but this feels like a pipe dream.
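For what it's worth, the arithmetic sketches out like this. Every figure below is an illustrative assumption (GPU price, electricity rate, subscription cost), not a quoted price:

```python
# Back-of-envelope: local-rig running cost vs. a hosted subscription.
# All figures are illustrative assumptions, not quoted prices.
gpu_cost_usd = 1200.0       # assumed price of a >=24 GB consumer GPU
power_draw_w = 400.0        # continuous draw under load, per the estimate above
hours_per_month = 730.0     # ~24 h * 30.4 days
usd_per_kwh = 0.30          # assumed electricity rate
subscription_usd = 20.0     # assumed monthly hosted-API subscription

power_usd_per_month = power_draw_w / 1000.0 * hours_per_month * usd_per_kwh
print(f"local power alone: ${power_usd_per_month:.2f}/month "
      f"vs. subscription: ${subscription_usd:.2f}/month")
```

At these assumed numbers, round-the-clock power alone already exceeds a typical subscription before the hardware is even amortized, though a rig that idles most of the day would look much better.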


A decent number of software developers and gamers do spend 3000 USD on a PC. That kind of hardware is going to get more and more capable over time with respect to genAI models.

Of course there will always be a gap to frontier closed hosted models. It is not an either/or proposition.
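As a rough sense of scale: the VRAM needed just to hold a model's weights grows with parameter count and shrinks with quantization. The formula below is the standard weights-only approximation (real usage adds KV cache and activation overhead), and the example sizes are illustrative:

```python
# Weights-only VRAM estimate: params * bits-per-weight, converted to GB.
# Ignores KV cache, activations, and runtime overhead, so treat as a floor.
def weight_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weight_vram_gb(70, 4))   # 70B model at 4-bit: 35.0 GB, beyond one 24 GB card
print(weight_vram_gb(13, 4))   # 13B model at 4-bit: 6.5 GB, easily fits
```

That gap is why the hardware trend matters: each generation of consumer card moves the line for which model sizes fit locally.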



