Hacker News | Fourwheels2512's comments

the DIY version of what ModelBrew.ai does in one click.


cool work. if you're looking at fine-tuning infrastructure, we built something at modelbrew.ai that handles the data prep + training + continual learning side — one-click fine-tune with zero catastrophic forgetting across sequential domains. different angle but similar pain points.


we do finetuning too. the number one complaint, bad datasets, is something we solved by building a better dataset optimizer than what's available in the market today. we also have continual learning: you can train domain B on top of domain A, and domain C on top of domains A and B, without catastrophic forgetting. you should try it out at modelbrew.ai, test it and compare.


Interesting take, but what you're describing is sophisticated RAG with a feedback loop. The model's weights never change. It writes better notes; it doesn't actually know more.

That works for agentic workflows. But for organizations fine-tuning models on proprietary data, it falls apart. Add a second domain, and catastrophic forgetting destroys the first. Context windows are finite. Memory notes are lossy. The model never internalizes anything.

I built the actual weight-update solution: sequential multi-domain fine-tuning on Mistral 7B with -0.16% drift across 5 domains. No replay buffers, no frozen params. The model genuinely accumulates knowledge.

Top labs may not need continual learning for foundation models. Every organization deploying fine-tuned models on their own data absolutely does. Different problem, both real.
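
For context on what that drift number measures, here is a minimal, hypothetical sketch of a cross-domain drift measurement under naive sequential fine-tuning, using a toy MLP and synthetic data. It is not the method behind the -0.16% figure; `make_domain` and the two toy domains are illustrative assumptions.

    # Minimal sketch: measure how much domain-A accuracy "drifts" after the
    # same model is subsequently fine-tuned on domain B. Naive sequential
    # training on toy data -- illustrative only, not ModelBrew's method.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_domain(shift):
        # Hypothetical toy binary task; the shift moves the decision
        # boundary so the two "domains" genuinely conflict.
        x = torch.randn(512, 16) + shift
        y = (x.sum(dim=1) > shift * 16).long()
        return x, y

    def train(model, x, y, steps=200):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
    xa, ya = make_domain(0.0)    # domain A
    xb, yb = make_domain(2.0)    # domain B

    train(model, xa, ya)         # fine-tune on A
    acc_before = accuracy(model, xa, ya)

    train(model, xb, yb)         # then fine-tune on B, no replay of A
    acc_after = accuracy(model, xa, ya)

    drift = (acc_after - acc_before) * 100
    print(f"domain A: {acc_before:.3f} -> {acc_after:.3f} (drift {drift:+.2f}%)")

With naive sequential training, domain-A accuracy typically collapses after the pass over domain B; a continual-learning method is judged by how close it keeps that drift number to zero.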

Try it: modelbrew.ai

