
> I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.

> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.

Maybe I'm missing something, but why wouldn't that be expected? Chat history isn't their only source of information: these models are trained on scraped public data. Unless there's zero information about you and your family on the public internet (in which case - bravo!), I would expect even a "fresh" LLM, with nothing from you at all, to know something about you.



I think you are underestimating how notable a person needs to be for their information to be baked into a model.


LLMs can learn from a single example.

https://www.fast.ai/posts/2023-09-04-learning-jumps/
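If you want a feel for what "learning from a single example" means in practice, here's a rough sketch (assuming the Hugging Face transformers library, gpt2 as an arbitrary small model, and a made-up sentence): measure the loss on one sentence, take a single gradient step on it, and measure again. A sharp drop means the model picked it up from that one exposure.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Illustrative only: any small causal LM would do; gpt2 is an arbitrary choice.
  tok = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  # A made-up "personal" sentence the base model has never seen.
  batch = tok("Jane Doe lives on Example Street and keeps three parrots.",
              return_tensors="pt")

  def loss_on_example():
      with torch.no_grad():
          return model(**batch, labels=batch["input_ids"]).loss.item()

  print("loss before:", loss_on_example())

  # One optimizer step on that single example.
  opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
  model(**batch, labels=batch["input_ids"]).loss.backward()
  opt.step()
  opt.zero_grad()

  print("loss after one step:", loss_on_example())

The fast.ai post does the same kind of measurement across a full fine-tuning run and finds per-example loss dropping sharply right after an example is first seen.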


That doesn’t mean they learn from every single example.



