
How do you know they are observing false inputs, as opposed to creating false outputs? (i.e., acting as if they have seen hallucinations)

How do you know that the LLM is not observing false inputs but rather creating false outputs? Would an LLM that tells you very convincingly how it obtained a piece of false information make you change your mind?

> This experience is available to you and is well documented.

You are misunderstanding what I'm asking. Sure, drug-induced hallucinations in humans are very well documented. What I'm asking is whether this purported difference between "hallucinating on the inputs" and "creating false outputs" is a meaningful distinction.




