
If by "artificial intelligence" you mean that an LLM can simulate some aspect of what is considered intelligent behavior, no. Clearly LLMs can do that. But then so can a Markov chain.

If by "artificial intelligence" you mean something like the computer in Star Trek - essentially a sentient and self-aware being - then yes, that is a myth. That isn't what LLMs are. Although plenty of people believe otherwise, for whatever reason.

The problem is that because LLMs can use and respond to natural language, we humans are hardwired to see them as the latter and to anthropomorphize them. We imagine that if we give them a problem, there's basically a little man inside the machine smart enough to understand it, searching through its data and trying to solve it the way a human would. But no, the only thing they're doing is constructing semantically correct output to match an input.
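
That "constructing output" step is literally a loop that appends one sampled token at a time. A sketch using the Hugging Face transformers library with GPT-2 (assuming torch and transformers are installed; the prompt and sample length are arbitrary):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The cat sat on", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]       # a score for every token in the vocabulary
            probs = torch.softmax(logits, dim=-1)
            next_id = torch.multinomial(probs, 1)   # sample one next token; no goal, no model of the problem
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))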

And it's wild that it works as well as it does, but most of that appearance of intelligence comes down to training on human effort, and to our own assumptions and biases.



It really goes to show how much humans associate intelligence with language — that if it sounds intelligent, then it must be intelligent.


It doesn't help that they've been trained to refer to themselves in ways that imply a sense of self-awareness ("as a large language model, I") or that they employ emotional language, which is far more effective at influencing people than rationality.

If people on HN can believe LLMs are sentient, sapient and intelligent beings (even more so than other humans, I suspect), then there isn't much hope for average people caught in the intersection of LLM marketing, a hundred years of pop sci-fi cultural conditioning, and a million years of primate evolution.



