1) LLMs cannot do everything humans can, but
2) There's no fundamental reason preventing some future technology from doing everything humans can, and
3) LLMs are explicitly designed and trained to mimic human capabilities in a fully general sense.
Point 2) is the "or else magic exists" bit; point 3) says you need a more specific reason to justify the assertion that LLMs can't create new concepts/abstractions, given that they're trained to achieve just that.
Note: I read OP as saying they fundamentally can't and thus never will. If they meant just that the current breed can't, I'm not going to dispute it.
> 3) LLMs are explicitly designed and trained to mimic human capabilities in a fully general sense.
This is wrong: LLMs are trained to mimic human writing, not human capabilities. Writing is just the end result, not the inner workings, of a human; most of what we do happens before we write it down.
You could argue that writing captures everything about humans, but that is another belief you have to add to your takes. So first that LLMs are explicitly designed to mimic human writing, and then that human writing captures human capabilities in a fully general sense.
It's more than that. The overall goal function in LLM training judges a predicted text continuation by whether it looks OK to humans, in the fully general sense of that statement. This naturally captures all human capabilities that are observable through textual (and now multimodal) communication, including creating new abstractions and concepts, as well as thinking, reasoning, even feeling.
Whether or not they're good at it or have anything comparable to our internal cognitive processes is a different, broader topic - but the goal function on the outside, applying tremendous optimization pressure to a big bag of floats, is both beautifully simple and unexpectedly powerful.
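To make "the goal function on the outside" concrete, here's a minimal sketch of the standard next-token prediction objective (PyTorch; the tiny model and random tokens are placeholders for illustration, not any lab's actual setup):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in for any autoregressive language model.
    class TinyLM(nn.Module):
        def __init__(self, vocab_size=256, dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, vocab_size)

        def forward(self, tokens):              # tokens: (batch, seq)
            h, _ = self.rnn(self.embed(tokens))
            return self.head(h)                 # logits: (batch, seq, vocab)

    model = TinyLM()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    tokens = torch.randint(0, 256, (8, 33))          # stand-in for real text
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token

    logits = model(inputs)
    loss = F.cross_entropy(logits.reshape(-1, 256), targets.reshape(-1))
    loss.backward()
    opt.step()

That one loss, applied over trillions of tokens of human text, is the "tremendous optimization pressure"; anything that helps predict what a human would write next is fair game for the model to learn.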
Humans are trained on the real world, with real-world sensors and the ability to act on that world. A baby starts by training hearing, touching (lots of that), smelling, tasting, etc. Abstract stuff comes waaayyyyy later.
LLMs are trained on our intercepted communication - and even then only the formal part that uses words.
When a human forms sentences, it is from a deep model of the real world. Okay, people are also capable of talking about things they don't actually know and have only read about, in which case they have a superficial understanding and unwarranted confidence similar to AI...
All true, but note I didn't make any claims on internal mechanics of LLMs here - only on the observable, external ones, and the nature of the training process.
Do consider, however, that even the "formal part that uses words" of human communication, i.e. language, is strongly correlated with our experience of the real world. Things people write aren't arbitrary. Languages aren't arbitrary. The words we use, their structure, similarities across languages and topics, turns of phrase, the things we say and the things we don't say, even the greatest lies, they all carry information about the world we live in. It's not unreasonable to expect a training process as broad and intense as that of LLMs to pick up on that.
I said nothing about internals earlier, but I'll say now: LLMs do actually form a "deep model of the real world", at least in terms of concepts and abstractions. That was empirically demonstrated ~2 years ago; there's e.g. research done by Anthropic where they literally find distinct concepts within the neural network, observe their relationships, and even suppress and amplify them on demand. So that ship has already sailed; it's surprising to see people still think LLMs don't do concepts or don't have internal world models.
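To illustrate what "suppress and amplify them on demand" means mechanically, here's a toy sketch of steering along a concept direction in activation space. The tensors and the steer() helper are made up for illustration; Anthropic's actual work locates such features with sparse autoencoders inside a production model rather than using a hand-picked vector:

    import torch

    hidden = torch.randn(1, 16, 64)     # fake activations: (batch, seq, dim)
    concept = torch.randn(64)           # hypothetical "concept" direction
    concept = concept / concept.norm()

    def steer(h, direction, strength):
        # Rescale the component of h along the concept direction:
        # strength > 1 amplifies it, strength = 0 suppresses it.
        coeff = (h * direction).sum(-1, keepdim=True)
        return h + (strength - 1.0) * coeff * direction

    amplified  = steer(hidden, concept, strength=4.0)
    suppressed = steer(hidden, concept, strength=0.0)

In the real setting the steered activations are fed back into the model's forward pass, which is how you get outputs where a given concept is exaggerated or absent.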