parent is only half wrong: given a sufficiently long iteration process, any of the ChatGPT versions will eventually start losing coherence about requests from far back in the conversation. This may be less evident when the model is used within a well-confined and (somewhat) easily self-referenced system (Blender, for example), but it is especially evident when trying to prompt GPT to write a fiction story from scratch, or otherwise work entirely on its own, without a place to store outputs and then refer back to them.
tl;dr: when writing a story, it's easier to tell ChatGPT "Rewrite this story:" and feed back its previous output than it is to reach an acceptable result from a massively detailed prompt or a long chain of iteration (see the sketch below). This trait has far-reaching consequences well beyond fiction writing.
I do understand, however, that 'long-term memory' is a very active point of discussion and development.
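To make that concrete, here's a minimal sketch of the "rewrite and feed back" loop, assuming the official openai Python package (v1-style client) and an OPENAI_API_KEY in the environment; the model name, seed text, and number of passes are placeholders I picked for illustration:

    # Each pass hands the model its own last output, so the full current
    # draft is always inside the context window instead of relying on the
    # model to remember earlier turns.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    story = "Once upon a time, a lighthouse keeper found a map."  # seed draft

    for _ in range(3):  # number of rewrite passes is arbitrary
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[{
                "role": "user",
                "content": f"Rewrite this story, improving pacing and detail:\n\n{story}",
            }],
        )
        story = response.choices[0].message.content

    print(story)

The point is that the state lives outside the model: the human (or the script) stores the draft and re-supplies it each turn, rather than trusting the chat history to hold it.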
What I am saying is that you can't get GPT to solve any problem you throw at it. What you can probably do is get it to give you a correct answer that it was trained on. Those are different things.
You can't teach it new information, and often that is required to solve a problem.