This is a beautiful piece of work. The actual data or outputs seem to be more or less...trash? Maybe too strong a word. But perhaps you are outsourcing too much critical thought to a statistical model. We are all guilty of it. But some of these are egregious, obviously derivative LLM slop. The world has more going on than whatever these models seem to believe.
Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot find it. I have interrogated the most advanced models for hours, and they cannot do the task to anywhere near the standard a smoked-out, half-asleep college freshman could manage. The models don't have sufficient capacity...yet.
The links drawn between the books are “weaker than weak” (to quote Little Richard). This is akin to just thumbing through a book and saying, “oh, look, they used the word fracture and this other book used the word crumble, let’s assign a theme.” It’s a cool idea, but it fails in the execution.
It’s an interesting thread for sure, but while reading through this I couldn’t help but think that the point of these ideas is for a person to read and consider them deeply. What is the point of having a machine do this “thinking” for us? The thinking is the point.
And that’s the problem with a lot of chatbot usage in the wild: it’s saving you from having to think about things where thinking about them is the point. E.g. hobby writing, homework, and personal correspondence. That’s obviously not the only usage, but it’s certainly the basis for some of the more common use cases, and I find that depressing as hell.
This is a software engineering forum. Most of the engineer types here lack the critical education needed to appreciate this sort of thing. I have a literary education and I’m actually shocked at how good most of these threads are.
Programmers tend to lean two ways: math-oriented or literature-oriented. The math types tend to become FAANG engineers. The literature oriented ones tend to start startups and become product managers and indie game devs and Laravel artisans.
We should try posting this on a literary discussion forum and see the responses there. I expect a lot of AI FUD and envy, but that’ll be evidence in this tool’s favor.
I had a look at that. The notion of a "collective brain" is similar to that of "civilization". It is not a novel notion, and the connections shown there are trivial and uninspiring.
Build a RAG over a significant amount of text, extract it by keyword, topic, place, date, name, etc.
… realize that it’s nonsense and the LLM is not smart enough to figure out much without a reranker and a ton of technology that tells it what to do with the data.
You can run any vector query against a RAG and you are guaranteed a response, even when the retrieved chunks are unrelated in any way.
unrelated in any way? that's not normal. have you tested the model to make sure you have sane output? unless you're using sentence-transformers (which is pretty foolproof) you have to be careful about how you pool the raw output vectors
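The "guaranteed a response" failure mode above can be sketched in a few lines. This is a toy illustration with random vectors standing in for chunk embeddings (hypothetical data, not a real RAG stack or a real embedding model): cosine-similarity top-k retrieval always hands back k chunks, whether or not any of them are actually relevant to the query.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "index": random unit vectors standing in for embeddings of
# mutually unrelated document chunks (hypothetical data).
index = rng.normal(size=(100, 64))
index /= np.linalg.norm(index, axis=1, keepdims=True)

def retrieve(query, k=3):
    """Cosine-similarity top-k: always returns k chunks, relevant or not."""
    q = query / np.linalg.norm(query)
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]

# A query vector unrelated to anything in the index still gets k "hits".
ids, scores = retrieve(rng.normal(size=64))
print(ids, scores)  # k chunk ids come back regardless of relevance
```

Nothing in the retriever itself signals "none of these chunks are related"; that judgment has to come from a reranker, a score threshold, or downstream logic, which is the point being made here.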
Take for example the OODA loop. How are the connections made here of any use? It seems like the words are semantically related but the concepts are not. And even if they are, so what?
I am missing the so what.
Now imagine a human had read all these books. They would have come up with something new, I’m pretty sure of that.
Indeed, I'm not seeing a "so what" here. LLMs make mental models cheap, but all models are wrong, and this one is too. The inclusion of Donella Meadows' book and the quote from The Guns of August are particularly tenuous.