LLMs are really good for things that don't have to be 100% correct, provided the generated artifact is checked by someone with expertise. We used one this past year to suggest places to visit in Italy. Great use case for AI: it brainstorms spots for an itinerary, and we then check whether they fit our requirements. The machine helps us think beyond the usual destinations, and we vet the results. Hallucinations are not an issue.
Hallucinations seem fundamental to how LLMs work now, so AI is probably still a ways off from being able to absorb responsibility for decisions like a human can.
I’m sure this comment will offend both pro and anti AI camps. :)