
The problem is that both constructive logic and "normal" logic are part of the training data. You might be able to say "using constructive logic, prove X". But even that depends on none of the non-constructive training data "leaking" into the part of the model that it uses for answering such a query. I don't think LLMs have hard partitions like that, so you may not get a purely constructive proof even if that's what you ask for. Worse, the non-constructive part may not be obvious.
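
To make that concrete, here is a minimal Lean 4 sketch (just my own illustration, not anything from the article) of how a non-constructive step can hide in an ordinary-looking proof. The double negation of excluded middle is provable constructively, but excluded middle itself needs the classical axiom Classical.em, and checking which axioms a proof depends on is one way to notice the leak after the fact.

    -- Constructively provable: no axioms needed.
    theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
      fun h => h (Or.inr (fun hp => h (Or.inl hp)))

    -- Not constructively provable: this "leaks" classical logic
    -- through Classical.em.
    theorem em_leaky (p : Prop) : p ∨ ¬p :=
      Classical.em p

    -- not_not_em reports no axioms; em_leaky reports classical
    -- axioms (Classical.choice among them).
    #print axioms not_not_em
    #print axioms em_leaky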




I know this reply is late, but what the hell.

Your comment is certainly correct, and I agree that the various implementations of LLMs probably cannot actually partition their attempts to find proofs into any given logical system.

My comment was more tongue-in-cheek than your response suggests; I obviously phrased it awkwardly. I was humorously hoping that, by unintentionally exploiting some previously unknown foundational rule of computer science, LLMs as 'thinking' algorithms simply could not understand or use non-constructive logic to formulate a proof.

As I said, I don't think that is actually what's going on with GPT failing to genuinely, or even convincingly, 'understand' the law of the excluded middle. It was more of a backhanded insult aimed at LLMs, and particularly at the salespeople who want to talk about them as thinking, reasoning, semi-conscious algorithmic 'beings'.




