This is pretty much how I use LLMs as well. These interactions have convinced me that while LLMs make very persuasive arguments, they are often wrong about things I know well; so much so that I would have a hard time opening PRs for code they edited without reading it carefully. Gell-Mann amnesia and all that seems appropriate here, even though that anthropomorphizes LLMs to an uncomfortable extent. At some point in the future I can see them becoming very good at recognizing my intent and also reasoning correctly. Not there yet.