Love the post. I think it comes from a good place.
> And I got to where I am thanks to people like me who wrote down and shared their knowledge openly and freely. I’ve benefited from open source. From books people have published online for free and courses they’ve given away. I’ve learned a great deal from people I chat with online, over forums, and at meetups where people give presentations to share their knowledge and work.
I can relate to that.
However, though I don't need an LLM, I have found them to be extremely useful in learning new stuff. I probably used an LLM to learn a dozen different new things, just today.
It has been generally agreed for years that different people learn best in different ways. I have found that I tend to learn very well by reading books and taking notes (and, as applicable, doing projects), and not so well from watching videos. I'd probably even prefer audio lectures over video content, if maximum learning were my goal.
I find LLM learning to be mixed. I can ask questions and seek clarification, which helps me get to a specific answer quickly, or get past misconceptions quickly. But it seems to fall somewhere in between reading books and watching videos for me -- I still feel like I learn best through books, even if it takes longer. Specifically, it feels like the fact that books are a little bit harder forces me to think deeper and/or retain more.
I do not wish for LLM learning to go away, but nor do I wish for it to replace books. I hope that many people continue to write in traditional formats.
It's interesting about text versus video -- I never ever look for video instruction for code, probably because I just came up on thick-ass books from the library and actual text on the computer in the '90s.
THAT SAID, a while back I stumbled across some Three.js video tutorials on YouTube by Wael Yasmina [0] that were so informative and crystal clear that it completely changed my opinion about learn-code-through-video. I guess it just depends on the subject matter and presentation. I'm way more open to it now, and find some odd videos on there that cover topics that never seem to come up in blog posts and searches. YMMV
LLMs can be incredible at cutting through misconceptions. I remember learning to code 20 years ago, and getting stuck building a mental model of a Hash. I remember being able to recite the definition verbatim, but I just couldn't put it into use until it eventually clicked after what felt like an eternity.
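(For anyone stuck at the same stage: the mental model that eventually clicks is just key-value lookup, labeled boxes. A minimal sketch in Ruby, where the structure is literally called Hash:)

```ruby
# A Hash maps keys to values -- a set of labeled boxes.
ages = { "alice" => 30, "bob" => 25 }

ages["carol"] = 41          # add or overwrite a key
puts ages["alice"]          # look up a value by its key
puts ages.key?("dave")      # ask whether a key exists

# Iterating visits each key/value pair in turn
ages.each { |name, age| puts "#{name} is #{age}" }
```

Reciting "a collection of key-value pairs" never helped me either; poking at a live one did.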
I think about how an LLM could have dramatically shortened that, like it did recently when it taught me Bayes' theorem.
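(Bayes' theorem is exactly the kind of thing a worked example makes concrete. A sketch in Ruby with hypothetical numbers: a test that's 99% sensitive, with a 5% false-positive rate, for a condition with 1% prevalence:)

```ruby
# Bayes: P(A|B) = P(B|A) * P(A) / P(B)
p_a             = 0.01   # P(condition): prevalence
p_b_given_a     = 0.99   # P(positive | condition): sensitivity
p_b_given_not_a = 0.05   # P(positive | no condition): false-positive rate

# Total probability of a positive test (law of total probability)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

posterior = p_b_given_a * p_a / p_b
puts posterior.round(3)  # ~0.167: a positive test still means only ~17% odds
```

The surprising smallness of that posterior is the "click" that a bare formula never delivered for me.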
This seems like an example of what the parent comment was saying, which is that everyone learns best in different ways. Learning via books works better for them, but maybe not as well for you. It still isn't clear to me that an LLM would be more effective for everyone in the circumstance you described though, and I think that's the point they were trying to make; new learning techniques are mostly helpful because of the variance in how people learn, and in practice the best choice is likely going to depend at least as much on the individual as the circumstance and topic.
I agree. We still need people like the author to write things down. But I do think that LLMs will be one of the important methods of consumption for this material. Many/most people will still just directly read what the author writes, but a large percentage of people will get it via an LLM -- and I think that's a good thing.
> I probably used an LLM to learn a dozen different new things, just today.
Did you learn 12 new things or did you find out about 12 new things? Or did you use it as a component in the learning process?
Everyone probably has a different interpretation of what it means to learn, or how to go about doing it effectively, but my hot take might be that there's not much learning going on if there's not much understanding going on, and understanding rarely comes quickly or without practice, and by extension, most reading or watching doesn't constitute learning unless it's a multifaceted activity of exploration and practice.
The ability to produce information that adds clarity to subject matter certainly can aid in learning and finding out what to learn or where to look further, but I can't learn guitar by reading about how to play guitar, nor can I learn German by exclusively listening to podcasts, and I think this is true for many things.
That one. I gotta know what I want, and what questions to ask. I’ve been self-directing my learning my whole life, and have gotten good at consulting references.
I often know an answer, but maybe not the correct answer, so I simply ask the Delphic Oracle.
I will ask it something like “Here’s how I would do it. Does this look correct? What alternatives are available?”.
You're probably completely correct and know more about the actual day to day than I do, but it sure seems like most of the famous ones were kinda jerks about sharing.
I think LLMs are helpful for understanding code. I used to spend an hour or so trying to find where something very specific was implemented; now I can just ask an LLM, and it finds it right away and can explain how the code works. This is probably the thing that has saved me the most time.
I've never learned so much and so fast as I do with LLMs.
1. Most of the learning before, especially technical related involved a lot of google searching for the information I needed. LLM here removes a lot of the friction and boring parts of the process.
2. At work I can leverage LLMs for some very mundane tasks, again, mostly related to information gathering. There was a time when I needed days to connect the dots in some very convoluted code written by your average developer. Even more to figure out the purpose of choices and how they connected to the business domain, often in situations where the relevant stakeholders and people with the know-how had left the company. This kind of work, which filled tons and tons of pages with my notes, would sincerely exhaust me. And this kind of work has been the bulk of my career, because coding was never the hard part. On this, LLMs are getting better and better. This leaves me a lot more energy to actually investigate the overall architectural decisions and technical details, both of my projects and of their dependencies (which have never been as easy to traverse).
3. Since I am less mentally exhausted (the only way to get mentally exhausted with LLMs is if you're "half vibecoding", producing tons and tons of code that you then actually review thoroughly), I have way more space to dedicate to learning. I do it both by practicing manual coding for fun, or by editing the things I don't like in the work codebases I see, or by doing more katas on Codewars and exercises on LeetCode. Also, I end up asking more questions out of sheer curiosity, questions I wouldn't have asked otherwise, and I often learn a lot of things that suddenly "click". Another thing I do is way more spaced-repetition exercises on topics I care about (such as the many odd things you can learn in a language like C, or the metaprogramming coolness you encounter in Ruby and similar) on RemNote.
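(The kind of Ruby metaprogramming oddity I mean, as one illustrative sketch: `define_method` generating methods at runtime, great spaced-repetition material because it never stops feeling slightly magical:)

```ruby
# define_method creates instance methods at runtime from data,
# instead of writing each one out by hand.
class Temperature
  def initialize(celsius)
    @c = celsius
  end

  # Generate to_celsius and to_fahrenheit in a loop
  [:celsius, :fahrenheit].each do |unit|
    define_method("to_#{unit}") do
      unit == :celsius ? @c : @c * 9.0 / 5 + 32
    end
  end
end

t = Temperature.new(100)
puts t.to_celsius     # 100
puts t.to_fahrenheit  # 212.0
```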
Honestly I don't get how you can learn less by having such a tool that removes so much friction.
But of course, if every AI naysayer conflates every LLM usage with vibecoding and with delegating all thinking and reasoning to the LLM, then sure, used like that they are a disaster. But that's on the user, not the tool.
Article doesn't really match the title; it's more of a pulpit sermon on what the author thinks newcomers need to know about learning.
But, it feels a bit random, like a mix of feel-good motivational things they want to blurt out, but very short on concrete advice. Not sure it's really useful for anyone.
I agree with the point that learning requires work. In general, everything worth doing requires work. This is one of the things I often have to remind myself, otherwise I spend the whole day 'learning' and I just read a bunch of stuff online that I then forget, instead of trying something out which I actually will learn and understand.
I agree, though you can still work toward understanding using an LLM (and take it from a skeptic) by, e.g., using it as a challenger to your ideas.
That said, I think it requires a lot of self discipline and should be complemented with other methods and sources of information to be useful. As a teacher, I really try to prevent my undergraduate students from taking the easy road of using LLMs to solve every easyish problem I give them to *learn*. Sure, they did the homework but most of them did not learn anything while doing it and they finish their first year without having learnt anything actionable regarding computer science (observe that I use a different approach with students from other areas, though I still think it is good to spend a few days without relying on LLMs).
I often use a sports analogy to land my point, which works with them, so let me share it here. If you want to learn how to run a marathon and you drive 42 km every day, then you are certainly (hopefully?) a better driver, but nowhere near running a marathon (fortunately, no one has yet challenged me with the fact that running a marathon is way less useful for getting a job than driving).
The section headed "A World Without People" is the most interesting. We all need someone to tell us we're wrong-headed every now and again. Simply because we often are, and that's perfectly fine.
With how much big tech/big corps in general are shoving AI down everyone's throats, it's not surprising people feel the need to say stuff like this. I do find them helpful for learning, but obviously they're not required.
Yes. I work with teenagers and young adults, teaching them how to program both directly and as a support skill. These are bright young people who can work through problems given enough time, but LLMs let them get through these problems in a minute or two.
I've taken, particularly with people learning programming as a support skill, to teaching them how to verify the solution rather than asking them to deal with hours of frustration while their peers don't bother. A tool is a tool, as sad as it makes me to say looking back on teaching myself how to code 20 years ago.
The hidden assumption here is that "learning programming" means replicating the author’s path: deep curiosity, lots of time, comfort asking humans, decent reading stamina. For people who already have those traits, yeah, you absolutely don’t need LLMs. But that’s a bit like a strong reader in 1995 saying "you don’t need Google to learn anything, the library is enough" - technically true, but it misses what changes when friction drops.
What LLMs do is collapse the activation energy. They don’t replace the hard work, they make it more likely you’ll start and keep going long enough for the hard work to kick in. The first 20 confusing hours are where most people bounce: you can’t even formulate a useful question for a human, you don’t know the right terms, and you feel dumb. A tool that will patiently respond to "uhh, why is this red squiggly under my thing" at 1am, 200 times in a row, is not a shortcut to mastery, it’s scaffolding to reach the point where genuine learning is even possible.
The "you won’t retain it if an LLM explains it" argument is about how people use the tool, not what the tool is. You also don’t retain it if you copy-paste Stack Overflow, or skim blog posts until something compiles. People have been doing that long before GPT. The deep understanding still comes from struggle, debugging, building mental models. An LLM can either be a summarization crutch or a Socratic tutor that keeps pushing you one step past where you are, depending on how you interact with it.
And "just talk to people" is good advice if you’re already inside the social graph of programmers, speak the language, and aren’t terrified of looking stupid. But the "nothing is sacred, everyone is eager to help" culture is unevenly distributed. For someone in the wrong geography, wrong time zone, wrong background, with no colleagues or meetups, LLMs are often the first non-judgmental contact with the field. Maybe after a few months of that, they’ll finally feel confident enough to show up in a Discord, or ask a maintainer a question.
There’s no royal road, agreed. But historically we’ve underestimated how much of the "road" was actually just gate friction: social anxiety, jargon, bad docs, hostile forums. LLMs don’t magically install kung-fu in your brain, but they do quietly remove a lot of that friction. For some people, that’s the difference between "never starts" and "actually learns the hard way."
"LLM as Socratic tutor" isn't quite right, because the LLM can't be trusted. But I have had great results with "LLM as debating partner". Basically, I try to explain the thing I'm learning and have the LLM critique me. Then I critique the LLM, because it usually says something that doesn't quite make sense (or I ask it to cite its source and it recants its statement). A few rounds of this is (I think) really helpful for cementing my understanding.