> And then, inevitably, comes the character evaluation, which goes something like this:
I saw a version of this yesterday where a commenter framed LLM-skepticism as a disappointing lack of "hacker" drive and ethos that should be applied to making "AI" toolchains work.
As you might guess, I disagreed: The "hacker" is driven not just by novelty in the problems to solve, but by wanting to understand them on more than a surface level. Messing with kludgy things until they somehow work is always a part of software engineering... but the motive and payoff come from knowing how things work, and perceiving how they could work better.
What I "fear" from LLMs-in-coding is that they will provide an unlimited flow of "mess around until it works" drudgery tasks with none of the upside. The human role will be hammering at problems which don't really have a "root cause" (except in a stochastic sense) and for which there is never any permanent or clever fix.
Would we say someone is "not really an artist" just because they don't want to spend their days reviewing generated photos for extra fingers, circling them, and hitting the "redo" button?
We have a hard enough time finding juniors (hell, non-juniors) that know how to program and design effectively.
The industry jerking itself off over Leetcode practice already stunted the growth of many by having them focus on rote memorization and gaming interviews.
With ubiquitous AI and all of these “very smart people” pushing LLMs as an alternative to coding, I fear we’re heading into an era where people don’t understand how anything works and have never been pushed to find out.
Then again, the ability of LLMs to write boilerplate may be the reset we need to cut out all of the people who never really had an interest in CS but have flocked to the industry over the last decade or so looking for an easy big paycheck.
> to cut out all of the people who never really had an interest in CS
I had assumed most of them had either filtered out at some stage (an early one being college intro CS classes), ended up employed somewhere that didn't seem to mind their output, or were perpetually circling LinkedIn as "Lemons" hunting for their next prey/employer.
My gut feeling is that messy code-gen will increase their numbers rather than decrease them. LLMs make it easier to generate an illusion of constant progress, and the humans can attribute the good parts of the output to themselves while blaming the bad parts on the LLM.
> filtered out at some stage (an early one being college intro CS classes)
Most schools' CS departments have shifted away from letting introductory CS courses perform this function; they go out of their way to court students who are unmotivated or uninterested in computer science fundamentals. Hiring rates for computer science majors are good, so anything that pushes enrollment numbers up makes the school look better on average.
That's why intro courses (which were often already paced painfully slowly for anyone with talent or interest, even without any prior experience) are being split into more gradual sequences, why Python has gradually replaced Scheme virtually everywhere in schools (access to libraries subordinating fundamental understanding, even in academia), why the major's math requirements have been relaxed, etc.
Undergraduate computer science classrooms are increasingly full of mercenaries who not only don't give a shit about computer science, but lack basic curiosity about computation.
From my dated experience in a CS-adjacent major, I'm torn between "that's bad, people need to care about the craft" versus "that's good, CS was a bit too ivory-tower/theory focused".
As someone who ended up getting two bachelor's degrees so that I could somewhat deeply explore diverse subjects, I think schools would do well to have strong, distinct programs in:
- computer science
- computer engineering
- software engineering
- mathematics
- some kind(s) of interdisciplinary programs that interweave computing with fine arts, liberal arts, or business, e.g.:
  - digital humanities
  - information science
  - idk what other disciplines
and to generously cross-list courses under multiple headings when a course taught in one department is highly relevant in another, for use as electives in adjacent minors and majors.
IIRC, when I was in school, my university only had programs in "computer science", "electrical and computer engineering", "management information systems", "mathematics", and an experimental interdisciplinary thing they called "information science, technology, and the arts". Since then, they've created a "software engineering" major, which I imagine may have alleviated some of the misalignment I saw in my computer science classes.
I loved the great range of theory classes available to me, and they were my favorite electives. If there had been more (e.g., in programming language design, type theory, or functional programming), I definitely would have taken them. But if we'd had a software engineering program, I likely would have tried to minor in that as well!
To me, computer science is an old-school liberal art (like geometry and arithmetic) that specialists typically pursue as a formal science (that is, a science of logical structure rather than experimentation, like mathematics or Chomskyan grammar). The engineering elements that I see as vital to computer science per se are not really software engineering in the broadest sense, but mostly fundamentals of computing that are taught in most computer science programs already (compilers, operating systems, binary operations, the basic organization of CPUs, mainframes, etc.).
My computer science program technically had only one course on software engineering per se, and I think that's not enough even within a "computer science" program; schools should really offer more. But I think the most beneficial way to provide courses of broader interest is with "clear but porous" boundaries between the various members of this cluster of related disciplines, rather than revising core computer science curricula to court students who aren't really interested in computer science per se.
> What I "fear" from LLMs-in-coding is that they will provide an unlimited flow of "mess around until it works" drudgery tasks with none of the upside.
I feel like it's very true to the hacker spirit to spend more time customizing your text editor than actually programming, so I guess this is just the natural extension.
Even when 100% issue-oriented (that is, spending no time on editor customizations or on developing other skills and toolkits), consider the difference between:
1. This thing at work broke. Understand why it broke, and fix it in a way that sticks and has preventative power. In the rare case where the cause is extremely shallow, like a typo, at least the fix is still reliable.
2. This thing at work broke. The LLM zigged when it should have zagged, for no obvious reason. There is plausible-looking code that is wrong in a way that doesn't map to any human (mis-)understanding (a hypothetical sketch below). Tweak it and hope for the best.
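To make (2) concrete, here is a purely hypothetical sketch (mine, not from any real incident) of the kind of plausible-looking bug I mean:

    # Hypothetical example: reads like the textbook median, but
    # averages the wrong pair of elements for even-length input.
    def median(xs):
        xs = sorted(xs)
        n = len(xs)
        if n % 2 == 1:
            return xs[n // 2]
        # Correct would be (xs[n // 2 - 1] + xs[n // 2]) / 2. This
        # version even passes a quick two-element test, because the
        # negative index wraps around to the right value by accident.
        return (xs[n // 2 - 1] + xs[n // 2 - 2]) / 2

    median([1, 2])        # 1.5 -- happens to be right
    median([1, 2, 3, 4])  # 1.5 -- should be 2.5

There is no coherent mental model behind that index, so there is nothing to "understand"; you just spot it, patch it, and wait for the next one.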
There’s plenty of understanding we still need to build in order to steer the agents precisely, rather than, as you put it, messing around until it works. Some people are actively working on it, while others make a point of looking the other way.