> He’s merely conflating an adoption curve with capabilities.
Sure, programmers would still adopt LLMs faster than the rest of the workforce whether or not LLMs were good at writing code. But you have to credit at least some of that adoption rate to the fact that LLMs are significantly better at text (e.g. code) generation than they are at most other white-collar tasks (e.g. using a web browser).
> I've seen organizations where 300 of 500 people could effectively be replaced by AI, just by having some of the remaining 200 orchestrate and manage automation workflows that are trivially within the capabilities of current frontier models
Curious, which industries? And what capabilities do LLMs offer for automating these positions that previous technologies did not?
'Bullshit jobs' and the potential to automate them are very real, but I think many of them could have been automated long before LLMs, and I don't think the introduction of LLMs is going to solve the bottleneck that prevents jobs like these from being automated.
The percentage of jobs that are actually bullshit is quite different from the percentage of jobs the person making the claim thinks are bullshit merely because they are not that person's own job.
Which is, of course, conveniently never a bullshit job but a Very Important One.