Yeah, I see both those points and really I agree with both. Actually, I think problem 1 is exacerbating problem 2 by a lot - I get just as mad at the postmillennial dudebro with the get-rich-quick-on-AI scam video as I do with the AI-MBAs of the world.
Actually, that's a lie. The MBAs are still worse. They ought to know better at least.
All I'm getting at is that while we put totally legitimate backpressure on the hype cycle, we should at the same time be able to talk about and develop those elements of this new tech that will benefit us. Not "us the tech VCs" (I am not one of them) but "us the engineers and creatives".
Yes, it's disruptive. Yes, it's already caused significant damage to our world, in a lot of ways. I'm not at all trying to downplay that. But we have two ways this goes:
- people (individuals) manage to adopt and leverage this tech to their own benefit and the benefit of the commons. Large AI companies develop their models and capture large sectors of industry, but the diffusion of the disruption means that individuals have also been empowered, in many ways we can't even predict yet.
- people (individuals) fight tooth and nail against this tech, and lose the battle to create laws that will contain it (because let's be honest, our leadership was captured by private interests long ago, and OpenAI / MSFT / Google / Meta have deep enough pockets to buy the legislature). Large AI companies still develop their models and capture whole sectors of industry, but this time they go unchecked, thanks to a fragile and damaged AI industry in the commons. We learn too late that the window to make use of this stuff has closed: all the powerful stuff is gated behind corporate doors, and there ARE laws about AI now, but those laws mostly make it impossible to challenge the entrenched powers (kinda like they do now with pre-AI tech - patent law and the legal threats to power that the EFF is constantly battling).
If we do not begin to steer towards a robust open conversation about creating and using these models, it's only going to empower the people we're already worried about empowering. Yes, we need to check the spread of "AI in fucking everything". Yes, we need to do something about scraping all data everywhere all the time for free. But if we don't adopt the new weapon in the information space, we'll just be left with digital muskets versus armies of indefatigable robots with heat-seeking satellite munitions. Metaphorically(?) speaking.
> Actually, I think problem 1 is exacerbating problem 2 by a lot
100%, the fear-mongering is just there to trigger rallies of investment, both in stock price and funding. What sounds bad to us ("AI took my jerb!") sounds great to the C-suite.
I think you might be overestimating the power of AI a little. It's really good at creating flashy things - nice-looking videos and code - but the reasoning and logic are still lacking. I don't see it replacing human oversight anytime soon.
Oh, neither do I. We see eye to eye on this point - it isn't good at the things people have learned to be good at, and that's a good thing.
What it excels at is empowering people with good ideas about architecture and function to explore them without being burdened by SCRUM, or managers, or other such trappings of large orgs. A solo dev, who has a hot take on a new way to structure a cluster or iterate on a dev tool, can just throw the pasta rather than spend tons of time nitpicking boilerplate and details with a team of 10. Someone who uses computers a lot but doesn't know how to do specific thing x or y can now discover that in seconds, with full documentation and annotations and (most importantly) links to relevant non-AI learning material.
What I feel like people are getting wrong most is this idea that AI is coming for your job and it's going to be a powerslave to the MBA types, who can then kick the engineers out of the picture. It's not happening (if anything, enabling smaller teams to get more done is going to deprecate the large org outside of the places where it's actually needed). That's the bubble, and while gargantuan amounts of money go to these AI startups, it's all going to fall on its face when they realize that what AI allows us to do is bootstrap good projects without megalith VC bucks.