- it adds superfluous logic that is assumed to be needed but isn’t
- as a result the code is more complex, verbose, and harder to follow
- it doesn’t quite match the domain, because it makes a bunch of assumptions that aren’t true in this particular domain
They’re the kinds of things that can easily be missed on a first pass through the code but end up adding a lot of accidental complexity that bites you later.
When reading an unfamiliar codebase we tend to assume that a certain bit of logic is there for a good reason, and that helps you understand what the system is trying to do. With generated codebases we can’t really assume that anymore unless the code has been thoroughly audited/reviewed/rewritten, at which point I find it’s easier to just write the code myself.
This has been my experience as well. But, these are things we developers care about.
Coding aside, LLMs aren't very good at following nice practices in general unless explicitly prompted to. For example, if you ask an LLM to create an error modal from scratch, will it also make the text selectable, support Ctrl+C to copy it, or add a copy-message button? Maybe this is a bad example, but they usually don't do things like this unless you explicitly ask them to. I don't personally care too much about this, but I think it's noteworthy in the context of laypeople using LLMs to vibe code.
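To make it concrete, here's roughly the kind of thing I mean; a hypothetical sketch in plain TypeScript/DOM, where showErrorModal and all the styling are made up for illustration. The selectable text, the Ctrl+C behaviour, and the copy button are exactly the bits that tend to be left out unless you prompt for them:

```typescript
// Hypothetical sketch of an error modal with the affordances described
// above. showErrorModal and the styling are invented for illustration.
function showErrorModal(message: string): void {
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;display:flex;align-items:center;" +
    "justify-content:center;background:rgba(0,0,0,0.5)";

  const box = document.createElement("div");
  box.style.cssText =
    "background:#fff;padding:1rem;border-radius:8px;max-width:32rem";

  // Keep the message selectable so Ctrl+C works as expected.
  const text = document.createElement("pre");
  text.textContent = message;
  text.style.userSelect = "text";
  text.style.whiteSpace = "pre-wrap";

  // An explicit copy button, for users who won't select-and-copy.
  const copyButton = document.createElement("button");
  copyButton.textContent = "Copy message";
  copyButton.onclick = () => void navigator.clipboard.writeText(message);

  const closeButton = document.createElement("button");
  closeButton.textContent = "Close";
  closeButton.onclick = () => overlay.remove();

  box.append(text, copyButton, closeButton);
  overlay.append(box);
  document.body.append(overlay);
}
```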
> it’d be wasteful for evolution to only use the brain for computation
Even what we consciously experience of the brain is really only a tiny part of it.
The little language centre and the capacity to imagine are only a tiny subset of a multitude of brain functions, and yet we believe that those two functions make up “me”. Actually it’s just those two functions telling a story that they are “me”.
A common trick is that the first click on the X will go to the ad, but if you return and click the X again it will close, gaslighting you into thinking you just misclicked the first time.
Another trick I’ve noticed in the Reddit app is that the tappable area is much larger for ads than for normal posts. If you tap even near the ad, it takes you to the ad.
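Part of why the first trick is so common is that it's trivial to implement. A hypothetical sketch of what it might look like; all the names and IDs here are made up, this isn't from any real ad SDK:

```typescript
// Hypothetical sketch of the two-tap "X" dark pattern described above.
// The element IDs and adUrl are invented for illustration.
const adContainer = document.getElementById("ad")!;
const closeButton = document.getElementById("ad-close")!;
const adUrl = "https://example.com/landing-page";

let dismissAttempts = 0;

closeButton.addEventListener("click", () => {
  dismissAttempts += 1;
  if (dismissAttempts === 1) {
    // First tap: treat it as a "misclick" and open the ad instead.
    window.open(adUrl, "_blank");
  } else {
    // Second tap: actually close, so the user blames their own aim.
    adContainer.remove();
  }
});
```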
> Every previous job I've had has a similar pattern. The engineer is not supposed to engage directly with the customer.
Chiming in to say I’ve experienced the same.
A coworker who became a good friend ended up on a PIP and was subsequently fired for “not performing”, soon after he built a small tool for a non-technical team that really helped them do their job quicker. He wasn’t doing exactly as he was told, and I guess that’s considered not performing.
Coincidentally the person who pushed for him to be fired was an ex-Google middle manager.
I’ve also very commonly seen this weird stigma around engineers, as if we’re a bit unintelligent when it comes to what users want.
Maybe there is something to higher-ups having more knowledge of the business processes and the bigger picture, but I’m not convinced it isn’t also largely about insecurity and power issues.
If you do something successful that your manager didn’t think of and your manager is insecure about their own abilities, good chance they’ll feel threatened.
> - this argument may well be stuck in the collective unconscious of lots of people (albeit in the religious context)
Another example of such a belief is that "humans are inherently evil" which seems to have been planted in Western society by the concept of original sin. Interestingly the idea that sin was about our inherent badness didn't really arise until the struggle against Gnosticism [1] hundreds of years after Jesus died.
Now the belief is pervasive in secular society thanks to stories like "Lord of the Flies".
It's fascinating how, even though we can call ourselves non-religious, we can still carry these beliefs around.
If you're still in Sydney, I'd argue that large numbers of people paddled out off Bondi Beach because of a pervasive Australian belief that there are always a few arseholes but most people are fundamentally good, and that community support is better than nothing.
You likely saw a couple of extremists repeatedly tackled and then dropped by the public and police, and people running in, almost in real time, to help victims.
That's somewhat contrary to the bleak view that "humans are inherently evil".
Maybe the message of Lord of the Flies was that nuclear weapons and the Cold War depressed at least one author and that boys need mentors.
Don't get me wrong, I don't think humans are inherently evil. In fact in times of crisis (like the one you mention) we do tend to come together and I think that's evidence that the belief is incorrect.
I just had a discussion the other day with somebody who outright told me that they think humans are inherently evil and must be managed under a system to keep them in order. I don't think it's an uncommon belief, nor do I think the world is bleak because that belief exists; it's just a mistaken belief.
I would argue that you see the belief raise its head far more when people are interacting with others they don't consider part of their "in-group".
Same, I think there's an idealistic belief in people who write those tickets that something can be perfectly specified upfront.
Maybe for the most mundane, repetitive tasks that's true.
But I'd argue that the code is the full specification, so if you're going to fully specify it you might as well just write the code, and then you'll actually be confronted with your mistaken assumptions.
> I suspect the wealthy think they can shield themselves by exerting control over
Agreed, and I think this is the result of a naive belief we humans tend to have: that controlling thoughts can control reality. Politicians still live by this belief, but eventually reality and lived experience do catch up. By that time all trust is long gone.
It would be kinda funny, if it weren't so tragic, how economists will argue both "[productivity improvements] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs".
I think the idea of dollars as purely a trading medium, where absolute prices don't matter, wouldn't be such an issue if wages weren't always the last thing to rise with inflation.
As it is now, anyone with assets is barely affected by inflation, while those who earn a living from wages have their livelihood covertly eroded over time.
Yeah, from the perspective of the ultra-wealthy, we humans are already pretty worthless, and they'll be glad to get rid of us.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
I first saw this about 15 years ago and it had a profound impact on me. It's stuck with me ever since.
"Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts."
Yeah I know it's an unrealistic ideal but it's fun to think about.
That said, my theory about power and privilege is that they're actually just symptoms of a deep fear of death. The pursuit of more money/power/status never lets up because no amount of money/power/status can satiate that fear, yet somehow there's a naive belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.
Going off your earlier comment, what if instead of a revolution, the oligarchs just get hooked up to a simulation where they can pretend to rule over the rest of humanity forever? Or what if this already happened and we're just the peasants in the simulation?
This would make a good Black Mirror episode. The character lives in a totally dystopian world, making f'd-up moral choices, and their choices make the world worse. It seems nightmarish to us, the viewers. Then towards the end we pull back: they unplug and are living in a utopia. They grab a snack, are greeted by people who love and care about them, then plug back in and go back to being their dystopian tech-bro ideal self in their dream world.