Hacker News | wyum's comments

I was curious why we don't hear about locust swarms anymore in the US

Turns out this species, the Rocky Mountain locust, was accidentally driven extinct by settlers. Although the swarms could be huge, they had a small, concentrated breeding ground that was destroyed by farming and cattle. Wikipedia says the last specimen was collected in 1904.


So, de-extincting them would probably be too controversial, since they were a pest?

>The locusts not only ate the grass and valuable crops, but also leather, wood, sheep's wool, and—in extreme cases—even clothes off peoples' backs

https://en.wikipedia.org/wiki/Rocky_Mountain_locust#Last_maj...

What if a mad scientist were to do that? :)

UPD Found a reddit discussion on the topic: https://old.reddit.com/r/megafaunarewilding/comments/17gxmux...


I believe there is a Verification Complexity Barrier

As you add components to a system, the time it takes to verify that the components work together increases superlinearly.

At a certain point, the verification complexity takes off. You literally run out of time to verify everything.

AI coding agents hit this barrier faster than ever, because of how quickly they can generate components (and how poorly they manage complexity).

I think verification is now the problem of agentic software engineering. I think formal methods will help, but I don't see how they will apply to messy situations like end-to-end UI testing or interactions between the system and the real world.

I posted more detailed thoughts on X: https://x.com/i/status/2027771813346820349


Hi William, thank you for the interesting post!

> At a certain point, the verification complexity takes off. You literally run out of time to verify everything.

Could you elaborate on this? Your post makes it sound as if the verification complexity diverges as the number of components n approaches some finite value n_0, but that seems unlikely to me. If, in contrast, the verification complexity remains finite at n_0, then verification should still be possible in finite time, shouldn't it? Yes, it might be a considerable amount of time, but I assume your theorem doesn't predict lower bounds for the involved constants?

Either way, this entire discussion assumes n will increase as more and more software gets written by AI. Couldn't it also be the opposite, though? AI might also lead us to removing unnecessarily complex dependencies from our software supply chain or stripping them down to the few features we need.


Thank you for reading and the very thoughtful observations.

>> At a certain point, the verification complexity takes off. You literally run out of time to verify everything.

> Could you elaborate on this?

I plan to publish a thorough post with an interactive model. Whether human or AI, you are capacity constrained, and I glossed over `C` (capacity within a given timeframe) in the X post.

You are correct that verification complexity remains finite at n_0. The barrier is practical: n_0 is where V(n) exceeds your available capacity C. If V(n) = n^(1+k), then n_0 = C^(1/(1+k)). Doubling your capacity doesn't double n_0; it increases n_0 by a factor of 2^(1/(1+k)), which is always less than 2.

So the barrier always exists for, say, a given "dev year" or "token budget," and the cost to push it further grows superlinearly. It's not absolutely immovable, but moving it gets progressively harder. That's what I mean by "literally run out of time." At any given capacity, there is a finite n beyond which complete verification is not possible. Expanding capacity buys diminishing returns.
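A minimal sketch of that capacity argument (the exponent k and capacity C here are illustrative values, not measurements):

```python
# Capacity-limited verification barrier: V(n) = n**(1+k) is the cost to
# verify an n-component system; C is verification capacity per timeframe.
# Full verification is possible only while V(n) <= C, i.e. n <= C**(1/(1+k)).

def n0(C: float, k: float = 0.5) -> float:
    """Largest n that can be fully verified with capacity C."""
    return C ** (1 / (1 + k))

C = 1_000
print(n0(C))              # 1000**(2/3) = 100 components
print(n0(2 * C) / n0(C))  # 2**(2/3) ~= 1.587: doubling capacity grows the
                          # verifiable system size by less than 2x
```

The diminishing returns show up in the second line: each doubling of capacity buys a smaller-than-2x increase in the largest fully verifiable system.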

> Either way, this entire discussion assumes n will increase as more and more software gets written by AI. Couldn't it also be the opposite, though?

You are getting at my core motivation for exploring this question.

Verification requires a definition of "done" and I wonder if it will ever be possible (or desirable) for AI to define done on its own, let alone verify it and simplify software based on its understanding of our needs.

You make a great point that we are not required to add more components and "go right" along the curve. We can choose to simplify, and that is absolutely the right takeaway. AI has made many people believe that by generating more code at a faster pace they are accomplishing more. But that's not how software productivity should be judged.

To answer your question about assumptions, while AI can certainly be prompted to help reduce n or k in isolated cases where "done" is very clear, I don't think it's realistic to expect this in aggregate for complex systems where "done" is subjective and dynamic.

I'm speaking mainly in the context of commercial software dev here, informed by my lived experience building hundreds of apps. I often say software projects have a fractal complexity. We're constantly identifying new needs and broader scope the deeper we go, not to mention pivots and specific customer asks. You rarely get to stand still.

I don't mean to be pessimistic, but my hunch is that complexity growth outpaces the rate of simplification in almost every software project. This model attempts to explain why that is so. And notably, simplification itself requires verification and so it is in a sense part of the verification cost, too.


Thank you for the post; it's a good read. I'm working on governance/validation layers for multi-LLM systems and making them observable, so your comments on runaway AIs resonated with me. My research points to reputation- and stake-based consensus mechanisms as the validation layer, applied pre-inference or pre-execution, and with enough "decision liquidity" the time to verify decisions can be skipped via reputation alone, i.e., decision precedence.


Very deep post on the problem. AI seems to worsen the issue of software correctness, and given the nature of business, it won't ever be solved.


My blog: https://williamhuster.com

I have a few deeper posts that I'm proud of. My favorite is an exploration of battle probabilities in the board game War Room.


I think you and the article actually agree and you are arguing only with their use of the word "development."

The article uses "development" to refer only to the part where code is generated, while you are saying "development" is the process as a whole.

You both agree that latency in the real-world validation feedback loop leads to longer cycles and fewer promising solutions and that is the bottleneck.


I was surprised to learn that the British had finished laying undersea telegraph cables around the world as early as 1902! Incredible.


Yes, the All Red Line <https://en.wikipedia.org/wiki/All_Red_Line> was a remarkable achievement.

Most people reading this will assume that the entire world has long since become accessible by radio. Not so. Actually, the ignorance has multiple layers:

* Part of the general public: "Cell phones work anywhere in the world". I've seen this even on Hacker News, during discussions of Apple's emergency satellite texting service.

* Another part of the public: "Cell phones have dropouts, but a radio can reach anywhere". Many signals can't cross the horizon, and those that can (MW and SW) can be very difficult to tune into.

* The most savvy among the general public: "The military must have radios that work anywhere". The military has various ways to communicate around the globe, but it's very expensive, bulky, or both; think Air Force One, a large network of ground relay stations and geosynchronous satellites, etc. Even a few miles can be difficult; in Iraq and Afghanistan, US forces outside the wire had no assurance of being always able to talk to the FOB. This is just voice and perhaps low-bandwidth data, mind you. Yes, it is possible for a drone pilot in Nevada to control a vehicle in the Middle East, but the films that show high-quality real-time video from a drone being transmitted to a Pentagon conference room from which decision-makers can make instantaneous decisions on what to hit are fiction.

This is why the military has bought into Starlink/Starshield so quickly. Suddenly, the status quo has moved from the above to voice, video, and broadband being available anywhere on the planet, whether air, sea, or land, possibly excepting a dense forest. While sailors appreciate having off-duty Starlink being rolled out aboard ship (so much so that sometimes they illicitly jump the gun, as we recently learned), the real breakthrough is in every single patrol in Indian country, FOB, base, ship, and aircraft being able to always talk to each other via a dish the size of a medium pizza. That it has been extensively combat tested in Ukraine is a bonus; the Ukrainian officials who said that without Musk's emergency delivery of Starlink dishes early on in the war Russia would have quickly won were not joking. I am told that now every single company, school, factory, building, in Ukraine has a dish.


It's interesting - with such large defense funding, why did it take Starlink, a private company, to establish this network of satellites?

Why wasn't it done by the American military - surely this potential has been known for decades now.

They have the budget for it.


> It's interesting - with such large defense funding, why did it take Starlink, a private company, to establish this network of satellites?

SpaceX has simultaneously pioneered vastly lower launch costs by reusing rockets, and very sophisticated phased-array broadcasting technology that lets a small dish talk to thousands of satellites moving rapidly across the sky in low Earth orbit without constantly moving around. SpaceX is still the only entity on Earth reusing orbital rockets a decade after first doing so, and without that ability no one else—not even the US military—can afford to launch thousands of satellites and constantly replenish them, because they only stay up for 3-5 years.


How else would they tap them?


This is the standard Google analytics snippet. It's probably automatically minified, maybe code-golfed. In any case, the author of the page did not also write this snippet.


The funny part that people are starting to forget is that this version of the snippet was introduced after the original one (ga.js) was blocked across much of the web (back in the days when Firefox became the number-one browser and Adblock Plus was in wide use).

The new snippet was designed to break Adblock Plus, which was unable to block inline scripts (and simply blocking analytics.js caused JS failures in the same script block). So it was a malicious dark pattern. It was eventually bypassed, of course, which required fake replacements like https://github.com/gorhill/uBlock/blob/master/src/web_access...


Good point, didn't know that. Figured it was a cheeky dev.


The deduction is flawed because the success of one method (thinking with writing) does not necessarily disprove the success of other methods (such as thinking without writing).


You're objecting to the premise, not the conclusion*. The deduction is valid for the premise (the part in the 'if'). Well, assuming you accept that an idea that can be "more complete" isn't "fully formed", but I'd say that's definitional.

* Although it's not really right to use this kind of language here (premise, conclusion, deduction). It's a casual statement, so I suppose people can somewhat reasonably argue about it, but the assertion is tautological ('if something is incomplete, it isn't fully formed').


The keyword is "always". IF writing about something always improves it, that implies it cannot ever reach full potential without writing about it.


Or with writing about it. But there's an implicit "if you haven't already written about it". We might wonder what other implicit preconditions there are.

Similarly, if walking North always brings you closer to the North Pole, then you can never reach the North Pole without walking North, or at all. But look out for oceans.


I think you are right, and what's happening with rooftop solar in urban environments like DC (as in OP) is a case in point. By and large, the rooftop solar here is directly integrated with the grid and net metered. It is not off grid whatsoever like a diesel generator would be. It's possible to install a backup battery and inverter, but if the power goes out, you don't get to use your solar directly.

Energy companies in DC are incentivized by the government to produce energy from renewable sources. They have to pay a penalty for SRECs they fail to produce, so they have good reason to convince consumers to put solar panels on their roofs and then buy SRECs from them at a rate lower than the penalty.
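As a toy illustration of that incentive (all numbers here are made up; actual penalty and SREC prices vary by year and jurisdiction):

```python
# Utilities owe a compliance penalty per SREC they fail to produce, so
# buying SRECs from rooftop owners is cheaper whenever the market price
# sits below the penalty.
penalty_per_srec = 500   # hypothetical penalty per missing SREC
srec_price = 400         # hypothetical price paid to rooftop owners
shortfall = 120          # SRECs the utility is short this year

cost_paying_penalty = shortfall * penalty_per_srec  # 60_000
cost_buying_srecs = shortfall * srec_price          # 48_000
print(cost_paying_penalty - cost_buying_srecs)      # 12_000 saved by buying
```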


> If you spend $25K on a solar installation, does the value of your home increase by $25K?

I spent $25k to install solar in 2020 (also in DC like OP). The solar company estimated that my home value increased by $14K.

As part of the install, I bundled a "heavy-up" for $5K, which is an upgrade my house needed, but not strictly part of the renewable energy system. So total cash outlay was $30K.

I immediately got back 26% in a federal tax credit: $7.8K

So on balance, my system cost $8K cash, which was paid down by energy savings and SRECs within three years. The SRECs made the biggest impact.
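For what it's worth, the figures seem to reconcile like this; note that treating the installer's $14K home-value estimate as an offset is my reading, not something the comment states outright:

```python
# Numbers from the comment above (2020 DC install).
system_cost = 25_000
heavy_up = 5_000                  # electrical upgrade bundled into the install
outlay = system_cost + heavy_up   # 30_000 total cash out

federal_credit = 0.26 * outlay    # 2020 federal tax credit: 7_800
home_value_gain = 14_000          # installer's estimate, not cash in hand

net_cost = outlay - federal_credit - home_value_gain
print(net_cost)  # 8_200, roughly the "$8K" figure above
```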


When we went solar in DC (like the OP), the solar company offered a zero-cost option. The way this works is that the solar company owns the panels, while the customer enjoys the energy savings and IIRC a small slice of the SRECs.

