Hacker News | cpgxiii's comments

The depressing part is that these "modern practices" were essentially invented in the 1960s by defense and aerospace projects like the NTDS, LLRV/LLTV, and Digital Fly-by-Wire to produce safety-critical software, and the rest of the software industry simply ignored them until the last couple of decades.


Developers of "IR" seeking missiles have been trying to distinguish between aircraft and countermeasures since the very beginning. I say "IR" because the first really robust countermeasure-avoiding seekers were dual-band IR+UV and have been around for decades. IR+UV is useful because making a decoy that looks like jet exhaust in both IR and UV is quite difficult. Imaging "IR" is a bit more capable, particularly for all-aspect use, but once again this has existed for decades.


Imodium also does wonders for slowing things down and avoiding bowel movements, provided you use it carefully and infrequently such that you don't totally mess up your normal gut functioning.


Almost all of the complexity of PoE is fundamental. To get enough power over 100m of ethernet cable (10x the maximum USB cable length) you have to run at much higher voltages, like 48V. The same has eventually come to USB: for USB-C PD to reach 240W, it also has to use 48V.
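The voltage argument can be sketched with first-order arithmetic: cable loss is I²R, so halving the voltage quadruples the loss for the same delivered power. The 12.5 ohm loop resistance below is an assumed, ballpark figure for a 100 m Cat5e run with two pairs carrying current in parallel, not a value from any specific standard.

```python
# First-order sketch of why PoE runs at ~48 V: cable loss is I^2 * R,
# so lower supply voltages waste dramatically more power in the cable.
# LOOP_RESISTANCE_OHMS is an assumed ballpark value, not a spec figure.

LOOP_RESISTANCE_OHMS = 12.5  # assumed round-trip resistance, 100 m run
POWER_WATTS = 25.0           # power to deliver, e.g. a PoE+ class device

for volts in (12, 24, 48):
    amps = POWER_WATTS / volts                # current drawn at the source
    loss = amps ** 2 * LOOP_RESISTANCE_OHMS  # watts dissipated in the cable
    print(f"{volts:2d} V: {amps:.2f} A, {loss:5.1f} W lost in the cable")
```

With these assumed numbers, at 12V the cable would dissipate more heat than the power actually delivered, while at 48V the loss is a few watts; real PoE budgets account for voltage drop and connector losses too, so this is only the shape of the argument.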

There have long been lower-voltage "passive PoE" systems which expect a lower always-on voltage on some of the ethernet pairs (usually 12V or 24V, rarely 48V). These can be very easy to implement so long as your users can handle the setup and the incompatibility with other ethernet devices; in the most extreme case of passive PoE on 100Mb/s ethernet, you simply connect the positive pair to the device's power input and the negative pair to ground, with no additional hardware needed.


Multidrop SPE isn't going to outperform newer CAN versions, though. The practical maximum speed of a multidrop bus at useful lengths is somewhere in the sub-100Mb/s range (e.g. 10-20Mb/s), and that applies essentially equally to CAN and SPE. The only way to really get faster in a "multidrop-like" sense is with logically loop-like systems such as EtherCAT and Fibre Channel, where each network segment is point-to-point and the nodes are responsible for the routing.
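The physical ceiling can be sketched from propagation delay: CAN-style arbitration needs a dominant bit to reach the far end of the bus and the response to come back within a single bit time. The 5 ns/m figure is an assumed, typical twisted-pair value, and the sketch ignores transceiver and synchronization margins, so real limits are lower (CAN FD/XL get faster data phases only by arbitrating at the slow rate first).

```python
# First-order sketch of the multidrop speed wall: arbitration requires
# the signal round trip to fit inside one bit time. Assumes 5 ns/m
# propagation and zero transceiver/sync margin (real budgets are tighter).

PROP_DELAY_NS_PER_M = 5.0  # assumed twisted-pair propagation delay

def max_arbitration_bitrate(bus_length_m: float) -> float:
    round_trip_ns = 2.0 * bus_length_m * PROP_DELAY_NS_PER_M
    return 1e9 / round_trip_ns  # bit/s if the whole bit fits the round trip

for length_m in (10, 40, 100):
    mbps = max_arbitration_bitrate(length_m) / 1e6
    print(f"{length_m:4d} m bus: ~{mbps:.1f} Mb/s arbitration ceiling")
```

Even this optimistic model puts a 100 m bus near 1 Mb/s for arbitration, which is why the point-to-point loop topologies are the only route to substantially higher speeds.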


Are the newer CAN versions single pair with power delivery? Those are the real sweet spots of SPE.


There are several different levels of ballistic missiles.

ICBMs, for which the GBI is intended, are the most challenging to defend against and show the least interceptor success.

In contrast, we do have some pretty definitive evidence that theater and "lower" MRBM/IRBM ballistic missiles can be intercepted successfully. If you define "effective defense" as "most missiles that would cause damage are intercepted", then it is clearly possible with current technology. If you define "effective defense" as "all missiles are intercepted", then it remains beyond current technology.


If you define "effective" in terms of cost ratios: R = (cost of defense system + cost from failed intercepts) / (cost of attack system)

then R < 100 is well beyond current technology, regardless of whether the defense system is perfect or non-existent.
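The ratio above can be made concrete with a toy calculation. All the numbers here are made up purely for the sake of the arithmetic; they are not real system costs.

```python
# Toy illustration of the cost ratio R defined above, using entirely
# hypothetical numbers (not real interceptor or missile costs).

def cost_ratio(defense_cost, failed_intercept_damage, attack_cost):
    return (defense_cost + failed_intercept_damage) / attack_cost

# Hypothetical: 100 interceptors at $10M each, 10 leakers doing $50M
# damage apiece, against a salvo of 100 missiles at $1M each.
r = cost_ratio(defense_cost=100 * 10e6,
               failed_intercept_damage=10 * 50e6,
               attack_cost=100 * 1e6)
print(f"R = {r:.1f}")  # 15.0 with these made-up numbers
```

Even in this generous hypothetical the defender spends an order of magnitude more per dollar of attack, which is the asymmetry the comment is pointing at.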

There's no magic Pareto-optimal point where investing the right amount in missile defense means that starting a war against a medium-sized country makes economic sense. Russia figured this out in Ukraine, and the US figured it out in Iran.

Israel's genocide worked pretty well tactically, but is a long-term strategic disaster. If the US continues to be a democracy, polls say that it will cause us to withdraw support sometime this decade. Also, it only works if you have an incredibly asymmetric fight.


In reality the "swiss cheese" holes for major accidents often turn out to be large holes that were thought to be small at the time.

> [Fukushima] No small alignment of circumstances needed.

The tsunami is what initiated the accident, but the consequences were so severe precisely because of decades of bad decisions, many of which would have been assumed to be minor decisions at the time they were made. E.g.

- The design earthquake and tsunami threat

- Not reassessing the design earthquake and tsunami threat in light of experience

- At a national level, not identifying that different plants were being built to different design tsunami threats (an otherwise similar plant avoided damage by virtue of its taller seawall)

- At a national level, having too much trust in nuclear power industry companies, and not reconsidering that confidence after a number of serious incidents

- Design locations of emergency equipment in the plant complex (e.g. putting pumps and generators needed for emergency cooling in areas that would flood)

- Not reassessing the locations and types of emergency equipment in the plant (i.e. identifying that a flood of the complex could disable emergency cooling systems)

- At a company and national level, not having emergency plans to provide backup power and cooling flow to a damaged power plant

- At a company and national level, not having a clear hierarchy of control and objective during serious emergencies (e.g. not making/being able to make the prompt decision to start emergency cooling with sea water)

Many or all of these failures were necessary in combination for the accident to become the disaster it was. Remove just a few of those failures and the accident is prevented entirely (e.g. a taller seawall is built or retrofitted) or greatly reduced (e.g. the plant is still rendered inoperable but without multiple meltdowns and with minimal radioactive release).


To be blunt: that isn't an appropriate application of the swiss cheese model to Fukushima. It isn't a swiss cheese failure if the plant was hit by an out-of-design-spec event. Risk models won't help there. Every engineered system has design tolerances, and every system will eventually be hit by a situation outside those tolerances and fail. Risk models aren't there to overcome that reality - they are one of a number of tools for making sure that systems can tolerate the situations they were designed for.

If Japan gets traumatised and changes their risk tolerance in response then sure, that is something they could do. But from an engineering perspective it isn't a series of small circumstances leading to a failure - it is a single event that the design was never built to tolerate leading to a failure. There is a lot to learn, but there isn't a chain of small defence failures leading to an unexpected outcome. By choice, they never built defences against this so the defences aren't there to fail.

> Many or all of these failures were necessary in combination for the accident to become the disaster it was.

Most of those items on your list aren't even mistakes. Japan could reasonably re-do everything they did all over again in the same way that they could simply rebuild all the other buildings that were destroyed in much the same way they did the first time. They probably won't, but it is a perfectly reasonable option.

Again I'm going from memory with the numbers but doubling the cost of a rare disaster in a way that injures ... pretty much nobody ... is a great trade for cheap secure energy. It isn't a clear case that anything needs to change or even went wrong in the design process. Massive earthquakes and tsunamis aren't easy to deal with.


> It isn't a swiss cheese failure if it was hit by an out-of-design-spec event

First of all, the design basis accident is a design choice by the developers of the plant and the regulators. The decision process that produced that DBA was clearly faulty - the economic and social costs of the disaster have so clearly exceeded those of building to a more serious DBA.

> Again I'm going from memory with the numbers but doubling the cost of a rare disaster in a way that injures ... pretty much nobody ... is a great trade for cheap secure energy. It isn't a clear case that anything needs to change or even went wrong in the design process. Massive earthquakes and tsunamis aren't easy to deal with.

This is absolute nonsense. For the cost of maybe tens of millions at most in additional concrete to build the seawall a few meters higher, the disaster would have been avoided entirely (i.e. the plant restored to operation). With backup cooling that could have survived the tsunami (a lower expense than building a higher seawall), all that would have happened at Fukushima Daiichi is what happened at its neighbor Fukushima Daini (plant rendered inoperable, no meltdown, no significant radioactive release). Instead, we are talking about a disaster that will cost a (current) estimated $180 billion USD to clean up (and there is no way this estimate is realistic, when the methods required to perform the cleanup barely exist yet).


> The decision process that produced that DBA was clearly faulty - the economic and social costs of the disaster have so clearly exceeded those of building to a more serious DBA.

That isn't clear at all. We're effectively sampling from the entire globe and we've had 2-3 bad nuclear disasters since the 70s. Our safety standards appear to be overcautious given the relatively small amount of damage done vs ... pretty much every alternative. The designs seem to be fine. I'm still waiting to see the justification for the evacuations from Fukushima; they seemed excessive. People died.

> For the cost of maybe maybe tens of millions at most...

You haven't thought for long enough before you typed that. For this particular disaster, sure. But hardening against all the possible disasters is what needs to happen when you become less risk tolerant. It is the millions of dollars to protect against this disaster multiplied by the number of potential disasters that you have to consider. Safety is expensive.

The numbers aren't small; safety of that magnitude might not even be economically feasible, to say nothing of whether it is actually sensible. And once you get into one-in-500 or one-in-a-thousand-year events, some really catastrophic stuff starts happening that just can't be reasonably defended against. San Francisco and its fault spring to mind; I forget what sort of event that is, but it is probably once a millennium or more often.


Fukushima was designed to be constructed on a hill 30-35 meters above the ocean, but someone decided it would be cheaper to construct it at sea level to reduce water-pumping costs, and others approved this. Much later, a decade before the disaster, when all the reactor operators in Japan were asked to reinforce their security measures, those in charge of Fukushima decided to ignore the request, pushing for extensions year after year until it all blew up. Decades of bad decisions with a strong smell of corruption.

https://warp.da.ndl.go.jp/info:ndljp/pid/3856371/naiic.go.jp...

https://warp.da.ndl.go.jp/info:ndljp/pid/3856371/naiic.go.jp...

https://web.archive.org/web/20210314022059/https://carnegiee...


I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering. Eventually everything fails; time is generally against a design engineer.

Caveating that I'm not really sure it was even an out-of-design event, but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure. If you hit a design with things it wasn't designed to handle then it may reasonably fail because of that.

[0] https://en.wikipedia.org/wiki/Megatsunami homework for the interested, it is cool stuff. Japan has seen some quite large waves, 57 meters seems to be the record in recent history.


In Japan they have the "Tsunami Stones" [0] across the coast, memorials to remind future generations of the highest point the water reached.

It was negligent to construct a nuclear plant at sea level; it was just a plant waiting to be flooded, and they had ten years to design protections for such a case after being asked to reinforce measures (along with the other Japanese plants). But I can imagine the ones who were supposed to put up the money were not very collaborative (I even doubt those responsible learned the lesson).

[0] https://www.smithsonianmag.com/smart-news/century-old-warnin...

Whether it fits the cheese model or not I won't get into (note that the parent of the parent and I are different users); their negligence breaks any logic we could apply without introducing the variable of corruption behind those decades of bad decisions.


> It was negligent to construct a nuclear plant at sea level, it was just a plant waiting to be flooded,

So why did they build it there? It isn't a gentleman in a clown hat hitting himself on the head with a rubber mallet, they had a reason. These things are always trade-offs.

Maybe if they'd built it up on the hill there'd have been an earthquake, then a landslide, and the plant slides into the sea and gets waterlogged. I dunno. If we're talking about things without clearly defined bounds of risk tolerance, that is the sort of scenario that can be brought up. You're talking about negligence, but you aren't saying what tolerances this plant was built with, what you want it to be built to, or what trade-offs you want made. Once you start getting into those details it becomes a lot less obvious that Fukushima is even a bad thing (it probably is; the tech is pretty old, and my understanding is we wouldn't build a plant that way any more). It isn't possible to just demand that engineers prevent all bad outcomes; reality is too messy. The theoretical point I'm bringing up is that it isn't negligent if there are reasonable design constraints and then something outside those considerations happens and causes a failure. It is just bad luck.

The whole affair seems pretty responsible from where I sit a long way away. Fukushima is possibly the gentlest engineering disaster to ever enter the canon. It is much better than a major dam or bridge failure, for example, and again, assuming the event that caused the whole thing was unexpected, not even evidence of bad management. Most engineering failures involve a chain of horrific choices that leave the reader with tears in their eyes, not just a fairly mild "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb". And bear in mind we're scouring the world for the worst nuclear disaster in the 21st century.

And besides, they did build it above sea level.


> "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb"

This is a bit of a wild understatement. (1) the tsunami was by no means wild, as multiple posts here have referenced, and (2) the incident resulted in a number of significant injuries, not counting the deaths involved in the evacuation. And those deaths very much count - you can't hand-wave away the consequences of the evacuation on the basis of hindsight that the evacuation was larger than the final outcome necessitated.


> And those deaths very much count - you can't hand-wave away the consequences

I don't. If it is what it looks like, the government officials that ordered/organised the evacuations should be harshly censured and the next time evacuation orders should be more risk-based and executed in a safer way. What little I've gleaned suggests an appalling situation where a bunch of presumably old people were forced from their homes to their deaths. The main thing keeping me quiet on the topic is I don't speak Japanese and I don't really know what happened in detail there.


Did you read the report I posted? The PDF:

    << The Fukushima Daiichi Nuclear Power Plant construction was based on the seismological knowledge of more than 40 years ago. As research continued over the years, researchers repeatedly pointed out the high possibility of tsunami levels reaching beyond the assumptions made at the time of construction, as well as the possibility of reactor core damage in the case of such a tsunami. However, TEPCO downplayed this danger. Their countermeasures were insufficient, with no safety margin.>>

    << By 2006, NISA and TEPCO shared information on the possibility of a station blackout occurring at the Fukushima Daiichi plant should tsunami levels reach the site. They also shared an awareness of the risk of potential reactor core damage from a breakdown of sea water pumps if the magnitude of a tsunami striking the plant turned out to be greater than the assessment made by the Japan Society of Civil Engineers.>>
Even leaving aside that they ignored the original placement in order to reduce costs, using seismological reports biased to their convenience, TEPCO knew the plant was at risk; they were warned repeatedly that it was at risk. And the supposed regulator, NISA [0], conveniently closed its eyes (conveniently for some).

    << TEPCO was clearly aware of the danger of an accident. It was pointed out to them many times since 2002 that there was a high possibility that a tsunami would be larger than had been postulated, and that such a tsunami would easily cause core damage.>>
From the other URL I posted (I updated it with a cached URL; I hadn't noticed the article was deleted):

    << there appear to have been deficiencies in tsunami modeling procedures, resulting in an insufficient margin of safety at Fukushima Daiichi. A nuclear power plant built on a slope by the sea must be designed so that it is not damaged as a tsunami runs up the slope.>>
[0] https://en.wikipedia.org/wiki/Nuclear_and_Industrial_Safety_...

> the gentlest engineering disaster

The EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima; that is not the gentlest gesture toward Europeans. Japanese citizens also received their dose, and at times the more vulnerable ones were recruited by the Yakuza to clean up the zone.


> Did you read the report I put?

No, I'm just trusting that you'll be honest about what it is saying. I don't need to read a report to persuade myself that a 40 year old plant was designed based on the best available knowledge of 40 years ago. That seems like something of a given. I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.

You're not saying what tolerances you want them to design to. We both agree that there are scenarios that can and might happen. Obviously it is possible for a tsunami to take out buildings built near the shore in Japan, so it doesn't surprise me that people raised it as a risk. A lot of buildings got taken out that day. That doesn't obviously suggest negligence to me; obviously a lot of people were happy living with the risk.

> EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima

Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.


> I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.

You didn't read the report or search for information about the matter, but I have no problem repeating it for you:

General Electric's design was originally meant to be placed 30-35 meters above the ocean. Instead, TEPCO modified the design and constructed the plant (almost) at sea level, resorting to studies convenient to their purpose because it was cheaper, and this in one of the most tsunami-prone countries, with a history of waves reaching 20-30 meters. When those conveniently chosen studies were no longer justifiable, as deeper studies finally refuted them, they decided to just keep ignoring all the warnings and requests to reinforce safety. They knew the nuclear plant was in danger; they always knew it. General Electric didn't design for 30-35 meters above the ocean by coincidence. And all of this happened with a supposed regulator conveniently closing its eyes across those years, ignoring even pipes with fissures.

Well, this obviously suggests negligence to me. Decades of bad decisions with a strong smell of corruption.

> You're not saying what tolerances you want them to design to.

What about enough tolerance to avoid a meltdown of the core, especially under two events, an earthquake and a tsunami: exactly what happened after they ignored the warnings and requests to reinforce safety.

> Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.

Keep the sarcasm for other places, if you don't mind. It is not a mere gentlest engineering disaster when it reached the whole planet, which ate TEPCO's cesium-137, the Japanese especially. And it is not a mere gentlest engineering disaster when you have to force vulnerable people to go to ground zero to move contaminated land and water.


> What about enough tolerance to avoid a meltdown of the core, especially under two events, an earthquake and a tsunami: exactly what happened after they ignored the warnings and requests to reinforce safety.

I wasn't going to reply but that seems like it moves the conversation forward; so why not?

It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal. The design is still fundamentally vulnerable. Moving the reactor up 35m still leaves it vulnerable to a large enough tsunami and a big enough earthquake.

If your solution is moving the site uphill, then your design goal should be talking in terms of a 1 in X year event. If you want the risk completely mitigated then in this case it isn't relevant where the site is since the obvious way to achieve that design goal is just build something that doesn't fail when flooded. Coincidentally that seems to be the approach that the newer generation designs use - change how the cooling works so that it can't melt down in any reasonable circumstances, tsunami or otherwise.

I will note that there is a reading of your comment where you want the design to be able to tolerate this specific event. I'm ignoring that reading as unreasonable since it requires hindsight, but in the unlikely event that is what you meant then just pretend I didn't reply.

> Keep the sarcasm for other places, if you don't mind. It is not a mere gentlest engineering disaster when it reached the whole planet, which ate TEPCO's cesium-137, the Japanese especially. And it is not a mere gentlest engineering disaster when you have to force vulnerable people to go to ground zero to move contaminated land and water.

Which one do you think was gentler and a story of similar popularity as Fukushima? It is pretty usual for multiple people to actually die, with it being the engineers' responsibility, once something becomes international news. Even something as basic as a port explosion usually has a number of missing people in addition to a chunk of city being taken out. To anchor this in reality, Fukushima, a class 7 meltdown, might have done less damage than a coal plant in normal operation. Coal plants aren't pretty places and air pollution is nasty, nasty stuff.


> It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal.

My goal? My solution? My design!? You must be kidding now:

- GE original design 30-35 meters above the sea.

- Warnings about reinforce safety along one decade.

- Tsunami at Fukushima's nuclear plant, 15 meters above the sea.

> I wasn't going to reply but that seems like it moves the conversation forward; so why not?

Forward to... nothing, it seems. You just replied with hypotheticals as if the event hadn't happened, and as if it would have been impossible to avoid, with the kind of dissociative reflections that surpass cynicism. I'm the one who is not going to reply.


> Caveating that I'm not really sure it was even an out-of-design event but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure.

This is not how safe systems are designed and operated. Safety is not a one-time item, it is a process. All safety-critical systems receive attention throughout their operating lives to identify and mitigate potential safety risks. Throughout history, many safety-critical systems have received significant changes during their operating lives as a result of newly-discovered threats or recognition that threats identified during the initial design were not adequately addressed. Many (if not most) commercial aircraft have required significant modifications to address problems that were not understood at the time they were initially built and certified. Likewise, nuclear power plants in many countries have received major modifications over the years to address potential safety issues that were not understood or properly modeled at the time of their design. Sometimes, this process determines that there is no safe way to continue operation - usually that there is no economically viable way to mitigate the potential failure mode - and the system is simply shut down. This has happened to a few aircraft over the years, as well as several nuclear power plants (in many cases justified, in others not so much).

Fukushima existed in just such a system, and that the disaster occurred was the result of failures throughout the system, not a one-off failure at the design stage.

> I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering.

I think you are missing the point. Obviously it is possible that a tsunami higher than any possible design threshold could occur (it is, after all, possible that an asteroid will strike in the Pacific and kick up a wave of debris that wipes everything off the home islands). However, the tsunami that struck Fukushima Daiichi was no higher than a number of tsunamis that were recorded in Japan within the last century. The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon. This was not a case of "a bigger wave is always possible"; it was a case where the design, operation, and supervision were wrong, and known (by some) to be so prior to the accident.


> The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon.

Not much of a swiss cheese failure then though. The failure is just that they committed hard to an assumption that was wrong.

My point is that unless it is actually an example of multiple failures lining up then this is a bad example of a swiss-cheese model. Seems to be an example of a tsunami hitting a plant that wasn't designed to cope with it. And a plant with owners who were committed to not designing against that tsunami despite being told that it could happen. It is a one-hole cheese if the plant was performing as it was designed to. The stance was that if a certain scenario eventuated then the plant was expected to fail and that is what happened.

In swiss cheese failures there are supposed to be a number of independent or semi-independent controls in different systems that all fail, leading to an outcome. This is just that they explicitly chose not to prepare for a certain outcome. Not a lot of systems failing; it even seems like a pretty reasonable place to draw the line for failure if we look at the outcomes. Expensive, unlikely, not much actual harm done to people, and likely to be forgotten in a few decades.


I don't think you understand how a swiss cheese failure happens. The layers aren't independent or semi-independent; latent failures expose active failures, like:

"Committed hard to an assumption that was wrong"

That then allows the tsunami to damage the seawater pumps along the shoreline and flood the emergency diesel generators.

That causes total loss of AC and DC power.

Loss of AC and DC power causes the reactor to overheat.


There was a strong corporate cultural component to Fukushima as well. TEPCO had spent decades telling the Japanese public that nuclear power was completely safe. A tall order in Japan, obviously, but by and large it worked.

During the operation of Fukushima Daiichi, various studies had been done that recommended upgraded safety features like enlarging the seawall, moving the emergency generators above ground so they couldn't be flooded, etc.

In every case, management rejected the recommendations because:

1. They would cost money.

2. Upgrading safety would be tantamount to admitting the reactors were less than safe before, and we can't have that.

3. See 1.


> Having to have thread safe code all over the place just for the 1% of users who need to have multi-threading in Python and can't use subinterpreters for some reason is nuts.

Way more than 1% of the community, particularly of the community actively developing Python, wants free-threaded Python. The problem here is that the Python community consists of several different groups:

1. Basically pure Python code with no threading

2. Basically pure Python with appropriate thread safety

3. Basically pure Python code with already broken threaded code, just getting lucky for now

4. Mixed Python and C/C++/Rust code, with appropriate threading behavior in the C or C++ components

5. Mixed Python and C or C++ code, with C and C++ components depending on GIL behavior

Group 1 sees slightly reduced performance. Groups 2 and 4 get a major win with free-threaded Python, being able to use threading across their interfaces to C/C++/Rust components. Group 3 is already writing buggy code and will probably see worse consequences from their existing bugs. Group 5 will have to either avoid threading in their Python code or rewrite their C/C++ components.

Right now, a big portion of the Python language developer base consists of Groups 2 and 4. Group 5 is basically perceived as holding Python-the-language and Python-the-implementations back.
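Group 3's hazard can be sketched with a classic unsynchronized counter. Under the GIL the read-modify-write race is hard (though not impossible) to hit, which is why such code "gets lucky for now"; a free-threaded build makes the latent bug much easier to trigger. The lock in the second variant is what groups 2 and 4 already do.

```python
# Sketch of group 3's latent bug: an unsynchronized shared counter.
import threading

unsafe_counter = 0
safe_counter = 0
lock = threading.Lock()

def bump_unsafe(n):
    global unsafe_counter
    for _ in range(n):
        unsafe_counter += 1  # load, add, store: not atomic across threads

def bump_safe(n):
    global safe_counter
    for _ in range(n):
        with lock:           # explicit synchronization, GIL or no GIL
            safe_counter += 1

threads = [threading.Thread(target=bump_safe, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)  # the locked version always totals 400000
```

Running four threads of `bump_unsafe` instead may or may not lose updates depending on the interpreter build and timing, which is exactly what makes the bug latent rather than obvious.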


Where is the major win? Sorry but I just don't see the use case for free-threading.

Native code can already be multi-threaded so if you are using Python to drive parallelized native code, there's no win there. If your Python code is the bottleneck, well then you could have subinterpreters with shared buffers and locks. If you really need to have shared objects, do you actually need to mutate them from multiple interpreters? If not, what about exploring language support for frozen objects or proxies?

The only thing that free threading gives you is concurrent mutations to Python objects, which is like, whatever. In all my years of writing Python I have never once found myself thinking "I wish I could mutate the same object from two different threads".


> Native code can already be multi-threaded so if you are using Python to drive parallelized native code, there's no win there.

When using something like boost::python or pybind11 to expose your native API in Python, it is not uncommon to have situations where the native API is extensible via inheritance or callbacks (which are easy to represent in these binding tools). Today with the GIL you are effectively forced to choose between exposing the native API parallelism or exposing the native API extensibility; e.g. you can expose a method that performs parallel evaluation of some inputs, OR you can expose a user-provided callback to be run on the output of each evaluation, but you cannot evaluate those inputs and run a user-provided callback in parallel.
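
A pure-Python analogy of that serialization point (this is not the pybind11 API, just the shape of the problem, with a lock standing in for the GIL):

```python
import threading
import time

# A "native" evaluator that can run inputs in parallel, but must invoke a
# user-supplied Python callback per result. The lock stands in for the GIL:
# the callback becomes the serialization point even if evaluation itself
# releases the GIL.
gil = threading.Lock()
results = []

def evaluate(x):
    time.sleep(0.01)  # native work, done with the "GIL" released
    return x * x

def worker(x, callback):
    y = evaluate(x)
    with gil:         # must re-acquire to touch Python state
        callback(y)

threads = [threading.Thread(target=worker, args=(i, results.append))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```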

The "dumbest" form of this is with logging; people want to redirect whatever logging the native code may perform through whatever they are using for logging in Python, and that essentially creates a Python callback on every native logging call that currently requires a GIL acquire/release.
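
A minimal sketch of that bridge, where `native_lib.set_log_callback` is a hypothetical native hook:

```python
import logging

# Hypothetical hookup: a native library that accepts a C log callback gets
# pointed at `log_bridge`, routing its messages into Python's logging.
# Under the GIL, every native log line (from any native thread) pays a
# GIL acquire/release just to run these few lines of Python.
log = logging.getLogger("native_lib")

def log_bridge(level, message):
    # Called from native threads via the binding layer (e.g. pybind11),
    # which wraps the call in gil_scoped_acquire.
    log.log(level, message)

# native_lib.set_log_callback(log_bridge)  # hypothetical native-side API
```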

Could some of this be addressed with various Python-specific workarounds/tools? Probably. But doing so is probably also going to tie the native code much more tightly to problematic/weird Pythonisms (in many cases, the native library in question is an entirely standalone project).

> The only thing that free threading gives you is concurrent mutations to Python objects, which is like, whatever.

The big benefit is that you get concurrency without the overhead of multi-process. Shared memory is always going to be faster than having to serialize for inter-process communication (let alone that not all Python objects are easily serializable).
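
A rough illustration of the hand-off cost (sizes here are only to make the copy visible; real numbers depend on the objects involved):

```python
import pickle
import threading

# A thread shares the object itself; a process hand-off has to serialize
# it first. zfill makes each value a distinct 100-char string so the
# pickle can't dedupe them.
big = {i: str(i).zfill(100) for i in range(10_000)}

seen = []
t = threading.Thread(target=lambda: seen.append(len(big)))  # shares `big`
t.start()
t.join()

blob = pickle.dumps(big)  # what crossing a process boundary would pay
print(len(seen), len(blob))
```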


> The big benefit is that you get concurrency without the overhead of multi-process.

Bigger thing imo is that multiprocessing is just really annoying. In/out has to be pickleable, anything global gets rerun in each worker which often requires code restructuring, it doesn't work with certain frameworks, and other weird stuff happens with it.
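
The pickling constraint bites as soon as a closure is involved:

```python
import pickle
import threading

# The "in/out has to be pickleable" pain: a closure can't cross a process
# boundary, but a thread can call it directly.
def make_scaler(k):
    return lambda x: x * k

double = make_scaler(2)

try:
    pickle.dumps(double)   # multiprocessing would need this to work
    pickle_error = None
except Exception as err:   # raised because the closure isn't pickleable
    pickle_error = type(err).__name__

out = []
t = threading.Thread(target=lambda: out.append(double(21)))
t.start()
t.join()
print(pickle_error, out)
```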


> In practice CPython reliably calls it cuz it reference counts ... In a world where more people were using PyPy we could have pressure from that perspective to avoid leaning into it

A big part of the problem is that much of the power of the Python ecosystem comes specifically from extensions/bindings written in languages with manual (C) or RAII/ref-counted (C++, Rust) memory management, and having predictable Python-level cleanup behavior can be pretty necessary to making cleanup behavior in bound C/C++/Rust objects work. Breaking this behavior or causing too much of a performance hit is basically a non-starter for a lot of Python users, even if doing so would improve the performance of "pure" Python programs.
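
A toy version of the behavior those bindings rely on (the class is a stand-in, not any real binding API):

```python
freed = []

class Handle:
    # Stand-in for a bound native object whose C++ destructor releases a
    # buffer/file/GPU resource when the Python wrapper is destroyed.
    def __init__(self, name):
        self.name = name
    def __del__(self):
        freed.append(self.name)

def use_handle():
    h = Handle("frame-buffer")
    # ... work with h ...
    # On CPython, h's refcount hits zero on return and __del__ runs here,
    # immediately. A tracing GC (e.g. PyPy) may defer it arbitrarily long.

use_handle()
print(freed)
```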


That cleanup can be explicit when needed by using context managers. Mixing resource handling with object lifetime is a bad design choice.


> That cleanup can be explicit when needed by using context managers.

It certainly can be, but if a large part of the Python code you are writing involves native objects exposed through bindings then using context managers everywhere results in an incredible mess.
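
As a sketch of that ceremony, with temp files standing in for bound native handles:

```python
import tempfile
from contextlib import ExitStack

# Temp files stand in for bound native objects. With prompt refcounted
# cleanup these could be plain assignments; without it, every handle in a
# multi-object pipeline gets threaded through explicit scoping like this.
with ExitStack() as stack:
    src = stack.enter_context(tempfile.TemporaryFile())
    scratch = stack.enter_context(tempfile.TemporaryFile())
    dst = stack.enter_context(tempfile.TemporaryFile())
    src.write(b"data")
    src.seek(0)
    dst.write(src.read())
# all three released here, in one place, regardless of acquisition order
print(src.closed and scratch.closed and dst.closed)
```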

> Mixing resource handling with object lifetime is a bad design choice

It is a choice made successfully by a number of other high-performance languages/runtimes. Unfortunately for Python-the-language, so much of the utility of Python-the-ecosystem depends on components written in those languages (unlike, for example, JVM or CLR languages where the runtime is usually fast enough to require a fairly small portion of non-managed code).


Tell that to the C++ guys...


The point is that almost all of the signatories considered themselves to be immune to a "real war" in their futures at the time they signed. E.g. basically all of the European signatories assumed that the end of the Cold War and the existence of NATO would ensure the end of any possible threat. Given that assumption, as obviously flawed as it was, signing on to a ban was cheap PR (literally cheap, too, because it meant they could divest those weapons and their delivery mechanisms to reduce defense expenditures).


Which is exactly why European countries threatened by Russia are starting to withdraw from the treaty; five have recently announced plans to do so.


> Given that assumption, as obviously flawed as it was, signing on to a ban was cheap PR (literally cheap, too, because it meant they could divest those weapons and their delivery mechanisms to reduce defense expenditures).

Doubly so, since they understood themselves to be backed up by a non-signatory (the US).

