Train crashes like this are _so_ rare. It's not as safe as flying but AFAICT in rich countries it's the same rough order of magnitude in terms of danger level.
I don't have data but I would imagine crashes on these high speed lines (which always seem to be run at a higher level of professionalism than the general networks) are rarest of all.
I don't think it's a good use of mental energy to plan for a crash like this. You're better off using your brain cycles on hygiene or not losing your luggage.
At first, seeing it was in 2015, I was extremely surprised I hadn't heard about it at the time. Then I saw the exact date: Nov 14th 2015, just the day after the ISIS terror attacks in Paris, France's 9/11. Of course we barely heard about a train crash at that time…
I remember this day because I worked in a company that made software for train networks.
It did briefly make the news, but not for long due to the terror attacks, and also because there were no passengers on the train; it was a test run.
In fact the story is even more tragic when you know that the day before, they had also taken the same curve too fast, and in the recordings you hear something like "phew, that was close, better be careful next time".
To be sure, this crash should never have happened, but it only happened because they were testing the limits of both the train and the track.
It's like a test pilot crashing an airplane while probing all its limits: it should never happen, but test pilots are there precisely so it doesn't happen on commercial flights.
> To be sure, this crash should never have happened, but it only happened because they were testing the limits of both the train and the track.
No. It happened because they were under-prepared and disorganized, and thereby didn't respect the speed restrictions for the segment of track they were on.
They crashed entering a 175 km/h segment at 265 km/h, which is well above the 10% overspeed they were theoretically testing that day.
I would not consider an accident during a test run with partially disabled safety systems a regular part of operations. On a normal run, the train would have slowed down or stopped automatically before derailing, because it significantly exceeded the design speed of the track.
Most railway deaths in the EU are due to unauthorized people on the tracks or to accidents at crossings. The actual number of passenger deaths has been really low in recent years.
In the EU it's safer than flying, with 0.5 deaths per 100 billion passenger-km vs 3 deaths per 100 billion passenger-km. However, since an airplane flies at, let's say, six times the average speed of a train, the probability of dying during a 1-hour trip is almost 40 times higher on a plane than on a train.
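Spelling out the arithmetic behind "almost 40 times", using the figures above: the per-km risk ratio is 3/0.5 = 6, and the plane covers roughly six times the distance per hour, so

$$\frac{3}{0.5} \times 6 = 36 \approx 40.$$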
Do your stats include all rail? Because the average airplane definitely does not travel at 6 times the speed of high-speed rail (more like 2.5-3x), while it is way faster than regional rail (on the order of 12x).
Brain cycles aren’t a limited supply. Besides, you’ll get to feel a nice jolt of serotonin when you remember to sit backwards.
> I would imagine crashes on these high speed lines (which always seem to be run at a higher level of professionalism than the general networks) are rarest of all
If this crash is anything like the other ones, you might be surprised. Safety complacency tends to cause maintenance failures. Plus the low-speed lines are less deadly, since kinetic energy is proportional to velocity squared and v is low (a train at 300 km/h carries nine times the energy of one at 100 km/h).
In other words, it might be more helpful to look at it as "if they’re run at a higher level of standards, it’s because they have to be".
Statistically you’re probably right, but considering how many brain cycles we waste on non-essentials, it’s just as fun to waste them on this. That way you can start a nerdy conversation with your travel companions, and they can learn to travel without you next time.
> Plus the low-speed lines are less deadly, since kinetic energy is proportional to velocity squared and v is low
You're forgetting about the probability of a crash.
The vast majority of train crashes are due to an impact with a vehicle on a railway crossing.
However, high-speed rail is grade separated, so it doesn't have railway crossings, which means the main cause of crashes is fundamentally impossible.
In other words: Regular rail has a high rate of crashes (with a small number of fatalities each) due to car/truck drivers screwing up. High-speed rail has a low rate of crashes (with a large-ish number of fatalities each) due to catastrophic failure of track & train equipment.
Zero-risk bias at work. If it’s actually fun for you, don’t let anyone stop you, but I wouldn’t go as far as making it a confident general recommendation.
In my experience, in practice, it usually isn't that hard to figure out what people meant by a READ/WRITE_ONCE().
Most common cases I see are:
1. I'm sharing data between concurrent contexts but they are all on the same CPU (classic is sharing a percpu variable between IRQ and task).
2. I'm reading some isolated piece of data that I know can change any time, but it doesn't form part of a data structure or anything, so it can't be "in an inconsistent state" as long as I can avoid load-tearing (classic case: a performance knob that gets mutated via sysfs). I just wanna READ it ONCE into a local variable, so I can do two things with it and know they both operate with the same value.
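For case 2, a minimal kernel-style sketch (the knob and the two helpers are hypothetical, just for illustration):

```c
#include <linux/compiler.h>   /* READ_ONCE() */

/* Hypothetical tunable, written elsewhere by a sysfs store handler. */
static int batch_size = 16;

void reserve_buffers(int n);  /* hypothetical helpers */
void submit_work(int n);

void process_queue(void)
{
	/*
	 * Snapshot the knob exactly once: both calls below are
	 * guaranteed to see the same value, and the compiler can
	 * neither tear the load nor re-load the variable.
	 */
	int n = READ_ONCE(batch_size);

	reserve_buffers(n);
	submit_work(n);
}
```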
I actually don't think C++ or Rust have existing semantics that satisfy this kinda thing? So it will be interesting to see what they come up with.
I really don't understand why people have all these "lightweight" ways of sandboxing agents. In my view there are two models:
- totally unsandboxed but I supervise it in a tight loop (the window just stays open on a second monitor and it interrupts me every time it needs to call a tool).
- unsupervised in a VM in the cloud where the agent has root. (I give it a task, negotiate a plan, then close the tab and forget about it until I get a PR or a notification that it failed).
I want either full capabilities for the agent (at the cost of needing to supervise for safety) or full independence (at the cost of limited context in a VM). I don't see a productive way to mix and match here, seems you always get the worst of both worlds if you do that.
Maybe the use case for this particular example is where you are supervising the agent but you're worried that apparently-safe tool calls are actually quietly leaking a secret that's in context? So it's not that it's a 'mixed' use case, but rather that it's just increasing safety in the supervised case?
It's been ages since I used VirtualBox and reading the following didn't make me miss the experience at all:
> Eventually I found this GitHub issue. VirtualBox 7.2.4 shipped with a regression that causes high CPU usage on idle guests.
The list of viable hypervisors for running VMs with 3D acceleration is probably short but I'd hope there are more options these days for running headless VMs. Incus (on Linux hosts) and Lima come to mind and both are alternatives to Vagrant as well.
I totally understand, Vagrant and VirtualBox are quite a blast from the past for me as well. But besides the what-are-the-odds bug, it's been smooth sailing.
> VMs with 3D acceleration
I think we don't even need 3D acceleration since Vagrant is running the VMs headless anyways and just ssh-ing in.
> Incus (on Linux hosts)
That looks interesting, though from a quick search it doesn't seem to have a "Vagrantfile" equivalent (is that correct?). I guess a good old shell script could replace that, even if imperative can be more annoying than declarative.
And since it seems to have a full-VM mode, docker would also work without exposing the host docker socket.
Thanks for the tip, it looks promising, I need to try it out!
Depends on what you do. If you need to have a fully working site with external integrations, SSL and so on, it's just easier to spend $4 a month on a VPS. But you're right, for many backend-based projects a local VM like multipass or a kind/microk8s cluster are perfectly fine.
You mentioned "deleting the actual project, since the file sync is two-way". My solution (in agentastic.dev) was to first copy the code with git-worktree, then share that with the container.
As someone that does this: it's Turtles All The Way Down [1]. Every layer has escapes. I require people to climb up multiple turtles, thus breaking most skiddie [2] scripts. Attacks will have to be targeted and custom-crafted by people who can actually code, which reduces the number of turds in the swimming pool I must avoid. People should not write apps that make assumptions about accessing sensitive files.
It's turtles all the way down, but there is a VERY big gap between the VM Isolation Turtle and the <half-arse seccomp policy> turtle. There's a qualitative difference between those two sandboxes.
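To make the gap concrete, here's roughly what the weak end of that spectrum looks like: a sketch using libseccomp (the function name is made up; compile with -lseccomp), default-allow with a couple of syscalls blocked:

```c
#include <seccomp.h>
#include <errno.h>

/* A "half-arse" filter: allow everything by default, deny a few
 * syscalls. Easy to work around (e.g. ask a helper process to
 * connect for you), unlike a real VM boundary. */
int install_weak_filter(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!ctx)
        return -1;

    seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(connect), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(ptrace), 0);

    int rc = seccomp_load(ctx);
    seccomp_release(ctx);
    return rc;
}
```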
It's a risk/convenience tradeoff. The biggest threat is Claude accidentally accessing and leaking your SSL keys, or getting prompt-hijacked into doing the same. A simple sandbox fixes this.
There are theoretical risks of Claude getting fully owned and going rogue, and doing the iterative malicious work to escape a weaker sandbox, but it seems substantially less likely to me, and therefore perhaps not (currently) worth the extra work.
Is there a premade VM image or Docker container I can just start with, for example for Google Antigravity, Claude, or Kilocode/VSCode? Right now I have to install some Linux desktop and all the tools needed; a bit of a pain IMO.
I see there are cloud VMs, like at Kilocode, but they are kind of useless IMO. I can only interact with the prompt and not the code base directly. Too many things go wrong, and maybe I also want Kilocode to run a Docker stack for me, which it can't in the agent cloud.
The UI is obviously vibe-coded garbage but the underlying system works. And most of the time you don't have to open the UI after you've set it running; you just comment on the GitHub PR.
This is clearly an unloved "lab" project that Google will most likely kill but to me the underlying product model is obviously the right one.
I assume Microsoft got this model right first with the "assign issue to Copilot" thing and then fumbled it by being Microsoft. So whoever eventually turns this <correct product model> into an <actual product that doesn't suck> should win big IMO.
Locally, I'd use Vagrant with a provisioning script that installs whatever you need on top of one of the prebuilt Vagrant boxes. You can then snapshot that if you want and turn that into a base image for subsequent containers.
- Run the dev container CLI command to start the container: `devcontainer --workspace-folder . up`
- Run another dev container command to start Claude in the container: `devcontainer exec --workspace-folder . claude`
And there you go! You have a sandboxed environment for Claude to work in. (As sandboxed as Docker is, at least.)
I like this method because you can just manage it like any other Docker container/volumes. When you want to rebuild it, or reset the volume, you just use the appropriate Docker (and the occasional dev container) commands.
I guess whether container isolation is good enough just comes down to the threat you're protecting against:
- confused/misaligned agent: probably good enough (as of Q1 2026...).
- hijacked agent: definitely not good enough.
But also it's kinda weird that we still have high-level interfaces that force you to care this much about the type of virtualization they're giving you. We probably need to be moving more towards stuff like Incus here, which treats VMs and system containers basically as variants of the same thing that you can manage at a higher level of abstraction. (I think k8s can be like that too.)
I was using opencode the other day. It took me a while to realize that the agent couldn't read/write the .env file, and that it didn't realize this itself. When I pushed it, it first created a temp file and copied it over .env, AND wrote an opencode.json file that disables the .env protection, then went wild.
Yeah, a ban is the answer. Trouble is that, as shown in the article, even if they include the charging and refilling bits, they can be cheap enough to throw away after use.
Taxing waste is one part of the story, but it's actually a really good thing that vaping is cheaper than smoking, so this can only go so far before it's counterproductive.
I think the answers lie in stuff like banning the sale of pre-filled ones. If you make people buy a separate bottle of nicotine liquid (and enforce quite a large minimum size, like we already do with tobacco) and fill the device up before they use it, I think they are much more likely to refill it when it's empty and recharge it when it's dead.
Maybe another thing could be restricting points of sale. I bet a lot of the waste comes from drunk people buying them at 10pm in the corner shop near the pub. If you make people plan ahead that might also help.
> Trouble is that, [...], even if they include the charging and refilling bits they can be cheap enough to throw away after use.
Well that is fixable, it's even one of the solutions posited here. Just make them artificially expensive by adding a deposit, which you'll get back when you return it to the shop (instead of throwing it away).
> The biggest predictor for people who prefer starting late is how crowded their schedules are. Managers tend to have very crowded schedules which means they want a break between meetings, while ICs prefer not having to waste time waiting.
I have had a few senior managers (at Google) who ask for all the meetings _they_ attend to start 5 minutes late.
This seems 100% reasonable to me. No need for it to be an org policy. Just an affordance for the people who spend 95% of their working hours in meetings.
I've also had several senior managers at Google who _don't_ do this, but are 5 minutes late for every meeting anyway. This alternative is pretty annoying!
The problem is that final decisions tend to be made in the last 30 seconds of a meeting. If you're a manager with a stake in the outcome, you can't leave the meeting until you've ensured that the outcome works for you. Leaving 5 min early is often simply not an option. While arriving 5 minutes late is. It's not an ego thing -- it's the fact that meeting leaders often let meetings run long.
This isn't just useful for high-level application logic! (If I'm catching your drift from "the compiler writes the state machines for you".)
I used to write extremely low-level NIC firmware that was basically a big bundle of state machines. Rust wasn't ready back then, but I was desperate for something like coroutines/async. I think it would have been incredibly valuable.
(There are tricks to do coroutines in C but I think they're too hacky. Also, back then we believed that RTOS threads with their own stack were too expensive; in retrospect I'm not sure we ever entirely proved that.)
I may be naïve in this case, but I think it would also have been super useful for the high-level protocol stuff. Like: 802.11 association flows. If we could have just spun up a Tokio task for each peer, I feel the code would have been an order of magnitude smaller and simpler.
A way to implement coroutines in C is via protothreads. I admit that they are a bit hacky, but you can get quite far with them. I used them to implement an Esterel/Blech-like environment: https://github.com/frameworklabs/proto_activities
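For the uninitiated, a minimal sketch of the protothreads style (based on Adam Dunkels' pt.h; the flag and handler are made up for illustration):

```c
#include "pt.h"                 /* Adam Dunkels' protothreads */

static volatile int data_ready; /* e.g. set from an ISR */

static void handle_data(void) { /* hypothetical handler */ }

/* One protothread: wait for data, handle it, repeat. */
static PT_THREAD(consumer(struct pt *pt))
{
    PT_BEGIN(pt);
    while (1) {
        PT_WAIT_UNTIL(pt, data_ready);
        data_ready = 0;
        handle_data();
    }
    PT_END(pt);
}

int main(void)
{
    struct pt pt;
    PT_INIT(&pt);
    for (;;)
        consumer(&pt);          /* cooperative: re-entered until it blocks */
}
```

The PT_* macros compile down to a switch on a saved line number, which is also why local variables don't survive across a PT_WAIT_UNTIL; that's where the hackiness comes from.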
Protothreads are amazing, but they really expose you to a lot of subtle bugs. I would not recommend them for any new project if something like async Rust or an RTOS is an option.
Yeah this was exactly my conclusion at the time. It's a cool trick but I just don't think I wanna write a serious system in a special half-language like that.
Why should we want prediction markets to be fair? I want them to reveal facts about the world.
They only need to be fair inasmuch as it serves that goal. I don't think it would serve that goal to forbid insider trading.
(Why do we forbid insider trading in the stock market? Not because it's unfair. Because it makes the market worse at doing what we want it to do, which is funding the most productive enterprises).
>Why should we want prediction markets to be fair? I want them to reveal facts about the world.
I do not think this is a reasonable position. The problem is that predicting "real facts about the world" creates a situation where you could place a sizable bet on your neighbor's house burning down. Allowing prediction markets is thus inherently problematic. Allow them to be anonymous and fueled by insider information, and you've got a recipe for disastrous externalities.
And to spell it out: you can then go burn your neighbour's house down and profit from it. Prediction markets don't just predict; they can cause "unlikely" things to happen, because they give a way of profiting from making "unlikely" things happen. This is a problem with plain old stock market insider trading too, and with sports betting when the participants take out bets on themselves.
You can say you don't want prediction markets and that's fine. It's not responding to my comment though which is saying that if we have prediction markets, the goal should be for them to reveal facts about the world.
If you think prediction or stock markets should be "fair" then you have a very bizarre definition of fair (one that's compatible with a game that only rich people can win).
Also, desiring "fairness" implies that the market's primary purpose is as a game. If it's just a game, it's nothing but toxic gambling.
ANYWAY: pointing to one possible case of misuse and then leaping to "inherently problematic" is pretty weak IMO. Hopefully you can tell I'm not a prediction market booster; there's a lot about them that I think is pretty suspect. But this is a pretty lame line of reasoning IMO.
"Someone might shoot people with it" -> valid reason for gun control. Guns are for shooting people.
"Someone might stab someone with it" -> not a valid reason to ban knives. We already ban stabbing people. We take precautions to try and mitigate the dangers of knives, because they have a very big upside.
> It's not responding to my comment though which is saying that if we have prediction markets, the goal should be for them to reveal facts about the world.
My point is you can’t have this without creating a massive incentive for people to create events, and it’s inherently easier to create disorder, which hurts everyone in a free society.
> pointing to one possible case of misuse and then leaping to "inherently problematic" is pretty weak IMO.
This is like saying nuclear weapons are “one possible case of misuse of nuclear technology.” Yes, obviously, but it’s serious enough to justify massive regulation of nuclear technology.
I’m not saying prediction markets should be illegal. I’m saying that they should have a very small maximum wager, require public disclosure of who is taking which bets, and it should not pay out if the actor has any connection to agents that create the change in the world.
Sports gamblers have known how problematic this concern is forever. Organized crime has used prediction markets it could influence (sports betting) as a major source of revenue throughout history.
The result was not, and could never have been, known a priori. All sorts of random things could have gone wrong with the operation causing the raid to fail and Maduro to remain in power. Trump could have just randomly changed his mind, or postponed the raid to beyond the end of January (the cutoff for the betting market). The person placing the bet was still taking a chance, but it was an informed chance that shifted the market probability more in line with reality.
If it helps, you can think of the money made as the payment to a confidential informant for information that contributed to a more complete picture of the world. It just happens via a distributed algorithm, using market forces, rather than at the discretion of some intelligence officer or whoever. The more important the information you have to share, the more it moves the market and the bigger your "fee". It's not being a "grifter" to provide true information that moves the market correctly. In fact, this mechanism filters out the actual grifters - you can't make money (in expectation) by providing false information, like traditional informants sometimes can.
This "intelligence gathering" function is the primary goal of a prediction market. It's the only reason it makes sense to even have them. If you turn it into some parlor game where everybody who participates has access to all the same information, then what are we even doing here?
> This "intelligence gathering" function is the primary goal of a prediction market. It's the only reason it makes sense to even have them. If you turn it into some parlor game where everybody who participates has access to all the same information, then what are we even doing here?
If everyone has the same information, then whoever does better analysis wins. That's far from a parlor game.
Ideally people with good info and people with good analysis can both make money. (And ideally nobody takes real-world actions to make their bet come true.)
> The result was not, and could never have been, known a priori.
This is a level of solipsism not worth discussing.
Yes, Superman could be a real person and we all had our minds altered to think he’s a superhero.
Yes, when someone pulls the trigger of a gun pointed at someone’s head, it could misfire and explode in their hand.
The point is that someone influencing prediction markets can push this probability to very, very near zero. So much so, as to make the outcome effectively certain for all intents and purposes.
Literally everyone can burn their neighbor's house down! Everyone has access to "valuable predictive information" when that information is being created by the person making the bet.
Get the last word in if you must. We're going in circles.
OK, I see now that you're specifically referring to the case where someone places a bet and then actively goes out and causes the event to happen themselves. I was replying to the people who were saying it was unfair for insiders to profit on information they already possess.
I agree, I can see how that's a potential edge case, though I don't think it's as likely to happen in practice as you do. Certainly, anybody who commits a crime to cause a payout should be barred from receiving that payout, though you can tell a plausible story where someone manages to conceal it. I also really really doubt that that's what happened in this particular case.
I mean, yes, unironically. If the goal of a prediction market is to find out the truth about the world, that's what will get you there faster.
Google might feel differently about whether it's OK in that case, but that's their prerogative.
Ask yourself: If the CIA really needed to know in advance what the top search result was going to be with as much accuracy as possible (for some weird reason, doesn't matter why), how would they go about doing it? Would they spend a bunch of time evaluating all of the public information, or would they just bribe (or otherwise convince) an insider at Google to tell them?
Given that inside information makes prediction markets more accurate, why do you believe it doesn't make stock markets more accurate?
If, say, Enron insiders could've shorted their own stock, that would have improved accuracy and thereby diverted more funding to more productive enterprises.
I guess there could be second-order effects when insiders can actually change the outcome, which is why athletes aren't allowed to bet on their own games.
> I guess there could be second-order effects when insiders can actually change the outcome
You don't need to guess. That's exactly why it's illegal: It creates bad incentives, similarly to e.g. taking out a life insurance policy on a random person you have no financial dependence on.
> Given that inside information makes prediction markets more accurate, why do you believe it doesn't make stock markets more accurate?
He didn't say that it doesn't. It obviously does make stock markets more accurate.
But it tends to drive down the total amount of money available to be invested in stocks, which is compatible with the claim that it makes the market worse at funding productive enterprises.
> But it tends to drive down the total amount of money available to be invested in stocks
That seems like a big claim. For most market participants, there's always a counterparty that's so much more sophisticated that it doesn't make a difference if they're an insider or not.
The only "facts about the world" revealed by prediction markets are facts about what people betting in prediction markets believe. Which I guess is interesting in itself if you're a sociologist. Otherwise, not so much.
In this case the insiders only made their bets a few hours before the kidnapping was publicly announced, so it's not clear whether the betting "revealed facts about the world" in any publicly useful way.
Lol. Why should we make car ownership fair? I want cars to reveal facts about the world. I don't think it would serve that goal to forbid people from stealing your car.
Zed is faster and less annoying than VSCode; I hope to switch to it permanently sooner or later.
Annoyingly, the only hard blocker I have right now is the lack of a call-graph navigation widget. In VSCode you can bring up an expandable tree of callers for a function. Somehow I am completely dependent on this tiny little feature for reading complex code!
The annoying thing is: "can't you just use an extension for this?" No, Zed extensions are much more constrained, Zed is not a web browser. And I like it this way! But... My widget...
I also have some performance issues with searching large remote repos, but I'm pretty confident that will get fixed.
Does "Find All References opt+shift+F12" function help mitigate this for you? It opens a buffer that you can use to navigate, and it's a built-in feature not an extension.