Hacker News

> Which is another way of saying that the modern software world is already using lightweight formal methods in an end-to-end fashion.

Sure, but only for specific properties that are inductive. Most properties that programmers care about aren't. It is precisely this kind of extrapolation from one property to another that doesn't work.

> But they're not a kind of verification - they provide no guarantee of correctness.

Correct, but: (1) nothing else does either (again, sound formal verification only works in very limited situations -- limited either in the property it guarantees or in the programs it can guarantee it for -- and is falling further and further behind what the software world needs), and (2) unlike for algorithms, a "guarantee" is not and cannot be the bar for real software running on physical hardware, because you cannot guarantee the real behaviour of a software system even if the algorithm is guaranteed to be correct. Unlike an algorithm, a software system is ultimately a physical object, not a mathematical one. To crudely oversimplify, you don't care about any confidence beyond the probability of a bit flip.

In other words, you're pointing out that the current state-of-the-art in the field is not as good as something we don't (perhaps can't) have and don't need to begin with.



> Sure, but only for specific properties that are inductive. Most properties that programmers care about aren't.

Yes but in practice, we keep finding ways of making more such interesting properties "inductive" by placing appropriate restrictions on how the program can be designed. This is how the Rust borrow checker works, at a high level.
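A minimal Rust sketch of that point (not from the thread; the names are illustrative): the borrow checker's rule -- no shared borrow may overlap a unique `&mut` borrow -- is checked locally, function by function, which is exactly what makes the "no mutable aliasing" property inductive: if every function respects it, the whole program does.

```rust
// Each function is checked in isolation against the aliasing rule,
// so the global property follows by induction over the program.

fn sum(counts: &[i32]) -> i32 {
    counts.iter().sum()
}

fn main() {
    let mut counts = vec![1, 2, 3];
    let first = &counts[0]; // shared borrow of `counts`
    // counts.push(4);      // rejected at compile time: `push` needs
    //                      // `&mut counts` while `first` is still live
    let total = sum(&counts);
    println!("first = {first}, total = {total}");
}
```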


Not really, and this is what I meant earlier: an order-of-magnitude improvement is very exciting to the researcher, but if in practice there are still 29 orders to go, it doesn't make a big impact. Even something as simple as the borrow checker -- which only brings you to about where most of software, with all its bugs, already is -- comes at the cost of only being able to use memory in certain ways. I.e. to make the property inductive, it had to be made coarser (indeed, that is what is always done when looking for inductive properties [1]), which requires more effort on the part of the user.
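A sketch of that coarseness cost in Rust (illustrative, not from the thread): the function below is memory-safe either way, but the checker's rule is coarser than actual safety, so the obvious version is rejected and the code must be restructured until exclusivity is visible to the checker.

```rust
// Doubling two distinct elements of a slice. The direct version is
// safe but rejected; the accepted version restructures the borrows.

fn double_both(v: &mut [i32]) -> (i32, i32) {
    // Rejected: two simultaneous `&mut` borrows into `v`, even though
    // the indices are provably disjoint:
    // let a = &mut v[0];
    // let b = &mut v[1];

    // Accepted: split the slice so the two `&mut` regions are
    // structurally disjoint.
    let (left, right) = v.split_at_mut(1);
    left[0] *= 2;
    right[0] *= 2;
    (left[0], right[0])
}

fn main() {
    let mut v = [3, 4];
    let (a, b) = double_both(&mut v);
    println!("{a} {b}");
}
```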

As I've written above, over the past two or three decades the theory has mostly taught us that things are more difficult than we thought, and at the same time the capabilities of sound verification have fallen further behind the needs of mainstream software. The trajectory of both the theory and the practice of sound methods has been negative. All the while, we've made some nice strides in unsound methods. Tony Hoare started recognising this in the nineties (which somewhat shifted his 70s view on the subject), and the trend has continued: yes, there are improvements in sound methods, but the problems become harder faster than those improvements arrive, and the improvements in unsound methods are coming faster still.

[1]: That is seen very elegantly in TLA+, but I digress.




