We have well established techniques for developing systems which are safe and exhibit high levels of integrity. We just need to make the tools that support these techniques freely available.
90% of the techniques for making reliable systems are careful requirements engineering and even more careful testing. There is no secret sauce.
I don't think these techniques transfer easily to the AI field. While I might be able to prove that the state machine that controls my nuclear power plant always rams in the control rods in case something bad happens, it's a lot harder to show that some fuzzy system like a neural network doesn't exhibit kill-all-humans behaviours.
You are right. There is no secret sauce. There are no magic bullets. Careful requirements engineering and careful testing is absolutely what you need.
However -- many of these techniques do transfer to the AI field -- albeit with some tweaking and careful thought.
Requirements are still utterly critical. Phrasing the requirements right is important and requires more than a passing thought -- particularly with regard to testability.
A lot of it boils down to the requirements placed on the training and validation data sets, and the statistical tests that need to be passed: how much data is required, and how you can demonstrate that the test data covers enough of the system's operating envelope to give you confidence that you understand how it behaves.
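To make the "how much data is required" question concrete, here is a minimal sketch of the standard binomial argument: if you run n independent, failure-free test cases drawn from the operating envelope, you can bound the true failure rate at a chosen confidence level. The function name and the specific numbers are illustrative, not drawn from any particular standard.

```python
import math

def required_samples(max_failure_rate, confidence=0.95):
    """Minimum number of failure-free, independent test cases needed to
    claim the true failure rate is below max_failure_rate at the given
    confidence. Derived from (1 - p)^n <= 1 - confidence, i.e. the
    probability of seeing zero failures if the rate were actually p."""
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - max_failure_rate))

# e.g. to argue for a failure rate below 1 in 1000 at 95% confidence:
print(required_samples(1e-3))  # → 2995 failure-free tests
```

The point of the exercise is that the numbers grow quickly: tightening the target rate by an order of magnitude costs roughly an order of magnitude more test data, which is exactly the kind of requirement you want pinned down before training starts.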
The architecture is also critical -- how the problem is decomposed into safe, testable and understandable subsets -- and that decomposition has much more to do with how the system is tested than with how it solves the primary problem.
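One common shape for that decomposition is to wrap the opaque ML component in a simple, deterministic safety monitor that can be tested exhaustively on its own. A hypothetical sketch (the limits, names and fallback value here are invented for illustration):

```python
# Assumed hard limits taken from the written requirements, not learned.
SAFE_MIN = 0.0
SAFE_MAX = 100.0

def safe_setpoint(ml_suggest, fallback=0.0):
    """Gate an ML controller's suggestion behind a plainly verifiable rule.

    ml_suggest is a zero-argument callable standing in for the ML
    component. The monitor itself is trivial by design: a small piece of
    code whose behaviour can be fully characterised by ordinary testing,
    regardless of what the ML component does.
    """
    value = ml_suggest()
    if not (SAFE_MIN <= value <= SAFE_MAX):
        return fallback  # deterministic safe action on out-of-range output
    return value

print(safe_setpoint(lambda: 42.0))   # in range → passed through
print(safe_setpoint(lambda: 250.0))  # out of range → fallback
```

The safety argument then rests on the monitor, which is easy to verify, while the ML component only has to be shown fit-for-purpose within the envelope the monitor enforces.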
This is not quite the full picture -- there are V&V issues which are specific to machine learning systems -- but lest we put the cart before the horse, these should properly build upon a mature V&V infrastructure, toolchain support for which isn't so great in the open source world.