
SoylentNews is people

posted by n1 on Wednesday June 03 2015, @09:48AM   Printer-friendly
from the wishful-thinking-and-faith dept.

Your average scripter likely isn't writing many proofs or going through the rigors of formal program verification. Which is fine, because your average scripter also isn't writing software for jet airliners, nuclear power plants, or robotic surgeons. But somebody is—and the odds are pretty good that your life has been in their hands very recently. How do you know they're not a complete hack?

Well, you don't really. Which prompts the question: How is this sort of code tested? A short blog post written by Gene Spafford, a professor of computer science at Purdue University, inspired this particular asking of the question.

http://motherboard.vice.com/read/how-is-critical-life-or-death-software-tested

[Related]: They Write the Right Stuff by Charles Fishman at Fast Company


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Interesting) by Beryllium Sphere (r) (5062) on Wednesday June 03 2015, @05:01PM (#191680)

    First, no testing of non-trivial software can ever be complete, not within the age of the universe. Code coverage is one thing; covering every possible execution path is a combinatorial explosion. Imagine the expense of the testing that would have been required to prevent the famous AT&T outage caused by a cascade of reboots.
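    To make the "age of the universe" claim concrete, here is a minimal back-of-the-envelope sketch (mine, not the commenter's): with n independent two-way branches a function has up to 2**n distinct execution paths, so even at a billion tests per second, exhaustive path coverage becomes physically impossible for quite modest n.

    ```python
    # Back-of-the-envelope sketch: path counts vs. testing time.
    # Assumption: n independent if-statements => 2**n distinct paths.

    def path_count(n_branches: int) -> int:
        """Upper bound on execution paths through n independent two-way branches."""
        return 2 ** n_branches

    TESTS_PER_SECOND = 1e9        # one test per nanosecond (generous)
    UNIVERSE_SECONDS = 4.3e17     # rough age of the universe in seconds

    for n in (10, 64, 265):
        seconds_needed = path_count(n) / TESTS_PER_SECOND
        feasible = seconds_needed <= UNIVERSE_SECONDS
        print(f"{n} branches: {path_count(n):.3e} paths, feasible within universe age: {feasible}")
    ```

    Even 64 branches already demand centuries of testing at a nanosecond per path, and by a few hundred branches the total exceeds the age of the universe; real control software has far more branch points than that, which is why coverage metrics stop at statements and branches rather than full paths.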

    Second, and worse, testing at best helps with reliability. One bitter lesson of systems safety engineering is that reliability is not safety. The Therac-25 was one of the few cases where disaster was caused by a malfunction. More typically, accidents happen when equipment tragically does exactly what it was designed to do, in a situation where that's the wrong thing to do. See the book Safeware.

    The best you can do for testing is to make it as realistic as possible. "Test what you fly, fly what you test" is a cliche because it's true. Consider the 787 battery fires. According to one account, the charging software got tested against a dummy load and not an actual lithium battery. No amount of software quality could ever compensate for that fundamental blunder. (Which, according to one account, was actually a safety measure. The contractor used a battery simulator because they'd had a fire in their facility caused by a battery. Reflect on that.)
