A surprisingly simple bug afflicts computers controlling planes, spacecraft and more – they get confused by big numbers. As Chris Baraniuk discovers, the glitch has led to explosions, missing space probes and other costly failures.
Tuesday, 4 June 1996 will forever be remembered as a dark day for the European Space Agency (Esa). The first flight of the crewless Ariane 5 rocket, carrying with it four very expensive scientific satellites, ended after 39 seconds in an unholy ball of smoke and fire. It's estimated that the explosion resulted in a loss of $370m (£240m).
What happened? It wasn't a mechanical failure or an act of sabotage. No, the launch ended in disaster thanks to a simple software bug. A computer getting its maths wrong – essentially getting overwhelmed by a number bigger than it expected.
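The Ariane 5 failure is widely attributed to a narrowing conversion: a 64-bit value representing horizontal velocity was squeezed into a 16-bit signed slot it could no longer fit. A minimal sketch of the hazard (the value 70,000 here is illustrative, not the actual flight data):

```rust
fn main() {
    // A value that fits comfortably in 32 bits...
    let velocity_like: i32 = 70_000;

    // ...but not in a 16-bit signed integer (max 32_767).
    // try_from reports the overflow instead of silently wrapping.
    match i16::try_from(velocity_like) {
        Ok(v) => println!("fits: {v}"),
        Err(_) => println!("overflow: {velocity_like} does not fit in an i16"),
    }

    // A plain `as` cast truncates the high bits instead:
    // 70_000 mod 65_536 = 4_464.
    println!("as-cast result: {}", velocity_like as i16);
}
```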
How is it possible that computers get befuddled by numbers in this way? It turns out such errors are responsible for a series of disasters and mishaps in recent years, destroying rockets, making space probes go missing, and sending missiles off-target. So what are these bugs, and why do they happen?
Imagine trying to represent a value of, say, 105,350 miles on an odometer that has a maximum value of 99,999. The counter would "roll over" to 00,000 and then count up to 5,350, the remaining value. This is the same species of inaccuracy that doomed the 1996 Ariane 5 launch. More technically, it's called "integer overflow": a number grows too large for the fixed amount of space set aside to store it, and the result can be a malfunction.
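The odometer analogy maps directly onto fixed-width integers. A sketch using a 16-bit unsigned counter, whose maximum of 65,535 plays the role of the odometer's 99,999 (the values are illustrative, not from the article):

```rust
fn main() {
    // A u16 tops out at 65_535, as a five-digit odometer tops out at 99_999.
    let odometer: u16 = u16::MAX;

    // wrapping_add makes the roll-over explicit: pushing 5_351 past the
    // maximum leaves the remainder 5_350, just like 105_350 on the odometer.
    println!("{}", odometer.wrapping_add(5_351)); // 5350

    // checked_add instead reports the overflow as None rather than wrapping.
    assert_eq!(odometer.checked_add(1), None);
}
```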
Such glitches emerge with surprising frequency. It's suspected that the reason why Nasa lost contact with the Deep Impact space probe in 2013 was an integer limit being reached.
And just last week it was reported that Boeing 787 aircraft may suffer from a similar issue. The control unit managing the delivery of power to the plane's engines will automatically enter a failsafe mode – and shut down the engines – if it has been left on for over 248 days.
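The 248-day figure is commonly attributed (though not stated in the article) to a signed 32-bit counter ticking in hundredths of a second; the arithmetic is easy to check:

```rust
fn main() {
    // Assumption: the unit counts centiseconds in a signed 32-bit integer,
    // so it overflows after i32::MAX ticks.
    let max_ticks = i32::MAX as f64; // 2_147_483_647
    let seconds = max_ticks / 100.0; // centiseconds -> seconds
    let days = seconds / 86_400.0;   // seconds -> days
    println!("{days:.2} days");      // roughly 248.55
}
```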
(Score: 2) by istartedi on Thursday May 07 2015, @09:29PM
I've seen people say, "just use arbitrary precision and be done". The counterpoint
to this is that it doesn't perform as well. You could also raise overflow exceptions; but I've
never been a big fan of exceptions due to the unwinding problem (Linus Torvalds is in that camp, IIRC).
The solution you hear less often is to enhance the type system so that every
integer actually has something like "type Int | Fail". This actually seems like the
best compromise to me. You do take some performance hit, but it's probably not
as bad as a bignum library. You don't throw exceptions, so unwinding isn't a problem.
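Rust's checked arithmetic is close in spirit to the "Int | Fail" idea: each operation returns an `Option`, so overflow becomes a value the caller must handle rather than an exception to unwind. A minimal sketch (the `add_miles` name is invented for illustration):

```rust
// "type Int | Fail" approximated as Option<u32>: the result is either
// Some(value) or None, and the caller is forced to deal with both cases.
fn add_miles(total: u32, trip: u32) -> Option<u32> {
    total.checked_add(trip)
}

fn main() {
    assert_eq!(add_miles(99_999, 5_351), Some(105_350));
    // Near the type's limit, the same call reports failure instead of wrapping.
    assert_eq!(add_miles(u32::MAX, 1), None);
}
```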
As others have said though, this problem isn't going away any time soon.
There's just too much code floating around that uses bare integers, and too many
people thinking that "big enough now" == "big enough later".