
posted by martyb on Friday July 03 2020, @10:41PM
from the CPE-1704-TKS dept.

Software is making it easier than ever to travel through space, but autonomous technologies could backfire if every glitch and error isn’t removed.

When SpaceX’s Crew Dragon took NASA astronauts to the ISS near the end of May, the launch brought back a familiar sight. For the first time since the space shuttle was retired, American rockets were launching from American soil to take Americans into space.

Inside the vehicle, however, things couldn’t have looked more different. Gone was the sprawling dashboard of lights and switches and knobs that once dominated the space shuttle’s interior. All of it was replaced with a futuristic console of multiple large touch screens that cycle through a variety of displays. Behind those screens, the vehicle is run by software that’s designed to get into space and navigate to the space station completely autonomously.

[...] But over-relying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That’s especially a concern for many of the space industry’s new contenders, who aren’t necessarily used to the kind of aggressive and comprehensive testing needed to weed out problems in software and are still trying to strike a good balance between automation and manual control.

Nowadays, a few errors in over one million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing’s Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer.
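For a sense of why one bad timer value can sink a mission: the Starliner anomaly was reportedly caused by the capsule initializing its mission elapsed timer from the wrong point in the booster's countdown sequence, leaving the clock roughly eleven hours off. A hypothetical Python sketch (the names and numbers here are illustrative, not Boeing's code) of how that class of bug derails time-keyed logic:

    # Burn scheduling keyed to mission elapsed time (MET). Illustrative values only.
    ORBIT_INSERTION_BURN_MET_S = 31 * 60  # burn scheduled ~31 minutes after liftoff

    def mission_elapsed_time(now_s, liftoff_epoch_s):
        """MET is only meaningful if liftoff_epoch_s really marks liftoff."""
        return now_s - liftoff_epoch_s

    def burn_window_reached(met_s):
        return met_s >= ORBIT_INSERTION_BURN_MET_S

    # Correct: epoch captured at actual liftoff; ten minutes in, keep waiting.
    print(burn_window_reached(mission_elapsed_time(now_s=600, liftoff_epoch_s=0)))  # False

    # Buggy: epoch read from a clock set hours before liftoff. The vehicle now
    # believes the burn window passed long ago, and its sequencing goes wrong.
    stale_epoch_s = -11 * 3600  # ~11 hours early
    print(burn_window_reached(mission_elapsed_time(now_s=600, liftoff_epoch_s=stale_epoch_s)))  # True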

[...] There’s no consensus on how much further the human role in spaceflight will—or should—shrink. Uitenbroek thinks trying to develop software that can account for every possible contingency is simply impractical, especially when you have deadlines to make.

Chang Díaz disagrees, saying the world is shifting “to a point where eventually the human is going to be taken out of the equation.”

Which approach wins out may depend on the level of success achieved by the different parties sending people into space. NASA has no intention of taking humans out of the equation, but if commercial companies find they have an easier time minimising the human pilot's role and letting the AI take charge, then touch screens and pilot-less flight to the ISS are only a taste of what's to come.

MIT Technology Review

Which approach do you think is the best way to go forward?


Original Submission

 
  • (Score: 0) by Anonymous Coward on Saturday July 04 2020, @09:31AM (#1016054)

    I find it hard to believe you can't have both extensive automation and also the ability to selectively turn things off and have a large degree of manual control when required. Different layers of software abstraction are the perfect tool for the job.
    The autopilot of course has a flight plan containing all the steps needed to go somewhere, and recalculates whenever something unexpected happens.
    Executing those steps involves actions like 'add X delta-V in direction Y',
    which break down into functions to orient the craft and functions to fire the appropriate thrusters for a specific amount of time,
    which break down again into functions to prepare the fuel pressure, the valves, and everything else required to operate the thrusters.
    And so on, and so on, until you get to the functions that flip the actual bits that manipulate the hardware, like closing a circuit relay to provide power to a specific motor that operates a specific valve. Much like the switches in an airliner cockpit.
    The same goes, of course, for other systems like life support or thermal management.
    At each of those stages you could decide at any moment to put the high-level program into some sort of standby mode where it will not execute any actions (or only certain types of actions), and instead manually call the lower-level functions yourself, provided the software exposes an interface that allows it. There is no reason a pilot couldn't give the command to 'fire main thruster for 500 ms' or 'close valve A' at any time.
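    A minimal Python sketch of that layering (every name here is hypothetical, not drawn from any real flight software) might look like this, with each layer calling only the one below it, and the autopilot able to stand by while a pilot calls lower layers directly:

        import time

        # Layer 0: hardware bit-flipping, the software equivalent of a cockpit switch.
        def set_relay(relay_id, closed):
            print(f"relay {relay_id} -> {'closed' if closed else 'open'}")

        # Layer 1: device operations built on layer 0.
        def open_valve(valve_id):
            set_relay(f"valve-motor-{valve_id}", closed=True)

        def close_valve(valve_id):
            set_relay(f"valve-motor-{valve_id}", closed=False)

        # Layer 2: thruster operations built on layer 1.
        def fire_thruster(thruster_id, duration_s):
            open_valve(f"{thruster_id}-fuel")
            time.sleep(duration_s)  # stand-in for "produce thrust this long"
            close_valve(f"{thruster_id}-fuel")

        # Layer 3: maneuvers built on layer 2 (orientation omitted for brevity).
        def add_delta_v(direction, dv_mps):
            fire_thruster("main", duration_s=dv_mps / 10.0)  # made-up scaling

        # Layer 4: the autopilot, which can be put on standby at any moment.
        class Autopilot:
            def __init__(self, flight_plan):
                self.flight_plan = flight_plan  # list of (direction, delta-v) steps
                self.standby = False

            def step(self):
                if self.standby or not self.flight_plan:
                    return
                direction, dv = self.flight_plan.pop(0)
                add_delta_v(direction, dv)

        ap = Autopilot(flight_plan=[("prograde", 5.0)])
        ap.standby = True           # pilot takes over...
        fire_thruster("main", 0.5)  # ...and calls a lower layer directly: 'fire main thruster for 500 ms'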

  • (Score: 2) by Lagg (105) on Saturday July 04 2020, @04:46PM (#1016169)

    Good point, but the problem as I see it - especially because this flight hardware is often really dumb compared to what we have on desktops - is like how Windows behaved with ISA and all that non-plug-and-play crap. The 737 Max drama (they're putting it back into production this year, btw) made me aware that these aeronautics projects have something of a problem in that vein with interdependency buildup and coupling, and I've always reckoned that this "AI" starts getting fucky when you give it unexpected input like a human's.

    Basically, the 737 Max features an entire simulation of the yaw/pitch/pilot-words-i-dont-know behavior to make transitioning to the Max easier for a pilot without recertification. Simulating the in-flight behavior of an entire other plane model seems incredibly complex and reliant on all those tiny subsystems operating concurrently. Or at least this is how it was explained to me. So I can't imagine the sheer levels of interdependency required to make that "halt execution here for a second" stage work. Because if the pilot gives input of any kind, you also have to give the code some kind of "step back to this pointer" behavior so that it can retrain based on what the pilot just did.

    I'm not making any claims of deeply understanding how this "AI" works, because it's such a hype hellhole at this point, but it seems like behaviors inherent to FSMs and NNs are making people like those quoted go "well, shit, this is a hard problem to solve; let's remove the human risk altogether."
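    One hedged, toy illustration (all names invented, not from any real system) of a way around the "step back to this pointer" problem: instead of checkpointing and rolling back, the controller re-reads the world after every manual interruption, so there is no stale internal model to unwind:

        # Toy sketch: resync from fresh telemetry after a manual pause,
        # rather than rolling back to cached state. All names are made up.
        def read_sensors():
            return {"altitude_m": 10_000, "pitch_deg": 2.5}  # stand-in telemetry

        class Controller:
            def __init__(self):
                self.paused = False

            def decide(self, state):
                # trivial placeholder policy
                return "pitch_down" if state["pitch_deg"] > 2.0 else "hold"

            def step(self):
                if self.paused:
                    return None  # pilot has the stick; the controller does nothing
                # Re-observe instead of trusting anything cached before the pause.
                return self.decide(read_sensors())

        ctl = Controller()
        ctl.paused = True     # pilot input arrives
        ctl.paused = False    # pilot hands control back
        print(ctl.step())     # controller re-derives its decision from live sensors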

    --
    http://lagg.me 🗿