

posted by martyb on Thursday July 26 2018, @08:01AM
from the if-it-walks-like-a-duck,-sinks-like-a-duck,-oh,-wait... dept.

The Los Angeles Times reports:

The duck boat that sank in a Missouri lake last week, killing 17 people, was built based on a design by a self-taught entrepreneur who had no engineering training, according to court records reviewed by the Los Angeles Times.

The designer, entrepreneur Robert McDowell, completed only two years of college and had no background, training or certification in mechanics when he came up with the design for "stretch" duck boats more than two decades ago, according to a lawsuit filed over a roadway disaster in Seattle involving a similar duck boat in 2015.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Knowledge Troll (5948) on Thursday July 26 2018, @09:57PM (#713384) (1 child)

    If you do enough testing, maybe you can approach an acceptable level of reliability?

    I'm not looking forward to a future where we can no longer prove a control system is operating correctly. We are moving away from control systems based on state-transition charts and a dynamics model that can be analyzed and proven to have the properties they were designed for. There can be errors in the design that lead to problems, but fundamentally you can go back and say "this is exactly what went wrong".

    You can't do that with an NN control system. The best you can say is "we made a test case and it no longer faults in that test case." Maybe that will change as network analysis gets better, but certainly not right now. (A toy illustration of the contrast follows this comment.)

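As a rough illustration of the contrast drawn above, here is a minimal Python sketch; the names and the tiny stand-in "controller" are invented purely for illustration, not taken from any real system. A hand-written state-transition chart can be checked exhaustively against a safety property, while a learned controller can only be probed with whatever test cases someone thought to write.

# Hypothetical example: an explicit state-transition chart versus an opaque
# learned controller. Every (state, event) pair in the chart is enumerable,
# so a safety property can be proved by exhaustive inspection.
TRANSITIONS = {
    ("idle",      "start"): "running",
    ("running",   "fault"): "safe_stop",
    ("running",   "stop"):  "idle",
    ("safe_stop", "reset"): "idle",
}

def prove_fault_always_safe():
    """Check, over the whole chart, that a fault event always leads to safe_stop."""
    return all(nxt == "safe_stop"
               for (state, event), nxt in TRANSITIONS.items()
               if event == "fault")

# A learned controller is just an opaque function of its inputs; the best we
# can do is run it on the finite set of test cases we happened to think of.
def learned_controller(sensor_reading):
    # Stand-in for a trained neural network; its behaviour on inputs we never
    # tested remains unknown.
    return "safe_stop" if sensor_reading > 0.9 else "running"

def test_learned_controller(cases):
    """Passing these cases says nothing about inputs that were never tried."""
    return all(learned_controller(x) == expected for x, expected in cases)

if __name__ == "__main__":
    print("chart property proved over all transitions:", prove_fault_always_safe())
    print("learned controller passes its test cases:",
          test_learned_controller([(0.95, "safe_stop"), (0.10, "running")]))
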
  • (Score: 1, Informative) by Anonymous Coward on Friday July 27 2018, @06:26AM (#713578)

    And after that you'll need robot psychologists to figure out what might have gone wrong...

    Seriously though, AI is still at the alchemy stage: they throw stuff into a pot until some recipe works, but they have no real idea of how or why it works.

    https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [technologyreview.com]
    https://www.theguardian.com/technology/2017/nov/03/googles-ai-turtle-rifle-mit-research-artificial-intelligence [theguardian.com]
    https://www.infoworld.com/article/3263755/artificial-intelligence/something-is-still-rotten-in-the-kingdom-of-artificial-intelligence.html [infoworld.com]

    I can't find the actual link I have in mind, which argues that AI's current successes actually show how it's failing: the reason things can "mostly work" right now is that we have huge amounts of samples and are essentially brute-forcing the solution.

    To train a current "AI" to recognize that something is a bus, we show it huge numbers of photos of buses and huge numbers of other vehicles (a toy sketch of this sample-hungry approach follows this comment).

    But you don't need to do that when training a dog. A dog can learn what a bus is with far fewer samples. Even a crow with a walnut-sized brain can do it with just a few samples.

    So current AI tech is actually at quite a dismal and disappointing stage despite some seeming successes.
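
To make the "brute-forcing with huge numbers of samples" point concrete, here is a toy numpy sketch; the synthetic "bus vs. other vehicle" features are invented purely for illustration. A trivial nearest-centroid classifier only becomes reliable as the number of labelled examples per class grows, which is roughly the sample-hungry regime the comment describes.

import numpy as np

rng = np.random.default_rng(0)

def make_samples(n_per_class):
    """Synthetic 2-D 'features' for buses (label 1) and other vehicles (label 0)."""
    bus   = rng.normal(loc=[1.0, 1.0], scale=2.0, size=(n_per_class, 2))
    other = rng.normal(loc=[-1.0, -1.0], scale=2.0, size=(n_per_class, 2))
    X = np.vstack([bus, other])
    y = np.array([1] * n_per_class + [0] * n_per_class)
    return X, y

def nearest_centroid_accuracy(n_train, n_test=2000):
    """Fit one centroid per class on n_train examples each, then score on held-out data."""
    X_tr, y_tr = make_samples(n_train)
    X_te, y_te = make_samples(n_test)
    c_bus   = X_tr[y_tr == 1].mean(axis=0)
    c_other = X_tr[y_tr == 0].mean(axis=0)
    pred = (np.linalg.norm(X_te - c_bus, axis=1)
            < np.linalg.norm(X_te - c_other, axis=1)).astype(int)
    return (pred == y_te).mean()

if __name__ == "__main__":
    # Accuracy generally improves as we throw more labelled samples at the problem.
    for n in (2, 10, 100, 10_000):
        print(f"{n:>6} samples per class -> test accuracy {nearest_centroid_accuracy(n):.2f}")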