The Los Angeles Times reports:
The duck boat that sank in a Missouri lake last week, killing 17 people, was built based on a design by a self-taught entrepreneur who had no engineering training, according to court records reviewed by the Los Angeles Times.
The designer, entrepreneur Robert McDowell, completed only two years of college and had no background, training or certification in mechanics when he came up with the design for "stretch" duck boats more than two decades ago, according to a lawsuit filed over a roadway disaster in Seattle involving a similar duck boat in 2015.
(Score: 1, Informative) by Anonymous Coward on Friday July 27 2018, @06:26AM
And after that you'll need robot psychologists to figure out what might have gone wrong...
Seriously though, AI is still at the alchemy stage. Researchers throw stuff into a pot until some recipe works, but they have no real idea of how or why it works.
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [technologyreview.com]
https://www.theguardian.com/technology/2017/nov/03/googles-ai-turtle-rifle-mit-research-artificial-intelligence [theguardian.com]
https://www.infoworld.com/article/3263755/artificial-intelligence/something-is-still-rotten-in-the-kingdom-of-artificial-intelligence.html [infoworld.com]
I can't find the actual link I have in mind, which argues that AI's current successes actually show how it's failing: the reason stuff can "mostly work" right now is that we have huge amounts of samples and we're essentially brute-forcing the solution.
To train a current "AI" to recognize that something is a bus, we show it huge numbers of photos of buses and huge numbers of other vehicles.
But you don't need to do that when training a dog. A dog can learn what a bus is from far fewer examples. Even a crow, with a walnut-sized brain, can do it with just a few.
So current AI tech is actually at quite a dismal and disappointing stage despite some seeming successes.
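To make the sample-hungriness concrete, here's a rough sketch of what "showing it huge numbers of photos" looks like in practice. It's a minimal example assuming PyTorch/torchvision and a hypothetical folder of labelled images (data/bus and data/other); the point is just that the model only gets anywhere by grinding through thousands of labelled samples, not that this is anyone's particular production pipeline.

# Minimal sketch (assumptions: PyTorch/torchvision installed, hypothetical
# dataset layout data/bus/*.jpg and data/other/*.jpg with thousands of images).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Load the labelled photos: the "huge numbers of samples" the comment refers to.
dataset = datasets.ImageFolder(
    "data",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)   # two classes: bus / other
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# The "brute force" part: many passes over many samples, nudging millions of
# weights a little each time, with no explicit notion of what a bus *is*.
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

Swap in a fancier architecture and you still have the same brute-force loop underneath; nothing in it corresponds to the crow learning from a handful of examples.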