

posted by martyb on Thursday July 26 2018, @08:01AM
from the if-it-walks-like-a-duck,-sinks-like-a-duck,-oh,-wait... dept.

The Los Angeles Times reports:

The duck boat that sank in a Missouri lake last week, killing 17 people, was built based on a design by a self-taught entrepreneur who had no engineering training, according to court records reviewed by the Los Angeles Times.

The designer, entrepreneur Robert McDowell, completed only two years of college and had no background, training or certification in mechanics when he came up with the design for "stretch" duck boats more than two decades ago, according to a lawsuit filed over a roadway disaster in Seattle involving a similar duck boat in 2015.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Thursday July 26 2018, @06:39PM (4 children)

    by Anonymous Coward on Thursday July 26 2018, @06:39PM (#713298)

    I'm a real EE graduate with decades of experience in safety-critical control systems, with an emphasis in optimal control (https://en.wikipedia.org/wiki/Optimal_control [wikipedia.org]). I played around with NN (and fuzzy logic) a little before and after graduation, but was told by fellow engineers and mentors that they'd never be permitted or licensed because of non-determinism and no formal proofs. And even then, why would you want to when you can design systems differently and mathematically prove there is nothing better in terms of efficiency, correctness or any other metric you care to use? Then there was an article in, I think, Spectrum showing that fuzzy logic was simply a restatement of classical control, after which it kinda disappeared, at least from the hype wagon.

    I've worked with some of the smartest software people out there, but often they miss little details like the Coriolis force, limited control power or energy, or energy ellipsoids. From what I've read about autonomous cars (a few papers on Stanley), nobody is, or at least was, using NN.
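As a sketch of the "mathematically provable" kind of design the comment refers to, here is a minimal discrete-time LQR controller in Python. This is an illustrative toy, not anything from the comment: the double-integrator plant and the cost weights are hypothetical, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical double-integrator plant (position, velocity), time step 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)          # state-deviation cost
R = np.array([[0.1]])  # control-effort cost

# Solve the discrete algebraic Riccati equation; K is the provably
# optimal feedback gain for the quadratic cost sum(x'Qx + u'Ru).
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A

# Simulate the closed loop from a unit position offset.
x = np.array([[1.0], [0.0]])
for _ in range(100):
    u = -K @ x
    x = A @ x + B @ u
print(np.linalg.norm(x))  # state driven toward the origin
```

The point of contrast with a neural network: here optimality is a theorem about the gain `K`, not an empirical observation about training data.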

  • (Score: 2) by tibman on Thursday July 26 2018, @07:46PM (1 child)

    by tibman (134) Subscriber Badge on Thursday July 26 2018, @07:46PM (#713335)

    Ranting on NN without even recognizing what it's good for. It's a fuzzy matching system so it's far far superior to a deterministic expert logic system when dealing with fuzzy problems. It does approximate pattern recognition. This widget coming off the product line is 15% different than the reference picture, kick it out for a human to examine. See a face in a high security zone that isn't on the clearance list, notify security. NN does basically what a human does. Humans are allowed to do stuff in warehouses, factories, refineries, and so on. But you wouldn't use a human as your control system. They'd take a short break and something would catch fire.
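A minimal sketch of that approximate-matching idea, with an entirely hypothetical reference feature vector and a 15% tolerance standing in for "different than the reference picture":

```python
import numpy as np

# Hypothetical feature vector for the reference widget image.
reference = np.array([0.9, 0.2, 0.5, 0.7])

def needs_human_review(sample, reference, tolerance=0.15):
    """Flag a sample that deviates more than `tolerance` (15%) from the
    reference, relative to the reference's magnitude."""
    deviation = np.linalg.norm(sample - reference) / np.linalg.norm(reference)
    return bool(deviation > tolerance)

good_widget = reference + 0.01   # tiny deviation: passes
bad_widget = reference * 1.5     # 50% off: kicked out for inspection
print(needs_human_review(good_widget, reference))  # False
print(needs_human_review(bad_widget, reference))   # True
```

A real system would get `deviation` from a trained network's embedding distance rather than a raw norm, but the kick-it-out-past-a-threshold structure is the same.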

    --
    SN won't survive on lurkers alone. Write comments.
    • (Score: 2) by Knowledge Troll on Friday July 27 2018, @02:52AM

      by Knowledge Troll (5948) on Friday July 27 2018, @02:52AM (#713535) Homepage Journal

      I think the way to state this is that the neural network can solve problems that are not solvable well or at all with classic control theory. This is specifically because the NN is always an approximation of a function that achieves the goal and can be represented in a smaller space (less code, less memory, less everything) than a function that can be proven to work correctly or optimally. That has a lot of utility and I recognize that.

      What I don't recognize is the cost vs benefit of using a NN in a space like the robot cars. I do recognize that the problem is most likely so complex that normal control theory can't handle it. Restating it simply: NN or other AI is the only way that robot cars can ever work.

      That's frightening and I can't see how the NN control system is fit for this purpose in any way.

      I think at the very least that the robot car control system is trying to solve a problem so complex that it will always be marginal. I have this theory that there are problems that get to a sufficient level of complexity such that no (practical?) system can solve them. Our brains are such a system - we need a ton of heuristics or we couldn't make it through a day. The robot needs a ton of heuristics or it can't move.

      We already have highly heuristic AI control systems in cars: humans. It would not surprise me at all if the robot car winds up never safer than a human.
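The "NN is always an approximation of a function" point above can be sketched with a toy one-hidden-layer network: a fixed random hidden layer plus a least-squares readout, fitted to sin(x). Everything here (the target function, the layer size, the seed) is hypothetical, and note that the error is measured only on the training points; nothing is proven about behavior anywhere else, which is exactly the gap versus classical control.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)  # the function we want the network to approximate

# One hidden layer with random, fixed weights; only the readout is trained.
W = rng.normal(size=(1, 50))
b = rng.normal(size=(1, 50))
H = np.tanh(x @ W + b)                         # hidden activations
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout fit

err = np.max(np.abs(H @ w_out - y))
print(err)  # small on the training grid, but guaranteed nowhere else
```

The compactness claim holds here too: 50 tanh units stand in for an arbitrary smooth function, at the price that "works" is an empirical statement about sampled inputs rather than a proof.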

  • (Score: 3, Informative) by Knowledge Troll on Thursday July 26 2018, @11:02PM

    by Knowledge Troll (5948) on Thursday July 26 2018, @11:02PM (#713421) Homepage Journal

    Thank you for joining the conversation!

    From what I've read about autonomous cars (a few papers on Stanley), nobody is, or at least was, using NN

    I'm familiar with Stanley and the whole DARPA Grand Challenge because this has been very interesting to me since the original Grand Challenge announcement. Stanley was an amazing machine, and Sebastian went off to Google to start their project (and then left, but I don't know why; also Sebastian is actively hostile to steering wheels because of a family death in a car). The Stanley technique of fusing color recognition with the local sensor data was really clever and all kinds of good, especially in that there were minimal moving parts, or at least no gimbals.

    Tesla is NN: https://electrek.co/2018/06/11/tesla-ai-director-insights-autopilot-computer-vision-neural-net/ [electrek.co]

    Waymo was and I think still is NN: https://opendatascience.com/the-history-of-neural-networks-and-ai-part-iii/ [opendatascience.com]

    I can't find a better reference but this suggests, and I believe, that GM is also NN: https://finance.yahoo.com/news/gm-apos-self-driving-car-133500602.html [yahoo.com]

    I can't find any reference at all for Uber but I think it is NN too.

    It's fucking crazy isn't it? And people act like this is the same thing as an airplane autopilot and base how well this is going to go on provable models. Sheesh.

  • (Score: 2) by Knowledge Troll on Thursday July 26 2018, @11:11PM

    by Knowledge Troll (5948) on Thursday July 26 2018, @11:11PM (#713430) Homepage Journal

    Oh I forgot to mention the real allure for an NN based control system or at least what I noticed and thought was absolutely remarkable as a lay person trying to understand them.

    The process for creating a control system with a NN is to set up the model with inputs from the controlled process during the learning phase. Examples of normal operating behavior are fed into the system, and the NN trains and refines its way toward an approximation of a system that doesn't dump chemicals on the floor.

    After you feed it enough training data you can rearrange the neural network using a formally defined process, and now the trained network can receive inputs from the controlled system and produce outputs to control it. Run that in a simulation; when it lets the pressure in a pipe get to 85k PSI and it blows up and kills someone, go back into training mode and teach it that that is bad. Bad network, bad.

    The allure seems to be that you can do this with the known transformation of the NN from learning mode to control mode so that you don't have to actually produce dynamics models and all that. That sounds like a much easier task to me than building a formal or optimal control system.
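A toy, heavily simplified sketch of that train/simulate/retrain loop. The "network" here is just a single proportional gain, and every number except the 85k PSI limit from the comment is made up; it only illustrates the punish-and-retry shape of the process, not any real training algorithm.

```python
PRESSURE_LIMIT = 85_000  # PSI: the failure threshold from the comment

def simulate(policy, steps=50):
    """Run the policy on a toy plant; return the worst pressure seen."""
    pressure = 50_000.0
    worst = pressure
    for _ in range(steps):
        pressure += policy(pressure)
        worst = max(worst, pressure)
    return worst

def make_policy(gain):
    # Proportional push toward a 60k PSI setpoint; stands in for the NN.
    return lambda p: gain * (60_000 - p)

# "Training": start from a deliberately unstable gain; whenever the
# simulated run blows the limit, punish (shrink) the gain and retry.
gain = 3.0
while simulate(make_policy(gain)) > PRESSURE_LIMIT:
    gain *= 0.5  # bad network, bad: nudge toward safer behavior

print(gain, simulate(make_policy(gain)))  # a gain that stays under the limit
```

Notice what the loop delivers: a controller that happened not to blow up in the scenarios it was shown, which is the comment's point about never having to produce a dynamics model, and also the reason nothing here constitutes a safety proof.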

    I also think it's rather insane for anything with higher stakes than a video game.