
posted by janrinok on Sunday January 23 2022, @09:39AM
from the you-got-that-way-and-I'll-go-this-way-and-that-other-one...? dept.

New AI navigation prevents crashes in space:

What do you call a broken satellite? Today, it's a multimillion-dollar piece of dangerous space junk.

But a new collision-avoidance system developed by students at the University of Cincinnati [UC] is getting engineers closer to developing robots that can fix broken satellites or spacecraft in orbit.

UC College of Engineering and Applied Science doctoral students Daegyun Choi and Anirudh Chhabra presented their project at the Science and Technology Forum and Exposition in January in San Diego, California. Hosted by the American Institute of Aeronautics and Astronautics, it's the world's largest aerospace engineering conference.

"We have to provide a reliable collision-avoidance algorithm that operates in real time for autonomous systems to perform a mission safely. So we proposed a new collision-avoidance system using explainable artificial intelligence," Choi said.

Choi has been working on similar projects at UC for the past two years and has published three articles in peer-reviewed journals based on his novel algorithms.

UC researchers tested their system in simulations, first by deploying robots in a two-dimensional space. Their chosen digital battlefield? A virtual supermarket where multiple autonomous robots must safely navigate aisles to help shoppers and employees.

"This scenario presents many of the same obstacles and surprises that an autonomous car sees on the road," study co-author and UC assistant professor Donghoon Kim said.

[...] "Emerging AI is physics-informed rather than relying solely on data," Kim said. "If we know the physical behavior, we can use that as well as the data so we can get more meaningful information and a reliable AI model."

Journal Reference:
Daegyun Choi, Anirudh Chhabra, and Donghoon Kim. Collision Avoidance of Unmanned Aerial Vehicles Using Fuzzy Inference System-Aided Enhanced Potential Field. (DOI: 10.2514/6.2022-0272)


Original Submission

  • (Score: 4, Informative) by Frosty Piss on Sunday January 23 2022, @09:41AM (8 children)

    by Frosty Piss (4971) on Sunday January 23 2022, @09:41AM (#1214971)

Look, this nonsense of calling any complex programming “AI” is simply ridiculous.

    • (Score: 1, Insightful) by Anonymous Coward on Sunday January 23 2022, @01:51PM (1 child)

      by Anonymous Coward on Sunday January 23 2022, @01:51PM (#1214986)

You cannot sell 1970s tech on the 2020s market without dressing it up in 2020s bling and saying the trendiest 2020s buzzwords.

      • (Score: 0) by Anonymous Coward on Sunday January 23 2022, @07:31PM

        by Anonymous Coward on Sunday January 23 2022, @07:31PM (#1215072)

        Step-down chill-out of highflyin' robots Using hip-to-the-shakin' jive turkey boogie lowdown.

    • (Score: 3, Interesting) by mcgrew on Sunday January 23 2022, @09:49PM (5 children)

      by mcgrew (701) <publish@mcgrewbooks.com> on Sunday January 23 2022, @09:49PM (#1215103) Homepage Journal

      AI itself is nonsense. The only intelligence in AI is the programmer's intelligence. It's just huge computers with giant, cleverly programmed databases. AI is no more intelligent than a doorknob; computers are just giant piles of switches.

I was born the year Eisenhower was elected, six years after ENIAC, the first programmable electronic computer, was patented. CBS TV had it (or rather, a mock-up) on their election-night coverage, where it beat the human experts badly (Growing Up with Computers [mcgrewbooks.com]). They called it an "electronic brain".

      It was less powerful than a musical Hallmark card.

      --
      mcgrewbooks.com mcgrew.info nooze.org
      • (Score: 0) by Anonymous Coward on Sunday January 23 2022, @11:00PM

        by Anonymous Coward on Sunday January 23 2022, @11:00PM (#1215131)

        It was less powerful than a musical Hallmark card.

There seem to be plenty of humans with brains less powerful than that.

      • (Score: 0) by Anonymous Coward on Sunday January 23 2022, @11:33PM (1 child)

        by Anonymous Coward on Sunday January 23 2022, @11:33PM (#1215144)

I hear what you're saying, but your argument fails. I expect you can easily see that the recursion "paradox" fails, because humans also, through arcane processes, make humans, who are typically considered intelligent.

        giant, cleverly programmed databases

Decision tables don't scale well, and no human working with AI/ML/etc. would call those AI. A database and a neural net might both be Turing complete, but so is a human brain, again.

        • (Score: 2) by PiMuNu on Monday January 24 2022, @07:44AM

          by PiMuNu (3823) on Monday January 24 2022, @07:44AM (#1215220)

> A database and a neural net might both be Turing complete, but so is a human brain

          You hint that a human brain is just a big computer. Is that what you intended to say?

      • (Score: 0) by Anonymous Coward on Monday January 24 2022, @12:49AM

        by Anonymous Coward on Monday January 24 2022, @12:49AM (#1215156)

        Nice story, grandpa.

      • (Score: 0) by Anonymous Coward on Monday January 24 2022, @05:51PM

        by Anonymous Coward on Monday January 24 2022, @05:51PM (#1215307)

That's the intelligence that led to higher levels of intelligence. If our minds didn't have that, there'd be little point in having higher-order intelligence, as we'd be constantly reinventing the wheel and be incapable of inductive reasoning. The whole "subconscious mind" is mainly tasked with that.

  • (Score: -1, Troll) by Anonymous Coward on Sunday January 23 2022, @01:10PM (1 child)

    by Anonymous Coward on Sunday January 23 2022, @01:10PM (#1214981)

    If Musk designed this AI, satellites will start crashing into emergency vehicles parked roadside.

    • (Score: 1, Interesting) by Anonymous Coward on Sunday January 23 2022, @05:48PM

      by Anonymous Coward on Sunday January 23 2022, @05:48PM (#1215042)

      Musk doesn't design shit.
      He has factories built with no worker protection because he hates backup buzzers and the color yellow.

  • (Score: 2) by mcgrew on Sunday January 23 2022, @09:51PM (1 child)

    by mcgrew (701) <publish@mcgrewbooks.com> on Sunday January 23 2022, @09:51PM (#1215104) Homepage Journal

    This tech was around half a century ago, albeit with only two dimensions rather than three. It takes a post-grad to program a 3-D Space Invaders?

    --
    mcgrewbooks.com mcgrew.info nooze.org
    • (Score: 3, Interesting) by PiMuNu on Monday January 24 2022, @07:47AM

      by PiMuNu (3823) on Monday January 24 2022, @07:47AM (#1215221)

Most grad studies involve some boring work. We typically expect a PhD thesis to have three blobs:
      1. Intro/literature review
      2. Service task
      3. Novel/analysis task

      The service task is a "turn-the-handle" (and out pops a thesis) task, typically involving some application of existing work. This is a conference submission, so almost certainly it is not intended to be entirely novel. It seems like they have managed to get a bit of outreach along with it. Good for them.
