Hungry penguins have inspired a novel way of making sure computer code in smart cars does not crash. Tools based on the way the birds co-operatively hunt for fish are being developed to test different ways of organising in-car software. The tools look for safe ways to organise code in the same way that penguins seek food sources in the open ocean. Experts said such testing systems would be vital as cars get more connected.
Engineers have often turned to nature for good solutions to tricky problems, said Prof Yiannis Papadopoulos, a computer scientist at the University of Hull who, together with Dr Youcef Gheraibia from Algeria, developed the penguin-inspired testing system. The way ants pass messages among nest-mates has helped telecoms firms keep telephone networks running, and many robots get around using methods of locomotion based on the ways animals move.
Penguins were another candidate, said Prof Papadopoulos, because millions of years of evolution have helped them develop very efficient hunting strategies. This was useful behaviour to copy, he said, because it showed that penguins had solved a tricky optimisation problem: how to ensure as many penguins as possible get enough to eat. [...] "There must be something special about their hunting strategy," he said, adding that an inefficient strategy would mean many birds starved.
Tux was not involved.
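To make the idea concrete, here is a minimal sketch of the penguin-style co-operative search the article gestures at: a population probes a search space, and later "dives" are concentrated on the regions that have paid off best so far. This is an illustration only, not the Hull tool; the objective function and every parameter below are invented.

    import random

    def fitness(x):
        # Stand-in objective (invented): pretend lower is better, e.g. fewer
        # timing faults in a candidate ordering of in-car software tasks.
        return (x - 3.7) ** 2

    def penguin_search(n_groups=4, dives_per_round=20, rounds=30):
        # Each group starts fishing in a random region of the space.
        groups = [random.uniform(-10, 10) for _ in range(n_groups)]
        best_x, best_f = None, float("inf")
        for _ in range(rounds):
            local_bests = []
            for centre in groups:
                # Every dive samples near the group's current region.
                local_best = min(
                    (random.gauss(centre, 1.0) for _ in range(dives_per_round)),
                    key=fitness,
                )
                local_bests.append(local_best)
                if fitness(local_best) < best_f:
                    best_x, best_f = local_best, fitness(local_best)
            # Information sharing: groups relocate toward the richest region
            # found this round, keeping some spread to avoid collapsing early.
            local_bests.sort(key=fitness)
            groups = [random.gauss(local_bests[0], 2.0) for _ in range(n_groups)]
        return best_x, best_f

    print(penguin_search())

The design choice worth noting is the relocation step: like penguins sharing information about where the fish are, groups redistribute toward the best-scoring region each round rather than searching independently.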
(Score: 0) by Anonymous Coward on Tuesday January 31 2017, @10:24AM
From the article:
This sounds as if they want to automate the job of the software engineer.
(Score: 0) by Anonymous Coward on Wednesday February 01 2017, @02:14AM
This already exists and it's called genetic programming (basically you run a genetic algorithm on the code structure itself, which is easiest in Lisp, or on command sequences that act as code). There's nothing to see here except a new name so someone can claim their search heuristic is something new.
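For anyone unfamiliar with the term, a toy version of what the parent means by running a genetic algorithm on command sequences might look like this (the opcode set, target value and parameters are all invented for illustration):

    import random

    OPS = ["inc", "dec", "dbl"]  # hypothetical opcode set
    TARGET = 10

    def run(genome):
        # Interpret the command sequence as a tiny program starting from 0.
        x = 0
        for op in genome:
            if op == "inc": x += 1
            elif op == "dec": x -= 1
            elif op == "dbl": x *= 2
        return x

    def fitness(genome):
        # Programs whose output lands closer to TARGET score higher.
        return -abs(run(genome) - TARGET)

    def evolve(pop_size=50, length=8, generations=100):
        pop = [[random.choice(OPS) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == 0:
                break
            # Selection: keep the better half, breed the rest.
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]           # single-point crossover
                if random.random() < 0.2:           # point mutation
                    child[random.randrange(length)] = random.choice(OPS)
                children.append(child)
            pop = survivors + children
        return pop[0], run(pop[0])

    print(evolve())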
It sounds like they don't know what fuzzing is. With a software program you can automatically target every input path. You don't need to have your testing code flock to the area that has triggered the most bugs (I think that's what they're trying to describe? The article is completely lacking in real technical details; it's mostly hand-waving marketing bullshit). Testing buggy areas harder generally means you're wasting time finding ways to re-trigger bugs you've already discovered. If a negative number causes a crash, there's no value in retesting that feature with 200 different negative numbers.
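The point about not retesting a crash 200 times is essentially crash deduplication. A minimal sketch, assuming a hypothetical target function and bucketing crashes by a signature of exception type plus crash site, so that 200 different negative numbers all land in one bucket:

    import random
    import traceback

    def target(n):
        # Hypothetical function under test: crashes on any negative input.
        if n < 0:
            raise ValueError("negative input not handled")
        return n * 2

    def fuzz(trials=1000):
        unique_crashes = {}
        for _ in range(trials):
            n = random.randint(-100, 100)
            try:
                target(n)
            except Exception as exc:
                # Signature = exception type + function + line of the crash;
                # every negative input maps to the same signature here.
                tb = traceback.extract_tb(exc.__traceback__)[-1]
                sig = (type(exc).__name__, tb.name, tb.lineno)
                unique_crashes.setdefault(sig, n)  # keep first example input
        return unique_crashes

    print(fuzz())  # expect one bucket despite hundreds of crashing inputs

Real coverage-guided fuzzers (AFL and the like) do something similar, steering generation toward inputs that reach new code paths rather than ones that merely repeat a known failure.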