Controlling modern infrastructure such as intelligent power grids requires considerable computing capacity. Scientists at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg have developed an algorithm that might revolutionise these processes. Their new software forgoes much of that computing capacity, enabling what they call micro mining. The team, headed by Prof. Yves Le Traon, presented the work at the International Conference on Software Engineering and Knowledge Engineering, where it earned a Best Paper Award.
Modern infrastructure – from the telephone network and alarm systems to power supply systems – is controlled by computer programmes. This intelligent software continuously monitors the state of the equipment, adjusts system parameters when they deviate, or generates error messages. To monitor the equipment, the software compares its current state with its past state by continuously measuring the status quo, accumulating the data, and analysing it, which consumes a considerable portion of the available computing capacity. Thanks to their new algorithm, the SnT researchers' software no longer has to analyse the state of the monitored system continuously, the way established techniques do. Instead, when analysing the system, it moves seamlessly between state values measured at different points in time.
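The paper itself isn't quoted here, but the description above suggests a change-log approach: record a parameter only when it actually changes, and reconstruct any state on demand from the recorded changes. A minimal sketch, assuming timestamped parameters (all names are illustrative, not the researchers' actual code):

```python
import bisect

class ChangeLog:
    """Per-parameter log of (timestamp, value) entries, recorded only
    when a value actually changes, with on-demand reconstruction."""

    def __init__(self):
        self._log = {}  # parameter name -> sorted list of (timestamp, value)

    def record(self, t, name, value):
        """Store a reading only if it differs from the last stored value."""
        entries = self._log.setdefault(name, [])
        if not entries or entries[-1][1] != value:
            entries.append((t, value))

    def state_at(self, t):
        """Reconstruct the full state at time t from the stored changes."""
        state = {}
        for name, entries in self._log.items():
            # Latest recorded change at or before t (Python 3.10+ bisect key).
            i = bisect.bisect_right(entries, t, key=lambda e: e[0]) - 1
            if i >= 0:
                state[name] = entries[i][1]
        return state
```

Reconstruction then costs a lookup per parameter rather than a continuous re-analysis of the whole system.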
http://phys.org/news/2014-10-lots-capacity-algorithm.html
[Source]: http://wwwen.uni.lu/university/news/latest_news/saving_lots_of_computing_capacity_with_a_new_algorithm
(Score: 2) by frojack on Thursday October 30 2014, @07:58PM
"Our software stores only the changes of the system state at specific points in time. In order to be able to correctly evaluate the current situation in the network, our algorithm automatically identifies suitable measurement values from the past.
That sounds to me like they take some baseline measurements (of whatever) and from then on they just collect deltas.
Saves sending complete state information. Not sure this cuts down on processing all that much, because in order to find the state you can't just read the last transmission, you have to read the last delta, look up prior state, apply the delta, and only then have the current state. But it might cut down on data transmission.
Admittedly, if you have a new data delta that looks like past data deltas, you might be able to predict future states simply by applying past deltas that occurred the last time you got a delta like the current one. That implies a history search. Even more processing.
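In code, that reading looks roughly like this (a sketch of this comment's interpretation, not the published algorithm): reconstruction is a replay over the deltas, and prediction is a search through the delta history.

```python
def apply_deltas(baseline, deltas):
    """Replay deltas over a baseline snapshot to recover current state."""
    state = dict(baseline)
    for delta in deltas:
        state.update(delta)   # each delta: {parameter: new value}
    return state

def predict_next(history, current_delta):
    """Naive history search: find the last time we saw a delta like
    this one, and guess that the delta which followed it repeats."""
    for i in range(len(history) - 2, -1, -1):
        if history[i] == current_delta:
            return history[i + 1]
    return None   # no similar past delta on record
```

The history search is exactly the "even more processing" part.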
And any savings in processing would seem to come from programming the embedded processors in reporting equipment to STFU until/unless something changes (don't send the temperature reading every 5 minutes, just when it changes, etc.).
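That last scheme is easy to picture. A minimal sketch of report-on-change with a deadband (the threshold is made up):

```python
class ChangeReporter:
    """Transmit a reading only when it moves by more than a deadband,
    instead of on a fixed schedule."""

    def __init__(self, send, deadband=0.5):
        self.send = send          # callback that transmits a reading
        self.deadband = deadband  # minimum change worth reporting
        self.last_sent = None

    def sample(self, value):
        if self.last_sent is None or abs(value - self.last_sent) > self.deadband:
            self.send(value)
            self.last_sent = value

# With a 0.5-degree deadband, a stream of 20.0, 20.1, 21.0
# transmits only 20.0 and 21.0.
```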
That's the best I can make out of the gibberish in TFA.
(Score: 1) by EETech1 on Friday October 31 2014, @04:08AM
We did something very similar for diagnostics on fuel-injected engines. We would model the response of a sensor, for example the oxygen sensor. It was easy to determine how fast they responded, so we stored that and continuously monitored them for abrupt changes in their response. By keeping track of the change in fuel flow and how it affected the voltage coming from the oxygen sensors, we could very easily see when a sensor was taking more or less fuel to respond, and very accurately diagnose failures and aging.
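For illustration only, a sketch along the lines this comment describes (thresholds and tolerances invented, not the actual diagnostic): measure the delay between a fuel-flow step and the sensor's voltage response, then compare it against the stored nominal response time.

```python
def response_time(times, voltages, t_step, threshold):
    """Time from a fuel-flow step at t_step until the O2 sensor
    voltage crosses `threshold` (units arbitrary)."""
    for t, v in zip(times, voltages):
        if t >= t_step and v >= threshold:
            return t - t_step
    return None   # sensor never responded: hard failure

def check_sensor(measured, nominal, tolerance=0.25):
    """Flag a sensor whose response time drifts beyond a tolerance
    band around the stored nominal value (numbers are made up)."""
    if measured is None:
        return "failed"
    if abs(measured - nominal) / nominal > tolerance:
        return "aging"
    return "ok"
```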