Saving Lots of Computing Capacity With a New Algorithm

posted by LaminatorX on Thursday October 30 2014, @03:16PM
from the n(log(n)) dept.

The control of modern infrastructure such as intelligent power grids needs lots of computing capacity. Scientists of the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg have developed an algorithm that might revolutionise these processes. With their new software the SnT researchers are able to forgo the use of considerable amounts of computing capacity, enabling what they call micro mining. Their work, which the team headed by Prof. Yves Le Traon presented at the International Conference on Software Engineering and Knowledge Engineering, earned the scientists a Best Paper Award at the event.

Modern infrastructure – from the telephone network and alarm systems to power supply systems – is controlled by computer programmes. This intelligent software continuously monitors the state of the equipment, adjusts system parameters if they deviate, or generates error messages. To monitor the equipment, the software compares its current state with its past state by continuously measuring the status quo, accumulating this data, and analysing it. That uses a considerable portion of the available computing capacity. Thanks to their new algorithm, the SnT researchers' software no longer has to continuously analyse the state of the monitored system the way established techniques do. When analysing the system, it instead moves seamlessly between state values that were measured at different points in time.

http://phys.org/news/2014-10-lots-capacity-algorithm.html

[Source]: http://wwwen.uni.lu/university/news/latest_news/saving_lots_of_computing_capacity_with_a_new_algorithm
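The press release does not spell out the mechanics, but one plausible reading of "moving between state values that were measured at different points in time" is change-only storage with last-known-value lookup. The following is a minimal, purely illustrative Python sketch of that reading; it is not the authors' code, and all names are invented.

    from bisect import bisect_right
    from collections import defaultdict

    class ChangeLog:
        """Store a sensor value only when it changes; rebuild the state on demand."""

        def __init__(self):
            self.history = defaultdict(list)   # sensor -> [(timestamp, value), ...]

        def record(self, sensor, timestamp, value):
            series = self.history[sensor]
            if not series or series[-1][1] != value:
                series.append((timestamp, value))   # unchanged readings are dropped

        def state_at(self, t):
            """System state at time t: the most recent stored value of each sensor."""
            state = {}
            for sensor, series in self.history.items():
                i = bisect_right([ts for ts, _ in series], t)
                if i:
                    state[sensor] = series[i - 1][1]
            return state

    log = ChangeLog()
    log.record("line_voltage", 0, 229.9)
    log.record("line_voltage", 5, 229.9)   # no change, nothing stored
    log.record("line_voltage", 9, 231.4)
    print(log.state_at(7))                 # {'line_voltage': 229.9}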

  • (Score: 2) by bob_super on Thursday October 30 2014, @03:27PM

    by bob_super (1357) on Thursday October 30 2014, @03:27PM (#111545)

    > no longer has to continuously analyse the state of the system (...)
    > it instead seamlessly moves between state values that were measured at different points in time

    That's gonna be one hell of a state machine, even in a tiny place like Luxembourg.
    I've seen enough FPGAs to know what happens when you reach a system state that doesn't match the state machine, but I've never seen one controlling enough power for real fireworks...
    Where's my popcorn?

    • (Score: 2) by Nerdfest on Thursday October 30 2014, @03:49PM

      by Nerdfest (80) on Thursday October 30 2014, @03:49PM (#111558)

      No time to read TFA at the moment, but it sounds to me like the equivalent of switching from polling to event-driven. Basically it offloads the computation and comparison of state to the places that can do it more efficiently, or perhaps in more concise pieces.
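      If it really is polling vs. event-driven, the difference is roughly the following (an illustrative Python sketch of my reading of the comment, not anything from TFA):

      import time

      def read_sensor(name):
          return 0.0   # stand-in for a real measurement

      # Polling: the central monitor re-reads and re-checks every sensor each
      # cycle, even when nothing has changed.
      def poll_forever(sensors, check, interval=1.0):
          while True:
              for s in sensors:
                  check(s, read_sensor(s))
              time.sleep(interval)

      # Event-driven: the sensor (or its local controller) pushes a change and
      # the expensive check runs only on those events.
      class EventMonitor:
          def __init__(self, check):
              self.check = check

          def on_change(self, sensor, value):   # called by the data source
              self.check(sensor, value)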

      • (Score: 1) by lizardloop on Thursday October 30 2014, @05:05PM

        by lizardloop (4716) on Thursday October 30 2014, @05:05PM (#111587) Journal

        Maybe it does, maybe it doesn't. After reading the article I'm still absolutely clueless as to how this works. Some examples would help.

      • (Score: 2) by edIII on Thursday October 30 2014, @07:48PM

        by edIII (791) on Thursday October 30 2014, @07:48PM (#111646)

        I thought that too, but it also seems to do a trick that I'm sure many developers already use with state changes and databases.

        > Our software stores only the changes of the system state at specific points in time. In order to be able to correctly evaluate the current situation in the network, our algorithm automatically identifies suitable measurement values from the past. It therefore pulls the correct measurement values from the archive to carry out a correct analysis of the current state – thereby essentially jumping back and forth in time. That translates into an enormous reduction in computing overhead and thus an increase in computing efficiency for the same standard of security and dependability.

        Thought this was a real breakthrough. It's just an archive!

        All they really did is increase the efficiency with which they pull reports. I ran into that problem many times in the past, where the amounts of data were so large and we didn't run a 2nd database copy just for reports (I didn't work at a Forbes 500 company where they could just deploy Oracle and be done with it). Too many execs on the system pulling reports over massive ranges would tank the system, it would run slow, and then they'd call us up to say we were idiots who couldn't build working systems for them. As if dealing with millions upon millions of records is an easy thing to do, and showing a martini-soaked exec after lunch just how much his report pulls from the system is fairly pointless :)

        Truthfully, they just had too much damn fun with it and were constantly running reports just to *display* them to another executive instead of saving and printing them. Such are the pains of being successful with a pretty and popular UI; this was right around when JavaScript was getting popular, I think, and we had a seriously talented web dev who made beautiful dynamic report pages.

        So we were motivated to create running counts. No report required intensive queries against the DB anymore; instead we just ran a simple select, shoved the result into a JSON object, and sent it to the report engine. We even went further and started caching the report data client side.
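        For anyone who hasn't done it, the running-counts trick looks roughly like this hypothetical sketch (the table and column names are made up): aggregates get updated at write time, so the report query never scans the raw records.

        import json
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE events (region TEXT, amount REAL)")
        db.execute("CREATE TABLE report_totals (region TEXT PRIMARY KEY, total REAL, n INTEGER)")

        def record_event(region, amount):
            # Write the raw record, then bump the small running-total row.
            db.execute("INSERT INTO events VALUES (?, ?)", (region, amount))
            db.execute("INSERT OR IGNORE INTO report_totals VALUES (?, 0, 0)", (region,))
            db.execute("UPDATE report_totals SET total = total + ?, n = n + 1 WHERE region = ?",
                       (amount, region))

        def report(region):
            # The "report" is now a trivial single-row select, shipped as JSON.
            total, n = db.execute("SELECT total, n FROM report_totals WHERE region = ?",
                                  (region,)).fetchone()
            return json.dumps({"region": region, "total": total, "count": n})

        record_event("north", 12.5)
        record_event("north", 7.5)
        print(report("north"))   # {"region": "north", "total": 20.0, "count": 2}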

        As a problem, we solved it within a few days as a routine issue that just required some elbow grease to refactor some things. Honestly, the people above me asked why I hadn't designed it that way in the first place, and kind of made me feel lazy and stupid, as if it were a rookie mistake.

        All this seems to do is select the stale records from the database, update just those records (limited polling), and then analyze the state from your DB archive instead of keeping everything near perfectly real time.
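        In other words, something like this (again just an illustrative guess at the mechanics, not their actual implementation):

        import sqlite3, time

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE changes (sensor TEXT, ts REAL, value REAL)")

        def stale_sensors(max_age):
            # Sensors whose newest archived value is older than max_age seconds:
            # these are the only ones worth polling again.
            cutoff = time.time() - max_age
            return [row[0] for row in db.execute(
                "SELECT sensor FROM changes GROUP BY sensor HAVING MAX(ts) < ?", (cutoff,))]

        def current_state():
            # Analysis runs against the archive: latest value per sensor,
            # no live measurement required.
            return dict(db.execute(
                """SELECT c.sensor, c.value FROM changes c
                   JOIN (SELECT sensor, MAX(ts) AS ts FROM changes GROUP BY sensor) m
                     ON c.sensor = m.sensor AND c.ts = m.ts"""))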

        So the real question is, just how long is the delay? In my particular use case, 24 hours was not unacceptable, and state changes usually took a few days to happen. With power grids serving commercial, government, and residential customers I'm betting 24 hours will not work. Probably something like 15 minutes.

        From what I can derive from TFA, their premise is that actual real-time values are not required to adequately assess the system state. Serious resource savings right there, I guess. Maybe what they are really saying is: generate 10% of the logs you do now, get a 90% or greater reduction in the network traffic associated with log transfers, and accept that managers will be okay with an entirely different level of confidence. If that's not true, and you need 10-second resolution, I fail to see how their tech reduces the effort, as state data must still be sent to a collector (syslog).

        TFA does not state any delay, and what I quoted was the extent of the information on their "breakthrough". Other than the interesting, and entirely unnecessary, bit about time travel, I'm sincerely at a loss about how this is a breakthrough at all. It's just some DB and platform optimizations that are routine when designing truly scalable systems.

        We dealt with hundreds of millions of state changes across a few million polling points at any one time. This was with an underfunded IT department, and we delivered reporting on it that was practically instantaneous and *accurate*. I have a really hard time believing that the resources to poll hundreds of millions, or billions, of points are economically unfeasible in the first place. Reducing some of the overhead is always nice, but at that scale it's just an obvious thing to do.

        Why this is a breakthrough and newsworthy is beyond me. They've hyped up some little problem they knew about, and the rather obvious solution they found for it, and are trying to push it as a product. That sounds rather harsh, but all we have access to is an abstract that explains the problem really well, then poorly explains the solution. What I quoted was the only part of it that really explained the solution; the rest was a description of the problem and marketing.

        So apparently I was coding "time travel" software 10 years ago at a medium-sized business by using scalable coding practices. Adding that to my resume...

  • (Score: 0) by Anonymous Coward on Thursday October 30 2014, @05:10PM

    by Anonymous Coward on Thursday October 30 2014, @05:10PM (#111589)
    Do what our brains do - keep trying to predict the future and throw an alert if there's a deviation.
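    Something like this minimal predict-and-alert sketch, maybe (illustrative only; the tolerance and the linear extrapolation are arbitrary choices, not anything from TFA):

    class PredictiveMonitor:
        def __init__(self, tolerance):
            self.tolerance = tolerance
            self.prev = None   # second-to-last reading
            self.last = None   # last reading

        def predict(self):
            # Naive linear extrapolation from the last two readings.
            if self.prev is None:
                return self.last
            return self.last + (self.last - self.prev)

        def observe(self, value):
            # Alert only when reality deviates too far from the prediction.
            alert = None
            if self.last is not None:
                expected = self.predict()
                if abs(value - expected) > self.tolerance:
                    alert = f"deviation: expected ~{expected:.2f}, got {value:.2f}"
            self.prev, self.last = self.last, value
            return alert

    m = PredictiveMonitor(tolerance=1.0)
    for v in (50.0, 50.1, 50.2, 47.0):
        print(m.observe(v))   # None, None, None, then a deviation alert
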
    • (Score: 0) by Anonymous Coward on Thursday October 30 2014, @05:28PM

      by Anonymous Coward on Thursday October 30 2014, @05:28PM (#111597)

      But please don't implement the denial mode many human brains display when the facts don't fit their preferred model. I don't want grid control software telling the operators "everything is OK, there's no problem" while the grid is already moving towards total disaster, ignoring or explaining away all the alarming measurement data that doesn't fit the model.

  • (Score: 2) by frojack on Thursday October 30 2014, @07:58PM

    by frojack (1554) on Thursday October 30 2014, @07:58PM (#111649) Journal

    "Our software stores only the changes of the system state at specific points in time. In order to be able to correctly evaluate the current situation in the network, our algorithm automatically identifies suitable measurement values from the past.

    That sounds to me like they take some baseline measurements (of whatever) and from then on they just collect deltas.
    That saves sending complete state information. I'm not sure it cuts down on processing all that much, because in order to find the state you can't just read the last transmission: you have to read the last delta, look up the prior state, apply the delta, and only then do you have the current state. But it might cut down on data transmission.

    Admittedly, if you have a new data delta that looks like past data deltas, you might be able to predict future states simply by applying past deltas that occurred the last time you got a delta like the current one. That implies a history search. Even more processing.

    And the saving in processing would seem to come from programming the embedded processors in reporting equipment to STFU until/unless something changes (don't send the temperature reading every 5 minutes, just when it changes, etc.).
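    If that's what it is, the collector side is trivial; here's a rough sketch of that reading (mine, not TFA's spec):

    def report_if_changed(last_sent, name, value, send):
        # Device side: stay quiet unless the reading actually changed.
        if last_sent.get(name) != value:
            send({"delta": {name: value}})
            last_sent[name] = value

    def apply_deltas(baseline, deltas):
        # Collector side: current state = baseline with every delta applied in order.
        state = dict(baseline)
        for d in deltas:
            state.update(d["delta"])
        return state

    state = apply_deltas({"temp_c": 21.0, "breaker": "closed"},
                         [{"delta": {"temp_c": 22.5}}, {"delta": {"breaker": "open"}}])
    print(state)   # {'temp_c': 22.5, 'breaker': 'open'}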

    That's the best I can make out of the gibberish in TFA.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 1) by EETech1 on Friday October 31 2014, @04:08AM

      by EETech1 (957) on Friday October 31 2014, @04:08AM (#111785)

      We did something very similar for diagnostics on fuel-injected engines. We would model the response of a sensor, for example the oxygen sensor. It was easy to determine how fast they responded, so we stored that and continuously monitored them for abrupt changes in their response. By keeping track of the change in fuel flow and how it affected the voltage coming from the oxygen sensors, we could very easily see when a sensor was taking more or less fuel to respond, and very accurately diagnose failures and aging.
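      Roughly along these lines (a loose illustrative sketch of that diagnostic idea; the numbers and thresholds are invented):

      def response_delay(samples, step_time, threshold):
          # samples: list of (time, o2_voltage) after a fuel-flow step at step_time.
          # Return seconds until the voltage first crosses the threshold, or None.
          for t, v in samples:
              if t >= step_time and v >= threshold:
                  return t - step_time
          return None

      def diagnose(measured_delay, modelled_delay, slack=0.05):
          if measured_delay is None:
              return "failed: no response"
          if measured_delay > modelled_delay + slack:
              return "aging: responding slower than the model predicts"
          return "ok"

      samples = [(0.00, 0.12), (0.10, 0.15), (0.25, 0.48), (0.40, 0.71)]
      print(diagnose(response_delay(samples, step_time=0.0, threshold=0.45),
                     modelled_delay=0.15))   # aging: responding slower than the model predicts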