Saving Lots of Computing Capacity With a New Algorithm
posted by LaminatorX on Thursday October 30 2014, @03:16PM
from the n(log(n)) dept.

The control of modern infrastructure such as intelligent power grids needs lots of computing capacity. Scientists of the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg have developed an algorithm that might revolutionise these processes. With their new software, the SnT researchers are able to forgo considerable amounts of computing capacity, enabling what they call micro mining. Their achievements, which the team headed by Prof. Yves Le Traon presented at the International Conference on Software Engineering and Knowledge Engineering, earned the scientists a Best Paper Award at the event.

Modern infrastructure – from the telephone network and alarm systems to power supply systems – is controlled by computer programmes. This intelligent software continuously monitors the state of the equipment, adjusts system parameters if they deviate, or generates error messages. To monitor the equipment, the software compares its current state with its past state by continuously measuring the status quo, accumulating this data, and analysing it. That uses a considerable portion of available computing capacity. Thanks to their new algorithm, the SnT researchers' software no longer has to continuously analyse the state of the system to be monitored the way established techniques do. In carrying out the analysis of the system, it instead seamlessly moves between state values that were measured at different points in time.

http://phys.org/news/2014-10-lots-capacity-algorithm.html

[Source]: http://wwwen.uni.lu/university/news/latest_news/saving_lots_of_computing_capacity_with_a_new_algorithm
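As a rough illustration of the approach described above (a minimal sketch only, not the SnT team's actual algorithm; the measurement-point names and values are made up), a monitor can store nothing but timestamped state changes and then reconstruct the system state as of any point in time from that archive, instead of re-measuring and re-analysing everything on every cycle:

```python
import bisect
from collections import defaultdict

class ChangeArchive:
    """Stores only timestamped value changes per measurement point and
    reconstructs the system state as of an arbitrary point in time."""

    def __init__(self):
        # point id -> sorted list of (timestamp, value) change records
        self._changes = defaultdict(list)

    def record(self, point, timestamp, value):
        """Append a record only if the value actually changed."""
        history = self._changes[point]
        if not history or history[-1][1] != value:
            history.append((timestamp, value))

    def value_at(self, point, timestamp):
        """Return the most recent recorded value at or before `timestamp`."""
        history = self._changes[point]
        idx = bisect.bisect_right(history, (timestamp, float("inf"))) - 1
        return history[idx][1] if idx >= 0 else None

    def state_at(self, timestamp):
        """Reconstruct the full system state as of `timestamp` by jumping
        back to the right past measurement for every point."""
        return {p: self.value_at(p, timestamp) for p in self._changes}


# Only two records end up stored for this point, yet any intermediate
# moment can still be evaluated.
archive = ChangeArchive()
archive.record("line_voltage_12", timestamp=0, value=229.9)
archive.record("line_voltage_12", timestamp=60, value=229.9)   # unchanged: not stored
archive.record("line_voltage_12", timestamp=120, value=231.4)
print(archive.state_at(90))   # {'line_voltage_12': 229.9}
```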

 
  • (Score: 2) by Nerdfest (80) on Thursday October 30 2014, @03:49PM (#111558)

    No time to read TFA at the moment, but it sounds to me like the equivalent of switching from polling to event-driven. Basically it offloads the computation and comparison of state to the places that can do it more efficiently, or perhaps in more concise pieces.
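    To make the contrast concrete, here is a toy sketch (nothing from TFA; read() and analyse() are hypothetical callbacks): polling re-reads and re-analyses every point on every tick, while the event-driven version only does work when a value actually changes.

```python
import time

# Polling: the monitor burns cycles re-reading and re-comparing every
# measurement point on every tick, whether anything changed or not.
def poll_loop(sensors, analyse, interval=1.0):
    while True:
        state = {name: read() for name, read in sensors.items()}
        analyse(state)                  # full analysis on every tick
        time.sleep(interval)

# Event-driven: measurement points push changes, and analysis runs only
# on the (usually much smaller) set of values that actually moved.
def on_change(name, value, state, analyse):
    if state.get(name) != value:
        state[name] = value
        analyse(state)                  # analysis only on real changes
```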

  • (Score: 1) by lizardloop (4716) on Thursday October 30 2014, @05:05PM (#111587) Journal

    Maybe it does, maybe it doesn't. After reading the article I'm still absolutely clueless as to how this works. Some examples would help.

  • (Score: 2) by edIII (791) on Thursday October 30 2014, @07:48PM (#111646)

    I thought that too, but it also seems to rely on a trick that I'm sure many developers already use for state changes and databases. From TFA:

    Our software stores only the changes of the system state at specific points in time. In order to be able to correctly evaluate the current situation in the network, our algorithm automatically identifies suitable measurement values from the past. It therefore pulls the correct measurement values from the archive to carry out a correct analysis of the current state – thereby essentially jumping back and forth in time. That translates into an enormous reduction in computing overhead and thus an increase in computing efficiency for the same standard of security and dependability.

    Thought this was a real breakthrough. It's just an archive!

    All they really did is increase the efficiency with which they pull reports. I ran into that problem many times in the past, where the amounts of data were so large that we couldn't run a 2nd database copy just for reports (I didn't work at a Forbes 500 where they could just deploy Oracle and be done with it). Too many execs on the system pulling reports over massive ranges would tank it; it ran slow, and then they called us up to say we were idiots who couldn't build working systems for them. As if dealing with millions upon millions of records is an easy thing to do, and showing a martini-soaked exec after lunch just how much his report pulls from the system is fairly pointless :) Truthfully, they just had too much damn fun with it and were running reports constantly just to *display* them to another executive instead of saving and printing them. Pains of being successful with a pretty and popular UI; this was right around when JavaScript was getting popular, I think, and we had a seriously talented web dev who made beautiful dynamic report pages.

    So we were motivated to create running counts. No report required the resources for intensive queries against the DB anymore; each one just ran a simple select statement, shoved the results into a JSON object, and sent it to the report engine. We even went further and started caching the report data client side.
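    Roughly the shape of it, sketched from memory rather than our actual code (table and column names are made up, and sqlite3 stands in for the real database): keep a tiny summary table updated on every write, so reports never scan the big table at all.

```python
import json
import sqlite3

# Hypothetical schema: a tiny summary table is kept up to date on every
# write, so reports never scan the big events table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events         (region TEXT, amount REAL);
    CREATE TABLE running_counts (region TEXT PRIMARY KEY,
                                 n      INTEGER NOT NULL,
                                 total  REAL NOT NULL);
""")

def record_event(region, amount):
    conn.execute("INSERT INTO events VALUES (?, ?)", (region, amount))
    updated = conn.execute(
        "UPDATE running_counts SET n = n + 1, total = total + ? WHERE region = ?",
        (amount, region))
    if updated.rowcount == 0:   # first event for this region
        conn.execute("INSERT INTO running_counts VALUES (?, 1, ?)", (region, amount))

def report():
    # The "report" is a trivial select over the summary, shipped as JSON.
    rows = conn.execute("SELECT region, n, total FROM running_counts").fetchall()
    return json.dumps([{"region": r, "count": n, "total": t} for r, n, t in rows])

record_event("north", 12.5)
record_event("north", 3.0)
print(report())   # [{"region": "north", "count": 2, "total": 15.5}]
```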

    As a problem, we solved it within a few days as a routine issue that just required some elbow grease to refactor a few things. Honestly, the people in charge of me asked why I didn't design it that way in the first place, and kind of made me feel lazy and stupid, as if it were a rookie mistake.

    All this seems to do is select against the database for stale records, update just the stale records (limited polling), and then analyze the state from your DB archive instead of everything being near perfectly real time.
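    In toy form, that looks something like this (the archive here is just a dict of point -> (last_measured, value), poll() stands in for whatever talks to the field devices, and the 15-minute window is only a guess):

```python
import time

STALE_AFTER = 15 * 60   # freshness window in seconds; 15 minutes is a guess

def refresh_stale(archive, poll, now=None):
    """Limited polling: re-measure only the points whose archived value is
    older than the freshness window, then analyse from the archive."""
    now = time.time() if now is None else now
    stale = [point for point, (measured_at, _) in archive.items()
             if now - measured_at > STALE_AFTER]
    for point in stale:                         # poll only what is out of date
        archive[point] = (now, poll(point))
    # Everything else is read from the archive instead of being re-measured.
    return {point: value for point, (_, value) in archive.items()}


# Example: the stale point gets re-polled, the fresh one is served from the archive.
archive = {"feeder_7": (time.time() - 3600, 0.97),   # measured an hour ago -> stale
           "feeder_8": (time.time() - 60,   1.01)}   # fresh enough
print(refresh_stale(archive, poll=lambda point: 0.98))
# {'feeder_7': 0.98, 'feeder_8': 1.01}
```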

    So the real question is, just how long is the delay? In my particular use case, a 24-hour delay was acceptable, and state changes usually took a few days to happen. With power grids serving commercial, government, and residential customers, I'm betting 24 hours will not work. Probably something like 15 minutes.

    From what I can derive from TFA, their premise is that actual real-time values are not required to adequately assess the system state. Serious resource savings right there, I guess. Maybe what they are really saying is: generate 10% of the logs you do now, get a 90% or greater reduction in the network traffic associated with log transfers, and accept that managers will be okay with an entirely different level of confidence. If that's not true and you need 10-second resolution, I fail to see how their tech reduces the effort, since state data must still be sent to a collector (syslog).

    TFA does not state any delay, and what I quoted was the extent of the information on their "breakthrough". Other than the interesting, and entirely unnecessary bit about time travel, I'm sincerely at a loss about how this is a breakthrough at all. Just some DB and platform optimizations that are routine when designing truly scalable systems.

    We dealt with hundreds of millions of state changes across a few million polling points at any one time. This was with an underfunded IT department, and we delivered reporting on it that was practically instantaneous and *accurate*. I have a really hard time believing that the resources to poll hundreds of millions, or billions, of points are economically unfeasible in the first place. Reducing some of the overhead is always nice, but at that scale it's just an obvious thing to do.

    Why this is a breakthrough and newsworthy is beyond me. They've hyped up some little problem they knew about, and the rather obvious solution they found for it, and are trying to push it as a product. That sounds rather harsh, but all we have access to is an abstract that explains the problem really well and then poorly explains the solution. What I quoted was the only real part of it that explained the solution; the rest was description of the problem and marketing.

    So apparently I was coding "time travel" software 10 years ago at a medium-sized business, just by using scalable coding practices. Adding that to my resume...