
posted by martyb on Saturday September 13 2014, @05:49PM   Printer-friendly
from the using-only-what-you-need? dept.

Cloud computing involves displacing data storage and processing from the user's computer onto remote servers. It can provide users with more storage space and computing power, which they can then access from anywhere in the world, rather than being tied to a single desktop or other computer with its finite resources. However, some observers have raised concerns about the increased energy demands of keeping distributed servers up and running continuously, whereas an individual user's laptop might simply be shut down when it is not in use, or powered down when its utilization falls below some threshold.

Now, writing in the International Journal of Information Technology, Communications and Convergence, researchers at the University of Oran in Algeria have investigated how cloud computing systems might be optimized for energy use and to reduce their carbon footprint. Jouhra Dad and Ghalem Belalem in the Department of Computer Science at Oran explain how they have developed an algorithm to control the virtual machines running on computers in a cloud environment so that the energy use of the central processing units (CPUs) and memory (RAM, as opposed to hard disk storage) can be reduced as far as possible without affecting overall performance.

Unfortunately, there is little detailed information on the algorithm itself in the article.
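
A common approach in this area is threshold-based virtual machine consolidation: live-migrate VMs away from under-utilized hosts so those hosts can be powered down, and away from over-utilized hosts so performance does not suffer. The following minimal sketch illustrates the general idea only; the Host/VM structures, thresholds, and migration policy are assumptions made for the example and are not the authors' algorithm.

    # Illustrative, threshold-based, energy-aware VM consolidation pass (not from the paper).
    from dataclasses import dataclass, field

    LOWER, UPPER = 0.20, 0.80      # assumed CPU-utilization thresholds

    @dataclass
    class VM:
        name: str
        cpu: float                 # CPU demand, in host-capacity units

    @dataclass
    class Host:
        name: str
        capacity: float
        vms: list = field(default_factory=list)
        powered: bool = True

        def util(self):
            return sum(vm.cpu for vm in self.vms) / self.capacity

    def pick_target(hosts, vm, exclude):
        """Least-loaded powered-on host that can take vm without breaching UPPER."""
        fits = [h for h in hosts if h is not exclude and h.powered
                and (sum(v.cpu for v in h.vms) + vm.cpu) / h.capacity <= UPPER]
        return min(fits, key=Host.util, default=None)

    def consolidate(hosts):
        for host in (h for h in hosts if h.powered):
            if host.util() > UPPER:
                # Overloaded: move the smallest VMs away until back under the threshold.
                for vm in sorted(host.vms, key=lambda v: v.cpu):
                    target = pick_target(hosts, vm, exclude=host)
                    if target is None:
                        break
                    host.vms.remove(vm); target.vms.append(vm)
                    if host.util() <= UPPER:
                        break
            elif host.vms and host.util() < LOWER:
                # Under-loaded: try to empty the host so it can be switched off.
                for vm in list(host.vms):
                    target = pick_target(hosts, vm, exclude=host)
                    if target is None:
                        break
                    host.vms.remove(vm); target.vms.append(vm)
                if not host.vms:
                    host.powered = False   # idle host powered down to save energy

A real controller would track RAM as well as CPU, and would weigh the energy cost of the migrations themselves against the savings from powering hosts down, which is presumably where the paper's contribution lies.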

I suspect some Soylents have home servers which they access from within their home as well as remotely. What, if anything, do you do to reduce your energy costs?

 
  • (Score: 2) by Adamsjas on Saturday September 13 2014, @07:52PM

    by Adamsjas (4507) on Saturday September 13 2014, @07:52PM (#92798)

    I suspect there is nothing to be gained by avoiding disk IO, unless you manage to avoid ALL disk IO.

    Starting and stopping a disk is more energy intensive than letting it spin for 5 hours. Disk motors pull almost no power.
    The cost of running a drive is in the head movement and electronics. Spin up is the worst power consumer.

    Xbitlabs did a comparison a few years ago (2008) and showed that typical laptop drives draw around 0.15 amps (zero point one five) at rest, and 0.45 amps during read/write operations. In 2012 the power utilization had gone up by quite a bit, according to Tom's Hardware.

    Caching saves no power. (It's done for speed, not power reduction.) Sooner or later you are going to have to hit that disk drive.

    Machines with only SSDs will save power, but not enough to offset their cost. Not yet anyway. And not if they have to be backed up with spinning rust anyway. Very small storage devices (tablets) can get away with no disk just by storing almost nothing.

    Want to save power? Fan-less passive cooling and no monitor attached. Anything else is just tinkering at the margins.

  • (Score: 2) by Foobar Bazbot on Saturday September 13 2014, @09:39PM

    by Foobar Bazbot (37) on Saturday September 13 2014, @09:39PM (#92817) Journal

    (I'm mostly playing Devil's advocate here, FWIW. TL;DR: I think spin-up/down can be made better, but it's not worthwhile unless you're producing at least thousands of systems.)

    As for disk IO, I think part of the reality you see is due to current design choices that could just as well be different, but aren't for historical reasons. The big one to me is the all-or-nothing spin-up.

    In an ideal world, if you're just doing a periodic write-out of RAM/flash-backed data, or a predictive read-ahead, the drive could spin up quite slowly, using barely more current than running steady-state; you could do your business, then shut it back down. If you need to read data, of course, you want it to spin up quickly, because you're waiting on the results. AFAIK, no current drives allow the host to control this -- the drive either stays spun up, or it spins up at a given rate (usually pretty fast, because users don't like waiting) every time.

    An example of where this could make a huge (well, at least significant) difference: a home media server with tons of video (ripped BDs or whatever) on it. If you cache the entire filesystem's metadata in RAM, and certain sections of each video file (the first minute's worth of a/v stream data, plus the seek table at the beginning or end of the file, depending on the container used) on an SSD, you could start playing from that while winding up the disk at a just-in-time rate, so the drive is ready just as you need to read more. Then you read up to n GB ahead to fill a buffer on the SSD (in practice, you'd usually get the whole file in one go), do an opportunistic flush (see below), and spin the disk down. If the user seeks outside the buffer (most likely during the first minute, but also with a file too big to cache at once), switch over to a normal, full-power spin-up to respond as quickly as possible; you wouldn't need the full power draw of a panic-start most of the time.

    Likewise, on receiving new files over the network, you can cache them on the SSD, then flush them to disk next time you have it spun up. Or, if the SSD starts to get full before you've written them out, you can spin the disk up -- gently -- and start flushing away.

    Now there are several obstacles -- one is that current drive firmwares don't (AFAIK) have any interface to control spin-up rate; another is that it requires close integration of the media server and the playback front-end (or a small readahead buffer in the front-end and a pile of heuristics on the back-end) to make sure you have the right stuff cached in time. And then there's the need for more heuristics to determine which SSD caches should be kept around (e.g. if you're watching a TV series, it should try to cache the next episode(s); it should keep recently watched stuff for a little while, in case you want to rewatch with a friend; it shouldn't waste cache on something you accidentally clicked on and backed out of after 5 seconds; it should keep recently uploaded stuff; etc.). It's really a whole lot of effort touching a lot of pieces to save a little on disk power, but it is theoretically possible to save significant power vs. keeping the disks always spun up, and I think the effort (including HDDs with custom firmware) is within reach for an Apple/MS/Sony/Nintendo product, if they decided to make energy efficiency a marketing point.

    On the other hand, a system that statistically models your typical use hours (weekends and evenings, one expects), spins up the disk (even at a fast rate, since you only do it once a day) a half-hour before it expects it to be needed, and doesn't spin it down until it's at the end of a typical usage pattern and no longer being accessed, might get a large fraction of the power savings while also rarely making the user wait for a spin-up, and not even need an SSD to accomplish it...
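
    As a very rough sketch of that last, simpler policy (the usage model, thresholds, and spin_up()/spin_down() hooks below are all assumptions for illustration; as noted above, real drives let the host request a spin-down but not control the spin-up rate):

        # Illustrative scheduler: learn the usual access hours, spin up ahead of them,
        # spin down after an idle period outside them. Names and thresholds are assumed.
        import time
        from collections import defaultdict

        SPIN_UP_LEAD = 30 * 60      # spin up half an hour before an expected usage window
        IDLE_TIMEOUT = 20 * 60      # spin down after 20 minutes with no access
        BUSY_COUNT   = 3            # accesses seen in an hour-of-week before we call it "usual"

        class SpinScheduler:
            def __init__(self):
                self.history = defaultdict(int)   # hour-of-week -> historical access count
                self.last_access = 0.0
                self.spinning = False

            def record_access(self, now=None):
                now = now if now is not None else time.time()
                self.last_access = now
                self.history[self._hour_of_week(now)] += 1
                if not self.spinning:
                    self.spin_up()                # unpredicted access: on-demand spin-up

            def tick(self, now=None):
                """Call periodically (e.g. once a minute) to apply the policy."""
                now = now if now is not None else time.time()
                busy_soon = self.history[self._hour_of_week(now + SPIN_UP_LEAD)] >= BUSY_COUNT
                if busy_soon and not self.spinning:
                    self.spin_up()                # pre-emptive spin-up ahead of the usual window
                elif self.spinning and not busy_soon and now - self.last_access > IDLE_TIMEOUT:
                    self.spin_down()              # idle and outside the usual window

            @staticmethod
            def _hour_of_week(ts):
                t = time.localtime(ts)
                return t.tm_wday * 24 + t.tm_hour

            def spin_up(self):
                self.spinning = True              # placeholder for the actual drive command

            def spin_down(self):
                self.spinning = False             # e.g. an ATA standby request

    The piece that is still missing, as noted above, is any host-visible way to choose how fast to spin up; a scheduler like this can only choose when.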

    • (Score: 2) by frojack on Sunday September 14 2014, @01:03AM

      by frojack (1554) on Sunday September 14 2014, @01:03AM (#92862) Journal

      An interesting concept: cache till you can't, then spin up slowly.

      You would still have inertia as an enemy, so fight inertia not only with fewer encounters, but with less aggressive ones.

      We spin our drives fast because we fly our heads on a cushion of air (or some other gas).

      But if we found some other way to control the head-to-disk distance, we could spin them slower and write to disk at any speed, from a crawl (just enough to write logs) to massive bursts at full speed, buffering according to the then-current speed.

      --
      No, you are mistaken. I've always had this sig.
    • (Score: 2) by maxwell demon on Sunday September 14 2014, @11:08AM

      by maxwell demon (1608) on Sunday September 14 2014, @11:08AM (#92972) Journal

      You wouldn't save energy by slowly spinning up. The energy of spinning up is the energy stored in the rotation, and that depends only on the final rotation speed. Slowly spinning up would just spread the drawing of that energy over a longer time.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by Foobar Bazbot on Sunday September 14 2014, @07:17PM

        by Foobar Bazbot (37) on Sunday September 14 2014, @07:17PM (#93124) Journal

        The work output is the same, yes, but that's not the same as the energy input. Since torque is proportional to current, the instantaneous I²R losses scale as the square of the torque. Of course, the time it's running scales inversely with torque, so the total energy wasted only scales linearly with torque. Still, if you run at 1/4 the torque for 4x as long, cutting the total energy lost to I²R to 1/4 is pretty good.

        Granted, I have no data (other than GGP Adamsjas's [soylentnews.org] presumably hyperbolic assertion that spin-up energy is equal to 5 hours of spun-up idling) to suggest that I²R losses during spin-up are very significant, but motors optimized for long periods of constant-speed, low-power operation (as one assumes HDD motors are) will be quite inefficient when delivering bursts of roughly 7x as much power. (The 7x comes from comparing hankwang's [soylentnews.org] figure of 5 W during a 4 s spin-up vs. Adamsjas's idle figure of 0.15 A, about 0.75 W at 5 V, which may not represent similar drives.)
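
        To put rough numbers on that scaling (purely illustrative; the 5 W / 2 W split below is an assumption, not a measurement): suppose a full-power spin-up draws 5 W for 4 s and 2 W of that is resistive loss, i.e. 8 J wasted. At a quarter of the torque the loss rate drops to 2 W x (1/4)^2 = 0.125 W, while the spin-up takes 4x as long (16 s), so the waste is 0.125 W x 16 s = 2 J, a quarter of the original, exactly the linear-in-torque scaling described above.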

  • (Score: 2) by hankwang on Saturday September 13 2014, @10:24PM

    by hankwang (100) on Saturday September 13 2014, @10:24PM (#92825) Homepage

    "Starting and stopping a disk is more energy intensive than letting it spin for 5 hours. Disk motors pull almost no power."

    Do you have a source for that? A typical 2.5 inch disk will draw 5 W (1 A at 5 V) for 4 seconds to spin up. I highly doubt that air drag on the platters takes just 1 mW of power.
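
    As a back-of-the-envelope check using the figures quoted in this thread (assuming a 5 V supply and ignoring head and electronics power): one spin-up costs roughly 5 W x 4 s = 20 J. Idle spinning at 0.15 A x 5 V = 0.75 W burns through 20 J in about 27 seconds, not 5 hours. For the 5-hour claim to hold, idle draw would have to be on the order of 20 J / 18,000 s, i.e. about 1 mW, which is where the figure above comes from.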

    • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @12:35AM

      by Anonymous Coward on Sunday September 14 2014, @12:35AM (#92848)

      .. if only we could couple it to some sort of flywheel to store the energy.

    • (Score: 2) by frojack on Sunday September 14 2014, @03:29AM

      by frojack (1554) on Sunday September 14 2014, @03:29AM (#92901) Journal

      Math fail.
      You can't compare one spin-up with an hour of idle time.

      You have to compare dozens of spin-ups, maybe even hundreds, that you would have to do over the hours that the drive would otherwise be pulling 0.15 amps (about 0.75 W at 5 V) while spinning idle.

      And spin-up takes longer than 4 seconds, closer to 10, plus you have head unload and retract time, and head re-fly and track calibration time EACH time you power down. More than one drive? You might have to stagger the spin-ups.

      --
      No, you are mistaken. I've always had this sig.