posted by martyb on Saturday September 13 2014, @05:49PM   Printer-friendly
from the using-only-what-you-need? dept.

Cloud computing involves displacing data storage and processing from the user's computer onto remote servers. It can provide users with more storage space and computing power, which they can then access from anywhere in the world rather than having to connect to a single desktop or other computer with its finite resources. However, some observers have raised concerns about the increased energy demands of sustaining distributed servers and keeping them up and running continuously, whereas an individual user's laptop might be shut down when it is not in use, or a server might be powered down when its resource utilization falls below a lower threshold, for instance.

Now, writing in the International Journal of Information Technology, Communications and Convergence, researchers at the University of Oran in Algeria have investigated how cloud computing systems might be optimized for energy use and a reduced carbon footprint. Jouhra Dad and Ghalem Belalem in the Department of Computer Science at Oran explain how they have developed an algorithm to control the virtual machines running on computers in a cloud environment so that energy use of the core central processing units (CPUs) and memory capacity (RAM, as opposed to hard disk storage space) can be reduced as far as possible without affecting overall performance.

Unfortunately, there is little detailed information on the algorithm itself in the article.
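Since the paper gives so little detail, here is a minimal sketch of the kind of threshold-based VM consolidation heuristic commonly used in energy-aware cloud schedulers. It is purely illustrative and is not the authors' algorithm; the thresholds, the Host/VM classes, and the migration planning below are all assumptions for the example.

```python
# Illustrative threshold-based VM consolidation (NOT the algorithm from the paper).
# Hosts below LOW_UTIL are drained so they can be powered off; hosts above
# HIGH_UTIL shed their smallest VMs. All names and thresholds are assumptions.
from dataclasses import dataclass, field

LOW_UTIL, HIGH_UTIL = 0.2, 0.8  # fractions of CPU capacity

@dataclass
class VM:
    name: str
    cpu: float   # CPU demand, in cores
    ram: float   # RAM demand, in GB

@dataclass
class Host:
    name: str
    cpu_cap: float
    ram_cap: float
    vms: list = field(default_factory=list)

    def cpu_used(self):
        return sum(vm.cpu for vm in self.vms)

    def utilization(self):
        return self.cpu_used() / self.cpu_cap

    def fits(self, vm):
        return (self.cpu_used() + vm.cpu <= self.cpu_cap and
                sum(v.ram for v in self.vms) + vm.ram <= self.ram_cap)

def best_target(hosts, src, vm):
    """Most-loaded host that still fits the VM without itself becoming overloaded."""
    ok = [h for h in hosts if h is not src and h.fits(vm)
          and (h.cpu_used() + vm.cpu) / h.cpu_cap <= HIGH_UTIL]
    return max(ok, key=lambda h: h.utilization(), default=None)

def consolidate(hosts):
    """One planning pass; returns (migrations, hosts that can be powered off)."""
    migrations, power_off = [], []
    for src in hosts:
        util = src.utilization()
        if util > HIGH_UTIL:
            # Shed the smallest VMs until the host drops back under the threshold.
            candidates = sorted(src.vms, key=lambda v: v.cpu)
            done = lambda h: h.utilization() <= HIGH_UTIL
        elif util < LOW_UTIL:
            # Try to drain the host completely.
            candidates = list(src.vms)
            done = lambda h: False
        else:
            continue
        for vm in candidates:
            if done(src):
                break
            dst = best_target(hosts, src, vm)
            if dst is not None:
                src.vms.remove(vm)
                dst.vms.append(vm)
                migrations.append((vm.name, src.name, dst.name))
        if not src.vms:
            power_off.append(src.name)
    return migrations, power_off
```

Real schedulers add live-migration cost, SLA checks, and hysteresis on the thresholds, but the basic shape -- pack VMs onto fewer hosts and switch the idle ones off -- is the same.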

I suspect some Soylents have home servers which they access from within their home as well as remotely. What, if anything, do you do to reduce your energy costs?

  • (Score: 1) by E_NOENT on Saturday September 13 2014, @06:03PM

    by E_NOENT (630) on Saturday September 13 2014, @06:03PM (#92777) Journal

    Hosting personal websites out of your home is a great option. It's secure, cheap, and fun!

    Here's what I use:

    http://beagleboard.org/black [beagleboard.org]

    --
    I'm not in the business... I *am* the business.
    • (Score: 2) by opinionated_science on Saturday September 13 2014, @06:07PM

      by opinionated_science (4031) on Saturday September 13 2014, @06:07PM (#92778)

hmm interesting. I personally don't care how much energy they use, so long as it doesn't affect performance...!

      • (Score: 2, Interesting) by Ethanol-fueled on Saturday September 13 2014, @11:00PM

        by Ethanol-fueled (2792) on Saturday September 13 2014, @11:00PM (#92830) Homepage

        This is another instance of me putting my dick into the discussion's mashed potatoes, but I agree with you in that I always thought it was silly to take into consideration the energy costs of home computing.

Of all the things that can make a real difference in savings, like eating cheaper, using less water, or even other power-related savings like not running the air conditioner all day, the consideration of power draw in a home-computing scenario is just goddamn silly -- even for a family setup with 2-3 PCs in the house.

        There are scenarios when worrying about the power-draw of computing makes sense, like when you're using a mobile device on battery power or actually running a fuck-huge Beowulf cluster in your basement -- but if you're running a Beowulf cluster in an American home you can expect the DEA to kick down your door and shoot your kids and pets to death because you might have been using all that power to grow marijuana. Yes, in America, the power company alerts the authorities when you're drawing a suspicious amount of juice from the grid.

        • (Score: 2) by Phoenix666 on Sunday September 14 2014, @12:45AM

          by Phoenix666 (552) on Sunday September 14 2014, @12:45AM (#92851) Journal

You must not rely on ConEd for electricity, or live in Hawaii. ConEd charges us $0.35/kWh in NYC (Hawaii is more on the order of $0.45/kWh). My old server (Dell) cost me $30/mo. to run. My new server (Lotus) has a run-hot CPU and costs me maybe $4/mo. Scale that up for your own personal computing needs, and it makes a difference. Efficiency is good.
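For anyone wanting to sanity-check their own setup, the watts-to-dollars arithmetic is quick; the wattages below are illustrative assumptions, picked only because they land near the figures quoted above:

```python
# Monthly electricity cost of an always-on box: watts -> kWh -> dollars.
# The wattages are illustrative assumptions, not measurements of these servers.
RATE = 0.35               # $/kWh, the ConEd rate quoted above
HOURS_PER_MONTH = 24 * 30

def monthly_cost(watts, rate=RATE):
    kwh = watts * HOURS_PER_MONTH / 1000
    return kwh * rate

for label, watts in [("older tower-class server", 120), ("low-power box", 15)]:
    print(f"{label}: {watts} W -> ${monthly_cost(watts):.2f}/month")
# ~120 W comes to about $30/month and ~15 W to about $4/month at $0.35/kWh,
# in line with the figures in the comment above.
```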

          --
          Washington DC delenda est.
        • (Score: 2) by opinionated_science on Sunday September 14 2014, @12:45PM

          by opinionated_science (4031) on Sunday September 14 2014, @12:45PM (#92991)

oooh juicy! does this happen? Because I thought they used the IR cameras first... looks better on "COPS".

    • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @06:44PM

      by Anonymous Coward on Saturday September 13 2014, @06:44PM (#92783)

Skinny pipe, less availability, using your free time to maintain it is not 'fun'.

      • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @07:35PM

        by Anonymous Coward on Saturday September 13 2014, @07:35PM (#92790)

        Skinny pipe,

        Only if you're stingy...

        less availability,

        Only if you don't know how to set up things...

using your free time to maintain it is not 'fun'.

        Are you on a wrong site?

        • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @07:52PM

          by Anonymous Coward on Saturday September 13 2014, @07:52PM (#92797)

          Stinginess is not an outbound bottleneck.

          No, you still have less availability. You do not have staff doing things like switching over to redundant systems.

          Plenty of people here have busy professional lives.

          • (Score: 2) by frojack on Saturday September 13 2014, @08:41PM

            by frojack (1554) on Saturday September 13 2014, @08:41PM (#92803) Journal

            Did you miss the part where it said:
                 

            Hosting personal websites

            If you have staff to deal with your personal website, I can only assume I'm replying to your butler.

            --
            No, you are mistaken. I've always had this sig.
            • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @08:52PM

              by Anonymous Coward on Saturday September 13 2014, @08:52PM (#92805)
              Did you miss the part where it said 'from your home', meaning that you are the one responsible for its up-time?
              • (Score: 2) by frojack on Sunday September 14 2014, @02:40AM

                by frojack (1554) on Sunday September 14 2014, @02:40AM (#92887) Journal

                So what?

                Uptime on my main home server is approaching two years.

                --
                No, you are mistaken. I've always had this sig.
                • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @02:52AM

                  by Anonymous Coward on Sunday September 14 2014, @02:52AM (#92893)
                  Does that include 100% internet up-time? No router or modem restarts? No software updates or hardware upgrades? No fiddling with it because something needs tinkering? Assuming you aren't just pulling out a favorable metric, that is impressive. I've only gotten months of up-time out of my server due to power failures, and I have to spend more free-time than I'd like to keep rotating drives off-site. It adds up to a lot that most do not account for when having this discussion.
                  • (Score: 2) by frojack on Sunday September 14 2014, @08:04AM

                    by frojack (1554) on Sunday September 14 2014, @08:04AM (#92950) Journal

                    Who gives a flying fuck about 5 9s?
                    It's a personal website FFS?
                    My god you are dense.

                    --
                    No, you are mistaken. I've always had this sig.
                    • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @09:24AM

                      by Anonymous Coward on Sunday September 14 2014, @09:24AM (#92962)
                      "My uptime is the winning argument in this debate... oh, wait, no it's not all the sudden!" Smooth.
                • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @03:04AM

                  by Anonymous Coward on Sunday September 14 2014, @03:04AM (#92896)
                  Same AC here. Just after posting that last remark I lost my internet connection for about half an hour. Luckily it came back on in a fairly short amount of time. If I were hosting somebody's web page....
  • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @06:11PM

    by Anonymous Coward on Saturday September 13 2014, @06:11PM (#92779)

    Want to save some carbon? [hpcwire.com]

    The Optalysys Optical Solver Supercomputer will initially offer 9 petaflops of compute power, increasing to 17.1 exaflops by 2020.

    Perhaps the most impressive trait of all is the reduced energy footprint. Power remains one of the foremost barriers to reaching exascale with a traditional silicon processor approach, but these optical computers are said to need only a standard mains supply. Estimated running cost: just £2,100 per year (US$3,500).

    If it works, that's 1,000 to 1 million GFLOPS/W, vs. 4.4 GFLOPS/W [green500.org].
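For the curious, the quoted efficiency range can be reproduced with a rough calculation. The ~3 kW figure for a "standard mains supply" below is an assumption for illustration; Optalysys has not, to my knowledge, published an exact power draw:

```python
# Rough GFLOPS/W estimate for the claimed optical solver, assuming "standard
# mains supply" means roughly a 3 kW wall circuit (an assumption, not a spec).
MAINS_WATTS = 3000

gflops_2014 = 9 * 1e6       # 9 petaflops claimed initially
gflops_2020 = 17.1 * 1e9    # 17.1 exaflops claimed by 2020

print(f"{gflops_2014 / MAINS_WATTS:,.0f} GFLOPS/W")   # ~3,000
print(f"{gflops_2020 / MAINS_WATTS:,.0f} GFLOPS/W")   # ~5,700,000
# Three to six orders of magnitude better than the ~4.4 GFLOPS/W of the most
# efficient conventional machines on the 2014 Green500 list -- if it works.
```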

    • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @07:39PM

      by Anonymous Coward on Saturday September 13 2014, @07:39PM (#92791)

By contrast, 50 GFLOPS/Watt would enable a 20 MW, 1 exaflops supercomputer, which is the target [hpcwire.com].

    • (Score: 0) by Anonymous Coward on Saturday September 13 2014, @10:43PM

      by Anonymous Coward on Saturday September 13 2014, @10:43PM (#92826)

      mod parent up! the linked article deserves its own story (probably has already but i never heard of it before)

      • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @12:19AM

        by Anonymous Coward on Sunday September 14 2014, @12:19AM (#92842)

        They say they will demo the 340 GFLOPS version in January, so that'd be a good time to follow-up.

  • (Score: 1) by MichaelDavidCrawford on Saturday September 13 2014, @06:52PM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Saturday September 13 2014, @06:52PM (#92785) Homepage Journal

Traditional computer science courses don't consider modern computer architecture. For example, it is assumed that the cost of accessing any byte of RAM is the same as accessing any other.

    But modern computers have multiple layers of RAM caching. There is also disk paging.

    Accessing memory that is not cached will cost electric power, to transfer the data into the cache.

    In principle it should be possible to refactor traditional algorithms to make better use of the cache.

    This will most commonly be an architecture-specific optimization.
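As a concrete (if simplified) illustration of that kind of cache-aware refactoring, here is a sketch that sums a large C-ordered NumPy array row-by-row (contiguous memory) versus column-by-column (strided memory). The array size and the timing harness are just assumptions for the demonstration:

```python
# Data locality demo: identical arithmetic, very different memory access cost.
import time
import numpy as np

a = np.random.rand(5000, 5000)   # C-ordered: rows are contiguous in memory

def sum_by_rows(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()    # contiguous, cache-friendly access
    return total

def sum_by_cols(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()    # strided access: one cache line per element
    return total

for fn in (sum_by_rows, sum_by_cols):
    t0 = time.perf_counter()
    fn(a)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.2f} s")
# On typical hardware the column-wise version is noticeably slower (and keeps
# the memory system busier), purely because its access pattern defeats the cache.
```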

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 3, Informative) by sjames on Saturday September 13 2014, @07:18PM

      by sjames (2882) on Saturday September 13 2014, @07:18PM (#92789) Journal

There is already considerable effort in that direction in the HPC world for performance reasons. An algorithm that stays in cache runs much faster. Failing that, one that stays in memory (and in the case of NUMA, within local memory) will be best. If you hit swap, you lose the game.

Overall, this is known as data locality.

    • (Score: 2) by Adamsjas on Saturday September 13 2014, @07:52PM

      by Adamsjas (4507) on Saturday September 13 2014, @07:52PM (#92798)

      I suspect there is nothing to be gained by avoiding disk IO, unless you manage to avoid ALL disk IO.

      Starting and stopping a disk is more energy intensive than letting it spin for 5 hours. Disk motors pull almost no power.
      The cost of running a drive is in the head movement and electronics. Spin up is the worst power consumer.

Xbitlabs did a comparison a few years ago (2008) and showed that typical laptop drives take around 0.15 amps (zero point one five) at rest, and 0.45 A during read/write operations. By 2012 the power utilization had gone up by quite a bit, according to Tom's Hardware.

Caching saves no power. (It's done for speed, not power reduction.) Sooner or later you are going to have to hit that disk drive.

Machines with only SSDs will save power, but not enough to offset their cost. Not yet, anyway. And not if they have to be backed up with spinning rust. Very small storage devices (tablets) can get away with no disk just by storing almost nothing.

Want to save power? Fanless passive cooling and no monitor attached. Anything else is just tinkering at the margins.

      • (Score: 2) by Foobar Bazbot on Saturday September 13 2014, @09:39PM

        by Foobar Bazbot (37) on Saturday September 13 2014, @09:39PM (#92817) Journal

(I'm mostly Devil's advocating here, FWIW. TL;DR: I think spin up/down can be made better, but it's not worthwhile unless you're producing at least thousands of systems.)

        As for disk IO, I think part of the reality you see is due to current design choices that could just as well be different, but aren't for historical reasons. The big one to me is the all-or-nothing spin-up.

        In an ideal world, if you're just doing a periodic write-out of RAM/flash-backed data, or a predictive read-ahead, the drive could spin up quite slowly, using barely more current than running steady-state; you could do your business, then shut it back down. If you need to read data, of course, you want it to spin up quickly, because you're waiting on the results. AFAIK, no current drives allow the host to control this -- the drive either stays spun up, or it spins up at a given rate (usually pretty fast, because users don't like waiting) every time.

        An example of where this could make a huge (well, at least significant) difference: a home media server with tons of video (ripped BDs or whatever) on it. If you cache the entire filesystem's metadata in RAM, and certain sections of each video file (first minute's worth of a/v stream data, plus the seek table at the beginning or end of the file, depending on the container used) on an SSD, you could start playing from that while winding up the disk at a just-in-time rate, so the drive is ready just as you need to read more. Then you read up to n GB ahead to fill a buffer on the SSD (in practice, you'd usually get the whole file in one go), do an opportunistic flush (see below), and spin the disk down. If the user seeks outside the buffer, (most likely during the first minute, but also with a file too big to cache at once), switch over to normal, full-power spin-up, to respond as quickly as possible, but you wouldn't need the full power draw of a panic-start most of the time.

        Likewise, on receiving new files over the network, you can cache them on the SSD, then flush them to disk next time you have it spun up. Or, if the SSD starts to get full before you've written them out, you can spin the disk up -- gently -- and start flushing away.

        Now there's several obstacles -- one is that current drive firmwares don't (AFAIK) have any interface to control spin-up rate, but another is that it requires close integration of the media server and the playback front-end (or a small readahead buffer in the front-end, and a pile of heuristics on the back-end) to make sure you have the right stuff cached on time. And then there's the need for more heuristics to determine which SSD caches should be kept around (e.g. if you're watching a TV series, it should try to cache the next episode(s); it should try to keep recently-watched stuff for a little while, in case you want to rewatch with a friend; it shouldn't waste cache on something you accidentally clicked on and backed out after 5 seconds; it should keep recently-uploaded stuff; etc.). It's really a whole lot of effort touching a lot of pieces to save a little on disk power, but it is theoretically possible to save significant power vs. keeping the disks always spun up, and I think the effort (including HDDs with custom firmware) is within reach for an Apple/MS/Sony/Nintendo product, if they decided to make energy efficiency a marketing point.

        On the other hand, a system that statistically models your typical use hours (weekends and evenings, one expects), spins up the disk (even at a fast rate, since you only do it once a day) a half-hour before it expects to be needed, and doesn't spin it down until it's at the end of a typical usage pattern and no longer being accessed, might get a large fraction of the power savings while also rarely making the user wait for a spin-up, and not even need an SSD to accomplish it...
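A minimal sketch of the just-in-time spin-up arithmetic described above: given the playback bitrate, how much of the stream is already cached, and the drive's spin-up time, you can compute the latest safe moment to start the motor. All the numbers and function names here are illustrative assumptions, not anything a real drive or media server exposes today:

```python
# Just-in-time spin-up: start the disk only when the cached data is about to
# run out, leaving just enough margin for the (possibly slow) spin-up itself.

def seconds_of_cache(cached_bytes, bitrate_bps):
    """How long playback can continue from the SSD/RAM cache alone."""
    return cached_bytes * 8 / bitrate_bps

def should_start_spinup(cached_bytes, bitrate_bps, spinup_s, margin_s=2.0):
    """True once the cached runway no longer covers spin-up plus a safety margin."""
    return seconds_of_cache(cached_bytes, bitrate_bps) <= spinup_s + margin_s

# Example: 60 s of a 20 Mbit/s rip cached on SSD, with a gentle 20 s spin-up.
bitrate = 20e6
cached = 60 * bitrate / 8                                  # bytes currently cached
print(seconds_of_cache(cached, bitrate))                   # 60.0 s of runway
print(should_start_spinup(cached, bitrate, 20))            # False: no need yet
print(should_start_spinup(10 * bitrate / 8, bitrate, 20))  # True: only 10 s left
```

The interesting policy questions (what to keep cached, when to fall back to a full-power spin-up on a seek) sit on top of this trivial calculation, exactly as described above.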

        • (Score: 2) by frojack on Sunday September 14 2014, @01:03AM

          by frojack (1554) on Sunday September 14 2014, @01:03AM (#92862) Journal

An interesting concept: cache till you can't, then spin up slowly.

          You would still have inertia as an enemy, so fight inertia not only with fewer encounters, but less aggressive encounters.

          We spin our drives fast because we fly our heads on a cushion of air (or some other gas).

But if we found some other way to control the head-to-disk distance we could spin them slower, and write to disk at any speed -- from a crawl, just enough to write logs, to massive bursts at full speed -- and buffer according to the then-current speed.

          --
          No, you are mistaken. I've always had this sig.
        • (Score: 2) by maxwell demon on Sunday September 14 2014, @11:08AM

          by maxwell demon (1608) on Sunday September 14 2014, @11:08AM (#92972) Journal

You wouldn't save energy by slowly spinning up. The energy of spinning up is the energy stored in the rotation, and that depends only on the final speed of rotation. Slowly spinning up would just spread the drawing of that energy over a longer time.

          --
          The Tao of math: The numbers you can count are not the real numbers.
          • (Score: 2) by Foobar Bazbot on Sunday September 14 2014, @07:17PM

            by Foobar Bazbot (37) on Sunday September 14 2014, @07:17PM (#93124) Journal

The work output is the same, yes, but that's not the same as the energy input. Since torque is proportional to current, the instantaneous I²R losses scale as the square of the torque. Of course, the time spent spinning up scales inversely with torque, so the total energy wasted scales only linearly with torque. Still, if you run at 1/4 the torque for 4x as long, cutting the total energy lost to I²R to 1/4 is pretty good.

            Granted, I have no data (other than GGP Adamsjas's [soylentnews.org] presumably hyperbolic assertion that spin-up energy is equal to 5 hours of spun-up idling) to suggest that I²R losses during spin-up are very significant, but motors optimized for long periods of constant-speed, low-power operation (as one assumes HDD motors are) will be quite inefficient when using bursts of 30x as much power. (30x comes from comparing hankwang's [soylentnews.org] figure of 5W during a 4s spin-up, vs. Adamsjas's figure of 0.15W for idle, which may not represent similar drives.)
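A quick numerical illustration of that torque/I²R trade-off; the motor constants below are arbitrary placeholders chosen only to show the scaling, not real drive parameters:

```python
# Resistive (I^2 R) loss during spin-up: current ~ torque, so instantaneous
# loss ~ torque^2, but spin-up time ~ 1/torque, so total loss ~ torque.
# All constants are placeholders for illustration.
R = 1.0          # winding resistance, ohms
K_T = 1.0        # torque constant, N*m per amp
I_ROT = 1.0e-4   # platter moment of inertia, kg*m^2
OMEGA = 565.0    # target angular speed, rad/s (~5400 rpm)

def spinup_i2r_loss(torque):
    current = torque / K_T              # amps needed to produce this torque
    t = I_ROT * OMEGA / torque          # time to reach OMEGA at constant torque
    return current**2 * R * t           # joules dissipated in the windings

full, gentle = spinup_i2r_loss(1.0), spinup_i2r_loss(0.25)
print(full, gentle, gentle / full)      # gentle spin-up wastes 1/4 as much
# The kinetic energy stored in the platters (0.5 * I * omega^2) is identical
# either way; only the resistive waste changes with the spin-up rate.
```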

      • (Score: 2) by hankwang on Saturday September 13 2014, @10:24PM

        by hankwang (100) on Saturday September 13 2014, @10:24PM (#92825) Homepage

        "Starting and stopping a disk is more energy intensive than letting it spin for 5 hours. Disk motors pull almost no power."

        Do you have a source for that? A typical 2.5 inch disk will draw 5 W (1 A at 5 V) for 4 seconds to spin up. I highly doubt that air drag on the platters takes just 1 mW of power.
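For anyone following along, here is the arithmetic behind that implied ~1 mW figure, using the 5 W / 4 s spin-up and the 0.15 A idle figure quoted in this subthread:

```python
# Spin-up energy vs. idle power: for one spin-up to cost as much as five hours
# of idling, the idle draw would have to be implausibly tiny.
spinup_energy_j = 5.0 * 4.0            # 5 W for 4 s -> 20 J per spin-up
idle_seconds = 5 * 3600                # the "5 hours" claimed upthread

print(spinup_energy_j / idle_seconds)  # ~0.0011 W, i.e. about 1.1 mW

# Versus the ~0.75 W a drive draws idling at 5 V and 0.15 A:
print(spinup_energy_j / (5.0 * 0.15))  # one spin-up ~= 27 seconds of idle
```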

        • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @12:35AM

          by Anonymous Coward on Sunday September 14 2014, @12:35AM (#92848)

          .. if only we could couple it to some sort of flywheel to store the energy.

        • (Score: 2) by frojack on Sunday September 14 2014, @03:29AM

          by frojack (1554) on Sunday September 14 2014, @03:29AM (#92901) Journal

Math fail.
You can't compare one spin-up with an hour of idle time.

You have to compare the dozens, maybe even hundreds, of spin-ups you would do against the hours that the drive spends pulling 0.15 watts (150 mW) while spinning.

And spin-up is longer than 4 seconds. Closer to 10, plus you have head unload and retract time, and head re-fly and track calibration time EACH time you power down. More than one drive? You might have to stagger the spin-ups.

          --
          No, you are mistaken. I've always had this sig.
  • (Score: 2) by jasassin on Saturday September 13 2014, @09:47PM

    by jasassin (3566) <jasassin@gmail.com> on Saturday September 13 2014, @09:47PM (#92818) Homepage Journal

    Jouhra Dad and Ghalem Belalem in the Department of Computer Science at Oran explain how they have developed an algorithm to control the virtual machines running on computers in a cloud environment so that energy use of the core central processing units (CPUs) and memory capacity (RAM as opposed to hard disk storage space) can be reduced as far as possible without affecting performance overall.

    Wow! I wonder if it's as awesome as RAM Doubler!?

    --
    jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
  • (Score: 2) by gallondr00nk on Saturday September 13 2014, @11:05PM

    by gallondr00nk (392) on Saturday September 13 2014, @11:05PM (#92833)

For basic NAS stuff, Coppermine Celerons are ideal. They have wonderfully low TDP figures, ranging from 11 W for the 533 MHz part to 29 W for the 1.1 GHz. I bought a 566 MHz with main board and RAM for a few pounds on eBay, added a PCI SATA card, and it now makes a rock solid file server. As an added bonus I installed an oversized copper Socket A heatsink, which allows it to run fanless.

Drive IO doesn't benchmark too badly either: 35 to 40 MB a second using bog standard 7200 rpm drives according to hdparm, and that's with a very cheap SATA card.

    • (Score: 0) by Anonymous Coward on Sunday September 14 2014, @12:29AM

      by Anonymous Coward on Sunday September 14 2014, @12:29AM (#92846)

      how does that compare with the throughput of a Raspberry Pi or BeagleBone Black?

      would you trade 37MB/sec at 11 watts for 20MB/sec at 5 watts?
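Taking the question at face value (see the fairness caveat in the reply below), the throughput-per-watt arithmetic works out to:

```python
# MB/s per watt for the two hypothetical NAS configurations in the question.
for label, mb_s, watts in [("Celeron NAS", 37, 11), ("ARM board NAS", 20, 5)]:
    print(f"{label}: {mb_s / watts:.1f} MB/s per watt")
# Celeron NAS: 3.4 MB/s per watt
# ARM board NAS: 4.0 MB/s per watt
```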

      • (Score: 2) by Foobar Bazbot on Sunday September 14 2014, @03:21AM

        by Foobar Bazbot (37) on Sunday September 14 2014, @03:21AM (#92898) Journal

        Hey, that's not a very fair comparison.

        The nominal power consumption of a Raspberry Pi model B is AIUI 3.5W (700mA, 5V) without peripherals (model A is less), and the Beaglebone Black is something like 2.5W (500mA, 5V) with no peripherals (the manual says peak consumption during boot is 460mA with HDMI, ethernet, USB hub, and USB flash drive connected) -- a 5W minimum PSU is commonly recommended for either board, to allow for powering some peripherals as well, and is in both cases probably more than enough for a headless NAS (assuming disk power is considered separately).

        The 11W TDP for the P3-based Celeron, OTOH, is just the CPU -- not only are peripherals not included, but neither is the chipset.

        So for a fair comparison, we either need to increase 11W to a reasonable whole-board or whole-system power consumption for the Celeron, or to substantially reduce the 5W figure.

        • (Score: 2) by gallondr00nk on Sunday September 14 2014, @07:33AM

          by gallondr00nk (392) on Sunday September 14 2014, @07:33AM (#92943)

I wasn't really comparing it to anything - an RPi with a decent power supply, powered hub, and two external hard drive caddies would use a lot less power, for sure.

          • (Score: 2) by maxwell demon on Sunday September 14 2014, @11:10AM

            by maxwell demon (1608) on Sunday September 14 2014, @11:10AM (#92973) Journal

            And Foobar Bazbot wasn't replying to you.

            Try the parent link next time (or adjust your settings so that no posts are completely hidden).

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by gallondr00nk on Sunday September 14 2014, @11:04PM

              by gallondr00nk (392) on Sunday September 14 2014, @11:04PM (#93192)

              Fucking hell, I never even knew I didn't see modded down replies.

              Thanks :)