posted by LaminatorX on Thursday July 02 2015, @06:31AM
from the trip-through-your-wires dept.

The Platform reports that Google's networking ambitions have scaled up along with their datacenters:

The gap between what a hyperscaler can build and what it might need seems to be increasing, and perhaps at a ridiculous pace. When the networking industry was first delivering 40 Gb/sec Ethernet several years back, search engine giant Google was explaining that it would very much like to have 1 Tb/sec switching. And as we enter the epoch of 100 Gb/sec networking and contemplate delivering 1 Tb/sec Ethernet gear perhaps in 2020 or so, Google has once again come forward and explained that what will really be needed soon is something closer to 5 Pb/sec networking.

You heard that right. In a recent presentation where Amin Vahdat, a Google Fellow and technical lead for networking, gave some details on ten years' worth of homemade networking inside of the company, he said "petabits per second." This, in a 2015 world where getting a switch to do something north of 6 terabits per second out of a single switch ASIC is an accomplishment. Forget for a second that the Ethernet industry has no idea whatsoever about how it might increase the switching bandwidth by close to three orders of magnitude. The math that Vahdat walked through is as fascinating as it is fun.
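For a sense of scale, here is a quick back-of-the-envelope check of that gap, assuming decimal SI prefixes (1 Pb/sec = 1,000 Tb/sec):

    # Gap between a 2015 switch ASIC and Google's stated need,
    # using decimal SI prefixes (1 Tb = 1e12 bits, 1 Pb = 1e15 bits).
    asic_bps = 6e12      # ~6 Tb/sec out of a single switch ASIC (2015 state of the art)
    needed_bps = 5e15    # ~5 Pb/sec that Vahdat says will be needed

    print(f"shortfall: {needed_bps / asic_bps:.0f}x")  # ~833x, close to three orders of magnitude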

[...] To illustrate the issues facing datacenter and system designers in the future, Vahdat brought up another of Amdahl's Laws, the one that says you should have 1 Mbit/sec of I/O for every 1 MHz of computation to maintain a balanced system in a parallel computing environment. He pointed out that with the adoption of flash today, and of other non-volatile memories in the future, which have very high bandwidth and very low latency requirements, keeping the network in balance with compute and storage is going to be a big challenge. [...] For 50,000 servers running at [2.5 GHz - see here], Amdahl's lesser law suggests, according to Vahdat, that we need a 5 Pb/sec network. And even with a 10:1 oversubscription ratio on the network, you are still talking about needing a 500 Tb/sec network.
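The arithmetic behind those figures can be reproduced in a few lines. Note that the per-server core count below is an assumption for illustration (the summary gives only the per-core clock); the total only reaches 5 Pb/sec if each server contributes roughly 100 GHz of aggregate clock:

    # Amdahl's "balanced system" rule of thumb: 1 Mbit/sec of I/O per 1 MHz of compute.
    # The core count is an assumption for illustration; ~100 GHz of aggregate
    # clock per server makes Vahdat's totals come out.
    servers = 50_000
    cores_per_server = 40        # assumed; not stated in the summary
    core_clock_mhz = 2_500       # 2.5 GHz per core

    mbps_per_server = cores_per_server * core_clock_mhz   # 1 Mbit/sec per MHz
    total_bps = servers * mbps_per_server * 1e6

    print(f"balanced network: {total_bps / 1e15:.0f} Pb/sec")                 # 5 Pb/sec
    print(f"with 10:1 oversubscription: {total_bps / 10 / 1e12:.0f} Tb/sec")  # 500 Tb/sec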

To put that into perspective, Vahdat estimates that the backbone of the Internet has around 200 Tb/sec of bandwidth. And bandwidth is not the only issue. If you want that non-volatile memory to be useful, then this future hyper-bandwidth network has to provide 10 microsecond latencies between the servers and the storage, and even for flash-based storage, you need 100 microsecond latencies to make the storage look local (more or less) to the servers. Otherwise, the servers, which are quite expensive, will be sitting there idle a lot of the time, waiting for data.
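A minimal sketch of the "look local" argument, assuming for illustration that the device's own access latency sits in roughly the same class as the target network latency (the device figures below are assumptions, not numbers from the talk):

    # Remote access time ~ device latency + network round trip.
    # Device latencies here are assumptions for illustration.
    cases = [
        ("flash", 100e-6, 100e-6),       # ~100 us device read, 100 us network target
        ("future NVM", 10e-6, 10e-6),    # ~10 us device read, 10 us network target
    ]
    for name, device_s, network_s in cases:
        remote_s = device_s + network_s
        print(f"{name}: local ~{device_s * 1e6:.0f} us, "
              f"remote ~{remote_s * 1e6:.0f} us ({remote_s / device_s:.0f}x local)")

Hold the network at those ceilings and remote storage costs only about a factor of two over local access; let the round trip drift into the milliseconds and those expensive servers spend most of their time stalled.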


Original Submission

 
  • (Score: 2) by Gravis (4596) on Thursday July 02 2015, @08:05AM (#204135)

    hyperscaler - one who makes hyperscale systems
    hyperscale system - a system that can scale almost indefinitely through the use of parallelism. see also: supercomputer.

  • (Score: 4, Interesting) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday July 02 2015, @08:19AM (#204136) Journal

    https://en.wikipedia.org/wiki/Hyperscale [wikipedia.org]

    In computing, hyperscale is the ability of an architecture to scale appropriately as increased demand is added to the system. This typically involves the ability to seamlessly provision and add compute, memory, networking, and storage resources to a given node or set of nodes that make up a larger computing, distributed computing, or grid computing environment. Hyperscale computing is necessary in order to build a robust and scalable cloud, big data, map reduce, or distributed storage system and is often associated with the infrastructure required to run large distributed sites such as Facebook,[1] Google,[2] Microsoft,[3] or Amazon.[4][5]

    It's different from most supercomputers, which are usually set in stone. It handles a lot more communication and data, across more nodes in more places. Your scientific simulation-running supercomputer doesn't need to be distributed across different continents in order to connect to home users faster.

    • (Score: -1, Flamebait) by Anonymous Coward on Thursday July 02 2015, @08:22AM (#204137)

      hyperpower (plural hyperpowers)

      (international relations) An international hegemon, more powerful than a superpower  

      What is a hegemon?

      From Ancient Greek ἡγεμών (hēgemṓn, “a leader, guide, commander, chief”), from ἡγέομαι (hēgéomai, “to lead”).

      What is a commander in chief?

      OBAMA's MOTHERfucking ASS owns YOU

    • (Score: 2) by threedigits (607) on Thursday July 02 2015, @01:26PM (#204220)

      It's different from most supercomputers, which are usually set in stone.

      The goals are different, too. A supercomputer is architected to run a single (complex) job as fast as possible; a hyperscale datacenter runs many (simpler) jobs concurrently. In the latter case, it's the number of jobs that scales.

  • (Score: 0) by Anonymous Coward on Thursday July 02 2015, @10:15AM (#204157)

    Perhaps they meant 'hyperscalar'? (a higher dimensional scalar)

    In any case, it's heavy on the 'hype'.