
posted by LaminatorX on Thursday July 02 2015, @06:31AM   Printer-friendly
from the trip-through-your-wires dept.

The Platform reports that Google's networking ambitions have scaled up along with their datacenters:

The gap between what a hyperscaler can build and what it might need seems to be increasing, and perhaps at a ridiculous pace. When the networking industry was first delivering 40 Gb/sec Ethernet several years back, search engine giant Google was explaining that it would very much like to have 1 Tb/sec switching. And as we enter the epoch of 100 Gb/sec networking and contemplate perhaps delivering 1 Tb/sec Ethernet gear maybe in 2020 or so, Google has once again come forward and explained that what will be really needed soon is something closer to 5 Pb/sec networking.

You heard that right. In a recent presentation where Amin Vahdat, a Google Fellow and technical lead for networking, gave some details on ten years' worth of homemade networking inside of the company, he said "petabits per second." This, in a 2015 world where getting a switch to do something north of 6 terabits per second out of a single switch ASIC is an accomplishment. Forget for a second that the Ethernet industry has no idea whatsoever about how it might increase the switching bandwidth by close to three orders of magnitude. The math that Vahdat walked through is as fascinating as it is fun.

[...] To illustrate the issues facing datacenter and system designers in the future, Vahdat brought up another of Amdahl's Laws, the one that says you should have 1 Mbit/sec of I/O for every 1 MHz of computation to maintain a balanced system in a parallel computing environment. He pointed out that with the adoption of flash today and other non-volatile memories in the future, which have very high bandwidth and very low latency requirements, keeping the network in balance with compute and storage is going to be a big challenge. [...] For 50,000 servers running at [2.5 GHz - see here], Amdahl's lesser law suggests, according to Vahdat, that we need a 5 Pb/sec network. And even with a 10:1 oversubscription rate on the network, you are still talking about needing a 500 Tb/sec network.
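The arithmetic can be checked back-of-the-envelope. The talk does not state a core count per server; the sketch below assumes roughly 40 cores per server, a plausible figure that makes the totals come out to Vahdat's numbers exactly:

```python
# Back-of-the-envelope check of Vahdat's figures.
SERVERS = 50_000
CORES_PER_SERVER = 40        # assumption -- not stated in the talk
CLOCK_MHZ = 2_500            # 2.5 GHz per core

# Amdahl's lesser law: 1 Mbit/sec of I/O per 1 MHz of computation.
total_mbit_per_sec = SERVERS * CORES_PER_SERVER * CLOCK_MHZ
print(f"Balanced network: {total_mbit_per_sec / 1e9:.0f} Pb/sec")

# With 10:1 oversubscription, the requirement drops tenfold.
print(f"With 10:1 oversubscription: {total_mbit_per_sec / 10 / 1e6:.0f} Tb/sec")
```

With those assumptions the balanced-network figure lands at 5 Pb/sec, and 500 Tb/sec after oversubscription, matching the numbers quoted above.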

To put that into perspective, Vahdat estimates that the backbone of the Internet has around 200 Tb/sec of bandwidth. And bandwidth is not the only issue. If you want that NVM memory to be useful, then this future hyper-bandwidth network has to provide 10 microsecond latencies between the servers and the storage, and even for flash-based storage, you need 100 microsecond latencies to make the storage look local (more or less) to the servers. Otherwise, the servers, which are quite expensive, will be sitting there idle a lot of the time, waiting for data.
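The locality argument behind those latency budgets can be sketched numerically. The device latencies below (about 10 microseconds for NVM-class storage, about 100 microseconds for flash) are ballpark figures assumed for illustration, not numbers from the talk:

```python
# Remote storage "looks local" only when the network round trip is
# small relative to the device's own access time.

def remote_slowdown(device_latency_us, network_latency_us):
    """Factor by which a remote access exceeds a purely local one."""
    return (device_latency_us + network_latency_us) / device_latency_us

# NVM-class storage (~10 us assumed) with the 10 us network budget:
print(remote_slowdown(10, 10))    # remote access costs 2x local

# Flash (~100 us assumed) with the 100 us network budget:
print(remote_slowdown(100, 100))  # remote access costs 2x local
```

Under these assumptions, each latency budget keeps a remote access within about 2x of a local one; a slower network than the budget allows, and the expensive servers stall waiting for data.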


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by kaszz on Thursday July 02 2015, @11:53AM

    by kaszz (4211) on Thursday July 02 2015, @11:53AM (#204172) Journal

    Put storage where the computers that use it are? And be a lot smarter about using it. Yes, one can make pigs fly, but it wastes a lot of resources.

  • (Score: 2) by forkazoo on Thursday July 02 2015, @09:05PM

    by forkazoo (2561) on Thursday July 02 2015, @09:05PM (#204411)

    Storage locality certainly matters. It's absolutely a part of the equation.

    But you still need to get the data to users outside your data center, and to replicate data between data centers. 1 Pbit/sec is 1 Mbit/sec for a billion people, if I have kept track of the decimal places. That's a lot, but at the scale of something like Google it's not actually an unreasonable amount of data to get out to users: individual users might expect 100 Mbit/sec or more from their mobile phones, have gigabit connections at home, and expect to cloud-sync data between 3-5 devices each going forward. (Tablet, phone, laptop, and desktop is a pretty reasonable range of devices for a person to use regularly.)
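The comment's decimal-place check holds up; a quick sketch (the per-user rates are the commenter's figures, not measurements):

```python
# Sanity-checking the division: 1 Pbit/sec shared among a billion users.
PBIT_PER_SEC = 1e15   # 1 petabit per second
USERS = 1e9           # a billion users

print(PBIT_PER_SEC / USERS / 1e6)  # 1.0 Mbit/sec per user

# Viewed the other way: how many 100 Mbit/sec users fit in 1 Pb/sec?
print(PBIT_PER_SEC / 100e6)        # 10 million concurrent users
```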

    • (Score: 2) by kaszz on Thursday July 02 2015, @11:36PM

      by kaszz (4211) on Thursday July 02 2015, @11:36PM (#204468) Journal

      I think part of the solution is to parallelize things; otherwise one ends up with these bottlenecks.