
posted by janrinok on Saturday August 01 2015, @07:25PM   Printer-friendly
from the now-that-is-fast dept.

Research Scientists to Use Network ("Pacific Research Platform") Much Faster Than Internet

A series of ultra-high-speed fiber-optic cables will weave a cluster of West Coast university laboratories and supercomputer centers into a network called the Pacific Research Platform, as part of a five-year, $5 million grant from the National Science Foundation.

The network is meant to keep pace with the vast acceleration of data collection in fields such as physics, astronomy and genetics. It will not be directly connected to the Internet, but it will make it possible to move data at speeds of 10 to 100 gigabits per second among 10 University of California campuses and 10 other universities and research institutions in several states, tens or hundreds of times faster than is typical now.

The challenge in moving large amounts of scientific data is that the open Internet is designed for transferring small amounts of data, like web pages, said Thomas A. DeFanti, a specialist in scientific visualization at the California Institute for Telecommunications and Information Technology, or Calit2, at the University of California, San Diego. While a conventional network connection might be rated at 10 gigabits per second, in practice scientists trying to transfer large amounts of data often find that the real rate is only a fraction of that capacity.
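
To put those rates in perspective, here is a back-of-the-envelope calculation of how long a 1-terabyte dataset would take to move at several line rates; the dataset size and the assumption of a fully utilized link are illustrative, not figures from the article:

    # Idealized transfer times for a 1 TB dataset at several line rates,
    # assuming the link is fully utilized (real transfers rarely are).
    TB_IN_BITS = 1e12 * 8  # 1 terabyte expressed in bits

    for gbps in (1, 10, 100):
        rate_bps = gbps * 1e9              # nominal line rate in bits per second
        seconds = TB_IN_BITS / rate_bps    # idealized transfer time
        print(f"{gbps:>3} Gbit/s: {seconds:7.0f} s (~{seconds / 3600:.2f} h)")

At 1 gigabit per second that is a little over two hours; at 100 gigabits per second it drops to under a minute and a half, which is the gap the Pacific Research Platform is meant to close.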

Original Submission

 
  • (Score: 3, Interesting) by RedBear (1734) on Saturday August 01 2015, @11:14PM (#216885)

    Back in the mid-90s I was consulting for a company that used a bunch of PCs to hold commercials encoded with MPEG-1. We needed to move these files around fairly often (every 10 minutes or so). As it was Linux, and both the Ethernet drivers and the hardware supported larger packet sizes, I experimented with making them larger. It turned out that bumping the packets up to 4.5K (the largest supported) made essentially zero difference in the transfer time.
    What finally solved our congestion problems was the introduction of network switches. Saw 'em at Comdex, ordered a couple, and all our congestion issues went away.
    Fun fact: we were running on '486 boxes. We tried Pentiums, but they did nothing except cost more. We were I/O bound; the CPU was idling.

    Exactly. A switch, unlike a hub, does a far better job of multiplexing packets from different connections, and it isolates each connection so it isn't dragged down by collisions with every other connection on the network. Proper scheduling and multiplexing are what really let traffic flow as well as it possibly can.

    What's really interesting is that 20 years later we are still basically I/O limited by the hardware we use to store data (even SSDs) rather than by the CPU or the network (although we have gone from 10-Megabit to 1-Gigabit network speeds in the meantime). You basically have to create a RAM disk in order to successfully saturate even a 1-Gigabit connection for very long. Most of the time, in practice, the only place you'll get a boost from going to 10-Gigabit is in heavily used trunk lines connecting large numbers of machines. But still, the packet size remains the same.
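
    As a rough way to see where the bottleneck actually sits, a memory-to-memory throughput check along these lines keeps disk I/O out of the measurement entirely. This is only a crude iperf-style sketch: the host, port and sizes are placeholders, and in practice you'd run the sender and receiver on two different machines rather than over loopback as shown here.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5001     # placeholder endpoint; use two real hosts in practice
    CHUNK = b"\x00" * (1 << 20)        # 1 MiB buffer kept in RAM, no disk involved
    TOTAL_BYTES = 1 << 30              # push 1 GiB through the socket

    def receiver():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(1 << 20):   # drain until the sender closes
                    pass

    def sender():
        with socket.create_connection((HOST, PORT)) as s:
            start = time.perf_counter()
            sent = 0
            while sent < TOTAL_BYTES:
                s.sendall(CHUNK)
                sent += len(CHUNK)
            elapsed = time.perf_counter() - start
            print(f"{sent * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f} s")

    if __name__ == "__main__":
        threading.Thread(target=receiver, daemon=True).start()
        time.sleep(0.2)                # give the listener a moment to bind
        sender()

    If a test like this saturates the link but your real transfers don't, the storage (or the application) is the bottleneck, not the network.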

    Even with 100-Gigabit networking I doubt they will be able to get any reliable performance gains from increasing packet size beyond possibly a maximum of 3K (twice the current standard packet size), even though the connection is orders of magnitude faster. I wouldn't try even that except between the two 100-Gigabit endpoints, and would expect it to end up a total wash or cause a tiny decrease in performance. I have little doubt that trying to increase the packet size by an order of magnitude to 15K to match the order of magnitude speed increase would be an absolute disaster.

    I would be willing to put good money on the probability that eventually some clever mathematician will come up with a mathematical proof that says there are exponentially diminishing returns with any increase (or decrease) in packet size from what we use right now. Being able to slice digital things up into suitably small pieces seems to be one of the essential cornerstones that allows us to do anything useful with any type of digital technology, and 1.5K seems to be a sweet spot in network packet size that we really have had a hard time improving upon. And it's definitely not for lack of trying.
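
    For what it's worth, the header-overhead side of that argument is easy to put numbers on. Assuming plain TCP over IPv4 on Ethernet (20-byte IP and 20-byte TCP headers inside the MTU, plus roughly 38 bytes of Ethernet framing, preamble and inter-frame gap on the wire per packet), the fraction of the wire that actually carries payload works out as below; this says nothing about per-packet CPU or interrupt cost, only header overhead.

    # Wire efficiency of TCP/IPv4 over Ethernet at various MTUs. The overhead
    # constants are the standard header sizes; everything else is an idealization.
    IP_TCP_HEADERS = 40       # 20 B IPv4 + 20 B TCP, carried inside the MTU
    ETHERNET_FRAMING = 38     # header + FCS + preamble + inter-frame gap, on the wire

    for mtu in (1500, 3000, 9000, 15000):
        payload = mtu - IP_TCP_HEADERS        # useful data per packet
        on_wire = mtu + ETHERNET_FRAMING      # bytes the packet occupies on the link
        print(f"MTU {mtu:>5}: {payload / on_wire:.1%} of the wire carries payload")

    Going from 1.5K to 9K packets only moves that figure from about 95% to about 99%, which is one concrete reason the returns taper off so quickly.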

    (Again, not a networking expert, just interested in the field. Follow your doctor's orders regarding total salt intake.)

    --
    ¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
    ... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
  • (Score: 0, Redundant) by CyprusBlue (943) on Sunday August 02 2015, @02:11AM (#216909)

    So you think =)

    • (Score: 2) by RedBear (1734) on Thursday August 06 2015, @03:20AM (#218935)

      I shall take that as a compliment.

      --
      ¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
      ... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ