Research Scientists to Use Network ("Pacific Research Platform") Much Faster Than Internet
A series of ultra-high-speed fiber-optic cables will weave a cluster of West Coast university laboratories and supercomputer centers into a network called the Pacific Research Platform, funded by a five-year, $5 million grant from the National Science Foundation.
The network is meant to keep pace with the vast acceleration of data collection in fields such as physics, astronomy and genetics. It will not be directly connected to the Internet, but will make it possible to move data at speeds of 10 to 100 gigabits per second among 10 University of California campuses and 10 other universities and research institutions in several states, tens or hundreds of times faster than is typical now.
The challenge in moving large amounts of scientific data is that the open Internet is designed for transferring small amounts of data, like web pages, said Thomas A. DeFanti, a specialist in scientific visualization at the California Institute for Telecommunications and Information Technology, or Calit2, at the University of California, San Diego. While a conventional network connection might be rated at 10 gigabits per second, in practice scientists trying to transfer large amounts of data often find that the real rate is only a fraction of that capacity.
(Score: 3, Interesting) by RedBear on Saturday August 01 2015, @11:14PM
Exactly. The switch, unlike the hub, does a far better job of multiplexing packets from different connections and isolating each connection from collisions with all the other traffic on the network. Proper scheduling and multiplexing is what really lets things flow as well as they possibly can.
What's really interesting is that 20 years later we are still basically I/O limited by the hardware we use to store data (even SSDs) rather than by the CPU or the network (although we have gone from 10-Megabit to 1-Gigabit network speeds in the meantime). You basically have to create a RAM disk in order to successfully saturate even a 1-Gigabit connection for very long. Most of the time, in practice, the only place you'll get a boost from going to 10-Gigabit is in heavily used trunk lines connecting large numbers of machines. But still, the packet size remains the same.
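To put rough numbers on that claim (a back-of-the-envelope sketch, not a benchmark; the storage figures are ballpark assumptions, not measurements): 1 Gbit/s works out to about 125 MB/s of payload, which is right around the sequential throughput of a single spinning disk, so you really do need something like RAM to keep the pipe full for long.

```python
# Back-of-the-envelope: link line rates vs. rough typical storage throughput.

def gbit_to_mbyte_per_s(gbit: float) -> float:
    """Convert a link rate in gigabits/s to megabytes/s (1 byte = 8 bits)."""
    return gbit * 1000 / 8

links = {"1 GbE": 1, "10 GbE": 10, "100 GbE": 100}
# Ballpark sequential throughput in MB/s -- assumed figures for illustration.
storage = {"HDD (sequential)": 150, "SATA SSD": 550, "RAM": 10_000}

for name, gbit in links.items():
    print(f"{name}: {gbit_to_mbyte_per_s(gbit):.0f} MB/s line rate")
# 1 GbE needs ~125 MB/s sustained: a lone HDD can barely keep up,
# and 10 GbE (~1250 MB/s) already outruns a SATA SSD.
```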
Even with 100-Gigabit networking I doubt they will be able to get any reliable performance gains from increasing packet size beyond possibly a maximum of 3K (twice the current standard packet size), even though the connection is orders of magnitude faster. I wouldn't try even that except between the two 100-Gigabit endpoints, and would expect it to end up a total wash or cause a tiny decrease in performance. I have little doubt that trying to increase the packet size by an order of magnitude to 15K to match the order of magnitude speed increase would be an absolute disaster.
I would be willing to put good money on the probability that eventually some clever mathematician will come up with a mathematical proof that says there are exponentially diminishing returns with any increase (or decrease) in packet size from what we use right now. Being able to slice digital things up into suitably small pieces seems to be one of the essential cornerstones that allows us to do anything useful with any type of digital technology, and 1.5K seems to be a sweet spot in network packet size that we really have had a hard time improving upon. And it's definitely not for lack of trying.
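One way to see why frame size matters at these speeds (a rough sketch using standard Ethernet on-wire framing around the familiar 1500-byte MTU; the 9000-byte row is the common "jumbo frame" size, included only for comparison): at line rate, the per-packet rate the hardware has to sustain grows linearly with link speed and shrinks with frame size.

```python
# Packets per second at line rate for various link speeds and frame sizes.
# On-wire overhead per Ethernet frame beyond the MTU payload:
# 14-byte header + 4-byte FCS + 8-byte preamble + 12-byte inter-frame gap.
WIRE_OVERHEAD = 14 + 4 + 8 + 12  # = 38 bytes

def packets_per_second(link_gbit: float, mtu_bytes: int) -> float:
    """Maximum frame rate on a link of the given speed at full-size frames."""
    frame_bits = (mtu_bytes + WIRE_OVERHEAD) * 8
    return link_gbit * 1e9 / frame_bits

for gbit in (1, 10, 100):
    for mtu in (1500, 9000):  # standard vs. jumbo frames
        pps = packets_per_second(gbit, mtu)
        print(f"{gbit:>3} Gbit/s, MTU {mtu}: {pps / 1e6:.2f} Mpps")
# At 100 Gbit/s with 1500-byte frames the NIC must handle ~8 million
# frames per second -- two orders of magnitude more than at 1 Gbit/s.
```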
(Again, not a networking expert, just interested in the field. Follow your doctor's orders regarding total salt intake.)
¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
(Score: 0, Redundant) by CyprusBlue on Sunday August 02 2015, @02:11AM
So you think =)
(Score: 2) by RedBear on Thursday August 06 2015, @03:20AM
I shall take that as a compliment.
¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ