
posted by janrinok on Saturday August 01 2015, @07:25PM
from the now-that-is-fast dept.

Research Scientists to Use Network ("Pacific Research Platform") Much Faster Than Internet

A series of ultra-high-speed fiber-optic cables will weave a cluster of West Coast university laboratories and supercomputer centers into a network called the Pacific Research Platform as part of a five-year, $5 million grant from the National Science Foundation.

The network is meant to keep pace with the vast acceleration of data collection in fields such as physics, astronomy and genetics. It will not be directly connected to the Internet, but will make it possible to move data at speeds of 10 to 100 gigabits per second among 10 University of California campuses and 10 other universities and research institutions in several states, tens or hundreds of times faster than is typical now.

The challenge in moving large amounts of scientific data is that the open Internet is designed for transferring small amounts of data, like web pages, said Thomas A. DeFanti, a specialist in scientific visualization at the California Institute for Telecommunications and Information Technology, or Calit2, at the University of California, San Diego. While a conventional network connection might be rated at 10 gigabits per second, in practice scientists trying to transfer large amounts of data often find that the real rate is only a fraction of that capacity.
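As an illustration (mine, not the article's): a big reason a single transfer sees only a fraction of a link's rated speed is that TCP can keep only one window's worth of data in flight per round trip. A minimal Python sketch, with assumed, illustrative numbers:

    # Rough numbers, assuming a 60 ms cross-country round trip.
    def max_tcp_throughput_gbps(window_bytes, rtt_seconds):
        """Upper bound on single-stream TCP throughput: window / RTT."""
        return window_bytes * 8 / rtt_seconds / 1e9

    # A small 64 KiB window caps one stream at roughly 8.7 Mb/s:
    print(max_tcp_throughput_gbps(64 * 1024, 0.060))

    # Filling 10 Gb/s at that RTT needs the full bandwidth-delay product
    # in flight -- about 75 MB of unacknowledged data:
    print(10e9 / 8 * 0.060 / 1e6, "MB")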

Original Submission

 
  • (Score: 0) by Anonymous Coward on Sunday August 02 2015, @04:17PM (#217035)

    In my home network, I am using jumbo frame packet sizes of 4088 bytes where possible on wired connections. Not all of my hardware has the option; some don't do JF at all and some only allow 9000+ sizes.

    I found that an MTU of 4088, even though some hardware did not support it (those hosts stay at 1500), was the "sweet spot" after doing much testing. My original goal was to determine whether my firewall had a bottleneck: I had modified it to some extent and wanted to see how it handled routing between different subnets across multiple interfaces, before worrying about access lists or other security stuff... the firewall has to perform first, otherwise applying security just makes the performance worse.
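    (For anyone wanting to repeat that kind of testing, here is a minimal Python sketch of one way to probe which MTUs a path actually carries. It assumes Linux iputils ping, where -M do forbids fragmentation and -s sets the ICMP payload size; the gateway address is just a placeholder for your own.)

        import subprocess

        def mtu_works(host, mtu):
            # ICMP payload = MTU minus 20-byte IP header and 8-byte ICMP header.
            payload = mtu - 28
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "1", "-M", "do", "-s", str(payload), host],
                capture_output=True,
            )
            return result.returncode == 0

        for mtu in (1500, 4088, 9000):
            print(mtu, "ok" if mtu_works("192.168.1.1", mtu) else "dropped")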

    What I found is that it was difficult to come to a single standard for the home network, but I got close enough, and gained a performance boost as well. The various network cards, installed in machines of various ages and operating systems, had few common denominators, but by keeping their settings consistent, configuring the switches connecting them appropriately, and taking care of any localized bottlenecks, I did see an improvement.

    Note that you can't expect to hit gigabit network speeds (well, over 100 Mbps anyway) on average on 486/Pentium-class hardware without a good NIC and storage that supports higher-speed I/O, like a RAID 5 or RAID 10 array.

    CPU speed did make a difference; gigabit speeds do cause the CPU to become quite busy. I have seen hosts max out their CPU and drop packets when using a cheap card trying to go gigabit on an otherwise well-designed network. The network is nearly always blamed...
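    (Back-of-envelope, with my own assumed numbers: the per-packet work is what eats the CPU, and jumbo frames cut the packet rate roughly in proportion to frame size. A quick Python sketch, assuming 38 bytes of Ethernet overhead per frame:)

        LINK_BPS = 1_000_000_000   # gigabit Ethernet
        OVERHEAD = 38              # preamble + header + FCS + inter-frame gap

        for mtu in (1500, 9000):
            frames_per_sec = LINK_BPS / 8 / (mtu + OVERHEAD)
            print(f"MTU {mtu}: ~{frames_per_sec:,.0f} frames/s")
        # MTU 1500: ~81,274 frames/s -- a cheap NIC interrupting on every frame
        # MTU 9000: ~13,831 frames/s -- roughly 6x less per-packet work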

    Thinking of your experience, I'd have to think that setting jumbo frames on a network consisting of hubs would make the performance markedly worse, so I have to believe your performance was not very good to begin with. There would be little to no buffering on such equipment; it would rely entirely on the hosts and their NICs. There were certainly some high-end cards available back then (server-grade gigabit cards from that era can still outperform the integrated NICs in consumer gear today), but jumbo frames on a hub-based network? I'd think you'd have seen an improvement if you REDUCED the MTU to something smaller... not made it bigger. So much has to be retransmitted if a big frame like that hits a CRC error or collision.

    Adding the switches was, without question, the biggest bang for your buck. Jumbo frames only add single-digit percent returns (on gigabit); I can't imagine realizing a gain on hubs, given the problems inherent in a collision domain like that.
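    (That single-digit claim checks out on paper. Counting 38 bytes of Ethernet overhead and 40 bytes of IP+TCP headers per packet -- my assumptions, not measurements -- a short Python check:)

        def payload_efficiency(mtu):
            # Fraction of each frame's wire time carrying actual TCP payload.
            return (mtu - 40) / (mtu + 38)

        std, jumbo = payload_efficiency(1500), payload_efficiency(9000)
        print(f"1500: {std:.1%}  9000: {jumbo:.1%}  gain: {jumbo / std - 1:.1%}")
        # 1500: 94.9%  9000: 99.1%  gain: 4.4% -- single digits, as claimed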

    If you remember dial-up, which I am sure you do, think of the differences between xmodem, ymodem, jmodem, and zmodem -- they all had different packet sizes, and squeezed out more CPS with various methods of CRC or error control. Jmodem sent larger blocks and could give a 2400 bps connection something like 266 cps, compared to zmodem's 232 or so. Xmodem was pretty crummy in comparison, and ymodem was a generic "works on most systems and is better than xmodem" option; ymodem-g gave you a few extra cps, but nothing like jmodem.

    But if you had a crummy connection, a cheap UART like an 8250, or some really chintzy modem/serial cable -- basically, if you looked at it too hard -- jmodem would have issues and frustrate you.
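    (The same tradeoff is easy to model. This toy Python sketch -- my own construction, not jmodem's actual protocol math -- assumes a 240 cps line, a fixed per-block header, an ACK turnaround delay, and full-block retransmission on any error:)

        def effective_cps(block, line_cps=240.0, overhead=5,
                          ack_delay_s=0.2, byte_error_rate=0.0):
            p_ok = (1 - byte_error_rate) ** (block + overhead)
            time_per_try = (block + overhead) / line_cps + ack_delay_s
            # Expected tries per good block is 1 / p_ok (geometric distribution).
            return block * p_ok / time_per_try

        for block in (128, 1024, 4096):   # xmodem-ish through jmodem-ish sizes
            print(block, round(effective_cps(block)),
                  round(effective_cps(block, byte_error_rate=1e-4)))
        # Clean line: bigger blocks win (170 -> 228 -> 237 cps).
        # Noisy line: 4096-byte blocks fall behind 1024 (168 -> 206 -> 157 cps).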

    Not too different from jumbo frames...