Though only an experiment, it shows how quickly data transfer technology can advance

Nov 23, 2011 14:41 GMT

Supercomputers are all well and good, but actually getting the data that needs analyzing onto them can be tricky, so a team from Indiana University set out to make that easier.

Several universities got to test the technology during the SC11 conference, specifically through the SCinet Research Sandbox (SRS), a network that operated at 100 Gbps.

The conference took place in Seattle, Washington, and the SRS program allowed this experimental setup, put together by SCinet, ESnet, and Internet2, to be assessed.

The link used was 2,300 miles (3,701 kilometers) long and had complete compute clusters and file systems (Lustre, in particular) at both ends, with Indianapolis as the other location.

The peak throughput was 96 Gbps for network benchmarks, 6.5 Gbps using IOR (a standard file system benchmark), and 5.2 Gbps with a mix of eight real-world application workflows.
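To put those rates in perspective, here is a small back-of-the-envelope sketch (in Python, not part of the demo itself) that converts the reported figures into the time an ideal, overhead-free transfer of one terabyte would take; the 1 TB payload is purely an illustrative assumption.

```python
# Back-of-the-envelope: how long moving 1 TB would take at the reported rates.
# The rates come from the article; the 1 TB payload is an arbitrary example.

DATASET_BITS = 1e12 * 8  # 1 terabyte expressed in bits

rates_gbps = {
    "network benchmark peak": 96.0,
    "IOR file system benchmark": 6.5,
    "real-world application mix": 5.2,
}

for name, gbps in rates_gbps.items():
    seconds = DATASET_BITS / (gbps * 1e9)  # ideal transfer time, no protocol overhead
    print(f"{name}: {gbps} Gbps -> ~{seconds / 60:.1f} minutes per terabyte")
```

At the 96 Gbps peak, a terabyte moves in under a minute and a half; even at the 5.2 Gbps application rate it takes roughly 26 minutes, which hints at why such links matter for data-intensive research.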

“100 Gigabit per second networking combined with the capabilities of the Lustre file system could enable dramatic changes in data-intensive computing,” said Stephen Simms, manager of the High Performance File Systems group at Indiana University.

“Lustre's ability to support distributed applications, and the production availability of 100 gigabit networks connecting research universities in the US, will provide much needed and exciting new avenues to manage, analyze, and wrest knowledge from the digital data now being so rapidly produced.”

Consumers don't have much reason to pay attention to all this, since it will be a long time before common PCs even need such connections.

Still, the technology opens new doors for data centers, supercomputers and any setting where large chunks of data need to be sent from one point to another, or where many incoming and outgoing connections must be handled at once.

Another milestone in this field was announced by Rohm not long ago: a special chip that has the potential to wirelessly transmit at 30 Gbps (so far it 'just' manages 1.5 Gbps).