Bandwidth Challenge Will Push the Limits of Technology at SC2003

PHOENIX - A new winner of the coveted High Performance Bandwidth Challenge award will be crowned this year at SC2003, the annual conference for high-performance computing and networking, as three-time winner Lawrence Berkeley National Laboratory retires from the field.

In the Bandwidth Challenge, contestants from science and engineering research communities around the world will use the conference's state-of-the-art network infrastructure to demonstrate the latest technologies and applications, many of which are so demanding that no ordinary computer network could sustain them.

At SC2003, to be held in Phoenix, Arizona, November 15-21, 2003, the eight contending teams will be challenged to "significantly stress" SCinet, the conference's temporary but powerful on-site network infrastructure, while moving meaningful data across the multiple research networks that connect to it. The primary standard of performance will be verifiable network throughput, measured from the contestant's equipment through the SCinet switches and routers to the external connections. SCinet will provide three OC-192c wide-area interconnects, at 9.6 gigabits per second apiece, to the Phoenix Civic Plaza Convention Center.
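
As a rough back-of-the-envelope check (assuming round decimal units, which the announcement does not spell out), those three links add up to nearly 29 gigabits per second of aggregate wide-area capacity -- enough, in principle, to move a terabyte of data in under five minutes:

```python
# Back-of-the-envelope capacity of SCinet's three wide-area links.
links = 3
gbps_per_link = 9.6                               # OC-192c rate quoted above
aggregate_gbps = links * gbps_per_link            # 28.8 Gbit/s in total

terabyte_bits = 8e12                              # 1 TB = 8 x 10^12 bits (decimal)
seconds = terabyte_bits / (aggregate_gbps * 1e9)  # time to move 1 TB at full rate
print(f"{aggregate_gbps:.1f} Gbit/s aggregate; 1 TB in ~{seconds / 60:.1f} minutes")
# -> 28.8 Gbit/s aggregate; 1 TB in ~4.6 minutes
```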

Continuing a tradition started at SC2000, Qwest Communications is awarding monetary prizes for applications that make the most effective or "courageous" use of SCinet resources.

"The Bandwidth Challenge is important for many reasons," Walsh said. "It publicizes the state of the art to our colleagues in the high-performance networking and computing community. Hardware and software vendors try to prove that their products are at the forefront of technology. And perhaps most significant of all, it serves as an inspiration to the scientific community -- researchers see applications doing things that were impossible a year or two ago, and think to themselves, 'Hmmm ... I wonder what I could do with that much bandwidth?'"

The Bandwidth Challenge entrants for SC2003 include:

Bandwidth Lust: Distributed Particle Physics Analysis using Ultra High Speed TCP on the GRiD. Stanford University will show several components of a Grid-enabled distributed analysis environment being developed to search for the Higgs particles thought to be responsible for mass in the universe, and for other signs of new physics processes to be explored using particle colliders at CERN, SLAC and FNAL. It will use data from simulated proton-proton collisions at 14 teraelectronvolts (TeV) as they would appear in the Compact Muon Solenoid (CMS) experiment, now under construction at CERN's Large Hadron Collider.

Concurrent Multi-Location Write to Single File. An industry-academic-government team (YottaYotta, SGI, Navy Research Labs, University of Tennessee, StarLight, CANARIE, Netera Alliance, WestGrid) will demonstrate collaborative data processing over very large distances using a distributed file system. Servers across North America (StarLight, Phoenix and Edmonton) will read and write concurrently to a single file. Data transmission between sites over TCP/IP will leverage distributed cache coherence, so that all sites read and write a single, globally consistent data image while maintaining near-local I/O rates. Aggregate throughput to disk should exceed 10 gigabits per second.
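
The central idea, write-invalidate cache coherence between sites, can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not YottaYotta's actual protocol; the class names and the two-site setup are invented for the example:

```python
# Toy write-invalidate coherence: each site caches blocks of a shared file,
# and a write at one site invalidates cached copies everywhere else, so all
# sites see a single consistent data image. Illustrative only.
class Directory:
    """Holds the authoritative copy of every block and tracks the sites."""
    def __init__(self):
        self.blocks = {}
        self.sites = []

    def invalidate(self, block_id, except_site):
        for site in self.sites:
            if site is not except_site:
                site.cache.pop(block_id, None)     # drop stale cached copies


class Site:
    """One location (e.g. Phoenix or Edmonton) with a local block cache."""
    def __init__(self, name, directory):
        self.name = name
        self.directory = directory
        self.cache = {}                            # block_id -> cached bytes

    def read(self, block_id):
        if block_id not in self.cache:             # miss: fetch the home copy
            self.cache[block_id] = self.directory.blocks.get(block_id, b"")
        return self.cache[block_id]

    def write(self, block_id, data):
        self.directory.blocks[block_id] = data     # update the home copy
        self.cache[block_id] = data
        self.directory.invalidate(block_id, except_site=self)


if __name__ == "__main__":
    home = Directory()
    phoenix, edmonton = Site("Phoenix", home), Site("Edmonton", home)
    home.sites += [phoenix, edmonton]

    phoenix.write(0, b"version 1")
    print(edmonton.read(0))            # b'version 1' (now cached in Edmonton)
    phoenix.write(0, b"version 2")     # invalidates Edmonton's cached copy
    print(edmonton.read(0))            # b'version 2', never a stale block
```

A production system must also cope with concurrent writers, failures and wide-area latency, which is where the real engineering challenge of sustaining near-local I/O rates lies.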

Distributed Lustre File System Demonstration. Using the Lustre File System, a team from NCSA, SDSC and several industry partners will demonstrate both clustered and remote file system access: very high-bandwidth local links (80 gigabits per second within the ASCI booth) combined with remote access (10 Gbps) across the roughly 2,000 miles between Phoenix and NCSA in Illinois. Compute nodes in both locations will access servers in both locations, reading and writing concurrently to a single file and to multiple files spread across the servers. Aggregate performance to disk should reach tens of gigabits per second.

Grid Technology Research Center Gfarm File System. The National Institute of Advanced Industrial Science and Technology of Japan (AIST) will replicate terabyte-scale experimental data between the United States and Japan over several OC-48 links. Four clusters in Japan and one in the U.S. constitute a Grid virtual file system, federating the local file systems on each cluster node. The file transfer rate between the U.S. and Japan -- a distance of about 10,000 km (6,000 miles) -- is expected to be at least 3 gigabits per second.
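
A short calculation puts that target in perspective (assuming signals travel at roughly two-thirds the speed of light in fiber, an approximation not stated in the entry): moving a terabyte at 3 gigabits per second takes about three-quarters of an hour, and the long, fast path means tens of megabytes of data are "in flight" at any moment -- far more than a default TCP window accommodates:

```python
# Transfer time and bandwidth-delay product for the U.S.-Japan path.
distance_km = 10_000
fiber_speed_km_s = 200_000                   # ~2/3 of the speed of light (rule of thumb)
rtt_s = 2 * distance_km / fiber_speed_km_s   # ~0.1 s round-trip time

rate_bps = 3e9                        # the 3 Gbit/s target quoted above
terabyte_s = 8e12 / rate_bps          # ~2,667 s, roughly 44 minutes per terabyte
bdp_bytes = rate_bps * rtt_s / 8      # bytes that must be "in flight" at once

print(f"RTT ~{rtt_s * 1000:.0f} ms; 1 TB in ~{terabyte_s / 60:.0f} min; "
      f"~{bdp_bytes / 1e6:.0f} MB in flight")
# -> RTT ~100 ms; 1 TB in ~44 min; ~38 MB in flight
```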

High Performance Grid-Enabled Data Movement with Striped GridFTP. A team based at the San Diego Supercomputer Center, with participation from Argonne National Laboratory, will demonstrate striped GridFTP from an application, transferring several files over 1 TB in size between a 40-node grid site at SDSC and a 40-node grid site at SC2003. By harnessing multiple nodes and multiple network interfaces, striped GridFTP moves data efficiently in parallel. The GridFTP file transfer is integrated into VISTA, a rendering toolkit from SDSC, for data visualization. Applications and datasets include code and files from the National Virtual Observatory project and the Southern California Earthquake Center.
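
The striping idea itself is straightforward: a large file is cut into blocks that are dealt out round-robin to several parallel movers, each with its own network path. The sketch below illustrates the concept only -- it writes stripes to local files rather than using the Globus GridFTP API, and the file name, block size and stripe count are made up for the example:

```python
# Conceptual striping: blocks of one large file are distributed round-robin
# across NUM_STRIPES parallel workers. In a real striped GridFTP transfer each
# stripe would be sent by a separate data node over its own connection.
import concurrent.futures

BLOCK_SIZE = 4 * 1024 * 1024    # 4 MiB blocks (illustrative)
NUM_STRIPES = 4                 # number of parallel stripes (illustrative)

def send_stripe(stripe_id: int, src_path: str, dst_path: str) -> int:
    """Copy every NUM_STRIPES-th block of src to this stripe's destination."""
    sent = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        block_index = 0
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            if block_index % NUM_STRIPES == stripe_id:
                dst.write(block)        # stands in for sending over the network
                sent += len(block)
            block_index += 1
    return sent

if __name__ == "__main__":
    src = "big_input.dat"               # hypothetical multi-gigabyte source file
    with concurrent.futures.ThreadPoolExecutor(NUM_STRIPES) as pool:
        futures = [pool.submit(send_stripe, i, src, f"stripe_{i}.part")
                   for i in range(NUM_STRIPES)]
        total = sum(f.result() for f in futures)
    print(f"moved {total} bytes across {NUM_STRIPES} stripes")
```

Because each stripe is handled by its own node and network interface in the actual demonstration, the aggregate rate can scale with the number of stripes rather than being limited by any single host.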

High Performance Grid-Enabled Data Movement with GPFS. The same SDSC-based team will demonstrate the use of the IBM General Parallel File System (GPFS) to transfer several files over 1 TB in size between a 40-node grid site at SDSC and a 40-node grid site at SC2003. Again, the file transfer is integrated into the VISTA rendering toolkit for data visualization, and applications and datasets include code and files from the National Virtual Observatory project and the Southern California Earthquake Center.

Multi-Continental Telescience. This multidisciplinary entry will showcase technology and partnerships encompassing telescience, microscopy, biomedical informatics, optical networking, next-generation protocols and collaborative research. The demo will include globally distributed resources and users in the U.S., Argentina, Japan, Korea, the Netherlands, Sweden and Taiwan. High network bandwidth and IPv6 enhance the control of multiple high data-rate instruments of different types, enable interactive multi-scale visualization of data pulled from the BIRN Grid and facilitate large-scale grid-enabled computation. High-performance visualization, tele-instrumentation and infrastructure for collaborative data-sharing all converge to solve multi-scale challenges in biomedical imaging.

Project DataSpace. A team associated with the National Center for Data Mining at the University of Illinois at Chicago will transport a terabyte of geoscience data between Amsterdam and Phoenix, and will demonstrate high-performance Web services for a distributed application involving astronomical data spread across Chicago, Amsterdam and Phoenix. The demo involves the Web service-based DataSpace Transfer Protocol (DSTP) and the SABUL and UDT application-layer libraries for high-performance network transport, which were developed as part of Project DataSpace.
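
SABUL and UDT reach high throughput over long, fast paths by carrying bulk data in sequence-numbered UDP datagrams sent at a controlled rate, with loss reporting and retransmission handled on a separate channel. The toy sender below shows only the framing and pacing; it is not the SABUL or UDT API, and the chunk size, rate and example file are illustrative:

```python
# Toy rate-paced UDP sender in the spirit of SABUL/UDT (illustration only).
import socket
import struct
import time

CHUNK = 1400                      # payload bytes per datagram, below a typical MTU
RATE_BYTES_PER_SEC = 10_000_000   # illustrative pacing target (~80 Mbit/s)

def send_file(path: str, host: str, port: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = CHUNK / RATE_BYTES_PER_SEC   # inter-packet gap that sets the rate
    seq = 0
    with open(path, "rb") as f:
        while True:
            payload = f.read(CHUNK)
            if not payload:
                break
            # A 4-byte sequence number lets the receiver detect gaps and ask for
            # retransmission over a separate control channel (not shown here).
            sock.sendto(struct.pack("!I", seq) + payload, (host, port))
            seq += 1
            time.sleep(interval)
    sock.close()

# Hypothetical usage: send_file("geodata.bin", "198.51.100.7", 9000)
```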

For more information on the contest entries, go to http://scinet.supercomp.org/2003/bwc/entries.html.

The winners will be announced at SC2003 on Thursday, November 20. Judging criteria for the Bandwidth Challenge go beyond raw speed. Judges also will base their awards on such factors as the quality of first-time demonstrations, improvement over previously demonstrated methods, the efficiency and effectiveness of multi-continent implementations, and applicability to real-world problems. Technological criteria also involve the measurement of sustained TCP utilization, innovative features of "custom" TCP implementations, and the quality of IPv6 implementations.

An Outstanding Tradition
The High Performance Bandwidth Challenge began at SC2000 in Dallas and has grown in complexity and ambition every year. Ten outstanding entries competed in the third High Performance Bandwidth Challenge at SC2002 in Baltimore, Maryland.

For the third year in a row, a team from Lawrence Berkeley National Laboratory won the competition for "Highest Performing Application" with a wide-area distributed simulation that demonstrated a peak data transfer rate of 16.8 gigabits per second -- more than five times higher than the team’s record-setting performance at the SC2001 conference and roughly 300,000 times faster than a home Internet user would get from a 56K dial-up connection.

Project DataSpace, which was demonstrated by a team from the National Center for Data Mining at the University of Illinois at Chicago, CANARIE, Northwestern University, SARA (Stichting Academisch Rekencentrum), and StarLight, won the award for "Best Use of Emerging Network Infrastructure." Will the Project DataSpace team win two in a row?

"Data Reservoir," an application demonstrated by Fujitsu Laboratories and the University of Tokyo, won the award for the "Most Efficient Use of Available Bandwidth," with a peak of 585 megabits per second. The trans-Pacific links were relatively slow, but the judges appreciated that the links were used to their fullest capacity.

Now in its 16th year, the annual SC conference is sponsored by the Institute of Electrical and Electronics Engineers Computer Society and the Association for Computing Machinery's Special Interest Group on Computer Architecture. See http://www.sc-conference.org/sc2003/ for more information.