LinuxPR: World's Fastest Linux Supercomputer to Bolster National Security Project
Jul 16, 2002, 01:00
SALT LAKE CITY, July 15, 2002 - Lawrence Livermore National
Laboratory (LLNL) selected Linux NetworX to design, integrate and
deliver what will be the largest and most powerful Linux
supercomputer by Fall 2002. Multiple programs at LLNL will use the
Linux NetworX Evolocity clustered supercomputer to support the
Laboratory's national security mission. When delivered, the
Intel-based cluster is expected to be one of the five fastest
supercomputers in the world.
"A machine of this size is very complex to integrate and manage.
The partnership between Linux NetworX and LLNL is essential to the
success of this endeavor," said Dr. Mark Seager, LLNL's Assistant
Department Head for Terascale Systems. "This Linux NetworX system will
significantly expand the computing resources available to
Livermore's researchers. We are very excited about the unclassified
scientific simulations that will be accomplished on this
world-class Linux Cluster."
The Linux NetworX Evolocity cluster, when delivered to LLNL, will
be the fastest Intel-based or Linux cluster machine ever built: it
will harness 1,920 Intel Xeon processors at 2.4 GHz with a
theoretical peak of 9.2 teraFLOPS, or 9.2 trillion calculations per
second.
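The quoted peak figure follows from simple arithmetic. As a rough illustration (the 2-FLOPs-per-cycle rate is an assumption about the SSE2 double-precision throughput of Xeon processors of that era, not a number stated in this release):

```python
# Sanity-check the quoted 9.2 teraFLOPS theoretical peak.
processors = 1920        # total Intel Xeon processors in the cluster
clock_hz = 2.4e9         # 2.4 GHz per processor
flops_per_cycle = 2      # assumed SSE2 double-precision rate per Xeon

peak_flops = processors * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.1f} teraFLOPS")  # ~9.2 teraFLOPS
```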
"This Intel-based Linux NetworX system is historic in that it
represents a viable method of using standards-based technologies to
create some of the fastest supercomputers in the world," said Lisa
Hambrick, director of enterprise processor marketing for Intel.
"Linux NetworX and Intel are expanding the possibilities of
supercomputing into a world where the fastest machines are powered
by cost-effective and very powerful Intel Xeon processors."
Several factors enabled Linux NetworX to win this competitive
procurement, including:
- Clustering expertise, http://www.lnxi.com/company/index.html,
gained from years of installing and supporting some of the largest
clusters in the world. Linux NetworX designed and delivered the
world's first commercial Linux cluster in 1997.
- LinuxBIOS, http://www.linuxnetworx.com/products/linuxbios.php,
an open BIOS alternative that can boot nodes in seconds, is
remotely manageable and is designed specifically for clusters.
- ICE Box™, http://www.lnxi.com/products/icebox.php,
a Linux NetworX appliance designed specifically for management of
Linux clusters, providing system monitoring and control
functionality such as power control and serial access.
- Sub-1U Evolocity™ II, http://www.linuxnetworx.com/products/e2.php,
a double-density node design with an innovative architecture.
*Product to be unveiled Fall 2002.
- Co-development of SLURM (Simple Linux Utility for Resource
Management) with LLNL, http://www.llnl.gov/linux/slurm/slurm.html.
SLURM is an open source resource management system developed for
Linux clusters providing scalability, portability, interconnect
independence, fault-tolerance and security.
"This Linux NetworX system is representative of the next stage
in the evolution of supercomputing," said Stephen Hill, Linux
NetworX President and CEO. "Clustering allows organizations to
achieve results quicker, with far greater flexibility at a lower
cost-of-ownership than is possible with competing technologies.
This is why Linux clusters are rapidly becoming the standard in
high performance computing."
System Fact Sheet
For a diagram of the system and more specific detail on the
supercomputer Linux NetworX is building for LLNL, visit http://www.linuxnetworx.com/news/llnl_info.php.
Each node in the cluster contains a Quadrics QsNet ELAN3
interconnect, 4 GB of DDR SDRAM memory, and 120 GB of local disk
space. ICE Box will also be used to help LLNL manage and maintain
the clusters. Below is a breakdown of the Linux cluster system
specs.
- 9.2 teraFLOPS Linux cluster of dual Intel® Xeon processors at 2.4 GHz
- 3.8 TB of aggregate memory
- 115.2 TB of aggregate local disk space
- 962 total nodes plus a separate hot-spare cluster and development
cluster
- 1,920 Intel Xeon processors at 2.4 GHz
- Sub 1U Evolocity node for 924 compute nodes
- LinuxBIOS on all nodes
- ICE Box 3.0
- Blue Arc Si7500 Storage Systems with a combined storage
capacity of 115 terabytes
- Lustre open-source cluster-wide file system, supplied by Cluster
File Systems, Inc.
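The aggregate figures in the fact sheet can be cross-checked against the per-node numbers given above. An illustrative sketch (note the release's 115.2 TB disk total corresponds exactly to 960 nodes at 120 GB each; with all 962 nodes the total would be about 115.4 TB, so the published figure appears to be rounded):

```python
# Cross-check the fact sheet's aggregate memory and disk figures
# against the per-node numbers (4 GB RAM, 120 GB disk per node).
total_nodes = 962
ram_gb_per_node = 4
disk_gb_per_node = 120

aggregate_ram_tb = total_nodes * ram_gb_per_node / 1000
aggregate_disk_tb = total_nodes * disk_gb_per_node / 1000
print(f"memory: {aggregate_ram_tb:.1f} TB")   # ~3.8 TB, matching the sheet
print(f"disk:   {aggregate_disk_tb:.1f} TB")  # ~115.4 TB vs. the quoted 115.2 TB
```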