[2] The OLCF’s flagship supercomputer, the IBM AC922 Summit, is supported by advanced data management and analysis tools.
The center hosted the Cray XK7 Titan system, one of the most powerful scientific tools of its time, from 2012 through its retirement in August 2019.
[3] On December 9, 1991, the High-Performance Computing Act (HPCA) of 1991, authored by Senator Al Gore, was signed into law.
Eagle also had eight Winterhawk-II “wide” nodes, each with two 375 MHz Power3-II processors and 2 GB of memory, for use as filesystem servers and other infrastructure tasks.
Falcon was a 64-node Compaq AlphaServer SC operated by the CCS and acquired as part of an early-evaluation project.
It had four 667 MHz Alpha EV67 processors with 2 GB of memory per node and 2 TB of attached Fibre Channel disk, resulting in an estimated computational power of 342 gigaflops.
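The quoted figure is consistent with a simple peak-performance estimate. The calculation below is a sketch, not from the source: it assumes two floating-point operations per clock cycle per EV67 processor, which the source does not state.

```python
# Peak-flops estimate for Falcon (64-node Compaq AlphaServer SC)
nodes = 64
cpus_per_node = 4
clock_hz = 667e6           # 667 MHz Alpha EV67
flops_per_cycle = 2        # assumption: one FP add + one FP multiply per cycle

peak_gflops = nodes * cpus_per_node * clock_hz * flops_per_cycle / 1e9
print(round(peak_gflops))  # 342, matching the estimate quoted in the text
```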
By the time of its ultimate transformation into Titan in 2012,[23] Jaguar contained nearly 300,000 processing cores and had a theoretical performance peak of 3.3 petaflops.
Hawk was installed in 2006 and was used as the Center’s primary visualization cluster until May 2008 when it was replaced by a 512-core system named Lens.
[29] Eos provided a space for tool and application porting, small-scale jobs to prepare for capability runs on Titan, and software generation, verification, and optimization.
[30] Titan was a hybrid-architecture Cray XK7 system with a theoretical peak performance exceeding 27,000 trillion calculations per second (27 petaflops).
It contained both advanced 16-core AMD Opteron CPUs and NVIDIA Kepler graphics processing units (GPUs).
[31] Titan featured 18,688 compute nodes, a total system memory of 710 TB, and Cray’s high-performance Gemini network.
Its 299,008 CPU cores guided simulations while the accompanying GPUs handled hundreds of calculations simultaneously. The system enabled decreased time to solution, more complex models, and greater realism in simulations.
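Titan's node and core counts above are mutually consistent, as a quick cross-check shows (the 16 cores per node follows from the 16-core Opteron CPUs named earlier):

```python
# Cross-check Titan's stated figures: 18,688 compute nodes,
# one 16-core AMD Opteron CPU per node
nodes = 18688
cores_per_node = 16

total_cores = nodes * cores_per_node
print(total_cores)  # 299008, matching the stated 299,008 CPU cores
```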
An extremely high-performance filesystem, Spider serves more than 20,000 clients, provides 32 PB of disk space, and can move data at more than 1 TB/s.
Spider comprises two filesystems, Atlas1 and Atlas2, to provide high availability and to balance load across multiple metadata servers for increased performance.
[37] EVEREST (Exploratory Visualization Environment for Research in Science and Technology) is a large-scale venue for data exploration and analysis.
These 14 nodes have NVIDIA QuadroFX 3000G graphics cards connected to the projectors, providing very high-throughput visualization capability.
It houses a 12-panel tiled LCD display, test cluster nodes, interaction devices, and video equipment.
Rhea provides a conduit for large-scale scientific discovery via pre- and post-processing of simulation data generated on the Titan supercomputer.
These nodes each have two 14-core 2.30 GHz Intel Xeon processors with HT Technology, 1 TB of main memory, and two NVIDIA K80 GPUs.
This system is available to support computer science research projects aimed at exploring the ARM architecture.
The Wombat cluster has 16 compute nodes, four of which have two AMD GPU accelerators attached (eight GPUs total in the system).
[40] Summit is also the first computer to reach exascale performance, achieving a peak throughput of 1.88 exaops through a mixture of single- and half-precision floating-point operations.
[43] Originally scheduled for delivery in 2021 with user access becoming available the following year, Frontier is ORNL’s first sustained exascale system, meaning it is capable of performing one quintillion (one billion billion) operations per second.