The El Capitan Center of Excellence serves as a conduit between the national labs and commercial vendors, ensuring that the exascale system will meet users’ needs.
UMap exploits the increasingly prominent role of complex memories in today’s servers, offering new capabilities for directly accessing large memory-mapped datasets.
The system will enable researchers from the National Nuclear Security Administration weapons design laboratories to create models and run simulations previously considered challenging, time-intensive, or impossible, in support of the maintenance and modernization of the United States’ nuclear weapons stockpile.
Can novel mathematical algorithms help scientific simulations leverage hardware designed for machine learning? A team from LLNL’s Center for Applied Scientific Computing aimed to find out.
A record crowd of more than 14,000 HPC experts, researchers, vendors, and enthusiasts descended on the Mile High City for the 2023 International Conference for High Performance Computing, Networking, Storage and Analysis, better known as SC23.
Over several years, teams have prepared the infrastructure for El Capitan, designing and building the computing facility’s upgrades for power and cooling, installing storage and compute components, and connecting everything together. Once all the pieces are in place, El Cap’s life as a world-class supercomputer begins.
Quandary is an open-source C++ package for optimal control of quantum systems on classical high performance computing platforms.
The Center for Efficient Exascale Discretizations has developed innovative mathematical algorithms for the DOE’s next generation of supercomputers.
Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.
A team from LLNL and seven other DOE labs is a finalist for the new ACM Gordon Bell Prize for Climate Modeling for running an unprecedented high-resolution global atmosphere model on the world’s first exascale supercomputer.
Siting a supercomputer requires close coordination of hardware, software, applications, and Livermore Computing facilities.
Flux, next-generation resource and job management software, steps up to support emerging use cases.
The Tri-Lab Operating System Stack (TOSS) ensures other national labs’ supercomputing needs are met.
Livermore Computing is making significant progress toward siting the NNSA’s first exascale supercomputer.
Innovative hardware provides near-node local storage alongside large-capacity storage.
The report lays out a comprehensive vision for the DOE Office of Science and NNSA to expand their work in scientific use of AI by building on existing strengths in world-leading high performance computing systems and data infrastructure.
LLNL CTO Bronis de Supinski talks about how the Lab deploys novel architecture AI machines and provides an update on El Capitan.
As CTO of Livermore Computing, de Supinski is responsible for formulating, overseeing, and implementing LLNL’s large-scale computing strategy, a role that requires managing multiple collaborations with the HPC industry and academia.
Livermore CTO Bronis de Supinski joins the Let's Talk Exascale podcast to discuss the details of LLNL's upcoming exascale supercomputer.
The addition of the spatial data flow accelerator into LLNL’s Livermore Computing Center is part of an effort to upgrade the Lab’s cognitive simulation (CogSim) program.
The Lab was already using Elastic components to gather data from its HPC clusters and then investigated whether Elasticsearch and Kibana could be applied to all of its scanning and logging activities.
LLNL participates in the ISC High Performance Conference (ISC23) on May 21–25.
An LLNL Distinguished Member of Technical Staff, Gokhale is considered an expert in her field and continues to enjoy the fast pace of innovation and change in computing.
The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas, where a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered on HPC.
The award recognizes progress in the team’s machine learning-based approach to modeling inertial confinement fusion (ICF) experiments, which has led to faster and more accurate models of ICF implosions.