The Tools Working Group delivers debugging, correctness, and performance analysis solutions at an unprecedented scale.
Topic: Performance, Portability, and Productivity
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
The Center for Efficient Exascale Discretizations has developed innovative mathematical algorithms for the DOE’s next generation of supercomputers.
With this year’s results, the Lab has now collected a total of 179 R&D 100 awards since 1978. The awards will be showcased at the 61st R&D 100 black-tie awards gala on Nov. 16 in San Diego.
LLNL's zfp and Variorum software projects are winners. LLNL is a co-developing organization on the winning CANDLE project.
Collecting variants in low-level hardware features across multiple GPU and CPU architectures.
Siting a supercomputer requires close coordination of hardware, software, applications, and Livermore Computing facilities.
Variorum provides robust, portable interfaces that allow us to measure and optimize computation at the physical level: temperature, cycles, energy, and power. With that foundation, we can get the best possible use of our world-class computing resources.
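As a rough illustration of the kind of interface described above, the following C++ sketch queries node power and thermal readings through Variorum's C API. It assumes the header variorum.h and the functions variorum_print_power() and variorum_print_thermals(), each returning 0 on success; exact names, signatures, and link flags should be confirmed against the Variorum documentation.

```cpp
// Minimal sketch: read node power and thermal data via Variorum's C API.
// Assumed API surface (verify against the Variorum docs): variorum_print_power()
// and variorum_print_thermals(), each returning 0 on success.
#include <cstdio>

extern "C" {
#include <variorum.h>
}

int main()
{
    // Report current power draw for the sockets/node Variorum detects.
    if (variorum_print_power() != 0) {
        std::fprintf(stderr, "variorum_print_power failed (unsupported platform?)\n");
        return 1;
    }

    // Report temperature sensors exposed by the underlying hardware interfaces.
    if (variorum_print_thermals() != 0) {
        std::fprintf(stderr, "variorum_print_thermals failed\n");
        return 1;
    }
    return 0;
}
```

Building this would typically mean linking against the Variorum library (e.g., -lvariorum) plus its documented dependencies.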
Combining specialized software tools with heterogeneous HPC hardware requires a deliberate strategy for optimizing workflow performance.
The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas as a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered on HPC.
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
LLNL is participating in the 34th annual Supercomputing Conference (SC22), which will be held both virtually and in Dallas on November 13–18, 2022.
The Advanced Technology Development and Mitigation program within the Exascale Computing Project shows that the best way to support the mission is through open collaboration and a sustainable software infrastructure.
LLNL participates in the ISC High Performance Conference (ISC22) from May 29 through June 2.
LLNL’s Python 3–based ATS tool provides scientific code teams with automated regression testing across HPC architectures.
The Exascale Computing Project (ECP) 2022 Community Birds-of-a-Feather Days will take place May 10–12 via Zoom. The event provides an opportunity for the HPC community to engage with ECP teams to discuss our latest development efforts.
The Multiphysics on Advanced Platforms Project (MAPP) incorporates multiple software packages into one integrated code so that multiphysics simulation codes can perform at scale on present and future supercomputers.
Highlights include power grid challenges, performance analysis, complex boundary conditions, and a novel multiscale modeling approach.
A Livermore-developed programming approach helps software run on different platforms without major disruption to the source code.
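The blurb above does not name the approach, but it is consistent with LLNL's RAJA performance portability library; treating that as an assumption, the sketch below shows the core idea: the loop body is written once, and only the execution policy changes when retargeting from sequential CPU execution to OpenMP or GPU back-ends.

```cpp
// Minimal sketch of a portability-layer loop, assuming the approach referred to
// is LLNL's RAJA abstraction. The loop body is written once; switching between
// sequential, OpenMP, or GPU execution changes only the execution policy type.
#include <RAJA/RAJA.hpp>
#include <vector>
#include <cstdio>

int main()
{
    const int N = 1000;
    std::vector<double> x(N, 1.0), y(N, 2.0);
    double* xp = x.data();
    double* yp = y.data();

    // Sequential policy here; a policy such as RAJA::omp_parallel_for_exec or a
    // CUDA policy could be substituted without touching the loop body.
    using policy = RAJA::seq_exec;

    RAJA::forall<policy>(RAJA::RangeSegment(0, N), [=](int i) {
        yp[i] += 2.0 * xp[i];  // daxpy-style update
    });

    std::printf("y[0] = %f\n", yp[0]);
    return 0;
}
```

Swapping the policy alias is the only source change needed to retarget the loop, which is the "no major disruption to the source code" property the blurb describes.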
Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.
LLNL participates in the digital ISC High Performance Conference (ISC21) from June 24 through July 2.
Computing relies on engineers like Stephanie Brink to keep the legacy codes running smoothly. “You’re only as fast as your slowest processor or your slowest function,” says Brink, who works in CASC. By analyzing a legacy code’s performance, Brink and her team can reduce the amount of time it takes to run and allow for more critical science to be accomplished.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
Our researchers will be well represented at the virtual SIAM Conference on Computational Science and Engineering (CSE21) on March 1–5. SIAM, the Society for Industrial and Applied Mathematics, has an international community of more than 14,500 individual members.
Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.