Highlights include the response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
Topic: Performance, Portability, and Productivity
Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities, he switched from research professor to professional researcher.
FGPU provides code examples for porting Fortran codes to IBM OpenPOWER platforms such as LLNL's Sierra supercomputer.
Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.
LLNL's Advanced Simulation and Computing program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
A new software model helps move million-line codes to various hardware architectures by automating data movement in unique ways.
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, accelerate their development.
LLNL researchers are finding some factors are more important in determining HPC application performance than traditionally thought.
Performance analysis of parallel scientific codes is difficult. The HAC model enables direct comparison of performance data across domains using the visualization and analysis tools already available in those domains.
This tool automatically diagnoses performance and correctness faults in MPI applications. It identifies abnormal MPI tasks and code regions and finds the least-progressed task.
These techniques emulate the behavior of anticipated future architectures on current machines to improve performance modeling and evaluation.
Olga Pearce studies how to detect and correct load imbalance in high performance computing applications.
Kathryn Mohror develops tools that give researchers the information they need to tune their programs and maximize results. After all, she says, “It’s all about getting the answers more quickly.”