Researchers develop innovative data representations and algorithms to provide faster, more efficient ways to preserve information encoded in data.
Topic: HPC Systems and Software
Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities, he switched from research professor to professional researcher.
Highlights include perspectives on machine learning and artificial intelligence in science, data-driven models, autonomous vehicle operations, and the OpenMP 5.0 standard.
FGPU provides code examples for porting Fortran codes to run on IBM OpenPOWER platforms such as LLNL's Sierra supercomputer.
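FGPU's examples are written in Fortran; as an illustrative analogue only, the sketch below shows the same directive-based GPU offload pattern in C++ with OpenMP target directives. The array names and sizes here are hypothetical, not taken from FGPU.

```cpp
#include <vector>
#include <cstdio>

int main() {
  const int n = 1 << 20;
  std::vector<double> x(n, 1.0), y(n, 2.0);
  double* px = x.data();
  double* py = y.data();
  const double a = 3.0;

  // Offload the loop to an attached GPU; the map clauses move the arrays
  // between host and device memory around the kernel.
  #pragma omp target teams distribute parallel for \
      map(to: px[0:n]) map(tofrom: py[0:n])
  for (int i = 0; i < n; ++i) {
    py[i] += a * px[i];
  }

  std::printf("y[0] = %f\n", py[0]);
  return 0;
}
```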
Umpire is a resource management library that allows the discovery, provision, and management of memory on next-generation architectures.
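A minimal sketch of how an application might call Umpire's documented allocator interface; the allocator name and buffer size are illustrative.

```cpp
#include "umpire/ResourceManager.hpp"
#include "umpire/Allocator.hpp"
#include <cstddef>

int main() {
  // Look up a named memory resource through the singleton ResourceManager.
  auto& rm = umpire::ResourceManager::getInstance();
  umpire::Allocator host_alloc = rm.getAllocator("HOST");

  // Allocate and release memory through the same interface an application
  // would use for device, unified-memory, or pooled allocators on GPU systems.
  constexpr std::size_t count = 1024;
  auto* data = static_cast<double*>(host_alloc.allocate(count * sizeof(double)));

  for (std::size_t i = 0; i < count; ++i) {
    data[i] = static_cast<double>(i);
  }

  host_alloc.deallocate(data);
  return 0;
}
```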
Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.
Highlights include debris and shrapnel modeling at NIF, scalable algorithms for complex engineering systems, magnetic fusion simulation, and data placement optimization on GPUs.
Users need tools that address bottlenecks, integrate with programming models, provide automated analysis, and keep pace with the complexity and changing demands of exascale architectures.
This open-source file system framework supports hierarchical HPC storage systems by utilizing node-local burst buffers.
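The usage pattern is for applications to write bulk output, such as checkpoints, to a path backed by node-local burst buffers rather than directly to the parallel file system. A rough sketch of that pattern follows; the mount point and file naming are purely hypothetical, and each rank uses ordinary file I/O while the framework stages data on node-local storage underneath.

```cpp
#include <mpi.h>
#include <fstream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::vector<double> state(1 << 20, static_cast<double>(rank));

  // Hypothetical burst-buffer-backed mount point; each rank writes its own
  // checkpoint file with standard C++ stream I/O.
  const std::string path = "/bb/ckpt.step0100.rank" + std::to_string(rank);
  std::ofstream out(path, std::ios::binary);
  out.write(reinterpret_cast<const char*>(state.data()),
            static_cast<std::streamsize>(state.size() * sizeof(double)));
  out.close();

  MPI_Finalize();
  return 0;
}
```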
Highlights include CASC director Jeff Hittinger's vision for the center as well as recent work with PruneJuice, DataRaceBench, Caliper, and SUNDIALS.
LLNL's interconnection network projects improve the communication and overall performance of parallel applications through interconnect topology-aware task mapping.
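One standard building block for topology-aware placement is letting MPI reorder ranks to match the machine, for example via a Cartesian communicator with reordering enabled. A minimal sketch is below; the 2-D grid shape is illustrative, and LLNL's projects go further with interconnect-specific mappings.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int world_rank = 0, world_size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Let MPI factor the ranks into a 2-D grid, then build a Cartesian
  // communicator with reorder = 1 so the library may renumber ranks
  // to better match the underlying network topology.
  int dims[2] = {0, 0};
  MPI_Dims_create(world_size, 2, dims);
  int periods[2] = {0, 0};
  MPI_Comm cart;
  MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, /*reorder=*/1, &cart);

  int cart_rank = 0, coords[2] = {0, 0};
  MPI_Comm_rank(cart, &cart_rank);
  MPI_Cart_coords(cart, cart_rank, 2, coords);
  std::printf("world rank %d -> cart rank %d at (%d, %d)\n",
              world_rank, cart_rank, coords[0], coords[1]);

  MPI_Comm_free(&cart);
  MPI_Finalize();
  return 0;
}
```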
The PRUNERS Toolset offers four novel debugging and testing tools to assist programmers with detecting, remediating, and preventing errors in a coordinated manner.
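For a sense of the class of bugs such tools target, the fragment below contains a classic OpenMP data race: an unsynchronized update to a shared accumulator of the kind a race-detection tool can flag. The variable names are illustrative.

```cpp
#include <cstdio>

int main() {
  const int n = 1000000;
  double sum = 0.0;

  // BUG: every thread updates the shared variable `sum` without
  // synchronization, so the result is non-deterministic. A race detector
  // reports the conflicting accesses; the fix is a reduction(+:sum)
  // clause (or an atomic update).
  #pragma omp parallel for
  for (int i = 0; i < n; ++i) {
    sum += 1.0 / (i + 1);
  }

  std::printf("sum = %f\n", sum);
  return 0;
}
```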
LLNL's Advanced Simulation and Computing (ASC) program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
BLT supports HPC software development with built-in CMake macros for managing external libraries, code health checks, and unit testing.
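BLT bundles support for unit-testing frameworks such as GoogleTest; a minimal sketch of the kind of test a project might build and register through BLT's test macros is shown below. The function under test is purely hypothetical, and linking against gtest_main supplies main().

```cpp
#include "gtest/gtest.h"

// Hypothetical function under test.
int clamp_to_positive(int x) {
  return x < 0 ? 0 : x;
}

TEST(ClampTest, NegativeValuesBecomeZero) {
  EXPECT_EQ(clamp_to_positive(-5), 0);
}

TEST(ClampTest, PositiveValuesPassThrough) {
  EXPECT_EQ(clamp_to_positive(7), 7);
}
```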
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
Highlights include complex simulation codes, uncertainty quantification, discrete event simulation, and the Unify file system.
Highlights include recent LDRD projects, Livermore Tomography Tools, our work with the open-source software community, fault recovery, and CEED.
A new software model helps move million-line codes to various hardware architectures by automating data movement between memory spaces.
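The core idea is to wrap array data in a type that records where the current copy lives and copies it only when execution switches memory spaces, so application loops never issue explicit transfers. The sketch below is a purely illustrative stand-in, not the actual model's API; the "device" buffer is simulated by a second host buffer.

```cpp
#include <cstddef>
#include <vector>

enum class Space { Host, Device };

// Illustrative managed array: tracks which space holds the valid copy and
// performs the transfer lazily when the other space touches the data.
// A real implementation would allocate the Device buffer in GPU memory
// and use the vendor's copy routines.
template <typename T>
class ManagedArray {
public:
  explicit ManagedArray(std::size_t n)
      : host_(n), device_(n), valid_(Space::Host) {}

  T* data(Space where) {
    if (where != valid_) {
      // Automate the data movement instead of asking the caller to do it.
      if (where == Space::Device) device_ = host_;
      else                        host_   = device_;
      valid_ = where;
    }
    return (where == Space::Host) ? host_.data() : device_.data();
  }

  std::size_t size() const { return host_.size(); }

private:
  std::vector<T> host_, device_;
  Space valid_;
};

int main() {
  ManagedArray<double> a(1024);
  double* h = a.data(Space::Host);      // host touch: no copy needed
  for (std::size_t i = 0; i < a.size(); ++i) h[i] = 2.0 * i;

  double* d = a.data(Space::Device);    // first device touch: copy happens here
  (void)d;
  return 0;
}
```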
Sphinx, an integrated parallel microbenchmark suite, consists of a harness for running performance tests and extensive tests of MPI, Pthreads, and OpenMP.
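As an illustration of the kind of measurement such a suite automates, here is a bare-bones MPI ping-pong microbenchmark (run with two ranks; the message size and repetition count are arbitrary). A harness like Sphinx's adds parameter sweeps and statistics on top of timings like this.

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int bytes = 1 << 16;   // 64 KiB messages
  const int reps  = 1000;
  std::vector<char> buf(bytes);

  MPI_Barrier(MPI_COMM_WORLD);
  const double t0 = MPI_Wtime();
  for (int i = 0; i < reps; ++i) {
    if (rank == 0) {
      MPI_Send(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
      MPI_Recv(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
      MPI_Recv(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Send(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
  }
  const double t1 = MPI_Wtime();

  if (rank == 0) {
    std::printf("average round-trip time: %.2f us\n", 1e6 * (t1 - t0) / reps);
  }
  MPI_Finalize();
  return 0;
}
```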
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
Highlights include the HYPRE library, recent data science efforts, the IDEALS project, and the latest on the Exascale Computing Project.
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
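Conceptually, Apollo replaces the fixed execution policy of a RAJA loop with one chosen at runtime. The sketch below is not Apollo's API; it simply shows a RAJA loop whose policy is picked by a runtime predicate standing in for a trained classifier. RAJA's seq_exec and omp_parallel_for_exec policies are real, while the threshold and function names are made up.

```cpp
#include "RAJA/RAJA.hpp"
#include <cstddef>
#include <vector>

// Stand-in for a learned classifier: pick a policy from simple loop features.
bool use_threads(std::size_t n) {
  return n > 10000;  // hypothetical threshold a model might learn
}

void scale(double* y, const double* x, double a, std::size_t n) {
  auto body = [=](std::size_t i) { y[i] = a * x[i]; };
  if (use_threads(n)) {
    RAJA::forall<RAJA::omp_parallel_for_exec>(
        RAJA::TypedRangeSegment<std::size_t>(0, n), body);
  } else {
    RAJA::forall<RAJA::seq_exec>(
        RAJA::TypedRangeSegment<std::size_t>(0, n), body);
  }
}

int main() {
  std::vector<double> x(50000, 1.0), y(50000, 0.0);
  scale(y.data(), x.data(), 3.0, x.size());
  return 0;
}
```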
Large Linux data centers require flexible system management. At Livermore Computing, we are committed to supporting our Linux ecosystem at the high end of commodity computing.
This project's techniques reduce bandwidth requirements for large unstructured data by compressing the data and optimizing its layout for better locality and cache reuse.
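Layout optimization of this kind typically reorders elements so that items accessed together are stored together, for example by sorting along a space-filling curve. A small illustrative sketch follows; the Morton-key ordering and the point structure are generic choices, not necessarily the project's.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Interleave the bits of 16-bit x/y coordinates into a Morton (Z-order) key,
// so that sorting by the key clusters spatially nearby points in memory.
std::uint32_t morton2d(std::uint16_t x, std::uint16_t y) {
  auto spread = [](std::uint32_t v) {
    v = (v | (v << 8)) & 0x00FF00FFu;
    v = (v | (v << 4)) & 0x0F0F0F0Fu;
    v = (v | (v << 2)) & 0x33333333u;
    v = (v | (v << 1)) & 0x55555555u;
    return v;
  };
  return spread(x) | (spread(y) << 1);
}

struct Point {
  std::uint16_t x, y;
  double value;
};

// Reorder an unstructured point set for better locality and cache reuse.
void reorder_by_morton(std::vector<Point>& pts) {
  std::sort(pts.begin(), pts.end(), [](const Point& a, const Point& b) {
    return morton2d(a.x, a.y) < morton2d(b.x, b.y);
  });
}
```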
Researchers are developing a standardized and optimized operating system and software for deployment across Linux clusters to enable HPC at a reduced cost.