Unique among data compressors, zfp is designed as a compact number format for storing data arrays in memory in compressed form while still supporting high-speed random access.
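For illustration, here is a minimal C++ sketch of the usage pattern that description implies, based on zfp's compressed-array classes; the header path and exact defaults vary by release, so treat this as an assumption-laden sketch rather than canonical zfp code:

#include "zfp/array3.hpp"   // zfp 1.x layout; older releases ship this header as "zfparray3.h"
#include <cstddef>
#include <cstdio>

int main() {
  // Fixed-rate compressed 3D array: roughly 8 bits of storage per double value.
  const size_t nx = 64, ny = 64, nz = 64;
  zfp::array3d field(nx, ny, nz, 8.0);

  // Elements are written and read through operator(), like an uncompressed array;
  // zfp transparently compresses and decompresses small blocks behind the scenes.
  for (size_t k = 0; k < nz; k++)
    for (size_t j = 0; j < ny; j++)
      for (size_t i = 0; i < nx; i++)
        field(i, j, k) = double(i + j + k);

  std::printf("field(1,2,3) = %g, compressed footprint = %zu bytes\n",
              field(1, 2, 3), field.compressed_size());
  return 0;
}

The fixed rate is what keeps the in-memory footprint predictable while still allowing element-level random access.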
The addition of the spatial data flow accelerator to LLNL’s Livermore Computing Center is part of an effort to upgrade the Lab’s cognitive simulation (CogSim) program.
Since 2018, the Lab has seen tremendous growth in its data science community and has invested heavily in related research. Five years later, the Data Science Institute has found its stride.
A novel ML method discovers and predicts key data about networked devices.
Open-source software has played a key role in paving the way for LLNL's ignition breakthrough, and will continue to help push the field forward.
libROM is a library designed to facilitate Proper Orthogonal Decomposition (POD)-based Reduced Order Modeling (ROM).
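As background on the method the library implements, POD-based ROM can be summarized in a few standard equations (generic notation, not libROM-specific): solution snapshots are gathered into a matrix, a truncated SVD supplies an orthonormal reduced basis, and the full-order state is approximated in that basis,

$$
X = [\,u_1 \;\; u_2 \;\; \cdots \;\; u_m\,], \qquad
X = U \Sigma V^{\top}, \qquad
\Phi = [\,U_{\cdot 1} \;\; \cdots \;\; U_{\cdot r}\,], \qquad
u(t) \approx \Phi\,\hat{u}(t),
$$

where $r \ll m$, so the reduced coordinates $\hat{u}(t)$ are far cheaper to evolve than the original state $u(t)$.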
The prestigious fellow designation is a lifetime honorific title that recognizes SIAM members who have made outstanding contributions to fields served by the organization.
The new model addresses a problem in simulating RAS behavior, where conventional methods fall short of the time and length scales needed to observe biological processes of RAS-related cancers.
A new component-wise reduced order modeling method enables high-fidelity lattice design optimization.
Women data scientists, Lab employees, and other attendees interested in the field gathered at the Livermore Valley Open Campus for the annual Livermore Women in Data Science (WiDS) regional event held in conjunction with the global WiDS conference.
A principal investigator at LLNL shares how machine learning on the world’s fastest systems catalyzed the Lab’s breakthrough.
Register by February 27 for this free, hybrid Women in Data Science event. Everyone is welcome.
Collaborative autonomy software apps allow networked devices to detect, gather, identify and interpret data; defend against cyber-attacks; and continue to operate despite infiltration.
From our fall 2022 hackathon, watch as participants train an autonomous race car with reinforcement learning algorithms.
A new collaboration will leverage advanced LLNL-developed software to create a “digital twin” of the near-net shape mill-products system for producing aerospace parts.
Adding machine learning and other artificial intelligence methods to the feedback cycle of experimentation and computer modeling can accelerate scientific discovery.
High performance computing was key to the December 5 breakthrough at the National Ignition Facility.
Two supercomputers powered the research of hundreds of scientists at Livermore’s NNSA National Ignition Facility, which recently achieved ignition.
LLNL researchers have developed a novel machine learning (ML) model that can predict 10 distinct polymer properties more accurately than was possible with previous ML models.
The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas as a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered on HPC.
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
High-precision numerical data from computer simulations, observations, and experiments is often represented in floating point and can easily require terabytes to petabytes of storage.
The award recognizes progress in the team's ML-based approach to modeling ICF experiments, which has led to the creation of faster and more accurate models of ICF implosions.
LLNL is participating in the 34th annual Supercomputing Conference (SC22), which will be held both virtually and in Dallas on November 13–18, 2022.
In a time-trial competition, participants trained an autonomous race car with reinforcement learning algorithms.