Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
The award recognizes progress in the team's ML-based approach to modeling ICF experiments, which has yielded faster and more accurate models of ICF implosions.
In a time-trial competition, participants trained an autonomous race car with reinforcement learning algorithms.
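The competition's actual training code isn't shown here; as a rough sketch of the reinforcement learning loop such an event exercises, the toy example below trains a tabular Q-learning agent on a made-up one-dimensional "track." The environment, reward shaping, and hyperparameters are all invented for illustration.

```python
import numpy as np

# Hypothetical toy "track": 10 positions; the agent accelerates, coasts,
# or brakes, and is rewarded for reaching the finish quickly.
N_POS, N_SPEED, N_ACTIONS = 10, 4, 3  # positions, speed levels, actions

rng = np.random.default_rng(0)
Q = np.zeros((N_POS, N_SPEED, N_ACTIONS))  # tabular action-value estimates
alpha, gamma, eps = 0.1, 0.99, 0.1         # learning rate, discount, exploration

def step(pos, speed, action):
    """Advance the toy car; action 0=brake, 1=coast, 2=accelerate."""
    speed = int(np.clip(speed + action - 1, 0, N_SPEED - 1))
    pos = min(pos + speed, N_POS - 1)
    done = pos == N_POS - 1
    reward = 10.0 if done else -1.0  # per-step time penalty drives faster laps
    return pos, speed, reward, done

for episode in range(2000):
    pos, speed, done = 0, 0, False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(Q[pos, speed]))
        npos, nspeed, r, done = step(pos, speed, a)
        # one-step Q-learning update toward the bootstrapped target
        target = r + (0.0 if done else gamma * Q[npos, nspeed].max())
        Q[pos, speed, a] += alpha * (target - Q[pos, speed, a])
        pos, speed = npos, nspeed
```

Competition-grade racing systems replace the table with a neural policy trained against simulated sensors, but the reward-driven trial-and-error at the core is the same.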
The second article in a series about the Lab's stockpile stewardship mission highlights computational models, parallel architectures, and data science techniques.
The Adaptive Computing Environment and Simulations (ACES) project will advance fissile materials production models and reduce the risk of nuclear proliferation.
More than 100 million smart meters have been installed in the U.S. to record and communicate electric consumption, voltage, and current to consumers and grid operators. LLNL has developed GridDS to help make the most of this data.
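GridDS's own API isn't shown here; as an illustration of the kind of task it supports, this sketch frames short-term load forecasting on synthetic meter readings as supervised learning with scikit-learn. The data, lag-feature setup, and model choice are assumptions for illustration, not GridDS internals.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic hourly consumption (kWh) with a daily cycle plus noise,
# standing in for a real smart-meter feed.
hours = np.arange(24 * 60)
load = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)

# Frame forecasting as supervised learning: predict the next reading
# from the previous 24 (a common lag-feature setup).
LAGS = 24
X = np.stack([load[i : i + LAGS] for i in range(load.size - LAGS)])
y = load[LAGS:]

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"hold-out MAE: {mae:.3f} kWh")
```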
An LLNL team will be among the first researchers to perform work on the world’s first exascale supercomputer—Oak Ridge National Laboratory’s Frontier—when they use the system to model cancer-causing protein mutations.
Livermore’s machine learning experts aim to provide assurances on performance and enable trust in machine-learning technology through innovative validation and verification techniques.
The Accelerating Therapeutic Opportunities in Medicine (ATOM) consortium is showing “significant” progress in demonstrating that HPC and machine learning tools can speed up the drug discovery process, ATOM co-lead Jim Brase said at a recent webinar.
Winning the best paper award at PacificVis 2022, a research team has developed a resolution-precision-adaptive representation technique that reduces mesh sizes, thereby reducing the memory and storage footprints of large scientific datasets.
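The paper's actual adaptive data structure isn't reproduced here; as a toy illustration of the precision half of the idea (trading numeric precision for a smaller footprint), this sketch zeroes low-order mantissa bits of a float field and reports the error introduced. The field, bit budgets, and truncation scheme are assumptions for illustration.

```python
import numpy as np

def truncate_precision(a, keep_bits):
    """Zero out low-order mantissa bits of float32 values.

    float32 has a 23-bit mantissa; keeping fewer bits loses precision
    but makes the array far more compressible (long runs of zero bits).
    """
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    bits = a.astype(np.float32).view(np.uint32)
    return (bits & mask).view(np.float32)

rng = np.random.default_rng(2)
field = rng.random(1_000_000).astype(np.float32)  # stand-in scientific field

for keep in (23, 16, 8, 4):
    approx = truncate_precision(field, keep)
    err = np.max(np.abs(field - approx))
    print(f"{keep:2d} mantissa bits kept -> max abs error {err:.2e}")
```

Truncated arrays compress far better because the discarded bits become runs of zeros; the published technique goes further by also adapting mesh resolution spatially, not just precision.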
LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS), held May 30 through June 3.
Technologies developed through the Next-Generation High Performance Computing Network project are expected to support mission-critical applications for HPC, AI and ML, and high performance data analytics.
Sponsored by the DSI, LLNL’s winter hackathon took place on February 16–17. In addition to traditional hacking, the hackathon included a special datathon competition in anticipation of the Women in Data Science (WiDS) conference on March 7.
From molecular screening, a software platform, and an online data portal to the computing systems that power these projects.
LLNL’s cyber programs work across a broad sponsor space to develop technologies addressing sophisticated cyber threats directed at national security and civilian critical infrastructure.
LC sited two different AI accelerators in 2020: the Cerebras wafer-scale AI engine, attached to Lassen, and an AI accelerator from SambaNova Systems, integrated into the Corona cluster.
This project advances research in physics-informed ML, invests in validated and explainable ML, creates an advanced data environment, builds ML expertise across the complex, and more.
LLNL researchers and collaborators have developed a highly detailed, ML-backed multiscale model revealing the importance of lipids to RAS, a family of proteins whose mutations are linked to many cancers.
Highlights include power grid challenges, performance analysis, complex boundary conditions, and a novel multiscale modeling approach.
Brian Gallagher works on applications of machine learning for a variety of science and national security questions. He’s also a group leader, student mentor, and the new director of LLNL’s Data Science Challenge.
New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.
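The ICLR paper's method isn't reproduced here; for context, a common "traditional" post-training compression baseline is magnitude pruning, sketched below on a random weight matrix. The layer shape and sparsity level are invented for illustration.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (post-training).

    This is the classic compress-after-training baseline; a
    learning-by-compressing approach instead folds compression
    into the training process itself.
    """
    w = weights.copy()
    k = int(sparsity * w.size)
    if k:
        threshold = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(3)
layer = rng.standard_normal((256, 128)).astype(np.float32)
pruned = magnitude_prune(layer, sparsity=0.9)
print(f"nonzero weights: {np.count_nonzero(pruned)} / {layer.size}")
```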
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques.
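BUILD's actual models and solvers aren't shown here; as a toy illustration of dependency-compatibility solving, this sketch enumerates package version combinations and checks them against declared constraints. The packages, versions, and constraints are all hypothetical.

```python
from itertools import product

# Hypothetical packages and candidate versions (not BUILD's real model).
candidates = {
    "app":    ["1.0"],
    "libfoo": ["1.2", "2.0"],
    "libbar": ["0.9", "1.1"],
}

# Compatibility constraints: (pkg, ver) requires (dep, allowed versions).
requires = {
    ("app", "1.0"):    [("libfoo", {"2.0"}), ("libbar", {"0.9", "1.1"})],
    ("libfoo", "2.0"): [("libbar", {"1.1"})],
    ("libfoo", "1.2"): [("libbar", {"0.9"})],
}

def consistent(config):
    """Check every selected version's requirements against the config."""
    for (pkg, ver), deps in requires.items():
        if config.get(pkg) != ver:
            continue  # this version wasn't selected
        for dep, allowed in deps:
            if config.get(dep) not in allowed:
                return False
    return True

names = list(candidates)
for versions in product(*(candidates[n] for n in names)):
    config = dict(zip(names, versions))
    if consistent(config):
        print("feasible configuration:", config)
```

Exhaustive enumeration blows up combinatorially as package counts grow, which is why the project description points to efficient logic solvers for this problem.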
Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.
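The papers' shift-aware estimators aren't reproduced here; as background on the baseline idea of feature importance estimation, this sketch computes standard permutation importance with scikit-learn on synthetic data. Everything in it is illustrative; the published work addresses what happens when the data distribution shifts, which this baseline ignores.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a real task.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure
# the drop in held-out accuracy attributable to that feature.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} ± {result.importances_std[i]:.3f}")
```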
StarSapphire is a collection of scientific data mining projects focusing on the analysis of data from scientific simulations, observations, and experiments.