Scientific applications must contend with a broad range of real-world variations: data bias, noise, unknown transformations, adversarial corruptions, and other shifts in distribution. Many of LLNL's mission-critical applications are considered high regret, meaning that faulty decisions can endanger human safety or incur significant costs. As vulnerable ML systems are deployed ever more widely, manipulation and misuse can have serious consequences.

Sustained acceptance of ML requires evolving from an exploratory phase to the development of assured ML systems that provide rigorous guarantees on robustness, fairness, and privacy. We're using techniques from optimization, information theory, and statistical learning theory to achieve these properties, and we're designing tools that apply these techniques efficiently to large-scale computing systems.
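As one concrete illustration of what a robustness evaluation can look like, the sketch below measures a classifier's accuracy under fast gradient sign method (FGSM) adversarial perturbations. This is a generic PyTorch example, not LLNL's own methodology; the model, data loader, and perturbation budget epsilon are assumed placeholders.

# Minimal FGSM robustness check (generic sketch, not LLNL's method).
# "model", "loader", and "epsilon" are placeholder assumptions.
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, loader, epsilon=0.03, device="cpu"):
    """Fraction of inputs still classified correctly after an FGSM perturbation."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        # Step each input in the direction that most increases the loss,
        # then clamp back to the valid input range.
        x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
        preds = model(x_adv).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

Comparing this adversarial accuracy against clean accuracy gives one simple, quantitative view of how fragile a model is to worst-case input perturbations.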

Model Failure and Resiliency

A paper presented at the 2024 International Conference on Machine Learning investigates how likely AI/ML models are to produce inaccurate predictions.

AI/ML Model Robustness

LLNL researchers study model robustness in a paper accepted to the 2024 International Conference on Machine Learning.

Workshop on AI Safety

Researchers from DOE national labs, academia, and industry recently convened at LLNL for a workshop aimed at aligning strategies for ensuring safe AI.

NLPVis Software Repository

NLPVis is designed to visualize the attention of neural network-based natural language models.
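For context on the kind of data such a tool visualizes, the sketch below shows one common way to extract per-layer attention matrices from a transformer language model using the Hugging Face transformers library. This is not NLPVis's API; the model name and example sentence are placeholders.

# Extract attention weights for visualization (generic sketch; not NLPVis's API).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The model attends to related words.", return_tensors="pt")
outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, each shaped
# (batch, num_heads, seq_len, seq_len); these matrices are what an
# attention visualizer renders as token-to-token heatmaps.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
layer0_head0 = outputs.attentions[0][0, 0]
for i, token in enumerate(tokens):
    strongest = layer0_head0[i].argmax().item()
    print(f"{token} attends most to {tokens[strongest]}")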