Sandia Labs FY21 LDRD Annual Report


Impacting future nuclear deterrence products through new unique discriminators. A new unique signal (UQS) discriminator could replace the current all-mechanical discriminator, increasing ease of integration and reducing time-to-field for nuclear deterrence surety systems. This LDRD project team's near-term goal was to define requirements and explore technology, design, and architecture options (proof-of-concept prototypes) for a new discriminator that would maintain or improve surety (safety, security, reliability). Building on those successes, the expected outcome of this ongoing project is a prototype UQS discriminator design with the enhanced agility to have significant national security impact by lowering system lifetime costs and reducing time-to-field for new systems. (PI: Paul Galambos)

Extending the capability of machine learning for use in radioisotope identification of gamma spectra. Successful machine-learning (ML)-based radioisotope identification (RIID) will significantly impact the nation's ability to perform real-time, autonomous RIID on deployed detectors and provide additional capabilities to analysts. In pursuit of this goal, researchers created a Python-based software package (PyRIID) that enables users to rapidly create a radioisotope-targeting ML model based on signatures observed with their detector(s) and then use that model to perform RIID in real time. Note that the most efficient way to build models requires a well-crafted detector response function, for which GADRAS (Gamma Detector Response and Analysis Software) is recommended. With this base technology established, the next step is to continue building trust through demonstrations of ML-based RIID effectiveness in multiple global security applications. (PI: Tyler Morrow)
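The general workflow described above can be illustrated with a short sketch. The code below is not the PyRIID API; the isotope list, template spectra, and Poisson-sampling step are illustrative stand-ins for the detector-response-function seeds (e.g., from GADRAS) that a real workflow would use to synthesize labeled training spectra, train a classifier, and identify a new measurement.

```python
# Minimal sketch of the ML-based RIID workflow, assuming synthetic "seed"
# spectra stand in for a real detector response function. NOT the PyRIID API.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_channels = 128
# Hypothetical target isotopes and crude template spectra (flat continuum + peaks).
def template(peak_channels):
    x = np.arange(n_channels)
    spec = np.ones(n_channels)
    for p in peak_channels:
        spec += 50 * np.exp(-0.5 * ((x - p) / 3.0) ** 2)
    return spec / spec.sum()  # unit-area "seed" spectrum

seeds = {"Cs137": template([40]), "Co60": template([70, 80]), "Ba133": template([20, 25])}

# Synthesize noisy, labeled training spectra by Poisson sampling each seed.
X, y = [], []
for label, seed in seeds.items():
    for _ in range(500):
        counts = rng.poisson(seed * rng.integers(500, 5000))
        X.append(counts / max(counts.sum(), 1))  # normalize to unit area
        y.append(label)

# Train a small neural-network classifier on the synthetic spectra.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(np.array(X), y)

# "Real-time" identification of a new measurement (here, another synthetic draw).
measurement = rng.poisson(seeds["Co60"] * 2000)
print(model.predict([measurement / measurement.sum()]))  # likely identifies Co60
```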

Sensitivity analysis-guided explainability for machine learning. Despite the potential of machine learning (ML) models to address the efficient analysis of increasing amounts of data, ML algorithms are not fully leveraged in many high-consequence national security domains because their output cannot be sufficiently validated and trusted. This research investigated the quality and impact of explanations for ML models using sensitivity analysis and user studies, and exposed significant gaps in explainable ML that should be examined further. Using sensitivity analysis methods to quantitatively assess the quality of explanations revealed that most state-of-the-art explainable ML methods explain the ML model with low fidelity. These low-fidelity explanations are due to simplifying assumptions (e.g., independent features and linear approximations) made by explainable ML methods. The analyses from this LDRD provide a foundation for establishing the conditions under which explanations can be trusted, for identifying and addressing gaps, and for significantly improving both the fidelity of explanations and the usage of model outputs in deployed settings. This research also highlighted the need for more disciplines to be represented in the teams supporting the effort, including computer science, human cognition, mathematics, and expertise in the domain where the ML model will be deployed. (PI: Michael Smith)
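The kind of sensitivity-analysis check described above can be illustrated with a short sketch. The code below is not the project's SAGE implementation; the synthetic data, random-forest model, and R²-style fidelity score are assumptions chosen to show how perturbation-based sensitivity analysis can quantify how faithfully a simple, locally linear explanation tracks a nonlinear model.

```python
# Illustrative fidelity check for a local linear explanation of a nonlinear
# model, assuming synthetic data and an R^2-style score. NOT the SAGE method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)  # interacting features

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_linear_explanation(x, scale=0.3, n=500):
    """Fit a linear surrogate to the model's probability near x (LIME-like)."""
    Z = x + rng.normal(scale=scale, size=(n, x.size))
    p = model.predict_proba(Z)[:, 1]
    coefs, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n)], p, rcond=None)
    return coefs  # feature weights plus intercept

def fidelity(x, coefs, scale=0.3, n=500):
    """R^2 of the surrogate on fresh perturbations: near 1 = faithful, low = not."""
    Z = x + rng.normal(scale=scale, size=(n, x.size))
    p = model.predict_proba(Z)[:, 1]
    p_hat = np.c_[Z, np.ones(n)] @ coefs
    ss_res = np.sum((p - p_hat) ** 2)
    ss_tot = np.sum((p - p.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

x0 = X[0]
coefs = local_linear_explanation(x0)
print(f"explanation fidelity near x0: {fidelity(x0, coefs):.2f}")
```

A low score for a point whose prediction depends on feature interactions mirrors the finding above: explanation methods built on independence or linearity assumptions can describe such models with low fidelity.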

SAGE (Sensitivity Analysis Guided Explainability): idealized information for validating the output of an ML model, capturing model uncertainty and prediction uncertainty and explaining the decision process.


