Sandia Labs FY21 LDRD Annual Report

Multiple applications to benefit from distributed-in-time techniques for optimization at extreme scales (DITTO-X). At the heart of this LDRD project was the pioneering of parallelization across the time dimension as a transformative computational technique for parameter estimation, design optimization, and control of engineered systems driven by long-time-scale dynamics. The research team developed new optimization algorithms and software geared toward extreme-scale computing, enabling the solution of critical optimization problems in power grid, pulsed power, and reentry applications. Speed-ups of 10x to 360x have been demonstrated. An illustrative sketch of the time-decomposition idea appears below. (PI: Denis Ridzal)

Analyzing complex datasets through quantum manifold learning. Manifold learning, the task of understanding the geometric structure of a dataset, is a critical tool for reducing the dimensionality of large, complex datasets and making interpretation and analysis tractable. This research team developed a new theoretical framework and associated software tools for manifold learning on large datasets using quantum dynamical processes. In developing the algorithm, which exploits the properties of propagating quantum wavepackets, researchers connected the disparate fields of quantum dynamics and data analysis and revealed the intimate relationship between the discretization induced by sampling and quantum uncertainty. The convergence and accuracy properties of the algorithm were proven and illustrated on several datasets, including an analysis of adherence to social distancing measures during the COVID-19 pandemic. (PI: Mohan Sarovar)
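The DITTO-X algorithms themselves are not detailed in this summary. The sketch below is a minimal, hypothetical illustration of the distributed-in-time idea for parameter estimation: a least-squares misfit over a long time horizon decomposes into windows that are mutually independent once each window's initial state is checkpointed, so every window could be evaluated concurrently on its own processor. The toy model, window count, and parameter scan are all invented for illustration.

```python
# Minimal sketch only -- a toy stand-in, not the DITTO-X algorithms.
import numpy as np

def simulate(x0, a, n_steps, dt):
    """Forward Euler for the toy model dx/dt = a*x on one time window."""
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for k in range(n_steps):
        xs[k + 1] = xs[k] + dt * a * xs[k]
    return xs

def window_misfit(x0, a, obs, dt):
    """Partial least-squares misfit for one window. Given the window's
    checkpointed initial state x0, it is independent of all other windows."""
    xs = simulate(x0, a, len(obs) - 1, dt)
    return np.sum((xs - obs) ** 2)

# Synthetic observations from a "true" decay rate (invented for illustration).
dt, n_steps, a_true = 0.01, 400, -0.5
truth = simulate(1.0, a_true, n_steps, dt)

# Split the horizon into 4 windows with checkpointed initial states.
n_win = 4
w = n_steps // n_win
checkpoints = truth[::w][:n_win]
windows = [truth[i * w : (i + 1) * w + 1] for i in range(n_win)]

def total_misfit(a):
    # Each term is independent: in a distributed-in-time setting, every
    # window would be evaluated concurrently on its own processor or rank.
    return sum(window_misfit(x0, a, obs, dt)
               for x0, obs in zip(checkpoints, windows))

# Coarse 1-D parameter scan recovers the true rate.
grid = np.linspace(-1.0, 0.0, 101)
best = grid[int(np.argmin([total_misfit(a) for a in grid]))]
print(f"estimated a = {best:.2f} (true a = {a_true})")
```

The independence of the per-window terms is the kind of time-dimension parallelism the summary describes; in an extreme-scale setting, the window evaluations would be distributed across many compute nodes rather than run in a serial loop.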
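The project's quantum-wavepacket algorithm is not specified in this summary, so the sketch below substitutes a classical diffusion-map embedding, a standard manifold-learning method, to illustrate what such algorithms compute: low-dimensional coordinates that respect a dataset's geometric structure. The dataset (a noisy circle), kernel bandwidth, and embedding dimension are invented for illustration.

```python
# Minimal sketch only -- a classical diffusion map, not the project's
# quantum-wavepacket algorithm. It illustrates manifold learning: recover
# low-dimensional coordinates from a kernel built on pairwise distances.
import numpy as np

rng = np.random.default_rng(0)

# A 1-D manifold (a noisy circle) embedded in 2-D; settings are invented.
theta = rng.uniform(0.0, 2.0 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += 0.02 * rng.standard_normal(X.shape)

# Gaussian kernel on pairwise squared distances.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-d2 / 0.1)

# Row-normalize into a Markov matrix: a diffusion process on the data.
P = K / K.sum(axis=1, keepdims=True)

# The leading nontrivial eigenvectors supply the embedding coordinates.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
coords = vecs[:, order[1:3]].real  # skip the trivial constant eigenvector

# For a circle, the two coordinates recover cos/sin of the generating
# angle up to rotation, so the recovered angle tracks theta.
angle = np.arctan2(coords[:, 1], coords[:, 0])
print("embedding shape:", coords.shape)
print("recovered angle spans:", float(angle.min()), "to", float(angle.max()))
```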

Increasing computing performance through high-precision sparse and dense analog matrix multiplication. By co-designing the hardware architecture and the corresponding algorithms, researchers developed several novel analog linear algebra accelerators that could revolutionize computing by reducing the energy and time of matrix operations by orders of magnitude. Crossbar memories accelerate both sparse and dense matrix multiplication as well as matrix inversion and linear-system solution. These matrix operations are critical for any large-scale simulation, including those for electromagnetics, climate, materials science, structural mechanics, and fusion energy. Accelerating matrix multiplication will also accelerate neural networks and other data-processing algorithms critical for cyber analytics, image processing, and more. This project demonstrated performance gains of greater than 100x over digital architectures for both high-performance computing and high-throughput data analytics. (PI: Sapan Agarwal)
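In a crossbar array, applying voltages to the rows of a programmed conductance matrix produces output currents given by Ohm's and Kirchhoff's laws, so a full matrix-vector product happens in one analog step. The sketch below is a numerical model of that idea, not the project's hardware or algorithms; the noise level, test matrix, and iteration count are invented. It also shows one generic way to recover high precision from an inexact analog step, echoing the preconditioning theme of the figure caption below: a noisy analog application of an approximate inverse is paired with exact digital residual corrections (iterative refinement).

```python
# Minimal sketch only -- a numerical model of analog crossbar linear
# algebra, with invented parameters; not the project's design.
import numpy as np

rng = np.random.default_rng(1)

def analog_matvec(G, v, noise=1e-2):
    """Model of a crossbar array computing G @ v in one analog step.
    Signed entries are mapped to a differential pair of non-negative
    conductances, since physical conductances cannot be negative; the
    multiplicative noise is a made-up stand-in for analog error."""
    Gp, Gm = np.maximum(G, 0.0), np.maximum(-G, 0.0)
    y = Gp @ v - Gm @ v
    return y * (1.0 + noise * rng.standard_normal(y.shape))

# Iterative refinement: the noisy analog array applies an approximate
# inverse (a preconditioner), and cheap digital residual updates restore
# high accuracy despite the analog error.
n = 64
A = n * np.eye(n) + rng.standard_normal((n, n))  # well-conditioned test matrix
b = rng.standard_normal(n)
A_inv = np.linalg.inv(A)  # programmed once into the crossbar conductances

x = np.zeros(n)
for _ in range(6):
    r = b - A @ x                     # exact residual, computed digitally
    x = x + analog_matvec(A_inv, r)   # approximate correction, in analog
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```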

Example of an analog preconditioning solver array fabricated using the GlobalFoundries 45 nm silicon-on-insulator process.

