The Artificial Intelligence Enhanced Co-Design for Next Generation Microelectronics virtual workshop was held April 4-5, 2023, and attended by subject matter experts from universities, industry, and national laboratories. This was the third in a series of workshops to motivate the research community to identify and address major challenges facing microelectronics research and production. The 2023 workshop focused on a set of topics from materials to computing algorithms, and included discussions on relevant federal legislation, such as the Creating Helpful Incentives to Produce Semiconductors and Science Act (CHIPS Act), which was signed into law in the summer of 2022. Talks at the workshop included edge computing in radiation environments, new materials for neuromorphic computing, advanced packaging for microelectronics, and new AI techniques. We also received project updates from several of the Department of Energy (DOE) microelectronics co-design projects funded in the fall of 2021, and from three of the Energy Frontier Research Centers (EFRCs) that had been funded in the fall of 2022. The workshop also conducted a set of breakout discussions around the five principal research directions (PRDs) from the 2018 Department of Energy workshop report: 1) define innovative material, device, and architecture requirements driven by applications, algorithms, and software; 2) revolutionize memory and data storage; 3) re-imagine information flow unconstrained by interconnects; 4) redefine computing by leveraging unexploited physical phenomena; 5) reinvent the electricity grid through new materials, devices, and architectures.
We tasked each breakout group to consider one primary PRD (and other PRDs as relevant topics arose during discussions) and to address questions such as whether the research community has embraced co-design as a methodology and whether new developments at any level of innovation, from materials to programming models, require the research community to reevaluate the PRDs developed in 2018.
The tension between accuracy and computational cost is a common thread throughout computational simulation. One such example arises in the modeling of mechanical joints. Joints are typically confined to a physically small domain and yet are computationally expensive to model with a high-resolution finite element representation. A common approach is to substitute reduced-order models that can capture important aspects of the joint response and enable the use of more computationally efficient techniques overall. Unfortunately, such reduced-order models are often difficult to use and error-prone, and they have a narrow range of application. In contrast, we propose a new type of reduced-order model, leveraging machine learning, that would be both user-friendly and extensible to a wide range of applications.
Imaging methods driven by probes, electrons, and ions have played a dominant role in modern science and engineering. Opportunities for machine vision and AI, which have focused on consumer problems like driving and feature recognition, are now presenting themselves for automating aspects of the scientific process. This proposal aims to enable and drive discovery in ultra-low-energy implantation by taking advantage of faster processing, flexible control and detection methods, and architecture-agnostic workflows that will result in higher efficiency and shorter scientific development cycles. Custom microscope control, collection, and analysis hardware will provide a framework for conducting novel in situ experiments revealing unprecedented insight into surface dynamics at the nanoscale. Ion implantation is a key capability for the semiconductor industry. As devices shrink, novel materials enter the manufacturing line, and quantum technologies become more mainstream, traditional implantation methods fall short in terms of energy, ion species, and positional precision. Here we demonstrate 1 keV focused ion beam Au implantation into Si and validate the results via atom probe tomography. We show the Au implant depth at 1 keV is 0.8 nm and that identical results for low-energy ion implants can be achieved either by lowering the column voltage or by decelerating ions using a bias, while maintaining a sub-micron beam focus. We compare our experimental results to static calculations using SRIM and dynamic calculations using the binary collision approximation codes TRIDYN and IMSIL. A large discrepancy between the static and dynamic simulations is found that is due to lattice enrichment with high-stopping-power Au and surface sputtering. Additionally, we demonstrate how model details are particularly important to the simulation of these low-energy heavy-ion implantations. Finally, we discuss how our results pave the way toward much lower implantation energies, while maintaining high spatial resolution.
Batched sparse linear algebra operations in general, and solvers in particular, have become a major algorithmic development activity and a foremost performance-engineering effort in numerical software library work on modern hardware with accelerators such as GPUs. Many applications, ECP and non-ECP alike, require the simultaneous solution of many small linear systems of equations that are structurally sparse in one form or another. To reach high hardware utilization, it is important to provide these applications with interface designs that are both functionally efficient and performance portable, and that give full access to the appropriate batched sparse solvers running on the modern hardware accelerators prevalent across DOE supercomputing sites since the inception of ECP. To this end, we present a summary of recent advances in the interface designs used by HPC software libraries supporting batched sparse linear algebra and in the development of batched sparse kernel codes for solvers and preconditioners. We also address interoperability opportunities to keep the corresponding software portable across the major hardware accelerators from AMD, Intel, and NVIDIA, while maintaining disclosure levels conforming to the active NDA agreements. The presented interface specifications include a mix of batched banded, sparse iterative, and sparse direct solvers, with accompanying functionality that is already required by application codes or that we anticipate will be needed in the near future. This report summarizes progress in Kokkos Kernels and the xSDK libraries MAGMA, Ginkgo, hypre, PETSc, and SuperLU.
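As a concrete illustration of the batched idea, the following is a minimal numpy sketch, not the interface of any of the libraries named above, of a Thomas-algorithm solve vectorized over a batch of small tridiagonal systems so that every system advances through elimination in lockstep, the same access pattern batched GPU kernels exploit. The function name and array layout are assumptions made for illustration.

```python
import numpy as np

def batched_thomas(dl, d, du, b):
    """Solve many independent tridiagonal systems at once.

    dl, d, du: (batch, n) sub-, main-, and super-diagonals (dl[:, 0]
    and du[:, -1] are unused); b: (batch, n) right-hand sides.
    All operations are vectorized over the batch dimension, so every
    small system performs the same elimination step simultaneously.
    """
    batch, n = d.shape
    c = du.astype(float).copy()
    dd = d.astype(float).copy()
    bb = b.astype(float).copy()
    # Forward elimination across the whole batch at once.
    for i in range(1, n):
        w = dl[:, i] / dd[:, i - 1]
        dd[:, i] -= w * c[:, i - 1]
        bb[:, i] -= w * bb[:, i - 1]
    # Back substitution, also batched.
    x = np.empty_like(bb)
    x[:, -1] = bb[:, -1] / dd[:, -1]
    for i in range(n - 2, -1, -1):
        x[:, i] = (bb[:, i] - c[:, i] * x[:, i + 1]) / dd[:, i]
    return x
```

A batched interface of this shape lets one kernel launch amortize overhead over the whole batch instead of paying it per small system.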
This report summarizes work toward developing stochastic weighted particle methods (SWPM) for future application in hypersonic flows. Extensive changes to Sandia’s direct simulation Monte Carlo (DSMC) solver, SPARTA (Stochastic Particle Real Time Analyzer), were made to enable the necessary particle splitting and reduction capabilities for SWPM. The results from one-dimensional Couette and Fourier flows suggest that SWPM can reproduce the correct transport for a large range of Knudsen numbers with adequate accuracy. The associated velocity and temperature profiles are in good agreement with DSMC. An issue with particle placement during particle-number reduction is identified, for which a simple but effective solution based on minimizing the center-of-mass error is proposed. High-Mach wheel flows are simulated using the SWPM and DSMC methods. SWPM provides nearly an order-of-magnitude increase in efficiency over DSMC while retaining high accuracy.
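To illustrate the kind of constraint the particle-reduction step must satisfy, here is a simplified one-dimensional sketch, not SPARTA's implementation, of merging a group of weighted particles into two survivors that exactly conserve mass, momentum, and kinetic energy, with both survivors placed at the group's center of mass so the center-of-mass placement error vanishes. All names and the 1-D restriction are illustrative assumptions.

```python
import math

def merge_particles(weights, velocities, positions):
    """Reduce a group of 1-D weighted particles to two particles that
    exactly conserve mass, momentum, and kinetic energy.

    The survivors split the total weight, carry velocities vbar +/- s
    (s chosen to preserve the group's kinetic energy), and are both
    placed at the group's center of mass, a simple stand-in for the
    placement strategy described above.
    """
    W = sum(weights)
    vbar = sum(w * v for w, v in zip(weights, velocities)) / W
    e = sum(w * v * v for w, v in zip(weights, velocities)) / W
    s = math.sqrt(max(e - vbar * vbar, 0.0))  # thermal velocity spread
    xcom = sum(w * x for w, x in zip(weights, positions)) / W
    return [(W / 2, vbar + s, xcom), (W / 2, vbar - s, xcom)]
```

Because the two survivors are symmetric about the mean velocity, the first three velocity moments of the group are preserved by construction.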
Parekh, Ojas D.; Lougovski, Pavel; Broz, Joe; Byrd, Mark; Chapman, Joseph C.; Chembo, Yanne; De Jong, Wibe A.; Figueroa, Eden; Humble, Travis S.; Larson, Jeffrey; Quiroz, Gregory; Ravi, Gokul; Shammah, Nathan; Svore, Krysta M.; Wu, Wenji; Zeng, William J.
Employing quantum mechanical resources in computing and networking opens the door to new computation and communication models and potential disruptive advantages over classical counterparts. However, quantifying and realizing such advantages face extensive scientific and engineering challenges. Investments by the Department of Energy (DOE) have driven progress toward addressing such challenges. Quantum algorithms have been recently developed, in some cases offering asymptotic exponential advantages in speed or accuracy, for fundamental scientific problems such as simulating physical systems, solving systems of linear equations, or solving differential equations. Empirical demonstrations on nascent quantum hardware suggest better performance than classical analogs on specialized computational tasks favorable to the quantum computing systems. However, demonstration of an end-to-end, substantial and rigorously quantifiable quantum performance advantage over classical analogs remains a grand challenge, especially for problems of practical value. The definition of requirements for quantum technologies to exhibit scalable, rigorous, and transformative performance advantages for practical applications also remains an outstanding open question, namely, what will be required to ultimately demonstrate practical quantum advantage?
Actinide thin-film coatings such as uranium dioxide (UO2) play an important role in nuclear reactors and other mission-relevant applications, but realization of their potential requires a deep fundamental understanding of the chemical vapor deposition (CVD) processes used for their growth. The slow experimental progress can be attributed, in part, to the standard safety guidelines associated with handling uranium byproducts, which are often corrosive, toxic, and radioactive. Accurate simulation techniques, when used in concert with experiment, can improve laboratory safety, material durability, and deliverable timeframes. However, state-of-the-art computational methods are either insufficiently accurate or intractably expensive. To remedy this situation, in this project we proposed a machine-learning (ML) accelerated workflow for simulating molecular clustering toward deposition. As a benchmark test case, we considered molecular clustering in steam and assessed independent components of our workflow by comparing with measured thermodynamic properties of water. After analyzing each component individually and finding no fundamental barrier to realization of the workflow, we attempted to integrate the ML component, a Sandia-developed tool called FitSNAP. As this was the first application of FitSNAP to atoms and molecules in the gas phase at Sandia, the method required more fitting data than was originally anticipated. Systematic improvements were made by including in the fit data diatomic potentials, molecular single-bond-breaking curves, and symmetry-constrained intermolecular potentials. We concluded that our strategy provides a feasible pathway toward modeling CVD and related processes, but that extensive training data must be generated before it can be of practical use.
Sandia is a federally funded research and development center (FFRDC) focused on developing and applying advanced science and engineering capabilities to mitigate national security threats. This is accomplished through the exceptional staff leading research at the Labs and partnering with universities and companies. Sandia’s LDRD program aims to maintain the scientific and technical vitality of the Labs and to enhance the Labs’ ability to address future national security needs. The program funds foundational, leading-edge discretionary research projects that cultivate and utilize core science, technology, and engineering (ST&E) capabilities. Per Congressional intent (P.L. 101-510) and Department of Energy (DOE) guidance (DOE Order 413.2C, Chg 1), Sandia’s LDRD program is crucial to maintaining the nation’s scientific and technical vitality.
Our research focused on forecasting the position and shape of the winter stratospheric polar vortex at a subseasonal timescale of 15 days in advance. To achieve this, we employed both statistical and neural network machine learning techniques. The analysis was performed on 42 winter seasons of reanalysis data provided by NASA, giving us a total of 6,342 days of data. The state of the polar vortex was determined by using geometric moments to calculate the centroid latitude and the aspect ratio of an ellipse fit onto the vortex. Time series for thirty additional precursors were calculated to help improve the predictive capabilities of the algorithm. Feature importance analysis of these precursors was performed using random forest to measure their predictive importance and determine the ideal number of precursors. Then, using the precursors identified as important, various statistical methods were tested for predictive accuracy, with random forest and nearest neighbor performing the best. An echo state network, a type of recurrent neural network that features a sparsely connected hidden layer and a reduced number of trainable parameters allowing for rapid training and testing, was also implemented for the forecasting problem. Hyperparameter tuning was performed for each method using a subset of the training data. The algorithms were trained and tuned on the first 41 years of data, then tested for accuracy on the final year. In general, the centroid latitude of the polar vortex proved easier to predict than the aspect ratio across all algorithms. Random forest outperformed the other statistical forecasting algorithms overall but struggled to predict extreme values. Forecasts from the echo state network suggested strong predictive capability past 15 days, but further work is required to fully realize the potential of recurrent neural network approaches.
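As a sketch of the echo state network ingredient, the following minimal numpy example, a toy stand-in rather than the study's implementation or data, builds a sparse random reservoir, drives it with a synthetic signal, and trains only the linear readout by ridge regression for one-step-ahead prediction. The reservoir size, signal, and regularization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_res = 100

# Sparse random reservoir (~10% connectivity), rescaled so its spectral
# radius is 0.9 -- the sparsely connected, untrained hidden layer.
W = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, n_res)

# Toy scalar signal standing in for a vortex-diagnostic time series.
t = np.arange(400)
u = np.sin(2 * np.pi * t / 50.0)

# Drive the reservoir and collect states for one-step-ahead targets.
x = np.zeros(n_res)
states = []
for k in range(len(u) - 1):
    x = np.tanh(W @ x + w_in * u[k])
    states.append(x.copy())
X = np.array(states)   # (399, n_res) reservoir states
y = u[1:]              # next-step targets

# Ridge-regression readout: the only trained parameters in an ESN,
# which is what makes training and testing rapid.
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
rmse = float(np.sqrt(np.mean((X[100:] @ w_out - y[100:]) ** 2)))  # skip washout
```

Only `w_out` is fit; the reservoir weights `W` and `w_in` stay fixed, which is the defining design choice of echo state networks.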
Spontaneous isotope fractionation has been reported under nanoconfinement conditions in naturally occurring systems, but the origin of this phenomenon is currently unknown. Two hypotheses have been proposed: one is based on changes in the solvation environment of the isotopes that reduce the non-mass-dependent hydrodynamic contribution to diffusion; the other is that isotopes exhibit mass-dependent surface adsorption, varying their total diffusion through nanoconfined channels. To investigate these hypotheses, benchtop experiments, nuclear magnetic resonance (NMR) spectroscopy, and molecular-scale modeling were applied. Classical molecular dynamics simulations identified that the Na+ and Cl- hydration shells across the three different salt solutions (22Na35Cl, 23Na35Cl, 24Na35Cl) did not vary as a function of the Na+ isotope, but that there was a significant pore-size effect, with larger hydration shells at larger pore sizes. Additionally, while total adsorption times did not vary as a function of the Na+ isotope or pore size, the free ion concentration, that is, ions adsorbed on the surface for less than 5% of the simulation time, did exhibit isotope dependence. Experimentally, challenges occurred in developing a repeatable experiment, but NMR characterization of water diffusion rates through ordered alumina membranes was able to identify two distinct water environments associated with water inside and outside the pores. Further NMR studies could be used to confirm variation in the hydration shells and diffusion rates of dissolved ions in water. Ultimately, mass-dependent adsorption is a primary driver of variations in isotope diffusion rates, rather than the variation in hydration shells that occurs under nanoconfinement.
This Sandia National Laboratories Mission Campaign (MC) seeks to create the technical basis that allows national leaders to efficiently assess and manage the digital assurance of high consequence systems. We will call for transformative research that enables efficient (1) development of provably secure systems and secure integration of untrusted products, (2) intelligent threat mitigation, and (3) digital risk-informed engineering trade-offs. Ultimately, this MC will impact multiple national security missions; it will develop an informed Digital Assurance for High Consequence Systems (DAHCS) community and expand Sandia partnerships to build this national capability.
A collinear Second-Harmonic Orthogonal Polarized (SHOP) interferometer diagnostic capable of making electron areal density measurements of plasmas formed in Magnetically Insulated Transmission Lines (MITLs) has been developed.
Copper is a challenging material to process using laser-based additive manufacturing due to its high reflectivity and high thermal conductivity. Sintering-based processes can produce solid copper parts without the processing challenges and defects associated with laser melting; however, sintering can also cause distortion in copper parts, especially those with thin walls. In this study, we use physics-informed Gaussian process regression to predict and compensate for sintering distortion in thin-walled copper parts produced using a Markforged Metal X bound powder extrusion (BPE) additive manufacturing system. Through experimental characterization and computational simulation of copper’s viscoelastic sintering behavior, we can predict sintering deformation. We can then manufacture, simulate, and test parts with various compensation scaling factors to inform Gaussian process regression and predict a compensated as-printed (pre-sintered) part geometry that produces the desired final (post-sintered) part.
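As an illustration of the regression step, the following numpy sketch, using synthetic data rather than the actual Metal X measurements or our production model, fits a Gaussian process with an RBF kernel to residual distortion measured at a few compensation scaling factors and picks the scale with the lowest predicted distortion. The kernel hyperparameters and the toy response curve are assumptions made for illustration.

```python
import numpy as np

def rbf(a, b, ell=0.1, sf=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_query, noise=1e-6):
    """Standard GP regression posterior mean and variance (RBF kernel)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf(x_query, x_query).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Synthetic stand-in: residual wall distortion (mm) vs. compensation
# scale, a toy quadratic with its minimum near a scale of 1.08.
scales = np.array([0.95, 1.00, 1.05, 1.10, 1.15])
distortion = 10.0 * (scales - 1.08) ** 2 + 0.02

grid = np.linspace(0.95, 1.15, 201)
mean, var = gp_predict(scales, distortion, grid)
best_scale = float(grid[np.argmin(mean)])  # scale predicted to minimize distortion
```

The posterior variance returned alongside the mean is what lets the workflow decide where another compensation trial would be most informative.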
Photocatalytic water splitting using suspensions of nanoparticle photocatalysts is a promising route to economically sustainable production of green hydrogen. The principal challenge is to develop photocatalysts with overall solar-to-hydrogen conversion efficiency that exceeds 10 percent. In this project we have developed a new platform for investigating candidate materials for photocatalytic water splitting. Our platform consists of patterned Au electrodes and a Ag/AgCl reference electrode on an insulating substrate onto which we disperse nanoparticle photocatalysts. We then cover the substrate with a thin layer of ionogel containing a protic ionic liquid that dissolves water from the ambient atmosphere. Using this platform we have demonstrated photoelectrochemical activity mapping for single and small clusters of BiVO4 nanoparticle photocatalysts and correlated these results to their Raman and photoluminescence spectra. The preliminary results suggest a strong correlation for low-efficiency nanoparticles, followed by saturation for those with higher activities, indicating that interface reaction or electrolyte transport becomes the limiting factor. We anticipate that further application of this platform to investigation of candidate photocatalyst materials will provide useful insights into the mechanisms that limit their performance.
Concentrating Solar Power (CSP) requires precision mirrors, and these in turn require metrology systems to measure their optical slope. In this project we studied a color-based approach to the correspondence problem, which is the association of points on an optical target with their corresponding points seen in a reflection. This is a core problem in deflectometry-based metrology, and a color solution would enable important new capabilities. We modeled color as a vector in the [R,G,B] space measured by a digital camera, and explored a dual-image approach to compensate for inevitable changes in illumination color. Through a series of experiments including color target design and dual-image setups both indoors and outdoors, we collected reference/measurement image pairs for a variety of configurations and light conditions. We then analyzed the resulting image pairs by selecting example [R,G,B] pixels in the reference image, and seeking matching [R,G,B] pixels in the measurement image. Modulating a tolerance threshold enabled us to assess both match reliability and match ambiguity, and for some configurations, orthorectification enabled us to assess match accuracy. Using direct-direct imaging, we demonstrated color correspondence achieving average match accuracy values of 0.004 h, where h is the height of the color pattern. We found that wide-area two-dimensional and linear one-dimensional color targets outperformed hybrid linear/lateral gradient targets in the cases studied. Introducing a mirror degraded performance under our current techniques, and we did not have time to evaluate whether matches could be reliably achieved despite varying light conditions. Nonetheless, our results thus far are promising.
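The matching procedure can be sketched as follows; this is an illustrative numpy example with made-up pixel values, not our measurement pipeline. A per-channel ratio taken from a dual-image pair corrects for the illumination change before nearest-color matching within a tolerance, and the number of hits per reference pixel distinguishes unambiguous matches from ambiguous ones or misses.

```python
import numpy as np

def match_colors(reference, measurement, ratio, tol):
    """Match each reference [R,G,B] pixel to measurement pixels.

    `ratio` is a per-channel illumination correction derived from a
    dual-image pair; measurement colors are divided by it before
    comparison. For each reference pixel we return the indices of
    measurement pixels within Euclidean distance `tol` in [R,G,B]
    space: one index is an unambiguous match, several are ambiguous,
    none is a miss.
    """
    corrected = measurement / ratio
    d = np.linalg.norm(reference[:, None, :] - corrected[None, :, :], axis=2)
    return [np.flatnonzero(row <= tol) for row in d]

# Toy dual-image pair: the measurement is the reference seen under a
# warmer light (made-up colors and illumination ratio).
reference = np.array([[200, 40, 40], [40, 200, 40], [40, 40, 200]], float)
ratio = np.array([1.2, 1.0, 0.8])
measurement = reference * ratio
matches = match_colors(reference, measurement, ratio, tol=10.0)
```

Sweeping `tol` is what lets one trade match reliability against match ambiguity, as in the threshold-modulation analysis described above.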
Nuclear magnetic resonance (NMR) spectroscopy yields detailed mechanistic information about chemical structures, reactions, and processes. Photochemistry has widespread use across many industries and holds excellent utility for additive manufacturing (AM) processes. Here, we use photoNMR to investigate three photochemical processes spanning AM-relevant timescales. We first investigate the photodecomposition of a photobase generator on the slow timescale, then the photoactivation of a ruthenium catalyst on the intermediate timescale, and finally the radical polymerization of an acrylate system on the fast timescale. In doing so, we gain fundamental insights into mission-relevant photochemistries and develop a new spectroscopic capability at SNL.
The aviation industry stands at a crossroads, facing the dual challenge of meeting the growing global demand for air travel while mitigating its environmental impact. As concerns over climate change intensify, sustainable aviation fuels (SAFs) have emerged as a promising solution to reduce the carbon footprint of air travel. The aviation sector has long been recognized as a contributor to greenhouse gas emissions, with carbon dioxide (CO2) being a primary concern. SAFs, derived from renewable feedstocks such as biomass, waste oils, or synthetic processes, offer a promising avenue for reducing the net carbon emissions associated with aviation. While SAFs have shown potential in lowering CO2 emissions, the combustion process introduces complexities related to soot particle formation and contrail generation that require comprehensive exploration. These aspects are pivotal not only for their environmental implications but also for their influence on atmospheric climate interactions. As the aviation industry increasingly embraces SAFs to meet sustainability goals, it is imperative to assess their combustion characteristics, unravel the mechanisms of soot formation, and scrutinize the factors influencing contrail development.
We extend an existing approach for efficient use of shared mapped memory across Chapel and C++ for graph data stored as 1-D arrays to sparse tensor data stored using a combination of 2-D and 1-D arrays. We describe the specific extensions that provide use of shared mapped memory tensor data for a particular C++ tensor decomposition tool called GentenMPI. We then demonstrate our approach on several real-world datasets, providing timing results that illustrate minimal overhead incurred using this approach. Finally, we extend our work to improve memory usage and provide convenient random access to sparse shared mapped memory tensor elements in Chapel, while still being capable of leveraging high performance implementations of tensor algorithms in C++.
Protocols play an essential role in Advanced Reactor systems. A diverse set of protocols is available to these reactors. Advanced Reactors benefit from technologies that can minimize their resource utilization and costs. Evaluation frameworks are often used when assessing protocols and processes related to cryptographic security systems. The following report discusses the various characteristics associated with these protocol evaluation frameworks and derives a novel evaluation framework.
This report summarizes Fiscal Year 2023 accomplishments from Sandia National Laboratories’ Wind Energy Program. The portfolio consists of funding provided by the DOE EERE Wind Energy Technologies Office (WETO), Advanced Research Projects Agency-Energy (ARPA-E), Advanced Manufacturing Office (AMO), the Sandia Laboratory Directed Research and Development (LDRD) program, and private industry. These accomplishments were made possible through capabilities investments by WETO, internal Sandia investment, and partnerships between Sandia and other national laboratories, universities, and research institutions around the world. Sandia’s Wind Energy Program is primarily built around core capabilities as expressed in the strategic plan thrust areas, with 29 staff members in the Wind Energy Design and Experimentation department and the Wind Energy Computational Sciences department leading and supporting R&D at the time of this report. Staff from other departments at Sandia support the program by leveraging Sandia’s unique capabilities in other disciplines.
We propose the average spectrum norm to study the minimum number of measurements required to approximate a multidimensional array (i.e., sample complexity) via low-rank tensor recovery. Our focus is on the tensor completion problem, where the aim is to estimate a multiway array using a subset of tensor entries corrupted by noise. Our average spectrum norm-based analysis provides near-optimal sample complexities, exhibiting dependence on the ambient dimensions and rank that do not suffer from exponential scaling as the order increases.
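As a toy instance of the recovery problem studied here, illustrative only and not using the average spectrum norm machinery, the following numpy sketch completes a 3-way tensor assumed to have CP rank one from a random subset of its entries by alternating least squares over the observed positions. The function name, rank-one restriction, and iteration count are assumptions for illustration.

```python
import numpy as np

def rank1_complete(T, mask, iters=500):
    """Complete a 3-way tensor assumed rank one, T ~ a (x) b (x) c,
    using only entries where mask is True, via alternating least
    squares: each factor update is a closed-form masked LS fit with
    the other two factors held fixed."""
    I, J, K = T.shape
    rng = np.random.default_rng(1)
    a = rng.random(I) + 0.5
    b = rng.random(J) + 0.5
    c = rng.random(K) + 0.5
    M = mask.astype(float)
    TM = T * M  # zero out unobserved entries
    for _ in range(iters):
        W = np.einsum("j,k->jk", b * b, c * c)
        a = np.einsum("ijk,j,k->i", TM, b, c) / np.einsum("ijk,jk->i", M, W)
        W = np.einsum("i,k->ik", a * a, c * c)
        b = np.einsum("ijk,i,k->j", TM, a, c) / np.einsum("ijk,ik->j", M, W)
        W = np.einsum("i,j->ij", a * a, b * b)
        c = np.einsum("ijk,i,j->k", TM, a, b) / np.einsum("ijk,ij->k", M, W)
    return np.einsum("i,j,k->ijk", a, b, c)
```

With a 5 x 6 x 4 rank-one tensor, 40% of the entries observed is far more than the 5 + 6 + 4 free parameters, which is the kind of sample-complexity gap the analysis above quantifies.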
98% of the budget is deployed, with the remaining $325,000 to be assigned by the end of May. Costs plus commitments total 58% of the deployed budget. 29 projects have been kicked off and are in progress. Five project plans are being finalized and will be kicked off in early summer. The February start has contributed to the risk of not costing the full FY23 budget.
The design of high-consequence controllers (in weapons systems, autonomy, etc.) that do what they are supposed to do is a significant challenge. Testing simply does not come close to meeting the requirements for assurance. Today, circuit designers at Sandia (and elsewhere) typically capture the core behavior of their components using state models in tools such as STATEFLOW. They then check that their models meet certain requirements (e.g., "The system bus must not deadlock" or "Both traffic lights at an intersection must not be green at the same time") using tools called model checkers. If the model checker returns "yes," then the property is guaranteed to be satisfied by the model. However, there are several drawbacks to this industry practice: (1) there is a lot of detail to get right, which is particularly challenging when there are multiple components requiring complex coordination; (2) any errors returned by the model checker have to be traced back through the design and fixed, necessitating rework; and (3) there are severe scalability problems with this approach, particularly when dealing with concurrency. All this places high demands on designers, who now face not only an accelerated schedule but also controllers of increasing complexity. This report describes a new and fundamentally different approach to the construction of safety-critical digital controllers. Instead of directly constructing a complete model and then trying to verify it, the designer can start with an initial abstract (think "sketch") model plus the requirements, from which a correct concrete model is automatically synthesized. There is no need for post-hoc verification of required functional properties. Having a tool to carry this out will significantly impact the nation’s ability to ensure the safety of high-consequence digital systems. The approach has been implemented in a prototype tool, along with a suite of examples, including ones that reflect actual problems faced by designers.
Our approach operates on a variant of Statecharts developed at Sandia called Qspecs. Statecharts are a widely used formalism for developing concurrent reactive systems, supporting scalability by allowing state models to contain composite states, which are serial or parallel compositions of substates that can themselves contain statecharts. Statecharts thus enable an incremental style of development, in which states are progressively refined to incorporate greater detail. Our approach formulates a set of constraints from the structure of the models and the requirements and propagates these constraints to a fixpoint. The solution to the constraints is an inductive invariant along with guards on the transitions. We also show how our approach extends to implementation refinement, decomposition, composition, and elaboration. We currently handle safety requirements written in LTL (Linear Temporal Logic).
Condensation trails, or contrails, are aircraft-induced cirrus clouds. They come from the formation of water droplets, later converting to ice crystals as a result of water vapor condensing on aerosols either emitted by the aircraft engines or already present in the upper atmosphere. While there is ongoing debate about their true impact, contrails are estimated to be a major contributor to climate forcing from aviation. We remind that air transportation currently accounts for about 5 % of the global anthropogenic climate forcing, and that it is anticipated that air traffic will double in the coming decade or two. The expected growth reinforces the urgency of the need to develop a plan to better understand contrail formation and persistence, and deploy means to reduce or avoid contrail formation, or greatly mitigate their impact. It is evident that contrails should be part of the picture when developing a plan to make the aviation sector sustainable.