We extend an existing approach for efficient use of shared mapped memory across Chapel and C++, originally developed for graph data stored as 1-D arrays, to sparse tensor data stored using a combination of 2-D and 1-D arrays. We describe the specific extensions that provide access to shared mapped memory tensor data for a particular C++ tensor decomposition tool called GentenMPI. We then demonstrate our approach on several real-world datasets, providing timing results that illustrate the minimal overhead incurred by this approach. Finally, we extend our work to improve memory usage and provide convenient random access to sparse shared mapped memory tensor elements in Chapel, while still being capable of leveraging high-performance implementations of tensor algorithms in C++.
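To make the mechanism concrete, the following minimal sketch (in Python rather than Chapel or C++, with hypothetical file names and helper functions) shows the basic pattern the abstract describes: a sparse tensor in coordinate format backed by a mapped 2-D index array and a 1-D value array, which any cooperating process can map without copying.

```python
# Minimal sketch, assuming a COO-format sparse tensor shared via file-backed
# mapped memory. One process creates and fills the arrays; another (e.g., a
# C++ solver) maps the same files. Names and shapes are illustrative only.
import numpy as np

ORDER, NNZ = 3, 1000

def create_shared_tensor(prefix, nnz=NNZ, order=ORDER):
    """Create file-backed index and value arrays for a sparse COO tensor."""
    idx = np.memmap(prefix + "_idx.bin", dtype=np.int64, mode="w+", shape=(nnz, order))
    val = np.memmap(prefix + "_val.bin", dtype=np.float64, mode="w+", shape=(nnz,))
    return idx, val

def open_shared_tensor(prefix, nnz=NNZ, order=ORDER):
    """Map the same files read-only from another process."""
    idx = np.memmap(prefix + "_idx.bin", dtype=np.int64, mode="r", shape=(nnz, order))
    val = np.memmap(prefix + "_val.bin", dtype=np.float64, mode="r", shape=(nnz,))
    return idx, val

idx, val = create_shared_tensor("/tmp/tensor")
idx[0] = [2, 5, 7]        # nonzero at coordinate (2, 5, 7)
val[0] = 3.14
idx.flush(); val.flush()  # make the data visible to the other process
```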
Actinide thin-film coatings such as uranium dioxide (UO2) play an important role in nuclear reactors and other mission-relevant applications, but realization of their potential requires a deep fundamental understanding of the chemical vapor deposition (CVD) processes used for their growth. The slow experimental progress can be attributed, in part, to the standard safety guidelines associated with handling uranium byproducts, which are often corrosive, toxic, and radioactive. Accurate simulation techniques, when used in concert with experiment, can improve laboratory safety, material durability, and deliverable timeframes. However, state-of-the-art computational methods are either insufficiently accurate or intractably expensive. To remedy this situation, in this project we suggested a machine-learning (ML) accelerated workflow for simulating molecular clustering toward deposition. As a benchmark test case, we considered molecular clustering in steam and assessed independent components of our workflow by comparing with measured thermodynamic properties of water. After analyzing each component individually and finding no fundamental barrier to realization of the workflow, we attempted to integrate the ML component, a Sandia-developed tool called FitSNAP. As this was the first application of FitSNAP to atoms and molecules in the gas phase at Sandia, the method required more fitting data than was originally anticipated. Systematic improvements were made by including in the fit data diatomic potentials, molecular single-bond-breaking curves, and symmetry-constrained intermolecular potentials. We concluded that our strategy provides a feasible pathway toward modeling CVD and related processes, but that extensive training data must be generated before it can be of practical use.
Parekh, Ojas D.; Lougovski, Pavel; Broz, Joe; Byrd, Mark; Chapman, Joseph C.; Chembo, Yanne; De Jong, Wibe A.; Figueroa, Eden; Humble, Travis S.; Larson, Jeffrey; Quiroz, Gregory; Ravi, Gokul; Shammah, Nathan; Svore, Krysta M.; Wu, Wenji; Zeng, William J.
Employing quantum mechanical resources in computing and networking opens the door to new computation and communication models and potential disruptive advantages over classical counterparts. However, quantifying and realizing such advantages face extensive scientific and engineering challenges. Investments by the Department of Energy (DOE) have driven progress toward addressing such challenges. Quantum algorithms have been recently developed, in some cases offering asymptotic exponential advantages in speed or accuracy, for fundamental scientific problems such as simulating physical systems, solving systems of linear equations, or solving differential equations. Empirical demonstrations on nascent quantum hardware suggest better performance than classical analogs on specialized computational tasks favorable to the quantum computing systems. However, demonstration of an end-to-end, substantial and rigorously quantifiable quantum performance advantage over classical analogs remains a grand challenge, especially for problems of practical value. The definition of requirements for quantum technologies to exhibit scalable, rigorous, and transformative performance advantages for practical applications also remains an outstanding open question, namely, what will be required to ultimately demonstrate practical quantum advantage?
Imaging methods driven by probes, electrons, and ions have played a dominant role in modern science and engineering. Opportunities for machine vision and AI, which have focused on consumer problems like driving and feature recognition, are now presenting themselves for automating aspects of the scientific process. This proposal aims to enable and drive discovery in ultra-low energy implantation by taking advantage of faster processing, flexible control and detection methods, and architecture-agnostic workflows that will result in higher efficiency and shorter scientific development cycles. Custom microscope control, collection, and analysis hardware will provide a framework for conducting novel in situ experiments revealing unprecedented insight into surface dynamics at the nanoscale. Ion implantation is a key capability for the semiconductor industry. As devices shrink, novel materials enter the manufacturing line, and quantum technologies transition to the mainstream, traditional implantation methods fall short in terms of energy, ion species, and positional precision. Here we demonstrate 1 keV focused ion beam Au implantation into Si and validate the results via atom probe tomography. We show the Au implant depth at 1 keV is 0.8 nm and that identical results for low energy ion implants can be achieved by either lowering the column voltage or decelerating ions using bias, while maintaining a sub-micron beam focus. We compare our experimental results to static calculations using SRIM and dynamic calculations using the binary collision approximation codes TRIDYN and IMSIL. A large discrepancy between the static and dynamic simulations is found, which is due to lattice enrichment with high-stopping-power Au and surface sputtering. Additionally, we demonstrate how model details are particularly important to the simulation of these low-energy heavy-ion implantations. Finally, we discuss how our results pave the way to much lower implantation energies, while maintaining high spatial resolution.
The Artificial Intelligence Enhanced Co-Design for Next Generation Microelectronics virtual workshop was held April 4-5, 2023, and attended by subject matter experts from universities, industry, and national laboratories. This was the third in a series of workshops to motivate the research community to identify and address major challenges facing microelectronics research and production. The 2023 workshop focused on a set of topics from materials to computing algorithms, and included discussions on relevant federal legislation such as the Creating Helpful Incentives to Produce Semiconductors and Science Act (CHIPS Act), which was signed into law in the summer of 2022. Talks at the workshop included edge computing in radiation environments, new materials for neuromorphic computing, advanced packaging for microelectronics, and new AI techniques. We also received project updates from several of the Department of Energy (DOE) microelectronics co-design projects funded in the fall of 2021, and from three of the Energy Frontier Research Centers (EFRCs) that had been funded in the fall of 2022. The workshop also conducted a set of breakout discussions around the five principal research directions (PRDs) from the 2018 Department of Energy workshop report: 1) define innovative material, device, and architecture requirements driven by applications, algorithms, and software; 2) revolutionize memory and data storage; 3) re-imagine information flow unconstrained by interconnects; 4) redefine computing by leveraging unexploited physical phenomena; 5) reinvent the electricity grid through new materials, devices, and architectures. We tasked each breakout group to consider one primary PRD (and other PRDs as relevant topics arose during discussions) and to address questions such as whether the research community has embraced co-design as a methodology and whether new developments at any level of innovation, from materials to programming models, require the research community to reevaluate the PRDs developed in 2018.
This work demonstrates that classical shear-flow stability theory can be successfully applied to modify wind turbine wakes and also explains the success of several emerging, empirically derived control methods (i.e., dynamic induction and helix control). Linear stability theory predictions are extended to include the effects of non-axisymmetric inflow profiles, such as wind shear, which is shown not to strongly affect the primary forcing frequency. The predictions, as well as idealized large-eddy simulations using an actuator-line representation of the turbine blades, agree that the n = 0 and ±1 modes have faster initial growth rates than higher-order modes, suggesting the lower-order modes are more appropriate for wake control. Exciting the lower-order modes with periodic pitching of the blades produces higher entrainment into the wake and consequently faster wake recovery.
The National Solar Thermal Test Facility (NSTTF) is a DOE Core Capability and Technology Deployment Center located in Albuquerque, NM. It is operated by Sandia National Laboratories (Sandia) for the U.S. Department of Energy (DOE). The NSTTF is the only multi-mission, multi-use, multi-story test facility of its type in the United States. The NSTTF was founded in 1978 and began testing with high heat flux that same year. Over the past 45 years, the NSTTF has been at the forefront of the research, design, fabrication, and testing of many of the critical Concentrating Solar Power (CSP) technologies. These technologies have allowed costs to be dramatically reduced from over $\$$0.40/kWh to $\$$0.12/kWh since the conception of this renewable energy technology. The NSTTF has worked to make the Solar Energy Generating Systems (SEGS) parabolic trough plants successful, while also working with the Solar One and Solar Two facilities for successful implementation. Over the four decades since its founding, the mission of the NSTTF has grown to include new receiver technologies, like our generation 3 falling particle system (G3P3 Tower), optical metrology techniques like SOFAST, molten salt testing, thermal energy storage, solar thermal chemistry, and more. We continue to expand our capabilities in pursuit of the DOE SETO mission and the DOE SunShot 2030 goal: an unsubsidized LCOE of $\$$0.05/kWh for CSP that includes 12 or more hours of thermal energy storage. To support both the DOE SETO mission and the CSP sector as a whole, we are working to develop our operations and maintenance framework to provide a world-class testing facility in support of our technological achievements. To accomplish both of these missions, the NSTTF draws on the decades of experience and expertise of our staff along with the world-class facilities at Sandia National Laboratories to further the science of concentrated solar thermal technologies in diverse applications. We remain a trusted partner for high-quality and impactful research in both fundamental and applied arenas. We are able to provide our partners with both one-of-a-kind testing platforms and world-class analytics.
Batched sparse linear algebra operations in general, and solvers in particular, have become a major algorithmic development activity and foremost performance engineering effort in numerical software library work on modern hardware with accelerators such as GPUs. Many applications, ECP and non-ECP alike, require simultaneous solutions of many small linear systems of equations that are structurally sparse in one form or another. In order to move toward high hardware utilization levels, it is important to provide these applications with interface designs that are both functionally efficient and performance portable, and that give full access to the appropriate batched sparse solvers running on the modern hardware accelerators prevalent across DOE supercomputing sites since the inception of ECP. To this end, we present here a summary of recent advances in the interface designs in use by HPC software libraries supporting batched sparse linear algebra and in the development of sparse batched kernel codes for solvers and preconditioners. We also address potential interoperability opportunities to keep the corresponding software portable between the major hardware accelerators from AMD, Intel, and NVIDIA, while maintaining the appropriate disclosure levels conforming to the active NDA agreements. The presented interface specifications include a mix of batched band, sparse iterative, and sparse direct solvers with accompanying functionality that is already required by the application codes or that we anticipate will be needed in the near future. This report summarizes progress in Kokkos Kernels and the xSDK libraries MAGMA, Ginkgo, hypre, PETSc, and SuperLU.
This report summarizes work towards developing stochastic weighted particle methods (SWPM) for future application in hypersonic flows. Extensive changes to Sandia's direct simulation Monte Carlo (DSMC) solver, SPARTA (Stochastic Particle Real Time Analyzer), were made to enable the particle splitting and reduction capabilities necessary for SWPM. The results from one-dimensional Couette and Fourier flows suggest that SWPM can reproduce the correct transport for a large range of Knudsen numbers with adequate accuracy. The associated velocity and temperature profiles are in good agreement with DSMC. An issue with particle placement during particle number reduction is identified, for which a simple but effective solution based on minimizing the center-of-mass error is proposed. High-Mach wheel flows are simulated using the SWPM and DSMC methods. SWPM is capable of providing nearly an order of magnitude increase in efficiency over DSMC while retaining high accuracy.
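As an illustration of the reduction step, the sketch below shows the simplest weighted-particle merge, assuming a scheme that conserves mass and momentum exactly by placing the merged particle at the group's center of mass; this mirrors the center-of-mass placement idea mentioned above, but it is not SPARTA's actual implementation (which must also treat energy and higher moments).

```python
# Minimal sketch of one weighted-particle reduction step. A group of
# computational particles with weights w, positions x, and velocities v is
# replaced by a single particle at the group's center of mass, conserving
# mass and momentum; energy conservation requires a richer multi-particle
# scheme. Illustrative only.
import numpy as np

def reduce_group(w, x, v):
    """Merge weighted particles (w: (n,), x, v: (n, 3)) into one particle."""
    W = w.sum()                          # total weight (mass) is conserved
    x_cm = (w[:, None] * x).sum(0) / W   # center of mass minimizes placement error
    v_cm = (w[:, None] * v).sum(0) / W   # mean velocity conserves momentum
    return W, x_cm, v_cm

rng = np.random.default_rng(0)
w = rng.uniform(1.0, 2.0, 8)
x = rng.uniform(0.0, 1e-3, (8, 3))
v = rng.normal(0.0, 300.0, (8, 3))
print(reduce_group(w, x, v))
```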
98% of the budget has been deployed, with the remaining $325,000 to be assigned by the end of May. Costs plus commitments total 58% of the deployed budget. 29 projects have been kicked off and are in progress. Five project plans are being finalized and will be kicked off in early summer. The February start has contributed to the risk of not costing all of the FY23 budget.
High-quality uncertainty quantification (UQ) is a critical component of enabling trust in deep learning (DL) models and is especially important if DL models are to be deployed in high-consequence applications. Conformal prediction (CP) methods represent an emerging nonparametric approach for producing UQ that is easily interpretable and, under weak assumptions, provides a guarantee regarding UQ quality. This report describes the research outputs of an Exploratory Express Laboratory Directed Research and Development (LDRD) project at Sandia National Laboratories. This project focused on how best to implement CP methods for DL models. This report introduces new methodology for obtaining high-quality UQ from DL models using CP methods, describes a novel system of assessing UQ quality, and provides experimental results that demonstrate the quality of the new methodology and utility of the UQ quality assessment system. Avenues for future research and discussion of potential impacts at Sandia and in the wider research community are also given.
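For readers unfamiliar with CP, the sketch below shows the standard split conformal construction for regression that such methods build on; the model here is a trivial stand-in for a trained DL regressor, and the function names are illustrative, not the project's actual code.

```python
# Minimal sketch of split conformal prediction for regression: calibration
# residuals yield an interval with (1 - alpha) marginal coverage under
# exchangeability. Any trained model exposing predict() could be used.
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Return lower/upper prediction bounds for X_test."""
    scores = np.abs(y_cal - predict(X_cal))            # nonconformity scores
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    pred = predict(X_test)
    return pred - q, pred + q

# Toy usage with a trivial "model"
rng = np.random.default_rng(1)
predict = lambda X: 2.0 * X.ravel()
X_cal = rng.uniform(0, 1, (200, 1))
y_cal = 2 * X_cal.ravel() + rng.normal(0, 0.1, 200)
lo, hi = split_conformal_interval(predict, X_cal, y_cal, rng.uniform(0, 1, (5, 1)))
print(lo, hi)
```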
We experimentally and computationally investigate a proposed frequency-domain method for detecting and tracking cislunar spacecraft and near-earth asteroids using heliostat fields at night. Unlike imaging, which detects spacecraft and asteroids by their streak in sidereally-fixed long-exposure photographs, our proposed detection method oscillates the orientation of heliostats concentrating light from the stellar field and measures the light’s photocurrent power spectrum at sub-milliHertz resolution. If heliostat oscillation traces out an ellipse fixed in the galactic coordinate system, spacecraft or asteroids produce a peak in photocurrent power spectrum at a frequency slightly shifted from the starlight peak. The frequency shift is on the scale of milliHertz and proportional to apparent angular rate relative to sidereal. Relative phase corresponds to relative angular position, enabling tracking. A potential advantage of this frequency-domain method over imaging is that detectivity improves with apparent angular rate and number of heliostats. Since heliostats are inexpensive compared to an astronomical observatory and otherwise unused at night, the proposed method may cost-effectively augment observatory systems such as NASA’s Asteroid Terrestrial-impact Last Alert System (ATLAS).
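The following toy simulation (all numbers notional, not the experiment's parameters) illustrates the detection principle: a faint source whose apparent angular rate differs slightly from sidereal produces a power-spectrum peak offset from the starlight peak by a milliHertz-scale frequency, resolvable given a sufficiently long record.

```python
# Illustrative sketch of the frequency-domain principle: two modulated sources,
# one sidereally fixed and one drifting, yield spectral peaks separated by a
# small shift proportional to the relative angular rate. Numbers are notional.
import numpy as np

fs, T = 10.0, 20000.0                      # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_scan = 0.5                               # heliostat oscillation frequency (Hz)
df = 2e-3                                  # apparent-rate-induced shift (Hz)
photocurrent = (np.cos(2 * np.pi * f_scan * t)                   # starlight peak
                + 0.05 * np.cos(2 * np.pi * (f_scan + df) * t))  # faint object

spectrum = np.abs(np.fft.rfft(photocurrent)) ** 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)    # resolution = 1/T = 0.05 mHz
peaks = freqs[np.argsort(spectrum)[-2:]]   # the two strongest spectral lines
print(sorted(peaks))                       # ~[0.5, 0.502] Hz
```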
The goal of this Exploratory Express project was to explore the possibility of tunable ferromagnetism in Mn- or Cr-incorporated epitaxial Ga2O3 films. Tunability of magnetic properties can enable novel applications in spintronics, quantum computing, and magnetism-based logic by allowing control of magnetism down to the nanoscale. Carrier-mediated (electron or hole) ferromagnetic ordering in semiconductors can lead to tunable ferromagnetism by leveraging the tunability of carrier density with doping level, gate electric field, or optical pumping of the carriers. The magnetic ions (Cr or Mn) in Ga2O3 act as localized spin centers which can potentially be magnetically coupled through conduction electrons to enable ferromagnetic ordering. Here we investigated tunable ferromagnetism in a beta-Ga2O3 semiconductor host with various n-doping levels by incorporating 2.4 atomic percent Mn or Cr. The R&D approach involved growth of an epitaxial Ga2O3 film on a sapphire or Ga2O3 substrate, implantation of Mn or Cr ions, annealing of the samples post implantation, and magnetic measurements. We studied the magnetic behavior of Mn:Ga2O3 as a function of different n-doping levels and various annealing temperatures. Vibrating sample magnetometry (VSM) measurements exhibited strong ferromagnetic signals from the annealed Mn:Ga2O3 sample with an n-doping level of $5\times10^{19}$ cm$^{-3}$. This ferromagnetic behavior disappears from Mn:Ga2O3 when the n-doping level is reduced to $5\times10^{16}$ cm$^{-3}$. Although these results are to be further verified by other measurement schemes, due to the observation of background ferromagnetism from the growth substrate, they indicate the possibility of tunable ferromagnetism in Mn:Ga2O3 mediated by conduction electrons.
The aviation industry stands at a crossroads, facing the dual challenge of meeting the growing global demand for air travel while mitigating its environmental impact. As concerns over climate change intensify, sustainable aviation fuels (SAFs) have emerged as a promising solution to reduce the carbon footprint of air travel. The aviation sector has long been recognized as a contributor to greenhouse gas emissions, with carbon dioxide (CO2) being a primary concern. SAFs, derived from renewable feedstocks such as biomass, waste oils, or synthetic processes, offer a promising avenue for reducing the net carbon emissions associated with aviation. While SAFs have shown potential in lowering CO2 emissions, the combustion process introduces complexities related to soot particle formation and contrail generation that require comprehensive exploration. These aspects are pivotal not only for their environmental implications but also for their influence on atmospheric climate interactions. As the aviation industry increasingly embraces SAFs to meet sustainability goals, it is imperative to assess their combustion characteristics, unravel the mechanisms of soot formation, and scrutinize the factors influencing contrail development.
Our research focused on forecasting the position and shape of the winter stratospheric polar vortex at a subseasonal timescale of 15 days in advance. To achieve this, we employed both statistical and neural network machine learning techniques. The analysis was performed on 42 winter seasons of reanalysis data provided by NASA, giving us a total of 6,342 days of data. The state of the polar vortex was determined by using geometric moments to calculate the centroid latitude and the aspect ratio of an ellipse fit onto the vortex. Time series for thirty additional precursors were calculated to help improve the predictive capabilities of the algorithm. Feature importance of these precursors was assessed using random forests to measure predictive value and determine the ideal number of precursors. Then, using the precursors identified as important, various statistical methods were tested for predictive accuracy, with random forest and nearest neighbor performing the best. An echo state network, a type of recurrent neural network that features a sparsely connected hidden layer and a reduced number of trainable parameters, allowing rapid training and testing, was also implemented for the forecasting problem. Hyperparameter tuning was performed for each method using a subset of the training data. The algorithms were trained and tuned on the first 41 years of data, then tested for accuracy on the final year. In general, the centroid latitude of the polar vortex proved easier to predict than the aspect ratio across all algorithms. Random forest outperformed the other statistical forecasting algorithms overall but struggled to predict extreme values. Forecasts from the echo state network suggested strong predictive capability past 15 days, but further work is required to fully realize the potential of recurrent neural network approaches.
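As a reference for the neural approach, a minimal echo state network in the standard formulation is sketched below; the reservoir size, sparsity, spectral radius, and training signal are illustrative, not the values or data used in the study.

```python
# Minimal echo state network sketch: a fixed random sparse reservoir with only
# the linear readout trained by ridge regression. Hyperparameters are notional.
import numpy as np

rng = np.random.default_rng(0)
N, n_in = 400, 1                     # reservoir size, input dimension
W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.normal(0, 1, (N, N)) * (rng.uniform(0, 1, (N, N)) < 0.05)  # sparse
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.9

def run_reservoir(u):
    """Drive the reservoir with input series u of shape (T, n_in)."""
    x, states = np.zeros(N), []
    for ut in u:
        x = np.tanh(W @ x + W_in @ ut)
        states.append(x.copy())
    return np.array(states)

# Train the readout to predict the series 15 steps ahead (ridge regression)
u = np.sin(np.arange(2000) * 0.05)[:, None]
states = run_reservoir(u)
X, Y = states[:-15], u[15:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
pred = states[-1] @ W_out            # 15-step-ahead forecast from final state
print(pred)
```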
The generation of synthetic seismograms through simulation is a fundamental tool of seismology, required to run quantitative hypothesis tests. A variety of approaches have been developed throughout the seismological community, and each has its own specific user interface based on its implementation. This poses a challenge to researchers, who must learn a new interface for each piece of software they wish to use, and creates substantial difficulties when attempting to compare results from different tools. Here we provide a unified interface that facilitates interoperability amongst several simulation tools through a modern containerized Python package. Further, this package includes post-processing analysis modules designed to facilitate end-to-end analysis of synthetic seismograms. In this report we present the conceptual guidance and an example implementation of the new Waveform Simulation Framework.
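The flavor of such a unified interface is sketched below; the class and method names are hypothetical stand-ins, not the package's actual API.

```python
# Hypothetical sketch of a unified simulation interface: each tool is wrapped
# in an adapter exposing a common run() call, so downstream post-processing
# never touches tool-specific input formats. Names are illustrative only.
from abc import ABC, abstractmethod

class SimulatorAdapter(ABC):
    """Common interface over heterogeneous seismic simulation codes."""
    @abstractmethod
    def run(self, source: dict, stations: list, model: str) -> dict:
        """Return synthetic seismograms keyed by station name."""

class ContainerizedSimulator(SimulatorAdapter):
    def __init__(self, image: str):
        self.image = image        # container image holding the solver

    def run(self, source, stations, model):
        # A real adapter would launch the container, write the tool's native
        # input files, and parse its native output into a tool-agnostic
        # seismogram structure. Left unimplemented in this sketch.
        raise NotImplementedError

# Downstream analysis compares tools through the same call signature:
# for sim in (ContainerizedSimulator("toolA:latest"),
#             ContainerizedSimulator("toolB:latest")):
#     seismograms = sim.run(source, stations, model="ak135")
```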
Nuclear magnetic resonance (NMR) spectroscopy yields detailed mechanistic information about chemical structures, reactions, and processes. Photochemistry has widespread use across many industries and holds excellent utility for additive manufacturing (AM) processes. Here, we use photoNMR to investigate three photochemical processes spanning AM-relevant timescales. We first investigate the photodecomposition of a photobase generator on the slow timescale, then the photoactivation of a ruthenium catalyst on the intermediate timescale, and finally the radical polymerization of an acrylate system on the fast timescale. In doing so, we gain fundamental insights into mission-relevant photochemistries and develop a new spectroscopic capability at SNL.
Photocatalytic water splitting using suspensions of nanoparticle photocatalysts is a promising route to economically sustainable production of green hydrogen. The principal challenge is to develop photocatalysts with overall solar-to-hydrogen conversion efficiency that exceeds 10 percent. In this project we have developed a new platform for investigating candidate materials for photocatalytic water splitting. Our platform consists of patterned Au electrodes and a Ag/AgCl reference electrode on an insulating substrate onto which we disperse nanoparticle photocatalysts. We then cover the substrate with a thin layer of ionogel containing a protic ionic liquid that dissolves water from the ambient air. Using this platform we have demonstrated photoelectrochemical activity mapping for single and small clusters of BiVO4 nanoparticle photocatalysts and correlated these results to their Raman and photoluminescence spectra. The preliminary results suggest a strong correlation for low-efficiency nanoparticles, followed by saturation for those with higher activities, indicating that interface reaction or electrolyte transport becomes the limiting factor. We anticipate that further application of this platform to the investigation of candidate photocatalyst materials will provide useful insights into the mechanisms that limit their performance.
Condensation trails, or contrails, are aircraft-induced cirrus clouds. They arise from the formation of water droplets, which later convert to ice crystals, as water vapor condenses on aerosols either emitted by the aircraft engines or already present in the upper atmosphere. While there is ongoing debate about their true impact, contrails are estimated to be a major contributor to climate forcing from aviation. We note that air transportation currently accounts for about 5% of global anthropogenic climate forcing, and that air traffic is anticipated to double in the coming decade or two. This expected growth reinforces the urgent need to develop a plan to better understand contrail formation and persistence, and to deploy means to reduce or avoid contrail formation, or greatly mitigate its impact. It is evident that contrails should be part of the picture when developing a plan to make the aviation sector sustainable.
Copper is a challenging material to process using laser-based additive manufacturing due to its high reflectivity and high thermal conductivity. Sintering-based processes can produce solid copper parts without the processing challenges and defects associated with laser melting; however, sintering can also cause distortion in copper parts, especially those with thin walls. In this study, we use physics-informed Gaussian process regression to predict and compensate for sintering distortion in thin-walled copper parts produced using a Markforged Metal X bound powder extrusion (BPE) additive manufacturing system. Through experimental characterization and computational simulation of copper’s viscoelastic sintering behavior, we can predict sintering deformation. We can then manufacture, simulate, and test parts with various compensation scaling factors to inform Gaussian process regression and predict a compensated as-printed (pre-sintered) part geometry that produces the desired final (post-sintered) part.
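The regression step can be illustrated with a small sketch, assuming the simplified setting of a scalar compensation scaling factor and toy distortion data; the physics-informed character is mimicked here by including simulation output as a regression feature. None of the numbers come from the study.

```python
# Illustrative sketch of Gaussian process regression over compensation trials:
# (compensation scale factor, simulated distortion) -> measured distortion (mm).
# All data below are invented for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.array([[1.00, 0.82], [1.02, 0.55], [1.04, 0.31], [1.06, 0.12]])
y = np.array([0.80, 0.52, 0.28, 0.09])

gp = GaussianProcessRegressor(kernel=RBF([0.05, 0.5]) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, y)

# Pick the scale factor whose predicted distortion is closest to zero
cands = np.column_stack([np.linspace(1.0, 1.08, 81),
                         np.linspace(0.82, 0.0, 81)])  # paired with sim values
mu, sd = gp.predict(cands, return_std=True)            # sd gives uncertainty
best = cands[np.argmin(np.abs(mu))]
print("suggested compensation scale:", best[0])
```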
Low-dimensional materials show great promise for enhanced computing and sensing performance in mission-relevant environments. However, integrating low-dimensional materials into conventional electronics remains a challenge. Here, we demonstrate a novel transfer method by which low-dimensional materials and their heterostructures can be transferred onto any arbitrary substrate. Our method relies on a water-soluble GeO2 substrate from which low-dimensional materials are transferred without significant perturbation. We apply the method to transfer a working electronic device based on a low-dimensional material. Process developments are achieved to enable the fabrication and transfer of a working electronic device, including the growth of a high-k dielectric on GeO2 by atomic layer deposition and insertion of an indium diffusion barrier into the device gate stack. This work supports Sandia's heterogeneous integration strategy to broaden the implementation of low-dimensional films and their devices.
The project objective is to develop high-magnetization, low-loss iron nitride-based soft magnetic composites (SMCs) for electrical machines. These new SMCs will enable low eddy current losses and therefore highly efficient motor operation at rotational speeds up to 20,000 rpm. Additionally, the iron nitride and epoxy composites will be capable of operating at temperatures of 150 °C or greater over a lifetime of 300,000 miles or 15 years.
We propose the average spectrum norm to study the minimum number of measurements required to approximate a multidimensional array (i.e., the sample complexity) via low-rank tensor recovery. Our focus is on the tensor completion problem, where the aim is to estimate a multiway array using a subset of tensor entries corrupted by noise. Our average spectrum norm-based analysis provides near-optimal sample complexities, exhibiting dependence on the ambient dimensions and rank that does not suffer from exponential scaling as the order increases.
This Sandia National Laboratories Mission Campaign (MC) seeks to create the technical basis that allows national leaders to efficiently assess and manage the digital assurance of high consequence systems. We will call for transformative research that enables efficient (1) development of provably secure systems and secure integration of untrusted products, (2) intelligent threat mitigation, and (3) digital risk-informed engineering trade-offs. Ultimately, this MC will impact multiple national security missions; it will develop an informed Digital Assurance for High Consequence Systems (DAHCS) community and expand Sandia partnerships to build this national capability.
To move toward rational design of efficient organic light-emitting diodes based on the radical idea of inverted singlet-triplet gap (INVEST) systems, we propose a set of novel quantum chemical approaches, predictive but low-cost, to unveil a set of structure-property relationships. We perform a computational study of a series of substituted molecules based on a small set of known INVEST molecules. Our study demonstrates a high degree of correlation between the intramolecular charge transfer and the singlet-triplet energy gap and hints toward the use of a quantitative estimate of charge transfer to predict and modulate these energy gaps. We aim to create a database of INVEST molecules that includes accurate benchmarks of singlet-triplet energy gaps. Furthermore, we aim to link structural features and molecular properties, enabling a control knob for rational design.
Sandia is a federally funded research and development center (FFRDC) focused on developing and applying advanced science and engineering capabilities to mitigate national security threats. This is accomplished through the exceptional staff leading research at the Labs and partnering with universities and companies. Sandia's LDRD program aims to maintain the scientific and technical vitality of the Labs and to enhance the Labs' ability to address future national security needs. The program funds foundational, leading-edge discretionary research projects that cultivate and utilize core science, technology, and engineering (ST&E) capabilities. Per Congressional intent (P.L. 101-510) and Department of Energy (DOE) guidance (DOE Order 413.2C, Chg 1), Sandia's LDRD program is crucial to maintaining the nation's scientific and technical vitality.
Protocols play an essential role in Advanced Reactor systems. A diverse set of protocols is available to these reactors. Advanced Reactors benefit from technologies that can minimize their resource utilization and costs. Evaluation frameworks are often used when assessing protocols and processes related to cryptographic security systems. The following report discusses the various characteristics associated with these protocol evaluation frameworks and derives a novel evaluation framework.
We present a fast, Bayes-optimal-approximating tensor network decoder for planar quantum LDPC codes based on the tensor renormalization group algorithm, originally proposed by Levin and Nave. By precomputing the renormalization group flow for the null syndrome, we need only recompute tensor contractions in the causal cone of the measured syndrome at the time of decoding. This allows us to achieve an overall runtime complexity of $\mathcal{O}(pn\chi^6)$, where $p$ is the depolarizing noise rate and $\chi$ is the cutoff value used to control the singular value decomposition approximations used in the algorithm. We apply our decoder to the surface code in the code capacity noise model and compare its performance to the original matrix product state (MPS) tensor network decoder introduced by Bravyi, Suchara, and Vargo. The MPS decoder has a $p$-independent runtime complexity of $\mathcal{O}(n\chi^3)$, resulting in significantly slower decoding times compared to our algorithm in the low-$p$ regime.
A collinear Second-Harmonic Orthogonal Polarized (SHOP) interferometer diagnostic capable of making electron areal density measurements of plasmas formed in Magnetically Insulated Transmission Lines (MITLs) has been developed.
The design of high consequence controllers (in weapons systems, autonomy, etc.) that do what they are supposed to do is a significant challenge. Testing simply does not come close to meeting the requirements for assurance. Today, circuit designers at Sandia (and elsewhere) typically capture the core behavior of their components using state models in tools such as STATEFLOW. They then check that their models meet certain requirements (e.g., "The system bus must not deadlock" or "Both traffic lights at an intersection must not be green at the same time") using tools called model checkers. If the model checker returns "yes," then the property is guaranteed to be satisfied by the model. However, there are several drawbacks to this industry practice: (1) there is a lot of detail to get right, which is particularly challenging when there are multiple components requiring complex coordination; (2) any errors returned by the model checker have to be traced back through the design and fixed, necessitating rework; and (3) there are severe scalability problems with this approach, particularly when dealing with concurrency. All this places high demands on designers, who now face not only an accelerated schedule but also controllers of increasing complexity. This report describes a new and fundamentally different approach to the construction of safety-critical digital controllers. Instead of directly constructing a complete model and then trying to verify it, the designer can start with an initial abstract (think "sketch") model plus the requirements, from which a correct concrete model is automatically synthesized. There is no need for post-hoc verification of required functional properties. Having a tool to carry this out will significantly impact the nation's ability to ensure the safety of high-consequence digital systems. The approach has been implemented in a prototype tool, along with a suite of examples, including ones that reflect actual problems faced by designers. Our approach operates on a variant of Statecharts developed at Sandia called Qspecs. Statecharts are a widely used formalism for developing concurrent reactive systems, supporting scalability by allowing state models containing composite states, which are serial or parallel compositions of substates that can themselves contain statecharts. Statecharts thus enable an incremental style of development, in which states are progressively refined to incorporate greater detail. Our approach formulates a set of constraints from the structure of the models and the requirements and propagates these constraints to a fixpoint, as sketched below. The solution to the constraints is an inductive invariant along with guards on the transitions. We also show how our approach extends to implementation refinement, decomposition, composition, and elaboration. We currently handle safety requirements written in LTL (Linear Temporal Logic).
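The constraint-propagation idea can be illustrated with a generic sketch (not the prototype tool's actual algorithm): candidate sets for model variables are pruned repeatedly until no filter changes anything, i.e., a fixpoint.

```python
# Generic fixpoint constraint propagation sketch. domains maps each variable
# to its set of candidate values; each filter prunes domains in place and
# reports whether it changed anything. Illustrative only.
def propagate_to_fixpoint(domains, filters):
    changed = True
    while changed:                                   # chaotic iteration to a fixpoint
        changed = any([f(domains) for f in filters])  # list forces all filters to run
    return domains

# Toy requirement: two traffic lights must never both be green
def not_both_green(d):
    if d["light1"] == {"green"} and "green" in d["light2"]:
        d["light2"].discard("green")
        return True
    return False

domains = {"light1": {"green"}, "light2": {"green", "red"}}
print(propagate_to_fixpoint(domains, [not_both_green]))
# -> {'light1': {'green'}, 'light2': {'red'}}
```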
This report summarizes Fiscal Year 2023 accomplishments from Sandia National Laboratories Wind Energy Program. The portfolio consists of funding provided by the DOE EERE Wind Energy Technologies Office (WETO), Advanced Research Projects Agency-Energy (ARPA-E), Advanced Manufacturing Office (AMO), the Sandia Laboratory Directed Research and Development (LDRD) program, and private industry. These accomplishments were made possible through capabilities investments by WETO, internal Sandia investment, and partnerships between Sandia and other national laboratories, universities, and research institutions around the world. Sandia’s Wind Energy Program is primarily built around core capabilities as expressed in the strategic plan thrust areas, with 29 staff members in the Wind Energy Design and Experimentation department and the Wind Energy Computational Sciences department leading and supporting R&D at the time of this report. Staff from other departments at Sandia support the program by leveraging Sandia’s unique capabilities in other disciplines.
Spontaneous isotope fractionation has been reported under nanoconfinement conditions in naturally occurring systems, but the origin of this phenomenon is currently unknown. Two hypotheses have been proposed: one is based on changes in the solvation environment of the isotopes that reduce the non-mass-dependent hydrodynamic contribution to diffusion; the other is that isotopes exhibit mass-dependent surface adsorption, varying their total diffusion through nanoconfined channels. To investigate these hypotheses, benchtop experiments, nuclear magnetic resonance (NMR) spectroscopy, and molecular-scale modeling were applied. Classical molecular dynamics simulations identified that the Na+ and Cl- hydration shells across the three different salt solutions ($^{22}$Na$^{35}$Cl, $^{23}$Na$^{35}$Cl, $^{24}$Na$^{35}$Cl) did not vary as a function of the Na+ isotope, but that there was a significant pore size effect, with larger hydration shells at larger pore sizes. Additionally, while total adsorption times did not vary as a function of the Na+ isotope or pore size, the free-ion population (ions adsorbed on the surface for <5% of the simulation time) did exhibit isotope dependence. Experimentally, challenges arose in developing a repeatable experiment, but NMR characterization of water diffusion rates through ordered alumina membranes identified two distinct water environments associated with water inside and outside the pores. Further NMR studies could be used to confirm variation in the hydration shells and diffusion rates of dissolved ions in water. Ultimately, mass-dependent adsorption is a primary driver of variations in isotope diffusion rates, rather than the variation in hydration shells that occurs under nanoconfinement.
The tension between accuracy and computational cost is a common thread throughout computational simulation. One such example arises in the modeling of mechanical joints. Joints are typically confined to a physically small domain and yet are computationally expensive to model with a high-resolution finite element representation. A common approach is to substitute reduced-order models that can capture important aspects of the joint response and enable the use of more computationally efficient techniques overall. Unfortunately, such reduced-order models are often difficult to use, error prone, and have a narrow range of application. In contrast, we propose a new type of reduced-order model, leveraging machine learning, that would be both user-friendly and extensible to a wide range of applications.
Concentrating Solar Power (CSP) requires precision mirrors, and these in turn require metrology systems to measure their optical slope. In this project we studied a color-based approach to the correspondence problem, which is the association of points on an optical target with their corresponding points seen in a reflection. This is a core problem in deflectometry-based metrology, and a color solution would enable important new capabilities. We modeled color as a vector in the [R,G,B] space measured by a digital camera, and explored a dual-image approach to compensate for inevitable changes in illumination color. Through a series of experiments including color target design and dual-image setups both indoors and outdoors, we collected reference/measurement image pairs for a variety of configurations and light conditions. We then analyzed the resulting image pairs by selecting example [R,G,B] pixels in the reference image, and seeking matching [R,G,B] pixels in the measurement image. Modulating a tolerance threshold enabled us to assess both match reliability and match ambiguity, and for some configurations, orthorectification enabled us to assess match accuracy. Using direct-direct imaging, we demonstrated color correspondence achieving average match accuracy values of 0.004 h, where h is the height of the color pattern. We found that wide-area two-dimensional and linear one-dimensional color targets outperformed hybrid linear/lateral gradient targets in the cases studied. Introducing a mirror degraded performance under our current techniques, and we did not have time to evaluate whether matches could be reliably achieved despite varying light conditions. Nonetheless, our results thus far are promising.
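The core matching operation can be sketched as follows, assuming Euclidean distance in [R,G,B] space and synthetic image data; the real pipeline additionally applies the dual-image illumination compensation described above.

```python
# Minimal sketch of color-correspondence matching: given a reference pixel's
# [R,G,B] vector, find all measurement-image pixels within a tolerance, then
# judge reliability (any match?) and ambiguity (how many matches?).
import numpy as np

def match_color(reference_rgb, measurement, tol):
    """reference_rgb: (3,); measurement: (H, W, 3); returns (row, col) matches."""
    dist = np.linalg.norm(measurement.astype(float) - reference_rgb, axis=-1)
    return np.argwhere(dist <= tol)

rng = np.random.default_rng(0)
meas = rng.integers(0, 256, (480, 640, 3))      # synthetic measurement image
target = np.array([200.0, 40.0, 90.0])
for tol in (10, 30, 60):        # modulating the threshold trades reliability
    hits = match_color(target, meas, tol)        # against ambiguity
    print(tol, len(hits))
```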
Bilir, Baris; Kutanoglu, Erhan; Hasenbein, John J.; Austgen, Brent; Garcia, Manuel; Skolfield, Joshua K.
Here we develop two-stage stochastic programming models for generator winterization that enhance power grid resilience while incorporating social equity. The first stage in our models captures the investment decisions for generator winterization, and the second stage captures the operation of a degraded power grid, with the objective of minimizing load shed and social inequity. To incorporate equity into our models, we propose a concept called adverse effect probability that captures the disproportionate effects of power outages on communities with varying vulnerability levels. Grid operations are modeled using DC power flow, and equity is captured through mean or maximum adverse effects experienced by communities. We apply our models to a synthetic Texas power grid, using winter storm scenarios created from the generator outage data from the 2021 Texas winter storm. Our extensive numerical experiments show that more equitable outcomes, in the sense of reducing adverse effects experienced by vulnerable communities during power outages, are achievable with no impact on total load shed through investing in winterization of generators in different locations and capacities.
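A toy extensive-form version of the two-stage structure is sketched below; the capacities, costs, and outage scenarios are invented, and the paper's full models additionally include DC power flow and the adverse-effect equity terms.

```python
# Toy extensive-form two-stage model: binary first-stage winterization
# decisions x, scenario-wise load-shed recourse L, solved as one MILP.
# All numbers are made up for illustration.
import numpy as np
from scipy.optimize import linprog

cap = np.array([100.0, 80.0, 60.0])        # generator capacities (MW)
c_inv = np.array([12.0, 10.0, 6.0])        # winterization cost per generator
survive = np.array([[1, 0, 0],             # survive[s, g] = 1 if g survives
                    [0, 0, 1],             # scenario s without winterization
                    [1, 1, 0]])
prob = np.array([0.3, 0.3, 0.4])           # scenario probabilities
demand, shed_cost = 180.0, 50.0            # MW, $/MW shed

S, G = survive.shape                       # variables z = [x (G), L (S)]
c = np.concatenate([c_inv, prob * shed_cost])
# Per scenario: demand - surviving cap <= L_s + sum_g cap_g (1 - a_sg) x_g
A_ub = np.hstack([-(cap * (1 - survive)), -np.eye(S)])
b_ub = -(demand - survive @ cap)
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * G + [(0, None)] * S,
              integrality=[1] * G + [0] * S)     # x binary, L continuous
print("winterize:", res.x[:G].round(), "shed per scenario:", res.x[G:])
```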
Excitation of iron pentacarbonyl [Fe(CO)5], a prototypical photocatalyst, at 266 nm causes the sequential loss of two CO ligands in the gas phase, creating catalytically active, unsaturated iron carbonyls. Despite numerous studies, major aspects of its ultrafast photochemistry remain unresolved because the early excited-state dynamics have so far eluded spectroscopic observation. This has led to the long-held assumption that ultrafast dissociation of gas-phase Fe(CO)5 proceeds exclusively on the singlet manifold. Herein, we present a combined experimental-theoretical study employing ultrafast extreme ultraviolet transient absorption spectroscopy near the Fe M2,3-edge, which features spectral evolution on 100 fs and 3 ps time scales, alongside high-level electronic structure theory, which enables characterization of the molecular geometries and electronic states involved in the ultrafast photodissociation of Fe(CO)5. We assign the 100 fs evolution to spectroscopic signatures associated with intertwined structural and electronic dynamics on the singlet metal-centered states during the first CO loss and the 3 ps evolution to the competing dissociation of Fe(CO)4 along the lowest singlet and triplet surfaces to form Fe(CO)3. Calculations of transient spectra in both singlet and triplet states as well as spin-orbit coupling constants along key structural pathways provide evidence for intersystem crossing to the triplet ground state of Fe(CO)4. Thus, our work presents the first spectroscopic detection of transient excited states during ultrafast photodissociation of gas-phase Fe(CO)5 and challenges the long-standing assumption that triplet states do not play a role in the ultrafast dynamics.
A highly parallelizable fluid plasma simulation tool based upon the first-order drift-diffusion equations is discussed. Atmospheric pressure plasmas have densities and gradients that require small element sizes in order to accurately simulate the plasma, resulting in computational meshes on the order of millions to tens of millions of elements for realistic-size plasma reactors. To enable simulations of this nature, parallel computing is required and must be optimized for the particular problem. Here, a finite-volume, electrostatic drift-diffusion implementation for low-temperature plasma is discussed. The implementation is built upon the Message Passing Interface (MPI) library in C++ using object-oriented programming. The underlying numerical method is outlined in detail and benchmarked against simple streamer formation from other streamer codes. Electron densities, electric fields, and propagation speeds are compared with the reference case and show good agreement. Convergence studies are also performed, showing that a minimal space step of approximately 4 μm is required to reduce relative error to below 1% during early streamer simulation times, with even finer space steps required for longer times. Additionally, strong and weak scaling of the implementation are studied and demonstrate the excellent performance behavior of the implementation up to 100 million elements on 1024 processors. Lastly, different advection schemes are compared for the simple streamer problem to analyze the influence of numerical diffusion on the resulting quantities of interest.
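A minimal 1-D analog of the drift-diffusion update is sketched below (explicit time stepping, upwinded drift flux, central-difference diffusion); the production solver described above is 3-D, implicit where needed, MPI-parallel, and coupled to a Poisson solve for the electric field, none of which this sketch attempts.

```python
# 1-D explicit finite-volume drift-diffusion step for electron density n,
# with mobility mu, field E, and diffusion coefficient D. Illustrative only.
import numpy as np

def drift_diffusion_step(n, E, mu, D, dx, dt):
    v = -mu * E                                   # electron drift velocity
    # Upwind drift flux at interior faces
    F = np.where(v[:-1] > 0, v[:-1] * n[:-1], v[:-1] * n[1:])
    F -= D * (n[1:] - n[:-1]) / dx                # diffusive flux, central
    dn = np.zeros_like(n)
    dn[1:-1] = -(F[1:] - F[:-1]) / dx             # flux divergence per cell
    return n + dt * dn

nx, dx = 400, 4e-6                                # ~4 um cells, per the study
n = np.exp(-((np.arange(nx) * dx - 8e-4) / 1e-4) ** 2) * 1e18
E = np.full(nx, -1e6)                             # V/m, uniform for illustration
for _ in range(200):
    n = drift_diffusion_step(n, E, mu=0.04, D=0.1, dx=dx, dt=1e-12)
```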
A technique is proposed for reproducing particle size distributions in three-dimensional simulations of the crushing and comminution of solid materials. The method is designed to produce realistic distributions over a wide range of loading conditions, especially for small fragments. In contrast to most existing methods, the new model does not explicitly treat the small-scale process of fracture. Instead, it uses measured fragment distributions from laboratory tests as the basic material property that is incorporated into the algorithm, providing a data-driven approach. The algorithm is implemented within a nonlocal peridynamic solver, which simulates the underlying continuum mechanics and contact interactions between fragments after they are formed. The technique is illustrated by reproducing fragmentation data from drop weight testing on sandstone samples.
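The data-driven idea can be sketched as follows: fragment sizes are drawn by inverse-CDF sampling from a measured size distribution (here an invented sieve curve stands in for the laboratory data), with no explicit fracture model.

```python
# Minimal sketch of sampling fragment sizes from a measured distribution by
# piecewise-linear inverse-CDF sampling. The sieve curve below is invented.
import numpy as np

sizes = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])  # fragment size (mm)
cdf = np.array([0.02, 0.10, 0.25, 0.45, 0.75, 0.92, 1.00])  # cumulative passing

def sample_fragment_sizes(n, rng):
    """Draw n fragment sizes whose distribution matches the measured CDF."""
    u = rng.uniform(0.0, 1.0, n)
    return np.interp(u, cdf, sizes)     # piecewise-linear inverse CDF

rng = np.random.default_rng(42)
frags = sample_fragment_sizes(10000, rng)
print(frags.mean(), np.percentile(frags, [10, 50, 90]))
```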
Granular metals (GMs), consisting of metal nanoparticles separated by an insulating matrix, frequently serve as a platform for fundamental electron transport studies. However, few technologically mature devices incorporating GMs have been realized, in large part because intrinsic defects (e.g., electron trapping sites and metal/insulator interfacial defects) frequently impede electron transport, particularly in GMs that do not contain noble metals. Here, we demonstrate that such defects can be minimized in molybdenum-silicon nitride (Mo-SiNx) GMs via optimization of the sputter deposition atmosphere. For Mo-SiNx GMs deposited in a mixed Ar/N2 environment, x-ray photoemission spectroscopy shows a 40%-60% reduction of interfacial Mo-silicide defects compared to Mo-SiNx GMs sputtered in a pure Ar environment. Electron transport measurements confirm the reduced defect density; the dc conductivity improved (decreased) by a factor of $10^4$ to $10^5$, and the activation energy for variable-range hopping increased 10×. Since GMs are disordered materials, the GM nanostructure should, theoretically, support a universal power law (UPL) response; in practice, that response is generally overwhelmed by resistive (defective) transport. Here, the defect-minimized Mo-SiNx GMs display a superlinear UPL response, which we quantify as the ratio of the conductivity at 1 MHz to that at dc, $\Delta\sigma_\omega$. Remarkably, these GMs display a $\Delta\sigma_\omega$ up to $10^7$, a three-orders-of-magnitude improvement over responses previously reported for GMs. By enabling high-performance electric transport with a non-noble-metal GM, this work represents an important step toward both new fundamental UPL research and scalable, mature GM device applications.
We present large-scale atomistic simulations that reveal triple junction (TJ) segregation in Pt-Au nanocrystalline alloys in agreement with experimental observations. While existing studies suggest grain boundary solute segregation as a route to thermally stabilize nanocrystalline materials with respect to grain coarsening, here we quantitatively show that it is specifically the segregation to TJs that dominates the observed stability of these alloys. Our results reveal that doping the TJs renders them immobile, thereby locking the grain boundary network and hindering its evolution. In dilute alloys, it is shown that grain boundary and TJ segregation are not as effective in mitigating grain coarsening, as the solute content is not sufficient to dope and pin all grain boundaries and TJs. Our work highlights the need to account for TJ segregation effects in order to understand and predict the evolution of nanocrystalline alloys under extreme environments.
Oakes et al. (2023) published a review article in this journal in which they developed thermodynamic models describing electrolyte solutions for the HClO4–NaClO4–H2O and HBr–NaBr–H2O systems, based on literature data. In their paper, previously published work from researchers in the field, including some of ours, was criticized. Here, in this brief Comment, we first comment on their models and then briefly provide a technical response to that criticism.
Frequency-modulated (FM) combs based on active cavities like quantum cascade lasers have recently emerged as promising light sources in many spectral regions. Unlike passive modelocking, which generates amplitude modulation using the field’s amplitude, FM comb formation relies on the generation of phase modulation from the field’s phase. They can therefore be regarded as a phase-domain version of passive modelocking. However, while the ultimate scaling laws of passive modelocking have long been known—Haus showed in 1975 that pulses modelocked by a fast saturable absorber have a bandwidth proportional to the effective gain bandwidth—the limits of FM combs have been much less clear. Here, we show that FM combs based on fast gain media are governed by the same fundamental limits, producing combs whose bandwidths are linear in the effective gain bandwidth. Not only do we show theoretically that the diffusive effect of gain curvature limits comb bandwidth, but we also show experimentally how this limit can be increased. By adding carefully designed resonant-loss structures that are evanescently coupled to the cavity of a terahertz laser, we reduce the curvature and increase the effective gain bandwidth of the laser, demonstrating bandwidth enhancement. Our results can better enable the creation of active chip-scale combs and be applied to a wide array of cavity geometries.
This report presents a study of subcooled pool boiling experiments performed using a dielectric coolant to test the effects of variations in heater surface configuration on pool boiling characteristics.