Our research focused on forecasting the position and shape of the winter stratospheric polar vortex at a subseasonal timescale of 15 days in advance. To achieve this, we employed both statistical and neural network machine learning techniques. The analysis was performed on 42 winter seasons of reanalysis data provided by NASA, giving a total of 6,342 days of data. The state of the polar vortex was determined by using geometric moments to calculate the centroid latitude and the aspect ratio of an ellipse fit onto the vortex. Time series for thirty additional precursors were calculated to help improve the predictive capabilities of the algorithm. Feature importance analysis of these precursors was performed using a random forest to measure their predictive value and to determine the ideal number of precursors. Then, using the precursors identified as important, various statistical methods were tested for predictive accuracy, with random forest and nearest neighbor performing the best. An echo state network, a type of recurrent neural network that features a sparsely connected hidden layer and a reduced number of trainable parameters that allows for rapid training and testing, was also implemented for the forecasting problem. Hyperparameter tuning was performed for each method using a subset of the training data. The algorithms were trained and tuned on the first 41 years of data, then tested for accuracy on the final year. In general, the centroid latitude of the polar vortex proved easier to predict than the aspect ratio across all algorithms. Random forest outperformed the other statistical forecasting algorithms overall but struggled to predict extreme values. Forecasting from the echo state network suggested strong predictive capability past 15 days, but further work is required to fully realize the potential of recurrent neural network approaches.
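As an illustrative sketch of the moment-based diagnostics described above, the following Python snippet computes a centroid latitude and aspect ratio from a binary vortex mask on a latitude/longitude grid. The function name and the flat-grid treatment (no spherical-area weighting) are simplifying assumptions, not the report's actual implementation.

```python
import numpy as np

def vortex_geometry(mask, lats, lons):
    """Moment-based centroid latitude and aspect ratio for a binary
    vortex mask on a lat/lon grid (hypothetical helper; flat-grid
    approximation without spherical-area weighting)."""
    lon_grid, lat_grid = np.meshgrid(lons, lats)
    w = mask.astype(float)
    m00 = w.sum()
    lat_c = (w * lat_grid).sum() / m00          # centroid latitude
    lon_c = (w * lon_grid).sum() / m00
    dlat, dlon = lat_grid - lat_c, lon_grid - lon_c
    # Second central moments define the equivalent ellipse.
    cov = np.array([[(w * dlon**2).sum(), (w * dlon * dlat).sum()],
                    [(w * dlon * dlat).sum(), (w * dlat**2).sum()]]) / m00
    eig = np.linalg.eigvalsh(cov)               # ascending eigenvalues
    aspect = np.sqrt(eig[1] / eig[0])           # major/minor axis ratio
    return lat_c, aspect
```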
Condensation trails, or contrails, are aircraft-induced cirrus clouds. They form when water vapor condenses on aerosols either emitted by the aircraft engines or already present in the upper atmosphere, producing water droplets that subsequently convert to ice crystals. While there is ongoing debate about their true impact, contrails are estimated to be a major contributor to climate forcing from aviation. We note that air transportation currently accounts for about 5% of global anthropogenic climate forcing, and that air traffic is anticipated to double in the coming decade or two. This expected growth reinforces the urgency of developing a plan to better understand contrail formation and persistence, and of deploying means to reduce or avoid contrail formation or to greatly mitigate contrail impacts. It is evident that contrails should be part of the picture when developing a plan to make the aviation sector sustainable.
Copper is a challenging material to process using laser-based additive manufacturing due to its high reflectivity and high thermal conductivity. Sintering-based processes can produce solid copper parts without the processing challenges and defects associated with laser melting; however, sintering can also cause distortion in copper parts, especially those with thin walls. In this study, we use physics-informed Gaussian process regression to predict and compensate for sintering distortion in thin-walled copper parts produced using a Markforged Metal X bound powder extrusion (BPE) additive manufacturing system. Through experimental characterization and computational simulation of copper’s viscoelastic sintering behavior, we can predict sintering deformation. We can then manufacture, simulate, and test parts with various compensation scaling factors to inform Gaussian process regression and predict a compensated as-printed (pre-sintered) part geometry that produces the desired final (post-sintered) part.
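The following is a minimal sketch of how Gaussian process regression might map trial compensation scaling factors to residual sintering distortion, assuming hypothetical trial data; the project's physics-informed variant would additionally incorporate simulation outputs, which are omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical trial data: compensation scaling factors and the resulting
# residual wall deviation (mm) from simulation or measurement. Values are
# illustrative only.
scale = np.array([[1.00], [1.05], [1.10], [1.15], [1.20]])
residual = np.array([0.42, 0.21, 0.05, -0.11, -0.30])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(scale, residual)

# Query a dense grid and select the scaling factor whose predicted
# residual deviation is closest to zero.
grid = np.linspace(1.0, 1.2, 201).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
print(f"suggested compensation scale ~ {grid[np.argmin(np.abs(mean))][0]:.3f}")
```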
We extend an existing approach for efficient use of shared mapped memory across Chapel and C++ for graph data stored as 1-D arrays to sparse tensor data stored using a combination of 2-D and 1-D arrays. We describe the specific extensions that provide use of shared mapped memory tensor data for a particular C++ tensor decomposition tool called GentenMPI. We then demonstrate our approach on several real-world datasets, providing timing results that illustrate minimal overhead incurred using this approach. Finally, we extend our work to improve memory usage and provide convenient random access to sparse shared mapped memory tensor elements in Chapel, while still being capable of leveraging high performance implementations of tensor algorithms in C++.
Nuclear magnetic resonance (NMR) spectroscopy yields detailed mechanistic information about chemical structures, reactions, and processes. Photochemistry has widespread use across many industries and holds excellent utility for additive manufacturing (AM) processes. Here, we use photoNMR to investigate three photochemical processes spanning AM-relevant timescales. We first investigate the photodecomposition of a photobase generator on the slow timescale, then the photoactivation of a ruthenium catalyst on the intermediate timescale, and finally the radical polymerization of an acrylate system on the fast timescale. In doing so, we gain fundamental insights into mission-relevant photochemistries and develop a new spectroscopic capability at SNL.
This final report summarizes the results of the Laboratory Direct Research and Development (LDRD) Project Number 229740. Wide band gap semiconductors such as gallium nitride (GaN) have features highly desirable for multiple mission electronic applications. Realization of their potential requires atomic-scale understanding of electronic behavior. The principal experimental tools for electronically probing defects in GaN are chemically undifferentiating and lack a practical theoretical counterpart needed to identify and characterize specific defects. This project investigated whether a simple idea for modeling defect excited states and their associated photoluminescence (PL) energies is viable, as a path to accelerate the understanding of defect behavior and gain valuable insights into engineering new electronic materials and devices. The research implemented a non-self-consistent total-energy evaluation of a Koopmans-type estimation of an excited electronic state energy in density functional theory (DFT) calculations, and proceeded to design, implement, and assess a self-consistent method for computing excited states based upon an OCcupation-Constrained-DFT (occ-DFT). The occ-DFT was verified in test calculations of defect excited states and validated against well-characterized PL data for 3d transition metal defects in GaN. The method proved stable and robust in computing excited states and gave accurate predictions compared to experimental PL data. The combined ground state/excited-state capability proved capable of chemically differentiating defect species in GaN. In application to 3d dopants in GaN, we reinterpreted extensive experimental literature, proposed new defects as prospective candidates for use in quantum information applications, and outlined design strategies to create and exploit these potentially useful functional defects in GaN.
This work demonstrates that classical shear-flow stability theory can be successfully applied to modify wind turbine wakes and also explains the success of several emerging, empirically derived control methods (i.e., dynamic induction and helix control). Linear stability theory predictions are extended to include the effects of non-axisymmetric inflow profiles, such as wind shear, which is shown to not strongly affect the primary forcing frequency. The predictions, as well as idealized large-eddy simulations using an actuator-line representation of the turbine blades, agree that the n = 0 and ±1 modes have faster initial growth rates than higher-order modes, suggesting the lower-order modes are more appropriate for wake control. Exciting the lower-order modes with periodic pitching of the blades produces higher entrainment into the wake and consequently faster wake recovery.
98% of the budget has been deployed, with the remaining $325,000 to be assigned by the end of May. Costs plus commitments total 58% of the deployed budget. 29 projects have been kicked off and are in progress. Five project plans are being finalized and will be kicked off in early summer. The February start has contributed to the risk of not costing all of the FY23 budget.
Protocols play an essential role in Advanced Reactor systems. A diverse set of protocols is available to these reactors. Advanced Reactors benefit from technologies that can minimize their resource utilization and costs. Evaluation frameworks are often used when assessing protocols and processes related to cryptographic security systems. The following report discusses the various characteristics associated with these protocol evaluation frameworks and derives a novel evaluative framework.
The National Solar Thermal Test Facility (NSTTF) is a DOE Core Capability and Technology Deployment Center located in Albuquerque, NM. It is operated by Sandia National Laboratories (Sandia) for the U.S. Department of Energy (DOE). The NSTTF is the only multi-mission, multi-use, multi-story test facility of its type in the United States. The NSTTF was founded in 1978 and began testing with high heat flux that same year. Over the past 45 years, the NSTTF has been at the forefront of the research, design, fabrication, and testing of many of the critical Concentrating Solar Power (CSP) technologies. These technologies have allowed costs to be dramatically reduced from over $0.40/kWh to $0.12/kWh since the conception of this renewable energy technology. The NSTTF worked to make the Solar Energy Generating Systems (SEGS) parabolic trough plants successful, and also worked with the Solar One and Solar Two facilities toward their successful implementation. Over the four decades since its founding, the mission of the NSTTF has grown to include new receiver technologies, like our generation 3 falling particle system (G3P3 Tower), optical metrology techniques like SOFAST, molten salt testing, thermal energy storage, solar thermal chemistry, and more. We continue to expand our capabilities in pursuit of the DOE SETO mission and the DOE SunShot 2030 goal: an unsubsidized LCOE of $0.05/kWh for CSP that includes 12 or more hours of thermal energy storage. To support both the DOE SETO mission and the CSP sector as a whole, we are working to develop our operations and maintenance framework to provide a world-class testing facility in support of our technological achievements. To accomplish both of these missions, the NSTTF draws on the decades of experience and expertise of our staff along with the world-class facilities at Sandia National Laboratories to further the science of concentrated solar thermal technologies in diverse applications. We remain a trusted partner for high-quality and impactful research in both fundamental and applied arenas. We are able to provide our partners with both one-of-a-kind testing platforms as well as world-class analytics.
Parekh, Ojas D.; Lougovski, Pavel; Broz, Joe; Byrd, Mark; Chapman, Joseph C.; Chembo, Yanne; De Jong, Wibe A.; Figueroa, Eden; Humble, Travis S.; Larson, Jeffrey; Quiroz, Gregory; Ravi, Gokul; Shammah, Nathan; Svore, Krysta M.; Wu, Wenji; Zeng, William J.
Employing quantum mechanical resources in computing and networking opens the door to new computation and communication models and potential disruptive advantages over classical counterparts. However, quantifying and realizing such advantages face extensive scientific and engineering challenges. Investments by the Department of Energy (DOE) have driven progress toward addressing such challenges. Quantum algorithms have been recently developed, in some cases offering asymptotic exponential advantages in speed or accuracy, for fundamental scientific problems such as simulating physical systems, solving systems of linear equations, or solving differential equations. Empirical demonstrations on nascent quantum hardware suggest better performance than classical analogs on specialized computational tasks favorable to the quantum computing systems. However, demonstration of an end-to-end, substantial and rigorously quantifiable quantum performance advantage over classical analogs remains a grand challenge, especially for problems of practical value. The definition of requirements for quantum technologies to exhibit scalable, rigorous, and transformative performance advantages for practical applications also remains an outstanding open question, namely, what will be required to ultimately demonstrate practical quantum advantage?
We propose the average spectrum norm to study the minimum number of measurements required to approximate a multidimensional array (i.e., the sample complexity) via low-rank tensor recovery. Our focus is on the tensor completion problem, where the aim is to estimate a multiway array using a subset of tensor entries corrupted by noise. Our average spectrum norm-based analysis provides near-optimal sample complexities, exhibiting dependence on the ambient dimensions and rank that does not suffer from exponential scaling as the order increases.
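For context, a standard noisy tensor completion formulation (notation ours, not necessarily the paper's exact statement) estimates a low-rank tensor from observed entries:

$$ \min_{\operatorname{rank}(\mathcal{T}) \le r} \; \sum_{(i_1,\dots,i_d) \in \Omega} \left( \mathcal{Y}_{i_1 \cdots i_d} - \mathcal{T}_{i_1 \cdots i_d} \right)^2, $$

where $\Omega$ indexes the sampled entries and $\mathcal{Y}$ contains the noisy observations; the sample complexity question asks how small $|\Omega|$ can be while still guaranteeing accurate recovery.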
Imaging methods driven by probes, electrons, and ions have played a dominant role in modern science and engineering. Opportunities for machine vision and AI, which have focused on consumer problems like driving and feature recognition, are now presenting themselves for automating aspects of the scientific process. This proposal aims to enable and drive discovery in ultra-low-energy implantation by taking advantage of faster processing, flexible control and detection methods, and architecture-agnostic workflows that will result in higher efficiency and shorter scientific development cycles. Custom microscope control, collection, and analysis hardware will provide a framework for conducting novel in situ experiments, revealing unprecedented insight into surface dynamics at the nanoscale. Ion implantation is a key capability for the semiconductor industry. As devices shrink, novel materials enter the manufacturing line, and quantum technologies transition to the mainstream, traditional implantation methods fall short in terms of energy, ion species, and positional precision. Here we demonstrate 1 keV focused ion beam Au implantation into Si and validate the results via atom probe tomography. We show the Au implant depth at 1 keV is 0.8 nm and that identical results for low-energy ion implants can be achieved by either lowering the column voltage or decelerating ions using bias, while maintaining a sub-micron beam focus. We compare our experimental results to static calculations using SRIM and dynamic calculations using the binary collision approximation codes TRIDYN and IMSIL. A large discrepancy between the static and dynamic simulations is found, which is due to lattice enrichment with high-stopping-power Au and surface sputtering. Additionally, we demonstrate how model details are particularly important to the simulation of these low-energy heavy-ion implantations. Finally, we discuss how our results pave the way to much lower implantation energies while maintaining high spatial resolution.
This report summarizes Fiscal Year 2023 accomplishments from Sandia National Laboratories Wind Energy Program. The portfolio consists of funding provided by the DOE EERE Wind Energy Technologies Office (WETO), Advanced Research Projects Agency-Energy (ARPA-E), Advanced Manufacturing Office (AMO), the Sandia Laboratory Directed Research and Development (LDRD) program, and private industry. These accomplishments were made possible through capabilities investments by WETO, internal Sandia investment, and partnerships between Sandia and other national laboratories, universities, and research institutions around the world. Sandia’s Wind Energy Program is primarily built around core capabilities as expressed in the strategic plan thrust areas, with 29 staff members in the Wind Energy Design and Experimentation department and the Wind Energy Computational Sciences department leading and supporting R&D at the time of this report. Staff from other departments at Sandia support the program by leveraging Sandia’s unique capabilities in other disciplines.
We present a fast, approximately Bayes-optimal tensor network decoder for planar quantum LDPC codes based on the tensor renormalization group algorithm originally proposed by Levin and Nave. By precomputing the renormalization group flow for the null syndrome, we need only recompute tensor contractions in the causal cone of the measured syndrome at the time of decoding. This allows us to achieve an overall runtime complexity of $\mathcal{O}(pn\chi^6)$, where p is the depolarizing noise rate and χ is the cutoff value used to control the singular value decomposition approximations used in the algorithm. We apply our decoder to the surface code in the code capacity noise model and compare its performance to the original matrix product state (MPS) tensor network decoder introduced by Bravyi, Suchara, and Vargo. The MPS decoder has a p-independent runtime complexity of $\mathcal{O}(n\chi^3)$, resulting in significantly slower decoding times compared to our algorithm in the low-p regime.
The project objective is to develop high-magnetization, low-loss iron nitride based soft magnetic composites for electrical machines. These new SMCs will enable low eddy current losses and therefore highly efficient motor operation at rotational speeds up to 20,000 rpm. Additionally, iron nitride and epoxy composites will be capable of operating at temperatures of 150 °C or greater over a lifetime of 300,000 miles or 15 years.
To move toward rational design of efficient organic light emitting diodes based on the radical idea of inverted singlet-triplet gap (INVEST) systems, we propose a set of novel quantum chemical approaches, predictive but low-cost, to unveil a set of structure-property relationships. We perform a computational study of a series of substituted molecules based on a small set of known INVEST molecules. Our study demonstrates a high degree of correlation between the intramolecular charge transfer and the singlet-triplet energy gap and hints towards the use of a quantitative estimate of charge transfer to predict and modulate these energy gaps. We aim to create a database of INVEST molecules that includes accurate benchmarks of singlet-triplet energy gaps. Furthermore, we aim to link structural features and molecular properties, enabling a control knob for rational design.
Sandia is a federally funded research and development center (FFRDC) focused on developing and applying advanced science and engineering capabilities to mitigate national security threats. This is accomplished through the exceptional staff leading research at the Labs and partnering with universities and companies. Sandia’s LDRD program aims to maintain the scientific and technical vitality of the Labs and to enhance the Labs’ ability to address future national security needs. The program funds foundational, leading-edge discretionary research projects that cultivate and utilize core science, technology, and engineering (ST&E) capabilities. Per Congressional intent (P.L. 101-510) and Department of Energy (DOE) guidance (DOE Order 413.2C, Chg 1), Sandia’s LDRD program is crucial to maintaining the nation’s scientific and technical vitality.
This Sandia National Laboratories Mission Campaign (MC) seeks to create the technical basis that allows national leaders to efficiently assess and manage the digital assurance of high consequence systems. We will call for transformative research that enables efficient (1) development of provably secure systems and secure integration of untrusted products, (2) intelligent threat mitigation, and (3) digital risk-informed engineering trade-offs. Ultimately, this MC will impact multiple national security missions; it will develop an informed Digital Assurance for High Consequence Systems (DAHCS) community and expand Sandia partnerships to build this national capability.
The Artificial Intelligence Enhanced Co-Design for Next Generation Microelectronics virtual workshop was held April 4-5, 2023, and attended by subject matter experts from universities, industry, and national laboratories. This was the third in a series of workshops to motivate the research community to identify and address major challenges facing microelectronics research and production. The 2023 workshop focused on a set of topics from materials to computing algorithms, and included discussions on relevant federal legislation such as the Creating Helpful Incentives to Produce Semiconductors and Science Act (CHIPS Act), which was signed into law in the summer of 2022. Talks at the workshop included edge computing in radiation environments, new materials for neuromorphic computing, advanced packaging for microelectronics, and new AI techniques. We also received project updates from several of the Department of Energy (DOE) microelectronics co-design projects funded in the fall of 2021, and from three of the Energy Frontier Research Centers (EFRCs) that had been funded in the fall of 2022. The workshop also conducted a set of breakout discussions around the five principal research directions (PRDs) from the 2018 Department of Energy workshop report: 1) define innovative material, device, and architecture requirements driven by applications, algorithms, and software; 2) revolutionize memory and data storage; 3) re-imagine information flow unconstrained by interconnects; 4) redefine computing by leveraging unexploited physical phenomena; 5) reinvent the electricity grid through new materials, devices, and architectures. We tasked each breakout group to consider one primary PRD (and other PRDs as relevant topics arose during discussions) and to address questions such as whether the research community has embraced co-design as a methodology and whether new developments at any level of innovation, from materials to programming models, require the research community to reevaluate the PRDs developed in 2018.
Spontaneous isotope fractionation has been reported under nanoconfinement conditions in naturally occurring systems, but the origin of this phenomenon is currently unknown. Two hypotheses have been proposed: one is based on changes in the solvation environment of the isotopes that reduce the non-mass-dependent hydrodynamic contribution to diffusion; the other is that isotopes exhibit mass-dependent surface adsorption, varying their total diffusion through nanoconfined channels. To investigate these hypotheses, benchtop experiments, nuclear magnetic resonance (NMR) spectroscopy, and molecular-scale modeling were applied. Classical molecular dynamics simulations identified that the Na+ and Cl- hydration shells across the three different salt solutions (22Na35Cl, 23Na35Cl, 24Na35Cl) did not vary as a function of the Na+ isotope, but that there was a significant pore size effect, with larger hydration shells at larger pore sizes. Additionally, while total adsorption times did not vary as a function of the Na+ isotope or pore size, the concentration of free ions (those adsorbed on the surface for <5% of the simulation time) did exhibit isotope dependence. Experimentally, it proved challenging to develop a repeatable experiment, but NMR characterization of water diffusion rates through ordered alumina membranes was able to identify the existence of two distinct water environments associated with water inside and outside the pores. Further NMR studies could be used to confirm variation in hydration shells and diffusion rates of dissolved ions in water. Ultimately, mass-dependent adsorption is a primary driver of variations in isotope diffusion rates, rather than the variation in hydration shells that occurs under nanoconfinement.
Photocatalytic water splitting using suspensions of nanoparticle photocatalysts is a promising route to economically sustainable production of green hydrogen. The principal challenge is to develop photocatalysts with overall solar-to-hydrogen conversion efficiency that exceeds 10 percent. In this project we have developed a new platform for investigating candidate materials for photocatalytic water splitting. Our platform consists of patterned Au electrodes and a Ag/AgCl reference electrode on an insulating substrate onto which we disperse nanoparticle photocatalysts. We then cover the substrate with a thin layer of ionogel containing a protic ionic liquid that dissolves water from the ambient. Using this platform we have demonstrated photoelectrochemical activity mapping for single and small clusters of BiVO4 nanoparticle photocatalysts and correlated these results with their Raman and photoluminescence spectra. The preliminary results suggest a strong correlation for low-efficiency nanoparticles, followed by saturation for those with higher activities, indicating that interface reaction or electrolyte transport becomes the limiting factor. We anticipate that further application of this platform to the investigation of candidate photocatalyst materials will provide useful insights into the mechanisms that limit their performance.
The aviation industry stands at a crossroads, facing the dual challenge of meeting the growing global demand for air travel while mitigating its environmental impact. As concerns over climate change intensify, sustainable aviation fuels (SAFs) have emerged as a promising solution to reduce the carbon footprint of air travel. The aviation sector has long been recognized as a contributor to greenhouse gas emissions, with carbon dioxide (CO2) being a primary concern. SAFs, derived from renewable feedstocks such as biomass, waste oils, or synthetic processes, offer a promising avenue for reducing the net carbon emissions associated with aviation. While SAFs have shown potential in lowering CO2 emissions, the combustion process introduces complexities related to soot particle formation and contrail generation that require comprehensive exploration. These aspects are pivotal not only for their environmental implications but also for their influence on atmospheric climate interactions. As the aviation industry increasingly embraces SAFs to meet sustainability goals, it is imperative to assess their combustion characteristics, unravel the mechanisms of soot formation, and scrutinize the factors influencing contrail development.
This report summarizes work toward developing stochastic weighted particle methods (SWPM) for future application in hypersonic flows. Extensive changes to Sandia’s direct simulation Monte Carlo (DSMC) solver, SPARTA (Stochastic Particle Real Time Analyzer), were made to enable the necessary particle splitting and reduction capabilities for SWPM. The results from one-dimensional Couette and Fourier flows suggest that SWPM can reproduce the correct transport for a large range of Knudsen numbers with adequate accuracy. The associated velocity and temperature profiles are in good agreement with DSMC. An issue with particle placement during particle number reduction is identified, for which a simple but effective solution based on minimizing the center-of-mass error is proposed; a minimal sketch of this idea follows. High Mach wheel flows are simulated using the SWPM and DSMC methods. SWPM is capable of providing nearly an order of magnitude increase in efficiency over DSMC while retaining high accuracy.
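The following is a minimal sketch of the center-of-mass idea, assuming a simple two-to-one merge; real SWPM reduction schemes operate on groups of particles and also control energy and higher moments.

```python
import numpy as np

def merge_pair(w1, x1, v1, w2, x2, v2):
    """Two-to-one particle reduction sketch (not SPARTA's implementation):
    conserve total weight and momentum, and place the merged particle at
    the weighted center of mass, minimizing the center-of-mass error."""
    w = w1 + w2
    v = (w1 * np.asarray(v1) + w2 * np.asarray(v2)) / w   # momentum-conserving
    x = (w1 * np.asarray(x1) + w2 * np.asarray(x2)) / w   # center of mass
    return w, x, v
```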
Concentrating Solar Power (CSP) requires precision mirrors, and these in turn require metrology systems to measure their optical slope. In this project we studied a color-based approach to the correspondence problem, which is the association of points on an optical target with their corresponding points seen in a reflection. This is a core problem in deflectometry-based metrology, and a color solution would enable important new capabilities. We modeled color as a vector in the [R,G,B] space measured by a digital camera, and explored a dual-image approach to compensate for inevitable changes in illumination color. Through a series of experiments including color target design and dual-image setups both indoors and outdoors, we collected reference/measurement image pairs for a variety of configurations and light conditions. We then analyzed the resulting image pairs by selecting example [R,G,B] pixels in the reference image, and seeking matching [R,G,B] pixels in the measurement image. Modulating a tolerance threshold enabled us to assess both match reliability and match ambiguity, and for some configurations, orthorectification enabled us to assess match accuracy. Using direct-direct imaging, we demonstrated color correspondence achieving average match accuracy values of 0.004 h, where h is the height of the color pattern. We found that wide-area two-dimensional and linear one-dimensional color targets outperformed hybrid linear/lateral gradient targets in the cases studied. Introducing a mirror degraded performance under our current techniques, and we did not have time to evaluate whether matches could be reliably achieved despite varying light conditions. Nonetheless, our results thus far are promising.
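The following Python fragment sketches the tolerance-threshold matching step described above; the array shapes and function name are illustrative assumptions rather than the study's actual code.

```python
import numpy as np

def color_matches(measurement, ref_rgb, tol):
    """Find pixels in `measurement` (H x W x 3 array) whose [R,G,B]
    vector lies within Euclidean distance `tol` of `ref_rgb`.
    Illustrative sketch, not the study's production code."""
    dist = np.linalg.norm(
        measurement.astype(float) - np.asarray(ref_rgb, float), axis=-1)
    ys, xs = np.nonzero(dist < tol)
    return np.column_stack([ys, xs])

# A single hit suggests a reliable match; many scattered hits indicate
# ambiguity. Sweeping `tol` trades match reliability against ambiguity.
```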
The tension between accuracy and computational cost is a common thread throughout computational simulation. One such example arises in the modeling of mechanical joints. Joints are typically confined to a physically small domain and yet are computationally expensive to model with a high-resolution finite element representation. A common approach is to substitute reduced-order models that can capture important aspects of the joint response and enable the use of more computationally efficient techniques overall. Unfortunately, such reduced-order models are often difficult to use, error-prone, and narrow in their range of application. In contrast, we propose a new type of reduced-order model, leveraging machine learning, that would be both user-friendly and extensible to a wide range of applications.
The design of high consequence controllers (in weapons systems, autonomy, etc.) that do what they are supposed to do is a significant challenge. Testing simply does not come close to meeting the requirements for assurance. Today circuit designers at Sandia (and elsewhere) typically capture the core behavior of their components using state models in tools such as STATEFLOW. They then check that their models meet certain requirements (e.g., “The system bus must not deadlock” or “Both traffic lights at an intersection must not be green at the same time”) using tools called model checkers. If the model checker returns “yes,” then the property is guaranteed to be satisfied by the model. However, there are several drawbacks to this industry practice: (1) there is a lot of detail to get right, which is particularly challenging when there are multiple components requiring complex coordination; (2) any errors returned by the model checker have to be traced back through the design and fixed, necessitating rework; and (3) there are severe scalability problems with this approach, particularly when dealing with concurrency. All this places high demands on designers, who now face not only an accelerated schedule but also controllers of increasing complexity. This report describes a new and fundamentally different approach to the construction of safety-critical digital controllers. Instead of directly constructing a complete model and then trying to verify it, the designer can start with an initial abstract (think “sketch”) model plus the requirements, from which a correct concrete model is automatically synthesized. There is no need for post-hoc verification of required functional properties. Having a tool to carry this out will significantly impact the nation’s ability to ensure the safety of high-consequence digital systems. The approach has been implemented in a prototype tool, along with a suite of examples, including ones that reflect actual problems faced by designers. Our approach operates on a variant of Statecharts developed at Sandia called Qspecs. Statecharts are a widely used formalism for developing concurrent reactive systems; they support scalability by allowing state models to contain composite states, which are serial or parallel compositions of substates that can themselves contain statecharts. Statecharts thus enable an incremental style of development, in which states are progressively refined to incorporate greater detail. Our approach formulates a set of constraints from the structure of the models and the requirements and propagates these constraints to a fixpoint, as sketched below. The solution to the constraints is an inductive invariant along with guards on the transitions. We also show how our approach extends to implementation refinement, decomposition, composition, and elaboration. We currently handle safety requirements written in LTL (Linear Temporal Logic).
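The following schematic Python loop illustrates the general propagate-to-fixpoint pattern, under the assumption of monotone constraint functions; it is a generic sketch, not the prototype tool's implementation.

```python
def propagate_to_fixpoint(constraints, state):
    """Generic constraint-propagation sketch (not the Qspecs tool):
    each constraint is a monotone function refining the current state;
    iterate until no constraint changes anything, i.e., a fixpoint.
    The final state plays the role of an inductive invariant plus
    synthesized transition guards."""
    changed = True
    while changed:
        changed = False
        for refine in constraints:
            new_state = refine(state)
            if new_state != state:
                state, changed = new_state, True
    return state
```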
The generation of synthetic seismograms through simulation is a fundamental tool of seismology, required to run quantitative hypothesis tests. A variety of approaches have been developed throughout the seismological community, and each has its own specific user interface based on its implementation. This poses a challenge to researchers, who must learn a new interface for each new software package they wish to use, and creates substantial challenges when attempting to compare results from different tools. Here we provide a unified interface that facilitates interoperability amongst several simulation tools through a modern containerized Python package. Further, this package includes post-processing analysis modules designed to facilitate end-to-end analysis of synthetic seismograms. In this report we present the conceptual guidance and an example implementation of the new Waveform Simulation Framework.
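To illustrate what such a unified interface can look like, the following sketch defines a common request object and a backend dispatch function; all names here (SimulationRequest, run, "toy") are hypothetical illustrations of the design, not the package's actual API.

```python
from dataclasses import dataclass

@dataclass
class SimulationRequest:
    source: dict       # e.g., moment tensor, location, origin time
    receivers: list    # station coordinates or codes
    model: str         # velocity-model identifier

def run(request: SimulationRequest, backend: str) -> dict:
    """Dispatch one common request to any registered backend; each
    adapter translates it into the tool's native inputs and returns
    synthetics in a common form, making results directly comparable."""
    def toy_backend(r):
        return {"waveforms": [], "meta": {"backend": "toy", "model": r.model}}
    adapters = {"toy": toy_backend}   # real adapters would wrap each tool
    return adapters[backend](request)

print(run(SimulationRequest({"Mw": 5.0}, [(35.0, -106.6)], "ak135"), "toy"))
```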
We experimentally and computationally investigate a proposed frequency-domain method for detecting and tracking cislunar spacecraft and near-earth asteroids using heliostat fields at night. Unlike imaging, which detects spacecraft and asteroids by their streak in sidereally-fixed long-exposure photographs, our proposed detection method oscillates the orientation of heliostats concentrating light from the stellar field and measures the light’s photocurrent power spectrum at sub-milliHertz resolution. If heliostat oscillation traces out an ellipse fixed in the galactic coordinate system, spacecraft or asteroids produce a peak in photocurrent power spectrum at a frequency slightly shifted from the starlight peak. The frequency shift is on the scale of milliHertz and proportional to apparent angular rate relative to sidereal. Relative phase corresponds to relative angular position, enabling tracking. A potential advantage of this frequency-domain method over imaging is that detectivity improves with apparent angular rate and number of heliostats. Since heliostats are inexpensive compared to an astronomical observatory and otherwise unused at night, the proposed method may cost-effectively augment observatory systems such as NASA’s Asteroid Terrestrial-impact Last Alert System (ATLAS).
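A toy numerical illustration of the frequency-domain concept, with made-up numbers: a strong "starlight" tone at the heliostat oscillation frequency and a weak object tone shifted by 2 mHz are resolved because a long record gives sub-millihertz frequency resolution.

```python
import numpy as np

# All numbers here are illustrative. The record length T gives a
# frequency resolution of 1/T = 0.25 mHz, enough to separate tones
# spaced df = 2 mHz apart.
fs, T = 10.0, 4000.0                       # sample rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
f0, df = 0.5, 2e-3                         # oscillation freq, mHz-scale shift
signal = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * (f0 + df) * t)
spec = np.abs(np.fft.rfft(signal)) ** 2    # photocurrent power spectrum proxy
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peaks = np.sort(freqs[np.argsort(spec)[-2:]])
print(peaks)                               # ~ [0.5, 0.502] Hz
```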
Batched sparse linear algebra operations in general, and solvers in particular, have become a major algorithmic development activity and a foremost performance engineering effort for numerical software libraries targeting modern hardware with accelerators such as GPUs. Many applications, ECP and non-ECP alike, require simultaneous solutions of many small linear systems of equations that are structurally sparse in one form or another. To move toward high hardware utilization levels, it is important to provide these applications with interface designs that are both functionally efficient and performance portable, and that give full access to the appropriate batched sparse solvers running on the modern hardware accelerators prevalent across DOE supercomputing sites since the inception of ECP. To this end, we present here a summary of recent advances in the interface designs used by HPC software libraries supporting batched sparse linear algebra and in the development of batched sparse kernel codes for solvers and preconditioners. We also address potential interoperability opportunities for keeping the corresponding software portable across the major hardware accelerators from AMD, Intel, and NVIDIA, while maintaining the appropriate disclosure levels conforming to the active NDA agreements. The presented interface specifications include a mix of batched band, sparse iterative, and sparse direct solvers with accompanying functionality that is already required by application codes or that we anticipate will be needed in the near future. This report summarizes progress in Kokkos Kernels and the xSDK libraries MAGMA, Ginkgo, hypre, PETSc, and SuperLU.
Actinide thin-film coatings such as uranium dioxide (UO2) play an important role in nuclear reactors and other mission-relevant applications, but realization of their potential requires a deep fundamental understanding of the chemical vapor deposition (CVD) processes used for their growth. The slow experimental progress can be attributed, in part, to the standard safety guidelines associated with handling uranium byproducts, which are often corrosive, toxic, and radioactive. Accurate simulation techniques, when used in concert with experiment, can improve laboratory safety, material durability, and deliverable timeframes. However, state-of-the-art computational methods are either insufficiently accurate or intractably expensive. To remedy this situation, in this project we proposed a machine-learning (ML) accelerated workflow for simulating molecular clustering toward deposition. As a benchmark test case, we considered molecular clustering in steam and assessed independent components of our workflow by comparing with measured thermodynamic properties of water. After analyzing each component individually and finding no fundamental barrier to realization of the workflow, we attempted to integrate the ML component, a Sandia-developed tool called FitSNAP. As this was the first application of FitSNAP to atoms and molecules in the gas phase at Sandia, the method required more fitting data than was originally anticipated. Systematic improvements were made by including diatomic potentials, molecular single-bond-breaking curves, and symmetry-constrained intermolecular potentials in the fitting data. We concluded that our strategy provides a feasible pathway toward modeling CVD and related processes, but that extensive training data must be generated before it can be of practical use.
The goal of this Exploratory Express project was to explore the possibility of tunable ferromagnetism in Mn- or Cr-incorporated epitaxial Ga2O3 films. Tunability of magnetic properties can enable novel applications in spintronics, quantum computing, and magnetism-based logic by allowing control of magnetism down to the nanoscale. Carrier-mediated (electron or hole) ferromagnetic ordering in semiconductors can lead to tunable ferromagnetism by leveraging the tunability of carrier density with doping level, gate electric field, or optical pumping of the carriers. The magnetic ions (Cr or Mn) in Ga2O3 act as localized spin centers which can potentially be magnetically coupled through conduction electrons to enable ferromagnetic ordering. Here we investigated tunable ferromagnetism in a beta Ga2O3 semiconductor host with various n-doping levels by incorporating 2.4 atomic percent Mn or Cr. The R&D approach involved growth of an epitaxial Ga2O3 film on a sapphire or Ga2O3 substrate, implantation of Mn or Cr ions, post-implantation annealing of the samples, and magnetic measurements. We studied the magnetic behavior of Mn:Ga2O3 as a function of different n-doping levels and various annealing temperatures. Vibrating sample magnetometry (VSM) measurements exhibited strong ferromagnetic signals from the annealed Mn:Ga2O3 sample with an n-doping level of 5E19 cm-3. This ferromagnetic behavior disappears from Mn:Ga2O3 when the n-doping level is reduced to 5E16 cm-3. Although these results must be further verified by other measurement schemes due to the observation of background ferromagnetism from the growth substrate, they indicate the possibility of tunable ferromagnetism in Mn:Ga2O3 mediated by conduction electrons.
High-quality uncertainty quantification (UQ) is a critical component of enabling trust in deep learning (DL) models and is especially important if DL models are to be deployed in high-consequence applications. Conformal prediction (CP) methods represent an emerging nonparametric approach for producing UQ that is easily interpretable and, under weak assumptions, provides a guarantee regarding UQ quality. This report describes the research outputs of an Exploratory Express Laboratory Directed Research and Development (LDRD) project at Sandia National Laboratories. This project focused on how best to implement CP methods for DL models. This report introduces new methodology for obtaining high-quality UQ from DL models using CP methods, describes a novel system of assessing UQ quality, and provides experimental results that demonstrate the quality of the new methodology and utility of the UQ quality assessment system. Avenues for future research and discussion of potential impacts at Sandia and in the wider research community are also given.
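For concreteness, here is a minimal split-conformal sketch for regression; this is the standard textbook construction, not necessarily the specific methodology developed in the project.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Minimal split-conformal regression sketch: with exchangeable data,
    the returned intervals cover the true response with probability at
    least 1 - alpha (marginally)."""
    scores = np.abs(y_cal - model.predict(X_cal))         # nonconformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample quantile
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(X_test)
    return pred - q, pred + q                             # lower, upper bounds
```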
A collinear Second-Harmonic Orthogonal Polarized (SHOP) interferometer diagnostic capable of making electron areal density measurements of plasmas formed in Magnetically Insulated Transmission Lines (MITLs) has been developed.
Excitation of iron pentacarbonyl [Fe(CO)5], a prototypical photocatalyst, at 266 nm causes the sequential loss of two CO ligands in the gas phase, creating catalytically active, unsaturated iron carbonyls. Despite numerous studies, major aspects of its ultrafast photochemistry remain unresolved because the early excited-state dynamics have so far eluded spectroscopic observation. This has led to the long-held assumption that ultrafast dissociation of gas-phase Fe(CO)5 proceeds exclusively on the singlet manifold. Herein, we present a combined experimental-theoretical study employing ultrafast extreme ultraviolet transient absorption spectroscopy near the Fe M2,3-edge, which features spectral evolution on 100 fs and 3 ps time scales, alongside high-level electronic structure theory, which enables characterization of the molecular geometries and electronic states involved in the ultrafast photodissociation of Fe(CO)5. We assign the 100 fs evolution to spectroscopic signatures associated with intertwined structural and electronic dynamics on the singlet metal-centered states during the first CO loss and the 3 ps evolution to the competing dissociation of Fe(CO)4 along the lowest singlet and triplet surfaces to form Fe(CO)3. Calculations of transient spectra in both singlet and triplet states as well as spin-orbit coupling constants along key structural pathways provide evidence for intersystem crossing to the triplet ground state of Fe(CO)4. Thus, our work presents the first spectroscopic detection of transient excited states during ultrafast photodissociation of gas-phase Fe(CO)5 and challenges the long-standing assumption that triplet states do not play a role in the ultrafast dynamics.
Bilir, Baris; Kutanoglu, Erhan; Hasenbein, John J.; Austgen, Brent; Garcia, Manuel; Skolfield, Joshua K.
Here we develop two-stage stochastic programming models for generator winterization that enhance power grid resilience while incorporating social equity. The first stage in our models captures the investment decisions for generator winterization, and the second stage captures the operation of a degraded power grid, with the objective of minimizing load shed and social inequity. To incorporate equity into our models, we propose a concept called adverse effect probability that captures the disproportionate effects of power outages on communities with varying vulnerability levels. Grid operations are modeled using DC power flow, and equity is captured through mean or maximum adverse effects experienced by communities. We apply our models to a synthetic Texas power grid, using winter storm scenarios created from the generator outage data from the 2021 Texas winter storm. Our extensive numerical experiments show that more equitable outcomes, in the sense of reducing adverse effects experienced by vulnerable communities during power outages, are achievable with no impact on total load shed through investing in winterization of generators in different locations and capacities.
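In generic form (notation ours; the paper's exact model includes the adverse-effect-probability equity terms), a two-stage stochastic program reads

$$ \min_{x \in X} \; c^{\top} x + \mathbb{E}_{\xi}\!\left[ Q(x, \xi) \right], \qquad Q(x, \xi) = \min_{y \ge 0} \left\{ q(\xi)^{\top} y \; : \; W y \ge h(\xi) - T(\xi)\, x \right\}, $$

where $x$ encodes first-stage winterization investments, $\xi$ a winter storm scenario, and $y$ the second-stage recourse (DC power flow operation, load shed, and equity penalties).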
A highly parallelizable fluid plasma simulation tool based upon the first-order drift-diffusion equations is discussed. Atmospheric pressure plasmas have densities and gradients that require small element sizes to accurately simulate the plasma, resulting in computational meshes on the order of millions to tens of millions of elements for realistic-size plasma reactors. To enable simulations of this nature, parallel computing is required and must be optimized for the particular problem. Here, a finite-volume, electrostatic drift-diffusion implementation for low-temperature plasma is discussed. The implementation is built upon the Message Passing Interface (MPI) library in C++ using object-oriented programming. The underlying numerical method is outlined in detail and benchmarked against simple streamer formation from other streamer codes. Electron densities, electric fields, and propagation speeds are compared with the reference case and show good agreement. Convergence studies are also performed, showing that a minimal space step of approximately 4 μm is required to reduce relative error to below 1% during early streamer simulation times, with even finer space steps required for longer times. Additionally, strong and weak scaling of the implementation are studied and demonstrate excellent performance behavior up to 100 million elements on 1024 processors. Lastly, different advection schemes are compared for the simple streamer problem to analyze the influence of numerical diffusion on the resulting quantities of interest.
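For reference, the electrostatic first-order drift-diffusion system in its standard form (notation ours) is

$$ \frac{\partial n_k}{\partial t} + \nabla \cdot \left( \operatorname{sgn}(q_k)\, \mu_k n_k \mathbf{E} - D_k \nabla n_k \right) = S_k, \qquad \nabla \cdot \left( \varepsilon \nabla \phi \right) = -\sum_k q_k n_k, \qquad \mathbf{E} = -\nabla \phi, $$

where $n_k$, $\mu_k$, $D_k$, and $S_k$ are the density, mobility, diffusivity, and source term of charged species $k$, coupled self-consistently through the Poisson equation for the potential $\phi$.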
A technique is proposed for reproducing particle size distributions in three-dimensional simulations of the crushing and comminution of solid materials. The method is designed to produce realistic distributions over a wide range of loading conditions, especially for small fragments. In contrast to most existing methods, the new model does not explicitly treat the small-scale process of fracture. Instead, it uses measured fragment distributions from laboratory tests as the basic material property that is incorporated into the algorithm, providing a data-driven approach. The algorithm is implemented within a nonlocal peridynamic solver, which simulates the underlying continuum mechanics and contact interactions between fragments after they are formed. The technique is illustrated in reproducing fragmentation data from drop weight testing on sandstone samples.
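The following is a minimal sketch of the data-driven ingredient, assuming measured fragment sizes are available as a simple list; the actual implementation within the peridynamic solver is more involved.

```python
import numpy as np

def sample_fragment_sizes(measured_sizes, n, rng=None):
    """Draw n fragment sizes from the empirical CDF of laboratory-measured
    fragment sizes (inverse-CDF sampling). Schematic only; the solver's
    implementation couples this to the peridynamic fracture state."""
    rng = rng or np.random.default_rng()
    sizes = np.sort(np.asarray(measured_sizes, dtype=float))
    cdf = np.arange(1, sizes.size + 1) / sizes.size
    return np.interp(rng.uniform(size=n), cdf, sizes)
```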
We present large-scale atomistic simulations that reveal triple junction (TJ) segregation in Pt-Au nanocrystalline alloys in agreement with experimental observations. While existing studies suggest grain boundary solute segregation as a route to thermally stabilize nanocrystalline materials with respect to grain coarsening, here we quantitatively show that it is specifically the segregation to TJs that dominates the observed stability of these alloys. Our results reveal that doping the TJs renders them immobile, thereby locking the grain boundary network and hindering its evolution. In dilute alloys, it is shown that grain boundary and TJ segregation are not as effective in mitigating grain coarsening, as the solute content is not sufficient to dope and pin all grain boundaries and TJs. Our work highlights the need to account for TJ segregation effects in order to understand and predict the evolution of nanocrystalline alloys under extreme environments.
Granular metals (GMs), consisting of metal nanoparticles separated by an insulating matrix, frequently serve as a platform for fundamental electron transport studies. However, few technologically mature devices incorporating GMs have been realized, in large part because intrinsic defects (e.g., electron trapping sites and metal/insulator interfacial defects) frequently impede electron transport, particularly in GMs that do not contain noble metals. Here, we demonstrate that such defects can be minimized in molybdenum-silicon nitride (Mo-SiNx) GMs via optimization of the sputter deposition atmosphere. For Mo-SiNx GMs deposited in a mixed Ar/N2 environment, x-ray photoemission spectroscopy shows a 40%-60% reduction of interfacial Mo-silicide defects compared to Mo-SiNx GMs sputtered in a pure Ar environment. Electron transport measurements confirm the reduced defect density; the dc conductivity improved (decreased) by a factor of $10^4$–$10^5$, and the activation energy for variable-range hopping increased 10×. Since GMs are disordered materials, the GM nanostructure should, theoretically, support a universal power law (UPL) response; in practice, that response is generally overwhelmed by resistive (defective) transport. Here, the defect-minimized Mo-SiNx GMs display a superlinear UPL response, which we quantify as the ratio of the conductivity at 1 MHz to that at dc, $\Delta\sigma_\omega$. Remarkably, these GMs display a $\Delta\sigma_\omega$ of up to $10^7$, a three-order-of-magnitude improvement over the responses previously reported for GMs. By enabling high-performance electric transport with a non-noble-metal GM, this work represents an important step toward both new fundamental UPL research and scalable, mature GM device applications.
Oakes et al. (2023) published a review article in this journal. In that paper, Oakes et al. (2023) developed thermodynamic models to describe electrolyte solutions for the HClO4–NaClO4–H2O and HBr–NaBr–H2O systems, based on literature data. Their paper criticized previously published work from researchers in the field, including some of our own. Here, in this brief Comment, we first comment on their models and then briefly provide a technical response to that criticism.
Frequency-modulated (FM) combs based on active cavities like quantum cascade lasers have recently emerged as promising light sources in many spectral regions. Unlike passive modelocking, which generates amplitude modulation of the field’s amplitude, FM comb formation relies on the generation of phase modulation of the field’s phase. They can therefore be regarded as a phase-domain version of passive modelocking. However, while the ultimate scaling laws of passive modelocking have long been known—Haus showed in 1975 that pulses modelocked by a fast saturable absorber have a bandwidth proportional to the effective gain bandwidth—the limits of FM combs have been much less clear. Here, we show that FM combs based on fast gain media are governed by the same fundamental limits, producing combs whose bandwidths are linear in the effective gain bandwidth. Not only do we show theoretically that the diffusive effect of gain curvature limits comb bandwidth, but we also show experimentally how this limit can be increased. By adding carefully designed resonant-loss structures that are evanescently coupled to the cavity of a terahertz laser, we reduce the curvature and increase the effective gain bandwidth of the laser, demonstrating bandwidth enhancement. Our results can better enable the creation of active chip-scale combs and be applied to a wide array of cavity geometries.
Mitrani, James M.; Ampleford, David J.; Chandler, Gordon A.; Eckart, Mark J.; Hahn, Kelly D.; Jeet, Justin; Kerr, Shaun M.; Mannion, Owen M.; Moore, Alastair M.; Schlossberg, David J.; Youmans, Amanda E.; Grim, Gary P.
On pulsed fusion experiments, the neutron time of flight (nToF) diagnostic provides critical information on the fusion neutron energy spectrum. This work presents an analysis technique that uses two collinear nToF detectors to measure nuclear bang time and directional flow velocities. Two collinear detectors may be sufficient to disambiguate the contributions of nuclear bang time and directional flow velocities to the first moment of the neutron energy spectrum, providing an independent measurement of nuclear bang time. Preliminary results from measured nToF traces on the National Ignition Facility and additional applications of this technique are presented.
In magnetized liner inertial fusion (MagLIF), a cylindrical liner filled with fusion fuel is imploded with the goal of producing a one-dimensional plasma column at thermonuclear conditions. However, structures attributed to three-dimensional effects are observed in self-emission x-ray images. Despite this, the impact of many experimental inputs on the column morphology has not been characterized. We demonstrate the use of a linear regression analysis to explore correlations between morphology and a wide variety of experimental inputs across 57 MagLIF experiments. Results indicate the possibility of several unexplored effects. For example, we demonstrate that increasing the initial magnetic field correlates with improved stability. Although intuitively expected, this has never been quantitatively assessed in integrated MagLIF experiments. We also demonstrate that azimuthal drive asymmetries resulting from the geometry of the “current return can” appear to measurably impact the morphology. In conjunction with several counterintuitive null results, we expect the observed correlations will encourage further experimental, theoretical, and simulation-based studies. Finally, we note that the method used in this work is general and may be applied to explore not only correlations between input conditions and morphology but also with other experimentally measured quantities.
Atomic cluster expansion (ACE) methods provide a systematic way to describe particle local environments of arbitrary body order. For practical applications it is often required that the basis of cluster functions be symmetrized with respect to rotations and permutations. Existing methodologies yield sets of symmetrized functions that are over-complete. These methodologies thus require an additional numerical procedure, such as singular value decomposition (SVD), to eliminate redundant functions. In this work, it is shown that analytical linear relationships for subsets of cluster functions may be derived using recursion and permutation properties of generalized Wigner symbols. From these relationships, subsets (blocks) of cluster functions can be selected such that, within each block, functions are guaranteed to be linearly independent. It is conjectured that this block-wise independent set of permutation-adapted rotation and permutation invariant (PA-RPI) functions forms a complete, independent basis for ACE. Along with the first analytical proofs of block-wise linear dependence of ACE cluster functions and other theoretical arguments, numerical results are offered to demonstrate this. The utility of the method is demonstrated in the development of an ACE interatomic potential for tantalum. Using the new basis functions in combination with Bayesian compressive sensing sparse regression, some high degree descriptors are observed to persist and help achieve high-accuracy models.
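For orientation, the standard ACE construction (notation follows the common literature form and is illustrative, not specific to this paper) builds atomic base functions and couples them into rotation-invariant cluster functions:

$$ A_{i,\mu n l m} = \sum_{j} \phi_{\mu n l m}(\mathbf{r}_{ij}), \qquad B_{i,\nu} = \sum_{m_1,\dots,m_N} C^{L}_{m_1 \cdots m_N} \prod_{t=1}^{N} A_{i,\mu_t n_t l_t m_t}, $$

where the generalized coupling coefficients $C^{L}_{m_1 \cdots m_N}$ are built from generalized Wigner symbols; the analytical linear relationships exploited in this work arise from the recursion and permutation properties of those symbols.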
Nuclear power plant (NPP) risk assessment is broadly separated into disciplines of nuclear safety, security, and safeguards. Different analysis methods and computer models have been constructed to analyze each of these as separate disciplines. However, due to the complexity of NPP systems, there are risks that can span all these disciplines and require consideration of safety-security (2S) interactions which allows a more complete understanding of the relationship among these risks. A novel leading simulator/trailing simulator (LS/TS) method is introduced to integrate multiple generic safety and security computer models into a single, holistic 2S analysis. A case study is performed using this novel method to determine its effectiveness. The case study shows that the LS/TS method avoided introducing errors in simulation, compared to the same scenario performed without the LS/TS method. A second case study is then used to illustrate an integrated 2S analysis which shows that different levels of damage to vital equipment from sabotage at a NPP can affect accident evolution by several hours.
The additive manufacture of compositionally graded Al/Cu parts by laser engineered net shaping (LENS) is demonstrated. The use of a blue light build laser enabled deposition on a Cu substrate. The thermal gradient and rapid solidification inherent to selective laser melting enabled mass transport of Cu up to 4 mm from a Cu substrate through a pure Al deposition, providing a means of producing gradients with finer step sizes than the printed layer thicknesses. Divorcing gradient continuity from layer or particle size makes LENS a potentially enabling technology for the manufacture of graded density impactors for ramp compression experiments. Printing graded structures with pure Al, however, was prevented by the growth of Al2Cu3 dendrites and acicular grains amid a matrix of Al2Cu. A combination of adding TiB2 grain refining powder and actively varying print layer composition suppressed the dendritic growth mode and produced an equiaxed microstructure in a compositionally graded part. Material phases were characterized for crystal structure and nanoindentation hardness to enable a discussion of phase evolution in the rapidly solidifying melt pool of a LENS print.
The formation of magnesium chloride-hydroxide salts (magnesium hydroxychlorides) has implications for many geochemical processes and technical applications. For this reason, a thermodynamic database for evaluating the Mg(OH)2–MgCl2–H2O ternary system from 0 °C to 120 °C has been developed based on extensive experimental solubility data. Internally consistent sets of standard thermodynamic parameters (ΔGf°, ΔHf°, S°, and CP) were derived for several solid phases: 3Mg(OH)2:MgCl2:8H2O, 9Mg(OH)2:MgCl2:4H2O, 2Mg(OH)2:MgCl2:4H2O, 2Mg(OH)2:MgCl2:2H2O, brucite (Mg(OH)2), bischofite (MgCl2:6H2O), and MgCl2:4H2O. First, estimated values for the thermodynamic parameters were derived using a component addition method. These parameters were combined with standard thermodynamic data for Mg2+(aq) consistent with CODATA (Cox et al., 1989) to generate temperature-dependent Gibbs energies for the dissolution reactions of the solid phases. These data, in combination with values for MgOH+(aq) updated to be consistent with Mg2+-CODATA, were used to compute equilibrium constants and incorporated into a Pitzer thermodynamic database for concentrated electrolyte solutions. Phase solubility diagrams were constructed as a function of temperature and magnesium chloride concentration for comparison with available experimental data. To improve the fits, the reaction equilibrium constants for the Mg-bearing mineral phases, the binary Pitzer parameters for the MgOH+–Cl− interaction, and the temperature-dependent coefficients of those Pitzer parameters were constrained to match experimental phase boundaries and solubilities. These parameter adjustments resulted in an updated set of standard thermodynamic data and associated temperature-dependent functions. The resulting database has direct applications to investigations of magnesia cement formation and leaching, chemical barrier interactions related to disposition of heat-generating nuclear waste, and evaluation of magnesium-rich salt and brine stabilities at elevated temperatures.
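For reference, the textbook relation underlying this procedure (not specific to this work) links the temperature-dependent Gibbs energy of each dissolution reaction to its equilibrium constant:

```latex
\Delta_r G^{\circ}(T) = \sum_i \nu_i\, \Delta_f G_i^{\circ}(T),
\qquad
\ln K(T) = -\frac{\Delta_r G^{\circ}(T)}{R T}
```

where the stoichiometric coefficients of the dissolution reaction enter through the sum and R is the gas constant.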
The 2022 National Defense Strategy of the United States listed climate change as a serious threat to national security. Climate intervention methods, such as stratospheric aerosol injection, have been proposed as mitigation strategies, but the downstream effects of such actions on a complex climate system are not well understood. The development of algorithmic techniques for quantifying relationships between source and impact variables related to a climate event (i.e., a climate pathway) would help inform policy decisions. Data-driven deep learning models have become powerful tools for modeling highly nonlinear relationships and may provide a route to characterize climate variable relationships. In this paper, we explore the use of an echo state network (ESN) for characterizing climate pathways. ESNs are a computationally efficient neural network variation designed for temporal data, and recent work proposes ESNs as a useful tool for forecasting spatiotemporal climate data. However, like other neural networks, ESNs are noninterpretable black-box models, and this lack of transparency poses a hurdle for understanding variable relationships. We address this issue by developing feature importance methods for ESNs in the context of spatiotemporal data to quantify the variable relationships captured by the model. We conduct a simulation study to assess and compare the feature importance techniques, and we demonstrate the approach on reanalysis climate data. In the climate application, we consider a time period that includes the 1991 volcanic eruption of Mount Pinatubo. This event was a significant stratospheric aerosol injection, which acts as a proxy for an anthropogenic stratospheric aerosol injection. Using the proposed approach, we characterize relationships between pathway variables associated with this event that agree with relationships previously identified by climate scientists.
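To make the ESN structure concrete, here is a minimal sketch of the forecasting core described above (reservoir size, spectral-radius scaling, and the ridge readout are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_inputs, n_reservoir=500, spectral_radius=0.9, sparsity=0.1):
    """Random, sparse reservoir with fixed (untrained) weights."""
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W[rng.random((n_reservoir, n_reservoir)) > sparsity] = 0.0
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # heuristic echo-state scaling
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
    return W, W_in

def run_esn(U, W, W_in):
    """Drive the reservoir with inputs U (T x n_inputs) and collect states."""
    states = np.zeros((U.shape[0], W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(U):
        x = np.tanh(W @ x + W_in @ u)
        states[t] = x
    return states

def fit_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained, via ridge regression."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy usage: W_out maps reservoir states to one-step-ahead targets.
U = rng.standard_normal((1000, 3))       # stand-in input time series
Y = np.roll(U[:, :1], -1, axis=0)        # toy target: input led by one step
W, W_in = make_reservoir(U.shape[1])
S = run_esn(U, W, W_in)
W_out = fit_readout(S[:-1], Y[:-1])
pred = S[:-1] @ W_out
```

Because only the readout is trained, retraining under input perturbations is cheap, which is what makes permutation-style feature importance studies computationally feasible for ESNs.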
Interpenetrating lattices consist of two or more interwoven but physically separate sub-lattices with unique behaviors derived from their multi-body construction. If the sub-lattices are constructed or coated with an electrically conducting material, the close proximity and high surface area of the electrically isolated conductors allow the two lattices to interact electromagnetically, either across the initial dielectric-filled gap or through physical contact. Changes in the size of the dielectric gap between the sub-lattices induced by deformation can be measured via capacitance or resistance, allowing a structurally competent lattice to operate as a force or deformation sensor. In addition to resistive and capacitive deformation sensing, this work explores capacitance as a fundamental metamaterial property and the environmental sensing behaviors of interpenetrating lattices.
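In the idealized parallel-plate limit (used here purely for illustration), the capacitive sensing principle reduces to the gap dependence of capacitance:

```latex
C = \frac{\varepsilon_r \varepsilon_0 A}{d},
\qquad
\frac{\Delta C}{C_0} \approx -\frac{\Delta d}{d_0}
\quad \text{for } |\Delta d| \ll d_0
```

so a deformation that narrows the dielectric gap d between the sub-lattices registers directly as a capacitance change.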
Subcooled pool boiling experiments were performed using a dielectric coolant to test the effects of variations in heater surface configuration on pool boiling characteristics.
The rise of grid modernization has been prompted by the escalating demand for power, the deteriorating state of infrastructure, and growing concern regarding the reliability of electric utilities. The smart grid encompasses recent advancements in electronics, technology, telecommunications, and computing. Smart grid telecommunication frameworks provide bidirectional communication to facilitate grid operations. Software-defined networking (SDN) is a proposed approach for monitoring and regulating telecommunication networks that allows for enhanced visibility, control, and security in smart grid systems. Nevertheless, the integration of telecommunications infrastructure exposes smart grid networks to potential cyberattacks. Attackers may gain unauthorized access to intercept communications, introduce fabricated data into system measurements, overwhelm communication channels with false data packets, or target centralized controllers to disable network control. An ongoing, thorough examination of cyberattacks and protection strategies for smart grid networks is essential due to the ever-changing nature of these threats. Previous surveys on smart grid security lack coverage of modern methodologies and, to the best of our knowledge, most, if not all, focus on only one type of attack or protection. This survey examines the most recent security techniques, simultaneous multi-pronged cyberattacks, and defense utilities in order to address the challenges of future SDN smart grid research. The objective is to identify future research requirements, describe the existing security challenges, and highlight emerging threats and their potential impact on the deployment of the software-defined smart grid (SD-SG).
Seismic waveform data recorded at stations can be thought of as a superposition of the signal from a source of interest and noise from other sources. Frequency-based filtering methods for waveform denoising do not yield the desired results when the targeted signal and noise occupy similar frequency bands. Recently, denoising techniques based on deep-learning convolutional neural networks (CNNs), in which a recorded waveform is decomposed into signal and noise components, have led to improved results. These CNN methods, which use short-time Fourier transform representations of the time series, provide signal and noise masks for the input waveform; these masks are used to create denoised signal and designaled noise waveforms, respectively. However, advancements in the field of image denoising have shown the benefits of incorporating discrete wavelet transforms (DWTs) into CNN architectures to create multilevel wavelet CNN (MWCNN) models. The MWCNN model preserves the details of the input due to the good time–frequency localization of the DWT. Here, we use a data set of over 382,000 constructed seismograms recorded by the University of Utah Seismograph Stations network to compare the performance of CNN- and MWCNN-based denoising models. Evaluation of both models on constructed test data shows that the MWCNN model outperforms the CNN model in recovering the ground-truth signal component, in terms of both waveform similarity and preservation of amplitude information. Model evaluation on real-world data shows that both the CNN and MWCNN models outperform standard band-pass filtering (BPF; average improvement in signal-to-noise ratio of 9.6 and 19.7 dB, respectively, with respect to BPF). Evaluation of continuous data suggests the MWCNN denoiser can improve both signal detection capabilities and phase arrival time estimates.
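The shared mask-based decomposition can be sketched as follows (a toy illustration using scipy; the soft-threshold mask is a stand-in for the trained CNN/MWCNN output):

```python
import numpy as np
from scipy.signal import stft, istft

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
waveform = np.sin(2 * np.pi * 1.0 * t) + 0.5 * rng.standard_normal(t.size)

# Short-time Fourier transform of the recorded trace
f, tt, Z = stft(waveform, fs=fs, nperseg=256)

# In the real models, a network predicts signal/noise masks in [0, 1] from |Z|;
# here we fake one with a soft threshold on magnitude for illustration only.
mag = np.abs(Z)
signal_mask = mag / (mag + np.median(mag))   # placeholder, not the trained network
noise_mask = 1.0 - signal_mask

# Apply masks and invert to obtain denoised signal and "designaled" noise traces
_, denoised = istft(signal_mask * Z, fs=fs, nperseg=256)
_, designaled = istft(noise_mask * Z, fs=fs, nperseg=256)
```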
Explosion sources have been observed to generate significant shear-wave energy despite their isotropic nature. To investigate this phenomenon, we analyze the seismic data collected as part of the Source Physics Experiment (SPE): Dry Alluvium Geology (DAG) and investigate the generation of shear-wave energy via scattering. The data were produced by three underground chemical explosions and consist of three-component seismograms recorded by the DAG Large-N array. Synthetic tests suggest that for the DAG experiments, small-scale stochastic heterogeneities, defined as features with correlation lengths of tens to hundreds of meters, are more effective than large-scale geologic structure (scales >1–10 km) at reproducing the scattering of explosion-generated wavefields observed at DAG. We analyze the seismic data for spatially variable ratios between transversely and radially polarized seismic energy, and then estimate the mean free paths of P and S waves. All analyses are conducted within a frequency band of 5–50 Hz. The ratio of transversely to radially polarized energy is highest in the eastern and western portions of the Large-N array. In addition, the estimated S-wave mean free path is shorter in the eastern portion of the Large-N array. This variation indicates that more scattering occurs in the eastern area of the DAG array, suggesting azimuthal dependence of P-to-P and P-to-S scattering. This azimuthal dependence of P-to-S scattering has implications for explosion discrimination based on spectral ratios of seismic wave types, because the general assumption is that explosions do not generate shear-wave energy. Synthetic tests modeling only larger-scale geologic structure showed lower transversely polarized energy (only four stations with a transverse-to-radial energy ratio greater than 1) and fewer stations (<10) displaying short (<300 m) mean free paths than were observed in the DAG data.
This study presents theoretical formulations to evaluate the fundamental parameters and performance characteristics of a bottom-raised oscillating surge wave energy converter (OSWEC). Employing a flat-plate assumption and a potential flow formulation in elliptical coordinates, closed-form equations for the added mass, radiation damping, and excitation forces/torques in the relevant pitch-pitch and surge-pitch directions of motion are developed and used to calculate the system's response amplitude operator and the forces and moments acting on the foundation. The model is benchmarked against numerical simulations using WAMIT and WEC-Sim, showing excellent agreement. The sensitivity of the analytical hydrodynamic solutions to plate thickness is investigated over thickness-to-width ratios ranging from 1:80 to 1:10. The results show that as the thickness of the benchmark OSWEC increases, the deviation of the analytical hydrodynamic coefficients from the numerical solutions grows from 3 % to 25 %. Differences in the excitation forces and torques, however, remain within 12 %. While the flat-plate assumption is a limitation of the proposed analytical model, the error is within a reasonable margin for use in the design space exploration phase, before a higher-fidelity (and thus more computationally expensive) model is employed. A parametric study demonstrates the ability of the analytical model to quickly sweep over a domain of OSWEC dimensions, illustrating its utility in the early phases of design.
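For a single pitch degree of freedom (the simplest case such closed-form coefficients feed into), the response amplitude operator follows the standard frequency-domain relation; the notation below is the conventional one, not taken from the paper:

```latex
\mathrm{RAO}(\omega)
= \frac{\left| \xi_5(\omega) \right|}{A}
= \frac{\left| X_5(\omega) \right|}
{\left| -\omega^2 \left( I_{55} + A_{55}(\omega) \right) + i\,\omega\, B_{55}(\omega) + C_{55} \right|}
```

with A55 the added moment of inertia, B55 the radiation damping, X5 the excitation torque per unit wave amplitude A, and C55 the hydrostatic restoring coefficient.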
The rotating bending fatigue (RBF) behavior (fully reversed, R = −1) of additively manufactured (AM) Ti–6Al–4V alloy produced via laser powder bed fusion (PBF-L) was investigated with respect to different microstructures achieved through novel heat treatments. The investigation seeks to isolate the effect of microstructure by controlling other variables that can affect fatigue behavior in Ti–6Al–4V, such as chemistry, porosity, and surface roughness. To control these variables, different hot isostatic pressing (HIP) treatments at 800 °C, 920 °C, and 1050 °C with a 920 °C temper were applied to three sets of Ti–6Al–4V cylinders originating from the same PBF-L build, with 30 tests per condition. After HIP treatment, the specimens were machined and tested. The highest runout stress was achieved after sub-β transus HIP at 800 °C for 2 h at 200 MPa of pressure. A significant drop in fatigue strength in the super-β transus HIP-treated specimens was attributed to large prior-β grains and grain boundary α. For the sub-β transus HIP specimens, differences in fatigue strength were attributed to α lath thickness, relative dislocation density, and dislocation boundary strengthening.
Naturally occurring uranium complicates monitoring for occupational exposures. There are several retroactive methods that can be used to monitor for occupational exposures, with benefits and drawbacks to each. Analysis of uranium in urine by mass spectrometry and alpha spectrometry is compared, and methods of determining an occupational exposure are presented. Furthermore, the minimum detectable concentrations from each analysis and a method for intake determination based on the analytical results are compared for various solubility types and mixtures. Mass spectrometry with radiochemical separation was found to be the most sensitive analysis for detecting occupational exposures to anthropogenic mixtures based on minimum detectable doses calculated from the proposed method for intake determination.
The critical stress for cutting of a void and He bubble (generically referred to as a cavity) by edge and screw dislocations has been determined for FCC Fe0.70Cr0.20Ni0.10—close to 300-series stainless steel—over a range of cavity spacings, diameters, pressures, and glide plane positions. The results exhibit anomalous trends with spacing, diameter, and pressure when compared with classical theories for obstacle hardening. These anomalies are attributed to elastic anisotropy and the wide extended dislocation core in low stacking fault energy metals, indicating that caution must be exercised when using perfect dislocations in isotropic solids to study void and bubble hardening. In many simulations with screw dislocations, cross-slip was observed at the void/bubble surface, leading to an additional contribution to strengthening. We refer to this phenomenon as cavity cross-slip locking, and argue that it may be an important contributor to void and bubble hardening.
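One common classical baseline against which such anomalies are judged is the Bacon–Kocks–Scattergood form for the critical resolved shear stress of a dislocation cutting a row of obstacles (quoted here for orientation; the paper's point is precisely that the simulations deviate from relations of this type):

```latex
\tau_c \approx \frac{G b}{2 \pi L} \left[ \ln\!\left( \frac{\bar{D}}{b} \right) + B \right],
\qquad
\bar{D}^{-1} = D^{-1} + L^{-1}
```

where G is the shear modulus, b the Burgers vector magnitude, D the cavity diameter, L the inter-cavity spacing, and B a constant of order unity.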
Exploding bridgewire (EBW) detonators are used to rapidly and reliably initiate energetic reactions by exploding a bridgewire via Joule heating. While the mechanisms of EBW detonators have been studied extensively under nominal conditions, comparatively few studies have addressed the operability of thermally damaged detonators. We present a mesoscale simulation study of thermal damage in a representative EBW detonator, using discrete element method (DEM) simulations that explicitly account for individual particles in the pressed explosive powder. We use a simplified model of melting in which solid spherical particles undergo uniform shrinking and fluid dynamics are ignored. The subsequent settling of particles results in the formation of a gap between the solid powder and the bridgewire, which we study under different conditions. In particular, particle cohesion has a significant effect on gap formation and settling behavior: sufficiently high cohesion leads to coalescence of particles into a free-standing pellet. This behavior is qualitatively compared to experimental visualization data, and the simulations are shown to capture several key changes in pellet shape. We derive minimum and maximum limits on gap formation during melting using simple geometric arguments. In the absence of cohesion, results agree with the maximum gap size. With increasing cohesion, the gap size decreases, eventually saturating at the minimum limit. We present results for different combinations of interparticle cohesion and detonator orientation with respect to gravity, demonstrating the complex behavior of these systems and the potential of DEM simulations to capture a range of scenarios.
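One plausible reading of the geometric bounds (our illustration; the symbols are not the paper's): for a powder column of initial height H whose particles shrink by a linear factor s < 1 on melting, a fully cohesive pellet shrinking affinely sets the minimum gap, while complete resettling at unchanged packing fraction sets the maximum:

```latex
h_{\min} = H \left( 1 - s \right),
\qquad
h_{\max} = H \left( 1 - s^{3} \right)
```

consistent with the cohesionless simulations reaching the maximum and strongly cohesive ones saturating at the minimum.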
The use of radiation and radioactive substances produces radioactive wastes, which require safe management and disposal to avoid risks to human health and the environment. To ensure permanently safe disposal, the performance of a deep geological repository for radioactive waste is assessed against internationally agreed risk-based standards. Assessing the postclosure safety of the future system's evolution includes screening features, events, and processes (FEPs) relevant to the situation, developing them into scenarios, and finally developing and executing safety assessment (SA) models. Global FEP catalogs describe important natural and man-made repository system features and identify events and processes that may affect these features in the future. By combining FEPs, many of which are uncertain, different possible future system evolution scenarios are derived. Repository licensing should consider both the reference or “base” evolution and alternative futures that may lead to radiation release, pollution, or exposures. Scenarios are used to derive and consider both base and alternative evolutions, often through the production of scenario-specific SA models and the recombination of their results into an assessment of the risk of harm. While the FEP-based scenario development process outlined here has evolved somewhat since its inception in the 1980s, the fundamental ideas remain unchanged. A spectrum of common approaches is described (e.g., bottom-up vs. top-down scenario development, probabilistic vs. bounding handling of uncertainty), relating to how individual numerical models for possible futures are converted into a determination of whether the system is safe (i.e., how aleatoric uncertainty and scenarios are integrated through bounding or Monte Carlo approaches).
Hydrogen powered locomotives are being explored to reduce emissions in rail applications. The risks of operations like refueling should be understood to ensure safe environments for workers and members of the public. Sensitivity analyses were conducted using HyRAM+ to identify major drivers of risk and compare effects of system parameters on individual risk. The consequences of jet fires from full-bore leaks dominated the risk, compared to explosions or smaller leaks. Pipe size, leak detection capability, and leak frequencies of system components greatly affected risk while overpressure modeling parameters and ambient conditions had little effect. The effects of personal protective equipment (PPE) materials on individual risk were quantified by reducing the individual’s exposure time or absorbed thermal dose. PPE only showed a risk reduction in low-risk cases. This study highlighted target areas for risk mitigation, including leak detection equipment and component maintenance, and indicated that the minimal effects of other parameters on risk may not justify prescriptive requirements for refueling operators.
Typical quantitative risk assessments (QRAs) provide deterministic estimates and understanding of the risks posed, but they are constructed using significant assumptions and uncertainties due to limited data availability and the historical momentum of using nominal estimates. This report presents a hydrogen QRA using HyRAM+ that incorporates uncertainty via Latin hypercube sampling and performs sensitivity analysis using linear regression.
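Generically, this workflow pairs space-filling sampling of uncertain inputs with a regression-based sensitivity summary. A minimal sketch using scipy's quasi-Monte Carlo module and a stand-in risk model (the parameter names, bounds, and model are placeholders, not HyRAM+'s API):

```python
import numpy as np
from scipy.stats import qmc

# Three uncertain inputs: leak frequency, pipe diameter (m), detection probability
l_bounds = [1e-5, 0.005, 0.50]
u_bounds = [1e-3, 0.050, 0.99]

sampler = qmc.LatinHypercube(d=3, seed=42)
X = qmc.scale(sampler.random(n=500), l_bounds, u_bounds)

def risk_model(x):
    """Placeholder for the QRA model; returns a scalar individual risk."""
    leak_freq, diameter, p_detect = x
    return leak_freq * diameter**2 * (1.0 - p_detect)

y = np.apply_along_axis(risk_model, 1, X)

# Sensitivity via standardized regression coefficients from a linear fit
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(dict(zip(["leak_freq", "diameter", "p_detect"], coef.round(3))))
```

The standardized coefficients rank the inputs by their linear influence on risk, which is the sense in which the report's sensitivity analysis identifies dominant parameters.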
This study conducts a comparative analysis, using non-equilibrium Green's functions (NEGF), of two state-of-the-art two-well (TW) terahertz quantum cascade lasers (THz QCLs) supporting clean 3-level systems. The devices have nearly identical parameters, and NEGF calculations with an abrupt-interface roughness height of 0.12 nm predict a maximum operating temperature (Tmax) of ~250 K for both devices. Experimentally, however, one device reaches a Tmax of ~250 K and the other a Tmax of only ~134 K. Both devices were fabricated and measured under identical conditions in the same laboratory, with high-quality processes as verified by reference devices. The main difference between the two devices is that they were grown in different MBE reactors. Our NEGF-based analysis considered all parameters related to MBE growth, including the maximum estimated variation in aluminum content, growth rate, doping density, background doping, and abrupt-interface roughness height. From our NEGF calculations, it is evident that the sole parameter to which a drastic drop in Tmax could be attributed is the abrupt-interface roughness height. The simulations also indicate that both devices exhibit high-quality interfaces, with one having an abrupt-interface roughness height of approximately an atomic layer and the other approximately a monolayer. However, these small differences in interface sharpness cause the large performance discrepancy. This underscores the sensitivity of device performance to interface roughness and emphasizes its strategic role in achieving higher operating temperatures for THz QCLs. We suggest atom probe tomography (APT) as a path to analyze and measure the (graded-)interface roughness (IFR) parameters for THz QCLs, and subsequently as a design tool for higher-performance THz QCLs, as was done for mid-IR QCLs. Our study not only addresses challenges faced by other groups in reproducing the record Tmax values of ~250 K and ~261 K but also proposes a systematic pathway for further improving the temperature performance of THz QCLs beyond the state of the art.
Finding alloys with specific design properties is challenging due to the large number of possible compositions and the complex interactions between elements. This study introduces a multi-objective Bayesian optimization approach that guides molecular dynamics simulations to discover high-performance refractory alloys with both targeted intrinsic static thermomechanical properties and desirable deformation mechanisms during dynamic loading. The objective functions target excellent thermomechanical stability, via a high bulk modulus, low thermal expansion, and high heat capacity, and a resilient deformation mechanism that maximizes retention of the BCC phase after shock loading. Contrasting two optimization procedures, we show that the Pareto-optimal solutions are confined to a small region of the performance space when the property objectives display a cooperative relationship. Conversely, the Pareto front is much broader when these properties have antagonistic relationships. Density functional theory simulations validate these findings and unveil the underlying atomic-bond changes driving property improvements.
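The Pareto bookkeeping at the heart of such multi-objective searches is compact to state; a generic sketch assuming all objectives are to be maximized (the candidate scores below are synthetic placeholders):

```python
import numpy as np

def pareto_mask(Y):
    """Boolean mask of non-dominated rows of Y (all objectives maximized)."""
    n = Y.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if j is >= everywhere and strictly > somewhere
        dominated = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Candidate alloys scored on (bulk modulus, -thermal expansion,
# heat capacity, BCC retention after shock), all to be maximized
Y = np.random.default_rng(7).random((200, 4))
front = Y[pareto_mask(Y)]
print(f"{front.shape[0]} non-dominated candidates out of {Y.shape[0]}")
```

Whether this front is tight or broad in the performance space is exactly the cooperative-versus-antagonistic distinction the study draws.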
Understanding and accurately characterizing energy dissipation mechanisms in civil structures during earthquakes is an important element of seismic assessment and design. The most commonly used model is attributed to Rayleigh. This paper proposes a systematic approach to quantify the uncertainty associated with Rayleigh's damping model. Bayesian calibration with embedded model error is employed to treat the coefficients of the Rayleigh model as random variables using modal damping ratios. Through a numerical example, we illustrate how this approach works and how the calibrated model can address modeling uncertainty associated with the Rayleigh damping model.
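For reference, the Rayleigh model whose coefficients are calibrated expresses the damping matrix as a mass- and stiffness-proportional combination, which yields a closed form for the modal damping ratios:

```latex
\mathbf{C} = \alpha\,\mathbf{M} + \beta\,\mathbf{K},
\qquad
\zeta_i = \frac{\alpha}{2\,\omega_i} + \frac{\beta\,\omega_i}{2}
```

where the natural frequency of mode i enters each ratio; treating alpha and beta (plus an embedded model-error term) as random variables lets measured modal damping ratios inform their posterior distributions.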
We present a comprehensive benchmarking framework for evaluating machine-learning approaches applied to phase-field problems. The framework focuses on four key analysis areas crucial for assessing the performance of such approaches in a systematic and structured way. First, interpolation tasks are examined to identify trends in prediction accuracy and the accumulation of error over simulation time. Second, extrapolation tasks are evaluated using the same metrics. Third, the relationship between model performance and data requirements is investigated to understand its impact on the predictions and robustness of these approaches. Finally, systematic errors are analyzed to identify specific or inadvertent rare events that trigger high errors. Quantitative metrics evaluating the local and global description of microstructure evolution, along with other scalar metrics representative of phase-field problems, are used across these four analysis areas. This benchmarking framework provides a path to evaluate the effectiveness and limitations of machine-learning strategies applied to phase-field problems, ultimately facilitating their practical application.
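As one concrete example of the kind of scalar metric used across these analysis areas (our illustration, not the framework's exact definitions), a per-timestep normalized error can be tracked alongside a global microstructure descriptor such as phase fraction:

```python
import numpy as np

def nrmse_over_time(pred, true):
    """Per-timestep normalized RMSE for fields shaped (T, H, W)."""
    err = np.sqrt(np.mean((pred - true) ** 2, axis=(1, 2)))
    return err / (true.max() - true.min() + 1e-12)

def phase_fraction(fields, threshold=0.5):
    """Global descriptor: fraction of the domain above a phase threshold, per timestep."""
    return np.mean(fields > threshold, axis=(1, 2))

# Toy usage: growing error over simulation time is the signature the
# interpolation/extrapolation analyses look for.
rng = np.random.default_rng(3)
true = rng.random((50, 64, 64))
pred = true + 0.01 * np.arange(50)[:, None, None] * rng.standard_normal(true.shape)
print(nrmse_over_time(pred, true)[[0, -1]])   # error at first vs. last timestep
```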
This study investigates high-performance electrochromic windows applied to a passive house and to a residential dwelling built to IECC 2021 (the IECC dwelling). In the lab, the electrochromic film switches its transmitted solar heat gain coefficient (SHGC) from 0.09 to 0.7 and its visible transmittance from 0.15 to 0.82, with a power consumption of 1.23 W/m2 during switching times of less than 3 minutes. We extrapolate these results to a window assembly. Building energy models of the houses were evaluated in Santa Fe, New Mexico. A Monte Carlo analysis for 2020, 2040, 2060, and 2080 was conducted for Shared Socioeconomic Pathways 2-4.5, 3-7.0, and 5-8.5. Cases with and without the electrochromic windows, and with and without electricity, were used to determine energy use intensity and hours beyond thermal safety thresholds. The passive house showed 1.3-3.1% mean energy savings and the IECC dwelling 4.4-5.1%, with the efficiency benefit of the electrochromics growing into the future for both cases. Even so, overall savings decrease into the future for the passive house, where growth in the cooling load dominates; conversely, overall energy savings increase into the future for the IECC dwelling, where heating loads dominate. For thermal resilience, the passive house exhibited a mean decrease of 0.02-0.31% in hours within the extreme caution range (i.e., > 32.2 °C, ≤ 39.4 °C), while the IECC dwelling exhibited 0.38-4.38%. The study therefore shows that electrochromic windows provide smaller benefits for the passive house than for the IECC dwelling. These results show that the benefit of electrochromic windows depends in a complex way on both house efficiency and climate change.