This report examines the localization of high-frequency electromagnetic fields in general three-dimensional convex-walled cavities along periodic paths between opposing sides of the cavity. It treats the three-dimensional case in which the mirrors at the ends of the orbit have two different radii of curvature. Localized modes associated with unstable orbits of this kind are known as scars.
Chen, Qi; Johnson, Emma S.; Bernal, David E.; Valentin, Romeo; Kale, Sunjeev; Bates, Johnny; Siirola, John D.; Grossmann, Ignacio E.
We present three core principles for engineering-oriented integrated modeling and optimization tool sets—intuitive modeling contexts, systematic computer-aided reformulations, and flexible solution strategies—and describe how new developments in Pyomo.GDP for Generalized Disjunctive Programming (GDP) advance this vision. We describe a new logical expression system implementation for Pyomo.GDP allowing for a more intuitive description of logical propositions. The logical expression system supports automated reformulation of these logical constraints to linear constraints. We also describe two new logic-based global optimization solver implementations built on Pyomo.GDP that exploit logical structure to avoid “zero-flow” numerical difficulties that arise in nonlinear network design problems when nodes or streams disappear. These new solvers also demonstrate the capability to link to external libraries for expanded functionality within an integrated implementation. We present these new solvers in the context of a flexible array of solution paths available to GDP models. Finally, we present results on a new library of GDP models demonstrating the value of multiple solution approaches.
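The automated reformulation of logical propositions into linear constraints can be illustrated with a minimal sketch. The mappings below are the standard textbook equivalences between Boolean logic and 0/1 integer constraints, not Pyomo.GDP's actual API; function names are illustrative only:

```python
# Sketch: standard mappings from logical propositions over Boolean
# variables to equivalent linear inequalities over 0/1 variables.
# Illustrative only -- not the Pyomo.GDP implementation.

def implies_to_linear(a, b):
    """A -> B  is equivalent to  (1 - a) + b >= 1  for a, b in {0, 1}."""
    return (1 - a) + b >= 1

def lor_to_linear(vals):
    """A1 v A2 v ... v An  is equivalent to  sum(a_i) >= 1."""
    return sum(vals) >= 1

def exactly_one_to_linear(vals):
    """Exactly one of A1..An true  is equivalent to  sum(a_i) == 1."""
    return sum(vals) == 1

# Check the linear form of implication against its truth table
for a in (0, 1):
    for b in (0, 1):
        assert implies_to_linear(a, b) == (not a or bool(b))
```

In a solver these inequalities would be posed over binary decision variables; here they are evaluated on fixed 0/1 values only to verify the equivalences.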
With the proliferation of additive manufacturing and 3D printing technologies, a broader palette of material properties can be elicited from cellular solids, also known as metamaterials, architected foams, programmable materials, or lattice structures. Metamaterials are designed and optimized under the assumption of perfect geometry and a homogeneous underlying base material. Yet in practice real lattices contain thousands or even millions of complex features, each with imperfections in shape and material constituency. While the role of these defects on the mean properties of metamaterials has been well studied, little attention has been paid to the stochastic properties of metamaterials, a crucial next step for high reliability aerospace or biomedical applications. In this work we show that it is precisely the large quantity of features that serves to homogenize the heterogeneities of the individual features, thereby reducing the variability of the collective structure and achieving effective properties that can be even more consistent than the monolithic base material. In this first statistical study of additive lattice variability, a total of 239 strut-based lattices were mechanically tested for two pedagogical lattice topologies (body centered cubic and face centered cubic) at three different relative densities. The variability in yield strength and modulus was observed to exponentially decrease with feature count (to the power −0.5), a scaling trend that we show can be predicted using an analytic model or a finite element beam model. The latter provides an efficient pathway to extend the current concepts to arbitrary/complex geometries and loading scenarios. These results not only illustrate the homogenizing benefit of lattices, but also provide governing design principles that can be used to mitigate manufacturing inconsistencies via topological design.
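The reported exponential decrease of variability with feature count (to the power -0.5) is the behavior expected when a lattice property averages over many independent strut-level properties. A toy Monte Carlo with a hypothetical strut strength distribution (not the paper's data) illustrates the scaling:

```python
import random
import statistics

def lattice_strength_std(n_struts, trials=2000, seed=0):
    """Std. dev. of mean strut strength across many simulated lattices.

    Each strut strength is drawn i.i.d. (hypothetical mean 100, std 10);
    the lattice property is the average over its struts, so its spread
    shrinks like n_struts**-0.5.
    """
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        struts = [rng.gauss(100.0, 10.0) for _ in range(n_struts)]
        means.append(statistics.fmean(struts))
    return statistics.stdev(means)

s10 = lattice_strength_std(10)
s1000 = lattice_strength_std(1000)
# With 100x more struts the ratio should be close to 100**-0.5 = 0.1
ratio = s1000 / s10
```

This is the homogenizing effect described above: the collective structure can be more consistent than any individual feature.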
In late 2004, the U.S. Nuclear Regulatory Commission (NRC) initiated a project to analyze the relative efficacy of alternative protective action strategies in reducing consequences to the public from a spectrum of nuclear power plant core melt accidents. The study is documented in NUREG/CR-6953, “Review of NUREG-0654, Supplement 3, ‘Criteria for Protective Action Recommendations for Severe Accidents,’” Volumes 1, 2, and 3. The Protective Action Recommendations (PAR) study provided a technical basis for enhancing the protective action guidance contained in Supplement 3, “Guidance for Protective Action Strategies,” to NUREG-0654/FEMA-REP-1, Rev. 1, “Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants,” dated November 2011. In the time since, a number of important changes and additions have been made to the MACCS code suite, the nuclear accident consequence analysis code used to perform the study. The purpose of this analysis is to determine whether the MACCS results used in the PAR study would be different given recent changes to the MACCS code suite and input parameter guidance. Updated parameters that were analyzed include cohorts, keyhole evacuation, shielding and exposure parameters, compass sector resolution, and a range of source terms from rapidly progressing accidents. Results indicate that using updated modeling assumptions and capabilities may lead to a decrease in predicted health consequences for those within the emergency planning zone compared to the original PAR study.
The BayoTech hydrogen generation system has been evaluated in terms of safety considerations at the NM Gas site. The consequence of a leak in different components of the system was evaluated in terms of plume dispersion and overpressure. Additionally, the likelihood of a leak scenario for different hydrogen components was identified. The worst-case plume dispersion cases, full-bore leaks, resulted in relatively large plumes. However, these cases were noted to be far less likely than the partial-break cases that were evaluated. The partial-break cases resulted in nearly negligible plume lengths. Similarly, the overpressure analysis of the full-bore break scenarios resulted in much larger overpressures than the partial-break cases (which resulted in negligible overpressure at the lot line). Several cases evaluated in the analysis represented leak scenarios from both hydrogen and natural gas sources. Generally, the natural gas leak scenarios resulted in a smaller horizontal impact than the hydrogen leaks. The worst-case consequence from a hydrogen leak resulted from the compressors, storage pods, or dispensing system; these worst-case consequences show that the plume may disperse to adjacent facilities and to the street. To account for the safety features that may isolate the leak, the consequence was evaluated at different times after the leak event to show the reduction of pressure; after 2 seconds, the plume dispersion from this event is contained within the perimeter of the site. When considering both likelihood and consequence, the risk may be considered low because the maximum frequency of a full-bore leak from any component within the hydrogen compound is 8.2E-5/yr, meaning that a full-bore leak is expected to occur less than once every 10,000 years. The risk can be further reduced by implementing mitigative countermeasures, such as CMU walls along the sides of the equipment compound.
This would reduce the overall consequence of the worst-case dispersion scenarios (horizontal impact of the plume). In terms of siting and safety analysis, the NFPA 2 code was used to provide a high-level evaluation of the current site plan. The most limiting equipment in terms of set-back distance is the compression/storage unit because of the high-pressure hydrogen it contains. The site layout was evaluated for an acceptable location for the compression/storage unit based on NFPA 2 set-back distances. It is important to note that the NFPA 2 set-back distances consider both likelihood and consequence. This is important because the worst-case results evaluated herein also represent the least likely leak scenario. Other site-specific considerations were evaluated, including the parking shade structure with photovoltaic cells and refueling vehicles. These issues were dispositioned and determined not to present a safety risk.
National Technology & Engineering Solutions of Sandia, LLC (NTESS) has recently amended the NTESS Retirement Income Plan (Pension Plan). The updated Summary Plan Description (SPD) for the Pension Plan effective January 1, 2022 is provided.
This report details how to successfully use the Fairfield Nodal ZLand seismic instruments to collect data, including preparation steps prior to deploying the instruments, how to record data during a field campaign, and how to retrieve recorded data from the instruments after their deployment. This guide will walk through each step for the novice user, as well as provide a checklist of critical steps for the advanced user to ensure successful, efficient field campaigns and seismic data collection. Currently, use of the seismic nodal instruments is highly limited because of the detailed prior knowledge required to successfully set up, use, and retrieve data from these instruments. With this guide, all interested users will have the knowledge required to perform a seismic deployment and collect data with the Fairfield Nodal instruments.
In the pursuit of improving additively manufactured (AM) component quality and reliability, fine-tuning critical process parameters such as laser power and scan speed is a great first step toward limiting defect formation and optimizing the microstructure. However, the synergistic effects between these process parameters, layer thickness, and feedstock attributes (e.g., powder size distribution) on part characteristics such as microstructure, density, hardness, and surface roughness are not as well-studied. In this work, we investigate 316L stainless steel density cubes built via laser powder bed fusion (L-PBF), emphasizing the significant microstructural changes that occur due to altering the volumetric energy density (VED) via laser power, scan speed, and layer thickness changes, coupled with different starting powder size distributions. This study demonstrates that there is not one ideal process set and powder size distribution for each machine. Instead, there are several combinations or feedstock/process parameter ‘recipes’ to achieve similar goals. This study also establishes that for equivalent VEDs, changing powder size can significantly alter part density, geometrically necessary dislocation (GND) density, and hardness. Through proper parameter and feedstock control, part attributes such as density, grain size, texture, dislocation density, hardness, and surface roughness can be customized, thereby creating multiple high-performance regions in the AM process space.
In this article, we provide an analytical model for the total ionizing dose (TID) effects on the bit error statistics of commercial flash memory chips. We have validated the model with experimental data collected by irradiating several commercial NAND flash memory chips from different technology nodes. We find that our analytical model can project bit errors at higher TID values [20 krad (Si)] from measured data at lower TID values [<1 krad (Si)]. Based on our model and the measured data, we have formulated basic design rules for using a commercial flash memory chip as a dosimeter. We discuss the impact of NAND chip-to-chip variability, noise margin, and the intrinsic errors on the dosimeter design using detailed experimentation.
Current state-of-the-art gasoline direct-injection (GDI) engines use multiple injections as one of the key technologies to improve exhaust emissions and fuel efficiency. For this technology to be successful, adequate control of the fuel quantity for each injection must be ensured. However, nonlinearity and variations in the injection quantity can deteriorate the accuracy of fuel control, especially with small fuel injections. Therefore, it is necessary to understand the complex injection behavior and to develop a predictive model to be utilized in the development process. This study presents a methodology for rate of injection (ROI) and solenoid voltage modeling using artificial neural networks (ANNs) constructed from a set of Zeuch-style hydraulic experimental measurements conducted over a wide range of conditions. A quantitative comparison between the ANN model and the experimental data shows that the model is capable of predicting not only general features of the ROI trend, but also transient and non-linear behaviors at particular conditions. In addition, the end of injection (EOI) could be detected precisely with a virtually generated solenoid voltage signal and the signal processing method, which is applicable to an actual engine control unit. A correlation between the detected EOI timings calculated from the modeled signal and the measurement results showed a high coefficient of determination.
Complex networks of information processing systems, or information supply chains, present challenges for performance analysis. We establish a mathematical setting, in which a process within an information supply chain can be analyzed in terms of the functionality of the system's components. Principles of this methodology are rigorously defended and induce a model for determining the reliability for the various products in these networks. Our model does not limit us from having cycles in the network, as long as the cycles do not contain negation. It is shown that our approach to reliability resolves the nonuniqueness caused by cycles in a probabilistic Boolean network. An iterative algorithm is given to find the reliability values of the model, using a process that can be fully automated. This automated method of discerning reliability is beneficial for systems managers. As a systems manager considers systems modification, such as the replacement of owned and maintained hardware systems with cloud computing resources, the need for comparative analysis of system reliability is paramount. The model is extended to handle conditional knowledge about the network, allowing one to make predictions of weaknesses in the system. Finally, to illustrate the model's flexibility over different forms, it is demonstrated on a system of components and subcomponents.
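The iterative computation of reliability values in a negation-free network with cycles can be sketched as a monotone fixed-point iteration. The update rule below (each node works with probability p_i when at least one of its inputs works) and the three-node example are hypothetical stand-ins for the model described above:

```python
def reliability_fixed_point(p, deps, tol=1e-12, max_iter=200):
    """Iteratively solve r_i = p_i * (1 - prod_j (1 - r_j)) over deps[i].

    Nodes with no dependencies have r_i = p_i.  With no negation the
    update is monotone starting from r = 0, so the iteration converges
    even when `deps` contains a cycle (here: a <-> b).
    """
    r = {i: (p[i] if not deps.get(i) else 0.0) for i in p}
    for _ in range(max_iter):
        r_new = {}
        for i in p:
            if not deps.get(i):
                r_new[i] = p[i]
            else:
                miss = 1.0  # probability that all inputs of i have failed
                for j in deps[i]:
                    miss *= 1.0 - r[j]
                r_new[i] = p[i] * (1.0 - miss)
        if all(abs(r_new[i] - r[i]) < tol for i in p):
            return r_new
        r = r_new
    return r

# A source node feeding a two-node cycle: a depends on (src OR b), b on a.
p = {"src": 0.9, "a": 0.99, "b": 0.98}
deps = {"a": ["src", "b"], "b": ["a"]}
rel = reliability_fixed_point(p, deps)
```

The fixed point is unique here despite the a/b cycle, illustrating how an iterative scheme can resolve the nonuniqueness that cycles otherwise introduce.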
The How To Manual supplements the User's Manual and the Theory Manual. The goal of the How To Manual is to reduce learning time for complex end-to-end analyses. These documents are intended to be used together. See the User's Manual for a complete list of the options for a solution case. All the examples are part of the Sierra/SD test suite. Each runs as is. The organization is similar to the other documents: How to run, Commands, Solution cases, Materials, Elements, Boundary conditions, and then Contact. The table of contents and index are indispensable. The Geometric Rigid Body Modes section is shared with the User's Manual.
This report summarizes Fiscal Year 2021 accomplishments from Sandia National Laboratories Wind Energy Program. The portfolio consists of funding provided by the DOE EERE Wind Energy Technologies Office (WETO), Advanced Research Projects Agency-Energy (ARPA-E), DOE Small Business Innovation Research (SBIR), and the Sandia Laboratory Directed Research and Development (LDRD) program. These accomplishments were made possible through capabilities investments by WETO, internal Sandia investment, and partnerships between Sandia and other national laboratories, universities, and research institutions around the world.
We present an experiment to detect one-ton TNT-equivalent chemical explosions using pulsed Doppler radar observations of isodensity layers in the ionospheric E region during two campaigns. The first campaign, conducted on 15 October 2019, produced potential detections of all three shots. The detections closely resemble the temporal and spectral properties predicted using the InfraGA ray tracing and weakly nonlinear waveform propagation model. Here the model predicts that within 6.5–7.25 min of each shot a waveform peaking between 0.9 and 0.4 Hz will impact the ionosphere at 100 km. As the waves pass through this region, they will imprint their signal on an isodensity layer, which is detectable using a Doppler radar operating at the plasma frequency of the isodensity. Within the time windows of each of the three shots in the first campaign, we detect enhanced wave activity peaking near 0.5 Hz. These waves were imprinted on the Doppler signal probing an isodensity layer at 2.785 MHz near 100 km altitude. Despite these detections, the method appears to be unreliable as none of the six shots from the second campaign, conducted on 10 July 2020, were detected. The observations from this campaign were characterized by an increased acoustic noise environment in the microbarom band and persistent scintillation on the radar returns. These effects obscured any detectable signal from these shots, and the baseline noise was well above the detection levels of the first campaign.
Agarwal, Sapan; Clark, Lawrence T.; Youngsciortino, Clifford; Ng, Garrick; Black, Dolores; Cannon, Matthew; Black, Jeffrey; Quinn, Heather; Brunhaver, John; Barnaby, Hugh; Manuel, Jack; Blansett, Ethan; Marinella, Matthew J.
In this article, we present a unique method of measuring single-event transient (SET) sensitivity in 12-nm FinFET technology. A test structure is presented that approximately measures the length of SETs using flip-flop shift registers with clock inputs driven by an inverter chain. The test structure was irradiated with ions at linear energy transfers (LETs) of 4.0, 5.6, 10.4, and 17.9 MeV-cm2/mg, and the cross sections of SET pulses measured down to 12.7 ps are presented. The experimental results are interpreted using a modeling methodology that combines TCAD and radiation effect simulations to capture the SET physics, and SPICE simulations to model the SETs in a circuit. The modeling shows that only ion strikes on the fin structure of the transistor would result in enough charge collected to produce SETs, while strikes in the subfin and substrate do not result in enough charge collected to produce measurable transients. Comparisons of the cumulative cross sections obtained from the experiment and from the simulations validate the modeling methodology presented.
Magnesium borohydride (Mg(BH4)2) is a promising candidate for material-based hydrogen storage due to its high hydrogen gravimetric/volumetric capacities and potential for dehydrogenation reversibility. Currently, slow dehydrogenation kinetics and the formation of intermediate polyboranes deter its application in clean energy technologies. In this study, a novel approach for modifying the physicochemical properties of Mg(BH4)2 is described, which involves the addition of reactive molecules in the vapor phase. This process enables the investigation of a new class of additive molecules for material-based hydrogen storage. The effects of four molecules (BBr3, Al2(CH3)6, TiCl4, and N2H4) with varying degrees of electrophilicity are examined to infer how the chemical reactivity can be used to tune the additive-Mg(BH4)2 interaction and optimize the release of hydrogen at lower temperatures. Control over the amounts of additive exposure to Mg(BH4)2 is shown to prevent degradation of the bulk γ-Mg(BH4)2 crystal structure and loss of hydrogen capacity. Trimethylaluminum provides the most encouraging results on Mg(BH4)2, maintaining 97% of the starting theoretical Mg(BH4)2 hydrogen content and demonstrating hydrogen release at 115 °C. These results firmly establish the efficacy of this approach toward controlling the properties of Mg(BH4)2 and provide a new path forward for additive-based modification of hydrogen storage materials.
Understanding semiconductor breakdown under high electric fields is an important aspect of materials properties, particularly for the design of power devices. For decades, a power law has been used to describe the dependence of the material-specific critical electric field (Ecrit), at which the material breaks down, on the bandgap (Eg). The relationship is often used to gauge tradeoffs of emerging materials whose properties have not yet been determined. Unfortunately, the reported dependencies of Ecrit on Eg cover a surprisingly wide range in the literature. Moreover, Ecrit is a function of material doping. Further, discrepancies arise in Ecrit values owing to differences between punch-through and non-punch-through device structures. We report a new normalization procedure that enables comparison of critical electric field values across materials, doping, and different device types. An extensive examination of numerous references reveals that the dependence Ecrit ∝ Eg^1.83 best fits the most reliable and newest data for both direct and indirect semiconductors.
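A power-law exponent such as the 1.83 in Ecrit ∝ Eg^1.83 is typically extracted as the slope of a straight-line least-squares fit in log-log space. The sketch below illustrates the procedure on made-up data (not the paper's dataset):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of log(y) vs log(x), i.e. y = C * x**n.

    Returns (n, C): the exponent is the slope and the prefactor is
    exp(intercept) of the straight-line fit in log-log space.
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    intercept = my - slope * mx
    return slope, math.exp(intercept)

# Synthetic data generated to follow y = 2 * x**1.83 exactly
xs = [1.1, 2.0, 3.4, 5.5]
ys = [2.0 * v ** 1.83 for v in xs]
n_exp, prefactor = fit_power_law(xs, ys)
```

On real literature data the scatter (and the normalization issues discussed above) is what makes the extracted exponent vary so widely between studies.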
InAs-based interband cascade lasers (ICLs) can be more easily adapted toward long wavelength operation than their GaSb counterparts. Devices made from two recent ICL wafers with an advanced waveguide structure are reported, which demonstrate improved device performance in terms of reduced threshold current densities for ICLs near 11 μm or extended operating wavelength beyond 13 μm. The ICLs near 11 μm yielded a significantly reduced continuous wave (cw) lasing threshold of 23 A/cm2 at 80 K with substantially increased cw output power, compared with previously reported ICLs at similar wavelengths. ICLs made from the second wafer incorporated an innovative quantum well active region, composed of InAsP layers, and lased in the pulsed mode up to 120 K at 13.2 μm, which is the longest wavelength achieved for III-V interband lasers.
The Savannah River Site plans to reprocess defense spent nuclear fuel currently stored in their L-Basin via the Accelerated Basin Deinventory (ABD) Program. The previous plan for the L-Basin spent nuclear fuel was to dispose of it directly in the federal repository without reprocessing. Implementing the ABD Program will result in final disposal of approximately 900 fewer canisters of defense spent nuclear fuel and the production of approximately 521 more canisters of vitrified high-level waste glass with some specific differences from the planned high-level waste glass. Because the 235U in the L-Basin spent nuclear fuel is not intended to be recovered, the fissile mass loading of the vitrified high-level glass waste form to be produced must be increased above the current value of 897 g/m3 to a maximum of 2,500 g/m3. Therefore, implementing the ABD Program would produce a variant of high-level waste glass—the ABD glass—that needs to be evaluated for future repository licensing, which includes both preclosure safety and postclosure performance. This report describes the approach to and summarizes the results of an evaluation of the potential effects of implementing the ABD Program at the Savannah River Site on the technical basis for future repository licensing for a generic repository that is similar to Yucca Mountain and for one that is fully generic. This evaluation includes the effects on preclosure safety analyses and postclosure performance assessment for both repository settings. The license application for the proposed Yucca Mountain repository (DOE 2008), which is serving as a framework for this evaluation, concluded that the proposed Yucca Mountain repository would meet all applicable regulatory requirements. The evaluation documented in this report found that implementing the ABD Program is not expected to change that conclusion for a generic repository similar to Yucca Mountain or for a generic repository with respect to the preclosure safety analyses. 
With respect to the postclosure performance of a generic repository, no concerns were identified.
Filtration, pressure drop, and quantitative fit of N95 respirators were robust to several decontamination methods, including vaporous hydrogen peroxide, wet heat, bleach, and ultraviolet light. Bleach may not have penetrated the hydrophobic outer layers of the N95 respirator. Isopropyl alcohol and detergent both severely degraded the electrostatic charge of the electret filtration layer. These are the first data in N95 respirators showing that the loss of filtration efficiency is directly correlated with loss of surface potential on the filtration layer. The pressure drop was unchanged, so loss of filtration efficacy would not be apparent during a user seal check. The elastic straps degrade with repeated mechanical cycling during extended use, but decontamination did not appear to degrade them. Significant loss of strap elasticity would be apparent during a user negative-pressure seal check.
A myriad of phenomena in materials science and chemistry rely on quantum-level simulations of the electronic structure in matter. While moving to larger length and time scales has been a pressing issue for decades, such large-scale electronic structure calculations are still challenging despite modern software approaches and advances in high-performance computing. The silver lining in this regard is the use of machine learning to accelerate electronic structure calculations, a line of research that has recently gained growing attention. The grand challenge therein is finding a suitable machine-learning model during a process called hyperparameter optimization, which adds a massive computational overhead on top of data generation. We accelerate the construction of machine-learning surrogate models by roughly two orders of magnitude by circumventing excessive training during the hyperparameter optimization phase. We demonstrate our workflow for Kohn-Sham density functional theory, the most popular computational method in materials science and chemistry.
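The general idea of circumventing excessive training during hyperparameter optimization can be sketched generically: rank candidates with an inexpensive proxy score and run the expensive full training only on the finalists. The proxy, cost model, and candidate values below are all hypothetical stand-ins, not the workflow of the paper:

```python
import math

calls = {"full": 0}  # count how often the expensive path is taken

def cheap_score(lr):
    # Inexpensive proxy (e.g. loss after a handful of steps) -- a stand-in
    return abs(math.log10(lr) + 3)

def full_train(lr):
    # Expensive full training run -- a stand-in that tracks its call count
    calls["full"] += 1
    return (math.log10(lr) + 3) ** 2

def select_hyperparameters(candidates, top_k=2):
    """Two-stage search: cheap proxy first, full training on top_k only."""
    ranked = sorted(candidates, key=cheap_score)
    return min(ranked[:top_k], key=full_train)

# Only 2 of the 5 candidates ever incur the full training cost
best = select_hyperparameters([1e-1, 1e-2, 1e-3, 1e-4, 1e-5])
```

The speedup comes from the fraction of candidates that skip full training; the two-orders-of-magnitude figure above refers to the authors' specific workflow, not this toy.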
Here, we explore the dimensionality of the U.S. Department of Agriculture’s household food security survey module among households with children. Using a novel methodological approach to measuring food security, we find that there is multidimensionality in the module for households with children that is associated with the overall household, adult, and child dimensions of food security. Additional analyses suggest official estimates of food security among households with children are robust to this multidimensionality. However, we also find that accounting for the multidimensionality of food security among these households provides new insights into the correlates of food security at the household, adult, and child levels of measurement.
The objective of this project was to develop a novel capability to generate synthetic data sets for the purpose of training Machine Learning (ML) algorithms for the detection of malicious activities on satellite systems. Our approach was to (a) generate sparse data sets using emulation modeling and (b) enlarge the sparse data using Generative Adversarial Networks (GANs). We based our emulation modeling on the open-source NASA Operational Simulator for Small Satellites (NOS3) developed by the Katherine Johnson Independent Verification and Validation (IV&V) program in West Virginia. Significant new capabilities on NOS3 had to be developed for our data set generation needs. To expand these data sets for the purpose of training ML, we experimented with (a) Extreme Learning Machines (ELMs) and (b) Wasserstein-GANs (WGAN-GP).
The core function of many neural network algorithms is the dot product, or vector-matrix multiply (VMM), operation. Crossbar arrays utilizing resistive memory elements can reduce the computational energy of neural algorithms by up to five orders of magnitude compared to conventional CPUs, where moving data between the processor, SRAM, and DRAM dominates energy consumption. By utilizing analog operations to reduce data movement, resistive memory crossbars can enable processing of large amounts of data at lower energy than conventional memory architectures.
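In a resistive crossbar the VMM happens physically: input voltages drive the rows, programmed conductances sit at the crosspoints, and each column wire sums the resulting currents. A minimal numeric sketch of that mapping (with arbitrary, hypothetical conductance values):

```python
def crossbar_vmm(voltages, conductances):
    """Column currents I_j = sum_i V_i * G[i][j].

    Ohm's law gives the crosspoint current V_i * G[i][j]; Kirchhoff's
    current law sums the contributions along each column wire.
    `conductances[i][j]` is the programmed weight at row i, column j.
    """
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

# 3 inputs, 2 outputs: weights stored as conductances (arbitrary units)
G = [[0.5, 1.0],
     [0.2, 0.3],
     [0.0, 0.7]]
V = [1.0, 2.0, 3.0]
I = crossbar_vmm(V, G)
```

The whole matrix-vector product is produced in one analog step, which is where the data-movement savings over a processor/SRAM/DRAM pipeline come from.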
For isolated white dwarf (WD) stars, fits to their observed spectra provide the most precise estimates of their effective temperatures and surface gravities. Even so, recent studies have shown that systematic offsets exist between such spectroscopic parameter determinations and those based on broadband photometry. These large discrepancies (10% in Teff, 0.1 M⊙ in mass) provide scientific motivation for reconsidering the atomic physics employed in the model atmospheres of these stars. Our recent simulation work suggests that the most important remaining uncertainties in simulation-based calculations of line shapes are the treatment of (1) the electric field distribution and (2) the occupation probability (OP) prescription. We review the work that has been done in these areas and outline possible avenues for progress.
The role of a solid surface for initiating gas-phase reactions is still not well understood. The hydrogen atom (H) is an important intermediate in gas-phase ethane dehydrogenation and is known to interact with surface sites on catalysts. However, direct measurements of H near catalytic surfaces have not yet been reported. Here, we present the first H measurements by laser-induced fluorescence in the gas-phase above catalytic and noncatalytic surfaces. Measurements at temperatures up to 700 °C show H concentrations to be at the highest above inert quartz surfaces compared to stainless steel and a platinum-based catalyst. Additionally, H concentrations above the catalyst decreased rapidly with time on stream. These newly obtained observations are consistent with the recently reported differences in bulk ethane dehydrogenation reactivity of these materials, suggesting H may be a good reporter for dehydrogenation activity.
Liu, Weiran; Ullrich, Paul A.; Guba, Oksana G.; Caldwell, Peter M.; Keen, Noel D.
In global atmospheric modeling, the differences between nonhydrostatic (NH) and hydrostatic (H) dynamical cores are negligible in dry simulations when grid spacing is larger than 10 km. However, recent studies suggest that those differences can be significant at far coarser resolution when moisture is included. To better understand how NH and H differences manifest in global fields, we perform and analyze an ensemble of 28 and 13 km seasonal simulations with the NH and H dynamical cores in the Energy Exascale Earth System Model global atmosphere model, where the differences between H and NH configurations are minimized. A set of idealized rising bubble experiments is also conducted to further investigate the differences. Although NH and H differences are not significant in global statistics and zonal averages, significant differences in precipitation amount and patterns are observed in parts of the tropics. The most prominent differences emerge near India and the Western Pacific in the boreal summer, and the central-southern Indian Ocean and Pacific in the boreal winter. Tropical differences influence surrounding regions through modification of the regional circulation and can propagate to the extratropics, leading to significant temperature and geopotential differences over the middle to high latitudes. While the dry bubble experiments show negligible deviation between H and NH dynamics until grid spacing is below 6.25 km, precipitation amount and vertical velocity are different in the moist case even at 25 km resolution.
This presentation provides details regarding integral experiments at Sandia National Laboratories for fiscal year 2021. The experiments discussed are as follows: IER 230: Characterize the Thermal Capabilities of the 7uPCX; IER 304: Temperature Dependent Critical Benchmarks; IER 305: Critical Experiments with UO2 Rods and Molybdenum Foils; IER 306: Critical Experiments with UO2 Rods and Rhodium Foils; IER 441: Epithermal HEX Lattices with SNL 7uPCX Fuel for Testing Nuclear Data; IER 452: Inversion Point of the Isothermal Reactivity Coefficient; and IER 523: Critical Experiments with ACRR UO2-BeO Fuel.
Sandia National Labs has access to unused ACRR fuel, which is unique in its enrichment (35%) and material composition (BeO). ACRR fuel is available in quantities well above what is needed for experiments. Two experiment concepts have been investigated: UO2-BeO fuel elements and pellets with 7uPCX fuel. The worth of the UO2-BeO is large enough to be well above the anticipated experiment uncertainties.
This work presents a new multiscale method for coupling the 3D Maxwell's equations to the 1D telegrapher's equations. While Maxwell's equations are appropriate for modeling complex electromagnetics in arbitrary-geometry domains, simulation cost for many applications (e.g. pulsed power) can be dramatically reduced by representing less complex transmission line regions of the domain with a 1D model. By assuming a transverse electromagnetic (TEM) ansatz for the solution in a transmission line region, we reduce the Maxwell's equations to the telegrapher's equations. We propose a self-consistent finite element formulation of the fully coupled system that uses boundary integrals to couple between the 3D and 1D domains and supports arbitrary unstructured 3D meshes. Additionally, by using a Lagrange multiplier to enforce continuity at the coupling interface, we allow for an absorbing boundary condition to also be applied to non-TEM modes on this boundary. We demonstrate that this feature reduces non-physical reflection and ringing of non-TEM modes off of the coupling boundary. By employing implicit time integration, we ensure a stable coupling, and we introduce an efficient method for solving the resulting linear systems. We demonstrate the accuracy of the new method on two verification problems, a transient O-wave in a rectilinear prism and a steady-state problem in a coaxial geometry, and show the efficiency and weak scalability of our implementation on a cold test of the Z-machine MITL and post-hole convolute.
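For reference, the 1D reduced model invoked above is the standard lossless telegrapher's system; writing it in terms of line voltage V(z, t) and current I(z, t) with per-unit-length inductance L and capacitance C (notation assumed here, not taken from the report):

```latex
\frac{\partial V}{\partial z} = -L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial z} = -C\,\frac{\partial V}{\partial t}
```

These equations support TEM waves propagating at speed $1/\sqrt{LC}$ with characteristic impedance $Z_0 = \sqrt{L/C}$, which is what makes the 1D representation of transmission-line regions so much cheaper than the full 3D Maxwell solve.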
We introduce a robust verification tool for computational codes, which we call Stochastic Robust Extrapolation based Error Quantification (StREEQ). Unlike the prevalent Grid Convergence Index (GCI) [1] method, our approach is suitable for both stochastic and deterministic computational codes and is generalizable to any number of discretization variables. Building on ideas introduced in the Robust Verification [2] approach, we estimate the converged solution and orders of convergence with uncertainty using multiple fits of a discretization error model. In contrast to Robust Verification, we perform these fits to many bootstrap samples yielding a larger set of predictions with smoother statistics. Here, bootstrap resampling is performed on the lack-of-fit errors for deterministic code responses, and directly on the noisy data set for stochastic responses. This approach lends a degree of robustness to the overall results, capable of yielding precise verification results for sufficiently resolved data sets, and appropriately expanding the uncertainty when the data set does not support a precise result. For stochastic responses, a credibility assessment is also performed to give the analyst an indication of the trustworthiness of the results. This approach is suitable for both code and solution verification, and is particularly useful for solution verification of high-consequence simulations.
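The bootstrap-over-model-fits idea can be illustrated with a minimal sketch for a deterministic response (synthetic data; the power-law error model, function names, and tolerances here are illustrative stand-ins, not the StREEQ implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def error_model(h, u_exact, c, p):
    # Power-law discretization error model: u(h) = u_exact + c * h**p
    return u_exact + c * h**p

def bootstrap_extrapolate(h, u, n_boot=2000, seed=0):
    """Fit the error model, bootstrap-resample the lack-of-fit residuals,
    and refit, yielding distributions of the extrapolated (h -> 0)
    solution and the observed order of convergence."""
    rng = np.random.default_rng(seed)
    popt, _ = curve_fit(error_model, h, u, p0=[u[-1], 1.0, 1.0])
    resid = u - error_model(h, *popt)
    solutions, orders = [], []
    for _ in range(n_boot):
        # Resample residuals with replacement and add back to the fit
        u_star = error_model(h, *popt) + rng.choice(resid, size=len(h))
        try:
            p_star, _ = curve_fit(error_model, h, u_star, p0=popt, maxfev=5000)
        except RuntimeError:
            continue  # skip non-converged fits
        solutions.append(p_star[0])
        orders.append(p_star[2])
    return np.array(solutions), np.array(orders)

# Synthetic mesh study: exact value 1.0, second-order convergence, small noise
h = np.array([0.4, 0.2, 0.1, 0.05])
u = 1.0 + 0.5 * h**2 + 1e-5 * np.array([1.0, -1.0, 1.0, -1.0])
u0, p = bootstrap_extrapolate(h, u)
```

The spread of `u0` and `p` across bootstrap replicates is what supplies the uncertainty on the converged solution and convergence order; for a well-resolved data set the distributions are tight, and for a poorly resolved one they widen appropriately.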
This presentation discusses activities related to the Nuclear Criticality Safety Program (NCSP) at Sandia National Laboratories in fiscal year 2021. This includes NCSP funding, integral experiment requests, integral experiment spending, highlights, and COVID-19 impacts.
In situ analysis of surfaces during high-flux plasma exposure represents a long-standing challenge in the study of plasma-material interactions. While post-mortem microscopy can provide a detailed picture of structural and compositional changes, in situ techniques can capture the dynamic evolution of the surface. In this study, we demonstrate how spectroscopic ellipsometry can be applied to the real-time characterization of W nanostructure (also known as "fuzz") growth during exposure to low-temperature, high-flux He plasmas. Strikingly, over a wide range of sample temperatures and helium fluences, the measured ellipsometric parameters (ψ, Δ) collapse onto a single curve that can be directly correlated with surface morphologies characterized by ex situ helium ion microscopy. The initial variation in the (ψ, Δ) parameters appears to be governed by small changes in surface roughness (<50 nm) produced by helium bubble nucleation and growth, followed by the emergence of 50 nm diameter W tendrils. This basic behavior appears to be reproducible over a wide parameter space, indicating that spectroscopic ellipsometry may be of general practical use as a diagnostic for studying surface morphologies produced by high-flux He implantation in refractory metals. An advantage of the methods outlined here is that they are applicable at low incident ion energies, even below the sputtering threshold. As an example of this application, we apply in situ ellipsometry to examine how W fuzz growth is affected by varying both the ion energy and the temperature of the surface.
Here, we utilize electrically detected magnetic resonance (EDMR) measurements to compare high-field stressed and gamma-irradiated Si/SiO2 metal–oxide–silicon (MOS) structures. We utilize spin-dependent recombination (SDR) EDMR detected using the Fitzgerald and Grove dc I-V approach to compare the effects of high-field electrical stressing and gamma irradiation on defect formation at and near the Si/SiO2 interface. As anticipated, both greatly increase the density of Pb centers (silicon dangling bonds at the interface). The irradiation also generated a significant increase in the dc I-V EDMR response of E' centers (oxygen vacancies in the SiO2 films), whereas the E' EDMR response generated by high-field stressing is much weaker than in the gamma irradiation case. These results suggest a difference in the physical distribution of the defects produced by radiation damage and by high electric field stressing.
The reactivity of carbonyl oxides has previously been shown to exhibit strong conformer and substituent dependencies. Through a combination of synchrotron-multiplexed photoionization mass spectrometry experiments (298 K and 4 Torr) and high-level theory [CCSD(T)-F12/cc-pVTZ-F12//B2PLYP-D3/cc-pVTZ with an added CCSDT(Q) correction], we explore the conformer dependence of the reaction of acetaldehyde oxide (CH3CHOO) with dimethylamine (DMA). The experimental data support the theoretically predicted 1,2-insertion mechanism and the formation of an amine-functionalized hydroperoxide reaction product. Tunable vacuum-ultraviolet photoionization probing of anti- or anti- + syn-CH3CHOO reveals a strong conformer dependence of the title reaction. The rate coefficient of DMA with anti-CH3CHOO is predicted to exceed that for the reaction with syn-CH3CHOO by a factor of ∼34,000, which is attributed to submerged-barrier (syn) versus barrierless (anti) mechanisms for energetically downhill reactions.
CPU/GPU heterogeneous compute platforms are a ubiquitous element in computing, and a programming model specified for this heterogeneous computing model is important for both performance and programmability. A programming model that exposes the shared, unified address space between the heterogeneous units is a necessary step in this direction, as it removes the burden of explicit data movement from the programmer while maintaining performance. GPU vendors, such as AMD and NVIDIA, have released software-managed runtimes that can provide programmers the illusion of unified CPU and GPU memory by automatically migrating data in and out of the GPU memory. However, this runtime support is not included in GPGPU-Sim, a commonly used framework that models the features of a modern graphics processor that are relevant to non-graphics applications. UVM Smart was developed to extend GPGPU-Sim 3.x to incorporate the modeling of on-demand paging and data migration through the runtime. This report discusses the integration of UVM Smart with GPGPU-Sim 4.0 and the modifications made to improve simulation performance and accuracy.
Enhancing the efficiency of second-harmonic generation using all-dielectric metasurfaces has to date mostly focused on electromagnetic engineering of optical modes in the meta-atom. Further advances in nonlinear conversion efficiency can be gained by engineering the material nonlinearities at the nanoscale; however, this cannot be achieved using conventional materials. Semiconductor heterostructures that support resonant nonlinearities using quantum-engineered intersubband transitions can provide this new degree of freedom. By simultaneously optimizing the heterostructures and meta-atoms, we experimentally realize an all-dielectric polaritonic metasurface with a maximum second-harmonic generation power conversion factor of 0.5 mW/W² and a power conversion efficiency of 0.015% at nominal pump intensities of 11 kW/cm². These conversion efficiencies are higher than the record values reported to date in all-dielectric nonlinear metasurfaces, but with 3 orders of magnitude lower pump power. Our results therefore open a new direction for designing efficient nonlinear all-dielectric metasurfaces for new classical and quantum light sources.
The protection systems of the electric grid (circuit breakers, relays, reclosers, and fuses) are the primary components responding to resilience events, ranging from common storms to extreme events. The protective equipment must detect and operate very quickly, generally in <0.25 seconds, to remove faults from the system before the system becomes unstable or additional equipment is damaged. The burden on protection systems is increasing as the complexity of the grid increases: renewable energy resources, particularly inverter-based resources (IBRs), and increasing electrification all contribute to a more complex grid landscape for protection devices. In addition, there are increasing threats from natural disasters, aging infrastructure, and manmade attacks that can cause faults and disturbances in the electric grid. The challenge for the application of AI to power system protection is that events are rare and unpredictable. In order to improve the resiliency of the electric grid, AI has to be able to learn from very little data. During an extreme disaster, it may not be important that the perfect, most optimal action is taken, but AI must be guaranteed to always respond by moving the grid toward a more stable state during unseen events.
A combination of electrodeposition and thermal reduction methods has been utilized for the synthesis of ligand-free FeNiCo alloy nanoparticles through a high-entropy oxide intermediate. These phases are of great interest to the electrocatalysis community, especially when formed by a sustainable chemistry method. This is successfully achieved by first forming a complex five-element amorphous FeNiCoCrMn high-entropy oxide (HEO) phase via electrodeposition from a nanodroplet emulsion solution of the metal salt reactants. The amorphous oxide phase is then thermally treated and reduced at 570-600 °C to form the crystalline FeNiCo alloy with a separate CrMnOx cophase. The FeNiCo alloy is fully characterized by scanning transmission electron microscopy and energy-dispersive X-ray spectroscopy elemental analysis and is identified as a face-centered cubic crystal with the lattice constant a = 3.52 Å. The activity of the unoptimized, ligand-free FeNiCo nanoparticles toward the oxygen evolution reaction is evaluated in alkaline solution, and their onset potential is found to be ∼185 mV more cathodic than that of Pt metal. Beyond being able to synthesize highly crystalline, ligand-free FeNiCo nanoparticles, the demonstrated and relatively simple two-step process is ideal for the synthesis of tailor-made nanoparticles where the desired composition is not easily achieved with classical solution-based chemistries.
There are several engineering applications in which the assumptions of homogenization and scale separation may be violated, in particular, for metallic structures constructed through additive manufacturing. Instead of resorting to direct numerical simulation of the macroscale system with an embedded fine scale, an alternative approach is to use an approximate macroscale constitutive model, but then estimate the model-form error using a posteriori error estimation techniques and subsequently adapt the macroscale model to reduce the error for a given boundary value problem and quantity of interest. Here, we investigate this approach to multiscale analysis in solids with unseparated scales using the example of an additively manufactured metallic structure consisting of a polycrystalline microstructure that is neither periodic nor statistically homogeneous. As a first step toward the general nonlinear case, we focus here on linear elasticity, in which each grain within the polycrystal is linear elastic but anisotropic.
Experiments were designed and conducted to investigate the impact that geometric cavities have on the transfer of energy from an embedded explosion to the surface of the physical domain. The experimental domains were fabricated as 3-inch polymer cubes, with varying cavity geometries centered in the cubes. The energy transfer, represented as a shock wave, was generated by the detonation of an exploding bridgewire at the center of the cavity. The shock propagation was tracked by schlieren imaging through the optically accessible polymer. The magnitude of energy transferred to the surface was recorded by an array of pressure sensors. A minimum of five experimental runs were conducted for each cavity geometry and statistical results were developed and compared. Results demonstrated the decoupling effect that geometric cavities produce on the energy field at the surface.
Electrically detected magnetic resonance and near-zero-field magnetoresistance measurements were used to study atomic-scale traps generated during high-field gate stressing in Si/SiO2 MOSFETs. The defects observed are almost certainly important to time-dependent dielectric breakdown. The measurements were made with spin-dependent recombination current involving defects at and near the Si/SiO2 boundary. The interface traps observed are Pb0 and Pb1 centers, which are silicon dangling bond defects. The ratio of Pb0/Pb1 is dependent on the gate stressing polarity. Electrically detected magnetic resonance measurements also reveal generation of E′ oxide defects near the Si/SiO2 interface. Near-zero-field magnetoresistance measurements made throughout stressing reveal that the local hyperfine environment of the interface traps changes with stressing time; these changes are almost certainly due to the redistribution of hydrogen near the interface.
Graph partitioning has emerged as an area of interest due to its use in various applications in computational research. One way to partition a graph is to solve for the eigenvectors of the corresponding graph Laplacian matrix. This project focuses on the eigensolver LOBPCG and the evaluation of a new preconditioner: Randomized Cholesky Factorization (rchol). This preconditioner was tested for speed and accuracy against other well-known preconditioners for the method. After experiments were run on several known test matrices, rchol appears to be a better preconditioner for structured matrices. This research was sponsored by the National Nuclear Security Administration Minority Serving Institutions Internship Program (NNSA-MSIIP) and completed at the host facility, Sandia National Laboratories. As such, after discussion of the research project itself, this report contains a brief reflection on experience gained as a result of participating in the NNSA-MSIIP.
We present an overview of the magneto-inertial fusion (MIF) concept MagLIF (Magnetized Liner Inertial Fusion) pursued at Sandia National Laboratories and review some of the most prominent results since the initial experiments in 2013. In MagLIF, a centimeter-scale beryllium tube or "liner" is filled with a fusion fuel, axially pre-magnetized, laser pre-heated, and finally imploded using up to 20 MA from the Z machine. All of these elements are necessary to generate a thermonuclear plasma: laser preheating raises the initial temperature of the fuel, the electrical current implodes the liner and quasi-adiabatically compresses the fuel via the Lorentz force, and the axial magnetic field limits thermal conduction from the hot plasma to the cold liner walls during the implosion. MagLIF is the first MIF concept to demonstrate fusion-relevant temperatures, significant fusion production (>10^13 primary DD neutron yield), and magnetic trapping of charged fusion particles. On a 60 MA next-generation pulsed-power machine, two-dimensional simulations suggest that MagLIF has the potential to generate multi-MJ yields with significant self-heating, a long-term goal of the US Stockpile Stewardship Program. At currents exceeding 65 MA, the high gains required for fusion energy could be achievable.
Computational tools to study thermodynamic properties of magnetic materials have, until recently, been limited to phenomenological modeling or to small domain sizes, limiting our mechanistic understanding of thermal transport in ferromagnets. Herein, we study the interplay of phonon and magnetic spin contributions to the thermal conductivity in α-iron utilizing non-equilibrium molecular dynamics simulations. It was observed that the magnetic spin contribution to the total thermal conductivity exceeds lattice transport for temperatures up to two-thirds of the Curie temperature, after which only strongly coupled magnon-phonon modes become active heat carriers. Characterizations of the phonon and magnon spectra give detailed insight into the coupling between these heat carriers and the temperature sensitivity of these coupled systems. Comparisons to both experiments and ab initio data support our inferred electronic thermal conductivity, supporting the coupled molecular dynamics/spin dynamics framework as a viable method to extend the predictive capability for magnetic material properties.
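The extraction step behind any non-equilibrium MD conductivity measurement is simply Fourier's law, κ = −J / (dT/dx), applied to the imposed heat flux and the resulting steady-state temperature gradient; a minimal sketch with synthetic numbers (the flux and conductivity values here are illustrative, not from the study):

```python
import numpy as np

# Synthetic steady-state temperature profile along the transport direction
x = np.linspace(0.0, 20e-9, 50)          # position (m)
J = 1.0e9                                 # imposed heat flux (W/m^2), illustrative
kappa_true = 80.0                         # W/(m K), illustrative
T = 300.0 - (J / kappa_true) * x          # linear profile implied by Fourier's law

# Fit the temperature gradient and recover kappa = -J / (dT/dx)
dTdx = np.polyfit(x, T, 1)[0]
kappa = -J / dTdx
```

In a real spin-lattice simulation the same fit is performed separately on the phonon and magnon temperature profiles, which is how the lattice and magnetic spin contributions to the total conductivity are separated.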
Spectral line-shape models are an important part of understanding high-energy-density (HED) plasmas. Models are needed for calculating opacity of materials and can serve as diagnostics for astrophysical and laboratory plasmas. However, much of the literature on line shapes is directed toward specialists. This perspective makes it difficult for non-specialists to enter the field. We have two broad goals with this topical review. First, we aim to give enough information so that others in HED physics may better understand the current state of the field. This first goal may help guide future experiments to test different aspects of the theory. Second, we provide an introduction for those who might be interested in line-shape theory, with enough material to be able to navigate the field and the literature. We give a high-level overview of the line-broadening process, as well as a deeper dive into the formalism, available methods, and approximations.