The supply chain attack pathway is increasingly used by adversaries to bypass security controls and gain unauthorized access to sensitive networks and equipment (e.g., Critical Digital Assets). Cyber-attacks targeting the supply chain generally aim to compromise the environments, products, or services of vendors and suppliers in order to inject malicious elements into, or substitute them for, authentic software and hardware. These malicious elements are deemed authentic because they arrive from the vendor or supplier (i.e., the supply chain). This research provides a survey of technologies that have the potential to reduce the exposure of sensitive networks and equipment to these attacks, thereby improving tamper resistance. Recent advances in the performance and capabilities of these technologies have increased their potential to reduce or mitigate exposure to the supply chain attack pathway. The focus is on analyzing the benefits and disadvantages of smart cards, secure tokens, and secure elements used to provide a root of trust. This analysis provides evidence that these roots of trust can increase the technical capability of equipment and networks to authenticate changes to software and configuration, thereby increasing resilience to some supply chain attacks, such as those related to logistics and ICT channels, but not to development environment attacks.
This article evaluates the data retention characteristics of irradiated multilevel-cell (MLC) 3-D NAND flash memories. We irradiated the memory chips by a Co-60 gamma-ray source for up to 50 krad(Si) and then wrote a random data pattern on the irradiated chips to find their retention characteristics. The experimental results show that the data retention property of the irradiated chips is significantly degraded when compared to the un-irradiated ones. We evaluated two independent strategies to improve the data retention characteristics of the irradiated chips. The first method involves high-temperature annealing of the irradiated chips, while the second method suggests preprogramming the memory modules before deploying them into radiation-prone environments.
MELCOR is an integrated thermal hydraulics, accident progression, and source term code for reactor safety analysis that has been developed at Sandia National Laboratories for the United States Nuclear Regulatory Commission (NRC) since the early 1980s. Though MELCOR originated as a light water reactor (LWR) code, development and modernization efforts have expanded its application scope to include non-LWR reactor concepts. Current MELCOR development efforts include providing the NRC with the analytical capabilities to support regulatory readiness for licensing non-LWR technologies under Strategy 2 of the NRC's near-term Implementation Action Plans. Beginning with the Next Generation Nuclear Plant (NGNP), MELCOR has undergone a range of enhancements to provide analytical capabilities for modeling the spectrum of advanced non-LWR concepts. This report describes the generic plant model developed to demonstrate MELCOR capabilities to perform heat pipe reactor (HPR) safety evaluations. The generic plant model is based on a publicly available Los Alamos National Laboratory (LANL) Megapower design as modified in the Idaho National Laboratory (INL) Design A description. For plant aspects (e.g., reactor building size and leak rate) that are not described in the LANL and INL references, the analysts made the assumptions needed to construct a MELCOR full-plant model. The HPR uses high-assay, low-enriched uranium (HALEU) fuel with steel cladding, and heat pipes transfer heat to a secondary Brayton air cycle. The core region is surrounded by a stainless-steel shroud, alumina reflector, core barrel, and boron carbide neutron shield. The reactor is secured inside a below-grade cavity, with the operating floor located above the cavity. Example calculations are performed to show the plant response and MELCOR capabilities to characterize a range of accident conditions. The accidents selected for evaluation consider a range of degraded and failed modes of operation for key safety functions providing reactivity control, primary and secondary system heat removal, and the effectiveness of the confinement natural circulation flow into the reactor cavity (i.e., a flow blockage).
Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results of each test are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.
The effects of applied stress, ranging from tensile to compressive, on the atmospheric pitting corrosion behavior of 304L stainless steel (SS304L) were analyzed through accelerated atmospheric laboratory exposures and microelectrochemical cell analysis. After exposing the lateral surface of a SS304L four-point bend specimen to artificial seawater at 50°C and 35% relative humidity for 50 d, pitting characteristics were determined using optical profilometry and scanning electron microscopy. The SS304L microstructure was analyzed using electron backscatter diffraction. Additionally, localized electrochemical measurements were performed on a similar, unexposed, SS304L four-point bend bar to determine the effects of applied stress on corrosion susceptibility. Under the applied loads and the environment tested, the observed pitting characteristics showed no correlation with the applied stress (from 250 MPa to -250 MPa). Pitting depth, surface area, roundness, and distribution were found to be independent of location on the sample or applied stress. The lack of correlation between pitting statistics and applied stress was most likely due to the aggressive exposure environment, with a sea salt loading of 4 g/m2 of chloride. The pitting characteristics observed were instead governed by the available cathode current and salt distribution, which are a function of sea salt loading, as well as the pre-existing underlying microstructure. In microelectrochemical cell experiments performed in Cl- environments comparable to the atmospheric exposure and in environments containing orders of magnitude lower Cl- concentrations, effects of the applied stress on corrosion susceptibility were apparent only in the open-circuit potential in low Cl- concentration solutions. Cl- concentration governed the current density and transpassive dissolution potential.
Livera, Andreas; Paphitis, George; Theristis, Marios; Lopez-Lorente, Javier; Makrides, George; Georghiou, George E.
The timely detection of photovoltaic (PV) system failures is important for maintaining optimal performance and lifetime reliability. A main challenge remains the lack of a unified health-state architecture for the uninterrupted monitoring and predictive performance of PV systems. Furthermore, existing failure detection models are strongly dependent on the availability and quality of site-specific historic data. The scope of this work is to address these fundamental challenges by presenting a health-state architecture for advanced PV system monitoring. The proposed architecture comprises a machine learning model for PV performance modeling and accurate failure diagnosis. The predictive model is optimally trained on low amounts of on-site data using minimal features and coupled to functional routines for data quality verification, whereas the classifier is trained under an enhanced supervised learning regime. The results demonstrated high accuracies for the implemented predictive model, exhibiting normalized root mean square errors lower than 3.40% even when trained with low data shares. The classification results provided evidence that fault conditions can be detected with a sensitivity of 83.91% for synthetic power-loss events (power reduction of 5%) and of 97.99% for field-emulated failures in the test-bench PV system. Finally, this work provides insights on how to construct an accurate PV system health-state architecture with predictive and classification models for the timely detection of faults and uninterrupted monitoring of PV systems, regardless of historic data availability and quality. Such guidelines and insights on the development of accurate health-state architectures for PV plants can have positive implications for operation and maintenance and monitoring strategies, thus improving system performance.
Structural disorder causes materials' surface electronic properties, e.g., work function (φ), to vary spatially, yet it is challenging to prove exact causal relationships to underlying ensemble disorder, e.g., roughness or granularity. For polycrystalline Pt, nanoscale resolution photoemission threshold mapping reveals a spatially varying φ = 5.70 ± 0.03 eV over a distribution of (111) vicinal grain surfaces prepared by sputter deposition and annealing. With regard to field emission and related phenomena, e.g., vacuum arc initiation, a salient feature of the φ distribution is that it is skewed with a long tail to values down to 5.4 eV, i.e., far below the mean, which is exponentially impactful to field emission via the Fowler-Nordheim relation. We show that the φ spatial variation and distribution can be explained by ensemble variations of granular tilts and surface slopes via a Smoluchowski smoothing model wherein local φ variations result from spatially varying densities of electric dipole moments, intrinsic to atomic steps, that locally modify φ. Atomic step-terrace structure is confirmed with scanning tunneling microscopy (STM) at several locations on our surfaces, and prior works showed STM evidence for atomic step dipoles at various metal surfaces. From our model, we find an atomic step edge dipole μ = 0.12 D/edge atom, which is comparable to values reported in studies that utilized other methods and materials. Our results elucidate a connection between macroscopic φ and the nanostructure that may contribute to the spread of reported φ for Pt and other surfaces and may be useful toward more complete descriptions of polycrystalline metals in the models of field emission and other related vacuum electronics phenomena, e.g., arc initiation.
We investigate the sensitivity of silicon-oxide-nitride-silicon-oxide (SONOS) charge trapping memory technology to heavy-ion induced single-event effects. Threshold voltage ( V_T ) statistics were collected across multiple test chips that contained in total 18 Mb of 40-nm SONOS memory arrays. The arrays were irradiated with Kr and Ar ion beams, and the changes in their V_T distributions were analyzed as a function of linear energy transfer (LET), beam fluence, and operating temperature. We observe that heavy ion irradiation induces a tail of disturbed devices in the 'program' state distribution, which has also been seen in the response of floating-gate (FG) flash cells. However, the V_T distribution of SONOS cells lacks a distinct secondary peak, which is generally attributed to direct ion strikes to the gate-stack of FG cells. This property, combined with the observed change in the V_T distribution with LET, suggests that SONOS cells are not particularly sensitive to direct ion strikes but cells in the proximity of an ion's absorption can still experience a V_T shift. These results shed new light on the physical mechanisms underlying the V_T shift induced by a single heavy ion in scaled charge trap memory.
This report documents details of the microstructure and mechanical properties of β-tin (Sn) used in the Tri-lab (Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL)) collaboration project on Multi-phase Tin Strength. We report microstructural features detailing the crystallographic texture and grain morphology of as-received β-tin from electron backscatter diffraction (EBSD). Temperature- and strain-rate-dependent mechanical behavior was investigated by multiple compression tests at temperatures of 200 K to 400 K and strain rates of 0.0001 /s to 100 /s. Tri-lab tin showed significant temperature- and strain-rate-dependent strength with no significant plastic anisotropy. Sample-to-sample material variation was observed from duplicate compression tests and texture measurements. Compression data were used to calibrate parameters for three temperature- and rate-dependent strength models: Johnson-Cook (JC), Zerilli-Armstrong (ZA), and Preston-Tonks-Wallace (PTW).
This report is part of a series of six white papers, prepared jointly by the Proliferation Resistance and Physical Protection Working Group (PRPPWG) and the six System Steering Committees (SSCs) and provisional System Steering Committees (pSSCs). This publication is an update to a similar series published in 2011 presenting the status of Proliferation Resistance & Physical Protection (PR&PP) characteristics for each of the six systems selected by the Generation IV International Forum (GIF) for further research and development, namely: the sodium-cooled fast reactor (SFR), the very-high-temperature reactor (VHTR), the gas-cooled fast reactor (GFR), the lead-cooled fast reactor (LFR), the molten salt reactor (MSR), and the supercritical-water-cooled reactor (SCWR). This white paper represents the status of PR&PP characteristics for the Very-High-Temperature Reactor (VHTR) reference designs selected by the GIF VHTR System Steering Committee (SSC). The intent is to generate preliminary information about the PR&PP features of the VHTR reactor technology and to provide insights for optimizing their PR&PP performance for the benefit of VHTR system designers. It updates the VHTR analysis published in the 2011 report “Proliferation Resistance and Physical Protection of the Six Generation IV Nuclear Energy Systems”, prepared jointly by the PRPPWG and the System Steering Committees and provisional System Steering Committees of the Generation IV International Forum, taking into account the evolution of the systems, the GIF R&D activities, and an increased understanding of the PR&PP features. The white paper, prepared jointly by the GIF PRPPWG and the GIF VHTR SSC, follows the high-level paradigm of the GIF PR&PP Evaluation Methodology to investigate the key points of PR&PP features extracted from the reference designs of VHTRs under consideration in various countries. A major update from the 2011 report is an explicit distinction between prismatic block-type VHTRs and pebble-bed VHTRs. The white paper also provides an overview of the TRISO fuel and fuel cycle. For PR, the document analyses and discusses the proliferation resistance aspects in terms of robustness against State-based threats associated with diversion of materials, misuse of facilities, breakout scenarios, and production in clandestine facilities. Similarly, for PP, the document discusses the robustness against theft of material and sabotage by non-State actors. The document follows a common template adopted by all the white papers in the updated series.
Impact ionization coefficients play a critical role in modeling avalanche breakdown in semiconductors. In addition to silicon, silicon carbide and gallium nitride are important semiconductors that are increasingly regarded as mainstream semiconductor technologies. As a reflection of the maturity of these semiconductors, predictive modeling has become essential to device and circuit designers, and impact ionization coefficients play a key role in such modeling. Recently, several studies have measured impact ionization coefficients. The first part of our study compares three experimental methods for estimating impact ionization coefficients in GaN, all based on photomultiplication but featuring characteristic differences. The first method inserts an InGaN hole-injection layer; its accuracy is challenged by the dominance of ionization in InGaN, leading to possible overestimation of the coefficients. The second method utilizes the Franz-Keldysh effect for hole injection but not for electrons, where the mixed injection of induced carriers would require a margin of error. The third method uses complementary p-n and n-p structures, which have been the basis of this estimation in Si and SiC, and leans on the assumption of a constant electric field; any deviation would require a margin of error. In the second part of our study, we evaluated the models using recent experimental data from diodes demonstrating avalanche breakdown.
This report outlines the development of load-mitigating feedback control for wave energy converters. A simple, self-tuning multi-objective controller is demonstrated in simulation for a 3-DOF (surge, heave, pitch) point absorber. In previous work, the proposed control architecture has been shown to be effective in experiment for a variety of device archetypes for the single objective of the maximization of electrical power capture: here this architecture is extended to reduce device loading as well. In particular, PTO actuation forces and the minimization of fatigue damage (determined from the sum of wave-exerted and PTO forces) are considered as additional objectives for the self-tuning controller. This controller is demonstrated for two similar, but distinct systems: one described by the identified linear models from physical testing of the WaveBot device, and another based upon a WEC-Sim simulation that expands upon boundary element method data from the WaveBot device. In both cases, because the power surface is consistently fairly flat in the vicinity of control parameters that maximize power capture in contrasting sea-states, it is found to be generally possible to mitigate either fatigue damage or PTO load. However, PTO load is found to conflict with fatigue damage in some sea-states, limiting the efficacy of control objectives that attempt to mitigate both simultaneously. Additionally, coupling between the surge and pitch DOFs also limits the extent to which fatigue damage can be mitigated for both DOFs in some sea-states. Because control objectives can be considered a function of the sea-state (e.g., load mitigation may not be a concern until the sea is sufficiently large) a simple transition strategy is proposed and demonstrated. This transition strategy is found to be effective with some caveats: firstly, it cannot circumvent the aforementioned objective contradictions. Secondly, this objective transition is too slow to act as a system constraint, and objective thresholds must thus be considered quite conservatively. Improvement of the adjustment strategy is demonstrated through the addition of an integral term. Selection of well-performing transition parameters can be a function of sea-state. While a simple selection procedure is proposed, it is non-optimal, and a more robust selection procedure is suggested for future work.
Advances in differentiating between malicious intent and natural "organizational evolution" to explain observed anomalies in operational workplace patterns suggest that evaluating collective behaviors observed in facilities can improve insider threat detection and mitigation (ITDM). Advances in artificial neural networks (ANNs) provide more robust pathways for capturing, analyzing, and collating disparate data signals into quantitative descriptions of operational workplace patterns. In response, a joint study by Sandia National Laboratories and the University of Texas at Austin explored the effectiveness of commercial ANN software to improve ITDM. This research demonstrates the benefit of learning patterns of organizational behaviors, detecting off-normal (or anomalous) deviations from these patterns, and alerting when certain types, frequencies, or quantities of deviations emerge. Evaluating nearly 33,000 access control data points and over 1,600 intrusion sensor data points collected over a nearly twelve-month period, this study's results demonstrated that the ANN could recognize operational patterns at the Nuclear Engineering Teaching Laboratory (NETL) and detect off-normal behaviors, suggesting that ANNs can be used to support a data-analytic approach to ITDM. Several representative experiments were conducted to further evaluate these conclusions, with the resultant insights supporting collective behavior-based analytical approaches to quantitatively describe insider threat detection and mitigation.
In an x-ray driven cavity experiment, an intense flux of soft x rays on the emitting surface produces significant emission of photoelectrons having several kiloelectronvolts of kinetic energy. At the same time, rapid heating of the emitting surface occurs, resulting in the release of adsorbed surface impurities and subsequent formation of an impurity plasma. This numerical study explores a simple model for the photoelectric currents and the impurity plasma. Attention is given to the effect of varying the composition of the impurity plasma. The presence of protons or hydrogen molecular ions leads to a substantially enhanced cavity current, while heavier plasma ions are seen to have a limited effect on the cavity current due to their lower mobility. Additionally, it is demonstrated that an additional peak in the current waveform can appear due to the impurity plasma. A correlation between the impurity plasma composition and the timing of this peak is elucidated.
This article analyzes the total ionizing dose (TID) effects on noise characteristics of commercial multi-level-cell (MLC) 3-D NAND memory technology during the read operation. The chips were exposed to a Co-60 gamma-ray source for up to 100 krad(Si) of TID. We find that the number of noisy cells in the irradiated chip increases with TID. Bit-flip noise was more dominant for cells in an erased state during irradiation compared to programmed cells.
Teng, Jeffrey W.; Nergui, Delgermaa; Parameswaran, Hari; Sepulveda-Ramos, Nelson E.; Tzintzarov, George N.; Mensah, Yaw; Cheon, Clifford D.; Rao, Sunil G.; Ringel, Brett; Gorchichko, Mariia; Li, Kan; Ying, Hanbin; Ildefonso, Adrian; Dodds, Nathaniel A.; Nowlin, Robert N.; Zhang, En X.; Fleetwood, Daniel M.; Cressler, John D.
Integrated silicon microwave pin diodes are exposed to 10-keV X-rays up to a dose of 2 Mrad(SiO2) and 14-MeV fast neutrons up to a fluence of 2.2 × 10^13 cm-2. Changes in both dc leakage current and small-signal circuit components are examined. Degradation in performance due to total-ionizing dose (TID) is shown to be suppressed by non-quasi-static (NQS) effects during radio frequency (RF) operation. Tolerance to displacement damage from fast neutrons is also observed, which is explained using technology computer-aided design (TCAD) simulations. Overall, the characterized pin diodes are tolerant to cumulative radiation at levels consistent with space applications such as geosynchronous weather satellites.
The RISC-V instruction set architecture open licensing policy has spawned a hive of development activity, making a range of implementations publicly available. The environments in which RISC-V operates have expanded correspondingly, driving the need for a generalized approach to evaluating the reliability of RISC-V implementations under adverse operating conditions or after normal wear-out periods. Fault injection (FI) refers to the process of changing the state of registers or wires, either permanently or momentarily, and then observing execution behavior. The analysis provides insight into the development of countermeasures that protect against the leakage or corruption of sensitive information, which might occur because of unexpected execution behavior. In this article, we develop a hardware-software co-design architecture that enables fast, configurable fault emulation and utilize it for information leakage and data corruption analysis. Modern system-on-chip FPGAs enable building an evaluation platform where control elements run on the processing system (PS) simultaneously with the target design running in the programmable logic (PL). Software components of the FI system introduce faults and report execution behavior. A pair of RISC-V FI-instrumented implementations are created and configured to execute the Advanced Encryption Standard and Twister algorithms. Key and plaintext information leakage and degraded pseudorandom sequences are both observed in the output for a subset of the emulated faults.
This report details a method to estimate the energy content of various types of seismic body waves. The method is based on the strain energy of an elastic wavefield and Hooke’s Law. We present a detailed derivation of a set of equations that explicitly partition the seismic strain energy into two parts: one for compressional (P) waves and one for shear (S) waves. We posit that the ratio of these two quantities can be used to determine the relative contribution of seismic P and S waves, possibly as a method to discriminate between earthquakes and buried explosions. We demonstrate the efficacy of our method by using it to compute the strain energy of synthetic seismograms with differing source characteristics. Specifically, we find that explosion-generated seismograms contain a preponderance of P wave strain energy when compared to earthquake-generated synthetic seismograms. Conversely, earthquake-generated synthetic seismograms contain a much greater degree of S wave strain energy when compared to explosion-generated seismograms.
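For orientation, one common form of such a partition, assuming an isotropic elastic medium with Lamé parameters \(\lambda\) and \(\mu\) and displacement field \(\mathbf{u}\) (the report's exact derivation and notation may differ), is
\[
E_P = \tfrac{1}{2}\,(\lambda + 2\mu)\,(\nabla\cdot\mathbf{u})^{2},
\qquad
E_S = \tfrac{1}{2}\,\mu\,\lvert \nabla\times\mathbf{u} \rvert^{2},
\qquad
R = E_P / E_S,
\]
with large \(R\) indicating P-dominated (explosion-like) wavefields and small \(R\) indicating S-dominated (earthquake-like) wavefields.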
MELCOR is an integrated thermal hydraulics, accident progression, and source term code for reactor safety analysis that has been developed at Sandia National Laboratories for the United States Nuclear Regulatory Commission (NRC) since the early 1980s. Though MELCOR originated as a light water reactor (LWR) code, development and modernization efforts have expanded its application scope to include non-LWR reactor concepts. Current MELCOR development efforts include providing the NRC with the analytical capabilities to support regulatory readiness for licensing non-LWR technologies under Strategy 2 of the NRC's near-term Implementation Action Plans. Beginning with the Next Generation Nuclear Plant (NGNP), MELCOR has undergone a range of enhancements to provide analytical capabilities for modeling the spectrum of advanced non-LWR concepts. This report describes the generic plant model developed to demonstrate MELCOR capabilities to perform fluoride-salt-cooled high-temperature reactor (FHR) safety evaluations. The generic plant model is based on publicly available FHR design information. For plant aspects (e.g., reactor building leak rate and details of the cover-gas system) that are not described in the FHR references, the analysts made the assumptions needed to construct a MELCOR full-plant model. The FHR model uses a TRi-structural ISOtropic (TRISO)-particle fuel pebble-bed reactor with a primary system rejecting heat to two coiled tube air heat exchangers. Three passive direct reactor auxiliary cooling systems provide heat removal to supplement or replace the emergency secondary system heat removal during accident conditions. Surrounding the reactor vessel is a low-volume reactor cavity that insulates the reactor with fire bricks and thick concrete walls. A refractory reactor liner system provides water cooling to reduce the concrete wall temperatures. Example calculations are performed to show the plant response and MELCOR capabilities to characterize a range of accident conditions. The accidents selected for evaluation consider a range of degraded and failed modes of operation for key safety functions providing reactivity control and primary system decay heat removal, as well as a piping leak in the line to the coolant drain tank.
This report examines the localization of high frequency electromagnetic fields in general three-dimensional convex walled cavities along periodic paths between opposing sides of the cavity. The report examines the three-dimensional case where the mirrors at the end of the orbit have two different radii of curvature. The cases where these orbits lead to unstable localized modes are known as scars.
Chen, Qi; Johnson, Emma S.; Bernal, David E.; Valentin, Romeo; Kale, Sunjeev; Bates, Johnny; Siirola, John D.; Grossmann, Ignacio E.
We present three core principles for engineering-oriented integrated modeling and optimization tool sets—intuitive modeling contexts, systematic computer-aided reformulations, and flexible solution strategies—and describe how new developments in Pyomo.GDP for Generalized Disjunctive Programming (GDP) advance this vision. We describe a new logical expression system implementation for Pyomo.GDP allowing for a more intuitive description of logical propositions. The logical expression system supports automated reformulation of these logical constraints to linear constraints. We also describe two new logic-based global optimization solver implementations built on Pyomo.GDP that exploit logical structure to avoid “zero-flow” numerical difficulties that arise in nonlinear network design problems when nodes or streams disappear. These new solvers also demonstrate the capability to link to external libraries for expanded functionality within an integrated implementation. We present these new solvers in the context of a flexible array of solution paths available to GDP models. Finally, we present results on a new library of GDP models demonstrating the value of multiple solution approaches.
With the proliferation of additive manufacturing and 3D printing technologies, a broader palette of material properties can be elicited from cellular solids, also known as metamaterials, architected foams, programmable materials, or lattice structures. Metamaterials are designed and optimized under the assumption of perfect geometry and a homogeneous underlying base material. Yet in practice real lattices contain thousands or even millions of complex features, each with imperfections in shape and material constituency. While the role of these defects on the mean properties of metamaterials has been well studied, little attention has been paid to the stochastic properties of metamaterials, a crucial next step for high reliability aerospace or biomedical applications. In this work we show that it is precisely the large quantity of features that serves to homogenize the heterogeneities of the individual features, thereby reducing the variability of the collective structure and achieving effective properties that can be even more consistent than the monolithic base material. In this first statistical study of additive lattice variability, a total of 239 strut-based lattices were mechanically tested for two pedagogical lattice topologies (body centered cubic and face centered cubic) at three different relative densities. The variability in yield strength and modulus was observed to exponentially decrease with feature count (to the power −0.5), a scaling trend that we show can be predicted using an analytic model or a finite element beam model. The latter provides an efficient pathway to extend the current concepts to arbitrary/complex geometries and loading scenarios. These results not only illustrate the homogenizing benefit of lattices, but also provide governing design principles that can be used to mitigate manufacturing inconsistencies via topological design.
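As a hedged illustration of the reported trend (the paper's analytic and finite element beam models may differ in detail), a central-limit-type argument for N statistically independent load-bearing features gives
\[
\frac{\sigma_{\text{lattice}}}{\bar{X}_{\text{lattice}}}
\;\approx\;
\frac{\sigma_{\text{feature}}}{\bar{X}_{\text{feature}}}\; N^{-0.5},
\]
so, for example, a lattice with 10,000 struts would be expected to show roughly one hundredth of the relative scatter of a single strut in yield strength or modulus.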
In late 2004, the U.S. Nuclear Regulatory Commission (NRC) initiated a project to analyze the relative efficacy of alternative protective action strategies in reducing consequences to the public from a spectrum of nuclear power plant core melt accidents. The study is documented in NUREG/CR-6953, “Review of NUREG-0654, Supplement 3, ‘Criteria for Protective Action Recommendations for Severe Accidents,’” Volumes 1, 2, and 3. The Protective Action Recommendations (PAR) study provided a technical basis for enhancing the protective action guidance contained in Supplement 3, “Guidance for Protective Action Strategies,” to NUREG-0654/FEMA-REP-1, Rev. 1, “Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants,” dated November 2011. In the time since, a number of important changes and additions have been made to the MACCS code suite, the nuclear accident consequence analysis code used to perform the study. The purpose of this analysis is to determine whether the MACCS results used in the PAR study would be different given recent changes to the MACCS code suite and input parameter guidance. Updated parameters that were analyzed include cohorts, keyhole evacuation, shielding and exposure parameters, compass sector resolution, and a range of source terms from rapidly progressing accidents. Results indicate that using updated modeling assumptions and capabilities may lead to a decrease in predicted health consequences for those within the emergency planning zone compared to the original PAR study.
The BayoTech hydrogen generation system has been evaluated in terms of safety considerations at the NM Gas site. The consequence of a leak in different components of the system was evaluated in terms of plume dispersion and overpressure. Additionally, the likelihood of a leak scenario for different hydrogen components was identified. The worst-case plume dispersion cases, full-bore leaks, resulted in relatively large plumes. However, these cases were noted to be far less likely than the partial-break cases that were evaluated. The partial-break cases resulted in nearly negligible plume lengths. Similarly, the overpressure analysis of the full-bore break scenarios resulted in much larger overpressures than the partial-break cases (which resulted in negligible overpressure at the lot line). Several cases evaluated in the analysis represented leak scenarios from both hydrogen and natural gas sources. Generally, the natural gas leak scenarios resulted in a smaller horizontal impact than the hydrogen leaks. The worst-case consequence from a hydrogen leak resulted from the compressors, storage pods, or dispensing system; the worst-case consequences show that the plume may disperse to adjacent facilities and to the street. To account for the safety features that may isolate the leak, the consequence was evaluated at different times after the leak event to show the reduction in pressure; after 2 seconds, the plume dispersion from this event is contained within the perimeter of the site. When considering both likelihood and consequence, the risk may be considered low because the maximum frequency of a full-bore leak from any component within the hydrogen compound is 8.2 × 10^-5 per year. This means that a full-bore leak is expected to occur less than once every 10,000 years. The risk can be further reduced by implementing mitigative countermeasures, such as CMU walls along the sides of the equipment compound. This would reduce the overall consequence of the worst-case dispersion scenarios (horizontal impact of the plume). In terms of siting and safety analysis, the NFPA 2 code was used to provide a high-level evaluation of the current site plan. The most limiting equipment in terms of set-back distance is the compressors/storage units because of the high-pressure hydrogen. The site layout was evaluated for an acceptable location for the compression/storage unit based on NFPA 2 set-back distances. It is important to note that the NFPA 2 set-back distances consider both likelihood and consequence. This is important because the worst-case results evaluated herein also represent the least likely leak scenarios. Other site-specific considerations were evaluated, including the parking shade structure with photovoltaic cells and refueling vehicles. These issues were dispositioned and determined not to present a safety risk.
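The stated return period follows directly from the reported leak frequency:
\[
T \;=\; \frac{1}{f} \;=\; \frac{1}{8.2\times10^{-5}\ \mathrm{yr}^{-1}} \;\approx\; 1.2\times10^{4}\ \text{years},
\]
i.e., a mean recurrence interval of roughly 12,000 years, consistent with the statement that a full-bore leak is expected less than once every 10,000 years.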
National Technology & Engineering Solutions of Sandia, LLC (NTESS) has recently amended the NTESS Retirement Income Plan (Pension Plan). The updated Summary Plan Description (SPD) for the Pension Plan effective January 1, 2022 is provided.
This report details how to successfully use the Fairfield Nodal ZLand seismic instruments to collect data, including preparation steps prior to deploying the instruments, how to record data during a field campaign, and how to retrieve recorded data from the instruments after their deployment. This guide will walk through each step for the novice user, as well as provide a checklist of critical steps for the advanced user to ensure successful, efficient field campaigns and seismic data collection. Currently, use of the seismic nodal instruments is highly limited due to the detailed nature and prior knowledge required to successfully set up, use, and retrieve data from these instruments. With this guide, all interested users will have the knowledge required to perform a seismic deployment and collect data with the Fairfield Nodal instruments.
In the pursuit of improving additively manufactured (AM) component quality and reliability, fine-tuning critical process parameters such as laser power and scan speed is a great first step toward limiting defect formation and optimizing the microstructure. However, the synergistic effects between these process parameters, layer thickness, and feedstock attributes (e.g., powder size distribution) on part characteristics such as microstructure, density, hardness, and surface roughness are not as well studied. In this work, we investigate 316L stainless steel density cubes built via laser powder bed fusion (L-PBF), emphasizing the significant microstructural changes that occur due to altering the volumetric energy density (VED) via laser power, scan speed, and layer thickness changes, coupled with different starting powder size distributions. This study demonstrates that there is not one ideal process set and powder size distribution for each machine. Instead, there are several combinations or feedstock/process parameter ‘recipes’ that achieve similar goals. This study also establishes that for equivalent VEDs, changing powder size can significantly alter part density, geometrically necessary dislocation (GND) density, and hardness. Through proper parameter and feedstock control, part attributes such as density, grain size, texture, dislocation density, hardness, and surface roughness can be customized, thereby creating multiple high-performance regions in the AM process space.
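For reference, a commonly used definition of volumetric energy density in L-PBF combines these parameters as (this background expression is not quoted from the study itself, and note that hatch spacing h enters the standard definition even though it is not varied explicitly above)
\[
\mathrm{VED} = \frac{P}{v\,h\,t},
\]
where P is laser power, v is scan speed, h is hatch spacing, and t is layer thickness.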
In this article, we provide an analytical model for the total ionizing dose (TID) effects on the bit error statistics of commercial flash memory chips. We have validated the model with experimental data collected by irradiating several commercial NAND flash memory chips from different technology nodes. We find that our analytical model can project bit errors at higher TID values [20 krad(Si)] from measured data at lower TID values [<1 krad(Si)]. Based on our model and the measured data, we have formulated basic design rules for using a commercial flash memory chip as a dosimeter. We discuss the impact of NAND chip-to-chip variability, noise margin, and intrinsic errors on the dosimeter design using detailed experimentation.
Current state-of-the-art gasoline direct-injection (GDI) engines use multiple injections as one of the key technologies to improve exhaust emissions and fuel efficiency. For this technology to be successful, accurate control of the fuel quantity for each injection is mandatory. However, nonlinearity and variations in the injection quantity can deteriorate the accuracy of fuel control, especially with small fuel injections. Therefore, it is necessary to understand the complex injection behavior and to develop a predictive model to be utilized in the development process. This study presents a methodology for rate of injection (ROI) and solenoid voltage modeling using artificial neural networks (ANNs) constructed from a set of Zeuch-style hydraulic experimental measurements conducted over a wide range of conditions. A quantitative comparison between the ANN model and the experimental data shows that the model is capable of predicting not only general features of the ROI trend, but also transient and nonlinear behaviors at particular conditions. In addition, the end of injection (EOI) could be detected precisely with a virtually generated solenoid voltage signal and the signal processing method, which is applicable to an actual engine control unit. A correlation between the detected EOI timings calculated from the modeled signal and the measurement results showed a high coefficient of determination.
Complex networks of information processing systems, or information supply chains, present challenges for performance analysis. We establish a mathematical setting, in which a process within an information supply chain can be analyzed in terms of the functionality of the system's components. Principles of this methodology are rigorously defended and induce a model for determining the reliability for the various products in these networks. Our model does not limit us from having cycles in the network, as long as the cycles do not contain negation. It is shown that our approach to reliability resolves the nonuniqueness caused by cycles in a probabilistic Boolean network. An iterative algorithm is given to find the reliability values of the model, using a process that can be fully automated. This automated method of discerning reliability is beneficial for systems managers. As a systems manager considers systems modification, such as the replacement of owned and maintained hardware systems with cloud computing resources, the need for comparative analysis of system reliability is paramount. The model is extended to handle conditional knowledge about the network, allowing one to make predictions of weaknesses in the system. Finally, to illustrate the model's flexibility over different forms, it is demonstrated on a system of components and subcomponents.
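To make the iterative reliability computation concrete, the following is a minimal, hypothetical Python sketch of a fixed-point iteration on a cyclic, negation-free dependency network. The update rule used here (a node delivers its product when its own hardware works and at least one of its inputs delivers, with inputs treated as independent), along with the function and variable names, is an illustrative assumption, not the model defined in the article:

```python
def network_reliability(components, inputs, tol=1e-10, max_iter=10000):
    """Fixed-point iteration for node reliabilities in a cyclic, negation-free network.

    components: dict mapping each node to the intrinsic reliability of that node.
    inputs:     dict mapping each node to the upstream nodes it can draw from
                (an empty list marks a source node).
    Illustrative update rule: a node delivers its product when its own hardware
    works and at least one of its inputs delivers (inputs treated as independent).
    """
    r = dict(components)  # initial guess: each node limited only by its own hardware
    for _ in range(max_iter):
        delta = 0.0
        for node, p in components.items():
            upstream = inputs.get(node, [])
            if upstream:
                prob_all_inputs_fail = 1.0
                for m in upstream:
                    prob_all_inputs_fail *= (1.0 - r[m])
                new = p * (1.0 - prob_all_inputs_fail)
            else:
                new = p  # source node
            delta = max(delta, abs(new - r[node]))
            r[node] = new
        if delta < tol:
            break
    return r

# Example: source S feeds A; A and B form a cycle; C consumes both A and B.
reliabilities = network_reliability(
    components={"S": 0.99, "A": 0.95, "B": 0.90, "C": 0.97},
    inputs={"S": [], "A": ["S", "B"], "B": ["A"], "C": ["A", "B"]},
)
print(reliabilities)
```

Because the dependencies contain no negation, the update operator is monotone, so the iteration converges to a well-defined fixed point even in the presence of cycles.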
The How To Manual supplements the User's Manual and the Theory Manual. The goal of the How To Manual is to reduce learning time for complex end-to-end analyses. These documents are intended to be used together. See the User's Manual for a complete list of the options for a solution case. All the examples are part of the Sierra/SD test suite, and each runs as is. The organization is similar to the other documents: How to run, Commands, Solution cases, Materials, Elements, Boundary conditions, and then Contact. The table of contents and index are indispensable. The Geometric Rigid Body Modes section is shared with the User's Manual.
This report summarizes Fiscal Year 2021 accomplishments from Sandia National Laboratories Wind Energy Program. The portfolio consists of funding provided by the DOE EERE Wind Energy Technologies Office (WETO), Advanced Research Projects Agency-Energy (ARPA-E), DOE Small Business Innovation Research (SBIR), and the Sandia Laboratory Directed Research and Development (LDRD) program. These accomplishments were made possible through capabilities investments by WETO, internal Sandia investment, and partnerships between Sandia and other national laboratories, universities, and research institutions around the world.
We present an experiment to detect one-ton TNT-equivalent chemical explosions using pulsed Doppler radar observations of isodensity layers in the ionospheric E region during two campaigns. The first campaign, conducted on 15 October 2019, produced potential detections of all three shots. The detections closely resemble the temporal and spectral properties predicted using the InfraGA ray tracing and weakly nonlinear waveform propagation model. Here the model predicts that within 6.5–7.25 min of each shot a waveform peaking between 0.4 and 0.9 Hz will impact the ionosphere at 100 km. As the waves pass through this region, they will imprint their signal on an isodensity layer, which is detectable using a Doppler radar operating at the plasma frequency of the isodensity. Within the time windows of each of the three shots in the first campaign, we detect enhanced wave activity peaking near 0.5 Hz. These waves were imprinted on the Doppler signal probing an isodensity layer at 2.785 MHz near 100 km altitude. Despite these detections, the method appears to be unreliable, as none of the six shots from the second campaign, conducted on 10 July 2020, were detected. The observations from this campaign were characterized by an increased acoustic noise environment in the microbarom band and persistent scintillation on the radar returns. These effects obscured any detectable signal from these shots, and the baseline noise was well above the detection levels of the first campaign.
Agarwal, Sapan; Clark, Lawrence T.; Youngsciortino, Clifford; Ng, Garrick; Black, Dolores; Cannon, Matthew; Black, Jeffrey; Quinn, Heather; Brunhaver, John; Barnaby, Hugh; Manuel, Jack; Blansett, Ethan; Marinella, Matthew J.
In this article, we present a unique method of measuring single-event transient (SET) sensitivity in 12-nm FinFET technology. A test structure is presented that approximately measures the length of SETs using flip-flop shift registers with clock inputs driven by an inverter chain. The test structure was irradiated with ions at linear energy transfers (LETs) of 4.0, 5.6, 10.4, and 17.9 MeV-cm2/mg, and the cross sections of SET pulses measured down to 12.7 ps are presented. The experimental results are interpreted using a modeling methodology that combines TCAD and radiation effect simulations to capture the SET physics, and SPICE simulations to model the SETs in a circuit. The modeling shows that only ion strikes on the fin structure of the transistor would result in enough charge collected to produce SETs, while strikes in the subfin and substrate do not result in enough charge collected to produce measurable transients. Comparisons of the cumulative cross sections obtained from the experiment and from the simulations validate the modeling methodology presented.
Magnesium borohydride (Mg(BH4)2) is a promising candidate for material-based hydrogen storage due to its high hydrogen gravimetric/volumetric capacities and potential for dehydrogenation reversibility. Currently, slow dehydrogenation kinetics and the formation of intermediate polyboranes deter its application in clean energy technologies. In this study, a novel approach for modifying the physicochemical properties of Mg(BH4)2 is described, which involves the addition of reactive molecules in the vapor phase. This process enables the investigation of a new class of additive molecules for material-based hydrogen storage. The effects of four molecules (BBr3, Al2(CH3)6, TiCl4, and N2H4) with varying degrees of electrophilicity are examined to infer how the chemical reactivity can be used to tune the additive-Mg(BH4)2 interaction and optimize the release of hydrogen at lower temperatures. Control over the amounts of additive exposure to Mg(BH4)2 is shown to prevent degradation of the bulk γ-Mg(BH4)2 crystal structure and loss of hydrogen capacity. Trimethylaluminum provides the most encouraging results on Mg(BH4)2, maintaining 97% of the starting theoretical Mg(BH4)2 hydrogen content and demonstrating hydrogen release at 115 °C. These results firmly establish the efficacy of this approach toward controlling the properties of Mg(BH4)2 and provide a new path forward for additive-based modification of hydrogen storage materials.
Understanding semiconductor breakdown under high electric fields is an important aspect of materials characterization, particularly for the design of power devices. For decades, a power law has been used to describe the dependence of the material-specific critical electric field (Ecrit), at which the material breaks down, on the bandgap (Eg). The relationship is often used to gauge tradeoffs of emerging materials whose properties have not yet been determined. Unfortunately, the reported dependencies of Ecrit on Eg cover a surprisingly wide range in the literature. Moreover, Ecrit is a function of material doping. Further, discrepancies arise in Ecrit values owing to differences between punch-through and non-punch-through device structures. We report a new normalization procedure that enables comparison of critical electric field values across materials, doping levels, and device types. An extensive examination of numerous references reveals that the dependence Ecrit ∝ Eg^1.83 best fits the most reliable and newest data for both direct and indirect semiconductors.
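In symbols, with C a fitted prefactor (not quoted here) and both quantities taken after the normalization for doping and device structure described above, the reported best-fit relationship is
\[
E_{\mathrm{crit}} \;=\; C\,E_{g}^{1.83}.
\]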
InAs-based interband cascade lasers (ICLs) can be more easily adapted toward long wavelength operation than their GaSb counterparts. Devices made from two recent ICL wafers with an advanced waveguide structure are reported, which demonstrate improved device performance in terms of reduced threshold current densities for ICLs near 11 μm or extended operating wavelength beyond 13 μm. The ICLs near 11 μm yielded a significantly reduced continuous wave (cw) lasing threshold of 23 A/cm2 at 80 K with substantially increased cw output power, compared with previously reported ICLs at similar wavelengths. ICLs made from the second wafer incorporated an innovative quantum well active region, comprised of InAsP layers, and lased in the pulsed-mode up to 120 K at 13.2 μm, which is the longest wavelength achieved for III-V interband lasers.
The Savannah River Site plans to reprocess defense spent nuclear fuel currently stored in their L-Basin via the Accelerated Basin Deinventory (ABD) Program. The previous plan for the L-Basin spent nuclear fuel was to dispose of it directly in the federal repository without reprocessing. Implementing the ABD Program will result in final disposal of approximately 900 fewer canisters of defense spent nuclear fuel and the production of approximately 521 more canisters of vitrified high-level waste glass with some specific differences from the planned high-level waste glass. Because the 235U in the L-Basin spent nuclear fuel is not intended to be recovered, the fissile mass loading of the vitrified high-level glass waste form to be produced must be increased above the current value of 897 g/m3 to a maximum of 2,500 g/m3. Therefore, implementing the ABD Program would produce a variant of high-level waste glass—the ABD glass—that needs to be evaluated for future repository licensing, which includes both preclosure safety and postclosure performance. This report describes the approach to and summarizes the results of an evaluation of the potential effects of implementing the ABD Program at the Savannah River Site on the technical basis for future repository licensing for a generic repository that is similar to Yucca Mountain and for one that is fully generic. This evaluation includes the effects on preclosure safety analyses and postclosure performance assessment for both repository settings. The license application for the proposed Yucca Mountain repository (DOE 2008), which is serving as a framework for this evaluation, concluded that the proposed Yucca Mountain repository would meet all applicable regulatory requirements. The evaluation documented in this report found that implementing the ABD Program is not expected to change that conclusion for a generic repository similar to Yucca Mountain or for a generic repository with respect to the preclosure safety analyses. With respect to the postclosure performance of a generic repository, no concerns were identified.
Filtration, pressure drop, and quantitative fit of N95 respirators were robust to several decontamination methods, including vaporous hydrogen peroxide, wet heat, bleach, and ultraviolet light, although bleach may not have penetrated the hydrophobic outer layers of the respirator. Isopropyl alcohol and detergent both severely degraded the electrostatic charge of the electret filtration layer. These are the first data in N95 respirators showing that the loss of filtration efficiency was directly correlated with the loss of surface potential on the filtration layer. Because the pressure drop was unchanged, the loss of filtration efficacy would not be apparent during a user seal check. The straps degrade with repeated mechanical cycling during extended use, although decontamination did not appear to degrade the elastic straps. Significant loss of strap elasticity would be apparent during a user negative-pressure seal check.
A myriad of phenomena in materials science and chemistry rely on quantum-level simulations of the electronic structure in matter. While moving to larger length and time scales has been a pressing issue for decades, such large-scale electronic structure calculations are still challenging despite modern software approaches and advances in high-performance computing. The silver lining in this regard is the use of machine learning to accelerate electronic structure calculations – this line of research has recently gained growing attention. The grand challenge therein is finding a suitable machine-learning model during a process called hyperparameter optimization. This, however, causes a massive computational overhead in addition to that of data generation. We accelerate the construction of machine-learning surrogate models by roughly two orders of magnitude by circumventing excessive training during the hyperparameter optimization phase. We demonstrate our workflow for Kohn-Sham density functional theory, the most popular computational method in materials science and chemistry.
Here, we explore the dimensionality of the U.S. Department of Agriculture’s household food security survey module among households with children. Using a novel methodological approach to measuring food security, we find that there is multidimensionality in the module for households with children that is associated with the overall household, adult, and child dimensions of food security. Additional analyses suggest official estimates of food security among households with children are robust to this multidimensionality. However, we also find that accounting for the multidimensionality of food security among these households provides new insights into the correlates of food security at the household, adult, and child levels of measurement.
The objective of this project was to develop a novel capability to generate synthetic data sets for the purpose of training Machine Learning (ML) algorithms for the detection of malicious activities on satellite systems. The approach experimented with was to a) generate sparse data sets using emulation modeling and b) enlarge the sparse data using Generative Adversarial Networks (GANs). We based our emulation modeling on the Open Source NASA Operational Simulator for Small Satellites (NOS3) developed by the Katherine Johnson Independent Verification and Validation (IV&V) program in West Virginia. Significant new capabilities on NOS3 had to be developed for our data set generation needs. To expand these data sets for the purpose of training ML, we experimented with a) Extreme Learning Machines (ELMs) and b) Wasserstein-GANs (WGAN-GP).
The core function of many neural network algorithms is the dot product, or vector-matrix multiply (VMM), operation. Crossbar arrays utilizing resistive memory elements can reduce computational energy in neural algorithms by up to five orders of magnitude compared to conventional CPUs. In conventional architectures, moving data between the processor, SRAM, and DRAM dominates energy consumption. By utilizing analog operations to reduce data movement, resistive memory crossbars can enable processing of large amounts of data at lower energy than conventional memory architectures.
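As a minimal numerical sketch of the analog dot product a resistive crossbar performs, assuming an idealized array (no wire resistance, device nonlinearity, or peripheral-circuit overhead) and illustrative conductance values:

```python
import numpy as np

# Each crossbar cell stores a weight as a conductance G[i, j] (siemens).
# Driving the rows with voltages V[i] produces column currents
# I[j] = sum_i V[i] * G[i, j]: the dot product is computed in the analog
# domain, without shuttling the weights between memory and a processor.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # 4x3 array of resistive-memory conductances
V = np.array([0.2, 0.1, 0.0, 0.3])        # input vector encoded as row voltages

I = V @ G                                  # column read-out currents (amperes)
print(I)
```

In a neural-network mapping, the stored conductances encode the weight matrix and the column currents are digitized once per layer, which is the reduction in data movement described above.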
For isolated white dwarf (WD) stars, fits to their observed spectra provide the most precise estimates of their effective temperatures and surface gravities. Even so, recent studies have shown that systematic offsets exist between such spectroscopic parameter determinations and those based on broadband photometry. These large discrepancies (10% in Teff, 0.1 M⊙ in mass) provide scientific motivation for reconsidering the atomic physics employed in the model atmospheres of these stars. Recent simulation work of ours suggests that the most important remaining uncertainties in simulation-based calculations of line shapes are the treatment of 1) the electric field distribution and 2) the occupation probability (OP) prescription. We review the work that has been done in these areas and outline possible avenues for progress.
The role of a solid surface in initiating gas-phase reactions is still not well understood. The hydrogen atom (H) is an important intermediate in gas-phase ethane dehydrogenation and is known to interact with surface sites on catalysts. However, direct measurements of H near catalytic surfaces have not yet been reported. Here, we present the first H measurements by laser-induced fluorescence in the gas phase above catalytic and noncatalytic surfaces. Measurements at temperatures up to 700 °C show that H concentrations are highest above inert quartz surfaces, compared with stainless steel and a platinum-based catalyst. Additionally, H concentrations above the catalyst decreased rapidly with time on stream. These observations are consistent with the recently reported differences in bulk ethane dehydrogenation reactivity of these materials, suggesting that H may be a good reporter for dehydrogenation activity.
In global atmospheric modeling, the differences between nonhydrostatic (NH) and hydrostatic (H) dynamical cores are negligible in dry simulations when grid spacing is larger than 10 km. However, recent studies suggest that those differences can be significant at far coarser resolution when moisture is included. To better understand how NH and H differences manifest in global fields, we perform and analyze an ensemble of 28 and 13 km seasonal simulations with the NH and H dynamical cores in the Energy Exascale Earth System Model global atmosphere model, where the differences between H and NH configurations are minimized. A set of idealized rising bubble experiments is also conducted to further investigate the differences. Although NH and H differences are not significant in global statistics and zonal averages, significant differences in precipitation amount and patterns are observed in parts of the tropics. The most prominent differences emerge near India and the Western Pacific in the boreal summer, and the central-southern Indian Ocean and Pacific in the boreal winter. Tropical differences influence surrounding regions through modification of the regional circulation and can propagate to the extratropics, leading to significant temperature and geopotential differences over the middle to high latitudes. While the dry bubble experiments show negligible deviation between H and NH dynamics until grid spacing is below 6.25 km, precipitation amount and vertical velocity are different in the moist case even at 25 km resolution.
This presentation provides details regarding integral experiments at Sandia National Laboratories for fiscal year 2021. The experiments discussed are as follows: IER 230: Characterize the Thermal Capabilities of the 7uPCX; IER 304: Temperature Dependent Critical Benchmarks; IER 305: Critical Experiments with UO2 Rods and Molybdenum Foils; IER 306: Critical Experiments with UO2 Rods and Rhodium Foils; IER 441: Epithermal HEX Lattices with SNL 7uPCX Fuel for Testing Nuclear Data; IER 452: Inversion Point of the Isothermal Reactivity Coefficient; and IER 523: Critical Experiments with ACRR UO2-BeO Fuel.
Sandia National Laboratories has access to unused ACRR fuel, which is unique in its 35% enrichment and BeO material composition. ACRR fuel is available in quantities well above what is needed for experiments. Two experiment concepts have been investigated: UO2-BeO fuel elements and pellets with 7uPCX fuel. The worth of the UO2-BeO fuel is large enough to be well above the anticipated experiment uncertainties.
This work presents a new multiscale method for coupling the 3D Maxwell's equations to the 1D telegrapher's equations. While Maxwell's equations are appropriate for modeling complex electromagnetics in arbitrary-geometry domains, the simulation cost for many applications (e.g., pulsed power) can be dramatically reduced by representing less complex transmission line regions of the domain with a 1D model. By assuming a transverse electromagnetic (TEM) ansatz for the solution in a transmission line region, we reduce Maxwell's equations to the telegrapher's equations. We propose a self-consistent finite element formulation of the fully coupled system that uses boundary integrals to couple between the 3D and 1D domains and supports arbitrary unstructured 3D meshes. Additionally, by using a Lagrange multiplier to enforce continuity at the coupling interface, we allow an absorbing boundary condition to also be applied to non-TEM modes on this boundary. We demonstrate that this feature reduces non-physical reflection and ringing of non-TEM modes off the coupling boundary. By employing implicit time integration, we ensure a stable coupling, and we introduce an efficient method for solving the resulting linear systems. We demonstrate the accuracy of the new method on two verification problems, a transient O-wave in a rectilinear prism and a steady-state problem in a coaxial geometry, and show the efficiency and weak scalability of our implementation on a cold test of the Z-machine MITL and post-hole convolute.
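For reference, the 1D reduction referred to above is the standard (lossy) telegrapher's system for the line voltage V(z,t) and current I(z,t), written here with generic per-unit-length resistance R, inductance L, conductance G, and capacitance C; the specific parameterization used in the coupled finite element formulation is not reproduced here.

    \begin{aligned}
    \frac{\partial V}{\partial z} &= -L\,\frac{\partial I}{\partial t} - R\,I, \\
    \frac{\partial I}{\partial z} &= -C\,\frac{\partial V}{\partial t} - G\,V.
    \end{aligned}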
We introduce a robust verification tool for computational codes, which we call Stochastic Robust Extrapolation-based Error Quantification (StREEQ). Unlike the prevalent Grid Convergence Index (GCI) [1] method, our approach is suitable for both stochastic and deterministic computational codes and is generalizable to any number of discretization variables. Building on ideas introduced in the Robust Verification [2] approach, we estimate the converged solution and orders of convergence, with uncertainty, using multiple fits of a discretization error model. In contrast to Robust Verification, we perform these fits on many bootstrap samples, yielding a larger set of predictions with smoother statistics. Here, bootstrap resampling is performed on the lack-of-fit errors for deterministic code responses, and directly on the noisy data set for stochastic responses. This approach lends robustness to the overall results: it yields precise verification results for sufficiently resolved data sets and appropriately expands the uncertainty when the data set does not support a precise result. For stochastic responses, a credibility assessment is also performed to give the analyst an indication of the trustworthiness of the results. This approach is suitable for both code and solution verification, and is particularly useful for solution verification of high-consequence simulations.
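To make the bootstrap-over-fits idea concrete, the sketch below fits a simple power-law discretization error model f(h) ≈ f∞ + a·h^p to synthetic mesh-refinement data and resamples the lack-of-fit residuals to obtain a distribution of extrapolated values and convergence orders. The model form, data, and confidence level are illustrative assumptions; StREEQ's actual fitting weights, model selection, and credibility assessment are not reproduced.

    import numpy as np
    from scipy.optimize import curve_fit

    def error_model(h, f_inf, a, p):
        return f_inf + a * h**p

    h = np.array([0.4, 0.2, 0.1, 0.05, 0.025])            # discretization sizes
    f = np.array([1.113, 1.029, 1.008, 1.0021, 1.0006])   # synthetic code responses

    popt, _ = curve_fit(error_model, h, f, p0=[1.0, 1.0, 2.0])
    residuals = f - error_model(h, *popt)

    rng = np.random.default_rng(0)
    estimates = []
    for _ in range(2000):                                  # bootstrap on lack-of-fit errors
        f_boot = error_model(h, *popt) + rng.choice(residuals, size=h.size, replace=True)
        try:
            p_boot, _ = curve_fit(error_model, h, f_boot, p0=popt, maxfev=5000)
            estimates.append(p_boot)
        except RuntimeError:
            continue                                       # discard non-converged fits
    estimates = np.array(estimates)
    lo, hi = np.percentile(estimates[:, 0], [2.5, 97.5])
    print(f"extrapolated solution {popt[0]:.5f}, 95% interval [{lo:.5f}, {hi:.5f}]")
    print(f"observed order of convergence {popt[2]:.2f}")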
This presentation discusses activities related to the Nuclear Criticality Safety Program (NCSP) at Sandia National Laboratories in fiscal year 2021. These include NCSP funding, integral experiment requests, integral experiment spending, highlights, and COVID-19 impacts.
In situ analysis of surfaces during high-flux plasma exposure represents a long-standing challenge in the study of plasma-material interactions. While post-mortem microscopy can provide a detailed picture of structural and compositional changes, in situ techniques can capture the dynamic evolution of the surface. In this study, we demonstrate how spectroscopic ellipsometry can be applied to the real-time characterization of W nanostructure (also known as "fuzz") growth during exposure to low-temperature, high-flux He plasmas. Strikingly, over a wide range of sample temperatures and helium fluences, the measured ellipsometric parameters (ψ, Δ) collapse onto a single curve that can be directly correlated with surface morphologies characterized by ex situ helium ion microscopy. The initial variation in the (ψ, Δ) parameters appears to be governed by small changes in surface roughness (<50 nm) produced by helium bubble nucleation and growth, followed by the emergence of 50 nm diameter W tendrils. This basic behavior appears to be reproducible over a wide parameter space, indicating that spectroscopic ellipsometry may be of general practical use as a diagnostic for studying surface morphologies produced by high-flux He implantation in refractory metals. An advantage of the methods outlined here is that they are applicable at low incident ion energies, even below the sputtering threshold. As an example of this application, we apply in situ ellipsometry to examine how W fuzz growth is affected by both the incident ion energy and the surface temperature.
Here, we utilize electrically detected magnetic resonance (EDMR) measurements to compare high-field-stressed and gamma-irradiated Si/SiO2 metal–oxide–silicon (MOS) structures. We utilize spin-dependent recombination (SDR) EDMR, detected using the Fitzgerald and Grove dc I-V approach, to compare the effects of high-field electrical stressing and gamma irradiation on defect formation at and near the Si/SiO2 interface. As anticipated, both greatly increase the density of Pb centers (silicon dangling bonds at the interface). The irradiation also generates a significant increase in the dc I-V EDMR response of E' centers (oxygen vacancies in the SiO2 films), whereas the E' EDMR response generated by high-field stressing is much weaker than in the gamma irradiation case. These results suggest that the defects produced by radiation damage and by high-field electrical stressing differ in their physical distribution.
The reactivity of carbonyl oxides has previously been shown to exhibit strong conformer and substituent dependencies. Through a combination of synchrotron-multiplexed photoionization mass spectrometry experiments (298 K and 4 Torr) and high-level theory [CCSD(T)-F12/cc-pVTZ-F12//B2PLYP-D3/cc-pVTZ with an added CCSDT(Q) correction], we explore the conformer dependence of the reaction of acetaldehyde oxide (CH3CHOO) with dimethylamine (DMA). The experimental data support the theoretically predicted 1,2-insertion mechanism and the formation of an amine-functionalized hydroperoxide reaction product. Tunable-vacuum ultraviolet photoionization probing of anti- or anti- + syn-CH3CHOO reveals a strong conformer dependence of the title reaction. The rate coefficient of DMA with anti-CH3CHOO is predicted to exceed that for the reaction with syn-CH3CHOO by a factor of ∼34,000, which is attributed to submerged barrier (syn) versus barrierless (anti) mechanisms for energetically downhill reactions.
CPU/GPU heterogeneous compute platforms are a ubiquitous element in computing, and a programming model designed for this heterogeneous computing paradigm is important for both performance and programmability. A programming model that exposes a shared, unified address space between the heterogeneous units is a necessary step in this direction, as it removes the burden of explicit data movement from the programmer while maintaining performance. GPU vendors, such as AMD and NVIDIA, have released software-managed runtimes that can provide programmers the illusion of unified CPU and GPU memory by automatically migrating data in and out of GPU memory. However, this runtime support is not included in GPGPU-Sim, a commonly used framework that models the features of a modern graphics processor that are relevant to non-graphics applications. UVM Smart was developed to extend GPGPU-Sim 3.x with modeling of on-demand paging and data migration through the runtime. This report discusses the integration of UVM Smart into GPGPU-Sim 4.0 and the modifications made to improve simulation performance and accuracy.
Enhancing the efficiency of second-harmonic generation using all-dielectric metasurfaces has to date mostly focused on electromagnetic engineering of optical modes in the meta-atom. Further advances in nonlinear conversion efficiency can be gained by engineering the material nonlinearities at the nanoscale; however, this cannot be achieved using conventional materials. Semiconductor heterostructures that support resonant nonlinearities through quantum-engineered intersubband transitions can provide this new degree of freedom. By simultaneously optimizing the heterostructures and meta-atoms, we experimentally realize an all-dielectric polaritonic metasurface with a maximum second-harmonic generation power conversion factor of 0.5 mW/W² and a power conversion efficiency of 0.015% at nominal pump intensities of 11 kW/cm². These conversion efficiencies are higher than the record values reported to date in all-dielectric nonlinear metasurfaces, but with 3 orders of magnitude lower pump power. Our results therefore open a new direction for designing efficient nonlinear all-dielectric metasurfaces for new classical and quantum light sources.
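Assuming the conventional definition of the second-harmonic power conversion factor, the quoted 0.5 mW/W² relates the second-harmonic output power to the square of the pump power, so the power conversion efficiency grows linearly with pump power:

    P_{\mathrm{SH}} = \gamma\, P_{\mathrm{pump}}^{2}, \qquad
    \eta = \frac{P_{\mathrm{SH}}}{P_{\mathrm{pump}}} = \gamma\, P_{\mathrm{pump}},
    \qquad \gamma = 0.5~\mathrm{mW/W^{2}}.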
The protection systems of the electric grid (circuit breakers, relays, reclosers, and fuses) are the primary components responding to resilience events, ranging from common storms to extreme events. The protective equipment must detect and operate very quickly, generally in <0.25 seconds, to remove faults from the system before the system becomes unstable or additional equipment is damaged. The burden on protection systems is increasing as the complexity of the grid increases; renewable energy resources, particularly inverter-based resources (IBRs), and increasing electrification all contribute to a more complex grid landscape for protection devices. In addition, there are increasing threats from natural disasters, aging infrastructure, and manmade attacks that can cause faults and disturbances in the electric grid. The challenge in applying AI to power system protection is that events are rare and unpredictable. To improve the resiliency of the electric grid, AI has to be able to learn from very little data. During an extreme disaster, it may not be important that the perfect, most optimal action is taken, but AI must be guaranteed to always respond by moving the grid toward a more stable state during unseen events.
A combination of electrodeposition and thermal reduction methods has been utilized for the synthesis of ligand-free FeNiCo alloy nanoparticles through a high-entropy oxide intermediate. These phases are of great interest to the electrocatalysis community, especially when formed by a sustainable chemistry method. This is successfully achieved by first forming a complex five-element amorphous FeNiCoCrMn high-entropy oxide (HEO) phase via electrodeposition from a nanodroplet emulsion solution of the metal salt reactants. The amorphous oxide phase is then thermally treated and reduced at 570-600 °C to form the crystalline FeNiCo alloy with a separate CrMnOx cophase. The FeNiCo alloy is fully characterized by scanning transmission electron microscopy and energy-dispersive X-ray spectroscopy elemental analysis and is identified as a face-centered cubic crystal with the lattice constant a = 3.52 Å. The activity of the unoptimized, ligand-free FeNiCo nanoparticles toward the oxygen evolution reaction is evaluated in alkaline solution, and their onset potential is found to be ∼185 mV more cathodic than that of Pt metal. Beyond being able to synthesize highly crystalline, ligand-free FeNiCo nanoparticles, the demonstrated and relatively simple two-step process is ideal for the synthesis of tailor-made nanoparticles whose desired composition is not easily achieved with classical solution-based chemistries.
There are several engineering applications in which the assumptions of homogenization and scale separation may be violated, in particular for metallic structures constructed through additive manufacturing. Instead of resorting to direct numerical simulation of the macroscale system with an embedded fine scale, an alternative approach is to use an approximate macroscale constitutive model, estimate the model-form error using a posteriori error estimation techniques, and subsequently adapt the macroscale model to reduce the error for a given boundary value problem and quantity of interest. Here, we investigate this approach to multiscale analysis in solids with unseparated scales using the example of an additively manufactured metallic structure consisting of a polycrystalline microstructure that is neither periodic nor statistically homogeneous. As a first step toward the general nonlinear case, we focus here on linear elasticity, in which each grain within the polycrystal is linear elastic but anisotropic.
Experiments were designed and conducted to investigate the impact that geometric cavities have on the transfer of energy from an embedded explosion to the surface of the physical domain. The experimental domains were fabricated as 3-inch polymer cubes, with varying cavity geometries centered in the cubes. The energy transfer, represented as a shock wave, was generated by the detonation of an exploding bridgewire at the center of the cavity. The shock propagation was tracked by schlieren imaging through the optically accessible polymer. The magnitude of energy transferred to the surface was recorded by an array of pressure sensors. A minimum of five experimental runs were conducted for each cavity geometry and statistical results were developed and compared. Results demonstrated the decoupling effect that geometric cavities produce on the energy field at the surface.
Electrically detected magnetic resonance and near-zero-field magnetoresistance measurements were used to study atomic-scale traps generated during high-field gate stressing in Si/SiO2 MOSFETs. The defects observed are almost certainly important to time-dependent dielectric breakdown. The measurements were made with spin-dependent recombination current involving defects at and near the Si/SiO2 boundary. The interface traps observed are Pb0 and Pb1 centers, which are silicon dangling bond defects. The ratio of Pb0/Pb1 is dependent on the gate stressing polarity. Electrically detected magnetic resonance measurements also reveal generation of E′ oxide defects near the Si/SiO2 interface. Near-zero-field magnetoresistance measurements made throughout stressing reveal that the local hyperfine environment of the interface traps changes with stressing time; these changes are almost certainly due to the redistribution of hydrogen near the interface.
Graph partitioning has emerged as an area of interest due to its use in various applications in computational research. One way to partition a graph is to solve for the eigenvectors of the corresponding graph Laplacian matrix. This project focuses on the eigensolver LOBPCG and the evaluation of a new preconditioner: randomized Cholesky factorization (rchol). This preconditioner was tested for its speed and accuracy against other well-known preconditioners for the method. After experiments were run on several known test matrices, rchol appears to be a better preconditioner for structured matrices. This research was sponsored by the National Nuclear Security Administration Minority Serving Institutions Internship Program (NNSA-MSIIP) and completed at the host facility, Sandia National Laboratories. As such, after discussion of the research project itself, this report contains a brief reflection on experience gained as a result of participating in the NNSA-MSIIP.
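A minimal sketch of the spectral partitioning setup is shown below: the second-smallest eigenvector (Fiedler vector) of a grid-graph Laplacian is computed with SciPy's LOBPCG, and its sign pattern defines a two-way partition. A simple Jacobi (diagonal) preconditioner stands in for rchol, whose interface is not reproduced here; the graph, tolerance, and block size are illustrative.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import laplacian
    from scipy.sparse.linalg import LinearOperator, lobpcg

    m = 20                                               # 20x20 grid graph (connected)
    path = sp.diags([1.0, 1.0], [-1, 1], shape=(m, m))
    A = sp.kron(sp.identity(m), path) + sp.kron(path, sp.identity(m))
    L = laplacian(A, normed=False).tocsr()

    d = L.diagonal().copy()
    d[d == 0] = 1.0
    M = LinearOperator(L.shape, matvec=lambda x: x / d)  # Jacobi stand-in preconditioner

    rng = np.random.default_rng(0)
    X = rng.normal(size=(L.shape[0], 4))                 # block of starting vectors
    vals, vecs = lobpcg(L, X, M=M, largest=False, tol=1e-6, maxiter=500)
    fiedler = vecs[:, 1]                                 # second-smallest eigenvector
    part = fiedler >= 0.0                                # two-way spectral partition by sign
    print("smallest eigenvalues:", np.round(vals, 5),
          "partition sizes:", part.sum(), (~part).sum())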
We present an overview of the magneto-inertial fusion (MIF) concept MagLIF (Magnetized Liner Inertial Fusion) pursued at Sandia National Laboratories and review some of the most prominent results since the initial experiments in 2013. In MagLIF, a centimeter-scale beryllium tube or "liner" is filled with a fusion fuel, axially pre-magnetized, laser pre-heated, and finally imploded using up to 20 MA from the Z machine. All of these elements are necessary to generate a thermonuclear plasma: laser preheating raises the initial temperature of the fuel, the electrical current implodes the liner and quasi-adiabatically compresses the fuel via the Lorentz force, and the axial magnetic field limits thermal conduction from the hot plasma to the cold liner walls during the implosion. MagLIF is the first MIF concept to demonstrate fusion relevant temperatures, significant fusion production (>10^13 primary DD neutron yield), and magnetic trapping of charged fusion particles. On a 60 MA next-generation pulsed-power machine, two-dimensional simulations suggest that MagLIF has the potential to generate multi-MJ yields with significant self-heating, a long-term goal of the US Stockpile Stewardship Program. At currents exceeding 65 MA, the high gains required for fusion energy could be achievable.
Computational tools to study the thermodynamic properties of magnetic materials have, until recently, been limited to phenomenological modeling or to small domain sizes, limiting our mechanistic understanding of thermal transport in ferromagnets. Herein, we study the interplay of phonon and magnetic spin contributions to the thermal conductivity in α-iron utilizing non-equilibrium molecular dynamics simulations. It was observed that the magnetic spin contribution to the total thermal conductivity exceeds lattice transport for temperatures up to two-thirds of the Curie temperature, after which only strongly coupled magnon-phonon modes become active heat carriers. Characterizations of the phonon and magnon spectra give detailed insight into the coupling between these heat carriers and the temperature sensitivity of these coupled systems. Comparisons to both experiments and ab initio data corroborate our inferred electronic thermal conductivity, supporting the coupled molecular dynamics/spin dynamics framework as a viable method to extend predictive capability for magnetic material properties.
Spectral line-shape models are an important part of understanding high-energy-density (HED) plasmas. Models are needed for calculating the opacity of materials and can serve as diagnostics for astrophysical and laboratory plasmas. However, much of the literature on line shapes is directed toward specialists, which makes it difficult for non-specialists to enter the field. We have two broad goals with this topical review. First, we aim to give enough information that others in HED physics may better understand the current state of the field; this may help guide future experiments to test different aspects of the theory. Second, we provide an introduction for those who might be interested in line-shape theory, with enough material to navigate the field and the literature. We give a high-level overview of the line-broadening process, as well as a deeper dive into the formalism, available methods, and approximations.