The addition of a common amino acid, phenylalanine, to a Layer-by-Layer (LbL) deposited polyelectrolyte (PE) film on a nanoporous membrane can increase its ionic selectivity over a PE film without the added amino acid. The addition of phenylalanine is inspired by detailed knowledge of the structure of the channelrhodopsin family of protein ion channels, where phenylalanine plays an instrumental role in facilitating sodium ion transport. Normally deposited and crosslinked PE films increase the cationic selectivity of a support membrane in a controllable manner: higher selectivity is achieved with thicker PE coatings, which in turn also increase the ionic resistance of the membrane. The increased ionic selectivity is desired, while the increased resistance is not. We show that, through incorporation of phenylalanine during the LbL deposition process, in solutions of NaCl with concentrations ranging from 0.1 to 100 mM, the ionic selectivity can be increased independently of the membrane resistance. Specifically, the addition increases the cationic transference of the PE films from 81.4% to 86.4%, an increase on par with PE films that are nearly triple the thickness, while exhibiting much lower resistance than the thicker coatings: the phenylalanine-incorporated PE films display an area-specific resistance of 1.81 Ω cm2 in 100 mM NaCl, whereas much thicker PE membranes show a higher resistance of 2.75 Ω cm2 in the same solution.
As the push towards exascale hardware has increased the diversity of system architectures, performance portability has become a critical aspect of scientific software. We describe the Kokkos Performance Portable Programming Model, which allows developers to write single-source applications for diverse high performance computing architectures. Kokkos provides key abstractions for both the compute and memory hierarchy of modern hardware. Here, we describe novel abstractions recently added to Kokkos, such as hierarchical parallelism, containers, task graphs, and arbitrary-sized atomic operations. We demonstrate the performance of these new features with reproducible benchmarks on CPUs and GPUs.
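Kokkos itself is a C++ library, but the single-source idea the abstract describes — one kernel body dispatched to different execution spaces — can be sketched in a few lines of Python. All names below (Serial, Threads, parallel_for, axpy) are illustrative analogues, not the actual Kokkos API:

```python
# Minimal analogue of the "write once, run on any backend" idea:
# the loop body is written once and handed to an execution space.
from concurrent.futures import ThreadPoolExecutor

class Serial:
    @staticmethod
    def parallel_for(n, functor):
        for i in range(n):
            functor(i)

class Threads:
    @staticmethod
    def parallel_for(n, functor):
        with ThreadPoolExecutor() as pool:
            list(pool.map(functor, range(n)))

def axpy(space, a, x, y):
    # Single-source kernel: y[i] += a * x[i], independent of backend.
    def body(i):
        y[i] += a * x[i]
    space.parallel_for(len(x), body)

x = [1.0, 2.0, 3.0]
y = [0.0, 0.0, 0.0]
axpy(Serial, 2.0, x, y)    # y -> [2.0, 4.0, 6.0]
axpy(Threads, 1.0, x, y)   # y -> [3.0, 6.0, 9.0]
```

In real Kokkos the same role is played by `Kokkos::parallel_for` with an execution-space template parameter, and the memory-hierarchy abstractions (Views) handle data placement analogously.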
The compound (Pb2MnSe3)0.6VSe2 was predicted to be kinetically stable based on density functional theory (DFT) calculations on an island of Pb2MnSe3 between layers of VSe2. This approach provides a high degree of freedom by not forcing an interlayer lattice match, making it ideal for investigating the likelihood of formation of new incommensurate misfit layer structures. The free space around the island is critical, as it allows atoms to diffuse and hence to explore the local energy landscape around the initial configuration. (Pb2MnSe3)0.6VSe2 was synthesized via a near diffusionless reaction from precursors in which a repeating sequence of elemental layers matches the local composition and layer sequence of the predicted compound. The VSe2 layer consists of a Se-V-Se trilayer with octahedral coordination of the V atoms. The Pb2MnSe3 layer consists of three rock-salt-like planes, with a MnSe plane between the planes of PbSe. The central MnSe plane stabilizes the puckering of the outer PbSe planes. Electrical properties indicate that (Pb2MnSe3)0.6VSe2 undergoes a charge density wave transition at ~100 K and orders ferromagnetically at 35 K. The combination of theory and experiment enables a faster convergence to new heterostructures than either approach in isolation.
We simulated previous quantum chemistry experiments using JAQAL (Just Another Quantum Assembly Language), matching expected results. This work lays the groundwork for QSCOUT (Quantum Scientific Computing Open User Testbed) users to conduct their own quantum experiments and tests the capabilities of the JAQAL language.
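As a hedged illustration of the kind of small circuit such simulations exercise — using plain NumPy state vectors rather than JAQAL or the QSCOUT toolchain — a two-qubit Bell-state preparation can be checked against its expected measurement statistics:

```python
import numpy as np

# State-vector sketch of a two-qubit Bell-state circuit: the kind of
# small program one might express in a quantum assembly language and
# verify against expected results. Plain NumPy, not the JAQAL toolchain.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control qubit 0, target qubit 1

psi = np.zeros(4); psi[0] = 1.0                # start in |00>
psi = np.kron(H, I) @ psi                      # Hadamard on qubit 0
psi = CNOT @ psi                               # entangle the qubits
probs = np.abs(psi) ** 2                       # measurement probabilities
# probs -> [0.5, 0, 0, 0.5]: only |00> and |11> are ever observed
```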
This assessment analyzes ES&H occurrences and Non-Occurrence Trackable Events (NOTEs) from the first and second quarters (Q1 and Q2) of fiscal year (FY) 2021. For this report, assessors used three primary methods for categorizing occurrence and NOTE data: issue categorization, DOE reporting criteria groups, and DOE cause codes.
We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. We adopt a probabilistic description of risk that assigns probabilities to consequences associated with an event and use risk measures, which combine objective evidence with the subjective values of decision makers, to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical estimates obtained from the training data. These surrogate models not only limit over-confidence in reliability and safety assessments, but also produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to the specific risk preferences of the stakeholder and then present an approach, based upon stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. The surrogate models introduce a bias that allows them to conservatively estimate the target risk measures. We provide theoretical results showing that this bias decays at the same rate as the L2 error in the surrogate model. Our numerical examples confirm that risk-aware surrogate models do indeed over-estimate the target risk measures while converging at the expected rate.
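A minimal sketch of the conservatism requirement, assuming CVaR as the risk measure and an assumed surrogate error bound (both illustrative; the paper's construction is more general):

```python
import numpy as np

# Illustrative sketch of a conservative risk-measure estimate.
# CVaR_alpha is the mean of the worst (1 - alpha) fraction of outcomes;
# a "conservative" estimate must be >= the empirical one, so here the
# surrogate value is biased upward by an assumed error bound.
def empirical_cvar(samples, alpha):
    s = np.sort(samples)
    tail = s[int(np.ceil(alpha * len(s))):]   # worst (1 - alpha) tail
    return tail.mean()

rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=2000)  # synthetic outcomes

cvar = empirical_cvar(losses, alpha=0.9)
surrogate_bound = 0.05            # assumed bound on the surrogate error
conservative = cvar + surrogate_bound   # biased upward by construction
```

The point of the framework is that this upward bias decays at the same rate as the surrogate's L2 error, so conservatism costs nothing asymptotically.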
We present an adaptive algorithm for constructing surrogate models for integrated systems composed of a set of coupled components. With this goal we introduce ‘coupling’ variables with a priori unknown distributions that allow approximations of each component to be built independently. Once built, the surrogates of the components are combined and used to predict system-level quantities of interest (QoI) at a fraction of the cost of interrogating the full system model. We use a greedy experimental design procedure, based upon a modification of Multi-Index Stochastic Collocation (MISC), to minimize the error of the combined surrogate. This is achieved by refining each component surrogate in accordance with its relative contribution to error in the approximation of the system-level QoI. Our adaptation of MISC is a multi-fidelity procedure that can leverage ensembles of models of varying cost and accuracy, for one or more components, to produce estimates of system-level QoI. Several numerical examples demonstrate the efficacy of the proposed approach on systems involving feed-forward and feedback coupling. For a fixed computational budget, the proposed algorithm is able to produce approximations that are orders of magnitude more accurate than approximations that treat the integrated system as a black-box.
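The component-wise construction can be illustrated with a toy feed-forward system. This is a simplified sketch using polynomial fits, not the MISC procedure itself, and the coupling-variable range is an assumed prior rather than the adaptively learned distribution:

```python
import numpy as np

# Toy feed-forward system q = f2(f1(x)): approximate each component
# independently over a "coupling variable" y = f1(x), then compose the
# surrogates to predict the system-level QoI without running the full system.
f1 = lambda x: np.sin(x)                 # component 1 (expensive in practice)
f2 = lambda y: np.exp(y)                 # component 2

x_train = np.linspace(0, np.pi, 9)       # samples for component 1
y_train = np.linspace(-1.5, 1.5, 9)      # assumed prior range of coupling var

s1 = np.polynomial.Polynomial.fit(x_train, f1(x_train), deg=6)
s2 = np.polynomial.Polynomial.fit(y_train, f2(y_train), deg=6)

x_test = np.linspace(0, np.pi, 101)
qoi_true = f2(f1(x_test))
qoi_surr = s2(s1(x_test))                # composed surrogate prediction
err = np.max(np.abs(qoi_true - qoi_surr))
```

The adaptive algorithm goes further by refining whichever component surrogate contributes most to the system-level error, which is what yields the reported orders-of-magnitude advantage over black-box approximation.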
Efforts at Sandia National Laboratories have focused on fundamental experiments to understand the dispersal of dense particle distributions in high-speed compressible flow. The experiments are conducted in shock tube facilities where the flow conditions and the initial conditions of the particle distributions are well controlled and well characterized. An additional advantage of the shock tube is that it is more readily able to accommodate advanced measurement diagnostics in comparison to explosive field tests.
Vaidya, Sachin; Benalcazar, Wladimir A.; Cerjan, Alexander W.; Rechtsman, Mikael C.
We show that point defects in two-dimensional photonic crystals can support bound states in the continuum (BICs). The mechanism of confinement is a symmetry mismatch between the defect mode and the Bloch modes of the photonic crystal. These BICs occur in the absence of band gaps and therefore provide an alternative mechanism to confine light. Furthermore, we show that such BICs can propagate in a fiber geometry and exhibit arbitrarily small group velocity which could serve as a platform for enhancing nonlinear effects and light-matter interactions in structured fibers.
Potassium channels modulate various cellular functions through efficient and selective conduction of K+ ions. The mechanism of ion conduction in potassium channels has recently emerged as a topic of debate. Crystal structures of potassium channels show four K+ ions bound to adjacent binding sites in the selectivity filter, while chemical intuition and molecular modeling suggest that the direct ion contacts are unstable. Molecular dynamics (MD) simulations have been instrumental in the study of conduction and gating mechanisms of ion channels. Based on MD simulations, two hypotheses have been proposed, in which the four-ion configuration is an artifact due to either averaged structures or low temperature in crystallographic experiments. The two hypotheses have been supported or challenged by different experiments. Here, MD simulations with polarizable force fields validated by ab initio calculations were used to investigate the ion binding thermodynamics. Contrary to previous beliefs, the four-ion configuration was predicted to be thermodynamically stable after accounting for the complex electrostatic interactions and dielectric screening. Polarization plays a critical role in the thermodynamic stabilities. As a result, the ion conduction likely operates through a simple single-vacancy and water-free mechanism. The simulations explained crystal structures, ion binding experiments and recent controversial mutagenesis experiments. This work provides a clear view of the mechanism underlying the efficient ion conduction and demonstrates the importance of polarization in ion channel simulations.
We present a theoretical framework for designing and assessing the performance of algorithms executing in networks consisting of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is challenging but paramount. We prove a polynomial-factor advantage even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.
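The core idea behind delay-based neuromorphic shortest-path algorithms can be sketched as follows (an illustrative event-driven simulation, not the paper's exact construction): synaptic delays encode edge weights, and a neuron's first spike time equals its shortest-path distance from the source.

```python
# Event-driven sketch of a spiking wavefront: the source neuron spikes at
# time 0; each spike propagates to neighbors after a delay equal to the
# edge weight; a neuron fires only on its first incoming spike.
def spiking_shortest_paths(graph, source):
    fired = {}                        # node -> first spike time
    pending = {source: 0}             # scheduled spikes not yet fired
    while pending:
        node = min(pending, key=pending.get)  # next spike in time order
        t = pending.pop(node)
        fired[node] = t
        for nbr, delay in graph[node]:
            if nbr not in fired and pending.get(nbr, float("inf")) > t + delay:
                pending[nbr] = t + delay
    return fired

graph = {
    "a": [("b", 2), ("c", 5)],
    "b": [("c", 1), ("d", 4)],
    "c": [("d", 1)],
    "d": [],
}
dist = spiking_shortest_paths("a" in graph and graph or graph, "a") if False else spiking_shortest_paths(graph, "a")
# dist -> {"a": 0, "b": 2, "c": 3, "d": 4}
```

In event-driven form this is equivalent to Dijkstra's algorithm; the neuromorphic advantage analyzed in the paper comes from the hardware performing the data movement in parallel rather than from a different asymptotic sequential algorithm.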
Understanding the impact of high-energy electron radiation on device characteristics remains critical for the expanding use of semiconductor electronics in space-borne applications and other radiation-harsh environments. Here, we report on in situ measurements of high-energy electron radiation effects on the hole diffusion length in low threading dislocation density homoepitaxial bulk n-GaN Schottky diodes using electron beam induced current (EBIC) in high-voltage scanning electron microscopy mode. Despite the large interaction volume in this system, quantitative EBIC imaging is possible due to the sustained collimation of the incident electron beam. This approach enables direct measurement of electron radiation effects without having to thin the specimen. Using a combination of experimental EBIC measurements and Monte Carlo simulations of electron trajectories, we determine a hole diffusion length of 264 ± 11 nm for n-GaN. Irradiation with a 200 kV electron beam to an accumulated dose of 24 × 10^16 electrons cm−2 led to an approximately 35% decrease in the minority carrier diffusion length.
Dos Santos, Gonzalo; Meyer, Robert; Aparicio, Romina; Tranchida, Julien G.; Bringa, Eduardo M.; Urbassek, Herbert M.
Magnetization of clusters is often simulated using atomistic spin dynamics for a fixed lattice. Coupled spin-lattice dynamics simulations of the magnetization of nanoparticles have, to date, neglected the change in the size of the atomic magnetic moments near surfaces. We show that the introduction of variable magnetic moments leads to a better description of experimental data for the magnetization of small Fe nanoparticles. To this end, we divide atoms into a surface-near shell and a core with bulk properties. It is demonstrated that both the magnitude of the shell magnetic moment and the exchange interactions need to be modified to obtain a fair representation of the experimental data. This allows for a reasonable description of the average magnetic moment vs cluster size, and also the cluster magnetization vs temperature.
Actinide oxalates are chemical compounds important to the nuclear industry, with applications ranging from actinide separation in waste reprocessing, to production of specialty actinides, to disposal of high-level nuclear waste (HLW) and spent nuclear fuel (SNF). In this study, the solubility constants for Pr2(C2O4)3·10H2O and Nd2(C2O4)3·10H2O have been determined by performing solubility experiments in HNO3 and in mixtures of HNO3 and H2C2O4 at 23.0 ± 0.2 °C. The targeted starting materials, Pr2(C2O4)3·10H2O and Nd2(C2O4)3·10H2O, were successfully synthesized at room temperature using PrCl3, NdCl3 and oxalic acid as the source materials. We then utilized the targeted solubility-controlling phases to conduct solubility measurements. There was no phase change over the entire duration of the experiments, demonstrating that Pr2(C2O4)3·10H2O and Nd2(C2O4)3·10H2O were the solubility-controlling phases in our respective experiments. Based on our experimental data, we have developed a thermodynamic model for Pr2(C2O4)3·10H2O and Nd2(C2O4)3·10H2O in mixtures of HNO3 and H2C2O4 to high ionic strengths. The model for Pr2(C2O4)3·10H2O reproduces well the reported experimental data for Pu2(C2O4)3·10H2O, which were not utilized for the model development, demonstrating that Pr(III) is an excellent analog for Pu(III). Similarly, the model for Nd2(C2O4)3·10H2O reproduces the solubility of Am2(C2O4)3·10H2O and Cm2(C2O4)3·10H2O. The Pitzer model was used for the calculation of activity coefficients. Based on the published, well-established model for the dissociation constants of oxalic acid and the stability constants of actinide-oxalate complexes [i.e., AmC2O4+ and Am(C2O4)2−] to high ionic strengths, we have obtained the solubility constants (log10K0) for the following reactions at 25 °C:

Pr2(C2O4)3·10H2O ⇌ 2Pr3+ + 3C2O42− + 10H2O(l)
Nd2(C2O4)3·10H2O ⇌ 2Nd3+ + 3C2O42− + 10H2O(l)

to be −30.82 ± 0.30 (2σ) and −31.14 ± 0.35 (2σ), respectively.
These values can be directly applied to Pu2(C2O4)3·10H2O, Am2(C2O4)3·10H2O and Cm2(C2O4)3·10H2O. The model established for actinide oxalates by this study provides the knowledge needed regarding the solubilities of actinide/REE oxalates at various ionic strengths, and is expected to find applications in many fields, including the geological disposal of nuclear waste and the mobility of REE under surface conditions, as Pr2(C2O4)3·10H2O and Nd2(C2O4)3·10H2O can be regarded as the pure Pr and Nd end-members of deveroite, a recently discovered natural REE oxalate with the following stoichiometry: (Ce1.01Nd0.33La0.32Pr0.11Y0.11Sm0.01Pb0.04U0.03Th0.01Ca0.04)2.01(C2O4)2.99·9.99H2O. Regarding the geological disposal of nuclear waste, Am2(C2O4)3·10H2O/Pu2(C2O4)3·10H2O/Cm2(C2O4)3·10H2O can serve as the source-term phase for actinides, as demonstrated for disposal in clay/shale formations by the stability of Am2(C2O4)3·10H2O in comparison with Am(OH)3(am), Am(OH)3(s) and AmCO3(OH)(s) under the relevant geological repository conditions.
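As a back-of-envelope check of what the reported log10K0 values imply, the ideal-solution solubility in pure water follows directly from the dissolution stoichiometry (this neglects activity coefficients and oxalate complexation, both of which the Pitzer-based model does account for):

```python
# Dissolution: Ln2(C2O4)3·10H2O ⇌ 2 Ln3+ + 3 C2O4^2− + 10 H2O
# With molar solubility s: K = (2s)^2 * (3s)^3 = 108 s^5,
# so s = (K / 108)^(1/5). Ideal-solution estimate only.
log10K0 = {"Pr": -30.82, "Nd": -31.14}
solubility = {ln: (10.0 ** lk / 108.0) ** 0.2 for ln, lk in log10K0.items()}
# solubility["Pr"] ~ 2.7e-7 M, solubility["Nd"] ~ 2.3e-7 M
```

These sub-micromolar values are consistent with the use of oxalate precipitation for actinide separations and with oxalates acting as solubility-limiting source-term phases.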
The costs associated with the increasing maintenance and surveillance needs of aging structures are rising at an unexpected rate. Multi-site fatigue damage, hidden cracks in hard-to-reach locations, disbonded joints, erosion, impact, and corrosion are among the major flaws encountered in today’s extensive fleet of aging aircraft and space vehicles. Aircraft maintenance and repairs represent about a quarter of a commercial fleet’s operating costs. The application of Structural Health Monitoring (SHM) systems using distributed sensor networks can reduce these costs by facilitating rapid and global assessments of structural integrity. The use of in-situ sensors for real-time health monitoring can overcome inspection impediments stemming from accessibility limitations, complex geometries, and the location and depth of hidden damage. Reliable structural health monitoring systems can automatically process data, assess structural condition, and signal the need for human intervention. The ease of monitoring an entire on-board network of distributed sensors means that structural health assessments can occur more often, allowing operators to be even more vigilant with respect to flaw onset. SHM systems also allow condition-based maintenance practices to be substituted for the current time-based or cycle-based maintenance approach, thus optimizing maintenance labor. The Federal Aviation Administration has conducted a series of SHM validation and certification programs intended to comprehensively support the evolution and adoption of SHM practices into routine aircraft maintenance practices. This report presents one of those programs, involving a Sandia Labs-aviation industry effort to move SHM into routine use for aircraft maintenance.
The Airworthiness Assurance NDI Validation Center (AANC) at Sandia Labs, in conjunction with Sikorsky, Structural Monitoring Systems Ltd., Anodyne Electronics Manufacturing Corp., Acellent Technologies Inc., and the Federal Aviation Administration (FAA) carried out a trial validation and certification program to evaluate Comparative Vacuum Monitoring (CVM) and Piezoelectric Transducers (PZT) as a structural health monitoring solution to specific rotorcraft applications. Validation tasks were designed to address the SHM equipment, the health monitoring task, the resolution required, the sensor interrogation procedures, the conditions under which the monitoring will occur, the potential inspector population, adoption of CVM and PZT systems into rotorcraft maintenance programs and the document revisions necessary to allow for their routine use as an alternate means of performing periodic structural inspections. This program addressed formal SHM technology validation and certification issues so that the full spectrum of concerns, including design, deployment, performance and certification were appropriately considered. Sandia Labs designed, implemented, and analyzed the results from a focused and statistically relevant experimental effort to quantify the reliability of a CVM system applied to Sikorsky S-92 fuselage frame application and a PZT system applied to an S-92 main gearbox mount beam application. The applications included both local and global damage detection assessments. All factors that affect SHM sensitivity were included in this program: flaw size, shape, orientation and location relative to the sensors, as well as operational and environmental variables. Statistical methods were applied to performance data to derive Probability of Detection (POD) values for SHM sensors in a manner that agrees with current nondestructive inspection (NDI) validation requirements and is acceptable to both the aviation industry and regulatory bodies. 
The validation work completed in this program demonstrated the ability of both CVM and PZT SHM systems to detect cracks in rotorcraft components. It proved the ability to use final system response parameters to provide a Green Light/Red Light (“GO” – “NO GO”) decision on the presence of damage. In addition to quantifying the performance of each SHM system for the trial applications on the S-92 platform, this study also identified specific methods that can be used to optimize damage detection, guidance on deployment scenarios that can affect performance, and considerations that must be made to properly apply CVM and PZT sensors. These results support the main goal of safely integrating SHM sensors into rotorcraft maintenance programs. Additional benefits from deploying rotorcraft Health and Usage Monitoring Systems (HUMS) may be realized when structural assessment data, collected by an SHM system, is also used to detect structural damage to complement the operational environment monitoring. The use of in-situ sensors for health monitoring of rotorcraft structures can be a viable option for both flaw detection and maintenance planning activities. This formal SHM validation will allow aircraft manufacturers and airlines to confidently make informed decisions about the proper utilization of CVM and PZT technology. It will also streamline future regulatory actions and formal certification measures needed to assure the safe application of SHM solutions.
The Environment, Safety, and Health Planning department at Sandia National Laboratories is interested in the purchase and storage of chemicals and their potential impact following an uncontrolled release. The large number of projects conducted at SNL makes tracking every chemical purchase impractical; therefore, attention is focused on hazardous substances purchased in large quantities. Chemicals and quantities of concern are determined through regulatory guidelines, e.g., the OSHA Process Safety Management list, the EPA Risk Management Plan list, and the Department of Energy Subcommittee on Consequence Assessment and Protective Actions Emergency Response Planning Guidelines. Based on these regulations, a list of chemicals with quantities of concern was created using the Areal Locations of Hazardous Atmospheres (ALOHA) and SCREEN View chemical dispersion modelling software. This report does not draw conclusions; rather, it documents the logic for a chemicals-of-concern list to ensure compliance with various regulations and to form the basis for monitoring chemicals that may affect hazard classification.
BC-4 is an abandoned brining cavern situated in the middle of the site. Its presence poses a concern for several reasons: 1) the cavern was leached up into the caprock; 2) it is similar to BC-7, a brining cavern on the northwest corner of the dome that collapsed in 1954 and is now the site of Cavern Lake; 3) a similar collapse of BC-4 would have catastrophic consequences for the future operation of the site. A previously mapped fault feature in the caprock, thought to extend into the salt dome, runs in close proximity to BC-4. There are uncertainties about the true extent of the fault, and no explicit analysis has been performed to predict the effects of the fault on BC-4 stability. Additional knowledge of the fault and its effects is becoming more crucial as an enhanced monitoring program is developed and installed.
The Y-12 SIRP Quadrant 1, Quadrant 2, and Vehicle Barrier project involves the following summary elements. The perimeter intrusion detection and assessment system (PIDAS) is an existing system at the Y-12 National Security Complex, which is a government-owned facility located in Oak Ridge, Tennessee, and managed by Consolidated Nuclear Security, LLC (CNS) for the Department of Energy (DOE). National Technology and Engineering Solutions of Sandia, LLC (NTESS) is the engineering design agent and construction manager (CM) for the Y-12 SIRP effort. The Quadrants 1 and 2 portion of the project involves the replacement of the PIDAS, and the vehicle barrier portion of the project involves the installation of a continuous passive vehicle barrier alongside the inner PIDAS fence.
The Primary Standards Lab employs guardbanding methods to reduce risk of false acceptance in calibration when test uncertainty ratios are low. Similarly, production agencies guardband their requirements to reduce false accept rates in product acceptance. The root-sum-square guardbanding method is recommended by PSL, but many other guardbanding methods have been proposed in literature or implemented in commercial software. This report analyzes the false accept and reject rates resulting from the most common guardbanding methods. It is shown that the root-sum-square method and the Dobbert Managed Guardband strategy are similar and both are suitable for calibration and product acceptance work in the NSE.
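A hedged sketch of the root-sum-square guardband as commonly formulated — acceptance limit A = sqrt(T^2 − U^2), with T the tolerance and U the expanded measurement uncertainty; exact conventions vary across the literature — together with a Monte Carlo estimate of the resulting false-accept rate:

```python
import math, random

# Root-sum-square guardband (common formulation; conventions vary):
# shrink the tolerance T by the expanded uncertainty U in quadrature.
def rss_acceptance_limit(T, U):
    return math.sqrt(T**2 - U**2)

T, U = 1.0, 0.25                      # test uncertainty ratio TUR = T/U = 4
A = rss_acceptance_limit(T, U)        # ~0.968: tighter than T

# Monte Carlo false-accept rate: unit truly out of tolerance but
# measured inside the guardbanded acceptance limit.
random.seed(1)
false_accepts = accepts = 0
for _ in range(200_000):
    true_val = random.gauss(0, 0.5)                 # illustrative population
    measured = true_val + random.gauss(0, U / 2)    # U is k=2 expanded
    if abs(measured) <= A:
        accepts += 1
        if abs(true_val) > T:
            false_accepts += 1
pfa = false_accepts / accepts
```

Tightening A below T trades a higher false-reject rate for the lower false-accept rate shown here, which is the trade-off the report analyzes across guardbanding methods.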
Partial fuel stratification (PFS) is a promising fuel injection strategy to improve the stability of lean combustion by applying a small amount of pilot injection right before spark timing. Mixed-mode combustion, which makes use of end-gas autoignition following conventional deflagration-based combustion, can be further utilized to speed up the overall combustion. In this study, PFS-assisted mixed-mode combustion in a lean-burn direct-injection spark-ignition (DISI) engine is numerically investigated using multi-cycle large eddy simulation (LES). A previously developed hybrid G-equation/well-stirred reactor combustion model for the well-mixed operation is extended to the PFS-assisted operation. The experimental spray morphology is employed to derive spray model parameters for the pilot injection. The LES-based model is validated against experimental data and is further compared with the Reynolds-averaged Navier-Stokes (RANS)-based model. Overall, both RANS and LES predict the mean pressure and heat release rate traces well, while LES outperforms RANS in capturing the cycle-to-cycle variation (CCV) and the combustion phasing in the mass burned space. Liquid and vapor penetrations obtained from the simulations agree reasonably well with the experiment. Detailed flame structures predicted from the simulations reveal the transition from a sooting diffusion flame to a lean premixed flame, which is consistent with experimental findings. LES captures more wrinkled and stretched flames than RANS. Finally, the LES model is employed to investigate the impacts of fuel properties, including heat of vaporization (HoV) and laminar burning speed (SL). Combustion phasing is found to be more sensitive to SL than to HoV, with a larger fuel property sensitivity of the heat release rate from autoignition than that from deflagration. Moreover, the combustion phasing in the PFS-assisted operation is shown to be less sensitive to SL compared with the well-mixed operation.
This paper describes how performance problems can be “masked,” or not readily evident by several causes: by photovoltaic (PV) system configuration (such as the size of the PV array capacity relative to the size of the inverter and the resultant clipped operating mode); by instrumentation design, installation, and maintenance (such as a misaligned or dirty pyranometer); by contract clauses (when operational availability is transformed to contractual availability, which excludes many factors); and by identified management and operational practices (such as reporting on a portfolio of plants rather than individually). A simple method based on a duration curve is introduced to overcome shortcomings of Performance Ratio based on nameplate capacity and Performance Index based on hourly simulation when quantifying masking effects, and inverter clipping and pyranometer soiling are presented as two examples of the new method. With a better understanding of the non-transparency of masking issues, stakeholders can better interpret performance data and deliver improved AC and DC plant conditions through PV system operation and maintenance (O&M) for improved performance, reduced O&M costs, and a more consistently delivered, and reduced, levelized cost of energy (LCOE).
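The duration-curve method can be sketched on synthetic data: sorting AC power in descending order exposes inverter clipping as a flat plateau at the inverter limit. The data and limits below are illustrative, not from a real plant:

```python
import numpy as np

# Synthetic year of hourly DC power (MW), clipped by a 1 MW AC inverter.
rng = np.random.default_rng(7)
dc_power = np.clip(rng.normal(0.7, 0.35, 8760), 0, None)
inverter_limit = 1.0
ac_power = np.minimum(dc_power, inverter_limit)     # clipping "masks" DC output

# Duration curve: power sorted in descending order vs. ranked hour.
duration_curve = np.sort(ac_power)[::-1]

# The plateau width at the inverter limit quantifies the clipped fraction,
# which nameplate-based Performance Ratio alone would not reveal.
clipped_hours = int((ac_power >= inverter_limit).sum())
clipped_frac = clipped_hours / len(ac_power)
```

A soiled pyranometer would show up analogously, as a uniform downward shift of the measured-irradiance duration curve relative to a reference, rather than a plateau.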
Ignition and material response properties of aluminized HMX heterogeneous explosive mixtures were explored in a series of planar impact experiments performed over multiple years. This work expands on previous work studying material response to impact in single-component HMX granular materials. The addition of nanometric aluminum is shown to affect the ignition sensitivity and the growth to reaction from impact. The gas gun test results presented here span variations in particle size, shock strength, and aluminum mass fraction.
Surging interest in engineering quantum computers has stimulated significant and focused research on technologies needed to make them manufacturable and scalable. In the ion trap realm this has led to a transition from bulk three-dimensional macro-scale traps to chip-based ion traps and included important demonstrations of passive and active electronics, waveguides, detectors, and other integrated components. At the same time as these technologies are being developed the system sizes are demanding more ions to run noisy intermediate scale quantum (NISQ) algorithms, growing from around ten ions today to potentially a hundred or more in the near future. To realize the size and features needed for this growth, the geometric and material design space of microfabricated ion traps must expand. In this paper we describe present limitations and the approaches needed to overcome them, including how geometric complexity drives the number of metal levels, why routing congestion affects the size and location of shunting capacitors, and how RF power dissipation can limit the size of the trap array. We also give recommendations for future research needed to accommodate the demands of NISQ scale ion traps that are integrated with additional technologies.
It has been recognized that as cavern operations become more frequent due to oil sales, field conditions may arise which require a faster turnaround time of analysis to address potential cavern impacts. This letter describes attempts to implement a strategy of transferring an intermediate solution of a Big Hill (BH) geomechanical model from a previous finite element mesh with a specified cavern geometry, to a new mesh with a new cavern geometry created by leaching from an oil sale operation.
Fu, Pengcheng; Schoenball, Martin; Ajo-Franklin, Jonathan B.; Chai, Chengping; Maceira, Monica; Morris, Joseph P.; Wu, Hui; Knox, Hunter; Schwering, Paul C.; White, Mark D.; Burghardt, Jeffrey A.; Strickland, Christopher E.; Johnson, Timothy C.; Vermeul, Vince R.; Sprinkle, Parker; Roberts, Benjamin; Ulrich, Craig; Guglielmi, Yves; Cook, Paul J.; Dobson, Patrick F.; Wood, Todd; Frash, Luke P.; Ingraham, Mathew D.; Pope, Joseph S.; Smith, Megan M.; Neupane, Ghanashyam; Doe, Thomas W.; Roggenthen, William M.; Horne, Roland; Singh, Ankush; Zoback, Mark D.; Wang, Herb; Condon, Kate; Ghassemi, Ahmad; Chen, Hao; Mcclure, Mark W.; Vandine, George; Blankenship, Douglas A.; Kneafsey, Timothy J.
The final version of the above article was posted prematurely on 16 July 2021, owing to a technical error. The final, corrected version of record will be made fully available at a later date.
Direct Numerical Simulations (DNS) are performed to investigate the process of spontaneous ignition of hydrogen flames at laminar, turbulent, adiabatic and non-adiabatic conditions. Mixtures of hydrogen and vitiated air at temperatures representing gas-turbine reheat combustion are considered. Adiabatic spontaneous ignition processes are investigated first, providing a quantitative characterization of stable and unstable flames. Results indicate that, in hydrogen reheat combustion, compressibility effects play a key role in flame stability and that unstable ignition and combustion are consistently encountered for reactant temperatures close to the mixture's characteristic crossover temperature. Furthermore, the characterization of the adiabatic processes is found to remain valid in the presence of non-adiabaticity due to wall heat loss. Finally, a quantitative characterization is obtained of the instantaneous fuel consumption rate within the reaction front and of its ability, at auto-ignitive conditions, to advance against the approaching turbulent flow of the reactants, for a range of turbulence intensities, temperatures and pressure levels.
Laplace, T.A.; Goldblum, B.L.; Manfredi, J.J.; Brown, J.A.; Bleuel, D.L.; Brand, C.A.; Gabella, G.; Gordon, J.; Brubaker, Erik B.
Background: Organic scintillators are widely used for neutron detection in both basic nuclear physics and applications. While the proton light yield of organic scintillators has been extensively studied, measurements of the light yield from neutron interactions with carbon nuclei are scarce. Purpose: Demonstrate a new approach for the simultaneous measurement of the proton and carbon light yield of organic scintillators. Provide new carbon light yield data for the EJ-309 liquid and EJ-204 plastic organic scintillators. Method: A 33-MeV H2+ beam from the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory was impinged upon a 3-mm-thick Be target to produce a high-flux, broad-spectrum neutron beam. The double time-of-flight technique was extended to simultaneously measure the proton and carbon light yields of the organic scintillators, wherein the light output associated with the recoil particle was determined using np and nC elastic scattering kinematics. Results: The proton and carbon light yield relations of the EJ-309 liquid and EJ-204 plastic organic scintillators were measured over a recoil energy range of approximately 0.3 to 1 MeV and 2 to 5 MeV, respectively, for EJ-309, and 0.2 to 0.5 MeV and 1 to 4 MeV, respectively, for EJ-204. Conclusions: These data provide new insight into the ionization quenching effect in organic scintillators and key input for simulation of the response of organic scintillators for both basic science and a broad range of applications.
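The recoil energies underlying the double time-of-flight technique follow from nonrelativistic two-body elastic scattering kinematics. A minimal sketch (the function name and example energies are ours, not the paper's):

```python
import math

def recoil_energy(e_n, theta_lab_deg, mass_ratio):
    """Nonrelativistic recoil energy (MeV) of a nucleus struck elastically
    by a neutron of energy e_n (MeV).

    mass_ratio: target mass / neutron mass (~1 for H, ~12 for C).
    theta_lab_deg: lab-frame recoil angle in degrees.
    E_R = E_n * 4A / (1 + A)^2 * cos^2(theta)
    """
    a = mass_ratio
    theta = math.radians(theta_lab_deg)
    return e_n * 4.0 * a / (1.0 + a) ** 2 * math.cos(theta) ** 2

# A head-on np collision transfers the full neutron energy to the proton,
# while the same collision on carbon transfers at most ~28% of it -- one
# reason carbon recoil data span a lower light-output range.
e_p = recoil_energy(5.0, 0.0, 1.0)    # -> 5.0 MeV
e_c = recoil_energy(5.0, 0.0, 12.0)   # -> ~1.42 MeV
```

Measuring the recoil angle via the scattered neutron's second time of flight fixes theta, so the light output can be tagged with a known recoil energy.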
International Journal of High Performance Computing Applications
Benacchio, Tommaso; Bonaventura, Luca; Altenbernd, Mirco; Cantwell, Chris D.; Düben, Peter D.; Gillard, Mike; Giraud, Luc; Göddeke, Dominik; Raffin, Erwan; Teranishi, Keita T.; Wedi, Nils
Progress in numerical weather and climate prediction accuracy greatly depends on the growth of the available computing power. As the number of cores in top computing facilities pushes into the millions, the increased average frequency of hardware and software failures forces users to review their algorithms and systems in order to protect simulations from breakdown. This report surveys hardware, application-level and algorithm-level resilience approaches of particular relevance to time-critical numerical weather and climate prediction systems. A selection of applicable existing strategies is analysed, featuring interpolation-restart and compressed checkpointing for the numerical schemes, and in-memory checkpointing, user-level failure mitigation and backup-based methods for the systems. Numerical examples showcase the performance of the techniques in addressing faults, with particular emphasis on iterative solvers for linear systems, a staple of atmospheric fluid flow solvers. The potential impact of these strategies is discussed in relation to the current development of numerical weather prediction algorithms and systems towards the exascale. Trade-offs between performance, efficiency and effectiveness of resilience strategies are analysed, and some recommendations are outlined for future developments.
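The in-memory checkpointing idea can be sketched for a toy iterative linear solver. Everything below (the Jacobi system, the fault model, the plausibility detector, all names) is illustrative and not taken from the report:

```python
import copy

def jacobi_step(a, b, x):
    """One Jacobi sweep for the linear system a x = b."""
    n = len(b)
    return [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
            for i in range(n)]

def solve_with_checkpointing(a, b, iters=60, ckpt_every=10, fault_at=25):
    """Jacobi solve protected by in-memory checkpointing.

    A soft fault is injected at iteration fault_at by corrupting the
    iterate; a cheap plausibility check triggers a rollback to the last
    checkpoint instead of a job abort.
    """
    x = [0.0] * len(b)
    ckpt = copy.deepcopy(x)
    it = 0
    while it < iters:
        if fault_at is not None and it == fault_at:
            x = [1e9] * len(x)        # simulated silent data corruption
            fault_at = None
        if any(abs(v) > 1e3 for v in x):
            x = copy.deepcopy(ckpt)   # rollback: resume from checkpoint
            continue
        x = jacobi_step(a, b, x)
        it += 1
        if it % ckpt_every == 0:
            ckpt = copy.deepcopy(x)   # refresh the in-memory checkpoint
    return x

a = [[4.0, 1.0], [1.0, 3.0]]  # diagonally dominant, so Jacobi converges
b = [1.0, 2.0]
x = solve_with_checkpointing(a, b)  # reaches (1/11, 7/11) despite the fault
```

The cost of the rollback is only the iterations since the last checkpoint, which is the trade-off between checkpoint frequency and recovery time that the report discusses.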
Two events of magnitude (mb) 3.6-3.8 occurred in southern North Korea (NK) on 27 June 2019 and 11 May 2020. Although these events were located ~330-400 km from the known nuclear test site, the fact that they occurred within the territory of NK, a country with a recent history of underground nuclear tests, made them events of interest for the monitoring community. We used P/Lg ratios from regional stations to categorize seismic events that occurred in NK from 2006 to May 2020, including these two recent events, the six declared NK nuclear tests, and the cavity collapse and triggered earthquakes that followed the 3 September 2017 nuclear explosion. We were able to separate the cavity collapse from the population of nuclear explosions. However, based on P/Lg ratios, the distinction between the earthquakes and the cavity collapse is ambiguous. The discriminant analyses we performed suggest that combining Pg/Lg and Pn/Lg ratios yields improved discriminant power compared with either ratio type alone. We used the two ratio types jointly in a quadratic discriminant function and successfully classified the six declared nuclear tests and the triggered earthquakes that followed the September 2017 explosion. Our analyses also confirm that the recent southern events of June 2019 and May 2020 are both naturally occurring tectonic earthquakes.
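A quadratic discriminant function over two ratio features can be sketched from first principles: fit a Gaussian per class and assign the class with the larger log density. The (Pg/Lg, Pn/Lg) feature values below are synthetic and purely illustrative, not the study's data:

```python
import math

def fit_gaussian(points):
    """Mean and 2x2 covariance of a list of 2-D feature points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    return (mx, my), ((sxx, sxy), (sxy, syy))

def qda_score(x, mean, cov):
    """Quadratic discriminant score: log Gaussian density up to a constant."""
    (mx, my) = mean
    ((sxx, sxy), (_, syy)) = cov
    det = sxx * syy - sxy * sxy
    dx, dy = x[0] - mx, x[1] - my
    # Mahalanobis distance via the 2x2 inverse covariance
    maha = (syy * dx * dx - 2.0 * sxy * dx * dy + sxx * dy * dy) / det
    return -0.5 * (maha + math.log(det))

# Synthetic log-ratio features, illustrative only: explosions are
# relatively P-rich, earthquakes relatively S-rich.
explosions = [(0.9, 0.85), (1.0, 0.9), (1.1, 0.95), (0.95, 1.0)]
earthquakes = [(-0.2, -0.3), (-0.1, -0.15), (0.0, -0.1), (-0.15, -0.25)]

ex_m, ex_c = fit_gaussian(explosions)
eq_m, eq_c = fit_gaussian(earthquakes)

def classify(x):
    """Assign the class with the larger quadratic discriminant score."""
    if qda_score(x, ex_m, ex_c) > qda_score(x, eq_m, eq_c):
        return "explosion"
    return "earthquake"
```

Because each class keeps its own covariance, the decision boundary is a conic rather than a line, which is what lets the joint use of two ratio types outperform either one alone.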
This report summarizes the international collaboration work conducted by Sandia and funded by the US Department of Energy (DOE) Office of Nuclear Energy Spent Fuel and Waste Science & Technology (SFWST) program as part of the Sandia National Laboratories Salt R&D and Salt International work packages. This report satisfies the level-three milestone M3SF-20SN010303062. Several stand-alone sections make up this summary report, each completed by the participants. The sections discuss international collaborations on geomechanical benchmarking exercises (WEIMOS), granular salt reconsolidation (KOMPASS), engineered barriers (RANGERS), and model comparison (DECOVALEX). Lastly, the report summarizes a newly developed working group on scenario development as part of the performance assessment development process, and activities related to the Nuclear Energy Agency (NEA) Salt Club and the US/German Workshop on Repository Research, Design and Operations.
Natural and anthropogenic infrasound may travel vast distances, making it an invaluable resource for monitoring phenomena such as nuclear explosions, volcanic eruptions, severe storms, and many others. Typically, these waves are captured using pressure sensors, which cannot encode the direction of arrival, critical information when the source location is not known beforehand. Obtaining this information therefore requires arrays of sensors with apertures ranging from tens of meters to kilometers, depending on the wavelengths of interest. This is often impractical in locations that lack the necessary real estate (urban areas, rugged regions, or remote islands); in any case, it requires multiple power, digitizer, and telemetry deployments. Here, the theoretical basis behind a compact infrasound direction-of-arrival sensor based on acoustic metamaterials is presented. This sensor occupies a footprint that is orders of magnitude smaller than the span of a typical infrasound array. The diminutive size of the unit greatly expands the locations where it can be deployed. The sensor design is described, its ability to determine the direction of arrival is evaluated, and further avenues of study are suggested.
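For contrast with the compact sensor, conventional array processing recovers back-azimuth from inter-sensor time delays via a least-squares plane-wave fit. A minimal sketch with an assumed ~50 m triangular array (all names and values illustrative, not from the paper):

```python
import math

C = 340.0  # nominal sound speed, m/s

def synth_delays(sensors, azimuth_deg, c=C):
    """Relative arrival times of a plane wave arriving FROM azimuth_deg
    (degrees clockwise from north) at sensors given as (east, north) metres."""
    az = math.radians(azimuth_deg)
    ux, uy = -math.sin(az), -math.cos(az)   # propagation direction (away from source)
    return [(x * ux + y * uy) / c for x, y in sensors]

def estimate_azimuth(sensors, delays, c=C):
    """Least-squares fit of the horizontal propagation direction.

    Two unknowns require multiple independent sensor positions, which is
    why a single pressure sensor cannot encode direction of arrival.
    """
    # Normal equations for t_i = (x_i*ux + y_i*uy) / c
    sxx = sum(x * x for x, _ in sensors)
    syy = sum(y * y for _, y in sensors)
    sxy = sum(x * y for x, y in sensors)
    bx = sum(x * t * c for (x, _), t in zip(sensors, delays))
    by = sum(y * t * c for (_, y), t in zip(sensors, delays))
    det = sxx * syy - sxy * sxy
    ux = (syy * bx - sxy * by) / det
    uy = (sxx * by - sxy * bx) / det
    # Back-azimuth points toward the source, opposite the propagation direction
    return math.degrees(math.atan2(-ux, -uy)) % 360.0

tri = [(0.0, 0.0), (50.0, 0.0), (25.0, 43.3)]  # ~50 m triangular array
az = estimate_azimuth(tri, synth_delays(tri, 135.0))  # recovers 135.0
```

The aperture sets the size of the usable delays: shrink the triangle by three orders of magnitude and the delays fall into the noise, which is the constraint the metamaterial sensor is designed to sidestep.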
The software package developed by Sandia National Laboratories is intended to allow the integration of Simulink models into emulations of control networks. To accomplish this, three programs are included: a Simulink S-Function, a Data Broker, and an End Point.
We present the technology computer-aided design (TCAD) device simulation and modeling of a silicon p-i-n diode for detecting time-dependent X-ray radiation. We show that the simulated forward and reverse breakdown current-voltage characteristics agree well with data measured in a nonradiation environment by calibrating only the carrier lifetimes for the forward-bias case and the avalanche-model critical fields for the reverse-bias condition. Using the calibrated parameters and otherwise nominal material properties, we simulated the radiation responses of the p-i-n diode and compared them with experimental data from exposures of the diode to X-ray radiation at Sandia's Saturn facility and the Idaho State University (ISU) TriMeV facility. For Saturn's Gaussian dose-rate pulses, the TCAD simulations yield three findings. First, the simulated photocurrents are in excellent agreement with the measured data for two dose-rate pulses with peak values of 1.16 × 10^10 and 1.88 × 10^10 rad(Si)/s. Second, simulations of high dose-rate pulses predict increased delayed photocurrents with longer time tails in the diode electrical responses due to excess carrier generation. Third, simulated peak values of the diode radiation response versus peak dose rate at different bias conditions provide useful guidance for determining the dose-rate range that the p-i-n diode can reliably detect in experiment. For TriMeV's non-Gaussian dose-rate pulse, our simulated diode response is in decent agreement with the measured data without further calibration. We also studied the effects of device geometry, the recombination process, and dose-rate enhancement via TCAD simulations to understand the higher measured response after the peak dose-rate radiation for the p-i-n diode exposed to TriMeV irradiation.
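For rough intuition (not the calibrated TCAD model), the prompt photocurrent of a silicon diode under a dose-rate pulse is often estimated as I = q · g0 · V · (dose rate). The diode volume and pulse parameters below are assumptions for illustration, not the paper's values:

```python
import math

Q = 1.602e-19   # electron charge, C
G0 = 4.3e13     # electron-hole pairs per cm^3 per rad(Si), textbook value

def prompt_photocurrent(dose_rate, volume_cm3):
    """First-order prompt photocurrent (A): I = q * g0 * V * dose_rate.

    Valid when carriers are collected much faster than the pulse varies;
    delayed tails from excess carriers are outside this simple model.
    """
    return Q * G0 * volume_cm3 * dose_rate

def gaussian_pulse(t, peak, t0, fwhm):
    """Gaussian dose-rate pulse (rad(Si)/s) centered at t0."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return peak * math.exp(-0.5 * ((t - t0) / sigma) ** 2)

vol = 1e-3           # assumed active silicon volume, cm^3
peak_rate = 1.16e10  # rad(Si)/s, order of the Saturn peak dose rates
i_peak = prompt_photocurrent(peak_rate, vol)  # ~80 A for these assumed numbers
```

Sweeping the peak dose rate through such a model is the simplest version of the third finding above: it maps out where the diode's response remains a faithful measure of the pulse.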
Despite their wide use in terahertz (THz) research and technology, the application spectra of photoconductive antenna (PCA) THz detectors are severely limited by their relatively high optical gating power requirement. This originates from the poor conversion efficiency of optical gate-beam photons to photocurrent in materials with subpicosecond carrier lifetimes. Here we show that using an ultra-thin (160 nm), perfectly absorbing low-temperature-grown GaAs metasurface as the photoconductive channel drastically improves the efficiency of THz PCA detectors. This is achieved through perfect absorption of the gate beam in a significantly reduced photoconductive volume, enabled by the metasurface. This Letter demonstrates that sensitive THz PCA detection is possible using optical gate powers as low as 5 μW, three orders of magnitude lower than the gating powers used for conventional PCA detectors. We show that significantly higher optical gate powers are not necessary for optimal operation, as they do not improve the sensitivity to the THz field. This class of efficient PCA THz detectors opens doors for THz applications with low gate power requirements.
Accelerated growth of the additive manufacturing (AM) industry in recent years is accompanied by a rising need for methods to quickly assess quality at scale. Current practices for quality inspection include nondestructive test methods and destructive testing of witness coupons, which are artifacts built alongside the actual part. However, these methods can be costly and time-consuming. Recognizing this need, the Additive Manufacturing Center of Excellence (AM CoE) initiated a project, led by its partner Auburn University, to develop a rapid testing procedure that uses as-built samples tested in torsion to quantitatively assess build quality. The work presented here developed such a procedure, quantifying small variances in order to assess build quality.
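The quantities reported by a torsion test reduce to textbook formulas for a solid round bar. The sample dimensions and torque below are hypothetical, chosen only to show the arithmetic:

```python
import math

def torsion_shear_stress(torque_nm, diameter_m):
    """Maximum surface shear stress (Pa) of a solid round bar in torsion:
    tau = T * r / J, with polar moment J = pi * d^4 / 32."""
    r = diameter_m / 2.0
    j = math.pi * diameter_m ** 4 / 32.0
    return torque_nm * r / j

def torsion_shear_strain(twist_rad, diameter_m, gauge_length_m):
    """Surface shear strain: gamma = r * theta / L."""
    return (diameter_m / 2.0) * twist_rad / gauge_length_m

# Hypothetical as-built AM sample: 6 mm diameter, 30 mm gauge, 5 N*m torque
tau = torsion_shear_stress(5.0, 0.006)                    # ~118 MPa
gamma = torsion_shear_strain(math.radians(2.0), 0.006, 0.030)
```

Because both quantities depend only on torque, twist, and geometry, repeated tests on nominally identical as-built samples expose small variances in the effective shear response, which is the signal such a rapid procedure uses to flag build-quality differences.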
Nagasaka, Cocoro A.; Kozma, Karoly; Russo, Chris J.; Alam, Todd M.; Nyman, May
Removal of radioactive Cs from sodium-rich solutions is a technical challenge that goes back to post-World War II nuclear waste storage and treatment; interest in this topic was reinvigorated by the Fukushima-Daiichi nuclear power plant disaster 10 years ago. Since the 1960s there has been considerable focus on layered Zr phosphates as robust inorganic sorbents for the separation of radionuclides such as Cs. Here we present the synthesis, characterization, and direct comparison of the Cs sorption capacity and selectivity of four related materials: 1) crystalline α-Zr phosphate and α-Hf phosphate, and 2) amorphous analogues of these. Powder X-ray diffraction, thermogravimetry, solid-state 31P magic angle spinning nuclear magnetic resonance (MAS-NMR) spectroscopy, and compositional analysis (inductively coupled plasma optical emission spectroscopy and mass spectrometry, ICP-OES and ICP-MS) provided the formulae M(HPO4)2⋅1H2O and M(HPO4)2⋅4H2O (M = Hf, Zr) for the crystalline and amorphous analogues, respectively. Measurements of maximum Cs loading, competitive Cs-Na selectivity, and maximum Cs-Na loading, followed by the above characterizations plus 133Cs MAS-NMR spectroscopy, revealed that the amorphous analogues are considerably better Cs sorbents (based on maximum Cs loading and selectivity over Na) than the well-studied crystalline Zr analogue. Additionally, crystalline α-Hf phosphate is a better Cs sorbent than crystalline α-Zr phosphate. All of these studies consistently show that Hf phosphate is less crystalline than Zr phosphate when obtained under similar or identical synthesis conditions. We attribute this to the lower solubility of Hf phosphate compared to Zr phosphate, which prevents ‘defect healing’ during the synthesis process.
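Sorbent comparisons of this kind typically rest on the batch loading and distribution-coefficient formulas. A minimal sketch with hypothetical batch numbers (not the study's measurements):

```python
def loading(c0_mm, ce_mm, volume_l, mass_g):
    """Sorbed amount q (mmol/g) from initial and equilibrium
    concentrations (mM): q = (C0 - Ce) * V / m."""
    return (c0_mm - ce_mm) * volume_l / mass_g

def kd(c0_mm, ce_mm, volume_ml, mass_g):
    """Distribution coefficient Kd (mL/g) = (C0 - Ce) / Ce * V / m.
    Larger Kd means stronger partitioning onto the sorbent."""
    return (c0_mm - ce_mm) / ce_mm * volume_ml / mass_g

# Hypothetical batch experiment: 10 mM Cs, 20 mL solution, 0.1 g sorbent,
# equilibrium concentration 2 mM after contact
q = loading(10.0, 2.0, 0.020, 0.1)  # -> 1.6 mmol/g
k = kd(10.0, 2.0, 20.0, 0.1)        # -> 800 mL/g
```

Running the same arithmetic with Na present gives the competitive selectivity: the ratio of the Cs and Na distribution coefficients under identical conditions.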
This project was a follow-on to the Sandia National Laboratories (SNL) and Laboratory for Laser Energetics (LLE) ARPA-E ALPHA project entitled “Demonstrating Fuel Magnetization and Laser Heating Tools for Low-Cost Fusion Energy”. The primary purpose of this follow-on project was to obtain additional data at the OMEGA facility to help better understand how MagLIF, a platform that has already demonstrated the scientific viability of magneto-inertial fusion, scales across a factor of 1000 in driver energy. A secondary aspect was to extend simulations and analysis at SNL to cover a wider magneto-inertial fusion (MIF) parameter space and to test the scaling of those models across this wide range of input energies and target conditions. This work succeeded in improving understanding of how key physics elements of MIF scale and in building confidence in setting requirements for fusion gain with larger drivers. The OMEGA experiments at the smaller scale verified the hypothesis that preheating the fuel plays a significant role in introducing wall contaminants that mix into the fuel and significantly degrade fusion performance. This contamination affects not only target performance but also the optimal input conditions for the target. Analysis at the Z scale, however, showed that target performance at high preheat levels is limited by the Nernst effect, which advects magnetic flux from the hot spot, reducing magnetic insulation and consequently the temperature of the fuel. The combination of MagLIF experiments at the disparate scales of OMEGA and Z with a multiscale 3D simulation analysis has led to new insight into the physical mechanisms that limit target performance and provides important benchmarks for assessing target scaling more generally for MIF schemes.
Finally, in addition to the MagLIF-related work, a semi-analytic model of a liner-driven Field Reversed Configuration (FRC) was developed that predicts the fusion gain of such systems. The model was validated against 2D radiation magneto-hydrodynamic simulations and predicts that fusion gains near unity could be driven by the Z machine.