The equation of state (EOS) and shock compression of bulk vanadium were investigated using canonical ab initio molecular dynamics simulations, with experimental validation to 865 GPa from shock data collected at Sandia's Z Pulsed Power Facility. In the simulations, the phase space was sampled along isotherms ranging from 3000 K to 50000 K for densities between ρ = 3 and 15 g/cm3, with a focus on the liquid regime and the body-centered-cubic phase in the vicinity of the melting limit. The principal Hugoniot predicted from first principles is overall consistent with the shock data, while revealing that the current multiphase SESAME-type EOS for vanadium needs revision in the liquid regime. A more accurate SESAME EOS was developed using constraints from experiments and simulations. This work emphasizes the need for a combined theoretical and experimental approach to develop high-fidelity EOS models for extreme conditions.
The thirty-year non-stationary historical trends in the wave energy climate for United States coastal waters between 1980 and 2009 are investigated using spectrally partitioned wave data generated from a WaveWatch III® (version 5.05) hindcast. In addition to historical trends in the omni-directional wave power, the frequency- and directionally resolved wave power, frequency and directional spreading, and seasonal variability are examined for the first time, including their geographical distribution. These historical wave energy climate trends are linked to changes in the dominant wave systems and commensurate trends in the historical wind climate. Total wave power trends are consistent with other studies, but the present study identifies regions, and specific frequency and direction bands, where significant wave energy climate changes have occurred. For some regions, changes to wave climate parameters exceeded one percent annually, amounting to more than thirty percent over the study period. Non-stationary trends of this magnitude have significant implications for ocean and coastal engineering projects designed assuming stationary wave climates and warrant consideration in design practices.
Neuromorphic computing is a critical future technology for the computing industry, but it has yet to achieve its promise and has struggled to establish a cohesive research community. A large part of the challenge is that fully realizing the potential of brain inspiration requires advances in device hardware, computing architectures, and algorithms alike. This simultaneous development across technology scales is unprecedented in the computing field. This article presents a strategy, framed by market and policy pressures, for moving past the current technological and cultural hurdles to realize the full impact of neuromorphic computing across technologies. Achieving the full potential of brain-derived algorithms as well as post-complementary metal-oxide-semiconductor (CMOS) scaling neuromorphic hardware requires appropriately balancing the near-term opportunities of deep learning applications with the long-term potential of less understood opportunities in neural computing.
Dejong, Stephanie A.; Van Benthem, Mark H.; Keller, Timothy J.; Gillispie, Gregory D.
This work presents a novel method of performing PARAFAC2 factorization of three-way data using a compact representation of that data. In the standard PARAFAC2 algorithm, two modes of the data are recovered directly during the decomposition while the third mode is returned as a transformation matrix, which is then used to rotate sets of orthogonal third-mode basis factors into interpretable factors. In our new method, the data are first decomposed into a core matrix and orthogonal factor loading matrices in the first two modes as well as sets of orthogonal factors in the third mode (as in standard PARAFAC2). The core matrix is then decomposed using the standard PARAFAC2 strategy to produce transformation matrices in all three modes. The algorithm is particularly useful for very large data sets and essentially permits imposition of nonnegativity in all three modes.
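A minimal NumPy sketch of the compression step (a hypothetical helper, not the published implementation): each slab is projected onto orthonormal bases so that standard PARAFAC2 can then be run on the much smaller cores.

```python
import numpy as np

def compress_slabs(slabs, r):
    """Compress each slab X_k (I x J_k) of a three-way array into a small
    core, following the general compression idea described above.
    Illustrative sketch only."""
    # Orthonormal first-mode basis from the column space of all slabs
    U, _, _ = np.linalg.svd(np.hstack(slabs), full_matrices=False)
    U = U[:, :r]
    cores, Vs = [], []
    for X in slabs:
        # Orthonormal second-mode basis per slab (PARAFAC2 allows J_k to vary)
        V, _, _ = np.linalg.svd(X.T @ U, full_matrices=False)
        V = V[:, :r]
        Vs.append(V)
        cores.append(U.T @ X @ V)  # r x r core, much smaller than X
    return U, Vs, cores

# Example: 3 slabs with varying second-mode sizes
rng = np.random.default_rng(0)
slabs = [rng.standard_normal((50, n)) for n in (40, 45, 55)]
U, Vs, cores = compress_slabs(slabs, r=5)
print(cores[0].shape)  # (5, 5): standard PARAFAC2 is then applied to the cores
```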
High-pressure multiplexed photoionization mass spectrometry (MPIMS) with tunable vacuum ultraviolet (VUV) ionization radiation from the Advanced Light Source at Lawrence Berkeley National Laboratory is used to investigate the oxidation of diethyl ether (DEE). Kinetics and photoionization (PI) spectra are simultaneously measured for the species formed. Several stable products from DEE oxidation are identified and quantified using reference PI cross-sections. In addition, we directly detect and quantify three key chemical intermediates: peroxy (ROO), hydroperoxyalkyl peroxy (OOQOOH), and ketohydroperoxide (HOOPO, KHP). These intermediates undergo dissociative ionization (DI) into smaller fragments, making their identification by mass spectrometry challenging. With the aid of quantum chemical calculations, we identify the DI channels of these key chemical species and quantify their time-resolved concentrations from the overall carbon atom balance at T = 450 K and P = 7500 Torr. This allows the determination of the absolute PI cross-sections of ROO, OOQOOH, and KHP into each DI channel directly from experiment. The PI cross-sections in turn enable the quantification of ROO, OOQOOH, and KHP from DEE oxidation over a range of experimental conditions that reveal the effects of pressure, O2 concentration, and temperature on the competition between radical decomposition and second O2 addition pathways.
Steady-state photocapacitance (SSPC) was conducted on nonpolar m-plane GaN n-type Schottky diodes to evaluate the defects induced by inductively coupled plasma (ICP) dry etching in etched-and-regrown unipolar structures. An ∼10× increase in the near-midgap Ec - 1.9 eV level compared to as-grown material was observed. Defect levels associated with regrowth without an etch were also investigated; the defects in the regrown structure (without an etch) are highly spatially localized to the regrowth interface. Subsequently, by depth profiling an etched-and-regrown sample, we show that the intensities of the defect-related SSPC features associated with dry etching depend strongly on the depth away from the regrowth interface, consistent with previous reports [Nedy et al., Semicond. Sci. Technol. 30, 085019 (2015); Fang et al., Jpn. J. Appl. Phys. 42, 4207-4212 (2003); and Cao et al., IEEE Trans. Electron Devices 47, 1320-1324 (2000)]. A photoelectrochemical (PEC) etching method and a wet AZ400K treatment are also introduced to reduce the etch-induced deep levels. A significant reduction in the density of deep levels is observed in the sample that was treated with PEC etching after dry etching and prior to regrowth: an ∼2× reduction in the density of the Ec - 1.9 eV level compared to a reference etched-and-regrown structure was observed upon application of the PEC etching treatment prior to regrowth. The PEC etching method is promising for reducing defects in selective-area doping for vertical power switching structures with complex geometries [Meyers et al., J. Electron. Mater. 49, 3481-3489 (2020)].
Machine learning (ML) techniques are being used to detect increasing amounts of malware and variants. Despite successful applications of ML, we hypothesize that the full potential of ML is not realized in malware analysis (MA) due to a semantic gap between the ML and MA communities, as demonstrated in the data that is used. Due in part to the available data, ML has primarily focused on detection, whereas MA is also interested in identifying behaviors. We review existing open-source malware datasets used in ML and find a lack of behavioral information that could facilitate stronger impact by ML in MA. As a first step in bridging this gap, we label existing data with behavioral information using open-source MA reports, thereby 1) altering the analysis from identifying malware to identifying behaviors, 2) aligning ML better with MA, and 3) allowing ML models to generalize to novel malware in a zero/few-shot learning manner. We classify the behavior of a malware family not seen during training using transfer learning from a state-of-the-art model for malware family classification and achieve 57%-84% accuracy on behavioral identification but fail to outperform the baseline set by a majority class predictor. This highlights opportunities for improvement on this task related to the data representation, the need for malware-specific ML techniques, and a larger training set of malware samples labeled with behaviors.
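A schematic of the transfer-learning setup (hypothetical shapes and labels; the actual work uses a state-of-the-art malware family classifier as the frozen feature extractor):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for embeddings from a frozen malware-family model;
# in practice these would come from the penultimate layer of that network.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 128))  # embeddings of training samples
y_train = rng.integers(0, 5, 500)          # behavior labels (e.g., 5 behaviors)
X_novel = rng.standard_normal((50, 128))   # embeddings of a family unseen in training

# Transfer learning: keep the extractor fixed, fit only a lightweight
# behavior classifier on top of the learned representation.
head = LogisticRegression(max_iter=1000)
head.fit(X_train, y_train)
behavior_pred = head.predict(X_novel)      # zero/few-shot w.r.t. the novel family
```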
To combat dynamic, cyber-physical disturbances in the electric grid, online and adaptive remedial action schemes (RASs) are needed to achieve fast and effective response. However, a major challenge lies in reducing the computational burden of the analyses needed to inform selection of appropriate controls. This paper proposes the use of a role and interaction discovery (RID) algorithm that leverages control sensitivities to gain insight into the controller roles and support groups. Using these results, a procedure is developed to shrink the control search space, reducing computation time while achieving an effective control response. A case study is presented that considers corrective line switching to mitigate geomagnetically induced current (GIC)-saturated reactive power losses in a 20-bus test system. Results demonstrated significant reductions in both the control search space and reactive power losses using the RID approach.
The swelling of clay at high temperature and pressure is important for applications including nuclear waste storage but is not well understood. A molecular dynamics study of the swelling of Na montmorillonite in water at several temperatures (T = 298, 400, and 500 K) and water environment pressures (Pe = 5 and 100 MPa) is reported here. Adopting a rarely used setup that enables swelling pressure to be resolved with an accuracy of ~1 MPa, the swelling pressure was computed at basal spacings of 1.6–2.6 nm (or 2–5 water layers between neighboring clay sheets), a regime that has not been widely studied before. At T = 298 K and Pe = 5 MPa, the swelling pressure Ps oscillates at d-spacing d smaller than 2.2 nm and decays monotonically as d increases. Increasing T to 500 K while keeping Pe at 5 MPa, Ps remains oscillatory at small d, but its repulsive peak at d = 2.2 nm shifts to ~2.0 nm and Ps at different d-spacings can grow more attractive or repulsive. At d > 2.0 nm, Ps is weakened greatly. Keeping T at 500 K and increasing Pe to 100 MPa, Ps recovers toward that at T = 298 K and Pe = 5 MPa; however, the repulsive peak at d = 2.0 nm remains the same. The opposite effects of increasing temperature and pressure on the density and dielectric screening of water, which control ion correlations and thus double layer repulsion, are essential for understanding the observed swelling pressure at elevated temperatures and its response to environment pressures.
Alkaline zinc-manganese dioxide (Zn-MnO2) batteries are well suited for grid storage applications because of their inherently safe, aqueous electrolyte and established materials supply chain, resulting in low production costs. With recent advances in the development of Cu/Bi-stabilized birnessite cathodes capable of the full 2-electron capacity equivalent of MnO2 (617 mA h/g), there is a need for selective separators that prevent zincate (Zn(OH)42-) transport from the anode to the cathode during cycling, as this electrode system fails in the presence of dissolved zinc. Herein, we present the synthesis of N-butylimidazolium-functionalized polysulfone (NBI-PSU)-based separators and evaluate their ability to selectively transport hydroxide over zincate. We then examine their impact on the cycling of high depth-of-discharge Zn/(Cu/Bi-MnO2) batteries when inserted between the cathode and anode. Initially, we establish our membranes' selectivity by performing zincate and hydroxide diffusion tests, showing a marked improvement in zincate-blocking (DZn (cm2/min): 0.17 ± 0.04 × 10-6 for 50-PSU, our most selective separator, vs 2.0 ± 0.8 × 10-6 for Cellophane 350P00 and 5.7 ± 0.8 × 10-6 for Celgard 3501), while maintaining similar crossover rates for hydroxide (DOH (cm2/min): 9.4 ± 0.1 × 10-6 for 50-PSU vs 17 ± 0.5 × 10-6 for Cellophane 350P00 and 6.7 ± 0.6 × 10-6 for Celgard 3501). We then implement our membranes in cells and observe an improvement in cycle life over control cells containing only the commercial separators (cell lifetime extended from 21 to 79 cycles).
Enhanced Geothermal Systems could provide a substantial contribution to the global energy demand if their implementation could overcome inherent challenges. Examples are insufficient created permeability, early thermal breakthrough, and unacceptable induced seismicity. Here we report on the seismic response of a mesoscale hydraulic fracturing experiment performed at 1.5‐km depth at the Sanford Underground Research Facility. We have measured the seismic activity by utilizing a 100‐kHz, continuous seismic monitoring system deployed in six 60‐m-length monitoring boreholes surrounding the experimental domain in 3‐D. The achieved location uncertainty was on the order of 1 m and limited by the signal‐to‐noise ratio of detected events. These uncertainties were corroborated by detections of fracture intersections at the monitoring boreholes. Three intervals of the dedicated injection borehole were hydraulically stimulated by water injection at pressures up to 33 MPa and flow rates up to 5 L/min. We located 1,933 seismic events during several injection periods. The recorded seismicity delineates a complex fracture network comprising multistrand hydraulic fractures and shear‐reactivated, preexisting planes of weakness that grew unilaterally from the point of initiation. We find that heterogeneity of stress dictates the seismic outcome of hydraulic stimulations, even when relying on theoretically well‐behaved hydraulic fractures. Once hydraulic fractures intersected boreholes, the boreholes acted as a pressure relief and fracture propagation ceased. In order to create an efficient subsurface heat exchanger, production boreholes should not be drilled before the end of hydraulic stimulations.
Passive silicon photonic waveguides are exposed to gamma radiation to understand how the performance of silicon photonic integrated circuits is affected in harsh environments such as space or high energy physics experiments. The propagation loss and group index of the mode guided by these waveguides are characterized by implementing a phase-sensitive swept-wavelength interferometric method. We find that the propagation loss associated with each waveguide geometry explored in this study increases slightly at absorbed doses of up to 100 krad (Si). The measured group index associated with the same waveguide geometries is negligibly changed after exposure. Additionally, we show that the post-exposure degradation of these waveguides can be mitigated through heat treatment.
Dalbey, Keith R.; Eldred, Michael S.; Geraci, Gianluca; Jakeman, John D.; Maupin, Kathryn A.; Monschke, Jason A.; Seidl, Daniel T.; Tran, Anh; Menhorn, Friedrich; Zeng, Xiaoshu
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Impact problems of plate-like parts struck by punch-like objects of relatively large mass moving at slow speeds of a few feet per second constitute a subset of impact problems of interest at Sandia. This is in contrast to small objects moving in the range of hundreds or thousands of feet per second or higher. The objective of this work is to develop a simple formula that can be used to estimate a lower bound for the puncture energy of metal plates impacted by cylindrical, essentially rigid punches of circular cross-section and flat nose. Such a geometry is used as a basis in the design of puncture mitigation barriers or procedures. This was accomplished by deriving an expression using non-dimensional analysis and then calibrating it against test results in the range of speeds of interest. Lower bounds can then be determined based on confidence intervals or factors of safety.
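As an illustration only (the calibrated Sandia expression is not reproduced here), a non-dimensional grouping of the kind described might take the form:

```latex
% Illustrative non-dimensional grouping (not the calibrated expression):
% with plate thickness t, punch diameter d, and ultimate strength \sigma_u,
% dimensional analysis suggests a puncture energy of the form
E_p = \sigma_u\, t^{3}\; \Phi\!\left(\frac{d}{t}\right),
% where the dimensionless function \Phi is calibrated to low-speed test
% data and a lower bound follows from confidence intervals on the fit.
```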
Tensor decomposition models play an increasingly important role in modern data science applications. One problem of particular interest is fitting a low-rank Canonical Polyadic (CP) tensor decomposition model when the tensor has sparse structure and the tensor elements are nonnegative count data. SparTen is a high-performance C++ library which computes a low-rank decomposition using different solvers: a first-order quasi-Newton or a second-order damped Newton method, along with the appropriate choice of runtime parameters. Since the default parameters in SparTen were tuned to experimental results from prior published work on a single real-world dataset, obtained using MATLAB implementations of these methods, it remains unclear whether the parameter defaults in SparTen are appropriate for general tensor data. Furthermore, it is unknown how sensitive algorithm convergence is to changes in the input parameter values. This report addresses these unresolved issues with large-scale experimentation on three benchmark tensor data sets. Experiments were conducted on several different CPU architectures and replicated with many initial states to establish generalized profiles of algorithm convergence behavior.
Researchers at Sandia National Laboratories have integrated the GRANTA materials database with the MatCal calibration engine to calibrate material models from characterization data. GRANTA is gaining acceptance across the NNSA Tri-lab complex and is being populated with weapons-specific test data by Sandia experimentalists. To use that data to create material models for weapons systems analysts, MatCal has been enabled to import calibration data and test conditions from GRANTA to quickly and reproducibly produce a calibrated set of parameters for a given constitutive model. The team is currently working to store the parameters characterizing material behavior in GRANTA to make them accessible by all weapons analysts.
As the US electrifies the transportation sector, cyber attacks targeting vehicle charging could bring consequences to electrical system infrastructure. This is a growing area of concern as charging stations increase power delivery and must communicate with a range of entities to authorize charging, sequence the charging process, and manage load (grid operators, vehicles, OEM vendors, charging network operators, etc.). The research challenges are numerous and are complicated because the interests of many end users, stakeholders, and software and equipment vendors are involved. Poorly implemented electric vehicle supply equipment (EVSE), electric vehicle (EV), or grid communication system cybersecurity could be a significant risk to EV adoption because the political, social, and financial impact of cyber attacks, or the public perception of such, ripples across the industry and has lasting and devastating effects. Unfortunately, there is no comprehensive EVSE cybersecurity approach and limited best practices have been adopted by the EV/EVSE industry. There is an incomplete industry understanding of the attack surface, interconnected assets, and unsecured interfaces. Thus, comprehensive cybersecurity recommendations founded on sound research are necessary to secure EV charging infrastructure. This project is providing the power, security, and automotive industry with a strong technical basis for securing this infrastructure by developing threat models, determining technology gaps, and identifying or developing effective countermeasures. Specifically, the team is creating a cybersecurity threat model and performing a technical risk assessment of EVSE assets, so that automotive, charging, and utility stakeholders can better protect customers, vehicles, and power systems in the face of new cyber threats.
The principal Hugoniot, sound velocity, and Grüneisen parameter of polystyrene were measured at conditions relevant to shocks in inertial confinement fusion implosions, from 100 to 1000 GPa. The sound velocity is in good agreement with quantum molecular dynamics calculations and all tabular equation of state models at pressures below 200 GPa. Above 200 GPa, the experimental results agree with two of the examined tables, but do not agree with the most recent table developed for design of inertial confinement fusion (ICF) experiments. The Grüneisen parameter increases with density below ∼3.1 g/cm3 and approaches the asymptotic value for an ideal gas after complete dissociation. This behavior is in good agreement with quantum molecular dynamics results and previous work but is not represented by any of the tabular models. The discrepancy between tabular models and experimental measurement of the sound velocity and Grüneisen parameter is sufficient to impact simulations of ICF experiments.
The application of deep learning toward discovery of data-driven models requires careful application of inductive biases to obtain a description of physics which is both accurate and robust. We present here a framework for discovering continuum models from high fidelity molecular simulation data. Our approach applies a neural network parameterization of governing physics in modal space, allowing a characterization of differential operators while providing structure which may be used to impose biases related to symmetry, isotropy, and conservation form. Here, we demonstrate the effectiveness of our framework for a variety of physics, including local and nonlocal diffusion processes and single and multiphase flows. For the flow physics, we demonstrate that this approach leads to a learned operator that generalizes to system characteristics not included in the training sets, such as variable particle sizes, densities, and concentrations.
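To illustrate the modal-space parameterization, a sketch in which the operator acts diagonally in Fourier space; here it is seeded with the exact diffusion symbol that a network would instead learn (all names and values are illustrative):

```python
import numpy as np

def modal_operator(u, coeffs):
    """Apply a learned operator diagonally in Fourier (modal) space.
    coeffs[m] multiplies mode m; this structure makes symmetry/isotropy and
    conservation easy to impose (e.g., coeffs[0] = 0 preserves the mean)."""
    u_hat = np.fft.rfft(u)
    return np.fft.irfft(coeffs * u_hat, n=u.size)

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.rfftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # integer wavenumbers
nu = 0.1                                             # hypothetical diffusivity
coeffs = -nu * k**2       # exact diffusion symbol; a network would learn this
coeffs[0] = 0.0           # conservation: the zeroth (mean) mode is untouched

du_dt = modal_operator(np.sin(3 * x), coeffs)        # equals -nu*9*sin(3x)
print(np.allclose(du_dt, -nu * 9 * np.sin(3 * x)))   # True
```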
Guo, Qianying; Gu, Yucong; Barr, Christopher M.; Koenig, Thomas; Hattar, Khalid M.; Li, Lin; Thompson, Gregory B.
The incorporation of nanostructured and amorphous metals into modern applications is reliant on the understanding of deformation and failure modes in constrained conditions. To study this, a 105 nm crystalline Cu/160 nm amorphous Cu45Zr55 (at.%) multilayer structure was fabricated with the two crystalline layers sputter deposited between the top-middle-bottom amorphous layers and prepared to electron transparency. The multilayer was then in situ indented either under a single load to a depth of ~ 100 nm (max load of ~ 100 μN) or held at 20 μN and then repeatedly indented with an additional 5 μN up to 20,000 cycles in a transmission electron microscope to compare the deformation responses in the nanolaminate. For the single indentation test, the multilayer showed serrated load-displacement behavior upon initial indentation indicative of shear banding. At an indentation depth of ~ 32 nm, the multilayer exhibited perfect plastic behavior and no strain hardening. Both indented and fatigue-indented films revealed diffraction contrast changes with deformation. Subsequent Automated Crystal Orientation Mapping (ACOM) measurements confirmed and quantified global texture changes in the crystalline layers, with specifically identified grains revealing rotation. Using a finite element model, the in-plane displacement vectors under the indent mapped the conditions where ACOM-determined grain rotation was observed, indicating that stress flow induced the grain rotation. The single-indented Cu layers also exhibited evidence of deformation-induced grain growth, which was not evident in the fatigue-indented Cu-based multilayer. Finally, the single-indented multilayer retained a significant plastic crater in the uppermost amorphous layer that directly contacted the indenter; a negligible crater impression in the same region was observed in the fatigue-tested multilayer. These differences are explained by the different loading methods, applied load, and deformation mechanisms experienced in the multilayers.
Silva-Quinones, Dhamelyz; He, Chuan; Dwyer, Kevin J.; Butera, Robert E.; Wang, George T.; Teplyakov, Andrew V.
The reactivity of liquid hydrazine (N2H4) with respect to H-, Cl-, and Br-terminated Si(100) surfaces was investigated to uncover the principles of nitrogen incorporation into the interface. This process has important implications in a wide variety of applications, including semiconductor surface passivation and functionalization, nitride growth, and many others. The use of hydrazine as a precursor allows for reactions that exclude carbon and oxygen, the primary sources of contamination in processing. In this work, the reactivity of N2H4 with H- and Cl-terminated surfaces prepared by traditional solvent-based methods and with a Br-terminated Si(100) surface prepared in ultrahigh vacuum was compared. The reactions were studied with X-ray photoelectron spectroscopy, atomic force microscopy, and scanning tunneling microscopy, and the observations were supported by computational investigations. The H-terminated surface led to the highest level of nitrogen incorporation; however, the process proceeds with increasing surface roughness, suggesting possible etching or replacement reactions. In the case of Cl-terminated (predominantly dichloride) and Br-terminated (monobromide) surfaces, the amount of nitrogen incorporation on both surfaces after the reaction with hydrazine was very similar despite the differences in preparation, initial structure, and chemical composition. Density functional theory was used to propose the possible surface structures and to analyze surface reactivity.
Pd readily absorbs hydrogen and its isotopes and can be used to purify gas mixtures containing tritium. Tritium decays to He, which precipitates into He bubbles in the metal. These bubbles can alter pressure-composition-temperature (PCT) behavior and cause swelling and He release, all of which can lead to failures. Radioactive decay experiments take many years, whereas molecular dynamics (MD) studies can be performed quickly; however, no previous MD methods could simulate He bubble nucleation and growth.
The fundamental interactions between an edge dislocation and a random solid solution are studied by analyzing dislocation line roughness profiles obtained from molecular dynamics simulations of Fe0.70Ni0.11Cr0.19 over a range of stresses and temperatures. These roughness profiles reveal the hallmark features of a depinning transition. Namely, below a temperature-dependent critical stress, the dislocation line exhibits roughness in two different length scale regimes which are divided by a so-called correlation length. This correlation length increases with applied stress and at the critical stress (depinning transition or yield stress) formally goes to infinity. Above the critical stress, the line roughness profile converges to that of a random noise field. Motivated by these results, a physical model is developed based on the notion of coherent line bowing over all length scales below the correlation length. Above the correlation length, the solute field prohibits such coherent line bow outs. Using this model, we identify potential gaps in existing theories of solid solution strengthening and show that recent observations of length-dependent dislocation mobilities can be rationalized.
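In standard depinning notation, these observations correspond to the following scaling (a sketch of the phenomenology, not the authors' fitted exponents):

```latex
% Standard depinning phenomenology consistent with the observations above:
% below the correlation length \xi the line is self-affine with roughness
% exponent \zeta, while above \xi coherent bow-outs are suppressed,
W(L) \sim L^{\zeta} \quad (L < \xi), \qquad
\xi(\sigma) \to \infty \ \text{as} \ \sigma \to \sigma_c,
% so the yield (depinning) stress \sigma_c marks the divergence of \xi.
```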
Here, we describe recent efforts to improve our predictive modeling of rate-dependent behavior at, or near, a phase transition using molecular dynamics simulations. Cadmium sulfide (CdS) is a well-studied material that undergoes a solid-solid phase transition from wurtzite to rock salt structures between 3 and 9 GPa. Atomistic simulations are used to investigate the dominant transition mechanisms as a function of orientation, size, and rate. We found that the final rock salt orientations were determined relative to the initial wurtzite orientation, and that they differed between the two initial orientations and the two pressure regimes studied. The CdS solid-solid phase transition is studied both for a bulk single crystal and for polymer-encapsulated spherical nanoparticles of various sizes.
Shock-induced detonation is a key property of energetic materials (EM) that remains empirically understood. One proposed mechanism of shock-initiation in EM is “phonon up-pumping” to initiate chemical reactions, where excitation of lattice phonon modes rapidly transfers energy into intramolecular vibrations, ultimately resulting in the breaking of chemical bonds. We are developing novel ultrafast laser spectroscopy techniques to study vibrational energy transfer from phonon modes to intramolecular vibrations (phonon up-pumping), as well as competing energy transfer pathways from intramolecular vibrations to phonon modes (vibrational cooling). Through combinations of plasma-generated supercontinuum infrared, tunable near- and mid-infrared, and terahertz pulses in pump-probe spectroscopy, supplemented with ab initio simulations, we can explore the energy transfer processes on a sub-picosecond time scale to elucidate vibrational energy transfer pathways and lifetimes in EM. Herein we highlight recent progress, including the spectral and temporal characteristics of the infrared and THz sources as well as preliminary results on select EM.
Within the energetics community, considerable effort is being put forth to find a robust scale-bridging link between unreacted material microstructures and the observed material responses, e.g., detonation and sub-detonative phenomena. Specifically, one area where this scale-bridging capability is needed is mesoscale modeling of explosives initiation (MMEI); here, material microstructures are imported directly or as statistical reconstructions into a hydrocode. While MMEI is attractive for simulating the shock initiation process with ever-increasing model fidelity, a large gap remains between the data being generated at the mesoscale and the calibration of burn model parameters. In this work, stochastic burn models are explored as a paradigm shift to address possible scale-bridging schemes. These stochastic, particle-based methods are similar to those used for granular and droplet-laden flows, with Langevin-type equations. Further parallels are drawn to turbulent combustion modeling and preliminary developments using probability density function (pdf) theory by Baer, Gartling, and DesJardin. In order to implement these new scale-bridging schemes, one example of a stochastic burn model is explained in greater detail. Results from the stochastic burn model and MMEI simulations are given to illustrate the proposed approach. Ultimately, the execution of this work will be a community endeavor; to achieve such a capability, research efforts should focus on full-field data mining and pdf evolution, in addition to new numerical techniques for hydrocodes.
We discuss major challenges in modeling giant impacts between planetary bodies, focusing on the equations of state (EOS). During the giant impact stage of planet formation, rocky planets are melted and partially vaporized. However, most EOS models fail to reproduce experimental constraints on the thermodynamic properties of the major minerals over the required phase space. Here, we present an updated version of the widely-used ANEOS model that includes a user-defined heat capacity limit in the thermal free energy term. Our revised model for forsterite (Mg2SiO4), a common proxy for the mantles of rocky planets, provides a better fit to material data over most of the phase space of giant impacts. We discuss the limitations of this model and the Tillotson equation of state, a commonly used alternative model.
An A- and B-site substitutional study of SrFeO3−δ perovskites (A’x A1−x B’y B1−y O3−δ, where A = Sr and B = Fe) was performed for a two-step solar thermochemical air separation cycle. The cycle steps encompass (1) the thermal reduction of A’x Sr1−x B’y Fe1−y O3−δ driven by concentrated solar irradiation and (2) the oxidation of A’x Sr1−x B’y Fe1−y O3−δ in air to remove O2, leaving N2. The oxidized A’x Sr1−x B’y Fe1−y O3−δ is recycled back to the first step to complete the cycle, resulting in the separation of N2 from air using concentrated solar irradiation. A-site substitution fractions between 0 ≤ x ≤ 0.2 were examined for A’ = Ba, Ca, and La. B-site substitution fractions between 0 ≤ y ≤ 0.2 were examined for B’ = Cr, Cu, Co, and Mn. Samples were prepared with a modified Pechini method and characterized with X-ray diffractometry. The mass changes and deviations from stoichiometry were evaluated with thermogravimetry in three screenings with temperature- and O2 pressure-swings between 573 and 1473 K and between 20% O2/Ar and 100% Ar at 1 bar, respectively. A’ = Ba or La and B’ = Co resulted in the most improved redox capacities among the temperature- and O2 pressure-swing experiments.
The Extended History Variable Reactive Burn model (XHVRB), as proposed by Starkenburg, uses the captured shock pressure rather than the current pressure for calculating the pseudo-entropy that is used to model the reaction rate of detonating explosives. In addition to its extended capabilities for modeling explosive desensitization in multi-shock environments, XHVRB's shock capturing offers potential improvement for single-shock modeling over the historically used workhorse model HVRB in CTH, an Eulerian shock physics code developed at Sandia National Laboratories. The detailed transition to detonation of PBX9501, as revealed by embedded gauge data, is compared to models with both HVRB and XHVRB. Improvements in the comparison of model to test data are shown with XHVRB, though not all of the details of the transition are captured by these single-rate models.
Energetic materials with different properties can be mixed or layered to control performance. However, reactions at material interfaces are poorly understood and performance may be highly dependent on the degree of mixing. In this work, we use vapor-deposited explosive multilayers as a model system to investigate shock interactions between different explosive materials with precisely controlled spacings. Samples consisted of alternating pentaerythritol tetranitrate (PETN) and hexanitrostilbene (HNS) layers, materials that have substantial differences in detonation velocity, with individual layer thicknesses in the vicinity of the critical thickness for detonation propagation of each material (~100–200 μm). Additional experiments on PETN/HNS bilayer samples were conducted to elucidate the role of non-ideal interfaces in detonation propagation. Preliminary hydrocode simulations of detonation performance were performed using an Arrhenius reactive burn model that was parameterized from detonation velocity and failure data from vapor-deposited films of each constituent material. Measured detonation velocities in the multilayer samples were significantly lower than expected, given that the individual PETN layer thicknesses were larger than the critical thickness for detonation propagation. The bilayer experiments highlight the role of non-ideal interfaces in contributing to this result.
Signal arrival-time estimation plays a critical role in a variety of downstream seismic analyses, including location estimation and source characterization. Any arrival-time errors propagate through subsequent data-processing results. In this article, we detail a general framework for refining estimated seismic signal arrival times along with full estimation of their associated uncertainty. Using the standard short-term average/long-term average threshold algorithm to identify a search window, we demonstrate how to refine the pick estimate through two different approaches. In both cases, new waveform realizations are generated through bootstrap algorithms to produce full a posteriori estimates of the uncertainty of the onset arrival time of the seismic signal. The onset arrival uncertainty estimates provide additional data-derived information from the signal and have the potential to influence seismic analysis along several fronts.
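A minimal sketch of the detection-plus-bootstrap idea (parameter values and the noise-resampling scheme are illustrative, not the exact procedure of this article):

```python
import numpy as np

def sta_lta(x, ns, nl):
    """Classic short-term-average / long-term-average ratio."""
    e = x**2
    sta = np.convolve(e, np.ones(ns) / ns, mode="same")
    lta = np.convolve(e, np.ones(nl) / nl, mode="same")
    return sta / np.maximum(lta, 1e-12)

def pick_with_uncertainty(x, ns=20, nl=200, thresh=3.0, nboot=200, seed=0):
    """Refine the onset pick inside the STA/LTA search window and bootstrap
    pre-signal noise to estimate pick uncertainty (illustrative choices)."""
    rng = np.random.default_rng(seed)
    trigger = np.argmax(sta_lta(x, ns, nl) > thresh)  # search-window start
    noise = x[: max(trigger - ns, nl)]                # pre-signal segment
    picks = []
    for _ in range(nboot):
        # New waveform realization: resample the noise, keep the signal
        xb = x.copy()
        xb[: noise.size] = rng.choice(noise, size=noise.size, replace=True)
        picks.append(np.argmax(sta_lta(xb, ns, nl) > thresh))
    return trigger, np.std(picks)  # pick sample and its bootstrap std. dev.

# Synthetic trace: noise followed by a signal onset at sample 1000
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
x[1000:] += 5 * np.sin(0.3 * np.arange(1000)) * np.exp(-np.arange(1000) / 300)
print(pick_with_uncertainty(x))
```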
Energy storage systems (ESSs) are being deployed widely due to numerous benefits including operational flexibility, high ramping capability, and decreasing costs. This study investigates the economic benefits provided by battery ESSs when they are deployed for market-related applications, considering the battery degradation cost. A comprehensive investment planning framework is presented, which estimates the maximum revenue that the ESS can generate over its lifetime and provides the necessary tools to investors for aiding the decision-making process regarding an ESS project. The applications chosen for this study are energy arbitrage and frequency regulation. Lithium-ion batteries are considered due to their wide popularity arising from high efficiency, high energy density, and declining costs. A new degradation cost model based on energy throughput and cycle count is developed for lithium-ion batteries participating in electricity markets. The lifetime revenue of the ESS is calculated considering battery degradation, and a cost-benefit analysis is performed to provide investors with an estimate of the net present value, return on investment, and payback period. The effect of considering the degradation cost on the estimated revenue is also studied. The proposed approach is demonstrated on the IEEE Reliability Test System and historical data from PJM Interconnection.
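A schematic of how a degradation cost enters the cost-benefit analysis (all coefficients below are placeholders, not the paper's calibrated model):

```python
# Illustrative cost-benefit sketch with hypothetical numbers; the paper's
# degradation model combines energy throughput and cycle count similarly in
# spirit, but the coefficients here are placeholders.
def degradation_cost(throughput_mwh, cycles, cost_per_mwh=2.0, cost_per_cycle=5.0):
    """Degradation cost charged against market revenue."""
    return cost_per_mwh * throughput_mwh + cost_per_cycle * cycles

def npv(annual_net_revenue, capex, rate=0.07):
    """Net present value of an ESS project over its lifetime."""
    return sum(r / (1 + rate) ** (t + 1)
               for t, r in enumerate(annual_net_revenue)) - capex

gross = [120_000.0] * 10                      # arbitrage + regulation revenue
net = [g - degradation_cost(8_000, 300) for g in gross]
print(npv(net, capex=900_000.0))              # > 0 suggests a viable project
```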
Understanding microstructural and strain evolutions induced by noble gas production in the nuclear fuel matrix or plasma-facing materials is crucial for designing next-generation nuclear reactors, as they are responsible for volumetric swelling and catastrophic failure. We describe a multimodal approach combining synchrotron-based nanoscale X-ray imaging techniques with atomic-scale electron microscopy techniques for mapping chemical composition, morphology, and lattice distortion in single-crystal W induced by Kr irradiation. We report that Kr-irradiated single-crystal W undergoes surface deformation, forming Kr-containing cavities. Furthermore, positive strain fields are observed in Kr-irradiated regions, which lead to compression of the underlying W matrix.
Leung, L.R.; Bader, David C.; Taylor, Mark A.; Mccoy, Renata B.
Supported by the U.S. Department of Energy (DOE), the Energy Exascale Earth System Model (E3SM) project aims to optimize the use of DOE resources to address the grand challenge of actionable predictions of Earth system variability and change. This requires sustained advancement to (1) integrate model development with leading-edge computational advances toward ultra-high-resolution modeling; (2) represent the coupled human-Earth system to address energy sector vulnerability to variability and change; and (3) address uncertainty in model simulations and projections. Scientific development of the E3SM modeling system is driven by the simulation requirements in three overarching science areas centering on understanding the Earth's water cycle, biogeochemistry, and cryosphere systems and their future changes. This paper serves as an introduction to the E3SM special collection, which includes 50 papers published in several AGU journals. It provides an overview of the E3SM project, including its goals and science drivers. It also provides a brief history of the development of E3SM version 1 and highlights some key findings from papers included in the special collection.
Work performed under this one-year LDRD was concerned with estimating resource requirements for small quantum test beds that are expected to be available in the near future. This work represents a preliminary demonstration of our ability to leverage quantum hardware for solving small quantum simulation problems in areas of interest to the DOE. The algorithms enabling such studies are hybrid quantum-classical variational algorithms, in particular the widely used variational quantum eigensolver (VQE). Employing this hybrid algorithm, in which the quantum computer complements the classical one, we implemented an end-to-end application-level toolchain that allows the user to specify a molecule of interest and compute the ground state energy using the VQE approach. We found significant limitations attributable to the classical portion of the hybrid system, including a greater-than-quartic power scaling of the classical memory requirements with system size. Current VQE approaches would require an exascale machine to solve any molecule with more than 150 nuclei. Our findings include several improvements that we implemented in the VQE toolchain, including a classical optimizer based on a decades-old method that had not previously been considered in the VQE ecosystem. Our findings suggest limitations to variational hybrid approaches to simulation that further motivate the need for a gate-based fault-tolerant quantum processor that can implement larger problems using the fully digital quantum phase estimation algorithm.
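To make the hybrid loop concrete, a toy statevector sketch of VQE on a one-qubit Hamiltonian (illustrative only; a real run replaces the statevector with quantum-hardware measurements, and the Hamiltonian and ansatz are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-qubit Hamiltonian in place of a molecular one
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """Parameterized trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)], dtype=complex)

def energy(theta):
    psi = ansatz(theta)
    return np.real(psi.conj() @ H @ psi)  # expectation value <psi|H|psi>

# The classical optimizer is the bottleneck studied in this work.
result = minimize(energy, x0=[0.1], method="COBYLA")
print(result.fun, np.linalg.eigvalsh(H)[0])  # VQE energy vs exact ground state
```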
An organic glass scintillator developed by Sandia National Laboratories was characterized in terms of its light output and pulse shape discrimination (PSD) properties and compared to commercial liquid (EJ-309) and plastic (EJ-276) organic scintillators. The electron light output was determined through relative comparison of the 137Cs Compton edge location. The proton light yield was measured using a double time-of-flight technique at the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory. Using a tunable broad-spectrum neutron source and an array of pulse-shape-discriminating observation scintillators, a continuous measurement of the proton light yield was performed for EJ-309 (200 keV–3.2 MeV), EJ-276 (170 keV–4.9 MeV), and the organic glass (50 keV–20 MeV). Finally, the PSD properties of the organic glass, EJ-309, and EJ-276 were evaluated using an AmBe source and compared via a figure-of-merit metric. The organic glass exhibited a higher electron light output than both EJ-309 and EJ-276. Its proton light yield and PSD performance were comparable to EJ-309 and superior to that of EJ-276. With these performance characteristics, the organic glass scintillator is well poised to replace current state-of-the-art PSD-capable scintillators in a range of fast neutron detection applications.
Conathane EN-7 (referred to as EN-7) has been used for decades to pot electrical connectors, providing mechanical support for solder joints in cables. Unfortunately, the EN-7 formulation contains a suspect carcinogen and chemical sensitizer, toluene diisocyanate (TDI). Because of this, various groups have been formulating replacement materials, but all have come up short in final properties or in processing. We propose Arathane 5753 HVB as a replacement for EN-7. The properties compare very well with EN-7 and the processing has both advantages and disadvantages over EN-7 as discussed in detail below.
This report details the current benchmark results to verify, validate and demonstrate the capabilities of the in-house multi-physics phase-field modeling framework Mesoscale Multiphysics Phase Field Simulator (MEMPHIS) developed at the Center for Integrated Nanotechnologies (CINT). MEMPHIS is a general phase-field capability to model various nanoscience and materials science phenomena related to microstructure evolution. MEMPHIS has been benchmarked against a suite of reported classical phase-field benchmark problems to verify and validate the correctness, accuracy and precision of the models and numerical methods currently implemented into the code.
Atomic Force Microscopy (AFM), in conjunction with Peak Force Kelvin Probe Force Microscopy (PF-KPFM) and Peak Force Scanning Spreading Resistance Microscopy (PF-SSRM), was used to assess changes on thin metal films that underwent accelerated aging. The AFM technique provides a relatively easy, non-destructive methodology that does not require high-vacuum facilities to obtain nanometer-scale spatial resolution of surface chemistry changes. Surface morphology, roughness, contact potential difference, and spreading resistance were monitored to qualitatively identify effects of aging: morphology changes and oxidation of Au, Al, and Cu thin film standards, as well as diffusion in CuAu and AlAu thin film stacks, at 65 °C under dried nitrogen flow. The AFM PF-KPFM and PF-SSRM modes were exercised and refined, and have proven to be viable and necessary early aging detection tools.
Secondary networks are used to supply power to loads that require very high reliability and are in use around the world, particularly in dense urban downtown areas. The protection of secondary network systems poses unique challenges. The addition of distributed energy resources (DERs) to secondary networks can compound these challenges, and deployment of microgrids on secondary networks will create a new set of challenges and opportunities. This report discusses secondary networks and their protection, the challenges associated with interconnecting DERs to a secondary network, issues expected to be associated with creating microgrids on secondary networks, standards that deal with these challenges and issues, and suggestions for research and development foci that would yield new means for addressing these challenges.
Lin, Yong; Scott, Bobby R.; Saxton, Bryanna; Chen, Wenshu; Belinsky, Steven; Potter, Charles G.A.
There are numerous self-shielded research irradiators used in various facilities throughout the United States. The irradiators employ radioactive sources containing either 137Cs or 60Co and are used for a variety of radiobiological investigations involving cellular and/or animal models. A report from the National Academy of Sciences described security issues associated with particular radiation sources and the desire for their replacement with suitable X-ray irradiators. One possible replacement would be a 320 kV X-ray irradiator. The participants in this research successfully performed in vivo radiobiological studies involving mice exposed to filtered (HVL ≈ 4 mm Cu) 320 kV X rays. Two publications (one published and one submitted at the time of this report's publication) documenting key findings are provided in Appendices A and B of this report. The 320 kV X rays were found suitable for in vivo (in mice) cell survival studies and are expected to be suitable for bone marrow transplantation studies using mice, but this needs to be experimentally validated.
We present the results of the first Charged-Particle Transport Coefficient Code Comparison Workshop, which was held in Albuquerque, NM October 4–6, 2016. In this first workshop, scientists from eight institutions and four countries gathered to compare calculations of transport coefficients including thermal and electrical conduction, electron–ion coupling, inter-ion diffusion, ion viscosity, and charged particle stopping powers. Here, we give general background on Coulomb coupling and computational expense, review where some transport coefficients appear in hydrodynamic equations, and present the submitted data. Large variations are found when either the relevant Coulomb coupling parameter is large or computational expense causes difficulties. Understanding the general accuracy and uncertainty associated with such transport coefficients is important for quantifying errors in hydrodynamic simulations of inertial confinement fusion and high-energy density experiments.
Wide-area time-synchronized measurements have recently revealed troublesome forced oscillations (FOs) within modern synchronized power grids. In some cases, these FOs represent a dangerous hazard to the system. Recent research has focused on locating the source of FOs to provide operators with knowledge for mitigating their impact locally. This paper presents a complementary mitigation strategy, which is to purposely induce a second oscillation into the grid that cancels the impact of the FO. Such a strategy is complementary in that it may provide valuable time to operators attempting to locate the FO's source and to determine how to rectify it. This paper presents a suppression control strategy which modulates controllable devices to automatically cancel the impact of the FO without the need for locating the source of the original FO. The strategy is based upon tuned feedback control. The approach is demonstrated on a simulation system via modulation of inverter-based generation.
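The suppression principle rests on linear superposition: injecting an equal-amplitude, opposite-phase oscillation cancels the forced response. A toy simulation sketch (hypothetical plant parameters, not the inverter-modulation controller of this paper):

```python
import numpy as np

# Damped oscillator as a stand-in for a grid mode; an unknown FO drives it,
# and after t = 100 s an anti-phase injection u cancels the forcing.
dt, n = 0.01, 20000
w0, zeta, wf = 2 * np.pi * 1.0, 0.05, 2 * np.pi * 0.8  # plant and FO freqs
x = np.zeros(2)                                        # [position, velocity]
out = []
for k in range(n):
    t = k * dt
    fo = np.sin(wf * t)                     # unknown forced oscillation
    u = -np.sin(wf * t) if t > 100 else 0   # suppression injection after 100 s
    acc = -2 * zeta * w0 * x[1] - w0**2 * x[0] + fo + u
    x = x + dt * np.array([x[1], acc])      # forward-Euler integration
    out.append(x[0])
out = np.asarray(out)
# Forced-response amplitude before injection vs after transients decay
print(np.abs(out[9000:9900]).max(), np.abs(out[-1000:]).max())
```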
Lean operation of Spark-Ignition engines can provide higher thermal efficiency compared to standard stoichiometric operation. However, for a homogeneous lean mixture, the associated reduction of flame speeds becomes an important issue from the perspective of robust ignition and fast flame spread throughout the charge. This study is focused on the use of a lean partial fuel stratification strategy that can stabilize the deflagration, while sufficiently fast combustion is ensured via the use of end-gas autoignition. The engine has a spray-guided Direct-Injection Spark-Ignition combustion system and was fueled with either a high-octane certification gasoline or E85. Partial fuel stratification was achieved using several fuel injections during the intake stroke in combination with a small pilot-injection concurrent with the Spark-Ignition. The results reveal that partial fuel stratification enables very stable combustion, offering higher thermal efficiency for parts of the load range in comparison to well-mixed lean and stoichiometric combustion. The heat release and flame imaging demonstrate that the combustion often has three distinct stages. The combustion of the pilot-injected fuel, ignited by the normal spark, acts as a “super igniter,” ensuring a very repeatable initiation of combustion, and flame incandescence reveals locally rich conditions. The second stage is mainly composed of blue flame propagation in a well-mixed lean mixture. The third stage is the compression autoignition of a well-mixed and typically very lean end-gas. The end-gas autoignition is critical for achieving high combustion efficiency, high thermal efficiency, and stable combustion. Partial fuel stratification enables very effective combustion-phasing control, which is critical for controlling the occurrence and intensity of end-gas autoignition. Comparing the gasoline and E85 fuels, it is noted that achieving end-gas autoignition for the higher octane E85 requires a more aggressive compression of the end-gas via the use of a more advanced combustion phasing or higher intake-air temperature.
We develop a generalized stress inversion technique (or the generalized inversion method) capable of recovering stresses in linear elastic bodies subjected to arbitrary cuts. Specifically, given a set of displacement measurements found experimentally from digital image correlation (DIC), we formulate a stress estimation inverse problem as a partial differential equation-constrained optimization problem. We use gradient-based optimization methods, and we accordingly derive the necessary gradient and Hessian information in a matrix-free form to allow for parallel, large-scale operations. By using a combination of finite elements, DIC, and a matrix-free optimization framework, the generalized inversion method can be used on any arbitrary geometry, provided that the DIC camera can view a sufficient part of the surface. We present numerical simulations and experiments, and we demonstrate that the generalized inversion method can be applied to estimate residual stress.
Predictions of the bulk-scale thermal conductivity of solids using non-equilibrium molecular dynamics (MD) simulations have relied on the linear extrapolation of the thermal resistivity versus the reciprocal of the system length in the simulations. Several studies have reported deviation of the extrapolation from linearity near the micro-scale, raising concerns about its applicability to large systems. To investigate this issue, the present work conducted extensive MD simulations of silicon with two different potentials (EDIP and Tersoff-II) for unprecedented length scales up to 10.3 μm and simulation times up to 530 ns. For large systems (≥0.35 μm in size), the non-linearity of the extrapolation of the reciprocal of the thermal conductivity is mostly due to ignoring the dependence of the thermal conductivity on temperature. To account for this dependence, the present analysis fixes the temperature range used for determining the gradient from which the thermal conductivity values are calculated. However, short systems (≤0.23 μm in size) show significant non-linearity in the calculated thermal conductivity values using a temperature window of 500 ± 10 K from the simulation results with the EDIP potential. Since these system sizes are shorter than the mean phonon free path in EDIP (~0.22 μm), the nonlinearity may be attributed to ballistic phonon transport. For the MD simulations with the Tersoff-II potential, there is no significant non-linearity in the calculated thermal conductivity values for systems ranging in size from 0.05 to 5.4 μm.
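The extrapolation itself is simple; a sketch with made-up numbers:

```python
import numpy as np

# Standard bulk extrapolation for NEMD: 1/k is fit linearly against 1/L and
# the intercept gives 1/k_bulk. The values below are illustrative only.
L = np.array([0.35, 0.70, 1.4, 2.8, 5.4])        # system lengths, micrometers
k = np.array([55.0, 78.0, 100.0, 117.0, 128.0])  # apparent conductivity, W/m-K

slope, intercept = np.polyfit(1.0 / L, 1.0 / k, 1)
k_bulk = 1.0 / intercept
print(f"extrapolated bulk conductivity: {k_bulk:.0f} W/m-K")
# The paper's point: this fit is only trustworthy once the temperature window
# used for the gradient is fixed and L exceeds the mean phonon free path.
```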
Proceedings of IA3 2020: 10th Workshop on Irregular Applications: Architectures and Algorithms, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Graph coloring is often used in parallelizing scientific computations that run in distributed and multi-GPU environments; it identifies sets of independent data that can be updated in parallel. Many algorithms exist for graph coloring on a single GPU or in distributed memory, but hybrid MPI+GPU algorithms have been unexplored until this work, to the best of our knowledge. We present several MPI+GPU coloring approaches that use implementations of the distributed coloring algorithms of Gebremedhin et al. and the shared-memory algorithms of Deveci et al. The on-node parallel coloring uses implementations in KokkosKernels, which provide parallelization for both multicore CPUs and GPUs. We further extend our approaches to solve for distance-2 coloring, giving the first known distributed and multi-GPU algorithm for this problem. In addition, we propose novel methods to reduce communication in distributed graph coloring. Our experiments show that our approaches operate efficiently on inputs too large to fit on a single GPU and scale up to graphs with 76.7 billion edges running on 128 GPUs.
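For context, the serial baseline that these parallel approaches build on is plain greedy coloring; a minimal sketch (illustrative, not the KokkosKernels implementation):

```python
# Sequential greedy distance-1 coloring: adjacent vertices get different
# colors, so each color class is an independent set updatable in parallel.
def greedy_color(adj):
    """adj: dict mapping vertex -> iterable of neighbors."""
    color = {}
    for v in adj:                    # visit vertices in some order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:             # smallest color not used by any
            c += 1                   # already-colored neighbor
        color[v] = c
    return color

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_color(adj))  # e.g., {0: 0, 1: 1, 2: 2, 3: 0}
```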
Proceedings of FTXS 2020: Fault Tolerance for HPC at eXtreme Scale, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Benefits of local recovery (restarting only a failed process or task) have been previously demonstrated in parallel solvers. Local recovery has a reduced impact on application performance due to masking of failure delays (for message-passing codes) or dynamic load balancing (for asynchronous many-task codes). In this paper, we implement MPI-process-local checkpointing and recovery of data (as an extension of the Fenix library) in combination with an existing method for local detection of silent errors in partial-differential-equation solvers, to show a path for incorporating lightweight silent-error resilience. In addition, we demonstrate how asynchrony introduced by maximizing computation-communication overlap can halt the propagation of delays. For a prototype stencil solver (including an iterative-solver-like variant) with injected memory bit flips, results show greatly reduced overhead under weak scaling compared to global recovery, and high failure-masking efficiency. The approach is expected to be generalizable to other MPI-based solvers.
Proceedings of FTXS 2020: Fault Tolerance for HPC at eXtreme Scale, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Gupta, Nikunj; Mayo, Jackson R.; Lemoine, Adrian S.; Kaiser, Hartmut
Exceptions and errors occurring within mission-critical applications due to hardware failures have a high cost. With the emerging Next Generation Platforms (NGPs), the rate of hardware failures will likely increase. Designing our applications to be resilient is therefore a critical concern for retaining the reliability of results while meeting constraints on power budgets. In this paper, we discuss software resilience in asynchronous many-task runtimes (AMTs) at both local and distributed scale. We choose HPX to prototype our resiliency designs and implement two resiliency APIs that we expose to application developers: task replication and task replay. Task replication repeats a task n times and executes the copies asynchronously; task replay reschedules a task up to n times until a valid output is returned. Furthermore, we expose algorithm-based fault tolerance (ABFT) using user-provided predicates (e.g., checksums) to validate the returned results. We benchmark the resiliency schemes for both synthetic and real-world applications at local and distributed scale and show that most of the added execution time arises from the replay, replication, or data movement of the tasks, not from the boilerplate code added to achieve resilience.
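The semantics of the two APIs can be sketched in a few lines of Python (illustrative stand-ins, not the actual HPX C++ interface; the `validate` predicate plays the role of the user-provided ABFT checksum):

```python
from concurrent.futures import ThreadPoolExecutor

def async_replay(task, validate, n):
    """Task replay: re-run `task` up to n times until `validate` accepts."""
    for _ in range(n):
        result = task()
        if validate(result):
            return result
    raise RuntimeError("no valid result after n replays")

def async_replicate(task, n):
    """Task replication: run n copies concurrently, accept the majority result."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(task) for _ in range(n)]
        results = [f.result() for f in futures]
    return max(set(results), key=results.count)
```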
Proceedings of ExaMPI 2020: Exascale MPI Workshop, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
We present the execution model of Virtual Transport (VT), a new Asynchronous Many-Task (AMT) runtime system that provides unprecedented integration and interoperability with MPI. We have developed VT in conjunction with large production applications to provide a highly incremental, high-value path to AMT adoption in the dominant ecosystem of MPI applications, libraries, and developers. Our aim is that the 'MPI+X' model of hybrid parallelism can smoothly extend to become 'MPI+VT+X'. We illustrate a set of design and implementation techniques that have been useful in building VT. We believe that these ideas, and the code embodying them, will be useful to others building similar systems, and perhaps provide insight into how MPI might evolve to better support them. We motivate our approach with two applications that are adopting VT and have begun to benefit from increased asynchrony and dynamic load balancing.
Proceedings of IPDRM 2020: 4th Annual Workshop on Emerging Parallel and Distributed Runtime Systems and Middleware, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
As network speeds increase, the overhead of processing incoming messages is becoming onerous enough that many manufacturers now provide network interface cards (NICs) with offload capabilities to handle it. This increase in NIC capability creates an opportunity to compute on data in situ on the NIC. These enhanced NICs can be classified into several categories of SmartNICs, which present an interesting opportunity for future runtime software designs. Locating runtime software in the network, rather than at the host level, opens up radical distributed-runtime possibilities that were not practical before SmartNICs. The transition also has useful intermediate steps: migrating current runtime software so that it is offloaded onto a SmartNIC presents interesting possibilities of its own. This paper describes SmartNIC design and how SmartNICs can be leveraged both to offload current-generation runtime software and to enable radically different future in-network distributed runtime systems.
Proceedings of ExaMPI 2020: Exascale MPI Workshop, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Multithreaded MPI applications are gaining popularity in scientific and high-performance computing. While the combination of programming models is suited to support current parallel hardware, it moves threading models and their interaction with MPI into focus. With the advent of new threading libraries, the flexibility to select a threading implementation of choice is becoming an important usability feature. Open MPI has traditionally avoided componentizing its threading model, relying on code inlining and static initialization to minimize potential impacts on runtime fast paths and synchronization. This paper describes the implementation of generic threading runtime support in Open MPI using the Opal Modular Component Architecture. This architecture allows the programmer to select a threading library at compile- or run-time, providing both static initialization of threading primitives and dynamic instantiation of threading objects. In this work, we present the implementation, define the required interfaces, and discuss trade-offs of dynamic and static initialization.
Understanding the fundamental limits of gas deliverable capacity in porous materials is of critical importance as it informs whether technical targets (e.g., for on-board vehicular storage) are feasible. High-throughput screening studies of rigid materials, for example, have shown they are not able to achieve the original ARPA-E methane storage targets, yet an interesting question remains: what is the upper limit of deliverable capacity in flexible materials? In this work we develop a statistical adsorption model that specifically probes the limit of deliverable capacity in intrinsically flexible materials. The resulting adsorption thermodynamics indicate that a perfectly designed, intrinsically flexible nanoporous material could achieve higher methane deliverable capacity than the best benchmark systems known to date with little to no total volume change. Density functional theory and grand canonical Monte Carlo simulations identify a known metal-organic framework (MOF) that validates key features of the model. Therefore, this work (1) motivates a continued, extensive effort to rationally design a porous material analogous to the adsorption model and (2) calls for continued discovery of additional high deliverable capacity materials that remain hidden from rigid structure screening studies due to nominal non-porosity.
Proceedings of MCHPC 2020: Workshop on Memory Centric High Performance Computing, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Many-core systems are beginning to feature novel large, high-bandwidth intermediate memory as a visible part of the memory hierarchy. This paper discusses how to make use of intermediate memory when composing matrix multiply with transpose to compute $A A^{T}$. We re-purpose the cache-oblivious approach developed by Frigo et al. and apply it to the composition of a bandwidth-bound kernel (transpose) with a compute-bound kernel (matrix multiply). Particular focus is on matrix shapes far from square that are not usually considered. Our codes are simpler than optimized codes but reasonably close in performance; perhaps more importantly, they suggest a paradigm for constructing other codes that use intermediate memories.
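A minimal recursive sketch of the cache-oblivious flavor of this composition (illustrative Python, not the paper's implementation): the output region is halved along its largest extent until a small base case is reached, so locality improves at every level of the memory hierarchy without hand-tuned block sizes.

```python
import numpy as np

def aat(A, C, r0, r1, c0, c1, base=64):
    """Accumulate A[r0:r1, :] @ A[c0:c1, :].T into C[r0:r1, c0:c1].

    Cache-oblivious recursion: split the larger of the two output
    extents until the block is small enough for a direct kernel.
    """
    rows, cols = r1 - r0, c1 - c0
    if max(rows, cols) <= base:
        C[r0:r1, c0:c1] += A[r0:r1, :] @ A[c0:c1, :].T
    elif rows >= cols:
        mid = r0 + rows // 2
        aat(A, C, r0, mid, c0, c1, base)
        aat(A, C, mid, r1, c0, c1, base)
    else:
        mid = c0 + cols // 2
        aat(A, C, r0, r1, c0, mid, base)
        aat(A, C, r0, r1, mid, c1, base)

A = np.random.rand(500, 3000)        # deliberately far from square
C = np.zeros((500, 500))
aat(A, C, 0, 500, 0, 500)
assert np.allclose(C, A @ A.T)
```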
A system’s response to disturbances in an internal or external driving signal can be characterized as performing an implicit computation, where the dynamics of the system are a manifestation of its new state holding some memory of those disturbances. Identifying small disturbances in the response signal requires detailed information about the dynamics of the inputs, which can be challenging. This paper presents a new method, called the Information Impulse Function (IIF), for detecting and time-localizing small disturbances in system response data. The novelty of IIF is its ability to measure relative information content without using Boltzmann’s equation, by modeling signal transmission as a series of dissipative steps. Because IIF yields a detailed expression of the informational structure in the signal, it is well suited to detecting disturbances in the response signal, i.e., the system dynamics. These findings are based on numerical studies of the topological structure of the dynamics of a nonlinear system under perturbed driving signals. The IIF is compared to both permutation entropy and Shannon entropy to demonstrate its entropy-like relationship with system state and its degree of sensitivity to perturbations in a driving signal.
In this synthesis, we assess present research and anticipate future development needs in modeling water quality in watersheds. We first discuss areas of potential improvement in the representation of freshwater systems pertaining to water quality, including representation of environmental interfaces, in-stream water quality and process interactions, soil health and land management, and (peri-)urban areas. In addition, we provide insights into the contemporary challenges in the practices of watershed water quality modeling, including quality control of monitoring data, model parameterization and calibration, uncertainty management, scale mismatches, and provisioning of modeling tools. Finally, we make three recommendations to provide a path forward for improving watershed water quality modeling science, infrastructure, and practices. These include building stronger collaborations between experimentalists and modelers, bridging gaps between modelers and stakeholders, and cultivating and applying procedural knowledge to better govern and support water quality modeling processes within organizations.
A High-Altitude Electromagnetic Pulse (HEMP) is a potential threat to the power grid. HEMP can couple to transmission lines and cables, causing significant overvoltages that can be harmful to line-connected equipment. The effects of these overvoltages on various types of power system components need to be understood. This report presents HEMP testing of breaker trip coils. A high-voltage pulser was built to replicate the induced voltage waveform from a HEMP and was used to test trip coils with pulse magnitudes increasing from 20 kV to 80 kV. The state of health of each trip coil was measured via mechanical operation and impedance measurements before and after each insult to identify any damage or degradation. Dielectric breakdown was observed at the conductor leads during testing, causing the HEMP insult to be diverted to the grounded casing. However, the dielectric breakdown did not interfere with regular device operation.
Research shows that individuals often overestimate their knowledge and performance without realizing they have done so, which can lead to faulty technical outcomes. This phenomenon is known as the Dunning-Kruger effect (Kruger & Dunning, 1999). This research sought to determine if some individuals were more prone to overestimating their performance due to underlying personality and cognitive characteristics. To test our hypothesis, we first collected individual difference measures. Next, we asked participants to estimate their performance on three performance tasks to assess the likelihood of overestimation. We found that some individuals may be more prone to overestimating their performance than others, and that faulty problem-solving abilities and low skill may be to blame. Encouraging individuals to think critically through all options and to consult with others before making a high-consequence decision may reduce overestimation.
A useful and popular waveform for high-performance radar systems is the Linear Frequency Modulated (LFM) chirp. The chirp may have a positive frequency slope with time (up-chirp) or a negative frequency slope with time (down-chirp). There is no inherent advantage to one with respect to the other, except that the receiver needs to be matched to the proper waveform. However, if up-chirps and down-chirps are employed on different pulses in the same Coherent Processing Interval (CPI), then care must be taken to maintain coherence in the range-compressed echo signals. We present the mathematics for doing so, for both correlation processing and stretch processing.
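To make the up-chirp/down-chirp distinction concrete, here is a small NumPy sketch (illustrative only; parameter values are arbitrary) that generates both chirps and applies correlation-based pulse compression. Each pulse must be compressed against the matched filter for its own slope, which is exactly where the coherence bookkeeping described above becomes necessary when slopes alternate within one CPI.

```python
import numpy as np

fs, T, B = 100e6, 10e-6, 20e6          # sample rate, pulse width, bandwidth
t = np.arange(int(T * fs)) / fs
k = B / T                               # chirp-rate magnitude

def lfm(slope_sign):
    """Baseband LFM chirp; +1 for an up-chirp, -1 for a down-chirp."""
    return np.exp(1j * np.pi * slope_sign * k * t**2)

up, down = lfm(+1), lfm(-1)

def compress(echo, ref):
    """Correlation (matched-filter) range compression."""
    return np.correlate(echo, ref, mode="full")

# Compressing against the wrong slope destroys the compression peak.
peak_matched = np.abs(compress(up, up)).max()
peak_mismatched = np.abs(compress(up, down)).max()
print(peak_matched, peak_mismatched)    # matched peak >> mismatched peak
```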
We present an implementation that can keep a cold-atom ensemble within a sub-millimeter-diameter hole in a transparent membrane. Based on the effective magneto-optical trap (MOT) beam diameter set by the d = 400 μm hole, we measure an atom number that is 10^5 times higher than the value predicted by the conventional d^6 scaling rule. Atoms trapped by the membrane MOT are cooled to 10 μK by a sub-Doppler cooling process and can potentially be coupled to photonic/electronic integrated circuits fabricated in the membrane device, taking a step toward an atom-trap integrated platform.
Future nuclear fuel cycle facilities will see a significant benefit from considering materials accountancy requirements early in the design process. The Material Protection, Accounting, and Control Technologies (MPACT) working group is demonstrating Safeguards and Security by Design (SSBD) for a notional electrochemical reprocessing facility as part of a 2020 Milestone. The idea behind SSBD is to consider regulatory requirements early in the design process to provide more optimized systems and avoid costly retrofits later in the design process. Safeguards modeling, using single analyst tools, allows the designer to efficiently consider materials accountancy approaches that meet regulatory requirements. However, safeguards modeling also allows the facility designer to go beyond current regulations and work toward accountancy designs with rapid response and lower thresholds for detection of anomalies. This type of modeling enables new safeguards approaches and may inform future regulatory changes. The Separation and Safeguards Performance Model (SSPM) has been used for materials accountancy system design and analysis. This paper steps through the process of designing a Material Control and Accountancy (MC&A) system, presents the baseline system design for an electrochemical reprocessing facility, and provides performance metrics from the modeling analysis. The most critical measurements in the electrochemical facility are the spent fuel input, electrorefiner salt, and U/TRU product output measurements. Finally, material loss scenario analysis found that measurement uncertainties (relative standard deviations) for Pu would need to be at 1% (random and systematic error components) or better in order to meet domestic detection goals or as high as 3% in order to meet international detection goals, based on a 100 metric ton per year plant size.
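For readers unfamiliar with how measurement uncertainty targets of this kind translate into detection capability, the following is a rough propagation-of-uncertainty sketch with entirely hypothetical numbers; it is a simplified textbook-style calculation, not the SSPM analysis.

```python
import math

# Hypothetical material balance: Pu throughput M split over N measurements,
# each with relative standard deviation `rsd` for both the random and the
# systematic error components. A standard simplification for the material
# unaccounted for (MUF) is:
#   sigma_MUF ~ M * sqrt(rsd_random**2 / N + rsd_systematic**2)
M = 1000.0                      # kg Pu per balance period (illustrative)
N = 100                         # number of measurements (illustrative)
for rsd in (0.01, 0.03):        # the 1% and 3% RSD levels cited above
    sigma_muf = M * math.sqrt(rsd**2 / N + rsd**2)
    threshold = 3.3 * sigma_muf  # common alarm threshold for low error rates
    print(f"RSD {rsd:.0%}: sigma_MUF = {sigma_muf:.1f} kg, "
          f"alarm threshold ~ {threshold:.1f} kg")
```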
The Sodium-Cooled Fast Reactor (SFR) system was identified in the Generation IV Technology Roadmap as a promising technology for the actinide management mission and, if enhanced economics for the system could be realized, also for the electricity and heat production missions. The main characteristics of the SFR that make it especially suitable for the actinide management mission are: consumption of transuranics in a closed fuel cycle, reducing the radiotoxicity and heat load and thereby facilitating waste disposal and geologic isolation; enhanced utilization of uranium resources through efficient management of fissile materials and multi-recycling; and a high level of safety achieved through inherent and passive means that accommodate transients and bounding events with significant safety margins.
The Port of Alaska in Anchorage enables the economic vitality of the Municipality of Anchorage and the State of Alaska. It also provides significant support to defense activities across Alaska, especially at Joint Base Elmendorf-Richardson (JBER), which is immediately adjacent to the Port. For this reason, stakeholders are interested in the resilience of the Port's operations. This report documents a preliminary feasibility analysis for developing an energy system that increases electric supply resilience for the Port and for a specific location inside JBER. The project concept emerged from prior work led by the Municipality of Anchorage and consultation with Port stakeholders. The project consists of a microgrid with PV, storage, and diesel generation, capable of supplying electricity to loads at the Port and at a specific JBER location during utility outages, while also delivering economic value during blue-sky conditions. The study aims to estimate the size, configuration, and concept of operations based on existing infrastructure and limited demand data. It also explores potential project benefits and challenges. The report's goal is to inform further stakeholder consultation and next steps.
This work demonstrates how staged heat release from layered metal oxide cathodes in the presence of organic electrolytes can be predicted from basic thermodynamic properties. These prediction methods for heat release are an advancement compared to typical modeling approaches for thermal runaway in lithium-ion batteries, which tend to rely exclusively on calorimetry measurements of battery components. These calculations generate useful new insights when compared to calorimetry measurements for lithium cobalt oxide (LCO) as well as the most common varieties of nickel manganese cobalt oxide (NMC) and nickel cobalt aluminum oxide (NCA). Accurate trends in heat release with varying state of charge are predicted for all of these cathode materials. These results suggest that thermodynamic calculations utilizing a recently published database of properties are broadly applicable for predicting decomposition behavior of layered metal oxide cathodes. Aspects of literature calorimetry measurements relevant to thermal runaway modeling are identified and classified as thermodynamic or kinetic effects. The calorimetry measurements reviewed in this work will be useful for development of a new generation of thermal runaway models targeting applications where accurate maximum cell temperatures are required to predict cascading cell-to-cell propagation rates.
This short concept article discusses four specific ways to eradicate respiratory pandemics once and for all: protecting the nose, mouth, throat, and lungs; new hygiene regimens; clearing the air; and biophysical interventions. Technical breakthroughs in all four of these areas would not only protect people from life-threatening pathogens but also take the dread out of respiratory disease outbreaks.
Optimized designs were achieved using a genetic algorithm to evaluate a multi-objective trade space, including Mean Time Between Failure (MTBF) and volumetric power density. This work provides a foundational platform that can be used to optimize additional power converters, such as an inverter for an EV traction drive system, as well as to explore trade-offs in thermal management due to the use of different device substrate materials.
For high-voltage electrical devices, prevention of high-voltage breakdown is critical to device function. Polymeric encapsulation such as epoxy is commonly used, but it may include air bubbles or other voids of varying size. The present work aimed to model and experimentally determine the size dependence of breakdown voltage for voids in an epoxy matrix, as a step toward establishing size criteria for void screening. Breakdown was investigated experimentally both for one-dimensional metal/epoxy/air/epoxy/metal gaps ranging from 50 μm to 10 mm and for spherical voids of 250 μm, 500 μm, 1 mm, and 2 mm. These experimental results were compared to modified Paschen-curve and particle-in-cell discharge models; the 1D models and experiments predicted minimum breakdown voltages of 6-8.5 kV, occurring for void sizes of 0.2-1 mm. In a limited set of 3D experiments on 250 μm, 500 μm, 1 mm, and 2 mm voids within epoxy, the minimum breakdown voltages observed were 18.5-20 kV, for 500 μm voids. These experiments and models provide initial size and voltage criteria for tolerable void sizes and expected discharge voltages to support the design of encapsulated high-voltage components.
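For context, the unmodified Paschen relation that such models start from can be evaluated in a few lines. The sketch below uses textbook coefficients for air (the values of A, B, and the secondary-emission coefficient gamma are assumptions, and real sub-millimeter voids require the modified-curve corrections discussed above):

```python
import numpy as np

# Classical Paschen law: V_b = B*p*d / (ln(A*p*d) - ln(ln(1 + 1/gamma)))
# Textbook air coefficients (assumed): A = 15 /(Torr*cm), B = 365 V/(Torr*cm)
A, B, gamma = 15.0, 365.0, 0.01
p = 760.0                                   # Torr (atmospheric pressure)
d = np.logspace(-4, 0, 400)                 # gap size in cm (1 um to 1 cm)
pd = p * d
denom = np.log(A * pd) - np.log(np.log(1 + 1 / gamma))
Vb = np.where(denom > 0, B * pd / denom, np.nan)  # keep the valid branch only

i = np.nanargmin(Vb)
print(f"Paschen minimum ~ {Vb[i]:.0f} V at gap ~ {d[i]*1e4:.1f} um in air")
```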
The magnetized liner inertial fusion (MagLIF) scheme relies on coupling laser energy into an underdense fuel raising the fuel adiabat at the start of the implosion. To deposit energy into the fuel, the laser must first penetrate a laser entrance hole (LEH) foil which can be a significant energy sink and introduce mix. In this paper, we report on experiments investigating laser energy coupling into MagLIF-relevant gas cell targets with LEH foil thicknesses varying from 0.5 μm to 3 μm. Two-dimensional (2D) axisymmetric simulations match the experimental results well for 0.5 μm and 1 μm thick LEH foils but exhibit whole-beam self-focusing and excessive penetration of the laser into the gas for 2 μm and 3 μm thick LEH foils. Better agreement for the 2 μm-thick foil is found when using a different thermal conductivity model in 2D simulations, while only 3D Cartesian simulations come close to matching the 3 μm-thick foil experiments. The study suggests that simulations may over-predict the tendency for the laser to self-focus during MagLIF preheat when thicker LEH foils are used. This effect is pronounced with 2D simulations where the azimuthally symmetric density channel effectively self-focuses the rays that are forced to traverse the center of the plasma. The extra degree of freedom in 3D simulations significantly reduces this effect. The experiments and simulations also suggest that, in this study, the amount of energy coupled into the gas is highly correlated with the laser propagation length regardless of the LEH foil thickness.
Disposal of large, heat-generating waste packages containing the equivalent of 21 pressurized water reactor (PWR) assemblies or more is among the disposal concepts under investigation for a future repository for spent nuclear fuel (SNF) in the United States. Without a long (>200 years) surface storage period, disposal of 21-PWR or larger waste packages (especially if they contain high-burnup fuel) would result in in-drift and near-field temperatures considerably higher than considered in previous generic reference cases that assume either 4-PWR or 12-PWR waste packages (Jové Colón et al. 2014; Mariner et al. 2015; 2017). Sevougian et al. (2019c) identified high-temperature process understanding as a key research and development (R&D) area for the Spent Fuel and Waste Science and Technology (SFWST) Campaign. A two-day workshop in February 2020 brought together campaign scientists with expertise in geology, geochemistry, geomechanics, engineered barriers, waste forms, and corrosion processes to begin integrated development of a high-temperature reference case for disposal of SNF in a mined repository in a shale host rock. Building on the progress made in the workshop, the study team further explored the concepts and processes needed to form the basis for a high-temperature shale repository reference case. The results are described and summarized in this report.
Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith R.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell W.; Hough, Patricia D.; Hu, Kenneth T.; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad; Seidl, Daniel T.; Stephens, John A.; Winokur, Justin G.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user’s manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The Materials Protection, Accounting, and Control Technologies (MPACT) campaign, within the U.S. Department of Energy Office of Nuclear Energy, has developed a Virtual Facility Distributed Test Bed for safeguards and security design for future nuclear fuel cycle facilities. The purpose of the Virtual Test Bed is to bring together experimental and modeling capabilities across the U.S. national laboratory and university complex to provide a one-stop shop for advanced Safeguards and Security by Design (SSBD). Experimental testing alone of safeguards and security technologies would be cost prohibitive, but testbeds and laboratory processing facilities with safeguards measurement opportunities, coupled with modeling and simulation, provide the ability to generate modern, efficient safeguards and security systems for new facilities. This Virtual Test Bed concept has been demonstrated using a generic electrochemical reprocessing facility as an example, but the concept can be extended to other facilities. While much of the recent work in the MPACT program has focused on electrochemical safeguards and security technologies, the laboratory capabilities have been applied to other facilities in the past (including aqueous reprocessing, fuel fabrication, and molten salt reactors as examples). This paper provides an overview of the Virtual Test Bed concept, a description of the design process, and a baseline safeguards and security design for the example facility. Parallel papers in this issue go into more detail on the various technologies, experimental testing, modeling capabilities, and performance testing.