To combat dynamic, cyber-physical disturbances in the electric grid, online and adaptive remedial action schemes (RASs) are needed to achieve fast and effective response. However, a major challenge lies in reducing the computational burden of the analyses needed to inform selection of appropriate controls. This paper proposes the use of a role and interaction discovery (RID) algorithm that leverages control sensitivities to gain insight into controller roles and support groups. Using these results, a procedure is developed to prune the control search space, reducing computation time while achieving effective control response. A case study is presented that considers corrective line switching to mitigate geomagnetically induced current (GIC)-saturated reactive power losses in a 20-bus test system. Results demonstrate significant reductions in both the control search space and reactive power losses using the RID approach.
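The role-and-interaction idea can be illustrated with a toy grouping of controls by the similarity of their sensitivity vectors. This is a minimal sketch and not the RID algorithm itself; it assumes each row of `S` holds one control's sensitivities to the monitored quantities, and the threshold value is invented:

```python
import numpy as np

def support_groups(S, tol=0.9):
    """Greedily group controls whose sensitivity vectors are nearly parallel.

    S   : (n_controls, n_quantities) array of control sensitivities
    tol : cosine-similarity threshold for placing two controls in one group
    """
    unit = S / np.linalg.norm(S, axis=1, keepdims=True)
    groups, assigned = [], set()
    for i in range(len(S)):
        if i in assigned:
            continue
        members = [j for j in range(len(S))
                   if j not in assigned and unit[i] @ unit[j] >= tol]
        assigned.update(members)
        groups.append(members)
    return groups

# controls 0 and 1 act almost identically; control 2 affects a different quantity
S = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
print(support_groups(S))  # [[0, 1], [2]]
```

Searching over one representative per group, rather than every control, is one simple way a grouping like this shrinks the control search space.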
Alkaline zinc–manganese dioxide (Zn–MnO₂) batteries are well suited for grid storage applications because of their inherently safe, aqueous electrolyte and established materials supply chain, resulting in low production costs. With recent advances in the development of Cu/Bi-stabilized birnessite cathodes capable of the full 2-electron capacity equivalent of MnO₂ (617 mA h/g), there is a need for selective separators that prevent zincate (Zn(OH)₄²⁻) transport from the anode to the cathode during cycling, as this electrode system fails in the presence of dissolved zinc. Herein, we present the synthesis of N-butylimidazolium-functionalized polysulfone (NBI-PSU) separators and evaluate their ability to selectively transport hydroxide over zincate. We then examine their impact on the cycling of high-depth-of-discharge Zn/(Cu/Bi-MnO₂) batteries when inserted between the cathode and anode. First, we establish our membranes' selectivity by performing zincate and hydroxide diffusion tests, which show a marked improvement in zincate blocking (D_Zn (cm²/min): 0.17 ± 0.04 × 10⁻⁶ for 50-PSU, our most selective separator, vs 2.0 ± 0.8 × 10⁻⁶ for Cellophane 350P00 and 5.7 ± 0.8 × 10⁻⁶ for Celgard 3501) while maintaining similar crossover rates for hydroxide (D_OH (cm²/min): 9.4 ± 0.1 × 10⁻⁶ for 50-PSU vs 17 ± 0.5 × 10⁻⁶ for Cellophane 350P00 and 6.7 ± 0.6 × 10⁻⁶ for Celgard 3501). We then implement our membranes in cells and observe an improvement in cycle life over control cells containing only the commercial separators (cell lifetime extended from 21 to 79 cycles).
Enhanced Geothermal Systems could contribute substantially to global energy demand if their implementation could overcome inherent challenges such as insufficient created permeability, early thermal breakthrough, and unacceptable induced seismicity. Here we report on the seismic response of a mesoscale hydraulic fracturing experiment performed at 1.5-km depth at the Sanford Underground Research Facility. We measured the seismic activity using a 100-kHz, continuous seismic monitoring system deployed in six 60-m-long monitoring boreholes surrounding the experimental domain in 3-D. The achieved location uncertainty was on the order of 1 m, limited by the signal-to-noise ratio of detected events. These uncertainties were corroborated by detections of fracture intersections at the monitoring boreholes. Three intervals of the dedicated injection borehole were hydraulically stimulated by water injection at pressures up to 33 MPa and flow rates up to 5 L/min. We located 1,933 seismic events during several injection periods. The recorded seismicity delineates a complex fracture network composed of multistrand hydraulic fractures and shear-reactivated, preexisting planes of weakness that grew unilaterally from the point of initiation. We find that stress heterogeneity dictates the seismic outcome of hydraulic stimulations, even when relying on theoretically well-behaved hydraulic fractures. Once hydraulic fractures intersected boreholes, the boreholes acted as pressure reliefs and fracture propagation ceased. To create an efficient subsurface heat exchanger, production boreholes should therefore not be drilled before the end of hydraulic stimulations.
Passive silicon photonic waveguides are exposed to gamma radiation to understand how the performance of silicon photonic integrated circuits is affected in harsh environments such as space or high-energy physics experiments. The propagation loss and group index of the mode guided by these waveguides are characterized using a phase-sensitive swept-wavelength interferometric method. We find that the propagation loss associated with each waveguide geometry explored in this study increases slightly at absorbed doses of up to 100 krad (Si). The group index of the same waveguide geometries changes negligibly after exposure. Additionally, we show that the post-exposure degradation of these waveguides can be mitigated through heat treatment.
Dalbey, Keith R.; Eldred, Michael S.; Geraci, Gianluca; Jakeman, John D.; Maupin, Kathryn A.; Monschke, Jason A.; Seidl, Daniel T.; Tran, Anh; Menhorn, Friedrich; Zeng, Xiaoshu
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
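To give a flavor of the sampling-based uncertainty quantification Dakota provides, here is a minimal Latin hypercube sampler in Python driving a toy "simulation." The model function, bounds, and sample count are invented for illustration; real Dakota studies are configured through Dakota input files rather than code like this:

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """Draw one sample per equal-probability stratum in each dimension,
    with strata shuffled independently across rows."""
    samples = np.empty((n, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        strata = (rng.permutation(n) + rng.random(n)) / n  # stratified in [0, 1)
        samples[:, j] = lo + strata * (hi - lo)
    return samples

def model(x):
    """Stand-in for a simulation code's response of interest."""
    return x[:, 0] ** 2 + 0.5 * np.sin(x[:, 1])

rng = np.random.default_rng(0)
x = latin_hypercube(200, [(-1.0, 1.0), (0.0, np.pi)], rng)
y = model(x)
print(f"mean = {y.mean():.3f}, std = {y.std():.3f}")
```

Stratification ensures every marginal interval is covered, which typically gives lower-variance moment estimates than plain Monte Carlo at the same sample count.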
Impact problems involving plate-like parts struck by punch-like objects of relatively large mass moving at slow speeds of a few feet per second constitute a subset of impact problems of interest at Sandia. This is in contrast to small objects moving at hundreds or thousands of feet per second or higher. The objective of this work is to develop a simple formula that can be used to estimate a lower bound for the puncture energy of metal plates impacted by cylindrical, essentially rigid punches of circular cross-section and flat nose. Such geometry is used as a basis in the design of puncture mitigation barriers or procedures. This was accomplished by deriving an expression using non-dimensional analysis and then calibrating it against test results in the range of speeds of interest. Lower bounds can then be determined based on confidence intervals or factors of safety.
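A calibrated non-dimensional expression of the kind described might be used as in the sketch below. The functional form E = C·σ_u·d·t² and the constant C are hypothetical placeholders chosen for dimensional consistency, not the report's calibrated formula:

```python
def puncture_energy_lower_bound(sigma_u, thickness, diameter, C=1.4, safety_factor=2.0):
    """Illustrative lower-bound puncture energy (consistent units, e.g. Pa and m -> J).

    sigma_u       : ultimate tensile strength of the plate material
    thickness     : plate thickness
    diameter      : punch diameter
    C             : dimensionless constant calibrated from test data (placeholder)
    safety_factor : divides the calibrated estimate to give a conservative lower bound
    """
    return C * sigma_u * diameter * thickness ** 2 / safety_factor

# 400 MPa steel plate, 10 mm thick, struck by a 50 mm diameter flat punch
print(puncture_energy_lower_bound(400e6, 0.010, 0.050))  # 1400.0 J
```

The division by a factor of safety mirrors the last sentence of the abstract: the calibrated estimate is deliberately discounted to obtain a defensible lower bound.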
Tensor decomposition models play an increasingly important role in modern data science applications. One problem of particular interest is fitting a low-rank Canonical Polyadic (CP) tensor decomposition model when the tensor has sparse structure and the tensor elements are nonnegative count data. SparTen is a high-performance C++ library which computes a low-rank decomposition using different solvers: a first-order quasi-Newton or a second-order damped Newton method, along with the appropriate choice of runtime parameters. Since default parameters in SparTen are tuned to experimental results in prior published work on a single real-world dataset conducted using MATLAB implementations of these methods, it remains unclear if the parameter defaults in SparTen are appropriate for general tensor data. Furthermore, it is unknown how sensitive algorithm convergence is to changes in the input parameter values. This report addresses these unresolved issues with large-scale experimentation on three benchmark tensor data sets. Experiments were conducted on several different CPU architectures and replicated with many initial states to establish generalized profiles of algorithm convergence behavior.
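To make the model concrete: a rank-R CP decomposition represents a three-way count tensor through three factor matrices sharing R columns, and CP-APR-style solvers (the class SparTen implements in C++) maximize a Poisson log-likelihood of the counts. A minimal Python sketch of the model and objective, not of SparTen's solvers, is:

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Dense CP reconstruction M[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def poisson_loglik(X, M, eps=1e-12):
    """Poisson log-likelihood (up to a constant) that count-data CP solvers maximize."""
    return float(np.sum(X * np.log(M + eps) - M))

rng = np.random.default_rng(0)
rank, shape = 3, (4, 5, 6)
factors = [rng.random((n, rank)) for n in shape]  # nonnegative factor matrices
M = cp_reconstruct(*factors)                      # model tensor
X = rng.poisson(M)                                # synthetic count data
print(poisson_loglik(X, M))
```

The convergence-sensitivity experiments in the report concern how solver parameters affect the maximization of exactly this kind of objective over the factor matrices.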
Researchers at Sandia National Laboratories have integrated the GRANTA materials database with the MatCal calibration engine to calibrate material models from characterization data. GRANTA is gaining acceptance across the NNSA Tri-lab complex and is being populated with weapons-specific test data by Sandia experimentalists. To use that data to create material models for weapons systems analysts, MatCal has been enabled to import calibration data and test conditions from GRANTA and quickly and reproducibly produce a calibrated set of parameters for a given constitutive model. The team is currently working to store the parameters characterizing material behavior in GRANTA to make them accessible to all weapons analysts.
As the US electrifies the transportation sector, cyberattacks targeting vehicle charging could bring consequences to electrical system infrastructure. This is a growing area of concern as charging stations increase power delivery and must communicate with a range of entities to authorize charging, sequence the charging process, and manage load (grid operators, vehicles, OEM vendors, charging network operators, etc.). The research challenges are numerous and are complicated by the many end users, stakeholders, and software and equipment vendors whose interests are involved. Poorly implemented electric vehicle supply equipment (EVSE), electric vehicle (EV), or grid communication system cybersecurity could be a significant risk to EV adoption because the political, social, and financial impact of cyberattacks - or public perception of such - ripples across the industry and has lasting and devastating effects. Unfortunately, there is no comprehensive EVSE cybersecurity approach, and only limited best practices have been adopted by the EV/EVSE industry. There is an incomplete industry understanding of the attack surface, interconnected assets, and unsecured interfaces. Thus, comprehensive cybersecurity recommendations founded on sound research are necessary to secure EV charging infrastructure. This project is providing the power, security, and automotive industries with a strong technical basis for securing this infrastructure by developing threat models, determining technology gaps, and identifying or developing effective countermeasures. Specifically, the team is creating a cybersecurity threat model and performing a technical risk assessment of EVSE assets so that automotive, charging, and utility stakeholders can better protect customers, vehicles, and power systems in the face of new cyber threats.
The principal Hugoniot, sound velocity, and Grüneisen parameter of polystyrene were measured at conditions relevant to shocks in inertial confinement fusion implosions, from 100 to 1000 GPa. The sound velocity is in good agreement with quantum molecular dynamics calculations and all tabular equation of state models at pressures below 200 GPa. Above 200 GPa, the experimental results agree with two of the examined tables, but do not agree with the most recent table developed for design of inertial confinement fusion (ICF) experiments. The Grüneisen parameter increases with density below ∼3.1 g/cm³ and approaches the asymptotic value for an ideal gas after complete dissociation. This behavior is in good agreement with quantum molecular dynamics results and previous work but is not represented by any of the tabular models. The discrepancy between tabular models and experimental measurement of the sound velocity and Grüneisen parameter is sufficient to impact simulations of ICF experiments.
The application of deep learning toward discovery of data-driven models requires careful application of inductive biases to obtain a description of physics which is both accurate and robust. We present here a framework for discovering continuum models from high fidelity molecular simulation data. Our approach applies a neural network parameterization of governing physics in modal space, allowing a characterization of differential operators while providing structure which may be used to impose biases related to symmetry, isotropy, and conservation form. Here, we demonstrate the effectiveness of our framework for a variety of physics, including local and nonlocal diffusion processes and single and multiphase flows. For the flow physics we demonstrate this approach leads to a learned operator that generalizes to system characteristics not included in the training sets, such as variable particle sizes, densities, and concentration.
The fundamental interactions between an edge dislocation and a random solid solution are studied by analyzing dislocation line roughness profiles obtained from molecular dynamics simulations of Fe0.70Ni0.11Cr0.19 over a range of stresses and temperatures. These roughness profiles reveal the hallmark features of a depinning transition. Namely, below a temperature-dependent critical stress, the dislocation line exhibits roughness in two different length scale regimes which are divided by a so-called correlation length. This correlation length increases with applied stress and at the critical stress (depinning transition or yield stress) formally goes to infinity. Above the critical stress, the line roughness profile converges to that of a random noise field. Motivated by these results, a physical model is developed based on the notion of coherent line bowing over all length scales below the correlation length. Above the correlation length, the solute field prohibits such coherent line bow outs. Using this model, we identify potential gaps in existing theories of solid solution strengthening and show that recent observations of length-dependent dislocation mobilities can be rationalized.
Guo, Qianying; Gu, Yucong; Barr, Christopher M.; Koenig, Thomas; Hattar, Khalid M.; Li, Lin; Thompson, Gregory B.
The incorporation of nanostructured and amorphous metals into modern applications is reliant on the understanding of deformation and failure modes in constrained conditions. To study this, a 105 nm crystalline Cu/160 nm amorphous Cu45Zr55 (at.%) multilayer structure was fabricated with the two crystalline layers sputter deposited between the top-middle-bottom amorphous layers and prepared to electron transparency. The multilayer was then in situ indented either under a single load to a depth of ~ 100 nm (max load of ~ 100 μN) or held at 20 μN and then repeatedly indented with an additional 5 μN up to 20,000 cycles in a transmission electron microscope to compare the deformation responses in the nanolaminate. For the single indentation test, the multilayer showed serrated load-displacement behavior upon initial indentation indicative of shear banding. At an indentation depth of ~ 32 nm, the multilayer exhibited perfect plastic behavior and no strain hardening. Both indented and fatigue-indented films revealed diffraction contrast changes with deformation. Subsequent Automated Crystal Orientation Mapping (ACOM) measurements confirmed and quantified global texture changes in the crystalline layers, with specifically identified grains revealing rotation. Using a finite element model, the in-plane displacement vectors under the indent mapped the conditions where ACOM-determined grain rotation was observed, indicating that stress flow induced the grain rotation. The single-indented Cu layers also exhibited evidence of deformation-induced grain growth, which was not evident in the fatigue-indented Cu-based multilayer. Finally, the single-indented multilayer retained a significant plastic crater in the uppermost amorphous layer that directly contacted the indenter; a negligible crater impression in the same region was observed in the fatigue-tested multilayer.
These differences are explained by the different loading methods, applied load, and deformation mechanisms experienced in the multilayers.
Here, we describe recent efforts to improve our predictive modeling of rate-dependent behavior at, or near, a phase transition using molecular dynamics simulations. Cadmium sulfide (CdS) is a well-studied material that undergoes a solid-solid phase transition from the wurtzite to the rock salt structure between 3 and 9 GPa. The CdS solid-solid phase transition is studied for both a bulk single crystal and for polymer-encapsulated spherical nanoparticles of various sizes. Atomistic simulations are used to investigate the dominant transition mechanisms as a function of orientation, size, and rate. We found that the final rock salt orientations were determined relative to the initial wurtzite orientation, and that these final orientations differed between the two initial orientations and two pressure regimes studied.
Silva-Quinones, Dhamelyz; He, Chuan; Dwyer, Kevin J.; Butera, Robert E.; Wang, George T.; Teplyakov, Andrew V.
The reactivity of liquid hydrazine (N2H4) with respect to H-, Cl-, and Br-terminated Si(100) surfaces was investigated to uncover the principles of nitrogen incorporation into the interface. This process has important implications in a wide variety of applications, including semiconductor surface passivation and functionalization, nitride growth, and many others. The use of hydrazine as a precursor allows for reactions that exclude carbon and oxygen, the primary sources of contamination in processing. In this work, the reactivity of N2H4 with H- and Cl-terminated surfaces prepared by traditional solvent-based methods and with a Br-terminated Si(100) prepared in ultrahigh vacuum was compared. The reactions were studied with X-ray photoelectron spectroscopy, atomic force microscopy, and scanning tunneling microscopy, and the observations were supported by computational investigations. The H-terminated surface led to the highest level of nitrogen incorporation; however, the process proceeds with increasing surface roughness, suggesting possible etching or replacement reactions. In the case of Cl-terminated (predominantly dichloride) and Br-terminated (monobromide) surfaces, the amount of nitrogen incorporation on both surfaces after the reaction with hydrazine was very similar despite the differences in preparation, initial structure, and chemical composition. Density functional theory was used to propose the possible surface structures and to analyze surface reactivity.
Pd readily absorbs hydrogen and its isotopes and can be used to purify gas mixtures involving tritium. Tritium decays to He, forming He bubbles in the metal. These bubbles can cause changes in pressure-composition-temperature (PCT) behavior, swelling, and He release, all of which can lead to failures. Radioactive decay experiments take many years, whereas molecular dynamics (MD) studies can be performed quickly. However, no previous MD methods could simulate He bubble nucleation and growth.
Signal arrival-time estimation plays a critical role in a variety of downstream seismic analyses, including location estimation and source characterization. Any arrival-time errors propagate through subsequent data-processing results. In this article, we detail a general framework for refining estimated seismic signal arrival times along with full estimation of their associated uncertainty. Using the standard short-term average/long-term average threshold algorithm to identify a search window, we demonstrate how to refine the pick estimate through two different approaches. In both cases, new waveform realizations are generated through bootstrap algorithms to produce full a posteriori estimates of uncertainty of onset arrival time of the seismic signal. The onset arrival uncertainty estimates provide additional data-derived information from the signal and have the potential to influence seismic analysis along several fronts.
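A minimal sketch of the STA/LTA windowing and a noise-bootstrap of the resulting pick is given below. The window lengths, threshold, and synthetic waveform are illustrative choices, not the article's settings, and the bootstrap here resamples only the pre-onset noise rather than implementing the article's algorithms:

```python
import numpy as np

def sta_lta(x, ns, nl):
    """STA/LTA energy ratio; ratio[i] compares the ns samples starting at
    sample nl + i against the nl samples ending there."""
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))
    sta = (c[nl + ns:] - c[nl:-ns]) / ns
    lta = (c[nl:-ns] - c[:-(nl + ns)]) / nl
    return sta / (lta + 1e-12)

def pick_onset(x, ns=20, nl=200, thresh=4.0):
    """First sample whose trailing STA window exceeds thresh times the leading LTA."""
    ratio = sta_lta(x, ns, nl)
    i = int(np.argmax(ratio > thresh))
    return nl + i if ratio[i] > thresh else None

rng = np.random.default_rng(0)
n, onset = 3000, 2000
x = 0.1 * rng.standard_normal(n)
t = np.arange(n - onset)
x[onset:] += np.exp(-t / 300) * np.sin(2 * np.pi * t / 25)  # decaying arrival

# bootstrap the pre-onset noise to get an empirical pick distribution
picks = []
for _ in range(200):
    xb = x.copy()
    xb[:onset] = rng.choice(x[:onset], size=onset, replace=True)
    p = pick_onset(xb)
    if p is not None:
        picks.append(p)
picks = np.asarray(picks)
print(f"pick = {picks.mean():.1f} +/- {picks.std():.1f} samples")
```

The spread of the bootstrap picks is the data-derived onset uncertainty that downstream location and characterization analyses can then propagate.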
Energy storage systems (ESSs) are being deployed widely due to numerous benefits including operational flexibility, high ramping capability, and decreasing costs. This study investigates the economic benefits provided by battery ESSs when they are deployed for market-related applications, considering the battery degradation cost. A comprehensive investment planning framework is presented, which estimates the maximum revenue that the ESS can generate over its lifetime and provides investors the necessary tools for aiding the decision-making process regarding an ESS project. The applications chosen for this study are energy arbitrage and frequency regulation. Lithium-ion batteries are considered due to their wide popularity arising from high efficiency, high energy density, and declining costs. A new degradation cost model based on energy throughput and cycle count is developed for lithium-ion batteries participating in electricity markets. The lifetime revenue of the ESS is calculated considering battery degradation, and a cost-benefit analysis is performed to provide investors with an estimate of the net present value, return on investment, and payback period. The effect of considering the degradation cost on the estimated revenue is also studied. The proposed approach is demonstrated on the IEEE Reliability Test System and historical data from PJM Interconnection.
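The cost-benefit quantities at the end of the abstract reduce to standard discounted-cash-flow arithmetic. A minimal sketch follows; the discount rate, capital cost, and degradation-driven revenue fade are invented numbers, not results from the study:

```python
def npv(cash_flows, rate):
    """Net present value; cash_flows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns nonnegative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0.0:
            return t
    return None

capex = -1000.0   # placeholder capital cost
revenue0 = 180.0  # placeholder first-year market revenue
fade = 0.03       # fraction of revenue lost per year to battery degradation
flows = [capex] + [revenue0 * (1.0 - fade) ** t for t in range(10)]

print(f"NPV = {npv(flows, 0.07):.1f}")
print(f"ROI = {sum(flows) / -capex:.2%}")
print(f"payback year = {payback_period(flows)}")
```

Including the degradation fade shrinks both NPV and ROI and pushes the payback year later, which is exactly the sensitivity the study examines.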
Shock-induced detonation is a key property of energetic materials (EM) that remains understood only empirically. One proposed mechanism of shock-initiation in EM is “phonon up-pumping” to initiate chemical reactions, where excitation of lattice phonon modes rapidly transfers energy into intramolecular vibrations, ultimately resulting in the breaking of chemical bonds. We are developing novel ultrafast laser spectroscopy techniques to study vibrational energy transfer from phonon modes to intramolecular vibrations (phonon up-pumping), as well as competing energy transfer pathways from intramolecular vibrations to phonon modes (vibrational cooling). Through combinations of plasma-generated supercontinuum infrared, tunable near- and mid-infrared, and terahertz pulses in pump-probe spectroscopy, supplemented with ab initio simulations, we can explore the energy transfer processes on a sub-picosecond time scale to elucidate vibrational energy transfer pathways and lifetimes in EM. Herein we highlight recent progress, including the spectral and temporal characteristics of the infrared and THz sources as well as preliminary results on select EM.
Within the energetics community, considerable effort is being put forth to find a robust scale-bridging link between unreacted material microstructures and the observed material responses, e.g., detonation and sub-detonative phenomena. Specifically, one area where this scale-bridging capability is needed is mesoscale modeling of explosives initiation (MMEI); here, material microstructures are imported directly or as statistical reconstructions into a hydrocode. While MMEI is attractive for simulating the shock initiation process with ever-increasing model fidelity, a large gap remains between the data being generated at the mesoscale and the calibration of burn model parameters. In this work, stochastic burn models are explored as a paradigm shift to address possible scale-bridging schemes. These stochastic, particle-based methods are similar to those used for granular and droplet-laden flows, with Langevin-type equations. Further parallels are drawn to turbulent combustion modeling and preliminary developments using probability density function (pdf) theory by Baer, Gartling, and DesJardin. In order to implement these new scale-bridging schemes, one example of a stochastic burn model is explained in greater detail. Results from the stochastic burn model and MMEI simulations are given to illustrate the proposed approach. Ultimately, the execution of this work will be a community endeavor; to achieve such a capability, research efforts should focus on full-field data mining and pdf evolution, in addition to new numerical techniques for hydrocodes.
We discuss major challenges in modeling giant impacts between planetary bodies, focusing on the equations of state (EOS). During the giant impact stage of planet formation, rocky planets are melted and partially vaporized. However, most EOS models fail to reproduce experimental constraints on the thermodynamic properties of the major minerals over the required phase space. Here, we present an updated version of the widely-used ANEOS model that includes a user-defined heat capacity limit in the thermal free energy term. Our revised model for forsterite (Mg2SiO4), a common proxy for the mantles of rocky planets, provides a better fit to material data over most of the phase space of giant impacts. We discuss the limitations of this model and the Tillotson equation of state, a commonly used alternative model.
An A- and B-site substitutional study of SrFeO3−δ perovskites (A′xA1−xB′yB1−yO3−δ, where A = Sr and B = Fe) was performed for a two-step solar thermochemical air separation cycle. The cycle steps encompass (1) the thermal reduction of A′xSr1−xB′yFe1−yO3−δ driven by concentrated solar irradiation and (2) the oxidation of A′xSr1−xB′yFe1−yO3−δ in air to remove O2, leaving N2. The oxidized A′xSr1−xB′yFe1−yO3−δ is recycled back to the first step to complete the cycle, resulting in the separation of N2 from air driven by concentrated solar irradiation. A-site substitution fractions of 0 ≤ x ≤ 0.2 were examined for A′ = Ba, Ca, and La. B-site substitution fractions of 0 ≤ y ≤ 0.2 were examined for B′ = Cr, Cu, Co, and Mn. Samples were prepared with a modified Pechini method and characterized with X-ray diffractometry. The mass changes and deviations from stoichiometry were evaluated with thermogravimetry in three screenings with temperature- and O2 pressure-swings between 573 and 1473 K and between 20% O2/Ar and 100% Ar at 1 bar, respectively. A′ = Ba or La and B′ = Co resulted in the most improved redox capacities amongst the temperature- and O2 pressure-swing experiments.
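The deviation from stoichiometry δ tracked by thermogravimetry follows directly from the relative mass change, since the mass lost on reduction is oxygen. A minimal sketch of that conversion (the sample numbers are illustrative, not results from this study):

```python
def delta_change(mass_loss, initial_mass, molar_mass_sample, molar_mass_O=15.999):
    """Change in oxygen nonstoichiometry delta from a TGA mass change.

    Uses Δδ = (Δm / m0) * (M_sample / M_O), where Δm is the mass lost
    on reduction; any consistent mass units work since only the ratio enters.
    """
    return (mass_loss / initial_mass) * molar_mass_sample / molar_mass_O

# e.g. a 1.0 mg loss from a 100 mg sample of undoped SrFeO3 (M ≈ 191.5 g/mol)
print(delta_change(1.0, 100.0, 191.5))
```

Reading the mass traces through a relation like this is how temperature- and O2 pressure-swing thermogravimetry screenings are converted into the redox capacities compared across substituents.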
Eldridge, Brent; Castillo, Anya; Knueven, Bernard; Garcia, Manuel J.
This document is an online supplement to "Sparse, Dense, and Compact Linearizations of the AC OPF." Here we present complete derivations of the formulations examined, details of the lazy constraint algorithm, and the full computational results supporting that paper.
The Extended History Variable Reactive Burn model (XHVRB), as proposed by Starkenburg, uses shock capturing rather than current pressure for calculating the pseudo-entropy that is used to model the reaction rate of detonating explosives. In addition to its extended capabilities for modeling explosive desensitization in multi-shock environments, XHVRB's shock capturing offers potential improvement for single shock modeling over the historically used workhorse model HVRB in CTH, an Eulerian shock physics code developed at Sandia National Labs. The detailed transition to detonation of PBX9501, as revealed by embedded gauge data, is compared to models with both HVRB and XHVRB. Improvements to the comparison of model to test data are shown with XHVRB, though not all of the details of the transition are captured by these single-rate models.
Energetic materials with different properties can be mixed or layered to control performance. However, reactions at material interfaces are poorly understood and performance may be highly dependent on the degree of mixing. In this work, we use vapor-deposited explosive multilayers as a model system to investigate shock interactions between different explosive materials with precisely controlled spacings. Samples consisted of alternating pentaerythritol tetranitrate (PETN) and hexanitrostilbene (HNS) layers, materials that have substantial differences in detonation velocity, with individual layer thicknesses in the vicinity of the critical thickness for detonation propagation of each material (~100 - 200 μm). Additional experiments on PETN/HNS bilayer samples were conducted to elucidate the role of non-ideal interfaces on detonation propagation. Preliminary hydrocode simulations were employed to simulate detonation performance, using an Arrhenius reactive burn model that was parameterized from detonation velocity and failure data from vapor-deposited films of each constituent material. Measured detonation velocities in the multilayer samples were significantly lower than expected, given that the individual PETN layer thicknesses were larger than the critical thickness for detonation propagation. The bilayer experiments highlight the role of non-ideal interfaces in contributing to this result.
Additive Manufacturing (AM) techniques are increasingly being utilized for energetic material processes and research. Energetic material samples fabricated using these techniques can develop artifacts or defects during the manufacturing process. In this work, we use Physical Vapor Deposition (PVD) of explosive samples as a model system to investigate the effects of typical AM artifact or defect geometries on detonation propagation. PVD techniques allow for precise control of geometry to simulate typical AM artifacts or defects embedded into explosive samples. This experiment specifically investigates triangular and diamond-shaped artifacts that can result during direct-ink-writing (Robocasting). Samples were prepared with different sizes of voids embedded into the films. An ultra-high-speed framing camera and streak camera were used to view the samples under dynamic shock loading. It was determined that both geometry and size of the defects have a significant impact on the detonation front.
Wide-area time-synchronized measurements have recently revealed troublesome forced oscillations (FOs) within modern synchronized power grids. In some cases, these FOs represent a dangerous hazard to the system. Recent research has focused on locating the source of FOs to provide operators with knowledge for mitigating their impact locally. This paper presents a complementary mitigation strategy: purposely inducing a second oscillation into the grid that cancels the impact of the FO. Such a strategy is complementary in that it may provide valuable time to operators attempting to locate the FO's source and to determine how to rectify it. This paper presents a suppression control strategy that modulates controllable devices to automatically cancel the impact of the FO without the need to locate the source of the original FO. The strategy is based upon tuned feedback control. The approach is demonstrated on a simulation system via modulation of inverter-based generation.
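The cancellation idea can be sketched in a few lines: estimate the FO's amplitude and phase at its (measured) frequency, then command a modulation of equal magnitude and opposite sign. This toy least-squares version ignores system dynamics and the feedback tuning the paper develops:

```python
import numpy as np

def fit_sinusoid(t, y, omega):
    """Least-squares coefficients (a, b) so that y ≈ a sin(ωt) + b cos(ωt)."""
    G = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
    a, b = np.linalg.lstsq(G, y, rcond=None)[0]
    return a, b

t = np.linspace(0.0, 10.0, 2001)
omega = 2 * np.pi * 0.7                      # a 0.7 Hz forced oscillation
fo = 0.8 * np.sin(omega * t + 0.4)           # the measured FO component

a, b = fit_sinusoid(t, fo, omega)
counter = -(a * np.sin(omega * t) + b * np.cos(omega * t))  # injected modulation
residual = fo + counter
print(np.max(np.abs(residual)))              # near zero: FO canceled
```

In a real grid the injected modulation passes through device and network dynamics, so phase and gain must be tuned in closed loop, which is what the paper's feedback strategy addresses.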
Lean operation of Spark-Ignition engines can provide higher thermal efficiency compared to standard stoichiometric operation. However, for a homogeneous lean mixture, the associated reduction of flame speeds becomes an important issue from the perspective of robust ignition and fast flame spread throughout the charge. This study is focused on the use of a lean partial fuel stratification strategy that can stabilize the deflagration, while sufficiently fast combustion is ensured via the use of end-gas autoignition. The engine has a spray-guided Direct-Injection Spark-Ignition combustion system and was fueled with either a high-octane certification gasoline or E85. Partial fuel stratification was achieved using several fuel injections during the intake stroke in combination with a small pilot-injection concurrent with the Spark-Ignition. The results reveal that partial fuel stratification enables very stable combustion, offering higher thermal efficiency for parts of the load range in comparison to well-mixed lean and stoichiometric combustion. The heat release and flame imaging demonstrate that the combustion often has three distinct stages. The combustion of the pilot-injected fuel, ignited by the normal spark, acts as a “super igniter,” ensuring a very repeatable initiation of combustion, and flame incandescence reveals locally rich conditions. The second stage is mainly composed of blue flame propagation in a well-mixed lean mixture. The third stage is the compression autoignition of a well-mixed and typically very lean end-gas. The end-gas autoignition is critical for achieving high combustion efficiency, high thermal efficiency, and stable combustion. Partial fuel stratification enables very effective combustion-phasing control, which is critical for controlling the occurrence and intensity of end-gas autoignition. Comparing the gasoline and E85 fuels, it is noted that achieving end-gas autoignition for the higher-octane E85 requires a more aggressive compression of the end-gas via the use of a more advanced combustion phasing or higher intake-air temperature.
We develop a generalized stress inversion technique (or the generalized inversion method) capable of recovering stresses in linear elastic bodies subjected to arbitrary cuts. Specifically, given a set of displacement measurements found experimentally from digital image correlation (DIC), we formulate a stress estimation inverse problem as a partial differential equation-constrained optimization problem. We use gradient-based optimization methods, and we accordingly derive the necessary gradient and Hessian information in a matrix-free form to allow for parallel, large-scale operations. By using a combination of finite elements, DIC, and a matrix-free optimization framework, the generalized inversion method can be used on any arbitrary geometry, provided that the DIC camera can view a sufficient part of the surface. We present numerical simulations and experiments, and we demonstrate that the generalized inversion method can be applied to estimate residual stress.
Predictions of the bulk-scale thermal conductivity of solids using non-equilibrium molecular dynamics (MD) simulations have relied on linear extrapolation of the thermal resistivity versus the reciprocal of the system length in the simulations. Several studies have reported deviation of the extrapolation from linearity near the micro-scale, raising a concern about its applicability to large systems. To investigate this issue, the present work conducted extensive MD simulations of silicon with two different potentials (EDIP and Tersoff-II) at unprecedented length scales up to 10.3 μm and simulation times up to 530 ns. For large systems ≥0.35 μm in size, the non-linearity of the extrapolation of the reciprocal of the thermal conductivity is mostly due to ignoring the dependence of the thermal conductivity on temperature. To account for this dependence, the present analysis fixes the temperature range used for determining the gradient when calculating the thermal conductivity values. However, short systems ≤0.23 μm in size show significant non-linearity in the calculated thermal conductivity values using a temperature window of 500 ± 10 K from the simulation results with the EDIP potential. Since these system sizes are shorter than the mean phonon free path in EDIP (~0.22 μm), the non-linearity may be attributed to phonon transport. For the MD simulations with the Tersoff-II potential there is no significant non-linearity in the calculated thermal conductivity values for systems ranging in size from 0.05 to 5.4 μm.
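The extrapolation procedure described above can be sketched numerically: fit the resistivity 1/k against the reciprocal length 1/L and read the bulk conductivity off the intercept. All values below are made up for illustration and are not the paper's results:

```python
import numpy as np

# Synthetic finite-size data obeying 1/k(L) = 1/k_inf + c/L
# (k_inf_true and c are made-up illustrative values).
k_inf_true, c = 150.0, 0.005               # bulk conductivity, size coefficient
L = np.array([0.1, 0.35, 1.0, 5.0, 10.0])  # system lengths (um)
k_L = 1.0 / (1.0 / k_inf_true + c / L)     # finite-size conductivities

# Linear fit of resistivity 1/k versus 1/L; the intercept is 1/k_inf.
slope, intercept = np.polyfit(1.0 / L, 1.0 / k_L, 1)
k_inf_est = 1.0 / intercept
print(k_inf_est)   # recovers ~150.0 for exactly linear data
```

Deviations from that straight line, as the abstract notes, signal either a temperature dependence of k or sub-mean-free-path transport effects.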
Proceedings of IA3 2020: 10th Workshop on Irregular Applications: Architectures and Algorithms, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Graph coloring is often used in parallelizing scientific computations that run in distributed and multi-GPU environments; it identifies sets of independent data that can be updated in parallel. Many algorithms exist for graph coloring on a single GPU or in distributed memory, but hybrid MPI+GPU algorithms have been unexplored until this work, to the best of our knowledge. We present several MPI+GPU coloring approaches that use implementations of the distributed coloring algorithms of Gebremedhin et al. and the shared-memory algorithms of Deveci et al. The on-node parallel coloring uses implementations in KokkosKernels, which provide parallelization for both multicore CPUs and GPUs. We further extend our approaches to solve for distance-2 coloring, giving the first known distributed and multi-GPU algorithm for this problem. In addition, we propose novel methods to reduce communication in distributed graph coloring. Our experiments show that our approaches operate efficiently on inputs too large to fit on a single GPU and scale up to graphs with 76.7 billion edges running on 128 GPUs.
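The distributed and on-node methods referenced above all build on greedy coloring, which assigns each vertex the smallest color not used by any already-colored neighbor; a minimal sequential sketch of that kernel (illustrative only, not the KokkosKernels implementation):

```python
def greedy_color(adj):
    """Greedy distance-1 coloring: each vertex receives the smallest
    color not already held by a colored neighbor."""
    colors = {}
    for v in adj:                      # visit order affects the color count
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# A 4-cycle is bipartite, so greedy coloring uses 2 colors here.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring = greedy_color(adj)
# Validity check: no edge joins two vertices of the same color.
assert all(coloring[v] != coloring[u] for v in adj for u in adj[v])
```

The parallel versions color vertices speculatively and then fix conflicts; the distance-2 variant additionally forbids sharing a color with neighbors-of-neighbors.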
Proceedings of FTXS 2020: Fault Tolerance for HPC at eXtreme Scale, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Benefits of local recovery (restarting only a failed process or task) have been previously demonstrated in parallel solvers. Local recovery has a reduced impact on application performance due to masking of failure delays (for message-passing codes) or dynamic load balancing (for asynchronous many-task codes). In this paper, we implement MPI-process-local checkpointing and recovery of data (as an extension of the Fenix library) in combination with an existing method for local detection of silent errors in partial-differential-equation solvers, to show a path for incorporating lightweight silent-error resilience. In addition, we demonstrate how asynchrony introduced by maximizing computation-communication overlap can halt the propagation of delays. For a prototype stencil solver (including an iterative-solver-like variant) with injected memory bit flips, results show greatly reduced overhead under weak scaling compared to global recovery, and high failure-masking efficiency. The approach is expected to be generalizable to other MPI-based solvers.
Proceedings of FTXS 2020: Fault Tolerance for HPC at eXtreme Scale, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Gupta, Nikunj; Mayo, Jackson M.; Lemoine, Adrian S.; Kaiser, Hartmut
Exceptions and errors occurring within mission-critical applications due to hardware failures have a high cost. With the emerging Next Generation Platforms (NGPs), the rate of hardware failures will likely increase. Therefore, designing our applications to be resilient is a critical concern for retaining the reliability of results while meeting constraints on power budgets. In this paper, we discuss software resilience in AMTs at both local and distributed scale. We choose HPX to prototype our resiliency designs. We implement two resiliency APIs that we expose to application developers, namely task replication and task replay. Task replication repeats a task n times and executes the copies asynchronously. Task replay reschedules a task up to n times until a valid output is returned. Furthermore, we expose algorithm-based fault tolerance (ABFT) using user-provided predicates (e.g., checksums) to validate the returned results. We benchmark the resiliency schemes for both synthetic and real-world applications at local and distributed scale and show that most of the added execution time arises from the replay, replication, or data movement of the tasks and not from the boilerplate code added to achieve resilience.
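HPX's actual resiliency APIs are C++; purely as an illustration of the task-replay pattern with a user-provided ABFT predicate, here is a Python sketch in which every name is hypothetical:

```python
def async_replay(n, task, validate, *args):
    """Re-run `task` up to n times until `validate` accepts its result
    (mirrors the task-replay pattern; not the actual HPX API)."""
    for attempt in range(n):
        result = task(*args)
        if validate(result):
            return result
    raise RuntimeError("no valid result after %d replays" % n)

# Example: a task whose first result is corrupted by a simulated bit flip.
def flaky_sum(xs, fail_first=[True]):
    s = sum(xs)
    if fail_first[0]:                  # first call returns a corrupted value
        fail_first[0] = False
        return s + 1
    return s

xs = [1, 2, 3, 4]
checksum = lambda r: r == sum(xs)      # ABFT-style user predicate
print(async_replay(3, flaky_sum, checksum, xs))  # replays once, prints 10
```

Task replication follows the same shape, but launches all n copies up front and keeps the first (or majority) result that passes the predicate.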
Proceedings of ExaMPI 2020: Exascale MPI Workshop, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
We present the execution model of Virtual Transport (VT), a new Asynchronous Many-Task (AMT) runtime system that provides unprecedented integration and interoperability with MPI. We have developed VT in conjunction with large production applications to provide a highly incremental, high-value path to AMT adoption in the dominant ecosystem of MPI applications, libraries, and developers. Our aim is that the 'MPI+X' model of hybrid parallelism can smoothly extend to become 'MPI+VT+X'. We illustrate a set of design and implementation techniques that have been useful in building VT. We believe that these ideas and the code embodying them will be useful to others building similar systems, and perhaps provide insight into how MPI might evolve to better support them. We motivate our approach with two applications that are adopting VT and have begun to benefit from increased asynchrony and dynamic load balancing.
Proceedings of IPDRM 2020: 4th Annual Workshop on Emerging Parallel and Distributed Runtime Systems and Middleware, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
As network speeds increase, the overhead of processing incoming messages is becoming onerous enough that many manufacturers now provide network interface cards (NICs) with offload capabilities to handle these overheads. This increase in NIC capabilities creates an opportunity to enable computation on data in-situ on the NIC. These enhanced NICs can be classified into several different categories of SmartNICs. SmartNICs present an interesting opportunity for future runtime software designs. Designing runtime software to be located in the network as opposed to the host level leads to new radical distributed runtime possibilities that were not practical prior to SmartNICs. In the process of transitioning to a radically different runtime software design for SmartNICs there are intermediary steps of migrating current runtime software to be offloaded onto a SmartNIC that also present interesting possibilities. This paper will describe SmartNIC design and how SmartNICs can be leveraged to offload current generation runtime software and lead to future radically different in-network distributed runtime systems.
Proceedings of ExaMPI 2020: Exascale MPI Workshop, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Multithreaded MPI applications are gaining popularity in scientific and high-performance computing. While the combination of programming models is suited to support current parallel hardware, it moves threading models and their interaction with MPI into focus. With the advent of new threading libraries, the flexibility to select threading implementations of choice is becoming an important usability feature. Open MPI has traditionally avoided componentizing its threading model, relying on code inlining and static initialization to minimize potential impacts on runtime fast paths and synchronization. This paper describes the implementation of generic threading runtime support in Open MPI using the Opal Modular Component Architecture. This architecture allows the programmer to select a threading library at compile- or run-time, providing both static initialization of threading primitives and dynamic instantiation of threading objects. In this work, we present the implementation, define the required interfaces, and discuss the trade-offs of dynamic and static initialization.
Understanding the fundamental limits of gas deliverable capacity in porous materials is of critical importance as it informs whether technical targets (e.g., for on-board vehicular storage) are feasible. High-throughput screening studies of rigid materials, for example, have shown they are not able to achieve the original ARPA-E methane storage targets, yet an interesting question remains: what is the upper limit of deliverable capacity in flexible materials? In this work we develop a statistical adsorption model that specifically probes the limit of deliverable capacity in intrinsically flexible materials. The resulting adsorption thermodynamics indicate that a perfectly designed, intrinsically flexible nanoporous material could achieve higher methane deliverable capacity than the best benchmark systems known to date with little to no total volume change. Density functional theory and grand canonical Monte Carlo simulations identify a known metal-organic framework (MOF) that validates key features of the model. Therefore, this work (1) motivates a continued, extensive effort to rationally design a porous material analogous to the adsorption model and (2) calls for continued discovery of additional high deliverable capacity materials that remain hidden from rigid structure screening studies due to nominal non-porosity.
Proceedings of MCHPC 2020: Workshop on Memory Centric High Performance Computing, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Many-core systems are beginning to feature novel large, high-bandwidth intermediate memory as a visible part of the memory hierarchy. This paper discusses how to make use of intermediate memory when composing matrix multiply with transpose to compute A·Aᵀ. We re-purpose the cache-oblivious approach developed by Frigo et al. and apply it to the composition of a bandwidth-bound kernel (transpose) with a compute-bound kernel (matrix multiply). Particular focus is on matrix shapes far from square that are not usually considered. Our codes are simpler than optimized codes but reasonably close in performance. Perhaps more importantly, we develop a paradigm for how to construct other codes using intermediate memories.
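As a schematic of the divide-and-conquer idea (not the paper's tuned kernel), the following recursion for C = A·Aᵀ splits A by rows until the blocks are small, so each level of the memory hierarchy is exploited without knowing its size:

```python
import numpy as np

def aat(A, cutoff=64):
    """Cache-oblivious-style computation of A @ A.T: recursively split A
    by rows until blocks are small (schematic sketch only)."""
    m = A.shape[0]
    if m <= cutoff:
        return A @ A.T                   # base case: block-sized multiply
    h = m // 2
    top, bot = A[:h], A[h:]
    C = np.empty((m, m))
    C[:h, :h] = aat(top, cutoff)         # diagonal blocks recurse
    C[h:, h:] = aat(bot, cutoff)
    C[:h, h:] = top @ bot.T              # off-diagonal block
    C[h:, :h] = C[:h, h:].T              # symmetry: reuse, don't recompute
    return C

A = np.random.rand(200, 50)              # a far-from-square shape
assert np.allclose(aat(A), A @ A.T)
```

A full cache-oblivious formulation would also recurse on the off-diagonal product and interleave the transpose; the sketch keeps only the row-splitting and symmetry reuse that motivate the approach.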
A system’s response to disturbances in an internal or external driving signal can be characterized as performing an implicit computation, where the dynamics of the system are a manifestation of its new state holding some memory of those disturbances. Identifying small disturbances in the response signal requires detailed information about the dynamics of the inputs, which can be challenging. This paper presents a new method, called the Information Impulse Function (IIF), for detecting and time-localizing small disturbances in system response data. The novelty of IIF is its ability to measure relative information content without using Boltzmann’s equation, by modeling signal transmission as a series of dissipative steps. Since IIF achieves a detailed expression of the informational structure in the signal, it is well suited to detecting disturbances in the response signal, i.e., the system dynamics. These findings are based on numerical studies of the topological structure of the dynamics of a nonlinear system under perturbed driving signals. The IIF is compared with both permutation entropy and Shannon entropy to demonstrate its entropy-like relationship with system state and its degree of sensitivity to perturbations in a driving signal.
In this synthesis, we assess present research and anticipate future development needs in modeling water quality in watersheds. We first discuss areas of potential improvement in the representation of freshwater systems pertaining to water quality, including representation of environmental interfaces, in-stream water quality and process interactions, soil health and land management, and (peri-)urban areas. In addition, we provide insights into the contemporary challenges in the practices of watershed water quality modeling, including quality control of monitoring data, model parameterization and calibration, uncertainty management, scale mismatches, and provisioning of modeling tools. Finally, we make three recommendations to provide a path forward for improving watershed water quality modeling science, infrastructure, and practices. These include building stronger collaborations between experimentalists and modelers, bridging gaps between modelers and stakeholders, and cultivating and applying procedural knowledge to better govern and support water quality modeling processes within organizations.
A High-Altitude Electromagnetic Pulse (HEMP) is a potential threat to the power grid. HEMP can couple to transmission lines and cables, causing significant overvoltages that can be harmful to line-connected equipment. The effects of overvoltages on various types of power system components need to be understood. HEMP effects on trip coils were tested and are presented in this report. A high-voltage pulser was built to replicate the induced voltage waveform from a HEMP. The pulser was used to test breaker trip coils with increasing pulse magnitudes ranging from 20 kV to 80 kV. The state-of-health of each trip coil was measured via mechanical operation and impedance measurements before and after each insult to identify any damage or degradation. Dielectric breakdown was observed at the conductor leads during testing, causing the HEMP insult to be diverted to the grounded casing. However, the dielectric breakdown did not interfere with regular device operation.
Research shows that individuals often overestimate their knowledge and performance without realizing they have done so, which can lead to faulty technical outcomes. This phenomenon is known as the Dunning-Kruger effect (Kruger & Dunning, 1999). This research sought to determine if some individuals were more prone to overestimating their performance due to underlying personality and cognitive characteristics. To test our hypothesis, we first collected individual difference measures. Next, we asked participants to estimate their performance on three performance tasks to assess the likelihood of overestimation. We found that some individuals may be more prone to overestimating their performance than others, and that faulty problem-solving abilities and low skill may be to blame. Encouraging individuals to think critically through all options and to consult with others before making a high-consequence decision may reduce overestimation.
A useful and popular waveform for high-performance radar systems is the Linear Frequency Modulated (LFM) chirp. The chirp may have a positive frequency slope with time (up-chirp) or a negative frequency slope with time (down-chirp). There is no inherent advantage to one with respect to the other, except that the receiver needs to be matched to the proper waveform. However, if up-chirps and down-chirps are employed on different pulses in the same Coherent Processing Interval (CPI), then care must be taken to maintain coherence in the range-compressed echo signals. We present the mathematics for doing so, for both correlation processing and stretch processing.
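For intuition about why the receiver must be matched to the chirp slope, the sketch below compares matched and mismatched correlation processing of up- and down-chirps; all parameters are illustrative, not from the paper:

```python
import numpy as np

fs = 100e6                 # sample rate (Hz), illustrative
T = 10e-6                  # pulse length (s)
B = 20e6                   # swept bandwidth (Hz)
n = int(round(T * fs))
t = np.arange(n) / fs

def lfm(slope_sign):
    """Complex LFM chirp; slope_sign = +1 for up-chirp, -1 for down-chirp."""
    k = slope_sign * B / T                       # chirp rate (Hz/s)
    return np.exp(1j * np.pi * k * (t - T / 2) ** 2)

up, down = lfm(+1), lfm(-1)

def compress(echo, ref):
    """Correlation (matched-filter) range compression."""
    return np.correlate(echo, ref, mode="full")

# A pulse compressed with its own matched filter peaks sharply...
good = np.abs(compress(up, up)).max()
# ...but an up-chirp run through the down-chirp filter does not.
bad = np.abs(compress(up, down)).max()
print(good / bad)   # large ratio: slope mismatch smears the peak
```

Maintaining coherence across a CPI that mixes both slopes then amounts to compensating the residual phase terms that differ between the two matched outputs, for correlation and stretch processing alike.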
We present an implementation that can keep a cold-atom ensemble within a sub-millimeter-diameter hole in a transparent membrane. Based on the effective beam diameter of the magneto-optical trap (MOT), set by the d = 400 μm hole diameter, we measure an atom number 10^5 times higher than the value predicted by the conventional d^6 scaling rule. Atoms trapped by the membrane MOT are cooled to 10 μK via a sub-Doppler cooling process and can potentially be coupled to photonic/electronic integrated circuits fabricated in the membrane device, taking a step toward the atom-trap integrated platform.
Future nuclear fuel cycle facilities will see a significant benefit from considering materials accountancy requirements early in the design process. The Material Protection, Accounting, and Control Technologies (MPACT) working group is demonstrating Safeguards and Security by Design (SSBD) for a notional electrochemical reprocessing facility as part of a 2020 Milestone. The idea behind SSBD is to consider regulatory requirements early in the design process to provide more optimized systems and avoid costly retrofits later in the design process. Safeguards modeling, using single analyst tools, allows the designer to efficiently consider materials accountancy approaches that meet regulatory requirements. However, safeguards modeling also allows the facility designer to go beyond current regulations and work toward accountancy designs with rapid response and lower thresholds for detection of anomalies. This type of modeling enables new safeguards approaches and may inform future regulatory changes. The Separation and Safeguards Performance Model (SSPM) has been used for materials accountancy system design and analysis. This paper steps through the process of designing a Material Control and Accountancy (MC&A) system, presents the baseline system design for an electrochemical reprocessing facility, and provides performance metrics from the modeling analysis. The most critical measurements in the electrochemical facility are the spent fuel input, electrorefiner salt, and U/TRU product output measurements. Finally, material loss scenario analysis found that measurement uncertainties (relative standard deviations) for Pu would need to be at 1% (random and systematic error components) or better in order to meet domestic detection goals or as high as 3% in order to meet international detection goals, based on a 100 metric ton per year plant size.
The Sodium-Cooled Fast Reactor (SFR) system was identified during the Generation IV Technology Roadmap as a promising technology to perform the actinide management mission and, if enhanced economics for the system could be realized, also the electricity and heat production missions. The main characteristics of the SFR that make it especially suitable for the actinide management mission are: Consumption of transuranics in a closed fuel cycle, thus reducing the radiotoxicity and heat load which facilitates waste disposal and geologic isolation; Enhanced utilization of uranium resources through efficient management of fissile materials and multi-recycle; and, High level of safety achieved through inherent and passive means that accommodate transients and bounding events with significant safety margins.
The Port of Alaska in Anchorage enables the economic vitality of the Municipality of Anchorage and the State of Alaska. It also provides significant support to defense activities across Alaska, especially to Joint Base Elmendorf-Richardson (JBER), which is immediately adjacent to the Port. For this reason, stakeholders are interested in the resilience of the Port's operations. This report documents a preliminary feasibility analysis for developing an energy system that increases electric supply resilience for the Port and for a specific location inside JBER. The project concept emerged from prior work led by the Municipality of Anchorage and consultation with Port stakeholders. The project consists of a microgrid with PV, storage, and diesel generation, capable of supplying electricity to loads at the Port and a specific JBER location during utility outages, while also delivering economic value during blue-sky conditions. The study aims to estimate the size, configuration, and concept of operations based on existing infrastructure and limited demand data. It also explores potential project benefits and challenges. The report's goal is to inform further stakeholder consultation and next steps.
This work demonstrates how staged heat release from layered metal oxide cathodes in the presence of organic electrolytes can be predicted from basic thermodynamic properties. These prediction methods for heat release are an advancement compared to typical modeling approaches for thermal runaway in lithium-ion batteries, which tend to rely exclusively on calorimetry measurements of battery components. These calculations generate useful new insights when compared to calorimetry measurements for lithium cobalt oxide (LCO) as well as the most common varieties of nickel manganese cobalt oxide (NMC) and nickel cobalt aluminum oxide (NCA). Accurate trends in heat release with varying state of charge are predicted for all of these cathode materials. These results suggest that thermodynamic calculations utilizing a recently published database of properties are broadly applicable for predicting decomposition behavior of layered metal oxide cathodes. Aspects of literature calorimetry measurements relevant to thermal runaway modeling are identified and classified as thermodynamic or kinetic effects. The calorimetry measurements reviewed in this work will be useful for development of a new generation of thermal runaway models targeting applications where accurate maximum cell temperatures are required to predict cascading cell-to-cell propagation rates.
This short concept article discusses four specific ways to eradicate respiratory pandemics once and for all: protecting the nose, mouth, throat, and lungs; new hygiene regimens; clearing the air; and biophysical interventions. Technical breakthroughs in all four of these areas would not only protect people from life-threatening pathogens but also take the dread out of respiratory disease outbreaks.
Optimized designs were achieved using a genetic algorithm to evaluate multi-objective trade space, including Mean-Time-Between-Failure (MTBF) and volumetric power density. This work provides a foundational platform that can be used to optimize additional power converters, such as an inverter for the EV traction drive system as well as trade-offs in thermal management due to the use of different device substrate materials.
For high voltage electrical devices, prevention of high voltage breakdown is critical for device function. Use of polymeric encapsulation such as epoxies is common, but these may include air bubbles or other voids of varying size. The present work aimed to model and experimentally determine the size dependence of breakdown voltage for voids in an epoxy matrix, as a step toward establishing size criteria for void screening. Effects were investigated experimentally for both one-dimensional metal/epoxy/air/epoxy/metal gap sizes from 50 μm to 10 mm, as well as spherical voids of 250 μm, 500 μm, 1 mm and 2 mm sizes. These experimental results were compared to modified Paschen curve and particle-in-cell discharge models; minimum breakdown voltages of 6 - 8.5 kV appeared to be predicted by 1D models and experiments, with minimum breakdown voltage for void sizes of 0.2 - 1 mm. In a limited set of 3D experiments on 250 μm, 500 μm, 1 mm and 2 mm voids within epoxy, the minimum breakdown voltages observed were 18.5 - 20 kV, for 500 μm void sizes. These experiments and models are aimed at providing initial size and voltage criteria for tolerable void sizes and expected discharge voltages to support design of encapsulated high voltage components.
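For reference, the classic (unmodified) Paschen law from which such breakdown models start can be evaluated directly; the constants below are textbook values for air, not the parameters of this work's modified model, and the resulting minimum (~300 V) differs from the epoxy-void results above:

```python
import math

def paschen_v(pd, A=15.0, B=365.0, gamma=0.01):
    """Classic Paschen breakdown voltage (volts) for gap parameter
    pd in Torr*cm. A and B are gas constants for air and gamma the
    secondary-emission coefficient (textbook values, illustrative)."""
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma))
    if denom <= 0:
        return float("inf")            # left of the minimum: no breakdown
    return B * pd / denom

# Sweep pd to locate the Paschen minimum for air.
pds = [0.1 * i for i in range(1, 200)]
vmin_pd = min(pds, key=paschen_v)
print(vmin_pd, paschen_v(vmin_pd))
```

The "modified Paschen curve" referenced in the abstract adjusts this baseline for small gaps and solid-dielectric boundaries, which is what shifts the predicted minima into the kilovolt range observed for the encapsulated voids.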
The magnetized liner inertial fusion (MagLIF) scheme relies on coupling laser energy into an underdense fuel raising the fuel adiabat at the start of the implosion. To deposit energy into the fuel, the laser must first penetrate a laser entrance hole (LEH) foil which can be a significant energy sink and introduce mix. In this paper, we report on experiments investigating laser energy coupling into MagLIF-relevant gas cell targets with LEH foil thicknesses varying from 0.5 μm to 3 μm. Two-dimensional (2D) axisymmetric simulations match the experimental results well for 0.5 μm and 1 μm thick LEH foils but exhibit whole-beam self-focusing and excessive penetration of the laser into the gas for 2 μm and 3 μm thick LEH foils. Better agreement for the 2 μm-thick foil is found when using a different thermal conductivity model in 2D simulations, while only 3D Cartesian simulations come close to matching the 3 μm-thick foil experiments. The study suggests that simulations may over-predict the tendency for the laser to self-focus during MagLIF preheat when thicker LEH foils are used. This effect is pronounced with 2D simulations where the azimuthally symmetric density channel effectively self-focuses the rays that are forced to traverse the center of the plasma. The extra degree of freedom in 3D simulations significantly reduces this effect. The experiments and simulations also suggest that, in this study, the amount of energy coupled into the gas is highly correlated with the laser propagation length regardless of the LEH foil thickness.
Disposal of large, heat-generating waste packages containing the equivalent of 21 pressurized water reactor (PWR) assemblies or more is among the disposal concepts under investigation for a future repository for spent nuclear fuel (SNF) in the United States. Without a long (>200 years) surface storage period, disposal of 21-PWR or larger waste packages (especially if they contain high-burnup fuel) would result in in-drift and near-field temperatures considerably higher than considered in previous generic reference cases that assume either 4-PWR or 12-PWR waste packages (Jové Colón et al. 2014; Mariner et al. 2015; 2017). Sevougian et al. (2019c) identified high-temperature process understanding as a key research and development (R&D) area for the Spent Fuel and Waste Science and Technology (SFWST) Campaign. A two-day workshop in February 2020 brought together campaign scientists with expertise in geology, geochemistry, geomechanics, engineered barriers, waste forms, and corrosion processes to begin integrated development of a high-temperature reference case for disposal of SNF in a mined repository in a shale host rock. Building on the progress made in the workshop, the study team further explored the concepts and processes needed to form the basis for a high-temperature shale repository reference case. The results are described and summarized in this report.
Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith R.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell W.; Hough, Patricia D.; Hu, Kenneth T.; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad; Seidl, Daniel T.; Stephens, John A.; Winokur, Justin G.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user’s manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The Materials Protection, Accounting, and Control Technologies (MPACT) campaign, within the U.S. Department of Energy Office of Nuclear Energy, has developed a Virtual Facility Distributed Test Bed for safeguards and security design for future nuclear fuel cycle facilities. The purpose of the Virtual Test Bed is to bring together experimental and modeling capabilities across the U.S. national laboratory and university complex to provide a one-stop-shop for advanced Safeguards and Security by Design (SSBD). Experimental testing alone of safeguards and security technologies would be cost prohibitive, but testbeds and laboratory processing facilities with safeguards measurement opportunities, coupled with modeling and simulation, provide the ability to generate modern, efficient safeguards and security systems for new facilities. This Virtual Test Bed concept has been demonstrated using a generic electrochemical reprocessing facility as an example, but the concept can be extended to other facilities. While much of the recent work in the MPACT program has focused on electrochemical safeguards and security technologies, the laboratory capabilities have been applied to other facilities in the past (including aqueous reprocessing, fuel fabrication, and molten salt reactors as examples). This paper provides an overview of the Virtual Test Bed concept, a description of the design process, and a baseline safeguards and security design for the example facility. Parallel papers in this issue go into more detail on the various technologies, experimental testing, modeling capabilities, and performance testing.
The Waste Isolation Pilot Plant (WIPP) facility is a U.S. Department of Energy (DOE) operating repository 654 m below the surface in a thick salt formation in southeastern New Mexico. The DOE disposes of transuranic (TRU) waste produced from atomic energy defense activities at the WIPP facility. A portion of the waste shipped to the WIPP facility contains TRU radionuclides co-mingled with polychlorinated biphenyls (PCBs), which fall under U.S. Environmental Protection Agency (EPA) regulations implementing the Toxic Substances Control Act (TSCA). This report documents the risks of PCBs co-mingled with TRU waste (hereafter designated PCB/TRU waste) destined for disposal at the WIPP facility. The analysis is an input to the National Environmental Policy Act (NEPA) assessment by the DOE Carlsbad Field Office (CBFO) for the proposed expansion of the WIPP facility disposal area to include additional waste panels (without increasing the legislated WIPP volume). It is not a compliance calculation supporting a certification renewal, nor does it support a planned change request (PCR) or planned change notice (PCN) to be submitted to the EPA.
The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia for focused collaborative research on Department of Energy (DOE) computer and computational science problems. The institute provides an opportunity for university researchers to learn about problems in computer and computational science at DOE laboratories. Participants conduct leading-edge research, interact with scientists and engineers at the laboratories, and help transfer the results of their research to programs at the labs. Specific CSRI research interest areas include: scalable solvers; optimization; adaptivity and mesh refinement; graph-based, discrete, and combinatorial algorithms; uncertainty estimation; mesh generation; dynamic load balancing; virus and other malicious-code defense; visualization; scalable cluster computers; data-intensive computing; environments for scalable computing; parallel input/output; advanced architectures; and theoretical computer science. The CSRI Summer Program typically includes a weekly seminar series and the publication of a summer proceedings. In 2020, the summer program was conducted entirely virtually due to the COVID-19 pandemic, with all student interns working from home.
Ryder, Kaitlyn L.; Ryder, Landen D.; Sternberg, Andrew L.; Kozub, John A.; Zhang, En X.; Lalumondiere, Stephen D.; Monahan, Daniele M.; Bonsall, Jeremey P.; Khachatrian, Ani; Buchner, Stephen P.; Mcmorrow, Dale; Hales, Joel M.; Zhao, Yuanfu; Wang, Liang; Wang, Chuanmin; Weller, Robert A.; Schrimpf, Ronald D.; Weiss, Sharon M.; Reed, Robert
Ryder, Landen D.; Ryder, Kaitlyn L.; Sternberg, Andrew L.; Kozub, John A.; Zhang, En X.; Linten, Dimitri; Croes, Kristof; Weller, Robert A.; Schrimpf, Ronald D.; Weiss, Sharon M.; Reed, Robert