In recent years, the pervasive use of lithium-ion (Li-ion) batteries in applications such as cell phones, laptop computers, electric vehicles, and grid energy storage systems has prompted the development of specialized battery management systems (BMS). The primary goal of a BMS is to maintain a reliable and safe battery power source while maximizing the calendar life and performance of the cells. To maintain safe operation, a BMS should be programmed to minimize degradation and prevent damage to a Li-ion cell, which can lead to thermal runaway. Cell damage can occur over time if a BMS is not properly configured to avoid overcharging and overdischarging. To prevent cell damage, efficient and accurate algorithms for characterizing cell charging cycles must be employed. In this paper, computationally efficient and accurate ensemble learning algorithms capable of detecting Li-ion cell charging irregularities are described. Additionally, it is shown using machine and deep learning that it is possible to accurately and efficiently detect when a cell has experienced thermal and electrical stress due to overcharging by measuring charging-cycle divergence.
This paper explores unsupervised learning approaches for analysis and categorization of turbulent flow data. Single-point statistics from several high-fidelity turbulent flow simulation data sets are classified using a Gaussian mixture model clustering algorithm. Candidate features are proposed, which include barycentric coordinates of the Reynolds stress anisotropy tensor, as well as scalar and angular invariants of the Reynolds stress and mean strain rate tensors. A feature selection algorithm is applied to the data in a sequential fashion, flow by flow, to identify a good feature set and an optimal number of clusters for each data set. The algorithm is first applied to Direct Numerical Simulation data for plane channel flow, and produces clusters that are consistent with turbulent flow theory and empirical results that divide the channel flow into a number of regions (viscous sub-layer, log layer, etc.). Clusters are then identified for flow over a wavy-walled channel, flow over a bump in a channel, and flow past a square cylinder. Some clusters are closely identified with the anisotropy state of the turbulence, as indicated by the location within the barycentric map of the Reynolds stress tensor. Other clusters can be connected to physical phenomena, such as boundary layer separation and free shear layers. Exemplar points from the clusters, or prototypes, are then identified using a prototype selection method. These exemplars summarize the data set while reducing its size by a factor of 10 to 1000. The clustering and prototype selection algorithms provide a foundation for physics-based, semi-automated classification of turbulent flow states and extraction of a subset of data points that can serve as the basis for the development of explainable machine-learned turbulence models.
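To make the clustering step concrete, the following minimal sketch fits Gaussian mixture models over a range of cluster counts and selects one by BIC. The feature array is a synthetic stand-in for the barycentric coordinates and tensor invariants described above, and the paper's sequential, flow-by-flow feature selection is omitted.

```python
# Minimal sketch of GMM clustering with BIC-based selection of the
# number of clusters; the features here are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 4))  # hypothetical (points x features)

best_bic, best_gmm = np.inf, None
for k in range(2, 9):  # candidate numbers of clusters
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(features)
    bic = gmm.bic(features)
    if bic < best_bic:
        best_bic, best_gmm = bic, gmm

labels = best_gmm.predict(features)  # cluster assignment per data point
print(best_gmm.n_components, best_bic)
```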
By strategically curtailing active power and providing reactive power support, photovoltaic (PV) systems with advanced inverters can mitigate voltage and thermal violations in distribution networks. Quasi-static time-series (QSTS) simulations are increasingly being utilized to study the implementation of these inverter functions as alternatives to traditional circuit upgrades. However, QSTS analyses can yield significantly different results based on the availability and resolution of input data and other modeling considerations. In this paper, we quantified the uncertainty of QSTS-based curtailment evaluations for two different grid-support functions (autonomous Volt-Var and centralized PV curtailment for preventing reverse power conditions) through extensive sensitivity analyses and hardware testing. We found that Volt-Var curtailment evaluations were most sensitive to poor inverter convergence (-56.4%), PV time-series data (-18.4% to +16.5%), QSTS resolution (-15.7%), and inverter modeling uncertainty (+14.7%), while the centralized control case was most sensitive to load modeling (-26.5% to +21.4%) and PV time-series data (-6.0% to +12.4%). These findings provide valuable insights for improving the reliability and accuracy of QSTS analyses for evaluating curtailment and other PV impact studies.
Shah, Chinmay; Campo-Ossa, Daniel D.; Patarroyo-Montenegro, Juan F.; Guruwacharya, Nischal; Bhujel, Niranjan; Trevizan, Rodrigo D.; Andrade, Fabio; Shirazi, Mariko; Tonkoski, Reinaldo; Wies, Richard; Hansen, Timothy M.; Cicilio, Phylicia
In response to national and international carbon reduction goals, renewable energy resources like photovoltaics (PV) and wind, and energy storage technologies like fuel cells, are being extensively integrated into electric grids. All these energy resources require power electronic converters (PECs) to interconnect to the electric grid. These PECs have different response characteristics to dynamic stability issues compared to conventional synchronous generators. As a result, the demand for validated models to study and control these stability issues of PECs has increased drastically. This paper provides a review of the existing PEC model types and their applicable uses. It also describes the suitable model types based on the relevant dynamic stability issues. Challenges and benefits of using the appropriate PEC model type for studying each type of stability issue are also presented.
The ever-increasing need to ensure that code is reliably, efficiently, and safely constructed has fueled the evolution of popular static binary code analysis tools. In identifying potential coding flaws in binaries, tools such as IDA Pro are used to disassemble the binaries into an opcode/assembly language format in support of manual static code analysis. Because of the highly manual and resource-intensive nature of analyzing large binaries, the probability of overlooking potential coding irregularities and inefficiencies is quite high. In this paper, a lightweight, unsupervised data flow methodology is described which uses highly correlated data flow graphs (CDFGs) to identify coding irregularities such that analysis time and required computing resources are minimized. Such analysis accuracy and efficiency gains are achieved by using a combination of graph analysis and unsupervised machine learning techniques, which allows an analyst to focus on the most statistically significant flow patterns while performing binary static code analysis.
Airborne contaminants from fires containing nuclear waste represent significant health hazards and shape the design and operation of nuclear facilities. Much of the data used to formulate DOE-HDBK-3010-94, “Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities,” from the U.S. Department of Energy, were taken over 40 years ago. The objectives of this study were to reproduce experiments from Pacific Northwest Laboratories conducted in June 1973 employing current aerosol measurement methods and instrumentation, develop an enhanced understanding of particulate formation and transport from fires containing nuclear waste, and provide modeling and experimental capabilities for updating current standards and practices in nuclear facilities. A special chamber was designed to conduct small fires containing 25 mL of flammable waste containing lutetium nitrate, ytterbium nitrate, or depleted uranium nitrate. Carbon soot aerosols showed aggregates of primary particles ranging from 20 to 60 nm in diameter. In scanning electron microscopy, ~200-nm spheroidal particles were also observed dispersed among the fractal aggregates. The 200-nm spherical particles were composed of metal phosphates. Airborne release fractions (ARFs) were characterized by leaching filter deposits and quantifying metal concentrations with mass spectrometry. The average mass-based ARF for 238U experiments was 1.0 × 10−3 with a standard deviation of 7.5 × 10−4. For the original experiments, DOE-HDBK-3010-94 states, “Uranium ARFs range from 2 × 10−4 to 3 × 10−3, an uncertainty of approximately an order of magnitude.” Thus, current measurements were consistent with DOE-HDBK-3010-94 values. ARF values for lutetium and ytterbium were approximately one to two orders of magnitude lower than 238U. Metal nitrate solubility may have varied with elemental composition and temperature, thereby affecting ARF values for uranium surrogates (Yb and Lu). In addition to ARF data, solution boiling temperatures and evaporation rates can also be deduced from experimental data.
Clark, Raimi; Young, Jacob; Brooks, William; Hopkins, Matthew M.; Mankowski, John; Stephens, Jacob; Neuber, Andreas
Early light emission provides information about the dominant mechanisms culminating in vacuum surface flashover (anode-initiated vs. cathode-initiated) for particular geometries. From experimental evidence gathered elsewhere, for the case of an insulator oriented at 45° with respect to the anode, anode-initiated flashover is believed to dominate since the field at the anode triple point is roughly three times that of the cathode. Similar to previous work performed on cathode-initiated flashover, light emission from the voltage rise through the impedance collapse is collected into two optical fibers focused on light emanating from the insulator in regions near the anode and cathode. The optical fibers are either connected to PMTs for spectrally integrated localized light intensity information or to a spectrograph used in conjunction with an ICCD camera. Challenges associated with localizing the flashover for optical diagnostics and incorporating the optical diagnostics into the high-field environment are discussed. Initial results for cross-linked polystyrene (Rexolite 1422) support the premise that flashover is initiated from the anode for these geometries, as early light from the anode leads cathode light up to photocathode saturation. Early spectroscopy results show promise for future characterization of the spatio-temporal development of emission from desorbed gas species across the insulator surface and identification of bulk insulator involvement if it occurs.
The last decade has witnessed remarkable progress in the development of quantum technologies. Although fault-tolerant devices likely remain years away, the noisy intermediate-scale quantum devices of today may be leveraged for other purposes. Leading candidates are variational quantum algorithms (VQAs), which have been developed for applications including chemistry, optimization, and machine learning, but whose implementations on quantum devices have yet to demonstrate improvements over classical capabilities. In this Perspective, we propose a variety of ways that the performance of VQAs could be informed by quantum optimal control theory. A major theme throughout is the need for sufficient control resources in VQA implementations; we discuss different ways this need can manifest, outline a variety of open questions, and look to the future.
We present recent results toward the quantification of spray characteristics at engine conditions for an eight-hole counter-bored (stepped) GDI injector – Spray G in the Engine Combustion Network (ECN) nomenclature. This computational study is characterized by two novel features: the detailed description of a real injector's internal surfaces via tomographic reconstruction; and a general equation of state that represents the thermodynamic properties of homogeneous liquid-vapor mixtures. The combined level-set moment-of-fluid approach, coupled to an embedded boundary formulation for moving solid walls, makes it possible to seamlessly connect the injector's internal flow to the spray. The Large Eddy Simulation (LES) discussed here presents evidence of partial hydraulic flipping and, during the closing transient, string cavitation. Results are validated against measurements of spray density profiles and droplet size distribution.
SCWS 2021: 2021 SC Workshops Supplementary Proceedings, Held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis
The next generation of supercomputing resources is expected to greatly expand the scope of HPC environments, both in terms of more diverse workloads and user bases, as well as the integration of edge computing infrastructures. This will likely require new mechanisms and approaches at the operating system level to support these broader classes of workloads along with their different security requirements. We claim that a key mechanism needed for these workloads is the ability to securely compartmentalize the system software executing on a given node. In this paper, we present initial efforts in exploring the integration of secure and trusted computing capabilities into an HPC system software stack. As part of this work we have ported the Kitten Lightweight Kernel (LWK) to the ARM64 architecture and integrated it with the Hafnium hypervisor, a reference implementation of a secure partition manager (SPM) that provides security isolation for virtual machines. By integrating Kitten with Hafnium, we are able to replace the commodity-oriented, Linux-based resource management infrastructure and reduce the overheads introduced by using a full-weight kernel (FWK) as the node-level resource scheduler. While our results are very preliminary, we are able to demonstrate measurable performance improvements on small-scale ARM-based SoC platforms.
We describe efforts in generating synthetic malware samples that have specified behaviors that can then be used to train a machine learning (ML) algorithm to detect behaviors in malware. The idea behind detecting behaviors is that a set of core behaviors exists that are often shared in many malware variants and that being able to detect behaviors will improve the detection of novel malware. However, empirically the multi-label task of detecting behaviors is significantly more difficult than malware classification, only achieving on average 84% accuracy across all behaviors as opposed to the greater than 95% multi-class or binary accuracy reported in many malware detection studies. One of the difficulties in identifying behaviors is that while there are ample malware samples, most data sources do not include behavioral labels, which means that generally there is insufficient training data for behavior identification. Inspired by the success of generative models in improving image processing techniques, we examine and extend (1) a conditional variational autoencoder and (2) a flow-based generative model for malware generation with behavior labels. Initial experiments indicate that synthetic data is able to capture behavioral information and increase the recall of behaviors in novel malware from 32% to 45% without increasing false positives, and to 52% with increased false positives.
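As a rough illustration of the first generative approach, the sketch below outlines a conditional variational autoencoder over fixed-length feature vectors with one-hot behavior labels. The layer sizes, dimensions, and names are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a conditional VAE for malware feature vectors conditioned
# on behavior labels; sizes and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=256, y_dim=10, z_dim=32, h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h), nn.ReLU())
        self.mu = nn.Linear(h, z_dim)
        self.logvar = nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + y_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))

    def forward(self, x, y):
        hid = self.enc(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(hid), self.logvar(hid)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, y], dim=-1)), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# After training, sample synthetic feature vectors for a chosen behavior:
model = CVAE()
y = torch.zeros(64, 10)
y[:, 3] = 1.0                                   # one-hot behavior label
z = torch.randn(64, 32)
x_synth = model.dec(torch.cat([z, y], dim=-1))  # untrained here; shapes only
```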
Many individuals' mobility can be characterized by strong patterns of regular movements and is influenced by social relationships. Social networks are also often organized into overlapping communities which are associated in time or space. We develop a model that can generate the structure of a social network and attribute purpose to individuals' movements, based solely on records of individuals' locations over time. This model distinguishes the attributed purpose of check-ins based on temporal and spatial patterns in check-in data. Because a location-based social network dataset with authoritative ground-truth to test our entire model does not exist, we generate large scale datasets containing social networks and individual check-in data to test our model. We find that our model reliably assigns community purpose to social check-in data, and is robust over a variety of different situations.
This work reports an on-wafer study of avalanche behavior and failure analysis of in-house fabricated 1.3 kV GaN-on-GaN P-N diodes. DC breakdown is measured at different temperatures to confirm avalanche behavior. The diode's avalanche ruggedness is measured directly on-wafer using a modified unclamped inductive switching (UIS) test set-up with an integrated thermal chuck and a high-speed CCD for real-time imaging during the test. The avalanche ruggedness of the GaN P-N diode is evaluated and compared with that of a commercial SiC Schottky diode of similar voltage and current rating. Failure analysis is done using SEM and optical microscopy to gain insight into the diode's failure mechanism during avalanche operation.
We propose primal-dual mesh optimization algorithms that overcome shortcomings of the standard algorithm while retaining some of its desirable features. “Hodge-Optimized Triangulations” defines the “HOT energy” as a bound on the discretization error of the diagonalized Delaunay Hodge star operator. The HOT energy is a natural choice for an objective function, but it is unstable for both mathematical and algorithmic reasons: it has minima for collapsed edges, and its extrapolation to non-regular triangulations is inaccurate and has unbounded minima. We propose a different extrapolation with a stronger theoretical foundation. We also propose new objectives, based on normalizations of the HOT energy, with barriers to edge collapses and other undesirable configurations. We then propose mesh improvement algorithms that couple these: when HOT optimization nearly collapses an edge, we actually collapse the edge; otherwise, we use the barrier objective to update positions and weights. By combining discrete connectivity changes with continuous optimization, we more fully explore the space of possible meshes and obtain higher-quality solutions.
As climate change and human migration accelerate globally, decision-makers are seeking tools that can deepen their understanding of the complex nexus between climate change and human migration. These tools can help to identify populations under pressure to migrate, and to explore proactive policy options and adaptive measures. Given the complexity of factors influencing migration, this article presents a system dynamics-based model that couples migration decision making and behavior with the interacting dynamics of economy, labor, population, violence, governance, water, food, and disease. The regional model is applied here to the test case of migration within and beyond Mali. The study explores potential systems impacts of a range of proactive policy solutions and shows that improving the effectiveness of governance and increasing foreign aid to urban areas have the highest potential of those investigated to reduce the necessity to migrate in the face of climate change.
This work compared the effects of explicitly modeling grain structure in hypervelocity impact simulations. Comparisons of strain rate at failure (fragment size) and material temperature were made between a suite of simulations performed with the standard bulk modeling structure and one in which individual grains were modeled. Smaller fragments or higher temperatures are needed to match EO/IR signatures from observed impacts. Results from the various studies described herein indicate that strain rate at failure is influenced primarily by projectile size, impact velocity, and material porosity. Material temperature is predominantly influenced by impact velocity and porosity, not by projectile size. Changes to the material properties within grains tended to affect lower strain rates only, but material interfaces (here, manifested as material porosity) drastically increased strain rate at failure and material temperatures. Higher strain rates are likely to produce smaller debris fragments, which, along with hot debris, may help provide evidence supporting the generation of sub-micron fragments currently required by many EO/IR predictive models to successfully compare with observed hypervelocity impacts. Future work will focus on extending the study to three dimensions, assessing more realistic grain aspect ratios, and simulating other types of interfaces such as inclusions and dislocations.
The accumulation of point defects and defect clusters in materials, as seen in irradiated metals for example, can lead to the formation and growth of voids. Void nucleation results from the condensation of supersaturated vacancies and depends strongly on the stress state. It is usually assumed that such stress states are produced by microstructural defects such as dislocations, grain boundaries, or triple junctions; however, much less attention has been paid to the formation of voids near microcracks. In this paper, we investigate the coupling between point-defect diffusion/recombination and concentrated stress fields near mode-I crack tips via a spatially-resolved rate theory approach. A modified chemical potential enables point-defect diffusion to be partially driven by the mechanical fields in the vicinity of the crack tip. Simulations are carried out for microcracks using the Griffith model with increasing stress intensity factor KI. Our results show that below a threshold for the stress intensity factor, the microcrack acts purely as a microstructural sink, absorbing point defects. Above this threshold, vacancies accumulate at the crack tip. These results suggest that, even in the absence of plastic deformation, voids can form in the vicinity of a microcrack for a given load when the crack's characteristic length is above a critical length. While in ductile metals irradiation damage generally causes hardening and corresponding quasi-brittle cleavage, our results show that irradiation conditions can favor void formation near microstructural stressors such as crack tips, leading to lower resistance to crack propagation than predicted by traditional failure analysis.
In this study, a complete inelastic equation of state (IEOS) for solids is developed based on a superposition of thermodynamic energy potentials. The IEOS allows for a tensorial stress state by including an isochoric hyperelastic Helmholtz potential in addition to the zero-kelvin isotherm and lattice vibration energy contributions. Inelasticity is introduced through the nonlinear equations of finite strain plasticity, which utilize the temperature-dependent Johnson–Cook yield model. Material failure is incorporated into the model by coupling the damage history variable to the energy potentials. The numerical evaluation of the IEOS requires a nonlinear solution of stress, temperature, and history variables associated with elastic trial states for stress and temperature. The model is implemented in the ALEGRA shock and multi-physics code, and the applications presented include single-element deformation paths, the Taylor anvil problem, and an energetically driven thermo-mechanical problem.
In this paper, we characterize the logarithmic singularities arising in the method of moments from the Green’s function in integrals over the test domain, and we use two approaches for designing geometrically symmetric quadrature rules to integrate these singular integrands. These rules exhibit better convergence properties than quadrature rules for polynomials and, in general, lead to better accuracy with a lower number of quadrature points. In this work, we demonstrate their effectiveness for several examples encountered in both the scalar and vector potentials of the electric-field integral equation (singular, near-singular, and far interactions) as compared to the commonly employed polynomial scheme and the double Ma–Rokhlin–Wandzura (DMRW) rules, whose sample points are located asymmetrically within triangles.
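A one-dimensional analogue illustrates why such specialized rules pay off: Gauss–Legendre quadrature, which is exact for polynomials, converges only slowly on a log-singular integrand. The snippet below is an illustrative sketch, not the paper's triangle-domain rules, and uses the known value ∫₀¹ ln x dx = −1.

```python
# 1-D analogue: polynomial (Gauss-Legendre) quadrature converges only
# algebraically on a log-singular integrand, motivating adapted rules.
import numpy as np

def gauss_legendre_01(f, n):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x01, w01 = 0.5 * (x + 1.0), 0.5 * w        # map to [0, 1]
    return np.sum(w01 * f(x01))

for n in (4, 8, 16, 32):
    err = abs(gauss_legendre_01(np.log, n) - (-1.0))
    print(n, err)  # error shrinks slowly, not exponentially, with n
```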
Modulation doping is a commonly adopted technique to create two-dimensional (2D) electrons or holes in semiconductor heterostructures. One constraint, however, is that the intentional dopants required for modulation doping are controlled and incorporated during the growth of heterostructures. Using undoped strained germanium quantum wells as the model material system, we show, in this work, that modulation doping can be achieved post-growth of heterostructures by ion implantation and dopant-activation anneals. The carrier density is controlled ex situ by varying the ion fluence and implant energy, and an empirical calibration curve is obtained. While the mobility of the resulting 2D holes is lower than that in undoped heterostructure field-effect transistors built using the same material, the achievable carrier density is significantly higher. Potential applications of this modulation-doping technique are discussed.
Branch, Brittany A.; Frank, Geoff; Abbott, Andrew; Lacina, David; Dattelbaum, Dana M.; Neel, Christopher; Spowart, Jonathan
With the advent of additive manufacturing (AM) techniques, a new class of shockwave mitigation and structural supports has been realized through the hierarchical assembly of polymer materials. To date, there have been a limited number of studies investigating the role of structure on shockwave localization and whether AM offers a means to tailor shockwave behavior. Of particular interest is whether the mesoscopic structure can be tailored to achieve shockwave properties in one direction of impact vs the other. Here, we illustrate directional response in engineered polymer foams. In situ time-resolved x-ray phase contrast imaging at the Advanced Photon Source was used to characterize these diode-like structures. This work offers a breakthrough in materials technology for the development of protective structures that require augmentation of shock in one direction while diminishing transmission in the opposite direction.
Water flow in nanometer or sub-nanometer hydrophilic channels bears special importance in diverse fields of science and engineering. However, the nature of such water flow remains elusive. Here, we report our molecular-modeling results on water flow in a sub-nanometer clay interlayer between two montmorillonite layers. We show that a fast advective flow can be induced by evaporation at one end of the interlayer channel; that is, a large suction pressure created by evaporation (∼818 MPa) is able to drive a fast water flow through the channel (∼0.88 m/s for a 46 Å-long channel). Scaling the pressure gradient up to a 2 μm particle, the water velocity is estimated to be about 95 μm/s, indicating that water can quickly flow through a μm-sized clay particle within seconds. The prediction seems to be confirmed by our thermogravimetric analysis of bentonite hydration and dehydration processes, which indicates that water transport at the early stage of dehydration is a fast advective process, followed by a slow diffusion process. The possible occurrence of a fast advective water flow in clay interlayers prompts us to reassess water transport in a broad set of natural and engineered systems such as clay swelling/shrinking, moisture transport in soils, water uptake by plants, water imbibition/release in unconventional hydrocarbon reservoirs, and cap rock integrity of supercritical CO2 storage.
Motivated by the need to simulate the effects of underwater explosion on ship structures, we develop a new cavitating acoustics formulation. The proposed approach is consistent with existing methods where the cavitation phenomenon is captured with a bilinear constitutive law. However, the new formulation is in terms of velocity potential, as opposed to the existing displacement-potential and pressure formulations. Also unique to the proposed formulation is a new generalized time-stepping procedure specific to cavitating acoustics, which has the ability to introduce numerical damping to control frothing. Numerical examples of varying complexity are presented to illustrate the effectiveness of the proposed approach and the ability to use velocity potential as a primary field variable for cavitating acoustics simulations.
Karathanassis, Ioannis K.; Hwang, Joonsik; Koukouvinis, Phoevos; Pickett, Lyle M.
A high-speed flow visualisation set-up comprising combined diffuse backlight illumination (DBI) and schlieren imaging has been developed to illustrate the highly transient, two-phase flow arising in a real-size optical fuel injector. The different illumination nature of the two techniques, diffuse and parallel light respectively, allows the capture of refractive-index gradients due to the presence of both interfaces and density gradients within the orifice. Hence, the onset of cavitation and secondary-flow motion within the sac and injector hole can be concurrently visualised. Experiments were conducted utilising a diesel injector fitted with a single-hole transparent tip (ECN Spray D) at injection pressures of 700–900 bar and ambient pressures in the range of 1–20 bar. High-speed DBI images obtained at 100,000 fps revealed that the orifice, due to its tapered layout, is mildly cavitating, with relatively constant cavity sheets arising mainly in regions of manufacturing imperfections. Nevertheless, schlieren images obtained at the same frame rate demonstrated that a multitude of vortices with short lifetimes arise at different scales in the sac and nozzle regions during the entire duration of the injection cycle, but the vortices do not necessarily result in phase change. The magnitude and exact location of coherent vortical structures have a measurable influence on the dynamics of the spray emerging downstream of the injector outlet, leading to distinct differences in the variation of its cone angle depending on the injection and ambient pressures examined.
Heterojunctions of semiconductors and metals are the fundamental building blocks of modern electronics. Coherent heterostructures between dissimilar materials can be achieved by composition, doping, or heteroepitaxy of chemically different elements. Here, we report the formation of coherent single-layer 1H-1T MoS2 heterostructures by mechanical exfoliation on Au(111), which are chemically homogeneous with matched lattices but show electronically distinct semiconducting (1H phase) and metallic (1T phase) character, with the formation of these heterojunctions attributed to a combination of lattice strain and charge transfer. The exfoliation approach employed is free of the tape residues usually found in many exfoliation methods and yields single-layer MoS2 with millimeter (mm) size on the Au surface. Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM), scanning tunneling microscopy (STM), and scanning tunneling spectroscopy (STS) have collectively been employed to elucidate the structural and electronic properties of MoS2 monolayers on Au substrates. Bubbles in the MoS2, formed by the trapping of ambient adsorbates beneath the single layer during deposition, have also been observed and characterized. Our work here provides a basis to produce two-dimensional heterostructures, which represent potential candidates for future electronic devices.
We present a new evaluation framework for implicit and explicit (IMEX) Runge-Kutta time-stepping schemes. The new framework uses a linearized nonhydrostatic system of normal modes. We utilize the framework to investigate the stability of IMEX methods and their dispersion and dissipation of gravity, Rossby, and acoustic waves. We test the new framework on a variety of IMEX schemes and use it to develop and analyze a set of second-order low-storage IMEX Runge-Kutta methods with a high Courant-Friedrichs-Lewy (CFL) number. We show that the new framework is more selective than the 2-D acoustic system previously used in the literature. Schemes that are stable for the 2-D acoustic system are not stable for the system of normal modes.
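As a toy illustration of the kind of analysis such a framework automates, the sketch below computes the amplification factor of the simplest IMEX pairing, explicit Euler plus implicit Euler, applied to a scalar oscillatory mode y' = iωy. The schemes studied in the paper are higher-order IMEX Runge-Kutta methods applied to a full system of normal modes; this only shows the procedure.

```python
# Toy linear-stability check: amplification factor R of a first-order
# IMEX (explicit + implicit Euler) step on y' = i*omega*y.
import numpy as np

def amplification(z_e, z_i):
    # one step of y1 = y0 + z_e*y0 + z_i*y1  =>  R = (1 + z_e)/(1 - z_i)
    return (1.0 + z_e) / (1.0 - z_i)

for w in np.linspace(0.5, 4.0, 8):
    z = 1j * w  # omega * dt for a pure oscillatory (wave) mode
    expl = abs(amplification(z, 0))           # fully explicit: |R| > 1, unstable
    impl = abs(amplification(0, z))           # fully implicit: |R| < 1, damped
    split = abs(amplification(z / 2, z / 2))  # even split: |R| = 1, neutral
    print(f"w*dt={w:.1f}  explicit={expl:.3f}  implicit={impl:.3f}  imex={split:.3f}")
```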
The continental shelves of the Arctic Ocean and surrounding seas contain large stocks of organic matter (OM) and methane (CH4), representing a potential ecosystem feedback to climate change not included in international climate agreements. We performed a structured expert assessment with 25 permafrost researchers to combine quantitative estimates of the stocks and sensitivity of organic carbon in the subsea permafrost domain (i.e. unglaciated portions of the continental shelves exposed during the last glacial period). Experts estimated that the subsea permafrost domain contains ~560 gigatons carbon (GtC; 170–740, 90% confidence interval) in OM and 45 GtC (10–110) in CH4. Current fluxes of CH4 and carbon dioxide (CO2) to the water column were estimated at 18 (2–34) and 38 (13–110) megatons C yr–1, respectively. Under Representative Concentration Pathway (RCP) 8.5, the subsea permafrost domain could release 43 Gt CO2-equivalent (CO2e) by 2100 (14–110) and 190 Gt CO2e by 2300 (45–590), with ~30% fewer emissions under RCP2.6. The range of uncertainty demonstrates a serious knowledge gap but provides initial estimates of the magnitude and timing of the subsea permafrost climate feedback.
Networked microgrids are clusters of geographically-close, islanded microgrids that can function as a single, aggregate island. This flexibility enables customer-level resilience and reliability improvements during extreme event outages and also reduces utility costs during normal grid operations. To achieve this cohesive operation, microgrid controllers and external connections (including advanced communication protocols, protocol translators, and/or internet connection) are needed. However, these advancements also increase the vulnerability landscape of networked microgrids, and significant consequences could arise during networked operation, increasing the potential for cascading impacts. To address these issues, this report seeks to understand the unique components, functions, and communications within networked microgrids, what cybersecurity solutions can be implemented, and what solutions need to be developed. A literature review of microgrid cybersecurity research is provided, and a gap analysis of what is additionally needed for securing networked microgrids is performed. Relevant cyber hygiene and best practices to implement are provided, as well as ideas on how cybersecurity can be integrated into networked microgrid design. Lastly, future directions of networked microgrid cybersecurity R&D are provided to inform next steps.
One of the objectives of the United States (U.S.) Department of Energy's (DOE) Office of Nuclear Energy's Spent Fuel and Waste Science and Technology Campaign is to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the U.S. has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of the DOE (Nuclear Waste Policy Act 1982). Any repository licensed to dispose of the SNF must meet requirements regarding the long-term performance of that repository. For an evaluation of the long-term performance of the repository, one of the events that may need to be considered is the SNF achieving a critical configuration. Of particular interest is the potential behavior of SNF in dual-purpose canisters (DPCs), which are currently being used to store and transport SNF but were not designed for permanent geologic disposal. A two-phase study has been initiated to begin examining the potential consequences, with respect to long-term repository performance, of criticality events that might occur during the postclosure period in a hypothetical repository containing DPCs. Phase I, a scoping phase, consisted of developing an approach intended to be a starting point for the development of the modeling tools and techniques that may eventually be required either to exclude criticality from or to include criticality in a performance assessment (PA) as appropriate; Phase I is documented in Price et al. (2019). The Phase I approach guided the analyses and simulations done in Phase II to further the development of these modeling tools and techniques as well as the overall knowledge base. The purpose of this report is to document the results of the analyses conducted during Phase II. The remainder of Section 1 presents the background, objective, and scope of this report, as well as the relevant key assumptions used in the Phase II analyses and simulations. Subsequent sections discuss the analyses that were conducted (Section 2), the results of those analyses (Section 3), and the summary and conclusions (Section 4). This report fulfills the Spent Fuel and Waste Science and Technology Campaign deliverable M2SF-20SN010305061.
Harris, Alexandra; Mcmillan, Jeremiah T.; Listyg, Ben J.; Matzen, Laura E.; Carter, Nathan T.
The Sandia Matrices are a free alternative to the Raven's Progressive Matrices (RPMs). This study offers a psychometric review of Sandia Matrices items focused on two of the most commonly investigated issues regarding the RPMs: (a) dimensionality and (b) sex differences. Model-data fit of three alternative factor structures is compared using confirmatory multidimensional item response theory (IRT) analyses, and measurement equivalence analyses are conducted to evaluate potential sex bias. Although results are somewhat inconclusive regarding factor structure, results do not show evidence of bias or mean differences by sex. Finally, although the Sandia Matrices software can generate a virtually unlimited number of items, editing and validating items may be infeasible for many researchers. Further, to aid implementation of the Sandia Matrices, we provide scoring materials for two brief static tests and a computer adaptive test. Implications and suggestions for future research using the Sandia Matrices are discussed.
This paper presents new measurements of species concentrations, temperature, and mixture fraction in selected regions of a turbulent ethanol spray flame. The line Raman–LIF–CO–OH setup developed at Sandia's Combustion Research Facility is utilised to probe regions of a spray flame where laser breakdown of liquid droplets is avoided and the remaining interferences can be corrected. The spray flame is stabilised on the piloted Sydney needle spray burner, where axial translation of the liquid injecting needle in the air-blast stream can transition the spray from dilute to dense. The key to obtaining successful measurements is found to be multifaceted and includes: the appropriate selection of flame conditions; high sensitivity of the Raman detection system permitting reduced laser energies; development of a pre-processing algorithm to reject strong droplet interferences; and application of the hybrid matrix inversion method combined with wavelet denoising to account for interference corrections and noise at the very low signal levels obtained. Unique and necessary for the successful measurements reported in this paper, a pre-processing algorithm is outlined that removes data points corrupted with strong interferences from droplets. These interferences arise from a range of sources, but the most intense are due to the laser interaction with surrounding mist or liquid fragments, such that measurements near the jet centreline are corrupted and hence discarded. Reliable measurements of mixture fraction, temperature obtained from the sum of the species number densities, and species mole fractions are reported for regions in the flames sufficiently far from the centreline. The paper demonstrates the feasibility of the judicious use of Raman scattering in turbulent spray flames, the results of which will be extremely useful for validating numerical simulations.
Molecular diffusion coefficients calculated using molecular dynamics (MD) simulations suffer from finite-size (i.e., finite box size and finite particle number) effects. Results from finite-sized MD simulations can be upscaled to infinite simulation size by applying a correction factor. For self-diffusion of single-component fluids, this correction has been well-studied by many researchers including Yeh and Hummer (YH); for binary fluid mixtures, a modified YH correction was recently proposed for correcting MD-predicted Maxwell–Stefan (MS) diffusion rates. Here we use both empirical and machine learning methods to identify improvements to the finite-size correction factors for both self-diffusion and MS diffusion of binary Lennard-Jones (LJ) fluid mixtures. Using artificial neural networks (ANNs), the error in the corrected LJ fluid diffusion is reduced by an order of magnitude versus existing YH corrections, and the ANN models perform well for mixtures with large dissimilarities in size and interaction energies where the YH correction proves insufficient.
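For reference, the single-component Yeh–Hummer correction mentioned above takes the familiar hydrodynamic form for a cubic periodic box of edge length L, with shear viscosity η and lattice constant ξ ≈ 2.837297; the binary-mixture and ANN-based corrections studied here refine this baseline:

```latex
D^{\infty}_{\mathrm{self}} \;=\; D^{\mathrm{PBC}}_{\mathrm{self}} \;+\; \frac{\xi\, k_{B} T}{6 \pi \eta L},
\qquad \xi \approx 2.837297
```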
Domestic nuclear power is facing increased financial pressures from a variety of areas, and there is pressure on these utilities to reduce their cost of operation. Currently, about 20%–30% of all on-site personnel are related to physical security. The LWRS Program recognized that R&D related to physical security could play a role in providing nuclear utilities technical and staffing efficiency options to meet their physical security commitments; however, utilities often lack the technical basis, or the ability to create one, needed to realize or implement these efficiencies. Toward this end, the LWRS Program created the Physical Security Pathway in September 2019. The pathway performs R&D to develop methods, tools, and technologies to optimize and modernize a nuclear power facility's security posture. The pathway will: (1) conduct research on risk-informed techniques for physical security that account for a dynamic adversary; (2) apply advanced modeling and simulation tools to better inform physical-security scenarios and reduce uncertainties in force-on-force modeling; (3) assess benefits from proposed enhancements and novel mitigation strategies and explore changes to best practices, guides, or regulation to enable modernization; and (4) enhance and provide the technical basis for stakeholders to employ new security methods, tools, and technologies.
Two methods are examined for extending the life of tritium targets for production of 14 MeV neutrons by the 3H(2H,n)4He nuclear reaction. With thick-film targets, the neutron production rate decreases with time due to isotope exchange of tritium in the film with implanted deuterium. In this case, the target life is maximized by operating the target at elevated temperature, where the implanted deuterium mixes by thermal diffusion throughout the entire thickness of the film. The number of neutrons obtained from a target is then proportional to the initial tritium content of the film. A novel thin-film target design was also developed and tested. With these thin-film targets, the incident deuterium is implanted through the tritide into the underlying substrate material. A thin permeation barrier layer between the tritide film and substrate reduces the rate of tritium loss from the tritide film. Good thin-film target performance was achieved using W and Fe for the barrier and substrate materials, respectively. Thin-film targets were fabricated, tested, and shown to produce a similar number of neutrons as thick-film targets while using only a small fraction of the amount of tritium.
CTF is a thermal-hydraulic subchannel code developed to predict light water reactor (LWR) core behavior. It is a version of Coolant Boiling in Rod Arrays (COBRA) developed by Oak Ridge National Laboratory (ORNL) and North Carolina State University (NCSU) and used in the Consortium for the Advanced Simulation of LWRs (CASL). In this work, the existing CTF code verification matrix is expanded, which helps ensure that the code is a faithful representation of the underlying mathematical model. The suite of code verification tests is mapped to the underlying conservation equations of CTF, and significant gaps are addressed. As such, five new problems are incorporated: isokinetic advection, conduction, pressure drop, convection, and pipe boiling. Convergence behavior and numerical errors are quantified for each of the tests, and all tests converge at the correct rate to their corresponding analytic solution. A new verification utility that generalizes the code verification process is used to incorporate these problems into the CTF automated test suite.
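The convergence-rate check at the heart of such code verification reduces to a short calculation: given discretization errors against an analytic solution on successively refined meshes, the observed order of accuracy should match the scheme's formal order. A minimal sketch with hypothetical error values:

```python
# Standard code-verification calculation: observed order of accuracy
# from errors on two successively refined meshes (err ~ C * h^p).
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Return p; compare against the scheme's formal order of accuracy."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# hypothetical L2 errors against an analytic solution on h, h/2, h/4 meshes
errors = [4.0e-3, 1.0e-3, 2.5e-4]
for e0, e1 in zip(errors, errors[1:]):
    print(observed_order(e0, e1))  # -> ~2.0, i.e. second-order convergence
```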
Vansco, Michael F.; Caravan, Rebecca L.; Pandit, Shubhrangshu; Zuraski, Kristen; Winiberg, Frank A.F.; Au, Kendrew; Bhagde, Trisha; Trongsiriwat, Nisalak; Walsh, Patrick J.; Osborn, David L.; Percival, Carl J.; Klippenstein, Stephen J.; Taatjes, Craig A.; Lester, Marsha I.
Isoprene is the most abundant non-methane hydrocarbon emitted into the Earth's atmosphere. Ozonolysis is an important atmospheric sink for isoprene, which generates reactive carbonyl oxide species (R1R2C=O+O−) known as Criegee intermediates. This study focuses on characterizing the catalyzed isomerization and adduct formation pathways for the reaction between formic acid and methyl vinyl ketone oxide (MVK-oxide), a four-carbon unsaturated Criegee intermediate generated from isoprene ozonolysis. syn-MVK-oxide undergoes intramolecular 1,4 H-atom transfer to form a substituted vinyl hydroperoxide intermediate, 2-hydroperoxybuta-1,3-diene (HPBD), which subsequently decomposes to hydroxyl and vinoxylic radical products. Here, we report direct observation of HPBD generated by formic acid catalyzed isomerization of MVK-oxide under thermal conditions (298 K, 10 torr) using multiplexed photoionization mass spectrometry. The acid catalyzed isomerization of MVK-oxide proceeds by a double hydrogen-bonded interaction followed by a concerted H-atom transfer via submerged barriers to produce HPBD and regenerate formic acid. The analogous isomerization pathway catalyzed with deuterated formic acid (D2-formic acid) enables migration of a D atom to yield partially deuterated HPBD (DPBD), which is identified by its distinct mass (m/z 87) and photoionization threshold. In addition, bimolecular reaction of MVK-oxide with D2-formic acid forms a functionalized hydroperoxide adduct, which is the dominant product channel, and is compared to a previous bimolecular reaction study with normal formic acid. Complementary high-level theoretical calculations are performed to further investigate the reaction pathways and kinetics.
Accurate synthetic spectra that rely on large Line-By-Line (LBL) databases are used in a wide range of applications such as high-temperature combustion, atmospheric re-entry, planetary surveillance, and laboratory plasmas. Conventionally, synthetic spectra are calculated by computing a lineshape for every spectral line in the database and adding those together, which may take multiple hours for large databases. In this paper we propose a new approach for spectral synthesis based on an integral transform: the synthetic spectrum is calculated as the integral over the product of a Voigt profile and a newly proposed three-dimensional “lineshape distribution function”, which is a function of spectral position and Gaussian and Lorentzian width coordinates. A fast discrete version of this transform based on the Fast Fourier Transform (FFT) is proposed, which improves performance compared to the conventional approach by several orders of magnitude while maintaining accuracy. Strategies that minimize the discretization error are discussed. A Python implementation of the method is compared against the state-of-the-art spectral code RADIS and has since been adopted as RADIS's default synthesis method. The synthesis of a benchmark CO2 spectrum consisting of 1.8 M spectral lines and 200k spectral points took only 3.1 s using the proposed method (10¹¹ lines × spectral points/s), a factor of ~300 improvement over the state of the art, with the relative improvement generally increasing with the number of lines and/or number of spectral points. Finally, an experimental GPU implementation of the method was also benchmarked, demonstrating another 2–3 orders of magnitude performance increase, achieving up to 5 × 10¹⁴ lines × spectral points/s.
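The sketch below illustrates the convolution idea in its simplest single-width form: all line strengths are binned onto the spectral grid once and convolved with a single Voigt kernel via FFT. The paper's method generalizes this to lines of differing widths through the three-dimensional lineshape distribution function; the line list and widths here are synthetic placeholders.

```python
# Single-width simplification of convolution-based LBL synthesis:
# histogram line strengths onto the grid, then one FFT convolution.
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import voigt_profile

nu = np.arange(2000.0, 2400.0, 0.002)          # spectral axis [cm-1]
rng = np.random.default_rng(1)
line_nu = rng.uniform(nu[0], nu[-1], 100_000)  # hypothetical line centers
line_S = rng.exponential(1.0, line_nu.size)    # hypothetical strengths

# stick spectrum: bin all line strengths onto the wavenumber grid
sticks, _ = np.histogram(line_nu, bins=nu.size, range=(nu[0], nu[-1]),
                         weights=line_S)

# shared Voigt kernel (sigma: Gaussian width, gamma: Lorentzian HWHM)
k = np.arange(-2.0, 2.0, 0.002)
kernel = voigt_profile(k, 0.02, 0.05)
spectrum = fftconvolve(sticks, kernel, mode="same")  # one pass over all lines
```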
Source code is a form of human communication, albeit one where the information shared between the programmers reading and writing the code is constrained by the requirement that the code executes correctly. Programming languages are more syntactically constrained than natural languages, but they are also very expressive, allowing a great many different ways to express even very simple computations. Still, code written by developers is highly predictable, and many programming tools have taken advantage of this phenomenon, relying on language model surprisal as a guiding mechanism. Additionally, while surprisal has been validated as a measure of cognitive load in natural language, its relation to human cognitive processes in code is still poorly understood. In this paper, we explore the relationship between surprisal and programmer preference at a small granularity: do programmers prefer more predictable expressions in code? Using meaning-preserving transformations, we produce equivalent alternatives to developer-written code expressions and run a corpus study on Java and Python projects. In general, language models rate the code expressions developers choose to write as more predictable than these transformed alternatives. Then, we perform two human subject studies asking participants to choose between two equivalent snippets of Java code with different surprisal scores (one original, one transformed). We find that programmers do prefer more predictable variants, and that stronger language models like the transformer align more often and more consistently with these preferences.
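A minimal sketch of the surprisal measurement underlying this kind of study follows: per-token surprisal, −log p(token | prefix), averaged over a code snippet under a causal language model. The use of GPT-2 here is an assumption for illustration, not necessarily a model used in the paper.

```python
# Sketch: mean per-token surprisal of a code snippet under a causal LM,
# the kind of score used to compare equivalent code variants.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")       # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def mean_surprisal(code: str) -> float:
    ids = tok(code, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # surprisal of token t is -log p(token_t | tokens_<t)
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    tok_logp = logp.gather(1, ids[0, 1:, None]).squeeze(1)
    return (-tok_logp).mean().item()

print(mean_surprisal("x += 1"))
print(mean_surprisal("x = x + 1"))  # compare predictability of variants
```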
This paper describes the development of vertical GaN PN diodes for high-voltage applications. A centerpiece of this work is the creation of a foundry effort that incorporates epitaxial growth, wafer metrology, device design, processing, and characterization, as well as reliability evaluation and failure analysis. A parallel effort aims to develop very high voltage (up to 20 kV) GaN PN diodes for use as devices to protect the electric grid against electromagnetic pulses.
Two active airflow control methods are investigated to mitigate advective and particle losses from the open aperture of a falling particle receiver. Advective losses can be reduced via active airflow methods. However, in the case of once-through suction, the energy lost as enthalpy of hot air due to active airflow needs to be minimized so that thermal efficiency can be maximized. In the case of forced air injection, a properly configured aerowindow can reduce advective losses substantially for calm conditions. Although some improvement is offered in windy conditions, an aerowindow in the presence of winds does not show an ability to mitigate advective losses to values achievable by an aerowindow in the absence of wind. The two active airflow methods considered in this paper both show potential for efficiency improvement, but the improvement may not be justified given the added complexity and cost of implementing an active airflow system. While active airflow methods are tractable for a 1 MWth cavity receiver with a 1 m square aperture, the scalability of these active airflow methods is questionable when considering commercial-scale receivers with 10–20 m square apertures or larger.
A strategy to optimize the thermal efficiency of falling particle receivers (FPRs) in concentrating solar power applications is described in this paper. FPRs are a critical component of a falling particle system, and receiver designs with high thermal efficiencies (~90%) for particle outlet temperatures > 700°C have been targeted for next-generation systems. Advective losses are one of the most significant loss mechanisms for FPRs. Hence, this optimization aims to find receiver geometries that passively minimize these losses. The optimization strategy consists of a series of simulations varying different geometric parameters on a conceptual receiver design for the Generation 3 Particle Pilot Plant (G3P3) project, using simplified CFD models of the flow. A linear polynomial surrogate model was fit to the resulting data set, and a global optimization routine was then executed on the surrogate to reveal an optimized receiver geometry that minimized advective losses. This optimized receiver geometry was then evaluated with more rigorous CFD models, revealing a thermal efficiency of 86.9% for an average particle temperature increase of 193.6°C and advective losses less than 3.5% of the total incident thermal power in quiescent conditions.
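The surrogate step can be summarized in a few lines: fit a linear polynomial to sampled (geometry, loss) pairs, minimize it over the design box, then re-evaluate the optimum with full CFD. In the sketch below, the sample data, parameter names, and bounds are placeholders rather than the G3P3 design values.

```python
# Sketch of surrogate-based geometry optimization: linear polynomial
# fit to CFD samples, then bounded minimization of the surrogate.
import numpy as np
from scipy.optimize import minimize

# hypothetical CFD samples: columns = (aperture height, depth, tilt)
X = np.random.default_rng(2).uniform(0.0, 1.0, size=(40, 3))
loss = (0.3 - 0.1 * X[:, 0] + 0.2 * X[:, 1] ** 2
        + 0.05 * np.random.default_rng(3).normal(size=40))  # stand-in data

A = np.column_stack([np.ones(len(X)), X])        # linear polynomial basis
coef, *_ = np.linalg.lstsq(A, loss, rcond=None)  # least-squares fit

surrogate = lambda p: coef[0] + coef[1:] @ p
res = minimize(surrogate, x0=[0.5, 0.5, 0.5],
               bounds=[(0.0, 1.0)] * 3, method="L-BFGS-B")
print(res.x)  # candidate geometry to re-evaluate with full CFD
```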
This paper captures guidelines for the design and operation of sCO2 systems for research and development applications, with specific emphasis on single-pressure pumped loops for thermal-hydraulic experiments and implications for larger sCO2 Brayton power cycles. Direct experience with R&D systems at the kilowatt (kW), 50 kW, 200 kW, and 1 megawatt-thermal scales has resulted in a recommended workflow to move a design from a thermodynamic flowsheet to a set of detailed build plans that account for industrial standards, engineering analysis, and operating experience. Analyses of operational considerations including CO2 storage, filling, pressurization, inventory management, and sensitivity to pump inlet conditions were conducted and validated during shakedown and operation of a 200 kilowatt-scale sCO2 system.
Particle emissions from a high-temperature falling particle receiver with an open aperture were modeled using computational and analytical methods and compared to U.S. particle-emissions standards to assess potential pollution and health hazards. The modeling was performed subsequent to previous on-sun testing and air sampling that did not collect significant particle concentrations at discrete locations near the tower, but the impacts of wind on collection efficiency, especially for small particles less than 10 microns, were uncertain. The emissions of both large (~350 microns) and small (<10 microns) particles were modeled for a large-scale (100 MWe) particle receiver system using expected emission rates based on previous testing and meteorological conditions for Albuquerque, New Mexico. Results showed that the expected emission rates yielded particle concentrations that were significantly less than either the pollution or inhalation metrics of 12 μg/m3 (averaged annually) and 15 mg/m3, respectively. Particle emission rates would have to increase by a factor of ~400 (~0.1 kg/s) to begin approaching the most stringent standards.
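As a hedged illustration of the analytical side of such an assessment, the sketch below evaluates the standard Gaussian plume formula for ground-level centerline concentration downwind of a continuous point source; all parameter values are placeholders, not the study's inputs.

```python
# Illustrative Gaussian plume estimate of ground-level particle
# concentration downwind of a continuous point source; a standard
# analytical dispersion model, with placeholder parameter values.
import numpy as np

def centerline_concentration(Q, u, sigma_y, sigma_z, H):
    """Ground-level centerline concentration [kg/m^3].
    Q: emission rate [kg/s]; u: wind speed [m/s]; H: release height [m];
    sigma_y, sigma_z: dispersion widths [m], which grow with downwind
    distance according to atmospheric stability class."""
    return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H**2 / (2 * sigma_z**2))

# hypothetical values: 1e-4 kg/s release from a 100 m tower, 5 m/s wind
c = centerline_concentration(Q=1e-4, u=5.0, sigma_y=80.0, sigma_z=50.0, H=100.0)
print(f"{c * 1e9:.2f} ug/m^3")  # compare against the 12 ug/m^3 annual standard
```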
Dannemann Dugick, Fransiska K.; Stump, Brian W.; Blom, Philip S.; Marcillo, Omar E.; Hayward, Chris T.; Carmichael, Joshua D.; Arrowsmith, Stephen
Physical and deployment factors that influence infrasound signal detection are explored, and automatic detection performance is assessed, for a regional infrasound network of arrays in the Western U.S., using signatures of ground truth (GT) explosions of known yields. Despite these repeated known sources, published infrasound event bulletins contain few GT events. Arrays are primarily distributed toward the south-southeast and south-southwest at distances between 84 and 458 km of the source, with one array offering azimuthal resolution toward the northeast. Events occurred throughout the spring, summer, and fall of 2012, with the majority occurring during the summer months. Depending upon the array, automatic detection, which utilizes the adaptive F-detector, successfully identifies between 14% and 80% of the GT events, whereas a subsequent analyst review increases successful detection to 24%–90%. Combined background noise quantification, atmospheric propagation analyses, and comparison of spectral amplitudes determine the mechanisms that contribute to missed detections across the network. This analysis provides an estimate of detector performance across the network, as well as a qualitative assessment of conditions that impact infrasound monitoring capabilities. Finally, the mechanisms that lead to missed detections at individual arrays contribute to network-level estimates of detection capabilities and provide a basis for deployment decisions for regional infrasound arrays in areas of interest.
Sullivan, Eduardo; Montoya, Eduardo; Sun, Shi K.; Vasiliauskas, Jonathan G.; Kirk, Cameron; Dixon Wilkins, Malin C.; Weck, Philippe F.; Kim, Eunja; Knight, Kevin S.; Hyatt, Neil C.
The synthesis, structure, and thermal stability of the periodate double perovskites A2NaIO6 (A = Ba, Sr, Ca) were investigated in the context of potential application for the immobilization of radioiodine. A combination of X-ray and neutron diffraction, Raman spectroscopy, and DFT simulations was applied to determine accurate crystal structures of these compounds and to understand their relative stability. The compounds were found to exhibit rock-salt ordering of Na and I on the perovskite B-site; Ba2NaIO6 was found to adopt the Fm-3m aristotype structure, whereas Sr2NaIO6 and Ca2NaIO6 adopt the P21/n hettotype structure, characterized by cooperative octahedral tilting. DFT simulations determined the Fm-3m and P21/n structures of Ba2NaIO6 to be energetically degenerate at room temperature, whereas diffraction and spectroscopy data evidence only the presence of the Fm-3m phase at room temperature, which may imply an incipient phase transition for this compound. The periodate double perovskites were found to exhibit remarkable thermal stability, with Ba2NaIO6 only decomposing above 1050 °C in air, apparently the highest decomposition temperature so far recorded for any iodine-bearing compound. As such, these compounds offer potential for application in the immobilization of iodine-129 from nuclear fuel reprocessing, with an iodine incorporation rate of 25–40 wt%. The synthesis of these compounds, elaborated here, is also compatible with both current conventional and future advanced processes for iodine recovery from the dissolver off-gas.
On December 9, 2020, Sandia National Laboratories (SNL) convened a diverse set of voices from across the federal government, the United States (U.S.) military, the private sector, and national laboratories to understand current and future trends affecting our national cyber strategy, and to illuminate the role of Federally Funded Research and Development Centers (FFRDCs) in contributing to national cyber strategy objectives.
The impedance bandwidth of a microstrip patch antenna may be increased by introducing additional resonances in the antenna structure. This work uses Characteristic Mode Analysis to show that a textbook coplanar parasitically coupled patch design is well described by Coupled Mode Theory. Comparisons to other multimode patch antennas also described by Coupled Mode Theory are made, and some intrinsic properties of the coplanar parasitically coupled patch geometry are noted.
Monolithic integration of lattice-mismatched semiconductor materials opens up access to a wide range of bandgaps and new device functionalities. However, it is inevitably accompanied by defect formation. A thorough analysis of how these defects propagate and interact with interfaces is critical to understanding their effects on device parameters. Here, we present a comprehensive study of dislocation networks in the GaSb/GaAs heteroepitaxial system using transmission electron microscopy (TEM). Specifically, the sample analyzed is a GaSb film grown on GaAs using dislocation-reduction strategies such as interfacial misfit array formation and the introduction of a dislocation filtering layer. Using various TEM techniques, it is shown that such an analysis can reveal important information on dislocation behavior, including the filtering mechanism, types of dislocation reactions, and other interactions with interfaces. A novel method that enables plan-view TEM imaging of deeply embedded interfaces and a demonstration of independent imaging of different dislocation types are also presented. While clearly effective in characterizing dislocation behavior in GaSb/GaAs, we believe that the methods outlined in this article can be extended to study other heteroepitaxial material systems.
In the subsurface, MgO engineered barriers are employed at the Waste Isolation Pilot Plant (WIPP), a transuranic waste repository near Carlsbad, NM. During service, the MgO will be exposed to high-concentration brine environments and may form stable intermediate phases that can alter the barriers' effectiveness. Here, MgO was aged in water and three different brine solutions. X-ray diffraction (XRD) and 1H nuclear magnetic resonance (NMR) analyses were performed to identify the formation of secondary phases. After aging, ~4% of the MgO was hydrated, and fine-grained powders showed greater loss of crystallinity than hard granular grains. 1H magic angle spinning (MAS) NMR spectra resolved minor phases not visible in XRD, indicating that diverse 1H environments are present along with Mg(OH)2. Density functional theory (DFT) simulations of several proposed Mg-O-H, Mg-Cl-O-H, and Na-O-H containing phases were performed to index peaks in the experimental 1H MAS NMR spectra. While the proposed intermediate crystal structures exhibited overlapping 1H NMR peaks, Mg-O-H intermediates were attributed to the growth of the 1.0–0.0 ppm peak, while the Mg-Cl-O-H structures contributed to the 2.5–5.0 ppm peak in the chloride-containing brines. Overall, NMR analysis of aged MgO indicates the formation of a range of possible intermediate structures that cannot be resolved with XRD analysis alone.
Our goal was to characterize certain aspects of shaped charges. To determine the pressure field created by the jet and the jet velocity, I worked with and modified simulations using CTH, a shock physics code developed at Sandia. By manipulating certain variables, I was able to observe their effects and measure the pressures and velocities at different points in the simulation.
Chris Saunders and three technologists are in high demand from Sandia's deep learning teams, and they're kept busy building new clusters of compute nodes for researchers who need the power of supercomputing on a smaller scale. Sandia researchers working on Laboratory Directed Research & Development (LDRD) projects, or on innovative ideas for solutions on short timeframes, formulate new ideas on old themes and frequently rely on smaller cluster machines to help solve problems before introducing their code to larger HPC resources. These research teams need an agile hardware and software environment where nascent ideas can be tested and cultivated on a smaller scale.
Sandia has been developing and supporting data transfer tools for over 20 years and has the expertise to take DOE into the Extreme Scale era. Looking at exascale and beyond (Extreme Scale Computing), data sets can reach thousands of terabytes in size, a single file can be in the 100 TB range, and billions of files are expected. Huge bursts of data need to be transferred, even today. While data archiving is often overlooked, it is an integral part of the full data management path when data is generated on HPC systems. To move generated data to its final resting place (the data archive) or to transfer it between file systems, a capable data transfer tool is required.
Carnac, located at Sandia's California site, is an institutional cluster for Emulytics that provides security researchers with resources to model enterprise computer networks and evaluate how resilient they are to attacks. While multiple Emulytics cluster computers have been built at Sandia, Carnac is the first developed as an institutional resource that can be shared among different groups with disparate requirements.
The Rapid Sample Insertion/Extraction System for Gamma Irradiation, otherwise known as the "rabbit" system, was a four-week project that spanned many different tasks, from coding an Arduino to assembling PVC piping to 3-D printing the "rabbit" capsules. The "rabbit" system is a network of PVC piping that allows quick and efficient transfer of materials into and out of one of the irradiation chambers in the Gamma Irradiation Facility (GIF) using a 3-D printed "rabbit." The "rabbit" encapsulates material to be irradiated and carries it from a position outside the irradiation chamber to the basket inside the chamber. The main purpose of the system is to save time and provide more precise data by eliminating the delays that normally occur when a person must enter the chamber, retrieve data, and then analyze the data; this system takes measurements and retrieves the data nearly instantaneously. The "rabbit" is propelled through the PVC piping by an advanced bi-directional, high-throughput pneumatic system, otherwise known as a shop vacuum cleaner. When the vacuum is set to blow or suck, the "rabbit" is pushed or pulled through the PVC piping to its intended destination, tripping sensors along the sides of the tubing when it reaches the end of the run. These sensors tell the Arduino that the "rabbit" has finished moving through the tubing and stop a timer. A second timer tracks how long the "rabbit" is being irradiated: it starts when the "rabbit" reaches the sensors in the basket inside the irradiation chamber and ends when the sensors no longer detect the "rabbit," indicating that it has begun its journey back to the starting point.
Accurate and timely weather predictions are critical to many aspects of society, with a profound impact on our economy, general well-being, and national security. In particular, our ability to forecast severe weather systems is necessary to avoid injuries and fatalities, but it is also important for minimizing infrastructure damage and maximizing mitigation strategies. The weather community has developed a range of sophisticated numerical models that are executed at various spatial and temporal scales to issue global, regional, and local forecasts in pseudo real time. The accuracy, however, depends on the time period of the forecast, the nonlinearities of the dynamics, and the target spatial resolution. Significant uncertainties plague these predictions, including errors in initial conditions, material properties, data, and model approximations. To address these shortcomings, data are collected continuously, at an effort level even larger than that of the modeling process itself. It has been demonstrated that the accuracy of the predictions depends on the quality of the data and is, to a certain extent, independent of the sophistication of the numerical models. Data assimilation has therefore become one of the more critical steps in the overall weather prediction enterprise, and consequently substantial improvements in the quality of the data would have transformational benefits. This paper describes the use of infrasound inversion technology, enabled through exascale computing, that could potentially achieve orders-of-magnitude improvement in data quality and therefore transform weather predictions, with significant impact on many aspects of our society.
Sandia National Laboratories (SNL) is a multi-purpose engineering and science laboratory owned by the U.S. Department of Energy (DOE)/National Nuclear Security Administration. SNL is managed and operated by Sandia Corporation (Sandia), a wholly-owned subsidiary of Lockheed Martin Corporation. Sandia National Laboratories, New Mexico (SNL/NM) is located within the boundaries of Kirtland Air Force Base (KAFB), southeast of the City of Albuquerque in Bernalillo County, New Mexico. The Mixed Waste Landfill (MWL) is located 4 miles south of SNL/NM central facilities and 5 miles southeast of Albuquerque International Sunport, in the north-central portion of Technical Area (TA)-III. The MWL disposal area comprises 2.6 acres. During operations, the MWL accepted containerized and other low-level radioactive waste and minor amounts of mixed waste from SNL/NM research facilities and off-site DOE and U.S. Department of Defense generators from March 1959 to December 1988. More specific information regarding the MWL inventory and past disposal practices is presented in the MWL Phase 2 RCRA Facility Investigation Report (Peace et al. September 2002) and the extensive MWL Administrative Record.
After decades of R&D, quantum computers comprising more than 2 qubits are appearing. If this progress is to continue, the research community requires a capability for precise characterization ("tomography") of these enlarged devices, which will enable benchmarking, improvement, and finally certification as mission-ready. As world leaders in characterization, whose gate set tomography (GST) method is the current state of the art, the project team is keenly aware that every existing protocol is either (1) catastrophically inefficient for more than 2 qubits, or (2) not rich enough to predict device behavior. GST scales poorly, while the popular randomized benchmarking technique only measures a single aggregated error probability. This project explored a new insight: that the combinatorial explosion plaguing standard GST could be avoided by using an ansatz of few-qubit interactions to build a complete, efficient model for multi-qubit errors. We developed this approach, prototyped it, and tested it on a cutting-edge quantum processor developed by Rigetti Quantum Computing (RQC), a US-based startup. We implemented our new models within Sandia's PyGSTi open-source code and tested them experimentally on the RQC device by probing crosstalk. We found two major results: first, our schema worked and is viable for further development; second, while the Rigetti device is indeed a "real" 8-qubit quantum processor, its behavior fluctuated significantly over time during our experiments, and this drift made it difficult to fit our models of crosstalk to the data.
Propagation of thermal events from one damaged cell in a battery module to adjacent cells is a safety concern. A team of researchers from Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) has developed a "safety-map" to evaluate the propensity for failure propagation. The model results were used to evaluate passive thermal management designs for Li-ion battery modules.
Ozkaya, Yusuf; Sariyuce, A.E.; Catalyurek, Umit V.; Pinar, Ali P.
Centrality rankings such as degree, closeness, betweenness, Katz, and PageRank are commonly used to identify critical nodes in a graph. These methods are based on two assumptions that restrict their wider applicability. First, they assume the exact topology of the network is available. Second, they do not take into account the activity over the network and rely only on its topology. However, in many applications the network is autonomous, vast, and distributed, and it is hard to collect its exact topology. At the same time, the underlying pairwise activity between node pairs is not uniform, and node criticality depends strongly on the activity on the underlying network. In this paper, we propose a new measure, active betweenness cardinality, in which node criticality is based not on the static structure but on the activity of the network. We show how this metric can be computed efficiently using only local information for a given node, and how we can find the most critical nodes starting from only a few nodes. We also show how this metric can be used to monitor a network and identify failed nodes. We present experimental results demonstrating how failed nodes can be identified by measuring the active betweenness cardinality of only a few nodes in the system.
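For intuition, the sketch below implements a simple activity-weighted betweenness count: node importance is driven by an observed (source, destination) traffic log rather than by topology alone. This is only an illustration of the idea, not the paper's algorithm; the proposed measure is computed from local information at each node, whereas this toy version uses global shortest paths via networkx, and the graph and traffic log here are hypothetical.

```python
# Toy activity-weighted betweenness: count how often each node sits strictly
# inside a shortest path actually used by observed traffic. Illustrative only.
import networkx as nx
from collections import Counter

def active_betweenness(G, traffic):
    """traffic: iterable of (src, dst) pairs observed on the network."""
    score = Counter()
    for src, dst in traffic:
        try:
            path = nx.shortest_path(G, src, dst)
        except nx.NetworkXNoPath:
            continue
        for v in path[1:-1]:          # interior nodes only
            score[v] += 1
    return score

G = nx.barabasi_albert_graph(50, 2, seed=1)          # hypothetical topology
traffic = [(0, 17), (3, 42), (0, 42), (5, 17)]       # hypothetical activity log
print(active_betweenness(G, traffic).most_common(5))
```

A node that never carries real traffic scores zero here no matter how central it is topologically, which is the distinction the measure above is designed to capture.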
Hoekstra, Robert J.; Malone, C.M.; Montoya, D.R.; Ferencz, M.R.; Kuhl, A.L.; Wagner, J.
The review was conducted on May 8–9, 2017, at the University of Utah. Overall, the review team was impressed with the work presented and found that the CCMSC had met or exceeded the Year 3 milestones. Specific details, comments, and recommendations are included in this document.
Millions of dollars and significant resources are being spent by developers of utility-scale solar photovoltaic (PV) and concentrating solar power (CSP) plants to address federal and local requirements regarding glare and avian hazards. Solar glare can occur from the glass surfaces of PV modules and from mirrors in CSP systems, which can produce safety and health risks for pilots, motorists, and residents located near these systems. In addition, concentrated solar flux at CSP plants has the potential to singe birds as they fly through regions of high solar flux. This work will develop tools to characterize and mitigate these potential hazards, which will address regulatory policies and reduce costs and efforts associated with the proposed deployment of gigawatts of solar energy systems throughout the nation. The development of standardized and publicly available tools to address these regulatory policies and ensure public and environmental safety is an appropriate role for the government.
This work is developing particle flow control and measurement methods for next-generation concentrating solar power systems employing particle-based technologies. Particle receivers are being pursued to provide substantial performance improvements through higher temperatures (>700 °C) for more efficient and cost-effective CSP systems with direct storage for electricity generation, process heating, thermochemistry, and solar fuels production. This specific work will develop technologies that enable more efficient particle receivers and scalable methods to accommodate variable irradiances during commercial on-sun operation. The development of next-generation particle-receiver systems and methods with potentially high consequences for improved performance and cost savings for CSP applications is an appropriate role for the government.
Particle receivers are being pursued to provide substantial performance improvements through higher temperatures (>700 °C) for more efficient and cost-effective concentrating solar power (CSP) systems with direct storage. However, the interface between the solar-collection and power-block subsystems, a high-temperature particle/supercritical CO2 (sCO2) heat exchanger, has not been developed. The objective of this project is to design, construct, and test a first-of-a-kind particle-to-sCO2 heat exchanger. This work will enable emerging sCO2 power cycles that have the potential to meet SunShot targets of 50% thermal-to-electric efficiency, dry cooling with 40 °C ambient temperature, and $0.06/kWh for CSP systems. The development of next-generation particle-based systems and methods with potentially high consequences for improved performance and cost savings for CSP applications is an appropriate role for the government.
This is the second part of a two-part contribution on modeling of the anisotropic elastic-plastic response of aluminum 7079 from an extruded tube. Part I focused on calibrating a suite of yield and hardening functions from tension test data; Part II concentrates on evaluating those calibrations. Here, a rectangular validation specimen with a blind hole was designed to provide heterogeneous strain fields that exercise the material anisotropy, while at the same time avoiding strain concentrations near sample edges where Digital Image Correlation (DIC) measurements are difficult to make. Specimens were extracted from the tube in four different orientations and tested in tension with stereo-DIC measurements on both sides of the specimen. Corresponding Finite Element Analyses (FEA) with calibrated isotropic (von Mises) and anisotropic (Yld2004-18p) yield functions were also conducted, and both global force-extension curves and full-field strains were compared between the experiments and simulations. Specifically, quantitative full-field strain error maps were computed using the DIC-leveling approach proposed by Lava et al. The specimens experienced small deviations from ideal boundary conditions in the experiments, which had a first-order effect on the results. Therefore, the actual experimental boundary conditions had to be applied to the FEA in order to make valid comparisons. The predicted global force-extension curves agreed well with the measurements overall, but were sensitive to the boundary conditions in the nonlinear regime and could not differentiate between the two yield functions. Interrogation of the strain fields both qualitatively and quantitatively showed that the Yld2004-18p model was clearly able to better describe the strain fields on the surface of the specimen compared to the von Mises model. These results justify the increased complexity of the calibration process required for the Yld2004-18p model in applications where capturing the strain field evolution accurately is important, but not if only the global force-extension response of the elastic–plastic region is of interest.
The US Department of Energy (DOE) Nuclear Energy Research Initiative funded the design and construction of the Seven Percent Critical Experiment (7uPCX) at Sandia National Laboratories. The start-up of the experiment facility and the execution of the experiments described here were funded by the DOE Nuclear Criticality Safety Program. The 7uPCX is designed to investigate critical systems with fuel for light water reactors in the enrichment range above 5% 235U. The 7uPCX assembly is a water-moderated and -reflected array of aluminum-clad square-pitched U(6.90%)O2 fuel rods. Other critical experiments performed in the 7uPCX assembly are documented in LEU-COMP-THERM-078, LEU-COMP-THERM-080, LEU-COMP-THERM-096, LEU-COMP-THERM-097, and LEU-COMP-THERM-101. The twenty-seven critical experiments in this series were performed in 2020 in the 7uPCX at the Sandia Pulsed Reactor Facility. The experiments are grouped by fuel rod pitch. Case 1 is a base case with a pitch of 0.8001 cm and no water holes in the array. Cases 2 through 6 have the same pitch as Case 1 but contain various configurations with water holes, providing slight variations in the fuel-to-water ratio. Similarly, Case 7 is a base case with a pitch of 0.854964 cm and no water holes in the array. Cases 8 through 11 have the same pitch as Case 7 but contain various configurations with water holes. Cases 12 through 15 have a pitch of 1.131512 cm and differ according to the number of water holes in the array, with Case 12 having no water holes. Cases 16 through 19 have a pitch of 1.209102 cm and differ according to the number of water holes in the array, with Case 16 having no water holes. Cases 20 through 23 have a pitch of 1.6002 cm and differ according to the number of water holes in the array, with Case 20 having no water holes. Cases 24 through 27 have a pitch of 1.709928 cm and differ according to the number of water holes in the array, with Case 24 having no water holes. As the experiment case number increases, the fuel-to-water volume ratio decreases.
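The closing statement about the fuel-to-water volume ratio follows directly from the unit-cell geometry of a square-pitched lattice: the water area per rod grows with the square of the pitch while the rod cross-section is fixed. The sketch below checks that trend numerically for the pitches listed above; the rod outer diameter used here is an assumed placeholder for illustration, not a value taken from the benchmark specification.

```python
# Fuel-to-water area ratio in a square-pitched rod lattice (2D unit cell).
# ROD_OD_CM is an assumed placeholder, not the 7uPCX fuel-rod dimension.
import math

ROD_OD_CM = 0.70  # assumed clad outer diameter, cm

rod_area = math.pi * (ROD_OD_CM / 2.0) ** 2
for pitch in (0.8001, 0.854964, 1.131512, 1.209102, 1.6002, 1.709928):
    water_area = pitch ** 2 - rod_area   # moderator area in the unit cell
    print(f"pitch {pitch:8.6f} cm -> fuel/water ratio {rod_area / water_area:.3f}")
```

Within each pitch group, adding water holes removes rods from the lattice and nudges the same ratio further downward, consistent with the "slight variations" noted above.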
Inertial confinement fusion approaches involve the creation of high-energy-density states through compression. High gain scenarios may be enabled by the beneficial heating from fast electrons produced with an intense laser and by energy containment with a high-strength magnetic field. Here, we report experimental measurements from a configuration integrating a magnetized, imploded cylindrical plasma and intense laser-driven electrons, as well as multi-stage simulations that show fast-electron transport pathways at different times during the implosion and quantify their contribution to energy deposition. The experiment consisted of a CH foam cylinder, inside an external coaxial magnetic field of 5 T, that was imploded using 36 OMEGA laser beams. Two-dimensional (2D) hydrodynamic modelling predicts the CH density reaches 9.0 g cm–3, the temperature reaches 920 eV, and the external B-field is amplified at maximum compression to 580 T. At predetermined times during the compression, the intense OMEGA EP laser irradiated one end of the cylinder to accelerate relativistic electrons into the dense imploded plasma, providing additional heating. The relativistic electron beam generation was simulated using a 2D particle-in-cell (PIC) code. Finally, three-dimensional hybrid-PIC simulations calculated the electron propagation and energy deposition inside the target and revealed the roles the compressed and self-generated B-fields play in transport. During a time window before the maximum compression time, the self-generated B-field on the compression front confines the injected electrons inside the target, increasing the temperature through Joule heating. For a stronger B-field seed of 20 T, the electrons are predicted to be guided into the compressed target and provide additional collisional heating.
The accelerating chemical effect of ozone addition on the oxidation chemistry of methyl hexanoate [CH3(CH2)4C(=O)OCH3] was investigated over a temperature range from 460 to 940 K. Using an externally heated jet-stirred reactor at p = 700 Torr (residence time τ = 1.3 s, equivalence ratio φ = 0.5, 80% argon dilution), we explored the relevant chemical pathways by employing molecular-beam mass spectrometry with electron and single-photon ionization to trace the temperature dependences of key intermediates, including many hydroperoxides. In the absence of ozone, reactivity is observed in the so-called low-temperature chemistry (LTC) regime between 550 and 700 K, which is governed by hydroperoxides formed from sequential O2 addition and isomerization reactions. At temperatures above 700 K, we observed the negative temperature coefficient (NTC) regime, in which the reactivity decreases with increasing temperature, until near 800 K, where the reactivity increases again. Upon addition of ozone (1000 ppm), the overall reactivity of the system is dramatically changed due to the time scale of ozone decomposition in comparison to the fuel oxidation time scales of the mixtures at different temperatures. While the LTC regime seems to be only slightly affected by the addition of ozone with respect to the identity and quantity of the observed intermediates, we observed an increased reactivity in the intermediate NTC temperature range. Furthermore, we observed experimental evidence for an additional oxidation regime near 500 K, herein referred to as the extreme low-temperature chemistry (ELTC) regime. Experimental evidence and theoretical rate constant calculations indicate that this ELTC regime is likely initiated by H abstraction from methyl hexanoate by O atoms, which originate from thermal O3 decomposition. The theoretical calculations show that the rate constants for methyl ester initiation via abstraction by O atoms increase dramatically with the size of the methyl ester, suggesting that ELTC is likely not important for smaller methyl esters. Experimental evidence is provided indicating that, similar to the LTC regime, the chemistry in the ELTC regime is dominated by hydroperoxide chemistry. However, mass spectra recorded at various reactor temperatures and at different photon energies provide experimental evidence of some differences in chemical species between the ELTC and LTC temperature ranges.
Over the past few decades, software has become ubiquitous as it has been integrated into nearly every aspect of society, including household appliances, consumer electronics, industrial control systems, public utilities, government operations, and military systems. Consequently, many critical national security questions can no longer be answered convincingly without understanding software, including its purpose, its capabilities, its flaws, its communication, or how it processes and stores data. As software continues to become larger, more complex, and more widespread, our ability to answer important mission questions and reason about software in a timely way is falling behind. Today, to achieve such understanding of third-party software, we rely predominantly on the ability of reverse engineering experts to manually answer each particular mission question for every software system of interest. This approach often requires heroic human effort that nevertheless fails to meet current mission needs and will never scale to meet future needs. The result is an emerging crisis: a massive and expanding gap between the national security need to answer mission questions about software and our ability to do so. Sandia National Laboratories has established the Rapid Analysis of Mission Software Systems (RAMSeS) effort, a collaborative long-term effort aimed at dramatically improving our nation’s ability to answer mission questions about third-party software by growing an ecosystem of tools that augment the human reverse engineer through automation, interoperability, and reuse. Focusing on static analysis of binary programs, we are attempting to identify reusable software analysis components that advance our ability to reason about software, to automate useful aspects of the software analysis process, and to integrate new methodologies and capabilities into a working ecosystem of tools and experts. We aim to integrate existing tools where possible, adapt tools when modest modifications will enable them to interoperate, and implement missing capability when necessary. Although we do hope to automate a growing set of analysis tasks, we will approach this goal incrementally by assisting the human in an ever-widening range of tasks.
This report details work to study trade-offs in topology and network bandwidth for potential interconnects in the exascale (2021-2022) timeframe. The work was done using multiple interconnect models across two parallel discrete event simulators. Results from each independent simulator are shown and discussed and the areas of agreement and disagreement are explored.
Steinberger, William M.; Ruch, Marc L.; Di Fulvio, Angela; Marleau, Peter M.; Clarke, Shaun D.; Pozzi, Sara A.
A compact radiation imaging system capable of detecting, localizing, and characterizing special nuclear material (e.g., highly enriched uranium or plutonium) would be useful for national security missions involving inspection, emergency response, or warfighters. Previously designed radiation imaging systems have been large and bulky, with significant portions of their volume occupied by photomultiplier tubes (PMTs). The prototype imaging system presented here uses silicon photomultipliers (SiPMs) in place of PMTs because SiPMs are much more compact and operate at low power and voltage. The SiPMs are coupled to the ends of eight stilbene organic scintillators, which have an overall volume of 5.74 × 5.74 × 7.11 cm3. The prototype dual-particle imager's capabilities were evaluated by performing measurements with a 252Cf source, a sphere of 4.5 kg of alpha-phase weapons-grade plutonium known as the BeRP ball, a 6 kg sphere of neptunium, and a canister of 3.4 kg of plutonium oxide (7% 240Pu and 93% 239Pu). These measurements demonstrate neutron spectroscopic capabilities; a neutron image resolution for a Watt spectrum of 9.65 ± 0.94° in the azimuthal direction and 22.59 ± 5.81° in the altitude direction; imaging of gamma rays using organic scintillators; and imaging of multiple sources in the same field of view.
Conventional electrodes and associated positioning systems for intracellular recording from single neurons in vitro and in vivo are large and bulky, which has largely limited their scalability. Further, acquiring successful intracellular recordings is very tedious, requiring a high degree of skill not readily achieved in a typical laboratory. We report here a robotic, MEMS-based intracellular recording system to overcome the above limitations associated with form factor, scalability, and highly skilled and tedious manual operations required for intracellular recordings. This system combines three distinct technologies: (1) novel microscale, glass–polysilicon penetrating electrode for intracellular recording; (2) electrothermal microactuators for precise microscale movement of each electrode; and (3) closed-loop control algorithm for autonomous positioning of electrode inside single neurons. Here we demonstrate the novel, fully integrated system of glass–polysilicon microelectrode, microscale actuators, and controller for autonomous intracellular recordings from single neurons in the abdominal ganglion of Aplysia californica (n = 5 cells). Consistent resting potentials (<−35 mV) and action potentials (>60 mV) were recorded after each successful penetration attempt with the controller and microactuated glass–polysilicon microelectrodes. The success rate of penetration and quality of intracellular recordings achieved using electrothermal microactuators were comparable to that of conventional positioning systems. Preliminary data from in vivo experiments in anesthetized rats show successful intracellular recordings. The MEMS-based system offers significant advantages: (1) reduction in overall size for potential use in behaving animals, (2) scalable approach to potentially realize multi-channel recordings, and (3) a viable method to fully automate measurement of intracellular recordings. This system will be evaluated in vivo in future rodent studies.
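To make the closed-loop positioning concept concrete, here is a heavily simplified sketch of the control logic: advance the electrode in microsteps and stop when the measured potential shows a sustained negative shift consistent with membrane penetration. The threshold, step size, and the simulated sensor/actuator stand-ins are all assumptions for illustration; the actual controller and electrothermal actuator interface are not described at this level in the abstract.

```python
# Toy sketch of closed-loop electrode insertion. Threshold, step size, and
# the simulated cell below are hypothetical; real hardware interfaces differ.
import random

PENETRATION_SHIFT_MV = -35.0  # stop when potential drops this far below baseline
STEP_UM = 3.0                 # assumed microactuator step size

def closed_loop_insert(read_mv, advance_um, max_steps=200):
    """Advance until a sustained negative DC shift signals penetration."""
    baseline = read_mv()
    for step in range(1, max_steps + 1):
        advance_um(STEP_UM)
        if read_mv() - baseline <= PENETRATION_SHIFT_MV:
            return True, step * STEP_UM   # penetrated: halt the actuator here
    return False, max_steps * STEP_UM     # gave up without penetrating

# Simulated stand-ins for hardware: the membrane sits at ~60 um depth.
depth = 0.0
def advance(um):
    global depth
    depth += um
def read_mv():  # extracellular noise until the tip enters the cell
    return (-45.0 if depth >= 60.0 else 0.0) + random.gauss(0.0, 1.0)

print(closed_loop_insert(read_mv, advance))  # e.g. (True, 60.0)
```

The appeal of automating exactly this loop is that it replaces the tedious, skill-intensive manual positioning described above with a repeatable feedback policy.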
Through a combination of single-crystal growth, experiments involving in situ deposition of surface adatoms, and complementary modeling, we examine the electronic transport properties of lithium-decorated ZrTe5 thin films. We observe that the surface states in ZrTe5 are robust against Li adsorption: both the surface electron density and the associated Berry phase are remarkably insensitive to adsorbed Li atoms. Fitting to the Hall conductivity data reveals two types of bulk carriers: those whose density is insensitive to Li adsorption, and those whose density decreases during initial Li depositions and then saturates with further adsorption. We propose that this dependence is due to the gating effect of a Li-adsorption-generated dipole layer at the ZrTe5 surface.
Coupled poroelastic stressing and pore-pressure accumulation along pre-existing faults in deep basement rock contribute to the recent occurrence of seismic events at subsurface energy exploration sites. Our coupled fluid-flow and geomechanical model describes the physical processes that induced seismicity corresponding to the sequential stimulation operations in Pohang, South Korea. Simulation results show that prolonged accumulation of poroelastic energy and pore pressure along a fault can nucleate seismic events larger than Mw 3 even after well operations have been terminated. In particular, the likelihood of large seismic events can be increased by multiple-well operations with alternating injection and extraction, which can enhance pore-pressure diffusion and subsequent stress transfer through rigid, low-permeability rock to the fault. This study demonstrates that a proper mechanistic model and optimal well operations must be accounted for to mitigate unexpected seismic hazards in the presence of site-specific uncertainties such as hidden or undetected faults and the stress regime.
The method of manufactured solutions (MMS) has become increasingly popular for conducting code verification studies on predictive codes, such as nuclear power system codes and computational fluid dynamics codes. The reason for the popularity of this approach is that it can be used when an analytical solution is not available. Using MMS, code developers are able to verify that their code is free of coding errors that impact the observed order of accuracy. While MMS is an excellent tool for code verification, it does not identify coding errors that are of the same order as the numerical method. This paper presents a method that combines MMS with modified equation analysis (MEA), which calculates the local truncation error (LTE) to identify coding errors up to and including the order of the numerical method. This method is referred to as the modified equation analysis method of manufactured solutions (MEAMMS). MEAMMS is applied to a custom-built code that solves the shallow water equations to test the performance of the code verification method, and it detects all coding errors that impact the implementation of the numerical scheme. To show how MEAMMS differs from MMS, both are applied to the same test problem with a first-order numerical method containing a first-order coding error; only MEAMMS is able to identify the error. This demonstrates that MEAMMS identifies a larger set of coding errors while still catching all of the errors that MMS can identify.
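For readers unfamiliar with MMS itself, the sketch below shows the core mechanics on a deliberately simple problem. This illustrates plain MMS, not the MEAMMS machinery or the shallow-water code from the paper: manufacture u(x) = sin(πx) for the Poisson problem −u″ = f, derive f analytically, solve with second-order central differences, and confirm that the observed order of accuracy is near 2.

```python
# Minimal MMS order-of-accuracy check on -u'' = f, u(0) = u(1) = 0.
# Manufactured solution u(x) = sin(pi x) gives f(x) = pi^2 sin(pi x).
import numpy as np

def solve(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = np.pi**2 * np.sin(np.pi * x)      # manufactured source term
    # Tridiagonal system for the interior unknowns (central differences).
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f[1:-1])
    # Discrete L2 error against the exact manufactured solution.
    return np.sqrt(h * np.sum((u - np.sin(np.pi * x))**2))

e1, e2 = solve(32), solve(64)
print("observed order:", np.log2(e1 / e2))   # ~2 if the scheme is implemented correctly
```

A coding error below second order (say, a stray O(h) term in the stencil) would drag the observed order down and be caught by this test; an error at the same order as the scheme would leave the observed order untouched, which is exactly the blind spot that MEAMMS closes by checking the local truncation error itself.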
Ammonothermal growth of bulk gallium nitride (GaN) crystals is considered the most suitable method to meet the demand for high-quality bulk substrates for power electronics. A non-destructive evaluation of defect content in state-of-the-art ammonothermal substrates has been carried out by synchrotron X-ray topography. Using a monochromatic beam in grazing-incidence geometry, high-resolution X-ray topographs reveal the various dislocation types present. Ray-tracing simulations that were modified to take both surface relaxation and absorption effects into account allowed improved correlation with observed dislocation contrast, so that the Burgers vectors of the dislocations could be determined. The images show the very high quality of the ammonothermal GaN substrate wafers, which contain low densities of threading dislocations (TDs) but are free of basal plane dislocations (BPDs). Threading mixed dislocations (TMDs) were found to be dominant among the TDs, and the overall TD density (TDD) of a 1-inch wafer was found to be as low as 5.16 × 10³ cm⁻².
If quantum information processors are to fulfill their potential, the diverse errors that affect them must be understood and suppressed. But errors typically fluctuate over time, and the most widely used tools for characterizing them assume static error modes and rates. This mismatch can cause unheralded failures, misidentified error modes, and wasted experimental effort. Here, we demonstrate a spectral analysis technique for resolving time dependence in quantum processors. Our method is fast, simple, and statistically sound. It can be applied to time-series data from any quantum processor experiment. We use data from simulations and trapped-ion qubit experiments to show how our method can resolve time dependence when applied to popular characterization protocols, including randomized benchmarking, gate set tomography, and Ramsey spectroscopy. In the experiments, we detect instability and localize its source, implement drift control techniques to compensate for this instability, and then demonstrate that the instability has been suppressed.
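The essence of the technique can be conveyed in a few lines. The sketch below is a simplified illustration on simulated data, not the authors' implementation: standardize a binary outcome record under the static (constant-probability) null model, transform it to the frequency domain, and flag modes whose power exceeds a multiple-comparison-corrected chi-squared threshold. The drift waveform, sample count, and test level are all arbitrary choices for the demonstration.

```python
# Illustrative spectral drift detection on a simulated 0/1 measurement record.
import numpy as np
from scipy.fft import dct
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)
p_t = 0.5 + 0.2 * np.sin(2 * np.pi * t / 200.0)   # slowly drifting outcome probability
x = (rng.random(n) < p_t).astype(float)           # simulated measurement record

# Standardize under the static model p = const, then look at spectral power.
p_hat = x.mean()
y = (x - p_hat) / np.sqrt(p_hat * (1.0 - p_hat))
power = dct(y, norm='ortho')**2                   # ~chi^2(1) per mode if static

# Bonferroni-corrected 5% test across all nonzero-frequency modes.
threshold = chi2.ppf(1.0 - 0.05 / (n - 1), df=1)
drifting = np.nonzero(power[1:] > threshold)[0] + 1
print("modes flagged as drifting:", drifting)
```

For a truly static processor, every mode stays below the threshold (up to the 5% false-positive budget); the low-frequency modes that survive here reveal the injected drift, mirroring how the method localizes instability in real experiments.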
Proceedings - 2020 IEEE 22nd International Conference on High Performance Computing and Communications, IEEE 18th International Conference on Smart City and IEEE 6th International Conference on Data Science and Systems, HPCC-SmartCity-DSS 2020
The Message Passing Interface (MPI) standard allows user-level threads to concurrently call into an MPI library. While this feature is currently rarely used, there is considerable interest from developers in adopting it in the near future. There is reason to believe that multithreaded communication may incur additional message processing overheads, in terms of the number of items searched during demultiplexing and the amount of time spent searching, because it has the potential to increase the number of messages exchanged and to introduce non-deterministic message ordering. Therefore, understanding the implications of adding multithreading to MPI applications is important for future application development. One strategy for advancing this understanding is through 'low-cost' benchmarks that emulate full communication patterns using fewer resources. For example, while a complete, 'real-world' multithreaded halo exchange requires 9 or 27 nodes, the low-cost alternative needs only two, making it deployable on systems where acquiring resources is difficult because of high utilization (e.g., busy capacity-computing systems), or impossible because the necessary resources do not exist (e.g., testbeds with too few nodes). While such benchmarks have been proposed, the reported results have been limited to a single architecture or derived indirectly through simulation, and no attempt has been made to confirm that a low-cost benchmark accurately captures features of full (non-emulated) exchanges. Moreover, benchmark code has not been made publicly available. The purpose of the study presented in this paper is to quantify how accurately the low-cost benchmark captures the matching behavior of the full, real-world benchmark. In the process, we also advocate for the feasibility and utility of the low-cost benchmark. We present a 'real-world' benchmark implementing a full multithreaded halo exchange on 9 and 27 nodes, as defined by 5-point and 9-point 2D stencils, and 7-point and 27-point 3D stencils. Likewise, we present a 'low-cost' benchmark that emulates these communication patterns using only two nodes. We then confirm, across multiple architectures, that the low-cost benchmark gives accurate estimates of both the number of items searched during message processing and the time spent processing those messages. Finally, we demonstrate the utility of the low-cost benchmark by using it to profile the performance impact of state-of-the-art Mellanox ConnectX-5 hardware support for offloaded MPI message demultiplexing. To facilitate further research on the effects of multithreaded MPI on message matching behavior, the source of our two benchmarks will be included in the next release of the Sandia MPI Micro-Benchmark Suite.
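To illustrate what "items searched during demultiplexing" means, the toy model below (an illustration only, not the released benchmark) walks an arrival stream against an in-order posted-receive list and counts the entries inspected per match. Interleaved multithreaded traffic reorders arrivals relative to posts, which is exactly what deepens these searches.

```python
# Toy model of MPI receive-side matching: each incoming message is compared
# against the posted-receive queue in order until a (source, tag) match is found.
from collections import deque

def matching_cost(posted, arrivals):
    """Return (total, per-message) counts of queue entries searched."""
    queue = deque(posted)
    searched = []
    for msg in arrivals:
        n = 0
        for entry in list(queue):
            n += 1
            if entry == msg:        # source/tag match
                queue.remove(entry)
                break
        searched.append(n)
    return sum(searched), searched

posted = [(0, tag) for tag in range(8)]           # receives in posted order
print(matching_cost(posted, posted))              # in-order arrival: (8, [1, 1, ...])
print(matching_cost(posted, posted[::-1]))        # fully reordered: (36, [8, 7, ...])
```

The two-node low-cost benchmark is, in effect, a way of reproducing realistic versions of these queue depths and search times without standing up the full 9- or 27-node exchange.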
The amount of data produced by sensors, social and digital media, and the Internet of Things (IoT) is rapidly increasing each day. Decision makers often need to sift through a sea of Big Data to utilize information from a variety of sources and determine a course of action. This can be a very difficult and time-consuming task. For each data source encountered, the information can be redundant, conflicting, and/or incomplete. For near-real-time applications, there is insufficient time for a human to interpret all the information from different sources. In this project, we developed a near-real-time, data-agnostic software architecture that is capable of using several disparate sources to autonomously generate Actionable Intelligence with a human in the loop. We demonstrated our solution on a traffic prediction exemplar problem.
Existing communication protocols in high-consequence security networks are highly centralized. While this naively makes the controls easier to physically secure, external actors require fewer resources to disrupt the system because fewer points in the system must be destroyed or interrupted before the entire system fails. We present a solution to this problem using a proof-of-work-based blockchain implementation built on MultiChain. We construct a test-bed network containing two types of data input: visual imagers and microwave sensor information. These data types are ubiquitous in perimeter intrusion detection security systems and allow a realistic representation of a real-world network architecture. The cameras in this system use an object detection algorithm to find important targets in the scene. The raw data from the camera and the outputs from the detection algorithm are then placed in a transaction on the distributed ledger. Similarly, microwave data is used to detect relevant events, which are placed in a transaction. These transactions are then bundled into blocks and broadcast to the rest of the network using the Bitcoin-based MultiChain protocol. We developed five tests to examine the security metrics of our network and performed them on networks ranging from 7 to 39 nodes to determine how the metrics scale with network size. We find that, compared to a centralized architecture, our implementation provides the resiliency increase expected from a blockchain-based protocol without slowing the system enough for a human operator to notice. Furthermore, our approach is able to detect tampering in real time. Based on these results, we theorize that security networks in general could use a blockchain-based approach in a meaningful way.
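As a minimal sketch of the transaction-and-block flow described above (this toy code stands in for MultiChain's Bitcoin-derived protocol and is not the deployed implementation; the sensor names and transaction fields are hypothetical), sensor events are bundled into a block, which is then mined by searching for a nonce that gives the block hash a required number of leading zeros.

```python
# Toy proof-of-work block containing hypothetical sensor "transactions".
import hashlib
import json
import time

DIFFICULTY = 4  # required leading hex zeros; a toy-scale work factor

def mine_block(prev_hash, transactions):
    """Find a nonce so the block header hash meets the difficulty target."""
    timestamp = int(time.time())
    nonce = 0
    while True:
        header = json.dumps({"prev": prev_hash, "txs": transactions,
                             "time": timestamp, "nonce": nonce},
                            sort_keys=True).encode()
        digest = hashlib.sha256(header).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return digest, nonce
        nonce += 1

# Hypothetical transactions of the two data types used in the test bed.
txs = [{"sensor": "camera-3", "event": "person", "confidence": 0.91},
       {"sensor": "microwave-1", "event": "motion", "zone": 2}]
print(mine_block("0" * 64, txs))
```

Because each block hash commits to the sensor data it contains, any after-the-fact tampering with a camera frame or microwave event changes the hash chain, which is the mechanism behind the real-time tamper detection reported above.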
In this work, we have studied the pressure-induced structural and electronic phase transitions in WO3 up to 60 GPa using micro-Raman spectroscopy, synchrotron X-ray diffraction, and electrical resistivity measurements. The results indicate that WO3 undergoes a series of phase transitions with increasing pressure: triclinic WO3-I initially transforms to monoclinic WO3-II (P21/c) at 1 GPa, involving a tetrahedral distortion in a corner-shared octahedral framework; then to a mixed corner- and edge-shared seven-coordinated WO3-III (P21/c) at 27 GPa, with a large volume change of ~6%; and further to WO3-IV (Pc) above 37 GPa. These structural phase transitions are accompanied by a significant drop in resistivity, from insulating WO3-I to semiconducting WO3-II and poor-metallic WO3-III and IV, arising from the Jahn–Teller distortion in WO6 and the hybridization between O 2p and W 5d orbitals in WO7, respectively. Unlike in its molecular analogue MoO3, the transitions in WO3 show little dependence on the choice of pressure-transmitting medium.
A vaccine for smallpox is no longer administered to the general public, and there is no proven, safe treatment specific to poxvirus infections, leaving people susceptible to infections by smallpox and other zoonotic Orthopoxviruses such as monkeypox. Using vaccinia virus (VACV) as a model organism for other Orthopoxviruses, CRISPR–Cas9 technology was used to target three essential genes that are conserved across the genus, including A17L, E3L, and I2L. Three individual single guide RNAs (sgRNAs) were designed per gene to facilitate redundancy in rendering the genes inactive, thereby reducing the reproduction of the virus. The efficacy of the CRISPR targets was tested by transfecting human embryonic kidney (HEK293) cells with plasmids encoding both SaCas9 and an individual sgRNA. This resulted in a reduction of VACV titer by up to 93.19% per target. Following the verification of CRISPR targets, safe and targeted delivery of the VACV CRISPR antivirals was tested using adeno-associated virus (AAV) as a packaging vector for both SaCas9 and sgRNA. Similarly, AAV delivery of the CRISPR antivirals resulted in a reduction of viral titer by up to 92.97% for an individual target. Overall, we have identified highly specific CRISPR targets that significantly reduce VACV titer as well as an appropriate vector for delivering these CRISPR antiviral components to host cells in vitro.
In support of analyst requests for Mobile Guardian Transport studies, researchers at Sandia National Laboratories have expanded data types for the Slycat ensemble-analysis and visualization tool to include 3D surface meshes. This new capability represents a significant advance in our ability to perform detailed comparative analysis of simulation results. Analyzing mesh data rather than images provides greater flexibility for post-processing exploratory analysis.
With the growing number of applications designed for heterogeneous HPC devices, application programmers and users find it challenging to compose scalable workflows as ensembles of these applications that are portable, performant, and resilient. The Kokkos C++ library has been designed to simplify this cumbersome procedure by providing an intra-application uniform programming model and portable performance. However, assembling multiple Kokkos-enabled applications into a complex workflow is still a challenge: although Kokkos enables a uniform programming model, inter-application data exchange remains difficult from both performance and software development cost perspectives. To address this issue, we propose a Kokkos-DataSpaces integration, with the goal of providing a virtual shared-space abstraction that can be accessed concurrently by all applications in a Kokkos workflow, thus extending Kokkos to support inter-application data exchange.
A collection of x-ray computed tomography scans of Myotis keaysi pilosotibialis specimens from Texas A&M's Biodiversity Research and Teaching Collections.
A team at Sandia National Laboratories (SNL) recognized the growing need to maintain and organize the internal community of techno-economic assessment (TEA) analysts at the lab. To meet this need, an internal core team identified a working group of experienced, new, and future analysts to: 1) document TEA best practices; 2) identify existing resources at Sandia and elsewhere; and 3) identify gaps in existing capabilities. Sandia has a long history of using techno-economic analyses to evaluate technologies, including consideration of system resilience. Expanding TEA capabilities will provide a rigorous basis for evaluating science, engineering, and technology-oriented projects, allowing Sandia programs to quantify the impact of targeted research and development (R&D) and improving Sandia's competitiveness for external funding. Developing this working group reaffirms the successful use of TEA and related techniques for evaluating the impact of R&D investments, proposed work, and internal approaches that leverage deep technical and robust business-oriented insights. The main findings of this effort demonstrate the high impact TEA has on forecasting future costs, application adoption, and impact metrics, as shown by past exemplar applications across a broad technology space. Recommendations include maintaining and growing best-practice approaches to TEA, appreciating the tools (and their limits) developed by other national laboratories and the academic community, and recognizing that proposals and R&D investment decisions, both locally at Sandia and more broadly among funding agencies, increasingly require TEA approaches to justify and support well-thought-out project planning.
The design and construction of a nuclear power plant must include robust structures and a security boundary that is difficult to penetrate. For security considerations, the reactors would ideally be sited underground, beneath a massive solid block too thick to be penetrated by tools or explosives. Additionally, all communications and power transfer lines would be located underground and fortified against any design basis threats. Limiting access with difficult-to-penetrate physical barriers is a key aspect of determining response and staffing requirements. Considerations for a graded approach to physical protection are described.
In September of 2020, dust samples were collected from the surface of spent nuclear fuel (SNF) dry storage canisters during an inspection at an inland Independent Spent Fuel Storage Installation. The purpose of the sampling was to assess the composition and abundance of the soluble salts present on the canister surfaces, information that provides a metric for potential corrosion risks. The samples were delivered to Sandia National Laboratories for analysis. At Sandia, the soluble salts were leached from the dust and quantified by ion chromatography. In addition, subsamples of the dust were taken for scanning electron microscope analysis to determine the texture and mineralogy of the dust and salts. The results of those analyses are presented in this report.
Mitigating particulate matter (PM) emissions while simultaneously controlling nitrogen oxide and hydrocarbon emissions is critical for both gasoline and diesel engines. The problem is especially acute during cold-start cycles, where aftertreatment devices are less effective. Understanding how liquid sprays and films form PM, and designing to change the outcome, requires advanced combustion concepts developed through joint experimental and computational efforts. However, existing spray and soot computational models are oversimplified and non-physical, and are therefore unable to reliably capture quantitative or even qualitative trends over a wide range of engine operating conditions. This task involves the development and application of advanced optical diagnostics and high-pressure gas and particle sampling/analysis in unique high-temperature, high-pressure vessels to investigate spray dynamics and soot formation. The objective is to provide fundamental understanding of soot processes under engine-relevant conditions to aid the development of improved soot models for commercial CFD codes.