Although analytical solutions exist, the analysis of two-dimensional coherent spectroscopy (2DCS) data can be tedious. A machine learning approach to analyzing 2DCS spectra is presented. We test the accuracy of the algorithm on simulated and experimental data.
In the area of information extraction from text data, a number of tools exist that can extract entities, topics, and the relationships among them from both structured and unstructured text sources. Such information has many uses across a number of domains; however, the solutions for obtaining it are still in their early stages and have room for improvement. The topic has been explored from a research perspective by academic institutions, as well as through formal tool creation by corporations, but has not advanced much since the early 2000s. Overall, entity extraction, and the related topic of entity linking, is common among these tools, though with varying degrees of accuracy, while relationship extraction is harder to find and appears limited to same-sentence analysis. In this report, we examine the top state-of-the-art tools currently available and identify their capabilities, strengths, and weaknesses. We explore the common algorithms in the successful approaches to entity extraction and their ability to efficiently handle both structured and unstructured text data. Finally, we highlight some of the common issues among these tools and summarize the current ability to extract relationship information.
As part of the Department of Energy response to the novel coronavirus pandemic of 2020, a modeling effort was sponsored by the DOE Office of Science. One task of this modeling effort at Sandia was to develop a model to predict medical resource needs given various patient arrival scenarios. Resources needed include personnel resources (nurses, ICU nurses, physicians, respiratory therapists), fixed resources (regular or ICU beds and ventilators), and consumable resources (masks, gowns, gloves, face shields, sedatives). This report documents the uncertainty analysis that was performed on the resource model. The uncertainty analysis involved sampling 26 input parameters to the model. The sampling was performed conditional on the patient arrival streams that also were inputs to the model. These patient arrival streams were derived from various epidemiology models and had a significant effect on the projected resource needs. In this report, we document the sampling approach, the parameter ranges used, and the computational workflow necessary to perform large-scale uncertainty studies for every county and state in the United States.
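As an illustrative sketch of this kind of conditional sampling (not the production workflow; parameter names, ranges, and sample counts below are placeholders), each arrival scenario can be paired with its own Latin hypercube design over the model inputs using scipy:

# Hypothetical sketch: Latin hypercube sampling of resource-model inputs,
# drawn separately for each patient-arrival scenario (names/ranges are placeholders).
import numpy as np
from scipy.stats import qmc

param_bounds = {                       # illustrative subset of the 26 inputs
    "icu_fraction": (0.05, 0.30),
    "los_regular_days": (3.0, 10.0),
    "vent_fraction_of_icu": (0.30, 0.90),
}
names = list(param_bounds)
lo = np.array([param_bounds[p][0] for p in names])
hi = np.array([param_bounds[p][1] for p in names])

def sample_inputs(n_samples, seed):
    """Return an (n_samples x n_params) array of scaled LHS samples."""
    sampler = qmc.LatinHypercube(d=len(names), seed=seed)
    return qmc.scale(sampler.random(n_samples), lo, hi)

arrival_scenarios = ["epi_model_A", "epi_model_B"]   # placeholder scenario labels
for k, scenario in enumerate(arrival_scenarios):
    X = sample_inputs(n_samples=1000, seed=k)        # sampling conditional on scenario
    # each row of X, together with the scenario's arrival stream, would be one
    # input realization passed to the resource model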
Environmental stewardship has long been a guiding principle among indigenous peoples. Acoma Pueblo, located approximately 45 miles west of Albuquerque, New Mexico, shares these values. They are exploring how to incorporate these values into future development projects. As such, Acoma recently created a wastewater treatment facility to provide irrigation water to nearby crops. Treatment facilities like this further sustainable practices and benefit the community but may draw power from the grid. This report looks at design considerations for possible photovoltaic utilization at the site to offset current power and monetary costs. Lessons learned from a larger, municipal wastewater treatment center will be applied with special considerations for Acoma Pueblo's values and unique wastewater treatment facility configuration.
In this work, we investigate cascaded third harmonic generation in a dielectric metasurface by exploiting high quality factor Fano resonances obtained using broken symmetry unit cells.
A Sandia COVID-19 LDRD effort, the Sandia E-PiPEline Team, systematically evaluated design options for face shields constructed from commonly available materials (CAMs). This study is not focused on face shields for medical applications and, as such, has excluded labeling and flammability considerations suggested by the FDA. Design options for face shields were analyzed with subject matter expert input considering the design's effectiveness (seal around face), reusability (compatibility with solvents, degree of inertness), producibility (ability to obtain materials, build time), cost, and comfort (fit around head, contact surface interface). Observations for the design of face shields using CAMs are provided here.
The time-dependent, viscoelastic responses of materials such as polymers and glasses have long been studied. As such, a variety of models have been put forth to describe this behavior, including simple rheological models (e.g., Maxwell, Kelvin), linear "fading memory" theories, and hereditary-integral-based linear thermal viscoelastic approaches, as well as more recent nonlinear theories that are integral, fictive-temperature, or differential internal-state-variable based. The current work details a new LINEAR_THERMOVISCOELASTIC model that has been added to LAME. This formulation represents a viscoelastic theory that neglects some of the phenomenological details of the PEC/SPEC models in favor of efficiency and simplicity. Furthermore, this new model is a first step toward developing "modular" viscoelastic capabilities akin to those available with hardening descriptions for plasticity models in LAME. Specifically, multiple different (including user-defined) shift-factor forms are implemented, with each easily selected via parameter specification rather than requiring distinct material models.
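For orientation, a generic one-dimensional hereditary-integral form of linear thermoviscoelasticity with thermorheologically simple temperature shifting (an illustration, not necessarily the exact LAME formulation) is

$$\sigma(t) = \int_{0}^{t} G\big(\xi(t) - \xi(s)\big)\,\frac{d\varepsilon}{ds}\,ds, \qquad \xi(t) = \int_{0}^{t} \frac{dt'}{a_{T}\big(T(t')\big)},$$

where G is the stress relaxation modulus, ξ is the reduced (material) time, and a_T is the temperature-dependent shift factor; selecting among shift-factor forms (e.g., WLF or Arrhenius) amounts to choosing the function a_T(T).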
The Centers for Disease Control and Prevention has recommended that the public wear cloth face coverings in public settings. A Sandia COVID-19 LDRD effort, the Sandia E-PiPEline Team, systematically evaluated design options for face coverings constructed from commonly available materials (CAMs). The design options were analyzed with subject matter expert input considering the design's effectiveness (fiber density, material construction, and water saturation), reusability (degree of inertness), producibility (ability to obtain materials, build time), cost, and comfort (fit on face, breathability). Observations for the design of face coverings using CAMs are provided here.
Through cyberattacks on information technology and digital communications systems, antagonists have increasingly been able to alter the strategic balance in their favor without provoking serious consequences. Conflict within and through the cyber domain is inherently different from conflict in other domains that house our critical systems. These differences result in new challenges for defending and creating resilient systems, and for deterring those who would wish to disrupt or destroy them. The purpose of this paper is to further examine the question of whether deterrence can be an effective strategy in cyber conflict, given our broad and varied interests in cyberspace. We define deterrence broadly as the creation of conditions that dissuade antagonists from taking unwanted actions because they believe they will incur unacceptably high costs and/or receive insufficient benefits from taking that action. Deterrence may or may not be the most credible or effective strategy for achieving our desired end states in cybersecurity. Regardless of the answer here, however, it is important to consider why deterrence strategies might succeed under certain conditions, and to understand why deterrence is not effective within the myriad contexts in which it appears to fail. Deterrence remains a key component of U.S. cyber strategy, but there is little detail on how to operationalize or implement this policy, how to bring a whole-of-government and whole-of-private-sector approach to cyber deterrence, which types of antagonists can or should be deterred, and in which contexts. Moreover, discussion about how nations can and should respond to significant cyber incidents largely centers on whether or not the incident constitutes a "use of force," which would justify certain types of responses according to international law. However, we believe the "use of force" threshold is inadequate to describe the myriad interests and objectives of actors in cyberspace, both attackers and defenders. In this paper, we propose an approach to further examine whether deterrence is an effective strategy and under which conditions. Our approach includes systematic analysis of cyber incident scenarios using a framework to evaluate the effectiveness of various activities in influencing antagonist behavior. While we only examine a single scenario for this paper, we propose that additional work is needed to more fully understand how various alternative thresholds constrain or unleash options for actors to influence one another's behavior in the cyber domain.
Space-charge-limited (SCL) emission parameters are varied to study their effects on the performance of a planar diode using an electromagnetic particle-in-cell simulation software suite, EMPIRE. Oscillations in the simulations are found and linked to the emission parameters, namely the breakdown threshold, the emission delay time, and the current density ramp time. The oscillations are suggested to arise from a transverse oscillation due to the perfect magnetic conductor boundary condition in steady-state operation and the formation of a virtual cathode in the diode driven by the SCL boundary condition.
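For reference, the steady-state space-charge-limited current density in an ideal one-dimensional planar diode is given by the Child-Langmuir law,

$$J_{\mathrm{CL}} = \frac{4\,\varepsilon_{0}}{9}\sqrt{\frac{2e}{m_{e}}}\;\frac{V^{3/2}}{d^{2}},$$

where V is the gap voltage and d the anode-cathode spacing; broadly speaking, emission parameters such as a breakdown threshold, delay time, and ramp time control when and how quickly the emitted current is allowed to approach this limit in a simulation.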
Agencies that monitor for underground nuclear tests are interested in techniques that automatically characterize mining blasts to reduce the human analyst effort required to produce high-quality event bulletins. Waveform correlation is effective in finding similar waveforms from repeating seismic events, including mining blasts. We report the results of an experiment that uses waveform templates, recorded by multiple International Monitoring System stations of the Comprehensive Nuclear-Test-Ban Treaty over up to 10 years prior, to detect and identify mining blasts that occur during single weeks of study. We discuss approaches for template selection, threshold setting, and event detection that are specialized for mining blasts and a sparse, global network. We apply the approaches to two different weeks of study for each of two geographic regions, Wyoming and Scandinavia, to evaluate the potential for establishing a set of standards for waveform correlation processing of mining blasts that can be effective for operational monitoring systems with a sparse network. We compare candidate events detected with our processing methods to the Reviewed Event Bulletin of the International Data Centre to develop an intuition about the potential reduction in analyst workload.
A broad set of data science and engineering questions may be organized as graphs, providing a powerful means for describing relational data. Although experts now routinely compute graph algorithms on huge, unstructured graphs using high performance computing (HPC) or cloud resources, this practice hasn't yet broken into the mainstream. Such computations require great expertise, yet users often need rapid prototyping and development to quickly customize existing code. Toward that end, we are exploring the use of the Chapel programming language as a means of making some important graph analytics more accessible, examining the breadth of characteristics that would make for a productive programming environment, one that is expressive, performant, portable, and robust.
We demonstrate injection-locked operation of a silicon-based Brillouin laser for the first time. The unique spatio-temporal inter-modal Brillouin dynamics enable nonreciprocal control and low-phase-noise operation within a monolithically integrated system.
Data movement is a significant and growing consumer of energy in modern systems, from specialized low-power accelerators to GPUs with power budgets in the hundreds of watts. Given the importance of the problem, prior work has proposed designing interconnects on which the energy cost of transmitting a 0 is significantly lower than that of transmitting a 1. With such an interconnect, data movement energy is reduced by encoding the transmitted data such that the number of 1s is minimized. Although promising, these data encoding proposals do not take full advantage of application-level semantics. As an example of a neglected optimization opportunity, consider the case of a dot product computation as part of a neural network inference task. The order in which the neural network weights are fetched and processed does not affect correctness, and can be optimized to further reduce data movement energy. This paper presents commutative data reordering (CDR), a hardware-software approach that leverages the commutative property in linear algebra to strategically select the order in which weight matrix coefficients are fetched from memory. To find a low-energy transmission order, weight ordering is modeled as an instance of one of two well-studied problems, the Traveling Salesman Problem and the Capacitated Vehicle Routing Problem. This reduction makes it possible to leverage the vast body of work on efficient approximation methods to find a good transmission order. CDR exploits the indirection inherent to sparse matrix formats such that no additional metadata is required to specify the selected order. The hardware modifications required to support CDR are minimal, and incur an area penalty of less than 0.01% when implemented on top of a mobile-class GPU. When applied to 7 neural network inference tasks running on a GPU-based system, CDR reduces average DRAM IO energy by 53.1% and 22.2%, respectively, over the data bus invert encoding scheme used by LPDDR4 and the recently proposed Base + XOR encoding. These savings are attained with no changes to the mobile system software and no runtime performance penalty.
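As a simplified illustration of the reduction described above (not the CDR hardware path; the cost model and data are made up), each weight word can be treated as a city whose pairwise cost is the number of bus bits toggled between consecutive transmissions, and a greedy nearest-neighbor tour gives a low-transition order:

# Hypothetical sketch: order weight words to reduce bus-transition energy.
import numpy as np

def transition_cost(a, b):
    return bin(a ^ b).count("1")          # Hamming distance between consecutive words

def nearest_neighbor_order(words):
    """Greedy TSP-style tour over the words, minimizing successive transitions."""
    remaining = list(range(len(words)))
    order = [remaining.pop(0)]
    while remaining:
        last = words[order[-1]]
        nxt = min(remaining, key=lambda i: transition_cost(last, words[i]))
        remaining.remove(nxt)
        order.append(nxt)
    return order

rng = np.random.default_rng(0)
words = [int(w) for w in rng.integers(0, 2**16, size=64)]   # toy 16-bit weight words
order = nearest_neighbor_order(words)
baseline = sum(transition_cost(a, b) for a, b in zip(words, words[1:]))
reordered = sum(transition_cost(words[i], words[j]) for i, j in zip(order, order[1:]))
print(f"bit transitions: original={baseline}, reordered={reordered}")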
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of Sandia National Laboratories' electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: 1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. 2) A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. 3) Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). 4) Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation that allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Photonic Doppler Velocimetry (PDV) is a fiber-based diagnostic for the extreme conditions created by high-speed impact, explosive detonation, electrical pulsed power, and intense laser ablation. PDV is a conceptually simple application of the optical Doppler effect, but measurements above 1 km/s only became practical at the beginning of the twenty-first century. This review discusses the evolution of PDV, its operational details, practical analysis, and outstanding challenges.
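The core heterodyne relation behind PDV is that mixing Doppler-shifted return light with a reference beam produces a beat frequency proportional to velocity,

$$f_{\mathrm{beat}}(t) = \frac{2\,v(t)}{\lambda_{0}} \quad\Longleftrightarrow\quad v(t) = \frac{\lambda_{0}}{2}\,f_{\mathrm{beat}}(t),$$

so for a 1550 nm probe, 1 km/s corresponds to a beat frequency of roughly 1.3 GHz, which is why multi-GHz detectors and digitizers are required.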
Proceedings - 2020 IEEE 34th International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020
Ozkaya, M.Y.; Balin, M.F.; Pinar, Ali P.; Catalyurek, Umit V.
Graphs are commonly used to model the relationships between various entities. These graphs can be enormously large, and thus scalable graph analysis has been the subject of many research efforts. To enable scalable analytics, many researchers have focused on generating realistic graphs that support controlled experiments for understanding how algorithms perform under changing graph features. Significant progress has been made on scalable graph generation that preserves some important graph properties (e.g., degree distribution, clustering coefficients). In this paper, we study how to sample a graph from the space of graphs with a given shell distribution. The shell distribution is related to the k-core, which is the largest subgraph in which each vertex is connected to at least k other vertices. A k-shell is the subset of vertices that are in the k-core but not the (k+1)-core, and the shell distribution comprises the sizes of these shells. Core decompositions are widely used to extract information from graphs and to assist other computations. We present a scalable shared- and distributed-memory graph generator that, given a shell decomposition, generates a random graph that conforms to it. Our extensive experimental results show the efficiency and scalability of our methods. Our algorithm generates 2³³ vertices and 2³⁷ edges in less than 50 seconds on 384 cores.
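For readers unfamiliar with the shell distribution, a minimal sketch of computing it for an existing graph (the input to the problem above, not the paper's generator) using networkx:

# Illustrative sketch: compute the shell distribution of a graph, i.e. the
# size of each k-shell, using networkx core numbers.
from collections import Counter
import networkx as nx

G = nx.barabasi_albert_graph(10_000, 3, seed=1)   # toy input graph
core = nx.core_number(G)                          # core number of each vertex
shell_sizes = Counter(core.values())              # k -> |{v : core(v) = k}|
for k in sorted(shell_sizes):
    print(f"{k}-shell: {shell_sizes[k]} vertices")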
Noble gases are generated within solids in nuclear environments and coalesce to form gas-stabilized voids or cavities. Ion implantation has become a prevalent technique for probing how gas accumulation affects microstructural and mechanical properties. Transmission electron microscopy (TEM) allows measurement of cavity density, size, and spatial distributions post-implantation. While post-implantation microstructural information is valuable for determining the physical origins of mechanical property degradation in these materials, dynamic microstructural changes can only be determined by in situ experimentation techniques. We present in situ TEM experiments performed on Pd, a model face-centered cubic metal, that reveal real-time cavity evolution dynamics. Observations of cavity nucleation and evolution under extreme environments are discussed.
The need to understand the risks and implications of traffic incidents involving hydrogen fuel cell electric vehicles in tunnels is increasing in importance with higher numbers of these vehicles being deployed. A risk analysis was performed to capture potential scenarios that could occur in the event of a crash and provide a quantitative calculation for the probability of each scenario occurring, with a qualitative categorization of possible consequences. The risk analysis was structured using an event sequence diagram with probability distributions on each event in the tree and random sampling was used to estimate resulting probability distributions for each end-state scenario. The most likely consequence of a crash is no additional hazard from the hydrogen fuel (98.1–99.9% probability) beyond the existing hazards in a vehicle crash, although some factors need additional data and study to validate. These scenarios include minor crashes with no release or ignition of hydrogen. When the hydrogen does ignite, it is most likely a jet flame from the pressure relief device release due to a hydrocarbon fire (0.03–1.8% probability). This work represents a detailed assessment of the state-of-knowledge of the likelihood associated with various vehicle crash scenarios. This is used in an event sequence framework with uncertainty propagation to estimate uncertainty around the probability of each scenario occurring.
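A minimal sketch of the uncertainty propagation described above (branch structure and distributions here are placeholders, not the study's values): sample each branch probability of a small event tree from a distribution and accumulate end-state probabilities over many Monte Carlo realizations.

# Hypothetical sketch: propagate branch-probability uncertainty through a
# simplified event tree; all distributions below are placeholders.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

p_release  = rng.beta(2, 200, N)     # P(hydrogen release | crash)
p_ignition = rng.beta(2, 50,  N)     # P(ignition | release)
p_jetfire  = rng.beta(8, 2,   N)     # P(jet flame | ignition)

no_added_hazard = 1.0 - p_release * p_ignition
jet_flame       = p_release * p_ignition * p_jetfire

for name, samples in [("no added hydrogen hazard", no_added_hazard),
                      ("jet flame", jet_flame)]:
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print(f"{name}: {lo:.4f} to {hi:.4f} (95% interval)")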
When batteries supply behind-the-meter services such as arbitrage or peak load management, an optimal controller can be designed to minimize the total electric bill. The limitations of the batteries, such as on voltage or state-of-charge, are represented in the model used to forecast the system's state dynamics. Control model inaccuracy can lead to an optimistic shortfall, where the achievable schedule will be costlier than the schedule derived using the model. To improve control performance and avoid optimistic shortfall, we develop a novel methodology for high performance, risk-averse battery energy storage controller design. Our method is based on two contributions. First, the application of a more accurate, but non-convex, battery system model is enabled by calculating upper and lower bounds on the globally optimal control solution. Second, the battery model is then modified to consistently underestimate capacity by a statistically selected margin, thereby hedging its control decisions against normal variations in battery system performance. The proposed model predictive controller, developed using this methodology, performs better and is more robust than the state-of-the-art approach, achieving lower bills for energy customers and being less susceptible to optimistic shortfall.
The magnetic susceptibility of NOx-loaded RE-DOBDC (rare earth (RE): Y, Eu, Tb, Yb; DOBDC: 2,5-dihydroxyterephthalic acid) metal-organic frameworks (MOFs) is unique to the MOF metal center. RE-DOBDC samples were synthesized, activated, and subsequently exposed to humid NOx. Each NOx-loaded MOF was characterized by powder X-ray diffraction, and the magnetic characteristics were probed using a VersaLab vibrating sample magnetometer (VSM). The lanthanide-containing RE-DOBDC frameworks (Eu, Tb, Yb) are paramagnetic, with a reduction in paramagnetism upon adsorption of NOx. Y-DOBDC has a diamagnetic moment with a slight reduction upon adsorption of NOx. The magnetic susceptibility of the MOF is determined by the magnetism imparted by the framework metal center. The electronic population of orbitals contributes to determining the extent of magnetism and changes with NOx (electron acceptor) adsorption. Eu-DOBDC exhibits the largest mass magnetization change upon adsorption of NOx due to more available unpaired f electrons. Experimental changes in magnetic moment were supported by density functional theory (DFT) simulations of NOx adsorbed in lanthanide Eu-DOBDC and transition metal Y-DOBDC MOFs.
Sandia’s Z Pulsed Power Facility is able to dynamically compress matter to extreme states with exceptional uniformity, duration, and size, which are ideal for investigating fundamental material properties at high energy density conditions. X-ray diffraction (XRD) is a key atomic-scale probe since it provides direct observation of the compression and strain of the crystal lattice and is used to detect, identify, and quantify phase transitions. Because of the destructive nature of Z-Dynamic Material Property (DMP) experiments and the low signal-to-background emission levels of XRD, it is very challenging to detect a diffraction signal close to the Z-DMP load and to recover the data. We have developed a new Spherical Crystal Diffraction Imager (SCDI) diagnostic to relay and image the diffracted x-ray pattern away from the load debris field. The SCDI diagnostic utilizes the Z-Beamlet laser to generate 6.2-keV Mn–Heα x rays to probe a shock-compressed material on the Z-DMP load. A spherically bent crystal composed of highly oriented pyrolytic graphite is used to collect and focus the diffracted x rays into a 1-in. thick tungsten housing, where an image plate is used to record the data.
Pawlowski, Roger P.; Stimpson, Shane G.; Clarno, Kevin; Gardner, Russell; Powers, Jeffrey; Collins, Benjamin S.; Toth, Alex; Novascone, Stephen R.; Pitts, Stephanie; Hales, Jason D.; Pastore, Giovanni
As the core simulator capabilities in the Virtual Environment for Reactor Applications (VERA) have become more mature and stable, increased attention has been focused on coupling Bison to provide fuel performance simulations. This technique has been a very important driver for the pellet-clad interaction challenge problem being addressed by the Consortium for Advanced Simulation of Light Water Reactors (CASL). Here, two coupling approaches are demonstrated on quarter core problems based on Watts Bar Nuclear Plant Unit 1, Cycle 1: (1) an inline approach in which a one-way coupling is used between neutronics/thermal hydraulics (through MPACT/CTF) and fuel performance (through Bison) but no fuel temperature information is passed back to MPACT/CTF, and (2) a two-way approach (coupled) in which the fuel temperature is passed from Bison to MPACT/CTF. In both approaches, power and temperature distributions from MPACT/CTF are used to inform the Bison simulations for each rod in the core. The demonstrations presented here are the first integrated fuel performance simulations in VERA, which opens many possibilities for future work, including applications to accident-tolerant fuel efforts and transient simulations, which are of critical importance to CASL. These demonstrations also highlight the potential to move away from the current Bison-informed fuel temperature lookup table approach, which is the default in MPACT/CTF simulations, if performance improvements are made in the near future.
This document presents the Sierra/SolidMechanics (Sierra/SM) verification plan. This plan centers on the tests in the Sierra/SM verification test suite, a subset of which are documented in the Sierra/SM Verification Tests Manual. Most of these tests are run nightly with the Sierra/SM code suite, and the results of the tests are checked against analytic solutions. For each of the tests presented in the Verification Tests Manual, the test setup, a description of the analytic solution, and comparison of the Sierra/SM code results to the analytic solution is provided. Mesh convergence is also checked on a nightly basis for several of these tests. This verification plan discusses these various types of tests and what they mean for Sierra/SM verification. Many other activities also contribute to Sierra/SM quality. These address code and solution quality and range from low-level unit tests, run nightly, up to full-fidelity acceptance tests, used to verify release stability. This acceptance test suite checks that new versions of Sierra/SM continue to yield the same answers for high-resolution analyst problems. Further code quality measures include an extensive suite of intermediate-size regression tests and automated nightly code quality checks. While these additional activities do not fall under a strict definition of verification, they greatly add to the quality, stability, and reliability of Sierra/SM, and are discussed here as well.
The principal finding of this report is that both a commercial and a novel material used for N95 mask filters can endure many cycles of disinfection by ozone gas (20 ppm for 30 minutes) without detectable degradation or loss of filtration efficiency. N95 masks and surgical masks (hereafter referred to as masks) typically use a filtration material fabricated from meltblown polypropylene. To achieve maximum filtration efficiency while maintaining a reasonable pressure drop, these nonwoven fabrics are also electrostatically charged (corona discharge is the most common method used) to maximize attraction and capture of aerosols and solid particulates. Under normal circumstances, the reuse of masks is generally discouraged, but in times of crisis it has become a necessity, making disinfection after each use essential. To be acceptable, any disinfection procedure must cause minimal degradation to the performance of the filter material. Possible performance degradation mechanisms include mechanical damage, loss of electrostatic charge, or both. One of the most practical and direct ways to measure combined mechanical and electrostatic integrity, and the subsequent ability to reuse mask filter material, is by the direct measurement of filtration efficiency. In this paper, we report that small numbers of disinfection cycles at reasonable virucidal doses of ozone do not significantly degrade the filtration efficiency of meltblown polypropylene filter material. By comparison, laundering quickly results in a significant loss of filtration efficiency and requires subsequent recharging to restore the electrostatic charge and filtration efficiency. A common assumption among biomedical scientists is that ozone is far too destructive for this application. However, these direct measurements show that mask materials, specifically the filtration material, can withstand dozens of ozone disinfection cycles without any measurable degradation of filtration efficiency, nor any visible discoloration or loss of fiber integrity. The data are clear: when subjected to a virucidal dose of ozone for a much longer duration than is required for viral inactivation, there was no degradation of N95 filtration efficiency. The specific dosages of ozone needed for ~99% viral inactivation are thought to be at least 10 ppm for up to 30 minutes based upon an extensive literature review, but to standardize our testing, we consider a dose of 20 ppm for 30 minutes to be a reasonable and conservatively high ozone disinfection cycle. Finally, the material tested in this study withstood dosages of up to 200 ppm for 90 minutes, or alternatively 20 ppm for up to 36 hours, without detectable degradation, and further testing suggests that up to 30 or more disinfection cycles (at 20 ppm for 30 minutes) would result in less than a 5% loss of filtration efficiency. This report does not address the effect of ozone cycling on other mask components, such as elastics.
Bayesian optimization (BO) is an effective surrogate-based method that has been widely used to optimize simulation-based applications. While the traditional Bayesian optimization approach only applies to single-fidelity models, many realistic applications provide multiple levels of fidelity with various levels of computational complexity and predictive capability. In this work, we propose a multi-fidelity Bayesian optimization method for design applications with both known and unknown constraints. The proposed framework, called sMF-BO-2CoGP, is built on a multi-level CoKriging method to predict the objective function. An external binary classifier, which we approximate using a separate CoKriging model, is used to distinguish between feasible and infeasible regions. Finally, the sMF-BO-2CoGP method is demonstrated using a series of analytical examples and a flip-chip application for design optimization to minimize the deformation due to warping under thermal loading conditions.
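As a single-fidelity analog of the constrained acquisition idea (a sketch, not the sMF-BO-2CoGP CoKriging machinery; the toy objective and feasibility rule are made up), expected improvement can be weighted by the classifier's predicted feasibility probability:

# Hypothetical sketch: expected improvement weighted by a feasibility
# classifier, standing in for the CoKriging objective/constraint models.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor, GaussianProcessClassifier

def constrained_acquisition(X_cand, gp_obj, gp_feas, y_best):
    mu, sigma = gp_obj.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma                        # minimization convention
    ei = (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei * gp_feas.predict_proba(X_cand)[:, 1]  # weight by P(feasible)

np.random.seed(0)
X = np.random.rand(20, 1)                            # toy 1-D design samples
y = np.sin(6 * X[:, 0])                              # toy objective values
feasible = (X[:, 0] > 0.2).astype(int)               # toy feasibility labels

gp_obj = GaussianProcessRegressor().fit(X[feasible == 1], y[feasible == 1])
gp_feas = GaussianProcessClassifier().fit(X, feasible)
X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
scores = constrained_acquisition(X_cand, gp_obj, gp_feas, y[feasible == 1].min())
print("next sample at x =", X_cand[np.argmax(scores), 0])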
Proceedings of SPIE - The International Society for Optical Engineering
Fink, Douglas R.; Lee, Seunghyun; Kodati, Sri H.; Rogers, V.; Ronningen, Theodore J.; Winslow, Martin; Grein, Christoph H.; Jones, Andrew H.; Campbell, Joe C.; Klem, John F.; Krishna, Sanjay
Here, we present a method of determining the background doping type in semiconductors using capacitance-voltage measurements on overetched double mesa p-i-n or n-i-p structures. Unlike Hall measurements, this method is not limited by the conductivity of the substrate. By measuring the capacitance of devices with varying top and bottom mesa sizes, we were able to conclusively determine which mesa contained the p-n junction, revealing the polarity of the intrinsic layer. This method, when demonstrated on GaSb p-i-n and n-i-p structures, determined that the material is residually doped p-type, which is well established by other sources. The method was then applied on a 10 monolayer InAs/10 monolayer AlSb superlattice, for which the doping polarity was unknown, and indicated that this material is also p-type.
This report is the Task E specification (Revision 0) for DECOVALEX-2023. Task E is focused on understanding thermal, hydrological, mechanical, and chemical (THMC) processes, especially related to predicting brine migration in heated salt. The main test case being used is the ongoing Brine Availability Test in Salt (BATS) heater test located underground at the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. This report provides a short motivational background, a summary of relevant experiments and data, and a step-by-step plan for the analysis by the teams participating in Task E (Rev. 0 includes a detailed description of Steps 0 and 1). This document will be revised, and more detail will be added for later steps during DECOVALEX-2023.
Sandia National Laboratories (SNL) assessed the filtration performance of materials from Sierra Peaks to identify alternatives that may perform similarly to materials used in FDA-approved N95 respirators. This work is meant to characterize the aerosol performance of materials to give Sierra Peaks information to determine whether they elect to submit masks made using these materials for follow-on N95 certification testing at an accredited facility. The R&D testbed used is a large-scale filtration system designed to test commercial filter boxes. System modifications were performed to simulate, where possible, parameters defined by the National Institute for Occupational Safety and Health (NIOSH) for certification of filter materials for N95 respirators (NIOSH 2019). The system is a pull-through design. Air enters through a Laminar Flow Element (LFE) and the volumetric flow is measured based on the pressure drop across the LFE. Pressure is measured via a Pressure Transducer (PT). The air then passes through a High Efficiency Particulate Air (HEPA) filter to purge the air of ambient airborne particulates. Test aerosol is injected into the flow shortly after, and mixing is induced via a coarse mesh. The airflow is allowed to fully develop prior to arriving at the test section. The aerosol then passes through the test material mounted in a box in the test section. Pressure drop across the test article is measured, and aerosol sampling probes measure the aerosol concentrations upstream and downstream of the sample. The air passes through a second HEPA filter prior to being exhausted to ambient by a blower. A Topas aerosol generator is used to produce the test aerosol from Sodium Chloride (NaCl) dissolved in deionized (DI) water. Generated aerosol passes through a heated mixing chamber and a desiccant dryer to produce nanosized solid-state particulates. A dilution loop allows the aerosol concentration to be regulated. The aerosol sampling probes upstream and downstream of the test section are aligned with the flow path. These are ducted directly to the aerosol sizing and counting instruments. A Laser Aerosol Spectrometer (LAS) was used for data collection in the original configuration of the system and was also used for initial testing in this project. Because the lower measurement range of the LAS is 90 nanometers (nm), the LAS was switched out for a more complicated Scanning Mobility Particle Sizer (SMPS) spectrometer system. The SMPS comprises an Electrostatic Classifier (EC), Differential Mobility Analyzer (DMA), and a Condensation Particle Counter (CPC). This enabled data collection at 75 nm, the particle size called out in the NIOSH guidelines.
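From the upstream and downstream aerosol concentrations described above, the quantity ultimately reported is the filtration efficiency (an N95-class material must reach at least 95% under NIOSH test conditions). A minimal illustration of that arithmetic, with made-up counts:

# Minimal illustration: filtration efficiency from upstream/downstream
# particle concentrations (the counts here are made up, not measured data).
c_upstream = 12_500.0      # particles/cm^3 at ~75 nm, upstream of the sample
c_downstream = 320.0       # particles/cm^3 downstream of the sample

penetration = c_downstream / c_upstream
efficiency = 1.0 - penetration
print(f"penetration = {penetration:.2%}, filtration efficiency = {efficiency:.2%}")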
Sandia National Laboratories currently has 27 COVID-related Laboratory Directed Research & Development (LDRD) projects focused on helping the nation during the pandemic. These LDRD projects cross many disciplines including bioscience, computing & information sciences, engineering science, materials science, nanodevices & microsystems, and radiation effects & high energy density science.
Metal-organic frameworks (MOFs) NU-1000 and UiO-66 are herein exposed to two different gamma irradiation doses and dose rates and analyzed to determine the structural features that affect their stability in these environments. MOFs have shown promise for the capture and sensing of off-gases at civilian nuclear energy reprocessing sites, nuclear waste repositories, and nuclear accident locations. However, little is understood about the structural features of MOFs that contribute to their stability levels under the ionizing radiation conditions present at such sites. This study is the first of its kind to explore the structural features of MOFs that contribute to their radiolytic stability. Both NU-1000 and UiO-66 are MOFs that contain Zr metal-centers with the same metal absorption cross section. However, the two MOFs exhibit different linker connectivities, linker aromaticities, node densities, node connectivities, and interligand separations. In this study, NU-1000 and UiO-66 were exposed to high (423.3 Gy/min for 23 min and 37 s) and low (0.78 Gy/min for 4320 min) dose rates of 60Co gamma irradiation. NU-1000 displayed insignificant radiation damage under both dose rates due to its high linker connectivity, low node density, and low node connectivity. However, low radiation dose rates caused considerable damage to UiO-66, a framework with lower aromaticity and smaller interligand separation. Results suggest that chronic, low-radiation environments are more detrimental to Zr MOF stability than acute, high-radiation conditions.
Single-ion conducting polymers such as ionomers are promising battery electrolyte materials, but it is critical to understand how rates and mechanisms of free cation transport depend on the nanoscale aggregation of cations and polymer-bound anions. We perform coarse-grained molecular dynamics simulations of ionomer melts to understand cation mobility as a function of polymer architecture, background relative permittivity, and corresponding ionic aggregate morphology. In systems exhibiting percolated ionic aggregates, cations diffuse via stepping motions along the ionic aggregates. These diffusivities can be quantitatively predicted by calculating the lifetimes of continuous association between oppositely charged ions, which equal the time scales of the stepping (diffusive) motions. In contrast, predicting cation diffusivity for systems with isolated ionic aggregates requires another time scale. Our results suggest that to improve conductivity the Coulombic interaction strength should be strong enough to favor percolated aggregates but weak enough to facilitate ion dissociation.
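A rough quantitative link between those association lifetimes and the diffusivity (a standard random-walk estimate given here for orientation, not necessarily the paper's exact expression): if a cation advances a mean-square step length ⟨ℓ²⟩ along the percolated aggregate each time a continuous cation-anion association of lifetime τ_assoc ends, then in three dimensions

$$D \approx \frac{\langle \ell^{2} \rangle}{6\,\tau_{\mathrm{assoc}}},$$

which is why the association lifetime, together with the hop length set by the aggregate geometry, suffices to predict cation diffusivity in the percolated regime, while isolated aggregates require an additional inter-aggregate exchange time scale.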
A discussion of many of the recently implemented features of GAMESS (General Atomic and Molecular Electronic Structure System) and LibCChem (the C++ CPU/GPU library associated with GAMESS) is presented. These features include fragmentation methods such as the fragment molecular orbital, effective fragment potential, and effective fragment molecular orbital methods; hybrid MPI/OpenMP approaches to Hartree-Fock; and resolution-of-the-identity second-order perturbation theory. Many new coupled cluster theory methods have been implemented in GAMESS, as have multiple levels of density functional/tight binding theory. The role of accelerators, especially graphical processing units, is discussed in the context of the new features of LibCChem, as is the associated problem of power consumption as the power of computers increases dramatically. The process by which a complex program suite such as GAMESS is maintained and developed is considered. Future developments are briefly summarized.
Development of calcium metal batteries has been historically frustrated by a lack of electrolytes capable of supporting reversible calcium electrodeposition. In this paper, we report the study of an electrolyte consisting of Ca(BH4)2 in tetrahydrofuran (THF) to gain important insight into the role of the liquid solvation environment in facilitating the reversible electrodeposition of this highly reactive, divalent metal. Through interrogation of the Ca2+ solvation environment and comparison with Mg2+ analogs, we show that an ability to reversibly electrodeposit metal at reasonable rates is strongly regulated by dication charge density and polarizability. Our results indicate that the greater polarizability of Ca2+ over Mg2+ confers greater configurational flexibility, enabling ionic cluster formation via neutral multimer intermediates. Increased concentration of the proposed electroactive species, CaBH4+, enables rapid and stable delivery of Ca2+ to the electrode interface. This work helps set the stage for future progress in the development of electrolytes for calcium and other divalent metal batteries.
Rempe, Susan R.; Chaudhari, Mangesh I.; Vanegas, Juan M.; Pratt, L.R.; Muralidharan, Ajay
Ions transiting biomembranes might pass readily from water through ion-specific membrane proteins if these protein channels provide environments similar to the aqueous solution hydration environment. Indeed, bulk aqueous solution is an important reference condition for the ion permeation process. Assessment of this hydration mimicry concept depends on understanding the hydration structure and free energies of metal ions in water in order to provide a comparison for the membrane channel environment. To refine these considerations, we review local hydration structures of ions in bulk water and the molecular quasi-chemical theory that provides hydration free energies. In doing so, we note some current views of ion binding to membrane channels and suggest new physical chemical calculations and experiments that might further clarify the hydration mimicry concept.
We report on detailed experimental studies of a high-quality heterojunction insulated-gate field-effect transistor (HIGFET) to probe the particle-hole symmetry of the fractional quantum Hall effect (FQHE) states about half-filling in the lowest Landau level. The HIGFET is specially designed to vary the density of a two-dimensional electronic system under constant magnetic fields. We find in our constant magnetic field, variable density measurements that the sequence of FQHE states at filling factors ν=1/3,2/5,3/7... and its particle-hole conjugate states at filling factors 1-ν=2/3,3/5,4/7... have a very similar energy gap. Moreover, a reflection symmetry can be established in the magnetoconductivities between the ν and 1-ν states about half-filling. Our results demonstrate that the FQHE states in the lowest Landau level are manifestly particle-hole symmetric.
Gupta, Nikunj; Mayo, Jackson M.; Lemoine, Adrian S.; Kaiser, Hartmut
The DOE Office of Science Exascale Computing Project (ECP) outlines the next milestones in the supercomputing domain. The target computing systems under the project will deliver 10x performance while keeping the power budget under 30 megawatts. With such large machines, the need to make applications resilient has become paramount. The benefits of adding resiliency to mission-critical and scientific applications include the reduced cost of restarting a failed simulation, both in terms of time and power. Most current implementations of resiliency at the software level make use of Coordinated Checkpoint and Restart (C/R). This technique of resiliency generates a consistent global snapshot, also called a checkpoint. Generating snapshots involves global communication and coordination and is achieved by synchronizing all running processes. The generated checkpoint is then stored in some form of persistent storage. On failure detection, the runtime initiates a global rollback to the most recent previously saved checkpoint. This involves aborting all running processes, rolling them back to the previous state, and restarting them.
Physics-informed neural networks (PINNs) are effective in solving inverse problems based on differential and integro-differential equations with sparse, noisy, unstructured, and multifidelity data. PINNs incorporate all available information, including governing equations (reflecting physical laws), initial-boundary conditions, and observations of quantities of interest, into a loss function to be minimized, thus recasting the original problem into an optimization problem. In this paper, we extend PINNs to parameter and function inference for integral equations such as nonlocal Poisson and nonlocal turbulence models, and we refer to them as nonlocal PINNs (nPINNs). The contribution of the paper is three-fold. First, we propose a unified nonlocal Laplace operator, which converges to the classical Laplacian as one of the operator parameters, the nonlocal interaction radius δ, goes to zero, and to the fractional Laplacian as δ goes to infinity. This universal operator forms a super-set of classical Laplacian and fractional Laplacian operators and, thus, has the potential to fit a broad spectrum of data sets. We provide theoretical convergence rates with respect to δ and verify them via numerical experiments. Second, we use nPINNs to estimate the two parameters, δ and α, characterizing the kernel of the unified operator. The strong non-convexity of the loss function yielding multiple (good) local minima reveals the occurrence of the operator mimicking phenomenon, that is, different pairs of estimated parameters could produce multiple solutions of comparable accuracy. Third, we propose another nonlocal operator with spatially variable order α(y), which is more suitable for modeling turbulent Couette flow. Our results show that nPINNs can jointly infer this function as well as δ. More importantly, these parameters exhibit a universal behavior with respect to the Reynolds number, a finding that contributes to our understanding of nonlocal interactions in wall-bounded turbulence.
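For orientation, a generic truncated-kernel nonlocal Laplacian (an illustration, not necessarily the paper's exact parameterization) acts as

$$\mathcal{L}_{\delta}u(x) = \int_{B_{\delta}(x)} \big(u(y) - u(x)\big)\,\gamma_{\delta,\alpha}(|y - x|)\,dy,$$

where B_δ(x) is the ball of radius δ centered at x; with suitable kernel normalization, \mathcal{L}_{\delta}u tends to the classical Laplacian Δu as δ → 0 (for smooth u), while for fractional-type kernels γ ∝ |y − x|^{-(d+2α)} it tends, up to a constant, to −(−Δ)^{α}u as δ → ∞.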
We propose a multilevel approach for trace systems resulting from hybridized discontinuous Galerkin (HDG) methods. The key is to blend ideas from nested dissection, domain decomposition, and the high-order character of HDG discretizations. Specifically, we first create a coarse solver by eliminating and/or limiting the front growth in nested dissection. This is accomplished by projecting the trace data into a sequence of same- or high-order polynomials on a set of increasingly h-coarser edges/faces. We then combine the coarse solver with a block-Jacobi fine-scale solver to form a two-level solver/preconditioner. Numerical experiments indicate that the performance of the resulting two-level solver/preconditioner depends on the smoothness of the solution and can offer significant speedups and memory savings compared to the nested dissection direct solver. While the proposed algorithms are developed within the HDG framework, they are applicable to other hybrid(ized) high-order finite element methods. Moreover, we show that our multilevel algorithms can be interpreted as a multigrid method with specific intergrid transfer and smoothing operators. With several numerical examples from Poisson, pure transport, and convection-diffusion equations, we demonstrate the robustness and scalability of the algorithms with respect to solution order. While scalability with mesh size in general is not guaranteed and depends on the smoothness of the solution and the type of equation, improving it is a part of future work.
Lagrangian spray modeling represents a critical boundary condition for multidimensional simulations of in-cylinder flow structure, mixture formation, and combustion in internal combustion engines. Segregated models for injection, breakup, collision, and vaporization are usually employed to pass appropriate momentum, mass, and energy source terms to the gas-phase solver. Careful calibration of each sub-model generally produces appropriate results. Yet, the predictiveness of this modeling approach has been questioned by recent experimental observations, which showed that at trans- and super-critical conditions relevant to diesel injection, classical atomization and vaporization behavior is replaced by a mixing-controlled phase transition process of a dense fluid. In this work, we assessed the shortcomings of classical spray modeling with respect to real-gas and phase-change behavior, employing a multicomponent phase equilibrium solver and liquid-jet theory. A Peng-Robinson Equation of State (PR-EoS) model was implemented, and EoS-neutral thermodynamics derivatives were introduced in the FRESCO CFD platform turbulent NS solver. A phase equilibrium solver based on Gibbs free energy minimization was implemented to test phase stability and to compute phase equilibrium. Zero-dimensional flash calculations were employed to validate the solver with single- and multi-component fuels, at conditions relevant to diesel injection. The validation showed that the 2-phase mixture temperature in the jet core can deviate up to 40 K from the single-phase solution. Surface equilibrium with Raoult's law employed for drop vaporization calculation was observed to deviate up to 100% from the actual multiphase real-gas behavior. Liquid-jet spray structure in high pressure fuel injection CFD calculations was modeled using an equilibrium-phase (EP) Lagrangian injection model, where liquid fuel mass is released to the Eulerian liquid phase, assuming phase equilibrium in every cell. Comparison to state-of-the-art modeling featuring KH-RT breakup and multicomponent fuel vaporization highlighted the superior predictive capabilities of the EP model in capturing liquid spray structure at several conditions with limited calibration efforts.
Combustion issuing from an eight-hole, direct-injection spray was experimentally studied in a constant-volume pre-burn combustion vessel using simultaneous high-speed diffused back-illumination extinction imaging (DBIEI) and OH∗ chemiluminescence. DBIEI has been employed to observe the liquid phase of the spray and to quantitatively investigate the soot formation and oxidation taking place during combustion. The fuel-air mixture was ignited with a plasma induced by a single-shot Nd:YAG laser, permitting precise control of the ignition location in space and time. OH∗ chemiluminescence was used to track the high-temperature ignition and flame. The study showed that increasing the delay between the end of injection and ignition drastically reduces soot formation without necessarily compromising combustion efficiency. For long delays between the end of injection and ignition (1.9 ms), soot formation was eliminated in the main downstream charge of the fuel spray. However, poorly atomized and large droplets formed at the end of injection (dribble) eventually do form soot near the injector even when none is formed in the main charge. The quantitative soot measurements for these spray and ignition scenarios, resolved in time and space, represent a significant new achievement. Reynolds-averaged Navier-Stokes (RANS) simulations were performed to assess spray mixing and combustion. An analysis of the predicted fuel-air mixture in key regions, defined based upon experimental observations, was used to explain different flame propagation speeds and soot production tendencies when varying ignition timing. The mixture analysis indicates that soot production can be avoided if the flame propagates into regions where the equivalence ratio (φ) is already below 2. Reactive RANS simulations have also been performed, but with a poor match against the experiment, as the flame speed and heat-release rate are largely overestimated. This modeling weakness appears related to a very high level of turbulent viscosity predicted for the high-momentum spray in the RANS simulations, which is an important consideration for modeling ignition and flame propagation in mixtures immediately created by the spray.
A complementary metal oxide semiconductor (CMOS) compatible fabrication method for creating three-dimensional (3D) meta-films is presented. In contrast to metasurfaces, meta-films possess structural variation throughout the thickness of the film and can possess a sub-wavelength scale structure in all three dimensions. Here we use this approach to create 2D arrays of cubic silicon nitride unit cells with plasmonic inclusions of elliptical metallic disks in horizontal and vertical orientations with lateral array-dimensions on the order of millimeters. Fourier transform infrared (FTIR) spectroscopy is used to measure the infrared transmission of meta-films with either horizontally or vertically oriented ellipses with varying eccentricity. Shape effects due to the ellipse eccentricity, as well as localized surface plasmon resonance (LSPR) effects due to the effective plasmonic wavelength are observed in the scattering response. The structures were modeled using rigorous coupled wave analysis (RCWA), finite difference time domain (Lumerical), and frequency domain finite element (COMSOL). The silicon nitride support structure possesses a complex in-plane photonic crystal slab band structure due to the periodicity of the unit cells. We show that adjustments to the physical dimensions of the ellipses can be used to control the coupling to this band structure. The horizontally oriented ellipses show narrow, distinct plasmonic resonances while the vertically oriented ellipses possess broader resonances, with lower overall transmission amplitude for a given ellipse geometry. We attribute this difference in resonance behavior to retardation effects. The ability to couple photonic slab modes with plasmonic inclusions enables a richer space of optical functionality for design of metamaterial-inspired optical components.
Ducted fuel injection (DFI) has been shown to attenuate engine-out soot emissions from diesel engines. The concept is to inject fuel through a small tube within the combustion chamber to enable lower equivalence ratios at the autoignition zone, relative to conventional diesel combustion. Previous experiments have demonstrated that DFI enables significant soot attenuation relative to conventional diesel combustion for a small set of operating conditions at relatively low engine loads. This is the first study to compare DFI to conventional diesel combustion over a wide range of operating conditions and at higher loads (up to 8.5 bar gross indicated mean effective pressure) with a four-orifice fuel injector. This study compares DFI to conventional diesel combustion through sweeps of intake-oxygen mole fraction (XO2), injection duration, intake pressure, start of combustion (SOC) timing, fuel-injection pressure, and intake temperature. DFI is shown to curtail engine-out soot emissions at all tested conditions. Under certain conditions, DFI can attenuate engine-out soot by over a factor of 100. In addition to producing significantly lower engine-out soot emissions, DFI enables the engine to be operated at low-NOx conditions that are not feasible with conventional diesel combustion due to high soot emissions.
Moffitt, Stephanie L.; Riley, Conor; Ellis, Benjamin H.; Fleming, Robert A.; Thompson, Corey S.; Burton, Patrick D.; Gordon, Margaret E.; Zakutayev, Andriy; Schelhas, Laura T.
Characterization of photovoltaic (PV) module materials throughout different stages of service life is crucial to understanding and improving the durability of these materials. Currently, the large scale of PV modules (>1 m²) is mismatched with the small scale of most materials characterization tools (≤1 cm²). Furthermore, understanding degradation mechanisms often requires a combination of multiple characterization techniques. Here, we present adaptations of three standard materials characterization techniques to enable mapping characterization over moderate sample areas (≥25 cm²). Contact angle, ellipsometry, and UV-vis spectroscopy are each adapted and demonstrated on two representative samples: a commercial multifunctional coating for PV glass and an oxide combinatorial sample library. Best practices are discussed for adapting characterization techniques for large-area mapping and combining mapping information from multiple techniques.
Here, we study orthogonal polynomials with respect to self-similar measures, focusing on the class of infinite Bernoulli convolutions, which are defined by iterated function systems with overlaps, especially those defined by the Pisot, Garsia, and Salem numbers. By using an algorithm of Mantica, we obtain graphs of the coefficients of the 3-term recursion relation defining the orthogonal polynomials. We use these graphs to predict whether the singular infinite Bernoulli convolutions belong to the Nevai class. Based on our numerical results, we conjecture that all infinite Bernoulli convolutions with contraction ratios greater than or equal to 1/2 belong to Nevai’s class, regardless of the probability weights assigned to the self-similar measures.
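To fix notation, monic orthogonal polynomials satisfy the 3-term recursion π_{n+1}(x) = (x − a_n)π_n(x) − b_n π_{n−1}(x), so once the coefficient sequences (a_n, b_n) have been computed (e.g., by Mantica's algorithm), evaluating the polynomials is a short loop; the coefficients below are simple placeholders, not output of that algorithm:

# Sketch: evaluate monic orthogonal polynomials from given 3-term recursion
# coefficients a[n], b[n] (placeholder values, not Mantica-algorithm output).
import numpy as np

def monic_poly_values(x, a, b, N):
    """Return [pi_0(x), ..., pi_N(x)] for
       pi_{n+1} = (x - a[n]) * pi_n - b[n] * pi_{n-1}, with pi_0 = 1, pi_{-1} = 0."""
    p_prev, p_curr = np.zeros_like(x), np.ones_like(x)
    values = [p_curr]
    for n in range(N):
        p_prev, p_curr = p_curr, (x - a[n]) * p_curr - b[n] * p_prev
        values.append(p_curr)
    return values

x = np.linspace(-1.0, 1.0, 5)
a = np.zeros(10)            # a_n = 0 for a symmetric measure
b = np.full(10, 0.25)       # constant b_n = 1/4 gives monic Chebyshev (2nd kind)
print(monic_poly_values(x, a, b, 3)[3])   # pi_3(x) = x**3 - 0.5*x at the sample points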
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) and level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton’s method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity and dynamic load balancing are some of Aria’s more advanced capabilities.
Complex metal hydrides provide high-density hydrogen storage, which is essential for vehicular applications. However, the utility of these materials has been limited by thermodynamic and kinetic barriers present during the dehydrogenation and rehydrogenation processes as new phases form inside parent phases. Better understanding of the mixed-phase mesostructures and their interfaces may assist in improving cyclability. In this work, the evolution of the phases during hydrogenation of lithium nitride and dehydrogenation of lithium amide with lithium hydride is probed with scanning-transmission X-ray microscopy at the nitrogen K edge. With this technique, intriguing core-shell structures were observed in particles of both partially hydrogenated Li3N and partially dehydrogenated LiNH2 + 2 LiH. The potential contributions of both internal hydrogen mobility and interfacial energies to the generation of these structures are discussed.
Rempe, Susan R.; Maldonado, Alex M.; Basdogan, Yasemin; Berryman, Joshua T.; Keith, John A.
Mixed solvents (i.e., binary or higher order mixtures of ionic or nonionic liquids) play crucial roles in chemical syntheses, separations, and electrochemical devices because they can be tuned for specific reactions and applications. Apart from fully explicit solvation treatments that can be difficult to parameterize or computationally expensive, there is currently no well-established first-principles regimen for reliably modeling atomic-scale chemistry in mixed solvent environments. We offer our perspective on how this process could be achieved in the near future as mixed solvent systems become more explored using theoretical and computational chemistry. We first outline what makes mixed solvent systems far more complex compared to single-component solvents. An overview of current and promising techniques for modeling mixed solvent environments is provided. We focus on so-called hybrid solvation treatments such as the conductor-like screening model for real solvents and the reference interaction site model, which are far less computationally demanding than explicit simulations. We also propose that cluster-continuum approaches rooted in physically rigorous quasi-chemical theory provide a robust, yet practical, route for studying chemical processes in mixed solvents.
Polymer nanoparticle composites (PNCs) with ultrahigh loading of nanoparticles (NPs) (>50%) have been shown to exhibit markedly improved strength, stiffness, and toughness simultaneously compared to the neat systems of their components. Recent experimental studies on the effect of polymer fill fraction in these highly loaded PNCs reveal that even at low polymer fill fractions, hardness and modulus increase significantly. In this work, we aim to understand the origin of these performance enhancements by examining the dynamics of both polymer and NPs under tensile deformation. We perform molecular dynamics simulations of coarse-grained, glassy polymer in random-close-packed NP packings with a varying polymer fill fraction. We characterize the mechanical properties of the PNC systems, compare the NP rearrangement behavior, and study the polymer segmental and chain-level dynamics during deformation below the polymer glass transition. Finally, our simulation results confirm the experimentally observed increase in modulus at low polymer fill fractions, and we provide evidence that the source of mechanical enhancement is the polymer bridging effect.
Here, we describe direct measurements of the ozone concentrations achievable in small enclosed containers (plastic storage boxes) for use as improvised decontamination systems for small articles such as disposable PPE (N95 masks, nitrile gloves, etc.), clothing, mail and small packages, food, and other miscellaneous articles. The emphasis is on reliably sustaining ozone concentrations of sufficient magnitude and duration to create a virucidal environment capable of more than 95% to 99% viral inactivation, based upon data already published in the peer-reviewed literature on this topic. Using ozone to inactivate viruses is certainly not a new idea. Our objective in this report is to make clear that the necessary levels of ozone can be improvised using simple, easy-to-use, inexpensive, and widely available supplies, and that there is strong theoretical and experimental reason to believe this approach is as effective at viral inactivation as the far more expensive, complex, cumbersome, and less available ozone (and other) disinfection systems that have themselves become unavailable during times of pandemic crisis. Using multiple types of readily available commercial ozone generators, the concentration in the tested improvised enclosure is tracked over time to assess ozone charging and decay rates and the ozone-quenching effects of items placed in the box. Generator performance is compared against published ozone dosage values for virucidal and antimicrobial activity. Bubbler and box-fan-type ozone generators were found to be effective at achieving and maintaining target concentrations of 10 ppm ozone or higher, whereas automotive cigarette-lighter and universal serial bus (USB) plug-in "air freshener" ozone generators could not achieve the target concentrations in these experiments. Calculations and practical guidelines for the assembly and effective use of an ozone box for improvised decontamination are offered. The majority of this report is directed toward the scientific justification and rationale for this approach. The end of the document summarizes the findings and offers simplified designs for the construction and use of ozone boxes as an improvised method of disinfection.
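To make the charging and decay behavior concrete, the box concentration can be approximated by a first-order balance between generator output and decay. The sketch below (Python, with hypothetical parameter values rather than the measured data from this report) estimates the time to reach the 10 ppm target:

    # Illustrative first-order model of ozone concentration in a sealed box:
    #   dC/dt = G/V - k*C   (generator on),   dC/dt = -k*C   (generator off)
    # All parameter values are hypothetical placeholders, not measured data.
    import numpy as np

    G = 0.5e-3      # generator output, g O3 per minute (hypothetical)
    V = 0.05        # box volume, m^3 (hypothetical ~50 L storage box)
    k = 0.05        # first-order decay constant, 1/min (rises with loaded items)
    target_ppm = 10.0

    def mg_per_m3_to_ppm(c):
        # at ~25 C and 1 atm, 1 ppm of O3 is about 1.96 mg/m^3
        return c / 1.96

    t = np.linspace(0.0, 120.0, 1201)                        # minutes
    C_on = (G * 1e3 / (V * k)) * (1.0 - np.exp(-k * t))      # mg/m^3 while charging
    ppm = mg_per_m3_to_ppm(C_on)
    reached = t[ppm >= target_ppm]
    print("time to reach 10 ppm (min):", reached[0] if reached.size else "not reached")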
The helium-related characteristics of aged palladium tritide, including tritium loss, the formation of He bubbles from tritium decay-induced He atoms, and He thermal desorption behavior, are computed using a continuum model of bubble evolution that incorporates the log-normal bubble spacing distribution deduced from 3He nuclear magnetic resonance (NMR) measurements. This spacing distribution produces significant differences between the mean (expectation) values of bubble size and pressure and the average (median) values of these aging characteristics obtained in a previous calculation. The new calculations find that much of the He is retained in a large quantity of smaller bubbles with higher bubble pressures, improving the overall stability of the bubbles and extending the age to which He is retained. By contrast, the integrated tritide material characteristics of swelling and tritium pressure are found to be relatively insensitive to details of the bubble spacing distribution. Inter-bubble fracture is predicted to begin with small, closely spaced bubbles and progress to include larger bubbles near the onset of rapid He release. The critical concentration for this onset increases almost linearly with material tensile strength and decreases with shear modulus. Release of He from the aging solid is modeled as a growing network of fractured inter-bubble ligaments. For small-particle or small-grain material, the physical size of the fracture cluster relative to the material dimensions becomes important. In contrast with normal aging, inter-bubble fracture occurring during thermal desorption appears to begin with large, widely spaced bubbles, indicating that elevated-temperature techniques may be of limited use for evaluating development of the bubble fracture network.
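For reference, a log-normal spacing distribution has the standard density (generic symbols, not the report's notation)

    $$ f(r) = \frac{1}{r\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln r - \mu)^2}{2\sigma^2}\right), $$

whose mean $e^{\mu + \sigma^2/2}$ exceeds its median $e^{\mu}$; this skewness is why the mean (expectation) bubble characteristics computed here differ from the median values obtained previously.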
Sandia has an extensive background in cybersecurity research and is currently extending its state-of-the-art modeling-via-emulation capability. A key part of Sandia's modeling methodology is the discovery and specification of the information system under study, and the ability to recreate that specification with the highest fidelity possible in order to extrapolate meaningful results. This work details a method to conduct information system discovery and to develop tools that enable the creation of high-fidelity emulation models, which can be used to assess the security posture of our infrastructure information systems and the potential system impacts of cyber threats. The outcome is a set of tools and techniques for going from network discovery of operational systems to emulation of complex systems. As a concrete use case, we applied these tools and techniques at Supercomputing 2016 to model SCinet, the world's largest research network. This model includes five routers and nearly 10,000 endpoints, which we have launched in our emulation platform.
This article is concerned with the approximation of high-dimensional functions by kernel-based methods. Motivated by uncertainty quantification, which often necessitates the construction of approximations that are accurate with respect to a probability density function of random variables, we aim at minimizing the approximation error with respect to a weighted $L^p$-norm. We present a greedy procedure for designing computer experiments based upon a weighted modification of the pivoted Cholesky factorization. The method successively generates nested samples with the goal of minimizing error in regions of high probability. Numerical experiments validate that this new importance sampling strategy is superior to other sampling approaches, especially when used with non-product probability density functions. We also show how to use the proposed algorithm to efficiently generate surrogates for inferring unknown model parameters from data.
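As an illustration of how such a design procedure can be organized, the sketch below implements a generic weighted pivoted Cholesky selection, pivoting on the product of a density weight and the residual kernel variance; it is a plausible variant offered for intuition only, not the exact algorithm of the article:

    # Generic weighted pivoted Cholesky for greedy design-point selection (illustrative).
    import numpy as np

    def weighted_pivoted_cholesky(K, w, m):
        """K: (n, n) kernel matrix over candidate points; w: (n,) density weights;
        m: number of points to select. Returns pivot indices and low-rank factor L."""
        n = K.shape[0]
        d = np.diag(K).astype(float).copy()    # residual (conditional) variances
        L = np.zeros((n, m))
        pivots = []
        for k in range(m):
            j = int(np.argmax(w * d))          # weighted greedy pivot choice
            if d[j] <= 1e-12:
                break
            pivots.append(j)
            L[:, k] = (K[:, j] - L[:, :k] @ L[j, :k]) / np.sqrt(d[j])
            d = np.maximum(d - L[:, k] ** 2, 0.0)   # update residual variances
        return pivots, L[:, :len(pivots)]

    # Example: squared-exponential kernel on random candidates, Gaussian density weights
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    K = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    w = np.exp(-0.5 * np.sum(X ** 2, axis=1))
    pivots, L = weighted_pivoted_cholesky(K, w, 20)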
Thermal conductivity has been determined for a variety of energetic materials (EMs) using finite element analysis (FEA) and cookoff data from the Sandia Instrumented Thermal Ignition (SITI) experiment. Materials studied include melt-cast, pressed, and low-density explosives. The low-density explosives were either prills or powders with some experiments run at pour density (not pressed). We have compared several of our thermal conductivities with those in the literature as well as investigated contact resistance between the confining aluminum and explosive, multidimensional heat transfer effects, and uncertainty in the thermocouple bead positions. We have determined that contact resistance is minimal in the SITI experiment, the heat transfer along the midplane is one-dimensional, and that uncertainty in the thermocouple location is greatest near the heated boundary. Our values of thermal conductivity can be used with kinetic mechanisms to accurately predict thermal profiles and energy dissipation during the cookoff of explosives.
Research reactors play an important role in higher education, scientific research, and medical radioisotope production around the world. It is thus important to ensure the safety of facility workers and the public. This work presents a new reactor transient analysis code referred to as Razorback. The code has been developed for the evaluation of large, rapid reactivity additions in research reactors, with an initial focus on the Annular Core Research Reactor (ACRR) at Sandia National Laboratories. Razorback models the reactor kinetics, fuel element heat transfer, fuel element thermal expansion, and natural-circulation coolant channel thermal-hydraulic response. Simulation results for ACRR pulse operations are shown to agree very well with operational data obtained from the ACRR. Razorback is expected to be a valuable tool for ACRR pulse performance prediction and ACRR reactor safety analyses.
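Transient analysis codes of this type are commonly built around the point-kinetics equations with delayed-neutron precursors, shown here for orientation only (the report itself documents Razorback's specific kinetics and feedback models):

    $$ \frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n + \sum_i \lambda_i C_i, \qquad \frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\, n - \lambda_i C_i, $$

where $n$ is the neutron population, $\rho$ the reactivity (including any feedback from fuel temperature and thermal expansion), $\beta_i$, $\lambda_i$, and $C_i$ the delayed-neutron fractions, decay constants, and precursor concentrations, and $\Lambda$ the prompt neutron generation time.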
US homeland security concerns regarding the potential misuse of some radiation sources used in radiobiological research, for example cesium-137 (137Cs), have resulted in recommendations by the National Research Council to conduct studies into replacing these sources with suitable X-ray instruments. The objective of this research is to compare the effectiveness of an X-RAD 320 irradiator (PXINC 2010) with a 137Cs irradiator (Gammacell-1000 unit) using an established bone marrow chimeric model. Using measured radiation doses for each instrument, we characterized the dose–response relationships for bone marrow and splenocyte ablation using a cytotoxicity-hazard model. Our results show that the X-RAD 320 photon energy spectrum was suitable for ablating bone marrow at the 3 exposure levels used, similar to 137Cs photons. However, the 320-kV X-rays were not as effective as the much higher energy γ rays at depleting mouse splenocytes. Furthermore, the 3 X-ray levels used were less effective than the higher energy γ rays in allowing successful engraftment of donor bone marrow, potentially as a result of incomplete depletion of the spleen cells. More detailed studies are warranted to determine whether bone marrow transplantation in mice can be successfully achieved using 320-kV X-rays; an X-ray dose higher than those used here is likely needed for transplantation success.
An interlaboratory effort has developed a probabilistic framework to characterize uncertainty in data products that are developed by the US Department of Energy Consequence Management Program in support of the Federal Radiological Monitoring and Assessment Center. The purpose of this paper is to provide an overview of the probability distributions assigned to the input variables and of the statistical methods used to propagate the input-parameter uncertainty and quantify the overall uncertainty of the derived response levels that are used as contours on data products. Uncertainty analysis results are also presented for several study scenarios. This paper includes an example data product to illustrate the potential real-world implications of incorporating uncertainty analysis results into data products that inform protective action decisions. Data product contours that indicate areas where public protective actions may be warranted can be customized to an acceptable level of uncertainty. The investigators seek feedback from decision makers and the radiological emergency response community to determine how uncertainty information can be used to support the protective action decision-making process and how it can be presented on data products.
Digital Image Correlation (DIC) is a well-established, non-contact diagnostic technique used to measure shape, displacement and strain of a solid specimen subjected to loading or deformation. However, measurements using standard DIC can have significant errors or be completely infeasible in challenging experiments, such as explosive, combustion, or fluid-structure interaction applications, where beam-steering due to index of refraction variation biases measurements or where the sample is engulfed in flames or soot. To address these challenges, we propose using X-ray imaging instead of visible light imaging for stereo-DIC, since refraction of X-rays is negligible in many situations, and X-rays can penetrate occluding material. Two methods of creating an appropriate pattern for X-ray DIC are presented, both based on adding a dense material in a random speckle pattern on top of a less-dense specimen. A standard dot-calibration target is adapted for X-ray imaging, allowing the common bundle-adjustment calibration process in commercial stereo-DIC software to be used. High-quality X-ray images with sufficient signal-to-noise ratios for DIC are obtained for aluminum specimens with thickness up to 22.2 mm, with a speckle pattern thickness of only 80 μm of tantalum. The accuracy and precision of X-ray DIC measurements are verified through simultaneous optical and X-ray stereo-DIC measurements during rigid in-plane and out-of-plane translations, where errors in the X-ray DIC displacements were approximately 2–10 μm for applied displacements up to 20 mm. Finally, a vast reduction in measurement error—5–20 times reduction of displacement error and 2–3 times reduction of strain error—is demonstrated, by comparing X-ray and optical DIC when a hot plate induced a heterogeneous index of refraction field in the air between the specimen and the imaging systems. Collectively, these results show the feasibility of using X-ray-based stereo-DIC for non-contact measurements in exacting experimental conditions, where optical DIC cannot be used.
Light-emitting diode (LED) arrays fabricated on a polycrystalline metal substrate are demonstrated using a novel technique that enables the growth of epitaxial metal-organic chemical vapor deposition (MOCVD) GaN layers on non-single-crystal substrates. Epitaxial GaN is deposited directly on metal foil using an intermediate ion beam-assisted deposition (IBAD) aligned layer. For a single 170 μm-diameter LED on the metal foil, the electroluminescence (EL) spectrum shows a peak wavelength of ≈452 nm and a full width at half maximum (FWHM) of ≈24 nm. The current–voltage (I–V) characteristics show a turn-on voltage of 3.7 V and a series resistance of 10 Ω. LEDs on metal show a relative external quantum efficiency (EQE) that is roughly 3× lower than that of similar LEDs fabricated on a sapphire substrate. InGaN LEDs on large-area non-single-crystal substrates such as metal foils enable large-area manufacturing, reduce production costs, and open the door to new applications in lighting and displays.
This article considers two algorithms for a finite-volume solver for the MHD equations with a real-gas equation of state (EOS). Both algorithms use a multistate form of the Harten-Lax-van Leer approximate Riemann solver as formulated for MHD discontinuities. This solver is modified to use the generalized sound speed from the real-gas EOS. Two methods are tested: EOS evaluation at cell centers and EOS evaluation at flux interfaces, the former being more computationally efficient. A battery of 1-D and 2-D tests is employed: convergence of 1-D and 2-D linearized waves, shock tube Riemann problems, a 2-D nonlinear circularly polarized Alfvén wave, and a 2-D magneto-Rayleigh-Taylor instability test. The cell-centered EOS-evaluation algorithm produces unresolvable thermodynamic inconsistencies in the intermediate states, leading to spurious solutions, whereas the flux-interface EOS-evaluation algorithm robustly produces the correct solution. The linearized wave tests show that this inconsistency is associated with the magnetosonic waves, and the magneto-Rayleigh-Taylor instability test demonstrates a case in which the spurious solution leads to an unphysical simulation.
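For context, the generalized sound speed that enters the modified solver is the thermodynamic sound speed of the real-gas EOS, and it appears in the fast magnetosonic speed used to bound the Riemann fan (standard expressions, not quoted from the article):

    $$ a^2 = \left(\frac{\partial p}{\partial \rho}\right)_s, \qquad c_f^2 = \tfrac{1}{2}\left[a^2 + v_A^2 + \sqrt{(a^2 + v_A^2)^2 - 4\, a^2 v_{A,n}^2}\right], \qquad v_A^2 = \frac{|\mathbf{B}|^2}{\mu_0 \rho}, $$

where $v_{A,n}$ is the Alfvén speed based on the magnetic field component normal to the flux interface.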
Dust accumulation significantly affects the performance of photovoltaic modules, and its impact can be mitigated by various cleaning methods. Optimizing the cleaning frequency is essential to minimize soiling losses and, at the same time, costs. However, the effectiveness of cleaning decreases over time because the energy yield recovered by each cleaning shrinks as the system degrades. Additionally, economic factors such as escalation in the electricity price and inflation can compound or counterbalance the effect of degradation on soiling mitigation profits. The present study analyzes the impact of degradation, electricity price escalation, and inflation on the revenues and costs of cleanings and proposes a methodology to maximize the soiling mitigation profits of any system. The energy performance and soiling losses of a 1 MW system installed in southern Spain were analyzed and integrated with theoretical linear and nonlinear degradation rate patterns. The levelized cost of energy (LCOE) and net present value (NPV) were used as criteria to identify the optimum cleaning strategies. The results showed that the two metrics convey distinct cleaning recommendations, as they are influenced by different factors. For the given site, despite the degradation effects, the optimum cleaning frequency is found to increase with time of operation.
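To illustrate how degradation, price escalation, and inflation interact in such an optimization, the toy sketch below compares annual cleaning frequencies by the net present value of the mitigation cash flows; all numbers and the simplified soiling-recovery model are hypothetical placeholders, not the study's data or methodology:

    # Toy NPV comparison of cleaning frequencies under degradation, escalation, inflation.
    years, d = 25, 0.06                      # horizon (yr) and discount rate
    E0, price0 = 1.8e6, 0.05                 # year-1 yield (kWh) and price (EUR/kWh), hypothetical
    degr, esc, infl = 0.006, 0.02, 0.02      # degradation, price escalation, cost inflation
    clean_cost0, soil_loss = 800.0, 0.04     # cost per cleaning; avg soiling loss if uncleaned

    def npv(cleanings_per_year):
        recovered = soil_loss * min(1.0, cleanings_per_year / 4.0)  # toy soiling-recovery model
        total = 0.0
        for t in range(1, years + 1):
            energy = E0 * (1 - degr) ** (t - 1) * recovered          # yield recovered by cleaning
            revenue = energy * price0 * (1 + esc) ** (t - 1)
            cost = cleanings_per_year * clean_cost0 * (1 + infl) ** (t - 1)
            total += (revenue - cost) / (1 + d) ** t
        return total

    best = max(range(0, 13), key=npv)
    print("cleanings/year maximizing NPV of soiling mitigation:", best)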
In the present research, epitaxial regrowth by molecular beam epitaxy (MBE) is investigated as a fabrication process for void/semiconductor photonic crystal (PhC) surface emitting lasers (PCSELs). The PhC is patterned by electron beam lithography (EBL) and inductively coupled plasma (ICP) etching and is subsequently regrown by MBE to embed a series of voids in bulk semiconductor. Experiments are conducted to investigate the effects of regrowth on air-hole morphology. The resulting voids have a distinct teardrop shape, with the radius and depth of the etched hole playing a critical role in the dimensions of the final regrown void. We demonstrate that specific hole diameters can encourage deposition at the bottom of the voids or on their sidewalls, allowing the shape of the void to be engineered more precisely, as required by the PCSEL design. A 980 nm InGaAs quantum well laser structure is optimized for low-threshold lasing at the design wavelength, and full device structures are patterned and regrown. An optically pumped PCSEL is demonstrated from this process.
The defect detection capabilities of Power Spectrum Analysis (PSA) [1] have been successfully combined with local laser heating to isolate defective circuitry in a high-speed Si Phase Locked Loop (PLL). The defective operation resulted in missed counts when operating at multi-GHz speeds and elevated temperatures. By monitoring PSA signals at a specific frequency through zero-spanning and scanning the suspect device with a heating laser (1340 nm wavelength), the area(s) causing failure were localized. PSA circumvents the need for a rapid pass/fail detector like that used for Soft Defect Localization (SDL) [2] or Laser-Assisted Defect Analysis (LADA) [3] and converts the at-speed failure to a DC signature. The experimental setup for image acquisition and examples demonstrating utility are described.
Non-volatile memory arrays can deploy pre-trained neural network models for edge inference. However, these systems are affected by device-level noise and retention issues. Here, we examine the degradation caused by these effects, introduce a mitigation strategy, and demonstrate its use in a fabricated array of SONOS (Silicon-Oxide-Nitride-Oxide-Silicon) devices. On MNIST, fashion-MNIST, and CIFAR-10 tasks, our approach increases resilience to synaptic noise and drift. We also show that strong performance can be realized with analog-to-digital converters (ADCs) of 5-8 bits of precision.
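As a generic illustration of how such noise and retention effects can be probed (not the mitigation strategy introduced in this work), pre-trained weights can be perturbed with programming noise and a drift term before evaluating inference accuracy:

    # Illustrative perturbation of stored weights to probe resilience to noise and drift.
    import numpy as np

    def perturb(W, rng, sigma_noise=0.05, drift=0.02):
        """Apply multiplicative programming noise and a uniform conductance decay."""
        noisy = W * (1.0 + sigma_noise * rng.standard_normal(W.shape))
        return (1.0 - drift) * noisy

    # Toy linear classifier evaluated with and without the perturbation (weights are hypothetical).
    rng = np.random.default_rng(1)
    W = rng.normal(size=(10, 784))           # stand-in for pre-trained weights
    x = rng.normal(size=(784,))
    clean_pred = np.argmax(W @ x)
    noisy_pred = np.argmax(perturb(W, rng) @ x)
    print("prediction unchanged under perturbation:", clean_pred == noisy_pred)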
As the potential applications of GaN and Ga2O3 are limited by the inadequacy of conventional doping techniques, specifically when uniform selective-area p-type doping is required, the potential for transmutation doping of these materials is analyzed. All transmuted element concentrations are reported as a function of time for several common proton and neutron radiation sources, showing that previously published results considered only a small subset of the dopants produced. A 40 MeV proton accelerator is identified as the most effective transmutation doping source considered, with a 2.25 × 1017 protons per cm2 fluence yielding net concentrations of uncompensated p-type dopants of 7.7 × 1015 and 8.1 × 1015 cm-3 for GaN and Ga2O3, respectively. Furthermore, it is shown that high-energy proton accelerator spectra are capable of producing the dopants required for magnetic and neutron detection applications, although not at the concentrations required for current applications using available irradiation methods.
Measurements performed on a population of electronic devices reveal part-to-part variation due to manufacturing process variation. Corner models are a useful tool for designers to bound the effect of this variation on circuit performance. To accurately simulate circuit-level behavior, compact model parameters for devices within a circuit must be calibrated to experimental data. However, determining the bounding data for corner model calibration is difficult, primarily because available tolerance-bound calculation methods consider variability along only one dimension and do not adequately capture the variability across both the current and voltage axes. This paper demonstrates a novel functional data analysis approach that generates tolerance bounds on these two types of variability separately; these bounds are then transformed for use in corner model calibration.
Proper edge termination is required to reach large blocking voltages in vertical power devices. Limitations in selective-area p-type doping in GaN restrict the types of structures that can be used for this purpose. A junction termination extension (JTE) can be employed to reduce field crowding at the junction periphery, where the charge in the JTE is designed to sink the critical electric field lines at breakdown. One practical way to fabricate this structure in GaN is a step-etched single-zone or multi-zone JTE in which the etch depths and doping levels are used to control the charge in the JTE. The multi-zone JTE is beneficial for increasing the process window, allowing more variability in process parameters while still maintaining a designed percentage of the ideal breakdown voltage. Impact ionization parameters reported in the literature for GaN are compared in a simulation study to ascertain their effect on predicted breakdown performance. Two 3-zone JTE designs utilizing different impact ionization coefficients are compared. Simulations confirm that the choice of impact ionization parameters affects both the predicted breakdown of the device and the tolerance to fabrication process variation for a multi-zone JTE. Regardless of the impact ionization coefficients utilized, a step-etched JTE has the potential to provide an efficient, controllable edge termination design.
The ExaLearn miniGAN team (Ellis and Rajamanickam) has released miniGAN, a generative adversarial network (GAN) proxy application, through the ECP proxy application suite. miniGAN is the first machine learning proxy application in the suite (note: the ECP CANDLE project did previously release some benchmarks) and models the performance of training generator and discriminator networks. The GAN's generator and discriminator generate plausible 2D/3D maps and identify fake maps, respectively. miniGAN aims to be a proxy application for related applications in cosmology (CosmoFlow, ExaGAN) and wind energy (ExaWind). miniGAN has been developed so that optimized mathematical kernels (e.g., kernels provided by Kokkos Kernels) can be plugged into the proxy application to explore potential performance improvements. miniGAN has been released as open source software and is available through the ECP proxy application website (https://proxyapps.exascaleproject.org/ecp-proxy-apps-suite/) and on GitHub (https://github.com/SandiaMLMiniApps/miniGAN). As part of this release, a generator is provided to produce a data set (a series of images) that serves as input to the proxy application.
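For readers unfamiliar with the workload, the generator/discriminator pairing that a GAN proxy application exercises looks roughly like the sketch below (PyTorch, with illustrative layer sizes that are not miniGAN's actual configuration):

    # Minimal sketch of a GAN generator/discriminator pair; shapes are illustrative only.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, latent_dim=64, img_pixels=32 * 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, img_pixels), nn.Tanh())   # produces a flattened 2D "map"
        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        def __init__(self, img_pixels=32 * 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1))                       # real/fake logit
        def forward(self, x):
            return self.net(x)

    G, D = Generator(), Discriminator()
    z = torch.randn(8, 64)
    fake_maps = G(z)                                     # generator proposes fake maps
    logits = D(fake_maps)                                # discriminator scores them
    loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.zeros(8, 1))
    print(fake_maps.shape, loss.item())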