Publications

November 2016 HERMES Outdoor Shot Series 10268-313: Air Conductivity Measurements

Yee, Benjamin T.; Cartwright, Keith

Of specific concern to this report and the related experiments is the ionization of air by gamma rays and cascading electrons in the High-Energy Radiation Megavolt Electron Source (HERMES) III courtyard. When photons generated by HERMES encounter a neutral atom or molecule, there is a chance that they will interact via one of several mechanisms: the photoelectric effect, Compton scattering, or pair production. In both the photoelectric effect and Compton scattering, an electron is liberated from the atom or molecule with a direction of travel preferentially aligned with the incident gamma ray. The result is a flow of electrons away from the source region, which in turn generates large-scale electric and magnetic fields. The strength of these fields and their dynamics depend on the conductivity of the air. A more comprehensive description is provided by Longmire and Gilbert.

More Details

DARMA-EMPIRE Integration and Performance Assessment – Interim Report

Lifflander, Jonathan J.; Bettencourt, Matthew T.; Slattengren, Nicole L.; Templet, Gary J.; Miller, Phil; Perrinel, Meriadeg; Rizzi, Francesco; Pebay, Philippe P.

We begin by presenting an overview of the general philosophy guiding the novel DARMA developments, followed by a brief reminder of the background of this project. Finally, we present the FY19 design requirements. As the Exascale era arrives, DARMA is uniquely positioned at the forefront of asynchronous many-task (AMT) research and development (R&D) to explore emerging programming model paradigms for next-generation HPC applications at Sandia, across the NNSA labs, and beyond. The DARMA project explores how to fundamentally shift the expression (PM) and execution (EM) of massively concurrent HPC scientific algorithms to be more asynchronous, resilient to executional aberrations in heterogeneous/unpredictable environments, and data-dependency conscious, thereby enabling an intelligent, dynamic, and self-aware runtime to guide execution.

More Details

Recovery and calibration of legacy analog data from the Leo Brady Seismic Network for the Source Physics Experiment

Young, Brian A.; Abbott, Robert

The Leo Brady Seismic Network (LBSN) was established in 1960 by Sandia National Laboratories for monitoring underground nuclear tests (UGTs) at the Nevada Test Site (NTS), renamed the Nevada National Security Site (NNSS) in 2010. The LBSN has existed in various configurations, but it has generally consisted of four to six stations at regional distances from the NNSS with evenly spaced azimuthal coverage. Between 1962 and the early 1980s, the LBSN, together with a sister network operated by Lawrence Livermore National Laboratory, was the most comprehensive U.S. source of regional seismic data on UGTs. During the pre-digital era, LBSN data were transmitted as frequency-modulated (FM) audio over telephone lines to the NTS and recorded in analog on hi-fi 8-track AMPEX tapes. These tapes have been stored for decades in temperature-stable buildings or bunkers at the NNSS and Kirtland Air Force Base in Albuquerque, NM, and they contain the sole record of these irreplaceable data from the analog era; full waveforms of UGTs during this time were never routinely converted to digital form. Over the past few years we have been developing a process to recover and calibrate data from these tapes, converting them from FM audio to digital waveforms in ground-motion units. The calibration of legacy data from the LBSN is still ongoing. To date, we have digitized tapes from 592 separate UGTs. As a proof of concept, we calibrated data from the BOXCAR event.
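
As a rough illustration of the FM-to-waveform step described above (not the authors' actual pipeline), the snippet below synthesizes an FM carrier modulated by a toy ground-motion trace and demodulates it via the instantaneous frequency of the analytic signal; the carrier frequency, peak deviation, and sample rate are all hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

fs = 5000.0                                    # hypothetical digitization rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
ground = np.sin(2 * np.pi * 3.0 * t)           # stand-in ground-motion trace
fc, df = 400.0, 40.0                           # hypothetical carrier and peak deviation (Hz)

# what an FM telemetry channel on tape would hold
phase = 2 * np.pi * (fc * t + df * np.cumsum(ground) / fs)
fm = np.cos(phase)

# FM discrimination: instantaneous frequency of the analytic signal
inst_phase = np.unwrap(np.angle(hilbert(fm)))
inst_freq = np.gradient(inst_phase, 1 / fs) / (2 * np.pi)
recovered = (inst_freq - fc) / df              # back to ground-motion amplitude
```

Away from the edge effects of the finite-length Hilbert transform, the recovered trace tracks the original modulation; real tape playback adds wow/flutter and gain calibration steps not modeled here.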

More Details

The 2018 Nonlinear Mechanics and Dynamics Research Institute

Kuether, Robert J.; Allensworth, Brooke M.; Smith, Jeffrey A.; Peebles, Diane

The 2018 Nonlinear Mechanics and Dynamics (NOMAD) Research Institute was successfully held from June 18 to August 2, 2018. NOMAD brings together participants with diverse technical backgrounds to work in small teams to cultivate new ideas and approaches in engineering mechanics and dynamics research. NOMAD provides an opportunity for researchers, especially early-career researchers, to develop lasting collaborations that go beyond what can be established through the limited interactions at their institutions or at annual conferences. A total of 17 students came to Albuquerque, New Mexico to participate in the seven-week program held at the Mechanical Engineering building on the University of New Mexico campus. The students collaborated on one of six research projects developed by mentors from Sandia National Laboratories, the University of New Mexico, and other academic institutions. In addition to the research activities, the students attended weekly technical seminars and tours, and socialized at off-hour events including an Albuquerque Isotopes baseball game. At the end of the summer, the students gave final technical presentations on their research findings. Many of the research discoveries made at NOMAD are published as proceedings at technical conferences and align directly with the critical mission work performed at Sandia.

More Details

November 2016 HERMES Outdoor Shot Series 10268-313: Courtyard Dosimetry and Parametric Fits

Cartwright, Keith; Yee, Benjamin T.; Pointon, Timothy; Gooding, Renee

A series of outdoor shots was conducted at the HERMES III facility in November 2016. These experiments had several goals, one of which was an improved understanding of the courtyard radiation environment. Previous work had developed parametric fits to the spatial and temporal dose rate in the area of interest. This work explores the shot-to-shot variation of the dose in the courtyard, updated fit parameters, and an improved dose-rate model that better captures high-frequency content. The parametric fit for the spatial profile is found to be adequate in the far field; however, the near-field radiation dose is still not well understood.

More Details

Low Energy Photon Filter Box Optimization Study for the Gamma Irradiation Facility (GIF)

Depriest, Kendall R.

As a follow-up to results presented at the 16th International Symposium on Reactor Dosimetry, a new set of low-energy photon filter box designs was evaluated for potential testing at the Gamma Irradiation Facility in Sandia National Laboratories' Technical Area V. The goal of this filter box design study is to produce the highest-fidelity gamma-ray test environment for electronic parts. Using Monte Carlo coupled photon/electron transport, approximately a dozen designs were evaluated for their effectiveness in reducing the dose enhancement in a silicon sensor. The completed study provides the Radiation Metrology Laboratory staff with a starting point for experimental test plans that could lead to an improved gamma-ray test environment at the Gamma Irradiation Facility.

More Details

Codes and Standards Update January 2019

Conover, David R.

The goal of the DOE OE Energy Storage System Safety Roadmap is to foster confidence in the safety and reliability of energy storage systems. Three interrelated objectives support the realization of that goal: research, codes and standards (C/S), and communication/coordination. The C/S objective is "to apply research and development to support efforts focused on ensuring that codes and standards are available to enable the safe implementation of energy storage systems in a comprehensive, non-discriminatory and science-based manner."

More Details

Method for Calculating Delayed Gamma-Ray Response in the ACRR Central Cavity and FREC-II Cavity Using MCNP

Moreno, Melissa; Parma, Edward J.

This document presents a new method for characterizing the delayed gamma-ray radiation fields in pulse reactors such as the Annular Core Research Reactor (ACRR) and the Fueled Ring External Cavity (FREC-II). The environments used to test this method in the ACRR were FF, LB44, PLG, and CdPoly; the environments used in FREC-II were FF with rods down, FF with rods up, CdPoly with rods down, and CdPoly with rods up. All environment configurations used the same fission-product gamma-ray source energy spectrum. The method requires the fission-site locations recorded in the MCNP KCODE source tapes. A FORTRAN script was written to extract and translate the coordinates of the fission sites, and 10K fission sites were then input into an MCNP SOURCE-mode script. A parametric analysis performed with a MATLAB script determined that 10K fission sites are a sufficient number of coordinates to converge to the correct answer. The method gave excellent results and was tested in the ACRR, FREC-II, and at White Sands Missile Range (WSMR). It can be applied to other pulse research reactors as well.
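
The convergence check described above (how many sampled fission sites are enough) can be sketched in Python; the site distribution and the "response" below are entirely hypothetical stand-ins for MCNP output, meant only to show the subsample-and-compare logic.

```python
import numpy as np

def response_estimate(sites, n, rng):
    # mock detector response from a random subsample of fission sites;
    # stands in for an MCNP fixed-source run seeded with those coordinates
    idx = rng.choice(len(sites), size=n, replace=False)
    r = np.linalg.norm(sites[idx], axis=1)
    return float(np.mean(np.exp(-r)))          # hypothetical attenuation-like tally

rng = np.random.default_rng(3)
sites = rng.normal(0.0, 1.0, (200_000, 3))     # stand-in fission-site coordinates

# estimates stabilize as the subsample grows; the plateau picks the sample size
estimates = {n: response_estimate(sites, n, rng) for n in (100, 1_000, 10_000, 100_000)}
```

In the report's workflow the analogous comparison is between full transport calculations seeded with different numbers of KCODE fission sites, not a closed-form response.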

More Details

Data-Consistent Solutions to Stochastic Inverse Problems Using a Probabilistic Multi-Fidelity Method Based on Conditional Densities

International Journal for Uncertainty Quantification

Wildey, Timothy; Bruder, L.; Gee, M.W.

In this work, we build upon a recently developed approach for solving stochastic inverse problems based on a combination of measure-theoretic principles and Bayes' rule. We propose a multi-fidelity method to reduce the computational burden of performing uncertainty quantification using high-fidelity models. This approach is based on a Monte Carlo framework for uncertainty quantification that combines information from solvers of various fidelities to obtain statistics on the quantities of interest of the problem. In particular, our goal is to generate samples from a high-fidelity push-forward density at a fraction of the cost of standard Monte Carlo methods, while maintaining flexibility in the number of random model input parameters. Key to this methodology is the construction of a regression model to represent the stochastic mapping between the low- and high-fidelity models, such that most of the computations can be offloaded to the low-fidelity model. To that end, we employ Gaussian process regression and present extensions to multi-level-type hierarchies as well as to the case of multiple quantities of interest. Finally, we demonstrate the feasibility of the framework in several numerical examples.
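
A minimal sketch of the fidelity-bridging regression, assuming toy low- and high-fidelity quantity-of-interest maps and a plain NumPy implementation of the GP posterior mean (the paper's setting is far more general):

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # squared-exponential kernel between 1-D input arrays
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_mean(x_train, y_train, x_query, ell=0.5, jitter=1e-6):
    # GP posterior mean: the regression model between fidelities
    K = rbf(x_train, x_train, ell) + jitter * np.eye(len(x_train))
    return rbf(x_query, x_train, ell) @ np.linalg.solve(K, y_train)

# hypothetical low/high fidelity quantity-of-interest maps
q_lo = lambda z: np.sin(2 * np.pi * z)
q_hi = lambda z: np.sin(2 * np.pi * z) + 0.3 * np.sin(2 * np.pi * z) ** 2

rng = np.random.default_rng(0)
z_many = rng.uniform(0, 1, 2000)       # cheap low-fidelity ensemble
z_few = rng.uniform(0, 1, 40)          # few expensive high-fidelity runs

# learn the map q_lo -> q_hi, then push the whole cheap ensemble through it
pushforward = gp_mean(q_lo(z_few), q_hi(z_few), q_lo(z_many))
```

Here most samples only ever touch the cheap model; the 40 high-fidelity runs train the map that corrects the full ensemble, which is the cost-saving mechanism the abstract describes.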

More Details

Strong and Weak Scaling of the Sierra/SD Eigenvector Problem to a Billion Degrees of Freedom

Bunting, Gregory

Sierra/SD is a structural dynamics finite element software package known for its scalability and performance on DOE supercomputers. While historical documents demonstrate weak and strong scaling on DOE systems such as Redsky, no such formal studies have been done on modern architectures. This report demonstrates that Sierra/SD still scales on modern architectures. Unstructured meshes in the shape of an I-beam are solved in sizes ranging from fifty thousand degrees of freedom in serial up to one and a half billion degrees of freedom on over eighteen thousand processors, using only default solver options. The report serves as a baseline for users to estimate the computational cost of finite element analyses in Sierra/SD, understand how solver options relate to computational cost, and pick optimal processor counts for a given problem size, as well as a baseline for evaluating computational cost and scalability on next-generation architectures.

More Details

Robust Uncertainty Quantification using Response Surface Approximations of Discontinuous Functions

International Journal for Uncertainty Quantification

Wildey, Timothy; Gorodetsky, Alex; Belme, Anca; Shadid, John N.

This paper considers response surface approximations for discontinuous quantities of interest. Our objective is not to adaptively characterize the manifold defining the discontinuity. Instead, we utilize an epistemic description of the uncertainty in the location of a discontinuity to produce robust bounds on sample-based estimates of probabilistic quantities of interest. We demonstrate that two common machine learning strategies for classification, one based on nearest neighbors (Voronoi cells) and one based on support vector machines, provide reasonable descriptions of the region where the discontinuity may reside. In higher dimensional spaces, we demonstrate that support vector machines are more accurate for discontinuities defined by smooth manifolds. We also show how gradient information, often available via adjoint-based approaches, can be used to define indicators to effectively detect a discontinuity and to decompose the samples into clusters using an unsupervised learning technique. Numerical results demonstrate the epistemic bounds on probabilistic quantities of interest for simplistic models and for a compressible fluid model with a shock-induced discontinuity.
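
The nearest-neighbor (Voronoi-cell) classification strategy can be illustrated in a few lines; the discontinuity along x + y = 1 and the sample counts below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (400, 2))                  # model input samples
qoi = np.where(X.sum(axis=1) > 1.0, 1.0, 0.0)    # QoI jumps across the line x + y = 1
labels = (qoi > 0.5).astype(int)                 # cluster samples by which side they fall on

def voronoi_predict(X_train, y_train, X_query):
    # nearest-neighbor (Voronoi-cell) classifier for the discontinuity region:
    # each query point inherits the label of its nearest labeled sample
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]
```

Misclassifications concentrate in a band around the true discontinuity, which is exactly the epistemic region the paper bounds rather than resolves.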

More Details

Surfactant-Assisted Synthesis of Monodisperse Methylammonium Lead Iodide Perovskite Nanocrystals

Journal of Nanoscience and Nanotechnology

Fan, Hongyou; Billstrand, Brian; Bian, Kaifu; Alarid, Leanne

Here, we present that lead iodide based perovskites are promising optoelectronic materials ideal for solar cells. Recently emerged perovskite nanocrystals (NCs) offer more advantages including improved size-tunable band gap, structural stability, and solvent-based processing. Here we report a simple surfactant-assisted two-step synthesis to produce monodisperse PbI2 NCs which are then converted to methylammonium lead iodide perovskite NCs. Based on electron microscopy characterization, these NCs showed competitive monodispersity. Additionally, combined results from X-ray diffraction patterns, optical absorption, and photoluminescence confirmed the formation of high quality methylammonium lead iodide perovskite NCs. More importantly, by avoiding the use of hard-to-remove chemicals, the resulted perovskite NCs can be readily integrated in applications, especially solar cells through versatile solution/colloidal-based methods.

More Details

Evaluating demand response opportunities for power systems resilience using MILP and MINLP Formulations

AIChE Journal

Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

While peak shaving is commonly used to reduce power costs, chemical process facilities that can reduce power consumption on demand during emergencies (e.g., extreme weather events) bring additional value through improved resilience. For process facilities to effectively negotiate demand response (DR) contracts and make investment decisions regarding flexibility, they need to quantify their additional value to the grid. We present a grid-centric mixed-integer stochastic programming framework to determine the value of DR for improving grid resilience in place of capital investments that can be cost prohibitive for system operators. We formulate problems using both a linear approximation and a nonlinear alternating current power flow model. Our numerical results with both models demonstrate that DR can be used to reduce the capital investment necessary for resilience, increasing the value that chemical process facilities bring through DR. Furthermore, the linearized model often underestimates the amount of DR needed in our case studies.
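
A heavily simplified, single-bus, linearized version of the DR scheduling idea (hypothetical numbers; the paper's formulations are full MILP/MINLP network models) can be posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# hour-by-hour demand (MW) during an emergency, and surviving generation capacity
demand = np.array([90.0, 120.0, 150.0, 110.0])
cap = 100.0                 # remaining capacity after the contingency (MW)
dr_cost = 200.0             # $/MWh paid for demand response
dr_max = 60.0               # contracted DR limit per hour (MW)

# minimize DR payments subject to demand_t - dr_t <= cap each hour
res = linprog(
    c=np.full(demand.size, dr_cost),     # objective: total DR payment
    A_ub=-np.eye(demand.size),           # -dr_t <= -(demand_t - cap)
    b_ub=-(demand - cap),
    bounds=[(0.0, dr_max)] * demand.size,
)
dr_schedule = res.x                      # DR dispatched each hour
```

The LP simply buys enough DR to cover each hour's shortfall; the value proposition is that this contracted flexibility substitutes for building `cap` up to peak demand.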

More Details

General modeling framework for quantum photodetectors

Physical Review A

Leonard, Francois; Young, Steve M.; Sarovar, Mohan

Photodetection plays a key role in basic science and technology, with exquisite performance having been achieved down to the single-photon level. Further improvements in photodetectors would open new possibilities across a broad range of scientific disciplines and enable new types of applications. However, it is still unclear what is possible in terms of ultimate performance and what properties are needed for a photodetector to achieve such performance. Here, we present a general modeling framework for photodetectors whereby the photon field, the absorption process, and the amplification process are all treated as one coupled quantum system. The formalism naturally handles field states with single or multiple photons as well as a variety of detector configurations and includes a mathematical definition of ideal photodetector performance. The framework reveals how specific photodetector architectures introduce limitations and tradeoffs for various performance metrics, providing guidance for optimization and design.

More Details

The Relative Importance of Assumed Infrasound Source Terms and Effects of Atmospheric Models on the Linear Inversion of Infrasound Time Series at the Source Physics Experiment

Bulletin of the Seismological Society of America

Poppeliers, Christian; Aur, Katherine A.; Preston, Leiph

We invert far-field infrasound data for the equivalent seismoacoustic time-domain moment tensor to assess the effects of variable atmospheric models and source phenomena. The infrasound data were produced by a series of underground chemical explosions conducted during the Source Physics Experiment (SPE), which was originally designed to study seismoacoustic signal phenomena. The first goal is to investigate the sensitivity of the inversion to the variability of the estimated atmospheric model. The second goal is to determine the relative contribution of two presumed source mechanisms to the observed infrasonic wavefield. Rather than using actual atmospheric observations to estimate the necessary atmospheric Green's functions, we build a series of atmospheric models that rely on publicly available, regional-scale atmospheric observations. The atmospheric observations are summarized and interpolated onto a 3D grid to produce a model of sound speed at the time of the experiment. For each of the four SPE acoustic datasets that we invert, we produce a suite of three atmospheric models per chemical explosion event, based on 10 years of meteorological data: an average model, which averages the atmospheric conditions for the 10 years prior to each SPE event, and two extrema models. To parameterize the inversion, we assume that the source of infrasonic energy results from the linear combination of explosion-induced surface spall and linear elastic-to-acoustic mode conversion at the Earth's free surface. We find that the inversion yields relatively repeatable results for the estimated spall source. Conversely, the estimated isotropic explosion source is highly variable. This suggests that (1) the majority of the observed acoustic energy is produced by the spall and/or (2) our modeling of the elastic energy, and its subsequent conversion to acoustic energy, is too simplistic.
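
Since the inversion is linear in the assumed source terms, its core is an ordinary least-squares solve; the sketch below uses invented Green's functions for the two mechanisms and recovers their weights from a noisy synthetic record:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 200)

# hypothetical Green's functions for the two assumed source mechanisms
g_spall = np.exp(-((t - 0.5) / 0.1) ** 2)            # spall contribution
g_iso = np.sin(8 * t) * np.exp(-t)                   # isotropic explosion contribution
G = np.column_stack([g_spall, g_iso])

m_true = np.array([2.0, 0.5])                        # true source weights
d = G @ m_true + 0.01 * rng.standard_normal(t.size)  # synthetic infrasound record

# linear inversion: least-squares estimate of the source weights
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The paper's stability question is about how `G` (the atmospheric Green's functions) changes under different atmospheric models, which this toy holds fixed.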

More Details

Talk to me: A case study on coordinating expertise in large-scale scientific software projects

Proceedings - IEEE 14th International Conference on eScience, e-Science 2018

Milewicz, Reed M.; Raybourn, Elaine M.

Large-scale collaborative scientific software projects require more knowledge than any one person typically possesses. This makes coordination and communication of knowledge and expertise a key factor in creating and safeguarding software quality, without which we cannot have sustainable software. However, as researchers attempt to scale up the production of software, they are confronted by problems of awareness and understanding. This presents an opportunity to develop better practices and tools that directly address these challenges. To that end, we conducted a case study of developers of the Trilinos project. We survey the software development challenges they face and show how those problems are connected with what developers know and how they communicate. Based on these data, we provide a series of practicable recommendations and outline a path forward for future research.

More Details

Development of Stable A-$\Phi$ Time-Domain Integral Equations for Multiscale Electromagnetics

IEEE Journal on Multiscale and Multiphysics Computational Techniques

Roth, Thomas E.; Chew, Weng C.

Applications involving quantum physics are becoming an increasingly important area for electromagnetic engineering. To address practical problems in these emerging areas, appropriate numerical techniques must be utilized. However, the unique needs of many of these applications require new computational electromagnetic solvers to be developed. The A-Φ formulation is a novel approach that can address many of these needs. This formulation utilizes equations developed in terms of the magnetic vector potential (A) and electric scalar potential (Φ). The resulting equations overcome many of the limitations of traditional solvers and are ideal for coupling to quantum mechanical calculations. In this work, the A-Φ formulation is extended by developing time-domain integral equations suitable for multiscale perfect electric conducting objects. These integral equations can be stably discretized and constitute a robust numerical technique that is a vital step in addressing the needs of many emerging applications. To validate the proposed formulation, numerical results are presented that demonstrate the stability and accuracy of the method.
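
For orientation, the textbook potential relations from which any A-Φ formulation starts (standard electromagnetics, not this paper's specific integral equations) are:

```latex
\mathbf{E} = -\frac{\partial \mathbf{A}}{\partial t} - \nabla \Phi,
\qquad
\mathbf{B} = \nabla \times \mathbf{A},
\qquad
\nabla \cdot \mathbf{A} + \frac{1}{c^{2}} \frac{\partial \Phi}{\partial t} = 0
\;\; \text{(Lorenz gauge)}.
```

Under the Lorenz gauge, A and Φ satisfy decoupled wave equations, which is one commonly cited reason potential-based formulations avoid the low-frequency breakdown of conventional field-based solvers and couple naturally to quantum mechanical calculations, where the potentials appear directly in the Hamiltonian.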

More Details

Optimal Sizing of Behind-the-Meter Energy Storage with Stochastic Load and PV Generation for Islanded Operation

IEEE Power and Energy Society General Meeting

Copp, David A.; Nguyen, Tu A.; Byrne, Raymond H.

Energy storage systems are flexible resources that accommodate and mitigate variability and uncertainty in the load and generation of modern power systems. We present a stochastic optimization approach for sizing and scheduling an energy storage system (ESS) for behind-the-meter use. Specifically, we investigate the use of an ESS with a solar photovoltaic (PV) system and a generator in islanded operation tasked with balancing a critical load. The load and PV generation are uncertain and variable, so forecasts of these variables are used to determine the required energy capacity of the ESS as well as the schedule for operating the ESS and the generator. When the forecasting uncertainties can be fit to normal distributions, the probabilistic load balancing constraint can be reformulated as a linear inequality constraint, and the resulting optimization problem can be solved as a linear program. Finally, we present results from a case study considering the balancing of the critical load of a water treatment plant in islanded operation.
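
The normal-distribution reformulation mentioned above reduces the chance constraint to a closed-form margin; a sketch with hypothetical forecast statistics, assuming independent normal load and PV forecast errors:

```python
import numpy as np
from scipy.stats import norm

def required_ess_power(mu_load, sigma_load, mu_pv, sigma_pv, gen_cap, eps=0.05):
    """ESS power (MW) needed so that P(ESS + generator covers net load) >= 1 - eps."""
    mu_net = mu_load - mu_pv                    # mean net load (load minus PV)
    sigma_net = np.hypot(sigma_load, sigma_pv)  # independent normal forecast errors
    # chance constraint P(ess + gen_cap >= net load) >= 1 - eps becomes the
    # linear inequality: ess >= mu_net + z_{1-eps} * sigma_net - gen_cap
    return max(0.0, mu_net + norm.ppf(1 - eps) * sigma_net - gen_cap)
```

Because the inverse-CDF margin is a constant once eps is fixed, constraints of this form stay linear in the decision variables, which is what lets the full sizing/scheduling problem be solved as a linear program.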

More Details

Leveraging a Live/Virtual/Constructive Testbed for the Evaluation of Moving Target Defenses

Proceedings - International Carnahan Conference on Security Technology

Stout, William; Van Leeuwen, Brian P.; Urias, Vincent; Tuminaro, Julian; Dossaji, Nomaan

Adversary sophistication in the cyber domain is a constantly growing threat. As more systems become accessible from the Internet, the risk of breach, exploitation, and malice grows. To thwart reconnaissance and exploitation, Moving Target Defense (MTD) has been researched and deployed in various systems to modify the threat surface of a system. Tools are necessary to analyze the security, reliability, and resilience of information systems against cyber-attack and to measure the effectiveness of MTD technologies. Today's security analyses separately utilize (1) real systems such as computers, network routers, and other network equipment; (2) computer emulations (e.g., virtual machines); and (3) simulation models. In this paper, we describe the progress made in developing and utilizing hybrid Live, Virtual, Constructive (LVC) environments for the evaluation of a set of MTD technologies. The LVC methodology is most rooted in the Modeling and Simulation (M&S) work of the Department of Defense. With recent advances in virtualization and software-defined networking, Sandia has taken the blueprint for LVC and extended it by crafting hybrid environments of simulation, emulation, and human-in-the-loop. Furthermore, we discuss the empirical analysis of MTD technologies and approaches with LVC-based experimentation, incorporating aspects that may impact an operational deployment of the MTD under evaluation.

More Details

Toward a Multi-Agent System Architecture for Insight Cybersecurity in Cyber-Physical Networks

Proceedings - International Carnahan Conference on Security Technology

Stout, William

Operational Technology (OT) networks existed well before the dawn of the Internet and had enjoyed security through being air-gapped and isolated. However, the interconnectedness of the world has found its way into these OT networks, exposing their vulnerabilities to cyber attacks. As the global Internet continues to grow, it becomes more and more embedded in the physical world. The Internet of Things is one such example of how IT is blurring the cyber-physical boundaries. The eventuality will be a convergence of IT and OT. Until that day comes, cyber practitioners must still deal with the primitive security features of OT networks, maintain a foothold on enterprise and cloud networks, and attempt to instill sound security practices in burgeoning IoT networks. In this paper, we propose a new method to bring cyber security to OT and IoT-based networks through Multi-Agent Systems (MAS). MAS are flexible enough to integrate with fixed legacy networks, such as ICS, as well as to be built into newer devices and software in IoT and IT networks. In this paper, we discuss the features of MAS, the opportunities that exist to benefit cyber security, and a proposed architecture for an OT-based MAS.

More Details

Quality factor assessment of finite-size all-dielectric metasurfaces at the magnetic dipole resonance

Nanomaterials and Nanotechnology

Warne, Larry K.; Jorgenson, Roy E.; Campione, Salvatore

Recently there has been large interest in achieving metasurface resonances with high quality factors. In this article, we examine metasurfaces comprising a finite number of magnetic dipoles oriented parallel or orthogonal to the plane of the metasurface and determine analytic formulas for the quality factors of their resonances. These conditions are experimentally achievable in finite-size metasurfaces made of dielectric cubic resonators at the magnetic dipole resonance. Our results show that finite metasurfaces made of parallel (to the plane) magnetic dipoles exhibit low-quality-factor resonances, with a quality factor that is independent of the number of resonators. More importantly, finite metasurfaces made of orthogonal (to the plane) magnetic dipoles lead to resonances with large quality factors, which ultimately depend on the number of resonators comprising the metasurface. In particular, by properly modulating the array of dipole moments through a distribution of resonator polarizabilities, one can potentially increase the quality factor of metasurface resonances even further. These results provide design guidelines to achieve a desired quality factor, applicable to any resonator geometry, for the development of new devices such as photodetectors, modulators, and sensors.

More Details

Opportunities for Energy Storage in CAISO

IEEE Power and Energy Society General Meeting

Byrne, Raymond H.; Nguyen, Tu A.; Concepcion, Ricky

Energy storage is a unique grid asset in that it is capable of providing a number of grid services. In market areas, these grid services are only as valuable as the market prices for the services provided. This paper formulates the optimization problem for maximizing energy storage revenue from arbitrage and frequency regulation in the CAISO market. The optimization algorithm was then applied to three years of historical market data (2014-2016) at 2200 nodes to quantify the locational and time-varying nature of potential revenue. The optimization assumed perfect foresight, so it provides an upper bound on the maximum expected revenue. Since California is starting to experience negative locational marginal prices (LMPs) because of increased renewable generation, the optimization includes a duty cycle constraint to handle negative LMPs. The results show that participating in frequency regulation provides approximately 3.4 times the revenue of arbitrage. In addition, arbitrage potential revenue is highly location-specific. Since there are only a handful of zones for frequency regulation, the distribution of potential revenue from frequency regulation is much tighter.
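
The arbitrage component, under perfect foresight, reduces to buying low and selling high net of efficiency losses; a brute-force single-cycle sketch for a hypothetical 1 MWh / 1 MW device (the paper's optimization also co-optimizes frequency regulation over full-year price series):

```python
def best_single_cycle(prices, eta=0.85):
    # perfect-foresight revenue ($/MWh) for one charge/discharge cycle of a
    # 1 MWh, 1 MW device: buy at hour i, sell at a later hour j, with
    # round-trip efficiency eta applied to the discharged energy
    best = 0.0
    for i in range(len(prices)):
        for j in range(i + 1, len(prices)):
            best = max(best, eta * prices[j] - prices[i])
    return best
```

Negative LMPs widen the achievable spread (the device is paid to charge), which is why the paper adds a duty-cycle constraint to keep the schedule physically and contractually reasonable.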

More Details

Potential Impacts of Misconfiguration of Inverter-Based Frequency Control

IEEE Power and Energy Society General Meeting

Wilches-Bernal, Felipe; Concepcion, Ricky; Johnson, Jay; Byrne, Raymond H.

This paper focuses on a transmission system with a high penetration of converter-interfaced generators participating in its primary frequency regulation. In particular, the effects on system stability of widespread misconfiguration of frequency regulation schemes are considered. Failures in three separate primary frequency control schemes are analyzed by means of time domain simulations where control action was inverted by, for example, negating controller gain. The results indicate that in all cases the frequency response of the system is greatly deteriorated and, in multiple scenarios, the system loses synchronism. It is also shown that including limits to the control action can mitigate the deleterious effects of inverted control configurations.
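
The destabilizing effect of a negated droop gain can be seen even in a one-machine swing-equation toy model (all constants hypothetical, simple Euler integration; the paper's studies are full transmission-system simulations):

```python
def simulate_freq_dev(kp, m=5.0, d=1.0, dp_load=0.5, dt=0.01, steps=2000):
    # one-machine swing-equation sketch in per-unit:
    #   m * dw/dt = -d*w - kp*w - dp_load,
    # where kp is the inverter droop gain and dp_load a step load increase
    w = 0.0
    for _ in range(steps):
        w += dt * (-d * w - kp * w - dp_load) / m
    return w   # frequency deviation at the end of the run
```

With the correct gain sign the deviation settles near -dp_load / (d + kp); with the sign inverted the net damping is negative and the deviation diverges, mirroring the loss of synchronism seen in the paper's misconfiguration scenarios, and saturating the control action (not modeled here) is what bounds the damage.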

More Details

Physical Security Assessment Using Temporal Machine Learning

Proceedings - International Carnahan Conference on Security Technology

Sahakian, Meghan A.; Verzi, Stephen J.; Birch, Gabriel C.; Stubbs, Jaclynn J.; Woo, Bryana L.; Kouhestani, Camron G.

Nuisance and false alarms are prevalent in modern physical security systems and often overwhelm the alarm station operators. Deep learning has shown progress in detection and classification tasks; however, it has rarely been implemented as a solution to reduce nuisance and false alarm rates in physical security systems. Previous work has shown that transfer learning using a convolutional neural network can provide benefit to physical security systems by achieving high accuracy on physical security targets [10]. We leverage this work by coupling the convolutional neural network, which operates on a frame-by-frame basis, with temporal algorithms that evaluate a sequence of such frames (e.g., video analytics). We discuss several alternatives for performing this temporal analysis, in particular Long Short-Term Memory networks and Liquid State Machines, and demonstrate their respective value on exemplar physical security videos. We also outline an architecture for developing an ensemble learner that leverages the strength of each individual algorithm in its aggregation. The incorporation of these algorithms into physical security systems creates a new paradigm in which we aim to decrease the volume of nuisance and false alarms in order to allow the alarm station operators to focus on the most relevant threats.
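As a simple illustration of why temporal context suppresses nuisance alarms, the sketch below applies a sliding-window majority vote to per-frame detector scores. It is a hypothetical stand-in for the LSTM and Liquid State Machine models discussed in the paper, not their implementation:

```python
from collections import deque

def temporal_smooth(frame_scores, window=5, threshold=0.5):
    """Sliding-window majority vote over per-frame alarm scores:
    raise an alarm only when most frames in the current window exceed
    the threshold, so isolated single-frame detections are suppressed
    while sustained detections pass through."""
    buf, alarms = deque(maxlen=window), []
    for score in frame_scores:
        buf.append(score > threshold)
        alarms.append(sum(buf) > len(buf) / 2)
    return alarms
```

A single spiked frame followed by low scores is quickly voted down, while a persistent target keeps the alarm asserted.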

More Details

Human Factors in Security

Proceedings - International Carnahan Conference on Security Technology

Speed, Ann E.; Woo, Bryana L.; Kouhestani, Camron G.; Stubbs, Jaclynn J.; Birch, Gabriel C.

Physical security systems (PSS) and humans are inescapably tied in the current physical security paradigm. Yet, physical security system evaluations often end at the console that displays information to the human. That is, these evaluations do not account for human-in-the-loop factors that can greatly impact performance of the security system, even though methods for doing so are well-established. This paper highlights two examples of methods for evaluating the human component of the current physical security system. One method is qualitative, focusing on the information the human needs to adequately monitor alarms on a physical site. The other objectively measures the impact of false alarm rates on threat detection. These types of human-centric evaluations are often treated as unnecessary or not cost-effective under the belief that human cognition is straightforward and errors can be either trained away or mitigated with technology. These assumptions are not always correct; the ways they fail are often surprising and can often be identified only with objective assessments of human-system performance. Thus, taking the time to perform human element evaluations can identify unintuitive human-system weaknesses and can provide significant cost savings in the form of mitigating vulnerabilities and reducing costly system patches or retrofits to correct an issue after the system has been deployed.

More Details

Counter Unmanned Aerial System Security Education

Proceedings - International Carnahan Conference on Security Technology

Stubbs, Jaclynn J.; Kouhestani, Camron G.; Woo, Bryana L.; Birch, Gabriel C.

Unmanned aircraft system (UAS) technologies have gained immense popularity in the commercial sector and have enabled capabilities that were not available just a short time ago. Once limited to the domain of highly skilled hobbyists or precision military instruments, consumer UAS are now widespread due to increased computational power, manufacturing techniques, and numerous commercial applications. The rise of consumer UAS and the low barrier to entry necessary to utilize these systems provides an increased potential for using a UAS as a delivery platform for malicious intent. This creates a new security concern which must be addressed. The contribution presented in this work is the realization of counter UAS security technology concepts viewed through the traditional security framework and the associated challenges to such a framework.

More Details

Optimal Time-of-Use Management with Power Factor Correction Using Behind-the-Meter Energy Storage Systems

IEEE Power and Energy Society General Meeting

Nguyen, Tu A.; Byrne, Raymond H.

In this work, we provide an economic analysis of using behind-the-meter (BTM) energy storage systems (ESS) for time-of-use (TOU) bill management together with power factor correction. A nonlinear optimization problem is formulated to find the optimal ESS's charge/discharge operating scheme that minimizes the energy and demand charges while correcting the power factor of the utility customers. The energy storage's state of charge (SOC) and inverter's power factor (PF) are considered in the constraints of the optimization. The problem is then transformed to a Linear Programming (LP) problem and formulated using Pyomo optimization modeling language. Case studies are conducted for a waste water treatment plant (WWTP) in New Mexico.
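The structure of a TOU bill, an energy charge plus a demand charge on the monthly peak, is what the optimization trades against. The sketch below computes such a bill for a hypothetical hourly profile and ESS schedule; all rates are illustrative, and this is not the paper's Pyomo formulation:

```python
# Hypothetical TOU tariff (illustrative values, not a real utility's rates).
ON_PEAK_RATE = 0.18   # $/kWh during on-peak hours
OFF_PEAK_RATE = 0.07  # $/kWh otherwise
DEMAND_RATE = 12.0    # $/kW applied to the billing-period peak

def monthly_bill(load_kw, on_peak_hours, ess_kw=None):
    """Energy charge plus demand charge for an hourly load profile.
    ess_kw, if given, is ESS discharge (+) / charge (-) per hour;
    the billed load is the load net of ESS action."""
    ess_kw = ess_kw or [0.0] * len(load_kw)
    net = [l - e for l, e in zip(load_kw, ess_kw)]
    energy = sum(p * (ON_PEAK_RATE if h in on_peak_hours else OFF_PEAK_RATE)
                 for h, p in enumerate(net))
    demand = DEMAND_RATE * max(net)
    return energy + demand
```

Discharging during the peak hour both shifts energy to the cheaper rate and shaves the demand charge, which is the behavior the optimal schedule exploits.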

More Details

Investment Optimization to Improve Power Distribution System Reliability Metrics

IEEE Power and Energy Society General Meeting

Pierre, Brian J.; Arguello, Bryan

Utilizing historical utility outage data, an approach is presented to optimize investments which maximize reliability, i.e., minimize System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI) metrics. This method is designed for distribution system operators (DSOs) to improve reliability through small investments. This approach is not appropriate for large system planning and investments (e.g. new transmission lines or generation) since further economic and stability concerns are required for this type of analysis. The first step in the reliability investment optimization is to create synthetic outage data sets for a future year based on probability density functions of historical utility outage data. Once several (likely hundreds of) future year outage scenarios are created, an optimization model is used to minimize the synthetic outage SAIDI and SAIFI norm (other metrics could also be used). The results from this method can be used for reliability system planning purposes and can inform DSOs which investments to pursue to improve their reliability metrics.
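The SAIDI and SAIFI metrics used as the optimization objective can be computed directly from an outage list. The sketch below evaluates them over synthetic scenarios drawn from placeholder distributions (the paper fits these to historical utility data; the distributions here are hypothetical):

```python
import random

def saidi_saifi(outages, total_customers):
    """outages: list of (customers_affected, duration_minutes).
    SAIFI = total customer interruptions / customers served;
    SAIDI = total customer-minutes interrupted / customers served."""
    saifi = sum(c for c, _ in outages) / total_customers
    saidi = sum(c * d for c, d in outages) / total_customers
    return saidi, saifi

def synthetic_year(n_outages, total_customers, rng):
    """Draw one synthetic future-year outage scenario. The uniform
    customer count and exponential duration are placeholders for the
    PDFs fitted to historical outage data."""
    return [(rng.randint(1, total_customers // 10),
             rng.expovariate(1 / 90.0))
            for _ in range(n_outages)]

rng = random.Random(0)
scenarios = [synthetic_year(40, 10_000, rng) for _ in range(100)]
metrics = [saidi_saifi(s, 10_000) for s in scenarios]
```

Hundreds of such scenarios give the distribution of metrics over which an investment optimization can minimize a SAIDI/SAIFI norm.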

More Details

Next-generation wargames

Science

Reddie, Andrew W.; Goldblum, Bethany L.; Lakkaraju, Kiran; Reinhardt, Jason C.; Nacht, Michael; Epifanovskaya, Laura W.E.

We report that over the past century, and particularly since the outset of the Cold War, wargames (interactive simulations used to evaluate aspects of tactics, operations, and strategy) have become an integral means for militaries and policy-makers to evaluate how strategic decisions are made related to nuclear weapons strategy and international security. Furthermore, these methods have also been applied beyond the military realm to examine phenomena as varied as elections, government policy, international trade, and supply-chain mechanics. Today, a renewed focus on wargaming, combined with access to sophisticated and inexpensive drag-and-drop digital game development frameworks and new cloud computing architectures, has democratized the ability to enable massive multiplayer gaming experiences. With the integration of simulation tools and experimental methods from a variety of social science disciplines, a science-based experimental gaming approach has the potential to transform the insights generated from gaming by creating human-derived, large-n datasets for replicable, quantitative analysis. In the following, we outline challenges associated with contemporary simulation and wargaming tools, investigate where scholars have searched for game data, and explore the utility of new experimental gaming and data analysis methods in both policy-making and academic settings.

More Details

Near-Field Imaging of Shallow Chemical Detonations in Granite using Change Detection Methods of Borehole Seismic Data

Schwering, Paul C.; Hoots, Charles R.; Knox, Hunter A.; Abbott, Robert; Preston, Leiph

As part of the Source Physics Experiment (SPE) Phase I shallow chemical detonation series, multiple surface and borehole active-source seismic campaigns were executed to perform high resolution imaging of seismic velocity changes in the granitic substrate. Cross-correlation data processing methods were implemented to efficiently and robustly perform semi-automated change detection of first-arrival times between campaigns. The change detection algorithm updates the arrival times, and consequently the velocity model, of each campaign. The resulting tomographic imagery reveals the evolution of the subsurface velocity structure as the detonations progressed.
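A minimal sketch of the cross-correlation idea behind the change detection, using synthetic traces in place of borehole waveforms: the lag of the correlation peak estimates the arrival-time shift between campaigns. This is an illustration of the principle, not the project's processing code:

```python
def xcorr_lag(ref, cur):
    """Lag (in samples) at which cur best matches ref, found by
    brute-force cross-correlation over all integer shifts. A positive
    lag means the arrival in cur occurs later than in ref."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        v = sum(ref[i] * cur[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

pulse = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
delayed = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]  # same pulse, 2 samples later
shift = xcorr_lag(pulse, delayed)  # 2
```

Multiplied by the sample interval, the recovered shift updates the first-arrival pick, and hence the velocity model, for each campaign.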

More Details

SCEPTRE 2.0 Quick Start Guide

Drumm, Clifton R.; Bruss, Donald E.; Fan, Wesley C.; Pautz, Shawn D.

This report provides a summary of notes for building and running the Sandia Computational Engine for Particle Transport for Radiation Effects (SCEPTRE) code. SCEPTRE is a general purpose C++ code for solving the Boltzmann transport equation in serial or parallel using unstructured spatial finite elements, multigroup energy treatment, and a variety of angular treatments including discrete ordinates and spherical harmonics. Either the first-order form of the Boltzmann equation or one of the second-order forms may be solved. SCEPTRE requires a small number of open-source Third Party Libraries (TPLs) to be available, and example scripts for building these TPLs are provided. The TPLs needed by SCEPTRE are Trilinos, boost, and netcdf. SCEPTRE uses an autoconf build system, and a sample configure script is provided. Running the SCEPTRE code requires that the user provide a spatial finite-elements mesh in Exodus format and a cross section library in a format that will be described. SCEPTRE uses an XML-based input, and several examples will be provided.

More Details

NA-SS-SN L-5000-2018-0005 858EL Arsenic Release Above Permit Level (Causal Analysis Report)

Wright, Emily D.

On November 28, 2018, at approximately 4:17 pm, the arsenic monitor in the Acid Waste Neutralization (AWN) room located in 858N registered a concentration above the permit level of 51 ppb stated in the ABCWUA Permit 2069G Daily Composite Limit. 100 ml samples had been drawn from the waste stream at approximately 6 pm on November 28, 2018. The samples were analyzed; results received on November 29, 2018 confirmed an arsenic concentration above the permit level.

More Details

Multi-configuration Membrane Distillation Model (MCMD)

Villa, Daniel L.; Morrow, Charles; Vanneste, Johan; Gustafson, Emily; Akar, Sertac; Turchi, Craig; Cath, Tzahi

Many membrane distillation models have been created to simulate the heat and mass exchange processes involved, but most of the literature validates models against only a couple of cases with minor configuration changes. Tools are needed that allow tradeoffs between many configurations; the multi-configuration membrane distillation model handles many configurations. This report introduces membrane distillation, provides theory, and presents the work to verify and validate the model against experimental data from the Colorado School of Mines and against a lower-resolution model created at the National Renewable Energy Laboratory. Though more data analysis and testing are needed, an initial look at the model-to-experiment comparisons indicates that the model correlates well with the data but that design comparisons are likely to be incorrect across a broad range of configurations. More accurate quantification of heat and mass transfer through computational fluid dynamics is suggested.

More Details

System Studies for Global Nuclear Assurance & Security: 3S Risk Analysis for Small Modular Reactors (Volume I)—Technical Evaluation of Safety Safeguards & Security

Williams, Adam D.; Osborn, Douglas; Bland, Jesse J.; Cardoni, Jeffrey; Cohn, Brian; Faucett, Christopher A.; Gilbert, Luke J.; Haddal, Risa; Horowitz, Steven M.; Majedi, Mike; Snell, Mark K.

Small modular reactors (SMRs) couple interest in an efficient and effective means of meeting increasing energy demands with a growing aversion to the cost and schedule overruns traditionally associated with the current fleet of commercial nuclear power plants (NPPs); SMRs are attractive because they offer a significant cost reduction relative to current-generation nuclear reactors, increasing their appeal around the globe. Sandia's Global Nuclear Assurance and Security (GNAS) research perspective reframes the discussion around the "complex risk" of SMRs to address interdependencies between safety, safeguards, and security. This systems study provides technically rigorous analysis of the safety, safeguards, and security risks of SMR technologies. The aim of this research is three-fold. The first aim is to provide analytical evidence to support safety, safeguards, and security claims related to SMRs (Study Report Volume I). Second, this study aims to introduce a systems-theoretic approach for exploring interdependencies between the technical evaluations (Study Report Volume II). The third aim is to demonstrate Sandia's capability for timely, rigorous, and technical analysis to support emerging complex GNAS mission objectives.

More Details

FY18 Thermal Mechanical Failure: SS-304L calibration Taylor-Quinney parameter measurement and kinematic hardening plasticity

Corona, Edmundo; Jones, A.R.; Rees, Jennifer A.

The Thermal-Mechanical Failure project conducted in FY 2018 was divided into three subprojects: 1. Calibration of the uniaxial response of 304L stainless steel specimens at three temperatures (20, 150, and 310°C) and two strain rates (2 × 10⁻⁴ and 8 × 10⁻² s⁻¹); 2. Measurement of the fraction of plastic work that is converted to heat (Taylor-Quinney parameter) for 304L stainless steel; this fraction is usually assumed to be 0.95 in analysis because data are available for only a few materials; 3. Comparison of the responses predicted by isotropic and kinematic hardening plasticity models in a couple of simplified structural problems. One problem is a can crush followed by pressurization and is loosely associated with a crush-and-burn scenario. The other problem consists of a drop scenario of a thin-walled cylinder that carries a cantilevered internal mass.

More Details

Regional 3-D Geophysical Characterization of the Nevada National Security Site

Preston, Leiph; Poppeliers, Christian; Schodt, David

We perform a joint inversion of absolute and differential P and S body waves, gravity measurements, and surface wave dispersion curves for the 3-D P- and S-wave velocity structure of the Nevada National Security Site (NNSS) and vicinity. Data from earthquakes, past nuclear tests, and other active source chemical explosive experiments, such as the Source Physics Experiments (SPE), are combined with surface wave phase and group speed measurements from ambient noise, source interferometry, and active source experiments to construct a 3-D velocity model of the site with resolvable structures as fine as 6 km horizontal and 2 km vertically. Results compare favorably with previous studies and expand and extend the knowledge of the 3-D structure of the region.

More Details

ECP ST Capability Assessment Report (CAR) for VTK-m (FY18)

Moreland, Kenneth D.

The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on Exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on accelerator processors such as graphics processing units (GPUs) and many-integrated-core devices. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, those projects depend on this ECP effort to be able to make effective use of ECP architectures.

More Details

Mechanical Testing Results on Core from Borehole U-15n, NNSS, in support of SPE

Broome, Scott T.; Pfeifle, Thomas W.

The Nevada National Security Site (NNSS) will serve as the geologic setting for a Source Physics Experiment (SPE) program. The SPE will provide ground truth data to create and improve strong ground motion and seismic S-wave generation and propagation models. The NNSS was chosen as the test bed because it provides a variety of geologic settings ranging from relatively simple to very complex. Each series of SPE testing will comprise the setting and firing of explosive charges (source) placed in a central bore hole at varying depths and recording ground motions in instrumented bore holes located in two rings around the source positioned at different radii. Modeling using advanced simulation codes will be performed both a priori and after each test to predict ground response and to improve models based on acquired field data, respectively. A key component in the predictive capability and ultimate validation of the models is the full understanding of the intervening geology between the source and the instrumented bore holes including the geomechanical behavior of the site rock/structural features. This report presents a limited scope of work for an initial phase of primarily unconfined compression testing. Samples tested came from the U-15n core hole, which was drilled in granitic rock (quartz monzonite). The core hole was drilled at the location of the central SPE borehole, and thus represents material in which the explosive charges will be detonated. The U-15n location is the site of the first SPE, in Area 15 of the NNSS.

More Details

Unconfined Compression Results on Core from Boreholes U-15n#12 and U-15n#13, NNSS in support of SPE

Broome, Scott T.; Lee, Moo Y.

The Nevada National Security Site (NNSS) serves as the geologic setting for a Source Physics Experiment (SPE) program. The SPE provides ground truth data to create and improve strong ground motion and seismic S-wave generation and propagation models. The NNSS was chosen as the test bed because it provides a variety of geologic settings ranging from relatively simple to very complex. Each series of SPE testing will comprise the setting and firing of explosive charges (source) placed in a central borehole at varying depths and recording ground motions in instrumented boreholes located in two rings around the source, positioned at different radii. Modeling using advanced simulation codes will be performed both before and after each test to predict ground response and to improve models based on acquired field data, respectively. A key component in the predictive capability and ultimate validation of the models is the full understanding of the intervening geology between the source and the instrumented boreholes, including the geomechanical behavior of the site's rock/structural features. This report summarizes unconfined compression testing (UCS) from coreholes U-15n#12 and U-15n#13 and compares those datasets to UCS results from coreholes U-15n and U-15n#10. The U-15n#12 corehole was drilled at -60° to the horizontal and U-15n#13 was drilled vertically in granitic rock (quartz monzonite) after the third SPE shot. Figure 1 illustrates that, at the surface, the U-15n#12 and U-15n#13 coreholes were approximately 30 meters and 10 meters, respectively, from the central SPE borehole (U-15n). Corehole U-15n#12 intersects the central SPE borehole (U-15n) at a core depth of 174 feet (approximately 150 feet vertical depth). The location of U-15n#12 and U-15n#13 is the site of the first, second, and third SPEs, in Area 15 of the NNSS.

More Details

Direct Shear and Triaxial Shear test Results on Core from Borehole U-15n and U-15n#10, NNSS in support of SPE

Broome, Scott; Lee, Moo; Sussman, Aviva J.

Direct Shear (DS) and Triaxial Shear (FCT) tests from core holes U-15n and U-15n#10 are part of a larger material characterization effort for the Source Physics Experiment (SPE) project. This larger effort encompasses characterizing a granite body from Nevada both before and after each SPE shot. Core hole U-15n is the vertically oriented source hole for all SPE shots; pre-shot core was taken from this hole for DS and FCT testing. After two SPE shots were executed, an inclined core hole (U-15n#10) was drilled; both DS and FCT tests were conducted on core from this hole. The first shot (SPE-1), conducted on May 3, 2011, was a calibration shot. SPE-1 was an order of magnitude smaller than the second shot (SPE-2). After SPE-2 was conducted on October 25, 2011, the aforementioned inclined core hole (U-15n#10) was drilled. At its bottom, the inclined core hole intersects the source hole. The third shot (SPE-3) occurred on July 24, 2012. Vertical and inclined core holes were drilled post SPE-3 and specimens will soon be selected for geomechanical characterization. At the time of this writing, work is ongoing at Nevada in preparation for the fourth SPE shot (SPE-4).

More Details

Dynamic Brazilian Tension Results on Core from Borehole U-15n, NNSS, in support of SPE

Broome, Scott T.; Lee, Moo Y.

Dynamic Brazilian tension (DBR) tests from core hole U-15n are part of a larger material characterization effort for the Source Physics Experiment (SPE) project. This larger effort encompasses characterizing Climax Stock granite rock from the Nevada National Security Site (NNSS) both before and after each SPE shot. The current test series includes DBR tests on dry intact granite and fault material at depths of approximately 85 and 150 ft.

More Details

Triaxial Compression Results on Core from Borehole U-15n, NNSS, in support of SPE

Broome, Scott T.; Lee, Moo Y.

Triaxial compression tests from core hole U-15n are part of a larger material characterization effort for the Source Physics Experiment (SPE) project. This larger effort encompasses characterizing Climax Stock granite rock from the Nevada National Security Site (NNSS) both before and after each SPE shot. The current test series includes triaxial compression tests on dry and saturated intact granite and fault material at 100, 200, 300, and 400 MPa confining pressure.

More Details

Unconfined Compression Results on Core from Borehole U-15n#10, NNSS, in support of SPE

Broome, Scott T.; Lee, Moo Y.

The Nevada National Security Site (NNSS) serves as the geologic setting for a Source Physics Experiment (SPE) program. The SPE provides ground truth data to create and improve strong ground motion and seismic S-wave generation and propagation models. The NNSS was chosen as the test bed because it provides a variety of geologic settings ranging from relatively simple to very complex. Each series of SPE testing will comprise the setting and firing of explosive charges (source) placed in a central borehole at varying depths and recording ground motions in instrumented boreholes located in two rings around the source, positioned at different radii. Modeling using advanced simulation codes will be performed both before and after each test to predict ground response and to improve models based on acquired field data, respectively. A key component in the predictive capability and ultimate validation of the models is the full understanding of the intervening geology between the source and the instrumented boreholes, including the geomechanical behavior of the site's rock/structural features. This memorandum reports on an initial phase of unconfined compression testing from corehole U-15n#10. Specimens tested came from the U-15n#10 core hole, which was drilled at -60° to the horizontal in granitic rock (quartz monzonite) after the second SPE shot (SPE-2). Figure 1 illustrates that, at the surface, the core hole was approximately 90 feet from the central SPE borehole. Corehole U-15n#10 intersects the central SPE borehole (U-15n) at a core depth of 170 feet (approximately 150 feet vertical depth), which is within the highly damaged zone of SPE-2. The U-15n#10 location is the site of the first, second, and third SPEs, in Area 15 of the NNSS.

More Details

Requirements Efficiency: External Questionnaire Results

Drewien, Celeste A.; Wolfgang, Raymond; Bolstad, Cheryl

Efficiency in requirements engineering and management (REM) for complex hardware systems is desirable to reduce program impacts, such as schedule and budget. Sandia National Labs (SNL) investigated external state-of-the-practice REM to capture insights, recommendations, and best practices from external entities on several REM topics. Twenty-one at-will participants contributed responses to closed- and open-ended questions. The results were synthesized and are provided herein. The results help SNL and others to understand where its practices are current; what trends, approaches, or processes in REM might be beneficial if implemented or introduced; what challenges might be avoided; where efficiencies might be realized; and which practices are still maturing or evolving in industry and academia, so that SNL can stay abreast of these developments.

More Details

Post-CMOS Compatible Piezoelectric Micro-Machined Ultrasonic Transducers

IEEE International Ultrasonics Symposium, IUS

Griffin, Benjamin A.; Edstrand, Adam; Yen, Sean; Reger, Robert W.

Fingerprint sensing is pervasive in the cellular telecommunications market. Current commercial fingerprint sensors utilize capacitive scanning. This work focuses on the design, fabrication and characterization of post-complementary-metal-oxide-semiconductor (CMOS) compatible piezoelectric micro-machined ultrasonic transducers for use as ultrasonic pixels to improve robustness to contamination and allow for sub-epidermis scans. Ultrasonic pixels are demonstrated at frequencies ranging from 100 kHz to 800 kHz with several electrode coverages and styles to identify trends.

More Details

Real-time thermomechanical property monitoring during ion beam irradiation using in situ transient grating spectroscopy

Nuclear Instruments and Methods in Physics Research. Section B, Beam Interactions with Materials and Atoms

Dennett, Cody A.; Short, Michael P.; Buller, Daniel L.; Hattar, Khalid M.

A facility for continuously monitoring the thermal and elastic performance of materials under exposure to ion beam irradiation has been designed and commissioned. By coupling an all-optical, non-contact, non-destructive measurement technique known as transient grating spectroscopy (TGS) to a 6 MV tandem ion accelerator, bulk material properties may be measured at high fidelity as a function of irradiation exposure and temperature. Ion beam energies and optical parameters may be tuned to ensure that only the properties of the ion-implanted surface layer are interrogated. This facility provides complementary capabilities to the set of facilities worldwide which have the ability to study the evolution of microstructure in situ during radiation exposure, but lack the ability to measure bulk-like properties. Here, the measurement physics of TGS, design of the experimental facility, and initial results using both light and heavy ion exposures are described. Lastly, several short- and long-term upgrades are discussed which will further increase the capabilities of this diagnostic.

More Details

Evaluation of post-weld heat treatments applied to FeCrAl alloy weldments

Journal of Nuclear Materials

Mahaffey, Jacob T.; Brittan, Andrew; Guckenberger, Aaron; Couet, Adrien; Field, Kevin G.

The nuclear incident at the Fukushima Daiichi nuclear power plant has created a strong push for accident-tolerant fuel cladding to replace current zirconium-based cladding. A current near-term focus is on iron-chromium-aluminum (FeCrAl) alloys. Laser-welded FeCrAl samples (C35MN, C37M, and C35M10 TC) were subjected to three different post-weld heat treatment regimes: 650 °C for 5 h, 850 °C for 1 h, and 850 °C for 5 h. The samples were then analyzed using optical light microscopy, micro-hardness indentation, and scanning electron microscopy coupled with energy-dispersive spectroscopy and electron backscatter diffraction. The base microstructure of C37M and C35M10 TC experienced significant grain coarsening outside the fusion zone due to the applied post-weld heat treatments, whereas Nb-rich precipitation in C35MN limited grain growth compared with the other alloys studied.

More Details

Kinetics of the Topochemical Transformation of (PbSe)m(TiSe2)n(SnSe2)m(TiSe2)n to (Pb0.5Sn0.5Se)m(TiSe2)n

Journal of the American Chemical Society

Medlin, Douglas L.

Solid-state reaction kinetics on atomic length scales have not been heavily investigated due to the long times, high reaction temperatures, and small reaction volumes at interfaces in solid-state reactions. All of these conditions present significant analytical challenges in following reaction pathways. Herein we use in situ and ex situ X-ray diffraction, in situ X-ray reflectivity, high-angle annular dark field scanning transmission electron microscopy, and energy-dispersive X-ray spectroscopy to investigate the mechanistic pathways for the formation of a layered (Pb0.5Sn0.5Se)1+δ(TiSe2)m heterostructure, where m is the varying number of TiSe2 layers in the repeating structure. Thin film precursors were vapor deposited as elemental-modulated layers into an artificial superlattice with Pb and Sn in independent layers, creating a repeating unit with twice the size of the final structure. At low temperatures, the precursor undergoes only a crystallization event to form an intermediate (SnSe2)1+γ(TiSe2)m(PbSe)1+δ(TiSe2)m superstructure. At higher temperatures, this superstructure transforms into a (Pb0.5Sn0.5Se)1+δ(TiSe2)m alloyed structure. The rate of decay of superlattice reflections of the (SnSe2)1+γ(TiSe2)m(PbSe)1+δ(TiSe2)m superstructure was used as the indicator of the progress of the reaction. Here, we show that increasing the number of TiSe2 layers does not decrease the rate at which the SnSe2 and PbSe layers alloy, suggesting that at these temperatures it is reduction of the SnSe2 to SnSe and Se that is rate limiting in the formation of the alloy and not the associated diffusion of Sn and Pb through the TiSe2 layers.

More Details

A verified conformal decomposition finite element method for implicit, many-material geometries

Journal of Computational Physics

Roberts, Scott A.; Mendoza, Hector; Brunini, Victor; Noble, David R.

As computing power rapidly increases, quickly creating a representative and accurate discretization of complex geometries arises as a major hurdle towards achieving a next generation simulation capability. Component definitions may be in the form of solid (CAD) models or derived from 3D computed tomography (CT) data, and creating a surface-conformal discretization may be required to resolve complex interfacial physics. The Conformal Decomposition Finite Element Method (CDFEM) has been shown to be an efficient algorithm for creating conformal tetrahedral discretizations of these implicit geometries without manual mesh generation. In this work we describe an extension to CDFEM to accurately resolve the intersections of many materials within a simulation domain. This capability is demonstrated on both an analytical geometry and an image-based CT mesostructure representation consisting of hundreds of individual particles. Effective geometric and transport properties are the calculated quantities of interest. Solution verification is performed, showing CDFEM to be optimally convergent in nearly all cases. Representative volume element (RVE) size is also explored and per-sample variability quantified. Relatively large domains and small elements are required to reduce uncertainty, with recommended meshes of nearly 10 million elements still containing upwards of 30% uncertainty in certain effective properties. This work instills confidence in the applicability of CDFEM to provide insight into the behaviors of complex composite materials and provides recommendations on domain and mesh requirements.

More Details

Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

Journal of Computational and Applied Mathematics

Lin, Paul T.; Shadid, John N.; Hu, Jonathan J.; Pawlowski, Roger; Cyr, Eric C.

This work explores the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems generated by a Newton nonlinear solver are solved iteratively by preconditioned Krylov subspace methods, and the efficiency of this approach depends critically on the scalability and performance of the algebraic multigrid preconditioner. This study considers the performance of these numerical methods as implemented in the second-generation Trilinos stack, which is 64-bit compliant and not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are presented; these results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
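The solver stack described above pairs a Krylov method with a multigrid preconditioner. The interplay can be sketched with preconditioned conjugate gradients on a small 1D Poisson problem, with a plain Jacobi preconditioner standing in for algebraic multigrid (an illustrative toy, not the Trilinos implementation):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for it in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, it + 1
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

n = 100                                        # 1D Poisson stencil [-1, 2, -1]
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))              # Jacobi preconditioner
x, iters = pcg(A, b, M_inv)
print(iters, np.linalg.norm(A @ x - b))
```

Jacobi leaves the iteration count growing with problem size; the point of the multigrid preconditioner in the paper is to keep iteration counts roughly constant as the mesh, and core count, scale up.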

More Details

Predicting High-Temperature Decomposition of Lithiated Graphite: Part I. Review of Phenomena and a Comprehensive Model

Journal of the Electrochemical Society

Shurtz, Randy C.; Engerer, Jeffrey D.; Hewson, John C.

Heat release that leads to thermal runaway of lithium-ion batteries begins with decomposition reactions associated with lithiated graphite. We broadly review the observed phenomena related to lithiated graphite electrodes and develop a comprehensive model that, with a single parameter set, predicts measurements with reasonable accuracy over the available temperature range and for a range of graphite particle sizes. The model uses a standardized total heat release and takes advantage of a revised dependence of the reaction rates and the tunneling barrier on specific surface area. The reaction extent is limited by inadequate electrolyte or lithium. Calorimetry measurements show that heat release from the reaction between lithiated graphite and electrolyte accelerates above ~200°C, and the model captures this without introducing additional chemical reactions. Instead, it assumes that the electron-tunneling barrier through the solid electrolyte interphase (SEI) grows initially and then becomes constant at some critical magnitude, which allows the reaction to accelerate with rising temperature by means of its activation energy. Phenomena that could produce this upper limit on the tunneling barrier are discussed. Model predictions with two candidate activation energies are evaluated against calorimetry data, and recommendations are made for optimal parameters.
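The qualitative mechanism described above, an Arrhenius rate attenuated by a tunneling barrier that grows with reaction extent until it saturates, can be illustrated with a toy integration. All parameter values below are invented for illustration and are not the paper's fitted parameters:

```python
import numpy as np

def react(T_K, steps=20000, dt=0.01):
    """Toy SEI-limited reaction: rate ~ A*exp(-Ea/RT)*exp(-beta*z)*(1-x),
    where the barrier z grows with extent x but saturates at z_max.
    Illustrative parameters only."""
    A, Ea, R = 1e9, 1.3e5, 8.314          # 1/s, J/mol, J/(mol K)
    beta, z_max = 8.0, 0.5                # barrier sensitivity and cap
    x = 0.0
    for _ in range(steps):
        z = min(x, z_max)                 # barrier capped at z_max
        rate = A * np.exp(-Ea / (R * T_K)) * np.exp(-beta * z) * (1 - x)
        x = min(1.0, x + rate * dt)
    return x

# Higher temperature -> greater extent of reaction after the same hold time
print(react(450.0), react(500.0))
```

Once z hits z_max, the self-limiting term stops growing and the Arrhenius factor alone governs the rate, which is the mechanism the paper invokes for the acceleration above ~200°C.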

More Details

Sparse Data Acquisition on Emerging Memory Architectures

IEEE Access

Quach, Tu T.; Agarwal, Sapan; James, Conrad D.; Marinella, Matthew; Aimone, James B.

Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. Experimental results show at least an order-of-magnitude improvement in acquisition time when the data are sparse. In addition, we show that the energy cost of our approach is competitive with that of the sequential method.
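A summation kernel effectively measures weighted sums y = Φx of the stored vector x, so sparse contents can be recovered from far fewer reads than entries, in the spirit of compressed sensing. A sketch using orthogonal matching pursuit, one standard recovery algorithm (the paper's exact method may differ):

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse vector from m << n measurements y = Phi @ x
    using orthogonal matching pursuit."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4                         # array size, reads, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random summation weights
x_hat = omp(Phi, Phi @ x, k)
print(np.max(np.abs(x - x_hat)))             # recovery error from 64 reads vs 256
```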

More Details

Polarimetric Interferometric SAR Change Detection Discrimination

IEEE Transactions on Geoscience and Remote Sensing

West, Roger D.; Riley, Robert M.

A coherent change detection (CCD) image, computed from a geometrically matched, temporally separated pair of complex-valued synthetic aperture radar (SAR) image sets, conveys the pixel-level equivalence between the two observations. Low coherence values in a CCD image are typically due either to some physical change in the corresponding pixels or to a low signal-to-noise observation; a CCD image does not directly convey the nature of the change that caused the low coherence. In this paper, we introduce a mathematical framework for discriminating between different types of change within a CCD image. We utilize the extra degrees of freedom and information from polarimetric interferometric SAR (PolInSAR) data and PolInSAR processing techniques to define a 29-dimensional feature vector that contains information capable of discriminating between different types of change in a scene. We also propose two change-type discrimination functions that can be trained with feature-vector training data, and demonstrate change-type discrimination on an example image set for three different types of change. Finally, we describe and characterize the performance of the two proposed discrimination functions by way of receiver operating characteristic curves, confusion matrices, and pass matrices.
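The coherence underlying a CCD image is the normalized complex cross-correlation of the two co-registered images. A minimal sketch of a single-window coherence estimate (operational CCD computes this per pixel over a small sliding window; the example data are synthetic):

```python
import numpy as np

def coherence(f, g):
    """Sample coherence magnitude between co-registered complex images
    f and g: |sum f g*| / sqrt(sum |f|^2 * sum |g|^2), in [0, 1]."""
    num = np.abs(np.sum(f * np.conj(g)))
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / max(den, 1e-12)

rng = np.random.default_rng(1)
f = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
g_same = f * np.exp(1j * 0.3)    # unchanged scene, constant phase offset
g_changed = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
print(coherence(f, g_same))      # ~1: no change
print(coherence(f, g_changed))   # near 0: decorrelated (change or noise)
```

Both a physical change and low SNR drive coherence toward zero, which is exactly the ambiguity the paper's 29-dimensional PolInSAR feature vector is designed to resolve.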

More Details

Elastic functional principal component regression

Statistical Analysis and Data Mining

Tucker, J.D.; Lewis, John R.; Srivastava, Anuj

We study regression using functional predictors in situations where these functions contain both phase and amplitude variability. In other words, the functions are misaligned due to errors in time measurements, and these errors can significantly degrade both model estimation and prediction performance. Current techniques either ignore the phase variability or handle it via preprocessing, that is, with an off-the-shelf technique for functional alignment and phase removal. We develop a functional principal component regression model that handles phase and amplitude variability in a comprehensive fashion. The model utilizes a mathematical representation of the data known as the square-root slope function. These functions preserve the L2 norm under warping and are ideally suited for simultaneous estimation of regression and warping parameters. Using both simulated and real-world data sets, we demonstrate our approach and evaluate its prediction performance relative to current models. In addition, we propose an extension to functional logistic and multinomial logistic regression.
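The square-root slope function of f is q(t) = sign(f′(t))·√|f′(t)|, and warping by γ maps q to (q∘γ)·√γ′, which leaves the L2 norm unchanged. A numerical sketch of that invariance (discretized, so equality holds up to discretization error):

```python
import numpy as np

def srsf(f, t):
    """Square-root slope function q(t) = sign(f') * sqrt(|f'|)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

t = np.linspace(0.0, 1.0, 2001)
f = np.sin(2 * np.pi * t)
gamma = t ** 2                          # a monotone time warp of [0, 1]
f_warp = np.interp(gamma, t, f)         # f composed with gamma

q, q_warp = srsf(f, t), srsf(f_warp, t)
norm = lambda q: np.sqrt(np.sum(q ** 2) * (t[1] - t[0]))
print(norm(q), norm(q_warp))            # equal up to discretization error
```

Because ∫q² dt = ∫|f′| dt is the total variation of f, which a monotone warp cannot change, distances between SRSFs are unaffected by misalignment; this is what makes the representation suitable for jointly estimating regression and warping parameters.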

More Details

Galvanostatic Plating with a Single Additive Electrolyte for Bottom-Up Filling of Copper in Mesoscale TSVs

Journal of the Electrochemical Society

Hollowell, Andrew E.; Menk, Lyle; Baca, Ehren; Blain, Matthew G.; Mcclain, Jaime; Dominguez, Jason; Smith, Anna

A methanesulfonic acid (MSA) electrolyte with a single suppressor additive was used for potentiostatic bottom-up filling of copper in mesoscale through-silicon vias (TSVs). However, galvanostatic deposition is desirable for production-level full-wafer plating tools, as they are typically not equipped with the reference electrodes required for potentiostatic plating. Potentiostatic deposition was therefore used to determine the overpotential required for bottom-up TSV filling, and the resultant current was measured to establish a range of current densities to investigate for galvanostatic deposition. Galvanostatic plating conditions were then optimized to achieve void-free bottom-up filling in mesoscale TSVs for a range of sample sizes.

More Details

Osiris: A low-cost mechanism to enable restoration of secure non-volatile memories

Proceedings of the Annual International Symposium on Microarchitecture, MICRO

Ye, Mao; Hughes, Clayton; Awad, Amro

With Non-Volatile Memories (NVMs) beginning to enter the mainstream computing market, it is time to consider how to secure NVM-equipped computing systems. The recent Meltdown and Spectre attacks are evidence that security must be intrinsic to computing systems and not added as an afterthought. Processor vendors are taking the first steps and beginning to build security primitives into commodity processors. One security primitive associated with the use of emerging NVMs is memory encryption. Memory encryption, while necessary, is very challenging when used with NVMs because it exacerbates the write-endurance problem. Secure architectures use cryptographic metadata that must be persisted and restored to allow secure recovery of data in the event of power loss. Specifically, encryption counters must be persistent to enable secure and functional recovery of an interrupted system. However, the cost of ensuring and maintaining persistence for these counters can be significant. In this paper, we propose a novel scheme to maintain encryption counters without the need for frequent updates. Our new memory controller design, Osiris, repurposes memory Error-Correction Codes (ECCs) to enable fast restoration and recovery of encryption counters. To evaluate our design, we use Gem5 to run eight memory-intensive workloads selected from SPEC2006 and U.S. Department of Energy (DoE) proxy applications. Compared to a write-through counter-cache scheme, Osiris on average reduces memory writes by 48.7% (increasing lifetime by 1.95x) and reduces the performance overhead from 51.5% (for write-through) to only 5.8%. Furthermore, without the need for a backup battery or extra power-supply hold-up time, Osiris performs better than a battery-backed write-back scheme (5.8% vs. 6.6% overhead) and has less write traffic (2.6% vs. 5.9% overhead).
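Counter-mode memory encryption, which motivates Osiris, derives a one-time pad from a per-line counter, so losing the counter makes the line undecryptable; that is why counters must be recoverable after power loss. A toy sketch (SHA-512 as a stand-in PRF; real designs use AES, and the ECC-based recovery mechanics of Osiris are not modeled here):

```python
import hashlib

def pad(key, addr, counter):
    """One-time pad for a 64-byte memory line, derived from the line's
    address and counter (toy PRF via SHA-512, which emits 64 bytes)."""
    return hashlib.sha512(key + addr.to_bytes(8, "little")
                          + counter.to_bytes(8, "little")).digest()

def encrypt_line(key, addr, counter, line):
    """XOR with the pad; the same call decrypts."""
    return bytes(a ^ b for a, b in zip(line, pad(key, addr, counter)))

key = b"secret-root-key!"
line = bytes(range(64))                  # one 64-byte memory line
ct = encrypt_line(key, 0x1000, 7, line)

# Decryption only succeeds with the exact counter value used to encrypt;
# a system interrupted by power loss must therefore restore counters
# (Osiris recovers them by trial decryption checked against ECC bits).
print(encrypt_line(key, 0x1000, 7, ct) == line)
print(encrypt_line(key, 0x1000, 8, ct) == line)
```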

More Details

Phase-locked photonic wire lasers by π coupling

Nature Photonics

Khalatpour, Ali; Reno, John L.; Hu, Qing

The term photonic wire laser is now widely used for lasers with transverse dimensions much smaller than the wavelength. As a result, a large fraction of the mode propagates outside the solid core. Here, we propose and demonstrate a scheme to form a coupled cavity by taking advantage of this unique feature of photonic wire lasers. In this scheme, we used quantum cascade lasers with an antenna-coupled third-order distributed feedback grating as the platform. Inspired by the chemistry of hybridization, our scheme phase-locks multiple such lasers by π coupling. Alongside the coupled-cavity laser, we demonstrated several performance metrics that are important for various applications in sensing and imaging: continuous electrical tuning of ~10 GHz at ~3.8 THz (fractional tuning of ~0.26%), a good level of output power (~50–90 mW of continuous-wave power), and tight beam patterns (~10° beam divergence).
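The hybridization picture invoked above can be sketched with a two-mode coupled-mode-theory matrix: two identical cavities at frequency ω₀ with coupling κ hybridize into symmetric and antisymmetric supermodes split by 2κ. A toy numerical example (illustrative values, not the paper's model):

```python
import numpy as np

# Two identical cavities at resonance w0 with coupling kappa; the
# eigenvectors of H are the symmetric/antisymmetric supermodes and the
# eigenvalues are their split frequencies w0 -/+ kappa.
w0, kappa = 1.0, 0.05
H = np.array([[w0, kappa],
              [kappa, w0]])
freqs, modes = np.linalg.eigh(H)
print(freqs)  # [0.95, 1.05]
```

Locking the array to a single supermode is what yields one phase-coherent output from multiple wire lasers; the sign of the coupling (the "π coupling" of the title) selects which supermode lases.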

More Details

Radiation Response of AlGaN-Channel HEMTs

IEEE Transactions on Nuclear Science

Martinez, Marino; King, Michael P.; Baca, Albert G.; Allerman, A.A.; Armstrong, Andrew A.; Klein, Brianna A.; Douglas, Erica A.; Kaplar, Robert; Swanson, Scot E.

In this paper, we present heavy-ion and proton data on AlGaN high-voltage HEMTs showing single-event burnout, total ionizing dose, and displacement damage responses. These are the first such data for materials of this type. Two different designs of the epitaxial structure were tested for single-event burnout (SEB). The default layout design showed burnout voltages that decreased rapidly with increasing LET, falling to about 25% of the nominal breakdown voltage for ions with an LET of about 34 MeV·cm²/mg for both structures. Samples of the device structure with lower AlN content were tested with varying gate-drain spacing and revealed improved robustness to heavy ions, with burnout voltages that did not decrease up to at least 33.9 MeV·cm²/mg. Failure analysis consistently showed a single point, at a random location, where the gate and drain had been shorted. Oscilloscope traces of terminal voltages and currents during burnout events support the hypothesis that burnout begins with a heavy-ion strike in the vulnerable region between gate and drain, which subsequently initiates a cascade of events resulting in damage that is largely manifested elsewhere in the device. This hypothesis also suggests a path for greatly reducing susceptibility to SEB as development of this technology goes forward. Lastly, testing with 2.5 MeV protons showed only minor changes in device characteristics.

More Details