Quantum Performance Assessment (poster)
Abstract not provided.
Of specific concern to this report and the related experiments is ionization of air by gamma rays and the cascading electrons in the High-Energy Radiation Megavolt Electron Source (HERMES) III courtyard. When photons generated by HERMES encounter a neutral atom or molecule, there is a chance that they will interact via one of several mechanisms: the photoelectric effect, Compton scattering, or pair production. In both the photoelectric effect and Compton scattering, an electron is liberated from the atom or molecule with a direction of travel preferentially aligned with the gamma ray. The result is a flow of electrons away from the source region, which in turn generates large-scale electric and magnetic fields. The strength and dynamics of these fields depend on the conductivity of the air. A more comprehensive description is provided by Longmire and Gilbert.
As the Exascale era arrives, DARMA is uniquely positioned at the forefront of asynchronous many-task (AMT) research and development (R&D) to explore emerging programming model paradigms for next-generation HPC applications at Sandia, across NNSA labs, and beyond. The DARMA project explores how to fundamentally shift the expression (programming model, PM) and execution (execution model, EM) of massively concurrent HPC scientific algorithms to be more asynchronous, resilient to executional aberrations in heterogeneous/unpredictable environments, and data-dependency conscious, thereby enabling an intelligent, dynamic, and self-aware runtime to guide execution. We begin by presenting an overview of the general philosophy guiding the novel DARMA developments, followed by a brief reminder of the background of this project. We finally present the FY19 design requirements.
The Leo Brady Seismic Network (LBSN) was established in 1960 by Sandia National Laboratories for monitoring underground nuclear tests (UGTs) at the Nevada Test Site (NTS), renamed in 2010 the Nevada National Security Site (NNSS). The LBSN has operated in various configurations throughout its existence, but it has generally comprised four to six stations at regional distances from the NNSS with evenly spaced azimuthal coverage. Between 1962 and the early 1980s, the LBSN and a sister network operated by Lawrence Livermore National Laboratory were the most comprehensive U.S. sources of regional seismic data on UGTs. During the pre-digital era, LBSN data were transmitted as frequency-modulated (FM) audio over telephone lines to the NTS and recorded in analog on hi-fi 8-track AMPEX tapes. These tapes have been stored for decades in temperature-stable buildings or bunkers on the NNSS and at Kirtland Air Force Base in Albuquerque, NM, and contain the sole record of this irreplaceable data from the analog era; full waveforms of UGTs during this time were never routinely converted to digital form. Over the past few years we have been developing a process to recover and calibrate data from these tapes, converting them from FM audio to digital waveforms in ground motion units. The calibration of legacy data from the LBSN is still ongoing. To date, we have digitized tapes from 592 separate UGTs. As a proof of concept, we calibrated data from the BOXCAR event.
The 2018 Nonlinear Mechanics and Dynamics (NOMAD) Research Institute was successfully held from June 18 to August 2, 2018. NOMAD brings together participants with diverse technical backgrounds to work in small teams to cultivate new ideas and approaches in engineering mechanics and dynamics research. NOMAD provides an opportunity for researchers, especially early-career researchers, to develop lasting collaborations that go beyond what can be established through the limited interactions at their institutions or at annual conferences. A total of 17 students came to Albuquerque, New Mexico to participate in the seven-week program held at the Mechanical Engineering building on the University of New Mexico campus. The students collaborated on one of six research projects developed by mentors from Sandia National Laboratories, the University of New Mexico, and other academic institutions. In addition to the research activities, the students attended weekly technical seminars and tours, and socialized at off-hour events including an Albuquerque Isotopes baseball game. At the end of the summer, the students gave final technical presentations on their research findings. Many of the research discoveries made at NOMAD are published as proceedings at technical conferences and align directly with the critical mission work performed at Sandia.
A series of outdoor shots was conducted at the HERMES III facility in November 2016. These experiments had several goals, one of which was an improved understanding of the courtyard radiation environment. Previous work had developed parametric fits to the spatial and temporal dose rate in the area of interest. This work explores the shot-to-shot variation of the dose in the courtyard, updated fit parameters, and an improved dose rate model that better captures high-frequency content. The parametric fit for the spatial profile is found to be adequate in the far field; however, the near-field radiation dose is still not well understood.
As a follow-up to results presented at the 16th International Symposium on Reactor Dosimetry, a new set of low-energy photon filter box designs was evaluated for potential testing at the Gamma Irradiation Facility in Sandia National Laboratories' Technical Area V. The goal of this filter box design study is to produce the highest fidelity gamma ray test environment for electronic parts. Using Monte Carlo coupled photon/electron transport, approximately a dozen different designs were evaluated for their effectiveness in reducing the dose enhancement in a silicon sensor. The completion of this study provides the Radiation Metrology Laboratory staff with a starting point for experimental test plans that could lead to improvements in the gamma ray test environment at the Gamma Irradiation Facility.
The goal of the DOE OE Energy Storage System Safety Roadmap is to foster confidence in the safety and reliability of energy storage systems. Three interrelated objectives support the realization of that goal: research, codes and standards (C/S), and communication/coordination. The objective focused on C/S is "To apply research and development to support efforts that refocused on ensuring that codes and standards are available to enable the safe implementation of energy storage systems in a comprehensive, non-discriminatory and science-based manner."
This document presents a new method developed for characterizing the delayed gamma-ray radiation fields in pulse reactors such as the Annular Core Research Reactor (ACRR) and the Fueled Ring External Cavity (FREC-II). The environments used to test the method in the ACRR were FF, LB44, PLG, and CdPoly; the environments used in the FREC-II were FF with rods down, FF with rods up, CdPoly with rods down, and CdPoly with rods up. All environment configurations used the same fission product gamma-ray source energy spectrum. The method requires the fission sites recorded in the MCNP KCODE source tapes; a FORTRAN script was written to extract and translate the fission-site coordinates. The 10K fission sites were then input into an MCNP SOURCE-mode script. A parametric analysis performed with a MATLAB script determined that 10K fission sites are an appropriate number of coordinates for convergence to the correct answer. The method gave excellent results and was tested at the ACRR, FREC-II, and White Sands Missile Range (WSMR). It can be applied to other pulse research reactors as well.
International Journal for Uncertainty Quantification
In this work, we build upon a recently developed approach for solving stochastic inverse problems based on a combination of measure-theoretic principles and Bayes' rule. We propose a multi-fidelity method to reduce the computational burden of performing uncertainty quantification using high-fidelity models. This approach is based on a Monte Carlo framework for uncertainty quantification that combines information from solvers of various fidelities to obtain statistics on the quantities of interest of the problem. In particular, our goal is to generate samples from a high-fidelity push-forward density at a fraction of the cost of standard Monte Carlo methods, while maintaining flexibility in the number of random model input parameters. Key to this methodology is the construction of a regression model to represent the stochastic mapping between the low- and high-fidelity models, such that most of the computational work can be shifted to the low-fidelity model. To that end, we employ Gaussian process regression and present extensions to multi-level-type hierarchies as well as to the case of multiple quantities of interest. Finally, we demonstrate the feasibility of the framework in several numerical examples.
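As an illustration of the kind of low-to-high-fidelity mapping described above, the following sketch regresses a high-fidelity quantity of interest on its low-fidelity counterpart with a Gaussian process and then pushes cheap Monte Carlo samples through the learned map. The toy solvers, sample sizes, and kernel choice are our assumptions, not the paper's setup.

```python
# Hedged sketch: Gaussian process regression of the low->high fidelity map.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def low_fidelity(x):   # hypothetical cheap solver
    return np.sin(8.0 * x)

def high_fidelity(x):  # hypothetical expensive solver
    return (x - 0.5) * np.sin(8.0 * x) + 0.1 * x

# A few expensive pilot runs pair low- and high-fidelity outputs.
x_pilot = rng.uniform(0.0, 1.0, 15)
q_lo, q_hi = low_fidelity(x_pilot), high_fidelity(x_pilot)

# Regress the high-fidelity QoI on the low-fidelity QoI.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(q_lo.reshape(-1, 1), q_hi)

# Push many cheap samples through the learned map to approximate the
# high-fidelity push-forward density without further expensive runs.
x_mc = rng.uniform(0.0, 1.0, 10_000)
q_hi_pred = gp.predict(low_fidelity(x_mc).reshape(-1, 1))
print("estimated mean of high-fidelity QoI:", q_hi_pred.mean())
```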
Applied Physics Reviews
Abstract not provided.
Sierra/SD is a structural dynamics finite element software package that is known for its scalability and performance on DOE supercomputers. While there are historical documents demonstrating weak and strong scaling on DOE systems such as Redsky, no such formal studies have been done on modern architectures. This report demonstrates that Sierra/SD still scales on modern architectures. Unstructured meshes of an I-beam geometry are solved at sizes ranging from fifty thousand degrees of freedom in serial up to one and a half billion degrees of freedom on over eighteen thousand processors, using only default solver options. The report serves as a baseline for users to estimate the computational cost of finite element analyses in Sierra/SD, understand how solver options relate to computational cost, and pick optimal processor counts for a given problem size, as well as a baseline for evaluating computational cost and scalability on next-generation architectures.
International Journal for Uncertainty Quantification
This paper considers response surface approximations for discontinuous quantities of interest. Our objective is not to adaptively characterize the manifold defining the discontinuity. Instead, we utilize an epistemic description of the uncertainty in the location of a discontinuity to produce robust bounds on sample-based estimates of probabilistic quantities of interest. We demonstrate that two common machine learning strategies for classification, one based on nearest neighbors (Voronoi cells) and one based on support vector machines, provide reasonable descriptions of the region where the discontinuity may reside. In higher dimensional spaces, we demonstrate that support vector machines are more accurate for discontinuities defined by smooth manifolds. We also show how gradient information, often available via adjoint-based approaches, can be used to define indicators to effectively detect a discontinuity and to decompose the samples into clusters using an unsupervised learning technique. Numerical results demonstrate the epistemic bounds on probabilistic quantities of interest for simplistic models and for a compressible fluid model with a shock-induced discontinuity.
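To make the classification step concrete, here is a minimal sketch of one of the two strategies named above: a support vector machine trained on sampled quantity-of-interest signs to approximate the region containing the discontinuity. The step-function model, sample counts, and kernel are illustrative assumptions.

```python
# Hedged sketch: SVM classification of which side of a discontinuity a
# sample falls on; the jump location below is purely illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(400, 2))
# Hypothetical QoI with a jump across the smooth curve x1 = 0.3*sin(pi*x0).
qoi = np.where(X[:, 1] > 0.3 * np.sin(np.pi * X[:, 0]), 1.0, -1.0)

clf = SVC(kernel="rbf", gamma="scale").fit(X, qoi > 0)

# Samples whose predicted class disagrees with nearby samples flag the
# region where the discontinuity may reside; bounds on probabilistic QoIs
# can then treat that region epistemically.
X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
print(clf.predict(X_new))
```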
Journal of Nanoscience and Nanotechnology
Lead iodide-based perovskites are promising optoelectronic materials ideal for solar cells. Recently emerged perovskite nanocrystals (NCs) offer additional advantages, including a size-tunable band gap, structural stability, and solvent-based processing. Here we report a simple surfactant-assisted two-step synthesis to produce monodisperse PbI2 NCs, which are then converted to methylammonium lead iodide perovskite NCs. Electron microscopy characterization showed that these NCs have competitive monodispersity. Additionally, combined results from X-ray diffraction, optical absorption, and photoluminescence confirmed the formation of high-quality methylammonium lead iodide perovskite NCs. More importantly, by avoiding the use of hard-to-remove chemicals, the resulting perovskite NCs can be readily integrated into applications, especially solar cells, through versatile solution/colloidal-based methods.
AIChE Journal
While peak shaving is commonly used to reduce power costs, chemical process facilities that can reduce power consumption on demand during emergencies (e.g., extreme weather events) bring additional value through improved resilience. For process facilities to effectively negotiate demand response (DR) contracts and make investment decisions regarding flexibility, they need to quantify their additional value to the grid. We present a grid-centric mixed-integer stochastic programming framework to determine the value of DR for improving grid resilience in place of capital investments that can be cost prohibitive for system operators. We formulate problems using both a linear approximation and a nonlinear alternating current power flow model. Our numerical results with both models demonstrate that DR can be used to reduce the capital investment necessary for resilience, increasing the value that chemical process facilities bring through DR. Furthermore, the linearized model often underestimates the amount of DR needed in our case studies.
Physical Review A
Photodetection plays a key role in basic science and technology, with exquisite performance having been achieved down to the single-photon level. Further improvements in photodetectors would open new possibilities across a broad range of scientific disciplines and enable new types of applications. However, it is still unclear what is possible in terms of ultimate performance and what properties are needed for a photodetector to achieve such performance. Here, we present a general modeling framework for photodetectors whereby the photon field, the absorption process, and the amplification process are all treated as one coupled quantum system. The formalism naturally handles field states with single or multiple photons as well as a variety of detector configurations and includes a mathematical definition of ideal photodetector performance. The framework reveals how specific photodetector architectures introduce limitations and tradeoffs for various performance metrics, providing guidance for optimization and design.
Bulletin of the Seismological Society of America
We invert far-field infrasound data for the equivalent seismoacoustic time-domain moment tensor to assess the effects of variable atmospheric models and source phenomena. The infrasound data were produced by a series of underground chemical explosions conducted during the Source Physics Experiment (SPE), which was originally designed to study seismoacoustic signal phenomena. The first goal is to investigate the sensitivity of the inversion to the variability of the estimated atmospheric model. The second goal is to determine the relative contribution of two presumed source mechanisms to the observed infrasonic wavefield. Rather than using actual atmospheric observations to estimate the necessary atmospheric Green's functions, we build a series of atmospheric models that rely on publicly available, regional-scale atmospheric observations. The atmospheric observations are summarized and interpolated onto a 3D grid to produce a model of sound speed at the time of the experiment. For each of the four SPE acoustic datasets that we invert, we produce a suite of three atmospheric models per chemical explosion event, based on 10 years of meteorological data: an average model, which averages the atmospheric conditions for the 10 years prior to each SPE event, and two extrema models. To parameterize the inversion, we assume that the source of infrasonic energy results from the linear combination of explosion-induced surface spall and linear seismic-to-acoustic mode conversion at the Earth's free surface. We find that the inversion yields relatively repeatable results for the estimated spall source. Conversely, the estimated isotropic explosion source is highly variable. This suggests that (1) the majority of the observed acoustic energy is produced by the spall and/or (2) our modeling of the elastic energy, and its subsequent conversion to acoustic energy, is too simplistic.
Proceedings - IEEE 14th International Conference on eScience, e-Science 2018
Large-scale collaborative scientific software projects require more knowledge than any one person typically possesses. This makes coordination and communication of knowledge and expertise a key factor in creating and safeguarding software quality, without which we cannot have sustainable software. However, as researchers attempt to scale up the production of software, they are confronted by problems of awareness and understanding. This presents an opportunity to develop better practices and tools that directly address these challenges. To that end, we conducted a case study of developers of the Trilinos project. We surveyed the software development challenges they face and show how those problems are connected with what developers know and how they communicate. Based on these data, we provide a series of practicable recommendations and outline a path forward for future research.
IEEE Journal on Multiscale and Multiphysics Computational Techniques
Applications involving quantum physics are becoming an increasingly important area for electromagnetic engineering. To address practical problems in these emerging areas, appropriate numerical techniques must be utilized. However, the unique needs of many of these applications require new computational electromagnetic solvers to be developed. The A-Φ formulation is a novel approach that can address many of these needs. This formulation utilizes equations developed in terms of the magnetic vector potential (A) and electric scalar potential (Φ). The resulting equations overcome many of the limitations of traditional solvers and are ideal for coupling to quantum mechanical calculations. In this work, the A-Φ formulation is extended by developing time domain integral equations suitable for multiscale perfect electric conducting objects. These integral equations can be stably discretized and constitute a robust numerical technique that is a vital step in addressing the needs of many emerging applications. To validate the proposed formulation, numerical results are presented which demonstrate the stability and accuracy of the method.
IEEE Power and Energy Society General Meeting
Energy storage systems are flexible resources that accommodate and mitigate variability and uncertainty in the load and generation of modern power systems. We present a stochastic optimization approach for sizing and scheduling an energy storage system (ESS) for behind-the-meter use. Specifically, we investigate the use of an ESS with a solar photovoltaic (PV) system and a generator in islanded operation tasked with balancing a critical load. The load and PV generation are uncertain and variable, so forecasts of these variables are used to determine the required energy capacity of the ESS as well as the schedule for operating the ESS and the generator. When the forecasting uncertainties can be fit to normal distributions, the probabilistic load balancing constraint can be reformulated as a linear inequality constraint, and the resulting optimization problem can be solved as a linear program. Finally, we present results from a case study considering the balancing of the critical load of a water treatment plant in islanded operation.
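The reformulation step named above follows the standard chance-constraint pattern. As a sketch in illustrative notation (not the paper's), let g_t and b_t be the generator and ESS discharge decisions and let the net critical load ℓ_t be normally distributed:

\[
\Pr\!\left(g_t + b_t \ge \ell_t\right) \ge 1-\varepsilon,
\qquad \ell_t \sim \mathcal{N}(\mu_t, \sigma_t^2)
\quad\Longleftrightarrow\quad
g_t + b_t \ \ge\ \mu_t + z_{1-\varepsilon}\,\sigma_t,
\]

where z_{1-ε} is the standard normal quantile. The right-hand side is a constant, so the constraint is linear in the decision variables, which is what allows the overall sizing and scheduling problem to be solved as a linear program.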
Proceedings - International Carnahan Conference on Security Technology
Adversary sophistication in the cyber domain is a constantly growing threat. As more systems become accessible from the Internet, the risk of breach, exploitation, and malice grows. To thwart reconnaissance and exploitation, Moving Target Defense (MTD) has been researched and deployed in various systems to modify the threat surface of a system. Tools are necessary to analyze the security, reliability, and resilience of information systems against cyber attack and to measure the effectiveness of MTD technologies. Today's security analyses separately utilize (1) real systems such as computers, network routers, and other network equipment; (2) computer emulations (e.g., virtual machines); and (3) simulation models. In this paper, we describe the progress made in developing and utilizing hybrid Live, Virtual, Constructive (LVC) environments for the evaluation of a set of MTD technologies. The LVC methodology is most rooted in the modeling and simulation work of the Department of Defense. With recent advances in virtualization and software-defined networking, Sandia has taken the blueprint for LVC and extended it by crafting hybrid environments of simulation, emulation, and human-in-the-loop. Furthermore, we discuss the empirical analysis of MTD technologies and approaches with LVC-based experimentation, incorporating aspects that may impact an operational deployment of the MTD under evaluation.
Proceedings - International Carnahan Conference on Security Technology
Operational Technology (OT) networks existed well before the dawn of the Internet and had enjoyed security through being air-gapped and isolated. However, the interconnectedness of the world has found its way into these OT networks, exposing their vulnerabilities to cyber attack. As the global Internet continues to grow, it becomes more and more embedded in the physical world. The Internet of Things (IoT) is one such example of how IT is blurring the cyber-physical boundaries. The eventuality will be a convergence of IT and OT. Until that day comes, cyber practitioners must still deal with the primitive security features of OT networks, maintain a foothold on enterprise and cloud networks, and attempt to instill sound security practices in burgeoning IoT networks. In this paper, we propose a new method to bring cyber security to OT- and IoT-based networks through Multi-Agent Systems (MAS). MAS are flexible enough to integrate with fixed legacy networks, such as ICS, as well as to be embedded in newer devices and software, such as IoT and IT networks. We discuss the features of MAS, the opportunities they present for cyber security, and a proposed architecture for an OT-based MAS.
Nanomaterials and Nanotechnology
Recently there has been large interest in achieving metasurface resonances with high quality factors. In this article, we examine metasurfaces composed of a finite number of magnetic dipoles oriented parallel or orthogonal to the plane of the metasurface and determine analytic formulas for the quality factors of their resonances. These conditions are experimentally achievable in finite-size metasurfaces made of dielectric cubic resonators at the magnetic dipole resonance. Our results show that finite metasurfaces made of parallel (to the plane) magnetic dipoles exhibit low-quality-factor resonances, with a quality factor that is independent of the number of resonators. More importantly, finite metasurfaces made of orthogonal (to the plane) magnetic dipoles lead to resonances with large quality factors, which ultimately depend on the number of resonators comprising the metasurface. In particular, by properly modulating the array of dipole moments through a distribution of resonator polarizabilities, one can potentially increase the quality factor of metasurface resonances even further. These results provide design guidelines, applicable to any resonator geometry, for achieving a desired quality factor in the development of new devices such as photodetectors, modulators, and sensors.
IEEE Power and Energy Society General Meeting
Energy storage is a unique grid asset in that it is capable of providing a number of grid services. In market areas, these grid services are only as valuable as the market prices for the services provided. This paper formulates the optimization problem for maximizing energy storage revenue from arbitrage and frequency regulation in the CAISO market. The optimization algorithm was then applied to three years of historical market data (2014-2016) at 2200 nodes to quantify the locational and time-varying nature of potential revenue. The optimization assumed perfect foresight, so it provides an upper bound on the maximum expected revenue. Since California is starting to experience negative locational marginal prices (LMPs) because of increased renewable generation, the optimization includes a duty cycle constraint to handle negative LMPs. The results show that participating in frequency regulation provides approximately 3.4 times the revenue of arbitrage. In addition, arbitrage potential revenue is highly location-specific. Since there are only a handful of zones for frequency regulation, the distribution of potential revenue from frequency regulation is much tighter.
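A minimal sketch of the perfect-foresight arbitrage subproblem described above, posed as a linear program. Prices, ratings, and efficiency are made-up placeholders, and the paper's frequency-regulation revenue and duty-cycle constraint for negative LMPs are omitted.

```python
# Hedged sketch: perfect-foresight energy arbitrage as a linear program,
# illustrating the upper-bound revenue computation the paper describes.
import numpy as np
from scipy.optimize import linprog

prices = np.array([20.0, 15.0, -5.0, 30.0, 45.0, 25.0])  # $/MWh, hourly
T, p_max, energy_cap, eta, s0 = len(prices), 1.0, 4.0, 0.9, 2.0

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
c_obj = np.concatenate([prices, -prices])  # minimize (cost - revenue)

# State of charge must stay in [0, energy_cap] after every hour.
L = np.tril(np.ones((T, T)))               # cumulative-sum operator
A_soc = np.hstack([eta * L, -L / eta])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, energy_cap - s0), np.full(T, s0)])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]
print("max arbitrage revenue: $%.2f" % -res.fun)
```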
IEEE Power and Energy Society General Meeting
This paper focuses on a transmission system with a high penetration of converter-interfaced generators participating in its primary frequency regulation. In particular, the effects on system stability of widespread misconfiguration of frequency regulation schemes are considered. Failures in three separate primary frequency control schemes are analyzed by means of time domain simulations where control action was inverted by, for example, negating controller gain. The results indicate that in all cases the frequency response of the system is greatly deteriorated and, in multiple scenarios, the system loses synchronism. It is also shown that including limits to the control action can mitigate the deleterious effects of inverted control configurations.
Proceedings - International Carnahan Conference on Security Technology
Nuisance and false alarms are prevalent in modern physical security systems and often overwhelm the alarm station operators. Deep learning has shown progress in detection and classification tasks; however, it has rarely been implemented as a solution to reduce the nuisance and false alarm rates in physical security systems. Previous work has shown that transfer learning using a convolutional neural network can benefit physical security systems by achieving high accuracy on physical security targets [10]. We leverage this work by coupling the convolutional neural network, which operates on a frame-by-frame basis, with temporal algorithms that evaluate a sequence of such frames (e.g., video analytics). We discuss several alternatives for performing this temporal analysis, in particular Long Short-Term Memory and Liquid State Machine networks, and demonstrate their respective value on exemplar physical security videos. We also outline an architecture for an ensemble learner that leverages the strength of each individual algorithm in its aggregation. The incorporation of these algorithms into physical security systems creates a new paradigm that aims to decrease the volume of nuisance and false alarms so that alarm station operators can focus on the most relevant threats.
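A minimal sketch of the frame-then-sequence coupling described above: a CNN embeds each frame and a recurrent layer classifies the sequence. The backbone, layer sizes, and two-class head are our assumptions, and the Liquid State Machine alternative is not shown.

```python
# Hedged sketch: per-frame CNN features fed to an LSTM for alarm/nuisance
# classification of short video clips.
import torch
import torch.nn as nn
from torchvision import models

class FrameSequenceClassifier(nn.Module):
    def __init__(self, hidden=128, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights in practice
        backbone.fc = nn.Identity()               # reuse 512-d CNN features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                     # clips: (batch, frames, 3, H, W)
        b, f = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, f, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                 # logits per clip

logits = FrameSequenceClassifier()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```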
Proceedings - International Carnahan Conference on Security Technology
Physical security systems (PSS) and humans are inescapably tied in the current physical security paradigm. Yet, physical security system evaluations often end at the console that displays information to the human. That is, these evaluations do not account for human-in-the-loop factors that can greatly impact performance of the security system, even though methods for doing so are well established. This paper highlights two examples of methods for evaluating the human component of the current physical security system. One method is qualitative, focusing on the information the human needs to adequately monitor alarms on a physical site. The other objectively measures the impact of false alarm rates on threat detection. These types of human-centric evaluations are often treated as unnecessary or not cost effective under the belief that human cognition is straightforward and that errors can be either trained away or mitigated with technology. These assumptions are not always correct, the resulting weaknesses are often surprising, and they can often be identified only with objective assessments of human-system performance. Thus, taking the time to perform human element evaluations can identify unintuitive human-system weaknesses and can provide significant cost savings by mitigating vulnerabilities and reducing costly system patches or retrofits to correct an issue after the system has been deployed.
Proceedings - International Carnahan Conference on Security Technology
Unmanned aircraft system (UAS) technologies have gained immense popularity in the commercial sector and have enabled capabilities that were not available just a short time ago. Once limited to the domain of highly skilled hobbyists or precision military instruments, consumer UAS are now widespread due to increased computational power, manufacturing techniques, and numerous commercial applications. The rise of consumer UAS and the low barrier to entry necessary to utilize these systems provides an increased potential for using a UAS as a delivery platform for malicious intent. This creates a new security concern which must be addressed. The contribution presented in this work is the realization of counter UAS security technology concepts viewed through the traditional security framework and the associated challenges to such a framework.
IEEE Power and Energy Society General Meeting
In this work, we provide an economic analysis of using behind-the-meter (BTM) energy storage systems (ESS) for time-of-use (TOU) bill management together with power factor correction. A nonlinear optimization problem is formulated to find the optimal ESS charge/discharge schedule that minimizes the energy and demand charges while correcting the power factor of the utility customer. The energy storage state of charge (SOC) and inverter power factor (PF) are included in the constraints of the optimization. The problem is then transformed into a linear programming (LP) problem and formulated using the Pyomo optimization modeling language. Case studies are conducted for a wastewater treatment plant (WWTP) in New Mexico.
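The abstract names Pyomo explicitly; the following is a hedged sketch of a TOU-plus-demand-charge LP in that style. Tariff rates, storage ratings, and the load profile are illustrative placeholders, and the power-factor constraints are omitted.

```python
# Hedged sketch: TOU energy charges plus a demand charge, with storage
# charge/discharge decisions and SOC bounds, formulated in Pyomo.
import pyomo.environ as pyo

T = 24
load = [500.0] * T                      # kW, hypothetical WWTP profile
tou = [0.08 if t < 16 else 0.20 for t in range(T)]  # $/kWh placeholder tariff

m = pyo.ConcreteModel()
m.t = pyo.RangeSet(0, T - 1)
m.ch = pyo.Var(m.t, bounds=(0, 250))    # storage charging, kW
m.dis = pyo.Var(m.t, bounds=(0, 250))   # storage discharging, kW
m.soc = pyo.Var(m.t, bounds=(0, 1000))  # state of charge, kWh
m.peak = pyo.Var(bounds=(0, None))      # billed demand, kW

def soc_rule(m, t):                     # hourly SOC balance, 90% efficiency
    prev = 500.0 if t == 0 else m.soc[t - 1]
    return m.soc[t] == prev + 0.9 * m.ch[t] - m.dis[t] / 0.9
m.soc_bal = pyo.Constraint(m.t, rule=soc_rule)

def peak_rule(m, t):                    # net grid draw sets the demand charge
    return m.peak >= load[t] + m.ch[t] - m.dis[t]
m.peak_def = pyo.Constraint(m.t, rule=peak_rule)

m.cost = pyo.Objective(expr=sum(tou[t] * (load[t] + m.ch[t] - m.dis[t])
                                for t in m.t) + 15.0 * m.peak)
# pyo.SolverFactory("glpk").solve(m)    # any LP solver completes the sketch
```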
IEEE Power and Energy Society General Meeting
Utilizing historical utility outage data, an approach is presented to optimize investments that maximize reliability, i.e., minimize the System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI) metrics. The method is designed for distribution system operators (DSOs) to improve reliability through small investments. It is not appropriate for large system planning and investments (e.g., new transmission lines or generation), since that type of analysis requires further economic and stability considerations. The first step in the reliability investment optimization is to create synthetic outage data sets for a future year based on probability density functions fit to historical utility outage data. Once several (likely hundreds of) future-year outage scenarios are created, an optimization model is used to minimize the norm of the synthetic outage SAIDI and SAIFI (other metrics could also be used). The results from this method can be used for reliability system planning purposes and can inform DSOs which investments to pursue to improve their reliability metrics.
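A minimal sketch of the synthetic-outage step described above. The Poisson/lognormal distribution choices and every parameter are assumptions for illustration; in practice they would be fit to the historical outage data.

```python
# Hedged sketch: draw synthetic outage scenarios from fitted distributions
# and score each future-year scenario with SAIDI/SAIFI.
import numpy as np

rng = np.random.default_rng(42)
customers_total = 50_000

def synthetic_year():
    n_outages = rng.poisson(120)                   # outages per year
    duration_min = rng.lognormal(mean=4.0, sigma=0.8, size=n_outages)
    customers_hit = rng.integers(10, 2_000, size=n_outages)
    saifi = customers_hit.sum() / customers_total  # interruptions/customer
    saidi = (duration_min * customers_hit).sum() / customers_total  # min/customer
    return saidi, saifi

scenarios = np.array([synthetic_year() for _ in range(500)])
print("mean SAIDI %.1f min, mean SAIFI %.2f" % tuple(scenarios.mean(axis=0)))
```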
Science
We report that over the past century, and particularly since the outset of the Cold War, wargames (interactive simulations used to evaluate aspects of tactics, operations, and strategy) have become an integral means for militaries and policy-makers to evaluate how strategic decisions are made related to nuclear weapons strategy and international security. These methods have also been applied beyond the military realm, to examine phenomena as varied as elections, government policy, international trade, and supply-chain mechanics. Today, a renewed focus on wargaming, combined with access to sophisticated and inexpensive drag-and-drop digital game development frameworks and new cloud computing architectures, has democratized the ability to create massive multiplayer gaming experiences. With the integration of simulation tools and experimental methods from a variety of social science disciplines, a science-based experimental gaming approach has the potential to transform the insights generated from gaming by creating human-derived, large-n datasets for replicable, quantitative analysis. In the following, we outline challenges associated with contemporary simulation and wargaming tools, investigate where scholars have searched for game data, and explore the utility of new experimental gaming and data analysis methods in both policy-making and academic settings.
As part of the Source Physics Experiment (SPE) Phase I shallow chemical detonation series, multiple surface and borehole active-source seismic campaigns were executed to perform high resolution imaging of seismic velocity changes in the granitic substrate. Cross-correlation data processing methods were implemented to efficiently and robustly perform semi-automated change detection of first-arrival times between campaigns. The change detection algorithm updates the arrival times, and consequently the velocity model, of each campaign. The resulting tomographic imagery reveals the evolution of the subsurface velocity structure as the detonations progressed.
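A minimal sketch of the cross-correlation change detection idea: the lag that maximizes the correlation between windowed first arrivals from two campaigns estimates the travel-time change. The synthetic wavelet and sampling rate below are stand-ins for real campaign data.

```python
# Hedged sketch: estimate an arrival-time shift between two campaigns by
# cross-correlating windowed first-arrival waveforms.
import numpy as np

fs = 1000.0                      # samples per second (assumed)
t = np.arange(1024) / fs
pulse = lambda t0: np.exp(-((t - t0) * 60.0) ** 2) * np.sin(200.0 * (t - t0))

before = pulse(0.300)            # first arrival, campaign 1
after = pulse(0.296)             # arrives 4 ms earlier after the shot

xcorr = np.correlate(after, before, mode="full")
lag = np.argmax(xcorr) - (len(before) - 1)
print("arrival-time change: %.1f ms" % (1e3 * lag / fs))  # ~ -4.0 ms
```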
This report provides a summary of notes for building and running the Sandia Computational Engine for Particle Transport for Radiation Effects (SCEPTRE) code. SCEPTRE is a general purpose C++ code for solving the Boltzmann transport equation in serial or parallel using unstructured spatial finite elements, multigroup energy treatment, and a variety of angular treatments including discrete ordinates and spherical harmonics. Either the first-order form of the Boltzmann equation or one of the second-order forms may be solved. SCEPTRE requires a small number of open-source third-party libraries (TPLs) to be available, and example scripts for building these TPLs are provided. The TPLs needed by SCEPTRE are Trilinos, boost, and netcdf. SCEPTRE uses an autoconf build system, and a sample configure script is provided. Running SCEPTRE requires that the user provide a spatial finite-element mesh in Exodus format and a cross section library in a format that will be described. SCEPTRE uses XML-based input, and several examples are provided.
On November 28, 2018 at approximately 4:17 pm, the arsenic monitor in the Acid Waste Neutralization (AWN) room located in 858N registered a concentration above the permit level of 51 ppb stated in the ABCWUA Permit 2069G Daily Composite Limit. 100 mL samples had been drawn from the waste stream at approximately 6 pm on November 28, 2018. The samples were analyzed, and results received on November 29, 2018 confirmed an arsenic concentration above the permit level.
Many membrane distillation models have been created to simulate the heat and mass exchange processes involved, but most of the literature validates models against only a couple of cases with minor configuration changes. Tools are needed that allow tradeoffs to be evaluated across many configurations. The multiconfiguration membrane distillation model handles many configurations. This report introduces membrane distillation, provides the underlying theory, and presents the work to verify and validate the model against experimental data from the Colorado School of Mines and a lower-resolution model created at the National Renewable Energy Laboratory. Though more data analysis and testing are needed, an initial look at the model-to-experiment comparisons indicates that the model correlates well with the data but that design comparisons are likely to be incorrect across a broad range of configurations. More accurate quantification of heat and mass transfer through computational fluid dynamics is suggested.
Interest in small modular reactors (SMRs) couples their promise as an efficient and effective means of meeting increasing energy demands with a growing aversion to the cost and schedule overruns traditionally associated with the current fleet of commercial nuclear power plants (NPPs). SMRs are attractive because they offer a significant relative cost reduction compared to current-generation nuclear reactors, increasing their appeal around the globe. Sandia's Global Nuclear Assurance and Security (GNAS) research perspective reframes the discussion around the "complex risk" of SMRs to address interdependencies between safety, safeguards, and security. This systems study provides technically rigorous analysis of the safety, safeguards, and security risks of SMR technologies. The aim of this research is three-fold. The first aim is to provide analytical evidence to support safety, safeguards, and security claims related to SMRs (Study Report Volume I). Second, this study aims to introduce a systems-theoretic approach for exploring interdependencies between the technical evaluations (Study Report Volume II). The third aim is to demonstrate Sandia's capability for timely, rigorous, and technical analysis to support emerging complex GNAS mission objectives.
The Thermal-Mechanical Failure project conducted in FY 2018 was divided into three subprojects: 1. Calibration of the uniaxial response of 304L stainless steel specimens at three temperatures (20, 150, and 310°C) and two strain rates (2 × 10⁻⁴ and 8 × 10⁻² s⁻¹); 2. Measurement of the fraction of plastic work that is converted to heat (the Taylor-Quinney parameter) for 304L stainless steel; this fraction is usually assumed to be 0.95 in analyses because data are available for only a few materials; 3. Comparison of the responses predicted by isotropic and kinematic hardening plasticity models in two simplified structural problems. One problem is a can crush followed by pressurization and is loosely associated with a crush-and-burn scenario. The other is a drop scenario of a thin-walled cylinder carrying a cantilevered internal mass.
We perform a joint inversion of absolute and differential P and S body waves, gravity measurements, and surface wave dispersion curves for the 3-D P- and S-wave velocity structure of the Nevada National Security Site (NNSS) and vicinity. Data from earthquakes, past nuclear tests, and other active-source chemical explosive experiments, such as the Source Physics Experiments (SPE), are combined with surface wave phase and group speed measurements from ambient noise, source interferometry, and active-source experiments to construct a 3-D velocity model of the site with resolvable structures as fine as 6 km horizontally and 2 km vertically. Results compare favorably with previous studies and expand and extend knowledge of the 3-D structure of the region.
The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on Exascale architectures. The project fills the critical feature gap of performing visualization and analysis on processors such as graphics processing units and many-integrated-core devices. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent, as well as in stand-alone form. Moreover, these tools depend on this ECP effort to make effective use of ECP architectures.
The Nevada National Security Site (NNSS) will serve as the geologic setting for a Source Physics Experiment (SPE) program. The SPE will provide ground truth data to create and improve strong ground motion and seismic S-wave generation and propagation models. The NNSS was chosen as the test bed because it provides a variety of geologic settings ranging from relatively simple to very complex. Each series of SPE testing will comprise the setting and firing of explosive charges (source) placed in a central bore hole at varying depths and recording ground motions in instrumented bore holes located in two rings around the source positioned at different radii. Modeling using advanced simulation codes will be performed both a priori and after each test to predict ground response and to improve models based on acquired field data, respectively. A key component in the predictive capability and ultimate validation of the models is the full understanding of the intervening geology between the source and the instrumented bore holes including the geomechanical behavior of the site rock/structural features. This report presents a limited scope of work for an initial phase of primarily unconfined compression testing. Samples tested came from the U-15n core hole, which was drilled in granitic rock (quartz monzonite). The core hole was drilled at the location of the central SPE borehole, and thus represents material in which the explosive charges will be detonated. The U-15n location is the site of the first SPE, in Area 15 of the NNSS.
The Nevada National Security Site (NNSS) serves as the geologic setting for a Source Physics Experiment (SPE) program. The SPE provides ground truth data to create and improve strong ground motion and seismic S-wave generation and propagation models. The NNSS was chosen as the test bed because it provides a variety of geologic settings ranging from relatively simple to very complex. Each series of SPE testing will comprise the setting and firing of explosive charges (source) placed in a central borehole at varying depths and recording ground motions in instrumented boreholes located in two rings around the source, positioned at different radii. Modeling using advanced simulation codes will be performed both before and after each test to predict ground response and to improve models based on acquired field data, respectively. A key component in the predictive capability and ultimate validation of the models is the full understanding of the intervening geology between the source and the instrumented boreholes, including the geomechanical behavior of the site's rock/structural features. This report summarizes unconfined compression testing (UCS) from coreholes U-15n#12 and U-15n#13 and compares those datasets to UCS results from coreholes U-15n and U-15n#10. Corehole U-15n#12 was drilled at -60° to the horizontal and U-15n#13 was drilled vertically in granitic rock (quartz monzonite) after the third SPE shot. Figure 1 illustrates that, at the surface, the U-15n#12 and U-15n#13 coreholes were approximately 30 meters and 10 meters, respectively, from the central SPE borehole (U-15n). Corehole U-15n#12 intersects the central SPE borehole (U-15n) at a core depth of 174 feet (approximately 150 feet vertical depth). The U-15n#12 and U-15n#13 location, in Area 15 of the NNSS, is the site of the first, second, and third SPE shots.
Direct Shear (DS) and Triaxial Shear (FCT) tests from core holes U-15n and U-15n#10 are part of a larger material characterization effort for the Source Physics Experiment (SPE) project. This larger effort encompasses characterizing a granite body in Nevada both before and after each SPE shot. Core hole U-15n is the vertically oriented source hole for all SPE shots; pre-shot core was taken from this hole for DS and FCT testing. After two SPE shots were executed, an inclined core hole (U-15n#10) was drilled; both DS and FCT tests were conducted on core from this hole. The first shot (SPE-1), conducted on May 3, 2011, was a calibration shot. SPE-1 was an order of magnitude smaller than the second shot (SPE-2). After SPE-2 was conducted on October 25, 2011, the aforementioned inclined core hole (U-15n#10) was drilled. At its bottom, the inclined core hole intersects the source hole. The third shot (SPE-3) occurred on July 24, 2012. Vertical and inclined core holes were drilled post-SPE-3, and specimens will soon be selected for geomechanical characterization. At the time of this writing, work is ongoing in Nevada in preparation for the fourth SPE shot (SPE-4).
Dynamic Brazilian tension (DBR) tests from core hole U-15n are part of a larger material characterization effort for the Source Physics Experiment (SPE) project. This larger effort encompasses characterizing Climax Stock granite rock from the Nevada National Security Site (NNSS) both before and after each SPE shot. The current test series includes DBR tests on dry intact granite and fault material at depths of approximately 85 and 150 ft.
Triaxial compression tests from core hole U-15n are part of a larger material characterization effort for the Source Physics Experiment (SPE) project. This larger effort encompasses characterizing Climax Stock granite rock from the Nevada National Security Site (NNSS) both before and after each SPE shot. The current test series includes triaxial compression tests on dry and saturated intact granite and fault material at 100, 200, 300, and 400 MPa confining pressure.
The Nevada National Security Site (NNSS) serves as the geologic setting for a Source Physics Experiment (SPE) program. The SPE provides ground truth data to create and improve strong ground motion and seismic S-wave generation and propagation models. The NNSS was chosen as the test bed because it provides a variety of geologic settings ranging from relatively simple to very complex. Each series of SPE testing will comprise the setting and firing of explosive charges (source) placed in a central borehole at varying depths and recording ground motions in instrumented boreholes located in two rings around the source, positioned at different radii. Modeling using advanced simulation codes will be performed both before and after each test to predict ground response and to improve models based on acquired field data, respectively. A key component in the predictive capability and ultimate validation of the models is the full understanding of the intervening geology between the source and the instrumented boreholes, including the geomechanical behavior of the site's rock/structural features. This memorandum reports on an initial phase of unconfined compression testing from corehole U-15n#10. Specimens tested came from the U-15n#10 core hole, which was drilled at -60° to the horizontal in granitic rock (quartz monzonite) after the second SPE shot (SPE-2). Figure 1 illustrates that, at the surface, the core hole was approximately 90 feet from the central SPE borehole. Corehole U-15n#10 intersects the central SPE borehole (U-15n) at a core depth of 170 feet (approximately 150 feet vertical depth), which is within the highly damaged zone of SPE-2. The U-15n#10 location, in Area 15 of the NNSS, is the site of the first, second, and third SPE shots.
Efficiency in requirements engineering and management (REM) for complex hardware systems is desirable to reduce program impacts, such as schedule and budget. Sandia National Labs (SNL) investigated external state-of-the-practice REM to capture insights, recommendations, and best practices from external entities on several REM topics. Twenty-one at-will participants contributed responses to closed- and open-ended questions. The results were synthesized and are provided herein. The results help SNL and others to understand where its practices are current; what trends, approaches, or processes in REM might be beneficial if implemented or introduced; what challenges might be avoided; where efficiencies might be realized; and which practices are still maturing or evolving in industry and academia, so that SNL can stay abreast of these developments.
IEEE International Ultrasonics Symposium, IUS
Fingerprint sensing is pervasive in the cellular telecommunications market. Current commercial fingerprint sensors utilize capacitive scanning. This work focuses on the design, fabrication and characterization of post-complementary-metal-oxide-semiconductor (CMOS) compatible piezoelectric micro-machined ultrasonic transducers for use as ultrasonic pixels to improve robustness to contamination and allow for sub-epidermis scans. Ultrasonic pixels are demonstrated at frequencies ranging from 100 kHz to 800 kHz with several electrode coverages and styles to identify trends.
Nuclear Instruments and Methods in Physics Research. Section B, Beam Interactions with Materials and Atoms
A facility for continuously monitoring the thermal and elastic performance of materials under exposure to ion beam irradiation has been designed and commissioned. By coupling an all-optical, non-contact, non-destructive measurement technique known as transient grating spectroscopy (TGS) to a 6 MV tandem ion accelerator, bulk material properties may be measured at high fidelity as a function of irradiation exposure and temperature. Ion beam energies and optical parameters may be tuned to ensure that only the properties of the ion-implanted surface layer are interrogated. This facility provides complementary capabilities to the set of facilities worldwide which have the ability to study the evolution of microstructure in situ during radiation exposure, but lack the ability to measure bulk-like properties. Here, the measurement physics of TGS, design of the experimental facility, and initial results using both light and heavy ion exposures are described. Lastly, several short- and long-term upgrades are discussed which will further increase the capabilities of this diagnostic.
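As a sketch of the measurement physics (standard TGS relations in our notation, not necessarily the paper's), crossing two pump pulses writes a grating of period Λ, and the diffracted signal carries both a thermal decay and a surface acoustic wave oscillation:

\[
A(t) \;\propto\; e^{-\alpha q^{2} t}, \qquad q = \frac{2\pi}{\Lambda}, \qquad f_{\mathrm{SAW}} = \frac{c_{\mathrm{SAW}}}{\Lambda},
\]

where α is the thermal diffusivity extracted from the decay and f_SAW is the acoustic frequency from which elastic properties follow; tuning Λ against the ion range is what confines the measurement to the implanted layer.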
Journal of Nuclear Materials
The nuclear incident at the Fukushima Daiichi nuclear power plant has created a strong push for accident-tolerant fuel cladding to replace current zirconium-based cladding. A current near-term focus is on iron-chromium-aluminum (FeCrAl) alloys. Laser-welded FeCrAl samples (C35MN, C37M, and C35M10 TC) were subjected to three different post-weld heat treatment regimes: 650 °C for 5 h, 850 °C for 1 h, and 850 °C for 5 h. The samples were then analyzed using optical light microscopy, micro-hardness indentation, and scanning electron microscopy coupled with energy-dispersive spectroscopy and electron backscatter diffraction. The base microstructure of C37M and C35M10 TC experienced significant grain coarsening outside the fusion zone due to the applied post-weld heat treatments, whereas Nb-rich precipitation in C35MN limited grain growth compared with the other alloys studied.
Journal of the American Chemical Society
Solid-state reaction kinetics on atomic length scales have not been heavily investigated due to the long times, high reaction temperatures, and small reaction volumes at interfaces in solid-state reactions. All of these conditions present significant analytical challenges in following reaction pathways. Herein we use in situ and ex situ X-ray diffraction, in situ X-ray reflectivity, high-angle annular dark field scanning transmission electron microscopy, and energy-dispersive X-ray spectroscopy to investigate the mechanistic pathways for the formation of a layered (Pb0.5Sn0.5Se)1+δ(TiSe2)m heterostructure, where m is the varying number of TiSe2 layers in the repeating structure. Thin film precursors were vapor deposited as elemental-modulated layers into an artificial superlattice with Pb and Sn in independent layers, creating a repeating unit with twice the size of the final structure. At low temperatures, the precursor undergoes only a crystallization event to form an intermediate (SnSe2)1+γ(TiSe2)m(PbSe)1+δ(TiSe2)m superstructure. At higher temperatures, this superstructure transforms into a (Pb0.5Sn0.5Se)1+δ(TiSe2)m alloyed structure. The rate of decay of superlattice reflections of the (SnSe2)1+γ(TiSe2)m(PbSe)1+δ(TiSe2)m superstructure was used as the indicator of the progress of the reaction. Here, we show that increasing the number of TiSe2 layers does not decrease the rate at which the SnSe2 and PbSe layers alloy, suggesting that at these temperatures it is reduction of the SnSe2 to SnSe and Se that is rate limiting in the formation of the alloy and not the associated diffusion of Sn and Pb through the TiSe2 layers.
Journal of Computational Physics
As computing power rapidly increases, quickly creating a representative and accurate discretization of complex geometries arises as a major hurdle towards achieving a next-generation simulation capability. Component definitions may be in the form of solid (CAD) models or derived from 3D computed tomography (CT) data, and creating a surface-conformal discretization may be required to resolve complex interfacial physics. The Conformal Decomposition Finite Element Method (CDFEM) has been shown to be an efficient algorithm for creating conformal tetrahedral discretizations of these implicit geometries without manual mesh generation. In this work we describe an extension to CDFEM to accurately resolve the intersections of many materials within a simulation domain. This capability is demonstrated on both an analytical geometry and an image-based CT mesostructure representation consisting of hundreds of individual particles. Effective geometric and transport properties are the calculated quantities of interest. Solution verification is performed, showing CDFEM to be optimally convergent in nearly all cases. Representative volume element (RVE) size is also explored and per-sample variability quantified. Relatively large domains and small elements are required to reduce uncertainty, with recommended meshes of nearly 10 million elements still containing upwards of 30% uncertainty in certain effective properties. This work instills confidence in the applicability of CDFEM to provide insight into the behaviors of complex composite materials and provides recommendations on domain and mesh requirements.
Journal of Computational and Applied Mathematics
This work explores the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. This study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
Journal of the Electrochemical Society
Heat release that leads to thermal runaway of lithium-ion batteries begins with decomposition reactions associated with lithiated graphite. We broadly review the observed phenomena related to lithiated graphite electrodes and develop a comprehensive model that predicts, with a single parameter set and reasonable accuracy, measurements over the available temperature range for a range of graphite particle sizes. The model developed in this work uses a standardized total heat release and takes advantage of a revised dependence of reaction rates and the tunneling barrier on specific surface area. The reaction extent is limited by inadequate electrolyte or lithium. Calorimetry measurements show that heat release from the reaction between lithiated graphite and electrolyte accelerates above ~200°C, and the model addresses this without introducing additional chemical reactions. The model assumes that the electron-tunneling barrier through the solid electrolyte interphase (SEI) grows initially and then becomes constant at some critical magnitude, which allows the reaction to accelerate as the temperature rises by means of its activation energy. Phenomena that could result in the upper limit on the tunneling barrier are discussed. The model predictions with two candidate activation energies are evaluated through comparisons to calorimetry data, and recommendations are made for optimal parameters.
IEEE Access
Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. The experimental results show that at least an order of magnitude improvement in acquisition time can be achieved when the data are sparse. Finally, we show that the energy cost associated with our approach is competitive with that of the sequential method.
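As a rough illustration of the sampling idea (a minimal sketch under simplifying assumptions, not the authors' hardware protocol), each summation query can be modeled as a signed sum over the array, so an m x n measurement matrix with m << n suffices to recover a k-sparse array with standard sparse-recovery solvers:

    # Sub-linear acquisition sketch: m random summation queries of a k-sparse
    # vector, recovered with orthogonal matching pursuit (numpy + scikit-learn).
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, k, m = 1024, 10, 128
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.choice([-1.0, 1.0], size=(m, n))   # each row: one summation query
    y = Phi @ x                                  # m reads instead of n
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
    print(np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x))  # near-zero error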
IEEE Transactions on Geoscience and Remote Sensing
A coherent change detection (CCD) image, computed from a geometrically matched, temporally separated pair of complex-valued synthetic aperture radar (SAR) image sets, conveys the pixel-level equivalence between the two observations. Low-coherence values in a CCD image are typically due to either some physical change in the corresponding pixels or a low signal-to-noise observation. A CCD image does not directly convey the nature of the change that occurred to cause low coherence. In this paper, we introduce a mathematical framework for discriminating between different types of change within a CCD image. We utilize the extra degrees of freedom and information from polarimetric interferometric SAR (PolInSAR) data and PolInSAR processing techniques to define a 29-dimensional feature vector that contains information capable of discriminating between different types of change in a scene. We also propose two change-type discrimination functions that can be trained with feature vector training data and demonstrate change-type discrimination on an example image set for three different types of change. Finally, we describe and characterize the performance of the two proposed change-type discrimination functions by way of receiver operating characteristic curves, confusion matrices, and pass matrices.
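For context, the pixel-level coherence that a CCD image conveys is the windowed sample correlation of the co-registered complex pair; a minimal numpy/scipy sketch (not the paper's 29-dimensional PolInSAR feature computation) is:

    # Windowed coherence |E[f g*]| / sqrt(E|f|^2 E|g|^2) of complex SAR images.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def coherence(f, g, win=5):
        fg = f * np.conj(g)
        num = uniform_filter(fg.real, win) + 1j * uniform_filter(fg.imag, win)
        den = np.sqrt(uniform_filter(np.abs(f)**2, win) *
                      uniform_filter(np.abs(g)**2, win))
        return np.abs(num) / np.maximum(den, 1e-12)  # 1 = unchanged, 0 = changed/noisy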
Statistical Analysis and Data Mining
We study regression using functional predictors in situations where these functions contain both phase and amplitude variability. In other words, the functions are misaligned due to errors in time measurements, and these errors can significantly degrade both model estimation and prediction performance. Current techniques either ignore the phase variability or handle it via preprocessing, that is, by using an off-the-shelf technique for functional alignment and phase removal. We develop a functional principal component regression model that comprehensively handles phase and amplitude variability. The model utilizes a mathematical representation of the data known as the square-root slope function. These functions preserve the L2 norm under warping and are ideally suited for simultaneous estimation of regression and warping parameters. Using both simulated and real-world data sets, we demonstrate our approach and evaluate its prediction performance relative to current models. In addition, we propose an extension to functional logistic and multinomial logistic regression.
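To make the representation concrete, the square-root slope function can be computed numerically; a minimal sketch (assuming densely sampled functions) is:

    # SRSF: q(t) = sign(f'(t)) sqrt(|f'(t)|). A warp gamma acts as
    # (q o gamma) * sqrt(gamma'), which leaves the L2 norm unchanged.
    import numpy as np

    def srsf(f, t):
        df = np.gradient(f, t)
        return np.sign(df) * np.sqrt(np.abs(df))

    t = np.linspace(0.0, 1.0, 200)
    q1 = srsf(np.sin(2 * np.pi * t), t)
    q2 = srsf(np.sin(2 * np.pi * t**1.2), t)   # phase-distorted copy
    print(np.trapz((q1 - q2)**2, t))           # L2 distance between SRSFs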
Journal of the Electrochemical Society
A methanesulfonic acid (MSA) electrolyte with a single suppressor additive was used for potentiostatic bottom-up filling of copper in mesoscale through silicon vias (TSVs). However, galvanostatic deposition is desirable for production-level full-wafer plating tools, as they are typically not equipped with the reference electrodes required for potentiostatic plating. Potentiostatic deposition was used to determine the over-potential required for bottom-up TSV filling, and the resultant current was measured to establish a range of current densities to investigate for galvanostatic deposition. Galvanostatic plating conditions were then optimized to achieve void-free bottom-up filling in mesoscale TSVs for a range of sample sizes.
Proceedings of the Annual International Symposium on Microarchitecture, MICRO
With Non-Volatile Memories (NVMs) beginning to enter the mainstream computing market, it is time to consider how to secure NVM-equipped computing systems. Recent Meltdown and Spectre attacks are evidence that security must be intrinsic to computing systems and not added as an afterthought. Processor vendors are taking the first steps and are beginning to build security primitives into commodity processors. One security primitive that is associated with the use of emerging NVMs is memory encryption. Memory encryption, while necessary, is very challenging when used with NVMs because it exacerbates the write endurance problem. Secure architectures use cryptographic metadata that must be persisted and restored to allow secure recovery of data in the event of power loss. Specifically, encryption counters must be persistent to enable secure and functional recovery of an interrupted system. However, the cost of ensuring and maintaining persistence for these counters can be significant. In this paper, we propose a novel scheme to maintain encryption counters without the need for frequent updates. Our new memory controller design, Osiris, repurposes memory Error-Correction Codes (ECCs) to enable fast restoration and recovery of encryption counters. To evaluate our design, we use Gem5 to run eight memory-intensive workloads selected from SPEC2006 and U.S. Department of Energy (DoE) proxy applications. Compared to a write-through counter-cache scheme, on average, Osiris can reduce memory writes by 48.7% (increasing lifetime by 1.95x) and reduce the performance overhead from 51.5% (for write-through) to only 5.8%. Furthermore, without the need for a backup battery or extra power-supply hold-up time, Osiris performs better than a battery-backed write-back scheme (5.8% vs. 6.6% overhead) and has less write traffic (2.6% vs. 5.9% overhead).
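The recovery idea can be sketched at a high level as follows (the helper callables are hypothetical stand-ins, not a real memory-controller API): because a wrong counter decrypts a line to pseudorandom bits that almost surely fail the ECC check, the controller can persist counters only occasionally and, after power loss, replay candidate values until the ECC validates:

    # Conceptual Osiris-style counter recovery sketch (illustrative only).
    def recover_counter(line_addr, persisted_ctr, window, decrypt, ecc_ok):
        # decrypt(addr, ctr) -> decrypted line; ecc_ok tests the stored code
        for candidate in range(persisted_ctr, persisted_ctr + window + 1):
            plaintext = decrypt(line_addr, candidate)  # counter-mode decryption
            if ecc_ok(plaintext):                      # ECC doubles as counter check
                return candidate, plaintext
        raise RuntimeError("counter outside the recovery window")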
Nature Photonics
The term photonic wire laser is now widely used for lasers with transverse dimensions much smaller than the wavelength. As a result, a large fraction of the mode propagates outside the solid core. Here, we propose and demonstrate a scheme to form a coupled cavity by taking advantage of this unique feature of photonic wire lasers. In this scheme, we used quantum cascade lasers with antenna-coupled third-order distributed feedback gratings as the platform. Inspired by the chemistry of hybridization, our scheme phase-locks multiple such lasers by π coupling. Alongside the coupled-cavity laser, we demonstrated several performance metrics that are important for various applications in sensing and imaging: a continuous electrical tuning of ~10 GHz at ~3.8 THz (fractional tuning of ~0.26%), a good level of output power (~50–90 mW of continuous-wave power), and tight beam patterns (~10° beam divergence).
IEEE Transactions on Nuclear Science
In this paper, we present heavy ion and proton data on AlGaN high-voltage HEMTs showing Single Event Burnout, Total Ionizing Dose, and Displacement Damage responses. These are the first such data for materials of this type. Two different designs of the epitaxial structure were tested for Single Event Burnout (SEB). The default layout design showed burnout voltages that decreased rapidly with increasing LET, falling to about 25% of nominal breakdown voltage for ions with LET of about 34 MeV·cm2/mg for both structures. Samples of the device structure with lower AlN content were tested with varying gate-drain spacing and revealed an improved robustness to heavy ions, resulting in burnout voltages that did not decrease up to at least 33.9 MeV·cm2/mg. Failure analysis consistently showed a point, at a random location, where the gate and drain had been shorted. Oscilloscope traces of terminal voltages and currents during burnout events lend support to the hypothesis that burnout events begin with a heavy ion strike in the vulnerable region between gate and drain. This subsequently initiates a cascade of events resulting in damage that is largely manifested elsewhere in the device. This hypothesis also suggests a path for greatly reducing the susceptibility to SEB as development of this technology goes forward. Lastly, testing with 2.5 MeV protons showed only minor changes in device characteristics.
Journal of Physical Chemistry A
This paper provides experimental evidence for the chemical structures of aliphatically substituted and bridged polycyclic aromatic hydrocarbon (PAH) species in gas-phase combustion environments. The identification of these single- and multicore aromatic species, which have been hypothesized to be important in PAH growth and soot nucleation, was made possible through a combination of sampling gaseous constituents from an atmospheric pressure inverse coflow diffusion flame of ethylene and high-resolution tandem mass spectrometry (MS-MS). In these experiments, the flame-sampled components were ionized using a continuous VUV lamp at 10.0 eV and the ions were subsequently fragmented through collisions with Ar atoms in a collision-induced dissociation (CID) process. The resulting fragment ions, which were separated using a reflectron time-of-flight mass spectrometer, were used to extract structural information about the sampled aromatic compounds. The high-resolution mass spectra revealed the presence of alkylated single-core aromatic compounds, and the fragment ions that were observed correspond to the loss of saturated and unsaturated units containing up to a total of 6 carbon atoms. Furthermore, the aromatic structures that form the foundational building blocks of the larger PAHs were identified to be smaller single-ring and pericondensed aromatic species with repetitive structural features. For demonstrative purposes, details are provided for the CID of molecular ions at masses 202 and 434. Insights into the role of the aliphatically substituted and bridged aromatics in the reaction network of PAH growth chemistry were obtained from spatially resolved measurements of the flame. The experimental results are consistent with a growth mechanism in which alkylated aromatics are oxidized to form pericondensed ring structures or react and recombine with other aromatics to form larger, potentially three-dimensional, aliphatically bridged multicore aromatic hydrocarbons.
Cyber-Physical Systems Security
Modern digital hardware and software designs are increasingly complex but are themselves only idealizations of a real system that is instantiated in, and interacts with, an analog physical environment. Insights from physics, formal methods, and complex systems theory can aid in extending reliability and security measures from pure digital computation (itself a challenging problem) to the broader cyber-physical and out-of-nominal arena. Example applications to design and analysis of high-consequence controllers and extreme-scale scientific computing illustrate the interplay of physics and computation. In particular, we discuss the limitations of digital models in an analog world, the modeling and verification of out-of-nominal logic, and the resilience of computational physics simulation. A common theme is that robustness to failures and attacks is fostered by cyber-physical system designs that are constrained to possess inherent stability or smoothness. This chapter contains excerpts from previous publications by the authors.
Cyber-Physical Systems Security
Sandia National Laboratories performed a 6-month effort to stand up a "zero-entry" cyber range environment for the purpose of providing self-directed practice to augment transmedia learning across diverse media and/or devices that may be part of a loosely coupled, distributed ecosystem. This 6-month effort leveraged Minimega, an open-source Emulytics™ (emulation + analytics) tool for launching and managing virtual machines in a cyber range. The proof of concept addressed a set of learning objectives for cybersecurity operations by providing three short "zero-entry" exercises for beginner, intermediate, and advanced levels in network forensics, social engineering, penetration testing, and reverse engineering. Learners provided answers to problems they explored in networked virtual machines. The hands-on environment, Cyber Scorpion, participated in a preliminary demonstration in April 2017 at Ft. Bragg, NC. The present chapter describes the learning experience research and software development effort for a cybersecurity use case and subsequent lessons learned. It offers general recommendations for challenges which may be present in future learning ecosystems.
Cyber-Physical Systems Security
Mixed, augmented, and virtual reality holds promise for many security-related applications including physical security systems. When combined with models of a site, an augmented reality (AR) approach can be designed to enhance knowledge and understanding of the status of the facility. The present chapter describes how improved modeling and simulation will increase situational awareness by blurring the lines among the use of tools for analysis, rehearsal, and training, especially when coupled with immersive interaction experiences offered by augmented reality. We demonstrate how the notion of a digital twin can blur these lines. We conclude with challenges that must be overcome when applying digital twins, advanced modeling, and augmented reality to the design and development of next-generation physical security systems.
Journal of Spacecraft and Rockets
In the present study, three boundary-layer stability codes are compared based on hypersonic high-enthalpy boundary-layer flows around a blunted 7 deg half-angle cone. The code-to-code comparison is conducted between the following codes: the Nonlocal Transition analysis code of the German Aerospace Center (DLR); the Stability and Transition Analysis for hypersonic Boundary Layers code of VirtusAero LLC; and the VKI Extensible Stability and Transition Analysis code of the von Kármán Institute for Fluid Dynamics. The comparison focuses on the role of real-gas effects on the second-mode instability, in particular the disturbance frequency, and deals with the question of how far not accounting for real-gas effects compromises the stability analysis. The experimental test cases for the comparison are provided by the DLR High Enthalpy Shock Tunnel Göttingen and the Japan Aerospace Exploration Agency High Enthalpy Shock Tunnel. The focus of the comparison between the stability results and the measurements is, besides real-gas effects, the influence of uncertainties in the mean flow on the stability analysis.
Cyber-Physical Systems Security
Deep neural networks are often computationally expensive, during both the training stage and inference stage. Training is always expensive, because back-propagation requires high-precision floating-point multiplication and addition. However, various mathematical optimizations may be employed to reduce the computational cost of inference. Optimized inference is important for reducing power consumption and latency and for increasing throughput. This chapter introduces the central approaches for optimizing deep neural network inference: pruning "unnecessary" weights, quantizing weights and inputs, sharing weights between layer units, compressing weights before transferring from main memory, distilling large high-performance models into smaller models, and decomposing convolutional filters to reduce multiply and accumulate operations. In this chapter, using a unified notation, we provide a mathematical and algorithmic description of the aforementioned deep neural network inference optimization methods.
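To make two of the listed methods concrete, here is a toy, framework-agnostic numpy sketch of magnitude pruning and uniform 8-bit weight quantization (illustrative only, not the chapter's notation):

    import numpy as np

    def prune(W, fraction=0.5):
        # Magnitude pruning: zero the smallest-magnitude weights.
        thresh = np.quantile(np.abs(W), fraction)
        return np.where(np.abs(W) < thresh, 0.0, W)

    def quantize(W, bits=8):
        # Uniform symmetric quantization: W ~= q * scale with int8 q.
        scale = np.max(np.abs(W)) / (2**(bits - 1) - 1)
        return np.round(W / scale).astype(np.int8), scale

    W = np.random.randn(256, 256)
    q, scale = quantize(prune(W))   # inference then uses q and scale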
Physical Review Accelerators and Beams
Herein we present details of the design, simulation, and performance of a 100-GW linear transformer driver (LTD) cavity at Sandia National Laboratories. The cavity consists of 20 "bricks." Each brick comprises two 80 nF, 100 kV capacitors connected electrically in series with a custom, 200 kV, three-electrode, field-distortion gas switch. The brick capacitors are bipolar charged to ±100 kV for a total switch voltage of 200 kV. Typical brick circuit parameters are 40 nF capacitance (two 80 nF capacitors in series) and 160 nH inductance. The switch electrodes are fabricated from a WCu alloy and are operated with breathable air. Over the course of 6,556 shots, the cavity generated a peak electrical current and power of 1.03 MA (±1.8%) and 106 GW (±3.1%). Experimental results are consistent (to within uncertainties) with circuit simulations for normal operation and expected failure modes, including prefire and late-fire events. New features of this development that are reported here in detail include: (1) 100 ns, 1 MA, 100-GW output from a 2.2 m diameter LTD into a 0.1 Ω load, (2) high-impedance solid charging resistors that are optimized for this application, and (3) evaluation of maintenance-free trigger circuits using capacitive coupling and inductive isolation.
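As a back-of-the-envelope check of the quoted brick parameters (an idealized, lossless calculation, not the reported circuit simulations):

    import math
    C = 80e-9 / 2        # two 80 nF capacitors in series -> 40 nF per brick
    L = 160e-9           # brick inductance, H
    V = 200e3            # bipolar +/-100 kV charge -> 200 kV switch voltage
    Z = math.sqrt(L / C)                        # ~2.0 ohm characteristic impedance
    f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # ~2 MHz undamped ring frequency
    print(Z, f0, V / Z)  # V/Z ~ 100 kA ideal peak current for one brick

Twenty such bricks discharging in parallel into the 0.1 Ω load, once real losses and switching behavior are included, are consistent in magnitude with the measured ~1 MA cavity output.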
Computers and Fluids
An implicit, low-dissipation, low-Mach, variable density control volume finite element formulation is used to explore foundational understanding of numerical accuracy for large-eddy simulation applications on hybrid meshes. Detailed simulation comparisons are made between low-order hexahedral, tetrahedral, pyramid, and wedge/prism topologies against a third-order, unstructured hexahedral topology. Using smooth analytical and manufactured low-Mach solutions, design-order convergence is established for the hexahedral, tetrahedral, pyramid, and wedge element topologies using a new open boundary condition based on energy-stable methodologies previously deployed within a finite-difference context. A wide range of simulations demonstrate that low-order hexahedral- and wedge-based element topologies behave nearly identically in both computed numerical errors and overall simulation timings. Moreover, low-order tetrahedral and pyramid element topologies also display nearly the same numerical characteristics. Although the superiority of the hexahedral-based topology is clearly demonstrated for trivial laminar, principally-aligned flows, e.g., a 1x2x10 channel flow with specified pressure drop, this advantage is reduced for non-aligned, turbulent flows including the Taylor–Green Vortex, turbulent plane channel flow (Re_τ = 395), and buoyant flow past a heated cylinder. With the order of accuracy demonstrated for both homogeneous and hybrid meshes, it is shown that solution verification for the selected complex flows can be established for all topology types. Although the number of elements in a mesh of like spacing comprised of tetrahedral, wedge, or pyramid elements increases as compared to the hexahedral counterpart, for wall-resolved large-eddy simulation, the increased assembly and residual evaluation computational time for non-hexahedral topologies is offset by more efficient linear solver times. Lastly, most simulation results indicate that modest polynomial promotion provides a significant increase in solution accuracy.
The Center for Computing Research (CCR) at Sandia National Laboratories organizes an active and productive summer program each year, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI). CERI focuses on open, exploratory research in cyber security in partnership with academia, industry, and government, and provides collaborators an accessible portal to Sandia's cybersecurity experts and facilities. Moreover, CERI provides an environment for visionary, threat-informed research on national cyber challenges. CSRI brings university faculty and students to Sandia National Laboratories for focused collaborative research on DOE computer and computational science problems. CSRI provides a mechanism by which university researchers learn about problems in computer and computational science at DOE Laboratories. Participants conduct leading-edge research, interact with scientists and engineers at the laboratories, and help transfer the results of their research to programs at the labs.
Polymer
Post-polymerization reactions of Diels-Alder polyphenylene with ring-substituted benzoyl chloride derivatives, using triflic acid as the catalyst, effected selective Friedel-Crafts acylation of the lateral phenyl groups attached to the polyphenylene backbone. Using 4-(trifluoromethyl) benzoyl chloride gave a polymer with increased hydrophobicity. Using 4-fluorobenzoyl chloride afforded lateral 4-(fluorobenzoyl)phenyl substituents, which were further functionalized by nucleophilic aromatic substitution of the reactive fluoro substituent by 4-methoxyphenol.
Proceedings - 2017 International Conference on Computational Science and Computational Intelligence, CSCI 2017
A forensics investigation after a breach often uncovers network and host indicators of compromise (IOCs) that can be deployed to sensors to allow early detection of the adversary in the future. Over time, the adversary will change tactics, techniques, and procedures (TTPs), which will also change the data generated. If the IOCs are not kept up-to-date with the adversary's new TTPs, the adversary will no longer be detected once all of the IOCs become invalid. Tracking the Known (TTK) is the problem of keeping IOCs, in this case regular expressions (regexes), up-to-date with a dynamic adversary. Our framework solves the TTK problem in an automated, cyclic fashion to bracket a previously discovered adversary. This tracking is accomplished through a data-driven approach of self-adapting a given model based on its own detection capabilities. In our initial experiments, we found that the true positive rate (TPR) of the adaptive solution degrades much less significantly over time than the naïve solution, suggesting that self-updating the model allows the continued detection of positives (i.e., adversaries). The cost for this performance is in the false positive rate (FPR), which increases over time for the adaptive solution, but remains constant for the naïve solution. However, the difference in overall detection performance, as measured by the area under the curve (AUC), between the two methods is negligible. This result suggests that self-updating the model over time should be done in practice to continue to detect known, evolving adversaries.
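A high-level sketch of such a cyclic, self-adapting loop (the induction and near-miss helpers below are hypothetical placeholders, not the paper's implementation):

    def track_known(regexes, batches, induce, is_near_miss):
        # Detect with current regexes, then re-induce them from hits plus
        # near-miss samples so the model brackets the evolving adversary.
        for batch in batches:
            hits = [s for s in batch if any(r.search(s) for r in regexes)]
            near = [s for s in batch if s not in hits and is_near_miss(s, hits)]
            if near:
                regexes = induce(hits + near)   # self-update step
            yield hits, regexes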
In this project we studied undoped Ge/SiGe heterostructure field-effect transistors, which had a very wide hole density range from 1×10^10 cm^-2 to 3.5×10^11 cm^-2, tunable by (negative) gate voltage. At low temperatures, a reasonably high carrier mobility of about 3.4×10^5 cm^2/V·s was achieved.
Macromolecular Materials and Engineering
The use of self-assembling, pre-polymer materials in 3D printing is rare, due to the difficulties of facilitating printing with low molecular weight species and preserving their reactivity and/or functions on the macroscale. Akin to 3D printing of small molecules, examples of extrusion-based printing of pre-polymer thermosets are uncommon, arising from their limited rheological tuneability and slow reaction kinetics. The direct ink write (DIW) 3D printing of a two-part resin, Epon 828 and Jeffamine D230, using a self-assembly approach is reported. Through the addition of self-assembling, ureidopyrimidinone-modified Jeffamine D230 and nanoclay filler, suitable viscoelastic properties are obtained, enabling 3D printing of the epoxy-amine pre-polymer resin. A significant increase in viscosity is observed, with an infinite shear rate viscosity approximately two orders of magnitude higher than control resins, as well as an increase in yield strength and thixotropic behavior. As a result, printing of simple geometries is demonstrated, with parts showing excellent interlayer adhesion that is unachievable using control resins.
With current lithium-ion batteries optimized for performance under relatively low charge rate conditions, implementation of extreme fast charging (XFC) has been hindered by drawbacks including Li plating, kinetic polarization, and heat dissipation. This project will utilize model-informed design of 3-D hierarchical electrodes to tune key XFC-related variables such as (1) bulk porosity/tortuosity; (2) vertical pore diameter, spacing, and lattice; (3) crystallographic orientation of graphite particles relative to exposed surfaces; (4) interfacial chemistry of the graphite surfaces through artificial SEI formation using ALD; and (5) current collector surface roughness (aspect ratio, roughness factor, etc.). A key aspect of implementing novel electrodes is characterizing them in relevant settings. This project, ultimately led out of the University of Michigan by Neil Dasgupta, includes both coin cell and 2+ Ah pouch cell testing, as well as comparison testing against baselines. Sandia National Labs will conduct detailed cell characterization on iterative versions/improvements of the model-based hierarchical electrodes, as well as COTS cells for baseline comparisons. Key metrics include performance under fast charge conditions, as well as the absence or degree of lithium plating. Sandia will use its unique high precision cycling and rapid EIS capabilities to accurately characterize performance and any lithium plating during 6C charging and beyond, coupling electrochemical observations with cell teardown. Sandia will also design custom fixturing to cool cells during rapid charge, to decouple any kinetic effects brought about by cell heating and allow comparisons between different cells and charge rates. Using these techniques, Sandia will assess HOH electrodes from the University of Michigan and will aid in iterative model and electrode design.
Nature Communications
The uncontrolled interaction of a quantum system with its environment is detrimental for quantum coherence. For quantum bits in the solid state, decoherence from thermal vibrations of the surrounding lattice can typically only be suppressed by lowering the temperature of operation. Here, we use a nano-electro-mechanical system to mitigate the effect of thermal phonons on a spin qubit - the silicon-vacancy colour centre in diamond - without changing the system temperature. By controlling the strain environment of the colour centre, we tune its electronic levels to probe, control, and eventually suppress the interaction of its spin with the thermal bath. Strain control provides both large tunability of the optical transitions and significantly improved spin coherence. Finally, our findings indicate the possibility to achieve strong coupling between the silicon-vacancy spin and single phonons, which can lead to the realisation of phonon-mediated quantum gates and nonlinear quantum phononics.
Scientific Reports
Optical nonlocalities are elusive and hardly observable in traditional plasmonic materials like noble and alkali metals. Here we report experimental observation of viscoelastic nonlocalities in the infrared optical response of epsilon-near-zero nanofilms made of low-loss doped cadmium-oxide. The nonlocality is detectable thanks to the low damping rate of conduction electrons and the virtual absence of interband transitions at infrared wavelengths. We describe the motion of conduction electrons using a hydrodynamic model for a viscoelastic fluid, and find excellent agreement with experimental results. The electrons' elasticity blue-shifts the infrared plasmonic resonance associated with the main epsilon-near-zero mode, and triggers the onset of higher-order resonances due to the excitation of electron-pressure modes above the bulk plasma frequency. We also provide evidence of the existence of nonlocal damping, i.e., viscosity, in the motion of optically-excited conduction electrons using a combination of spectroscopic ellipsometry data and predictions based on the viscoelastic hydrodynamic model.
Scientific Reports
Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.
Nature Communications
Methanol is a benchmark for understanding tropospheric oxidation, but is underpredicted by up to 100% in atmospheric models. Recent work has suggested this discrepancy can be reconciled by the rapid reaction of hydroxyl and methylperoxy radicals with a methanol branching fraction of 30%. However, for fractions below 15%, methanol underprediction is exacerbated. Theoretical investigations of this reaction are challenging because of intersystem crossing between singlet and triplet surfaces – ∼45% of reaction products are obtained via intersystem crossing of a pre-product complex – which demands experimental determinations of product branching. Here we report direct measurements of methanol from this reaction. A branching fraction below 15% is established, consequently highlighting a large gap in the understanding of global methanol sources. These results support the recent high-level theoretical work and substantially reduce its uncertainties.
Metallurgical and Materials Transactions A: Physical Metallurgy and Materials Science
U-Pu-Zr alloys are considered ideal metallic fuels for experimental breeder reactors because of their superior material properties and potential for increased burnup performance. However, significant constituent redistribution has been observed in these alloys when irradiated, or subject to a thermal gradient, resulting in inhomogeneity of both composition and phase, which, in turn, alters the fuel performance. The hybrid Potts-phase field method is reformulated for ternary alloys in a thermal gradient and utilized to simulate and predict constituent redistribution and phase transformations in the U-Pu-Zr nuclear fuel system. Simulated evolution profiles for the U-16Pu-23Zr (at. pct) alloy show concentric zones that are compared with published experimental results; discrepancies in zone size are attributed to thermal profile differences and assumptions related to the diffusivity values used. Twenty-one alloys, over the entire ternary compositional spectrum, are also simulated to investigate the effects of alloy composition on constituent redistribution and phase transformations. The U-40Pu-20Zr (at. pct) alloy shows the most potential for compositional uniformity and phase homogeneity, throughout a thermal gradient, while remaining in the compositional range of feasible alloys.
Scientific Reports
We study semiconductor hyperbolic metamaterials (SHMs) at the quantum limit experimentally using spectroscopic ellipsometry as well as theoretically using a new microscopic theory. The theory is a combination of a microscopic density matrix approach for the material response and a Green's function approach for the propagating electric field. Our approach predicts the absorptivity of the full multilayer system and for the first time allows the prediction of in-plane and out-of-plane dielectric functions for every individual layer of the SHM, as well as effective dielectric functions that can be used to describe a homogenized SHM.
Journal of Computational Physics
Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on using an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We use a number of synthetic functions to build insight into behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
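For orientation, a functional tensor-train (stated here in its standard form; the paper's parameterization may differ in detail) represents a multivariate function as a product of matrix-valued univariate factors:

    f(x_1, \ldots, x_d) \approx \mathcal{G}_1(x_1)\,\mathcal{G}_2(x_2)\cdots\mathcal{G}_d(x_d),
    \qquad \mathcal{G}_k(x_k) \in \mathbb{R}^{r_{k-1}\times r_k}, \quad r_0 = r_d = 1,

so the parameter count grows linearly in the dimension d for fixed ranks, which is what makes gradient-based learning of the factors tractable.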
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Scientific Reports
Calcite (CaCO3) is one of the most abundant minerals in the Earth's crust, and it is susceptible to subcritical chemically-driven fracturing. Understanding chemical processes at individual fracture tips, and how they control the development of fractures and fracture networks in the subsurface, is critical for carbon and nuclear waste storage, resource extraction, and predicting earthquakes. Chemical processes controlling subcritical fracture in calcite are poorly understood. We demonstrate a novel approach to quantify the coupled chemical-mechanical effects on subcritical fracture. The calcite surface was indented using a Vickers-geometry indenter tip, which resulted in repeatable micron-scale fractures propagating from the indent. Individual indented samples were submerged in an array of aqueous fluids and an optical microscope was used to track the fracture growth in situ. The fracture propagation rate varied from 1.6 × 10^-8 m s^-1 to 2.4 × 10^-10 m s^-1. The rate depended on the type of aqueous ligand present, and did not correlate with the measured dissolution rate of calcite or trends in zeta-potential. We postulate that chemical complexation at the fracture tip in calcite controls the growth of subcritical fracture. Previous studies indirectly pointed to the zeta-potential being the most critical factor, while our work indicates that variation in the zeta-potential has a secondary effect.
Abstract not provided.
MRS Energy and Sustainability
Energy research is critical to continuing advances in human productivity and welfare. In this Commentary, we raise for debate and discussion what in our view is a growing mis-control and mis-protection of U.S. energy research. This flawed approach originates in natural human tendencies exacerbated by an historical misunderstanding of research and development, science and technology, and the relationships between them. We outline the origin of the mis-control and mis-protection, and propose two guiding principles to mitigate them and instead nurture research: (i) focus on people, not projects; and (ii) culturally insulate research from development, but not science from technology. Our hope is to introduce these principles into the discourse now, so they can help guide policy changes in U.S. energy research and development that are currently being driven by powerful geopolitical winds. Summary: Two foundational guiding principles are proposed to mitigate a growing mis-control and mis-protection of U.S. energy research, and instead to nurture it.
Scientific Reports
Nanostructures may be exposed to irradiation during their manufacture, their engineering and whilst in-service. The consequences of such bombardment can be vastly different from those seen in the bulk. In this paper, we combine transmission electron microscopy with in situ ion irradiation and complementary computer modelling techniques to explore the physics governing the effects of 1.7 MeV Au ions on gold nanorods. Phenomena surrounding the sputtering and associated morphological changes caused by the ion irradiation have been explored. In both the experiments and the simulations, large variations in the sputter yields from individual nanorods were observed. These sputter yields have been shown to correlate with the strength of channelling directions close to the direction in which the ion beam was incident. Craters decorated by ejecta blankets were found to form due to cluster emission, thus explaining the high sputter yields.
Scientific Reports
By combining optical imaging, Raman spectroscopy, Kelvin probe force microscopy (KPFM), and photoemission electron microscopy (PEEM), we show that graphene's layer orientation, as well as layer thickness, measurably changes the surface potential (Φ). Detailed mapping of variable-thickness, rotationally-faulted graphene films allows us to correlate Φ with specific morphological features. Using KPFM and PEEM we measure ΔΦ up to 39 mV for layers with different twist angles, while ΔΦ ranges from 36-129 mV for different layer thicknesses. The surface potential between different twist angles or layer thicknesses is measured at the KPFM instrument resolution of ≤ 200 nm. The PEEM measured work function of 4.4 eV for graphene is consistent with doping levels on the order of 10^12 cm^-2. We find that Φ scales linearly with Raman G-peak wavenumber shift (slope = 22.2 mV/cm^-1) for all layers and twist angles, which is consistent with doping-dependent changes to graphene's Fermi energy in the 'high' doping limit. Our results emphasize that layer orientation is equally as important as layer thickness when designing multilayer two-dimensional systems where surface potential is considered.
Journal of Microelectromechanical Systems
This paper describes the theoretical and experimental investigation of interdigitated transducers capable of producing focused acoustical beams in thin film piezoelectric materials. A mathematical formalism describing focused acoustical beams, Lamb beams, is presented and related to their optical counterparts in two and three dimensions. A novel Fourier domain transducer design methodology is developed and utilized to produce near diffraction limited focused beams within a thin film AlN membrane. The properties of the acoustic beam formed by the transducer were studied by means of Doppler vibrometry implemented with a scanning confocal balanced homodyne interferometer. The Fourier domain modal analysis confirmed that 83% of the acoustical power was delivered to the targeted focused beam, which was constituted from the lowest order symmetric mode, while 1% was delivered unintentionally to the beam formed from the anti-symmetric mode, and the remaining power was isotropically scattered. The transmission properties of the acoustic beams as they interact with devices with wavelength scale features were also studied, demonstrating minimal insertion loss for devices in which subwavelength and pinhole apertures were included. [2018-0059]
Bulletin of the Seismological Society of America
Backprojection techniques are a class of methods for detecting and locating events that have been successfully implemented at local scales for dense networks. This article develops the framework for applying a backprojection method to detect and locate a range of event sizes across a heterogeneous regional network. This article extends previous work on the development of a backprojection method for local and regional seismic event detection, the Waveform Correlation Event Detection System (WCEDS). The improvements outlined here make the technique much more flexible for regional earthquake or explosion monitoring. We first explore how the backprojection operator can be formulated using either a travel-time model or a stack of full waveforms, showing that the former approach is much more flexible and can lead to the detection of smaller events, and to significant improvements in the resolution of event parameters. Second, we discuss the factors that influence the grid of event hypotheses used for backprojection, and develop an algorithm for generating suitable grids for networks with variable density. Third, we explore the effect of including different phases in the backprojection operator, showing that the best results for the study region can be obtained using only the Pg phase, and by including terms for penalizing early arrivals when evaluating the fit for a given event hypothesis. Fourth, we incorporate two parallel backprojection computations with different distance thresholds to enable the robust detection of both network-wide and small (sub-network-only) events. The set of improvements is demonstrated by applying WCEDS to four example events on the University of Utah Seismograph Stations (UUSS) network.
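The core stacking step of a travel-time-based backprojection can be sketched as follows (a schematic illustration only; WCEDS's actual operator, phase handling, and penalty terms are richer):

    # Stack station detection traces at model-predicted delays over a grid of
    # hypothetical origins; maxima over (grid, origin time) indicate events.
    import numpy as np

    def backproject(det, tt, dt):
        # det: (nsta, nt) detection traces sampled at dt seconds
        # tt: (ngrid, nsta) predicted travel times, assumed < nt * dt
        nsta, nt = det.shape
        stack = np.zeros((tt.shape[0], nt))
        for g in range(tt.shape[0]):
            for s in range(nsta):
                shift = int(round(tt[g, s] / dt))
                stack[g, :nt - shift] += det[s, shift:]
            stack[g] /= nsta
        return stack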
Nature Communications
High-intensity lasers interacting with solid foils produce copious numbers of relativistic electrons, which in turn create strong sheath electric fields around the target. The proton beams accelerated in such fields have remarkable properties, enabling ultrafast radiography of plasma phenomena or isochoric heating of dense materials. In view of longer-term multidisciplinary purposes (e.g., spallation neutron sources or cancer therapy), the current challenge is to achieve proton energies well in excess of 100 MeV, which is commonly thought to be possible by raising the on-target laser intensity. Here we present experimental and numerical results demonstrating that magnetostatic fields self-generated on the target surface may pose a fundamental limit to sheath-driven ion acceleration for high enough laser intensities. Those fields can be strong enough (~10^5 T at laser intensities ~10^21 W cm^-2) to magnetize the sheath electrons and deflect protons off the accelerating region, hence degrading the maximum energy the latter can acquire.
Rock Mechanics and Rock Engineering
Following the ISRM Suggested Method on Failure Criteria, 'A failure criterion for rocks based on true triaxial testing' by Chang and Haimson (2012), we attempted to obtain experiment-based Nadai (1950) and Mogi (1971) failure criteria for four sandstones: TCDP (Oku et al. 2007), Coconino, Bentheim (Ma and Haimson 2016; Ma et al. 2017a), and Castlegate (Ingraham et al. 2013). The current work extends beyond the scope of Chang and Haimson (2012) to compare σ1 at failure (i.e., σ1,peak) from test data with predictions based on the experimentally generated Nadai and Mogi criteria. The applicability of the Nadai and Mogi criteria to porous sandstones is then evaluated and discussed, considering failure mode evolution in these rocks.
Nature Communications
A frequency mixer is a nonlinear device that combines electromagnetic waves to create waves at new frequencies. Mixers are ubiquitous components in modern radio-frequency technology and microwave signal processing. The development of versatile frequency mixers for optical frequencies remains challenging: such devices generally rely on weak nonlinear optical processes and, thus, must satisfy phase-matching conditions. Here we utilize a GaAs-based dielectric metasurface to demonstrate an optical frequency mixer that concurrently generates eleven new frequencies spanning the ultraviolet to near-infrared. The even and odd order nonlinearities of GaAs enable our observation of second-harmonic, third-harmonic, and fourth-harmonic generation, sum-frequency generation, two-photon absorption-induced photoluminescence, four-wave mixing and six-wave mixing. The simultaneous occurrence of these seven nonlinear processes is assisted by the combined effects of strong intrinsic material nonlinearities, enhanced electromagnetic fields, and relaxed phase-matching requirements. Such ultracompact optical mixers may enable a plethora of applications in biology, chemistry, sensing, communications, and quantum optics.
AIAA Journal
Compressible jet-in-crossflow interactions are difficult to simulate accurately using Reynolds-averaged Navier-Stokes (RANS) models. This could be due to simplifications inherent in RANS or the use of inappropriate RANS constants estimated by fitting to experiments of simple or canonical flows. Our previous work on Bayesian calibration of a k-ϵ model to experimental data had led to a weak hypothesis that inaccurate simulations could be due to inappropriate constants more than model-form inadequacies of RANS. In this work, Bayesian calibration of k-ϵ constants to a set of experiments that span a range of Mach numbers and jet strengths has been performed. The variation of the calibrated constants has been checked to assess the degree to which parametric estimates compensate for RANS's model-form errors. An analytical model of jet-in-crossflow interactions has also been developed, and estimates of k-ϵ constants that are free of any conflation of parametric and RANS's model-form uncertainties have been obtained. It has been found that the analytical k-ϵ constants provide mean-flow predictions that are similar to those provided by the calibrated constants. Further, both of them provide predictions that are far closer to experimental measurements than those computed using "nominal" values of these constants simply obtained from the literature. It can be concluded that the lack of predictive skill of RANS jet-in-crossflow simulations is mostly due to parametric inadequacies, and our analytical estimates may provide a simple way of obtaining predictive compressible jet-in-crossflow simulations.
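The calibration step can be illustrated with a minimal random-walk Metropolis sampler (a sketch of the generic Bayesian machinery, not the paper's surrogate-based formulation; rans_model is a hypothetical stand-in mapping k-ϵ constants to predicted observables):

    import numpy as np

    def calibrate(rans_model, y_obs, sigma, theta0, steps=10000, step=0.02):
        rng = np.random.default_rng(1)
        theta = np.asarray(theta0, dtype=float)
        logpost = lambda t: -0.5 * np.sum((y_obs - rans_model(t))**2) / sigma**2
        lp, chain = logpost(theta), []
        for _ in range(steps):
            cand = theta + step * rng.normal(size=theta.size)
            lp_c = logpost(cand)
            if np.log(rng.uniform()) < lp_c - lp:   # Metropolis accept/reject
                theta, lp = cand, lp_c
            chain.append(theta.copy())
        return np.array(chain)   # posterior samples of the constants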
Scripta Materialia
Strength and ductility are mutually exclusive in metallic materials. To break this relationship, we start with nanocrystalline zirconium with very high strength and low ductility. We then ion irradiate the specimens to introduce vacancies, which promote diffusional plasticity without reducing strength. Mechanical tests inside the transmission electron microscope reveal about a 300% increase in plastic strain after self-ion irradiation. Molecular dynamics simulation showed that a 4.3% increase in vacancies near the grain boundaries can result in about a 60% increase in plastic strain. Both experimental and computational results support our hypothesis that vacancies may enhance plasticity through higher atomic diffusivity at the grain boundaries.
Combustion and Flame
We have used several configurations of the Sandia Instrumented Thermal Ignition (SITI) experiment to develop a pressure-dependent, four-step ignition model for a plastic bonded explosive (PBX 9407) consisting of 94 wt.% RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine), and a 6 wt.% VCTFE binder (vinyl chloride/chlorotrifluoroethylene copolymer). The four steps include desorption of water, decomposition of RDX to form equilibrium products, pressure-dependent decomposition of RDX forming equilibrium products, and decomposition of the binder to form hydrogen chloride and a nonvolatile residue (NVR). We address drying, binder decomposition, and decomposition of the RDX component from the pristine state through the melt and into ignition. We used Latin Hypercube Sampling (LHS) of the parameters to determine the sensitivity of the model to variation in the parameters. We also successfully validated the model using one-dimensional time-to-explosion (ODTX and P-ODTX) data from a different laboratory. Our SITI test matrix included 1) different densities ranging from 0.7 to 1.63 g/cm3, 2) free gas volumes ranging from 1.2 to 38 cm3, and 3) boundary temperatures ranging from 170 to 190 °C. We measured internal temperatures using embedded thermocouples at various radial locations as well as pressure using tubing that was connected from the free gas volume (ullage) to a pressure gauge. We also measured gas flow from our vented experiments. A borescope was included to obtain in situ video during some SITI experiments. We observed significant changes in the explosive volume prior to ignition. Our model, in conjunction with data observations, implies that internal accumulation of decomposition gases in high density PBX 9407 (90% of the theoretical maximum density) can contribute to significant strain whether the experiment is vented or sealed.
Journal of Computational Electronics
A detailed description and analysis of the Fermi kinetics transport (FKT) equations for simulating charge transport in semiconductor devices is presented. The fully coupled nonlinear discrete FKT equations are elaborated, as well as solution methods and workflow for the simulation of RF electronic devices under large-signal conditions. The importance of full-wave electromagnetics is discussed in the context of high-speed device simulation, and the meshing requirements to integrate the full-wave solver with the transport equations are given in detail. The method includes full semiconductor band structure effects to capture the scattering details for the Boltzmann transport equation. The method is applied to high-speed gallium nitride devices. Finally, numerical convergence and stability examples provide insight into the mesh convergence behavior of the deterministic solver.
Scientific Reports
In this work, we demonstrate high-performance electrically injected GaN/InGaN core-shell nanowire-based LEDs grown using selective-area epitaxy and characterize their electro-optical properties. To assess the quality of the quantum wells, we measure the internal quantum efficiency (IQE) using conventional low temperature/room temperature integrated photoluminescence. The quantum wells show a peak IQE of 62%, which is among the highest reported values for nanostructure-based LEDs. Time-resolved photoluminescence (TRPL) is also used to study the carrier dynamics and response times of the LEDs. TRPL measurements yield carrier lifetimes in the range of 1-2 ns at high excitation powers. To examine the electrical performance of the LEDs, current density-voltage (J-V) and light-current density-voltage (L-J-V) characteristics are measured. We also estimate the peak external quantum efficiency (EQE) to be 8.3% from a single side of the chip with no packaging. The LEDs have a turn-on voltage of 2.9 V and low series resistance. Based on FDTD simulations, the LEDs exhibit a relatively directional far-field emission pattern within ±15°. This work demonstrates that it is feasible for electrically injected nanowire-based LEDs to achieve the performance levels needed for a variety of optical device applications.
Scientific Reports
When a material that contains precipitates is deformed, the precipitates and the matrix may strain plastically by different amounts causing stresses to build up at the precipitate-matrix interfaces. If premature failure is to be avoided, it is therefore essential to reduce the difference in the plastic strain between the two phases. Here, we conduct nanoscale digital image correlation to measure a new variable that quantifies this plastic strain difference and show how its value can be used to estimate the associated interfacial stresses, which are found to be approximately three times greater in an Fe-Ni2AlTi steel than in the more ductile Ni-based superalloy CMSX-4®. It is then demonstrated that decreasing these stresses significantly improves the ability of the Fe-Ni2AlTi microstructure to deform under tensile loads without loss in strength.
Journal of Computational Physics
In this work, we provide a method for enhancing stochastic Galerkin moment calculations for the linear elliptic equation with random diffusivity using an ensemble of Monte Carlo solutions. This hybrid approach combines the accuracy of low-order stochastic Galerkin and the computational efficiency of Monte Carlo methods to provide statistical moment estimates which are significantly more accurate than performing each method individually. The hybrid approach involves computing a low-order stochastic Galerkin solution, after which Monte Carlo techniques are used to estimate the residual. We show that the combined stochastic Galerkin solution and residual is superior in both time and accuracy for a one-dimensional test problem and a more computationally intensive two-dimensional linear elliptic problem, for both the mean and variance quantities.
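Viewed as a control variate, the estimator has a very small footprint; a toy scalar sketch (assuming the SG surrogate's mean is available in closed form) is:

    import numpy as np

    def hybrid_mean(u, u_sg, mean_sg, xis):
        # u, u_sg: callables mapping a random sample xi to a solution value;
        # mean_sg: exact mean of the low-order SG surrogate; xis: MC samples.
        # Monte Carlo only estimates the (small) residual u - u_sg.
        resid = np.array([u(xi) - u_sg(xi) for xi in xis])
        return mean_sg + resid.mean()

Because the residual carries far less variance than the solution itself, far fewer Monte Carlo samples are needed than for plain Monte Carlo at the same accuracy.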
Journal of Computational Physics
High resolution simulation of viscous fingering can offer an accurate and detailed prediction for subsurface engineering processes involving fingering phenomena. The fully implicit discontinuous Galerkin (DG) method has been shown to be an accurate and stable method to model viscous fingering with high Peclet number and mobility ratio. In this paper, we present two techniques to speed up large-scale simulations of this kind. The first technique relies on a simple p-adaptive scheme in which high order basis functions are employed only in elements near the finger fronts where the concentration has a sharp change. As a result, the number of degrees of freedom is significantly reduced and the simulation yields almost identical results to the more expensive simulation with uniform high order elements throughout the mesh. The second technique for speedup involves improving the solver efficiency. We present an algebraic multigrid (AMG) preconditioner which allows the DG matrix to leverage the robust AMG preconditioner designed for the continuous Galerkin (CG) finite element method. The resulting preconditioner works effectively for fixed order DG as well as p-adaptive DG problems. With the improvements provided by the p-adaptivity and AMG preconditioning, we can perform high resolution three-dimensional viscous fingering simulations required for miscible displacement with high Peclet number and mobility ratio in greater detail than before for well injection problems.
Scientific Reports
Deformation mechanisms in bcc metals, especially in dynamic regimes, show unusual complexity, which complicates their use in high-reliability applications. Here, we employ novel, high-velocity cylinder impact experiments to explore plastic anisotropy in single-crystal specimens under high-rate loading. The bcc tantalum single crystals exhibit unusually high deformation localization and strong plastic anisotropy when compared to polycrystalline samples. Several impact orientations ([100], [110], [111], and [149]) are characterized over a range of impact velocities to examine orientation-dependent mechanical behavior versus strain rate. Moreover, the anisotropy and localized plastic strain seen in the recovered cylinders exhibit strong axial symmetries that differ according to lattice orientation. Two-, three-, and four-fold symmetries are observed. We propose a simple crystallographic argument, based on the Schmid law, to understand the observed symmetries. These tests are the first to explore the role of single-crystal orientation in Taylor impact tests, and they clearly demonstrate the importance of crystallography in high strain rate and temperature deformation regimes. These results provide critical data to enable dramatically improved high-rate crystal plasticity models and will spur renewed interest in the role of crystallography in deformation under dynamic regimes.
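The Schmid-law argument can be made concrete with a short calculation. The following sketch, assuming only the standard twelve {110}<111> bcc slip systems (the paper may consider additional slip families), computes Schmid factors m = cos(φ)cos(λ) for a given loading axis; the multiplicity of equally stressed systems for axes such as [100] and [110] is the kind of degeneracy that produces the observed symmetries.

    import numpy as np

    def schmid_factors(load_dir):
        # Schmid factors of the twelve bcc {110}<111> slip systems for a
        # given loading axis, sorted from most to least stressed.
        load = np.asarray(load_dir, float)
        load /= np.linalg.norm(load)
        planes = [(1,1,0), (1,0,1), (0,1,1), (1,-1,0), (1,0,-1), (0,1,-1)]
        dirs = [(1,1,1), (1,1,-1), (1,-1,1), (-1,1,1)]
        out = []
        for n in planes:
            n = np.asarray(n, float); n /= np.linalg.norm(n)
            for b in dirs:
                b = np.asarray(b, float); b /= np.linalg.norm(b)
                if abs(np.dot(n, b)) < 1e-9:  # slip direction lies in plane
                    out.append(abs(np.dot(load, n)) * abs(np.dot(load, b)))
        return sorted(out, reverse=True)

    for axis in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
        print(axis, np.round(schmid_factors(axis)[:4], 3))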
Bulletin of the Seismological Society of America
This article describes a new method of seismic signal detection that improves upon the conventional waveform correlation method. Recent studies suggested that a significant limiting factor in the application of waveform correlation to regional and global scale monitoring is the false alarm rate. The false alarms do not originate from detections on noise but rather from seismic arrivals with unrelated source locations. This article presents results from an approach to waveform correlation that exploits techniques from signal processing and machine learning to improve the accuracy of detecting seismic arrivals. We modify the detection model for waveform correlation such that transient signals from noncollocated seismicity are considered when designing the detectors. The new approach uses waveform templates from known catalog events to train a supervised machine learning algorithm that derives a new set of detectors to represent the unique characteristics of the template waveforms; these new detectors maximize the likelihood of detecting only the desired events, thereby minimizing false alarms. We train a waveform correlation template library for a single three-component seismic monitoring station. We then review results from applying the new detectors, known as alternate null hypothesis correlation (ANCorr) templates, to a test set of seismic waveforms. We compare ANCorr results with those from application of the conventional waveform correlation matched filter technique.
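For reference, the conventional matched-filter baseline that ANCorr is compared against is a sliding normalized cross-correlation of a template against continuous data. Below is a minimal sketch of that baseline (not the ANCorr training procedure itself); the function name ncc_detector and the synthetic toy data are illustrative only.

    import numpy as np

    def ncc_detector(trace, template):
        # Sliding normalized cross-correlation of a template against a
        # continuous trace; returns Pearson correlation in [-1, 1] per lag.
        m = len(template)
        t = (template - template.mean()) / (template.std() * m)
        out = np.empty(len(trace) - m + 1)
        for i in range(len(out)):
            win = trace[i:i + m]
            s = win.std()
            out[i] = 0.0 if s == 0 else np.dot(t, (win - win.mean()) / s)
        return out

    rng = np.random.default_rng(1)
    tmpl = np.sin(2 * np.pi * np.linspace(0, 3, 120)) * np.hanning(120)
    data = 0.3 * rng.standard_normal(2000)
    data[700:820] += tmpl                 # bury one copy of the event
    cc = ncc_detector(data, tmpl)
    print(int(np.argmax(cc)), round(cc.max(), 2))  # peak at/near lag 700

A detection is declared where the correlation trace exceeds a threshold; the false alarms discussed above arise when unrelated arrivals still correlate strongly enough to cross it.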
Communications Biology
Dermal interstitial fluid (ISF) is an underutilized, information-rich biofluid potentially useful in health status monitoring applications, whose contents remain challenging to characterize. Here, we present a facile microneedle approach for dermal ISF extraction with minimal pain and no blistering for human subjects and rats. Extracted ISF volumes were sufficient for determining transcriptome and proteome signatures. We noted similar profiles in ISF, serum, and plasma samples, suggesting that ISF can be a proxy for direct blood sampling. Dynamic changes in RNA-seq profiles were recorded in ISF under induced hypoxia conditions. Finally, we report the first isolation and characterization, to our knowledge, of exosomes from dermal ISF. The ISF exosome concentration is 12-13 times more enriched than in plasma and serum, and ISF represents a previously unexplored biofluid for exosome isolation. This minimally invasive extraction approach can enable mechanistic studies of ISF and demonstrates the potential of ISF for real-time health monitoring applications.
Nature Communications
The silicon metal-oxide-semiconductor (MOS) material system is a technologically important implementation of spin-based quantum information processing. However, the MOS interface is imperfect, leading to concerns about 1/f trap noise and variability in the electron g-factor due to spin-orbit (SO) effects. Here we advantageously use interface-SO coupling for a critical control axis in a double-quantum-dot singlet-triplet qubit. The magnetic field orientation dependence of the g-factors is consistent with Rashba and Dresselhaus interface-SO contributions. The resulting all-electrical, two-axis control is also used to probe the MOS interface noise. The measured inhomogeneous dephasing time, T2*, of 1.6 μs is consistent with 99.95% 28Si enrichment. Furthermore, when tuned to be sensitive to exchange fluctuations, a quasi-static charge noise detuning variance of 2 μeV is observed, competitive with low-noise reports in other semiconductor qubits. This work, therefore, demonstrates that the MOS interface inherently provides the properties needed for two-axis qubit control, while not increasing noise relative to other material choices.
Scientific Reports
Li+ transport within a solid electrolyte interphase (SEI) in lithium-ion batteries has challenged molecular dynamics (MD) studies due to limited compositional control of that layer. In recent years, experiments and ab initio simulations have identified dilithium ethylene dicarbonate (Li2EDC) as the dominant component of SEI layers. Here, we adopt a parameterized, non-polarizable MD force field for Li2EDC to study transport characteristics of Li+ in this model SEI layer at moderate temperatures over long times. The observed correlations are consistent with recent MD results using a polarizable force field, suggesting that this non-polarizable model is effective for our purposes of investigating Li+ dynamics. Mean-squared displacements distinguish three distinct Li+ transport regimes in EDC: ballistic, trapping, and diffusive. Compared to liquid ethylene carbonate (EC), the nanosecond trapping times in EDC are significantly longer and naturally decrease at higher temperatures. New materials developed for fast-charging Li-ion batteries should have a smaller trapping region. The analyses implemented in this paper can be used for testing transport of Li+ ions in novel battery materials. Non-Gaussian features of van Hove self-correlation functions for Li+ in EDC, along with the mean-squared displacements, are consistent in describing EDC as a glassy material compared with liquid EC. Vibrational modes of the Li+ ion, identified by MD, characterize the trapping and are further validated by electronic structure calculations. Some of this work appeared in an extended abstract and has been reproduced with permission from ECS Transactions, 77, 1155-1162 (2017). Copyright 2017, Electrochemical Society, Inc.
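The regime classification rests on the mean-squared displacement (MSD). A minimal sketch of that analysis follows, assuming an (n_frames, n_ions, 3) unwrapped trajectory array; the direct-summation msd helper and the random-walk toy data are illustrative, and production analyses typically use FFT-based estimators over multiple time origins. On a log-log plot, slope 2 marks the ballistic regime, a plateau marks trapping, and slope 1 marks diffusion.

    import numpy as np

    def msd(positions):
        # Mean-squared displacement from an (n_frames, n_ions, 3)
        # unwrapped trajectory, averaged over ions and time origins.
        n = positions.shape[0]
        out = np.zeros(n - 1)
        for lag in range(1, n):
            disp = positions[lag:] - positions[:-lag]
            out[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
        return out

    # Toy trajectory: a random walk standing in for Li+ in the SEI model.
    rng = np.random.default_rng(2)
    traj = np.cumsum(0.1 * rng.standard_normal((500, 32, 3)), axis=0)
    m = msd(traj)
    # Local log-log slope vs. lag distinguishes the transport regimes.
    slope = np.gradient(np.log(m), np.log(np.arange(1, 500)))
    print(np.round(slope[:5], 2), np.round(slope[-5:], 2))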
Nature Communications
Metallic nanoparticles, such as gold and silver nanoparticles, can self-assemble into highly ordered arrays known as supercrystals for potential applications in areas such as optics, electronics, and sensor platforms. Here we report the formation of self-assembled 3D faceted gold nanoparticle supercrystals with controlled nanoparticle packing and unique facet-dependent optical properties by using a binary solvent diffusion method. The nanoparticle packing structures from specific facets of the supercrystals are characterized by small/wide-angle X-ray scattering for detailed reconstruction of nanoparticle translation and shape orientation from mesometric to atomic levels within the supercrystals. We discover that the binary diffusion results in hexagonal close-packed supercrystals whose size and quality are determined by the initial nanoparticle concentration and diffusion speed. The supercrystal solids display unique facet-dependent surface plasmonic and surface-enhanced Raman characteristics. The ease of growing large supercrystal solids facilitates essential correlation between the structure and properties of nanoparticle solids for practical integration.
Scientific Reports
Venezuelan equine encephalitis virus (VEEV) poses a major public health risk due to its amenability for use as a bioterrorism agent and its severe health consequences in humans. ML336 is a recently developed chemical inhibitor of VEEV, shown to effectively reduce VEEV infection in vitro and in vivo. However, its limited solubility and stability could hinder its clinical translation. To overcome these limitations, lipid-coated mesoporous silica nanoparticles (LC-MSNs) were employed. The large surface area of the MSN core promotes hydrophobic drug loading while the liposome coating retains the drug and enables enhanced circulation time and biocompatibility, providing an ideal ML336 delivery platform. LC-MSNs loaded 20 ± 3.4 μg ML336/mg LC-MSN and released 6.6 ± 1.3 μg/mg ML336 over 24 hours. ML336-loaded LC-MSNs significantly inhibited VEEV in vitro in a dose-dependent manner as compared to unloaded LC-MSNs controls. Moreover, cell-based studies suggested that additional release of ML336 occurs after endocytosis. In vivo safety studies were conducted in mice, and LC-MSNs were not toxic when dosed at 0.11 g LC-MSNs/kg/day for four days. ML336-loaded LC-MSNs showed significant reduction of brain viral titer in VEEV infected mice compared to PBS controls. Overall, these results highlight the utility of LC-MSNs as drug delivery vehicles to treat VEEV.
Nature Communications
The limited flux and selectivities of current carbon dioxide membranes and the high costs associated with conventional absorption-based CO2 sequestration call for alternative CO2 separation approaches. Here we describe an enzymatically active, ultra-thin, biomimetic membrane enabling CO2 capture and separation under ambient pressure and temperature conditions. The membrane comprises a ~18-nm-thick close-packed array of 8 nm diameter hydrophilic pores that stabilize water by capillary condensation and precisely accommodate the metalloenzyme carbonic anhydrase (CA). CA catalyzes the rapid interconversion of CO2 and water into carbonic acid. By minimizing diffusional constraints, stabilizing and concentrating CA within the nanopore array to a concentration 10× greater than achievable in solution, our enzymatic liquid membrane separates CO2 at room temperature and atmospheric pressure at a rate of 2600 GPU with CO2/N2 and CO2/H2 selectivities as high as 788 and 1500, respectively, the highest combined flux and selectivity yet reported for ambient condition operation.
AIP Advances
The size dependence of the dielectric constants of barium titanate or other ferroelectric particles can be explored by embedding particles into an epoxy matrix whose dielectric constant can be measured directly. However, to extract the particle dielectric constant requires a model of the composite medium. We compare a finite element model for various volume fractions and particle arrangements to several effective medium approximations, which do not consider particle arrangement explicitly. For a fixed number of particles, the composite dielectric constant increases with the degree of agglomeration, and we relate this increase to the number of regions of enhanced electric field along the applied field between particles in an agglomerate. Additionally, even for dispersed particles, we find that the composite method of assessing the particle dielectric constant may not be effective if the particle dielectric constant is too high compared to the background medium dielectric constant.
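As a point of reference for the arrangement-blind models, one widely used effective medium approximation can be written in a few lines. The sketch below implements the Maxwell-Garnett formula for spherical inclusions; it is offered as a representative EMA, not necessarily one of the specific approximations compared in the paper. The printout illustrates the saturation effect noted above: once the particle permittivity far exceeds the matrix value, the composite value becomes insensitive to further increases.

    def maxwell_garnett(eps_p, eps_m, f):
        # Maxwell-Garnett effective permittivity for spherical particles
        # (eps_p) at volume fraction f in a matrix (eps_m); particle
        # arrangement is not considered, unlike the finite element model.
        num = eps_p + 2 * eps_m + 2 * f * (eps_p - eps_m)
        den = eps_p + 2 * eps_m - f * (eps_p - eps_m)
        return eps_m * num / den

    # Saturation: at f = 0.3 in an eps_m = 4 matrix, raising eps_p from
    # 100 to 10000 barely moves the composite value (~8.4 -> ~9.1).
    for eps_p in (100, 1000, 10000):
        print(eps_p, round(maxwell_garnett(eps_p, 4.0, 0.3), 2))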
Advances in Water Resources
Numerous methods are used to measure contact angles (θ) in multiphase systems. Wettability and θ are primary controls on CO2 residual trapping during Geologic Carbon Storage (GCS), and determining these values within rock pores is paramount to increasing storage efficiency. One traditional experimental approach is the sessile drop method, which involves θ measurements on a single image of droplets. More recent developments utilize X-ray micro-computed tomography (micro-CT) scans, which provide the resolutions necessary to image in situ θ of fluids at representative conditions; however, experimental micro-CT data are limited and varied. To further examine θ distributions in supercritical-CO2-brine-sandstone systems, a combination of manual and automated methods was used to measure θ in both sessile drop and micro-CT images of two sandstone cores. The purpose of this work was threefold: (1) compare two current and two new θ measuring methods using micro-CT images of scCO2-brine-sandstone systems; (2) determine how θ results from the traditional experimental method (sessile drop) compare to in situ θ results (micro-CT); and (3) determine whether the Matlab Contact Angle Algorithm (MCAA) from Klise et al. (2016) can be used to measure θ in scCO2-brine-sandstone systems. One of the two new methods, which uses open-source software, yielded average θ and θ ranges comparable to the primary manual method reported in the literature (Andrew et al., 2014b), which requires commercial software. The other new method involves immersive interaction with micro-CT image volumes, a capability no other software currently provides. Both processes are promising for future work. θ measured using micro-CT images at in situ conditions shows a broader distribution than θ measured using sessile drop images. These findings suggest that some pores are intermediate-wet in an in situ sandstone system and that factors other than interfacial tension influence trapping. Lastly, the MCAA consistently produced broader θ distributions and higher average θ than the manual measurements, a result of some automated measurements incorrectly identifying directional quantities and thereby skewing results. The MCAA remains promising for future work with careful attention to data interpretation.
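To make the sessile-drop geometry concrete, here is a minimal sketch of one way to extract θ from droplet edge pixels; the helper name contact_angle_from_profile, the algebraic (Kasa) circle fit, and the synthetic hemisphere test are all illustrative choices, not any of the manual or automated methods evaluated in the paper.

    import numpy as np

    def contact_angle_from_profile(x, y):
        # Algebraic (Kasa) least-squares circle fit to droplet edge pixels
        # (x, y), with the solid surface at y = 0, then the tangent angle
        # at the contact line. The fitted center height cy sets the angle:
        # cy near r -> ~0 deg (thin cap), cy = 0 -> 90 deg (hemisphere),
        # cy < 0 -> non-wetting (> 90 deg).
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        b = x**2 + y**2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx**2 + cy**2)
        return np.degrees(np.arccos(np.clip(cy / r, -1.0, 1.0)))

    # Synthetic test: a hemispherical cap of radius 10 -> ~90 degrees.
    t = np.linspace(0.05, np.pi - 0.05, 200)
    print(contact_angle_from_profile(10 * np.cos(t), 10 * np.sin(t)))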
Journal of Computational Physics
Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically, we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that, for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We use a number of synthetic functions to build insight into the behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally, we conclude by building a surrogate of a physical model of a propulsion plant on a naval vessel.
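To give a flavor of gradient-based low-rank regression, the sketch below fits the simplest instance of a low-multilinear-rank model: a rank-1 separable surrogate in two variables with hand-derived gradients. The feature map phi, the fit_rank1 routine, and the plain gradient descent loop are illustrative simplifications; they are not the functional tensor-train parameterization or the optimizers analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    def phi(z, deg=3):
        # Simple polynomial features per coordinate (hypothetical choice).
        return np.vander(z, deg + 1, increasing=True)

    def fit_rank1(X, y, steps=5000, lr=0.05):
        # Rank-1 separable surrogate f(x) = (phi(x1) @ a) * (phi(x2) @ b),
        # trained by plain gradient descent on the mean-squared loss.
        a = 0.1 * rng.standard_normal(4)
        b = 0.1 * rng.standard_normal(4)
        P1, P2 = phi(X[:, 0]), phi(X[:, 1])
        for _ in range(steps):
            u, v = P1 @ a, P2 @ b
            r = u * v - y                                # residual
            a -= lr * (2.0 / len(y)) * (P1.T @ (r * v))  # dL/da
            b -= lr * (2.0 / len(y)) * (P2.T @ (r * u))  # dL/db
        return a, b

    X = rng.uniform(-1, 1, (400, 2))
    y = (1 + X[:, 0]) * (2 - X[:, 1] ** 2)     # itself rank-1 separable
    a, b = fit_rank1(X, y)
    pred = (phi(X[:, 0]) @ a) * (phi(X[:, 1]) @ b)
    print(np.sqrt(np.mean((pred - y) ** 2)))   # RMSE should be small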
Acta Materialia
One of the most confounding controversies in the ductile fracture community is the large discrepancy between predicted and experimentally observed strain-to-failure values during shear-dominant loading. Currently proposed solutions focus on better accounting for how the deviatoric stress state influences void growth or on measuring strain at the microscale rather than the macroscale. While these approaches are useful, they do not address a significant aspect of the problem: the only rupture micromechanisms that are generally considered are void nucleation, growth, and coalescence (for tensile-dominated loading), and shear-localization and void coalescence (for shear-dominated loading). Current phenomenological models have thus focused on predicting the competition between these mechanisms based on the stress state and the strain-hardening capacity of the material. However, in the present study, we demonstrate that there are at least five other failure mechanisms. Because these have long been ignored, little is known about how all seven mechanisms interact with one another or the factors that control their competition. These questions are addressed by characterizing the fracture process in three high-purity face-centered cubic (FCC) metals of medium-to-high stacking fault energy: copper, nickel, and aluminum. These data demonstrate that, for a given stress state and material, several mechanisms frequently work together in a sequential manner to cause fracture. The selection of a failure mechanism is significantly affected by the plasticity-induced microstructural evolution that occurs before tearing begins, which can create or eliminate sites for void nucleation. At the macroscale, failure mechanisms that do not involve cracking or pore growth were observed to facilitate subsequent void growth and coalescence processes. While the focus of this study is on damage accumulation in pure metals, these results are also applicable to understanding failure in engineering alloys.
Structural and Multidisciplinary Optimization
We present a Matlab implementation of topology optimization for compliance minimization on unstructured polygonal finite element meshes that efficiently accommodates many materials and many volume constraints. Leveraging the modular structure of the educational code PolyTop, we extend it to a multi-material version, PolyMat, with only a few modifications. First, a design variable for each candidate material is defined in each finite element. Next, we couple a Discrete Material Optimization interpolation with the existing penalization and introduce a new parameter such that we can employ continuation and smoothly transition from a convex problem without any penalization to a non-convex problem in which material mixing and intermediate densities are penalized. Mixing that remains due to the density filter operation is eliminated via continuation on the filter radius. To accommodate flexibility in the volume constraint definition, the constraint function is modified to compute multiple volume constraints, and the design variable update is modified in accordance with the Zhang-Paulino-Ramos Jr. (ZPR) update scheme, which updates the design variables associated with each constraint independently. The formulation allows volume constraints to control any subset of the design variables, i.e., they can be defined globally or locally for any subset of the candidate materials. Borrowing ideas for mesh generation on complex domains from PolyMesher, we determine which design variables are associated with each local constraint of arbitrary geometry. A number of examples demonstrate the many-material capability, the flexibility of the volume constraint definition, the ease with which passive regions are accommodated, and how local constraints may be used to break symmetries or achieve graded geometries.
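The interpolation at the heart of the multi-material extension is compact enough to sketch. Below is a Python transcription of a Discrete Material Optimization-style weighting (PolyMat itself is Matlab, and its exact interpolation and penalization parameters may differ): each element carries one design variable per candidate material, and material i's weight suppresses stiffness when other materials are simultaneously present.

    import numpy as np

    def dmo_stiffness(rho, E, p=3.0):
        # DMO-style interpolation: w_i = rho_i^p * prod_{j != i}(1 - rho_j^p).
        # rho : (n_elem, n_mat) design variables in [0, 1]
        # E   : (n_mat,) candidate material stiffnesses
        # p   : penalization exponent; p = 1 gives the convex, unpenalized
        #       starting point of the continuation described above.
        rp = rho ** p
        w = np.empty_like(rp)
        for i in range(rho.shape[1]):
            others = np.delete(1.0 - rp, i, axis=1)
            w[:, i] = rp[:, i] * np.prod(others, axis=1)
        return w @ E

    # Pure material 1, pure material 2, and a 50/50 mix: the mixed element
    # receives less stiffness than either pure choice, penalizing mixing.
    rho = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
    print(dmo_stiffness(rho, np.array([1.0, 3.0])))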
Computer
The U.S. National Quantum Initiative places quantum computer scaling in the same category as Moore's law. While the technical basis of semiconductor scale up is well known, the equivalent principle for quantum computers is still being developed. Let's explore these new ideas.
Nature Communications
Organic acids play a key role in the troposphere, contributing to atmospheric aqueous-phase chemistry, aerosol formation, and precipitation acidity. Atmospheric models currently account for less than half the observed, globally averaged formic acid loading. Here we report that acetaldehyde photo-tautomerizes to vinyl alcohol under atmospherically relevant pressures of nitrogen, in the actinic wavelength range λ = 300-330 nm, with measured quantum yields of 2-25%. Recent theoretical kinetics studies show that hydroxyl-initiated oxidation of vinyl alcohol produces formic acid. Adding these pathways to an atmospheric chemistry box model (Master Chemical Mechanism) increases formic acid concentrations by a factor of ∼1.7 in the polluted troposphere and a factor of ∼3 under pristine conditions. Incorporating this mechanism into the GEOS-Chem 3D global chemical transport model reveals an estimated 7% contribution to worldwide formic acid production, with up to 60% of the total modeled formic acid production over oceans arising from photo-tautomerization.
Optical Engineering
Time-resolved visualization of fast processes using high-speed digital video cameras has been widely used in most fields of scientific research for over a decade. In many applications, high-speed imaging is used not only to record the time history of a phenomenon but also to quantify it, hence requiring dependable equipment. Important aspects of two-dimensional imaging instrumentation used to qualitatively or quantitatively assess fast-moving scenes include sensitivity, linearity, and signal-to-noise ratio (SNR). Under certain circumstances, the weaknesses of commercially available high-speed cameras (e.g., in sensitivity, linearity, and image lag) render the experiment complicated and uncertain. Our study evaluated two advanced CMOS-based, continuous-recording, high-speed cameras available at the time of writing. Various parameters potentially important for accurate time-resolved measurements and photonic quantification were measured under controlled conditions on the bench using scientific instrumentation. Testing procedures to measure sensitivity, linearity, SNR, shutter accuracy, and image lag are proposed and detailed. The results of the tests comparing the two high-speed cameras under study are also presented and discussed. Results show that, with careful implementation and understanding of their performance and limitations, these high-speed cameras are reasonable alternatives to scientific CCD cameras while also delivering time-resolved imaging data.
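One common bench procedure behind this kind of sensitivity/linearity/SNR characterization is a mean-variance (photon transfer) analysis of flat-field frame pairs. The sketch below, with the hypothetical helper photon_transfer and synthetic Poisson frames, illustrates the computation; the paper's exact test protocols may differ.

    import numpy as np

    def photon_transfer(frames_by_exposure):
        # Mean signal, temporal noise variance, and SNR from pairs of
        # flat-field frames taken at increasing exposure levels.
        means, variances = [], []
        for a, b in frames_by_exposure:
            means.append(0.5 * (a.mean() + b.mean()))
            # Differencing two frames removes fixed-pattern noise; the
            # factor 2 accounts for the variance of the difference.
            variances.append(np.var(a.astype(float) - b.astype(float)) / 2.0)
        means, variances = np.asarray(means), np.asarray(variances)
        return means, variances, means / np.sqrt(variances)

    rng = np.random.default_rng(4)
    pairs = [(rng.poisson(mu, (64, 64)), rng.poisson(mu, (64, 64)))
             for mu in (50, 200, 800, 3200)]
    mu, var, snr = photon_transfer(pairs)
    print(np.round(snr, 1))  # shot-noise limit: SNR grows as sqrt(signal)

A linear mean-variance curve across exposure levels indicates shot-noise-limited, linear response; departures flag the nonlinearity or excess noise the tests are designed to expose.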
Nature Communications
A surface-emitting distributed feedback (DFB) laser with second-order gratings typically excites an antisymmetric mode that has low radiative efficiency and a double-lobed far-field beam. The radiative efficiency could be increased by using curved and chirped gratings for infrared diode lasers, plasmon-assisted mode selection for mid-infrared quantum cascade lasers (QCLs), and graded photonic structures for terahertz QCLs. Here, we demonstrate a new hybrid grating scheme that uses a superposition of second- and fourth-order Bragg gratings that excite a symmetric mode with much greater radiative efficiency. The scheme is implemented for terahertz QCLs with metallic waveguides. Peak power output of 170 mW with a slope efficiency of 993 mW/A is detected with robust single-mode single-lobed emission for a 3.4 THz QCL operating at 62 K. The hybrid grating scheme is arguably simpler to implement than the aforementioned DFB schemes and could be used to increase power output for surface-emitting DFB lasers at any wavelength.
MRS Energy and Sustainability
We present and analyze three powerful long-term historical trends in energy, particularly electrical energy, as well as the opportunities and challenges associated with these trends. The first trend is from a world containing a diversity of energy currencies to one whose predominant currency is electricity, driven by electricity's transportability, exchangeability, and steadily decreasing cost. The second trend is from electricity generated from a diversity of sources to electricity generated predominantly by free-fuel sources, driven by their steadily decreasing cost and long-term abundance. These trends necessitate a just-emerging third trend: from a grid in which electricity is transported unidirectionally, traded at near-static prices, and consumed under direct human control, to a grid in which electricity is transported bidirectionally, traded at dynamic prices, and consumed under the control of human-tailored artificial agents. Together, these trends point toward a future in which energy is not costly, scarce, or inefficiently deployed but instead affordable, abundant, and efficiently deployed, with major economic, geopolitical, and environmental benefits to humanity.
Scientific Reports
Nanoparticles have shown great promise in improving cancer treatment efficacy while reducing toxicity and treatment side effects. Predicting the treatment outcome for nanoparticle systems by measuring nanoparticle biodistribution has been challenging due to the commonly unmatched, heterogeneous distribution of nanoparticles relative to free drug distribution. Here we present a proof-of-concept study that uses mathematical modeling together with experimentation to address this challenge. Individual mice with 4T1 breast cancer were treated with either nanoparticle-delivered or free doxorubicin, with results demonstrating improved cancer-killing efficacy of doxorubicin-loaded nanoparticles in comparison to free doxorubicin. We then developed a mathematical theory to render model predictions from measured nanoparticle biodistribution, as determined using graphite furnace atomic absorption. Model analysis finds that treatment efficacy increases exponentially with increased nanoparticle accumulation within the tumor, emphasizing the significance of developing new ways to optimize the delivery efficiency of nanoparticles to the tumor microenvironment.
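The reported exponential relation suggests a simple fitting exercise. The sketch below is purely illustrative: the functional form E = a·exp(k·A) is only one plausible reading of "increases exponentially," and the accumulation/efficacy numbers are hypothetical placeholders, not the paper's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def efficacy(accumulation, a, k):
        # Hypothesized exponential efficacy-accumulation relation.
        return a * np.exp(k * accumulation)

    # Hypothetical illustrative values only (arbitrary units).
    acc = np.array([0.5, 1.0, 2.0, 3.5, 5.0])       # tumor accumulation
    eff = np.array([0.08, 0.12, 0.22, 0.48, 0.95])  # relative efficacy
    (a, k), _ = curve_fit(efficacy, acc, eff, p0=(0.05, 0.5))
    print(a, k)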
Technological Forecasting and Social Change
Meeting technology-based policy goals without sufficient lead time may present several technology, regulatory, and market-based challenges due to the speed of technological adoption in existing and emerging markets. Installing incremental amounts of technologies, e.g., cleaner fossil, renewable, or transformative energy technologies, throughout the coming decades may prove a more attainable goal than a radical, immediate change in the year before a policy goal takes effect. This notion of steady installation growth, rather than acute installations of technology to meet policy goals, is the core topic of this research. We operationalize the notion by developing the theoretical underpinnings of regulatory and market acceptance delays, building upon the common Technology Readiness Level (TRL) framework, and offer two new additions to the research community: the Regulatory Readiness Level (RRL) and Market Readiness Level (MRL) frameworks. These components, collectively called the Technology, Regulatory, and Market (TRM) readiness level framework, allow one to build new constraints into existing Integrated Assessment Models (IAMs). A system dynamics model was developed to illustrate the TRM framework. The framework helps identify the factors, and specifically the rate at which technology development must be supported, necessary to meet desired technical and policy goals in the coming decades.
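To illustrate what building readiness constraints into a model can look like, here is a toy stock-and-flow sketch in the spirit of a system dynamics model: logistic technology adoption gated by first-order regulatory and market acceptance delays. Every name and parameter (adoption_path, delay_reg, delay_mkt, the gating rule) is a hypothetical construction, not the paper's TRM model.

    import numpy as np

    def adoption_path(years, r_tech, delay_reg, delay_mkt, cap=1.0, dt=0.1):
        # Stocks: technology deployment, regulatory acceptance, and market
        # acceptance. The two acceptances lag deployment as first-order
        # delays and, in turn, gate further deployment growth.
        n = int(years / dt)
        tech = np.zeros(n); reg = np.zeros(n); mkt = np.zeros(n)
        tech[0] = 0.01
        for t in range(1, n):
            reg[t] = reg[t-1] + dt * (tech[t-1] - reg[t-1]) / delay_reg
            mkt[t] = mkt[t-1] + dt * (reg[t-1] - mkt[t-1]) / delay_mkt
            gate = min(reg[t], mkt[t], 1.0)
            tech[t] = tech[t-1] + dt * r_tech * tech[t-1] \
                      * (1.0 - tech[t-1] / cap) * (0.1 + 0.9 * gate)
        return tech, reg, mkt

    tech, reg, mkt = adoption_path(30, r_tech=0.6,
                                   delay_reg=4.0, delay_mkt=6.0)
    print(round(tech[-1], 2))  # fraction of policy target met by year 30

Lengthening either delay visibly slows the adoption path, which is the qualitative point of the framework: readiness lags, not just technology growth rates, determine whether a dated policy goal is attainable.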
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report describes progress on the validation of the MELCOR Sodium Chemistry (NAC) package. The primary focus of this report is to verify that the CONTAIN-LMR sodium models have been correctly implemented in MELCOR; the verification test is therefore a code-to-code comparison between MELCOR and CONTAIN-LMR. Last year we reported the development of the NAC package, which included three sodium models: spray fire, pool fire, and atmospheric chemistry. The first two models were completed previously; this year the spray fire model gained an upward-spray capability and the pool fire model gained additional functional capabilities for better modeling of the pool fire experiment. The atmospheric chemistry implementation has progressed to the point of testing in the presence of water vapor (modeled as an ideal gas) as part of the two-condensable-option model in CONTAIN-LMR. The user's guide and reference manual for the NAC package, including these improvements, are described in a separate document being published as part of the MELCOR 2.2 release. In this report, we discuss the experimental validation using the implemented spray fire and pool fire models, and a code-to-code comparison with CONTAIN-LMR is described for a spray fire experiment. Note that the atmospheric chemistry model has not been fully implemented because the two-condensable option is absent; only the chemical reactions between sodium aerosol and water vapor can be modeled.

ACKNOWLEDGEMENTS: This work was overseen and managed by Matthew R. Denman (Sandia National Laboratories). We also thank Chris Faucett for developing experimental data and providing the initial input decks as part of the MELCOR assessment report for the U.S. Nuclear Regulatory Commission. This work is supported by the Office of Nuclear Energy of the U.S. Department of Energy under work package numbers AT-17SN170204 and NT-185N05030102.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.