The morphology of the stagnated plasma resulting from Magnetized Liner Inertial Fusion (MagLIF) is measured by imaging the self-emission x-rays coming from the multi-keV plasma, and the evolution of the imploding liner is measured by radiographs. Equivalent diagnostic responses can be derived from integrated rad-MHD simulations from codes such as Hydra and Gorgon. There have been only limited quantitative ways to compare the image morphology, that is, the texture, of simulations and experiments. We have developed a metric of image morphology based on the Mallat Scattering Transformation (MST), a transformation that has proved effective at distinguishing textures, sounds, and written characters. This metric has demonstrated excellent performance in classifying ensembles of synthetic stagnation images. We use this metric to quantitatively compare simulations to experimental images, to compare experimental images to one another, and to estimate the parameters of the images, with uncertainty, via a linear regression of the synthetic images onto the parameters used to generate them. This coordinate space has also proved very adept at performing a sophisticated relative background subtraction in the MST space, which was needed to compare the experimental self-emission images to the rad-MHD simulation images. We have also developed theory that connects the transformation to the causal dynamics of physical systems. This has been done from the classical kinetic perspective and from the field-theory perspective, where the MST is the generalized Green's function, or S-matrix, of the field theory in the scale basis. From both perspectives, the first-order MST is the current state of the system, and the second-order MST gives the transition rates from one state to another. An efficient, GPU-accelerated Python implementation of the MST was developed. Future applications are discussed.
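As a concrete illustration of the workflow described above (scattering coefficients as image features, followed by a linear regression onto the generation parameters), the following sketch uses the public kymatio package and scikit-learn; it is an assumed stand-in, not the authors' GPU-accelerated implementation, and all arrays are placeholders.

```python
import numpy as np
from kymatio.numpy import Scattering2D          # public MST implementation (assumed stand-in)
from sklearn.linear_model import LinearRegression

# Placeholder ensemble of synthetic stagnation images and the parameters that generated them.
images = np.random.rand(200, 64, 64).astype(np.float32)
params = np.random.rand(200, 3)

# First- and second-order scattering coefficients for each image.
scattering = Scattering2D(J=3, shape=(64, 64), L=8)
features = np.array([scattering(img).ravel() for img in images])

# Linear regression from MST coefficients to generation parameters, as in the abstract.
model = LinearRegression().fit(features, params)
estimated = model.predict(features[:1])         # parameter estimate for one image
```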
Recent years have seen an explosion in research efforts discovering and understanding novel electronic and optical properties of topological quantum materials (TQMs). In this LDRD, a synergistic effort of materials growth, characterization, and electrical-magneto-optical measurements, combined with density functional theory and modeling, has been established to address the unique properties of TQMs. In particular, we have carried out extensive studies in search of Majorana fermions (MFs) in TQMs for topological quantum computation. Moreover, we have focused on three important science questions: 1) How can we controllably tune the properties of TQMs to make them suitable for quantum information applications? 2) What materials parameters are most important for successfully observing MFs in TQMs? 3) Can the physical properties of TQMs be tailored by topological band engineering? Results obtained in this LDRD not only deepen our current knowledge of fundamental quantum physics but also hold great promise for advanced electronic/photonic applications in information technologies.
This paper presents a nonlinear geometric buoy design for Wave Energy Converters (WECs). A nonlinear dynamic model is presented for an hourglass (HG) configured WEC. The HG buoy operates in heave motion, i.e., as a single Degree-of-Freedom (DOF) device. The unique formulation of the interaction between the buoy and the waves produces a nonlinear stiffening effect that provides the actual energy storage, or reactive power, during operation. A Complex Conjugate Control (C3) with a practical Proportional-Derivative (PD) controller is employed to optimize power absorption for off-resonance conditions and is applied to a linear right circular cylinder (RCC) WEC. For a single frequency, the PDC3 RCC buoy is compared with the HG buoy design. A Bretschneider spectrum of wave excitation input conditions is reviewed and evaluated for the HG buoy. Numerical simulations demonstrate power and energy capture for the HG geometric buoy design, which incorporates and capitalizes on the nonlinear geometry to provide reactive power for the single-DOF WEC. By exploiting the nonlinear physics in the HG design, simplified operational performance is observed compared to an optimized linear cylindrical WEC. The HG steepness angle α with respect to the wave is varied and initially optimized for improved energy capture.
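To make the heave dynamics concrete, here is a minimal numerical sketch of a single-DOF buoy with a geometry-induced nonlinear restoring force and a PD power-take-off force. The cubic restoring-force form and all parameter values are assumptions chosen for illustration; this is not the authors' formulation of the HG buoy.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, g = 1025.0, 9.81          # seawater density [kg/m^3], gravity [m/s^2]
alpha = np.deg2rad(30.0)       # hourglass steepness angle (assumed)
m = 5.0e4                      # mass plus added mass [kg] (assumed)
b = 1.0e4                      # radiation/viscous damping [N s/m] (assumed)
kp, kd = 2.0e5, 5.0e4          # PD controller gains (assumed)
A, omega = 1.0, 0.6            # wave excitation amplitude [m] and frequency [rad/s] (assumed)

def restoring(z):
    # Buoyancy of a cone submerged to depth |z|: magnitude scales with tan^2(alpha) * |z|^3.
    return rho * g * (np.pi / 3.0) * np.tan(alpha) ** 2 * np.abs(z) ** 3 * np.sign(z)

def rhs(t, y):
    z, zdot = y
    f_exc = 1.0e5 * A * np.sin(omega * t)       # idealized single-frequency excitation [N]
    u_pto = -kp * z - kd * zdot                 # PD power-take-off force
    zddot = (f_exc + u_pto - b * zdot - restoring(z)) / m
    return [zdot, zddot]

sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0], max_step=0.05)
z, zdot = sol.y
mean_power = np.mean((kp * z + kd * zdot) * zdot)   # mean power absorbed by the PTO
print(f"mean absorbed power: {mean_power:.1f} W")
```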
Wave Energy Converter (WEC) technologies transform power from ocean waves into power for the electrical grid. WEC system components that support the performance, stability, and efficiency of a WEC array are investigated. To this end, Aquaharmonics Inc. took home the $1.5 million grand prize in the 2016 U.S. Department of Energy Wave Energy Prize, an 18-month design-build-test competition to increase the energy capture potential of wave energy devices. Aquaharmonics intends to develop, build, and perform open-ocean testing on a 1:7 scale device. Preliminary wave tank testing of the mechanical system of the 1:20 scale device has yielded a dataset of operational conditions and performance. In this paper, the Hamiltonian surface shaping and power flow control (HSSPFC) method is used in conjunction with scaled wave tank test data to explore the design space for the electrical transmission of energy to the shore-side power grid. Of primary interest is the energy storage system (ESS) that will electrically link the WEC to the shore. Initial analysis results contained in this paper provide a trade-off in storage device performance and design selection.
The cybersecurity research community has focused primarily on the analysis and automation of intrusion detection systems by examining network traffic behaviors. Expanding on this expertise, advanced cyber defense analysis is turning to host-based data for use in research and development of the next generation of network defense tools. Deep packet inspection of network traffic is becoming increasingly difficult as most boundary network traffic moves to HTTPS. Additionally, network data alone does not provide a full picture of end-to-end activity. These are some of the reasons that necessitate looking at other data sources, such as host data. We outline our investigation into the processing, formatting, and storing of the data, along with preliminary results from our exploratory data analysis. In writing this report, our goal is to help guide future research by providing a foundational understanding of an area of cybersecurity that is rich with complex, categorical, and sparse data and has a strong human-influence component. We also include suggestions for potential directions of future research.
One of the first milestones of the Behind-the-Meter Storage (BTMS) program was to develop testing protocols so that state-of-the-art cell chemistries and form factors could be evaluated against the aggressive BTMS performance and lifetime metrics. To help guide this conversation, a pack estimation calculation was run. At the time, the team assumed a worst-case scenario in which the battery alone would need to charge an electric vehicle in 15 minutes with no support from the grid. The calculation varied the current applied by each string or module in the storage system and estimated how many cells would be needed, and at what estimated cost, to charge an electric vehicle in 15 minutes at the applied current.
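A minimal back-of-the-envelope version of that pack estimation calculation is sketched below. Every numerical value is an assumed placeholder used only to show the structure of the calculation, not a BTMS program result.

```python
import math

ev_energy_kwh = 75.0        # energy delivered per fast-charge session (assumed)
charge_time_h = 0.25        # 15-minute charge
string_voltage_v = 400.0    # nominal DC string voltage (assumed)
cell_voltage_v = 3.6        # nominal cell voltage (assumed)
cell_cost_usd = 4.0         # rough cost per cell (assumed)

required_power_kw = ev_energy_kwh / charge_time_h    # e.g., 300 kW to deliver 75 kWh in 15 min

for string_current_a in (50.0, 100.0, 200.0):        # current applied per string, the swept variable
    power_per_string_kw = string_voltage_v * string_current_a / 1000.0
    n_strings = math.ceil(required_power_kw / power_per_string_kw)
    cells_per_string = math.ceil(string_voltage_v / cell_voltage_v)
    n_cells = n_strings * cells_per_string
    print(f"{string_current_a:5.0f} A/string -> {n_cells:5d} cells, ~${n_cells * cell_cost_usd:,.0f}")
```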
The Waste Isolation Pilot Plant (WIPP) is an operating geologic repository in southeastern New Mexico for transuranic (TRU) waste from nuclear defense activities. Nuclear criticality concerns have generally been low at the WIPP due to the low initial concentration of fissile material and the natural tendency of fissile solute to disperse during fluid transport in porous media (Rechard et al. 2000). On the other hand, the list of acceptable WIPP waste types has expanded over the years to include Criticality Control Overpack (CCO) containers and Pipe Overpack (POP) containers. Containers bound for WIPP are bundled together in hexagonal 7-packs (six containers surrounding one container in the center). Two 7-packs are often combined into a TRUPACT-II package for a total of 14 containers. Most TRUPACT-II packages are restricted to a maximum fissile mass equivalent to plutonium (FMEP) between 0.1 and 0.38 kg, but a CCO TRUPACT-II package and a POP TRUPACT-II package are permitted to have 5.32 kg and 2.80 kg FMEP, respectively (see Section 3 of US DOE (2013)). Consequently, CCO container criticality after emplacement at the WIPP was evaluated in Saylor and Scaglione (2018), and Oak Ridge National Laboratory is currently at work on POP container criticality analyses.
We have created a demonstration permissioned Distributed Ledger Technology (DLT) datastore for the UF6 cylinder tracking safeguards use case, built on the Ethereum DLT framework with smart contracts written in Solidity. Our demonstration creates a simulated dataset representing the tracking of 75,000 UF6 cylinders across 11 example nuclear facilities worldwide. Our DLT system allows for easy input and reading of shipping and receiving data and includes a Graphical User Interface (GUI). Sandia's Emulytics capability was leveraged to help create the DLT node network and assess performance. We find that our DLT prototype can easily handle ~150,000 UF6 cylinder shipments per year worldwide without any excessive computational or storage burden on the IAEA or Member States. Next steps could include a demonstration to the IAEA and potentially demonstrating integration with TradeLens, a DLT in use by a consortium of international shipping companies representing over half of world shipping trade.
The purpose of this document is to discuss in some detail the construction of two MACCS dose conversion factor (DCF) files: an older file created in 2007 named FGR13DCF.inp and a newer file created in 2018 named FGR13GyEquiv_RevA.inp. Very briefly, the difference between the two files is that the older file follows the standard convention of assigning a radiation weighting factor of 20 for alpha radiation for all tissues and organs, whereas the newer file complies with the FGR 13 health-effects modeling and uses modified radiation weighting factors (referred to as relative biological effectiveness (RBE) factors) for alpha radiation of 10 for breast and 1 for red bone marrow. During an intermediate period between the creation of these two DCF files, a file called FGR13GyEquiv was created and used for the SOARCA calculations. This file was not released to the MACCS user community, but it is also discussed briefly in this document. DCF files are used by MACCS to convert air and ground radionuclide concentrations to doses to an organ or to the whole body. The MACCS calculation considers the duration and shielding factor for each exposure pathway, including cloudshine, groundshine, inhalation, and ingestion. Dose coefficients for cloudshine and groundshine are expressed as dose rates; dose coefficients for inhalation and ingestion are expressed as committed doses. In this document, the term dose coefficient (the newer ICRP terminology) is used to describe the values contained in dose conversion factor (the older ICRP terminology) files.
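The pathway bookkeeping that the DCF files support can be sketched as follows. The coefficients, exposures, and shielding factors below are made-up placeholders chosen only to show the structure of the calculation; they are not FGR 13 values, and this is not the MACCS implementation.

```python
# Schematic: dose coefficient x time-integrated exposure x pathway shielding, summed over
# radionuclides and pathways. All numbers are placeholders for illustration only.
dose_coeff = {                                   # (nuclide, pathway) -> dose coefficient
    ("Cs-137", "groundshine"): 1.0e-15,          # Sv per (Bq-s/m^2), placeholder
    ("Cs-137", "inhalation"): 5.0e-9,            # committed Sv per Bq inhaled, placeholder
}
exposure = {                                     # time-integrated exposure for each pathway
    ("Cs-137", "groundshine"): 5.0e9,            # Bq-s/m^2 over the exposure duration
    ("Cs-137", "inhalation"): 2.0e3,             # Bq inhaled over the exposure duration
}
shielding = {"groundshine": 0.7, "inhalation": 0.4}   # pathway shielding/protection factors

organ_dose_sv = sum(dose_coeff[key] * exposure[key] * shielding[key[1]]
                    for key in dose_coeff)
print(f"total dose from all pathways: {organ_dose_sv:.2e} Sv")
```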
ACM Transactions on Architecture and Code Optimization
Srikanth, Sriseshan; Jain, Anirudh; Lennon, Joseph M.; Conte, Thomas M.; Debenedictis, Erik; Cook, Jeanine C.
Reduction is an operation performed on the values of two or more key-value pairs that share the same key. Reduction of sparse data streams finds application in a wide variety of domains such as data and graph analytics, cybersecurity, machine learning, and HPC applications. However, these applications exhibit low locality of reference, rendering traditional architectures and data representations inefficient. This article presents MetaStrider, a significant algorithmic and architectural enhancement to the state-of-the-art, SuperStrider. Furthermore, these enhancements enable a variety of parallel, memory-centric architectures that we propose, resulting in demonstrated performance that scales near-linearly with available memory-level parallelism.
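For readers unfamiliar with the operation, the conventional software form of key-value reduction that MetaStrider accelerates in memory-centric hardware can be sketched in a few lines; this baseline is illustrative only and is not the MetaStrider algorithm.

```python
def reduce_stream(pairs, op=lambda a, b: a + b):
    """Reduce a sparse stream of (key, value) pairs: values sharing a key are combined.

    This is the conventional software baseline for the operation that MetaStrider
    accelerates in hardware; it is illustrative only."""
    table = {}
    for key, value in pairs:
        table[key] = op(table[key], value) if key in table else value
    return table

# Example: accumulating edge weights from a sparse, low-locality graph-analytics stream.
stream = [(3, 1.0), (7, 2.5), (3, 0.5), (42, 1.0), (7, -1.0)]
print(reduce_stream(stream))   # {3: 1.5, 7: 1.5, 42: 1.0}
```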
Selective laser melting (SLM) is a powder-based additive manufacturing technique that creates parts by fusing together successive layers of powder with a laser. The quality of produced parts is highly dependent on the proper selection of processing parameters, requiring significant testing and experimentation to determine parameters for a given machine and material. Computational modeling could potentially be used to shorten this process by identifying parameters through simulation. However, simulating complete SLM builds is challenging due to the difference in scale between the size of the particles and laser used in the build and the size of the part produced. Often, continuum models are employed that approximate the powder as a continuous medium to avoid the need to model powder particles individually. While computationally expedient, continuum models require as inputs effective material properties for the powder, which are often difficult to obtain experimentally. Building on previous work that developed methods for estimating these effective properties, along with their uncertainties, through the use of detailed models, this work presents a part-scale continuum model capable of predicting residual thermal stresses in an SLM build with uncertainty estimates. Model predictions are compared to experimental measurements from the literature.
The New York State Public Service Commission recently made significant changes to the compensation mechanisms for distributed energy resources, such as solar generation. The new mechanisms, called the Value of Distributed Energy Resources (VDER), alter the value proposition of potential installations. In particular, multiple time-of-generation based pricing alternatives were established, which could lead to potential benefits from pairing energy storage systems with solar installations. This paper presents the calculations to maximize revenue from a solar photovoltaic and energy storage system installation operating under the VDER pricing structures. Two systems in two different zones within the New York Independent System Operator area were modeled. The impact of AC versus DC energy storage system interconnections with solar generation resources was also explored. The results show that energy storage systems could generate significant revenue depending on the pricing alternative being targeted and the zone selected for the project.
Motivation. Critical infrastructures are large, complex engineered systems that must be operated robustly under abnormal conditions resulting from natural hazards or intentional acts. For example, electric power systems must be robust to line faults, water utilities must rapidly mitigate contamination incidents, and computing networks must adapt to adversarial intrusions to protect critical information. Problem. It is difficult for decision-makers performing resiliency analysis of critical infrastructure to optimize designs and develop effective response strategies that account for uncertainties. Facing incomplete information and the sheer scope that a natural hazard or attack vector may encompass, response can be ineffective without reliable, scalable decision support tools. These problems are intrinsically nonlinear and involve discrete decisions, and unfortunately, existing off-the-shelf mathematical programming methods cannot support optimization-based decision-making for these nonlinear problems at scale. Method/Approach and Results. This project emphasized development of fundamental optimization strategies that support real-time mitigation and response for critical infrastructures. In particular, the project developed multi-tree approaches based on piecewise outer-approximations for the solution of mixed-integer nonlinear programming (MINLP) problems. These techniques alternate between an MILP or MISOCP relaxation, which yields a lower bound and candidate discrete solutions, and an NLP subproblem, which yields upper bounds. Using tailored relaxations based on problem structure, these methods were used to solve several key applications in resilience and response of critical infrastructure. This work resulted in two open-source, copyrighted software packages: CORAMIN (https://github.com/Coramin/Coramin), an object-oriented mathematical programming framework that supports tailored multi-tree algorithms for the solution of large-scale mixed-integer nonlinear programs; and EGRET (Electrical Grid Research and Engineering Toolkit, https://github.com/grid-parity-exchange/Egret), a declarative mathematical programming framework built upon CORAMIN and Pyomo for the formulation and solution of resilience and operations problems in power grid systems. Furthermore, these tools produced several important published results, including the following: the first known global optimization approach to solve the unit-commitment problem with nonlinear power flow constraints on medium-sized test problems; improved parallel optimization-based bounds tightening and strengthening of relaxations of AC power flow constraints; and optimization-based approaches for improved grid resilience and for using demand response to improve grid resilience while reducing capital requirements. Result Implications. This project developed first-of-a-kind algorithms for decision-making in critical infrastructure resilience operations and planning, as well as a next-generation toolkit for MINLP researchers. These approaches leveraged high-performance computing architectures to solve some of the largest, most challenging nonlinear discrete optimization problems to global optimality, and these successes were captured in open-source software to enable optimization-based decision-making and efficient solution of MINLP formulations for electric power transmission grids.
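To make the problem class concrete, the following toy Pyomo model shows the kind of MINLP these multi-tree methods target: a nonconvex nonlinear constraint coupled to a discrete switching decision. The model is an illustrative sketch, not an actual unit-commitment or AC power flow formulation from the project.

```python
import pyomo.environ as pyo

# Toy MINLP: nonconvex bilinear coupling plus a binary on/off decision (illustrative only).
m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0.0, 4.0))
m.y = pyo.Var(bounds=(0.0, 4.0))
m.on = pyo.Var(domain=pyo.Binary)                  # discrete operational decision
m.obj = pyo.Objective(expr=m.x**2 + 2.0 * m.y + 10.0 * m.on, sense=pyo.minimize)
m.nl = pyo.Constraint(expr=m.x * m.y >= 2.0)       # nonconvex bilinear constraint
m.logic = pyo.Constraint(expr=m.x <= 4.0 * m.on)   # x may be nonzero only when the unit is on

# A multi-tree scheme alternates between (i) an MILP/MISOCP relaxation of m built from
# piecewise outer-approximations (e.g., relaxation objects such as those in CORAMIN) to
# obtain lower bounds and candidate values of m.on, and (ii) an NLP solve of m with m.on
# fixed to obtain upper bounds, tightening the relaxation between iterations.
```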
The Alternative Fuels Risk Assessment Models (AltRAM) toolkit combines Quantitative Risk Assessment (QRA) with simulations of unignited dispersion, ignited turbulent diffusion flames, and indoor accumulation with delayed ignition of fuels. The models of the physical phenomena need to be validated for each of the fuels in the toolkit. This report presents the validation for methane, which is used as a surrogate for natural gas. For the unignited dispersion model, seven previously published experiments from credible sources were used for validation. The validation examined gas concentrations as a function of distance from the release point. Four of these experiments were underexpanded jets (i.e., release velocity equal to or greater than the local speed of sound) and the other three were subsonic releases. The methane plume model in AltRAM matched both varieties well, with higher accuracy for the underexpanded releases. For the jet flame model, we compared the heat flux and thermal radiation data reported from five separate turbulent jet flame experiments to the quantities calculated by AltRAM. Four of the five datasets were for underexpanded diffusion jet flames. While the results still match well enough to give a good estimate of what is occurring, the error is higher than what was seen with the plume model. For the underexpanded flames, AltRAM provided reasonable approximations, which would lead to conservative risk assessments. Some modeling errors can be attributed to environmental effects (i.e., wind), since most large-scale flame experiments are conducted outdoors. AltRAM has been shown to be a reasonably accurate tool for calculating the concentration or flame properties of natural gas releases. Improvements could still be made to the plume model for subsonic releases and to the radiative heat fluxes to reduce the conservative nature of these predictions. These models can provide valuable information for the risk assessment of natural gas infrastructure.
The hydrokinetic industry has advanced beyond its initial testing phase, with full-scale projects being introduced, constructed, and tested globally. However, primary hurdles such as reducing the cost of these systems, optimizing individual systems and arrays, and balancing energy extraction with environmental impact still require attention before commercial success can be achieved. The present study addresses the advances and limitations of near-zero head hydrokinetic technologies and the possibility of increased potential and applicability when enhancement techniques are considered within the design, implementation, and operational phases. Its goal is threefold: to review small-scale, state-of-the-art, near-zero head hydrokinetic current-energy-conversion technologies; to assess barriers, including gaps in knowledge, information, and data; and to assess the time and resource limitations of water-infrastructure owners and operators. A case study summarizes the design and implementation of the first permanent modern hydrokinetic installation in South Africa, where improved outputs were achieved through optimization during each design and operation phase. An economic analysis validates a competitive levelized cost of energy and further emphasizes the broad potential that remains relatively unexplored within existing water infrastructure.
Rempe, Susan R.; Muralidharan, A.; Pratt, L.R.; Chaudhari, M.I.
Anion hydration is complicated by H-bonding between neighboring water molecules in addition to H-bond donation to the anion. This situation leads to competing structures and anharmonic vibrations for simple clusters like (H2O)nCl-. This study applies quasi-chemical theory to anion hydration and exploits dynamics calculations on isolated clusters to account for anharmonicity. Comparing singly hydrated halide clusters, classic H-bond donation to the anion occurs for F-, while Cl- clusters exhibit flexible, dipole-dominated interactions. The predicted Cl- to F- hydration free energy difference agrees with experiment, a significant theoretical step toward addressing issues like Hofmeister ranking and selectivity in ion channels.
This study investigates the mechanical and corrosion properties of as-built and annealed equiatomic CoCrFeMnNi alloy produced by laser-based directed energy deposition (DED) Additive Manufacturing (AM). The high cooling rates of DED produced a single-phase, cellular microstructure with cells on the order of 4 μm in diameter and inter-cellular regions enriched in Mn and Ni. Annealing created a chemically homogeneous, recrystallized microstructure with a high density of annealing twins. The average yield strength of the as-built condition was 424 MPa, exceeding that of the annealed condition (232 MPa); however, the strain hardening rate was lower for the as-built material, stemming from the higher dislocation density associated with DED parts and the fine cell size. In general, the yield strength, ultimate tensile strength, and elongation-to-failure of the as-built material exceeded values from previous studies that explored other AM techniques for producing the CoCrFeMnNi alloy. Ductile fracture occurred for all specimens, with dimple initiation associated with nanoscale oxide inclusions. The breakdown potential (onset of pitting corrosion) was similar for the as-built and annealed conditions at 0.40 V vs. Ag/AgCl when immersed in 0.6 M NaCl. Pit morphology/propagation for the as-built condition exhibited preferential corrosion of the inter-cellular Ni/Mn-enriched regions, leading to a tortuous pit bottom and cover, while the annealed condition's pits resembled the lacy pits of 304L stainless steel. A passive oxide film depleted in Cr cations, with substantial incorporation of Mn cations, is proposed as the primary mechanism for the local corrosion susceptibility of the CoCrFeMnNi alloy.
In preparation for the next phase of the Source Physics Experiments, we acquired an active-source seismic dataset along two transects totaling more than 30 km in length at Yucca Flat, Nevada, on the Nevada National Security Site. Yucca Flat is a sedimentary basin which has hosted more than 650 underground nuclear tests (UGTs). The survey source was a novel 13,000 kg modified industrial pile driver. This weight drop source proved to be broadband and repeatable, richer in low frequencies (1-3 Hz) than traditional vibrator sources and capable of producing peak particle velocities similar to those produced by a 50 kg explosive charge. In this study, we performed a joint inversion of P-wave refraction travel times and Rayleigh-wave phase-velocity dispersion curves for the P- and S-wave velocity structure of Yucca Flat. Phase-velocity surface-wave dispersion measurements were obtained via the refraction microtremor method on 1 km arrays, with 80% overlap. Our P-wave velocity models verify and expand the current understanding of Yucca Flat’s subsurface geometry and bulk properties such as depth to Paleozoic basement and shallow alluvium velocity. Areas of disagreement between this study and the current geologic model of Yucca Flat (derived from borehole studies) generally correlate with areas of widely spaced borehole control points. This provides an opportunity to update the existing model, which is used for modeling groundwater flow and radionuclide transport. Scattering caused by UGT-related high-contrast velocity anomalies substantially reduced the number and frequency bandwidth of usable dispersion picks. The S-wave velocity models presented in this study agree with existing basin-wide studies of Yucca Flat, but are compromised by diminished surface-wave coherence as a product of this scattering. As nuclear nonproliferation monitoring moves from teleseismic to regional or even local distances, such high-frequency (>5 Hz) scattering could prove challenging when attempting to discriminate events in areas of previous testing.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs; it is capable of solving nonlinear, implicit, transient, and direct-to-steady-state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics, as well as generalized scalar, vector, and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to the low Reynolds number (Re < 1) regime. Enhanced modeling of manufacturing processes is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) or level-set-based free and moving boundary tracking, in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled-physics problems are solved in several ways, including a fully coupled Newton's method with analytic or numerical sensitivities, fully coupled Newton-Krylov methods, and a loosely coupled nonlinear iteration over subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity, and dynamic load balancing are some of Aria's more advanced capabilities.
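As a generic illustration of a fully coupled Newton iteration with numerical sensitivities, the following sketch solves a toy two-equation coupled system. It is illustrative only; it is not Aria's solver or its residual formulation.

```python
import numpy as np

def residual(u):
    # Toy coupled residual: a temperature-like unknown t and a species-like unknown c.
    t, c = u
    return np.array([t + 0.1 * t * c - 1.0,
                     c + 0.2 * t**2 - 0.5])

def newton(u0, tol=1e-10, max_iter=20):
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # Numerical sensitivities: forward-difference approximation of the Jacobian.
        eps = 1e-7
        J = np.column_stack([(residual(u + eps * e) - r) / eps for e in np.eye(len(u))])
        u -= np.linalg.solve(J, r)
    return u

print(newton([1.0, 0.0]))   # converged coupled solution of the toy system
```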
The classic models for ductile fracture of metals were based on experimental observations dating back to the 1950s. Using advanced microscopy techniques and modeling algorithms developed over the past several decades, it is now possible to examine the micro- and nano-scale mechanisms of ductile rupture in more detail. This new information enables a revised understanding of the ductile rupture process under quasi-static, room-temperature conditions in ductile pure metals and in alloys containing hard particles. While ductile rupture has traditionally been viewed through the lens of nucleation, growth, and coalescence, a new taxonomy is proposed involving the competition or cooperation of up to seven distinct rupture mechanisms. Generally, void nucleation via vacancy condensation is not rate limiting but is extensive within localized shear bands of intense deformation. Instead, the controlling process appears to be the development of intense local dislocation activity, which enables void growth via dislocation absorption.
DOE has identified consistent safety, codes, and standards as a critical need for the deployment of hydrogen technologies, with key barriers related to the availability and implementation of technical information in the development of regulations, codes, and standards. Advances in codes and standards have been enabled by risk-informed approaches to creating and implementing code revisions, such as those to National Fire Protection Association (NFPA) 2, NFPA 55, and International Organization for Standardization (ISO) Technical Specification (TS) 19880-1. This project provides the technical basis for these revisions, enabling the assessment of the safety of hydrogen fuel cell systems and infrastructure using quantitative risk assessment (QRA) and physics-based models of hydrogen behavior. The risk and behavior tools developed in this project are motivated by, shared directly with, and used by the committees revising relevant codes and standards, thus forming the scientific basis to ensure that code requirements are consistent, logical, and defensible.
Numerical simulation of non-isothermal multiphase porous flow combined with reactive transport is used in a wide range of applications, including nuclear waste repositories, enhanced recovery of petroleum reservoirs, contaminant remediation, geothermal engineering, and carbon sequestration. Understanding and predicting underground phenomena can have an enormous impact on how we address world issues like climate change, clean water, and renewable energy. The main motivation for this proposal arises from safety assessment of future nuclear waste repositories using the U.S. Department of Energy (DOE) Geologic Disposal Safety Assessment (GDSA) Framework, and from performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP), the nation's only active nuclear waste repository, in Carlsbad, New Mexico. I am a technical staff member at Sandia National Laboratories involved in both projects.
The DOE is devoted to improving national energy security and reducing carbon emissions through the development of renewable alternatives to fossil fuel usage. This work demonstrates a pathway to improve the feasibility of large-scale biofuel production by reducing the occurrence of pond failures and their associated economic burdens. We have done this by identifying unique volatile chemical signals that indicate predator attack on an algal biofuel pond. These volatiles are easy to collect in the field and could be rapidly analyzed for state-of-health monitoring. This will allow producers to intervene early during a predator attack on a pond and minimize crop loss.
Additional fueling stations need to be constructed in the U.S. to enable the widespread adoption of fuel cell electric vehicles. A wide variety of private and public stakeholders are involved in the development of this hydrogen fueling infrastructure. Each stakeholder has particular needs in the station planning, development, and operation process, which may include evaluating potential sites and requirements, understanding the components in a typical system, and/or improving public acceptance of this technology. Publicly available templates of representative station designs can be used to meet many of these stakeholder needs. These 'Reference Stations' help reduce the cost and speed the deployment of hydrogen stations by providing a common baseline from which to start a design, enabling quick assessment of the suitability of a particular site for a hydrogen station, and identifying contributors to poor economics as well as research and development needs for certain station designs.
This document provides brief descriptions of the MELCOR code enhancements made between code revisions 11932 and 14959. Revision 11932 represents the last official code release; therefore, the modeling features described within this document are provided to assist users who update to the newest official MELCOR code release, 14959. Along with the newly updated MELCOR Users' Guide and Reference Manual, this document will make users aware of, and able to assess, the new capabilities for their modeling and analysis applications.