Hahn, Nathan T.; Self, Julian; Driscoll, Darren M.; Dandu, Naveen; Han, Kee S.; Murugesan, Vijayakumar; Mueller, Karl T.; Curtiss, Larry A.; Balasubramanian, Mahalingam; Persson, Kristin A.; Zavadil, Kevin R.
Ion interactions strongly determine the solvation environments of multivalent electrolytes even at concentrations below those required for practical battery-based energy storage. This statement is particularly true of electrolytes utilizing ethereal solvents due to their low dielectric constants. These solvents are among the most commonly used for multivalent batteries based on reactive metals (Mg, Ca) due to their reductive stability. Recent developments in multivalent electrolyte design have produced a variety of new salts for Mg2+ and Ca2+ that test the limits of weak coordination strength and oxidative stability. Such electrolytes have great potential for enabling full-cell cycling of batteries based on these working ions. However, the ion interactions in these electrolytes exhibit significant and non-intuitive concentration relationships. In this work, we investigate a promising exemplar, calcium tetrakis(hexafluoroisopropoxy)borate (Ca(BHFIP)2), in the ethereal solvents 1,2-dimethoxyethane (DME) and tetrahydrofuran (THF) across a concentration range spanning several orders of magnitude. Surprisingly, we find that effective salt dissociation is lower at relatively dilute concentrations (e.g., 0.01 M) than at higher concentrations (e.g., 0.2 M). Combined experimental and computational dielectric and X-ray spectroscopic analyses of the changes occurring in the Ca2+ solvation environment across these concentration regimes reveal a progressive transition from well-defined solvent-separated ion pairs to de-correlated free ions. This transition in ion correlation results in improvements in both conductivity and calcium cycling stability with increased salt concentration. Comparison with previous findings involving more strongly associating salts highlights the generality of this phenomenon, leading to important insight into controlling ion interactions in ether-based multivalent battery electrolytes.
Doucet, Mathieu; Browning, James F.; Doyle, B.L.; Charlton, Timothy R.; Ambaye, Haile; Seo, Joohyun; Mazza, Alessandro R.; Wenzel, John F.; Burns, George R.; Wixom, Ryan R.; Veith, Gabriel M.
Haynes 230 nickel alloy is one of the main contenders for salt containment in the design of thermal energy storage systems based on molten salts. A key problem for these systems is understanding the corrosion phenomena at the alloy–salt interface and, in particular, the role played by chromium in these processes. In this study, thin films of chromium-rich Haynes 230 were measured with polarized neutron reflectometry and Rutherford backscattering spectrometry as a function of annealing temperature. Migration of chromium to the surface was observed for films annealed at 400 and 600 °C. Combining the two techniques showed that more than 60% of the chromium comprising the as-prepared Haynes 230 layer moves to the surface when annealed at 600 °C, where it forms an oxide layer.
Krylov subspace recycling is a powerful tool when solving a long series of large, sparse linear systems that change only slowly over time. In PDE-constrained shape optimization, such series appear naturally, as typically hundreds or thousands of optimization steps are needed with only small changes in the geometry. In this setting, however, applying Krylov subspace recycling can be a difficult task. As the geometry evolves, in general, so does the finite element mesh defined on or representing this geometry, including the number of nodes, the number of elements, and the element connectivity. This is especially the case if re-meshing techniques are used. As a result, the number of algebraic degrees of freedom in the system changes, and in general the linear system matrices resulting from the finite element discretization change size from one optimization step to the next. Changes in the mesh connectivity also lead to structural changes in the matrices. In the case of re-meshing, even if the geometry changes only a little, the corresponding mesh might differ substantially from the previous one. Obviously, this prevents any straightforward mapping of the approximate invariant subspace of the linear system matrix (the focus of recycling in this work) from one optimization step to the next; similar problems arise for other selected subspaces. In this paper, we present an algorithm to map an approximate invariant subspace of the linear system matrix for the previous optimization step to an approximate invariant subspace of the linear system matrix for the current optimization step, for general meshes. This is achieved by exploiting the map from coefficient vectors to finite element functions on the mesh, combined with interpolation or approximation of functions on the finite element mesh. We demonstrate the effectiveness of our approach numerically with several proof-of-concept studies for a specific meshing technique.
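For piecewise-linear (P1) elements, the coefficient vector of a finite element function is just its vector of nodal values, which makes the mapping step easy to sketch. The following Python fragment is a minimal illustration under that assumption; the function and variable names are ours, not the paper's, and a production version would hand the re-orthonormalized basis to a recycling solver (e.g., GCRO-DR):

    # Minimal sketch: map recycled subspace vectors from an old mesh to a
    # new mesh by interpolating the underlying P1 finite element functions.
    # Names are illustrative; old_nodes/new_nodes are (n, d) arrays, d >= 2.
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    def map_subspace(old_nodes, new_nodes, U_old):
        U_new = np.empty((new_nodes.shape[0], U_old.shape[1]))
        for j in range(U_old.shape[1]):
            # For P1 elements the j-th coefficient vector *is* the nodal
            # values of a FE function; evaluate it at the new mesh nodes.
            f = LinearNDInterpolator(old_nodes, U_old[:, j], fill_value=0.0)
            U_new[:, j] = f(new_nodes)
        # Interpolation does not preserve orthogonality; re-orthonormalize.
        Q, _ = np.linalg.qr(U_new)
        return Q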
Luo, Chaoqian; Chung, Christopher; Yakacki, Christopher M.; Long, Kevin N.; Yu, Kai
Liquid crystal elastomers (LCEs) exhibit soft elasticity due to the alignment and reorientation of mesogens upon mechanical loading, which provides additional mechanisms to absorb and dissipate energy. This enhanced response makes LCEs potentially transformative materials for biomedical devices, tissue replacements, and protective equipment. However, there is a critical knowledge gap in understanding the highly rate-dependent dissipative behaviors of LCEs due to the lack of real-time characterization techniques that probe the microscale network structure and link it to the mechanical deformation of LCEs. In this work, we employ in situ optical measurements to evaluate the degree of alignment and reorientation of mesogens in LCEs. The data are correlated with quantitative physical analysis performed using polarized Fourier-transform infrared spectroscopy. The time scale of mesogen alignment is determined at different strain levels and loading rates. The mesogen reorientation kinetics are characterized to establish their relationship with the macroscale tensile strain and are compared to theoretical predictions. Overall, this work provides the first detailed study on the time-dependent evolution of mesogen alignment and reorientation in deformed LCEs. It also provides an effective and more accessible approach for other researchers to investigate the structure-property relationships of different types of polymers.
Graphite electrodes in lithium-ion batteries exhibit various particle shapes, including spherical and platelet morphologies, which influence structural and electrochemical characteristics. It is well established that porous structures exhibit spatial heterogeneity and that particle morphology can influence transport properties. However, the impact of particle morphology on the heterogeneity and anisotropy of geometric and transport properties has not been previously studied. This study characterizes the spatial heterogeneities of 18 graphite electrodes at multiple length scales by calculating and comparing the structural anisotropy, geometric quantities, and transport properties (pore-scale tortuosity and electrical conductivity). We found that particle morphology and structural anisotropy play an integral role in determining the spatial heterogeneity of directional tortuosity and its dependence on pore-scale heterogeneity. Our analysis reveals that the magnitude of the difference between in-plane and through-plane tortuosity influences the multiscale heterogeneity in graphite electrodes.
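For reference, one common convention (an assumption on our part; the study may adopt a different definition) expresses the directional tortuosity factor through the ratio of intrinsic to effective transport coefficients:

    \tau_i = \varepsilon \, \frac{\sigma_0}{\sigma_{\mathrm{eff},i}}, \qquad i \in \{\text{in-plane},\ \text{through-plane}\}

where \varepsilon is the porosity, \sigma_0 the intrinsic conductivity of the transporting phase, and \sigma_{\mathrm{eff},i} the effective conductivity in direction i.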
The SIERRA Low Mach Module: Fuego, henceforth referred to as Fuego, is the key element of the ASC fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Using MPMD coupling, Scefire and Nalu handle the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.
The SNL Sierra Mechanics code suite is designed to enable simulation of complex multiphysics scenarios. The code suite is composed of several specialized applications which can operate either in standalone mode or coupled with each other. Arpeggio is a supported utility that enables loose coupling of the various Sierra Mechanics applications by providing access to Framework services that facilitate the coupling. More importantly, Arpeggio orchestrates the execution of the applications that participate in the coupling. This document describes the various components of Arpeggio and their operation. The intent of the document is to provide a fast path for analysts interested in coupled applications via simple examples of its usage.
Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.
Presented in this document are the theoretical aspects of capabilities contained in the Sierra/SM code. This manuscript serves as an ideal starting point for understanding the theoretical foundations of the code. For a comprehensive study of these capabilities, the reader is encouraged to explore the many references to scientific articles and textbooks contained in this manual. It is important to point out that some capabilities are still in development and may not be presented in this document. Further updates to this manuscript will be made as these capabilities come closer to production level.
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
Natural convection in porous media is a highly nonlinear multiphysical problem relevant to many engineering applications (e.g., the process of CO2 sequestration). Here, we extend and present a non-intrusive reduced order model of natural convection in porous media employing deep convolutional autoencoders for compression and reconstruction and either radial basis function (RBF) interpolation or artificial neural networks (ANNs) for mapping parameters of partial differential equations (PDEs) onto the corresponding nonlinear manifolds. To benchmark our approach, we also describe linear compression and reconstruction processes relying on proper orthogonal decomposition (POD) and ANNs. Further, we present comprehensive comparisons among the different models through three benchmark problems. The reduced order models, both linear and nonlinear, are much faster than the finite element model, attaining a maximum speed-up of 7 × 10⁶, because our framework is not bound by the Courant–Friedrichs–Lewy condition; hence, it can deliver quantities of interest at any given time, unlike the finite element model. Our model's accuracy still lies within a relative error of 7% in the worst-case scenario. We illustrate that, in specific settings, the nonlinear approach outperforms its linear counterpart and vice versa. We hypothesize that a visual comparison between principal component analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE) could indicate which method will perform better prior to employing any specific compression strategy.
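The linear (POD-based) branch of such a framework can be sketched compactly. The fragment below is a minimal illustration of non-intrusive POD plus RBF interpolation, assuming a precomputed snapshot matrix; all names are illustrative, and the nonlinear autoencoder branch described above would replace the SVD with learned encode/decode maps:

    # Minimal sketch of a non-intrusive POD + RBF reduced order model.
    # snapshots: (n_dof, n_snap) solution snapshots; params: (n_snap, n_p).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def build_pod_rom(snapshots, params, rank):
        # POD basis from a truncated SVD of the snapshot matrix.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        basis = U[:, :rank]                      # (n_dof, rank)
        coeffs = basis.T @ snapshots             # (rank, n_snap)
        # Map PDE parameters to POD coefficients by RBF interpolation.
        rbf = RBFInterpolator(params, coeffs.T)
        def predict(new_params):                 # new_params: (q, n_p)
            return basis @ rbf(new_params).T     # (n_dof, q) fields
        return predict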
Rock salt is being considered as a medium for energy storage and radioactive waste disposal. A Disturbed Rock Zone (DRZ) develops in the immediate vicinity of excavations in rock salt, with an increase in permeability, which alters the migration of gases and liquids around the excavation. When creep occurs adjacent to a stiff inclusion such as a concrete plug, it is expected that the stress state near the inclusion will become more hydrostatic and less deviatoric, promoting healing (permeability reduction) of the DRZ. In this scoping study, we measured the permeability of DRZ rock salt over time adjacent to inclusions (plugs) of varying stiffness to determine how the healing of rock salt, as reflected in the permeability changes, depends on stress and time. Samples were created with three different inclusion materials in a central hole along the axis of a salt core: (i) very soft silicone sealant, (ii) Sorel cement, and (iii) carbon steel. The measured permeabilities are corrected for the gas slippage effect. We observed that the permeability change is a function of the inclusion material: the stiffer the inclusion, the more rapidly the permeability decreases with time.
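The standard slip correction is the Klinkenberg relation; assuming that is the form applied here, the apparent gas permeability measured at mean pore pressure \bar{P} relates to the intrinsic (slip-corrected) permeability k_\infty as

    k_{\mathrm{gas}} = k_{\infty} \left( 1 + \frac{b}{\bar{P}} \right)

where b is the gas- and rock-dependent slip factor.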
The detonation of explosives produces luminous fireballs often containing particulates such as carbon soot or remnants of partially reacted explosives. The spatial distribution of these particulates is of great interest for the derivation and validation of models. In this work, three ultra-high-speed imaging techniques (diffuse back-illumination extinction, schlieren, and emission imaging) are utilized to investigate the particulate quantity, spatial distribution, and structure in a small-scale fireball. The measurements show the evolution of the particulate cloud in the fireball, identifying possible emission sources and regions of high optical thickness. Extinction measurements performed at two wavelengths show that extinction follows the inverse-wavelength behavior expected of absorptive particles in the Rayleigh scattering regime. The estimated mass from these extinction measurements shows an average soot yield consistent with previous soot collection experiments. The imaging diagnostics discussed in the current work can provide detailed information on the spatial distribution and concentration of soot, which is crucial for future validation efforts.
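In the Rayleigh (small-particle) limit, where scattering is negligible relative to absorption, transmission through the cloud follows the Beer–Lambert law with an extinction coefficient inversely proportional to wavelength. A commonly used form (our assumption; the paper's exact expression is not given in the abstract) is

    \frac{I}{I_0} = \exp(-K_{\mathrm{ext}} L), \qquad K_{\mathrm{ext}} \approx \frac{6 \pi E(m) f_v}{\lambda}

where f_v is the soot volume fraction, E(m) the absorption function of the complex refractive index m, and L the optical path length; the 1/\lambda dependence is the inverse-wavelength behavior observed in the two-wavelength measurements.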
This paper presents the formulation, implementation, and demonstration of a new, largely phenomenological model for the damage-free (micro-crack-free) thermomechanical behavior of rock salt. Unlike most salt constitutive models, the new model includes both drag stress (isotropic) and back stress (kinematic) hardening. The implementation utilizes a semi-implicit scheme and a fall-back fully-implicit scheme to numerically integrate the model's differential equations. Particular attention was paid to the initial guesses for the fully-implicit scheme. Of the four guesses investigated, an initial guess that interpolated between the previous converged state and the fully saturated hardening state performed best. The numerical implementation was then used in simulations that highlighted the difference between drag stress hardening and combined drag and back stress hardening. Simulations of multi-stage constant stress tests showed that only combined hardening could qualitatively represent reverse (inverse transient) creep, as well as the large transient strains experimentally observed upon switching from axisymmetric compression to axisymmetric extension. Simulations of a gas storage cavern subjected to high and low gas pressure cycles showed that combined hardening led to substantially greater volume loss over time than drag stress hardening alone.
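The interpolated initial guess can be illustrated on a generic saturating hardening law. The fragment below is a stand-in sketch, not the salt model itself (whose equations are not given in the abstract); the rate law and the interpolation weight theta are our own illustrative choices:

    # Stand-in sketch of a fully-implicit (backward Euler) hardening update
    # with an initial guess interpolated between the previous converged
    # state h_n and the saturated state h_sat. The rate law is illustrative.
    from scipy.optimize import newton

    def implicit_step(h_n, h_sat, c, dt, theta=0.5):
        def residual(h):
            # Backward Euler residual for dh/dt = c * (1 - h / h_sat)**2.
            return h - h_n - dt * c * (1.0 - h / h_sat) ** 2
        h_guess = (1.0 - theta) * h_n + theta * h_sat  # interpolated guess
        return newton(residual, h_guess)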
For the model-based control of low-voltage microgrids, state and parameter information is required. Different optimal estimation techniques can be employed for this purpose. However, these estimation techniques require knowledge of the noise covariances (process and measurement noise). Incorrect noise covariance values can degrade estimator performance, which in turn can reduce overall controller performance. This paper presents a method to identify noise covariances for voltage dynamics estimation in a microgrid. The method is based on the autocovariance least squares technique. A simulation study of a simplified 100 kVA, 208 V microgrid system in MATLAB/Simulink validates the method. Results show that the identified covariances are close to the actual values for Gaussian noise, while non-Gaussian noise yields slightly larger errors.
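The core of the autocovariance least-squares idea can be shown in a scalar toy problem: innovations from a fixed-gain (deliberately suboptimal) filter have autocovariances that are linear in the unknown Q and R, so a few empirical lags suffice to recover them. The sketch below is our own scalar illustration, not the paper's microgrid model; the full technique stacks many lags into an overdetermined least-squares problem:

    # Scalar sketch of autocovariance least squares (ALS); values illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    a, c, K = 0.95, 1.0, 0.5      # plant pole, output map, fixed filter gain
    Q_true, R_true = 0.04, 0.25   # noise variances to be identified
    N = 200_000

    # Simulate plant and fixed-gain predictor, collecting innovations e_k.
    x = xhat = 0.0
    e = np.empty(N)
    for k in range(N):
        y = c * x + rng.normal(scale=np.sqrt(R_true))
        e[k] = y - c * xhat
        xhat = a * xhat + K * e[k]
        x = a * x + rng.normal(scale=np.sqrt(Q_true))

    # Empirical innovation autocovariances at lags 0 and 1.
    C0 = np.mean(e * e)
    C1 = np.mean(e[1:] * e[:-1])

    # For this model, C0 = c^2 P + R and C1 = c^2 (a - K c) P - c K R, where
    # P is the steady-state state-error variance; solve for P and R, then
    # recover Q from the balance equation P = (a - K c)^2 P + Q + K^2 R.
    M = np.array([[c**2, 1.0], [c**2 * (a - K * c), -c * K]])
    P, R_est = np.linalg.solve(M, [C0, C1])
    Q_est = (1.0 - (a - K * c) ** 2) * P - K**2 * R_est
    print(f"Q: {Q_est:.3f} (true {Q_true}); R: {R_est:.3f} (true {R_true})")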
In accident scenarios involving release of tritium during handling and storage, the level of risk to human health is dominated by the extent to which radioactive tritium is oxidized to the water form (T2O or THO). At some facilities, tritium inventories consist of very small quantities stored at sub-atmospheric pressure, which means that tritium release accident scenarios will likely produce concentrations in air well below the lower flammability limit. It is known that isotope effects on reaction rates should result in slower oxidation rates for heavier isotopes of hydrogen, but this effect has not previously been quantified for oxidation at concentrations well below the lower flammability limit for hydrogen. This work describes hydrogen isotope oxidation measurements in an atmospheric tube furnace reactor. These measurements span five concentration levels between 0.01% and 1% protium or deuterium and two residence times. Oxidation is observed to occur between about 550°C and 800°C, with higher levels of conversion achieved at lower temperatures for protium relative to deuterium at the same volumetric inlet concentration and residence time. Computational fluid dynamics simulations of the experiments were used to calibrate reaction orders and Arrhenius parameters in a one-step oxidation mechanism. The trends in the rates for protium and deuterium are extrapolated, based on guidance from the literature, to produce kinetic rate parameters appropriate for tritium oxidation at low concentrations.
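A one-step mechanism of the kind described has the generic form (our notation; the fitted quantities are the orders a and b, the pre-exponential factor A, and the activation energy E_a):

    r = A \, [\mathrm{H_2}]^{a} \, [\mathrm{O_2}]^{b} \, \exp\!\left( -\frac{E_a}{R T} \right)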
It is impossible in practice to comprehensively test even small software programs due to the vastness of the reachable state space; however, modern cyber-physical systems such as aircraft require a high degree of confidence in software safety and reliability. Here we explore methods of generating test sets to effectively and efficiently explore the state space of a module based on the Traffic Collision Avoidance System (TCAS) used on commercial aircraft. A formal model of TCAS in the model-checking language NuSMV provides an output oracle. We compare test sets generated using various methods, including covering arrays, random input generation, and a low-complexity input paradigm, applied to 28 versions of the TCAS C program containing seeded errors. Faults are triggered by tests for all 28 programs using a combination of covering arrays and random input generation. Complexity-based inputs perform more efficiently than covering arrays and can be paired with random input generation to create efficient and effective test sets. A random forest classifier identifies variable values that can be targeted to generate tests even more efficiently in future work, by combining a machine-learned fuzzing algorithm with more complex model oracles developed in model-based systems engineering (MBSE) software.
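The classifier step can be sketched as follows: label each executed test input vector by whether it triggered a fault, fit a random forest, and rank input variables by importance. The fragment below uses synthetic data and illustrative names, not the TCAS inputs themselves:

    # Sketch: identify input variables most associated with fault triggering.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.integers(0, 4, size=(5000, 12))            # synthetic test inputs
    y = ((X[:, 2] == 3) & (X[:, 7] == 0)).astype(int)  # synthetic fault label

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    ranking = np.argsort(clf.feature_importances_)[::-1]
    print("variables most associated with faults:", ranking[:3])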
Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
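The essence of circuit mirroring is that a program followed by its inverse should return the system to its initial state, so any deviation from that outcome is attributable to hardware error. The toy single-qubit sketch below illustrates only this core identity; the published construction additionally inserts random Pauli layers, which are omitted here:

    # Toy sketch: a circuit followed by its inverse ideally returns |0>.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    T = np.diag([1, np.exp(1j * np.pi / 4)])

    circuit = [H, T, H]                            # forward program
    mirror = circuit + [U.conj().T for U in reversed(circuit)]

    state = np.array([1, 0], dtype=complex)        # start in |0>
    for U in mirror:
        state = U @ state
    print(f"ideal success probability: {abs(state[0])**2:.6f}")  # -> 1.0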