Soft ferromagnetic alloys are often utilized in electromagnetic applications due to their desirable magnetic properties. In support of these applications, the ferromagnetic alloys are also expected to bear mechanical load at various environmental temperatures. In this study, a Permendur 2V alloy manufactured by Metalwerks Inc. (but referred to here as Hiperco 50A, a trademark of Carpenter Technologies Inc.) was dynamically characterized in tension with a Kolsky tension bar and a Dropkinson bar at various strain rates and temperatures. Dynamic tensile stress-strain curves of the Hiperco 50A alloy were obtained at strain rates ranging from 40 to 230 s⁻¹ and temperatures from -100 to 100°C. All tensile stress-strain curves exhibited an initial linear elastic response up to an upper yield point, followed by a Lüders banding response and then nearly linear work hardening. The yield strength of this material was found to be sensitive to both strain rate and temperature, whereas the hardening rate was independent of both. The Hiperco 50A alloy exhibited brittle fracture in tension under dynamic loading, with no necking observed.
The High Burn-Up Demonstration Project was recently initiated by the Department of Energy (DOE) to evaluate the effects of fuel drying and long-term dry storage on high burn-up spent nuclear fuel. As part of the project, samples of the He backfill gas were collected 5 hours, 5 days, and 12 days after completion of drying. The samples provide information on the state of the fuel at closure and on the environment within the cask. At Sandia National Laboratories, the samples were analyzed by gamma-ray spectroscopy to quantify fission product gases and by gas mass spectrometry to quantify bulk and trace gases; water content was measured via humidity probe. Gamma-ray spectroscopy results showed no detectable 85Kr, indicating that no failed fuel rods were present after drying. Mass spectrometry indicated a build-up of CO2 to 930 ppmv over two weeks, attributed to oxidation of organic compounds (possibly vacuum grease or vacuum pump oil) within the cask. H2, generated by either radiolysis or metal corrosion, also increased to ~500 ppmv. Water content in the cask was higher than anticipated, increasing to ~17,400 ppmv ±10% after 12 days. Measuring water content proved challenging, and possible improvements to the method for future analyses are proposed.
Fisher, Carolyn L.; Reese, Kristen L.; Lane, Pamela D.; Jaryenneh, James J.; Ward, Christopher S.; Maddalena, Randy; Mayali, Xavier; Moorman, Matthew W.; Lane, Todd W.
In this paper, we study, analyze, and validate several important zero-dimensional physics-based models for vanadium redox batch cell (VRBC) systems and formulate a physics-based model that can predict battery performance accurately. In the model formulation process, a systems approach to multiple-parameter estimation was applied to VRBC systems at low C-rates (∼C/30). In this batch cell system, the effect of ion crossover through the membrane is dominant, so capacity loss phenomena can be observed explicitly. Paradoxically, this means that the batch system may be a better platform for identifying a model that suitably describes the effect of ion transport. Next, we propose an efficient systems approach that helps characterize battery performance quickly by estimating all parameters of the battery system. Finally, open-source codes, executable files, and experimental data are provided to enable open access to robust and accurate models and optimizers. In battery simulations, different models and optimizers describing the same systems produce different values of the estimated parameters. Providing an open-access platform can accelerate convergence toward robust models and optimizers through continuous modification by users.
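As a purely illustrative sketch of estimating multiple model parameters from measured data in a single optimization (not the authors' released code; the toy fade model and data below are hypothetical stand-ins), a nonlinear least-squares fit can be set up as:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured discharge capacities over charge/discharge cycles (toy data)
cycles = np.arange(1, 21)
rng = np.random.default_rng(0)
measured = 1.0 * np.exp(-0.02 * cycles) + 0.005 * rng.normal(size=cycles.size)

def model(params, n):
    q0, k = params                 # initial capacity and fade rate: stand-ins for the
    return q0 * np.exp(-k * n)     # zero-dimensional VRBC model parameters

def residuals(params):
    return model(params, cycles) - measured

# Estimate all model parameters simultaneously from the measured response
fit = least_squares(residuals, x0=[0.9, 0.05])
print("estimated parameters:", fit.x)
```

The same pattern extends to the full zero-dimensional model by replacing the toy `model` with the simulated cell voltage and stacking residuals across all measured cycles.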
This corrigendum clarifies the conditions under which the proof of convergence of Theorem 1 from the original article is valid. We erroneously stated as one of the conditions for the Schwarz alternating method to converge that the energy functional be strictly convex for the solid mechanics problem. We have relaxed that assumption and changed the corresponding parts of the text. None of the results or other parts of the original article are affected.
Jiang, Weilin; Conroy, Michele A.; Kruska, Karen; Olszta, Matthew J.; Droubay, Timothy C.; Schwantes, Jon M.; Taylor, Caitlin A.; Price, Patrick M.; Hattar, Khalid M.; Devanathan, Ram
Molecular dynamics simulations were employed to simulate the mechanical response and grain evolution in a Ni nanowire under both static and cyclic loading conditions at 300 and 500 K for periods of 40 ns. The loading conditions included thermal annealing with no deformation, constant 1% extension (creep loading), and cyclic loading with strain amplitudes of 0.5% and 1% for 200 cycles. Under cyclic loading, the stress-strain response showed permanent deformation and cyclic hardening behavior. At 300 K, modest grain evolution was observed under all conditions within the 40 ns simulations. At 500 K, substantial grain growth was observed in all cases but was most pronounced under cyclic loading. This may result mechanistically from a net motion of the boundaries associated with boundary ratcheting. There is a striking qualitative consistency between the present simulation results and the experimental observation of abnormal grain growth in nanocrystalline metals as a precursor to fatigue crack initiation.
In January 2019, the U.S. Department of Energy, Office of Science program in Advanced Scientific Computing Research, convened a workshop to identify priority research directions for in situ data management (ISDM). The workshop defined ISDM as the practices, capabilities, and procedures to control the organization of data and enable the coordination and communication among heterogeneous tasks, executing simultaneously in a high-performance computing system, cooperating toward a common objective. The workshop revealed two primary, interdependent motivations for processing and managing data in situ. The first motivation is that the in situ methodology enables scientific discovery from a broad range of data sources over a wide scale of computing platforms: leadership-class systems, clusters, clouds, workstations, and embedded devices at the edge. The successful development of ISDM capabilities will benefit real-time decision-making, design optimization, and data-driven scientific discovery. The second motivation is the need to decrease data volumes. ISDM can make critical contributions to managing large data volumes from computations and experiments to minimize data movement, save storage space, and boost resource efficiency, often while simultaneously increasing scientific precision.
Nanoparticles; Supramolecular Chemistry; Materials Science
Nanoparticles (NPs) of controlled size, shape, and composition are important building blocks for the next generation of devices. There are numerous recent examples of organizing uniformly sized NPs into ordered arrays or superstructures in processes such as solvent evaporation, heterogeneous solution assembly, Langmuir-Blodgett deposition, receptor-ligand interactions, and layer-by-layer assembly. This review summarizes recent progress in the development of surfactant-assisted cooperative self-assembly methods that use amphiphilic surfactants and NPs to synthesize new classes of highly ordered, active nanostructures. Driven by cooperative interparticle interactions, surfactant-assisted NP nucleation and growth results in optically and electrically active nanomaterials with hierarchical structure and function. How the approach assembles nanoscale materials of different dimensions into active nanostructures is discussed in detail. Some applications of these self-assembled nanostructures in the areas of nanoelectronics, photocatalysis, and biomedicine are highlighted. Finally, we conclude with a summary of current research progress and perspectives on remaining challenges and future directions.
Scour beneath seafloor pipelines, cables, and other offshore infrastructure is a well-known problem. Recent interest in seafloor-mounted wave energy converters brings another dynamic element into the traditional seafloor scour problem. In this paper, we consider the M3 Wave APEX device, which utilizes airflow between two flexible chambers to generate electricity from waves. In an initial at-sea deployment of a demonstration/experimental APEX in September 2014 off the coast of Oregon, scour beneath the device was observed. As sediment from beneath the device was removed by scour, the device's pitch orientation shifted. This change in pitch orientation caused a degradation in power performance. Characterizing the scour associated with seafloor-mounted wave energy conversion devices such as the M3 device is the objective of the present work.
Snippet is a collaborative source code auditing tool developed by Sandia National Laboratories, in conjunction with researchers from other government agencies and partners. As a tool for use by groups of auditors, Snippet showed the value of providing in-line annotations that captured the thoughts of auditors in such a way that teams of auditors could share their insights. Snippet users found that the annotation mechanisms improved the auditing process but were frustrated at having to learn another environment. To address this issue, collaborators made extensions to existing Integrated Development Environments (IDEs) to incorporate functionality that mimics Snippet's behaviors. This document is one of a pair of documents. The other document, "Snippet Open-Source Specification", goes into detail about the history and reasoning behind the requirements made in this document, and provides a broader understanding of the motivations behind them. This document provides the complete specification for collaborative annotation, and a characterization of each specification as required or recommended. Tools that implement these specifications will be considered Snippet compliant. The intent is that complete implementation of these specifications will enable seamless sharing of annotation information between disjoint tools.
Snippet is a collaborative software code auditing tool developed at Sandia National Laboratories (Sandia) in conjunction with several sponsoring government agencies to enable software analysts, individually or in groups, to non-destructively audit large codebases, share files and comments from those audits, and develop Snippet plugins for use with other text editing software applications. This specification defines the requirements and the reasoning behind the requirements for tools developed to be compatible with the technology and purpose of Snippet. A condensation of the requirements is given in the companion document, "Snippet Open-Source Specification Synopsis" also generated by Sandia National Laboratories. Snippet's approach to browsing through codebases is similar to a web browser navigating the internet.
Lipscomb, William H.; Price, Stephen F.; Hoffman, Matthew J.; Leguy, Gunter R.; Bennett, Andrew R.; Bradley, Sarah L.; Evans, Katherine J.; Fyke, Jeremy G.; Kennedy, Joseph H.; Perego, Mauro P.; Ranken, Douglas M.; Sacks, William J.; Salinger, Andrew G.; Vargo, Lauren J.; Worley, Patrick H.
We describe and evaluate version 2.1 of the Community Ice Sheet Model (CISM). CISM is a parallel, 3-D thermomechanical model, written mainly in Fortran, that solves equations for the momentum balance and the thickness and temperature evolution of ice sheets. CISM's velocity solver incorporates a hierarchy of Stokes flow approximations, including shallow-shelf, depth-integrated higher order, and 3-D higher order. CISM also includes a suite of test cases, links to third-party solver libraries, and parameterizations of physical processes such as basal sliding, iceberg calving, and sub-ice-shelf melting. The model has been verified for standard test problems, including the Ice Sheet Model Intercomparison Project for Higher-Order Models (ISMIP-HOM) experiments, and has participated in the initMIP-Greenland initialization experiment. In multimillennial simulations with modern climate forcing on a 4 km grid, CISM reaches a steady state that is broadly consistent with observed flow patterns of the Greenland ice sheet. CISM has been integrated into version 2.0 of the Community Earth System Model, where it is being used for Greenland simulations under past, present, and future climates. The code is open-source with extensive documentation and remains under active development.
The synthesis of materials that can mimic the mechanical, and ultimately functional, properties of biological cells can broadly impact the development of biomimetic materials, as well as engineered tissues and therapeutics. Yet, it is challenging to synthesize, for example, microparticles that share both the anisotropic shapes and the elastic properties of living cells. Here, a cell-directed route to replicate cellular structures into synthetic hydrogels such as polyethylene glycol (PEG) is described. First, the internal and external surfaces of chemically fixed cells are replicated in a conformal layer of silica using a sol–gel process. The template is subsequently removed to render shape-preserved, mesoporous silica replicas. Infiltration and cross-linking of PEG precursors and dissolution of the silica result in a soft hydrogel replica of the cellular template, as demonstrated using erythrocytes, HeLa cells, and cultured neuronal cells. The elastic modulus can be tuned over an order of magnitude (≈10–100 kPa), though with a high degree of variability. Furthermore, synthesis without removing the biotemplate results in stimuli-responsive particles that swell/deswell in response to environmental cues. Overall, this work provides a foundation to develop soft particles with nearly limitless architectural complexity derived from dynamic biological templates.
Voltage-controlled room temperature isothermal reversible spin crossover switching of [Fe{H2B(pz)2}2(bipy)] thin films is demonstrated. This isothermal switching is evident in thin film bilayer structures where the molecular spin crossover film is adjacent to a molecular ferroelectric. The adjacent molecular ferroelectric, either polyvinylidene fluoride hexafluoropropylene or croconic acid (C5H2O5), appears to lock the spin crossover [Fe{H2B(pz)2}2(bipy)] molecular complex largely in the low or high spin state depending on the direction of ferroelectric polarization. In both a planar two-terminal diode structure and a transistor structure, the voltage-controlled isothermal reversible spin crossover switching of [Fe{H2B(pz)2}2(bipy)] is accompanied by a resistance change and is seen to be nonvolatile, i.e., retained in the absence of an applied electric field. The result appears general, as the voltage-controlled nonvolatile switching can be made to work with two different molecular ferroelectrics: croconic acid and polyvinylidene fluoride hexafluoropropylene.
Existing approaches to evaluating cyber risk are summarized and explored for their applicability to critical infrastructure. The approaches cluster in three different spaces: network security, cyber-physical, and mission assurance. In all approaches, some form of modeling is utilized at varying levels of detail, while the ability to understand consequence varies, as do interpretations of risk. A hybrid approach can account for cyber risk in critical infrastructure and allow for allocation of limited resources across the entirety of the risk spectrum.
An increasing number of jurisdictions are adopting Distributed Energy Resource (DER) interconnection standards which require photovoltaic (PV) inverters, energy storage systems, and other DER to include interoperable grid-support functionality. These functions provide grid operators the knobs to support local and bulk power system operations with DER equipment, but the associated grid operator-to-DER communications networks must be deployed with appropriate cybersecurity features. In some situations, additional security features may prevent control system scalability or increase communication latencies and dropouts. These unintended consequences of the security features would therefore hinder the ability of the grid operator to implement specific control algorithms. This project evaluated the tradeoffs between power system performance and cybersecurity metrics for several grid services. This was conducted in two parts.
When the core is breached during a severe nuclear accident, a molten mixture of nuclear fuel, cladding, and structural supports is discharged from the reactor vessel. This molten mixture of ceramic and metal is often referred to as "corium". Predicting the flow and solidification of corium poses challenges for numerical models due to the presence of large Peclet numbers when convective transport dominates the physics. Here, we utilize a control volume finite-element method (CVFEM) discretization to stabilize the advection-dominated flow and heat transport. This CVFEM approach is coupled with the conformal decomposition finite-element method (CDFEM), which tracks the corium/air interface on an existing background mesh. CDFEM is a sharp-interface method, allowing the direct discretization of the corium front. This CVFEM-CDFEM approach is used to model the spreading of molten corium in both two and three dimensions. The CVFEM approach is briefly motivated in a comparison with a streamline upwind/Petrov-Galerkin (SUPG) stabilized finite-element method, which was not able to suppress spurious temperature oscillations in the simulations. Our model is compared directly with the FARO L26 corium spreading experiments and with previous numerical simulations, showing both quantitative and qualitative agreement with those studies.
Kim, Kyoungtae; Jarrett, William L.; Alam, Todd M.; Otaigbe, Joshua U.
The effects of mixing and sintering processes used to prepare tin fluorophosphate glass (Pglass) matrix composites incorporating trisilanol phenyl polyhedral oligomeric silsesquioxane (TSP-POSS) were investigated by comparing manual and suspension mixing and one-step and stepwise sintering, to explore the structure, dynamics, and physical properties of the composites as a function of the different process conditions used. Energy Dispersive X-ray analysis confirmed optimal homogeneous dispersion of the TSP-POSS molecules in the composites prepared by the suspension method. The observed increase of the glass transition temperature and the reduction of non-bridging bonds in the composites are believed to explain the effective dispersion of TSP-POSS molecules in the composites. The chemical reaction between the TSP-POSS and Pglass was strongly influenced by the mixing/dispersion and sintering processes investigated. 13C cross polarized magic angle spinning (CP MAS) solid state nuclear magnetic resonance (NMR) spectroscopy confirmed the chemical stability of the TSP-POSS during the sintering process at elevated temperatures. In addition, a chemical reaction between the TSP-POSS and Pglass was evidenced by 29Si CP MAS NMR analysis. This study provides a better fundamental understanding of the effective dispersion mechanism of the TSP-POSS molecules in the Pglass matrix, which will facilitate tailoring the physicochemical properties of the composites through the addition of small concentrations of TSP-POSS for applications where the pure Pglass is unsuitable due to its intrinsic properties.
Each instrument recorded the x-ray emission from the Z-pinch dynamic hohlraum (ZPDH). On LOS 330, TREX 6A & B recorded time-resolved and time-integrated absorption spectra from a radiatively heated Ne gas; on LOS 170, the LM monochromatic and high-pass imagers imaged the Z-pinch before and near stagnation.
This report contains a response from Sandia National Laboratories for the 2019 update to the 2016 Federal Cybersecurity Research and Development Strategic Plan.
Extensive deployment of interoperable distributed energy resources (DER) on power systems is increasing the power system cybersecurity attack surface. National and jurisdictional interconnection standards require DER to include a range of autonomous and commanded grid support functions which can drastically influence power quality, voltage, and bulk system frequency. This project was split into two phases. The first provided a survey and roadmap of cybersecurity for the solar industry. The second investigated multiple PV cybersecurity research and development (R&D) concepts identified in the first phase. In the first year, the team created a roadmap for improving cybersecurity for distributed solar energy resources. This roadmap was intended to provide direction for the nation over the next five years; it focused on the intersection of industry and government and recommended activities in four related areas: stakeholder engagement, cybersecurity research and development, standards development, and industry best practices. At the same time, the team produced a primer for DER vendors, aggregators, and grid operators to establish a common taxonomy and describe basic principles of cybersecurity, encryption, communication protocols, DER cybersecurity recommendations and requirements, and device-, aggregator-, and utility-level security best practices to ensure data confidentiality, integrity, and availability. This material was motivated by the need to assist the broader PV industry with cybersecurity resilience and to describe the state of the art for securing DER communications. Lastly, an adversary-based assessment of multiple PV devices was completed at the Distributed Energy Technologies Laboratory at Sandia National Laboratories to determine the status of industry cybersecurity practices. The team found multiple deficiencies in the security features of the assessed devices. In the second year, a set of recommendations was created for DER communication protocols, especially with respect to the state-of-the-art requirements in IEEE 2030.5. Additionally, several cybersecurity R&D technologies related to communications-enabled photovoltaic systems were studied to harden DER communication networks. Specifically, the team investigated (a) using software-defined networking to create a moving target defense system for DER communications, and (b) engineering controls that prevent misprogramming or adversary action on DER devices/networks by disallowing setpoints that would generate unstable power system operations.
An increasing number of jurisdictions are adopting Distributed Energy Resource (DER) interconnection standards which require photovoltaic (PV) inverters, energy storage systems, and other DER to include interoperable grid-support functionality. These functions provide grid operators the knobs to support local and bulk power system operations with DER equipment, but the associated grid operator-to-DER communications networks must be deployed with appropriate cybersecurity features. In some situations, additional security features may prevent control system scalability or increase communication latencies and dropouts. These unintended consequences of the security features would therefore hinder the ability of the grid operator to implement specific control algorithms. This project evaluated the tradeoffs between power system performance and cybersecurity metrics for several grid services.
This quarter, we have focused on characterizing the electrochemical response, through both cyclic voltammetry and constant-current charge/discharge characterization, of the silicon samples coated with silicates containing varying amounts of Li in the SiOx layer. These studies were performed using a standard Gen-2 electrolyte without FEC. We also performed electrochemical impedance spectroscopy (EIS) on samples exposed continuously to the Gen-2 electrolyte, collecting EIS spectra as a function of time and temperature.
Strong ground motion induces acoustic waves in the atmosphere that can be detected at great distances. These waves provide a record of acceleration at the epicenter of the subterranean event. While this information is valuable for nuclear monitoring purposes, a systematic study of the variation in acoustic parameters with explosive yield and depth has not yet been conducted. Here, we provide a survey of low-frequency sound waves generated during Phase 1 of the Source Physics Experiment, in which six chemical explosions were detonated in granite. We found that pressure amplitudes increase with explosion size but decrease with depth, as expected. Pressure amplitude variability increased with signal magnitude. Surprisingly, peak frequency appears to increase with depth. A possible directional signal was identified for one of the events as well. The results presented here may aid the nuclear monitoring community in developing means of determining event depth and yield using acoustic methods, complementing existing algorithms based on seismic radiation.
Magnesium oxide (MgO)-engineered barriers used in subsurface applications will be exposed to high-concentration brine environments and may form stable intermediate phases that can alter the effectiveness of the barrier. To explore the formation of these secondary intermediate phases, MgO was aged in water and three different brine solutions and characterized with X-ray diffraction (XRD) and 1H magic angle spinning (MAS) nuclear magnetic resonance (NMR) spectroscopy. After aging, ∼4% molar equivalent of a hydrogen-containing species is formed. The 1H MAS NMR spectra resolved multiple minor phases not visible in XRD, indicating that diverse disordered proton-containing environments are present in addition to crystalline Mg(OH)2 brucite. Density functional theory (DFT) simulations for the proposed Mg-O-H-, Mg-Cl-O-H-, and Na-O-H-containing phases were performed to index resonances observed in the experimental 1H MAS NMR spectra. Although the intermediate crystal structures exhibited overlapping 1H NMR resonances in the spectra, Mg-O-H intermediates were attributed to the growth of resonances in the δ = +1.0 to 0.0 ppm region, and Mg-Cl-O-H structures produced the increasing contributions of the δ = +2.5 to 5.0 ppm resonances in the chloride-containing brines. Overall, 1H NMR analysis of aged MgO indicates the formation of a wide range of possible intermediate structures that cannot be observed or resolved in the XRD analysis.
We present VideoSwarm, a system for visualizing video ensembles generated by numerical simulations. VideoSwarm is a web application in which linked views of the ensemble each represent the data using a different level of abstraction. VideoSwarm uses multidimensional scaling to reveal relationships between a set of simulations relative to a single moment in time, and to show the evolution of video similarities over a span of time. VideoSwarm is a plug-in for Slycat, a web-based visualization framework which provides a web server, database, and Python infrastructure. The Slycat framework provides support for managing multiple users, maintains access control, and requires only a Slycat-supported commodity browser (such as Firefox, Chrome, or Safari).
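As a hedged illustration of the kind of embedding described above (not VideoSwarm's actual implementation; the frame data below are hypothetical), multidimensional scaling of a precomputed frame-distance matrix can be computed with scikit-learn:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical ensemble: 8 simulations, each video frame flattened to a feature vector
rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 1024))   # one frame (same time step) per simulation

# Pairwise Euclidean distances between simulations at this moment in time
d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)

# 2-D embedding: nearby points correspond to simulations with similar frames
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(d)
print(xy.shape)   # (8, 2)
```

Repeating the embedding for each time step, and tracing each simulation's point through the sequence of embeddings, gives the "evolution of video similarities over a span of time" view described above.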
This work demonstrates void-free Cu filling of millimeter-size Through Silicon Vias (mm-TSV) in an acid copper sulfate electrolyte using a combination of a polyoxamine suppressor and chloride, analogous to previous work filling TSV that were an order of magnitude smaller in size. For high chloride concentration (i.e., 1 mmol/L), bottom-up deposition is demonstrated with the growth front being convex in shape. Instabilities in the filling profile arise as the growth front approaches the free surface due to non-uniform coupling with electrolyte hydrodynamics. Filling is negatively impacted by large lithography-induced reentrant notches that increase the via cross section at the bottom. In contrast, deposition from low-chloride electrolytes proceeds with a passive-active transition on the via sidewalls. For a given applied potential, the location of the transition is fixed in time and the growth front is concave in shape, reflecting the gradient in chloride surface coverage. Application of a suitable potential waveform enables the location of the sidewall transition to be advanced, thereby giving rise to void-free filling of the TSV.
In order to support the codesign needs of ECP applications on current and future hardware in the area of machine learning, the ExaLearn team at Sandia studied the different machine learning use cases in three different ECP applications. This report is a summary of the needs of the three applications. The Sandia ExaLearn team will develop a proxy application representative of ECP application needs, specifically the ExaSky and EXAALT ECP projects. The proxy application will allow us to demonstrate performance-portable kernels within machine learning codes. Furthermore, current training scalability of machine learning networks in these applications is negatively affected by large batch sizes: training throughput of the network increases as batch size increases, but network accuracy and generalization worsen. The proxy application will contain hybrid model- and data-parallelism to improve training efficiency while maintaining network accuracy. The proxy application will also target optimizing 3D convolutional layers, specific to scientific machine learning, which have not been as thoroughly explored by industry.
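As a small, hedged sketch of the 3D convolutional layers mentioned above (not the proxy application itself; the layer sizes and a PyTorch-style workflow are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical volumetric input: batch of 4 samples, 1 channel, 64^3 voxels
x = torch.randn(4, 1, 64, 64, 64)

# A 3D convolutional block of the kind common in scientific ML surrogates
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

y = model(x)
print(y.shape)   # torch.Size([4, 16, 64, 64, 64])
```

Layers like these dominate the compute and memory footprint of volumetric scientific ML models, which is why the proxy application targets them for optimization and for hybrid model-/data-parallel training.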
A comprehensive comparison of the dominant sources of radiation-induced blur for radiographic imaging system performance is made. End-point energies of 6, 10, 15, and 20 MeV bremsstrahlung photon radiation produced at the Los Alamos National Laboratory Microtron facility were used to examine the performance of large-panel cerium-doped lutetium yttrium silicon oxide (LYSO:Ce) scintillators 3, 5 and 10 mm thick. The system resolution was measured and compared between the various end-point energies and scintillator thicknesses. Contrary to expectations, it is found that there was only a minor dependence of system resolution on scintillator thickness or beam end-point energy. This indicates that increased scintillator thickness does not have a dramatic effect on system performance. The data are then compared to Geant4 simulations to assess contributions to the system performance through the examination of modulation transfer functions. It was determined that the low-frequency response of the system is dominated by the radiation-induced signal, while the higher-frequency response of the system is dominated by the optical imaging of the scintillation emission.
Solid-state metal hydrides are prime candidates to replace compressed hydrogen for fuel cell vehicles due to their high volumetric capacities. Sodium aluminum hydride has long been studied as an archetype for higher-capacity metal hydrides, with improved reversibility demonstrated through the addition of titanium catalysts; however, atomistic mechanisms for surface processes, including hydrogen desorption, are still uncertain. Here, operando and ex situ measurements from a suite of diagnostic tools probing multiple length scales are combined with ab initio simulations to provide a detailed and unbiased view of the evolution of the surface chemistry during hydrogen release. In contrast to some previously proposed mechanisms, the titanium dopant does not directly facilitate desorption at the surface. Instead, oxidized surface species, even on well-protected NaAlH4 samples, evolve during dehydrogenation to form surface hydroxides with differing levels of hydrogen saturation. Additionally, the presence of these oxidized species leads to considerably lower computed barriers for H2 formation compared to pristine hydride surfaces, suggesting that oxygen may actively participate in hydrogen release, rather than merely inhibiting diffusion as is commonly presumed. These results demonstrate how close experiment–theory feedback can elucidate mechanistic understanding of complex metal hydride chemistry and the potentially impactful roles of unavoidable surface impurities.
We report on the first accurate validation of low-Z ion-stopping formalisms in the regime ranging from low-velocity ion stopping - through the Bragg peak - to high-velocity ion stopping in well-characterized high-energy-density plasmas. These measurements were executed at electron temperatures of 1.4-2.8 keV and electron number densities of 4×10²³-8×10²³ cm⁻³. For these conditions, it is experimentally demonstrated that the Brown-Preston-Singleton formalism provides a better description of the ion stopping than other formalisms around the Bragg peak, except for the ion stopping at vi ∼ 0.3vth, where the Brown-Preston-Singleton formalism significantly underpredicts the observation. It is postulated that the inclusion of nuclear-elastic scattering, and possibly coupled modes of the plasma ions, in the modeling of the ion-ion interaction may explain the discrepancy of ∼20% at this velocity, which would have an impact on our understanding of the alpha energy deposition and heating of the fuel ions, and thus reduce the ignition threshold in an ignition experiment.
In the past decade, basic physics, chemistry, and materials science research on topological quantum materials - and their potential use to implement reliable quantum computers - has rapidly expanded to become a major endeavor. A pivotal goal of this research has been to realize materials hosting Majorana quasiparticles, thereby making topological quantum computing a technological reality. While this goal remains elusive, recent data-mining studies, performed using topological quantum chemistry methodologies, have identified thousands of potential topological materials - some, and perhaps many, with potential for hosting Majoranas. We write this Review for advanced materials researchers who are interested in joining this expanding search but who are not currently specialists in topology. The first half of the Review addresses, in readily understood terms, three main areas associated with topological sciences: (1) a description of topological quantum materials and how they enable quantum computing; (2) an explanation of Majorana quasiparticles, their important topologically endowed properties, and how these arise quantum mechanically; and (3) a description of the basic classes of topological materials where Majoranas might be found. The second half of the Review details selected materials systems where intense research efforts are underway to demonstrate nontrivial topological phenomena in the search for Majoranas. Specific materials reviewed include the group II-V semiconductors (Cd3As2), the layered chalcogenides (MX2, ZrTe5), and the rare-earth pyrochlore iridates (A2Ir2O7, A = Eu, Pr). In each case, we describe crystallographic structures, bulk phase diagrams, materials synthesis methods (bulk, thin film, and/or nanowire forms), methods used to characterize topological phenomena, and potential evidence for the existence of Majorana quasiparticles.
The formation of zonal flows from inhomogeneous drift-wave (DW) turbulence is often described using statistical theories derived within the quasilinear approximation. However, this approximation neglects wave–wave collisions. Hence, some important effects such as the Batchelor–Kraichnan inverse-energy cascade are not captured within this approach. Here we derive a wave kinetic equation that includes a DW collision operator in the presence of zonal flows. Our derivation makes use of the Weyl calculus, the quasinormal statistical closure, and the geometrical-optics approximation. The obtained model conserves both the total enstrophy and the energy of the system. The derived DW collision operator breaks down at the Rayleigh–Kuo threshold. This threshold is missed by homogeneous-turbulence theory but expected from a full-wave quasilinear analysis. In the future, this theory might help better understand the interactions between drift waves and zonal flows, including the validity domain of the quasilinear approximation that is commonly used in the literature.
The Nalu Exascale Wind application assembles linear systems using data structures provided by the Tpetra package in Trilinos. This note describes the initialization and assembly process. The purpose of this note is to help Nalu developers and maintainers to understand the code surrounding linear system assembly, in order to facilitate debugging, optimizations, and maintenance.
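As an illustrative analogy only (Tpetra's actual C++ classes and assembly methods differ; the mesh and element matrix below are hypothetical), the general two-phase pattern of gathering element contributions and summing them into a global sparse matrix can be sketched with SciPy:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical 1D mesh of two-node elements; each element contributes a 2x2 block
n_nodes = 5
elements = [(i, i + 1) for i in range(n_nodes - 1)]
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])   # stand-in element matrix

rows, cols, vals = [], [], []
for conn in elements:                        # gather phase: element (row, col, value) triplets
    for a, ga in enumerate(conn):
        for b, gb in enumerate(conn):
            rows.append(ga); cols.append(gb); vals.append(ke[a, b])

# Duplicate (row, col) entries are summed on conversion, analogous to assembling
# overlapping element contributions into one global sparse matrix.
A = coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tocsr()
print(A.toarray())
```

In the actual code, the sparsity pattern (graph) is typically constructed once during initialization and the numeric values are then summed in repeatedly during assembly, which is the structure the note documents.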
The repetitive rapid solidification that occurs in metal additive manufacturing (AM) processes creates microstructures distinctly different from wrought materials. Local variability in AM microstructures (either by design or unintentional) raises questions as to how AM structures should be modeled at the part-scale to minimize modeling error. The key goal of this work is to demonstrate a posteriori error estimation applied to an AM part. It is assumed that the actual microstructure is unknown and an approximate, spatially uniform material model is used. Error bounds are calculated for many reference models based on AM microstructures with elongated grain morphology and localized or global fiber textures during a post-processing step. The current findings promote confidence that a posteriori model form error estimation could be used effectively in mechanical performance simulations of AM parts to quickly obtain quantitative error metrics between an approximate model result and many microstructure-based reference models. The a posteriori error estimation introduces significant time savings compared to computing the full reference model solutions. Tight bounds on model form error are obtained when texture variations in the reference models occur on large length scales. For materials with property variation at small length scales, multi-scale error estimation techniques are needed to properly account for the many interfaces present between areas with different properties.
The National Rotor Testbed (NRT) design verification experiment is the first test of the new NRT blades retrofitted to the existing Vestas V27 hub and nacelle operated at the Sandia Scaled Wind Farm Technology (SWiFT) facility. This document lays out a plan for pre-assembly, ground assembly, installation, commissioning, and flight testing of the NRT rotor, whose performance will be quantified. Adjustments to the torque constant and collective blade pitch will be made to ensure that the tip-speed ratio and span-wise loading are as close to the NRT design as possible. This will ensure that the NRT creates a scaled wake of the GE 1.5sle turbine. Upon completion of this test, the NRT will be in an operational state, ready for future experiments.
This application note describes some known differences in syntax, parsing, and supported features between the HSPICE and Xyce circuit simulators that might be relevant to both internal Sandia Xyce users and other performers on the DARPA Posh Open Source Hardware (POSH) program. It also presents strategies for converting HSPICE netlists and libraries to Xyce netlists and libraries.
This report explores the reliance on communication systems for bulk grid operations and considers selected options as a supplement to cybersecurity. The extreme scenario of a complete loss of communications for power grid operation is assessed, presenting a bounded, worst-case perspective. The report explores grid communications failures and how system modifications can, at increased cost, retain a moderate level of preparedness for a loss of communications and control when used in partnership with cybersecurity protocols. Doing so allows the more economic and secure operation that communication-based controls afford, while also ensuring a level of resilient operation if they are lost. The motivation for this report is the proliferation of photovoltaic (PV) resources, and more generally smart-grid resources, within the US grid, which require increasingly active and wide-area controls. Though the loss of communication and control can affect nearly any grid control system, the risk of losing load at large scales requires a broad view of system interconnectivity, so it has been evaluated from a transmission perspective in this report.
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA) (DOE 2010). This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015), Hadgu and Appel (2016), and Hadgu et al. (2017). The TSPA computing hardware (the 2014 server cluster, CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5, 11.1, and 12.0, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The FY18 task included developing an inventory of the software used for the Yucca Mountain Project process models and a preliminary assessment of the status of that software, enhancing security of the cluster, and setting up a backup system. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
The STDA05-17 milestone comprises the following three deliverables. (1) VTK-m Release 2: We will provide a release of the VTK-m software and associated documentation. The source code repository will be tagged at a stable state, and, at a minimum, tarball captures of the source code will be made available from the web site. A version of the VTK-m User's Guide documenting this release will also be made available. (2) Productionize zfp compression: The "ZFP: Compressed Floating-Point Arrays" project (WBS 1.3.4.13) is creating an implementation of ZFP compression in VTK-m. Their implementation will be focused on operating in CUDA. The VTK-m project will assist by generalizing the implementation to other devices (such as multi-core CPUs). We will also assist in productionizing the code such that it can be used by external projects and products. (3) Clip: Clip operations intersect meshes with implicit functions. This is the foundation of spatial subsetting algorithms, such as "box," and of data-based subsetting, such as "isovolume." The algorithm requires considering thousands of possible cases and is thus quite difficult to implement. This milestone will implement clipping sufficient for VisIt's and ParaView's needs.
The STDA05-16 milestone comprises the following three distinct deliverables. (1) OpenMP: VTK-m currently supports three types of devices: serial CPU, TBB, and CUDA. To run algorithms on multicore CPU-type devices (such as Xeon and Xeon Phi), TBB is required. However, there are known issues with integrating a software product using TBB with another one using OpenMP. Therefore, we will add an OpenMP device to the VTK-m software. When engaged, this device will run parallel algorithms using OpenMP directives, which will mesh more nicely with other code also using OpenMP. (2) Rendering Topological Entities: VTK-m currently supports surface rendering by tessellating data structures and rendering the resulting triangles. We will extend the current functionality to include face, edge, and point rendering. (3) Better Dynamic Types Implementation: For the best efficiency across all platforms, VTK-m algorithms use static typing with C++ templates. However, many libraries like VTK, ParaView, and VisIt use dynamic types with virtual functions because data types often cannot be determined at compile time. We have an interface in VTK-m to merge these two typing mechanisms by generating all possible combinations of static types when faced with a dynamic type. Although this mechanism works, it generates very large executables and takes a very long time to compile. As we move forward, it is clear that these problems will get worse and become infeasible at exascale. We will rectify the problem by introducing some level of virtual methods, which require only a single code path, within VTK-m algorithms. This first milestone produces a design document proposing an approach to the new system.
We have completed a series of small-scale cook-off experiments of ammonium nitrate (AN) prills in our Sandia Instrumented Thermal Ignition test at nominal packing densities of about 0.8 g/cm3. We increased the boundary temperature of our aluminum confinement cylinder from room temperature to a prescribed set-point temperature in 10 min. Our set-point temperature ranged from 508 to 538 K. The external temperature of the confining cylinder was held at the set-point temperature until ignition. We used type K thermocouples to measure temperatures associated with several polymorphic phase changes as well as melting and boiling. As the AN boiled, our thermocouples were destroyed by corrosion, which may have been caused by reaction of hot nitric acid (HNO3) with nickel to form nickel nitrate, Ni(NO3)2. Videos of the corroding thermocouples showed a green solution that was similar to the color of Ni(NO3)2. We found that ignition was imminent as the AN boiling point was exceeded. Ignition of the AN prills was modeled by solving the energy equation with an energy source due to desorption of moisture and decomposition of AN to form equilibrium products. A Boussinesq approximation was used in conjunction with the momentum equation to model flow of the liquid AN. We found that the prediction of ignition was not sensitive to small perturbations in the latent enthalpies.
Due to the complex multiscale interaction between intense turbulence and relatively weak flames, turbulent premixed flames in the thin and broken reaction zones regimes exhibit strong finite-rate chemistry and strain effects and are hence challenging to model. In this work, a laboratory premixed jet flame in the broken reaction zones regime, which has recently been studied using direct numerical simulation (DNS), is modeled using a large eddy simulation (LES)/dynamic thickened flame (DTF) approach with detailed chemistry. The presence of substantial flame thickening due to strong turbulence-chemistry interactions, which can be characterized by a high Karlovitz number (Ka), requires the DTF model to thicken the flame adaptively based on the local resolution of flame scales. Here, an appropriate flame sensor and a strain-sensitive flame thickness are used to automatically determine the thickening location and thickening factor, respectively. To account for finite-rate chemistry and strain effects, the chemistry is described in two different ways: (1) detailed chemistry denoted as full transport and chemistry (FTC), and (2) tabulated chemistry based on a strained premixed flamelet (SPF) model. The performance of the augmented LES/DTF approach for modeling the high Ka premixed flame is assessed through detailed a posteriori comparisons with DNS of the same flame. It is found that the LES/DTF/FTC model is capable of reproducing most features of the high Ka turbulent premixed flame, including accurate CO and NO prediction. The LES/DTF/SPF model has the potential to capture the impact of strong turbulence on the flame structure and provides reasonable prediction of pollutant emissions at a reasonable computational cost. In order to identify the impact of aerodynamic strain, the turbulent flame structure is analyzed and compared with unstrained and strained premixed flamelet solutions. The results indicate that detailed strain effects should be considered when using tabulated methods to model high Ka premixed flames.
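For reference, the thickened-flame scaling that underlies DTF-type approaches is commonly written as below (a sketch of the standard formulation with assumed notation: F is the thickening factor, E a sub-grid efficiency function, and S the flame sensor; the exact forms used in this work may differ):

```latex
% Standard dynamically thickened-flame scaling (assumed notation, not copied from the paper):
% diffusivity is multiplied by F and the reaction rate divided by F, preserving the laminar
% flame speed while thickening the flame; the sensor S localizes thickening to the flame zone.
\[
D \;\rightarrow\; E\,F\,D, \qquad
\dot{\omega} \;\rightarrow\; \frac{E\,\dot{\omega}}{F}, \qquad
F \;=\; 1 + \left(F_{\max}-1\right) S .
\]
```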
Shin, Dong H.; Richardson, Edward S.; Aparece-Scutariu, Vlad; Minamoto, Yuki; Chen, Jacqueline H.
The link between the distribution of fluid residence time and the distribution of reactive scalars is analysed using Direct Numerical Simulation data. Information about the reactive scalar distribution is needed in order to model the reaction terms that appear in Large Eddy and Reynolds-Averaged simulations of turbulent reacting flows. The lifted flame is simulated taking account of multi-step chemistry for dimethyl-ether fuel. Due to autoignition and flame propagation, the reaction progress increases with residence time. The variation of fluid residence time is evaluated by solving an Eulerian transport equation for the fluid age. The fluid age is a passive scalar with a spatially-uniform source term, meaning that its moments and dissipation rates in turbulent flows can be modelled using closures already established for conserved scalars such as mixture fraction. In combination with the mixture fraction, the fluid age serves as a useful mapping variable to distinguish younger less-reacted fluid near the inlet from older more-reacted fluid downstream. The local fluctuations of mixture fraction and fluid age have strong negative correlation and, building upon established presumed-pdf models for mixture fraction, this feature can be used to construct an accurate presumed-pdf model for the joint mixture fraction/fluid age pdf. It is demonstrated that the double-conditional first-order moment closure combined with the proposed presumed model for the joint pdf of mixture fraction and fluid age gives accurate predictions for unconditional reaction rates - both for pre-ignition radical species produced by low-temperature processes upstream of the flame base, and for major species that are produced at the flame front.
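One common form of the Eulerian fluid-age transport equation referred to above is sketched below (assumed notation: a is the fluid age, D a diffusivity; the spatially uniform source term is what makes the age grow at unit rate along fluid trajectories; see the paper for the exact equation used):

```latex
% Passive-scalar transport of the fluid age a with a spatially uniform source (assumed form).
\[
\frac{\partial (\rho a)}{\partial t} + \nabla\!\cdot\!\left(\rho\,\mathbf{u}\,a\right)
= \nabla\!\cdot\!\left(\rho D\,\nabla a\right) + \rho ,
\qquad \text{i.e.}\qquad \frac{D a}{D t} = 1 \;+\; \text{molecular diffusion}.
\]
```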
A three-dimensional direct numerical simulation (DNS) is performed for a turbulent hydrogen-air flame, represented with detailed chemistry, stabilized in a model gas-turbine combustor. The combustor geometry consists of a mixing duct followed by a sudden expansion and a combustion chamber, which represents a geometrically simplified version of Ansaldo Energia's GT26/GT36 sequential combustor design. In this configuration, a very lean blend of hydrogen and vitiated air is prepared in the mixing duct and convected into the combustion chamber, where the residence time from the inlet of the mixing duct to the combustion chamber is designed to coincide with the ignition delay time of the mixture. The results show that when the flame is stabilized at its design position, combustion occurs due to both autoignition and flame propagation (deflagration) modes at different locations within the combustion chamber. A chemical explosive mode analysis (CEMA) reveals that most of the fuel is consumed due to autoignition in the bulk-flow along the centerline of the combustor, and lower amounts of fuel are consumed by flame propagation near the corners of the sudden expansion, where the unburnt temperature is reduced by the thermal wall boundary layers. An unstable operating condition is also identified, wherein periodic autoignition events occur within the mixing duct. These events appear upstream of the intended stabilization position, due to positive temperature fluctuations induced by pressure waves originating from within the combustion chamber. The present DNS investigation represents the initial step of a comprehensive research effort aimed at gaining detailed physical insight into the rate-limiting processes that govern the sequential combustor behavior and at avoiding the occurrence of off-design autoignition events.
The ignition process in diesel engines is highly complex and incompletely understood. In the present study, two-dimensional direct numerical simulations are performed to investigate the ignition dynamics and their sensitivity to thermochemical and mixing parameters. The thermochemical and mixing conditions are matched to the benchmark Spray A experiment from the Engine Combustion Network. The results reveal a complex ignition process with overlapping stages of low-temperature ignition (cool flames), rich premixed ignition, and nonpremixed ignition, which are qualitatively consistent with prior experimental and numerical investigations; however, this is the first time that fully resolved simulations have been reported at the actual Spray A thermochemical condition. Parametric variations are then performed for the Damköhler number Da, oxidiser temperature, oxygen concentration, and peak mixture fraction (a measure of premixedness), to study their effect on the ignition dynamics. It is observed that with both increasing oxidiser temperature and decreasing oxygen concentration, the cool flame moves to richer mixtures, the overlap in the ignition stages decreases, and the (nondimensional) time taken to reach a fully burning state increases. With increasing Da, the cool-flame speed is decreased due to lower mean mixing rates, which causes a delayed onset of high-temperature ignition. With increasing peak mixture fraction, the onset of each stage of ignition is not affected, but the overall duration of the ignition increases, leading to a longer burn duration. Overall, the results suggest that turbulence-chemistry interactions play a significant role in determining the timing and location in composition space of the entire ignition process.
In turbulent premixed flame propagation, the formation of isolated pockets of reactants or products is associated with flame pinch-off events which cause rapid changes in the flame surface area. Previous topological analysis of these phenomena has been carried out based on Morse theory and Direct Numerical Simulation (DNS) in two spatial dimensions. The present work extends the topological analysis to three dimensions with emphasis on the formation and subsequent burnout of reactant pockets. Singular behaviour observed previously for terms of the Surface Density Function (SDF) transport equation in the two-dimensional case is shown to occur also in three dimensions. Further singular behaviour is observed in the displacement speed close to reactant pocket burnout. The theory is compared against DNS data from hydrogen-air flames and good agreement is obtained.
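For context, the quantities analysed above are commonly defined as follows (a standard form of the displacement speed and SDF transport equation with assumed notation: c is the reaction progress variable, σ = |∇c| the SDF, N = -∇c/|∇c| the flame normal, and a_T the tangential strain rate; the paper's exact conventions may differ):

```latex
% Standard definitions used in SDF analyses (assumed notation, not copied from the paper).
\[
S_d \;=\; \frac{1}{|\nabla c|}\,\frac{D c}{D t},
\qquad
\frac{\partial \sigma}{\partial t} + \nabla\!\cdot\!\left(\mathbf{u}\,\sigma\right)
\;=\; a_T\,\sigma \;-\; \nabla\!\cdot\!\left(S_d\,\mathbf{N}\,\sigma\right).
\]
```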
Dalakoti, Deepak K.; Krisman, Alex; Savard, Bruno; Wehrfritz, Armin; Wang, Haiou; Day, Marc S.; Bell, John B.; Hawkes, Evatt R.
We investigate the influence of inflow velocity (Vin) and scalar dissipation rate (χ) on the flame structure and stabilisation mechanism of steady, laminar partially premixed n-dodecane edge flames stabilised on a convective mixing layer. Numerical simulations were performed for three different χ profiles and several Vin (Vin = 0.2 to 2.5 m/s). The ambient thermochemical conditions were the same as the Engine Combustion Network's (ECN) Spray A flame, which in turn represents conditions in a typical heavy duty diesel engine. The results of a combustion mode analysis of the simulations indicate that the flame structure and stabilisation mechanism depend on Vin and χ. For low Vin the flame is attached. Increasing Vin causes the high-temperature chemistry (HTC) flame to lift off, while the low-temperature chemistry (LTC) flame is still attached. A unique speed SR associated with this transition is defined as the velocity at which the lifted height has the maximum sensitivity to changes in Vin. This transition velocity is negatively correlated with χ. Near Vin = SR, a tetrabrachial flame structure is observed, consisting of a triple flame stabilised by flame propagation into the products of an upstream LTC branch. Further increasing the inlet velocity changes the flame structure to a pentabrachial one, where an additional HTC ignition branch is observed upstream of the triple flame and ignition begins to contribute to the flame stabilisation. At large Vin, the LTC is eventually lifted, and the speed at which this transition occurs is insensitive to χ. Further increasing Vin increases the contribution of ignition to flame stabilisation until the flame is completely ignition stabilised. Flow divergence caused by the LTC branch reduces the χ at the HTC branches, making the HTC more resilient to χ. The results are discussed in the context of identification of possible stabilisation modes in turbulent flames.
Krattiger, Dimitri; Wu, Long; Zacharczuk, Martin; Buck, Martin; Kuether, Robert J.; Allen, Matthew S.; Tiso, Paolo; Brake, Matthew R.W.
The Hurty/Craig-Bampton method in structural dynamics represents the interior dynamics of each subcomponent in a substructured system with a truncated set of normal modes and retains all of the physical degrees of freedom at the substructure interfaces. This makes the assembly of substructures into a reduced-order system model relatively simple, but means that the reduced-order assembly will have as many interface degrees of freedom as the full model. When the full-model mesh is highly refined, and/or when the system is divided into many subcomponents, this can lead to an unacceptably large system of equations of motion. To overcome this, interface reduction methods aim to reduce the size of the Hurty/Craig-Bampton model by reducing the number of interface degrees of freedom. This research presents a survey of interface reduction methods for Hurty/Craig-Bampton models, and proposes improvements and generalizations to some of the methods. Some of these interface reductions operate on the assembled system-level matrices while others perform reduction locally by considering the uncoupled substructures. The advantages and disadvantages of these methods are highlighted and assessed through comparisons of results obtained from a variety of representative linear FE models.
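As context for the methods surveyed, the following sketch assembles a standard Hurty/Craig-Bampton reduction basis for one substructure from partitioned mass and stiffness matrices; the matrices, boundary partitioning, and number of retained fixed-interface modes are hypothetical placeholders rather than any model from the study.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary_dofs, n_modes):
    """Build a Hurty/Craig-Bampton reduction basis.

    M, K          : full substructure mass and stiffness matrices
    boundary_dofs : indices of interface (boundary) DOFs retained physically
    n_modes       : number of fixed-interface normal modes to keep
    """
    n = M.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)            # interior DOFs

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Constraint modes: static interior response to unit boundary motion
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: interior eigenmodes with the boundary clamped
    _, Phi = eigh(Kii, Mii, subset_by_index=[0, n_modes - 1])

    # Reduction basis T maps (modal coordinates, boundary DOFs) -> full DOFs
    T = np.zeros((n, n_modes + len(b)))
    T[np.ix_(i, np.arange(n_modes))] = Phi
    T[np.ix_(i, n_modes + np.arange(len(b)))] = Psi
    T[np.ix_(b, n_modes + np.arange(len(b)))] = np.eye(len(b))

    return T.T @ M @ T, T.T @ K @ T, T
```

The boundary block of this basis keeps every physical interface degree of freedom, and that block is precisely what the interface-reduction methods surveyed in the paper act upon.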
Existing machines for lazy evaluation use a flat representation of environments, storing the terms associated with free variables in an array. Combined with a heap, this structure supports the shared intermediate results required by lazy evaluation. We propose and describe an alternative approach that uses a shared environment to minimize the overhead of delayed computations. We show how a shared environment can act as both an environment and a mechanism for sharing results. To formalize this approach, we introduce a calculus that makes the shared environment explicit, as well as a machine to implement the calculus, the Cactus Environment Machine. A simple compiler implements the machine and is used to run experiments for assessing performance. The results show reasonable performance and suggest that incorporating this approach into real-world compilers could yield performance benefits in some scenarios.
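As a minimal illustration of the sharing that call-by-need evaluation requires, the snippet below memoizes a delayed computation so that two consumers force it only once; this is a generic Python sketch of shared intermediate results, not the shared-environment representation or the Cactus Environment Machine itself.

```python
class Thunk:
    """A delayed computation whose result is computed at most once (call-by-need)."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:              # first demand: evaluate and memoize
            self._value = self._compute()
            self._forced = True
            self._compute = None          # drop the closure once evaluated
        return self._value

# Two expressions share the same delayed sub-computation, so the expensive
# work runs once even though both consumers demand its result.
expensive = Thunk(lambda: sum(x * x for x in range(10_000)))
first = Thunk(lambda: expensive.force() + 1)
second = Thunk(lambda: expensive.force() * 2)
print(first.force(), second.force())
```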
The computational burden of a large-eddy simulation for reactive flows is exacerbated in the presence of uncertainty in flow conditions or kinetic variables. A comprehensive statistical analysis, with a sufficiently large number of samples, remains elusive. Statistical learning is an approach that allows for extracting more information using fewer samples. Such procedures, if successful, will greatly enhance the predictability of models in the sense of improving exploration and characterization of uncertainty due to model error and input dependencies, all while being constrained by the size of the associated statistical samples. In this paper, it is shown how a recently developed procedure for probabilistic learning on manifolds can serve to improve the predictability in a probabilistic framework of a scramjet simulation. The estimates of the probability density functions of the quantities of interest are improved together with estimates of the statistics of their maxima. It is also demonstrated how the improved statistical model adds critical insight to the performance of the model.
We present a meshfree quadrature rule for compactly supported nonlocal integro-differential equations (IDEs) with radial kernels. We apply this rule to develop a meshfree discretization of a peridynamic solid mechanics model that requires no background mesh. Existing discretizations of peridynamic models have been shown to exhibit a lack of asymptotic compatibility to the corresponding linearly elastic local solution. By posing the quadrature rule as an equality constrained least squares problem, we obtain asymptotically compatible convergence by introducing polynomial reproduction constraints. Our approach naturally handles traction-free conditions, surface effects, and damage modeling for both static and dynamic problems. We demonstrate high-order convergence to the local theory by comparing to manufactured solutions and to cases with crack singularities for which an analytic solution is available. Finally, we verify the applicability of the approach to realistic problems by reproducing high-velocity impact results from the Kalthoff–Winkler experiments.
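To make the quadrature construction concrete, the one-dimensional sketch below computes weights for an arbitrary point cloud by solving an equality-constrained least-squares problem through its KKT system, with monomial reproduction over the kernel support as the constraints; the points, support radius, and polynomial order are illustrative, and the peridynamic kernel itself is not represented.

```python
import numpy as np

def meshfree_weights(neighbors, delta, order):
    """Quadrature weights over a 1D neighborhood [-delta, delta] that exactly
    integrate monomials up to `order` (polynomial reproduction constraints),
    chosen as the minimum-norm solution of an equality-constrained least
    squares problem via its KKT system."""
    x = np.asarray(neighbors, dtype=float)
    m = order + 1
    # Row k enforces sum_j w_j * x_j**k = integral of x**k over [-delta, delta]
    P = np.vstack([x**k for k in range(m)])
    b = np.array([(delta**(k + 1) - (-delta)**(k + 1)) / (k + 1) for k in range(m)])
    # KKT system for: minimize ||w||^2 subject to P w = b
    n = x.size
    kkt = np.block([[2.0 * np.eye(n), P.T],
                    [P, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    return np.linalg.solve(kkt, rhs)[:n]

# Example: weights on an irregular point cloud that still integrate cubics exactly
pts = np.array([-0.9, -0.5, -0.2, 0.1, 0.4, 0.8])
w = meshfree_weights(pts, delta=1.0, order=3)
print(w @ pts**3, w @ pts**2)   # ~0.0 and ~2/3, the exact moments on [-1, 1]
```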
Finite element models are regularly used in many disciplines to predict the dynamic behavior of a structure under certain loads and subject to various boundary conditions, in particular when analytical models cannot be used due to geometric complexity. One such example is a structure with an entrained fluid cavity. To assist an experimental study of the acoustoelastic effect, numerical studies of an enclosed cylinder were performed to design the test hardware. With a system that demonstrates acoustoelastic coupling, it was then desired to decouple the structure from the fluid by modifying either the fluid or the structure. In this paper, simulation is used to apply various changes and observe the effects on the structural response in order to choose an effective decoupling approach for the experimental study.
Several recent studies (Mayes, R.L., Pacini, B.R., Roettgen, D.R.: A modal model to simulate typical structural dynamics nonlinearity. In: Proceedings of the 34th International Modal Analysis Conference. Orlando, FL, (2016); Pacini, B.R., Mayes, R.L., Owens, B.C., Schultz, R.: Nonlinear finite element model updating, part I: experimental techniques and nonlinear modal model parameter extraction. In: Proceedings of the 35th international modal analysis conference, Garden Grove, CA, (2017)) have investigated predicting nonlinear structural vibration responses using modified modal models. In such models, a nonlinear element is added in parallel to the traditional linear spring and damping elements. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. Previous studies have predominantly applied this method to idealistic structures. In this work, the nonlinear modal modeling technique is applied to a more realistic industrial aerospace structure which exhibits complex bilinear behavior. Linear natural frequencies, damping values, and mode shapes are first extracted from low level shaker testing. Subsequently, the structure is excited using high level tailored shaker inputs. The resulting response data are modally filtered and used to empirically derive the nonlinear elements which, together with their linear counterparts, comprise the nonlinear modal model. This model is then used in both modal and physical domain simulations. Comparisons to measured data are made and the performance of the nonlinear modal model to predict this complex bilinear behavior is discussed.
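As a minimal illustration of the modal-model form just described (a nonlinear element in parallel with the linear modal spring and damper), the sketch below integrates a single modal degree of freedom with an assumed bilinear stiffness element; the element form, parameter values, and forcing are placeholders, not the empirically derived elements of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single modal DOF: q'' + 2*zeta*wn*q' + wn**2*q + f_nl(q) = f(t),
# with a nonlinear element acting in parallel with the linear spring and damper.
wn, zeta = 2 * np.pi * 40.0, 0.01     # illustrative modal frequency and damping ratio
alpha = 0.5                           # illustrative extra stiffness on one side (bilinear)

def f_nl(q):
    # Bilinear stiffness: the modal spring stiffens when the coordinate is positive
    return alpha * wn**2 * q if q > 0.0 else 0.0

def rhs(t, y):
    q, qdot = y
    force = 50.0 * np.sin(2 * np.pi * 40.0 * t)   # illustrative single-tone modal force
    return [qdot, force - 2 * zeta * wn * qdot - wn**2 * q - f_nl(q)]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
print(np.max(sol.y[0]), np.min(sol.y[0]))         # asymmetric peaks reveal the bilinearity
```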
The atmospheric dispersion of contaminants in the wake of a large urban structure is a challenging fluid mechanics problem of interest to the scientific and engineering communities. Magnetic Resonance Velocimetry (MRV) is a relatively new technique that leverages diagnostic equipment used primarily by the medical field to make 3D engineering measurements of water flow and contaminant dispersal. SIERRA/Fuego, a computational fluid dynamics (CFD) code at Sandia National Laboratories, is employed to make detailed comparisons to the dataset to evaluate the quantitative and qualitative accuracy of the model. The comparison exercise shows good agreement between model and experimental results, with the wake region downstream of the tall building presenting the most significant challenge to the quantitative accuracy of the model. Model uncertainties are assessed through parametric variations. Some observations are made in relation to the future utility of MRV and CFD, and some productive follow-on activities are suggested that can help mature the science of flow modeling and experimental testing.
We consider a standard elliptic partial differential equation and propose a geometric multigrid algorithm based on Dirichlet-to-Neumann (DtN) maps for hybridized high-order finite element methods. The proposed unified approach is applicable to any locally conservative hybridized finite element method including multinumerics with different hybridized methods in different parts of the domain. For these methods, the linear system involves only the unknowns residing on the mesh skeleton, and constructing intergrid transfer operators is therefore not trivial. The key to our geometric multigrid algorithm is the physics-based energy-preserving intergrid transfer operators which depend only on the fine scale DtN maps. Thanks to these operators, we completely avoid upscaling of parameters and no information regarding subgrid physics is explicitly required on coarse meshes. Moreover, our algorithm is agglomeration-based and can straightforwardly handle unstructured meshes. We perform extensive numerical studies with hybridized mixed methods, hybridized discontinuous Galerkin methods, weak Galerkin methods, and hybridized versions of interior penalty discontinuous Galerkin methods on a range of elliptic problems including subsurface flow through highly heterogeneous porous media. We compare the performance of different smoothers and analyze the effect of stabilization parameters on the scalability of the multigrid algorithm.
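For orientation, the generic two-grid correction below (for a 1D Poisson problem with a weighted-Jacobi smoother and standard interpolation/restriction) shows the roles that the smoother and the intergrid transfer operators play in any geometric multigrid cycle; it does not implement the DtN-based, energy-preserving transfer operators or the skeleton unknowns of the hybridized methods described above.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 3-point finite-difference Laplacian on n interior points."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A_f, b, x):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    n_f = b.size
    n_c = (n_f - 1) // 2
    # Prolongation by linear interpolation; restriction as its (scaled) transpose
    P = np.zeros((n_f, n_c))
    for j in range(n_c):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T
    A_c = R @ A_f @ P                          # Galerkin coarse operator

    x = jacobi(A_f, b, x)                      # pre-smoothing
    r_c = R @ (b - A_f @ x)                    # restrict the residual
    x = x + P @ np.linalg.solve(A_c, r_c)      # coarse-grid correction
    return jacobi(A_f, b, x)                   # post-smoothing

n, h = 63, 1.0 / 64
A, b, x = poisson_matrix(n, h), np.ones(63), np.zeros(63)
for _ in range(10):
    x = two_grid(A, b, x)
print(np.linalg.norm(b - A @ x))               # residual shrinks by a fixed factor per cycle
```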
Data science includes a variety of scientific methods and processes to extract knowledge from data drawn from various sources. The integration of interdisciplinary fields such as mathematics, statistics, information science, and computer science affords techniques to analyze large volumes of data to arrive at unique insights and make data-driven decisions (Sinelnikov et al., 2015) in real time. The technique lends itself to applications across many domains, including hazard assessments, analysis of near-miss data, identification of leading and lagging indicators from past accidents, and others. Benefits of this technique include efficiency due to improved data acquisition. Near-miss data represent an important source for identifying conditions that lead to accidents and for developing strategies to prevent them. Analysis of near-miss data sets can involve various techniques. This paper explores the use of data science to mine accident reports, with a special emphasis on near misses, to uncover occurrences that were not initially identified in the documentation. Data-science techniques such as text analysis facilitate searching large volumes of data to uncover patterns for more informed decisions. Regarding near-miss data, data science techniques can be used to test both the ability to uncover new hazards and hazardous preconditions and the accuracy of those findings. With the benefits of processing large data sets and uncovering new hazards, considerations and implications regarding how these capabilities might influence safety culture are also discussed.
Malicious cyber-attacks are becoming increasingly prominent due to the advance of technology and attack methods over the last decade. These attacks have the potential to bring down critical infrastructures, such as nuclear power plants (NPPs), which are so vital to the country that their incapacitation would have debilitating effects on national security, public health, or safety. Despite the devastating effects a cyber-attack could have on NPPs, it is unclear how control room operations would be affected in such a situation. In this project, the authors are collaborating with NPP operators to discern the impact of cyber-attacks on control room operations and lay out a framework to better understand the control room operators' tasks and decision points. A cyber emulation of a digital control system was developed and coupled with a generic pressurized water reactor (GPWR) training simulator at Idaho National Laboratory. Licensed operators were asked to complete a series of scenarios on the simulator, some of which were purposely obfuscated; that is, indicators were purposely displaying inaccurate information. Of interest is how this obfuscation impacts the ability to keep the plant safe and how it affects operators' perceptions of workload and performance. Results, conclusions, and lessons learned from this pilot experiment will be discussed. This research sheds light on how cyber events impact plant operations.
Aerospace systems and components are designed and qualified against several operational environments. Some of these environments are climatic, mechanical, and electrical in nature. Traditionally, mechanical test specifications are derived with the goal of qualifying a system or component to a suite of independent mechanical environments in series. True operational environments, however, are composed of complex, combined events. This work examines the effect of combined mechanical shock and vibration environments on the response of a dynamic system. Responses under combined environments are compared to those under single environments, and the adequacy and limitations of conventional, single-environment test approaches (shock only or vibration only) are assessed. Test integration strategies for combined shock and vibration environments are also discussed.
Marques, Osni A.; Bernholdt, David E.; Raybourn, Elaine M.; Barker, Ashley D.; Hartman-Baker, Rebecca J.
In this contribution, we discuss our experiences organizing the Best Practices for HPC Software Developers (HPC-BP) webinar series, an effort for the dissemination of software development methodologies, tools, and experiences to improve developer productivity and software sustainability. HPC-BP is an outreach component of the IDEAS Productivity Project [4] and has been designed to support the IDEAS mission to work with scientific software development teams to enhance their productivity and the sustainability of their codes. The series, which was launched in 2016, has just presented its 22nd webinar. We summarize and distill our experiences with these webinars, including what we consider to be "best practices" in the execution of both individual webinars and a long-running series like HPC-BP. We also discuss future opportunities and challenges in continuing the series.
Background: Prior research in falls risk prediction often relies on qualitative and/or clinical methods. There are two challenges with these methods. First, qualitative methods typically use falls history to determine falls risk. Second, clinical methods do not quantify the uncertainty in the classification decision. In this paper, we propose using Bayesian classification to predict falls risk using vectors of gait variables shown to contribute to falls risk. Research Questions: (1) Using a vector of risk ratios for specific gait variables shown to contribute to falls risk, how can older adults be classified as low or high falls risk? and (2) how can the uncertainty in the classifier decision be quantified when using a vector of gait variables? Methods: Using a pressure sensitive walkway, biomechanical measurements of gait were collected from 854 adults over the age of 65. In our method, we first determine low and high falls risk labels for vectors of risk ratios using the k-means algorithm. Next, the posterior probability of low or high falls risk class membership is obtained from a two component Gaussian mixture model (GMM) of gait vectors, which enables risk assessment directly from the underlying biomechanics. We classify the gait vectors using a threshold based on Youden's J statistic. Results: Through a Monte Carlo simulation and an analysis of the receiver operating characteristic (ROC), we demonstrate that our Bayesian classifier, when compared to the k-means falls risk labels, achieves an accuracy greater than 96% at predicting low or high falls risk. Significance: Our analysis indicates that our approach based on a Bayesian framework and an individual's underlying biomechanics can predict falls risk while quantifying uncertainty in the classification decision.
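A compact sketch of the classification pipeline described above, using scikit-learn with synthetic vectors standing in for the walkway-derived gait risk ratios; the sample sizes, feature dimension, and cluster structure are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic stand-in for 854 four-component gait risk-ratio vectors (two loose groups)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),
               rng.normal(1.5, 1.0, size=(354, 4))])

# Step 1: unsupervised low/high falls-risk labels from k-means with k = 2
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Step 2: a two-component GMM provides a posterior probability of class membership;
# pick the component whose mean is closest to the k-means cluster labeled 1
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
high = int(np.argmin(np.linalg.norm(gmm.means_ - km.cluster_centers_[1], axis=1)))
posterior = gmm.predict_proba(X)[:, high]

# Step 3: classify by thresholding the posterior at the maximum of Youden's J = TPR - FPR
fpr, tpr, thresholds = roc_curve(labels, posterior)
threshold = thresholds[np.argmax(tpr - fpr)]
predicted = (posterior >= threshold).astype(int)
print(f"threshold = {threshold:.3f}, "
      f"agreement with k-means labels = {(predicted == labels).mean():.1%}")
```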
Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.
Al0.26Ga0.74N/GaN on SiC lateral Schottky diodes were fabricated with variable anode-to-cathode spacing and were analyzed for blocking and on-state device performance. On-chip normally-on High Electron Mobility Transistor (HEMT) structures were also fabricated for a comparison of blocking characteristics. The Schottky diode displayed an ideality factor of 1.59 with a Ni/AlGaN zero bias barrier height of 1.18 eV and a flat band barrier height of 1.59 eV. For anode-to-cathode spacings between 10 and 100 μm, an increase in median breakdown voltages from 529 V to 8519 V and median specific on-resistance (Ron-sp) from 1.5 to 60.7 mΩ cm2 was observed with an increase in spacing. The highest performing diode had a lateral figure of merit of 1.37 GW/cm2 corresponding to a breakdown voltage upwards of 9 kV and a Ron-sp of 59 mΩ cm2. This corresponds to the highest Schottky diode breakdown voltage reported thus far with an Al0.26Ga0.74N/GaN lateral structure.
Statechart modelling notations, with so-called ‘run to completion’ semantics and simulation tools for validation, are popular with engineers for designing systems. However, they do not support formal refinement and they lack formal static verification methods and tools. For example, properties concerning the synchronisation between different parts of a system may be difficult to verify for all scenarios, and impossible to verify at an abstract level before the full details of substates have been added. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible, restricting instantiation and testing to a validation role. In this paper, we introduce a notion of refinement, similar to that of Event-B, into a ‘run to completion’ Statechart modelling notation, and leverage Event-B’s tool support for proof. We describe the pitfalls in translating ‘run to completion’ models into Event-B refinements and suggest a solution. We illustrate the approach using our prototype translation tools and show by example, how a synchronisation property between parallel Statecharts can be automatically proven at an intermediate refinement level.
We describe for the first time a hydrogen-bonded acid (HBA) polymer, poly[methyl[3-(2-hydroxyl, 4,6-bistrifluoromethyl)-phenyl]propylsiloxane] (DKAP), as a stationary phase for micro gas chromatography (μGC) of organophosphate (OP) chemical warfare agent (CWA) surrogates, dimethylmethylphosphonate (DMMP), diisopropylmethylphosphonate (DIMP), diethylmethylphosphonate (DEMP), and trimethylphosphate (TMP), with high selectivity. Absorption of OPs to DKAP was one to several orders of magnitude higher relative to commercial polar, mid-polar, and nonpolar stationary phases. We also present, for the first time, thermodynamic studies on the absorption of OP vapors and quantitative binding-energy data for interactions with various stationary phases. These data help to identify the best pair of hetero-polar columns for a two-dimensional GC system, employing a nonpolar stationary phase as GC1 and DKAP as the GC2 stationary phase, for selective and rapid field detection of CWAs.
Numerically modeling the chatter behavior of small electrical components embedded within larger components is challenging. Reduced order models (ROMs) have been developed to assess these components' chatter behavior in vibration and shock environments. These ROMs require experimental validation to instill confidence that these components meet their performance requirements. While the ROMs achieve conservative results, experimental validation is still required, especially because the ROMs neglect the viscous damping effects of the fluid that surrounds these particular components within their system. Dynamic ring-down data from the electrical receptacles in air are explored and assessed to determine whether they provide a validation data set for this ROM. Additional data are examined in which dynamic ring-down measurements were taken on the receptacle while submerged in oil, resulting in a unique experimental setup that should serve as a proof of concept for this type of testing on small components in unique environments.
Historically, control systems have primarily depended upon their isolation from the Internet and from traditional information technology (IT) networks as a means of maintaining secure operation in the face of potential remote attacks over computer networks. However, these networks are incrementally being upgraded and are becoming more interconnected with external networks so they can be effectively managed and configured remotely. Examples of control systems include the electrical power grid, smart grid networks, microgrid networks, oil and natural gas refineries, water pipelines, and nuclear power plants. Given that these systems are becoming increasingly connected, computer security is an essential requirement as compromises can result in consequences that translate into physical actions and significant economic impacts that threaten public health and safety. Moreover, because the potential consequences are so great and these systems are remotely accessible due to increased interconnectivity, they become attractive targets for adversaries to exploit via computer networks. Several examples of attacks on such systems that have received a significant amount of attention include the Stuxnet attack, the US-Canadian blackout of 2003, the Ukraine blackout in 2015, and attacks that target control system data itself. Improving the cybersecurity of electrical power grids is the focus of our research.
Experimental modal analysis via shaker testing introduces errors in the measured structural response that can be attributed to the force transducer assembly fixed on the vibrating structure. Previous studies developed transducer mass-cancellation techniques for systems with translational degrees of freedom; however, studies addressing this problem when rotations cannot be neglected are sparse. In situations where rotations cannot be neglected, the apparent mass of the transducer is dependent on its geometry and is not the same in all directions. This paper investigates a method for correcting the measured system response that is contaminated with the effects of the attached force transducer mass and inertia. Experimental modal substructuring facilitated estimations of the translational and rotational mode shapes at the transducer connection point, thus enabling removal of an analytical transducer model from the measured test structure resulting in the corrected response. A numerical analysis showed the feasibility of the proposed approach in estimating the correct modal frequencies and forced response. To provide further validation, an experimental analysis showed the proposed approach applied to results obtained from a shaker test more accurately reflected results obtained from a hammer test.
Qualification of complex systems often involves shock and vibration testing at the component level to ensure each component is robust enough to survive the specified environments. In order for the component testing to adequately satisfy the system requirements, the component must exhibit a similar dynamic response between the laboratory component test and system test. There are several aspects of conventional testing techniques that may impair this objective. Modal substructuring provides a framework to accurately assess the level of impairment introduced in the laboratory setup. If the component response is described in terms of fixed-base modes in both the laboratory and system configurations, we can gain insight into whether the laboratory test is exercising the appropriate damage potential. Further, the fixed-base component response in the system can be used to determine the correct rigid body laboratory fixture input to overcome the errors seen in the standard component test. In this paper, we investigate the effectiveness of reproducing a system shock environment on a simple beam model with an essentially rigid fixture.
This work introduces a new method to efficiently solve optimization problems constrained by partial differential equations (PDEs) with uncertain coefficients. The method leverages two sources of inexactness that trade accuracy for speed: (1) stochastic collocation based on dimension-adaptive sparse grids (SGs), which approximates the stochastic objective function with a limited number of quadrature nodes, and (2) projection-based reduced-order models (ROMs), which generate efficient approximations to PDE solutions. These two sources of inexactness lead to inexact objective function and gradient evaluations, which are managed by a trust-region method that guarantees global convergence by adaptively refining the SG and ROM until a proposed error indicator drops below a tolerance specified by trust-region convergence theory. A key feature of the proposed method is that the error indicator, which accounts for errors incurred by both the SG and ROM, must be only an asymptotic error bound, i.e., a bound that holds up to an arbitrary constant that need not be computed. This enables the method to be applicable to a wide range of problems, including those where sharp, computable error bounds are not available; this distinguishes the proposed method from previous works. Numerical experiments performed on a model problem from optimal flow control under uncertainty verify global convergence of the method and demonstrate the method's ability to outperform previously proposed alternatives.
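To make the interplay between the trust-region logic and the adaptive refinement concrete, the generic sketch below uses hypothetical evaluate() and refine() callables standing in for the sparse-grid/ROM machinery; the Cauchy-point step and the specific refinement trigger are simplifications, not the error indicator or convergence theory developed in the paper.

```python
import numpy as np

def trust_region_inexact(evaluate, refine, x0, delta0=1.0, tol=1e-6, max_iter=50):
    """Basic trust-region loop driven by an inexact surrogate.

    evaluate(x) -> (f, g, err): surrogate objective, gradient, and an error
                   indicator at x (all hypothetical callables supplied by the user).
    refine()    : tightens the surrogate so that the indicator decreases.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    f, g, err = evaluate(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Refinement trigger: the (asymptotic) indicator must be small
        # relative to min(||g||, delta) before a step is trusted
        while err > 0.5 * min(np.linalg.norm(g), delta):
            refine()
            f, g, err = evaluate(x)
        # Cauchy-point step along the negative gradient, clipped to the region
        step = -min(delta / np.linalg.norm(g), 1.0) * g
        f_new, g_new, err_new = evaluate(x + step)
        pred = -g @ step                        # predicted decrease of the linear model
        rho = (f - f_new) / pred if pred > 0 else -1.0
        if rho > 0.1:                           # accept the step
            x, f, g, err = x + step, f_new, g_new, err_new
            if rho > 0.75:
                delta *= 2.0                    # good agreement: expand the region
        else:
            delta *= 0.5                        # poor agreement: shrink the region
    return x
```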
Input/output (I/O) from various sources often contends for scarcely available bandwidth. For example, checkpoint/restart (CR) protocols can help to ensure application progress in failure-prone environments. However, CR I/O alongside an application's normal, requisite I/O can increase I/O contention and might negatively impact performance. In this work, we consider different aspects (system-level scheduling policies and hardware) that optimize the overall performance of concurrently executing CR-based applications that share I/O resources. We provide a theoretical model and derive a set of necessary constraints to minimize the global waste on a given platform. Our results demonstrate that Young/Daly's optimal checkpoint interval, despite providing a sensible metric for a single, undisturbed application, is not sufficient to optimally address resource contention at scale. We show that by combining optimal checkpointing periods with contention-aware system-level I/O scheduling strategies, we can significantly improve overall application performance and maximize the platform throughput. Finally, we evaluate how specialized hardware, namely burst buffers, may help to mitigate the I/O contention problem. Altogether, these results provide critical analysis and direct guidance on how to design efficient, CR-ready, large-scale platforms without a large investment in the I/O subsystem.
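For reference, the Young/Daly interval mentioned above follows from a first-order waste model; the sketch below evaluates it for arbitrary illustrative values of the checkpoint cost and platform MTBF, and does not include the contention effects analyzed in the paper.

```python
import numpy as np

C = 60.0           # checkpoint cost in seconds (illustrative)
mu = 24 * 3600.0   # platform mean time between failures in seconds (illustrative)

# Young's first-order optimal checkpointing period: W = sqrt(2 * C * MTBF)
w_opt = np.sqrt(2.0 * C * mu)

def waste(T, C, mu):
    """First-order fraction of time lost to checkpointing plus re-execution."""
    return C / T + T / (2.0 * mu)

print(f"Young/Daly period: {w_opt / 3600:.2f} h, "
      f"waste at optimum: {waste(w_opt, C, mu):.3%}, "
      f"waste at half the period: {waste(0.5 * w_opt, C, mu):.3%}")
```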
There are several methodologies for modeling fasteners in finite element analyses. This work examines the effect of four predominant fastener modeling methods on the predicted fatigue of mock hardware that requires fasteners. Typical fastener modeling methods explored in this work consist of a spring method with no preload, a beam method with no preload, a beam method with a preload, and a solid-model representation of the fastener with preload. It is found that the different fastener modeling methods produce slightly different fatigue damage predictions, and that this modeling uncertainty is insignificant compared to the uncertainty in the input. Consequently, any of these methods is considered appropriate. In order to make this assertion, multiaxial fatigue methods are investigated and a proportional method is selected on the basis of a biaxiality metric.
U.S. nuclear power plants are seeking to implement wireless communications for cost-effective operations. New technology introduced into power plants must not introduce security concerns into critical plant functions. This paper describes the potential for new security concerns with proposed nuclear power plant wireless system implementations and methods for evaluating them. While two aspects of concern are introduced, only one (cyber-attack vulnerability) is expanded upon, with a description of the test setup and methods. A novel method of cyber vulnerability discovery is also described. The goal of this research is to establish wireless technology as part of a secure operations architecture that brings increased efficiency without introducing new security concerns.
We use Bayesian data analysis to predict dengue fever outbreaks and quantify the link between outbreaks and meteorological precursors tied to the breeding conditions of vector mosquitos. We use Hamiltonian Monte Carlo sampling to estimate a seasonal Gaussian process modeling infection rate, and aperiodic basis coefficients for the rate of an “outbreak level” of infection beyond seasonal trends across two separate regions. We use this outbreak level to estimate an autoregressive moving average (ARMA) model from which we extrapolate a forecast. We show that the resulting model has useful forecasting power in the 6–8 week range. The forecasts are not significantly more accurate with the inclusion of meteorological covariates than with infection trends alone.
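A minimal sketch of the extrapolation step, using statsmodels with a synthetic series standing in for the estimated outbreak level; the ARMA order, series, and forecast horizon are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Synthetic weekly "outbreak level" residual (seasonal trend already removed),
# standing in for the series estimated by the Gaussian-process model.
n = 200
level = np.zeros(n)
for t in range(2, n):
    level[t] = 0.6 * level[t - 1] + 0.2 * level[t - 2] + rng.normal(scale=0.5)

# ARMA(2, 1) fit (ARIMA with d = 0) and an 8-week-ahead forecast
fit = ARIMA(level, order=(2, 0, 1)).fit()
forecast = fit.get_forecast(steps=8)
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.1))   # 90% interval on the forecast
```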
Some of the primary barriers to widespread adoption of metal additive manufacturing (AM) are persistent defect formation in built components, high material costs, and lack of consistency in powder feedstock. To generate more reliable, complex-shaped metal parts, it is crucial to understand how feedstock properties change with reuse and how that affects build mechanical performance. Powder particles that interact with the energy source yet are not consolidated into an AM part can undergo a range of dynamic thermal interactions, resulting in variable particle behavior if reused. In this work, we present a systematic study of 316L powder properties from the virgin state through thirty powder reuses in the laser powder bed fusion process. Thirteen powder characteristics and the resulting AM build mechanical properties were investigated for both powder states. Results show greater variability in part ductility for the virgin state. The feedstock exhibited minor changes to size distribution, bulk composition, and hardness with reuse, but significant changes to particle morphology, microstructure, magnetic properties, surface composition, and oxide thickness. Additionally, sieved powder, along with the resulting fume/condensate and recoil ejecta (spatter), was characterized, and formation mechanisms are proposed. It was discovered that spatter leads to the formation of single-crystal ferrite through large degrees of supercooling and massive solidification. Ferrite content, and consequently the magnetic susceptibility of the powder, also increases with reuse, suggesting potential for magnetic separation as a refining technique for altered feedstock.
Many test articles exhibit slight nonlinearities which result in natural frequencies shifting between data from different references. This shifting can confound mode fitting algorithms because a single mode can appear as multiple modes when the data from multiple references are combined in a single data set. For this reason, modal test engineers at Sandia National Laboratories often fit data from each reference separately. However, this creates complexity when selecting a final set of modes, because a given mode may be fit from a number of reference data sets. The color-coded complex mode indicator function was developed as a tool that could be used to reduce a complex data set into a manageable figure that displays the number of modes in a given frequency range and also the reference that best excites the mode. The tool is wrapped in a graphical user interface that allows the test engineer to easily iterate on the selected set of modes, visualize the MAC matrix, quickly resynthesize data to check fits, and export the modes to a report-ready table. This tool has proven valuable, and has been used on very complex modal tests with hundreds of response channels and a handful of reference locations.
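The indicator function underlying the tool can be sketched as a singular value decomposition of the FRF matrix at each spectral line, with the reference that dominates the first singular vector tagged per line to mimic the color coding; the array shape and random data below are placeholders for measured FRFs.

```python
import numpy as np

def cmif(frf):
    """Complex mode indicator function from an FRF array of shape
    (n_freq, n_outputs, n_references).  Returns the singular-value curves
    (one per reference) and, per spectral line, the reference that best
    excites the dominant singular vector (the basis of the color coding)."""
    n_freq, _, n_ref = frf.shape
    curves = np.zeros((n_freq, n_ref))
    dominant_ref = np.zeros(n_freq, dtype=int)
    for k in range(n_freq):
        _, s, vh = np.linalg.svd(frf[k], full_matrices=False)
        curves[k] = s
        # Reference with the largest participation in the first singular vector
        dominant_ref[k] = np.argmax(np.abs(vh[0]))
    return curves, dominant_ref

# Synthetic stand-in: 1024 spectral lines, 200 response channels, 4 references
rng = np.random.default_rng(2)
H = rng.normal(size=(1024, 200, 4)) + 1j * rng.normal(size=(1024, 200, 4))
curves, dominant_ref = cmif(H)
print(curves.shape, np.bincount(dominant_ref, minlength=4))
```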
Small components are becoming increasingly prevalent in today’s society. Springs are a commonly found piece-part in many mechanisms, and as these components become smaller, so do the springs inside of them. Because of their size, small manufacturing defects or other damage to the spring may become significant: a tiny gouge might end up being a significant portion of the cross-sectional area of the wire. However, their small size also makes it difficult to detect such flaws and defects in an efficient manner. This work aims to investigate the effectiveness of using dynamic measurements to detect damage to a miniature spring. Due to their small size, traditional instrumentation cannot be used to take measurements on the spring. Instead, the non-contact Laser Doppler Vibrometry technique is investigated. Natural frequencies and operating shapes are measured for a number of springs. These results are compared against springs that have been intentionally flawed to determine if the change in dynamic properties is a reasonable metric for damage detection.
Many methods have been proposed for updating finite element matrices using experimentally derived modal parameters. By using these methods, a finite element model can be made to exactly match the experiment. These techniques have not achieved widespread use in finite element modeling because they introduce non-physical matrices. Recently, Scanning Laser Doppler Vibrometry (SLDV) has enabled finer measurement point resolution and more accurate measurement point placement with no mass loading compared to traditional accelerometer or roving hammer tests. Therefore, it is worth reinvestigating these updating procedures with high-resolution data inputs to determine if they are able to produce finite element models that are suitable for substructuring. A rough finite element model of an Ampair Wind Turbine Blade was created, and an SLDV measurement was performed that measured three-dimensional data at every node on one surface of the blade. These data were used to update the finite element model so that it exactly matched the test data. A simple substructuring example of fixing the base of the blade was performed and compared to previously measured fixed-base data.
Experiments are a critical part of the model validation process, and the credibility of the resulting simulations is itself dependent on the credibility of the experiments. The impact of experimental credibility on model validation occurs at several points throughout the model validation and uncertainty quantification (MVUQ) process. Many aspects of experiments involved in the development and verification and validation (V&V) of computational simulations will impact the overall simulation credibility. In this document, we define experimental credibility in the context of model validation and decision making. We summarize possible elements for evaluating experimental credibility, sometimes drawing from existing and preliminary frameworks developed for evaluating computational simulation credibility. The proposed framework is an expert elicitation tool for planning, assessing, and communicating the completeness and correctness of an experiment (“test”) in the context of its intended use: validation. The goals of the assessment are (1) to encourage early communication and planning between the experimentalist, computational analyst, and customer, and (2) to communicate experimental credibility. This assessment tool could also be used to decide between potential existing data sets to be used for validation. The evidence and story of experimental credibility will support the communication of overall simulation credibility.
Differences in impedance are usually observed when components are tested in fixtures at lower levels of assembly than those in which they are fielded. In this work, the Kansas City National Security Campus (KCNSC) test bed hardware geometry is used to explore how the form of the objective function affects the adequate reproduction of relevant response characteristics at the next level of assembly. Inverse methods within Sandia National Laboratories' Sierra/SD code suite, along with the Rapid Optimization Library (ROL), are used to identify an unknown material (variable shear and bulk modulus) distributed across a predefined fixture volume. Comparisons of the results between time-domain-based objective functions are presented. The development of the objective functions, solution sensitivity, and solution convergence are discussed in the context of the practical considerations required for creating a realizable set of test hardware based on the variable-modulus optimized solutions.