Sandia National Laboratories has tested and evaluated the performance of the following five models of low-cost infrasound sensors and sensor packages: the Camas microphone, Gem Infrasound Logger, InfraBSU sensor, Raspberry Boom, and the Samsung S10 smartphone running the RedVox app. The purpose of this infrasound sensor evaluation is to characterize performance in areas such as power consumption, sensitivity, self-noise, dynamic range, response, passband, linearity, sensitivity variation with static pressure and temperature, and sensitivity to vertical acceleration. The infrasound monitoring community has leveraged such sensors and integrated packages in novel ways; a better understanding of the performance of these units therefore serves the geophysical monitoring community.
This work aims to demonstrate that an incremental inductive model checking algorithm built on top of Boolean satisfiability (SAT) solvers can be extended to support modal mu-calculus (MMC) formulas. The resulting algorithm, called modal mu-calculus model checking using myopic constraints (MC3), solves MMC model checking problems over Boolean labeled transition systems (LTSs). MMC subsumes simple invariance/reachability (as solved by the IC3 algorithm), linear temporal logic (LTL, as solved by the FAIR algorithm), computation tree logic (CTL, as solved by IICTL), and CTL* (which in turn subsumes LTL and CTL, but was not previously supported by any incremental inductive algorithm). The algorithm is implemented in a prototype solver, mc3.
The Z Machine at Sandia National Laboratories uses current pulses with peaks up to 27 MA to drive target implosions and generate high-energy-density conditions of interest for stockpile stewardship programs in the NNSA program portfolio. Physical processes in the region near the Z Machine target create electrode plasmas that seed parasitic current losses, reducing the performance and output of a Z experiment. Electrode surface contaminants (hydrogen, water, hydrocarbons) are thought to be the primary constituent of the electrode plasmas that contribute to these loss mechanisms. The Sandia team explored in situ heating and plasma discharge techniques by integrating the requisite infrastructure into Sandia's Mykonos LTD accelerator, addressing potential impacts to accelerator operation, and reporting on the impact of these techniques on electrode plasma formation and shot performance. The in situ discharge cleaning utilizes the electrodes of the accelerator to excite an argon-oxygen plasma that sputters and chemically reacts contaminants from electrode surfaces. Insulating breaks are required to isolate the plasma in the electrode regions where loss processes are most likely to occur. The shots on Mykonos validate that these breaks do not perturb experiment performance, reducing the uncertainty on the largest unknown of the in situ cleaning system. Preliminary observations with electrical and optical diagnostics suggest that electrode plasma formation is delayed and that the overall inventory has been substantively reduced. In situ heating embeds cartridge heaters into accelerator electrodes and employs a thermal bakeout to rapidly desorb contaminants from electrode surfaces. For the first time, additively manufactured (AM) electrode assemblies were used on a low-impedance accelerator to integrate cooling channels and manage thermal gradients. However, challenges with supplier fabrication to specification, load alignment, thermal expansion, and hardware movement and warpage appear to have introduced large variability in the observed loss, preventing strong assertions of loss reduction via in situ heating. At this time, an in situ discharge cleaning process offers the lowest-risk path to reduce electrode contaminant inventories on Z, though we recommend continuing to develop both approaches. Additional engineering and testing are required to improve the implementation of both systems.
Pre-chamber spark-ignition (PCSI), either fueled or non-fueled, is a leading concept with the potential to enable diesel-like efficiency in medium-duty (MD) and heavy-duty (HD) natural gas (NG) engines. However, the inadequate scientific base and simulation tools to describe and predict the underlying processes governing PCSI systems are among the key barriers to market penetration of PCSI for MD/HD NG engines. To this end, experiments were performed in a heavy-duty, optical, single-cylinder engine fitted with an active fueled PCSI module. The spatial and temporal progress of ignition and subsequent combustion of lean-burn natural gas using a PCSI system was studied using optical diagnostic imaging and heat release analysis based on main-chamber and pre-chamber pressure measurements. Optical diagnostics involving simultaneous infrared (IR) and high-speed (30 kfps) broadband and filtered OH* chemiluminescence imaging were used to probe the combustion process. Following the early pressure rise in the pre-chamber, IR imaging reveals the initial ejection of unburnt fuel-air mixture from the pre-chamber into the main-chamber. The pre-chamber gas jets then exhibit chemical activity in the vicinity of the pre-chamber region, followed by a delayed spread in OH* chemiluminescence as they continue to penetrate further into the main-chamber. The OH* signal progresses radially until the pre-chamber jets merge, which marks the limit of a first-stage, jet-momentum-driven, mixing-controlled (temperature field) premixed combustion. This is followed by the subsequent deceleration of the pre-chamber jets, caused by the decrease in the driving pressure difference (ΔP) as well as charge entrainment, resulting in a flame-front evolution in which mixing is not the only driver. Chemical-kinetic calculations probe the possibility of flame propagation or sequential auto-ignition in the second stage of combustion. Finally, key phenomenological features are summarized to provide fundamental insights into the complex underlying fluid-mechanical and chemical-kinetic processes that govern the ignition and subsequent combustion of natural gas near lean limits in high-efficiency lean-burn natural gas engines employing PCSI systems.
The mechanical behavior of Ti-6Al-4V produced by additive manufacturing processes is assessed based on a model derived from the Kocks–Mecking relationship. A constitutive parameter cb, characteristic of the work hardening behavior, is derived from a linear Kocks–Mecking relationship for the microstructure. The formulation for cb is determined by considering the plastic strain between the strengths at the proportional limit and the plastic instability. In this way, the model accommodates the variation in work hardening behavior observed when evaluating material produced and tested along different orientations. The modeling approach is presented and evaluated for the case of additively manufactured Ti-6Al-4V tested under quasi-static uniaxial tension. It is found that different test specimen orientations, along with post-build heat treatments, produce a change in the microstructure and plasticity behavior which can be accounted for in the corresponding change of the cb values.
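For context, the linear Kocks–Mecking relationship referenced above is commonly written as a work-hardening rate that decreases linearly with flow stress, with plastic instability in uniaxial tension set by the Considère condition. The specific formulation of cb used in this work is not reproduced here; the following is only a background sketch, in which θ₀ is the initial hardening rate and σ_s the saturation stress:

```latex
\theta \;\equiv\; \frac{d\sigma}{d\varepsilon_p}
      \;=\; \theta_0\!\left(1-\frac{\sigma}{\sigma_s}\right)
      \quad\text{(linear Kocks--Mecking hardening)},
\qquad
\frac{d\sigma}{d\varepsilon} = \sigma
      \quad\text{(Consid\`ere condition at plastic instability)}.
```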
A mass property calculator has been developed to compute the moment of inertia properties of an assemblage of parts that make up a system. The calculator can take input from spreadsheets or Creo mass property files, or it can be interfaced with Phoenix Integration ModelCenter. The input must include the centroidal moments of inertia of each part with respect to its local coordinates, the location of the centroid of each part in the system coordinates, and the Euler angles needed to rotate from the part coordinates to the system coordinates. The output includes the system total mass, centroid, and mass moment of inertia properties. The input/output capabilities allow the calculator to interface with external optimizers. In addition to describing the calculator, this document serves as its user's manual. The up-to-date version of the calculator can be found in the Git repository https://cee-gitlab.sandia.gov/cj?ete/mass-properties-calculator.
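As a rough illustration of the computation the calculator performs (not its actual implementation), the sketch below rotates each part's centroidal inertia tensor into system coordinates, shifts it to the system centroid with the parallel-axis theorem, and sums. The Z-X-Z Euler convention and the part-data layout are assumptions, since the calculator's conventions are not specified here.

```python
# Minimal sketch of a composite mass-property computation (hypothetical data layout).
import numpy as np

def rotation_from_euler(phi, theta, psi):
    """Z-X-Z Euler rotation from part to system coordinates (one common convention)."""
    c, s = np.cos, np.sin
    Rz1 = np.array([[c(phi), -s(phi), 0], [s(phi), c(phi), 0], [0, 0, 1]])
    Rx  = np.array([[1, 0, 0], [0, c(theta), -s(theta)], [0, s(theta), c(theta)]])
    Rz2 = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    return Rz1 @ Rx @ Rz2

def system_mass_properties(parts):
    """parts: list of dicts with keys mass, centroid (3,), euler (3,), inertia (3x3)."""
    total_mass = sum(p["mass"] for p in parts)
    system_centroid = sum(p["mass"] * np.asarray(p["centroid"]) for p in parts) / total_mass
    I_sys = np.zeros((3, 3))
    for p in parts:
        R = rotation_from_euler(*p["euler"])
        I_rot = R @ np.asarray(p["inertia"]) @ R.T            # rotate to system axes
        d = np.asarray(p["centroid"]) - system_centroid        # offset from system centroid
        I_shift = p["mass"] * (np.dot(d, d) * np.eye(3) - np.outer(d, d))  # parallel-axis term
        I_sys += I_rot + I_shift
    return total_mass, system_centroid, I_sys
```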
This work aims to advance computational methods for projection-based reduced-order models (ROMs) of linear time-invariant (LTI) dynamical systems. For such systems, current practice relies on ROM formulations expressing the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory bandwidth bound and, therefore, ill-suited for scalable performance on modern architectures. This weakness can be particularly limiting when tackling many-query studies, where one needs to run a large number of simulations. This work introduces a reformulation, called rank-2 Galerkin, of the Galerkin ROM for LTI dynamical systems which converts the nature of the ROM problem from memory bandwidth to compute bound. We present the details of the formulation and its implementation, and demonstrate its utility through numerical experiments using, as a test case, the simulation of elastic seismic shear waves in an axisymmetric domain. We quantify and analyze performance and scaling results for varying numbers of threads and problem sizes. Finally, we present an end-to-end demonstration of using the rank-2 Galerkin ROM for a Monte Carlo sampling study. We show that the rank-2 Galerkin ROM is one order of magnitude more efficient than the rank-1 Galerkin ROM (the current practice) and about 970 times more efficient than the full-order model, while maintaining accuracy in both the mean and statistics of the field.
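A minimal sketch of the computational idea (not the paper's implementation, which targets seismic shear waves with a proper time integrator): stacking the reduced states of many samples into a matrix turns the per-step matrix-vector product of the rank-1 formulation into a single matrix-matrix product, shifting the kernel from memory-bandwidth bound toward compute bound. The reduced operator A_r and the forward-Euler update below are placeholders.

```python
# Rank-1 vs rank-2 Galerkin update shapes (illustrative only).
import numpy as np

n_modes, n_samples, n_steps, dt = 100, 64, 200, 1e-3
rng = np.random.default_rng(0)
A_r = 1e-2 * rng.standard_normal((n_modes, n_modes))   # placeholder reduced LTI operator

def rank1(X0):
    """Rank-1: loop over samples; each step is a matrix-vector product (GEMV)."""
    cols = []
    for x0 in X0.T:
        x = x0.copy()
        for _ in range(n_steps):
            x = x + dt * (A_r @ x)
        cols.append(x)
    return np.stack(cols, axis=1)

def rank2(X0):
    """Rank-2: the reduced state is a (modes x samples) matrix; each step is one GEMM."""
    X = X0.copy()
    for _ in range(n_steps):
        X = X + dt * (A_r @ X)
    return X

X0 = rng.standard_normal((n_modes, n_samples))
assert np.allclose(rank1(X0), rank2(X0))   # same trajectories, different kernel shape
```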
Herein, the formulation, parameter sensitivities, and usage methods for the Microstructure-Aware Plasticity (MAP) model are presented. This document is intended to serve as a reference for the underlying theory that constitutes the MAP model and as a practical guide for analysts and future developers on how aspects of this material model influence generalized mechanical behavior.
Anwar, Ishtiaque; Hatambeigi, Mahya; Chojnicki, Kirsten; Taha, Mahmoud R.; Stormont, John C.
The stiffness of wellbore cement fracture surfaces was measured after exposure to the advective flow of nitrogen, silicone oil, and medium sweet dead crude oil for different exposure periods. The test specimens were extracted from fractured cement cylinders in which the cement fracture surfaces were exposed to the different fluids for up to 15 weeks. A nanoindenter with a Berkovich indenter tip was used to measure load-indentation depth data, from which the elastic modulus (E) and nano-hardness (H) of the cement fracture surfaces were extracted. A reduction in elastic modulus compared with an unexposed specimen was observed in all the specimens. Both the elastic modulus and nano-hardness of the specimens exposed to silicone oil were lower than those of specimens exposed to nitrogen gas and varied with the period of exposure. The elastic modulus and nano-hardness of the specimens exposed to crude oil were the lowest, with a significant decrease with exposure period. The frequency distribution of the nanoindentation measurements shows that the volume-fraction ratio of the two types of hydrated cement nanocomposites for both the unexposed and test specimens is about 70:30. Phase transformation beneath the indenter is observed for all of the specimens, with more obvious plastic deformation in specimens exposed to crude oil. Analytical measurements (SEM, EDS, FT-IR, and XRD) on exposed cement fracture surfaces reveal different levels of physical and chemical alteration that are consistent with the reduction in stiffness measured by nanoindentation. The study suggests that cement stiffness will decrease due to crude oil exposure and that the fracture will become increasingly sensitive to stress and pore pressure with time.
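The abstract does not state the data-reduction method; a common approach for Berkovich indentation, assumed in this background sketch, is the Oliver–Pharr analysis, where P_max is the peak load, A_c the projected contact area, S the unloading stiffness, β a tip-geometry factor, and E_i, ν_i the indenter modulus and Poisson's ratio:

```latex
H = \frac{P_{\max}}{A_c}, \qquad
S = \left.\frac{dP}{dh}\right|_{P_{\max}} = \frac{2\beta}{\sqrt{\pi}}\,E_r\sqrt{A_c}, \qquad
\frac{1}{E_r} = \frac{1-\nu^2}{E} + \frac{1-\nu_i^2}{E_i}.
```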
Reeves, Michael J.; Tian, Dave J.; Bianchi, Antonio; Celik, Z. Berkay
Container escapes enable the adversary to execute code on the host from inside an isolated container. Notably, these high severity escape vulnerabilities originate from three sources: (1) container profile misconfigurations, (2) Linux kernel bugs, and (3) container runtime vulnerabilities. While the first two cases have been studied in the literature, no works have investigated the impact of container runtime vulnerabilities. In this paper, to fill this gap, we study 59 CVEs for 11 different container runtimes. As a result of our study, we found that five of the 11 runtimes had nine publicly available PoC container escape exploits covering 13 CVEs. Our further analysis revealed all nine exploits are the result of a host component leaked into the container. Here, we apply a user namespace container defense to prevent the adversary from leveraging leaked host components and demonstrate that the defense stops seven of the nine container escape exploits.
Background: Blockchain distributed ledger technology is just starting to be adopted in genomics and healthcare applications. Despite its increased prevalence in biomedical research applications, skepticism regarding the practicality of blockchain technology for real-world problems is still strong and there are few implementations beyond proof-of-concept. We focus on benchmarking blockchain strategies applied to distributed methods for sharing records of gene-drug interactions. We expect this type of sharing will expedite personalized medicine. Basic Procedures: We generated gene-drug interaction test datasets using the Clinical Pharmacogenetics Implementation Consortium (CPIC) resource. We developed three blockchain-based methods to share patient records on gene-drug interactions: Query Index, Index Everything, and Dual-Scenario Indexing. Main Findings: We achieved a runtime of about 60 s for importing 4,000 gene-drug interaction records from four sites, and about 0.5 s for a data retrieval query. Our results demonstrated that it is feasible to leverage blockchain as a new platform to share data among institutions. Principal Conclusions: We show the benchmarking results of novel blockchain-based methods for institutions to share patient outcomes related to gene-drug interactions. Our findings support blockchain utilization in healthcare, genomic and biomedical applications. The source code is publicly available at https://github.com/tsungtingkuo/genedrug.
Xiong, Haifeng; Kunwar, Deepak; Jiang, Dong; Garcia-Vargas, Carlos E.; Li, Hengyu; Du, Congcong; Canning, Griffin; Pereira-Hernandez, Xavier I.; Wan, Qiang; Lin, Sen; Purdy, Stephen C.; Miller, Jeffrey T.; Leung, Kevin L.; Chou, Stanley S.; Brongersma, Hidde H.; Ter Veen, Rik; Huang, Jianyu; Guo, Hua; Wang, Yong; Datye, Abhaya K.
The treatment of emissions from natural gas engines is an important area of research since methane is a potent greenhouse gas. The benchmark catalysts, based on Pd, still face challenges such as water poisoning and long-term stability. Here we report an approach for catalyst synthesis that relies on the trapping of metal single atoms on the support surface, in thermally stable form, to modify the nature of further deposited metal/metal oxide. By anchoring Pt ions on a catalyst support we can tailor the morphology of the deposited phase. In particular, two-dimensional (2D) rafts of PdOx are formed, resulting in higher reaction rates and improved water tolerance during methane oxidation. The results show that modifying the support by trapping single atoms could provide an important addition to the toolkit of catalyst designers for controlling the nucleation and growth of metal and metal oxide clusters in heterogeneous catalysts.
The use of bounding scenarios is a common practice that greatly simplifies the design and qualification of structures. However, this approach implicitly assumes that the quantities of interest increase monotonically with the input to the structure, which is not necessarily true for nonlinear structures. This paper surveys the literature for observations of nonmonotonic behavior of nonlinear systems and finds such observations in both the earthquake engineering and applied mechanics literature. Numerical simulations of a single degree-of-freedom mass-spring system with an elastic–plastic spring subjected to a triangular base acceleration pulse are then presented, and it is shown that the relative acceleration of this system scales nonmonotonically with the input magnitude in some cases. The equation of motion for this system is solved symbolically and an approximate expression for the relative acceleration is developed, which qualitatively agrees with the nonmonotonic behavior seen in the numerical results. The nonmonotonicity is investigated and found to be a result of dynamics excited by the discontinuous derivative of the base acceleration pulse, the magnitude of which scales nonmonotonically with the input magnitude due to the fact that the first yield of the spring occurs earlier as the input magnitude is increased. The relevance of this finding within the context of defining bounding scenarios is discussed, and it is recommended that modeling be used to perform a survey of the full range of possible inputs prior to defining bounding scenarios.
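To make the recommended survey concrete, the sketch below (with hypothetical parameters and an elastic-perfectly-plastic spring, not the paper's exact setup) integrates the single-degree-of-freedom equation of motion under a symmetric triangular base-acceleration pulse and records the peak relative acceleration across a sweep of input magnitudes; plotting such a sweep shows whether the response scales monotonically with the input.

```python
# Survey of peak relative acceleration vs. input magnitude (illustrative parameters).
import numpy as np

m, k, fy = 1.0, 1.0e4, 50.0          # mass, elastic stiffness, yield force (placeholders)
t_pulse, dt, t_end = 0.05, 1e-5, 0.5

def base_accel(t, amplitude):
    """Symmetric triangular pulse of given peak amplitude and duration t_pulse."""
    if t < 0 or t > t_pulse:
        return 0.0
    half = t_pulse / 2
    return amplitude * (t / half if t <= half else (t_pulse - t) / half)

def peak_relative_accel(amplitude):
    u = v = up = 0.0                  # relative displacement, velocity, plastic offset
    peak = 0.0
    for t in np.arange(0.0, t_end, dt):
        f = k * (u - up)              # trial elastic spring force
        if abs(f) > fy:               # elastic-perfectly-plastic return
            up = u - np.sign(f) * fy / k
            f = np.sign(f) * fy
        a_rel = -base_accel(t, amplitude) - f / m   # m*u'' + f = -m*a_base
        peak = max(peak, abs(a_rel))
        v += a_rel * dt               # semi-implicit Euler update
        u += v * dt
    return peak

for amp in np.linspace(50, 500, 10):  # survey of input magnitudes
    print(f"amplitude {amp:7.1f}  ->  peak |relative accel| {peak_relative_accel(amp):9.2f}")
```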
Quantum sensing has the potential to provide ultrasensitive measurements of physical phenomena. Unlike quantum computing, quantum sensing is available now, though generally only at research laboratories. A notable commercially available quantum sensing device is the ubiquitous Superconducting Quantum Interference Device (SQUID), which can measure faint magnetic fields such as those found in the human brain. Quantum sensing is used for direct measurement of environmental phenomena, such as electromagnetic fields and accelerations, which are then used for particular applications. For example, quantum sensing of accelerations is useful for Position, Navigation, and Timing (PNT) applications. It is not clear, however, how quantum sensing can be useful for nuclear safeguards. This report first provides background on quantum sensing, followed by a survey of potential safeguards uses of quantum sensing. Several potential safeguards applications are identified and explored.
Understanding the capture of charge carriers by colour centres in semiconductors is important for the development of novel forms of sensing and quantum information processing, but experiments typically involve ensemble measurements, often impacted by defect proximity. Here we show that confocal fluorescence microscopy and magnetic resonance can be used to induce and probe charge transport between individual nitrogen-vacancy centres in diamond at room temperature. In our experiments, a ‘source’ nitrogen vacancy undergoes optically driven cycles of ionization and recombination to produce a stream of photogenerated carriers, one of which is subsequently captured by a ‘target’ nitrogen vacancy several micrometres away. We use a spin-to-charge conversion scheme to encode the spin state of the source colour centre into the charge state of the target, which allows us to set an upper bound to carrier injection from other background defects. We attribute our observations to the action of unscreened Coulomb potentials producing giant carrier capture cross-sections, orders of magnitude greater than those measured in ensembles.
Echeverria, Marco J.; Galitskiy, Sergey; Mishra, Avanish; Dingreville, Remi P.; Dongare, Avinash M.
A hybrid atomic-scale and continuum-modeling framework is used to study the microstructural evolution during the laser-induced shock deformation and failure (spallation) of copper microstructures. A continuum two-temperature model (TTM) is used to account for the interaction of Cu atoms with a laser in molecular dynamics (MD) simulations. The MD-TTM simulations study the effect of laser-loading conditions (laser fluence) on the microstructure (defect) evolution during various stages of shock wave propagation, reflection, and interaction in single-crystal (sc) Cu systems. In addition, the role of the microstructure is investigated by comparing the defect evolution and spall response of sc-Cu and nanocrystalline Cu systems. The defect (stacking fault and twin fault) evolution behavior in the metal at various times is further characterized using virtual in situ selected area electron diffraction and x-ray diffraction during various stages of microstructure evolution. The simulations shed light on the uncertain relation between spall strength and strain rate, and on the much stronger relation between spall strength and the temperatures generated by laser shock loading, for the small Cu sample dimensions considered here.
A cohesive phase-field model of ductile fracture in a finite-deformation setting is presented. The model is based on a free-energy function in which both elastic and plastic work contributions are coupled to damage. Using a strictly variational framework, the field evolution equations, damage kinetics, and flow rule are jointly derived from a scalar least-action principle. Particular emphasis is placed on the use of a rational function for the stress degradation that maintains a fixed effective strength with decreasing regularization length. The model is employed to examine crack growth in pure mode-I problems through the generation of crack growth resistance (J-R) curves. In contrast to alternative models, the current formulation gives rise to J-R curves that are insensitive to the regularization length. Numerical evidence suggests convergence of local fields with respect to diminishing regularization length as well.
Interest in 3D printing of thermoset resins has increased significantly in recent years. One approach to additive manufacturing of thermoset resins is printing dual-cure resins with direct ink write (DIW). Dual-cure resins are multi-component resins which employ an in situ curable constituent to enable net-shape fabrication while a second constituent and cure mechanism contribute to the final mechanical properties of the printed materials. In this work, the cure kinetics, green strength, printability, and print fidelity of dual-cure epoxy/acrylate thermoset resins are investigated. Resin properties are evaluated as a function of acrylate concentration and in situ UV exposure conditions. The acrylate cure kinetics are probed using photo-differential scanning calorimetry and the impacts of resin composition and UV cure profile on the acrylate extent of conversion are presented. Continuous and pulsed UV cure profiles are shown to affect total conversion due to variances in radical efficiency at different UV intensities and acrylate concentrations. The effects of acrylate concentration on the kinetics of the epoxy thermal cure and the final mechanical properties are also investigated using dynamic mechanical analysis and three-point bend measurements. The glass transition temperature is dependent on formulation, with increasing acrylate content decreasing the Tg. However, the room temperature shear moduli, flexural moduli, strength, strain-to-failure, and toughness values are relatively independent of resin composition. The similarity of the final properties allows for greater flexibility in resin formulation and in situ cure parameters, which can enable the printing of complex parts that require high green strength. We found that the in situ UV print intensities and exposure profiles that are necessary to achieve the best print quality are not, in most cases, the conditions that maximize conversion of the acrylate network. This highlights the importance of developing optimized resin compositions which enable complete cure of the acrylate network by promoting acrylate dark cure or thermal cure.
This report presents the results of the sampling effort and documents all associated field activities including borehole clearing, soil sample collection, storage and transportation to the analytical laboratories, borehole backfilling and surface restoration, and storage of investigation-derived waste (IDW) for future profiling and disposal by SNL/CA waste management personnel.
Time-resolved particle image velocimetry (TR-PIV) has become widespread in fluid dynamics. Essentially a velocity field movie, the dynamic content provides temporal as well as spatial information, in contrast to conventional PIV offering only statistical ensembles of flow quantities. From these time series arise further analyses such as accelerometry, space-time correlations, frequency spectra of turbulence including spatial variability, and derivation of pressure fields and forces. The historical development of TR-PIV is chronicled, culminating in an assessment of the current state of technology in high-repetition-rate lasers and high-speed cameras. Commercialization of pulse-burst lasers has expanded TR-PIV into more flows, including the compressible regime, and has achieved MHz rates. Particle response times and peak locking during image interrogation require attention but generally are not impediments to success. Accuracy considerations are discussed, including the risks of noise and aliasing in spectral content. Oversampled TR-PIV measurements allow use of multi-frame image interrogation methods, which improve the precision of the correlation and raise the velocity dynamic range of PIV. In combination with volumetric methods and data assimilation, a full four-dimensional description of a flow is not only achievable but becoming standardized. A survey of exemplary applications is followed by a few predictions concerning the future of TR-PIV.
It is very difficult to measure the voltage of the load on the Saturn accelerator. Time-resolved diagnostics such as vacuum voltmeters and V-dot monitors are impractical at best and, at worst, completely change the pulsed power behavior at the load. We would like to know the load voltage of the machine so that we can correctly model the radiation transport and tune our x-ray unfold methodology and circuit simulations of the accelerator. Step wedges have been used for decades as a tool to measure the end-point energies of high-energy particle beams. Typically, the technique is used for multi-megavolt accelerators, but we have adapted it to Saturn's modest <2 MV end-point energy and modified the standard bremsstrahlung x-ray source to extract the electron beam without changing the physics of the load region. We found clear evidence of high-energy electrons with energies above 2 MeV. We also attempted to unfold an electron energy spectrum using a machine learning algorithm; while these results come with large uncertainties, they qualitatively agree with PIC simulation results.
Polymers such as PTFE (polytetrafluoroethylene, or Teflon), EPDM (ethylene propylene diene monomer) rubber, FKM fluoroelastomer (Viton), Nylon 11, nitrile butadiene rubber (NBR), hydrogenated nitrile rubber (HNBR), and perfluoroelastomers (FF202) are commonly employed in supercritical CO2 (sCO2) energy conversion systems. O-rings and gaskets made from these polymers face stringent performance conditions such as elevated temperatures, high pressures, pollutants, and corrosive humid environments. In FY 2019, we conducted experiments at high temperatures (100°C and 120°C) under isobaric conditions (20 MPa). Findings showed that elevated temperatures accelerated degradation of polymers in sCO2, and that certain polymer microstructures are more susceptible to degradation than others. In FY 2020, the focus was to understand the effect of sCO2 on polymers at low (10 MPa) and high (40 MPa) pressures under isothermal conditions (100°C). The same selectivity was observed in these experiments, with certain polymeric functionalities showing more propensity to failure than others. Fast diffusion, supported by higher pressures and long exposure times (1000 hours) at the test temperature, caused increased damage in sCO2 environments to even the most robust polymers. We also examined polymers under compression in sCO2 at 100°C and 20 MPa to imitate the actual sealing performance required of these materials in sCO2 systems. Compression worsened the physical damage that resulted from chemical attack of the polymers under these test conditions. In FY 2021, the effect of cycling temperature (from 50°C to 150°C to 50°C) for polymers under a steady sCO2 pressure of 20 MPa was studied. The aim was to understand the influence of cycling sCO2 temperatures for typical polymers under isobaric (20 MPa) conditions. Thermoplastic polymers (Nylon and PTFE) and elastomers (EPDM, Viton, Buna N, Neoprene, FF202, and HNBR) were subjected to 20 MPa sCO2 pressure for 50 cycles and 100 cycles in separate experiments. Samples were extracted for ex situ characterization at 50 cycles and upon the completion of 100 cycles. Each cycle consisted of 175 minutes of cycling from 50°C to 150°C. The polymer samples were examined for physical and chemical changes by dynamic mechanical and thermal analysis (DMTA), Fourier transform infrared (FTIR) spectroscopy, and compression set. Density and mass changes immediately after removal from test were measured for degree-of-swell comparisons. Optical microscopy and micro computed tomography (micro-CT) images were collected on select specimens. These evaluations showed that exposures to supercritical CO2 environments resulted in combinations of physical and/or chemical changes. For each polymer, the dominant effects of cycling temperature under sCO2 pressure were evaluated. Attempts were made to qualitatively link the permanent sCO2 effects to polymer microstructure, free volume, backbone substitutions, presence of polar groups, and differences in degree of crystallinity. This study has established that soft polymeric materials are susceptible to failure in sCO2 through mechanisms that depend on polymer microstructure and chemistry. Polar pendant groups and large-atom substitutions on the backbone are among the influential structural factors.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
The SNL Sierra Mechanics code suite is designed to enable simulation of complex multiphysics scenarios. The code suite is composed of several specialized applications which can operate either in standalone mode or coupled with each other. Arpeggio is a supported utility that enables loose coupling of the various Sierra Mechanics applications by providing access to Framework services that facilitate the coupling. More importantly, Arpeggio orchestrates the execution of the applications that participate in the coupling. This document describes the various components of Arpeggio and their operability. The intent of the document is to provide a fast path for analysts interested in coupled applications via simple examples of its usage.
The SIERRA Low Mach Module: Fuego, henceforth referred to as Fuego, is the key element of the ASC fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Using MPMD coupling, Scefire and Nalu handle the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
Figure caption: Isocontours of Q-criterion with velocity visualized in the wake for two NREL 5-MW turbines operating under a uniform-inflow wind speed of 8 m/s. Simulation performed with the hybrid Nalu-Wind/AMR-Wind solver.
Adcock, Christiane; Ananthan, Shreyas; Berger-Vergiat, Luc; Brazell, Michael; Brunhart-Lupo, Nicholas; Hu, Jonathan J.; Knaus, Robert C.; Melvin, Jeremy; Moser, Bob; Mullowney, Paul; Rood, Jon; Sharma, Ashesh; Thomas, Stephen; Vijayakumar, Ganesh; Williams, Alan B.; Wilson, Robert; Yamazaki, Ichitaro Y.; Sprague, Michael
The goal of the ExaWind project is to enable predictive simulations of wind farms comprised of many megawatt-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines, capturing the thin boundary layers, and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources.
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
A key objective of the United States Department of Energy’s (DOE) Office of Nuclear Energy’s Spent Fuel and Waste Science and Technology Campaign is to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the United States has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of the DOE (Nuclear Waste Policy Act of 1982, as amended). Any repository licensed to dispose of SNF must meet requirements regarding the long-term performance of that repository. For an evaluation of the long-term performance of the repository, one of the events that may need to be considered is the SNF achieving a critical configuration during the postclosure period. Of particular interest is the potential behavior of SNF in dual-purpose canisters (DPCs), which are currently licensed and being used to store and transport SNF but were not designed for permanent geologic disposal. A study has been initiated to examine the potential consequences, with respect to long-term repository performance, of criticality events that might occur during the postclosure period in a hypothetical repository containing DPCs. The first phase (a scoping phase) consisted of developing an approach to creating the modeling tools and techniques that may eventually be needed to either include or exclude criticality from a performance assessment (PA) as appropriate; this scoping phase is documented in Price et al. (2019a). In the second phase, that modeling approach was implemented and future work was identified, as documented in Price et al. (2019b). This report gives the results of a repository-scale PA examining the potential consequences of postclosure criticality, as well as the information, modeling tools, and techniques needed to incorporate the effects of postclosure criticality in the PA.