Thermoset polymers (e.g., epoxies, vulcanizable rubbers, and polyurethanes) are crosslinked materials with excellent thermal, chemical, and mechanical stability; these properties make thermoset materials attractive for use in harsh applications and environments. Unfortunately, this robustness also means that these materials persist in the environment, degrading very slowly over long periods of time. Balancing the benefits of material performance with sustainability is a challenge in need of novel solutions. Here, we aimed to address this challenge by incorporating boronic acid-amine complexes into epoxy thermoset chemistries, facilitating degradation of the material under pH-neutral to alkaline conditions; in this scenario, water acts as an initiator to remove boron species, creating a porous structure with an enhanced surface area that makes the material more amenable to environmental degradation. Furthermore, the expulsion of the boron leaves the residual pores rich in amines, which can be exploited for CO2 absorption or other functionalization. We demonstrated the formation of novel boron species from neat mixing of amine compounds with boric acid, including one complex that appears highly stable under a nitrogen atmosphere up to 600 °C. While degradation of the materials under static, alkaline conditions (our “trigger”) was inconclusive at the time of this writing, dynamic conditions appeared more promising. Additionally, we showed that increasing boronic acid content created materials more resistant to thermal degradation, thus improving performance under typical high-temperature use conditions.
This report provides detailed documentation of the algorithms that were developed and implemented in the Plato software over the course of the Optimization-based Design for Manufacturing LDRD project.
This document summarizes the findings of a review of published literature regarding the potential impacts of electromagnetic pulse (EMP) and geomagnetic disturbance (GMD) phenomena on oil and gas pipeline systems. The impacts of telluric currents on pipelines and their associated cathodic protection systems have been well studied, and the existing literature describes implications for corrosion protection system design and monitoring to mitigate these impacts. The effects of an EMP on pipelines, by contrast, have not been thoroughly explored: most directly related articles present only theoretical models and approaches rather than specific analyses and in-field testing. Literature on SCADA components and EMP is similarly sparse, and the existing articles show a variety of impacts to control system components, ranging from upset and damage to no effect. The limited research, and the range of observed impacts in the research that has been published, suggest the need for additional work on the effects of GMD and EMP on natural gas SCADA components.
In WASH-1400, external exposure from the finite radioactive cloud (cloudshine) is calculated by assuming that the cloud is semi-infinite and that the concentration of radioactive material is uniform, and by using a correction factor to account for these approximations. This correction factor was originally based upon formulations by Healy and depends on the effective size of the plume and the distance from the plume center to the receptor. The range of the finite cloud dose correction factor table from WASH-1400, developed using the Healy formulations, can be exceeded in certain situations. When the range of the table is exceeded, no extrapolation is performed; rather, interpolation at the edge of the table is performed, per WASH-1400. The tabulated values of these finite cloud dose correction factors from WASH-1400 and the interpolation at the edge of the table have been used in MACCS since its creation. An expanded table of finite cloud dose correction factors is one way to reduce the need for interpolation at the edge of the table. The generation of an expanded finite cloud dose correction factor table for future use in MACCS is documented in this report.
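The edge-of-table behavior described above can be illustrated with a brief sketch: a bilinear table lookup that clamps its inputs to the table range, so out-of-range queries reuse edge values rather than extrapolating. The grids and table values below are illustrative placeholders, not actual WASH-1400 correction factors.

```python
import numpy as np

# Placeholder grids and table -- illustrative only, not WASH-1400 values.
sigmas = np.array([10.0, 30.0, 100.0, 300.0])    # effective plume size (m)
distances = np.array([0.0, 50.0, 150.0, 400.0])  # plume center to receptor (m)
table = np.array([[0.02, 0.05, 0.10, 0.20],
                  [0.05, 0.12, 0.25, 0.45],
                  [0.15, 0.30, 0.55, 0.80],
                  [0.40, 0.60, 0.85, 1.00]])

def correction_factor(sigma, distance):
    """Bilinear interpolation in the table; inputs are clamped to the
    table range so out-of-range queries land on edge values
    (interpolation at the edge, never extrapolation)."""
    s = float(np.clip(sigma, sigmas[0], sigmas[-1]))
    d = float(np.clip(distance, distances[0], distances[-1]))
    i = min(int(np.searchsorted(sigmas, s, side="right")) - 1, len(sigmas) - 2)
    j = min(int(np.searchsorted(distances, d, side="right")) - 1, len(distances) - 2)
    ts = (s - sigmas[i]) / (sigmas[i + 1] - sigmas[i])
    td = (d - distances[j]) / (distances[j + 1] - distances[j])
    return ((1 - ts) * (1 - td) * table[i, j] + ts * (1 - td) * table[i + 1, j]
            + (1 - ts) * td * table[i, j + 1] + ts * td * table[i + 1, j + 1])
```

An expanded table, as proposed in the report, simply enlarges the grids so that fewer queries hit the clamped edges.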
As noise limits the performance of quantum processors, the ability to characterize this noise and develop methods to overcome it is essential for the future of quantum computing. In this report, we develop a complete set of tools for improving quantum processor performance at the application level, including low-level physical models of quantum gates, a numerically efficient method of producing process matrices that span a wide range of model parameters, and full-channel quantum simulations. We then provide a few examples of how to use these tools to study the effects of noise on quantum circuits.
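As an illustrative sketch of a full-channel simulation of the kind described above (the channel and parameter here are generic assumptions, not the report's gate models), a single-qubit depolarizing channel can be applied to a density matrix via its Kraus operators:

```python
import numpy as np

# Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Apply a single-qubit depolarizing channel with error probability p
    to density matrix rho, using its Kraus decomposition
    rho -> (1-p) rho + (p/3) (X rho X + Y rho Y + Z rho Z)."""
    kraus = [np.sqrt(1 - p) * I] + [np.sqrt(p / 3) * K for K in (X, Y, Z)]
    return sum(K @ rho @ K.conj().T for K in kraus)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # pure |0><0| state
rho1 = depolarize(rho0, 0.1)  # noisy output state; trace is preserved
```

Composing such channels layer by layer over a circuit is the basic operation behind the full-channel simulations mentioned in the report.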
Performance assessment (PA) of geologic radioactive waste repositories requires three-dimensional simulation of highly nonlinear, thermo-hydro-mechanical-chemical (THMC), multiphase flow and transport processes across many kilometers and over tens to hundreds of thousands of years. Integrating the effects of a near-field geomechanical process (i.e. buffer swelling) into coupled THC simulations through reduced-order modeling, rather than through fully coupled geomechanics, can reduce the dimensionality of the problem and improve computational efficiency. In this study, PFLOTRAN simulations model a single waste package in a shale host rock repository, where re-saturation of a bentonite buffer causes the buffer to swell and exert stress on a highly fractured disturbed rock zone (DRZ). Three types of stress-dependent permeability functions (exponential, modified cubic, and Two-part Hooke's law models) are implemented to describe mechanical characteristics of the system. Our modeling study suggests that compressing fractures reduces DRZ permeability, which could influence the rate of radionuclide transport and exchange with corrosive species in host rock groundwater that could accelerate waste package degradation. Less permeable shale host rock delays buffer swelling, consequently retarding DRZ permeability reduction as well as chemical transport within the barrier system.
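Two of the stress-dependent permeability functions named above can be sketched in simplified form. The functional shapes below are common empirical laws from the literature; the parameter values are illustrative placeholders, not the study's PFLOTRAN calibration, and the Two-part Hooke's law model is omitted for brevity.

```python
import numpy as np

def k_exponential(sigma_eff, k0=1.0e-15, gamma=0.2):
    """Exponential stress-permeability law, k = k0 * exp(-gamma * sigma'),
    with effective stress sigma' in MPa (parameters illustrative)."""
    return k0 * np.exp(-gamma * sigma_eff)

def k_cubic(sigma_eff, k0=1.0e-15, b0=1.0e-4, alpha=2.0e-6):
    """Cubic-law form: permeability scales with the cube of fracture
    aperture b, which closes linearly with effective stress (clamped
    at zero). Parameters are illustrative placeholders."""
    b = np.maximum(b0 - alpha * sigma_eff, 0.0)
    return k0 * (b / b0) ** 3
```

Both forms capture the behavior described in the abstract: as buffer swelling compresses DRZ fractures, the effective stress rises and permeability drops, throttling transport through the disturbed rock zone.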
Drilling systems that use downhole rotation must react torque either through the drill-string or near the motor to achieve effective drilling performance. Problems with drill-string loading such as buckling, friction, and twist become more severe as hole diameter decreases. Therefore, for small holes, reacting torque downhole without interfering with the application of weight-on-bit is preferred. In this paper, we present a novel mechanism that enables effective and controllable downhole weight-on-bit transmission and torque reaction. This scalable design achieves its unique performance through four key features: (1) mechanical advantage based on geometry, (2) direction-dependent behavior using rolling and sliding contact, (3) modular scalability by combining modules in series, and (4) torque reaction and weight-on-bit that are proportional to applied axial force. As a result, simple mechanical devices can be used to react large torques while allowing controlled force to be transmitted to the drill bit. We outline our design, provide theoretical predictions of performance, and validate the results using full-scale testing. The experimental results include laboratory studies as well as limited field testing using a percussive hammer. These results demonstrate effective torque reaction, axial force transmission, favorable scaling with multiple modules, and predictable performance that is proportional to applied force.
We consider a class of nonlinear control synthesis problems where the underlying mathematical models are not explicitly known. We propose a data-driven approach to stabilize the systems when only sample trajectories of the dynamics are accessible. Our method is built on the density-function-based stability certificate that is the dual to the Lyapunov function for dynamical systems. Unlike Lyapunov-based methods, density functions lead to a convex formulation for a joint search of the control strategy and the stability certificate. This type of convex problem can be solved efficiently using the machinery of the sum of squares (SOS). For the data-driven part, we exploit the fact that the duality results in the stability theory can be understood through the lens of Perron–Frobenius and Koopman operators. This allows us to use data-driven methods to approximate these operators and combine them with the SOS techniques to establish a convex formulation of control synthesis. The efficacy of the proposed approach is demonstrated through several examples.
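The convexity claim above rests on Rantzer's density-function duality; a brief sketch of the certificate (notation assumed for illustration, not taken from the paper):

```latex
% Rantzer's dual certificate for \dot{x} = f(x) + g(x)u(x):
% a density \rho(x) > 0 satisfying
\[
  \nabla \cdot \bigl[ \rho(x)\,\bigl(f(x) + g(x)u(x)\bigr) \bigr] > 0
  \quad \text{almost everywhere}
\]
% certifies almost-everywhere convergence to the origin. The substitution
% c(x) = \rho(x)\,u(x) makes the condition jointly affine in (\rho, c):
\[
  \nabla \cdot \bigl[ f(x)\,\rho(x) + g(x)\,c(x) \bigr] > 0,
\]
% so both can be searched for simultaneously with sum-of-squares
% programming, and the controller is recovered as u(x) = c(x)/\rho(x).
```

A joint search over a Lyapunov function and a controller is bilinear, hence nonconvex; the density formulation avoids this because the unknowns enter linearly.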
The Perovskite PV Accelerator for Commercial Technology (PACT) is an independent validation center for the evaluation of perovskite PV technologies and their bankability. The center is led by Sandia National Laboratories and the National Renewable Energy Laboratory (NREL) and includes as part of its team Los Alamos National Laboratory (LANL), CFV Labs, Black and Veatch (B&V), and the Electric Power Research Institute (EPRI). The goals of the center are to: develop and improve indoor and outdoor performance characterization methods; develop and validate accelerated qualification testing for early failures (5-10 years); research degradation and failure modes; validate outdoor performance; and provide bankability services to the US perovskite PV (PSC) industry. Data and data management are paramount to the success and outcomes of the PACT center. This report describes how data will be managed and protected by PACT and identifies important data management principles that will guide our approach.
This Laboratory Directed Research and Development project developed and applied closely coupled experimental and computational tools to investigate powder compaction across multiple length scales. The primary motivation for this work is to provide connections between powder feedstock characteristics, processing conditions, and powder pellet properties in the context of powder-based energetic components manufacturing. We have focused our efforts on microcrystalline cellulose, a molecular crystalline surrogate material that is mechanically similar to several energetic materials of interest, but provides several advantages for fundamental investigations. We report extensive experimental characterization ranging in length scale from nanometers to macroscopic, bulk behavior. Experiments included nanoindentation of well-controlled, micron-scale pillar geometries milled into the surface of individual particles, single-particle crushing experiments, in-situ optical and computed tomography imaging of the compaction of multiple particles in different geometries, and bulk powder compaction. In order to capture the large plastic deformation and fracture of particles in computational models, we have advanced two distinct meshfree Lagrangian simulation techniques: (1) bonded particle methods, which extend existing discrete element method capabilities in the Sandia-developed, open-source LAMMPS code to capture particle deformation and fracture, and (2) extensions of peridynamics for application to mesoscale powder compaction, including a novel material model that includes plasticity and creep. We have demonstrated both methods for simulations of single-particle crushing as well as mesoscale multi-particle compaction, with favorable comparisons to experimental data. We have used small-scale, mechanical characterization data to inform material models, and in-situ imaging of mesoscale particle structures to provide initial conditions for simulations.
Both mesostructure porosity characteristics and overall stress-strain behavior were found to be in good agreement between simulations and experiments. We have thus demonstrated a novel multi-scale, closely coupled experimental and computational approach to the study of powder compaction. This enables a wide range of possible investigations into feedstock-process-structure relationships in powder-based materials, with immediate applications in energetic component manufacturing, as well as other particle-based components and processes.
Crabtree, Stefani A.; White, Devin A.; Bradshaw, Corey J.A.; Saltre, Frederik; Williams, Alan N.; Beaman, Robin J.; Bird, Michael I.; Ulm, Sean
Archaeological data and demographic modelling suggest that the peopling of Sahul required substantial populations, occurred rapidly within a few thousand years and encompassed environments ranging from hyper-arid deserts to temperate uplands and tropical rainforests. How this migration occurred and how humans responded to the physical environments they encountered have, however, remained largely speculative. By constructing a high-resolution digital elevation model for Sahul and coupling it with fine-scale viewshed analysis of landscape prominence, least-cost pedestrian travel modelling and high-performance computing, we create over 125 billion potential migratory pathways, from which the most parsimonious routes emerge. Our analysis revealed several major pathways—superhighways—transecting the continent, which we evaluated using archaeological data. These results suggest that the earliest Australian ancestors adopted a set of fundamental rules shaped by physiological capacity, attraction to visually prominent landscape features and freshwater distribution to maximize survival, even without previous experience of the landscapes they encountered.
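The least-cost pedestrian travel modelling mentioned above can be sketched in miniature: Dijkstra's algorithm over an elevation grid, with a slope-dependent traversal cost in the spirit of (but not identical to) the calibrated hiking-cost functions used in such studies. The grid, cell size, and cost form here are illustrative assumptions.

```python
import heapq

def least_cost_path(elev, start, goal, cell=100.0):
    """Dijkstra over a 2D elevation grid; traversal cost grows with slope
    (illustrative cost form, not the study's actual calibration)."""
    rows, cols = len(elev), len(elev[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                slope = abs(elev[nr][nc] - elev[r][c]) / cell
                nd = d + cell * (1.0 + 10.0 * slope)  # flat ground is cheapest
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

On a grid with a steep central ridge, the returned route detours around the ridge; run at continental scale over billions of origin-destination pairs, repeated corridors of such routes are what surface as "superhighways."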
Xiong, Haifeng; Kunwar, Deepak; Jiang, Dong; Garcia-Vargas, Carlos E.; Li, Hengyu; Du, Congcong; Canning, Griffin; Pereira-Hernandez, Xavier I.; Wan, Qiang; Lin, Sen; Purdy, Stephen C.; Miller, Jeffrey T.; Leung, Kevin; Chou, Stanley S.; Brongersma, Hidde H.; Ter Veen, Rik; Huang, Jianyu; Guo, Hua; Wang, Yong; Datye, Abhaya K.
The treatment of emissions from natural gas engines is an important area of research since methane is a potent greenhouse gas. The benchmark catalysts, based on Pd, still face challenges such as water poisoning and long-term stability. Here we report an approach for catalyst synthesis that relies on the trapping of metal single atoms on the support surface, in thermally stable form, to modify the nature of further deposited metal/metal oxide. By anchoring Pt ions on a catalyst support we can tailor the morphology of the deposited phase. In particular, two-dimensional (2D) rafts of PdOx are formed, resulting in higher reaction rates and improved water tolerance during methane oxidation. The results show that modifying the support by trapping single atoms could provide an important addition to the toolkit of catalyst designers for controlling the nucleation and growth of metal and metal oxide clusters in heterogeneous catalysts.
Medical countermeasures (MCMs) based on messenger ribonucleic acid (mRNA) are promising due to their programmability, targeting precision and specificity, predictable physicochemical properties, and amenability to scalable manufacture. However, safe and effective delivery vehicles are needed, especially for targeting the lung. We developed a generalized approach to nanoparticle-mediated mRNA delivery to the lung, and used it to evaluate candidate therapies. In initial studies, reporter mRNA was delivered using lipid-coated mesoporous silica nanoparticles (LC-MSNs) and lipid nanoparticles (LNPs), the latter with greater consistency. Then, mRNAs encoding known protein therapies were delivered using LNPs. These formulations showed some toxicity in mice with lung damage, but those with IL-1RA, sACE2-Ig, and ANGPT1 mRNA were modestly therapeutic on balance. Our work advances the state of the art for mRNA delivery to the lung, and provides a foundation for evaluating and characterizing mRNA-based lung therapies, including three that appear to be exceptionally promising.
In-situ additive manufacturing (AM) diagnostic tools (e.g., optical/infrared imaging and acoustics) already exist to correlate process anomalies to printed part defects. This current work aimed to augment existing capabilities by (1) incorporating in-situ imaging with machine learning (ML) image processing software (the ORNL-developed "Peregrine") for AM process anomaly detection; (2) synchronizing multiple in-situ sensors for simultaneous analysis of AM build events; and (3) correlating in-situ AM process data, generated part defects, and part mechanical properties. The key R&D question was whether these new combined hardware/software tools could successfully quantify defect distributions for parts built via SNL laser powder bed fusion (LPBF) machines, aiming to better understand data-driven process-structure-property-performance relationships. High-resolution optical cameras and acoustic microphones were successfully integrated in two LPBF machines and linked to the Peregrine ML software. The software was successfully calibrated on both machines and used to image hundreds of layers of multiple builds to train the ML software to distinguish printed part from powder. The software's validation accuracy for this task increased from 56% to 98.8% over three builds. Lighting conditions inside the chamber were found to significantly impact ML algorithm predictions from in-situ sensors, so these were tailored to each machine's internal framework. Finally, 3D part reconstructions were successfully generated for a build from the compressed stack of layer-wise images. Resolution differences nearest and furthest from the optical camera are discussed. Future work aims to improve optical resolution, increase the range of process anomalies identified, and integrate more sensor modalities.
Sandia National Laboratories has tested and evaluated the performance of the following five models of low-cost infrasound sensors and sensor packages: Camas microphone, Gem Infrasound Logger, InfraBSU sensor, Raspberry Boom, and the Samsung S10 smartphone utilizing the Redvox app. The purpose of this infrasound sensor evaluation is to measure the performance characteristics in such areas as power consumption, sensitivity, self-noise, dynamic range, response, passband, linearity, sensitivity variation due to changes in static pressure and temperature, and sensitivity to vertical acceleration. The infrasound monitoring community has leveraged such sensors and integrated packages in novel ways; better understanding the performance of these units serves the geophysical monitoring community.
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
Anwar, Ishtiaque; Hatambeigi, Mahya; Chojnicki, Kirsten; Taha, Mahmoud R.; Stormont, John C.
The stiffness of wellbore cement fracture surfaces was measured after exposure to the advective flow of nitrogen, silicone oil, and medium sweet dead crude oil for different exposure periods. The test specimens were extracted from fractured cement cylinders, where the cement fracture surfaces were exposed to the different fluids for up to 15 weeks. A nanoindenter with a Berkovich indenter tip was used to measure load-indentation depth data, from which the elastic modulus (E) and nano-hardness (H) of the cement fracture surfaces were extracted. A reduction in the elastic modulus compared with an unexposed specimen was observed in all the specimens. Both elastic modulus and nano-hardness for the specimens exposed to silicone oil were lower than for specimens exposed to nitrogen gas, and varied with the period of exposure. The elastic modulus and nano-hardness of the specimens exposed to crude oil were the lowest, decreasing significantly with exposure period. The frequency distribution of the nanoindentation measurements shows that the volume-fraction ratio of the two types of cement hydrated nanocomposites for both the unexposed and test specimens is about 70:30. Phase transformation beneath the indenter is observed for all of the specimens, with more obvious plastic deformation in specimens exposed to crude oil. Analytical measurements (SEM, EDS, FT-IR, and XRD) on exposed cement fracture surfaces reveal different levels of physical and chemical alteration that are consistent with the reduction in stiffness measured by nanoindentation. The study suggests that cement stiffness will decrease due to crude oil exposure, and that the fracture will be sensitive to stress and pore pressure over time.
In this analysis, the two material interaction models available in the MELCOR code are benchmarked for a severe accident at a BWR under representative Fukushima Daiichi boundary conditions. This part of the benchmark investigates the impact of each material interaction model on accident progression through a detailed single case analysis. It is found that the eutectics model simulation exhibits more rapid accident progression for the duration of the accident. The slower accident progression exhibited by the interactive materials model simulation, however, allows for a greater degree of core material oxidation and hydrogen generation to occur, as well as elevated core temperatures during the ex-vessel accident phase. The eutectics model simulation exhibits more significant degradation of core components during the late in-vessel accident phase – more debris forms and relocates to the lower plenum before lower head failure. The larger debris bed observed in the eutectics model simulation also reaches higher temperatures, presenting a more significant thermal challenge to the lower head until its failure. At the end of the simulated accident scenario, however, core damage is comparable between both simulations due to significant core degradation that occurs during the ex-vessel phase in the interactive materials model simulation. A key difference between the two models’ performance is the maximum temperatures that can be reached in the core and therefore the maximum ΔT between any two components. When implementing the interactive materials model, users have the option to modify the liquefaction temperature of the ZrO2-interactive and UO2-interactive materials as a way to mimic early fuel rod failure due to material interactions. Through modification of the liquefaction temperature of high-melting-point materials with significant mass, users may inadvertently limit maximum core temperatures for fuel, cladding, and debris components.
In this project we developed and validated algorithms for privacy-preserving linear regression using a new variant of Secure Multiparty Computation (MPC) we call "Hybrid MPC" (hMPC). Our variant is intended to support low-power, unreliable networks of sensors with low-communication, fault-tolerant algorithms. In hMPC we do not share training data, even via secret sharing. Thus, agents are responsible for protecting their own local data. Only the machine learning (ML) model is protected with information-theoretic security guarantees against honest-but-curious agents. There are three primary advantages to this approach: (1) after setup, hMPC supports a communication-efficient matrix multiplication primitive, (2) organizations prevented by policy or technology from sharing any of their data can participate as agents in hMPC, and (3) large numbers of low-power agents can participate in hMPC. We have also created an open-source software library named "Cicada" to support hMPC applications with fault-tolerance. The fault-tolerance is important in our applications because the agents are vulnerable to failure or capture. We have demonstrated this capability at Sandia's Autonomy New Mexico laboratory through a simple machine-learning exercise with Raspberry Pi devices capturing and classifying images while flying on four drones.
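As a hedged illustration of the information-theoretic protection described above, the sketch below shows generic additive secret sharing over a prime field. This is the textbook primitive, not Cicada's actual API, and the field modulus is an illustrative choice.

```python
import random

P = 2**61 - 1  # Mersenne prime field modulus (illustrative choice)

def share(secret, n):
    """Split an integer into n additive shares modulo P. Any n-1 shares
    are uniformly random, revealing nothing about the secret
    (information-theoretic hiding against honest-but-curious parties)."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(parts):
    """Recover the secret by summing all shares modulo P."""
    return sum(parts) % P
```

In hMPC, as described above, it is the ML model parameters (not the training data) that would be protected this way, with each agent holding one share.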
This report presents the results of the sampling effort and documents all associated field activities including borehole clearing, soil sample collection, storage and transportation to the analytical laboratories, borehole backfilling and surface restoration, and storage of investigation-derived waste (IDW) for future profiling and disposal by SNL/CA waste management personnel.
Co-deposited, immiscible alloy systems form hierarchical microstructures under specific deposition conditions that accentuate the difference in constituent element mobility. The mechanism leading to the formation of these unique hierarchical morphologies during the deposition process is difficult to identify, since the characterization of these microstructures is typically carried out post-deposition. We employ phase-field modeling to study the evolution of microstructures during deposition combined with microscopy characterization of experimentally deposited thin films to reveal the origin of the formation mechanism of hierarchical morphologies in co-deposited, immiscible alloy thin films. Our results trace this back to the significant influence of a local compositional driving force that occurs near the surface of the growing thin film. We show that local variations in the concentration of the vapor phase near the surface, resulting in nuclei (i.e., a cluster of atoms) on the film’s surface with an inhomogeneous composition, can trigger the simultaneous evolution of multiple concentration modulations across multiple length scales, leading to hierarchical morphologies. We show that locally, the concentration must be above a certain threshold value in order to generate distinct hierarchical morphologies in a single domain.
Polymers such as PTFE (polytetrafluoroethylene or Teflon), EPDM (ethylene propylene diene monomer) rubber, FKM fluoroelastomer (Viton), Nylon 11, nitrile butadiene rubber (NBR), hydrogenated nitrile rubber (HNBR) and perfluoroelastomers (FF_202) are commonly employed in supercritical CO2 (sCO2) energy conversion systems. O-rings and gaskets made from these polymers face stringent performance conditions such as elevated temperatures, high pressures, pollutants, and corrosive humid environments. In FY 2019, we conducted experiments at high temperatures (100°C and 120°C) under isobaric conditions (20 MPa). Findings showed that elevated temperatures accelerated degradation of polymers in sCO2, and that certain polymer microstructures are more susceptible to degradation than others. In FY 2020, the focus was to understand the effect of sCO2 on polymers at low (10 MPa) and high pressures (40 MPa) under isothermal conditions (100°C). The same selectivity was observed in these experiments: certain polymeric functionalities showed a greater propensity for failure than others. Fast diffusion, supported by higher pressures and long exposure times (1000 hours) at the test temperature, caused increased damage in sCO2 environments to even the most robust polymers. We also examined polymers under compression in sCO2 at 100°C and 20 MPa pressure to imitate the actual sealing performance required of these materials in sCO2 systems. Compression worsened the physical damage that resulted from chemical attack of the polymers under these test conditions. In FY 2021, the effect of cycling temperature (from 50°C to 150°C to 50°C) for polymers under a steady sCO2 pressure of 20 MPa was studied. The aim was to understand the influence of cycling temperatures of sCO2 for typical polymers under isobaric (20 MPa) conditions.
Thermoplastic polymers (Nylon and PTFE) and elastomers (EPDM, Viton, Buna N, Neoprene, FF202, and HNBR) were subjected to 20 MPa sCO2 pressure for 50 cycles and 100 cycles in separate experiments. Samples were extracted for ex-situ characterization at 50 cycles and upon the completion of 100 cycles. Each cycle consisted of 175 minutes of cycling from 50°C to 150°C. The polymer samples were examined for physical and chemical changes by Dynamic Mechanical and Thermal Analysis (DMTA), Fourier Transform Infrared (FTIR) spectroscopy, and compression set. Density and mass changes immediately after removal from test were measured for degree-of-swell comparisons. Optical microscopy techniques and micro-computed tomography (micro-CT) images were collected on select specimens. The evaluations showed that exposures to supercritical CO2 environments resulted in combinations of physical and/or chemical changes. For each polymer, the dominant effects of temperature cycling under sCO2 pressure were evaluated. Attempts were made to qualitatively link the permanent sCO2 effects to polymer microstructure, free volume, backbone substitutions, presence of polar groups, and degree-of-crystallinity differences. This study has established that soft polymeric materials are susceptible to failure in sCO2 through mechanisms that depend on polymer microstructure and chemistry. Polar pendant groups and large-atom substitutions on the backbone are among the influential structural factors.
Isocontours of Q-criterion with velocity visualized in the wake for two NREL 5-MW turbines operating under uniform-inflow wind speed of 8 m/s. Simulation performed with the hybrid-Nalu-Wind/AMR-Wind solver.
The SNL Sierra Mechanics code suite is designed to enable simulation of complex multiphysics scenarios. The code suite is composed of several specialized applications which can operate either in standalone mode or coupled with each other. Arpeggio is a supported utility that enables loose coupling of the various Sierra Mechanics applications by providing access to Framework services that facilitate the coupling. More importantly, Arpeggio orchestrates the execution of applications that participate in the coupling. This document describes the various components of Arpeggio and their operability. The intent of the document is to provide a fast path for analysts interested in coupled applications via simple examples of its usage.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
Adcock, Christiane; Ananthan, Shreyas; Berget-Vergiat, Luc; Brazell, Michael; Brunhart-Lupo, Nicholas; Hu, Jonathan J.; Knaus, Robert C.; Melvin, Jeremy; Moser, Bob; Mullowney, Paul; Rood, Jon; Sharma, Ashesh; Thomas, Stephen; Vijayakumar, Ganesh; Williams, Alan B.; Wilson, Robert; Yamazaki, Ichitaro; Sprague, Michael
The goal of the ExaWind project is to enable predictive simulations of wind farms comprised of many megawatt-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines, capturing the thin boundary layers, and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources.
The SIERRA Low Mach Module: Fuego, henceforth referred to as Fuego, is the key element of the ASC fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Using MPMD coupling, Scefire and Nalu handle the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
A key objective of the United States Department of Energy’s (DOE) Office of Nuclear Energy’s Spent Fuel and Waste Science and Technology Campaign is to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the United States has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of the DOE (Nuclear Waste Policy Act of 1982, as amended). Any repository licensed to dispose of SNF must meet requirements regarding the long-term performance of that repository. For an evaluation of the long-term performance of the repository, one of the events that may need to be considered is the SNF achieving a critical configuration during the postclosure period. Of particular interest is the potential behavior of SNF in dual-purpose canisters (DPCs), which are currently licensed and being used to store and transport SNF but were not designed for permanent geologic disposal. A study has been initiated to examine the potential consequences, with respect to long-term repository performance, of criticality events that might occur during the postclosure period in a hypothetical repository containing DPCs. The first phase (a scoping phase) consisted of developing an approach to creating the modeling tools and techniques that may eventually be needed to either include or exclude criticality from a performance assessment (PA) as appropriate; this scoping phase is documented in Price et al. (2019a). In the second phase, that modeling approach was implemented and future work was identified, as documented in Price et al. (2019b). This report gives the results of a repository-scale PA examining the potential consequences of postclosure criticality, as well as the information, modeling tools, and techniques needed to incorporate the effects of postclosure criticality in the PA.