The development and application of optically accessible engines to further our understanding of in-cylinder combustion processes are reviewed, spanning early efforts in simplified engines to the more recent development of high-pressure, high-speed engines that retain the geometric complexities of modern production engines. Limitations of these engines with respect to the reproduction of realistic metal test engine characteristics and performance are identified, as well as methods that have been used to overcome these limitations. Lastly, the role of the work performed in these engines in clarifying the fundamental physical processes governing the combustion process and in laying the foundation for predictive engine simulation is summarized.
A major theme in thermoelectric research is based on controlling the formation of nanostructures that occur naturally in bulk intermetallic alloys through various types of thermodynamic phase transformation processes (He et al., 2013). The questions of how such nanostructures form and why they lead to a high thermoelectric figure of merit (zT) are scientifically interesting and worthy of attention. However, as we discuss in this opinion, any processing route based on thermodynamic phase transformations alone will be difficult to implement in thermoelectric applications where thermal stability and reliability are important. Attention should also be focused on overcoming these limitations through advanced post-processing techniques.
Model form error of the type considered here is error due to an approximate or incorrect representation of the physics by a computational model. Typical approaches to adjusting a model based on observed differences between experiment and prediction are to calibrate the model parameters using the observed discrepancies or to develop parameterized additive corrections to the model output. These approaches are generally not suitable if significant physics is missing from the model and the desired quantities of interest for an application differ from those used for calibration. The approach developed here is to build a corrected surrogate solver through a multi-step process: 1) sampled simulation results are used to develop a surrogate computational solver that maintains the overall conservation principles of the unmodified governing equations; 2) the surrogate solver is applied to candidate linear and nonlinear corrector terms to develop corrections that are consistent with the original conservation principles; 3) constant multipliers on these terms are calibrated using the experimental observations; and 4) the resulting surrogate solver is used to predict the application response for the quantity of interest. This approach and several other calibration-based approaches were applied to an example problem based on the diffusive Burgers' equation. While all the approaches provided some model correction when the measurement/calibration quantity was the same as that for the application, only the present approach was able to adequately correct the CompSim results when the prediction quantity differed from the calibration quantity.
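The corrector-calibration step can be sketched in a few lines. The snippet below is a hypothetical, minimal illustration, not the report's CompSim workflow: an explicit finite-difference viscous Burgers solver stands in for the surrogate, a linear diffusive term plays the role of a candidate corrector, and its constant multiplier is calibrated against synthetic "observations" generated with a higher true diffusivity.

```python
import numpy as np

def solve_burgers(nu, c=0.0, nx=100, nt=200, dt=1e-4, L=1.0):
    """Explicit FD solver for u_t + u u_x = (nu + c) u_xx on a periodic domain.
    The c*u_xx term is a hypothetical linear corrector (illustrative only)."""
    dx = L / nx
    x = np.arange(nx) * dx
    u = np.sin(2 * np.pi * x)  # illustrative initial condition
    for _ in range(nt):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (-u * ux + (nu + c) * uxx)
    return u

# "Experiment": the true physics has more diffusion than the model captures.
truth = solve_burgers(nu=0.05)
model = solve_burgers(nu=0.02)

# Calibrate the constant multiplier on the corrector term by scanning
# candidate values and minimizing the misfit to the observations.
cands = np.linspace(0.0, 0.06, 61)
errs = [np.linalg.norm(solve_burgers(nu=0.02, c=c) - truth) for c in cands]
c_star = cands[int(np.argmin(errs))]
print(f"calibrated corrector multiplier: {c_star:.3f}")  # expected near 0.03
```

Because the corrector here has the same form as the missing diffusion, the calibrated multiplier recovers the deficit (0.05 − 0.02); in the report's setting the corrector terms are constrained to respect the original conservation principles.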
A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
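For reference, the Johnson-Cook flow stress has the standard form σ = (A + Bεⁿ)(1 + C ln ε̇*)(1 − T*ᵐ). The sketch below evaluates it with illustrative, steel-like parameter values (hypothetical numbers, not those fit in the report); correlations among parameters such as A, B, and n across a class of materials are the kind of interdependency the reduction method seeks to exploit.

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T,
                        A, B, n, C, m,
                        eps_dot_ref=1.0, T_ref=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_ref) / (T_melt - T_ref)."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return ((A + B * eps_p**n)
            * (1.0 + C * math.log(eps_dot / eps_dot_ref))
            * (1.0 - T_star**m))

# Illustrative parameters (hypothetical, roughly steel-like; stress in MPa).
sigma = johnson_cook_stress(eps_p=0.1, eps_dot=100.0, T=500.0,
                            A=350.0, B=275.0, n=0.36, C=0.022, m=1.0)
print(f"flow stress ~ {sigma:.0f} MPa")
```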
In the summer of 2020, the National Aeronautics and Space Administration (NASA) plans to launch a spacecraft as part of the Mars 2020 mission. One option for the rover on the proposed spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. An alternative option being considered is a set of solar panels for electrical power with up to 80 Light-Weight Radioisotope Heater Units (LWRHUs) for local component heating. Both the MMRTG and the LWRHUs use radioactive plutonium dioxide. NASA is preparing an Environmental Impact Statement (EIS) in accordance with the National Environmental Policy Act. The EIS will include information on the risks of mission accidents to the general public and on-site workers at the launch complex. This Nuclear Risk Assessment (NRA) addresses the responses of the MMRTG or LWRHU options to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for assessing the radiological risks of both options in the EIS.
In this report, measurements of the prompt radiation-induced conductivity (RIC) in 3 mil samples of Pyralux® are presented as a function of dose rate, pulse width, and applied bias. The experiments were conducted with the Medusa linear accelerator (LINAC) located at the Little Mountain Test Facility (LMTF) near Ogden, UT. The nominal electron energy for the LINAC is 20 MeV. Prompt conduction current data were obtained for dose rates ranging from ~2×10⁹ rad(Si)/s to ~1.1×10¹¹ rad(Si)/s and for nominal pulse widths of 50 ns and 500 ns. At a given dose rate, the applied bias across the samples was stepped between -1500 V and 1500 V. Calculated values of the prompt RIC varied between 1.39×10⁻⁸ Ω⁻¹·m⁻¹ and 2.67×10⁻⁷ Ω⁻¹·m⁻¹, and the prompt RIC coefficient varied between 1.25×10⁻¹⁸ Ω⁻¹·m⁻¹/(rad/s) and 1.93×10⁻¹⁷ Ω⁻¹·m⁻¹/(rad/s).
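Prompt RIC data of this kind are commonly reduced with the power law σ = k·γ̇^Δ in dose rate γ̇. As an illustration only (the power-law form is a standard RIC model, not necessarily how this report reduced its data), the endpoints of the ranges quoted above can be paired to estimate the exponent:

```python
import math

def fit_ric_power_law(rates, sigmas):
    """Least-squares fit of the common prompt-RIC model
    sigma = k * (dose rate)**Delta, linearized in log-log space."""
    xs = [math.log10(g) for g in rates]
    ys = [math.log10(s) for s in sigmas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    delta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    k = 10.0 ** (ybar - delta * xbar)
    return k, delta

# Endpoints of the dose-rate and conductivity ranges quoted in the abstract,
# treated as two illustrative (dose rate, conductivity) pairs:
k, delta = fit_ric_power_law([2e9, 1.1e11], [1.39e-8, 2.67e-7])
print(f"Delta ~ {delta:.2f}")
```

A Δ below 1 indicates the sublinear dose-rate dependence typical of prompt RIC in polymers.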
A simple method for experimentally determining thermodynamic quantities for flow battery cell reactions is presented. Equilibrium cell potentials, temperature derivatives of cell potential (dE/dT), Gibbs free energies, and entropies are reported here for all-vanadium, iron–vanadium, and iron–chromium flow cells with state-of-the-art solution compositions. Proof is given that formal potentials and formal temperature coefficients can be used with modified forms of the Nernst Equation to quantify the thermodynamics of flow cell reactions as a function of state-of-charge. Such empirical quantities can be used in thermo-electrochemical models of flow batteries at the cell or system level. In most cases, the thermodynamic quantities measured here are significantly different from standard values reported and used previously in the literature. The data reported here are also useful in the selection of operating temperatures for flow battery systems. Because higher temperatures correspond to lower equilibrium cell potentials for the battery chemistries studied here, it can be beneficial to charge a cell at higher temperature and discharge at lower temperature. As a result, proof-of-concept of improved voltage efficiency with the use of such non-isothermal cycling is given for the all-vanadium redox flow battery, and the effect is shown to be more pronounced at lower current densities.
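The thermodynamic quantities follow directly from a measured formal potential and its temperature coefficient via ΔG = −nFE, ΔS = nF(dE/dT), and ΔH = ΔG + TΔS. A minimal sketch with illustrative (hypothetical) numbers, not values measured in this work:

```python
F = 96485.33  # Faraday constant, C/mol

def cell_thermodynamics(E, dE_dT, n=1, T=298.15):
    """Thermodynamic quantities from a measured formal cell potential E (V)
    and formal temperature coefficient dE/dT (V/K), n-electron reaction."""
    dG = -n * F * E       # Gibbs free energy change, J/mol
    dS = n * F * dE_dT    # entropy change, J/(mol K)
    dH = dG + T * dS      # enthalpy change, J/mol
    return dG, dS, dH

# Illustrative (hypothetical) inputs for a vanadium-like cell:
dG, dS, dH = cell_thermodynamics(E=1.40, dE_dT=-1.5e-3)
print(f"dG = {dG/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

A negative dE/dT is what makes the equilibrium potential fall with temperature, which is the basis of the non-isothermal charge/discharge strategy described above.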
A simple demonstration of nonlocality in a heterogeneous material is presented. By analysis of the microscale deformation of a two-component layered medium, it is shown that nonlocal interactions necessarily appear in a homogenized model of the system. Explicit expressions for the nonlocal forces are determined. The way these nonlocal forces appear in various nonlocal elasticity theories is derived. The length scales that emerge involve the constituent material properties as well as their geometrical dimensions. A peridynamic material model for the smoothed displacement field is derived. It is demonstrated by comparison with experimental data that the incorporation of nonlocality in modeling dramatically improves the prediction of the stress concentration in an open hole tension test on a composite plate.
For over two decades the dominant means of enabling portable performance of computational science and engineering applications on parallel processing architectures has been the bulk-synchronous parallel (BSP) programming model. Code developers, motivated by performance considerations to minimize the number of messages transmitted, have typically pursued a strategy of aggregating message data into fewer, larger messages. Emerging and future high-performance architectures, especially those targeting Exascale capabilities, provide motivation and capabilities for revisiting this approach. In this paper we explore alternative configurations within the context of a large-scale, complex multi-physics application and a proxy that represents its behavior, presenting results that demonstrate important advantages of these alternatives as the number of processors increases.
NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic networks. Specifically, NetMOD simulates the detection capabilities of seismic monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform seismic detection simulations. In addition, NetMOD is distributed with a simulation dataset for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic network for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use this dataset when describing how to perform the steps involved when running a simulation.
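The final step described above, converting a predicted signal-to-noise ratio into a detection probability given a threshold, can be sketched as follows. The Gaussian-SNR and station-independence assumptions below are illustrative simplifications, not necessarily NetMOD's exact formulation:

```python
import math

def station_detection_prob(snr_db, threshold_db, sigma_db=1.0):
    """P(detection) for one station, assuming the predicted SNR (in dB)
    is Gaussian-distributed about its modeled value -- a modeling
    assumption for this sketch, not a claim about NetMOD's internals."""
    z = (snr_db - threshold_db) / (sigma_db * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def network_detection_prob(station_probs):
    """P(at least one station detects), assuming independent stations."""
    miss = 1.0
    for p in station_probs:
        miss *= 1.0 - p
    return 1.0 - miss

# Hypothetical predicted SNRs (dB) at three stations, 10 dB threshold:
probs = [station_detection_prob(s, threshold_db=10.0) for s in (8.0, 12.0, 15.0)]
print(f"network detection probability: {network_detection_prob(probs):.3f}")
```

Real network simulations typically require detection at several stations to form an event, which replaces the "at least one" combination with a stricter criterion.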
We are on the threshold of a transformative change in the basic architecture of high-performance computing. The use of accelerator processors, characterized by large core counts, shared but asymmetrical memory, and heavy thread loading, is quickly becoming the norm in high performance computing. These accelerators represent significant challenges for updating our existing base of software. An intrinsic problem with this transition is the fundamental programming shift from message-passing processes to much finer-grained thread scheduling with memory sharing. Another problem is the lack of stability in accelerator implementations; processor and compiler technology is currently changing rapidly. This report documents the results of our three-year ASCR project to address these challenges. Our project includes the development of the Dax toolkit, which contains the beginnings of new algorithms for a new generation of computers and the underlying infrastructure to rapidly prototype and build further algorithms as necessary.
Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex, scattering environment, which can make accurate representations of gamma spectra difficult to obtain. By coupling the Monte Carlo N-Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters. By instead modeling the external scattering environment in MCNP and using the results as input to the GADRAS detector response function, accurate spectra can be obtained without relying on these empirical parameters. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used gamma-emitting sources as well as moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.
We develop a capability to simulate reduction-oxidation (redox) flow batteries in the Sierra Multi-Mechanics code base. Specifically, we focus on all-vanadium redox flow batteries; however, the capability is general in implementation and could be adapted to other chemistries. The electrochemical and porous flow models follow those developed in the recent publication [28]. We review the model implemented in this work and its assumptions, and we show several verification cases, including a binary electrolyte and a battery half-cell. Then, we compare our model implementation with the experimental results shown in [28], finding good agreement. Next, a sensitivity study is conducted for the major model parameters, which is beneficial in targeting specific features of the redox flow cell for improvement. Lastly, we simulate a three-dimensional version of the flow cell to determine the impact of plenum channels on the performance of the cell. Such channels are frequently seen in experimental designs in which the current collector plates are borrowed from fuel cell designs, which use a serpentine channel etched into a solid collector plate.
To support higher-fidelity modeling of residual stresses in glass-to-metal (GTM) seals and to demonstrate the accuracy of finite element analysis predictions, characterization and validation data have been collected for Sandia's commonly used compression seal materials. The temperature dependence of the storage moduli, the shear relaxation modulus master curve, and the structural relaxation of the Schott 8061 glass were measured, and stress-strain curves were generated for SS304L VAR in the small-strain regimes typical of GTM seal applications, spanning temperatures from 20 to 500 °C. Material models were calibrated, and finite element predictions are being compared to measured data to assess the accuracy of the predictions.
There are multiple ways for a homeowner to obtain the electricity-generating and savings benefits offered by a photovoltaic (PV) system. These include purchasing a PV system through various financing mechanisms or leasing the PV system from a third party, with end-of-lease options that may include purchase, lease renewal, or PV system removal. The different ownership options available to homeowners present a challenge to appraisal and real estate professionals during a home sale or refinance: how to develop a value that reflects the PV system's operational characteristics, local market conditions, and lender and underwriter requirements. This paper presents these PV system ownership options with a discussion of the considerations an appraiser must make when developing the contributory value of a PV system to a residential property.
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Typically, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier-injection annealing of initial defect structures in semiconductor materials. Following the development of a set of verification tests, the code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos LANSCE "Blue Room" facility. The results show that the KMC calculations agree well with experiment once key defect parameters are adjusted within their uncertainty bounds.
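A minimal residence-time KMC loop of the kind described can be sketched as below. The event catalog (a vacancy hop and a divacancy dissociation, with hypothetical attempt frequencies and activation energies) is illustrative only; a production defect-annealing code would update the available events after each step as the defect population evolves.

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant, eV/K

def kmc_anneal(events, T, t_end, seed=0):
    """Minimal residence-time kinetic Monte Carlo loop.  `events` maps an
    event name to (attempt frequency nu [1/s], activation energy Ea [eV]);
    returns how often each event fired before simulated time t_end.
    A schematic sketch, not the report's defect-physics code."""
    rng = random.Random(seed)
    # Arrhenius rates k = nu * exp(-Ea / kB T); fixed catalog for this sketch.
    rates = {n: nu * math.exp(-ea / (KB * T)) for n, (nu, ea) in events.items()}
    total = sum(rates.values())
    counts = {name: 0 for name in events}
    t = 0.0
    while True:
        t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
        if t > t_end:
            return counts
        r = rng.random() * total                    # pick event by its rate
        for name, k in rates.items():
            r -= k
            if r <= 0.0:
                counts[name] += 1
                break

# Hypothetical defect events during a 600 K thermal anneal:
events = {"V hop": (1e13, 0.45), "V2 dissociation": (1e13, 1.30)}
print(kmc_anneal(events, T=600.0, t_end=1e-6))
```

The high-barrier event is effectively frozen out at this temperature, which is the mechanism by which anneal temperature selects among recovery pathways.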
This work was an early career LDRD investigating the use of a focused ion beam (FIB) to implant Ga into silicon to create embedded and/or fully suspended nanowires. The embedded Ga nanowires demonstrated an electrical resistivity of 5 mΩ·cm, conduction down to 4 K, and Ohmic contact to silicon. The suspended nanowires achieved dimensions down to 20 nm × 30 nm × 10 µm with large sensitivity to pressure, and these structures performed well as Pirani gauges. Sputtered niobium was also developed in this research for use as a superconductive coating on the nanowires. Oxidation characteristics of Nb were detailed, and a technique that places the Nb under tensile stress resulted in the Nb resisting bulk atmospheric oxidation for periods of up to years.
The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.
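The core idea behind multilevel summation is to split the 1/r kernel into a short-range part that vanishes beyond a cutoff (summed directly over nearby pairs) and a smooth long-range remainder (interpolated on successively coarser grids). The sketch below uses one common C¹ smoothing polynomial and verifies that the split reproduces 1/r exactly; it illustrates the kernel splitting only, not the LAMMPS implementation.

```python
def gamma(rho):
    """C^1 smoothing of 1/rho: a quadratic interpolation of s**-0.5
    in s = rho**2 about s = 1, matching 1/rho and its slope at rho = 1."""
    if rho >= 1.0:
        return 1.0 / rho
    s = rho * rho
    return 15.0 / 8.0 - 5.0 * s / 4.0 + 3.0 * s * s / 8.0

def split_kernel(r, a):
    """Split 1/r into a short-range part (identically zero beyond the
    cutoff a) and a smooth long-range part suitable for grid interpolation."""
    long_range = gamma(r / a) / a
    short_range = 1.0 / r - long_range
    return short_range, long_range

for r in (0.5, 1.0, 2.0, 5.0):
    sr, lr = split_kernel(r, a=2.0)
    assert abs(sr + lr - 1.0 / r) < 1e-12  # the split is exact
    if r >= 2.0:
        assert sr == 0.0                   # short part vanishes past the cutoff
print("kernel split verified")
```

Because the long-range part is smooth everywhere, it can be sampled and interpolated on grids, which is what drives the overall cost down to O(N).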
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02, completed March 31, 2012; THM.CFD.P5.01, completed June 30, 2012; and THM.CFD.P5.01, completed October 31, 2012.
Radar coherence is an important concept for imaging radar systems such as synthetic aperture radar (SAR). This document quantifies some of the effects in SAR which modify the coherence. Although these effects can disrupt the coherence within a single SAR image, this report will focus on the coherence between separate images, such as for coherent change detection (CCD) processing. There have been other presentations on aspects of this material in the past. The intent of this report is to bring various issues that affect the coherence together in a single report to support radar engineers in making decisions about these matters.
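Coherence between two co-registered complex image patches is typically estimated as |Σ s₁s₂*| / √(Σ|s₁|² Σ|s₂|²). The sketch below applies this sample estimator to synthetic speckle, with additive noise standing in for the decorrelation effects between passes (an illustrative model, not a result from this report):

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence magnitude between two co-registered complex
    image patches: |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2)."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
scene = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
noise = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
pass1 = scene
pass2 = scene + 0.5 * noise  # inter-pass decorrelation modeled as additive noise
print(f"estimated coherence: {coherence(pass1, pass2):.3f}")
```

For this noise level the expected coherence is 2/√5 ≈ 0.894; in CCD processing, drops below such a baseline flag scene changes between passes.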
This report describes a system model that can be used to analyze three advanced small modular reactor (SMR) designs through their lifetime. Neutronics of these reactor designs were evaluated using Monte Carlo N-Particle eXtended (MCNPX/6). The system models were developed in Matlab and Simulink. A major thrust of this research was the initial scoping analysis of Sandia's concept of a long-life fast reactor (LLFR). The inherent characteristic of this conceptual design is to minimize the change in reactivity over the lifetime of the reactor. This allows the reactor to operate at full power substantially longer than traditional light water reactors (LWRs) or other SMR designs (e.g., the high-temperature gas reactor (HTGR)). The system model has subroutines for lifetime reactor feedback and operation calculations, thermal hydraulic effects, load demand changes, and a simplified supercritical CO2 (sCO2) Brayton cycle for power conversion.
Cyber attacks pose a major threat to modern organizations. Little is known about the social aspects of decision making among organizations that face cyber threats, nor do we have empirically-grounded models of the dynamics of cooperative behavior among vulnerable organizations. The effectiveness of cyber defense can likely be enhanced if information and resources are shared among organizations that face similar threats. Three models were created to begin to understand the cognitive and social aspects of cyber cooperation. The first simulated a cooperative cyber security program between two organizations. The second focused on a cyber security training program in which participants interact (and potentially cooperate) to solve problems. The third built upon the first two models and simulates cooperation between organizations in an information-sharing program.
Early 2010 saw a significant change in adversarial techniques aimed at network intrusion: a shift from malware delivered via email attachments toward the use of hidden, embedded hyperlinks to initiate sequences of downloads and interactions with web sites and network servers containing malicious software. Enterprise security groups were well poised and experienced in defending against the former attacks, but the new types of attacks were larger in number, more challenging to detect, dynamic in nature, and required the development of new technologies and analytic capabilities. The Hybrid LDRD project was aimed at delivering new capabilities in large-scale data modeling and analysis to enterprise security operators and analysts and at understanding the challenges of detecting and preventing emerging cybersecurity threats. Leveraging previous LDRD research efforts and capabilities in large-scale relational data analysis, large-scale discrete data analysis and visualization, and streaming data analysis, new modeling and analysis capabilities were quickly brought to bear on email phishing and spear phishing attacks in the Sandia enterprise security operational groups at the onset of the Hybrid project. As part of this project, a software development and deployment framework was created within the security analysts' workflow tool sets to facilitate the delivery and testing of new capabilities as they became available, and machine learning algorithms were developed to address the challenge of dynamic threats. Furthermore, researchers from the Hybrid project were embedded in the security analyst groups for almost a full year, engaged in daily operational activities and routines, creating an atmosphere of trust and collaboration between the researchers and security personnel.
The Hybrid project has altered the way that research ideas can be incorporated into the production environments of Sandia's enterprise security groups, reducing time to deployment from months and years to hours and days for the application of new modeling and analysis capabilities to emerging threats. The development and deployment framework has been generalized into the Hybrid Framework and incorporated into several LDRD, WFO, and DOE/CSL projects and proposals. Most importantly, the Hybrid project has provided Sandia security analysts with new, scalable, extensible analytic capabilities that have resulted in alerts not detectable using their previous workflow tool sets.