This Report characterizes the defects in the defect reaction network in silicon-doped, n-type InP deduced from first-principles density functional theory. The reaction network is deduced by following exothermic defect reactions, starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InP and culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This Report serves to extend the results for intrinsic defects in SAND 2012-3313, "Simple intrinsic defects in InP: Numerical predictions," to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.
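For context, the tabulated reaction energies in such studies are conventionally built from charged-defect formation energies; a standard textbook form of these relations (stated here for orientation, not quoted from the report) is

\[
E_f[X^q] = E_{\mathrm{tot}}[X^q] - E_{\mathrm{tot}}[\mathrm{bulk}] - \sum_i n_i \mu_i + q\,(E_F + E_{\mathrm{VBM}}),
\qquad
\Delta E_{\mathrm{rxn}} = \sum_{\mathrm{products}} E_f - \sum_{\mathrm{reactants}} E_f,
\]

where n_i atoms with chemical potential \mu_i are added (n_i > 0) or removed (n_i < 0), E_F is the Fermi level referenced to the valence-band maximum E_VBM, and a reaction is exothermic when \Delta E_{\mathrm{rxn}} < 0.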
Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory each selected a representative simulation code to be used as a performance benchmark for the Trinity Capability Improvement Metric. Sandia selected the SIERRA Low Mach Module: Nalu, a fluid dynamics code that solves many variable-density, acoustically incompressible problems of interest spanning laminar to turbulent flow regimes, since it is fairly representative of implicit codes that have been developed under ASC. The simulations for this metric were performed on the Cielo Cray XE6 platform during dedicated application time, and the chosen case utilized 131,072 Cielo cores to perform a canonical turbulent open-jet simulation on an approximately 9-billion-element unstructured-hexahedral computational mesh. This report documents some of the results from these simulations and provides instructions for performing these simulations for comparison.
The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today's applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted, and how to interpret and report results.
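As an illustration of the solver structure described above, the following is a minimal Python sketch (not the HPCG reference code; the matrix, tolerance, and problem size are assumed for illustration) of a conjugate gradient iteration preconditioned by one symmetric Gauss-Seidel sweep:

import numpy as np

def sym_gauss_seidel(A, r):
    """Apply one symmetric Gauss-Seidel sweep (forward then backward) as z ~= A^-1 r."""
    n = len(r)
    z = np.zeros(n)
    for i in range(n):                       # forward sweep
        z[i] = (r[i] - A[i, :i] @ z[:i] - A[i, i+1:] @ z[i+1:]) / A[i, i]
    for i in reversed(range(n)):             # backward sweep
        z[i] = (r[i] - A[i, :i] @ z[:i] - A[i, i+1:] @ z[i+1:]) / A[i, i]
    return z

def pcg(A, b, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = sym_gauss_seidel(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = sym_gauss_seidel(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example on a small 1-D Laplacian system (illustrative only):
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(A @ pcg(A, b) - b))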
Density Functional Theory (DFT) has emerged as an indispensable tool in materials research, since it can accurately predict properties of a wide variety of materials at both equilibrium and extreme conditions. However, for organic molecular crystal explosives, application of DFT has largely been unsuccessful due to the inability of current exchange-correlation functionals to correctly describe intermolecular van der Waals (vdW) forces. Despite this, we have discovered that, even with no treatment of vdW bonding, the AM05 functional and DFT-based molecular dynamics (MD) could be used to study the properties of molecular crystals under compression. We have used DFT-MD to predict the unreacted Hugoniots for PETN and HNS and validated the results by comparison with crystalline and porous experimental data. Since we are also interested in applying DFT methods to study the equilibrium-volume properties of explosives, we studied the nature of the vdW bonding in pursuit of creating a new DFT functional capable of accurately describing the equilibrium bonding of molecular crystals. In this report we discuss our results for computing shock Hugoniots of molecular crystals and what was learned about the nature of bonding in these materials.
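For reference (a standard relation stated here for context, not a result from this report), the unreacted Hugoniot states obtained from DFT-MD are the (E, P, V) states satisfying the Rankine-Hugoniot energy jump condition: at each compressed volume V one seeks the temperature T for which

\[
E(V,T) - E_0 = \tfrac{1}{2}\,\bigl[P(V,T) + P_0\bigr]\,(V_0 - V),
\]

where (E_0, P_0, V_0) characterizes the initial, uncompressed crystal; the corresponding shock and particle velocities then follow from the mass and momentum jump conditions.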
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative, so as to bound a specified percentile range of the actual PDF, say the range between the 2.5th and 97.5th percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that is, it should only minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
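To make one of the evaluated representations concrete, the following is a minimal Python sketch of a two-sided normal-theory tolerance interval using Howe's approximate k-factor; this is an assumed, generic variant, not necessarily the exact formulation studied in the report, and the sample values are invented for illustration:

import numpy as np
from scipy import stats

def normal_tolerance_interval(x, p=0.95, gamma=0.90):
    """Interval xbar +/- k*s intended to cover a fraction p of the population with confidence gamma."""
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1.0 + p) / 2.0)            # standard normal quantile
    chi2 = stats.chi2.ppf(1.0 - gamma, nu)         # lower (1 - gamma) chi-square quantile
    k = z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2)   # Howe's approximate k-factor
    xbar, s = np.mean(x), np.std(x, ddof=1)
    return xbar - k * s, xbar + k * s

# Example with a sparse sample of 8 hypothetical measurements:
sample = np.array([9.8, 10.1, 10.4, 9.6, 10.0, 10.3, 9.9, 10.2])
print(normal_tolerance_interval(sample, p=0.95, gamma=0.90))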
This report develops and documents the nonlinear kinematic relations needed to implement piezoelectric constitutive models in ALEGRA-EMMA [5], where calculations involving large displacements and rotations are routine. Kinematic relationships are established using Gauss's law and Faraday's law; this presentation on kinematics goes beyond piezoelectric materials and is applicable to all dielectric materials. The report then turns to practical details of implementing piezoelectric models in an application code where material principal axes are rarely aligned with user-defined problem coordinate axes. This portion of the report is somewhat pedagogical but is necessary to establish documentation for the piezoelectric implementation in ALEGRA-EMMA; it involves transforming the elastic, piezoelectric, and permittivity moduli from material principal axes to problem coordinate axes. The report concludes with an overview of the piezoelectric implementation in ALEGRA-EMMA and small verification examples.
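The coordinate transformation referred to above is a standard tensor rotation; a minimal Python sketch is given below, assuming full-tensor storage for clarity rather than the Voigt notation an application code such as ALEGRA-EMMA would likely use:

import numpy as np

def rotate_moduli(R, c, e, eps):
    """Rotate moduli from material principal axes to problem axes.
    R: 3x3 rotation; c: 3x3x3x3 elastic; e: 3x3x3 piezoelectric; eps: 3x3 permittivity."""
    c_rot = np.einsum('ia,jb,kc,ld,abcd->ijkl', R, R, R, R, c)   # rank-4 elastic moduli
    e_rot = np.einsum('ia,jb,kc,abc->ijk', R, R, R, e)           # rank-3 piezoelectric moduli
    eps_rot = np.einsum('ia,jb,ab->ij', R, R, eps)               # rank-2 permittivity
    return c_rot, e_rot, eps_rot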
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms, and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels, and covariance structures derived from easily observed proxies of human activity. In doing so, we construct a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of the Gaussian assumptions inherent in them. We find that the assumption does not appreciably impact the estimates of mean ffCO2 source strengths, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
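As a sketch of the sparse-reconstruction idea (illustrative only; the operator, parameterization, and regularization weight below are assumptions, not the report's configuration), an l1-regularized least-squares inversion of an underdetermined system y = A w can be solved by iterative soft thresholding, where the columns of A would combine the atmospheric transport operator with a wavelet or kernel parameterization of the emission field:

import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Solve min_w 0.5*||A w - y||^2 + lam*||w||_1 by iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    t = 1.0 / L                            # step size
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ w - y)              # gradient of the smooth term
        w = w - t * g
        w = np.sign(w) * np.maximum(np.abs(w) - t * lam, 0.0)   # soft threshold
    return w

# Illustrative underdetermined problem: 50 observations, 200 unknown coefficients.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
w_true = np.zeros(200)
w_true[[3, 40, 111]] = [2.0, -1.5, 1.0]
y = A @ w_true + 0.01 * rng.standard_normal(50)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.1))   # indices of recovered nonzero coefficients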
Over the last three years the Neurons to Algorithms (N2A) LDRD project team has built infrastructure to discover computational structures in the brain. This consists of a modeling language, a tool that enables model development and simulation in that language, and initial connections with the Neuroinformatics community, a group working toward similar goals. The approach of N2A is to express large, complex systems like the brain as populations of discrete part types that have specific structural relationships with each other, along with internal and structural dynamics. Such an evolving mathematical system may be able to capture the essence of neural processing, and ultimately of thought itself. This final report is a cover for the actual products of the project: the N2A Language Specification, the N2A Application, and a journal paper summarizing our methods.
Density-functional theory calculations, ab initio molecular dynamics, and the Kubo-Greenwood formula are applied to predict the electrical conductivity of Ta2Ox (0 ≤ x ≤ 5) as a function of composition, phase, and temperature, with additional focus given to the various oxidation states of the O monovacancy (VOn; n = 0, 1+, 2+). Our calculations of DC conductivity at 300 K agree well with experimental measurements taken on Ta2Ox thin films and bulk Ta2O5 powder-sintered pellets, although the simulation accuracy can be improved for the most insulating, stoichiometric compositions. Our conductivity calculations and further interrogation of the O-deficient Ta2O5 electronic structure provide additional theoretical basis to substantiate VO0 as a donor dopant in Ta2O5 and other metal oxides. Furthermore, this dopant-like behavior appears specific to the neutral VO case in both Ta2O5 and TiO2 and was not observed in the other oxidation states. This suggests that reduction and oxidation reactions may effectively act as donor activation and deactivation mechanisms, respectively, for VO0 in transition metal oxides.
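For reference, the Kubo-Greenwood expression commonly used in such calculations for the frequency-dependent electrical conductivity (standard form, stated here for context rather than taken from the report) is

\[
\sigma(\omega) = \frac{2\pi e^2 \hbar^2}{3 m_e^2 \omega \Omega} \sum_{\mathbf{k}} w_{\mathbf{k}} \sum_{i,j}
\bigl[ f(\varepsilon_{i,\mathbf{k}}) - f(\varepsilon_{j,\mathbf{k}}) \bigr]
\,\bigl| \langle \psi_{j,\mathbf{k}} | \nabla | \psi_{i,\mathbf{k}} \rangle \bigr|^2
\,\delta(\varepsilon_{j,\mathbf{k}} - \varepsilon_{i,\mathbf{k}} - \hbar\omega),
\]

where \Omega is the cell volume, w_k are the k-point weights, f denotes Fermi-Dirac occupations of the Kohn-Sham states \psi with energies \varepsilon, and the DC conductivity is obtained in the \omega \to 0 limit.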
The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the accurate and transparent communication of computational simulation capability and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.
At sufficiently high energies, the wavelengths of electrons and photons are short enough that they interact with only one atom at a time, leading to the popular "independent-atom approximation." We attempted to incorporate atomic structure into the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms that transform the uncertainty problem into an optimization problem for the failure probability, given uncertainty on the problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of the reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
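To illustrate the reliability-method idea in the simplest possible setting (a generic first-order reliability method sketch with a made-up linear limit state, not the Dakota algorithms or a RELAP-7 response), the failure probability is estimated by minimizing the distance to the limit state g(u) = 0 in standard normal space:

import numpy as np
from scipy import optimize, stats

def g(u):
    """Hypothetical limit state in standard normal space; failure when g(u) <= 0."""
    return 6.0 - u[0] - 2.0 * u[1]

# Find the most probable point (MPP): the closest point to the origin on g(u) = 0.
res = optimize.minimize(lambda u: np.dot(u, u), np.zeros(2), method='SLSQP',
                        constraints={'type': 'eq', 'fun': g})
beta = np.sqrt(res.fun)                  # reliability index = distance to the MPP
print("P_f ~", stats.norm.cdf(-beta))    # FORM estimate; exact here since g is linear

For nonlinear limit states the FORM estimate is a first-order approximation around the MPP, which is why higher-order and sampling-based refinements are also of interest.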
This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods.
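As a toy illustration of the ensemble idea (the taggers, labels, and sentence below are hypothetical and not drawn from the project), per-token predictions from several independently trained named-entity taggers can be combined by majority vote:

from collections import Counter

def ensemble_vote(predictions):
    """predictions: list of per-token label sequences, one per tagger, all the same length."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

# Hypothetical outputs from three taggers for the sentence "Sandia is in Albuquerque":
taggers = [["ORG", "O", "O", "LOC"],
           ["ORG", "O", "O", "O"],
           ["PER", "O", "O", "LOC"]]
print(ensemble_vote(taggers))   # ['ORG', 'O', 'O', 'LOC']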