We present an efficient self-consistent implementation of the Non-Equilibrium Green's Function (NEGF) formalism, based on the Contact Block Reduction method for numerical efficiency and on a predictor-corrector approach, combined with the Anderson mixing scheme, for the self-consistent solution of the Poisson and Schrödinger equations. We then apply this quantum transport framework to investigate 2D horizontal Si:P δ-layer tunnel junctions. We find that the potential barrier height varies with the tunnel gap width and the applied bias, and that the sign of a single charge impurity in the tunnel gap plays an important role in the electrical current.
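Anderson mixing accelerates fixed-point iterations such as the Poisson-Schrödinger self-consistency loop by combining the last few iterates through a small least-squares problem. The following NumPy sketch is an illustration of generic Anderson acceleration under assumed names and parameters, not the implementation described in the abstract:

```python
import numpy as np

def anderson_mix(g, x0, m=5, beta=0.5, tol=1e-10, maxit=200):
    """Anderson-accelerated fixed-point iteration for x = g(x).

    m    -- history depth (number of stored iterate/residual differences)
    beta -- damping parameter of the underlying simple mixing
    """
    x = np.asarray(x0, dtype=float)
    dx_hist, df_hist = [], []          # differences of iterates / residuals
    x_prev = f_prev = None
    for _ in range(maxit):
        f = g(x) - x                   # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            break
        if f_prev is not None:
            dx_hist.append(x - x_prev)
            df_hist.append(f - f_prev)
            if len(dx_hist) > m:       # keep only the last m differences
                dx_hist.pop(0)
                df_hist.pop(0)
        x_prev, f_prev = x, f
        if dx_hist:
            dX = np.column_stack(dx_hist)
            dF = np.column_stack(df_hist)
            # least-squares coefficients for the Anderson update
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + beta * f - (dX + beta * dF) @ gamma
        else:
            x = x + beta * f           # first step: plain damped mixing
    return x
```

In a self-consistent device simulation, `g` would map a potential to the potential produced by the charge density it induces; here it can be any contractive map.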
Since the classical molecular dynamics simulator LAMMPS was released as an open source code in 2004, it has become a widely used tool for particle-based modeling of materials at length scales ranging from atomic to mesoscale to continuum. Reasons for its popularity are that it provides a wide variety of particle interaction models for different materials, that it runs on any platform from a single CPU core to the largest supercomputers with accelerators, and that it gives users control over simulation details, either via the input script or by adding code for new interatomic potentials, constraints, diagnostics, or other features needed for their models. As a result, hundreds of people have contributed new capabilities to LAMMPS, and it has grown from fifty thousand lines of code in 2004 to a million lines today. In this paper several of the fundamental algorithms used in LAMMPS are described, along with the design strategies that have made it flexible for both users and developers. We also highlight some capabilities recently added to the code which were enabled by this flexibility, including dynamic load balancing, on-the-fly visualization, magnetic spin dynamics models, and quantum-accuracy machine learning interatomic potentials.
Ultradoping introduces unprecedented dopant levels into Si, which transforms its electronic behavior and enables its use as a next-generation electronic material. Commercialization of ultradoping is currently limited by gas-phase ultra-high-vacuum requirements. Solvothermal chemistry is amenable to scale-up. However, an integral part of ultradoping is a direct chemical bond between dopants and Si, and solvothermal dopant-Si surface reactions are not well developed. This work provides the first quantified demonstration of achieving ultradoping concentrations of boron (∼10¹⁴ cm⁻²) using a solvothermal process. Surface characterizations indicate that the catalyst cross-reacted, which led to multiple surface products and caused ambiguity in experimental confirmation of direct surface attachment. Density functional theory computations elucidate that the reaction results in direct B−Si surface bonds. This proof-of-principle work lays the groundwork for emerging solvothermal ultradoping processes.
The coupling of inter- and intramolecular vibrations plays a critical role in initiating chemistry during the shock-to-detonation transition in energetic materials. Herein, we report on the subpicosecond to subnanosecond vibrational energy transfer (VET) dynamics of the solid energetic material 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), using broadband, ultrafast infrared transient absorption spectroscopy. Experiments reveal VET occurring on three distinct time scales: subpicosecond, 5 ps, and 200 ps. The ultrafast appearance of signal at all probed modes in the mid-infrared suggests strong anharmonic coupling of all vibrations in the solid, whereas the long-lived evolution demonstrates that VET is incomplete, and thus thermal equilibrium is not attained, even on the 100 ps time scale. Density functional theory and classical molecular dynamics simulations provide valuable insights into the experimental observations, revealing compression-insensitive time scales for the initial VET dynamics of high-frequency vibrations and drastically extended relaxation times for low-frequency phonon modes under lattice compression. Mode selectivity of the longest dynamics suggests coupling of the N-N and axial NO2 stretching modes with the long-lived, excited phonon bath.
We propose a domain decomposition method for the efficient simulation of nonlocal problems. Our approach is based on a multi-domain formulation of a nonlocal diffusion problem in which the subdomains share “nonlocal” interfaces of the size of the nonlocal horizon. This system of nonlocal equations is first rewritten in terms of the minimization of a nonlocal energy, then discretized with a meshfree approximation, and finally solved via a Lagrange multiplier approach in a way that resembles the finite element tearing and interconnecting (FETI) method. Specifically, we propose a distributed projected gradient algorithm for the solution of the Lagrange multiplier system, whose unknowns determine the nonlocal interface conditions between subdomains. Several two-dimensional numerical tests on problems with as many as 191 million unknowns illustrate the strong and weak scalability of our algorithm, which outperforms the standard approach to the distributed numerical solution of the problem. Finally, this work is the first rigorous numerical study in a two-dimensional multi-domain setting for nonlocal operators with finite horizon and, as such, is a fundamental step towards increasing the use of nonlocal models in large-scale simulations.
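A projected gradient iteration of the kind mentioned above can be illustrated on a generic bound-constrained quadratic program: take a gradient step on the dual energy, then project back onto the feasible set. This minimal serial sketch uses assumed names and a nonnegativity constraint for illustration; it is not the distributed algorithm of the paper:

```python
import numpy as np

def projected_gradient(F, d, lam0, step, maxit=500):
    """Minimize 0.5*lam^T F lam - d^T lam subject to lam >= 0
    by gradient steps followed by projection onto the feasible set."""
    lam = np.maximum(lam0, 0.0)                    # project the initial guess
    for _ in range(maxit):
        grad = F @ lam - d                         # gradient of the quadratic
        lam = np.maximum(lam - step * grad, 0.0)   # project back onto lam >= 0
    return lam
```

The step size must satisfy `step < 2/L`, where `L` is the largest eigenvalue of `F`, for the iteration to converge; in a distributed setting, the product `F @ lam` is what gets parallelized across subdomains.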
Computational simulation is increasingly relied upon for high-consequence engineering decisions, and a foundational element of solid mechanics simulations is a credible material model. Our ultimate vision is to interlace material characterization and model calibration in a real-time feedback loop, where the current model calibration results drive the experiment toward load regimes that add the most useful information for reducing parameter uncertainty. The current work investigated one key step in this Interlaced Characterization and Calibration (ICC) paradigm: using a finite load-path tree to incorporate the history/path dependency of nonlinear material models into a network of surrogate models that replace computationally expensive finite-element analyses. Our reference simulation was an elastoplastic material point subject to biaxial deformation with a Hill anisotropic yield criterion. Training data were generated using either a space-filling or an adaptive sampling method, and surrogates were built using either Gaussian process or polynomial chaos expansion methods. Surrogate error was evaluated to be on the order of 10⁻⁵ and 10⁻³ percent for the space-filling and adaptive sampling training data, respectively. Direct Bayesian inference was performed with the surrogate network and with the reference material point simulator, and results agreed to within 3 significant figures for the mean parameter values, with a reduction in computational cost of over 5 orders of magnitude. These results bought down risk regarding the surrogate network and facilitated a successful FY22-24 full LDRD proposal to research and develop the complete ICC paradigm.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (DOE 2012, Table 6; Sevougian et al. 2019). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media.
Thin, high-density layers of dopants in semiconductors, known as δ-layer systems, have recently attracted attention as a platform for exploring future quantum and classical computing when patterned in plane with atomic precision. However, many aspects of the conductive properties of these systems are still unknown. Here we present an open-system quantum transport treatment to investigate the local density of electron states and the conductive properties of δ-layer systems. A successful application of this treatment to phosphorus δ-layers in silicon both explains the origin of recently observed shallow sub-bands and reproduces the sheet resistance values measured by different experimental groups. Further analysis reveals two main quantum-mechanical effects: 1) the existence of spatially distinct layers of free electrons with different average energies; and 2) a significant dependence of sheet resistance on the δ-layer thickness for a fixed sheet charge density.
Swiler, Laura P.; Becker, Dirk-Alexander; Brooks, Dusty M.; Govaerts, Joan; Koskinen, Lasse; Plischke, Elmar; Röhlig, Klaus-Jürgen; Saveleva, Elena; Spiessl, Sabine M.; Stein, Emily S.; Svitelman, Valentina
Over the past four years, an informal working group has developed to investigate existing sensitivity analysis methods, examine new methods, and identify best practices. The focus is on the use of sensitivity analysis in case studies involving geologic disposal of spent nuclear fuel or nuclear waste. To examine ideas and have applicable test cases for comparison purposes, we have developed multiple case studies. Four of these case studies are presented in this report: the GRS clay case, the SNL shale case, the Dessel case, and the IBRAE groundwater case. We present the different sensitivity analysis methods investigated by various groups, the results obtained by different groups and different implementations, and summarize our findings.
Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia's Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data during run time, as it is generated, with negligible overhead. This data is combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or post run. This new capability was evaluated using several week-long, 290-node runs of Sandia's ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool and resulted in 1 TB of application data and 50 TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase the total scientific output of Sandia's HPC users.
The propagation of a wave pulse due to low-speed impact on a one-dimensional, heterogeneous bar is studied. Due to the dispersive character of the medium, the pulse attenuates as it propagates. This attenuation is studied over propagation distances that are much longer than the size of the microstructure. A homogenized peridynamic material model can be calibrated to reproduce the attenuation and spreading of the wave. The calibration consists of matching the dispersion curve for the heterogeneous material near the limit of long wavelengths. It is demonstrated that the peridynamic method reproduces the attenuation of wave pulses predicted by an exact microstructural model over large propagation distances.
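For a 1D linear bond-based peridynamic bar with micromodulus C(ξ), horizon δ, and density ρ, the dispersion relation is ω(k)² = (2/ρ)∫₀^δ C(ξ)(1 − cos kξ) dξ, and calibration matches this curve to that of the heterogeneous medium near the long-wavelength limit. The following sketch evaluates this relation numerically for an assumed constant micromodulus; the function name and parameters are illustrative, not the paper's model:

```python
import numpy as np

def pd_dispersion(k, C, delta, rho, n=2000):
    """Angular frequency omega(k) for a 1D bond-based peridynamic bar:
    omega^2 = (2/rho) * integral_0^delta C(xi) * (1 - cos(k*xi)) dxi,
    evaluated with the trapezoidal rule on n points."""
    xi = np.linspace(0.0, delta, n)
    integrand = C(xi) * (1.0 - np.cos(k * xi))
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xi))
    return np.sqrt(2.0 / rho * integral)

# Constant micromodulus: in the long-wavelength limit (k -> 0) the phase
# velocity omega/k approaches sqrt(c0 * delta**3 / (3 * rho)).
c0, delta, rho = 1.0, 1.0, 1.0
k = 1e-3
v = pd_dispersion(k, lambda xi: c0 + 0.0 * xi, delta, rho) / k
```

Because ω/k is not constant in k, the model is dispersive, which is what allows it to be calibrated to reproduce attenuation and spreading of a pulse.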
We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and to each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
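The idea of testing a nested sequence of models against data and keeping the best fit can be shown with a toy example: nested binomial models for a coin, scored by maximized log-likelihood with an AIC complexity penalty. The model set, the data, and the use of AIC here are illustrative assumptions only; the paper's protocol concerns quantum processor models and wildcard error:

```python
import math

def binom_loglik(k, n, p):
    """Log-likelihood of k heads in n flips for head-probability p
    (binomial coefficient omitted: it is common to all models)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def select_model(k, n):
    """Score a nested sequence of coin models by AIC and keep the best."""
    models = [
        ("fair",   0, lambda: 0.5),      # no free parameters
        ("biased", 1, lambda: k / n),    # one fitted parameter (MLE)
    ]
    best = None
    for name, n_params, fit in models:
        p = fit()
        aic = 2 * n_params - 2 * binom_loglik(k, n, p)
        if best is None or aic < best[1]:
            best = (name, aic)
    return best[0]
```

For strongly biased data the extra parameter pays for itself; for near-fair data the penalty favors the simpler model. Any residual misfit of the winning model plays the role that wildcard error plays in the paper's characterization.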
This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.
Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and in operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network and what the quality of random number generation should be. This research aimed to start addressing these two fundamental questions by examining how basic “off-the-shelf” neural networks can be sampled through a few different mechanisms (including synapse “dropout” and neuron “dropout”) and how these sampling approaches can be evaluated in terms of both algorithm effectiveness and the required quality of random numbers.
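One common way to sample a trained network is Monte Carlo neuron dropout: randomly zero hidden units at inference time and treat the spread of the sampled outputs as uncertainty. A minimal NumPy sketch follows; the two-layer network, weights, and dropout rate are illustrative assumptions, not the networks studied in this work:

```python
import numpy as np

def mc_dropout_predict(W1, W2, x, p_drop, n_samples, rng):
    """Sample a two-layer ReLU network with neuron dropout at inference.
    Kept hidden units are rescaled by 1/(1 - p_drop); returns the
    mean and standard deviation of the sampled outputs."""
    outs = []
    for _ in range(n_samples):
        h = np.maximum(W1 @ x, 0.0)                # hidden ReLU layer
        if p_drop > 0.0:
            mask = rng.random(h.shape) >= p_drop   # keep with prob 1 - p_drop
            h = h * mask / (1.0 - p_drop)
        outs.append(W2 @ h)
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)
```

Synapse dropout is the analogous scheme with the mask applied elementwise to the weight matrices instead of the hidden activations. The quality of `rng` is exactly the random-number question raised above: a biased or correlated generator distorts the sampled output distribution.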
With the rapid proliferation of additive manufacturing and 3D printing technologies, architected cellular solids, including truss-like 3D lattice topologies, offer the opportunity to program the effective material response through topological design at the mesoscale. The present report summarizes several of the key findings from a 3-year Laboratory Directed Research and Development program. The program set out to explore novel lattice topologies that can be designed to control, redirect, or dissipate energy from one or multiple insult environments relevant to Sandia missions, including crush, shock/impact, vibration, thermal, etc. In the first 4 sections, we document four novel lattice topologies stemming from this study: coulombic lattices, multi-morphology lattices, interpenetrating lattices, and pore-modified gyroid cellular solids, each with unique properties that had not been achieved by existing cellular/lattice metamaterials. The fifth section explores how unintentional lattice imperfections stemming from the manufacturing process, primarily surface roughness in the case of laser powder bed fusion, cause stochastic response, but also how in some cases, such as elastic response, the stochastic behavior is homogenized through the adoption of lattices. In the sixth section we explore a novel neural network screening process that allows such stochastic variability to be predicted. In the last three sections, we explore considerations for the computational design of lattices. Specifically, in section 7 we use a novel generative optimization scheme to design Pareto-optimal lattices for multi-objective environments. In section 8, we use computational design to optimize a metallic lattice structure to absorb impact energy for a 1000 ft/s impact. And in section 9, we develop a modified micromorphic continuum model to solve wave propagation problems in lattices efficiently.