This Report characterizes the defects in the defect reaction network in silicon-doped, n-type InAs predicted with first-principles density functional theory. The reaction network is deduced by following exothermic defect reactions, starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InAs, until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This Report serves to extend the results for the properties of intrinsic defects in bulk InAs, as collated in SAND2013-2477, Simple intrinsic defects in InAs: Numerical predictions, to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.
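For context, reaction energies in first-principles defect studies are conventionally obtained from defect formation energies; a standard form (our notation, not reproduced from the report, and omitting charge-state terms) is

\[ \Delta E_{\mathrm{rxn}} = E_f(\mathrm{products}) - E_f(\mathrm{reactants}), \qquad E_f(X) = E_{\mathrm{tot}}(X) - E_{\mathrm{tot}}(\mathrm{bulk}) - \sum_i n_i \mu_i , \]

where $n_i$ atoms with chemical potential $\mu_i$ are added ($n_i > 0$) or removed ($n_i < 0$) to create defect $X$; a reaction is exothermic when $\Delta E_{\mathrm{rxn}} < 0$.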
This report summarizes the work performed under the project Next-Generation Algorithms for Assessing Infrastructure Vulnerability and Optimizing System Resilience. The goal of the project was to improve mathematical programming-based optimization technology for infrastructure protection. In general, the owner of a network wishes to design a network that can perform well when certain transportation channels are inhibited (e.g., destroyed) by an adversary. These are typically bi-level problems where the owner designs a system, an adversary optimally attacks it, and then the owner can recover by optimally using the remaining network. This project funded three years of Deon Burchett's graduate research. Deon's graduate advisor, Professor Jean-Philippe Richard, and his Sandia advisors, Richard Chen and Cynthia Phillips, supported Deon on other funds or volunteer time. This report is, therefore, essentially a replication of the Ph.D. dissertation it funded [12] in a format required for project documentation. The thesis includes some general polyhedral research, that is, the study of the structure of the feasible region of mathematical programs such as integer programs. For example, an integer program optimizes a linear objective function subject to linear constraints and (nonlinear) integrality constraints on the variables; the feasible region without the integrality constraints is a convex polyhedron. Careful study of additional valid constraints can significantly improve computational performance. Here is the abstract from the dissertation: We perform a polyhedral study of a multi-commodity generalization of variable upper bound flow models. In particular, we establish some relations between facets of single- and multi-commodity models. We then introduce a new family of inequalities, which generalizes traditional flow cover inequalities to the multi-commodity context. We present encouraging numerical results. We also consider the directed edge-failure resilient network design problem (DRNDP). This problem entails the design of a directed multi-commodity flow network that is capable of fulfilling a specified percentage of demands in the event that any G arcs are destroyed, where G is a constant parameter. We present a formulation of DRNDP and solve it in a branch-column-cut framework. We present computational results.
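For reference, the generic integer program described above can be written (standard notation, not taken from the report) as

\[ \min_{x}\; c^{\mathsf{T}} x \quad \text{s.t.} \quad A x \le b, \; x \in \mathbb{Z}^{n}_{\ge 0}, \]

and dropping the integrality constraint $x \in \mathbb{Z}^{n}_{\ge 0}$ yields the linear-programming relaxation, whose feasible region is the convex polyhedron referred to above; valid inequalities tighten this relaxation toward the integer hull.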
We present new algorithms for a distributed model for graph computations motivated by limited information sharing, which we first discussed in [20]. Two or more independent entities have collected large social graphs. They wish to compute the result of running graph algorithms on the entire set of relationships. Because the information is sensitive or economically valuable, they do not wish to simply combine the information in a single location. We consider two models for computing the solution to graph algorithms in this setting: 1) limited-sharing: the two entities can share only a polylogarithmic-size subgraph; 2) low-trust: the entities must not reveal any information beyond the query answer, assuming they are all honest but curious. We believe this model captures realistic constraints on cooperating autonomous data centers. We give algorithms for s-t connectivity in both models. We also give an algorithm in the low-communication model for finding a planted clique. This is an anomaly-detection problem, finding a subgraph that is larger and denser than expected. For both low-communication algorithms, we exploit structural properties of social networks to prove performance bounds better than what is possible for general graphs. For s-t connectivity, we use known properties. For planted clique, we propose a new property: a bounded number of triangles per node. This property is based upon evidence from the social science literature. We found that classic examples of social networks do not have the bounded-triangles property. This is because many social networks contain elements that are non-human, such as accounts for a business, or other automated accounts. We describe some initial attempts to distinguish human nodes from automated nodes in social networks based only on topological properties.
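As a concrete illustration of checking the bounded-triangles property, here is a minimal sketch using networkx; the example graph and the numeric bound are stand-ins chosen for illustration, not taken from the report.

```python
import networkx as nx

# Count the triangles through each node and flag nodes that exceed a
# hypothetical per-node bound; such outliers are candidates for
# non-human (business/automated) accounts.
G = nx.karate_club_graph()   # stand-in for a collected social graph
BOUND = 20                   # illustrative bound, not from the report
tri = nx.triangles(G)        # dict: node -> triangles incident to it
outliers = [v for v, t in tri.items() if t > BOUND]
print(f"{len(outliers)} nodes exceed {BOUND} triangles per node")
```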
Ross, Robert; Grider, Gary; Felix, Evan; Gary, Mark; Klasky, Scott; Oldfield, Ron A.; Shipman, Galen; Wu, John
Storage systems are a foundational component of computational, experimental, and observational science today. The success of Department of Energy (DOE) activities in these areas is inextricably tied to the usability, performance, and reliability of storage and input/output (I/O) technologies.
"BLIS: A Framework for Rapidly Instantiating BLAS Functionality" includes single-platform BLIS performance results for both level-2 and level-3 operations that is competitive with OpenBLAS, ATLAS, and Intel MKL. A detailed description of the configuration used to generate the performance results was provided to the reviewer by the authors. All the software components used in the comparison were reinstalled and new performance results were generated and compared to the original results. After completing this process, the published results are deemed replicable by the reviewer.
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smooth, and model them using a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure that are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
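The following is a minimal sketch of the kind of StOMP-style iteration described above, with non-negativity enforced by a non-negative least-squares refit and a per-coefficient prior weight; the weighting scheme, threshold rule, and parameter values are illustrative assumptions, not the report's implementation.

```python
import numpy as np
from scipy.optimize import nnls

def stomp_nonneg(A, y, prior_w, n_stages=10, thresh=2.0):
    """Stagewise matching pursuit with prior weights and non-negativity.

    A: (n, p) forward operator; y: (n,) observations;
    prior_w: (p,) hypothetical per-coefficient prior weights.
    """
    n, p = A.shape
    support = np.zeros(p, dtype=bool)
    x = np.zeros(p)
    for _ in range(n_stages):
        r = y - A @ x                          # current residual
        c = prior_w * (A.T @ r)                # prior-weighted correlations
        cut = thresh * np.linalg.norm(r) / np.sqrt(n)
        new = np.abs(c) > cut                  # stagewise threshold
        if not new.any():
            break
        support |= new
        x_s, _ = nnls(A[:, support], y)        # non-negative refit on support
        x = np.zeros(p)
        x[support] = x_s
    return x
```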
Mixtures of light elements with heavy elements are important in inertial confinement fusion. We explore the physics of molecular-scale mixing through a validation study of equation of state (EOS) properties. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to obtain the thermodynamic state properties of pure xenon, ethane, and various compressed mixture compositions along their principal Hugoniots. To validate these simulations, we have performed shock compression experiments using the Sandia Z-Machine. A bond tracking analysis correlates the sharp rise in the Hugoniot curve with the completion of dissociation in ethane. The DFT-based simulation results compare well with the experimental data along the principal Hugoniots and are used to provide insight into the dissociation and temperature along the Hugoniots as a function of mixture composition. Interestingly, we find that the compression ratio for complete dissociation is similar for several compositions, suggesting a limiting compression for C-C bonded systems.
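For context, the locus of shock states referred to above is defined by the standard Rankine-Hugoniot jump conditions (textbook relations, not reproduced from the report):

\[ E - E_0 = \tfrac{1}{2}\,(P + P_0)\,(V_0 - V), \]

relating the internal energy $E$, pressure $P$, and specific volume $V$ behind the shock to the initial state $(E_0, P_0, V_0)$; the principal Hugoniot is the curve of such states reached from the material's ambient initial state.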
The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities, such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. An immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
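To fix ideas, one common form of the nonlocal operator in this literature (our notation; the paper's kernel and scaling may differ) replaces the Laplacian with

\[ \mathcal{L}_\delta u(x) = 2 \int_{B_\delta(x)} \gamma(x, y)\,\bigl(u(y) - u(x)\bigr)\, dy , \]

where $\gamma$ is a symmetric kernel and $\delta$ the interaction radius; the classical surface condition $\partial u / \partial n = 0$ on $\partial\Omega$ is then replaced by a constraint imposed on a volume, a layer of thickness $\delta$ adjoining the boundary, which is the volumetric constraint the paper analyzes.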
We lay the foundation for a benchmarking methodology for assessing current and future quantum computers. We pose and begin addressing fundamental questions about how to fairly compare computational devices at vastly different stages of technological maturity. We critically evaluate and offer our own contributions to current quantum benchmarking efforts, in particular those involving adiabatic quantum computation and the Adiabatic Quantum Optimizers produced by D-Wave Systems, Inc. We find that the performance of D-Wave's Adiabatic Quantum Optimizers scales roughly on par with classical approaches for some hard combinatorial optimization problems; however, architectural limitations of D-Wave devices present a significant hurdle in evaluating real-world applications. In addition to identifying and isolating such limitations, we develop algorithmic tools for circumventing these limitations on future D-Wave devices, assuming they continue to grow and mature at an exponential rate for the next several years.
PANACM 2015 - 1st Pan-American Congress on Computational Mechanics, in conjunction with the 11th Argentine Congress on Computational Mechanics, MECOM 2015
We present a new explicit algorithm for linear elastodynamic problems with material interfaces. The method discretizes the governing equations independently on each material subdomain and then connects them by exchanging forces and masses across the material interface. Variational flux recovery techniques provide the force and mass approximations. The new algorithm has attractive computational properties. It allows different discretizations on each material subdomain and enables partitioned solution of the discretized equations. The method passes a linear patch test and recovers the solution of a monolithic discretization of the governing equations when interface grids match.
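A minimal one-dimensional sketch of the flux-exchange idea is given below, assuming two linear finite element bars that share a single interface node: each subdomain assembles its own lumped mass and internal force at the shared node, and the exchange simply sums the two contributions, which for matching grids reproduces the monolithic discretization. All parameter values are illustrative.

```python
import numpy as np

E, rho = 1.0, 1.0                # illustrative modulus and density
n = 50                           # elements per subdomain
h = 0.5 / n                      # element size; each subdomain spans [0, 0.5]
dt = 0.5 * h * np.sqrt(rho / E)  # CFL-limited explicit time step

def subdomain():
    """Lumped mass vector and stiffness matrix of one linear-FE bar."""
    m = rho * h * np.ones(n + 1)
    m[0] *= 0.5; m[-1] *= 0.5
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e + 2, e:e + 2] += (E / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return m, K

mL, KL = subdomain()
mR, KR = subdomain()
x = np.linspace(0.0, 0.5, n + 1)
uL = 0.01 * np.exp(-((x - 0.25) / 0.05) ** 2)   # pulse in left subdomain
uR = np.zeros(n + 1)
vL = np.zeros(n + 1); vR = np.zeros(n + 1)

for step in range(200):
    fL, fR = -KL @ uL, -KR @ uR        # internal forces per subdomain
    # Interface exchange: sum the force and mass contributions that the
    # two subdomains assemble independently at their shared node.
    a_if = (fL[-1] + fR[0]) / (mL[-1] + mR[0])
    aL, aR = fL / mL, fR / mR
    aL[-1] = aR[0] = a_if              # common interface acceleration
    aL[0] = aR[-1] = 0.0               # fixed outer ends
    vL += dt * aL; uL += dt * vL
    vR += dt * aR; uR += dt * vR
    uR[0] = uL[-1]                     # matching grids: shared displacement
```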
We present a new optimization-based, conservative, and quasi-monotone method for passive tracer transport. The scheme combines high-order spectral element discretization in space with semi-Lagrangian time stepping. Solution of a singly linearly constrained quadratic program with simple bounds enforces conservation and physically motivated solution bounds. The scheme can efficiently handle a large number of passive tracers because the semi-Lagrangian time stepping only needs to evolve the grid points where the primitive variables are stored and allows for larger time steps than a conventional explicit spectral element method. Numerical examples show that the use of optimization to enforce physical properties does not significantly affect the spectral accuracy for smooth solutions. Performance studies reveal the benefits of high-order approximations, including for discontinuous solutions.
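A sketch of the bound-and-conservation step is below: a singly linearly constrained QP with simple bounds reduces to a one-dimensional root find for the constraint multiplier. The weights, bounds, and tolerance here are illustrative, and the target mass is assumed attainable within the bounds; this is a generic QP projection in the spirit of the method, not the report's algorithm.

```python
import numpy as np

def conservative_clip(y, w, m, lo, hi, tol=1e-12):
    """Project y onto {lo <= x <= hi, w @ x = m}, minimizing ||x - y||^2.

    For positive weights w the optimum is x(lam) = clip(y + lam*w, lo, hi),
    with the scalar multiplier lam found by bisection; feasibility
    (w @ lo <= m <= w @ hi) is assumed.
    """
    def mass(lam):
        return w @ np.clip(y + lam * w, lo, hi)
    a, b = -1.0, 1.0
    while mass(a) > m:      # bracket the multiplier from below
        a *= 2.0
    while mass(b) < m:      # bracket the multiplier from above
        b *= 2.0
    while b - a > tol:
        mid = 0.5 * (a + b)
        if mass(mid) < m:
            a = mid
        else:
            b = mid
    return np.clip(y + 0.5 * (a + b) * w, lo, hi)

# Usage: enforce total tracer mass 2.0 and bounds [0, 1] on cell values.
y = np.array([0.2, 1.3, -0.1, 0.8])   # unconstrained update (illustrative)
x = conservative_clip(y, w=np.ones(4), m=2.0, lo=0.0, hi=1.0)
```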