Newton–Krylov solvers for ocean tracers have the potential to greatly decrease the computational cost of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually averaged, monthly averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low-age bias from 12% for the annually averaged transport matrices to 4% for the monthly averaged transport matrices and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the northern Indian Ocean. For many applications, the relatively small bias of the offline model makes the offline approach attractive because it requires significantly fewer computational resources and is simpler to set up and run.
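A minimal, toy-sized sketch of the fixed-point idea (assumptions: a hypothetical grid, randomly generated stand-in transport operators, and SciPy's Jacobian-free Newton-Krylov solver rather than the specific solver used in the study): the equilibrium ideal-age field is the tracer state that one year of propagation maps onto itself.

    import numpy as np
    from scipy.optimize import newton_krylov

    rng = np.random.default_rng(0)
    n = 50                              # toy number of grid cells (hypothetical)
    surface = np.arange(5)              # hypothetical surface cells where age resets
    dt = 1.0 / 12.0                     # one "monthly" step, in years

    # Twelve stand-in monthly transport operators (mildly mixing, roughly contractive).
    transport = []
    for _ in range(12):
        A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        A /= np.abs(A).sum(axis=1, keepdims=True)
        transport.append(A)

    def propagate_one_year(x):
        """Advance an ideal-age tracer through one annual cycle of transport steps."""
        for A in transport:
            x = A @ x + dt              # age increases by the step length
            x[surface] = 0.0            # ideal age is reset to zero at the surface
        return x

    def residual(x):
        """Periodicity condition: the state after one year equals the starting state."""
        return propagate_one_year(x) - x

    # Jacobian-free Newton-Krylov finds the annually periodic (equilibrium) state
    # directly, instead of integrating forward for thousands of model years.
    equilibrium_age = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
    print(equilibrium_age[:5])

In the actual offline approach, the operators would be the annually averaged, monthly averaged, or 5-day-averaged transport matrices exported from the online model rather than the random placeholders above.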
Remote sensing systems have firmly established their role in providing immense value to commercial industry, scientific exploration, and national security. Continued maturation of sensing technology has reduced the cost of deploying highly capable sensors while at the same time increasing reliance on the information these sensors can provide. The demand for time on these sensors is unlikely to diminish. Coordinating next-generation sensor systems, larger constellations of satellites, unmanned aerial vehicles, ground telescopes, and other assets is prohibitively complex for existing heuristics-based scheduling techniques. The project was a two-year collaboration spanning multiple Sandia centers and included a partnership with Texas A&M University. We have developed algorithms and software for collection scheduling, remote-sensor field-of-view pointing models, and bandwidth-constrained prioritization of sensor data. Our approach followed best practices from the operations research and computational geometry communities, and it offers several advantages over state-of-the-art techniques. In particular, it is more flexible than heuristics that tightly couple models and solution techniques. First, our mixed-integer linear models afford a rigorous analysis, so sensor planners can quantify how a schedule compares with the best possible. Second, optimal or near-optimal schedules can be produced with commercial solvers within operational run-times. Third, the models can be modified and extended to incorporate different scheduling and resource constraints and objective function definitions. Further, we have extended these models to proactively schedule sensors under weather and ad hoc collection uncertainty, in contrast to existing deterministic schedulers, which assume a single future weather or ad hoc collection scenario. The field-of-view pointing algorithm produces a mosaic with the fewest images required to fully cover a region of interest, and the bandwidth-constrained algorithms find the highest-priority information that can be transmitted. All of these models are based on mixed-integer linear programs, so that in the future collection scheduling, field-of-view pointing, and bandwidth prioritization can be combined into a single problem. Experiments conducted using the developed models, commercial solvers, and benchmark datasets have demonstrated that proactively scheduling against uncertainty regularly and significantly outperforms deterministic scheduling.
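To make the flavour of these formulations concrete, the sketch below builds a deliberately tiny collection-scheduling MILP in Pyomo; the request names, priorities, visibility data, and single-sensor capacity rule are hypothetical illustrations, far simpler than the project's models, and solving it requires a MILP solver (e.g., glpk or cbc) on the path.

    import pyomo.environ as pyo

    # Hypothetical data: three collection requests, three time slots.
    requests = ["r1", "r2", "r3"]
    slots = [1, 2, 3]
    priority = {"r1": 5.0, "r2": 3.0, "r3": 4.0}
    # visible[r, t] = 1 if request r can be imaged during slot t.
    visible = {(r, t): 1 for r in requests for t in slots}
    visible[("r3", 1)] = 0

    m = pyo.ConcreteModel()
    m.x = pyo.Var(requests, slots, domain=pyo.Binary)   # x[r,t]=1: image r in slot t

    # Each request is collected at most once, and only when visible.
    m.once = pyo.Constraint(requests, rule=lambda m, r: sum(m.x[r, t] for t in slots) <= 1)
    m.vis = pyo.Constraint(requests, slots, rule=lambda m, r, t: m.x[r, t] <= visible[r, t])
    # The (single) sensor can service at most one request per slot.
    m.cap = pyo.Constraint(slots, rule=lambda m, t: sum(m.x[r, t] for r in requests) <= 1)

    # Maximize the total priority of collected requests.
    m.obj = pyo.Objective(
        expr=sum(priority[r] * m.x[r, t] for r in requests for t in slots),
        sense=pyo.maximize,
    )

    pyo.SolverFactory("glpk").solve(m)   # any installed MILP solver works here
    print([(r, t) for r in requests for t in slots if pyo.value(m.x[r, t]) > 0.5])

Because the model is declarative, swapping the objective, adding weather scenarios, or coupling in field-of-view and bandwidth constraints amounts to adding variables and constraints rather than rewriting a heuristic.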
We assess how geospatial-temporal semantic graphs and our GeoGraphy code implementation might contribute to induced seismicity analysis. We focus on evaluating the strengths and weaknesses of both 1) the fundamental concept of semantic graphs and 2) our current code implementation. With extensions and research effort, the code implementation's limitations can be overcome. The paper also describes the relevance of the approach, including possible data input types, expected analytical outcomes, and how it can pair with other approaches and fit into a workflow.
Engineering decisions are often formulated as optimization problems such as the optimal design or control of physical systems. In these applications, the resulting optimization problems are constrained by large-scale simulations involving systems of partial differential equations (PDEs), ordinary differential equations (ODEs), and differential-algebraic equations (DAEs). In addition, critical components of these systems are fraught with uncertainty, including unverifiable modeling assumptions, unknown boundary and initial conditions, and uncertain coefficients. Typically, these components are estimated using noisy and incomplete data from a variety of sources (e.g., physical experiments). The lack of knowledge of the true underlying probabilistic characterization of model inputs motivates the need for optimal solutions that are robust to this uncertainty. In this report, we introduce a framework for handling "distributional" uncertainties in the context of simulation-based optimization. This includes a novel measure discretization technique that will lead to an adaptive optimization algorithm tailored to exploit the structures inherent to simulation-based optimization.
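In generic terms (a standard risk-averse formulation sketch, not necessarily the exact notation or measure discretization developed in this report), simulation-based optimization under uncertainty can be written as

\[
\min_{z \in Z_{\mathrm{ad}}} \ \mathcal{R}\big[\, J(u(\xi), z) \,\big]
\quad \text{subject to} \quad c\big(u(\xi), z; \xi\big) = 0 \ \text{for almost every } \xi ,
\]

where $z$ is the design or control variable, $\xi$ collects the uncertain inputs (coefficients, boundary and initial conditions), $u(\xi)$ solves the governing PDE/ODE/DAE system $c = 0$ for each realization, and $\mathcal{R}$ is a risk measure, for example an expectation, a conditional value-at-risk, or a worst case over an ambiguity set of candidate distributions when the distribution itself is not known; the discretization of the underlying probability measure is what the adaptive algorithm mentioned above targets.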
Parametric sensitivities of dynamic system responses are very useful in a variety of applications, including circuit optimization and uncertainty quantification. Sensitivity calculation methods fall into two related categories: direct and adjoint methods. Effective implementation of such methods in a production circuit simulator poses a number of technical challenges, including instrumentation of device models. This report documents several years of work developing and implementing direct and adjoint sensitivity methods in the Xyce circuit simulator. Much of this work was sponsored by the Laboratory Directed Research and Development (LDRD) Program at Sandia National Laboratories, under project LDRD 14-0788.
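For context, a generic statement of the two approaches (standard textbook form, up to sign and initial/terminal-condition conventions; not necessarily Xyce's exact formulation): for a circuit DAE with residual $F(\dot{x}, x, p, t) = 0$, the direct method propagates the sensitivity $s = \partial x / \partial p$ forward in time via

\[
\frac{\partial F}{\partial \dot{x}}\,\dot{s} + \frac{\partial F}{\partial x}\,s + \frac{\partial F}{\partial p} = 0 ,
\]

one such linearized system per parameter, whereas the adjoint method targets a scalar output $G(p) = \int_0^T g(x, p, t)\,dt$ and integrates a single multiplier system backward in time,

\[
\frac{d}{dt}\!\left[\Big(\frac{\partial F}{\partial \dot{x}}\Big)^{\!\top} \lambda\right] - \Big(\frac{\partial F}{\partial x}\Big)^{\!\top} \lambda = -\Big(\frac{\partial g}{\partial x}\Big)^{\!\top} ,
\]

after which $dG/dp$ follows from integrals involving $\lambda^{\top}\,\partial F/\partial p$ and $\partial g/\partial p$. The direct method scales with the number of parameters and the adjoint method with the number of outputs, which is why both are useful in practice.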
This report summarizes the methods and algorithms that were developed on the Sandia National Laboratories LDRD project entitled "Advanced Uncertainty Quantification Methods for Circuit Simulation" (project #173331, proposal #2016-0845). As much of our work has been published in other reports and publications, this report gives a brief summary. Those who are interested in the technical details are encouraged to read the full published results and to contact the report authors for the status of follow-on projects.
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
The Next Generation Global Atmosphere Model LDRD project developed a suite of atmosphere models: a shallow water model, an x-z hydrostatic model, and a 3D hydrostatic model, using Albany, a finite element code. Albany provides access to a large suite of leading-edge Sandia high-performance computing technologies enabled by Trilinos, Dakota, and Sierra. The next-generation capabilities most relevant to a global atmosphere model are performance portability and embedded uncertainty quantification (UQ). Performance portability is the capability of a single code base to run efficiently on a diverse set of advanced computing architectures, such as multi-core threading or GPUs. Embedded UQ refers to simulation algorithms that have been modified to aid in quantifying uncertainties; in our case, this means running multiple ensemble samples concurrently and reaping the resulting performance benefits. We demonstrate the effectiveness of these approaches here as a prelude to introducing them into ACME.
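A toy illustration of the embedded-ensemble idea (hypothetical diffusion update and sample count, not the Albany/ACME implementation): carrying the ensemble as a trailing array dimension lets every sample move through the same time loop at once, so each kernel evaluation is shared across samples.

    import numpy as np

    n_cells, n_samples, n_steps, dt = 1000, 32, 200, 0.01
    rng = np.random.default_rng(0)

    # One uncertain parameter per ensemble member (hypothetical diffusivity).
    diffusivity = rng.uniform(0.5, 1.5, size=n_samples)
    # State array: grid cells along axis 0, ensemble members along axis 1.
    u = np.tile(np.sin(np.linspace(0, np.pi, n_cells))[:, None], (1, n_samples))

    for _ in range(n_steps):
        # One finite-difference update applied to all ensemble members at once;
        # the ensemble index is simply the trailing array dimension.
        lap = np.roll(u, 1, axis=0) - 2.0 * u + np.roll(u, -1, axis=0)
        u += dt * diffusivity * lap

    mean_field = u.mean(axis=1)      # ensemble statistics fall out directly
    std_field = u.std(axis=1)
    print(mean_field.max(), std_field.max())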
This report describes a new capability for hierarchical task-data parallelism using Sandia's Kokkos and Qthreads, and an evaluation of this capability with sparse matrix Cholesky factorization and social network triangle enumeration mini-applications. Hierarchical task-data parallelism consists of a collection of tasks with executes-after dependences, where each task contains data-parallel operations performed on a team of hardware threads. The collection of tasks and dependences forms a directed acyclic graph of tasks (a task DAG). Major challenges of this research and development effort include portability and performance across multicore CPU, manycore Intel Xeon Phi, and NVIDIA GPU architectures; scalability with respect to hardware concurrency and the size of the task DAG; and usability of the application programming interface (API).
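A language-level analogue of the concept (a Python sketch, not the Kokkos/Qthreads C++ API evaluated in the report): an outer pool runs tasks whose executes-after dependences form a small DAG, and each task body is itself a data-parallel reduction over an inner thread team.

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    task_pool = ThreadPoolExecutor(max_workers=2)   # outer level: concurrent tasks
    team_pool = ThreadPoolExecutor(max_workers=4)   # inner level: the thread "team"

    def data_parallel_sum(array):
        """Inner data-parallel level: a reduction split across the thread team."""
        chunks = np.array_split(array, 4)
        return sum(team_pool.map(np.sum, chunks))

    # Task DAG:   a ──┐
    #                 ├──> c    (c has executes-after dependences on a and b)
    #             b ──┘
    data = np.arange(1_000_000, dtype=float)
    fa = task_pool.submit(data_parallel_sum, data[: len(data) // 2])
    fb = task_pool.submit(data_parallel_sum, data[len(data) // 2 :])

    # Task c may only complete once its predecessors a and b have finished.
    fc = task_pool.submit(lambda: fa.result() + fb.result())
    print(fc.result(), data.sum())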
The Rayleigh-Taylor instability (RTI) is investigated using the direct simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, fully resolved two-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode perturbed interfaces between two atmospheric-pressure monatomic gases as a function of the Atwood number and the gravitational acceleration. The DSMC simulations reproduce many qualitative features of the growth of the mixing layer and are in reasonable quantitative agreement with theoretical and empirical models in the linear, nonlinear, and self-similar regimes. In some of the simulations at late times, the instability enters the self-similar regime, in agreement with experimental observations. For the conditions simulated, diffusion can influence the initial instability growth significantly.
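For reference (standard definitions rather than results specific to these simulations), the interfaces are characterized by the Atwood number and, in the linear regime, by the classical inviscid growth rate,

\[
A = \frac{\rho_h - \rho_l}{\rho_h + \rho_l}, \qquad \sigma = \sqrt{A\,g\,k},
\]

where $\rho_h$ and $\rho_l$ are the heavy- and light-gas densities, $g$ is the gravitational acceleration, and $k$ is the wavenumber of the single-mode perturbation; molecular-scale diffusion of the kind DSMC captures naturally tends to smear the interface and reduce early-time growth below this ideal rate.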
Enhancement-mode Si/SiGe electron quantum dots have been pursued extensively by many groups for their potential in quantum computing. Most of the reported dot designs use multiple metal-gate layers and Si/SiGe heterostructures with a Ge concentration close to 30%. Here, we report the fabrication and low-temperature characterization of quantum dots in Si/Si0.8Ge0.2 heterostructures using only one metal-gate layer. We find that the threshold voltage of a channel narrower than 1 μm increases as the width decreases. The higher threshold can be attributed to the combination of quantum confinement and disorder. We also find that the lower Ge ratio used here leads to a narrower operational gate-bias range. The higher threshold combined with the limited gate-bias range constrains the design of lithographic quantum dots. We incorporate these considerations in our device design and demonstrate a quantum dot that can be tuned from a single dot to a double dot. The device uses only a single metal-gate layer, greatly simplifying device design and fabrication.
In both continuum hydrodynamics simulations and multimillion-atom reactive molecular dynamics simulations of shockwave propagation in single-crystal pentaerythritol tetranitrate (PETN) containing a cylindrical void, we observed the formation of an initial radially symmetric hot spot. By extending the simulation time to the nanosecond scale, however, we observed the transformation of the small symmetric hot spot into a longitudinally asymmetric hot region extending over a much larger volume. Performing reactive molecular dynamics shock simulations using the reactive force field (ReaxFF) as implemented in the LAMMPS molecular dynamics package, we showed that the longitudinally asymmetric hot region was formed by coalescence of the primary radially symmetric hot spot with a secondary triangular hot zone. We showed that the triangular hot zone coincided with a double-shocked region where the primary planar shockwave was overtaken by a secondary cylindrical shockwave, which originated from void collapse after the primary planar shockwave had passed over the void. A similar phenomenon was observed in continuum hydrodynamics shock simulations using the CTH hydrodynamics package. The formation and growth of extended asymmetric hot regions on nanosecond timescales have important implications for shock initiation thresholds in energetic materials.
Low-mobility twin grain boundaries dominate the microstructure of grain-boundary-engineered materials and are critical to understanding their plastic deformation behaviour. The presence of solutes, such as hydrogen, has a profound effect on the thermodynamic stability of the grain boundaries. This work examines the case of a Σ3 grain boundary at inclination angles 0° ≤ Φ ≤ 90°, where Φ corresponds to the rotation of the Σ3 (1 1 1) < 1 1 0 > (coherent) into the Σ3 (1 1 2) < 1 1 0 > (lateral) twin boundary. To this end, atomistic models of inclined grain boundaries, utilising empirical potentials, are used to elucidate the finite-temperature boundary structure, while grand canonical Monte Carlo models are applied to determine the degree of hydrogen segregation. The structural unit description of inclined twin grain boundaries provides insight into the observed dependence of excess enthalpy and excess hydrogen concentration on inclination angle, but its explanatory power is limited by the dependence of the segregation enthalpy on hydrogen concentration. At higher concentrations, the grain boundaries undergo a defaceting transition. Towards a more complete mesoscale model of the interfacial behaviour, an analytical model of boundary energy and hydrogen segregation is constructed that treats the boundary as arrays of discrete 1/3 < 1 1 1 > disconnections. Lastly, the complex interaction of boundary reconstruction and concentration-dependent segregation behaviour exhibited by inclined twin grain boundaries limits the range of applicability of such an analytical model and illustrates the fundamental limitations of a structural unit description of segregation in low-stacking-fault-energy materials.
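As a point of reference for the segregation behaviour (the standard Langmuir-McLean isotherm, not the analytical disconnection-based model constructed in this work), a dilute single-site model predicts a boundary concentration

\[
\frac{c_{\mathrm{gb}}}{1 - c_{\mathrm{gb}}} = \frac{c_{\mathrm{bulk}}}{1 - c_{\mathrm{bulk}}}\,\exp\!\left(-\frac{\Delta H_{\mathrm{seg}}}{k_B T}\right),
\]

with a fixed segregation enthalpy $\Delta H_{\mathrm{seg}}$; the concentration dependence of the segregation enthalpy noted above is one reason such single-site descriptions, and the structural unit picture built on them, lose applicability at higher hydrogen concentrations.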