Trilinos/MueLu: Node-level Parallelism
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Chemical Physics
Classical density functional theory (DFT) is used to calculate the structure of the electrical double layer and the differential capacitance of model molten salts. The DFT is shown to give good qualitative agreement with Monte Carlo simulations in the molten salt regime. The DFT is then applied to three common molten salts, KCl, LiCl, and LiKCl, modeled as charged hard spheres near a planar charged surface. The DFT predicts strong layering of the ions near the surface, with the oscillatory density profiles extending to larger distances for larger electrostatic interactions resulting from either lower temperature or lower dielectric constant. Overall, the differential capacitance is found to be bell-shaped, in agreement with recent theories and simulations for ionic liquids and molten salts, but contrary to the results of the classical Gouy-Chapman theory.
Proceedings of the National Academy of Sciences of the United States of America
The qubit is the fundamental building block of a quantum computer. We fabricate a qubit in a silicon double-quantum dot with an integrated micromagnet in which the qubit basis states are the singlet state and the spin-zero triplet state of two electrons. Because of the micromagnet, the magnetic field difference ΔB between the two sides of the double dot is large enough to enable coherent rotation of the qubit’s Bloch vector around two different axes of the Bloch sphere. By measuring the decay of the quantum oscillations, the inhomogeneous spin coherence time T2* is determined. Lastly, by measuring T2* at many different values of the exchange coupling J and at two different values of ΔB, we provide evidence that the micromagnet does not limit decoherence, with the dominant limits on T2* arising from charge noise and from coupling to nuclear spins.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
CMAME
Abstract not provided.
The purpose of this report is to document a basic installation of the Anasazi eigensolver package and to provide a brief discussion of the numerical solution of some graph eigenvalue problems.
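For orientation, a minimal sketch of the Anasazi workflow on a graph eigenvalue problem follows. It assumes a Trilinos build with Anasazi and the Epetra adapters enabled; the path-graph Laplacian is an illustrative stand-in for a problem of interest, and the solver options shown are not tuned recommendations.

    // Minimal sketch: computing a few eigenpairs of a graph Laplacian with
    // Anasazi. Assumes a Trilinos build with Anasazi and the Epetra
    // adapters; the path-graph Laplacian is an illustrative stand-in.
    #include <iostream>
    #include <vector>
    #include "Epetra_SerialComm.h"
    #include "Epetra_Map.h"
    #include "Epetra_CrsMatrix.h"
    #include "Epetra_MultiVector.h"
    #include "AnasaziBasicEigenproblem.hpp"
    #include "AnasaziBlockKrylovSchurSolMgr.hpp"
    #include "AnasaziEpetraAdapter.hpp"
    #include "Teuchos_RCP.hpp"
    #include "Teuchos_ParameterList.hpp"

    int main() {
      using MV = Epetra_MultiVector;
      using OP = Epetra_Operator;

      // Laplacian of a path graph on n vertices: vertex degree on the
      // diagonal, -1 for each edge.
      const int n = 100;
      Epetra_SerialComm comm;
      Epetra_Map map(n, 0, comm);
      Teuchos::RCP<Epetra_CrsMatrix> A =
          Teuchos::rcp(new Epetra_CrsMatrix(Copy, map, 3));
      for (int i = 0; i < n; ++i) {
        std::vector<int> cols;
        std::vector<double> vals;
        if (i > 0) { cols.push_back(i - 1); vals.push_back(-1.0); }
        cols.push_back(i);
        vals.push_back((i == 0 || i == n - 1) ? 1.0 : 2.0);
        if (i < n - 1) { cols.push_back(i + 1); vals.push_back(-1.0); }
        A->InsertGlobalValues(i, (int)cols.size(), vals.data(), cols.data());
      }
      A->FillComplete();

      // Eigenproblem A x = lambda x with a random initial block.
      const int nev = 4, blockSize = 4;
      Teuchos::RCP<MV> ivec = Teuchos::rcp(new MV(map, blockSize));
      ivec->Random();
      auto problem = Teuchos::rcp(
          new Anasazi::BasicEigenproblem<double, MV, OP>(A, ivec));
      problem->setHermitian(true);
      problem->setNEV(nev);
      problem->setProblem();

      // Block Krylov-Schur solver; "SM" targets the smallest-magnitude
      // eigenvalues (slow without a spectral transform, but illustrative).
      Teuchos::ParameterList params;
      params.set("Which", "SM");
      params.set("Block Size", blockSize);
      Anasazi::BlockKrylovSchurSolMgr<double, MV, OP> solMgr(problem, params);
      if (solMgr.solve() != Anasazi::Converged) return 1;

      const Anasazi::Eigensolution<double, MV>& sol = problem->getSolution();
      std::cout << "computed " << sol.numVecs << " eigenpairs\n";
      return 0;
    }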
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
arXiv posting
Abstract not provided.
Abstract not provided.
Abstract not provided.
The Water Security Toolkit (WST) is a suite of open source software tools that can be used by water utilities to create response strategies to reduce the impact of contamination in a water distribution network. WST includes hydraulic and water quality modeling software, optimization methodologies, and visualization tools to identify: (1) sensor locations to detect contamination, (2) locations in the network in which the contamination was introduced, (3) hydrants to remove contaminated water from the distribution system, (4) locations in the network to inject decontamination agents to inactivate, remove, or destroy contaminants, (5) locations in the network to take grab samples to help identify the source of contamination, and (6) valves to close in order to isolate contaminated areas of the network. This user manual describes the different components of WST, along with examples and case studies.
Abstract not provided.
Strain, a journal in experimental mechanics
Abstract not provided.
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower-level software layers are beginning to emerge in some production systems, which is a welcome development. A portable interface to these measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
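To make the idea concrete, the following hypothetical C++ sketch shows the shape such a portable interface might take; every name here (PowerCounter, read_watts, and so on) is invented for illustration and is not the API proposed in the report.

    // Hypothetical sketch of a portable power-measurement interface.
    // All names are invented for illustration; they are not the API
    // proposed in the report.

    // An abstract counter lets each layer (facility, node, component)
    // expose the same read interface regardless of the underlying hardware.
    class PowerCounter {
    public:
      virtual ~PowerCounter() = default;
      virtual double read_watts() const = 0;          // instantaneous power
      virtual double read_joules() const = 0;         // cumulative energy
      virtual bool   set_cap_watts(double watts) = 0; // control, if supported
    };

    // Example consumer: measure the energy used by a region of code.
    // Works against any PowerCounter implementation.
    template <typename F>
    double measure_energy(const PowerCounter& c, F&& region) {
      const double before = c.read_joules();
      region();
      return c.read_joules() - before;
    }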
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
CMAME journal
Abstract not provided.
SIAM Journal on Scientific Computing
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The version of STK (Sierra ToolKit) that has long been provided with Trilinos is no longer supported by the core development team. With the introduction of the new STK library into Trilinos, the old STK has been renamed to stk_classic. This document contains a rough guide to porting a stk_classic code to the new STK.
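As a flavor of the mechanical portion of such a port (an illustrative fragment only; real ports also involve API differences covered in the guide):

    // Illustrative only: the namespace rename, shown as comments because
    // the surrounding setup code varies from application to application.
    //
    // Before (legacy library):
    //   stk_classic::mesh::BulkData& bulk = ...;
    //   stk_classic::mesh::Entity* node = ...;
    //
    // After (new STK):
    //   stk::mesh::BulkData& bulk = ...;
    //   stk::mesh::Entity node = ...;  // Entity is now a lightweight handle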
Journal of Parallel and Distributed Computing
The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos’ abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
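As a taste of the programming model (a minimal sketch assuming a standard Kokkos installation; the axpy kernel is illustrative, not drawn from the paper), both the parallel pattern and the data layout are expressed through Kokkos abstractions, so the same source compiles to layouts appropriate for CPUs or GPUs:

    // Minimal Kokkos sketch: a data-parallel axpy kernel. The same source
    // maps to host threads or a GPU, and Kokkos::View selects a memory
    // layout appropriate to the execution space.
    #include <Kokkos_Core.hpp>

    int main(int argc, char* argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int n = 1 << 20;
        Kokkos::View<double*> x("x", n), y("y", n);

        // Fill the vectors in parallel.
        Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
          x(i) = 1.0;
          y(i) = 2.0;
        });

        // y = a*x + y; indexing through the View hides the layout.
        const double a = 0.5;
        Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
          y(i) += a * x(i);
        });

        // Parallel reduction to check the result: each entry is now 2.5.
        double sum = 0.0;
        Kokkos::parallel_reduce("sum", n,
            KOKKOS_LAMBDA(const int i, double& lsum) { lsum += y(i); }, sum);
      }
      Kokkos::finalize();
      return 0;
    }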
The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia National Laboratories for focused collaborative research on computer science, computational science, and mathematics problems that are critical to the mission of the laboratories, the Department of Energy, and the United States. The CSRI provides a mechanism by which university researchers learn about and impact national- and global-scale problems while simultaneously bringing new ideas from the academic research community to bear on these important problems. A key component of CSRI programs over the last decade has been an active and productive summer program where students from around the country conduct internships at CSRI. Each student is paired with a Sandia staff member who serves as technical advisor and mentor. The goals of the summer program are to expose the students to research in mathematical and computer sciences at Sandia and to conduct a meaningful and impactful summer research project with their Sandia mentor. Every effort is made to align summer projects with the student's research objectives and all work is coordinated with the ongoing research activities of the Sandia mentor in alignment with Sandia technical thrusts. For the 2013 CSRI Proceedings, research articles have been organized into the following broad technical focus areas: Computational Mathematics and Algorithms, Combinatorial Algorithms and Visualization, Advanced Architectures and Systems Software, and Computational Applications, which are well aligned with Sandia's strategic thrusts in computer and information sciences.
Physical Review. A
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review B
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The 15th International Detonation Symposium
Abstract not provided.
Abstract not provided.
15th International Detonation Symposium
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
While it made sense for historical reasons to develop different interfaces and protocols for memory channels, CPU to CPU interactions, and I/O devices, ongoing developments in the computer industry are leading to more converged requirements and physical implementations for these interconnects. As it becomes increasingly common for advanced components to contain a variety of computational devices as well as memory, the distinction between processors, memory, accelerators, and I/O devices becomes increasingly blurred. As a result, the interface requirements among such components are converging. There is also a wide range of new disruptive technologies that will impact the computer market in the coming years, including 3D integration and emerging NVRAM memory. Optimal exploitation of these technologies cannot be done with the existing memory, storage, and I/O interface standards. The computer industry has historically made major advances when industry players have been able to add innovation behind a standard interface. The standard interface provides a large market for their products and enables relatively quick and widespread adoption. To enable a new wave of innovation in the form of advanced memory products and accelerators, we need a new standard interface explicitly designed to provide both the performance and flexibility to support new system integration solutions.
Parallel Processing Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
While it made sense for historical reasons to develop different interfaces and protocols for memory channels, CPU to CPU interactions, and I/O devices, ongoing developments in the computer industry are leading to more converged requirements and physical implementations for these interconnects. As it becomes increasingly common for advanced components to contain a variety of computational devices as well as memory, the distinction between processors, memory, accelerators, and I/O devices becomes increasingly blurred. As a result, the interface requirements among such components are converging. There is also a wide range of new disruptive technologies that will impact the computer market in the coming years, including 3D integration and emerging NVRAM memory. Optimal exploitation of these technologies cannot be done with the existing memory, storage, and I/O interface standards. The computer industry has historically made major advances when industry players have been able to add innovation behind a standard interface. The standard interface provides a large market for their products and enables relatively quick and widespread adoption. To enable a new wave of innovation in the form of advanced memory products and accelerators, we need a new standard interface explicitly designed to provide both the performance and flexibility to support new system integration solutions.
Abstract not provided.
http://www.arXiv.org and Physical Review A
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Density Functional Theory (DFT) based Equation of State (EOS) construction is a prominent part of Sandia’s capabilities to support engineering sciences. This capability is based on augmenting experimental data with information gained from computational investigations, especially in those parts of the phase space where experimental data is hard, dangerous, or expensive to obtain. A key part of the success of the Sandia approach is the fundamental science work supporting the computational capability. Not only does this work enhance the capability to perform highly accurate calculations but it also provides crucial insight into the limitations of the computational tools, providing high confidence in the results even where results cannot be, or have not yet been, validated by experimental data. This report concerns the key ingredient of projector augmented-wave (PAW) potentials for use in pseudo-potential computational codes. Using the tools discussed in SAND2012-7389 we assess the standard Vienna Ab-initio Simulation Package (VASP) PAWs for Molybdenum.
Abstract not provided.
Abstract not provided.
This report documents the development, demonstration, and validation of a mesoscale, microstructural evolution model for simulating the precipitation of zirconium hydride (δ-ZrH1.5) in the cladding of used nuclear fuels, which may occur during long-term dry storage. While the Zr-based claddings are manufactured free of any hydrogen, they absorb hydrogen during service in the reactor by a process commonly termed ‘hydrogen pick-up’. The precipitation and growth of zirconium hydrides during dry storage is one of the most likely fuel rod integrity failure mechanisms, either by embrittlement or by delayed hydride cracking of the cladding. While the phenomenon is well documented and identified as a potential key failure mechanism during long-term dry storage (NUREG/CR-7116), the ability to actually predict the formation of hydrides is poor. The model documented in this work is a computational capability for predicting hydride formation in different claddings of used nuclear fuels. This work supports the Used Fuel Disposition Research and Development Campaign in assessing the structural engineering performance of the cladding during and after long-term dry storage. This document demonstrates a basic hydride precipitation model built on a recently developed hybrid Potts-phase field model that combines elements of Potts-Monte Carlo and phase-field models. The model's capabilities are demonstrated along with the incorporation of the starting microstructure, the thermodynamics of the Zr-H system, and the hydride formation mechanism.
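For readers unfamiliar with the Potts ingredient of the hybrid model, the following toy sketch (not the report's implementation, which adds phase-field physics) shows a single Metropolis sweep of a 2D Potts model, the Monte Carlo building block on which such microstructural evolution models rest:

    // Toy 2D Potts-model Metropolis sweep: the Monte Carlo ingredient of
    // Potts-based microstructural evolution models. Illustrative only.
    #include <cmath>
    #include <random>
    #include <vector>

    struct PottsGrid {
      int L;                 // lattice edge length
      int q;                 // number of spin states (grain orientations)
      std::vector<int> s;    // spins, row-major L*L
      int at(int i, int j) const {  // periodic boundaries
        return s[((i + L) % L) * L + (j + L) % L];
      }
    };

    // Energy of site (i,j): one unit for each unlike nearest neighbor.
    static int site_energy(const PottsGrid& g, int i, int j, int spin) {
      return (g.at(i - 1, j) != spin) + (g.at(i + 1, j) != spin) +
             (g.at(i, j - 1) != spin) + (g.at(i, j + 1) != spin);
    }

    // One Metropolis sweep at temperature T (k_B = 1).
    void sweep(PottsGrid& g, double T, std::mt19937& rng) {
      std::uniform_int_distribution<int> site(0, g.L - 1), state(0, g.q - 1);
      std::uniform_real_distribution<double> u(0.0, 1.0);
      for (int n = 0; n < g.L * g.L; ++n) {
        const int i = site(rng), j = site(rng);
        const int oldSpin = g.s[i * g.L + j], newSpin = state(rng);
        const int dE = site_energy(g, i, j, newSpin) -
                       site_energy(g, i, j, oldSpin);
        if (dE <= 0 || u(rng) < std::exp(-dE / T))
          g.s[i * g.L + j] = newSpin;  // accept the flip
      }
    }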
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Computational Physics
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Chemical Physics
Several extensions and improvements have been made to the multilevel summation method (MSM) of computing long-range electrostatic interactions. These include pressure calculation, an improved error estimator, faster direct part calculation, extension to non-orthogonal (triclinic) systems, and parallelization using the domain decomposition method. MSM also allows fully non-periodic long-range electrostatics calculations, which are not possible using traditional Ewald-based methods. In spite of these significant improvements to the MSM algorithm, the particle-particle particle-mesh (PPPM) method was still found to be faster for the periodic systems we tested on a single processor. However, the fast Fourier transforms (FFTs) that PPPM relies on represent a major scaling bottleneck for the method when running on many cores (because the many-to-many communication pattern of the FFT becomes expensive), and MSM scales better than PPPM when using a large core count for two test problems on Sandia's Redsky machine. This FFT bottleneck can be reduced by running PPPM on only a subset of the total processors. MSM is most competitive for relatively low accuracy calculations. On Sandia's Chama machine, however, PPPM is found to scale better than MSM for all core counts that we tested. These results suggest that PPPM is usually more efficient than MSM for typical problems running on current high performance computers. However, further improvements to the MSM algorithm could increase its competitiveness for the calculation of long-range electrostatic interactions.
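For readers unfamiliar with MSM, the method rests on a telescoping split of the Coulomb kernel; schematically (following common presentations of the method, with γ a smooth softened kernel, a the splitting distance, and L levels):

    \frac{1}{r} = \left(\frac{1}{r} - \frac{\gamma(r/a)}{a}\right)
                + \sum_{l=1}^{L-1}\left(\frac{\gamma\!\left(r/(2^{l-1}a)\right)}{2^{l-1}a}
                                      - \frac{\gamma\!\left(r/(2^{l}a)\right)}{2^{l}a}\right)
                + \frac{\gamma\!\left(r/(2^{L-1}a)\right)}{2^{L-1}a}

The first term is short-ranged and summed directly; each successive term is smoother and is interpolated on a coarser grid, so no FFT is needed, which is the source of the scaling behavior discussed above.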
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Recovery from process loss during the execution of a distributed memory parallel application is presently achieved by restarting the program, typically from a checkpoint file. Future computer system trends indicate that the size of data to checkpoint, the lack of improvement in parallel file system performance, and the increase in process failure rates will lead to situations where checkpoint restart becomes infeasible. In this report we describe and prototype the use of a new application-level resilient computing model, local-failure local-recovery (LFLR), that manages persistent storage of local state for each process such that, if a process fails, recovery can be performed locally without requiring access to a global checkpoint file. LFLR provides application developers with the ability to recover locally and continue application execution when a process is lost. This report discusses what features are required from the hardware, OS, and runtime layers, and what approaches application developers might use in the design of future codes, including a demonstration of LFLR-enabled MiniFE code from the Mantevo mini-application suite.
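A hypothetical sketch of the application-level pattern follows (all names invented; this is not the report's LFLR interface, and error handling is elided): each process persists its own state, and a replacement process restores that state locally instead of triggering a global restart.

    // Hypothetical sketch of the local-failure local-recovery pattern.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Persistent local store for one process's state (NVRAM or a buddy
    // process in a real setting; a file stands in here).
    struct LocalStore {
      std::FILE* f;
      LocalStore(const char* path, bool recovering)
          : f(std::fopen(path, recovering ? "rb+" : "wb+")) {}
      ~LocalStore() { if (f) std::fclose(f); }

      // Persist this process's state; flush so it survives our failure.
      void save(const std::vector<double>& state) {
        std::rewind(f);
        std::fwrite(state.data(), sizeof(double), state.size(), f);
        std::fflush(f);
      }
      // Restore state locally; no global checkpoint file is touched.
      void restore(std::vector<double>& state) {
        std::rewind(f);
        std::fread(state.data(), sizeof(double), state.size(), f);
      }
    };

    // Each process runs this loop; a replacement process started after a
    // failure passes recovering = true and resumes from its local state.
    void run(bool recovering, std::size_t n, const char* path) {
      std::vector<double> state(n, 0.0);
      LocalStore store(path, recovering);
      if (recovering) store.restore(state);
      for (int step = 0; step < 100; ++step) {
        for (double& v : state) v += 1.0;  // stand-in for real computation
        store.save(state);
      }
    }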
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE International Ultrasonics Symposium Proceedings
Abstract not provided.
Abstract not provided.
International Journal of Solids and Structures
A Mindlin continuum model that incorporates both a dependence upon the microstructure and inelastic (nonlinear) behavior is used to study dispersive effects in elasto-plastic microstructured materials. A one-dimensional equation of motion of such material systems is derived based on a combination of the Mindlin microcontinuum model and a hardening model both at the macroscopic and microscopic level. The dispersion relation of propagating waves is established and compared to the classical linear elastic and gradient-dependent solutions. It is shown that the observed wave dispersion is the result of introducing microstructural effects and material inelasticity. The introduction of an internal characteristic length scale regularizes the ill-posedness of the set of partial differential equations governing the wave propagation. The phase speed does not necessarily become imaginary at the onset of plastic softening, as is the case in classical continuum models, and the dispersive character of such models constrains strain softening regions to localize.
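As a representative illustration of this regularization mechanism (a generic gradient-regularized model, not the paper's exact Mindlin relation), adding an internal length scale ℓ to a one-dimensional softening model changes the dispersion relation from ω² = c²k² to

    \omega^2 = c^2 k^2 + H\,\ell^2 k^4,
    \qquad
    v_p(k) = \frac{\omega}{k} = \sqrt{c^2 + H\,\ell^2 k^2},

with H > 0 a gradient modulus (per unit density). When softening drives c² < 0, waves with k > \sqrt{-c^2/(H\ell^2)} retain a real phase speed, and the associated cutoff wavelength sets a finite width for the strain localization zone.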
Exascale data environments are fast approaching, driven by diverse structured and unstructured data such as system and application telemetry streams, open-source information capture, and on-demand simulation output. Storage costs having plummeted, the question is now one of converting vast stores of data to actionable information. Complicating this problem are the low degrees of awareness across domain boundaries about what potentially useful data may exist, and write-once-read-never issues (data generation/collection rates outpacing data analysis and integration rates). Increasingly, technologists and researchers need to correlate previously unrelated data sources and artifacts to produce fused data views for domain-specific purposes. New tools and approaches for creating such views from vast amounts of data are vitally important to maintaining research and operational momentum. We propose to research and develop tools and services to assist in the creation, refinement, discovery and reuse of fused data views over large, diverse collections of heterogeneously structured data. We innovate in the following ways. First, we enable and encourage end-users to introduce customized index methods selected for local benefit rather than for global interaction (flexible multi-indexing). We envision rich combinations of such views on application data: views that span backing stores with different semantics, that introduce analytic methods of indexing, and that define multiple views on individual data items. We specifically decline to build a big fused database of everything providing a centralized index over all data, or to export a rigid schema to all comers as in federated query approaches. Second, we proactively advertise these application-specific views so that they may be programmatically reused and extended (data proactivity). Through this mechanism, both changes in state (new data in existing view collected) and changes in structure (new or derived view exists) are made known. Lastly, we embrace found data heterogeneity by coupling multi-indexing to backing stores with appropriate semantics (as opposed to a single store or schema).
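A hypothetical interface sketch of the view-registry idea follows; the names (View, ViewRegistry, advertise) are invented for illustration and are not a design from the project.

    // Hypothetical sketch of the "advertised fused view" idea. A view
    // couples a user-chosen index method to one or more backing stores
    // and registers itself by name so others can discover and extend it.
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct Record { std::string source; std::string payload; };

    // A fused view: a named, queryable index over heterogeneous stores.
    class View {
    public:
      virtual ~View() = default;
      virtual std::vector<Record> query(const std::string& key) const = 0;
    };

    // Registry advertising available views, so other users and programs
    // can discover and reuse them ("data proactivity").
    class ViewRegistry {
      std::map<std::string, std::shared_ptr<View>> views_;
    public:
      void advertise(const std::string& name, std::shared_ptr<View> v) {
        views_[name] = std::move(v);
      }
      std::shared_ptr<View> find(const std::string& name) const {
        auto it = views_.find(name);
        return it == views_.end() ? nullptr : it->second;
      }
    };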
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Geoscientific Model Development
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Sandia journal manuscript; not yet accepted for publication
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
This report describes work performed from June 2012 through May 2014 as part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control, and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems, including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the “Lyapunov inner product”. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed “ROM stabilization via optimization-based eigenvalue reassignment”, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM. Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of “lessons learned” and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.
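The Lyapunov inner product mentioned above can be summarized in generic notation (a standard construction; the report's details may differ). For a stable LTI system \dot{x} = Ax, solving the Lyapunov equation

    A^{T}P + PA = -Q, \qquad Q \succ 0,

yields P \succ 0 with \frac{d}{dt}\,x^{T}Px = -x^{T}Qx \le 0. A Galerkin ROM built in the weighted inner product \langle u, v \rangle_P = u^{T}Pv with basis \Phi,

    \dot{x}_r = (\Phi^{T}P\Phi)^{-1}\,\Phi^{T}PA\Phi\,x_r,

inherits the same energy decay, since \frac{d}{dt}\,x_r^{T}(\Phi^{T}P\Phi)\,x_r = -\,x_r^{T}\Phi^{T}Q\Phi\,x_r \le 0, and is therefore stable.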
Reliability Engineering and System Safety
In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
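As a concrete illustration of the kind of update involved (a textbook conjugate-prior example; the paper's actual model may differ), suppose an HRA method's HEP is given a Beta prior and the simulator data record k errors in n crew trials:

    p \sim \mathrm{Beta}(\alpha, \beta)
    \;\Longrightarrow\;
    p \mid \text{data} \sim \mathrm{Beta}(\alpha + k,\; \beta + n - k),
    \qquad
    \mathbb{E}[p \mid \text{data}] = \frac{\alpha + k}{\alpha + \beta + n}.

Even a handful of simulator trials thus shifts the nominal HEP toward the observed error rate, while the prior keeps the estimate stable when data are sparse.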
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Geophysical Research
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.