Exploring Emerging Manycore Architectures for Uncertainty Quantification Through Embedded Stochastic Galerkin Methods
Proposed for publication in International Journal of Computer Mathematics.
Abstract not provided.
Abstract not provided.
Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
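To make the described structure concrete, the following is a minimal sketch of a geospatial-temporal semantic graph supporting a combined spatial and temporal query. The schema (node fields, class names, query signature) is an assumption for illustration; the report does not specify an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A graph vertex for one extracted feature (hypothetical schema)."""
    node_id: int
    label: str        # e.g. "building", "road"
    lon: float
    lat: float
    t_start: float    # observation interval, e.g. epoch seconds
    t_end: float

@dataclass
class SpatioTemporalGraph:
    """Minimal geospatial-temporal semantic graph with a combined query."""
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: dict = field(default_factory=dict)   # node_id -> set of neighbor ids

    def add_node(self, node):
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, set())

    def add_edge(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def query(self, bbox, t_window, label=None):
        """Return nodes inside a lon/lat box AND a time window (and label, if given)."""
        (lon0, lat0, lon1, lat1), (t0, t1) = bbox, t_window
        return [n for n in self.nodes.values()
                if lon0 <= n.lon <= lon1 and lat0 <= n.lat <= lat1
                and not (n.t_end < t0 or n.t_start > t1)
                and (label is None or n.label == label)]
```

A production version would presumably replace the linear scan with a combined spatial/temporal index; the linear scan here only illustrates the simultaneous filtering on location and time.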
Journal of Applied Physics
We present a method that computes many-electron energies and eigenfunctions by full configuration interaction using a basis of atomistic tight-binding wave functions. This approach captures electron correlation as well as atomistic effects, and is well suited to solid-state quantum dot systems containing few electrons, where valley physics and disorder contribute significantly to device behavior. Results are reported for a two-electron silicon double quantum dot as an example. © 2012 American Institute of Physics.
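For reference, the standard full configuration interaction eigenvalue problem, written here in a basis of Slater determinants built from single-particle tight-binding orbitals, takes the following form (the notation is ours, not taken from the paper):

```latex
\begin{align}
  |\Psi\rangle &= \sum_{I} c_I\, |\Phi_I\rangle,
  \qquad
  |\Phi_I\rangle = \hat{a}^{\dagger}_{i_1}\cdots\hat{a}^{\dagger}_{i_N}\,|0\rangle, \\
  \sum_{J} \langle \Phi_I | \hat{H} | \Phi_J \rangle\, c_J &= E\, c_I ,
\end{align}
% The creation operators populate single-particle tight-binding orbitals;
% diagonalizing H in this determinant basis yields correlated N-electron
% energies and eigenfunctions.
```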
11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, PSAM11 ESREL 2012
Many of the Performance Shaping Factors (PSFs) used in Human Reliability Analysis (HRA) methods are not directly measurable or observable. Methods like SPAR-H require the analyst to assign values for all of the PSFs, regardless of the PSF observability; this introduces subjectivity into the human error probability (HEP) calculation. One method to reduce the subjectivity of HRA estimates is to formally incorporate information about the probability of the PSFs into the methodology for calculating the HEP. This can be accomplished by encoding prior information in a Bayesian Network (BN) and updating the network using available observations. We translated an existing HRA methodology, SPAR-H, into a Bayesian Network to demonstrate the usefulness of the BN framework. We focus on the ability to incorporate prior information about PSF probabilities into the HRA process. This paper discusses how we produced the model by combining information from two sources, and how the BN model can be used to estimate HEPs despite missing observations. Use of the prior information allows HRA analysts to use partial information to estimate HEPs, and to rely on the prior information (from data or cognitive literature) when they are unable to gather information about the state of a particular PSF. The SPAR-H BN model is a starting point for future research activities to create a more robust HRA BN model using data from multiple sources.
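The sketch below illustrates the kind of calculation described: a SPAR-H-style HEP (nominal HEP times PSF multipliers) in which unobserved PSFs are marginalized over their priors while observed PSFs are fixed. The three PSFs, their multipliers, and their priors are illustrative placeholders only (SPAR-H defines eight PSFs); none of these numbers come from the paper.

```python
from itertools import product

# Hypothetical SPAR-H-style model: HEP = NHEP * product of PSF multipliers.
NHEP = 1e-3  # SPAR-H nominal HEP for action tasks

# state -> (multiplier, prior probability); values are placeholders.
psf_states = {
    "stress":     {"nominal": (1.0, 0.7), "high": (2.0, 0.2), "extreme": (5.0, 0.1)},
    "complexity": {"nominal": (1.0, 0.6), "moderate": (2.0, 0.3), "high": (5.0, 0.1)},
    "experience": {"high": (0.5, 0.3), "nominal": (1.0, 0.6), "low": (3.0, 0.1)},
}

def expected_hep(evidence):
    """Expected HEP given partial evidence; unobserved PSFs use their priors."""
    names = list(psf_states)
    total = 0.0
    for combo in product(*(psf_states[n].items() for n in names)):
        prob, mult, consistent = 1.0, 1.0, True
        for name, (state, (m, p)) in zip(names, combo):
            if name in evidence:
                if evidence[name] != state:
                    consistent = False
                    break
            else:
                prob *= p          # unobserved: weight by prior
            mult *= m
        if consistent:
            total += prob * NHEP * mult
    return total

# Only stress is observed; complexity and experience fall back on their priors.
print(expected_hep({"stress": "high"}))
```

This mirrors the benefit described in the abstract: the analyst supplies whatever PSF observations are available, and the prior distributions fill in the rest.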
Abstract not provided.
Abstract not provided.
Proposed for publication in Journal of Physical Oceanography.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in SIGMETRICS Performance Evaluation Review.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings - 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, SCC 2012
The predictions for exascale computing are dire. Although we have benefited from a consistent supercomputer architecture design, even across manufacturers, for well over a decade, recent trends indicate that future high-performance computers will have different hardware structure and programming models to which software must adapt. This paper provides an informal discussion on the ways in which changes in high-performance computing architecture will profoundly affect the scalability of our current generation of scientific visualization and analysis codes and how we must adapt our applications, workflows, and attitudes to continue our success at exascale computing. © 2012 IEEE.
Proceedings - 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, SCC 2012
The push to exascale computing is informed by the assumption that the architecture, regardless of the specific design, will be fundamentally different from that of petascale computers. The Mantevo project has been established to produce a set of proxies, or 'miniapps,' which enable rapid exploration of key performance issues that impact a broad set of scientific application programs of interest to ASC and the broader HPC community. The conditions under which a miniapp can confidently be used as a predictor of an application's behavior must be clearly elucidated. Toward this end, we have developed a methodology for assessing the predictive capabilities of application proxies. Adhering to the spirit of experimental validation, our approach provides a framework for comparing data from the application with that provided by its proxies. In this poster we present the methodology and apply it to three miniapps developed by the Mantevo project. © 2012 IEEE.
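The abstract does not specify the comparison metric, so the following is only a hypothetical illustration of how application and miniapp performance data might be compared; the relative-difference measure and the runtime numbers are assumptions, not the Mantevo methodology itself.

```python
# Compare application and miniapp strong-scaling runtimes at matching core counts.
def relative_discrepancy(app_times, proxy_times):
    """Mean relative difference between app and proxy runtimes (same scales)."""
    assert len(app_times) == len(proxy_times)
    return sum(abs(a - p) / a for a, p in zip(app_times, proxy_times)) / len(app_times)

# Runtimes (seconds) at 64, 128, 256, 512 cores -- made-up numbers for illustration.
app   = [120.0, 63.0, 34.0, 20.0]
proxy = [115.0, 60.0, 35.0, 23.0]

print(f"mean relative discrepancy: {relative_discrepancy(app, proxy):.2%}")
# A small discrepancy suggests the proxy tracks the application's scaling behavior.
```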
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.