Publications

Results 6876–6900 of 9,998

First Application of Geospatial Semantic Graphs to SAR Image Data (LDRD Final Report)

Mclendon, William; Brost, Randolph

Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
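
The following sketch illustrates the kind of combined spatial and temporal query such a graph structure supports. It is a toy example, not the report's implementation: the node attributes, the add_feature and find_related helpers, and the networkx representation are all assumptions made for illustration.

```python
# Hypothetical sketch of a geospatial-temporal semantic graph, assuming
# features extracted from imagery have already been discretized into
# nodes carrying a type, a location, and a valid-time interval.
import networkx as nx

def add_feature(g, fid, ftype, lon, lat, t_start, t_end):
    """Add one extracted feature as a graph node with its attributes."""
    g.add_node(fid, ftype=ftype, lon=lon, lat=lat,
               t_start=t_start, t_end=t_end)

def find_related(g, ftype_a, relation, ftype_b, t_query):
    """Return pairs (a, b) whose edge carries `relation` and whose
    valid-time intervals both contain t_query -- a simultaneous
    spatial-relationship and temporal query."""
    hits = []
    for a, b, data in g.edges(data=True):
        if data.get("relation") != relation:
            continue
        na, nb = g.nodes[a], g.nodes[b]
        if (na["ftype"], nb["ftype"]) != (ftype_a, ftype_b):
            continue
        if all(n["t_start"] <= t_query <= n["t_end"] for n in (na, nb)):
            hits.append((a, b))
    return hits

g = nx.Graph()
add_feature(g, "bldg1", "building", -106.6, 35.1, 2010, 2012)
add_feature(g, "road7", "road", -106.6, 35.1, 2005, 2012)
g.add_edge("bldg1", "road7", relation="adjacent_to")
print(find_related(g, "building", "adjacent_to", "road", 2011))
```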

A many-electron tight binding method for the analysis of quantum dot systems

Journal of Applied Physics

Nielsen, Erik N.; Rahman, Rajib; Muller, Richard P.

We present a method that computes many-electron energies and eigenfunctions by full configuration interaction in a basis of atomistic tight-binding wave functions. This approach captures electron correlation as well as atomistic effects, and is well suited to solid-state quantum dot systems containing few electrons, where valley physics and disorder contribute significantly to device behavior. Results are reported for a two-electron silicon double quantum dot as an example. © 2012 American Institute of Physics.
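
A minimal sketch of the full-configuration-interaction idea follows, assuming spinless electrons hopping on a one-dimensional chain with a nearest-neighbour density-density repulsion. The paper treats realistic atomistic silicon dots with spin and valley physics; this toy model and its parameters are purely illustrative.

```python
# Toy full CI for two spinless electrons on a 1-D tight-binding chain.
import numpy as np

M, t, v = 6, 1.0, 2.0          # orbitals (sites), hopping, interaction

# One-body tight-binding Hamiltonian: nearest-neighbour hopping.
h = np.zeros((M, M))
for i in range(M - 1):
    h[i, i + 1] = h[i + 1, i] = -t

# Two-body operator on the full two-particle product space, with a
# diagonal density-density form: <pq|V|pq> = v when sites are adjacent.
I = np.eye(M)
H2 = np.kron(h, I) + np.kron(I, h)
for p in range(M):
    for q in range(M):
        if abs(p - q) == 1:
            idx = p * M + q
            H2[idx, idx] += v

# Project onto the antisymmetric (fermionic) subspace spanned by
# (|p,q> - |q,p>)/sqrt(2) for p < q -- the "full CI" basis.
pairs = [(p, q) for p in range(M) for q in range(p + 1, M)]
A = np.zeros((M * M, len(pairs)))
for col, (p, q) in enumerate(pairs):
    A[p * M + q, col] = 1 / np.sqrt(2)
    A[q * M + p, col] = -1 / np.sqrt(2)
H_ci = A.T @ H2 @ A

energies, states = np.linalg.eigh(H_ci)
print("two-electron ground-state energy:", energies[0])
```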

Use of a SPAR-H Bayesian network for predicting human error probabilities with missing observations

11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, PSAM11 ESREL 2012

Groth, Katrina M.; Swiler, Laura P.

Many of the Performance Shaping Factors (PSFs) used in Human Reliability Analysis (HRA) methods are not directly measurable or observable. Methods like SPAR-H require the analyst to assign values for all of the PSFs, regardless of their observability; this introduces subjectivity into the human error probability (HEP) calculation. One way to reduce the subjectivity of HRA estimates is to formally incorporate information about the probability of the PSFs into the methodology for calculating the HEP. This can be accomplished by encoding prior information in a Bayesian Network (BN) and updating the network using available observations. We translated an existing HRA methodology, SPAR-H, into a Bayesian Network to demonstrate the usefulness of the BN framework. We focus on the ability to incorporate prior information about PSF probabilities into the HRA process. This paper discusses how we produced the model by combining information from two sources, and how the BN model can be used to estimate HEPs despite missing observations. Use of the prior information allows HRA analysts to estimate HEPs from partial information, and to rely on the prior information (from data or the cognitive literature) when they are unable to gather information about the state of a particular PSF. The SPAR-H BN model is a starting point for future research activities to create a more robust HRA BN model using data from multiple sources.
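
The following toy example (not the actual SPAR-H BN, whose structure and probabilities come from the sources described in the paper) shows the core mechanic: conditioning on a PSF when it is observed, and marginalizing over its prior when the observation is missing. All numbers are invented.

```python
# One binary PSF with a prior, and a conditional human-error
# probability (HEP) given the PSF state.
import numpy as np

p_psf = np.array([0.8, 0.2])              # P(PSF = nominal, degraded)
p_err_given_psf = np.array([1e-3, 1e-2])  # P(error | PSF state)

def hep(psf_obs=None):
    """HEP conditioned on an observed PSF state, or marginalized
    over the prior when the observation is missing."""
    if psf_obs is None:                   # missing observation:
        return p_psf @ p_err_given_psf    # fall back on the prior
    return p_err_given_psf[psf_obs]       # observed: condition on it

print(hep())   # no observation -> 0.8*1e-3 + 0.2*1e-2 = 0.0028
print(hep(1))  # PSF observed degraded    -> 0.01
```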

Oh, exascale! The effect of emerging architectures on scientific discovery

Proceedings - 2012 SC Companion: High Performance Computing, Networking, Storage and Analysis, SCC 2012

Moreland, Kenneth D.

The predictions for exascale computing are dire. Although we have benefited from a consistent supercomputer architecture design, even across manufacturers, for well over a decade, recent trends indicate that future high-performance computers will have different hardware structures and programming models to which software must adapt. This paper provides an informal discussion of the ways in which changes in high-performance computing architecture will profoundly affect the scalability of our current generation of scientific visualization and analysis codes, and how we must adapt our applications, workflows, and attitudes to continue our success at exascale computing. © 2012 IEEE.

Navigating an evolutionary fast path to exascale

Proceedings - 2012 SC Companion: High Performance Computing, Networking, Storage and Analysis, SCC 2012

Barrett, Richard F.; Hammond, Simon; Vaughan, Courtenay T.; Doerfler, Douglas W.; Heroux, Michael A.

The computing community is in the midst of a disruptive architectural change. The advent of manycore and heterogeneous computing nodes forces us to reconsider every aspect of the system software and application stack. To address this challenge, there is a broad spectrum of approaches, which we roughly classify as either revolutionary or evolutionary. With the former, the entire code base is re-written, perhaps using a new programming language or execution model. The latter, which is the focus of this work, seeks a piecewise path of effective incremental change. The end effect of our approach will be revolutionary in that the control structure of the application will be markedly different in order to utilize single-instruction multiple-data/thread (SIMD/SIMT), manycore, and heterogeneous nodes, but the physics code fragments will be remarkably similar. Our approach is guided by a set of mission-driven applications and their proxies, focused on balancing performance potential with the realities of existing application code bases. Although the specifics of this process have not yet converged, we find that there are several important steps that developers of scientific and engineering application programs can take to prepare for making effective use of these challenging platforms. Aiding an evolutionary approach is the recognition that the performance potential of the architectures is, in a meaningful sense, an extension of existing capabilities: vectorization, threading, and a revisiting of node interconnect capabilities. Therefore, as architectures, programming models, and programming mechanisms continue to evolve, the preparations described herein will provide significant performance benefits on existing and emerging architectures. © 2012 IEEE.
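
As a loose illustration of the "different control structure, similar physics fragments" point, the sketch below rewrites a one-dimensional diffusion update from a scalar loop into a whole-array form, with numpy standing in for the SIMD/threaded restructuring an evolutionary port would apply to C or Fortran kernels. The kernel and names are invented for illustration.

```python
# Same physics expression, two control structures.
import numpy as np

def step_scalar(u, alpha):
    """Original control structure: explicit loop over interior points."""
    new = u.copy()
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])  # physics
    return new

def step_vectorized(u, alpha):
    """Restructured for data parallelism: same physics, whole slices."""
    new = u.copy()
    new[1:-1] = u[1:-1] + alpha * (u[:-2] - 2 * u[1:-1] + u[2:])  # physics
    return new

u = np.zeros(16)
u[8] = 1.0
assert np.allclose(step_scalar(u, 0.1), step_vectorized(u, 0.1))
```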

Assessing the predictive capabilities of mini-applications

Proceedings - 2012 SC Companion: High Performance Computing, Networking, Storage and Analysis, SCC 2012

Barrett, Richard F.; Crozier, Paul; Doerfler, Douglas W.; Hammond, Simon; Heroux, Michael A.; Lin, Paul T.; Trucano, Timothy G.; Vaughan, Courtenay T.; Williams, Alan B.

The push to exascale computing is informed by the assumption that the architecture, regardless of the specific design, will be fundamentally different from petascale computers. The Mantevo project has been established to produce a set of proxies, or 'miniapps,' which enable rapid exploration of key performance issues that impact a broad set of scientific application programs of interest to ASC and the broader HPC community. The conditions under which a miniapp can be confidently used as a predictor of an application's behavior must be clearly elucidated. Toward this end, we have developed a methodology for assessing the predictive capabilities of application proxies. Adhering to the spirit of experimental validation, our approach provides a framework for comparing data from the application with that provided by its proxies. In this poster we present this methodology and apply it to three miniapps developed by the Mantevo project. © 2012 IEEE.
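
A hypothetical sketch of the comparison at the heart of such a methodology: check whether a miniapp's speedup curve tracks its parent application's across configurations. The timing data and the acceptance metric below are invented for illustration.

```python
# Compare normalized scaling behavior of an application and its miniapp.
import numpy as np

nodes = np.array([1, 2, 4, 8, 16])
app_time = np.array([100.0, 52.0, 28.0, 16.0, 10.0])  # parent app (s)
mini_time = np.array([12.0, 6.3, 3.4, 1.9, 1.2])      # its miniapp (s)

# Normalize to speedup curves so the two codes are comparable even
# though their absolute runtimes differ.
app_speedup = app_time[0] / app_time
mini_speedup = mini_time[0] / mini_time

# One possible acceptance metric: maximum relative gap between curves.
gap = np.max(np.abs(app_speedup - mini_speedup) / app_speedup)
print(f"max relative speedup gap: {gap:.1%}")
print("predictive (for this metric)" if gap < 0.10 else "not predictive")
```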
