Publications

A combinatorial method for tracing objects using semantics of their shape

Diegert, Carl

We present a shape-first approach to finding automobiles and trucks in overhead images and include results from our analysis of an image from the Overhead Imaging Research Dataset [1]. For the OIRDS, our shape-first approach traces candidate vehicle outlines by exploiting knowledge about an overhead image of a vehicle: a vehicle's outline fits into a rectangle, this rectangle is sized to allow vehicles to use local roads, and rectangles from two different vehicles are disjoint. Our shape-first approach can efficiently process high-resolution overhead imaging over wide areas to provide tips and cues for human analysts, or for subsequent automatic processing using machine learning or other analysis based on color, tone, pattern, texture, size, and/or location (hence, shape first). In fact, complex, computationally intensive structural, syntactic, and statistical analysis may become possible when a shape-first workflow sends a list of specific tips and cues down a processing pipeline rather than the whole of the wide-area imagery. This data flow may fit well when bandwidth is limited between an imaging sensor and the computers delivering ad hoc image exploitation. As expected, our early computational experiments find that the shape-first processing stage appears to reliably detect rectangular shapes from vehicles. More intriguing, our computational experiments with six-inch-GSD OIRDS benchmark images show that the shape-first stage can be efficient, and that candidate vehicle locations corresponding to features that do not include vehicles are unlikely to trigger tips and cues. We found that stopping with just the shape-first list of candidate vehicle locations, and then solving a weighted maximal independent vertex set problem to resolve conflicts among candidate locations, often correctly traces the vehicles in an OIRDS scene.
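
To illustrate the final conflict-resolution step, here is a minimal Python sketch that resolves overlapping candidate rectangles with a greedy weighted-independent-set heuristic. The scores, the rectangle format, and the greedy strategy are assumptions for illustration only; the paper poses a weighted maximal independent vertex set problem but does not say it is solved this way.

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test; a rectangle is (x0, y0, x1, y1)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def resolve_candidates(candidates):
    """Greedy heuristic for the weighted independent set: repeatedly
    keep the highest-scoring remaining candidate and discard every
    candidate whose rectangle overlaps a kept one."""
    kept = []
    for score, rect in sorted(candidates, reverse=True):
        if all(not rects_overlap(rect, kept_rect) for _, kept_rect in kept):
            kept.append((score, rect))
    return kept

# Hypothetical candidates as (score, (x0, y0, x1, y1)):
candidates = [
    (0.9, (10, 10, 30, 50)),  # strong vehicle-sized rectangle
    (0.6, (12, 12, 32, 52)),  # overlapping duplicate; conflicts with the first
    (0.8, (60, 20, 80, 60)),  # disjoint second vehicle
]
print(resolve_candidates(candidates))  # keeps the 0.9 and 0.8 rectangles
```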

Interfacial structure in telluride-based thermoelectric materials

Medlin, Douglas L.

Chalcogenide compounds based on the rocksalt and tetradymite structures possess good thermoelectric properties and are widely used in a variety of thermoelectric devices. Examples include PbTe and AgSbTe2, which have the rocksalt structure, and Bi2Te3, Bi2Se3, and Sb2Te3, which fall within the broad tetradymite class of structures. These materials are also of interest for thermoelectric nanocomposites, where the aim is to improve thermoelectric energy-conversion efficiency by harnessing interfacial scattering processes (e.g., reducing the thermal conductivity by phonon scattering or enhancing the Seebeck coefficient by energy filtering). Understanding the phase stability and microstructural evolution within such materials is key to designing processing approaches for optimal thermoelectric performance and to predicting the long-term nanostructural stability of the materials. In this presentation, we discuss our work investigating relationships between interfacial structure and formation mechanisms in several telluride-based thermoelectric materials. We begin with a discussion of interfacial coherency and its special aspects at interfaces in telluride compounds based on the rocksalt and tetradymite structures. We compare perfectly coherent interfaces, such as the Bi2Te3 (0001) twin, with semi-coherent, misfitting interfaces. We next discuss the formal crystallographic analysis of interfacial defects in these systems and then apply this methodology to high-resolution transmission electron microscopy (HRTEM) observations of interfaces in the AgSbTe2/Sb2Te3 and PbTe/Sb2Te3 systems, focusing on interfaces vicinal to {111}/{0001}. Through this analysis, we identify a defect that can accomplish the rocksalt-to-tetradymite phase transformation through diffusive-glide motion along the interface.

Impact of Rayleigh-Taylor on neutron production in a deuterium Z-pinch

Stygar, William A.; Leeper, Ramon J.

A deuterium gas-puff z-pinch has been shown to be a significant source of neutrons, with yield scaling with current as Y_n ≈ I^3.5. Recent implicit, electromagnetic, kinetic particle-in-cell simulations with the LSP code have shown that the yield has significant thermonuclear and beam-target components. Beam-target neutron yield is produced from high-energy deuterium-ion tails driven by the Rayleigh-Taylor instability. In this paper, we present further results from 1D-3D simulations of deuterium z-pinches over a wider current range, 1.4-20 MA. Preliminary results show that, unlike the high-current regime above 7 MA, the yield at lower currents is dominated by beam-target fusion reactions from high-energy ions, consistent with experiment. We will also examine in 3D the impact of the Rayleigh-Taylor instability on the ion energy distribution, and we discuss the implications of these simulations for neutron yield at still higher currents.
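
To make the quoted scaling concrete, a two-line sketch of Y_n ≈ I^3.5 follows; the proportionality constant cancels in the ratio, the exponent comes from the abstract, and the particular currents are chosen from the range it discusses.

```python
def yield_ratio(i_high, i_low, exponent=3.5):
    """Relative neutron yield under the empirical scaling Y_n ~ I**3.5;
    the proportionality constant cancels in the ratio."""
    return (i_high / i_low) ** exponent

# Raising the drive current from 7 MA to 20 MA would multiply the
# yield by roughly (20/7)**3.5, i.e. about a factor of 39:
print(f"{yield_ratio(20, 7):.0f}x")
```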

Computing contingency statistics in parallel: design trade-offs and limiting cases

Bennett, Janine C.; Thompson, David; Pebay, Philippe P.

Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge for any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce-style implementation. In this paper we focus on contingency tables, from which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics (which we discussed in [1]), where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open-source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
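
A minimal serial sketch of the derived statistics named above, assuming categorical (x, y) samples; this is not the paper's parallel implementation, but note in passing that two such tables merge by summing counts, which is exactly the inter-processor communication whose growth with data size the paper analyzes.

```python
import math
from collections import Counter

def contingency_stats(pairs):
    """Contingency table from categorical (x, y) samples, plus the
    derived statistics named above: pointwise mutual information,
    joint entropy, and the chi-squared independence statistic."""
    n = len(pairs)
    joint = Counter(pairs)             # the contingency table
    px = Counter(x for x, _ in pairs)  # marginal counts of x
    py = Counter(y for _, y in pairs)  # marginal counts of y

    pmi = {(x, y): math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
           for (x, y), c in joint.items()}
    entropy = -sum((c / n) * math.log2(c / n) for c in joint.values())
    chi2 = sum((joint.get((x, y), 0) - px[x] * py[y] / n) ** 2
               / (px[x] * py[y] / n)
               for x in px for y in py)
    return pmi, entropy, chi2

# Tables built on different processors would merge by summing counts
# (Counter supports + directly); that merge is the communication step.
pmi, H, chi2 = contingency_stats([("a", 0), ("a", 0), ("b", 1), ("b", 0)])
print(f"H = {H:.2f} bits, chi2 = {chi2:.2f}")  # H = 1.50 bits, chi2 = 1.33
```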

An overview of the Morfeus project

Rouson, Damian R.

The objectives of this project are to: (1) move scientific programmers to higher-level, platform-agnostic yet scalable abstractions; (2) demonstrate general object-oriented design (OOD) patterns and distill new domain-specific patterns from multiphysics applications in Fortran; and (3) construct an open-source framework that encourages the use of the demonstrated patterns. Some conclusions are: (1) calculus illuminates a path toward highly asynchronous computing that blurs the task-parallel/data-parallel distinction; (2) Fortran 2003 appears to have the expressiveness to support the general Gang of Four (GoF) design patterns in multiphysics applications; and (3) several domain-specific and language-specific patterns emerge along the way.
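
As a rough illustration of the "calculus" abstraction named in the first conclusion, the sketch below overloads arithmetic and a derivative operation on a toy field type so that solver code reads like the underlying math. It is written in Python rather than the project's Fortran 2003 purely to keep this listing's examples in one language, and every name in it is hypothetical rather than drawn from the Morfeus framework.

```python
class Field:
    """Toy 1-D periodic field illustrating the abstract-calculus idea:
    solver code is written against mathematical operations
    (d/dx, +, scalar *) rather than loops over a data layout."""

    def __init__(self, values, dx):
        self.v, self.dx = list(values), dx

    def ddx(self):
        """Central-difference derivative with periodic boundaries."""
        n = len(self.v)
        return Field([(self.v[(i + 1) % n] - self.v[i - 1]) / (2 * self.dx)
                      for i in range(n)], self.dx)

    def __add__(self, other):
        return Field([a + b for a, b in zip(self.v, other.v)], self.dx)

    def __mul__(self, scalar):
        return Field([scalar * a for a in self.v], self.dx)

# One forward-Euler step of the advection equation u_t = -u_x
# then reads like the math itself:
u = Field([0.0, 1.0, 0.0, -1.0], dx=0.25)
u_next = u + u.ddx() * (-0.1)
```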

Vulnerability analysis for complex networks using aggressive abstraction

Large, complex networks are ubiquitous in nature and society, and there is great interest in developing rigorous, scalable methods for identifying and characterizing their vulnerabilities. This paper presents an approach for analyzing the dynamics of complex networks in which the network of interest is first abstracted to a much simpler, but mathematically equivalent, representation; the required analysis is performed on the abstraction; and the analytic conclusions are then mapped back to the original network and interpreted there. We begin by identifying a broad and important class of complex networks that admit vulnerability-preserving, finite-state abstractions, and we develop efficient algorithms for computing these abstractions. We then propose a vulnerability analysis methodology that combines these finite-state abstractions with formal analytics from theoretical computer science to yield a comprehensive vulnerability analysis process for networks of real-world scale and complexity. The potential of the proposed approach is illustrated with a case study involving a realistic electric power grid model, and also with brief discussions of biological and social network examples.
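
The sketch below shows only the general flavor of such an abstraction: nodes are grouped into equivalence classes and edges are lifted to a quotient graph on which analysis can proceed. The partition, the example grid, and the function name are all hypothetical, and the paper's abstractions carry formal vulnerability-preservation guarantees that this sketch does not attempt to capture.

```python
from collections import defaultdict

def quotient_graph(edges, node_class):
    """Collapse each equivalence class of nodes to a single abstract
    state and lift the edges to the quotient graph; analysis then
    runs on the (much smaller) abstraction."""
    abstract = defaultdict(set)
    for u, v in edges:
        cu, cv = node_class[u], node_class[v]
        if cu != cv:  # within-class edges become self-loops; drop them here
            abstract[cu].add(cv)
    return dict(abstract)

# Hypothetical six-bus grid fragment partitioned into two regions:
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (3, 6)]
node_class = {1: "gen", 2: "gen", 3: "gen", 4: "load", 5: "load", 6: "load"}
print(quotient_graph(edges, node_class))  # {'gen': {'load'}}
```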
