Publications

Results 26–35 of 35

Algorithmic Strategies in Combinatorial Chemistry

Istrail, Sorin I.; Womble, David E.

Combinatorial chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development, and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial-time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior in both accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
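
To make the notion of a shortest-path topological index concrete, the following is a minimal Python sketch, not code from the OCOTILLO package: it computes the Wiener index, a classic shortest-path topological index, for a small hydrogen-suppressed chemical graph. The graph encoding (vertex count plus bond list) is a hypothetical choice for illustration.

    from itertools import combinations

    def wiener_index(n, edges):
        # Sum of shortest-path distances over all unordered pairs of atoms,
        # treating bonds as unit-weight edges (Floyd-Warshall all-pairs).
        INF = float("inf")
        dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
        for u, v in edges:
            dist[u][v] = dist[v][u] = 1
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return sum(dist[i][j] for i, j in combinations(range(n), 2))

    # n-butane's carbon skeleton, a path on four vertices: Wiener index 10
    print(wiener_index(4, [(0, 1), (1, 2), (2, 3)]))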

Salvo: Seismic imaging software for complex geologies

Ober, Curtis C.; Womble, David E.

This report describes Salvo, a three-dimensional seismic-imaging software package for complex geologies. Regions of complex geology, such as overthrusts and salt structures, can cause difficulties for many seismic-imaging algorithms used in production today. The paraxial wave equation and finite-difference methods used within Salvo can produce high-quality seismic images in these difficult regions, but this approach comes with higher computational costs, which have been too expensive for standard production. Salvo uses improved numerical algorithms and methods, along with parallel computing, to produce high-quality images and to reduce the computational and data input/output (I/O) costs. This report documents the numerical algorithms implemented for the paraxial wave equation, including absorbing boundary conditions, phase corrections, imaging conditions, phase encoding, and reduced-source migration. It also describes I/O algorithms for large seismic data sets and images, the parallelization methods used to obtain high efficiencies in both the computations and the I/O of seismic data sets, the steps required to compile, port, and optimize the Salvo software, and the validation data sets used to help verify a working copy of Salvo.
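
For readers unfamiliar with finite-difference wave propagation, here is an illustrative Python sketch of a second-order explicit update for the full 2D acoustic wave equation. This is a deliberate simplification: Salvo itself solves a paraxial (one-way) wave equation with absorbing boundary conditions, and every grid parameter below is an arbitrary assumption.

    import numpy as np

    # Illustration only: explicit finite differences for the full acoustic wave
    # equation, not Salvo's paraxial scheme. Reflecting boundaries are used here;
    # Salvo applies absorbing boundary conditions instead. Parameters are made up.
    nx, nz, dx, dt, nt = 200, 200, 10.0, 1e-3, 500
    c = np.full((nz, nx), 2500.0)        # constant velocity model, m/s
    u_prev = np.zeros((nz, nx))          # wavefield at t - dt
    u_curr = np.zeros((nz, nx))          # wavefield at t
    u_curr[nz // 2, nx // 2] = 1.0       # impulsive source at the grid center

    for _ in range(nt):                  # c*dt/dx = 0.25 satisfies CFL stability
        lap = np.zeros_like(u_curr)
        lap[1:-1, 1:-1] = (u_curr[2:, 1:-1] + u_curr[:-2, 1:-1] +
                           u_curr[1:-1, 2:] + u_curr[1:-1, :-2] -
                           4.0 * u_curr[1:-1, 1:-1]) / dx**2
        u_next = 2.0 * u_curr - u_prev + (c * dt)**2 * lap
        u_prev, u_curr = u_curr, u_next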

The Challenge of Massively Parallel Computing

Womble, David E.

Since the mid-1980s, a number of parallel computers with hundreds or thousands of processors have been commercially available. These machines have provided a new capability to the scientific community, and scientists and engineers have used them with varying degrees of success. One reason for the limited success is the difficulty, or perceived difficulty, of developing code for these machines. In this paper we discuss many of the issues and challenges in developing scalable hardware, system software, and algorithms for machines comprising hundreds or thousands of processors.

Massively parallel computing: A Sandia perspective

Parallel Computing

Womble, David E.

The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. Expectations for these machines have been high; the reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large-scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software, and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

3D seismic imaging on massively parallel computers

Womble, David E.

The ability to image complex geologies, such as salt domes in the Gulf of Mexico and thrusts in mountainous regions, is key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable seismic imaging code.

3-D seismic imaging of complex geologies

Womble, David E.

We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code achieved the fastest timings reported for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave-equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.
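
As a rough illustration of the Kirchhoff approach mentioned above, the sketch below performs a toy diffraction-stack migration for zero-offset data in a constant-velocity medium. Production pre-stack codes use traveltime tables, amplitude weighting, and anti-alias filtering; every name and parameter here is an assumption for illustration, not the paper's implementation.

    import numpy as np

    def kirchhoff_zero_offset(data, dt, dx, velocity, nz, dz):
        # Each image point sums trace amplitudes along its diffraction
        # traveltime curve (a constant-velocity, zero-offset toy version).
        nt, ntraces = data.shape
        image = np.zeros((nz, ntraces))
        xs = np.arange(ntraces) * dx                 # surface positions
        for iz in range(nz):
            z = (iz + 1) * dz                        # image-point depth
            for ix in range(ntraces):
                # two-way traveltime from every surface position to the point
                t = 2.0 * np.sqrt(z**2 + (xs - xs[ix])**2) / velocity
                it = np.rint(t / dt).astype(int)
                live = it < nt                       # curve still inside record
                image[iz, ix] = data[it[live], np.nonzero(live)[0]].sum()
        return image

    # Example: migrate a random synthetic section standing in for field data.
    section = np.random.default_rng(0).standard_normal((1000, 64))
    img = kirchhoff_zero_offset(section, dt=0.002, dx=25.0,
                                velocity=2000.0, nz=100, dz=10.0)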

Applications of boundary element methods on the Intel Paragon

Proceedings of the ACM/IEEE Supercomputing Conference

Womble, David E.

This paper describes three applications of the boundary element method and their implementations on the Intel Paragon supercomputer. Each of these applications sustains over 99 Gflop/s, based on wall-clock time for the entire application and an actual count of the floating-point operations executed; one application sustains over 140 Gflop/s. Each application accepts the description of an arbitrary geometry and computes the solution to a problem of commercial and research interest. The common kernel for these applications is a dense equation solver based on LU factorization. It is generally accepted that good performance can be achieved by dense matrix algorithms, but achieving the excellent performance demonstrated here required the development of a variety of special techniques to take full advantage of the power of the Intel Paragon.
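
The common kernel described above, a dense solve via LU factorization, looks like the following in serial library form. The Paragon codes used a custom distributed solver, so this sketch only shows the mathematical kernel, with a random diagonally dominant matrix standing in for a boundary-element system.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Solve a dense system A x = b by LU factorization with partial pivoting.
    n = 1000
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)

    lu, piv = lu_factor(A)          # the O(n^3) factorization dominates runtime
    x = lu_solve((lu, piv), b)      # cheap triangular solves reuse the factors
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative residual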

Beyond core: Making parallel computer I/O practical

Womble, David E.

The solution of Grand Challenge problems will require computations that are too large to fit in the memories of even the largest machines. Inevitably, new I/O system designs will be necessary to support them. Through our implementations of an out-of-core LU factorization, we have learned several important lessons about what I/O systems should look like. In particular, we believe that the I/O system must give the programmer the ability to explicitly manage storage. One way to do so is to provide partitioned secondary storage, in which each processor owns a logical disk. Along with operating system enhancements that allow overheads such as buffer copying to be avoided, this sort of I/O system meets the needs of high-performance computing.
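
A minimal sketch of the out-of-core idea, assuming numpy.memmap as a stand-in for explicitly managed secondary storage: the matrix lives on disk and only one panel plus one trailing block are held in memory at a time. This is not the paper's solver; pivoting and parallel I/O are omitted for brevity, and all sizes are arbitrary assumptions.

    import numpy as np
    from scipy.linalg import solve_triangular

    # Pivoting is safe to omit only because the test matrix is made diagonally
    # dominant; a real out-of-core solver pivots and overlaps I/O with compute.
    n, nb = 1024, 128
    A = np.memmap("matrix.dat", dtype=np.float64, mode="w+", shape=(n, n))
    A[:] = np.random.default_rng(0).standard_normal((n, n)) + n * np.eye(n)

    for k in range(0, n, nb):
        panel = np.array(A[k:, k:k + nb])        # read one panel from disk
        for j in range(nb):                      # unblocked LU of the panel
            panel[j + 1:, j] /= panel[j, j]
            panel[j + 1:, j + 1:] -= np.outer(panel[j + 1:, j],
                                              panel[j, j + 1:])
        A[k:, k:k + nb] = panel                  # write factored panel back
        for j in range(k + nb, n, nb):           # update trailing block columns
            block = np.array(A[k:, j:j + nb])    # read one trailing block
            block[:nb] = solve_triangular(panel[:nb], block[:nb],
                                          lower=True, unit_diagonal=True)
            block[nb:] -= panel[nb:] @ block[:nb]    # rank-nb Schur update
            A[k:, j:j + nb] = block              # write the block back to disk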

Out of core, out of mind: Practical parallel I/O

Proceedings of Scalable Parallel Libraries Conference, SPLC 1993

Womble, David E.

Parallel computers are becoming more powerful and more complex in response to the demand for computing power by scientists and engineers. Inevitably, new and more complex I/O systems will be developed for these machines. In particular, we believe that the I/O system must give the programmer the ability to explicitly manage storage (despite the trend toward complex parallel file systems and caching schemes). One way to do so is to provide partitioned secondary storage, in which each processor owns a logical disk. Together with operating system enhancements that allow overheads such as buffer copying to be avoided, and libraries that support optimal remapping of data, this sort of I/O system meets the needs of high-performance computing.
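
The partitioned-secondary-storage model can be sketched as follows: each (simulated) processor owns a logical disk, realized here as a private file that it alone reads and writes, with placement managed explicitly by the owner. The class and file layout are illustrative assumptions, not an interface from the paper.

    import numpy as np
    from pathlib import Path

    class LogicalDisk:
        # One logical disk per processor: a private file whose layout the
        # owning processor manages explicitly, with no shared cache in between.
        def __init__(self, rank, directory="pss"):
            Path(directory).mkdir(exist_ok=True)
            self.path = Path(directory) / f"disk_rank{rank}.dat"
            self.index = {}              # block id -> (offset, shape, dtype)

        def write_block(self, block_id, array):
            with open(self.path, "ab") as f:
                self.index[block_id] = (f.tell(), array.shape, array.dtype)
                f.write(np.ascontiguousarray(array).tobytes())

        def read_block(self, block_id):
            offset, shape, dtype = self.index[block_id]
            with open(self.path, "rb") as f:
                f.seek(offset)
                raw = f.read(int(np.prod(shape)) * dtype.itemsize)
            return np.frombuffer(raw, dtype=dtype).reshape(shape)

    # A processor stages a matrix panel out of core and reads it back later.
    disk = LogicalDisk(rank=0)
    panel = np.arange(12.0).reshape(3, 4)
    disk.write_block("panel0", panel)
    assert np.array_equal(disk.read_block("panel0"), panel)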

Two time stepping algorithms for parallel computers

Womble, David E.

Time stepping algorithms are often used to solve parabolic and hyperbolic differential equations numerically. These algorithms are generally regarded as sequential in time; that is, the solution on a time level must be known before the computation of the solution at subsequent time levels can start. While this remains true in principle, we demonstrate that it is possible for processors to perform useful work on many time levels simultaneously. Specifically, a processor assigned to a "later" time level can compute a very good initial guess for the solution based on approximations to the solutions on "previous" time levels, thus reducing the time required for solution. The reduction in solution time can be measured as parallel speedup. We demonstrate two parallel time stepping algorithms that can be used for both linear and nonlinear problems. We compare the two algorithms and discuss their performance in terms of parameters associated with classical time stepping algorithms.
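
The idea of letting later time levels do useful work early can be simulated serially, as in the sketch below: all time levels of an implicit-Euler heat-equation solve iterate simultaneously, each level taking its data from the current, not yet converged, iterate of the previous level. The model problem and all parameters are illustrative assumptions, not the paper's two algorithms.

    import numpy as np

    # Serial simulation of pipelined time stepping for u_t = u_xx with zero
    # Dirichlet boundaries: every time level performs one Jacobi iteration per
    # sweep. On a parallel machine each level would run on its own processor.
    nx, nlevels, dt = 50, 8, 1e-4
    dx = 1.0 / (nx + 1)
    r = dt / dx**2
    u0 = np.sin(np.pi * np.linspace(dx, 1.0 - dx, nx))   # initial condition
    u = [u0.copy() for _ in range(nlevels)]              # iterate per time level

    for sweep in range(200):
        for k in range(nlevels):                 # conceptually concurrent
            rhs = u0 if k == 0 else u[k - 1]     # data from the previous level
            nbrs = np.zeros(nx)                  # u_{i-1} + u_{i+1}, zero BCs
            nbrs[1:] += u[k][:-1]
            nbrs[:-1] += u[k][1:]
            # one Jacobi step for (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = rhs_i
            u[k] = (rhs + r * nbrs) / (1.0 + 2.0 * r)
    # u[k] now approximates the sequential solution at time (k + 1) * dt.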
