Publications

Results 9426–9450 of 9,998

Geophysical subsurface imaging and interface identification

Day, David M.; Bochev, Pavel B.; Weiss, Chester J.; Robinson, Allen C.

Electromagnetic induction is a classic geophysical exploration method designed for subsurface characterization--in particular, sensing the presence of geologic heterogeneities and fluids such as groundwater and hydrocarbons. Several approaches to the computational problems associated with predicting and interpreting electromagnetic phenomena in and around the earth are addressed herein. Publications resulting from the project include [31]. To obtain accurate and physically meaningful numerical simulations of natural phenomena, computational algorithms should operate in discrete settings that reflect the structure of the governing mathematical models. In section 2, the extension of algebraic multigrid methods for the time-domain eddy current equations to the frequency-domain problem is discussed. Software was developed and is available in the Trilinos ML package. In section 3, we consider finite element approximations of De Rham's complex. We describe how to develop a family of finite element spaces that forms an exact sequence on hexahedral grids. The ensuing family of non-affine finite elements is called a van Welij complex, after the work [37] of van Welij, who first proposed a general method for developing tangentially and normally continuous vector fields on hexahedral elements. The use of this complex is illustrated for the eddy current equations and a conservation law problem. Software was developed and is available in the Ptenos finite element package. The more popular methods of geophysical inversion seek solutions to an unconstrained optimization problem by imposing stabilizing constraints in the form of smoothing operators on some enormous set of model parameters (i.e., ''over-parametrize and regularize''). In contrast, we investigate an alternative approach whereby sharp jumps in material properties are preserved in the solution by choosing as model parameters a modest set of variables that describe an interface between adjacent regions in physical space. While still over-parametrized, this choice of model space contains far fewer parameters than before, thus easing, in some cases, the computational burden of the optimization problem. Most importantly, the associated finite element discretization is aligned with the abrupt changes in material properties associated with lithologic boundaries as well as the interface between buried cultural artifacts and the surrounding Earth. In section 4, algorithms and tools are described that associate a smooth interface surface to a given triangulation. In particular, the tools support surface refinement and coarsening. Section 5 describes some preliminary results on the application of interface identification methods to some model problems in geophysical inversion. Due to time constraints, the results described here use the GNU Triangulated Surface Library for the manipulation of surface meshes and the TetGen software library for the generation of tetrahedral meshes.
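
As a rough illustration of the interface-based parameterization discussed in this abstract, the following Python/NumPy sketch maps a handful of interface depth control points to a cell-wise conductivity field on a regular grid; the names, grid sizes, and material values are hypothetical and are not taken from the report or from its Ptenos/Trilinos software.

    # Interface-based parameterization: a few depth control points describe the
    # boundary between two materials instead of one parameter per grid cell.
    # All names and values below are hypothetical.
    import numpy as np

    nx, nz = 64, 64                                    # grid cells (nx*nz cell values)
    x = np.linspace(0.0, 1.0, nx)
    z = np.linspace(0.0, 1.0, nz)

    # Modest model space: 5 control points giving interface depth as a function of x.
    x_ctrl = np.linspace(0.0, 1.0, 5)
    z_ctrl = np.array([0.40, 0.45, 0.55, 0.50, 0.42])  # hypothetical interface depths

    def conductivity_field(z_ctrl):
        """Map the interface parameters to a cell-wise conductivity field (S/m)."""
        z_interface = np.interp(x, x_ctrl, z_ctrl)     # piecewise-linear interface
        return np.where(z[None, :] < z_interface[:, None], 0.01, 0.10)  # sharp jump

    sigma = conductivity_field(z_ctrl)
    print("inversion unknowns:", z_ctrl.size, "instead of", sigma.size)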

Peridynamic modeling of plain and reinforced concrete structures

Silling, Stewart A.

The peridynamic model was introduced by Silling in 1998. In this paper, we demonstrate the application of the quasistatic peridynamic model to two-dimensional, linear elastic, plane stress and plane strain problems, with special attention to the modeling of plain and reinforced concrete structures. We consider just one deviation from linearity--that which arises due to the irreversible, sudden breaking of bonds between particles. The peridynamic model starts with the assumption that Newton's second law holds true on every infinitesimally small free body (or particle) within the domain of analysis. A specified force density function (with units of force per unit volume per unit volume), called the pairwise force function, between each pair of infinitesimally small particles is postulated to act if the particles are closer together than some finite distance, called the material horizon. The pairwise force function may be assumed to be a function of the relative position and the relative displacement of the two particles. In this paper, we assume that for two particles closer together than the specified material horizon, the pairwise force function increases linearly with respect to the stretch, but at some specified stretch it is irreversibly reduced to zero.
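
A minimal Python/NumPy sketch of the bond force law described above (linear in the bond stretch until a critical stretch, after which the bond is irreversibly broken) is given below; the micromodulus, critical stretch, and horizon values are illustrative and not taken from the paper.

    # Pairwise bond force: linear in stretch up to a critical stretch, then
    # irreversibly zero. Constants are illustrative, not the paper's values.
    import numpy as np

    def pairwise_force(xi, eta, broken, c=1.0e9, s_crit=2.0e-3, horizon=0.05):
        """Bond force density vectors (force/volume^2) for a batch of bonds.

        xi     -- reference relative positions, shape (n, 2)
        eta    -- relative displacements, shape (n, 2)
        broken -- boolean array, updated in place; breaking is irreversible
        """
        r0 = np.linalg.norm(xi, axis=1)                # reference bond length
        r = np.linalg.norm(xi + eta, axis=1)           # deformed bond length
        stretch = (r - r0) / r0
        broken |= stretch > s_crit                     # once broken, stays broken
        active = (r0 <= horizon) & ~broken
        magnitude = np.where(active, c * stretch, 0.0)
        return magnitude[:, None] * (xi + eta) / r[:, None]

    xi = np.array([[0.01, 0.0], [0.02, 0.01]])
    eta = np.array([[1.0e-5, 0.0], [1.0e-4, 0.0]])
    broken = np.zeros(2, dtype=bool)
    print(pairwise_force(xi, eta, broken), broken)     # second bond breaks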

Programming future architectures: dusty decks, memory walls, and the speed of light

Rodrigues, Arun

Due to advances in CMOS fabrication technology, high performance computing capabilities have continually grown. More capable hardware has allowed a range of complex scientific applications to be developed. However, these applications present a bottleneck to future performance. Entrenched 'legacy' codes - 'Dusty Decks' - demand that new hardware remain compatible with existing software. Additionally, conventional architectures face increasing challenges. Many of these challenges revolve around the growing disparity between processor and memory speed - the 'Memory Wall' - and difficulties scaling to large numbers of parallel processors. To a large extent, these limitations are inherent to the traditional computer architecture. As data is consumed more quickly, moving that data to the point of computation becomes more difficult. Barring any upward revision in the speed of light, this will continue to be a fundamental limitation on the speed of computation. This work focuses on solving these problems in the context of Light Weight Processing (LWP). LWP is an innovative technique which combines Processing-In-Memory, short vector computation, multithreading, and extended memory semantics. It applies these techniques to answer the questions 'What will a next-generation supercomputer look like?' and 'How will we program it?' To that end, this work presents four contributions: (1) An implementation of MPI which uses features of LWP to substantially improve message processing throughput; (2) A technique leveraging extended memory semantics to improve message passing by overlapping computation and communication; (3) An OpenMP library modified to allow efficient partitioning of threads between a conventional CPU and LWPs - greatly improving cost/performance; and (4) An algorithm to extract very small 'threadlets' which can overcome the inherent disadvantages of a simple processor pipeline.
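
The overlap of computation and communication mentioned in contribution (2) can be illustrated generically with nonblocking MPI calls; the mpi4py sketch below shows only that general idea and is not the Light Weight Processing mechanism described in this work.

    # Generic overlap of computation and communication with nonblocking MPI
    # (mpi4py). Illustrates the overlap idea only; this is not the LWP
    # mechanism described in the report. Run under mpirun with >= 2 ranks.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    send_buf = np.full(1000, rank, dtype='d')
    recv_buf = np.empty(1000, dtype='d')

    # Post the communication first ...
    reqs = [comm.Isend(send_buf, dest=(rank + 1) % size, tag=0),
            comm.Irecv(recv_buf, source=(rank - 1) % size, tag=0)]

    # ... then do independent local work while the messages are in flight.
    local = np.sin(np.arange(100000, dtype='d')).sum()

    MPI.Request.Waitall(reqs)          # block only when the data is needed
    print(rank, local, recv_buf[0])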

A reduced order model for the study of asymmetries in linear gas chromatography for homogeneous tubular columns

Romero, L.A.; Whiting, Joshua J.; Parks, Michael L.

In gas chromatography, a chemical sample separates into its constituent components as it travels along a long thin column. As the component chemicals exit the column they are detected and identified, allowing the chemical makeup of the sample to be determined. For correct identification of the component chemicals, the distribution of the concentration of each chemical along the length of the column must be nearly symmetric. The prediction and control of asymmetries in gas chromatography has been an active research area since the advent of the technique. In this paper, we develop from first principles a general model for isothermal linear chromatography. We use this model to develop closed-form expressions for terms related to the first, second, and third moments of the distribution of the concentration, which determine the velocity, diffusion rate, and asymmetry of the distribution. We show that for all practical experimental situations, only fronting peaks are predicted by this model, suggesting that a nonlinear chromatography model is required to predict tailing peaks. For situations where asymmetries arise, we analyze the rate at which the concentration distribution returns to a normal distribution. Numerical examples are also provided.
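
As a small companion to the moment expressions mentioned above, the Python/NumPy sketch below computes the first moment, second central moment, and skewness of a concentration profile, the quantities that characterize peak position, spreading, and asymmetry; the profile itself is synthetic and made up for illustration.

    # Peak position, spreading, and asymmetry of a concentration profile from
    # its first three moments. The profile here is synthetic.
    import numpy as np

    z = np.linspace(0.0, 1.0, 2001)                     # position along the column
    c = np.exp(-0.5 * ((z - 0.45) / 0.02) ** 2)         # synthetic main peak
    c += 0.2 * np.exp(-0.5 * ((z - 0.40) / 0.03) ** 2)  # synthetic shoulder

    w = c / np.trapz(c, z)                              # normalize to a distribution
    m1 = np.trapz(w * z, z)                             # first moment: peak position
    mu2 = np.trapz(w * (z - m1) ** 2, z)                # second central moment: spreading
    mu3 = np.trapz(w * (z - m1) ** 3, z)                # third central moment
    skewness = mu3 / mu2 ** 1.5                         # sign indicates fronting vs. tailing
    print(m1, mu2, skewness)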

On least-squares variational principles for the discretization of optimization and control problems

Proposed for publication in Methods and Applications of Analysis.

Bochev, Pavel B.

The approximate solution of optimization and control problems for systems governed by linear, elliptic partial differential equations is considered. Such problems are most often solved using methods based on the application of the Lagrange multiplier rule followed by discretization through, e.g., a Galerkin finite element method. As an alternative, we show how least-squares finite element methods can be used for this purpose. Penalty-based formulations, another approach widely used in other settings, have not enjoyed the same level of popularity in the partial differential equation case, perhaps because naively defined penalty-based methods can have practical deficiencies. We use methodologies associated with modern least-squares finite element methods to develop and analyze practical penalty methods for the approximate solution of optimization problems for systems governed by linear, elliptic partial differential equations. We develop an abstract theory for such problems; along the way, we introduce several methods based on least-squares notions, and compare and contrast their properties.
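
A toy example of the penalty/least-squares idea, on a finite-difference discretization of an optimization problem constrained by -u'' = g on (0,1) with homogeneous Dirichlet data, is sketched below in Python/NumPy; it is a generic illustration under assumed parameter values, not the paper's finite element formulation or analysis.

    # Penalty/least-squares toy problem: minimize
    #   1/2||u - u_d||^2 + alpha/2||g||^2 + 1/(2*eps)||A u - g||^2
    # where A is a finite-difference approximation of -d^2/dx^2 with
    # homogeneous Dirichlet data. Parameter values are assumed for illustration.
    import numpy as np

    n = 99
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    u_d = np.sin(np.pi * x)                             # hypothetical target state

    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2

    alpha, eps = 1.0e-4, 1.0e-6
    I, Z = np.eye(n), np.zeros((n, n))

    # Stack the three quadratic terms as one linear least-squares problem in (u, g).
    M = np.block([[I, Z],
                  [Z, np.sqrt(alpha) * I],
                  [A / np.sqrt(eps), -I / np.sqrt(eps)]])
    rhs = np.concatenate([u_d, np.zeros(2 * n)])

    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    u, g = sol[:n], sol[n:]
    print("state misfit:", np.linalg.norm(u - u_d) * np.sqrt(h))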

On the Design of Interfaces to Serial and Parallel Direct Solver Libraries

Sala, Marzio S.

We report on the design of general, flexible, consistent, and efficient interfaces to direct solver algorithms for the solution of systems of linear equations. We suppose that such algorithms are available in the form of software libraries, and we introduce a framework to facilitate the usage of these libraries. This framework is composed of two components: an abstract matrix interface to access the linear system matrix elements, and an abstract solver interface that controls the solution of the linear system. We describe a concrete implementation of the proposed framework, which allows a high-level view and usage of most of the currently available libraries that implement direct solution methods for linear systems. We comment on the advantages and limitations of the framework.
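
A minimal Python sketch of the two-component design described in this abstract (an abstract matrix interface for element access plus an abstract solver interface that controls the solve) follows; the class and method names are invented for illustration and are not the library's actual API.

    # Two-component framework sketch: an abstract matrix interface for element
    # access and an abstract solver interface that controls the solve. Class
    # and method names are invented for illustration.
    from abc import ABC, abstractmethod
    import numpy as np

    class AbstractMatrix(ABC):
        """Access to the linear-system matrix, independent of its storage."""
        @abstractmethod
        def shape(self): ...
        @abstractmethod
        def get_row(self, i):
            """Return (column indices, values) for row i."""

    class AbstractSolver(ABC):
        """Controls the solution of A x = b through a concrete backend."""
        @abstractmethod
        def factorize(self, matrix): ...
        @abstractmethod
        def solve(self, b): ...

    class DenseMatrix(AbstractMatrix):
        def __init__(self, a): self.a = np.asarray(a, dtype=float)
        def shape(self): return self.a.shape
        def get_row(self, i): return np.arange(self.a.shape[1]), self.a[i]

    class NumpySolver(AbstractSolver):
        """Stand-in 'direct solver' backend built on numpy, for illustration."""
        def factorize(self, matrix):
            n = matrix.shape()[0]
            self.a = np.vstack([matrix.get_row(i)[1] for i in range(n)])
        def solve(self, b):
            return np.linalg.solve(self.a, b)

    solver = NumpySolver()
    solver.factorize(DenseMatrix([[4.0, 1.0], [1.0, 3.0]]))
    print(solver.solve(np.array([1.0, 2.0])))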

Arsenic ion implant energy effects on CMOS gate oxide hardness

Proposed for publication in the IEEE Transactions on Nuclear Science.

Draper, Bruce L.; Shaneyfelt, Marty R.; Young, Ralph W.; Headley, Thomas J.; Dondero, Richard D.

Under conditions that were predicted as 'safe' by well-established TCAD packages, radiation hardness can still be significantly degraded by a few lucky arsenic ions reaching the gate oxide during self-aligned CMOS source/drain ion implantation. The most likely explanation is that both oxide traps and interface traps are created when ions penetrate and damage the gate oxide after channeling or traveling along polysilicon grain boundaries during the implantation process.

Parallel hypergraph partitioning for scientific computing

Boman, Erik G.; Devine, Karen D.; Heaphy, Robert T.; Hendrickson, Bruce A.

Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
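
The column-net hypergraph model alluded to above can be illustrated in a few lines of Python/NumPy: each matrix column becomes a hyperedge over the rows holding its nonzeros, and summing (lambda - 1) over hyperedges counts the communication volume of a row partition; the matrix and partition below are tiny made-up examples, not the package's algorithm.

    # Column-net hypergraph model of y = A x for a row-wise partition: each
    # column is a hyperedge over the rows with a nonzero in it, and the
    # (lambda - 1) metric counts the communication volume. Tiny made-up example.
    import numpy as np

    A = np.array([[1, 0, 1, 0],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 0, 1]], dtype=float)
    part = np.array([0, 0, 1, 1])          # hypothetical assignment of rows to 2 parts

    def communication_volume(A, part):
        volume = 0
        for j in range(A.shape[1]):                   # one hyperedge per column
            rows = np.nonzero(A[:, j])[0]             # rows connected by this hyperedge
            volume += len(np.unique(part[rows])) - 1  # (lambda - 1) for this hyperedge
        return volume

    print("communication volume:", communication_volume(A, part))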
