Publications

Sonic Infrared (IR) imaging and fluorescent penetrant inspection probability of detection (POD) comparison

AIP Conference Proceedings

DiMambro, Joseph D.; Ashbaugh, D.M.; Nelson, C.L.; Spencer, Floyd W.

Sandia National Laboratories Airworthiness Assurance Nondestructive Inspection Validation Center (AANC) implemented two crack probability of detection (POD) experiments to compare in a quantitative manner the ability of Sonic Infrared (IR) Imaging and fluorescent penetrant inspection (FPI) to reliably detect cracks. Blind Sonic IR and FPI inspections were performed on titanium and Inconel® specimens having statistically relevant flaw profiles. Inspector hit/miss data was collected and POD curves for each technique were generated and compared. In addition, the crack lengths for a number of titanium and Inconel® reference standards were measured before and after repeated Sonic IR inspections to determine if crack growth occurred. © 2007 American Institute of Physics.
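
To make the hit/miss-to-POD-curve step concrete, here is a minimal sketch: it fits a log-odds (logistic) POD model against log crack length on synthetic hit/miss data and reads off an a90 value (the crack length detected with 90% probability). The data, the model form, and the scikit-learn implementation are illustrative assumptions, not the analysis actually used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# synthetic hit/miss inspection records (illustrative only, not the study's data):
# crack lengths in inches; 1 = detected ("hit"), 0 = missed
rng = np.random.default_rng(0)
length = rng.uniform(0.01, 0.30, size=200)
p_true = 1.0 / (1.0 + np.exp(-(np.log(length) + 2.3) / 0.25))  # assumed true POD
hit = (rng.random(200) < p_true).astype(int)

# fit POD(a) = logistic(b0 + b1 * log a), a common hit/miss POD model
X = np.log(length).reshape(-1, 1)
model = LogisticRegression(C=1e6).fit(X, hit)   # large C ~ effectively unregularized

# a90: crack length with 90% probability of detection under the fitted model
b0, b1 = model.intercept_[0], model.coef_[0, 0]
a90 = np.exp((np.log(0.90 / 0.10) - b0) / b1)
print(f"estimated a90 = {a90:.3f} in")
```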

Verification and validation as applied epistemology

McNamara, Laura A.; Trucano, Timothy G.; Backus, George A.

Since 1998, the Department of Energy/NNSA National Laboratories have invested millions in strategies for assessing the credibility of computational science and engineering (CSE) models used in high-consequence decision making. The answer? There is no single answer; there is a process, and a lot of politics. The importance of model evaluation (verification, validation, uncertainty quantification, and assessment) increases in direct proportion to the significance of the model as input to a decision. Other fields, including computational social science, can learn from the experience of the national laboratories, and we draw out some implications for evaluating 'low cognition agents'. Epistemology considers the question: How do we know what we [think we] know? What makes Western science special in producing reliable, predictive knowledge about the world? V&V takes epistemology out of the realm of thought and puts it into practice. What is the role of modeling and simulation in the production of reliable, credible scientific knowledge about the world? What steps, investments, and practices do I pursue to convince myself that the model I have developed is producing credible knowledge?

A taxonomy and comparison of parallel block multi-level preconditioners for the incompressible Navier-Stokes equations

Howle, Victoria E.; Shadid, John N.; Shuttleworth, Robert R.; Tuminaro, Raymond S.

In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
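
As a concrete illustration of the approximate block factorization idea, the sketch below assembles a small generic saddle-point system and applies a block upper-triangular preconditioner whose Schur complement is replaced by the SIMPLE-style approximation B diag(A)^-1 B^T. The operators are stand-ins (a Laplacian and a random sparse constraint block), not a Navier-Stokes discretization from the paper, and the solver settings are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# "velocity" block A: 2-D Laplacian as a generic stand-in for the momentum operator
n1 = 16
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n1, n1))
I = sp.identity(n1)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
n = A.shape[0]

# "divergence" block B: random sparse stand-in for the constraint operator
m = n // 4
B = sp.random(m, n, density=0.05, format="csc", random_state=0)

K = sp.bmat([[A, B.T], [B, None]], format="csc")   # saddle-point system
rng = np.random.default_rng(0)
b = rng.standard_normal(n + m)

# approximate block factorization: exact A-solves plus a SIMPLE-style
# Schur complement approximation S_hat = B diag(A)^{-1} B^T
A_lu = spla.splu(A)
S_hat = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()
S_lu = spla.splu(S_hat)

def apply_prec(r):
    # block upper-triangular preconditioner P = [[A, B^T], [0, -S_hat]]
    ru, rp = r[:n], r[n:]
    p = -S_lu.solve(rp)
    u = A_lu.solve(ru - B.T @ p)
    return np.concatenate([u, p])

M = spla.LinearOperator(K.shape, matvec=apply_prec)
x, info = spla.gmres(K, b, M=M, atol=1e-8, restart=80, maxiter=300)
print("gmres info:", info, " residual:", np.linalg.norm(K @ x - b))
```

Different members of such a taxonomy amount to different approximations inside apply_prec, e.g. replacing the diagonal Schur complement approximation with a multilevel solve.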

Surface effects in semiconductor interstitial formation energies

Sandia journal manuscript; Not yet accepted for publication

Wills, Ann E.; Wixom, Ryan R.

In this work, we examine the formation energies of interstitials in semiconductors obtained with four different pure functionals. Specifically, we investigate three silicon self-interstitials. All functionals give the same trend among these interstitials: the lowest formation energy is obtained for the <110>-split configuration, a somewhat higher energy for the hexagonal interstitial, and the highest energy among the three for the meta-stable tetragonal configuration. However, the formation energy for a specific interstitial differs substantially between calculations using different functionals. It is shown that the main contribution to these differences stems from the functionals' different intrinsic surface errors. We also discuss the puzzle that the intrinsic-surface-error-free AM05 functional (Armiento and Mattsson, Phys. Rev. B 72, 085108 (2005)) gives values substantially lower than quantum Monte Carlo results.
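
For context, the self-interstitial formation energy discussed here is conventionally defined from two supercell total-energy calculations; a sketch of the standard definition (an assumption about the convention, not a formula quoted from the manuscript) is:

```latex
% formation energy of a self-interstitial X in a supercell whose perfect cell has N atoms:
% total energy of the defected (N+1)-atom cell minus (N+1)/N times the perfect-cell energy
E_f[X] = E_{\mathrm{tot}}^{N+1}[X] - \frac{N+1}{N}\, E_{\mathrm{tot}}^{N}[\mathrm{bulk}]
```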

Macro-meso-microsystems integration in LTCC: LDRD report

Rohde, Steven B.; Okandan, Murat O.; Pfeifer, Kent B.; De Smet, Dennis J.; Patel, Kamlesh P.; Ho, Clifford K.; Nordquist, Christopher N.; Walker, Charles A.; Rohrer, Brandon R.; Buerger, Stephen B.; Turner, Timothy S.; Wroblewski, Brian W.

Low Temperature Cofired Ceramic (LTCC) has proven to be an enabling medium for microsystem technologies, because of its desirable electrical, physical, and chemical properties coupled with its capability for rapid prototyping and scalable manufacturing of components. LTCC is viewed as an extension of hybrid microcircuits, and in that function it enables development, testing, and deployment of silicon microsystems. However, its versatility has allowed it to succeed as a microsystem medium in its own right, with applications in non-microelectronic meso-scale devices and in a range of sensor devices. Applications include silicon microfluidic "chip-and-wire" systems and fluid grid array (FGA)/microfluidic multichip modules using embedded channels in LTCC, and cofired electro-mechanical systems with moving parts. Both the microfluidic and mechanical system applications are enabled by sacrificial volume materials (SVM), which serve to create and maintain cavities and separation gaps during the lamination and cofiring process. SVMs consisting of thermally fugitive or partially inert materials are easily incorporated. Recognizing the premium on devices that are cofired rather than assembled, we report on functional-as-released and functional-as-fired moving parts. Additional applications for cofired transparent windows, some as small as an optical fiber, are also described. The applications described help pave the way for widespread application of LTCC to biomedical, control, analysis, characterization, and radio frequency (RF) functions for macro-meso-microsystems.

3D optical sectioning with a new hyperspectral confocal fluorescence imaging system

Haaland, David M.; Sinclair, Michael B.; Jones, Howland D.; Timlin, Jerilyn A.; Bachand, George B.; Sasaki, Darryl Y.; Davidson, George S.; Van Benthem, Mark V.

A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
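
To make the unmixing step concrete, here is a small sketch in the spirit of MCR: a hyperspectral data matrix (pixels by wavelengths) is factored into nonnegative per-pixel concentrations and component spectra. It uses scikit-learn's NMF as a stand-in for the constrained alternating-least-squares MCR algorithm actually used with this instrument, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

# synthetic stand-in for a hyperspectral image: 32x32 pixels, 128 spectral channels,
# mixed from 3 nonnegative "pure component" spectra (illustrative data only)
rng = np.random.default_rng(1)
n_pix, n_chan, k = 32 * 32, 128, 3
true_spectra = rng.random((k, n_chan))
true_conc = rng.random((n_pix, k))
D = true_conc @ true_spectra + 0.01 * rng.random((n_pix, n_chan))

# nonnegative factorization D ~= C @ S, used here as a stand-in for MCR
model = NMF(n_components=k, init="nndsvda", max_iter=500)
C = model.fit_transform(D)          # per-pixel "concentration" maps (n_pix x k)
S = model.components_               # estimated component spectra (k x n_chan)
print("relative reconstruction error:",
      np.linalg.norm(D - C @ S) / np.linalg.norm(D))
```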

Supercomputer and cluster performance modeling and analysis efforts: 2004-2006

Ang, James A.; Vaughan, Courtenay T.; Barnette, Daniel W.; Benner, R.E.; Doerfler, Douglas W.; Ganti, Anand G.; Phelps, Sue C.; Rajan, Mahesh R.; Stevenson, Joel O.; Scott, Ryan D.

This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

Verification and validation benchmarks

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest.

Algebraic multilevel preconditioners for nonsymmetric PDEs on stretched grids

Lecture Notes in Computational Science and Engineering

Sala, Marzio; Lin, Paul L.; Shadid, John N.; Tuminaro, Raymond S.

We report on algebraic multilevel preconditioners for the parallel solution of linear systems arising from a Newton procedure applied to the finite-element (FE) discretization of the incompressible Navier-Stokes equations. We focus on the issue of how to coarsen FE operators produced from high aspect ratio elements.

Combinatorial scientific computing: The enabling power of discrete algorithms in computational science

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hendrickson, Bruce A.; Pothen, Alex

Combinatorial algorithms have long played a crucial, albeit under-recognized role in scientific computing. This impact ranges well beyond the familiar applications of graph algorithms in sparse matrices to include mesh generation, optimization, computational biology and chemistry, data analysis and parallelization. Trends in science and in computing suggest strongly that the importance of discrete algorithms in computational science will continue to grow. This paper reviews some of these many past successes and highlights emerging areas of promise and opportunity. © Springer-Verlag Berlin Heidelberg 2007.

Simulating human behavior for national security human interactions

Bernard, Michael L.; Glickman, Matthew R.; Hart, Derek H.; Xavier, Patrick G.; Verzi, Stephen J.; Wolfenbarger, Paul W.

This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the "Simulating Human Behavior for National Security Human Interactions" project was to demonstrate initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant progress was made towards simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humans were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made towards enhancing Sandia National Laboratories' existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra® simulation framework.

Towards a predictive MHD simulation capability for designing hypervelocity magnetically-driven flyer plates and PW-class z-pinch x-ray sources on Z and ZR

Mehlhorn, Thomas A.; Yu, Edmund Y.; Vesey, Roger A.; Cuneo, M.E.; Jones, Brent M.; Knudson, Marcus D.; Sinars, Daniel S.; Robinson, Allen C.; Trucano, Timothy G.; Brunner, Thomas A.; Desjarlais, Michael P.; Garasi, Christopher J.; Haill, Thomas A.; Hanshaw, Heath L.; Lemke, Raymond W.; Oliver, Bryan V.; Peterson, Kyle J.

Abstract not provided.

Practical considerations in empirical probability of detection study design

Materials Evaluation

Spencer, Floyd W.

The purposes of a POD study often go beyond the estimation of a single curve based on discontinuity size. The larger goals of a particular POD study will dictate the need for additional planning beyond just deciding on the number of discontinuities and discontinuity-free areas to be included in a test specimen set. The bigger concerns lead to implementation issues that need to be planned for and fully specified prior to the collection of data. These bigger issues have been discussed under the general program areas of experimental design, protocol development, and logistics and dress rehearsal. Two different programs were also summarized. Each program led to very different experimental plans. However, the common element in these programs was the use of the POD study as the basic metric for establishing capabilities and important influencing factors. Both programs were developed under the guidelines noted and referenced. Results from these studies are discussed in more detail in Spencer (2007).

Design tools for complex dynamic security systems

Byrne, Raymond H.; Wilson, David G.; Groom, Kenneth N.; Robinett, R.D.; Harrington, John J.; Rigdon, James B.; Rohrer, Brandon R.; Laguna, Glenn A.

The development of tools for complex dynamic security systems is not a straightforward engineering task but, rather, a scientific task where discovery of new scientific principles and mathematics is necessary. For years, scientists have observed complex behavior but have had difficulty understanding it. Prominent examples include: insect colony organization, the stock market, molecular interactions, fractals, and emergent behavior. Engineering such systems will be an even greater challenge. This report explores four tools for engineered complex dynamic security systems: Partially Observable Markov Decision Processes, Percolation Theory, Graph Theory, and Exergy/Entropy Theory. Additionally, enabling hardware technologies for next-generation security systems are described: a 100-node wireless sensor network, an unmanned ground vehicle, and an unmanned aerial vehicle.
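
As a small illustration of the graph-theory tool applied to the sensor-network hardware mentioned above, the sketch below treats a notional 100-node wireless sensor network as a random geometric graph and asks how many node failures would be needed to disconnect it. The radius, layout, and use of networkx are assumptions for illustration, not the report's analysis.

```python
import networkx as nx

# notional 100-node wireless sensor network: nodes within radio range are linked
# (radius and random layout are illustrative assumptions)
G = nx.random_geometric_graph(100, radius=0.18, seed=1)

if nx.is_connected(G):
    # minimum number of sensor failures that can split the network
    print("node connectivity:", nx.node_connectivity(G))
else:
    print("network is already fragmented into",
          nx.number_connected_components(G), "components")
```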

Advanced robot locomotion

Byrne, Raymond H.; Neely, Jason C.; Buerger, Stephen B.; Feddema, John T.; Novick, David K.; Rose, Scott E.; Spletzer, Barry L.; Sturgis, Beverly R.; Wilson, David G.

This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivery of special payloads to unique locations that require human-like locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

SLAM using camera and IMU sensors

Rothganger, Fredrick R.

Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
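
For readers unfamiliar with the filter at the core of this approach, here is a minimal Extended Kalman Filter predict/update cycle on a toy problem: a constant-velocity planar state and a nonlinear range/bearing measurement to one known landmark. It illustrates the EKF mechanics only; the actual system fuses IMU propagation with camera feature measurements and a much larger state, and this sketch's models and noise values are assumptions.

```python
import numpy as np

# toy EKF: state [x, y, vx, vy], one range/bearing measurement to a known landmark
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
Q = 1e-3 * np.eye(4)                      # process noise covariance (assumed)
R = np.diag([0.05**2, 0.01**2])           # range, bearing noise (assumed)
landmark = np.array([5.0, 3.0])

def h(x):                                  # nonlinear measurement model
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def H_jac(x):                              # Jacobian of h at the linearization point
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    return np.array([[-dx / r,  -dy / r,  0, 0],
                     [ dy / r2, -dx / r2, 0, 0]])

x_est = np.array([0.0, 0.0, 1.0, 0.5])     # current state estimate
P = np.eye(4)                              # estimate covariance
x_true = np.array([0.12, 0.06, 1.0, 0.5])  # "truth" used only to fake a measurement
z = h(x_true) + np.array([0.02, 0.005])    # simulated noisy range/bearing

# predict
x_pred = F @ x_est
P_pred = F @ P @ F.T + Q
# update
H = H_jac(x_pred)
S = H @ P_pred @ H.T + R
Kg = P_pred @ H.T @ np.linalg.inv(S)
x_est = x_pred + Kg @ (z - h(x_pred))
P = (np.eye(4) - Kg @ H) @ P_pred
print("updated state estimate:", x_est)
```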

An evaluation of Open MPI's matching transport layer on the Cray XT

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Graham, Richard L.; Brightwell, Ronald B.; Barrett, Brian; Bosilca, George; Pješivac-Grbović, Jelena

Open MPI was initially designed to support a wide variety of high-performance networks and network programming interfaces. Recently, Open MPI was enhanced to support networks that have full support for MPI matching semantics. Previous Open MPI efforts focused on networks that require the MPI library to manage message matching, which is sub-optimal for some networks that inherently support matching. We describe a new matching transport layer in Open MPI, present results of micro-benchmarks and several applications on the Cray XT platform, and compare performance of the new and the existing transport layers, as well as the vendor-supplied implementation of MPI. © Springer-Verlag Berlin Heidelberg 2007.
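
The "matching semantics" at issue are MPI's rule that a posted receive is satisfied by (communicator, source, tag) rather than by arrival order, which some network interfaces can perform in hardware. A tiny mpi4py illustration of the semantics themselves (run under mpirun with two ranks; this is generic MPI usage, not Open MPI internals):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    # send two messages with different tags
    comm.send("first sent", dest=1, tag=10)
    comm.send("second sent", dest=1, tag=20)
elif comm.Get_rank() == 1:
    # matching: this receive is satisfied by the tag-20 message even though
    # the tag-10 message was sent (and likely arrived) first
    later = comm.recv(source=0, tag=20)
    earlier = comm.recv(source=0, tag=10)
    print("matched by tag, not arrival order:", later, "/", earlier)
```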

Investigations on InfiniBand: Efficient network buffer utilization at scale

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Shipman, Galen M.; Brightwell, Ronald B.; Barrett, Brian; Squyres, Jeffrey M.; Bloch, Gil

The default messaging model for the OpenFabrics "Verbs" API is to consume receive buffers in order - regardless of the actual incoming message size - leading to inefficient registered memory usage. For example, many small messages can consume large amounts of registered memory. This paper introduces a new transport protocol in Open MPI implemented using the existing OpenFabrics Verbs API that exhibits efficient registered memory utilization. Several real-world applications were run at scale with the new protocol; results show that global network resource utilization efficiency increases, allowing increased scalability - and larger problem sizes - on clusters which can increase application performance in some cases. © Springer-Verlag Berlin Heidelberg 2007.

Stabilization of low-order mixed finite elements for the Stokes equations

SIAM Journal on Numerical Analysis

Bochev, Pavel B.; Dohrmann, Clark R.; Gunzburger, Max D.

We present a new family of stabilized methods for the Stokes problem. The focus of the paper is on the lowest order velocity-pressure pairs. While not LBB compliant, their simplicity and attractive computational properties make these pairs a popular choice in engineering practice. Our stabilization approach is motivated by terms that characterize the LBB "deficiency" of the unstable spaces. The stabilized methods are defined by using these terms to modify the saddle-point Lagrangian associated with the Stokes equations. The new stabilized methods offer a number of attractive computational properties. In contrast to other stabilization procedures, they are parameter free, do not require calculation of higher order derivatives or edge-based data structures, and always lead to symmetric linear systems. Furthermore, the new methods are unconditionally stable, achieve optimal accuracy with respect to solution regularity, and have simple and straightforward implementations. We present numerical results in two and three dimensions that showcase the excellent stability and accuracy of the new methods. © 2006 Society for Industrial and Applied Mathematics.
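
A schematic of what such a Lagrangian modification can look like, written for the lowest-order equal-order pair and offered as an illustration of the projection-type idea rather than a quotation of the paper's exact formulation: the pressure is penalized only on the part not captured by a local projection Π onto piecewise constants, which yields a parameter-free, symmetric system.

```latex
% illustrative projection-stabilized Stokes Lagrangian (schematic form)
\mathcal{L}_h(\mathbf{v}_h, q_h)
  = \tfrac{1}{2}\, a(\mathbf{v}_h, \mathbf{v}_h)
  - (\mathbf{f}, \mathbf{v}_h)
  + b(\mathbf{v}_h, q_h)
  - \tfrac{1}{2}\, \| q_h - \Pi q_h \|_0^2 ,
\qquad
b(\mathbf{v}, q) = -\int_\Omega q\, \nabla\!\cdot\!\mathbf{v}\, d\Omega .
```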

Gaussian processes in response surface modeling

Conference Proceedings of the Society for Experimental Mechanics Series

Swiler, Laura P.

Gaussian processes are used as emulators for expensive computer simulations. Recently, Gaussian processes have also been used to model the "error field" or "code discrepancy" between a computer simulation code and experimental data, and the delta term between two levels of computer simulation (multi-fidelity codes). This work presents the use of Gaussian process models to approximate error or delta fields, and examines how one calculates the parameters governing the process. In multi-fidelity modeling, the delta term is used to correct a lower fidelity model to match or approximate a higher fidelity model. The terms governing the Gaussian process (e.g., the parameters of the covariance matrix) are updated using a Bayesian approach. We have found that use of Gaussian process models requires a good understanding of the method itself and an understanding of the problem in enough detail to identify reasonable covariance parameters. The methods are not "black-box" methods that can be used without some statistical understanding. However, Gaussian processes offer the ability to account for uncertainties in prediction. This approach can help reduce the number of high-fidelity function evaluations necessary in multi-fidelity optimization.
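
A small sketch of the delta-term idea described above: fit a Gaussian process to the discrepancy between a cheap low-fidelity model and a handful of high-fidelity evaluations, then use the GP to correct the low-fidelity prediction. The functions, kernel choice, and scikit-learn implementation are illustrative assumptions, not the codes or the Bayesian updating scheme from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# toy multi-fidelity correction (illustrative functions only)
def lofi(x):  return np.sin(2 * np.pi * x)
def hifi(x):  return np.sin(2 * np.pi * x) + 0.3 * x**2 + 0.1

x_train = np.linspace(0, 1, 8).reshape(-1, 1)              # few high-fidelity runs
delta_train = (hifi(x_train) - lofi(x_train)).ravel()      # observed "delta" field

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)       # covariance hyperparameters
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-8, normalize_y=True)
gp.fit(x_train, delta_train)                               # fits kernel parameters

x_new = np.linspace(0, 1, 101).reshape(-1, 1)
delta_mean, delta_std = gp.predict(x_new, return_std=True)
corrected = lofi(x_new).ravel() + delta_mean               # corrected low-fidelity model
print("max predictive std of the delta surrogate:", delta_std.max())
```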

Model-based statistical estimation of Sandia RF ohmic switch dynamic operation from stroboscopic x-ray imaging

Diegert, Carl F.

We define a new diagnostic method where computationally intensive numerical solutions are used as an integral part of making difficult, non-contact, nanometer-scale measurements. The limited scope of this report comprises most of a due diligence investigation into implementing the new diagnostic for measuring dynamic operation of Sandia's RF Ohmic Switch. Our results are all positive, providing insight into how this switch deforms during normal operation. Future work should contribute important measurements on a variety of operating MEMS devices, with insights that are complementary to those from measurements made using interferometry and laser Doppler methods. More generally, the work opens up a broad front of possibilities where exploiting massive high-performance computers enables new measurements.

Modeling the coupled mechanics, transport, and growth processes in collagen tissues

Holdych, David J.; Stevens, Mark J.; In 't Veld, Pieter J.

The purpose of this project is to develop tools to model and simulate the processes of self-assembly and growth in biological systems from the molecular to the continuum length scales. The model biological system chosen for the study is the tendon fiber which is composed mainly of Type I collagen fibrils. The macroscopic processes of self-assembly and growth at the fiber scale arise from microscopic processes at the fibrillar and molecular length scales. At these nano-scopic length scales, we employed molecular modeling and simulation method to characterize the mechanical behavior and stability of the collagen triple helix and the collagen fibril. To obtain the physical parameters governing mass transport in the tendon fiber we performed direct numerical simulations of fluid flow and solute transport through an idealized fibrillar microstructure. At the continuum scale, we developed a mixture theory approach for modeling the coupled processes of mechanical deformation, transport, and species inter-conversion involved in growth. In the mixture theory approach, the microstructure of the tissue is represented by the species concentration and transport and material parameters, obtained from fibril and molecular scale calculations, while the mechanical deformation, transport, and growth processes are governed by balance laws and constitutive relations developed within a thermodynamically consistent framework.
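
The balance laws referred to take, in generic mixture-theory form, a mass balance per constituent with a source term for species interconversion; a schematic statement of the standard form (assumed here for orientation, not quoted from the report) is:

```latex
% mass balance for constituent a of the mixture, with apparent density rho_a,
% velocity v_a, and mass supply Gamma_a from species interconversion/growth
\frac{\partial \rho_a}{\partial t} + \nabla \cdot (\rho_a \mathbf{v}_a) = \Gamma_a ,
\qquad \sum_a \Gamma_a = 0 \quad \text{(interconversion creates no net mass)} .
```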

FPGAs in High Performance Computing: Results from Two LDRD Projects

Underwood, Keith; Ulmer, Craig D.; Thompson, David C.; Hemmert, Karl S.

Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

Development of Self-Remediating Packaging for Safe and Secure Transport of Infectious Substances

Guilinger, Terry R.; Gaudioso, Jennifer M.; Aceto, Donato G.; Lowe, Kathleen M.; Tucker, Mark D.; Salerno, Reynolds M.

As George W. Bush recognized in November 2001, "Infectious diseases make no distinctions among people and recognize no borders." By their very nature, infectious diseases of natural or intentional (bioterrorist) origins are capable of threatening regional health systems and economies. The best mechanism for minimizing the spread and impact of infectious disease is rapid disease detection and diagnosis. For rapid diagnosis to occur, infectious substances (IS) must be transported very quickly to appropriate laboratories, sometimes located across the world. Shipment of IS is problematic since many carriers, concerned about leaking packages, refuse to ship this material. The current packaging does not have any ability to neutralize or kill leaking IS. The technology described here was developed by Sandia National Laboratories to provide a fail-safe packaging system for shipment of IS that will increase the likelihood that critical material can be shipped to appropriate laboratories following a bioterrorism event or the outbreak of an infectious disease. This safe and secure packaging method contains a novel decontaminating material that will kill or neutralize any leaking infectious organisms; this feature will decrease the risk associated with shipping IS, making transport more efficient.

Substructured multibody molecular dynamics

Crozier, Paul C.; Grest, Gary S.; Ismail, Ahmed I.; Lehoucq, Richard B.; Plimpton, Steven J.; Stevens, Mark J.

We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.

Analysis of real-time reservoir monitoring: reservoirs, strategies, & modeling

Cooper, Scott P.; Elbring, Gregory J.; Jakaboski, Blake E.; Lorenz, John C.; Mani, Seethambal S.; Normann, Randy A.; Rightley, Michael J.; van Bloemen Waanders, Bart G.; Weiss, Chester J.

The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective we investigated the use of permanent downhole sensor systems (Smart Wells) whose data is fed real-time into computational reservoir models that are integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value of information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new generation sensor packaging to survive the borehole environment for long periods of time. The Value of Information (VOI) decision tree method was developed and used to assess the economic advantage of using the proposed technology; the VOI demonstrated the increased subsurface resolution through additional sensor data. Our findings show that the VOI studies are a practical means of ascertaining the value associated with a technology, in this case the application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to the predictions. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate history matching is improved as more information is added to the objective function, clearly indicating that sensor information can help in reducing the uncertainty associated with reservoir characterization. Additional findings and approaches used are described in detail within the report. The next-generation sensors aspect of the project evaluated sensors and packaging survivability issues. Our findings indicate that packaging represents the most significant technical challenge associated with application of sensors in the downhole environment for long periods (5+ years) of time. These issues are described in detail within the report. The impact of successful reservoir monitoring programs and coincident improved reservoir management is measured by the production of additional oil and gas volumes from existing reservoirs, revitalization of nearly depleted reservoirs, possible re-establishment of already abandoned reservoirs, and improved economics for all cases. Smart Well monitoring provides the means to understand how a reservoir process is developing and to provide active reservoir management. At the same time it also provides data for developing high-fidelity simulation models. This work has been a joint effort with Sandia National Laboratories and UT-Austin's Bureau of Economic Geology, Department of Petroleum and Geosystems Engineering, and the Institute of Computational and Engineering Mathematics.

Beyond the local density approximation: improving density functional theory for high energy density physics applications

Modine, N.A.; Wright, Alan F.; Muller, Richard P.; Sears, Mark P.; Wills, Ann E.; Desjarlais, Michael P.

A finite temperature version of 'exact-exchange' density functional theory (EXX) has been implemented in Sandia's Socorro code. The method uses the optimized effective potential (OEP) formalism and an efficient gradient-based iterative minimization of the energy. The derivation of the gradient is based on the density matrix, simplifying the extension to finite temperatures. A stand-alone all-electron exact-exchange capability has been developed for testing exact exchange and compatible correlation functionals on small systems. Calculations of eigenvalues for the helium atom, beryllium atom, and the hydrogen molecule are reported, showing excellent agreement with highly converged quantum Monte Carlo calculations. Several approaches to the generation of pseudopotentials for use in EXX calculations have been examined and are discussed. The difficult problem of finding a correlation functional compatible with EXX has been studied and some initial findings are reported.

Penetrator reliability investigation and design exploration: from conventional design processes to innovative uncertainty-capturing algorithms

Swiler, Laura P.; Hough, Patricia D.; Gray, Genetha A.; Chiesa, Michael L.; Heaphy, Robert T.; Thomas, Stephen W.; Trucano, Timothy G.

This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

Brown, Shannon L.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Adams, Brian M.; Dunlavy, Daniel D.; Gay, David M.; Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual

Brown, Shannon L.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Adams, Brian M.; Dunlavy, Daniel D.; Gay, David M.; Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

QCS: a system for querying, clustering and summarizing documents

Dunlavy, Daniel D.

Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming', and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
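
The retrieval-cluster-summarize pipeline can be sketched compactly with off-the-shelf components; the snippet below substitutes TruncatedSVD for Latent Semantic Indexing, k-means on unit-normalized vectors for generalized spherical k-means, and a "retrieved document closest to the query in each cluster" pick in place of the HMM and pivoted-QR sentence extraction. Everything here (corpus, parameters, scikit-learn) is an illustrative stand-in for the actual QCS modules.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = [
    "sparse matrix preconditioners for Navier-Stokes solvers",
    "Krylov methods and multigrid for incompressible flow",
    "hyperspectral fluorescence imaging of live cells",
    "confocal microscopy resolves overlapping fluorophores",
]
query = "iterative solvers for incompressible flow"

# Q: embed documents and query in a latent semantic (LSI-like) space
tfidf = TfidfVectorizer().fit(docs + [query])
lsi = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(docs))
doc_vecs = normalize(lsi.transform(tfidf.transform(docs)))      # unit vectors
q_vec = normalize(lsi.transform(tfidf.transform([query])))
scores = (doc_vecs @ q_vec.T).ravel()                           # cosine similarity
retrieved = np.argsort(scores)[::-1]                            # ranked doc indices

# C: cluster the retrieved documents (k-means on unit vectors ~ spherical k-means)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_vecs[retrieved])

# S: naive per-cluster "summary" stand-in: the highest-scoring document per cluster
for c in range(2):
    members = retrieved[labels == c]
    if len(members):
        print(f"cluster {c}:", docs[members[0]])
```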

On the design of reversible QDCA systems

Murphy, Sarah M.; DeBenedictis, Erik

This work is the first to describe how to go about designing a reversible QDCA system. The design space is substantial, and there are many questions that a designer needs to answer before beginning to design. This document begins to explicate the tradeoffs and assumptions that need to be made and offers a range of approaches as starting points and examples. This design guide is an effective tool for aiding designers in creating the best quality QDCA implementation for a system.

Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity

Adams, Brian M.; Wittwer, Jonathan W.; Bichon, Barron J.; Carnes, Brian C.; Copps, Kevin D.; Eldred, Michael S.; Hopkins, Matthew M.; Neckels, David C.; Notz, Patrick N.; Subia, Samuel R.

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual

Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Eldred, Michael S.; Brown, Shannon L.; Adams, Brian M.; Dunlavy, Daniel D.; Gay, David M.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
