Acceleration of direct-interface CAD-based Monte Carlo radiation transport
American Nuclear Society's 14th Biennial Topical Meeting of the Radiation Protection and Shielding Division
Abstract not provided.
Proceedings - IEEE International Conference on Cluster Computing, ICCC
The RandomAccess benchmark as defined by the High Performance Computing Challenge (HPCC) tests the speed at which a machine can update the elements of a table spread across global system memory, as measured in billions (giga) of updates per second (GUPS). The parallel implementation provided by HPCC typically performs poorly on distributed-memory machines, because the updates require numerous small point-to-point messages between processors. We present an alternative algorithm which treats the collection of P processors as a hypercube, aggregating data so that larger messages are sent, and routing individual data items through successive dimensions of the hypercube to their destination processors. The algorithm's computation (the GUP count) scales linearly with P while its communication overhead scales as log2(P), thus enabling better performance on large numbers of processors. The new algorithm achieves a GUPS rate of 19.98 on 8192 processors of Sandia's Red Storm machine, compared to 1.02 for the HPCC-provided algorithm on 10350 processors. We also illustrate how GUPS performance varies with the benchmark's specification of its "look-ahead" parameter. As expected, parallel performance degrades for small look-ahead values and improves dramatically for large values. © 2006 IEEE.
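To make the routing scheme above concrete, the following is a minimal Python sketch (not the HPCC or Red Storm MPI implementation) that simulates dimension-by-dimension hypercube routing of table updates. The function and variable names are illustrative, and P is assumed to be a power of two.

```python
# Hedged sketch (illustrative only): simulates the dimension-by-dimension
# hypercube routing described above. Each "processor" is a list of
# (destination_rank, update) pairs; after one pass over the log2(P)
# dimensions every update reaches its destination, and each processor
# ships only one aggregated batch per dimension.
import random

def hypercube_route(buffers):
    """buffers[p] holds (dest, value) pairs initially resident on processor p."""
    P = len(buffers)
    num_dims = P.bit_length() - 1          # assumes P is a power of two
    for d in range(num_dims):
        bit = 1 << d
        outgoing = [[] for _ in range(P)]
        for p in range(P):
            keep = []
            for dest, val in buffers[p]:
                # If the destination differs from p in bit d, ship the item
                # across that dimension as part of one aggregated message.
                if (dest ^ p) & bit:
                    outgoing[p ^ bit].append((dest, val))
                else:
                    keep.append((dest, val))
            buffers[p] = keep
        for p in range(P):
            buffers[p].extend(outgoing[p])
    return buffers

# Tiny demonstration with P = 8 processors and random update destinations.
P = 8
bufs = [[(random.randrange(P), i) for i in range(4)] for _ in range(P)]
routed = hypercube_route(bufs)
assert all(dest == p for p, buf in enumerate(routed) for dest, _ in buf)
```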
Proceedings of the 2006 IEEE International Symposium on Workload Characterization, IISWC - 2006
Abstract not provided.
We define a new diagnostic method in which computationally intensive numerical solutions are used as an integral part of making difficult, non-contact, nanometer-scale measurements. This limited-scope report comprises most of a due-diligence investigation into implementing the new diagnostic for measuring the dynamic operation of Sandia's RF Ohmic Switch. Our results are all positive, providing insight into how this switch deforms during normal operation. Future work should contribute important measurements on a variety of operating MEMS devices, with insights that are complementary to those obtained from interferometry and laser Doppler methods. More generally, the work opens up a broad front of possibilities in which exploiting massive high-performance computers enables new measurements.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in the SIAM Journal on Scientific Computing.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The purpose of this project is to develop tools to model and simulate the processes of self-assembly and growth in biological systems from the molecular to the continuum length scales. The model biological system chosen for the study is the tendon fiber, which is composed mainly of Type I collagen fibrils. The macroscopic processes of self-assembly and growth at the fiber scale arise from microscopic processes at the fibrillar and molecular length scales. At these nanoscopic length scales, we employed molecular modeling and simulation methods to characterize the mechanical behavior and stability of the collagen triple helix and the collagen fibril. To obtain the physical parameters governing mass transport in the tendon fiber, we performed direct numerical simulations of fluid flow and solute transport through an idealized fibrillar microstructure. At the continuum scale, we developed a mixture theory approach for modeling the coupled processes of mechanical deformation, transport, and species inter-conversion involved in growth. In the mixture theory approach, the microstructure of the tissue is represented by the species concentrations and by transport and material parameters obtained from fibril- and molecular-scale calculations, while the mechanical deformation, transport, and growth processes are governed by balance laws and constitutive relations developed within a thermodynamically consistent framework.
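For readers unfamiliar with the mixture-theory balance laws referred to above, a hedged sketch of the generic species mass balance (a standard textbook form, not the report's specific constitutive model) is

\[ \frac{\partial \rho_\alpha}{\partial t} + \nabla\cdot\left(\rho_\alpha \mathbf{v}_\alpha\right) = \hat{c}_\alpha, \qquad \sum_\alpha \hat{c}_\alpha = 0 \;\;\text{(closed mixture)}, \]

where \(\rho_\alpha\) is the apparent density of species \(\alpha\) (for example, solid collagen, interstitial fluid, or a dissolved precursor), \(\mathbf{v}_\alpha\) is its velocity, and \(\hat{c}_\alpha\) is the mass-exchange rate that models species inter-conversion during growth; the constraint on \(\sum_\alpha \hat{c}_\alpha\) is relaxed when the tissue exchanges mass with its surroundings.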
Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
As George W. Bush recognized in November 2001, "Infectious diseases make no distinctions among people and recognize no borders." By their very nature, infectious diseases of natural or intentional (bioterrorist) origins are capable of threatening regional health systems and economies. The best mechanism for minimizing the spread and impact of infectious disease is rapid disease detection and diagnosis. For rapid diagnosis to occur, infectious substances (IS) must be transported very quickly to appropriate laboratories, sometimes located across the world. Shipment of IS is problematic since many carriers, concerned about leaking packages, refuse to ship this material. The current packaging does not have any ability to neutralize or kill leaking IS. The technology described here was developed by Sandia National Laboratories to provide a fail-safe packaging system for shipment of IS that will increase the likelihood that critical material can be shipped to appropriate laboratories following a bioterrorism event or the outbreak of an infectious disease. This safe and secure packaging method contains a novel decontaminating material that will kill or neutralize any leaking infectious organisms; this feature will decrease the risk associated with shipping IS, making transport more efficient.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.
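As a concrete example of the MD "recipes" mentioned above, the following is a minimal Python sketch of the velocity-Verlet time integration that underlies most MD codes, LAMMPS included. It is illustrative only (not LAMMPS source); the harmonic force law, time step, and function names are assumptions chosen for brevity.

```python
# Minimal velocity-Verlet sketch of the core MD update (illustrative only;
# not LAMMPS source code). Positions x, velocities v, and forces f are
# NumPy arrays of shape (N, 3); `compute_forces` stands in for the real
# interatomic force evaluation (pair, bond, angle, etc.).
import numpy as np

def velocity_verlet_step(x, v, f, mass, dt, compute_forces):
    v = v + 0.5 * dt * f / mass          # first half-step velocity update
    x = x + dt * v                       # full-step position update
    f = compute_forces(x)                # forces at the updated positions
    v = v + 0.5 * dt * f / mass          # second half-step velocity update
    return x, v, f

# Example: N independent particles in a harmonic trap with k = 1.
N, dt = 10, 0.005
rng = np.random.default_rng(0)
x = rng.normal(size=(N, 3))
v = np.zeros((N, 3))
mass = 1.0
harmonic = lambda pos: -pos              # F = -k x with k = 1 (toy force law)
f = harmonic(x)
for _ in range(1000):
    x, v, f = velocity_verlet_step(x, v, f, mass, dt, harmonic)
```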
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective we investigated the use of permanent downhole sensor systems (Smart Wells) whose data are fed in real time into computational reservoir models that are integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value of information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new-generation sensor packaging to survive the borehole environment for long periods of time. The Value of Information (VOI) decision tree method was developed and used to assess the economic advantage of using the proposed technology; the VOI analysis demonstrated the value of the increased subsurface resolution afforded by additional sensor data. Our findings show that VOI studies are a practical means of ascertaining the value associated with a technology, in this case the application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to the predictions. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate that history matching is improved as more information is added to the objective function, clearly indicating that sensor information can help in reducing the uncertainty associated with reservoir characterization. Additional findings and approaches used are described in detail within the report. The next-generation sensors aspect of the project evaluated sensor and packaging survivability issues. Our findings indicate that packaging represents the most significant technical challenge associated with application of sensors in the downhole environment for long periods (5+ years) of time. These issues are described in detail within the report. The impact of successful reservoir monitoring programs and coincident improved reservoir management is measured by the production of additional oil and gas volumes from existing reservoirs, revitalization of nearly depleted reservoirs, possible re-establishment of already abandoned reservoirs, and improved economics for all cases. Smart Well monitoring provides the means to understand how a reservoir process is developing and to provide active reservoir management. At the same time it also provides data for developing high-fidelity simulation models. This work has been a joint effort between Sandia National Laboratories and UT-Austin's Bureau of Economic Geology, Department of Petroleum and Geosystems Engineering, and Institute of Computational and Engineering Mathematics.
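The value-of-information bookkeeping described above can be illustrated with a toy decision tree. The Python sketch below computes the expected value of a development decision with and without (here, perfect) sensor information; every probability and payoff is invented for illustration and is not the report's field data.

```python
# Toy value-of-information (VOI) calculation in the spirit of the decision-tree
# method described above. All probabilities and dollar values are invented.

# Prior belief about the reservoir state and the payoff of developing under
# each state (net present value, arbitrary units).
prior = {"good": 0.6, "poor": 0.4}
payoff_develop = {"good": 100.0, "poor": -40.0}
payoff_walk_away = 0.0

def best_expected_value(belief):
    """Expected value of the better of the two actions under a given belief."""
    ev_develop = sum(belief[s] * payoff_develop[s] for s in belief)
    return max(ev_develop, payoff_walk_away)

# Without sensor information: decide once, using only the prior belief.
ev_without = best_expected_value(prior)

# With (perfect) sensor information: the state is revealed before deciding,
# so the best action is chosen separately in each branch of the tree.
ev_with = sum(prior[s] * max(payoff_develop[s], payoff_walk_away) for s in prior)

voi = ev_with - ev_without   # the most one should pay for the sensor program
print(f"EV without info: {ev_without:.1f}, with info: {ev_with:.1f}, VOI: {voi:.1f}")
```

In practice the downhole measurements are imperfect, so the "with information" branch would weight posterior beliefs obtained from the reservoir model rather than perfectly revealed states; the bookkeeping is otherwise the same.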
A finite temperature version of 'exact-exchange' density functional theory (EXX) has been implemented in Sandia's Socorro code. The method uses the optimized effective potential (OEP) formalism and an efficient gradient-based iterative minimization of the energy. The derivation of the gradient is based on the density matrix, simplifying the extension to finite temperatures. A stand-alone all-electron exact-exchange capability has been developed for testing exact exchange and compatible correlation functionals on small systems. Calculations of eigenvalues for the helium atom, beryllium atom, and the hydrogen molecule are reported, showing excellent agreement with highly converged quantum Monte Carlo calculations. Several approaches to the generation of pseudopotentials for use in EXX calculations have been examined and are discussed. The difficult problem of finding a correlation functional compatible with EXX has been studied and some initial findings are reported.
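For orientation, a schematic form of the exchange energy that EXX/OEP works with, written in Hartree atomic units with the sums over spin orbitals and finite-temperature (Fermi-Dirac) occupations \(f_i\) (a textbook expression, not Socorro's implementation), is

\[ E_x = -\tfrac{1}{2} \sum_{i,j} f_i f_j \iint \frac{\phi_i^*(\mathbf{r})\,\phi_j(\mathbf{r})\,\phi_j^*(\mathbf{r}')\,\phi_i(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}'. \]

The OEP step then seeks the local multiplicative potential \(v_x(\mathbf{r})\) whose orbitals minimize the total (free) energy containing this term, which is the minimization carried out here via the density-matrix-based gradient.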
This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.
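As a sketch of the kind of Gaussian-process surrogate the OUU-LGP and OUU-GGP algorithms build from small sample sets, the following pure-NumPy example fits a GP with a squared-exponential kernel to a few "local" samples and predicts the response, with an uncertainty estimate, at new design points. The kernel choice, hyperparameters, and names are assumptions for illustration, not the project's implementation.

```python
# Minimal Gaussian-process surrogate sketch (illustrative; not the project's
# OUU-LGP/OUU-GGP code). A squared-exponential kernel is conditioned on a
# handful of local samples to predict the response at new design points.
import numpy as np

def sq_exp_kernel(A, B, length=0.3, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length**2)

def gp_fit_predict(X_train, y_train, X_test, noise=1e-8):
    K = sq_exp_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = sq_exp_kernel(X_test, X_train)
    Kss = sq_exp_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                           # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v ** 2).sum(axis=0)   # posterior variance
    return mean, np.sqrt(np.maximum(var, 0.0))

# Surrogate of a 1-D test response built from five "local" samples.
X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
Xq = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
mu, sigma = gp_fit_predict(X, y, Xq)
```

The predictive standard deviation is what makes such surrogates useful inside OUU loops: it indicates where additional simulation samples would most reduce surrogate error before the optimizer commits to a step.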
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
This work is the first to describe how to go about designing a reversible QDCA system. The design space is substantial, and there are many questions that a designer needs to answer before beginning to design. This document begins to explicate the tradeoffs and assumptions that need to be made and offers a range of approaches as starting points and examples. This design guide is an effective tool for aiding designers in creating the best quality QDCA implementation for a system.
Abstract not provided.
Abstract not provided.
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Abstract not provided.
Abstract not provided.