Publications


rMPI: increasing fault resiliency in a message-passing environment

Ferreira, Kurt; Oldfield, Ron A.; Stearley, Jon S.; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

As High-End Computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are unsuitable at these scales due to excessive overheads predicted to more than double an application's time to solution. Redundant computation, long used in distributed and mission-critical systems, has been suggested as an alternative to checkpoint-restart alone. In this paper we describe the rMPI library, which enables portable and transparent redundant computation for MPI applications. We detail the design of the library as well as two replica-consistency protocols, outline the overheads of this library at scale on a number of real-world applications, and finally show the significant increase in an application's time to solution at extreme scale as well as the scenarios in which redundant computation makes sense.
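
The following is a minimal sketch of how transparent redundancy can be organized at the rank level, assuming a simple block mapping of active ranks to replica ranks; rMPI's actual mapping and replica-consistency protocols are described in the paper and are not reproduced here.

    # Hypothetical rank-to-replica mapping for k-way redundancy: with degree-way
    # redundancy on `world_size` ranks, rank r and its partners r + n_active,
    # r + 2*n_active, ... perform identical work, and a redundancy library would
    # mirror every message to all replicas of the destination rank.

    def replica_partners(world_size, degree=2):
        """Return {active_rank: [replica ranks]} for degree-way redundancy."""
        n_active = world_size // degree
        return {r: [r + k * n_active for k in range(1, degree)]
                for r in range(n_active)}

    print(replica_partners(world_size=8))             # {0: [4], 1: [5], 2: [6], 3: [7]}
    print(replica_partners(world_size=9, degree=3))   # {0: [3, 6], 1: [4, 7], 2: [5, 8]}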


Factors impacting performance of multithreaded sparse triangular solve

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Wolf, Michael M.; Heroux, Michael A.; Boman, Erik G.

As computational science applications grow more parallel with multi-core supercomputers having hundreds of thousands of computational cores, it will become increasingly difficult for solvers to scale. Our approach is to use hybrid MPI/threaded numerical algorithms to solve these systems in order to reduce the number of MPI tasks and increase the parallel efficiency of the algorithm. However, we need efficient threaded numerical kernels to run on the multi-core nodes in order to achieve good parallel efficiency. In this paper, we focus on improving the performance of a multithreaded triangular solver, an important kernel for preconditioning. We analyze three factors that affect the parallel performance of this threaded kernel and obtain good scalability on the multi-core nodes for a range of matrix sizes. © 2011 Springer-Verlag Berlin Heidelberg.
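
A standard way to expose the row-level parallelism such a kernel relies on is level scheduling; the Python/NumPy sketch below illustrates the idea for a lower-triangular solve (the matrix format, scheduling strategy, and the serial stand-in for threading are illustrative assumptions, not details taken from the paper).

    import numpy as np
    from scipy.sparse import csr_matrix

    def level_schedule(L):
        """Group rows of a sparse lower-triangular matrix into levels: every row
        in a level depends only on rows in earlier levels, so the rows of one
        level can be solved concurrently by different threads."""
        n = L.shape[0]
        level = np.zeros(n, dtype=int)
        for i in range(n):
            start, end = L.indptr[i], L.indptr[i + 1]
            deps = [level[j] + 1 for j in L.indices[start:end] if j < i]
            level[i] = max(deps, default=0)
        return [np.where(level == k)[0] for k in range(level.max() + 1)]

    def lower_solve_by_levels(L, b):
        """Solve L x = b one level at a time (serial here; the inner loop over
        the rows of a level is the part a threaded implementation parallelizes)."""
        x = np.zeros_like(b, dtype=float)
        for rows in level_schedule(L):
            for i in rows:
                start, end = L.indptr[i], L.indptr[i + 1]
                s = sum(L.data[k] * x[L.indices[k]]
                        for k in range(start, end) if L.indices[k] < i)
                x[i] = (b[i] - s) / L[i, i]
        return x

    L = csr_matrix(np.array([[2.0, 0.0, 0.0],
                             [1.0, 3.0, 0.0],
                             [0.0, 1.0, 4.0]]))
    print(lower_solve_by_levels(L, np.array([2.0, 5.0, 9.0])))   # [1.0, 1.333..., 1.9166...]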


Assessing the Near-Term Risk of Climate Uncertainty: Interdependencies among the U.S. States

Backus, George A.; Trucano, Timothy G.; Robinson, David G.; Adams, Brian M.; Richards, Elizabeth H.; Siirola, John D.; Boslough, Mark B.; Taylor, Mark A.; Conrad, Stephen H.; Kelic, Andjelka; Roach, Jesse D.; Warren, Drake E.; Ballantine, Marissa D.; Stubblefield, W.A.; Snyder, Lillian A.; Finley, Ray E.; Horschel, Daniel S.; Ehlen, Mark E.; Klise, Geoffrey T.; Malczynski, Leonard A.; Stamber, Kevin L.; Tidwell, Vincent C.; Vargas, Vanessa N.; Zagonel, Aldo A.

Abstract not provided.

A theory manual for multi-physics code coupling in LIME

Bartlett, Roscoe B.; Belcourt, Kenneth N.; Hooper, Russell H.; Schmidt, Rodney C.

The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is situations in which computer codes already exist to solve different parts of a multi-physics problem and now need to be coupled with one another. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
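
As a toy illustration of the simplest steady-state coupling strategy assessed in such a context, the sketch below couples two single-unknown stand-in "codes" with a fixed-point (Picard) iteration; it is not LIME's interface, and the model equations are invented for exposition.

    # Two stand-in "physics codes", each solving for one unknown given the
    # other's latest output (the equations are invented for illustration):
    #   code A:  x = 0.5 * y + 1.0
    #   code B:  y = 0.25 * x + 2.0

    def solve_code_A(y):
        return 0.5 * y + 1.0

    def solve_code_B(x):
        return 0.25 * x + 2.0

    def picard_coupling(x0=0.0, y0=0.0, tol=1.0e-10, max_iter=100):
        """Fixed-point (Picard) coupling: call each code in turn, passing the
        other's latest result, until the transferred data stops changing."""
        x, y = x0, y0
        for it in range(1, max_iter + 1):
            x_new = solve_code_A(y)
            y_new = solve_code_B(x_new)
            if abs(x_new - x) < tol and abs(y_new - y) < tol:
                return x_new, y_new, it
            x, y = x_new, y_new
        raise RuntimeError("Picard coupling did not converge")

    x, y, iters = picard_coupling()
    print(f"converged to x = {x:.6f}, y = {y:.6f} in {iters} iterations")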


Physics of intense, high energy radiation effects

Hjalmarson, Harold P.; Magyar, Rudolph J.; Crozier, Paul C.; Hartman, Elmer F.

This document summarizes the work done in our three-year LDRD project titled 'Physics of Intense, High Energy Radiation Effects.' This LDRD is focused on electrical effects of ionizing radiation at high dose rates. One major thrust throughout the project has been the radiation-induced conductivity (RIC) produced by the ionizing radiation. Another important consideration has been the electrical effect of dose-enhanced radiation. This transient effect can produce an electromagnetic pulse (EMP). The unifying theme of the project has been the dielectric function. This quantity contains much of the physics covered in this project. For example, the work on transient electrical effects in radiation-induced conductivity (RIC) has been a key focus for the work on the EMP effects. This physics is contained in the dielectric function, which can also be expressed as a conductivity. The transient defects created during a radiation event are also contained in it, in principle. The energy loss that leads to the hot electrons and holes is given by the stopping power of the ionizing radiation, which in turn is given by the inverse dielectric function. Finally, the short-time atomistic phenomena caused by ionizing radiation can also be considered to be contained within the dielectric function. During the LDRD, meetings about the work were held every week. These discussions involved theorists, experimentalists, and engineers, and they branched out into the work done in other projects. For example, the work on EMP effects had influence on another project focused on such phenomena in gases. Furthermore, the physics of radiation detectors and radiation dosimeters was often discussed, and these discussions had impact on related projects. Some LDRD-related documents are now stored on a SharePoint site (https://sharepoint.sandia.gov/sites/LDRD-REMS/default.aspx). In the remainder of this document the work is described in categories, but there is much overlap between the atomistic calculations, the continuum calculations, and the experiments.


Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC): FY10 development and integration

Freeze, Geoffrey A.; Arguello, Jose G.; Bouchard, Julie F.; Criscenti, Louise C.; Dewers, Thomas D.; Edwards, Harold C.; Sassani, David C.; Schultz, Peter A.; Wang, Yifeng

This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.


Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016

Schoenwald, David A.; Richardson, Bryan T.; Riehm, Andrew C.; Wolfenbarger, Paul W.; Adams, Brian M.; Reno, Matthew J.; Hansen, Clifford H.; Oldfield, Ron A.; Stamp, Jason E.; Stein, Joshua S.; Hoekstra, Robert J.; Nelson, Jeffrey S.; Munoz-Ramos, Karina M.; McLendon, William C.; Russo, Thomas V.; Phillips, Laurence R.

Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power-flow models are used when analyses involving thousands of nodes are required, because of the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict the variability of solar resources at locations where few or no ground-based measurements are available.


Formulation, analysis and numerical study of an optimization-based conservative interpolation (remap) of scalar fields for arbitrary Lagrangian-Eulerian methods

Journal of Computational Physics

Bochev, Pavel; Ridzal, Denis R.; Scovazzi, Guglielmo S.; Shashkov, Mikhail

We develop and study the high-order conservative and monotone optimization-based remap (OBR) of a scalar conserved quantity (mass) between two close meshes with the same connectivity. The key idea is to phrase remap as a global inequality-constrained optimization problem for mass fluxes between neighboring cells. The objective is to minimize the discrepancy between these fluxes and the given high-order target mass fluxes, subject to constraints that enforce physically motivated bounds on the associated primitive variable (density). In so doing, we separate accuracy considerations, handled by the objective functional, from the enforcement of physical bounds, handled by the constraints. The resulting OBR formulation is applicable to general, unstructured, heterogeneous grids. Under some weak requirements on grid proximity, but not on the cell types, we prove that the OBR algorithm is linearity preserving in one, two and three dimensions. The paper also examines connections between the OBR and the recently proposed flux-corrected remap (FCR), Liska et al. [1]. We show that the FCR solution coincides with the solution of a modified version of OBR (M-OBR), which has the same objective but a simpler set of box constraints derived by using a "worst-case" scenario. Because M-OBR (FCR) has a smaller feasible set, preservation of linearity may be lost and accuracy may suffer for some grid configurations. Our numerical studies confirm this, and show that OBR delivers significant increases in robustness and accuracy. Preliminary efficiency studies of OBR reveal that it is only a factor of 2.1 slower than FCR, but admits 1.5 times larger time steps. © 2011 Elsevier Inc.
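
A minimal sketch of the OBR idea in one dimension is given below, assuming invented grids, target fluxes, and local density bounds, and using a generic SLSQP solver in place of a purpose-built optimizer; it only illustrates the flux-form conservation and bound constraints described above.

    import numpy as np
    from scipy.optimize import minimize

    # Toy 1-D optimization-based remap: the unknowns are mass fluxes F[i] through
    # the n-1 interior faces; writing the new masses in flux form makes the remap
    # conservative by construction.  All data below are made up for illustration.

    rng = np.random.default_rng(0)
    n = 8
    rho_old = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))
    vol_old = np.ones(n)
    m_old = rho_old * vol_old
    vol_new = vol_old + 0.02 * rng.standard_normal(n)        # a "close" mesh
    F_target = 0.05 * rng.standard_normal(n - 1)             # stand-in high-order target fluxes

    def mass_new(F):
        F_ext = np.concatenate(([0.0], F, [0.0]))            # no flux through domain ends
        return m_old + F_ext[:-1] - F_ext[1:]

    # Physically motivated density bounds: local min/max of the old density.
    rho_min = np.array([rho_old[max(i - 1, 0):i + 2].min() for i in range(n)])
    rho_max = np.array([rho_old[max(i - 1, 0):i + 2].max() for i in range(n)])

    def objective(F):
        return np.sum((F - F_target) ** 2)                   # stay close to the target fluxes

    constraints = [
        {"type": "ineq", "fun": lambda F: mass_new(F) / vol_new - rho_min},   # rho >= rho_min
        {"type": "ineq", "fun": lambda F: rho_max - mass_new(F) / vol_new},   # rho <= rho_max
    ]
    result = minimize(objective, np.zeros(n - 1), method="SLSQP", constraints=constraints)

    rho_new = mass_new(result.x) / vol_new
    print("mass conserved:", np.isclose(mass_new(result.x).sum(), m_old.sum()))
    print("bounds satisfied:", np.all(rho_new >= rho_min - 1e-8) and np.all(rho_new <= rho_max + 1e-8))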


IceT users' guide and reference

Moreland, Kenneth D.

The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.


Coupling strategies for high-speed aeroheating problems

Bova, S.W.

A common purpose for performing an aerodynamic analysis is to calculate the resulting loads on a solid body immersed in the flow. Pressure or heat loads are often of interest for characterizing the structural integrity or thermal survivability of the structure. This document describes two algorithms for tightly coupling the mass, momentum, and energy conservation equations for a compressible fluid with the energy conservation equation for heat transfer through a solid. We categorize both approaches as monolithically coupled, in that the conservation equations for the fluid and the solid are assembled into a single residual vector; Newton's method is then used to solve the resulting nonlinear system of equations. These approaches are in contrast to other popular coupling schemes, such as staggered coupling methods, where each discipline is solved individually and loads are passed between the disciplines as boundary conditions. We demonstrate the viability of the monolithic approach for aeroheating problems.
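
The sketch below illustrates the monolithic idea on a two-unknown toy problem with invented residuals and a finite-difference Jacobian, not the actual aeroheating discretization: the fluid and solid residuals are assembled into one vector and driven to zero by a single Newton iteration.

    import numpy as np

    # Toy monolithic coupling: one "fluid" unknown u and one "solid" unknown T.
    # Both residual equations (invented here) are assembled into a single vector
    # and solved together with Newton's method, rather than alternating between
    # separately converged fluid and solid solves.

    def residual(z):
        u, T = z
        r_fluid = u**2 + 0.1 * T - 4.0      # stand-in fluid equation, sees the wall temperature
        r_solid = T - 2.0 * u - 1.0         # stand-in solid heat balance, sees the fluid state
        return np.array([r_fluid, r_solid])

    def newton(z0, tol=1.0e-12, max_iter=20, eps=1.0e-7):
        z = np.array(z0, dtype=float)
        for _ in range(max_iter):
            r = residual(z)
            if np.linalg.norm(r) < tol:
                return z
            J = np.empty((z.size, z.size))              # finite-difference Jacobian
            for j in range(z.size):
                dz = np.zeros_like(z)
                dz[j] = eps
                J[:, j] = (residual(z + dz) - r) / eps
            z = z + np.linalg.solve(J, -r)
        raise RuntimeError("Newton did not converge")

    print(newton([1.0, 1.0]))    # converges to the coupled solution (u ~ 1.877, T ~ 4.755)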


Adversary phase change detection using S.O.M. and text data

Speed, Ann S.; Warrender, Christina E.

In this work, we developed a self-organizing map (SOM) technique for using web-based text analysis to forecast when a group is undergoing a phase change. By 'phase change', we mean that an organization has fundamentally shifted attitudes or behaviors. For instance, when ice melts into water, the characteristics of the substance change. A formerly peaceful group may suddenly adopt violence, or a violent organization may unexpectedly agree to a ceasefire. SOM techniques were used to analyze text obtained from organization postings on the world-wide web. Results suggest it may be possible to forecast phase changes, and determine if an example of writing can be attributed to a group of interest.
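
As a rough illustration of the SOM machinery involved (the map size, learning schedule, and the random stand-in for text-derived feature vectors are all assumptions, not the study's configuration), a minimal SOM training loop looks like this:

    import numpy as np

    # Minimal self-organizing map (SOM): toy feature vectors are mapped onto a
    # small 2-D grid of units.  A shift over time in which units "win" for a
    # group's postings is the kind of signal described above.

    rng = np.random.default_rng(1)
    grid_h, grid_w, dim = 5, 5, 10
    weights = rng.random((grid_h, grid_w, dim))
    coords = np.dstack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"))

    def best_matching_unit(weights, x):
        dist = np.linalg.norm(weights - x, axis=2)
        return np.unravel_index(np.argmin(dist), dist.shape)

    def train_som(data, weights, epochs=20, lr0=0.5, sigma0=2.0):
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
            sigma = sigma0 * (1.0 - epoch / epochs) + 0.5     # shrinking neighborhood
            for x in data:
                bmu = best_matching_unit(weights, x)
                grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
                h = np.exp(-grid_d2 / (2.0 * sigma**2))       # neighborhood kernel
                weights += lr * h[..., None] * (x - weights)
        return weights

    docs = rng.random((100, dim))                 # stand-in for text-derived feature vectors
    weights = train_som(docs, weights)
    print(best_matching_unit(weights, docs[0]))   # grid unit that "wins" for the first document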


Truncated multiGaussian fields and effective conductance of binary media

Mckenna, Sean A.; Ray, Jaideep R.; van Bloemen Waanders, Bart G.

Truncated Gaussian fields provide a flexible model for defining binary media with dispersed (as opposed to layered) inclusions. General properties of excursion sets on these truncated fields are coupled with a distance-based upscaling algorithm and approximations of point process theory to develop an estimation approach for effective conductivity in two dimensions. Estimation of effective conductivity is derived directly from knowledge of the kernel size used to create the multiGaussian field, defined as the full-width at half maximum (FWHM), the truncation threshold, and the conductance values of the two modes. Therefore, instantiation of the multiGaussian field is not necessary for estimation of the effective conductance. The critical component of the effective medium approximation developed here is the mean distance between high conductivity inclusions. This mean distance is characterized as a function of the FWHM, the truncation threshold, and the ratio of the two modal conductivities. Sensitivity of the resulting effective conductivity to this mean distance is examined for two levels of contrast in the two modal conductances and different FWHM sizes. Results demonstrate that the FWHM is a robust measure of mean travel distance in the background medium. The resulting effective conductivities are accurate when compared to results obtained from effective media theory, distance-based upscaling, and numerical simulation.
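
A truncated multiGaussian binary field of the kind described above can be generated by smoothing white noise with a Gaussian kernel of a chosen FWHM and truncating at a threshold; the sketch below does exactly that, with grid size, FWHM, inclusion proportion, and modal conductivities chosen arbitrarily for illustration (the estimation formulas themselves are not reproduced).

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Generate a 2-D truncated multiGaussian binary field: smooth white noise with
    # a Gaussian kernel of a chosen FWHM, then truncate at the threshold that gives
    # the desired proportion of high-conductivity inclusions.

    rng = np.random.default_rng(42)
    n, fwhm, proportion = 256, 12.0, 0.25
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))      # convert FWHM to kernel std. dev.

    field = gaussian_filter(rng.standard_normal((n, n)), sigma)
    field = (field - field.mean()) / field.std()           # renormalize to ~N(0, 1)
    threshold = np.quantile(field, 1.0 - proportion)       # truncation threshold
    inclusions = field > threshold                         # True = inclusion, False = background

    k_inclusion, k_background = 1.0e-3, 1.0e-6             # hypothetical modal conductivities
    conductivity = np.where(inclusions, k_inclusion, k_background)
    print("inclusion fraction:", inclusions.mean())        # ~0.25 by construction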


Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan, version 1

Edwards, Harold C.; Arguello, Jose G.; Bartlett, Roscoe B.; Bouchard, Julie F.; Freeze, Geoffrey A.; Knupp, Patrick K.; Schultz, Peter A.; Urbina, Angel U.; Wang, Yifeng

The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.


Real-time individualized training vectors for experiential learning

Fabian, Nathan D.; Glickman, Matthew R.

Military training utilizing serious games or virtual worlds potentially generates data that can be mined to better understand how trainees learn in experiential exercises. Few data mining approaches for deployed military training games exist. Opportunities exist to collect and analyze these data, as well as to construct a full-history learner model. Outcomes discussed in the present document include results from a quasi-experimental research study on military game-based experiential learning, the deployment of an online game for training evidence collection, and results from a proof-of-concept pilot study on the development of individualized training vectors. This Lab Directed Research & Development (LDRD) project leveraged products from related projects, such as Titan (Network Grand Challenge), the Real-Time Feedback and Evaluation System (America's Army Adaptive Thinking and Leadership, DARWARS Ambush! NK), and Dynamic Bayesian Networks, to investigate whether machine learning capabilities could compute real-time, in-game similarity vectors of learner performance, toward adaptation of content delivery and quantitative measurement of experiential learning.


The use of electric circuit simulation for power grid dynamics

Proceedings of the American Control Conference

Schoenwald, David A.; Munoz-Ramos, Karina M.; McLendon, William C.; Russo, Thomas V.

Traditional grid models for large-scale simulations assume linear and quasi-static behavior allowing very simple models of the systems. In this paper, a scalable electric circuit simulation capability is presented that can capture a significantly higher degree of fidelity including transient dynamic behavior of the grid as well as allowing scaling to a regional and national level grid. A test case presented uses simple models, e.g. generators, transformers, transmission lines, and loads, but with the scalability feature it can be extended to include more advanced non-linear detailed models. The use of this scalable electric circuit simulator will provide the ability to conduct large-scale transient stability analysis as well as grid level planning as the grid evolves with greater degrees of penetration of renewables, power electronics, storage, distributed generation, and micro-grids. © 2011 AACC American Automatic Control Council.


Biologically inspired feature creation for multi-sensory perception

Frontiers in Artificial Intelligence and Applications

Rohrer, Brandon R.

Automatic feature creation is a powerful tool for identifying and reaching goals in the natural world. This paper describes in detail a biologically-inspired method of feature creation that can be applied to sensory information of any modality. The algorithm is incremental and on-line; it enforces sparseness in the features it creates; and it can form features from other features, making a hierarchical feature set. Here it demonstrates the creation of both visual and auditory features. © 2011 The authors and IOS Press. All rights reserved.


A nonlocal approach to modeling crack nucleation in AA 7075-T651

ASME 2011 International Mechanical Engineering Congress and Exposition, IMECE 2011

Littlewood, David J.

A critical stage in microstructurally small fatigue crack growth in AA 7075-T651 is the nucleation of cracks originating in constituent particles into the matrix material. Previous work has focused on a geometric approach to modeling microstructurally small fatigue crack growth in which damage metrics derived from an elastic-viscoplastic constitutive model are used to predict the nucleation event [1, 2]. While a geometric approach based on classical finite elements was successful in explicitly modeling the polycrystalline grain structure, singularities at the crack tip necessitated the use of a nonlocal sampling approach to remove mesh size dependence. This study is an initial investigation of the peridynamic formulation of continuum mechanics as an alternative approach to modeling microstructurally small fatigue crack growth. Peridynamics, a nonlocal extension of continuum mechanics, is based on an integral formulation that remains valid in the presence of material discontinuities. To capture accurately the material response at the grain scale, a crystal elastic-viscoplastic constitutive model is adapted for use in non-ordinary state-based peridynamics through the use of a regularized deformation gradient. The peridynamic approach is demonstrated on a baseline model consisting of a hard elastic inclusion in a single crystal. Coupling the elastic-viscoplastic material model with peridynamics successfully facilitates the modeling of plastic deformation and damage accumulation in the vicinity of the particle inclusion. Lattice orientation is shown to have a strong influence on material response. Copyright © 2011 by ASME.
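
An approximate deformation gradient of the kind mentioned above is commonly built from the peridynamic shape tensor; the sketch below shows that construction on a 2-D toy family of points with a unit influence function (the specific influence function, discretization, and material model of the paper are not reproduced).

    import numpy as np

    # Approximate deformation gradient at a peridynamic material point, built from
    # its family of neighbors within the horizon via the shape tensor used in
    # non-ordinary state-based peridynamics:
    #   K = sum_j w_j (X_j outer X_j) V_j,    F = [ sum_j w_j (Y_j outer X_j) V_j ] K^{-1}
    # The 2-D setting, unit influence function, and point spacing are illustrative.

    def deformation_gradient(x_i, y_i, x_family, y_family, volumes, horizon):
        dim = x_i.size
        K = np.zeros((dim, dim))
        B = np.zeros((dim, dim))
        for x_j, y_j, V_j in zip(x_family, y_family, volumes):
            X = x_j - x_i                                      # reference bond
            Y = y_j - y_i                                      # deformed bond
            w = 1.0 if np.linalg.norm(X) <= horizon else 0.0   # influence function
            K += w * V_j * np.outer(X, X)
            B += w * V_j * np.outer(Y, X)
        return B @ np.linalg.inv(K)

    # Check: a homogeneous deformation y = F_exact @ x is recovered exactly.
    F_exact = np.array([[1.10, 0.05],
                        [0.00, 0.95]])
    x_i = np.zeros(2)
    x_family = 0.1 * np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 1]], dtype=float)
    y_family = x_family @ F_exact.T
    volumes = np.full(len(x_family), 1.0e-3)
    F = deformation_gradient(x_i, np.zeros(2), x_family, y_family, volumes, horizon=0.2)
    print(np.allclose(F, F_exact))      # True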


Connecting cognitive and neural models

Frontiers in Artificial Intelligence and Applications

Rothganger, Fredrick R.; Warrender, Christina E.; Speed, Ann S.; Rohrer, Brandon R.; Naugle, Asmeret B.; Trumbo, Derek T.

A key challenge in developing complete human equivalence is how to ground a synoptic theory of cognition in neural reality. Both cognitive architectures and neural models provide insight into how biological brains work, but from opposite directions. Here the authors report on initial work aimed at interpreting connectomic data in terms of algorithms. © 2011 The authors and IOS Press. All rights reserved.


Calculating Hugoniots for molecular crystals from first principles

Proceedings - 14th International Detonation Symposium, IDS 2010

Wills, Ann E.; Wixom, Ryan R.; Mattsson, Thomas M.

Density Functional Theory (DFT) has over the last few years emerged as an indispensable tool for understanding the behavior of matter under extreme conditions. DFT-based molecular dynamics (MD) simulations have, for example, confirmed experimental findings for shocked deuterium [1], enabled the first experimental evidence for a triple point in carbon above 850 GPa [2], and amended experimental data for constructing a global equation of state (EOS) for water, carrying implications for planetary physics [3]. The ability to perform high-fidelity calculations is even more important for cases where experiments are impossible to perform, dangerous, and/or prohibitively expensive. For solid explosives, and other molecular crystals, similar success has been severely hampered by an inability to describe the materials at equilibrium. The binding mechanism of molecular crystals (van der Waals forces) is not well described within traditional DFT [4]. Among widely used exchange-correlation functionals, neither LDA nor PBE balances the strong intra-molecular chemical bonding and the weak inter-molecular attraction, resulting in incorrect equilibrium density and negatively affecting the construction of EOS for undetonated high explosives. We are exploring a way of bypassing this problem by using the new Armiento-Mattsson 2005 (AM05) exchange-correlation functional [5, 6]. The AM05 functional is highly accurate for a wide range of solids [4, 7], in particular in compression [8]. In addition, AM05 does not include any van der Waals attraction [4], which can be advantageous compared to other functionals: correcting for a fictitious van der Waals-like attraction with unknown origin can be harder than correcting for a complete absence of all types of van der Waals attraction. We will show examples from other materials systems where van der Waals attraction plays a key role and where this scheme has worked well [9], and discuss preliminary results for molecular crystals and explosives.
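
Once pressures and energies are available from DFT-based MD runs, Hugoniot states follow from the Rankine-Hugoniot energy condition; the sketch below locates one such state from invented tabulated data (the values, units, and interpolation choices are placeholders, not results from this work).

    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.optimize import brentq

    # Locating a principal-Hugoniot state from simulation data (values are made up).
    # At a fixed compressed volume V, runs at several temperatures give P(T) and E(T);
    # the Hugoniot temperature is the root of the Rankine-Hugoniot energy condition
    #   h(T) = E(T) - E0 - 0.5 * (P(T) + P0) * (V0 - V).

    V0, E0, P0 = 1.00, 0.0, 0.0                 # ambient reference state (arbitrary units)
    V = 0.70                                    # compressed volume for this Hugoniot point

    T_runs = np.array([300.0, 1000.0, 2000.0, 4000.0, 8000.0])   # stand-in temperatures
    P_runs = np.array([12.0, 14.0, 17.0, 23.0, 35.0])            # stand-in pressures
    E_runs = np.array([0.5, 1.2, 2.4, 5.0, 11.0])                # stand-in internal energies

    P_of_T = interp1d(T_runs, P_runs, kind="cubic")
    E_of_T = interp1d(T_runs, E_runs, kind="cubic")

    def hugoniot_residual(T):
        return float(E_of_T(T) - E0 - 0.5 * (P_of_T(T) + P0) * (V0 - V))

    T_h = brentq(hugoniot_residual, T_runs[0], T_runs[-1])
    print(f"Hugoniot state at V = {V}: T ~ {T_h:.0f}, P ~ {float(P_of_T(T_h)):.1f}")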


Architecture of PFC supports analogy, but PFC is not an analogy machine

Cognitive Neuroscience

Speed, Ann S.

In the preceding discussion paper, I proposed a theory of prefrontal cortical organization that was fundamentally intended to address the question: How does prefrontal cortex (PFC) support the various functions for which it seems to be selectively recruited? In so doing, I chose to focus on a particular function, analogy, that seems to have been largely ignored in the theoretical treatments of PFC, but that does underlie many other cognitive functions (Hofstadter, 2001; Holyoak & Thagard, 1997). At its core, this paper was intended to use analogy as a foundation for exploring one possibility for prefrontal function in general, although it is easy to see how the analogy-specific interpretation arises (as in the comment by Ibáñez). In an attempt to address this more foundational question, this response will step away from analogy as a focus, and will address first the various comments from the perspective of the initial motivation for developing this theory, and then specific issues raised by the commentators. © 2010 Psychology Press.


HPC application performance and scaling: Understanding trends and future challenges with application benchmarks on past, present and future tri-lab computing systems

AIP Conference Proceedings

Rajan, Mahesh; Doerfler, Douglas W.


A combinatorial method for tracing objects using semantics of their shape

Proceedings - Applied Imagery Pattern Recognition Workshop

Diegert, Carl F.

We present a shape-first approach to finding automobiles and trucks in overhead images and include results from our analysis of an image from the Overhead Imaging Research Dataset (OIRDS) [1]. For the OIRDS, our shape-first approach traces candidate vehicle outlines by exploiting knowledge about an overhead image of a vehicle: a vehicle's outline fits into a rectangle, this rectangle is sized to allow vehicles to use local roads, and rectangles from two different vehicles are disjoint. Our shape-first approach can efficiently process high-resolution overhead imaging over wide areas to provide tips and cues for human analysts, or for subsequent automatic processing using machine learning or other analysis based on color, tone, pattern, texture, size, and/or location (shape first). In fact, computationally intensive complex structural, syntactic, and statistical analysis may be possible when a shape-first work flow sends a list of specific tips and cues down a processing pipeline rather than sending the whole of wide-area imaging information. This data flow may fit well when bandwidth is limited between computers delivering ad hoc image exploitation and an imaging sensor. As expected, our early computational experiments find that the shape-first processing stage appears to reliably detect rectangular shapes from vehicles. More intriguing is that our computational experiments with six-inch GSD OIRDS benchmark images show that the shape-first stage can be efficient, and that candidate vehicle locations corresponding to features that do not include vehicles are unlikely to trigger tips and cues. We found that stopping with just the shape-first list of candidate vehicle locations, and then solving a weighted, maximal independent vertex set problem to resolve conflicts among candidate vehicle locations, often correctly traces the vehicles in an OIRDS scene. © 2010 IEEE.
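
The conflict-resolution step can be sketched as follows, with a greedy heuristic standing in for the exact weighted maximal-independent-vertex-set solve and with invented candidate rectangles and scores:

    # Resolve conflicts among candidate vehicle rectangles: one vertex per candidate
    # (weighted by its detection score), an edge between any two candidates whose
    # rectangles overlap, then keep a heavy independent set.  A greedy heuristic
    # stands in here for an exact weighted maximal-independent-set solver.

    def overlaps(a, b):
        """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    def greedy_independent_set(candidates):
        """candidates: list of (score, rect); keep best-scoring non-overlapping ones."""
        kept = []
        for score, rect in sorted(candidates, key=lambda c: -c[0]):
            if all(not overlaps(rect, r) for _, r in kept):
                kept.append((score, rect))
        return kept

    candidates = [
        (0.95, (10, 10, 30, 50)),   # two strong detections of the same vehicle...
        (0.90, (12, 12, 32, 52)),   # ...overlap, so only one of them survives
        (0.80, (60, 10, 80, 50)),   # a separate vehicle elsewhere in the scene
    ]
    print(greedy_independent_set(candidates))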


Generation of pareto optimal ensembles of calibrated parameter sets for climate models

Dalbey, Keith D.; Levy, Michael N.

Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi-Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N (larger is better); their RMS distance, d, to the ensemble's center (0.5553 is optimal); their average radius, μ(r) (1 is optimal); and their radius standard deviation, σ(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
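
The Pareto-dominance filter and the ensemble metrics quoted above can be sketched as follows; the sample points are drawn uniformly on the [0,1] octant of the 6-dimensional unit hypersphere (the analytic test problem), and the metric definitions below follow the description in the abstract rather than the paper's exact formulas.

    import numpy as np

    # Pareto filter for a minimization problem plus the ensemble metrics described
    # above: number of points N, RMS distance d to the ensemble center, and the
    # mean and standard deviation of the radius.

    def pareto_front(objectives):
        """Indices of rows not dominated by any other row (all objectives minimized)."""
        keep = []
        for i, fi in enumerate(objectives):
            dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                            for j, fj in enumerate(objectives) if j != i)
            if not dominated:
                keep.append(i)
        return np.array(keep)

    rng = np.random.default_rng(7)
    pts = np.abs(rng.standard_normal((500, 6)))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # uniform points on the sphere octant

    front = pts[pareto_front(pts)]                      # here every octant point is nondominated
    r = np.linalg.norm(front, axis=1)
    center = front.mean(axis=0)
    d = np.sqrt(np.mean(np.sum((front - center) ** 2, axis=1)))
    # For a large uniform sample, d approaches the 0.5553 optimum quoted above,
    # while mu(r) -> 1 and sigma(r) -> 0.
    print(f"N = {len(front)}, d = {d:.4f}, mu(r) = {r.mean():.4f}, sigma(r) = {r.std():.4f}")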


Arctic Sea ice model sensitivities

Bochev, Pavel B.; Paskaleva, Biliana S.

Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice state to internal model parameters. A new sea ice model that holds some promise for improving sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of this MPM sea ice code and compare it with the Los Alamos National Laboratory CICE code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters' influence on the solution.


Robust emergent climate phenomena associated with the high-sensitivity tail

Boslough, Mark B.; Levy, Michael N.; Backus, George A.

Because the potential effects of climate change are more severe than had previously been thought, increasing focus on uncertainty quantification is required for risk assessment needed by policy makers. Current scientific efforts focus almost exclusively on establishing best estimates of future climate change. However, the greatest consequences occur in the extreme tail of the probability density functions for climate sensitivity (the 'high-sensitivity tail'). To this end, we are exploring the impacts of newly postulated, highly uncertain, but high-consequence physical mechanisms to better establish the climate change risk. We define consequence in terms of dramatic change in physical conditions and in the resulting socioeconomic impact (hence, risk) on populations. Although we are developing generally applicable risk assessment methods, we have focused our initial efforts on uncertainty and risk analyses for the Arctic region. Instead of focusing on best estimates, requiring many years of model parameterization development and evaluation, we are focusing on robust emergent phenomena (those that are not necessarily intuitive and are insensitive to assumptions, subgrid-parameterizations, and tunings). For many physical systems, under-resolved models fail to generate such phenomena, which only develop when model resolution is sufficiently high. Our ultimate goal is to discover the patterns of emergent climate precursors (those that cannot be predicted with lower-resolution models) that can be used as a 'sensitivity fingerprint' and make recommendations for a climate early warning system that would use satellites and sensor arrays to look for the various predicted high-sensitivity signatures. Our initial simulations are focused on the Arctic region, where underpredicted phenomena such as rapid loss of sea ice are already emerging, and because of major geopolitical implications associated with increasing Arctic accessibility to natural resources, shipping routes, and strategic locations. We anticipate that regional climate will be strongly influenced by feedbacks associated with a seasonally ice-free Arctic, but with unknown emergent phenomena.


The effect of error models in the multiscale inversion of binary permeability fields

Ray, Jaideep R.; van Bloemen Waanders, Bart G.; Mckenna, Sean A.

We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect disparate scales together, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loeve (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportionality and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and the fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold, where the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.
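
A truncated Karhunen-Loeve representation of a spatially varying field, of the kind used here for the inclusion proportion, can be sketched as follows (the covariance model, correlation length, and truncation level are illustrative assumptions):

    import numpy as np

    # Truncated Karhunen-Loeve (KL) representation of a 1-D Gaussian process:
    # field(x) = mean + sum_k sqrt(lambda_k) * w_k * phi_k(x), so that only the
    # first few KL weights w_k need to be treated as unknowns in an inversion.

    x = np.linspace(0.0, 1.0, 200)
    corr_len = 0.2
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance

    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]                          # sort modes by energy
    eigval, eigvec = eigval[order], eigvec[:, order]

    n_kl = 8                                                  # truncation level (assumed)
    rng = np.random.default_rng(3)
    w = rng.standard_normal(n_kl)                             # KL weights: objects of inference
    field = 0.3 + eigvec[:, :n_kl] @ (np.sqrt(eigval[:n_kl]) * w)

    proportion = np.clip(field, 0.0, 1.0)                     # inclusion proportion in [0, 1]
    print("variance captured by truncation:", eigval[:n_kl].sum() / eigval.sum())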


Redundant computing for exascale systems

Ferreira, Kurt; Stearley, Jon S.; Oldfield, Ron A.; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.
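
A generic first-order checkpoint/restart waste model (in the spirit of the Young/Daly approximations, not the paper's own simulator) illustrates why resilience overhead grows so quickly with node count; all parameter values below are illustrative guesses.

    import math

    # First-order checkpoint/restart waste model:
    #   system MTBF      M    = node_mtbf / nodes
    #   optimal interval tau  = sqrt(2 * delta * M)        (Young's approximation)
    #   wasted fraction  W    = delta / tau + (tau + delta) / (2 * M) + R / M
    # where delta is the checkpoint write time and R the restart time.  The node
    # MTBF, delta, and R below are made-up values, not measurements from the paper.

    def wasted_fraction(nodes, node_mtbf_h=5.0 * 365.0 * 24.0, delta_h=0.1, restart_h=0.1):
        M = node_mtbf_h / nodes
        tau = math.sqrt(2.0 * delta_h * M)
        return delta_h / tau + (tau + delta_h) / (2.0 * M) + restart_h / M

    for nodes in (1_000, 10_000, 50_000, 200_000):
        w = min(wasted_fraction(nodes), 1.0)          # the first-order model saturates at 100%
        print(f"{nodes:>7} nodes: ~{100.0 * w:.0f}% of wall time lost to checkpoint/restart/rework")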


Sensor placement for municipal water networks

Phillips, Cynthia A.; Boman, Erik G.; Carr, Robert D.; Hart, William E.; Berry, Jonathan W.; Watson, Jean-Paul W.; Hart, David B.; Mckenna, Sean A.; Riesen, Lee A.

We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem that has structure extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, or a Lagrangian method. The Lagrangian method is necessary for the solution of large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used and is using to design contamination warning systems for US municipal water systems.
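
In the p-median view, each contamination incident is assigned to the first sensor that can detect it and the objective is the expected impact; the tiny sketch below uses a greedy heuristic and made-up impact data, standing in for the integer-programming, local-search, and Lagrangian solvers actually used.

    # Tiny p-median sketch for sensor placement: impact[s][e] is the damage of
    # contamination event e if the first sensor to observe it is at candidate
    # location s, and a large "unseen" penalty plays the role of the no-detection
    # option.  Greedy selection stands in for the exact solvers.

    def expected_impact(placed, impact, unseen_penalty):
        n_events = len(next(iter(impact.values())))
        total = 0.0
        for e in range(n_events):
            best = min((impact[s][e] for s in placed), default=unseen_penalty)
            total += min(best, unseen_penalty)
        return total / n_events

    def greedy_placement(impact, p, unseen_penalty=1000.0):
        placed = []
        for _ in range(p):
            best = min((s for s in impact if s not in placed),
                       key=lambda s: expected_impact(placed + [s], impact, unseen_penalty))
            placed.append(best)
        return placed

    # three candidate sensor locations, four equally likely contamination events
    impact = {"A": [10, 400, 900, 50], "B": [200, 30, 80, 70], "C": [500, 60, 40, 600]}
    sensors = greedy_placement(impact, p=2)
    print(sensors, expected_impact(sensors, impact, 1000.0))   # ['B', 'A'] 42.5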
