Publications

Generation of Pareto optimal ensembles of calibrated parameter sets for climate models

Dalbey, Keith; Levy, Michael N.

Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi-Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a six-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin Hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N (larger is better); their RMS distance, d, to the ensemble's center (0.5553 is optimal); their average radius, μ(r) (1 is optimal); and their radius standard deviation, σ(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
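
As a concrete illustration of the four quality metrics above, here is a minimal NumPy sketch that computes them for a candidate Pareto set. The sample data and the assumption that the 'ensemble center' means the centroid of the points are ours, not taken from the paper.

import numpy as np

def pareto_set_metrics(points):
    # points: (N, 6) array of candidate Pareto-optimal parameter vectors.
    # Assumes the 'ensemble center' is the centroid of the points.
    n = len(points)                                  # N: larger is better
    center = points.mean(axis=0)
    d = np.sqrt(np.mean(np.sum((points - center) ** 2, axis=1)))  # RMS distance to center
    r = np.linalg.norm(points, axis=1)               # radii measured from the origin
    return n, d, r.mean(), r.std()                   # mu(r) = 1 and sigma(r) = 0 are optimal

# Exact samples from the test problem's Pareto surface: the portion of the
# six-dimensional unit sphere lying in the [0,1] octant.
rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal((500, 6)))
x /= np.linalg.norm(x, axis=1, keepdims=True)
print(pareto_set_metrics(x))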

Redundant computing for exascale systems

Ferreira, Kurt; Stearley, Jon S.; Oldfield, Ron; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of its cost, and compare it to other proposed methods for fault resilience.
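
The 'more than half the running time' claim can be reproduced with the standard first-order checkpoint/restart cost model (a Daly-style approximation). The sketch below is our illustration with made-up parameter values, not the paper's actual simulation:

import math

def overhead_fraction(nodes, node_mtbf_hours, checkpoint_minutes):
    # System MTBF shrinks as 1/nodes; the optimal checkpoint interval is
    # roughly sqrt(2 * delta * MTBF). Results above 1 mean the model
    # predicts no forward progress at all.
    mtbf = node_mtbf_hours * 3600.0 / nodes      # system MTBF, seconds
    delta = checkpoint_minutes * 60.0            # time to write one checkpoint
    tau = math.sqrt(2.0 * delta * mtbf)          # optimal compute interval
    # fraction lost = checkpoint overhead + expected rework after a failure
    return delta / (tau + delta) + (tau + delta) / (2.0 * mtbf)

# Illustrative parameters: 5-year node MTBF, 15-minute checkpoint writes.
for n in (10_000, 50_000, 100_000):
    print(n, round(overhead_fraction(n, 43_800, 15), 2))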

Solution methods for very highly integrated circuits

Thornquist, Heidi K.; Mei, Ting; Tuminaro, Raymond S.

While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds, as well as a new approach for linear time-varying dynamical systems that guarantees dimension reduction, a property that had not previously been proven. Progress on preconditioning power grid systems using multi-grid techniques will be presented, as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator, both prominent world-class software tools.
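
As background, one widely used MOR technique is moment-matching projection onto a Krylov subspace. The sketch below is the generic textbook version for a linear time-invariant system, shown only to illustrate the idea of a projected macromodel; it is not one of the LDRD's algorithms:

import numpy as np

def krylov_reduce(A, B, C, k):
    # Project dx/dt = A x + B u, y = C x onto an orthonormal basis of the
    # Krylov space span{A^-1 B, A^-2 B, ...} (moment matching at s = 0).
    n = A.shape[0]
    V = np.zeros((n, k))
    v = np.linalg.solve(A, B).ravel()
    for j in range(k):
        for i in range(j):                       # Gram-Schmidt orthogonalization
            v = v - (V[:, i] @ v) * V[:, i]
        V[:, j] = v / np.linalg.norm(v)
        v = np.linalg.solve(A, V[:, j])
    return V.T @ A @ V, V.T @ B, C @ V           # Galerkin-projected macromodel

# Toy stable system: the order-10 macromodel reproduces the DC gain C A^-1 B.
rng = np.random.default_rng(1)
A = -np.eye(100) + 0.05 * rng.standard_normal((100, 100))
B = rng.standard_normal((100, 1))
C = rng.standard_normal((1, 100))
Ar, Br, Cr = krylov_reduce(A, B, C, 10)
print(C @ np.linalg.solve(A, B), Cr @ np.linalg.solve(Ar, Br))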

Using triggered operations to offload collective communication operations

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hemmert, K.S.; Barrett, Brian; Underwood, Keith D.

Efficient collective operations are a major component of application scalability. Offload of collective operations onto the network interface reduces many of the latencies that are inherent in network communications and, consequently, reduces the time to perform the collective operation. To support offload, it is desirable to expose semantic building blocks that are simple to offload and yet powerful enough to implement a variety of collective algorithms. This paper presents implementations of barrier and broadcast that leverage triggered operations, a semantic building block for collective offload. Triggered operations are shown to be both semantically powerful and capable of improving performance. © 2010 Springer-Verlag.
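
The core semantic is simple: an operation registered against a counter fires automatically once the counter reaches a threshold, so a collective can be wired up in advance and complete without host involvement. The toy Python model below illustrates a tree barrier built this way; the class and its methods are our invention for illustration, not the actual NIC interface (e.g. Portals 4):

class TriggeredCounter:
    # Toy model of a triggered-operation counter: each registered operation
    # fires once the count reaches its threshold. On real hardware this
    # logic runs on the NIC, so forwarding needs no host CPU involvement.
    def __init__(self):
        self.count, self.pending = 0, []

    def trigger_at(self, threshold, op):
        self.pending.append((threshold, op))

    def increment(self):
        self.count += 1
        ready = [p for p in self.pending if self.count >= p[0]]
        self.pending = [p for p in self.pending if p not in ready]
        for _, op in ready:
            op()

def tree_barrier(n):
    # Fan-in barrier on a binary tree, wired up entirely in advance.
    counters = [TriggeredCounter() for _ in range(n)]
    done = []

    def setup(node):
        children = [c for c in (2 * node + 1, 2 * node + 2) if c < n]

        def notify():                # fires when this node + all children arrived
            if node == 0:
                done.append(True)    # root: every rank has reached the barrier
            else:
                counters[(node - 1) // 2].increment()   # 'put' to parent

        counters[node].trigger_at(1 + len(children), notify)

    for node in range(n):
        setup(node)
    for node in range(n):            # each rank arrives (any order works)
        counters[node].increment()
    return bool(done)

print(tree_barrier(9))               # True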

Alternative perturbation theories for triple excitations in coupled-cluster theory

Molecular Physics

Taube, Andrew G.

The dominant method of small-molecule quantum chemistry over the last twenty years has been CCSD(T). Despite this success, RHF-based CCSD(T) fails for systems away from equilibrium. Work over the last ten years has led to modifications of CCSD(T) that improve the description of bond breaking. These new methods include ΛCCSD(T), CCSD(2)T, CCSD(2) and CR-CC(2,3), which are new perturbative corrections to single-reference CCSD. We present a unified derivation of these methods and compare them at the level of formal theory and computational accuracy. None of the methods is clearly superior, although formal considerations favour ΛCCSD(T) and computational accuracy for the systems considered favours CR-CC(2,3). © 2010 Taylor & Francis.

A revolution in micropower: the catalytic nanodiode

Creighton, James R.; Baucom, Kevin C.; Coltrin, Michael E.; Figiel, Jeffrey J.; Cross, Karen C.; Koleske, Daniel; Pawlowski, Roger; Heller, Edwin J.; Bogart, Katherine H.A.; Coker, Eric N.

Our ability to field useful, nano-enabled microsystems that capitalize on recent advances in sensor technology is severely limited by the energy density of available power sources. The catalytic nanodiode (reported by Somorjai's group at Berkeley in 2005) was potentially a revolutionary alternative source of micropower. Their first reports claimed that a sizable fraction of the chemical energy may be harvested via hot electrons (a 'chemicurrent') that are created by the catalytic chemical reaction. We fabricated and tested Pt/GaN nanodiodes, which eventually produced currents up to several microamps. Our best reaction yields (electrons/CO₂) were on the order of 10⁻³, well below the 75% values first reported by Somorjai (we note they have also been unable to reproduce their early results). Over the course of this project we have determined that the whole concept of 'chemicurrent' may, in fact, be an illusion. Our results conclusively demonstrate that the current measured from our nanodiodes is derived from a thermoelectric voltage; we have found no credible evidence for true chemicurrent. Unfortunately, this means that the catalytic nanodiode has no future as a micropower source.

Installing Python software packages: the good, the bad and the ugly

Hart, William E.

These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust strategies for software installation remain a challenge. This talk describes several different installation scenarios. The Good: the user has administrative privileges - Installing on Windows with an installer executable, Installing with a Linux application utility, Installing a Python package from the PyPI repository, and Installing a Python package from source. The Bad: the user does not have administrative privileges - Using a virtual environment to isolate package installations, and Using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - Installing a Python extension package from source, and PyCoinInstall - Managing builds for Python extension packages. The last item, PyCoinInstall, is a utility being developed for the COIN-OR software, which is used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
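
A minimal sketch of the 'Bad' scenario: isolating an installation in a virtual environment without administrative privileges. It uses the modern standard-library venv module; in the era of this talk the equivalent was the third-party virtualenv tool, but the workflow is the same:

import subprocess
import sys
import venv
from pathlib import Path

# Create an isolated environment in the user's home directory -- no
# administrative privileges required.
env_dir = Path.home() / "myenv"
venv.create(env_dir, with_pip=True)

# Install a package from PyPI by invoking the environment's own
# interpreter, leaving the system Python untouched.
python = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "python"
subprocess.check_call([str(python), "-m", "pip", "install", "nose"])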

On Raviart-Thomas and VMS formulations for flow in heterogeneous materials

Turner, D.Z.

It is well known that the continuous Galerkin method (in its standard form) is not locally conservative, yet many stabilized methods are constructed by augmenting the standard Galerkin weak form. In particular, the Variational Multiscale (VMS) method has achieved popularity for combating numerical instabilities that arise for mixed formulations that do not otherwise satisfy the LBB condition. Among alternative methods that satisfy local and global conservation, many employ Raviart-Thomas function spaces. The lowest-order Raviart-Thomas finite element formulation (RT0) evaluates fluxes at the midpoints of element edges and uses constant pressures within each element. Although the RT0 element offers many advantages, it has only been shown viable for triangular or tetrahedral elements (quadrilateral variants of this method do not pass the patch test). In the context of heterogeneous materials, both of these methods have been used to model the mixed form of the Darcy equation. This work aims, in a comparative fashion, to evaluate the strengths and weaknesses of either approach for modeling Darcy flow for problems with highly varying material permeabilities and predominantly open flow boundary conditions. Such problems include carbon sequestration and enhanced oil recovery simulations, for which the far-field boundary is typically described with some type of pressure boundary condition. We intend to show the degree to which the VMS formulation violates local mass conservation for these types of problems and compare the performance of the VMS and RT0 methods at boundaries between disparate permeabilities.
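
For reference, the mixed form of the Darcy equation that both formulations discretize is standard; in LaTeX (boundary conditions elided):

\begin{aligned}
  \kappa^{-1}\,\mathbf{u} + \nabla p &= 0 && \text{in } \Omega \quad \text{(Darcy's law)} \\
  \nabla \cdot \mathbf{u} &= f && \text{in } \Omega \quad \text{(mass balance)}
\end{aligned}

The weak form seeks $(\mathbf{u},p) \in V \times Q$ with $(\kappa^{-1}\mathbf{u},\mathbf{v}) - (p,\nabla\cdot\mathbf{v}) = 0$ and $(\nabla\cdot\mathbf{u},q) = (f,q)$ for all test functions. RT0 takes $V$ to be the lowest-order Raviart-Thomas space and $Q$ the piecewise constants, while VMS augments a standard Galerkin form with fine-scale terms.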

Managing variability in the IO performance of petascale storage systems

Oldfield, Ron

Significant challenges exist for achieving peak, or even consistent, levels of performance when using IO systems at scale. They stem from sharing IO system resources across the processes of single large-scale applications and/or multiple simultaneous programs, causing internal and external interference, which, in turn, causes substantial reductions in IO performance. This paper presents measurements of interference effects for two different file systems at multiple supercomputing sites. These measurements motivate developing a 'managed' IO approach that uses adaptive algorithms to vary the IO system workload based on current load levels and areas of use. An implementation of these methods deployed for the shared, general scratch storage system on Oak Ridge National Laboratory machines achieves higher overall performance and less variability in both a typical usage environment and with artificially introduced levels of 'noise'. The latter serves to clearly delineate and illustrate potential problems arising from shared system usage and the advantages derived from actively managing it.
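
To make the 'managed IO' idea concrete, here is a toy adaptive controller that throttles concurrent writers when observed bandwidth suggests congestion. The logic, thresholds, and function are hypothetical illustrations of the general approach, not the algorithms actually deployed on the ORNL systems:

def adjust_writers(current_writers, observed_bw, baseline_bw,
                   low=0.6, high=0.9, min_w=1, max_w=64):
    # Toy additive-increase / multiplicative-decrease control: back off
    # when the shared file system looks congested, probe upward when it
    # looks idle. Illustrative only.
    utilization = observed_bw / baseline_bw
    if utilization < low:            # heavy interference: back off quickly
        return max(min_w, current_writers // 2)
    if utilization > high:           # system looks quiet: probe upward
        return min(max_w, current_writers + 1)
    return current_writers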

Coarse-graining in peridynamics

Silling, Stewart

The peridynamic theory is an extension of traditional solid mechanics that treats discontinuous media, including the evolution of discontinuities due to fracture, on the same mathematical basis as classically smooth media. A recent advance in the linearized peridynamic theory permits the reduction of the number of degrees of freedom modeled within a body. Under equilibrium conditions, this coarse graining method exactly reproduces the internal forces on the coarsened degrees of freedom, including the effect of the omitted material that is no longer explicitly modeled. The method applies to heterogeneous as well as homogeneous media and accounts for defects in the material. The coarse graining procedure can be repeated over and over, resulting in a hierarchically coarsened description that, at each stage, continues to reproduce the exact internal forces present in the original, detailed model. Each coarsening step results in reduced computational cost. This talk will describe the new peridynamic coarsening method and show computational examples.
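
For context, the linearized peridynamic equation of motion on which the coarsening operates can be written in its standard form (here $H_\mathbf{x}$ is the neighborhood of the point $\mathbf{x}$ and $\mathbf{C}$ the micromodulus tensor):

\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_\mathbf{x}} \mathbf{C}(\mathbf{x},\mathbf{q})
    \bigl(\mathbf{u}(\mathbf{q},t) - \mathbf{u}(\mathbf{x},t)\bigr)\,dV_\mathbf{q}
  + \mathbf{b}(\mathbf{x},t).

Coarsening replaces $\mathbf{C}$ with an effective micromodulus acting only on the retained degrees of freedom while reproducing the same internal forces on them.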

Iterative packing for demand matching and sparse packing

Parekh, Ojas D.

The main result we will present is a 2k-approximation algorithm for the following 'k-hypergraph demand matching' problem: given a set system with sets of size ≤ k, where sets have profits and demands and vertices have capacities, find a max-profit subsystem whose demands do not exceed the capacities. The main tool is an iterative way to explicitly build a decomposition of the fractional optimum as 2k times a convex combination of integral solutions. If time permits, we will also show how the approach can be extended to a 3-approximation for 2-column sparse packing. The second result is tight with respect to the integrality gap, and the first is near-tight, as a gap lower bound of 2(k-1+1/k) is known.
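
Written as an integer program (our transcription of the problem statement, with $x_S$ selecting set $S$ at profit $p_S$ and demand $d_S$, and $c_v$ the capacity of vertex $v$):

\max \sum_{S} p_S\,x_S
\quad \text{s.t.} \quad
\sum_{S \ni v} d_S\,x_S \le c_v \ \ \forall v,
\qquad x_S \in \{0,1\}\ \ \forall S.

The iterative-packing argument works with the LP relaxation ($x_S \in [0,1]$), expressing its optimum as 2k times a convex combination of integral solutions, which yields the 2k approximation guarantee.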

Pyomo: Python Optimization Modeling Objects

Siirola, John D.; Watson, Jean-Paul; Hart, William E.

The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
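
A minimal concrete model in Pyomo's core syntax, shown for flavor; it assumes some LP solver (GLPK here) is installed:

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

# A tiny concrete production-mix model.
model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)     # units of product 1
model.y = Var(domain=NonNegativeReals)     # units of product 2
model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
model.labor = Constraint(expr=model.x + model.y <= 4)
model.material = Constraint(expr=model.x + 3 * model.y <= 6)

# Solve with any installed LP solver and inspect the solution.
SolverFactory('glpk').solve(model)
print(model.x(), model.y(), model.profit())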

Opportunities for leveraging OS virtualization in high-end supercomputing

Pedretti, Kevin P.; Bridges, Patrick G.

This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

Comparison of open source visual analytics toolkits

Harger, John R.; Crossno, Patricia J.

We present the results of the first stage of a two-stage evaluation of open source visual analytics packages. This stage is a broad feature comparison over a range of open source toolkits. Although we had originally intended to restrict ourselves to comparing visual analytics toolkits, we quickly found that very few were available. So we expanded our study to include information visualization, graph analysis, and statistical packages. We examine three aspects of each toolkit: visualization functions, analysis capabilities, and development environments. With respect to development environments, we look at platforms, language bindings, multi-threading/parallelism, user interface frameworks, ease of installation, documentation, and whether the package is still being actively developed.

Surrogate modeling with surfpack

Adams, Brian M.; Dalbey, Keith; Swiler, Laura P.

Surfpack is a library of multidimensional function approximation methods useful for efficient surrogate-based sensitivity/uncertainty analysis or calibration/optimization. I will survey current Surfpack meta-modeling capabilities for continuous variables and describe recent progress generalizing to both continuous and categorical factors, including relevant test problems and analysis comparisons.
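
Surfpack's own API is not shown in this abstract; as a generic illustration of the surrogate-modeling workflow it supports (fit a cheap approximation to expensive simulation samples, then evaluate the approximation freely), here is a radial-basis-function sketch using SciPy, not Surfpack:

import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # Stand-in for a costly simulation with two continuous inputs.
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 2))       # design of experiments
y_train = expensive_simulation(X_train)

surrogate = RBFInterpolator(X_train, y_train)    # cheap radial-basis fit
X_test = rng.uniform(-1, 1, size=(5, 2))
print(np.c_[surrogate(X_test), expensive_simulation(X_test)])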

Synthesis of an ionic liquid with an iron coordination cation

Dalton Transactions

Anderson, Travis M.; Ingersoll, David; Hensley, Alyssa H.; Staiger, Chad L.; Leonard, Jonathan C.

An iron-based ionic liquid, Fe((OHCH₂CH₂)₂NH)₆(CF₃SO₃)₃, is synthesized in a single-step complexation reaction. Infrared and Raman data suggest NH(CH₂CH₂OH)₂ primarily coordinates to Fe(III) through alcohol groups. The compound has glass transition (Tg) and decomposition (Td) temperatures of −64 °C and 260 °C, respectively. Cyclic voltammetry reveals quasi-reversible Fe(III)/Fe(II) reduction waves. © 2010 The Royal Society of Chemistry.

Shifted power method for computing tensor eigenpairs

Kolda, Tamara G.; Dunlavy, Daniel M.

Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
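
A compact NumPy sketch of the SS-HOPM iteration for a symmetric order-3 tensor. The fixed shift below is chosen by hand for illustration; the paper derives the shift bound that guarantees monotone convergence:

import numpy as np

def ss_hopm(A, alpha=3.0, tol=1e-10, max_iter=2000, seed=0):
    # SS-HOPM for a symmetric order-3 tensor A:
    # iterate x <- normalize(A x^2 + alpha x) until a fixed point,
    # which is a tensor eigenpair A x^2 = lambda x with ||x|| = 1.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)    # (A x^{m-1}) for m = 3
        x_new = Ax2 + alpha * x                   # positive shift (convex case)
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)    # eigenvalue lambda = A x^3
    return lam, x

# Symmetrize a random tensor and find one of its eigenpairs.
T = np.random.default_rng(1).standard_normal((4, 4, 4))
A = sum(np.transpose(T, p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
print(ss_hopm(A))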
