Publications

Calculating Hugoniots for molecular crystals from first principles

Proceedings - 14th International Detonation Symposium, IDS 2010

Wills, Ann E.; Wixom, Ryan R.; Mattsson, Thomas

Density Functional Theory (DFT) has over the last few years emerged as an indispensable tool for understanding the behavior of matter under extreme conditions. DFT-based molecular dynamics (MD) simulations have, for example, confirmed experimental findings for shocked deuterium [1], enabled the first experimental evidence for a triple point in carbon above 850 GPa [2], and amended experimental data for constructing a global equation of state (EOS) for water, carrying implications for planetary physics [3]. The ability to perform high-fidelity calculations is even more important for cases where experiments are impossible to perform, dangerous, and/or prohibitively expensive. For solid explosives, and other molecular crystals, similar success has been severely hampered by an inability to describe the materials at equilibrium. The binding mechanism of molecular crystals (van der Waals forces) is not well described within traditional DFT [4]. Among widely used exchange-correlation functionals, neither LDA nor PBE balances the strong intra-molecular chemical bonding and the weak inter-molecular attraction, resulting in incorrect equilibrium densities that negatively affect the construction of EOS for undetonated high explosives. We are exploring a way of bypassing this problem by using the new Armiento-Mattsson 2005 (AM05) exchange-correlation functional [5, 6]. The AM05 functional is highly accurate for a wide range of solids [4, 7], in particular in compression [8]. In addition, AM05 does not include any van der Waals attraction [4], which can be advantageous compared to other functionals: correcting for a fictitious van der Waals-like attraction of unknown origin can be harder than correcting for a complete absence of all types of van der Waals attraction. We will show examples from other materials systems where van der Waals attraction plays a key role and where this scheme has worked well [9], and discuss preliminary results for molecular crystals and explosives.
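The locus of accessible shock states in such calculations is set by the Rankine-Hugoniot energy condition, E - E0 = (1/2)(P + P0)(V0 - V). As a rough illustration of how a Hugoniot point can be extracted from DFT-MD equation-of-state tables, the sketch below root-finds that condition in temperature at a fixed compression; the interpolated EOS and all numerical values are synthetic placeholders, not results from the paper.

```python
# Minimal sketch of locating a Hugoniot state from tabulated EOS data.
# Assumes DFT-MD has provided energy E(T) and pressure P(T) at a fixed
# compressed volume V; all names and data here are illustrative.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

V0, E0, P0 = 1.00, 0.0, 1e-4   # ambient volume, energy, pressure (arb. units)
V = 0.70                       # compressed specific volume

# Tabulated EOS points at volume V (placeholder values, not real data)
T_grid = np.linspace(300.0, 6000.0, 20)
E_grid = 0.5 + 2.5e-4 * T_grid          # stand-in caloric EOS E(T; V)
P_grid = 5.0 + 1.2e-3 * T_grid          # stand-in thermal pressure P(T; V)

E = interp1d(T_grid, E_grid, kind="cubic")
P = interp1d(T_grid, P_grid, kind="cubic")

def hugoniot_residual(T):
    """Rankine-Hugoniot energy condition: zero on the Hugoniot."""
    return E(T) - E0 - 0.5 * (P(T) + P0) * (V0 - V)

T_h = brentq(hugoniot_residual, T_grid[0], T_grid[-1])
print(f"Hugoniot state at V={V}: T={T_h:.0f}, P={float(P(T_h)):.2f}")
```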

More Details

Architecture of PFC supports analogy, but PFC is not an analogy machine

Cognitive Neuroscience

Speed, Ann E.

In the preceding discussion paper, I proposed a theory of prefrontal cortical organization that was fundamentally intended to address the question: How does prefrontal cortex (PFC) support the various functions for which it seems to be selectively recruited? In so doing, I chose to focus on a particular function, analogy, that seems to have been largely ignored in the theoretical treatments of PFC, but that does underlie many other cognitive functions (Hofstadter, 2001; Holyoak & Thagard, 1997). At its core, this paper was intended to use analogy as a foundation for exploring one possibility for prefrontal function in general, although it is easy to see how the analogy-specific interpretation arises (as in the comment by Ibáñez). In an attempt to address this more foundational question, this response will step away from analogy as a focus, and will address first the various comments from the perspective of the initial motivation for developing this theory, and then specific issues raised by the commentators. © 2010 Psychology Press.

More Details

Introducing the target-matrix paradigm for mesh optimization via node-movement

Proceedings of the 19th International Meshing Roundtable, IMR 2010

Knupp, Patrick K.

A general-purpose algorithm for mesh optimization via node-movement, known as the Target-Matrix Paradigm, is introduced. The algorithm is general-purpose in that it can be applied to a wide variety of mesh and element types, to commonly recurring mesh optimization problems such as shape improvement, and to more unusual problems like boundary-layer preservation with sliver removal, high-order mesh improvement, and edge-length equalization. The algorithm can be considered a direct optimization method in which weights are automatically constructed to enable definitions of application-specific mesh quality. The high-level concepts of the paradigm have been implemented in the Mesquite mesh-improvement library, along with a number of concrete algorithms that address mesh quality issues such as those shown in the examples of the present paper. © 2010 Springer-Verlag Berlin Heidelberg.
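As a loose illustration of the node-movement idea, the sketch below moves one free vertex of a triangle patch to minimize a target-matrix style shape metric mu(T) with T = A W^{-1}. The equilateral target matrix, the choice of metric, and the patch geometry are illustrative assumptions, not Mesquite's actual internals.

```python
# Minimal sketch of target-matrix style node-movement smoothing in 2D.
import numpy as np
from scipy.optimize import minimize

# Fixed boundary vertices of a patch; vertex index 0 is free.
fixed = {1: (1.0, 0.0), 2: (0.6, 1.1), 3: (-0.5, 0.9), 4: (-0.9, -0.2)}
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4)]  # triangles sharing the free node

W = np.array([[1.0, 0.5], [0.0, np.sqrt(3) / 2]])  # ideal equilateral target
W_inv = np.linalg.inv(W)

def mu_shape(A):
    """2D shape metric |T|_F^2 / (2 det T); equals 1 for an ideal element."""
    T = A @ W_inv
    d = np.linalg.det(T)
    if d <= 0:                      # inverted element: heavily penalized
        return 1e6
    return (T * T).sum() / (2.0 * d)

def patch_objective(x_free):
    coords = {0: tuple(x_free), **fixed}
    total = 0.0
    for a, b, c in tris:
        pa, pb, pc = (np.array(coords[i]) for i in (a, b, c))
        A = np.column_stack([pb - pa, pc - pa])  # element Jacobian
        total += mu_shape(A)
    return total

res = minimize(patch_objective, x0=[0.3, 0.4], method="Nelder-Mead")
print("optimized free-node position:", res.x)
```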

More Details

HPC application performance and scaling: Understanding trends and future challenges with application benchmarks on past, present and future tri-lab computing systems

AIP Conference Proceedings

Rajan, Mahesh; Doerfler, Douglas W.

More Details

A combinatorial method for tracing objects using semantics of their shape

Proceedings - Applied Imagery Pattern Recognition Workshop

Diegert, Carl

We present a shape-first approach to finding automobiles and trucks in overhead images and include results from our analysis of an image from the Overhead Imaging Research Dataset (OIRDS) [1]. For the OIRDS, our shape-first approach traces candidate vehicle outlines by exploiting knowledge about an overhead image of a vehicle: a vehicle's outline fits into a rectangle, the rectangle is sized to allow vehicles to use local roads, and rectangles from two different vehicles are disjoint. Our shape-first approach can efficiently process high-resolution overhead imaging over wide areas to provide tips and cues for human analysts, or for subsequent automatic processing using machine learning or other analysis based on color, tone, pattern, texture, size, and/or location (shape first). In fact, computationally intensive structural, syntactic, and statistical analysis may become feasible when a shape-first workflow sends a list of specific tips and cues down a processing pipeline rather than the whole of the wide-area imaging information. This data flow may fit well when bandwidth is limited between an imaging sensor and the computers delivering ad hoc image exploitation. As expected, our early computational experiments find that the shape-first processing stage reliably detects rectangular shapes from vehicles. More intriguing is that our computational experiments with six-inch GSD OIRDS benchmark images show that the shape-first stage can be efficient, and that candidate vehicle locations corresponding to features that do not include vehicles are unlikely to trigger tips and cues. We found that stopping with just the shape-first list of candidate vehicle locations, and then solving a weighted, maximal independent vertex set problem to resolve conflicts among candidate vehicle locations, often correctly traces the vehicles in an OIRDS scene. © 2010 IEEE.
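The conflict-resolution step lends itself to a compact sketch: treat each candidate outline as a scored rectangle and greedily keep a high-weight set of pairwise-disjoint rectangles, a standard heuristic for the weighted independent set problem the abstract mentions. The candidates, scores, and axis-aligned boxes below are illustrative stand-ins for traced outlines.

```python
# Minimal sketch of conflict resolution among candidate vehicle outlines:
# greedily approximate a maximum-weight independent set of rectangles.
from typing import NamedTuple

class Candidate(NamedTuple):
    x0: float
    y0: float
    x1: float
    y1: float
    score: float  # detector confidence

def overlaps(a: Candidate, b: Candidate) -> bool:
    return not (a.x1 <= b.x0 or b.x1 <= a.x0 or a.y1 <= b.y0 or b.y1 <= a.y0)

def greedy_independent_set(cands):
    """Highest-score-first greedy pick of mutually disjoint rectangles."""
    kept = []
    for c in sorted(cands, key=lambda c: c.score, reverse=True):
        if all(not overlaps(c, k) for k in kept):
            kept.append(c)
    return kept

cands = [Candidate(0, 0, 4, 2, 0.9), Candidate(3, 1, 7, 3, 0.8),
         Candidate(8, 0, 12, 2, 0.7), Candidate(3.5, 1.2, 6, 2.8, 0.95)]
for c in greedy_independent_set(cands):
    print(c)
```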

More Details

Arctic sea ice model sensitivities

Bochev, Pavel B.; Paskaleva, Biliana S.

Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice state to internal model parameters. A new sea ice model that holds some promise for improving sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of this MPM sea ice code and compare it with the Los Alamos National Laboratory CICE code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the influence of the parameters on the solution.
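A minimal sketch of the kind of parameter scan a tool like DAKOTA can drive appears below: one-at-a-time central differences give dimensionless sensitivities of a scalar response to each parameter. The response function and the parameter names are placeholders; in the study the "model" is a full Arctic-basin MPM or CICE simulation.

```python
# Minimal sketch of a one-at-a-time sensitivity scan over model parameters.
nominal = {"ice_strength": 2.75e4, "albedo": 0.78, "ocean_drag": 5.5e-3}

def model(p):
    """Placeholder scalar response (e.g., September ice extent)."""
    return 10.0 * p["albedo"] - 1e-5 * p["ice_strength"] + 300.0 * p["ocean_drag"]

def normalized_sensitivities(model, nominal, rel_step=0.05):
    """Central-difference, dimensionless sensitivities d(ln f)/d(ln x)."""
    f0 = model(nominal)
    sens = {}
    for name, x in nominal.items():
        hi = dict(nominal, **{name: x * (1 + rel_step)})
        lo = dict(nominal, **{name: x * (1 - rel_step)})
        sens[name] = (model(hi) - model(lo)) / (2 * rel_step * f0)
    return sens

for name, s in normalized_sensitivities(model, nominal).items():
    print(f"{name:12s} {s:+.3f}")
```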

More Details

Robust emergent climate phenomena associated with the high-sensitivity tail

Boslough, Mark; Levy, Michael N.; Backus, George A.

Because the potential effects of climate change are more severe than had previously been thought, increasing focus on uncertainty quantification is required for risk assessment needed by policy makers. Current scientific efforts focus almost exclusively on establishing best estimates of future climate change. However, the greatest consequences occur in the extreme tail of the probability density functions for climate sensitivity (the 'high-sensitivity tail'). To this end, we are exploring the impacts of newly postulated, highly uncertain, but high-consequence physical mechanisms to better establish the climate change risk. We define consequence in terms of dramatic change in physical conditions and in the resulting socioeconomic impact (hence, risk) on populations. Although we are developing generally applicable risk assessment methods, we have focused our initial efforts on uncertainty and risk analyses for the Arctic region. Instead of focusing on best estimates, requiring many years of model parameterization development and evaluation, we are focusing on robust emergent phenomena (those that are not necessarily intuitive and are insensitive to assumptions, subgrid-parameterizations, and tunings). For many physical systems, under-resolved models fail to generate such phenomena, which only develop when model resolution is sufficiently high. Our ultimate goal is to discover the patterns of emergent climate precursors (those that cannot be predicted with lower-resolution models) that can be used as a 'sensitivity fingerprint' and make recommendations for a climate early warning system that would use satellites and sensor arrays to look for the various predicted high-sensitivity signatures. Our initial simulations are focused on the Arctic region, where underpredicted phenomena such as rapid loss of sea ice are already emerging, and because of major geopolitical implications associated with increasing Arctic accessibility to natural resources, shipping routes, and strategic locations. We anticipate that regional climate will be strongly influenced by feedbacks associated with a seasonally ice-free Arctic, but with unknown emergent phenomena.

More Details

The effect of error models in the multiscale inversion of binary permeability fields

Ray, Jaideep; Van Bloemen Waanders, Bart; Mckenna, Sean A.

We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect the disparate scales, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportion and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and the fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold and the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.
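The two error representations differ only in how the likelihood's variance is modeled, which the sketch below makes concrete: a single homoscedastic variance versus a heteroscedastic variance assumed, consistent with the observation above, to be largest near the percolation threshold. The variance model sigma(phi) and all data values are illustrative assumptions.

```python
# Minimal sketch contrasting homoscedastic and heteroscedastic Gaussian
# log-likelihoods for coarse-scale observables.
import numpy as np

def loglike_homoscedastic(obs, pred, sigma):
    r = obs - pred
    return -0.5 * np.sum((r / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

def loglike_heteroscedastic(obs, pred, phi, s0=0.05, s1=0.4):
    # Illustrative choice: subgrid-model error largest where the inclusion
    # proportion phi is near the percolation threshold (~0.5 here).
    sigma = s0 + s1 * np.exp(-((phi - 0.5) ** 2) / 0.02)
    r = obs - pred
    return -0.5 * np.sum((r / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

obs = np.array([1.00, 0.80, 1.25])
pred = np.array([0.95, 0.90, 1.10])
phi = np.array([0.15, 0.48, 0.75])   # inclusion proportion at each sensor
print(loglike_homoscedastic(obs, pred, 0.1))
print(loglike_heteroscedastic(obs, pred, phi))
```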

More Details

Sensor placement for municipal water networks

Phillips, Cynthia A.; Boman, Erik G.; Carr, Robert D.; Hart, William E.; Berry, Jonathan; Watson, Jean-Paul; Hart, David; Mckenna, Sean A.; Riesen, Lee A.

We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem whose structure is extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, and a Lagrangian method; the Lagrangian method is necessary for solving large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used, and continues to use, to design contamination warning systems for US municipal water systems.
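In p-median form, the placement chooses p sensor locations to minimize the mean impact over incidents, where each incident's impact is determined by the first sensor to detect it. The sketch below uses a simple greedy heuristic on a random impact matrix; TEVA-SPOT's actual solvers (integer programming, local search, Lagrangian relaxation) and its network-simulation impact computations are not reproduced here.

```python
# Minimal sketch of greedy sensor placement for a p-median impact model.
import numpy as np

rng = np.random.default_rng(0)
n_incidents, n_nodes, p = 40, 12, 3
# impact[i, j]: damage incurred for incident i if first detected at node j
impact = rng.uniform(1.0, 100.0, size=(n_incidents, n_nodes))

def greedy_p_median(impact, p):
    chosen = []
    best = np.full(impact.shape[0], np.inf)  # best impact so far per incident
    for _ in range(p):
        # Pick the node giving the largest reduction in mean impact.
        gains = [(np.minimum(best, impact[:, j]).mean(), j)
                 for j in range(impact.shape[1]) if j not in chosen]
        _, j_star = min(gains)
        chosen.append(j_star)
        best = np.minimum(best, impact[:, j_star])
    return chosen, best.mean()

nodes, mean_impact = greedy_p_median(impact, p)
print("sensors at nodes:", nodes, "mean impact:", round(mean_impact, 2))
```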

More Details

Posterior predictive modeling using multi-scale stochastic inverse parameter estimates

Mckenna, Sean A.; Ray, Jaideep; Van Bloemen Waanders, Bart

Multi-scale binary permeability field estimation from static and dynamic data is completed using Markov chain Monte Carlo (MCMC) sampling. The binary permeability field is defined as high-permeability inclusions within a lower-permeability matrix. Static data are obtained as measurements of permeability with support consistent with the coarse-scale discretization. Dynamic data are advective travel times along streamlines calculated through a fine-scale field and averaged for each observation point at the coarse scale. Parameters estimated at the coarse scale (30 x 20 grid) are the spatially varying proportion of the high-permeability phase and the inclusion length and aspect ratio of the high-permeability inclusions. From the non-parametric posterior distributions estimated for these parameters, a recently developed sub-grid algorithm is employed to create an ensemble of realizations representing the fine-scale (3000 x 2000) binary permeability field. Each fine-scale ensemble member is instantiated by convolution of an uncorrelated multiGaussian random field with a Gaussian kernel defined by the estimated inclusion length and aspect ratio. Since the multiGaussian random field is itself a realization of a stochastic process, the procedure for generating fine-scale binary permeability field realizations is also stochastic. Two different methods for performing posterior predictive tests are examined, differing in how the multiGaussian random fields are combined with the kernels defined from the MCMC sampling. Posterior predictive accuracy of the estimated parameters is assessed against a simulated ground truth for predictions at both the coarse scale (effective permeabilities) and the fine scale (advective travel time distributions). The two techniques for conducting posterior predictive tests are compared by their ability to recover the static and dynamic data. The skill of the inference and of the method for generating fine-scale binary permeability fields is evaluated through flow calculations on the resulting fine-scale realizations, comparing them against results obtained with the ground-truth fine-scale and coarse-scale permeability fields.
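The fine-scale instantiation step described above reduces to a few array operations, sketched below under illustrative parameter values: convolve white noise with an anisotropic Gaussian kernel, standardize, and truncate at the quantile matching the inclusion proportion. The grid size, inferred parameters, and permeability values are placeholders.

```python
# Minimal sketch of generating a binary permeability realization by
# kernel convolution and truncation of a multiGaussian random field.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

rng = np.random.default_rng(1)
ny, nx = 200, 300
length, aspect = 8.0, 0.4        # inferred inclusion length / aspect ratio
proportion = 0.3                 # inferred high-permeability proportion

white = rng.standard_normal((ny, nx))
# Anisotropic smoothing: correlation length `length` in x, length*aspect in y.
field = gaussian_filter(white, sigma=(length * aspect, length))
field = (field - field.mean()) / field.std()

# Threshold so that `proportion` of cells fall in the high-perm phase.
binary = field > norm.ppf(1.0 - proportion)
k_field = np.where(binary, 1e-12, 1e-15)   # e.g., permeabilities in m^2
print("realized proportion:", binary.mean())
```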

More Details

Investigating the impact of the Cielo Cray XE6 architecture on scientific application codes

Vaughan, Courtenay T.; Rajan, Mahesh; Barrett, Richard F.; Doerfler, Douglas W.; Pedretti, Kevin P.

Cielo, a Cray XE6, is the newest capability machine of the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket, oct-core AMD Magny-Cours compute nodes linked by Cray's Gemini interconnect. Its primary mission objective is to enable a suite of ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement on a successful architecture previously available to many of our codes, providing a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, supplemented with micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.
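Micro-benchmarks of the sort mentioned above are often simple ping-pong tests; the sketch below measures point-to-point bandwidth between two MPI ranks using mpi4py. The message size and repetition count are arbitrary illustrative choices, and this is not one of the paper's actual benchmarks.

```python
# Minimal ping-pong micro-benchmark sketch; run with `mpiexec -n 2`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes, reps = 1 << 20, 100                 # 1 MiB messages, 100 round trips
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Two messages per round trip => 2 * reps * nbytes bytes moved.
    print(f"bandwidth: {2 * reps * nbytes / elapsed / 1e9:.2f} GB/s")
```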

More Details