Publications

Results 8501–8600 of 9,998

Search results

Current trends in parallel computation and the implications for modeling and optimization

Computer Aided Chemical Engineering

Siirola, John D.

More Details

Finite element solution of optimal control problems arising in semiconductor modeling

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Bochev, Pavel B.; Ridzal, Denis R.

Optimal design, parameter estimation, and inverse problems arising in the modeling of semiconductor devices lead to optimization problems constrained by systems of PDEs. We study the impact of different state equation discretizations on optimization problems whose objective functionals involve flux terms. Galerkin methods, in which the flux is a derived quantity, are compared with mixed Galerkin discretizations where the flux is approximated directly. Our results show that the latter approach leads to more robust and accurate solutions of the optimization problem, especially for highly heterogeneous materials with large jumps in material properties. © 2008 Springer.
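
As an illustrative sketch of this class of PDE-constrained problems (notation here is hypothetical, not taken from the paper): the state u is a scalar potential, the control g a source term, A the possibly discontinuous material coefficient, and the objective penalizes the mismatch between the boundary flux and a target q_d. A standard Galerkin method discretizes u and recovers the flux afterward as σ_h = -A∇u_h, whereas a mixed Galerkin method treats σ = -A∇u as an independent unknown approximated directly.

```latex
\begin{align*}
  \min_{u,\,g}\quad & J(u,g) \;=\; \tfrac12 \int_{\Gamma} \bigl(\boldsymbol{\sigma}\cdot\mathbf{n} - q_d\bigr)^2 \, ds
                      \;+\; \tfrac{\alpha}{2}\int_{\Omega} g^2 \, dx,
                      \qquad \boldsymbol{\sigma} = -A\nabla u,\\
  \text{subject to}\quad & -\nabla\cdot(A\nabla u) = g \ \text{ in } \Omega,
                      \qquad u = 0 \ \text{ on } \partial\Omega .
\end{align*}
```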

More Details

New applications of the verdict library for standardized mesh verification pre, post, and end-to-end processing

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Pébay, Philippe P.; Thompson, David; Shepherd, Jason F.; Knupp, Patrick K.; Lisle, Curtis; Magnotta, Vincent A.; Grosland, Nicole M.

Verdict is a collection of subroutines for evaluating the geometric qualities of triangles, quadrilaterals, tetrahedra, and hexahedra using a variety of functions. A quality is a real number assigned to one of these shapes depending on its particular vertex coordinates. These functions are used to evaluate the input to finite element, finite volume, boundary element, and other types of solvers that approximate the solution to partial differential equations defined over regions of space. This article describes the most recent version of Verdict and provides a summary of the main properties of the quality functions offered by the library. It finally demonstrates the versatility and applicability of Verdict by illustrating its use in several scientific applications that pertain to pre, post, and end-to-end processing.
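
A minimal, self-contained illustration of what a "quality" means here, using a standard equilateral-referenced triangle shape metric. It is written in the spirit of Verdict's quality functions but does not call Verdict's actual API; the function name and test points are made up.

```python
import math

def triangle_shape_quality(p0, p1, p2):
    """Shape quality in (0, 1]: 1 for an equilateral triangle, tending to 0 as
    the triangle degenerates. A real number computed from vertex coordinates,
    in the spirit of Verdict's quality functions (not Verdict's own code)."""
    def d2(a, b):                       # squared edge length
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    edge_sum = d2(p0, p1) + d2(p1, p2) + d2(p2, p0)
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return 4.0 * math.sqrt(3.0) * area / edge_sum if edge_sum > 0 else 0.0

print(triangle_shape_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~1.0 (equilateral)
print(triangle_shape_quality((0, 0), (1, 0), (0.5, 0.01)))              # near 0 (sliver)
```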

More Details

pCAMAL: An embarrassingly parallel hexahedral mesh generator

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Pébay, Philippe P.; Stephenson, Michael B.; Fortier, Leslie A.; Owen, Steven J.; Melander, Darryl J.

This paper describes a distributed-memory, embarrassingly parallel hexahedral mesh generator, pCAMAL (parallel CUBIT Adaptive Mesh Algorithm Library). pCAMAL utilizes the sweeping method following a serial step of geometry decomposition conducted in the CUBIT geometry preparation and mesh generation tool. The utility of pCAMAL in generating large meshes is illustrated, and linear speed-up under load-balanced conditions is demonstrated.

More Details

Limited-memory techniques for sensor placement in water distribution networks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hart, William E.; Berry, Jonathan W.; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul W.

The practical utility of optimization technologies is often impacted by factors that reflect how these tools are used in practice, including whether various real-world constraints can be adequately modeled, the sophistication of the analysts applying the optimizer, and related environmental factors (e.g., whether a company is willing to trust predictions from computational models). Other features are less appreciated, but of equal importance in terms of dictating the successful use of optimization. These include the scale of problem instances, which in practice drives the development of approximate solution techniques, and constraints imposed by the target computing platforms. End-users often lack state-of-the-art computers, and thus runtime and memory limitations are often a significant limiting factor in algorithm design. When coupled with large problem scale, the result is a significant technological challenge. We describe our experience developing and deploying both exact and heuristic algorithms for placing sensors in water distribution networks to mitigate damage due to intentional or accidental introduction of contaminants. The target computing platforms for this application have motivated limited-memory techniques that can optimize large-scale sensor placement problems. © 2008 Springer Berlin Heidelberg.

More Details

Implementing peridynamics within a molecular dynamics code

Computer Physics Communications

Parks, Michael L.; Lehoucq, Richard B.; Plimpton, Steven J.; Silling, Stewart A.

Peridynamics (PD) is a continuum theory that employs a nonlocal model to describe material properties. In this context, nonlocal means that continuum points separated by a finite distance may exert force upon each other. A meshless method results when PD is discretized with material behavior approximated as a collection of interacting particles. This paper describes how PD can be implemented within a molecular dynamics (MD) framework, and provides details of an efficient implementation. This adds a computational mechanics capability to an MD code, enabling simulations at mesoscopic or even macroscopic length and time scales. © 2008 Elsevier B.V.
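
A toy sketch of the discretized, particle-based view described above, assuming the prototype bond-based (microelastic) PD material model with spring constant c; it mirrors the pairwise-force structure an MD neighbor list provides, but it is not the paper's LAMMPS implementation, and all names are illustrative.

```python
import numpy as np

def pd_bond_forces(x, x0, horizon, c):
    """Bond-based peridynamics force sum: points closer than `horizon` in the
    reference configuration x0 interact; the pairwise force acts along the
    deformed bond and is proportional to the bond stretch."""
    f = np.zeros_like(x)
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            r0 = np.linalg.norm(x0[j] - x0[i])      # reference bond length
            if 0.0 < r0 <= horizon:
                bond = x[j] - x[i]                  # deformed bond
                r = np.linalg.norm(bond)
                stretch = (r - r0) / r0
                fij = c * stretch * bond / r        # force on i due to j
                f[i] += fij
                f[j] -= fij                         # Newton's third law, as in MD
    return f
```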

More Details

Remarks on mesh quality

46th AIAA Aerospace Sciences Meeting and Exhibit

Knupp, Patrick K.

Various aspects of mesh quality are surveyed to clarify the disconnect between the traditional uses of mesh quality metrics within industry and the fact that quality ultimately depends on the solution to the physical problem. Truncation error analysis for finite difference methods reveals no clear connection to most traditional mesh quality metrics. Finite element bounds to the interpolation error can be shown, in some cases, to be related to known quality metrics such as the condition number. On the other hand, the use of quality metrics that do not take solution characteristics into account can be valid in certain circumstances, primarily as a means of automatically detecting defective meshes. The use of such metrics when applied to simulations for which quality is highly dependent on the physical solution is clearly inappropriate. Various flaws and problems with existing quality metrics are mentioned, along with a discussion on the use of threshold values. In closing, the author advocates the investigation of explicitly-referenced quality metrics as a potential means of bridging the gap between a priori quality metrics and solution-dependent metrics.

More Details

Low-memory Lagrangian relaxation methods for sensor placement in municipal water networks

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Berry, Jonathan W.; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.

Placing sensors in municipal water networks to protect against a set of contamination events is a classic p-median problem for most objectives when we assume that sensors are perfect. Many researchers have proposed exact and approximate solution methods for this p-median formulation. For full-scale networks with large contamination event suites, one must generally rely on heuristic methods to generate solutions. These heuristics provide feasible solutions, but give no quality guarantee relative to the optimal placement. In this paper we apply a Lagrangian relaxation method in order to compute lower bounds on the expected impact of suites of contamination events. In all of our experiments with single objectives, these lower bounds establish that the GRASP local search method generates solutions that are provably optimal to within a fraction of a percentage point. Our Lagrangian heuristic also provides good solutions itself and requires only a fraction of the memory of GRASP. We conclude by describing two variations of the Lagrangian heuristic: an aggregated version that trades off solution quality for further memory savings, and a multi-objective version that balances the primary objective against additional goals. © 2008 ASCE.
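
For reference, a sketch of the standard p-median sensor-placement model the abstract refers to, written with illustrative notation: α_a is the probability of contamination event a, d_{ai} the impact if event a is first detected by a sensor at candidate location i, and p the sensor budget. The Lagrangian lower bound is obtained by dualizing the assignment constraints and maximizing the resulting bound with subgradient steps.

```latex
\begin{align*}
  \min_{x,\,s}\quad & \sum_{a} \alpha_a \sum_{i} d_{ai}\, x_{ai}
     && \text{(expected impact)}\\
  \text{s.t.}\quad & \sum_{i} x_{ai} = 1 \ \ \forall a,
     \qquad x_{ai} \le s_i \ \ \forall a,i,
     \qquad \sum_{i} s_i \le p,
     \qquad x_{ai},\, s_i \in \{0,1\}.
\end{align*}
% Dualizing the constraints \sum_i x_{ai} = 1 with multipliers \lambda_a yields a
% separable subproblem whose value L(\lambda) is a valid lower bound for any \lambda.
```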

More Details

Preparing for the aftermath: Using emotional agents in game-based training for disaster response

2008 IEEE Symposium on Computational Intelligence and Games, CIG 2008

Djordjevich Reyna, Donna D.; Xavier, Patrick G.; Bernard, Michael L.; Whetzel, Jonathan H.; Glickman, Matthew R.; Verzi, Stephen J.

Ground Truth, a training game developed by Sandia National Laboratories in partnership with the University of Southern California GamePipe Lab, puts a player in the role of an Incident Commander working with teammate agents to respond to urban threats. These agents simulate certain emotions that a responder may feel during this high-stress situation. We construct psychologically plausible models compliant with the Sandia Human Embodiment and Representation Cognitive Architecture (SHERCA) that are run on the Sandia Cognitive Runtime Engine with Active Memory (SCREAM) software. SCREAM's computational representations for modeling human decision-making combine aspects of ANNs and fuzzy logic networks. This paper gives an overview of Ground Truth and discusses the adaptation of the SHERCA and SCREAM into the game. We include a semiformal description of SCREAM. © 2008 IEEE.

More Details

The TEVA-SPOT toolkit for drinking water contaminant warning system design

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Hart, William E.; Berry, Jonathan W.; Boman, Erik G.; Murray, Regan; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul W.

We present the TEVA-SPOT Toolkit, a sensor placement optimization tool developed within the USEPA TEVA program. The TEVA-SPOT Toolkit provides a sensor placement framework that facilitates research in sensor placement optimization and enables the practical application of sensor placement solvers to real-world CWS design applications. This paper provides an overview of its key features, and then illustrates how this tool can be flexibly applied to solve a variety of different types of sensor placement problems. © 2008 ASCE.

More Details

Tolerating the community detection resolution limit with edge weighting

Proposed for publication in the Proceedings of the National Academy of Sciences.

Hendrickson, Bruce A.; Laviolette, Randall A.; Phillips, Cynthia A.; Berry, Jonathan W.

Communities of vertices within a giant network such as the World-Wide Web are likely to be vastly smaller than the network itself. However, Fortunato and Barthelemy have proved that modularity maximization algorithms for community detection may fail to resolve communities with fewer than √(L/2) edges, where L is the number of edges in the entire network. This resolution limit leads modularity maximization algorithms to have notoriously poor accuracy on many real networks. Fortunato and Barthelemy's argument can be extended to networks with weighted edges as well, and we derive this corollary argument. We conclude that weighted modularity algorithms may fail to resolve communities with total edge weight less than √(Wε/2), where W is the total edge weight in the network and ε is the maximum weight of an inter-community edge. If ε is small, then small communities can be resolved. Given a weighted or unweighted network, we describe how to derive new edge weights in order to achieve a low ε, we modify the 'CNM' community detection algorithm to maximize weighted modularity, and we show that the resulting algorithm has greatly improved accuracy. In experiments with an emerging community standard benchmark, we find that our simple CNM variant is competitive with the most accurate community detection methods yet proposed.
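
A minimal sketch of the weighted-modularity objective that such a modified CNM algorithm maximizes; the paper's edge re-weighting scheme is not reproduced here, and the function below is illustrative rather than taken from the paper.

```python
import numpy as np

def weighted_modularity(A, communities):
    """Q = (1/2W) * sum_ij [A_ij - k_i k_j / (2W)] * delta(c_i, c_j), where A is
    a symmetric weighted adjacency matrix, 2W its total entry sum, k_i the
    weighted degree of vertex i, and c_i the community label of vertex i."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                         # weighted degrees
    two_w = A.sum()                           # equals 2W for a symmetric matrix
    c = np.asarray(communities)
    same_community = (c[:, None] == c[None, :])
    return ((A - np.outer(k, k) / two_w) * same_community).sum() / two_w
```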

More Details

Improved parallel data partitioning by nested dissection with applications to information retrieval

Proposed for publication in Parallel Computing.

Boman, Erik G.; Chevalier, Cedric C.

The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that partitioning time can be substantially reduced by using the SCOTCH software, and that partition quality improves in some cases.
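
As a small, concrete illustration of the communication-volume metric being minimized, here is the simplest case of a 1D row partition of a square sparse matrix (not the paper's 2D nested-dissection method); the function and argument names are made up.

```python
from collections import defaultdict

def comm_volume_row_partition(nonzeros, row_part):
    """Communication volume of the sparse mat-vec y = A x when rows (and the
    conforming entries of x) are distributed by `row_part`. Every part that has
    a nonzero in column j but does not own x_j must receive x_j once, so the
    volume is sum_j |{parts touching column j} minus {owner of x_j}|."""
    col_parts = defaultdict(set)
    for i, j in nonzeros:                     # nonzero pattern as (row, col) pairs
        col_parts[j].add(row_part[i])
    return sum(len(parts - {row_part[j]}) for j, parts in col_parts.items())

# Tridiagonal 4x4 example split over two parts: only x_1 and x_2 cross the cut.
pattern = [(i, j) for i in range(4) for j in range(4) if abs(i - j) <= 1]
print(comm_volume_row_partition(pattern, row_part=[0, 0, 1, 1]))  # -> 2
```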

More Details

The Xygra gun simulation tool

Garasi, Christopher J.; Robinson, Allen C.; Russo, Thomas V.; Lamppa, Derek C.

Inductive electromagnetic launchers, or coilguns, use discrete solenoidal coils to accelerate a coaxial conductive armature. To date, Sandia has been using an internally developed code, SLINGSHOT, as a point-mass lumped circuit element simulation tool for modeling coilgun behavior for design and verification purposes. This code has shortcomings in terms of accurately modeling gun performance under stressful electromagnetic propulsion environments. To correct for these limitations, it was decided to attempt to closely couple two Sandia simulation codes, Xyce and ALEGRA, to develop a more rigorous simulation capability for demanding launch applications. This report summarizes the modifications made to each respective code and the path forward to completing interfacing between them.

More Details

Distance-avoiding sequences for extremely low-bandwidth authentication

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Collins, Michael J.; Mitchell, Scott A.

We develop a scheme for providing strong cryptographic authentication on a stream of messages which consumes very little bandwidth (as little as one bit per message) and is robust in the presence of dropped messages. Such a scheme should be useful for extremely low-power, low-bandwidth wireless sensor networks and "smart dust" applications. The tradeoffs among security, memory, bandwidth, and tolerance for missing messages give rise to several new optimization problems. We report on experimental results and derive bounds on the performance of the scheme. © 2008 Springer-Verlag Berlin Heidelberg.

More Details

Inexact Newton dogleg methods

SIAM Journal on Numerical Analysis

Pawlowski, Roger P.; Simonis, Joseph P.; Walker, Homer F.; Shadid, John N.

The dogleg method is a classical trust-region technique for globalizing Newton's method. While it is widely used in optimization, including large-scale optimization via truncated-Newton approaches, its implementation in general inexact Newton methods for systems of nonlinear equations can be problematic. In this paper, we first outline a very general dogleg method suitable for the general inexact Newton context and provide a global convergence analysis for it. We then discuss certain issues that may arise with the standard dogleg implementational strategy and propose modified strategies that address them. Newton-Krylov methods have provided important motivation for this work, and we conclude with a report on numerical experiments involving a Newton-GMRES dogleg method applied to benchmark CFD problems. © 2008 Society for Industrial and Applied Mathematics.
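
For context, a textbook sketch of the classical dogleg step that the paper generalizes, phrased for a local quadratic model with gradient g and positive-definite model Hessian B (in the nonlinear-equations setting, typically g = Jᵀ F and B ≈ Jᵀ J). This is the standard strategy only, not the inexact-Newton variant developed in the paper.

```python
import numpy as np

def dogleg_step(g, B, newton_step, delta):
    """Return the dogleg step for trust-region radius `delta`: the Newton step
    if it lies inside the region, otherwise the steepest-descent (Cauchy)
    segment followed by the segment toward the Newton point, truncated at the
    trust-region boundary."""
    if np.linalg.norm(newton_step) <= delta:
        return newton_step
    p_cauchy = -(g @ g) / (g @ (B @ g)) * g       # model minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)     # scaled steepest descent
    # Intersect the segment p_cauchy -> newton_step with ||p|| = delta.
    d = newton_step - p_cauchy
    a, b, c = d @ d, 2.0 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + tau * d
```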

More Details

Solving elliptic finite element systems in near-linear time with support preconditioners

SIAM Journal on Numerical Analysis

Boman, Erik G.; Hendrickson, Bruce A.; Vavasis, Stephen

We consider linear systems arising from the use of the finite element method for solving scalar linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and, in particular, it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the support number bounds, which control the number of iterations in the preconditioned iterative solver, depend on mesh quality measures but not on the problem size or shape of the domain. © 2008 Society for Industrial and Applied Mathematics.

More Details

The Arctic as a test case for an assessment of climate impacts on national security

Boslough, Mark B.; Taylor, Mark A.; Zak, Bernard D.; Backus, George A.

The Arctic region is rapidly changing in a way that will affect the rest of the world. Parts of Alaska, western Canada, and Siberia are currently warming at twice the global rate. This warming trend is accelerating permafrost deterioration, coastal erosion, snow and ice loss, and other changes that are a direct consequence of climate change. Climatologists have long understood that changes in the Arctic would be faster and more intense than elsewhere on the planet, but the degree and speed of the changes were underestimated compared to recent observations. Policy makers have not yet had time to examine the latest evidence or appreciate the nature of the consequences. Thus, the abruptness and severity of an unfolding Arctic climate crisis has not been incorporated into long-range planning. The purpose of this report is to briefly review the physical basis for global climate change and Arctic amplification, summarize the ongoing observations, discuss the potential consequences, explain the need for an objective risk assessment, develop scenarios for future change, review existing modeling capabilities and the need for better regional models, and finally to make recommendations for Sandia's future role in preparing our leaders to deal with impacts of Arctic climate change on national security. Accurate and credible regional-scale climate models are still several years in the future, and those models are essential for estimating climate impacts around the globe. This study demonstrates how a scenario-based method may be used to give insights into climate impacts on a regional scale and possible mitigation. Because of our experience in the Arctic and widespread recognition of the Arctic's importance in the Earth climate system we chose the Arctic as a test case for an assessment of climate impacts on national security. Sandia can make a swift and significant contribution by applying modeling and simulation tools with internal collaborations as well as with outside organizations. Because changes in the Arctic environment are happening so rapidly, a successful program will be one that can adapt very quickly to new information as it becomes available, and can provide decision makers with projections on the 1-5 year time scale over which the most disruptive, high-consequence changes are likely to occur. The greatest short-term impact would be to initiate exploratory simulations to discover new emergent and robust phenomena associated with one or more of the following changing systems: Arctic hydrological cycle, sea ice extent, ocean and atmospheric circulation, permafrost deterioration, carbon mobilization, Greenland ice sheet stability, and coastal erosion. Sandia can also contribute to new technology solutions for improved observations in the Arctic, which is currently a data-sparse region. Sensitivity analyses have the potential to identify thresholds which would enable the collaborative development of 'early warning' sensor systems to seek predicted phenomena that might be precursory to major, high-consequence changes. Much of this work will require improved regional climate models and advanced computing capabilities. Socio-economic modeling tools can help define human and national security consequences. Formal uncertainty quantification must be an integral part of any results that emerge from this work.

More Details

Re-thinking linearized coupled-cluster theory

Proposed for publication in the Journal of Chemical Physics.

Taube, Andrew G.

Hermitian linearized coupled-cluster methods have several advantages over more conventional coupled-cluster methods including facile analytical gradients for searching a potential energy surface. A persistent failure of linearized methods, however, is the presence of singularities on the potential energy surface. A simple Tikhonov regularization procedure is introduced that can eliminate this singularity. Application of the regularized linearized coupled-cluster singles and doubles (CCSD) method to both equilibrium structures and transition states shows that it is competitive with or better than conventional CCSD, and is more amenable to parallelization.
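
Schematically, Tikhonov regularization of a generic ill-conditioned linear problem A t = b (standing in here for the linearized amplitude equations near a singular point on the potential energy surface) replaces the exact solve with a damped one; this states the general idea only, not the report's working equations.

```latex
\[
  t_\lambda \;=\; \arg\min_{t}\ \bigl\|A\,t - b\bigr\|^2 + \lambda^2 \|t\|^2
  \quad\Longleftrightarrow\quad
  \bigl(A^{\mathsf T}A + \lambda^2 I\bigr)\, t_\lambda \;=\; A^{\mathsf T} b ,
\]
```
with the regularization parameter λ > 0 suppressing the directions responsible for the singularity.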

More Details

Distributed micro-releases of bioterror pathogens: threat characterizations and epidemiology from uncertain patient observables

Adams, Brian M.; Devine, Karen D.; Najm, H.N.; Marzouk, Youssef M.

Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.
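
Schematically, and with purely illustrative notation, the non-contagious characterization task is a Bayesian inverse problem: infer attack parameters θ = (N, t₀, D) (number infected, time of release, mean dose) from the symptom-onset times t₁, …, t_k of the first diagnosed patients.

```latex
\[
  p\bigl(\theta \mid t_{1:k}\bigr) \;\propto\; p\bigl(t_{1:k} \mid \theta\bigr)\, p(\theta),
  \qquad
  p\bigl(t_{1:k} \mid \theta\bigr) \;=\; \prod_{j=1}^{k} f\bigl(t_j \mid \theta\bigr),
\]
```
where f is a disease-progression (incubation) model for the pathogen, e.g. a dose-dependent incubation distribution for anthrax.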

More Details

Post-processing V&V Level II ASC Milestone (2843) results

Moreland, Kenneth D.; Wilke, Jason W.; Attaway, Stephen W.; Karelitz, David B.

The 9/30/2008 ASC Level 2 Post-Processing V&V Milestone (Milestone 2843) contains functionality required by the user community for certain verification and validation tasks. These capabilities include fragment detection from CTH simulation data, fragment characterization and analysis, and fragment sorting and display operations. The capabilities were tested extensively both on sample and actual simulations. In addition, a number of stretch criteria were met including a comparison between simulated and test data, and the ability to output each fragment as an individual geometric file.

More Details

Nanoparticle flow, ordering and self-assembly

Grest, Gary S.; Brown, William M.; Lechman, Jeremy B.; Petersen, Matt K.; Plimpton, Steven J.; Schunk, Randy

Nanoparticles are now more than ever being used to tailor materials function and performance in differentiating technologies because of their profound effect on thermo-physical, mechanical and optical properties. The most feasible way to disperse particles in a bulk material or control their packing at a substrate is through fluidization in a carrier, followed by solidification through solvent evaporation/drying/curing/sintering. Unfortunately, processing particles as concentrated, fluidized suspensions into useful products remains an art, largely because the effect of particle shape and volume fraction on fluidic properties and suspension stability remains unexplored in a regime where particle-particle interaction mechanics is prevalent. To achieve a stronger scientific understanding of the factors that control nanoparticle dispersion and rheology, we have developed a multiscale modeling approach to bridge scales between atomistic and molecular-level forces active in dense nanoparticle suspensions. At the largest length scale, two 'coarse-grained' numerical techniques have been developed and implemented to provide for high-fidelity numerical simulations of the rheological response and dispersion characteristics typical in a processing flow. The first is a coupled Navier-Stokes/discrete element method in which the background solvent is treated by finite element methods. The second is a particle-based method known as stochastic rotational dynamics. These two methods provide a new capability representing a 'bridge' between the molecular scale and the engineering scale, allowing the study of fluid-nanoparticle systems over a wide range of length and time scales as well as particle concentrations. To validate these new methodologies, multi-million-atom simulations explicitly including the solvent have been carried out. These simulations have been vital in establishing the necessary 'subgrid' models for accurate prediction at a larger scale and refining the two coarse-grained methodologies.

More Details

Verification for ALEGRA using magnetized shock hydrodynamics problems

Gardiner, Thomas A.; Rider, William J.; Robinson, Allen C.

Two classical verification problems from shock hydrodynamics are adapted for verification in the context of ideal magnetohydrodynamics (MHD) by introducing strong transverse magnetic fields, and simulated using the finite element Lagrange-remap MHD code ALEGRA for purposes of rigorous code verification. The concern in these verification tests is that inconsistencies related to energy advection are inherent in Lagrange-remap formulations for MHD, such that conservation of the kinetic and magnetic components of the energy may not be maintained. Hence, total energy conservation may also not be maintained. MHD shock propagation may therefore not be treated consistently in Lagrange-remap schemes, as errors in energy conservation are known to result in unphysical shock wave speeds and post-shock states. That kinetic energy is not conserved in Lagrange-remap schemes is well known, and the correction of DeBar has been shown to eliminate the resulting errors. Here, the consequences of the failure to conserve magnetic energy are revealed using order verification in the two magnetized shock-hydrodynamics problems. Further, a magnetic analog to the DeBar correction is proposed and its accuracy evaluated using this verification testbed. Results indicate that only when the total energy is conserved, by implementing both the kinetic and magnetic components of the DeBar correction, can simulations in Lagrange-remap formulation capture MHD shock propagation accurately. Additional insight is provided by the verification results, regarding the implementation of the DeBar correction and the advection scheme.

More Details

Multilinear algebra for analyzing data with multiple linkages

Dunlavy, Daniel D.; Kolda, Tamara G.; Kegelmeyer, William P.

Link analysis typically focuses on a single type of connection, e.g., two journal papers are linked because they are written by the same author. However, often we want to analyze data that has multiple linkages between objects, e.g., two papers may have the same keywords and one may cite the other. The goal of this paper is to show that multilinear algebra provides a tool for multilink analysis. We analyze five years of publication data from journals published by the Society for Industrial and Applied Mathematics. We explore how papers can be grouped in the context of multiple link types using a tensor to represent all the links between them. A PARAFAC decomposition on the resulting tensor yields information similar to the SVD decomposition of a standard adjacency matrix. We show how the PARAFAC decomposition can be used to understand the structure of the document space and define paper-paper similarities based on multiple linkages. Examples are presented where the decomposed tensor data is used to find papers similar to a body of work (e.g., related by topic or similar to a particular author's papers), find related authors using linkages other than explicit co-authorship or citations, distinguish between papers written by different authors with the same name, and predict the journal in which a paper was published.
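
A hypothetical end-to-end recipe showing the tensor construction and CP/PARAFAC step described above, using the open-source tensorly package as one possible tool; the paper's own tooling and data are not assumed, and all sizes and variable names below are made up.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

n_papers, n_link_types, rank = 500, 3, 10
# T[i, j, k] = strength of link type k (citation, shared keyword, shared
# author, ...) between papers i and j -- random stand-in data here.
T = tl.tensor(np.random.rand(n_papers, n_papers, n_link_types))

# Rank-R CP/PARAFAC model: one factor matrix per mode.
weights, factors = parafac(T, rank=rank)
papers_a, papers_b, link_types = factors

# A paper-paper similarity that blends all link types, analogous to forming
# similarities from the singular vectors of a single adjacency matrix
# (the weights default to ones unless the factors are normalized).
similarity = papers_a @ np.diag(np.asarray(weights)) @ papers_b.T
```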

More Details

CPOPT: optimization for fitting CANDECOMP/PARAFAC models

Kolda, Tamara G.; Acar Ataman, Evrim N.; Dunlavy, Daniel D.

Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
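
For reference, the optimization formulation behind a CPOPT-style method for a three-way tensor, together with the standard matricized gradient (∗ denotes the Hadamard product, ⊙ the Khatri-Rao product, X₍₁₎ the mode-1 unfolding); this states the general CP objective, not implementation details from the report.

```latex
\begin{align*}
  f(A,B,C) &= \tfrac12 \Bigl\| \mathcal{X} - \sum_{r=1}^{R} a_r \circ b_r \circ c_r \Bigr\|_F^2,\\
  \nabla_A f &= A\,\bigl(B^{\mathsf T}B \ast C^{\mathsf T}C\bigr) - X_{(1)}\,\bigl(C \odot B\bigr),
\end{align*}
```
with analogous expressions for ∇_B f and ∇_C f; an all-at-once optimizer such as nonlinear conjugate gradients can then be applied to all factor matrices simultaneously, rather than cycling through them as ALS does.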

More Details

Qthreads: An API for programming with millions of lightweight threads

IPDPS Miami 2008 - Proceedings of the 22nd IEEE International Parallel and Distributed Processing Symposium, Program and CD-ROM

Wheeler, Kyle B.; Murphy, Richard C.; Thain, Douglas

Large scale hardware-supported multithreading, an attractive means of increasing computational power, benefits significantly from low per-thread costs. Hardware support for lightweight threads is a developing area of research. Each architecture with such support provides a unique interface, hindering development for them and comparisons between them. A portable abstraction that provides basic lightweight thread control and synchronization primitives is needed. Such an abstraction would assist in exploring both the architectural needs of large scale threading and the semantic power of existing languages. Managing thread resources is a problem that must be addressed if massive parallelism is to be popularized. The qthread abstraction enables development of large-scale multithreading applications on commodity architectures. This paper introduces the qthread API and its Unix implementation, discusses resource management, and presents performance results from the HPCCG benchmark. ©2008 IEEE.

More Details

LDRD final report for improving human effectiveness for extreme-scale problem solving: assessing the effectiveness of electronic brainstorming in an industrial setting

Dornburg, Courtney S.; Adams, Susan S.; Hendrickson, Stacey M.; Davidson, George S.

An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in order to address difficult, real world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness have been limited. Previous research using short-term, laboratory experiments have engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Employees and contractors at a national security laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a 'wickedly' difficult problem. The data demonstrate that (for this design) individuals perform at least as well as groups in producing quantity of electronic ideas, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p<0.05) out-performed the group working together. When idea quality is used as the benchmark of success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses, rather than electronically convening a group. This research suggests that industrial reliance upon electronic problem solving groups should be tempered, and large nominal groups might be the more appropriate vehicle for solving wicked corporate issues.

More Details

Climate-derived tensions in Arctic security

Backus, George A.; Strickland, James H.

Globally, there is no lack of security threats. Many of them demand priority engagement and there can never be adequate resources to address all threats. In this context, climate is just another aspect of global security and the Arctic just another region. In light of physical and budgetary constraints, new security needs must be integrated and prioritized with existing ones. This discussion approaches the security impacts of climate from that perspective, starting with the broad security picture and establishing how climate may affect it. This method provides a different view from one that starts with climate and projects it, in isolation, as the source of a hypothetical security burden. That said, the Arctic does appear to present high-priority security challenges. Uncertainty in the timing of an ice-free Arctic affects how quickly it will become a security priority. Uncertainty in the emergent extreme and variable weather conditions will determine the difficulty (cost) of maintaining adequate security (order) in the area. The resolution of sovereignty boundaries affects the ability to enforce security measures, and the U.S. will most probably need a military presence to back-up negotiated sovereignty agreements. Without additional global warming, technology already allows the Arctic to become a strategic link in the global supply chain, possibly with northern Russia as its main hub. Additionally, the multinational corporations reaping the economic bounty may affect security tensions more than nation-states themselves. Countries will depend ever more heavily on the global supply chains. China has particular needs to protect its trade flows. In matters of security, nation-state and multinational-corporate interests will become heavily intertwined.

More Details

R&D for computational cognitive and social models: foundations for model evaluation through verification and validation (final LDRD report)

McNamara, Laura A.; Trucano, Timothy G.; Backus, George A.; Mitchell, Scott A.

Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.

More Details

High-throughput proteomics: optical approaches

Davidson, George S.

Realistic cell models could greatly accelerate our ability to engineer biochemical pathways and the production of valuable organic products, which would be of great use in the development of biofuels, pharmaceuticals, and the crops for the next green revolution. However, this level of engineering will require a great deal more knowledge about the mechanisms of life than is currently available. In particular, we need to understand the interactome (which proteins interact) as it is situated in the three dimensional geometry of the cell (i.e., a situated interactome), and the regulation/dynamics of these interactions. Methods for optical proteomics have become available that allow the monitoring and even disruption/control of interacting proteins in living cells. Here, a range of these methods is reviewed with respect to their role in elucidating the interactome and the relevant spatial localizations. Development of these technologies and their integration into the core competencies of research organizations can position whole institutions and teams of researchers to lead in both the fundamental science and the engineering applications of cellular biology. That leadership could be particularly important with respect to problems of national urgency centered around security, biofuels, and healthcare.

More Details

Capabilities for Uncertainty in Predictive Science (LDRD Final Report)

Phipps, Eric T.; Eldred, Michael S.; Salinger, Andrew G.

Predictive simulation of systems comprised of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow for heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.

More Details

Asymmetric cubature formulas for polynomial integration in the triangle and square

Journal of Computational and Applied Mathematics

Taylor, Mark A.

We present five new cubature formulas in the triangle and square for exact integration of polynomials. The points were computed numerically with a cardinal function algorithm which does not impose any symmetry requirements on the points. Cubature formulas are presented which exactly integrate polynomials of degrees 10, 11, and 12 in the triangle and degrees 10 and 12 in the square. They have positive weights, contain no points outside the domain, and have fewer points than previously known results. © 2007 Elsevier B.V. All rights reserved.
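
Concretely, an n-point cubature rule of degree d on a domain Ω (here the triangle or the square) with points (x_i, y_i) and positive weights w_i satisfies

```latex
\[
  \int_{\Omega} p(x,y)\, dA \;=\; \sum_{i=1}^{n} w_i\, p(x_i, y_i)
  \qquad \text{for every polynomial } p \text{ of total degree } \le d,
\]
```
and the rules reported here reach d = 10, 11, 12 on the triangle and d = 10, 12 on the square without imposing any symmetry on the point set.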

More Details

Computational modeling of analogy: Destined ever to only be metaphor?

Behavioral and Brain Sciences

Speed, Ann S.

The target article by Leech et al. presents a compelling computational theory of analogy-making. However, there is a key difficulty that persists in theoretical treatments of analogy-making, computational and otherwise: namely, the lack of a detailed account of the neurophysiological mechanisms that give rise to analogy behavior. My commentary explores this issue. © 2008 Cambridge University Press.

More Details