Verdict is a collection of subroutines for evaluating the geometric qualities of triangles, quadrilaterals, tetrahedra, and hexahedra using a variety of functions. A quality is a real number assigned to one of these shapes as a function of its vertex coordinates. These functions are used to evaluate the input to finite element, finite volume, boundary element, and other types of solvers that approximate the solution to partial differential equations defined over regions of space. This article describes the most recent version of Verdict and provides a summary of the main properties of the quality functions offered by the library. Finally, it demonstrates the versatility and applicability of Verdict by illustrating its use in several scientific applications spanning pre-processing, post-processing, and end-to-end processing.
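As a library-agnostic illustration of what such a quality function looks like, the sketch below computes one common triangle metric, the ratio of the circumradius to twice the inradius, directly from vertex coordinates; the function name and normalization are illustrative and are not necessarily Verdict's definitions.

```python
import numpy as np

def triangle_radius_ratio(p0, p1, p2):
    """Illustrative triangle quality: circumradius over twice the inradius.
    Equals 1 for an equilateral triangle and grows as the element degrades.
    (Generic metric for illustration; not necessarily Verdict's definition.)"""
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p2 - p0)
    c = np.linalg.norm(p0 - p1)
    s = 0.5 * (a + b + c)                                      # semiperimeter
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    R = a * b * c / (4.0 * area)                               # circumradius
    r = area / s                                               # inradius
    return R / (2.0 * r)

# Near 1.0 for an equilateral triangle, much larger for a sliver element.
print(triangle_radius_ratio(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                            np.array([0.5, np.sqrt(3.0) / 2.0])))
```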
This paper describes a distributed-memory, embarrassingly parallel hexahedral mesh generator, pCAMAL (parallel CUBIT Adaptive Mesh Algorithm Library). pCAMAL utilizes the sweeping method following a serial step of geometry decomposition conducted in the CUBIT geometry preparation and mesh generation tool. The utility of pCAMAL in generating large meshes is illustrated, and linear speed-up under load-balanced conditions is demonstrated.
Various aspects of mesh quality are surveyed to clarify the disconnect between the traditional uses of mesh quality metrics within industry and the fact that quality ultimately depends on the solution to the physical problem. Truncation error analysis for finite difference methods reveals no clear connection to most traditional mesh quality metrics. Finite element bounds on the interpolation error can be shown, in some cases, to be related to known quality metrics such as the condition number. On the other hand, the use of quality metrics that do not take solution characteristics into account can be valid in certain circumstances, primarily as a means of automatically detecting defective meshes. The use of such metrics when applied to simulations for which quality is highly dependent on the physical solution is clearly inappropriate. Various flaws and problems with existing quality metrics are mentioned, along with a discussion on the use of threshold values. In closing, the author advocates the investigation of explicitly-referenced quality metrics as a potential means of bridging the gap between a priori quality metrics and solution-dependent metrics.
Communities of vertices within a giant network such as the World-Wide Web are likely to be vastly smaller than the network itself. However, Fortunato and Barthelemy have proved that modularity maximization algorithms for community detection may fail to resolve communities with fewer than √(L/2) edges, where L is the number of edges in the entire network. This resolution limit leads modularity maximization algorithms to have notoriously poor accuracy on many real networks. Fortunato and Barthelemy's argument can be extended to networks with weighted edges as well, and we derive this corollary argument. We conclude that weighted modularity algorithms may fail to resolve communities with less than √(Wε/2) total edge weight, where W is the total edge weight in the network and ε is the maximum weight of an inter-community edge. If ε is small, then small communities can be resolved. Given a weighted or unweighted network, we describe how to derive new edge weights in order to achieve a low ε, we modify the 'CNM' community detection algorithm to maximize weighted modularity, and we show that the resulting algorithm has greatly improved accuracy. In experiments with an emerging community standard benchmark, we find that our simple CNM variant is competitive with the most accurate community detection methods yet proposed.
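A minimal sketch of the overall workflow, using networkx's CNM-style greedy modularity maximization with an edge-weight argument; the reweighting rule shown (boosting edges whose endpoints share neighbors) is only a hypothetical stand-in for the weight-derivation scheme described above, and the benchmark graph is illustrative.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def reweight(G):
    """Hypothetical reweighting: emphasize edges whose endpoints share many
    neighbors, a stand-in for the paper's derived weights (not its scheme)."""
    H = G.copy()
    for u, v in H.edges():
        common = len(list(nx.common_neighbors(G, u, v)))
        H[u][v]['weight'] = 1.0 + common
    return H

G = nx.karate_club_graph()                       # small illustrative network
H = reweight(G)
# Greedy (CNM-style) maximization of weighted modularity.
communities = greedy_modularity_communities(H, weight='weight')
print([sorted(c) for c in communities])
```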
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that the partitioning time can be substantially reduced by using the SCOTCH software, and that partition quality improves in some cases as well.
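For readers unfamiliar with the kernel, the sketch below spells out sparse matrix-vector multiplication in compressed sparse row (CSR) form on a toy matrix; the assignment of matrix rows (or nonzeros) and vector entries to processes determines the communication volume that the partitioners above aim to minimize. The example is generic and not taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy sparse matrix; y = A @ x is the kernel whose data distribution matters.
A = csr_matrix(np.array([[0., 2., 0., 1.],
                         [1., 0., 0., 0.],
                         [0., 3., 4., 0.]]))
x = np.ones(A.shape[1])

# Explicit CSR loop, equivalent to y = A @ x: each row i needs exactly the
# x entries whose columns appear in that row, which is what drives
# communication when rows and vector entries live on different processes.
y = np.zeros(A.shape[0])
for i in range(A.shape[0]):
    for p in range(A.indptr[i], A.indptr[i + 1]):
        y[i] += A.data[p] * x[A.indices[p]]
assert np.allclose(y, A @ x)
```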
Inductive electromagnetic launchers, or coilguns, use discrete solenoidal coils to accelerate a coaxial conductive armature. To date, Sandia has been using an internally developed code, SLINGSHOT, as a point-mass, lumped-circuit-element simulation tool for modeling coilgun behavior for design and verification purposes. This code has shortcomings in accurately modeling gun performance under stressful electromagnetic propulsion environments. To address these limitations, two Sandia simulation codes, Xyce and ALEGRA, are being closely coupled to provide a more rigorous simulation capability for demanding launch applications. This report summarizes the modifications made to each code and the path forward for completing the interface between them.
The Arctic region is rapidly changing in a way that will affect the rest of the world. Parts of Alaska, western Canada, and Siberia are currently warming at twice the global rate. This warming trend is accelerating permafrost deterioration, coastal erosion, snow and ice loss, and other changes that are a direct consequence of climate change. Climatologists have long understood that changes in the Arctic would be faster and more intense than elsewhere on the planet, but the degree and speed of the changes were underestimated compared to recent observations. Policy makers have not yet had time to examine the latest evidence or appreciate the nature of the consequences. Thus, the abruptness and severity of an unfolding Arctic climate crisis have not been incorporated into long-range planning. The purpose of this report is to briefly review the physical basis for global climate change and Arctic amplification, summarize the ongoing observations, discuss the potential consequences, explain the need for an objective risk assessment, develop scenarios for future change, review existing modeling capabilities and the need for better regional models, and finally to make recommendations for Sandia's future role in preparing our leaders to deal with impacts of Arctic climate change on national security. Accurate and credible regional-scale climate models are still several years in the future, and those models are essential for estimating climate impacts around the globe. This study demonstrates how a scenario-based method may be used to gain insight into regional-scale climate impacts and possible mitigation. Because of our experience in the Arctic and the widespread recognition of the Arctic's importance in the Earth climate system, we chose the Arctic as a test case for an assessment of climate impacts on national security. Sandia can make a swift and significant contribution by applying modeling and simulation tools through internal collaborations as well as partnerships with outside organizations. Because changes in the Arctic environment are happening so rapidly, a successful program will be one that can adapt very quickly to new information as it becomes available and can provide decision makers with projections on the 1-5 year time scale over which the most disruptive, high-consequence changes are likely to occur. The greatest short-term impact would be to initiate exploratory simulations to discover new emergent and robust phenomena associated with one or more of the following changing systems: the Arctic hydrological cycle, sea ice extent, ocean and atmospheric circulation, permafrost deterioration, carbon mobilization, Greenland ice sheet stability, and coastal erosion. Sandia can also contribute new technology solutions for improved observations in the Arctic, which is currently a data-sparse region. Sensitivity analyses have the potential to identify thresholds that would enable the collaborative development of 'early warning' sensor systems designed to detect predicted phenomena that might be precursors to major, high-consequence changes. Much of this work will require improved regional climate models and advanced computing capabilities. Socio-economic modeling tools can help define the human and national security consequences. Formal uncertainty quantification must be an integral part of any results that emerge from this work.
Hermitian linearized coupled-cluster methods have several advantages over more conventional coupled-cluster methods including facile analytical gradients for searching a potential energy surface. A persistent failure of linearized methods, however, is the presence of singularities on the potential energy surface. A simple Tikhonov regularization procedure is introduced that can eliminate this singularity. Application of the regularized linearized coupled-cluster singles and doubles (CCSD) method to both equilibrium structures and transition states shows that it is competitive with or better than conventional CCSD, and is more amenable to parallelization.
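For context, generic Tikhonov regularization replaces an ill-conditioned linear solve $Ax = b$ with a damped least-squares problem (the notation here is generic and is not taken from the paper's working equations):

$$x_\delta \;=\; \arg\min_x \;\lVert Ax - b\rVert^2 + \delta^2\lVert x\rVert^2 \;=\; \bigl(A^\top A + \delta^2 I\bigr)^{-1} A^\top b,$$

so that directions associated with near-zero singular values of $A$, which would otherwise blow up the solution and produce the singularities mentioned above, are damped by the parameter $\delta$.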
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.
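A hedged sketch of the kind of non-contagious-disease inversion described above, not the authors' model: infer the number infected N and the attack day t0 from a short series of daily case counts, assuming (purely for illustration) a log-normal incubation period and Poisson-distributed daily counts, and evaluating the posterior on a grid.

```python
import numpy as np
from scipy.stats import lognorm, poisson

incubation = lognorm(s=0.5, scale=10.0)     # assumed incubation-period model (days)
obs_days = np.array([0, 1, 2, 3])           # short observation window
obs_cases = np.array([1, 3, 7, 12])         # hypothetical diagnosed counts

N_grid = np.arange(50, 2001, 50)            # candidate numbers of people infected
t0_grid = np.arange(-15, 0)                 # attack day relative to first observation
logpost = np.zeros((N_grid.size, t0_grid.size))
for i, N in enumerate(N_grid):
    for j, t0 in enumerate(t0_grid):
        # Expected number of new symptomatic cases on each observed day.
        lam = N * (incubation.cdf(obs_days - t0 + 1) - incubation.cdf(obs_days - t0))
        logpost[i, j] = poisson.logpmf(obs_cases, np.maximum(lam, 1e-12)).sum()

post = np.exp(logpost - logpost.max())      # flat prior; normalize
post /= post.sum()
i_max, j_max = np.unravel_index(post.argmax(), post.shape)
print("MAP estimate:", N_grid[i_max], "infected; attack day", t0_grid[j_max])
```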
The 9/30/2008 ASC Level 2 Post-Processing V&V Milestone (Milestone 2843) contains functionality required by the user community for certain verification and validation tasks. These capabilities include fragment detection from CTH simulation data, fragment characterization and analysis, and fragment sorting and display operations. The capabilities were tested extensively on both sample and actual simulations. In addition, a number of stretch criteria were met, including a comparison between simulated and test data and the ability to output each fragment as an individual geometric file.
Nanoparticles are used now more than ever to tailor the function and performance of materials in differentiating technologies because of their profound effect on thermo-physical, mechanical, and optical properties. The most feasible way to disperse particles in a bulk material or control their packing at a substrate is through fluidization in a carrier, followed by solidification through solvent evaporation/drying/curing/sintering. Unfortunately, processing particles as concentrated, fluidized suspensions into useful products remains an art, largely because the effect of particle shape and volume fraction on fluidic properties and suspension stability remains unexplored in the regime where particle-particle interaction mechanics is prevalent. To achieve a stronger scientific understanding of the factors that control nanoparticle dispersion and rheology, we have developed a multiscale modeling approach to bridge scales between the atomistic and molecular-level forces active in dense nanoparticle suspensions. At the largest length scale, two 'coarse-grained' numerical techniques have been developed and implemented to provide high-fidelity numerical simulations of the rheological response and dispersion characteristics typical of a processing flow. The first is a coupled Navier-Stokes/discrete element method in which the background solvent is treated by finite element methods. The second is a particle-based method known as stochastic rotational dynamics. These two methods provide a new capability representing a 'bridge' between the molecular scale and the engineering scale, allowing the study of fluid-nanoparticle systems over a wide range of length and time scales as well as particle concentrations. To validate these new methodologies, multi-million-atom simulations explicitly including the solvent have been carried out. These simulations have been vital in establishing the necessary 'subgrid' models for accurate prediction at larger scales and in refining the two coarse-grained methodologies.
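As an illustration of the second coarse-grained technique, the sketch below implements one streaming-plus-collision step of stochastic rotational dynamics (multiparticle collision dynamics) in two dimensions; the parameters, cell hashing, and 2D rotation are illustrative simplifications and are unrelated to the production implementation.

```python
import numpy as np

def srd_step(pos, vel, box, cell, dt, alpha, rng):
    """One 2D stochastic rotational dynamics step: ballistic streaming, then
    rotation of each particle's velocity relative to its cell's mean velocity."""
    pos = (pos + vel * dt) % box                      # streaming, periodic box
    shift = rng.uniform(0.0, cell, size=2)            # random grid shift
    idx = np.floor((pos + shift) / cell).astype(int)  # collision-cell indices
    keys = idx[:, 0] * 10_000 + idx[:, 1]             # flatten cell index pairs
    for k in np.unique(keys):
        members = np.flatnonzero(keys == k)
        vmean = vel[members].mean(axis=0)
        a = alpha if rng.random() < 0.5 else -alpha   # rotate by +/- alpha
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s], [s, c]])
        vel[members] = vmean + (vel[members] - vmean) @ R.T
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(1000, 2))
vel = rng.standard_normal((1000, 2))
pos, vel = srd_step(pos, vel, box=10.0, cell=1.0, dt=0.1, alpha=np.pi / 2, rng=rng)
```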
Two classical verification problems from shock hydrodynamics are adapted for verification in the context of ideal magnetohydrodynamics (MHD) by introducing strong transverse magnetic fields, and simulated using the finite element Lagrange-remap MHD code ALEGRA for purposes of rigorous code verification. The concern in these verification tests is that inconsistencies related to energy advection are inherent in Lagrange-remap formulations for MHD, such that conservation of the kinetic and magnetic components of the energy may not be maintained. Hence, total energy conservation may also not be maintained. MHD shock propagation may therefore not be treated consistently in Lagrange-remap schemes, as errors in energy conservation are known to result in unphysical shock wave speeds and post-shock states. That kinetic energy is not conserved in Lagrange-remap schemes is well known, and the correction of DeBar has been shown to eliminate the resulting errors. Here, the consequences of the failure to conserve magnetic energy are revealed using order verification in the two magnetized shock-hydrodynamics problems. Further, a magnetic analog to the DeBar correction is proposed and its accuracy evaluated using this verification testbed. Results indicate that only when the total energy is conserved, by implementing both the kinetic and magnetic components of the DeBar correction, can simulations in Lagrange-remap formulation capture MHD shock propagation accurately. Additional insight is provided by the verification results, regarding the implementation of the DeBar correction and the advection scheme.
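As a hedged illustration of the type of correction at issue (the notation is ours and this is a plausible reading of the abstract, not the implemented scheme): when mass, momentum, and kinetic energy are remapped separately, the kinetic energy reconstructed from the remapped momentum generally differs from the separately remapped kinetic energy, and the discrepancy can be deposited into the internal energy so that total energy is conserved,

$$\Delta e_{\text{kin}} \;=\; \widetilde{E}_K \;-\; \frac{\lvert \widetilde{\mathbf{p}} \rvert^{2}}{2\,\widetilde{m}},$$

with the proposed magnetic analog comparing the separately remapped magnetic energy against the energy $\lvert \widetilde{\mathbf{B}} \rvert^{2}\widetilde{V}/(2\mu_0)$ implied by the remapped field, so that both components of the total energy are accounted for.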
Link analysis typically focuses on a single type of connection, e.g., two journal papers are linked because they are written by the same author. However, we often want to analyze data that has multiple types of linkage between objects, e.g., two papers may share keywords and one may cite the other. The goal of this paper is to show that multilinear algebra provides a tool for multilink analysis. We analyze five years of publication data from journals published by the Society for Industrial and Applied Mathematics. We explore how papers can be grouped in the context of multiple link types, using a tensor to represent all the links between them. A PARAFAC decomposition of the resulting tensor yields information similar to that provided by the singular value decomposition (SVD) of a standard adjacency matrix. We show how the PARAFAC decomposition can be used to understand the structure of the document space and to define paper-paper similarities based on multiple linkages. Examples are presented where the decomposed tensor data is used to find papers similar to a body of work (e.g., related by topic or similar to a particular author's papers), find related authors using linkages other than explicit co-authorship or citations, distinguish between papers written by different authors with the same name, and predict the journal in which a paper was published.
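A minimal sketch of the data structure described above: a third-order papers × papers × link-type tensor with one slice per link type. The link lists are hypothetical, and the CP/PARAFAC factors can be computed with a tensor library or with the ALS sketch following the next abstract.

```python
import numpy as np

n_papers, n_link_types = 4, 2
T = np.zeros((n_papers, n_papers, n_link_types))

shared_keywords = [(0, 1), (1, 2)]      # hypothetical symmetric links (slice 0)
citations = [(2, 0), (3, 1)]            # hypothetical directed links (slice 1)
for i, j in shared_keywords:
    T[i, j, 0] = T[j, i, 0] = 1.0
for i, j in citations:
    T[i, j, 1] = 1.0

# A rank-R PARAFAC/CP decomposition of T yields paper factor matrices whose
# row inner products serve as multi-link paper-paper similarity scores.
```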
Tensor decompositions (i.e., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms as compared with ALS and Gauss-Newton approaches.
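For reference, here is a minimal NumPy sketch of the ALS baseline for a three-way CP model (the paper's optimization-based CPOPT algorithms are not shown); the matricized-tensor-times-Khatri-Rao products are written with einsum to avoid unfolding conventions, and the rank and iteration count are illustrative.

```python
import numpy as np

def cp_als(X, rank, n_iter=100, seed=0):
    """Illustrative alternating least squares for a 3-way CP decomposition:
    x[i,j,k] is approximated by sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Update each factor in turn, holding the other two fixed.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check on a random exact rank-3 tensor: relative error should be small.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=3)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))
```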
An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing difficult, real-world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Employees and contractors at a national security laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a 'wickedly' difficult problem. The data demonstrate that (for this design) individuals perform at least as well as groups in the quantity of electronic ideas produced, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p<0.05) outperformed the group working together. When idea quality is used as the benchmark of success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses rather than by electronically convening a group. This research suggests that industrial reliance upon electronic problem-solving groups should be tempered, and that large nominal groups may be the more appropriate vehicle for solving wicked corporate issues.
Globally, there is no lack of security threats. Many of them demand priority engagement, and there can never be adequate resources to address all threats. In this context, climate is just another aspect of global security and the Arctic just another region. In light of physical and budgetary constraints, new security needs must be integrated and prioritized with existing ones. This discussion approaches the security impacts of climate from that perspective, starting with the broad security picture and establishing how climate may affect it. This method provides a different view from one that starts with climate and projects it, in isolation, as the source of a hypothetical security burden. That said, the Arctic does appear to present high-priority security challenges. Uncertainty in the timing of an ice-free Arctic affects how quickly it will become a security priority. Uncertainty in the emerging extreme and variable weather conditions will determine the difficulty (cost) of maintaining adequate security (order) in the area. The resolution of sovereignty boundaries affects the ability to enforce security measures, and the U.S. will most probably need a military presence to back up negotiated sovereignty agreements. Even without additional global warming, technology already allows the Arctic to become a strategic link in the global supply chain, possibly with northern Russia as its main hub. Additionally, the multinational corporations reaping the economic bounty may affect security tensions more than nation-states themselves. Countries will depend ever more heavily on global supply chains, and China in particular needs to protect its trade flows. In matters of security, nation-state and multinational-corporate interests will become heavily intertwined.
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high-consequence decision making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V toward a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve this accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that must be addressed as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high-consequence decision making.
Realistic cell models could greatly accelerate our ability to engineer biochemical pathways and the production of valuable organic products, which would be of great use in the development of biofuels, pharmaceuticals, and crops for the next green revolution. However, this level of engineering will require a great deal more knowledge about the mechanisms of life than is currently available. In particular, we need to understand the interactome (which proteins interact) as it is situated in the three-dimensional geometry of the cell (i.e., a situated interactome), and the regulation and dynamics of these interactions. Methods for optical proteomics have become available that allow the monitoring, and even the disruption or control, of interacting proteins in living cells. Here, a range of these methods is reviewed with respect to their role in elucidating the interactome and the relevant spatial localizations. Development of these technologies and their integration into the core competencies of research organizations can position whole institutions and teams of researchers to lead in both the fundamental science and the engineering applications of cellular biology. That leadership could be particularly important with respect to problems of national urgency centered on security, biofuels, and healthcare.
Predictive simulation of systems composed of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.
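A toy illustration of the inverted layering, with hypothetical components and a simple PCA step standing in for stochastic dimension reduction at the coupling interface; this is a sketch of the idea, not the report's mathematical formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def component_A(xi):
    """Hypothetical upstream component mapping uncertain inputs to interface data."""
    return np.vstack([np.sin(xi), np.cos(xi), xi**2, 0.1 * xi]).T

def component_B(z):
    """Hypothetical downstream component consuming reduced interface variables."""
    return z @ np.array([1.0, -0.5])

xi = rng.standard_normal(500)            # samples of component A's uncertain inputs
Y = component_A(xi)                      # 500 x 4 interface quantities

# Dimension reduction at the coupling interface: keep the 2 dominant PCA modes.
Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
Z = Yc @ Vt[:2].T                        # reduced interface variables (500 x 2)

Q = component_B(Z)                       # downstream quantity of interest
print("QoI mean and std:", Q.mean(), Q.std())
```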