Hermitian linearized coupled-cluster methods have several advantages over more conventional coupled-cluster methods, including facile analytical gradients for searching a potential energy surface. A persistent failure of linearized methods, however, is the presence of singularities on the potential energy surface. A simple Tikhonov regularization procedure is introduced that can eliminate these singularities. Application of the regularized linearized coupled-cluster singles and doubles (CCSD) method to both equilibrium structures and transition states shows that it is competitive with or better than conventional CCSD, and is more amenable to parallelization.
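To make the regularization idea concrete, the following is a minimal sketch in generic notation, not the paper's working equations: the linearized amplitude equations form a linear system that can become singular, and a Tikhonov penalty with an assumed strength parameter omega shifts the solve away from the singularity.

```latex
% Illustrative Tikhonov regularization of a linear amplitude equation A t = -b;
% A, t, b, and the parameter \omega are generic placeholders, not the paper's notation.
\begin{align}
  t_{\omega} &= \operatorname*{arg\,min}_{t}\;
      \lVert A\,t + b \rVert_2^{2} + \omega^{2}\,\lVert t \rVert_2^{2}, \\
  \bigl(A^{\dagger}A + \omega^{2} I\bigr)\, t_{\omega} &= -A^{\dagger} b ,
\end{align}
% Near-zero eigenvalues of A no longer produce divergent amplitudes, so the
% potential energy surface remains smooth through the region that was singular.
```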
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and, for contagious diseases, the rate of disease spread in contemporary American society, is important when planning a medical response. For non-contagious diseases, we addressed the characterization problem by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limited ourselves to 3.5 days of data. In computational tests performed for anthrax, we generally found these observation windows sufficient, especially when the outbreak model employed in the inverse problem was accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, the different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. We also developed techniques to simulate disease spread on static and sampled network reductions of the dynamic social networks underlying the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.
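As an illustration of the non-contagious characterization problem, the sketch below sets up a simple grid-based Bayesian posterior for the number infected and the attack time from a short, censored series of symptom-onset times. The log-normal incubation model, its parameters, and the data are placeholders chosen for the example, not values or methods taken from the study.

```python
# Illustrative only: grid-based Bayesian posterior for (N, t0) from a short,
# censored series of symptom-onset times. Incubation parameters and data are placeholders.
import numpy as np
from scipy.stats import lognorm
from scipy.special import gammaln

# Placeholder incubation-period model (days); shape/scale are assumptions.
incubation = lognorm(s=0.7, scale=np.exp(2.3))

def log_likelihood(onsets, t_obs_end, N, t0):
    """Censored likelihood: n of N infected become symptomatic by t_obs_end."""
    n = len(onsets)
    if N < n or t0 >= onsets.min():
        return -np.inf
    F = incubation.cdf(t_obs_end - t0)          # P(symptomatic within the window)
    log_binom = gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)
    return (log_binom
            + np.sum(incubation.logpdf(onsets - t0))
            + (N - n) * np.log1p(-F))

# Hypothetical onset times (days); 3.5-day observation window after the first case.
onsets = np.array([1.2, 1.9, 2.3, 2.4, 3.0, 3.1, 3.4])
t_end = onsets[0] + 3.5

# Flat priors on a grid: N infected and attack time t0 (before the first onset).
N_grid = np.arange(len(onsets), 500)
t0_grid = np.linspace(onsets[0] - 10.0, onsets[0] - 0.1, 200)
logpost = np.array([[log_likelihood(onsets, t_end, N, t0) for t0 in t0_grid]
                    for N in N_grid])
post = np.exp(logpost - logpost.max())
post /= post.sum()

# Marginal posterior over the number infected.
p_N = post.sum(axis=1)
print("Posterior mean N:", float((N_grid * p_N).sum()))
```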
The 9/30/2008 ASC Level 2 Post-Processing V&V Milestone (Milestone 2843) contains functionality required by the user community for certain verification and validation tasks. These capabilities include fragment detection from CTH simulation data, fragment characterization and analysis, and fragment sorting and display operations. The capabilities were tested extensively on both sample and actual simulations. In addition, a number of stretch criteria were met, including a comparison between simulated and test data and the ability to output each fragment as an individual geometric file.
Nanoparticles are now used more than ever to tailor material function and performance in differentiating technologies because of their profound effect on thermo-physical, mechanical, and optical properties. The most feasible way to disperse particles in a bulk material, or to control their packing at a substrate, is through fluidization in a carrier, followed by solidification through solvent evaporation, drying, curing, or sintering. Unfortunately, processing particles as concentrated, fluidized suspensions into useful products remains an art, largely because the effect of particle shape and volume fraction on fluidic properties and suspension stability remains unexplored in the regime where particle-particle interaction mechanics is prevalent. To achieve a stronger scientific understanding of the factors that control nanoparticle dispersion and rheology, we have developed a multiscale modeling approach that bridges the scales between the atomistic and molecular-level forces active in dense nanoparticle suspensions and the engineering scale. At the largest length scale, two 'coarse-grained' numerical techniques have been developed and implemented to provide high-fidelity numerical simulations of the rheological response and dispersion characteristics typical of a processing flow. The first is a coupled Navier-Stokes/discrete element method in which the background solvent is treated by finite element methods. The second is a particle-based method known as stochastic rotational dynamics. Together, these two methods provide a new capability that bridges the molecular scale and the engineering scale, allowing the study of fluid-nanoparticle systems over a wide range of length scales, time scales, and particle concentrations. To validate these new methodologies, multi-million-atom simulations explicitly including the solvent have been carried out. These simulations have been vital in establishing the 'subgrid' models necessary for accurate prediction at larger scales and in refining the two coarse-grained methodologies.
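For orientation, the following is a minimal two-dimensional sketch of a stochastic rotational (rotation) dynamics solvent step, i.e., streaming followed by a cell-wise velocity rotation. The parameters, the grid handling, and the absence of embedded nanoparticles are simplifications for illustration and are not taken from the report.

```python
# Minimal 2D stochastic rotation dynamics (SRD) solvent sketch; parameters are illustrative.
import numpy as np

def srd_step(pos, vel, box, cell_size, alpha, dt, rng):
    """One streaming + collision step of SRD for point solvent particles in 2D."""
    # Streaming: free flight with periodic boundaries.
    pos = (pos + vel * dt) % box
    # Random grid shift (restores Galilean invariance), then bin particles into cells.
    shift = rng.uniform(0.0, cell_size, size=2)
    cells = np.floor((pos + shift) / cell_size).astype(int)
    n_cells = int(np.ceil(box / cell_size))
    cell_id = cells[:, 0] % n_cells + n_cells * (cells[:, 1] % n_cells)
    # Collision: rotate velocities about each cell's mean velocity by +/- alpha.
    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        v_cm = vel[idx].mean(axis=0)
        sign = rng.choice([-1.0, 1.0])
        c, s = np.cos(alpha), sign * np.sin(alpha)
        R = np.array([[c, -s], [s, c]])
        vel[idx] = v_cm + (vel[idx] - v_cm) @ R.T
    return pos, vel

rng = np.random.default_rng(0)
box, cell, npart = 20.0, 1.0, 4000
pos = rng.uniform(0.0, box, size=(npart, 2))
vel = rng.normal(0.0, 1.0, size=(npart, 2))
for _ in range(100):
    pos, vel = srd_step(pos, vel, box, cell, alpha=np.deg2rad(130), dt=0.1, rng=rng)
```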
Two classical verification problems from shock hydrodynamics are adapted to ideal magnetohydrodynamics (MHD) by introducing strong transverse magnetic fields and are simulated using the finite element Lagrange-remap MHD code ALEGRA for purposes of rigorous code verification. The concern in these verification tests is that inconsistencies related to energy advection are inherent in Lagrange-remap formulations for MHD, such that conservation of the kinetic and magnetic components of the energy, and hence of the total energy, may not be maintained. MHD shock propagation may therefore not be treated consistently in Lagrange-remap schemes, as errors in energy conservation are known to result in unphysical shock wave speeds and post-shock states. That kinetic energy is not conserved in Lagrange-remap schemes is well known, and the correction of DeBar has been shown to eliminate the resulting errors. Here, the consequences of the failure to conserve magnetic energy are revealed using order verification on the two magnetized shock-hydrodynamics problems. Further, a magnetic analog to the DeBar correction is proposed and its accuracy is evaluated using this verification testbed. The results indicate that only when the total energy is conserved, by implementing both the kinetic and magnetic components of the DeBar correction, can simulations in the Lagrange-remap formulation capture MHD shock propagation accurately. The verification results also provide insight into the implementation of the DeBar correction and the advection scheme.
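The energy bookkeeping at issue can be summarized schematically as follows; this is illustrative notation, not the discrete forms used in ALEGRA.

```latex
% Illustrative energy decomposition for ideal MHD (not ALEGRA's discrete forms).
\begin{equation}
  E \;=\; \underbrace{\rho e}_{\text{internal}}
      \;+\; \underbrace{\tfrac{1}{2}\,\rho\,\lvert\mathbf{u}\rvert^{2}}_{\text{kinetic}}
      \;+\; \underbrace{\tfrac{1}{2\mu_{0}}\,\lvert\mathbf{B}\rvert^{2}}_{\text{magnetic}} .
\end{equation}
% Remapping \rho, \rho\mathbf{u}, and \mathbf{B} independently does not, in general,
% reproduce the remapped kinetic and magnetic energies; a DeBar-type correction deposits
% the discrepancies \Delta E_{\mathrm{kin}} and \Delta E_{\mathrm{mag}} into the internal
% energy so that the total E is conserved across the remap step.
```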
Link analysis typically focuses on a single type of connection, e.g., two journal papers are linked because they are written by the same author. Often, however, we want to analyze data that has multiple types of linkage between objects, e.g., two papers may share keywords and one may cite the other. The goal of this paper is to show that multilinear algebra provides a tool for multilink analysis. We analyze five years of publication data from journals published by the Society for Industrial and Applied Mathematics. We explore how papers can be grouped in the context of multiple link types, using a tensor to represent all the links between them. A PARAFAC decomposition of the resulting tensor yields information similar to that of the SVD of a standard adjacency matrix. We show how the PARAFAC decomposition can be used to understand the structure of the document space and to define paper-paper similarities based on multiple linkages. Examples are presented in which the decomposed tensor data are used to find papers similar to a body of work (e.g., related by topic or similar to a particular author's papers), find related authors using linkages other than explicit co-authorship or citations, distinguish between papers written by different authors with the same name, and predict the journal in which a paper was published.
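The sketch below shows, on synthetic data, how a papers x papers x link-type tensor can be assembled and factored with a bare-bones CP/PARAFAC alternating least squares loop, with paper-paper similarities read off the first factor matrix. The toy data, rank, and similarity measure are placeholders, not the SIAM corpus or the paper's exact procedure.

```python
# Hedged illustration: multi-link tensor + minimal CP-ALS; toy data only.
import numpy as np
from scipy.linalg import khatri_rao

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def cp_als(X, rank, iters=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in X.shape)
    for _ in range(iters):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy multi-link tensor: X[i, j, t] = strength of link type t (e.g., 0 = citation,
# 1 = shared keyword, 2 = same author) between papers i and j.
n_papers, n_link_types, rank = 50, 3, 5
rng = np.random.default_rng(1)
X = (rng.random((n_papers, n_papers, n_link_types)) < 0.05).astype(float)

A, B, C = cp_als(X, rank)
# Paper-paper similarity in the latent space spanned by the first factor matrix.
scores = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
similarity = scores @ scores.T
print("Most similar paper to paper 0:", int(np.argsort(similarity[0])[-2]))
```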
Tensor decompositions (i.e., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications, such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms as compared with ALS and Gauss-Newton approaches.
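In standard notation, the optimization-based formulation for a third-order tensor minimizes the least-squares CP objective and uses closed-form gradients with respect to the factor matrices; the expressions below follow the usual formulation and are given for orientation rather than as the paper's exact derivation.

```latex
% CP objective and factor-matrix gradient for a third-order tensor, in standard notation:
% [[A,B,C]] is the Kruskal operator, \odot the Khatri-Rao product, \ast the Hadamard
% product, and X_(1) the mode-1 unfolding of the data tensor.
\begin{align}
  f(\mathbf{A},\mathbf{B},\mathbf{C})
    &= \tfrac{1}{2}\,\bigl\lVert \mathcal{X} - [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!] \bigr\rVert_F^{2}, \\
  \frac{\partial f}{\partial \mathbf{A}}
    &= -\,\mathbf{X}_{(1)}\,(\mathbf{C}\odot\mathbf{B})
       \;+\; \mathbf{A}\,\bigl[(\mathbf{C}^{\mathsf T}\mathbf{C}) \ast (\mathbf{B}^{\mathsf T}\mathbf{B})\bigr],
\end{align}
% with analogous expressions for the gradients with respect to B and C. A gradient-based
% method updates all factor matrices simultaneously, in contrast to the alternating
% block updates of ALS.
```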
An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing difficult, real-world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Employees and contractors at a national security laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a 'wickedly' difficult problem. The data demonstrate that (for this design) individuals perform at least as well as groups in the quantity of electronic ideas produced, regardless of brainstorming duration. However, when judged for quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p<0.05) outperformed the group working together. When idea quality is used as the benchmark of success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses than by electronically convening a group. This research suggests that industrial reliance upon electronic problem-solving groups should be tempered, and that large nominal groups might be the more appropriate vehicle for solving wicked corporate issues.
Globally, there is no lack of security threats. Many of them demand priority engagement, and there can never be adequate resources to address all threats. In this context, climate is just another aspect of global security and the Arctic just another region. In light of physical and budgetary constraints, new security needs must be integrated and prioritized with existing ones. This discussion approaches the security impacts of climate from that perspective, starting with the broad security picture and establishing how climate may affect it. This method provides a different view from one that starts with climate and projects it, in isolation, as the source of a hypothetical security burden. That said, the Arctic does appear to present high-priority security challenges. Uncertainty in the timing of an ice-free Arctic affects how quickly it will become a security priority. Uncertainty in the emergent extreme and variable weather conditions will determine the difficulty (cost) of maintaining adequate security (order) in the area. The resolution of sovereignty boundaries affects the ability to enforce security measures, and the U.S. will most probably need a military presence to back up negotiated sovereignty agreements. Even without additional global warming, technology already allows the Arctic to become a strategic link in the global supply chain, possibly with northern Russia as its main hub. Additionally, the multinational corporations reaping the economic bounty may affect security tensions more than nation-states themselves. Countries will depend ever more heavily on global supply chains, and China in particular needs to protect its trade flows. In matters of security, nation-state and multinational-corporate interests will become heavily intertwined.
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high-consequence decision making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V toward a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve this accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that must be addressed as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation and argues for extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high-consequence decision making.
Realistic cell models could greatly accelerate our ability to engineer biochemical pathways and the production of valuable organic products, which would be of great use in the development of biofuels, pharmaceuticals, and crops for the next green revolution. However, this level of engineering will require far more knowledge about the mechanisms of life than is currently available. In particular, we need to understand the interactome (which proteins interact) as it is situated in the three-dimensional geometry of the cell (i.e., a situated interactome), along with the regulation and dynamics of these interactions. Methods for optical proteomics have become available that allow the monitoring, and even the disruption or control, of interacting proteins in living cells. Here, a range of these methods is reviewed with respect to their role in elucidating the interactome and the relevant spatial localizations. Development of these technologies and their integration into the core competencies of research organizations can position whole institutions and teams of researchers to lead in both the fundamental science and the engineering applications of cellular biology. That leadership could be particularly important for problems of national urgency centered on security, biofuels, and healthcare.
Predictive simulation of systems composed of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging because of the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.
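An illustrative two-component version of this idea, written in our own notation rather than the report's, is:

```latex
% Two components coupled only through interface quantities, each with its own
% uncertain inputs \xi_1, \xi_2 and its own UQ method (notation is illustrative).
\begin{align}
  f_1\bigl(u_1,\; g_2(u_2),\; \xi_1\bigr) &= 0, \\
  f_2\bigl(u_2,\; g_1(u_1),\; \xi_2\bigr) &= 0,
\end{align}
% Each interface quantity is replaced by a reduced stochastic surrogate, e.g. a
% truncated polynomial chaos expansion
% g_i(u_i) \;\approx\; \sum_{k=0}^{K_i} \hat{g}_{i,k}\, \Psi_k(\xi),
% so that only a low-dimensional stochastic representation crosses each coupling
% interface while heterogeneous UQ methods are applied component by component.
```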