Publications

Results 6351–6400 of 9,998

LDRD final report: Mesoscale modeling of dynamic loading of heterogeneous materials

Robbins, Joshua R.; Dingreville, Remi P.; Voth, Thomas E.; Furnish, Michael D.

Material response to dynamic loading is often dominated by microstructure (grain structure, porosity, inclusions, defects). An example critically important to Sandia's mission is the dynamic strength of polycrystalline metals, where heterogeneities lead to localization of deformation and loss of shear strength. Microstructural effects are of broad importance to the scientific community and to several institutions within the DoD and DOE; however, current models rely on inaccurate assumptions about mechanisms at the sub-continuum, or mesoscale, level. Consequently, there is a critical need for accurate and robust methods for modeling heterogeneous material response at this lower length scale. This report summarizes work performed as part of an LDRD effort (FY11 to FY13; project number 151364) to meet these needs.

Edge remap for solids

Love, Edward L.; Robinson, Allen C.; Ridzal, Denis R.

We review the edge element formulation for describing the kinematics of hyperelastic solids. This approach is used to frame the problem of remapping the inverse deformation gradient for Arbitrary Lagrangian-Eulerian (ALE) simulations of solid dynamics. For hyperelastic materials, the stress state is completely determined by the deformation gradient, so remapping this quantity effectively updates the stress state of the material. A method inspired by the constrained-transport remap in electromagnetics is reviewed, in which the zero-curl constraint on the inverse deformation gradient is implicitly satisfied. Open issues related to the accuracy of this approach are identified. An optimization-based approach is implemented to enforce positivity of the determinant of the deformation gradient. The efficacy of this approach is illustrated with numerical examples.
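
For context, the kinematic relations the abstract relies on can be written compactly. This is standard hyperelasticity notation, not equations taken from the paper itself:

    % Stress state determined by the deformation gradient F (with J = det F > 0):
    P = \frac{\partial W}{\partial F}, \qquad
    \sigma = \frac{1}{J}\, P\, F^{\mathsf T}.
    % The inverse deformation gradient g = F^{-1} = \partial X / \partial x has rows
    % g_i = \nabla_x X_i that are gradients of the reference coordinates, hence:
    \nabla_x \times g_i = 0, \qquad i = 1, 2, 3.

The constrained-transport-style remap preserves the curl-free property discretely, which is why edge elements, whose degrees of freedom carry an exact discrete curl, are a natural fit.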

QCAD simulation and optimization of semiconductor double quantum dots

Nielsen, Erik N.; Gao, Xujiao G.; Kalashnikova, Irina; Muller, Richard P.; Salinger, Andrew G.; Young, Ralph W.

We present the Quantum Computer Aided Design (QCAD) simulator, which targets the modeling of quantum devices, particularly silicon double quantum dots (DQDs) developed as quantum bits (qubits). The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective-mass Schrödinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and that can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with the optimization engine Dakota, which enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface-charge, and Robin boundary conditions, and includes the effect of incomplete dopant ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been used extensively to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrödinger-Poisson solver achieves robust and monotonic convergence for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances and comparison both with experiment and between solvers. Computed capacitances are found to be in reasonable agreement with experiment, and quantum confinement increases capacitance when the number of electrons in a quantum dot is fixed. In addition, the coupling of QCAD with Dakota makes it possible to rapidly identify which device layouts are most likely to yield few-electron quantum dots. Very efficient QCAD simulations of a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design comparison and optimization.
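
As a rough illustration of the self-consistent Schrödinger-Poisson iteration described above, the sketch below solves a 1D toy problem in Python with damped potential mixing in the spirit of a predictor-corrector scheme. The grid, units, and single-electron density are simplified placeholders, not QCAD's actual discretization or physics:

    # Toy 1D self-consistent Schrodinger-Poisson loop (arbitrary units).
    import numpy as np

    n, L = 200, 1.0
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    v_ext = 50.0 * (x - 0.5 * L) ** 2          # external (gate-like) potential

    def schrodinger(v):
        """Lowest eigenpair of -1/2 d^2/dx^2 + v, Dirichlet ends."""
        H = (np.diag(1.0 / h**2 + v)
             + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
             + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))
        w, psi = np.linalg.eigh(H)
        return w[0], psi[:, 0] / np.sqrt(h)    # unit L2 norm on the grid

    def poisson(rho):
        """Solve -v'' = rho with v = 0 at both ends."""
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        A[0, :], A[-1, :] = 0.0, 0.0
        A[0, 0] = A[-1, -1] = 1.0
        b = rho.copy()
        b[0] = b[-1] = 0.0
        return np.linalg.solve(A, b)

    v_h = np.zeros(n)
    for it in range(200):
        e0, phi = schrodinger(v_ext + v_h)     # predictor: solve at current potential
        v_new = poisson(np.abs(phi) ** 2)      # Hartree-like potential of the density
        if np.max(np.abs(v_new - v_h)) < 1e-8:
            break
        v_h = 0.7 * v_h + 0.3 * v_new          # corrector: damped update for stability

    print(f"iterations: {it + 1}, ground-state energy: {e0:.4f}")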

Power/energy use cases for high performance computing

Laros, James H.; Kelly, Suzanne M.

Power and energy have been identified as first-order challenges for future extreme-scale high performance computing (HPC) systems. In practice, the breakthroughs will need to be provided by the hardware vendors, but making the best use of those solutions in an HPC environment will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide the HPC community with a common understanding of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers can use to steer power consumption.
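
To make the target of the use-case exercise concrete, here is a hypothetical Python sketch of the kind of measurement-and-control interface such use cases imply. Every name in it is an illustrative assumption, not part of any standardized power API:

    # Hypothetical power-management interface for one controllable domain
    # (a node, board, or cabinet); names and units are assumptions.
    class PowerDomain:
        def __init__(self, name, cap_watts):
            self.name = name
            self.cap_watts = cap_watts       # operator- or scheduler-set cap
            self.samples = []                # (timestamp_s, watts) readings

        def record(self, timestamp_s, watts):
            """Hardware/firmware pushes a measurement sample."""
            self.samples.append((timestamp_s, watts))

        def average_power(self):
            if not self.samples:
                return 0.0
            return sum(w for _, w in self.samples) / len(self.samples)

        def set_cap(self, watts):
            """Facility operator or resource manager retunes the cap."""
            self.cap_watts = watts

    node = PowerDomain("node0042", cap_watts=350.0)
    node.record(0.0, 310.5)
    node.record(1.0, 342.0)
    print(node.average_power(), node.cap_watts)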

Incremental learning for automated knowledge capture

Davis, Warren L.; Dixon, Kevin R.; Martin, Nathaniel M.; Wendt, Jeremy D.

People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effects. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and are only secondarily, if at all, concerned with issues such as speed, memory use, or the ability to be incrementally updated. Thus, when new data arrive, the batch algorithms currently used for automated knowledge capture require significant recomputation, frequently from scratch, which makes them ill-suited for dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.
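
The batch-versus-incremental contrast at the heart of this work can be seen in miniature below: a Welford-style running estimator absorbs each new observation in O(1), while a batch estimate must recompute over all data from scratch. This Python sketch is illustrative only, not the project's algorithms:

    # Incremental (O(1) per update) vs. batch (O(n) recompute) statistics.
    class IncrementalStats:
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, x):                 # absorb one new observation
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        @property
        def variance(self):                  # population variance
            return self.m2 / self.n if self.n else 0.0

    def batch_stats(data):                   # recomputes from scratch
        n = len(data)
        mean = sum(data) / n
        return mean, sum((x - mean) ** 2 for x in data) / n

    stream = [2.0, 4.0, 4.0, 5.0]
    inc = IncrementalStats()
    for x in stream:
        inc.update(x)                        # model stays current as data arrive
    print(inc.mean, inc.variance)            # 3.75 1.1875
    print(batch_stats(stream))               # same result, full recompute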

The relational blackboard

22nd Annual Conference on Behavior Representation in Modeling and Simulation, BRiMS 2013 - Co-located with the International Conference on Cognitive Modeling

Abbott, Robert G.

Modeling agent behaviors in complex task environments requires the agent to be sensitive to complex stimuli, such as the positions and actions of varying numbers of other entities. Entity state updates may be received asynchronously rather than on a coordinated clock signal, so the world state must be estimated from the most recent information available for each entity. The simulation environment is also likely to be distributed across several computers on a network. This paper presents the Relational Blackboard (RBB), a framework developed to address these needs with clarity and efficiency. The purpose of this paper is to explain the concepts used to represent and process spatio-temporal data in the RBB framework so that researchers in related areas can apply the concepts and software to their own problems of interest; a detailed description of our own research can be found in other papers. The software is freely available under the BSD open-source license at http://rbb.sandia.gov.
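
A minimal Python sketch of the asynchronous state-estimation idea, assuming per-entity timestamped samples and constant-velocity dead reckoning; the names are illustrative, not the actual RBB API:

    # World state at query time t from each entity's latest sample <= t.
    import bisect

    class EntityTrack:
        def __init__(self):
            self.samples = []                # sorted (t, position, velocity)

        def update(self, t, pos, vel):       # updates arrive asynchronously
            bisect.insort(self.samples, (t, pos, vel))

        def estimate(self, t):
            i = bisect.bisect_right(self.samples, (t, float("inf"), float("inf")))
            if i == 0:
                return None                  # no information yet
            t0, pos, vel = self.samples[i - 1]
            return pos + vel * (t - t0)      # dead-reckon forward to t

    world = {"red1": EntityTrack(), "blue7": EntityTrack()}
    world["red1"].update(0.0, 10.0, 2.0)
    world["blue7"].update(0.3, -5.0, 1.0)
    print({name: trk.estimate(1.0) for name, trk in world.items()})
    # approximately {'red1': 12.0, 'blue7': -4.3}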

Evaluating Near-Term Adiabatic Quantum Computing

Parekh, Ojas D.; Aidun, John B.; Dubicka, Irene D.; Landahl, Andrew J.; Shulenburger, Luke N.; Tigges, Chris P.; Wendt, Jeremy D.

This report summarizes the first year's effort on the Enceladus project, under which Sandia was asked to evaluate the potential advantages of adiabatic quantum computing for analyzing large data sets in the near future, five to ten years from now. We were not specifically evaluating the machine sold by D-Wave Systems, Inc.; we were asked to anticipate what future adiabatic quantum computers might be able to achieve. While the greatest potential anticipated from quantum computation is still far in the future, a special-purpose quantum computing capability, Adiabatic Quantum Optimization (AQO), is under active development and is maturing relatively rapidly; indeed, D-Wave Systems Inc. already offers an AQO device based on superconducting flux qubits. The AQO architecture solves a particular class of problem, namely unconstrained quadratic Boolean optimization, and this class includes many interesting and important instances. Further investigation is therefore warranted into the range of applicability of this problem class for addressing the challenges of analyzing big data sets, and into the effectiveness of AQO devices at performing specific analyses on big data. It is also of interest to consider the potential effectiveness of anticipated special-purpose adiabatic quantum computers (AQCs) in general for accelerating the analysis of big data sets. The objective of the present investigation is an evaluation of the potential of AQC to benefit the analysis of big data problems in the next five to ten years, with our main focus on AQO because of its relative maturity. We are not specifically assessing the efficacy of the D-Wave computing systems, though we do hope to perform some experimental calculations on that device in the sequel to this project, at least to provide some data to compare with our theoretical estimates.
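
The problem class referred to above has a compact standard form. An AQO device natively minimizes an unconstrained quadratic objective over Boolean variables (a QUBO), where the identity x_i^2 = x_i absorbs linear terms into the diagonal:

    % Unconstrained quadratic Boolean optimization (QUBO):
    \min_{x \in \{0,1\}^n} \; x^{\mathsf T} Q x
        \;=\; \sum_{i} Q_{ii}\, x_i \;+\; \sum_{i<j} (Q_{ij} + Q_{ji})\, x_i x_j .
    % Example embedding: for max-cut, each edge (i,j) contributes the penalty
    % 2 x_i x_j - x_i - x_j, which equals -1 when the edge is cut and 0 otherwise,
    % so minimizing the sum over edges maximizes the number of cut edges.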

Computational Mechanics for Heterogeneous Materials

Baczewski, Andrew D.; Yarrington, Cole Y.; Bond, Stephen D.; Erikson, William W.; Lehoucq, Richard B.; Mondy, L.A.; Noble, David R.; Pierce, Flint P.; Roberts, Christine C.; Van Swol, Frank

The subject of this work is the development of models for the numerical simulation of matter, momentum, and energy balance in heterogeneous materials. These are materials that consist of multiple phases or species, or that are structured on some (perhaps many) scale(s). By computational mechanics we refer generally to the standard type of modeling done at the level of macroscopic balance laws (mass, momentum, energy); we refer to the flow or flux of these quantities in a generalized sense as transport. At issue here are the forms of the governing equations in these complex materials, which are potentially strongly inhomogeneous below some correlation length scale yet homogeneous on larger length scales. The question then becomes how to model this behavior and what the proper multi-scale equations are for capturing the transport mechanisms across scales. To address this we look to the generalized stochastic processes that underlie the transport processes in homogeneous materials, the archetypal example being the relationship between random-walk or Brownian-motion stochastic processes and the associated Fokker-Planck or diffusion equation. Here we are interested in how this classical setting changes when inhomogeneities or correlations in structure are introduced into the problem. Aspects of non-classical behavior need to be addressed, such as non-Fickian behavior of the mean-squared displacement (MSD) and non-Gaussian behavior of the underlying probability distribution of jumps. We present an experimental technique and apparatus built to investigate some of these issues. We also discuss diffusive processes in inhomogeneous systems, including the role of the chemical potential in the diffusion of hard spheres and the relevance to liquid-metal solutions. Finally, we present an example of how inhomogeneities in material microstructure introduce fluctuations at the mesoscale for a thermal conduction problem. These fluctuations due to random microstructures also provide a means of characterizing the aleatory uncertainty in material properties at the mesoscale.
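
The archetype named above is easy to reproduce numerically. The following Python sketch (with illustrative parameters, not the report's experiments) simulates an unbiased random walk and checks the Fickian signature MSD proportional to t; correlated or trapping microstructures would instead give MSD ~ t^alpha with alpha != 1:

    # Ensemble MSD of an unbiased 1D random walk: MSD = (step^2) * t.
    import random

    walkers, nsteps, step = 5000, 100, 1.0
    positions = [0.0] * walkers
    for t in range(1, nsteps + 1):
        for i in range(walkers):
            positions[i] += random.choice((-step, step))
        if t % 25 == 0:
            msd = sum(p * p for p in positions) / walkers
            print(f"t = {t:3d}   MSD = {msd:7.2f}   (Fickian: {t * step**2:7.2f})")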

An extended finite element method with algebraic constraints (XFEM-AC) for problems with weak discontinuities

Computer Methods in Applied Mechanics and Engineering

Kramer, Richard M.; Bochev, Pavel B.; Siefert, Christopher S.; Voth, Thomas E.

We present a new extended finite element method with algebraic constraints (XFEM-AC) for recovering weakly discontinuous solutions across internal element interfaces. If necessary, cut elements are further partitioned by a local secondary cut into body-fitting subelements. Each resulting subelement contributes an enrichment of the parent element. The enriched solutions are then tied together using algebraic constraints that enforce C0 continuity across both cuts. These constraints impose equivalence of the enriched and body-fitted finite element solutions, and are the key differentiating feature of the XFEM-AC. In so doing, a stable mixed formulation is obtained without having to explicitly construct a compatible Lagrange multiplier space and prove a formal inf-sup condition. Likewise, convergence of the XFEM-AC solution follows from its equivalence to the interface-fitted finite element solution. This relationship is further exploited to improve the numerical solution of the resulting XFEM-AC linear system. Examples demonstrate the new approach for both steady-state and transient diffusion problems.
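
Algebraic tying constraints of this kind lead, in generic form, to a saddle-point system; the display below is the standard constrained-stiffness form, shown for orientation rather than as the paper's specific construction:

    % Enriched stiffness K on standard-plus-enriched unknowns u, with
    % algebraic constraints C u = 0 enforcing C^0 continuity across the
    % cuts via multipliers \lambda:
    \begin{pmatrix} K & C^{\mathsf T} \\ C & 0 \end{pmatrix}
    \begin{pmatrix} u \\ \lambda \end{pmatrix}
    =
    \begin{pmatrix} f \\ 0 \end{pmatrix}.

The XFEM-AC argument that the constrained system is equivalent to a body-fitted discretization is what sidesteps a direct inf-sup analysis of this saddle-point structure.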

Qualification for PowerInsight accuracy of power measurements

Laros, James H.; Pedretti, Kevin

The accuracy of component-based power measuring devices forms a necessary basis for research in power-efficient and power-aware computing, and it must be quantified within a reasonable tolerance. This study focuses on PowerInsight, an out-of-band embedded measuring device that takes real-time readings of the power rails on compute nodes within an HPC system. We quantify how well the device performs in comparison to a digital oscilloscope as well as PowerMon2, and show that its measurements are accurate to within a 6% deviation under reasonable load.
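
The accuracy criterion reduces to a percent-deviation computation against a trusted reference instrument; the sample values below are invented for illustration:

    # Percent deviation of device readings from a reference (oscilloscope),
    # taken on the same rail at the same instants; values are made up.
    reference = [101.2, 148.7, 203.4, 255.0]   # watts
    device    = [ 99.8, 152.1, 210.9, 262.3]   # watts

    deviations = [abs(d - r) / r * 100.0 for d, r in zip(device, reference)]
    print(f"max deviation: {max(deviations):.2f}%")   # checks the 6% bound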
