Extreme-scale parallel systems will require alternative methods for applications to maintain current levels of uninterrupted execution. Redundant computation is one approach to consider, if the benefits of increased resiliency outweigh the cost of consuming additional resources. We describe a transparent redundancy approach for MPI applications and detail two different implementations that provide the ability to tolerate a range of failure scenarios, including loss of application processes and connectivity. We compare these two approaches and show performance results from micro-benchmarks that bound worst-case message passing performance degradation. We propose several enhancements that could lower the overhead of providing resiliency through redundancy.
Understanding the effects of gravity and wind loads on concentrating solar power (CSP) collectors is critical for performance calculations and for developing more accurate alignment procedures and techniques. This paper presents a rigorous finite-element model of a parabolic trough collector that is used to determine the impact of gravity loads on bending and displacements of the mirror facets and support structure. The geometry of the LUZ LS-2 parabolic trough collector was modeled using SolidWorks, and gravity-induced loading and displacements were simulated in SolidWorks Simulation. The model of the trough collector was evaluated in two positions: the 90° position (mirrors facing upward) and the 0° position (mirrors facing horizontally). The slope errors of the mirror facet reflective surfaces were found by evaluating simulated angular displacements of node-connected segments along the mirror surface. The ideal (undeformed) shape of the mirror was compared to the shape of the deformed mirror after gravity loading. Also, slope errors were obtained by comparing the deformed shapes between the 90° and 0° positions. The slope errors resulting from comparison of the deformed vs. undeformed shape were as high as ~2 mrad, depending on the location of the mirror facet on the collector. The slope errors resulting from a change in orientation of the trough from the 90° position to the 0° position with gravity loading were as high as ~3 mrad, depending on the location of the facet.
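To make the slope-error definition above concrete, the short sketch below estimates segment-wise slope errors from ideal and deformed surface profiles. The node coordinates, focal length, and deflection shape are illustrative assumptions (and the slope_errors_mrad helper is hypothetical); they are not output from the SolidWorks Simulation model.

```python
# Hedged sketch: estimating facet slope error from nodal surface profiles.
# All values below are illustrative placeholders, not results from the FE model.
import numpy as np

def slope_errors_mrad(x, z_ideal, z_deformed):
    """Slope error of node-connected segments, in milliradians.

    x           : along-surface coordinate of the nodes (m)
    z_ideal     : ideal (undeformed) surface height at the nodes (m)
    z_deformed  : deformed surface height after gravity loading (m)
    """
    slope_ideal = np.arctan(np.diff(z_ideal) / np.diff(x))    # segment slopes, ideal shape
    slope_def = np.arctan(np.diff(z_deformed) / np.diff(x))   # segment slopes, deformed shape
    return (slope_def - slope_ideal) * 1.0e3                  # rad -> mrad

# Toy example: a 1 m wide parabolic profile with a small, smooth sag added.
x = np.linspace(-0.5, 0.5, 51)
f = 1.84                                              # focal length (m), for illustration only
z_ideal = x**2 / (4.0 * f)
z_deformed = z_ideal + 1.0e-4 * np.cos(np.pi * x)     # assumed gravity-induced deflection
print(slope_errors_mrad(x, z_ideal, z_deformed).max())
```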
This paper introduces a new analytical 'stretch' function that accurately predicts the flux distribution from on-axis point-focus collectors. Different dish sizes and slope errors can be assessed using this analytical function with a ratio of the focal length to collector diameter fixed at 0.6 to yield the maximum concentration ratio. Results are compared to data, and the stretch function is shown to provide more accurate flux distributions than other analytical methods employing cone optics.
A rigorous computational fluid dynamics (CFD) approach to calculating temperature distributions, radiative and convective losses, and flow fields in a cavity receiver irradiated by a heliostat field is typically limited to the receiver domain alone for computational reasons. A CFD simulation cannot realistically yield a precise solution that includes the details within the vast domain of an entire heliostat field in addition to the detailed processes and features within a cavity receiver. Instead, the incoming field irradiance can be represented as a boundary condition on the receiver domain. This paper describes a program, the Solar Patch Calculator, written in Microsoft Excel VBA to characterize multiple beams emanating from a 'solar patch' located at the aperture of a cavity receiver, in order to represent the incoming irradiance from any field of heliostats as a boundary condition on the receiver domain. This program accounts for cosine losses; receiver location; heliostat reflectivity, areas and locations; field location; time of day and day of year. This paper also describes the implementation of the boundary conditions calculated by this program into a Discrete Ordinates radiation model using Ansys® FLUENT (www.fluent.com), and compares the results to experimental data and to results generated by the code DELSOL.
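As an illustration of the cosine-loss bookkeeping the Solar Patch Calculator performs for each heliostat beam, the sketch below computes the cosine efficiency of a single heliostat under assumed geometry. The function name, vectors, and distances are hypothetical, and the actual program is written in Excel VBA rather than Python.

```python
# Hedged sketch: cosine loss for a single heliostat. The mirror normal must bisect
# the sun direction and the heliostat-to-receiver direction, so the projected
# mirror area scales as cos(theta_i). Geometry here is purely illustrative.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def cosine_efficiency(sun_dir, heliostat_pos, receiver_pos):
    """Return cos(theta_i) for a heliostat tracking the receiver."""
    to_receiver = unit(receiver_pos - heliostat_pos)
    normal = unit(unit(sun_dir) + to_receiver)      # bisector of the two directions
    return float(np.dot(unit(sun_dir), normal))     # = cos(theta_i)

# Illustrative geometry: sun 60 deg above the horizon to the south, receiver 50 m up,
# heliostat 100 m north of the tower.
sun = np.array([0.0, -np.cos(np.radians(60)), np.sin(np.radians(60))])
print(cosine_efficiency(sun, np.array([0.0, 100.0, 0.0]), np.array([0.0, 0.0, 50.0])))
```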
Prediction is defined in the American Heritage Dictionary as follows: 'To state, tell about, or make known in advance, especially on the basis of special knowledge.' What special knowledge do we demand of modeling and simulation to assert that we have a predictive capability for high consequence applications? The 'special knowledge' question can be answered in two dimensions: the process and rigor by which modeling and simulation is executed and assessment results for the specific application. Here we focus on the process and rigor dimension and address predictive capability in terms of six attributes: (1) geometric and representational fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) validation, and (6) uncertainty quantification. This presentation will demonstrate through mini-tutorials, simple examples, and numerous case studies how each attribute creates opportunities for errors, biases, or uncertainties to enter into simulation results. The demonstrations will motivate a set of practices that minimize the risk in using modeling and simulation for high-consequence applications while defining important research directions. It is recognized that there are cultural, technical, infrastructure, and resource barriers that prevent analysts from performing all analyses at the highest levels of rigor. Consequently, the audience for this talk is (1) analysts, so they can know what is expected of them, (2) decision makers, so they can know what to expect from modeling and simulation, and (3) the R&D community, so they can address the technical and infrastructure issues that prevent analysts from executing analyses in a practical, timely, and quality manner.
Why plan beyond the flu? (1) The installation may be a target of bioterrorism: a national laboratory and a military base are collocated in a large population center. (2) The international airport could bring infectious agents to the area: Sandia is a global enterprise, and staff visit many foreign countries. In addition to the Pandemic Plan, Sandia has developed a separate Disease Response Plan (DRP). The DRP addresses Category A and B pathogens and Severe Acute Respiratory Syndrome (SARS). The DRP contains the Cities Readiness Initiative sub-plan for disbursement of Strategic National Stockpile assets.
This report documents calculations conducted to determine if 42 low-power transmitters located within a metallic enclosure can initiate electro-explosive devices (EED) located within the same enclosure. This analysis was performed for a generic EED no-fire power level of 250 mW. The calculations show that if the transmitters are incoherent, the power available is 32 mW - approximately one-eighth of the assumed level even with several worst-case assumptions in place.
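A small worked check of the incoherent-summation argument follows, assuming (purely for illustration) that each of the 42 transmitters couples the same power; the per-transmitter value below is a placeholder back-computed from the 32 mW total, not a number from the report's coupling analysis.

```python
# Hedged check of incoherent vs coherent addition for N identical low-power sources.
# The per-transmitter coupled power is a made-up placeholder; the report's cavity
# coupling analysis is what actually produced the 32 mW figure.
n = 42
p_each = 32.0e-3 / n             # hypothetical coupled power per transmitter (W)

p_incoherent = n * p_each        # powers add when phases are uncorrelated
p_coherent_bound = n**2 * p_each # worst-case bound if all fields added in phase

print(f"incoherent sum : {p_incoherent * 1e3:.1f} mW")   # ~32 mW, about 1/8 of 250 mW
print(f"coherent bound : {p_coherent_bound * 1e3:.1f} mW")
print("no-fire level  : 250.0 mW")
```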
The current packaging of most HC-3 radioactive materials at SNL/NM does not meet DOT requirements for offsite shipment. SNL/NM is transporting HC-3 quantities of radioactive materials from their storage locations in the Manzano Nuclear Facilities bunkers to facilities in TA-5 to be repackaged for offsite shipment. All transportation of HC-3 radioactive material by SNL/NM is onsite (performed within the confines of KAFB). Transport is performed only by the Regulated Waste/Nuclear Material Disposition Department. Part of the HC3T process is to provide the CAT with the following information at least three days prior to the move: (1) RFT - Request for Transfer; (2) HC3T movement report; (3) Radiological survey; and (4) Transportation Route Map.
The Z machine is a fast pulsed-power machine at Sandia National Laboratories designed to deliver a 100-ns rise-time, 26-MA pulse of electrical current to Z-pinch experiments for research in radiation effects and inertial confinement fusion. Since 1999, Z has also been used as a current source for magnetically driven, high-pressure, high-strain-rate experiments in condensed matter. In this mode, Z produces simultaneous planar ramp-wave loading, with rise times in the range of 300-800 ns and peak longitudinal stress in the range of 4-400 GPa, of multiple macroscopic material samples. Control of the current-pulse shape enables shockless propagation of these ramp waves through samples 1-2 mm thick to measure quasi-isentropic compression response, as well as shockless acceleration of copper flyer plates to at least 28 km/s for impact experiments to measure ultra-high-pressure (~3000 GPa) shock compression response. This presentation will give background on the relevant physics, describe the experimental technique, and show recent results from both types of experiments.
The objectives of this presentation are: (1) to develop and validate a two-phase, three-dimensional transport model for simulating PEM fuel cell performance under a wide range of operating conditions; (2) to apply the validated PEM fuel cell model to improve fundamental understanding of key phenomena involved, to identify rate-limiting steps, and to develop recommendations for improvements so as to accelerate the commercialization of fuel cell technology; and (3) to employ the validated PEMFC model to improve and optimize PEM fuel cell operation. Consequently, the project helps: (i) address the technical barriers on performance, cost, and durability; and (ii) achieve DOE's near-term technical targets on performance, cost, and durability in automotive and stationary applications.
We have demonstrated a novel microfluidic technique for aqueous media, which uses super-hydrophobic materials to create microfluidic channels that are open to the atmosphere. We have demonstrated the ability to perform traditional electrokinetic operations such as ionic separations and electrophoresis using these devices. The rate of evaporation was studied and found to increase with decreasing channel size, which places a limitation on the minimum size of channel that could be used for such a device.
Arctic sea ice plays an important role in global climate by reflecting solar radiation and insulating the ocean from the atmosphere. Due to feedback effects, the Arctic sea ice cover is changing rapidly. To accurately model this change, high-resolution calculations must incorporate: (1) the annual cycle of growth and melt due to radiative forcing; (2) mechanical deformation due to surface winds, ocean currents, and Coriolis forces; and (3) localized effects of leads and ridges. We have demonstrated a new mathematical algorithm for solving the sea ice governing equations using the material-point method (MPM) with an elastic-decohesive constitutive model. An initial comparison with the LANL CICE code indicates that the ice edge is sharper using MPM, but that many of the overall features are similar.
The development of thin batteries has presented several interesting problems that are not seen in traditional battery sizes. As the size of a battery approaches a minimum, the usable capacity of the battery decreases because the major constituents of the battery become the package and separator: as size decreases, the volumetric contribution from the package and separator increases. This can eliminate nearly all of the available capacity of these types of batteries. Developing a method for directly printing the battery layers, including the package, in place would help to alleviate this problem. The technology used in this paper to directly print battery components is known as robocasting and is capable of direct writing of slurries in complex geometries. This method is also capable of conformally printing on three-dimensional surfaces, opening up the possibility of novel batteries based on tailoring battery footprints to conform to the available substrate geometry. Interfacial resistance can also be reduced by using the direct-write method. Each layer is printed in place on the battery stack instead of being stacked one at a time. This ensures an intimate contact and seal at every interface within the cell. By limiting the resistance at these interfaces, we effectively help increase the usable capacity of our battery through increased transport capability. We have developed methodology for printing several different separator materials for use in a lithium cell. When combined with a printable cathode composed of LiFePO₄ (as seen in Figure 1) and a lithium anode, our battery is capable of delivering a theoretical capacity of 170 mAh g⁻¹. This capacity is diminished by transport phenomena within the cell, which limit the transport rate of the lithium ions during the discharge cycle. The materials chosen for the printable separator closely resemble those used in commercially available separators in order to keep the transport rates high within the cell during charge and discharge. In order to evaluate the effect of each layer being printed using the robocasting technique, coin cells using printed separator materials were assembled and cycled vs. Li/Li⁺. This allows for the standardization of a test procedure in order to evaluate each layer of a printed cell one layer at a time. A typical charge/discharge curve can be seen in Figure 2 using a printed LiFePO₄ cathode and a printed separator with a commercial Celgard separator. This experiment was run to evaluate the loss in capacity and slowdown of transport within the cell due to the addition of the printed separator. This cell was cycled multiple times and showed a capacity of 75 mAh/g. The ability of this cell to cycle with good capacity indicates that a fully printable separator material is viable for use in a full lithium cell due to the retention of capacity. Most of the fully printed cathode and separator cells exhibit working capacities between 65 and 95 mAh/g up to this point. This capacity should increase as the efficiency of the printed separator increases. The ability to deposit each layer within the cell allows for intimate contact between layers and a reduction of interfacial impedance at each interface within the cell. The overall effect of printing multiple layers within the cell will be an overall increase in the ionic conductivity during charge and discharge cycles.
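A one-line arithmetic check of the capacity figures quoted above, relating the measured coin-cell capacity to the theoretical capacity of LiFePO₄:

```python
# Arithmetic check using the capacities quoted in the text.
theoretical = 170.0   # mAh/g, theoretical capacity of LiFePO4
measured = 75.0       # mAh/g, printed cathode + printed separator coin cell
print(f"capacity utilization: {measured / theoretical:.0%}")   # roughly 44%
```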
Several different polymer membranes have been investigated for use as a printed separator. The disadvantage of using polymer separators or solid electrolyte batteries is that they have relatively low conductivities at room temperature (10⁻⁶ to 10⁻⁸ S cm⁻¹). This is orders of magnitude lower than the typically accepted 10⁻³ S cm⁻¹ needed for proper ionic transport during battery discharge. Because of their low conductivity, typical polymer separators such as polyethylene oxide (PEO) have a normal operational temperature well above ambient; at elevated temperature the conductivity of these polymers increases. These polymer membranes are, however, ideal for printable applications due to their ease of fabrication using the robocasting process and their ability to conform to surfaces uniformly. While the ability to print cathodes and separators is advantageous as a technology, several of the components still need to be fully optimized. The overall design for the fully printed lithium cell can be seen in Figure 3. The printed cathode and separator will interface with a printed anode and current collector, using the LiFePO₄ cycling to plate out a metallic lithium anode on the current collector during cycling. The ability to print every layer of the cell conformally using the robocasting technique will allow for ultimate flexibility in the application of a printed battery.
Analysts working at the International Data Centre in support of treaty monitoring through the Comprehensive Nuclear-Test-Ban Treaty Organization spend a significant amount of time reviewing hypothesized seismic events produced by an automatic processing system. When reviewing these events to determine their legitimacy, analysts take a variety of approaches that rely heavily on training and past experience. One method used by analysts to gauge the validity of an event involves examining the set of stations involved in its detection. In particular, leveraging past experience, an analyst can say that an event located in a certain part of the world is expected to be detected by Stations A, B, and C. Implicit in this statement is that such an event would usually not be detected by Stations X, Y, or Z. For some well understood parts of the world, the absence of one or more 'expected' stations - or the presence of one or more 'unexpected' stations - is correlated with a hypothesized event's legitimacy and with its survival to the event bulletin. The primary objective of this research is to formalize and quantify the difference between the observed set of stations detecting some hypothesized event and the expected set of stations historically associated with detecting similar nearby events close in magnitude. This Station Set Residual can be quantified in many ways, some of which are correlated with the analyst's determination of whether or not the event is valid. We propose that this Station Set Residual score can be used to screen out certain classes of 'false' events produced by automatic processing with a high degree of confidence, reducing the analyst burden. Moreover, we propose that visualization of the historically expected distribution of detecting stations can be immediately useful as an analyst aid during the review process.
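One plausible way to formalize such a score is sketched below: a weighted mismatch between the observed detecting stations and the historical detection probabilities of nearby, similar-magnitude events. This is a hedged illustration of the idea, not the definition used in the study; the station codes and probabilities are hypothetical.

```python
# Hedged sketch: one plausible Station Set Residual, scored as a weighted mismatch
# between the stations that detected a hypothesized event and the stations that
# historically detect similar nearby events of similar magnitude.
def station_set_residual(observed, expected_prob):
    """observed      : set of station codes that detected the event
       expected_prob : dict station -> historical detection probability
                       for similar nearby events of similar magnitude"""
    score = 0.0
    for sta, p in expected_prob.items():
        if sta not in observed:
            score += p        # penalize missing 'expected' stations
    for sta in observed:
        if sta not in expected_prob:
            score += 1.0      # penalize 'unexpected' detecting stations
    return score

# Toy example with hypothetical station codes and probabilities.
expected = {"A": 0.95, "B": 0.90, "C": 0.80, "X": 0.05}
print(station_set_residual({"A", "B", "Z"}, expected))   # misses C and X, adds Z: 0.80 + 0.05 + 1.0 = 1.85
```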
To test the hypothesis that high quality 3D Earth models will produce seismic event locations that are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. The reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one, we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model with those of standard 1D models. We perform location tests on a global, geographically distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation relative to the standard 1D AK135 model, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that AK135 represents a global average and therefore cannot capture local and regional variations.
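The damped, progressively refined inversion described above reduces, at each iteration, to a regularized least-squares update of the model. The sketch below shows that update on a tiny dense system purely for illustration; the matrix sizes, random data, and damping values are assumptions, and the production SALSA3D system is sparse and solved on a distributed cluster.

```python
# Hedged sketch of one damped least-squares tomography iteration: solve for model
# perturbations dm that reduce travel-time residuals r = t_obs - t_pred, with damping
# lam limiting the update so that ray paths change little between iterations.
import numpy as np

def damped_update(G, r, lam):
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ r)

rng = np.random.default_rng(0)
G = rng.random((200, 50))          # 200 representative rays x 50 model nodes (toy sizes)
r = rng.normal(0.0, 0.5, 200)      # travel-time residuals (s), synthetic
for lam in (10.0, 1.0, 0.1):
    dm = damped_update(G, r, lam)
    print(lam, float(np.linalg.norm(dm)))   # heavier damping -> smaller model update
```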
Recently, Sandia National Laboratories and General Motors cooperated on the development of the Biofuels Deployment Model (BDM) to assess the feasibility, implications, limitations, and enablers of producing 90 billion gallons of ethanol per year by 2030. Leveraging that past investment, a decision support model based on the BDM is being developed to assist investors, entrepreneurs, and decision makers in evaluating the costs and benefits associated with biofuels development in the U.S.-Mexico border region. Specifically, the model is designed to assist investors and entrepreneurs in assessing the risks and opportunities associated with alternative biofuels development strategies along the U.S.-Mexico border, as well as to assist local and regional decision makers in understanding the tradeoffs such development poses to their communities. The decision support model is developed in a system dynamics framework utilizing a modular architecture that integrates the key systems of feedstock production, transportation, and conversion. The model adopts a 30-year planning horizon, operating on an annual time step. Spatially, the model is disaggregated at the county level on the U.S. side of the border and at the municipio level on the Mexican side. The model extent includes Luna, Hidalgo, Doña Ana, and Otero counties in New Mexico, El Paso and Hudspeth counties in Texas, and the four municipios along the U.S. border in Chihuahua. The model considers a variety of feedstocks - specifically algae, jatropha, castor oil, and agricultural waste products from chili and pecans - identifying suitable lands for these feedstocks, possible yields, and required water use. The model also evaluates the carbon balance for each crop and provides insight into production costs, including labor demands. Finally, the model is fitted with an interactive user interface comprised of a variety of controls (e.g., slider bars, radio buttons), descriptive text, and output graphics, allowing stakeholders to directly explore the tradeoffs between alternative biofuels development scenarios.
The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks - all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
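The weighted least-squares formulation can be written as f(A,B,C) = ½‖W ∗ (X − [[A,B,C]])‖², where W is an indicator tensor for the known entries. A minimal numpy sketch of this objective and its gradients for a third-order tensor is given below; it mirrors the formulation described above but is not the authors' CP-WOPT implementation, and the unfolding and Khatri-Rao helpers are written here only for illustration.

```python
# Hedged sketch of the weighted CP objective and gradients for a 3-way tensor using
# small dense numpy arrays. W is a 0/1 weight tensor marking the known entries.
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product, rows ordered with the second index fastest."""
    r = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, r)

def cp_wopt_f_g(X, W, A, B, C):
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)    # reconstructed tensor [[A, B, C]]
    R = W * (X - Xhat)                            # residual on known entries only
    f = 0.5 * np.sum(R**2)
    gA = -R.reshape(X.shape[0], -1) @ khatri_rao(B, C)
    gB = -np.transpose(R, (1, 0, 2)).reshape(X.shape[1], -1) @ khatri_rao(A, C)
    gC = -np.transpose(R, (2, 0, 1)).reshape(X.shape[2], -1) @ khatri_rao(A, B)
    return f, (gA, gB, gC)
```

Any first-order method (for example, nonlinear conjugate gradients or L-BFGS) can then drive f toward a minimum using these gradients.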
Wind loading from turbulence and gusts can cause damage in horizontal axis wind turbines. These unsteady loads and the resulting damage initiation and propagation are difficult to predict. Unsteady loads enter at the rotor and are transmitted to the drivetrain. The current generation of wind turbines has drivetrain-mounted vibration and bearing temperature sensors, a nacelle-mounted inertial measurement unit, and a nacelle-mounted anemometer and wind vane. Some advanced wind turbines are also equipped with strain measurements at the root of the rotor. This paper analyzes additional measurements in a rotor blade to investigate the complexity of these unsteady loads. By identifying the spatial distribution, amplitude, and frequency bandwidth of these loads, design improvements could be facilitated to reduce uncertainties in reliability predictions. In addition, dynamic load estimates could be used in the future to control high-bandwidth aerodynamic actuators distributed along the rotor blade to reduce the saturation of slower pitch actuators currently used for wind turbine blades. Local acceleration measurements are made along a rotor blade to infer operational rotor states, including deflection and dynamic modal contributions. Previous work has demonstrated that acceleration measurements can be experimentally acquired on an operating wind turbine. Simulations on simplified rotor blades have also been used to demonstrate that mean blade loading can be estimated based on deflection estimates. To successfully apply accelerometers in wind turbine applications for load identification, the spectral and spatial characteristics of each excitation source must be understood so that the total acceleration measurement can be decomposed into contributions from each source. To demonstrate the decomposition of acceleration measurements in conjunction with load estimation methods, a flexible body model has been created with MSC.ADAMS®. The benefit of using a simulation model, as opposed to a physical experiment, to examine the merits of acceleration-based load identification methods is that models of the structural dynamics and aerodynamics enable one to compare estimates of the deflection and loading with actual values. Realistic wind conditions are applied to the wind turbine and used to estimate the operational displacement and acceleration of the rotor. The per-revolution harmonics dominate the displacement and acceleration response. Turbulent wind produces broadband excitation that includes both the harmonics and modal vibrations, such as the tower modes. Power spectral density estimates of the acceleration along the span of the rotor blades indicate that the edge modes may be coupled to the second harmonic.
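The spectral decomposition described above rests on power spectral density estimates of the measured accelerations. The sketch below applies a Welch PSD estimate to a synthetic blade acceleration signal; the sample rate, rotor speed, mode frequency, and amplitudes are illustrative assumptions, not values from the MSC.ADAMS model.

```python
# Hedged sketch: Welch PSD of a synthetic blade acceleration signal, separating
# per-revolution harmonics from modal and broadband content. All parameters assumed.
import numpy as np
from scipy.signal import welch

fs = 100.0                          # sample rate (Hz), assumed
t = np.arange(0.0, 600.0, 1.0 / fs)
f_rot = 0.3                         # rotor speed ~18 rpm -> 0.3 Hz, assumed
f_edge = 1.1                        # hypothetical edgewise mode frequency (Hz)

accel = (1.0 * np.sin(2 * np.pi * f_rot * t)            # 1P harmonic
         + 0.5 * np.sin(2 * np.pi * 2 * f_rot * t)      # 2P harmonic
         + 0.3 * np.sin(2 * np.pi * f_edge * t)         # edge-mode response
         + 0.2 * np.random.default_rng(1).normal(size=t.size))  # turbulence-like broadband

freq, psd = welch(accel, fs=fs, nperseg=4096)
print(freq[np.argmax(psd)])         # dominant peak sits at the 1P harmonic
```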
We examine several conducting spheres moving through a magnetic field gradient. An analytical approximation is derived and an experiment is conducted to verify the analytical solution. The experiment is simulated as well to produce a numerical result. Both the low and high magnetic Reynolds number regimes are studied. Deformation of the sphere is noted in the high Reynolds number case. It is suggested that this deformation effect could be useful for designing or enhancing present protection systems against space debris.
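The two regimes referred to above are conventionally separated by the magnetic Reynolds number, Rm = μ₀σUL. A quick evaluation for illustrative (assumed) material properties and speeds:

```python
# Hedged sketch: magnetic Reynolds number Rm = mu0 * sigma * U * L, which separates
# the low-Rm (field diffuses through the sphere) and high-Rm (field advected/expelled)
# regimes. Material and velocity values are illustrative only.
import math

mu0 = 4.0e-7 * math.pi             # vacuum permeability (H/m)
sigma_al = 3.5e7                   # conductivity of aluminum (S/m), approximate

def magnetic_reynolds(sigma, speed, length):
    return mu0 * sigma * speed * length

print(magnetic_reynolds(sigma_al, 0.1, 0.01))      # slow lab-scale motion: Rm << 1
print(magnetic_reynolds(sigma_al, 1.0e4, 0.01))    # debris-impact speeds: Rm >> 1
```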
Meaningful computational investigations of many solid mechanics problems require accurate characterization of material behavior through failure. A recent approach to fracture modeling has combined the partition of unity finite element method (PUFEM) with cohesive zone models. Extension of the PUFEM to address crack propagation is often referred to as the extended finite element method (XFEM). In the PUFEM, the displacement field is enriched to improve the local approximation. Most XFEM studies have used simplified enrichment functions (e.g., generalized Heaviside functions) to represent the strong discontinuity but have lacked an analytical basis to represent the displacement gradients in the vicinity of the cohesive crack. As such, the mesh had to be sufficiently fine for the FEM basis functions to capture these gradients. In this study, enrichment functions based upon two analytical investigations of the cohesive crack problem are examined. These functions have the potential of representing displacement gradients in the vicinity of the cohesive crack with a relatively coarse mesh and allow the crack to incrementally advance across each element. Key aspects of the corresponding numerical formulation are summarized. Analysis results for simple model problems are presented to evaluate whether quasi-static crack propagation can be accurately followed with the proposed formulation. A standard finite element solution with interface elements is used to provide the accurate reference solution, so the model problems are limited to a straight, mode I crack in plane stress. Except for the cohesive zone, the material model for the problems is homogeneous, isotropic linear elasticity. The effects of mesh refinement, mesh orientation, and enrichment schemes that enrich a larger region around the cohesive crack are considered in the study. Propagation of the cohesive zone tip and crack tip, time variation of the cohesive zone length, and crack profiles are presented. The analysis results indicate that the enrichment functions based upon the asymptotic solutions can accurately track the cohesive crack propagation independent of mesh orientation. Example problems incorporating enrichment functions for mode II kinematics are also presented. The results yield acceptable crack paths compared with experimental studies. The applicability of the enrichment functions to problems with anisotropy, large strains, and inelasticity is the subject of ongoing studies. Preliminary results for a contrived orthotropic elastic material reflect a decrease in accuracy with increased orthotropy but do not preclude their application to this class of problems.
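For reference, the enriched displacement approximation takes the generic PUFEM/XFEM form shown below (schematic notation, not quoted from the paper); the study replaces or augments the near-tip functions F_k with functions derived from analytical cohesive-crack solutions rather than the usual linear-elastic branch functions.

```latex
% Generic PUFEM/XFEM enriched displacement approximation (schematic form).
% H(x) is the Heaviside enrichment across the crack; F_k(x) are near-tip functions.
\mathbf{u}^h(\mathbf{x}) \;=\;
  \sum_{i \in \mathcal{N}} N_i(\mathbf{x})\,\mathbf{u}_i
  \;+\; \sum_{j \in \mathcal{N}_{\mathrm{cut}}} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
  \;+\; \sum_{m \in \mathcal{N}_{\mathrm{tip}}} N_m(\mathbf{x})
        \sum_{k} F_k(\mathbf{x})\,\mathbf{b}_{mk}
```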
Even after decades of research, Li-ion cells still lack thermal stability. A number of approaches, including adding fire retardants or fluoro compounds to the electrolyte to mitigate fire, have been investigated. These additives improved the thermal stability of the cells only marginally, not enough for use in transportation applications. Recent investigations indicate that hydrofluoro-ethers are promising as nonflammable additives [1]. We describe here the results of our studies on electrolytes containing the hydrofluoro-ethers in cells fabricated at Sandia. In particular, we are investigating two solvents as nonflammable additives: (1) 2-trifluoromethyl-3-methoxyperfluoropentane (TMMP) and (2) 2-trifluoro-2-fluoro-3-difluoropropoxy-3-difluoro-4-fluoro-5-trifluoropentane (TPTP). These electrolytes not only have good thermal stability compared to conventional electrolytes but also respectable ionic conductivity. Sandia-made 18650 cells successfully completed the formation cycle, and their impedance behavior is typical of Li-ion cells.
The total peak radiated power of the Department of Energy Mark II container tag was measured in the electromagnetic reverberation chamber facility at Sandia National Laboratories. The tag's radio frequency content was also evaluated for possible emissions outside the intentional transmit frequency band. No spurious emissions of any significance were found, and the radiated power conformed to the manufacturer's specifications.
This talk discusses the unique demands that informatics applications, particularly graph-theoretic applications, place on computer systems. These applications tend to pose significant data movement challenges for conventional systems. Worse, underlying technology trends are moving computers to cost-driven optimization points that exacerbate the problem. The X-caliber architecture is an economically viable counter-example to conventional architectures based on the integration of innovative technologies that support the data movement requirements of large-scale informatics applications. This talk will discuss the technology drivers and architectural features of the platform, and present analysis showing the benefits for informatics applications, as well as our traditional science and engineering HPC applications.
Multiscale multiphysics problems arise in a host of application areas of significant relevance to DOE, including electrical storage systems (membranes and electrodes in fuel cells, batteries, and ultracapacitors), water surety, chemical analysis and detection systems, and surface catalysis. Multiscale methods aim to provide detailed physical insight into these complex systems by incorporating coupled effects of relevant phenomena on all scales. However, many sources of uncertainty and modeling inaccuracies hamper the predictive fidelity of multiscale multiphysics simulations. These include parametric and model uncertainties in the models on all scales, and errors associated with coupling, or information transfer, across scales/physics. This presentation introduces our work on the development of uncertainty quantification methods for spatially decomposed atomistic-to-continuum (A2C) multiscale simulations. The key thrusts of this research effort are: inference of uncertain parameters or observables from experimental or simulation data; propagation of uncertainty through particle models; propagation of uncertainty through continuum models; propagation of information and uncertainty across model/scale interfaces; and numerical and computational analysis and control. To enable the bidirectional coupling between the atomistic and continuum simulations, a general formulation has been developed for the characterization of sampling noise due to intrinsic variability in particle simulations, and for the propagation of both this sampling noise and parametric uncertainties through coupled A2C multiscale simulations. Simplified tests of noise quantification in particle computations are conducted through Bayesian inference of diffusion rates in an idealized isothermal binary material system. A proof of concept is finally presented based on application of the present formulation to the propagation of uncertainties in a model plane Couette flow, where the near wall region is handled with molecular dynamics while the bulk region is handled with continuum methods.
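As a toy stand-in for the noise-quantification step described above, the sketch below infers a diffusion coefficient and its uncertainty from noisy per-replica particle-simulation estimates using a simple grid-based Bayesian posterior. The data, prior range, and Gaussian noise model are all assumptions for illustration and do not reproduce the formulation used in this work.

```python
# Hedged toy sketch: grid-based Bayesian inference of a diffusion coefficient D from
# noisy per-replica estimates. Data, prior, and noise model are all assumed.
import numpy as np

rng = np.random.default_rng(3)
D_true, noise = 1.0e-9, 2.0e-10                    # m^2/s, illustrative values
data = rng.normal(D_true, noise, size=20)          # replica estimates of D

D_grid = np.linspace(0.5e-9, 1.5e-9, 1001)         # uniform prior over this range
dx = D_grid[1] - D_grid[0]
loglike = -0.5 * np.sum((data[None, :] - D_grid[:, None])**2, axis=1) / noise**2
post = np.exp(loglike - loglike.max())
post /= post.sum() * dx                            # normalized posterior density

mean = np.sum(D_grid * post) * dx
std = np.sqrt(np.sum((D_grid - mean)**2 * post) * dx)
print(f"D = {mean:.3e} +/- {std:.1e} m^2/s")       # uncertainty to propagate downstream
```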
There is considerable interest in achieving a 1000 fold increase in supercomputing power in the next decade, but the challenges are formidable. In this paper, the authors discuss some of the driving science and security applications that require Exascale computing (a million, trillion operations per second). Key architectural challenges include power, memory, interconnection networks and resilience. The paper summarizes ongoing research aimed at overcoming these hurdles. Topics of interest are architecture aware and scalable algorithms, system simulation, 3D integration, new approaches to system-directed resilience and new benchmarks. Although significant progress is being made, a broader international program is needed.
Engineering analysis of systems in which thermal energy is transported primarily by conduction is a common need. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from large, general purpose, commercial codes, such as COMSOL, COSMOSWorks, ABAQUS, and TSS, to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the more simplistic, individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program and is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported in the user's manual before using the program. The theoretical and numerical background for the finite element computer program, COYOTE, is presented in detail. COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems. A general description of the boundary value problems treated by the program is presented, and the finite element formulation and the associated numerical methods used in COYOTE are outlined. Instructions for use of the code are documented in SAND2010-0714.
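Schematically, the class of boundary value problems COYOTE addresses is nonlinear transient heat conduction with prescribed-temperature, flux, and convective boundary conditions; the statement below uses generic notation and is not quoted from the COYOTE documentation.

```latex
% Schematic statement of the nonlinear transient heat conduction boundary value
% problem (generic notation, for orientation only).
\rho\, c_p(T)\,\frac{\partial T}{\partial t}
  \;=\; \nabla \cdot \bigl(k(T)\,\nabla T\bigr) \;+\; Q(\mathbf{x}, t, T)
  \quad \text{in } \Omega,
\qquad
T = \bar{T} \ \text{on } \Gamma_T,
\qquad
-k(T)\,\nabla T \cdot \mathbf{n} = \bar{q} + h\,(T - T_\infty) \ \text{on } \Gamma_q .
```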
The viscosity of molten salts comprising ternary and quaternary mixtures of the nitrates of sodium, potassium, lithium, and calcium was determined experimentally. Viscosity was measured over the temperature range from near the relatively low liquidus temperatures of the individual mixtures up to 200 °C. Molten salt mixtures that do not contain calcium nitrate exhibited relatively low viscosity and an Arrhenius temperature dependence. Molten salt mixtures that contained calcium nitrate were relatively more viscous, and viscosity increased as the proportion of calcium nitrate increased. The temperature dependence of viscosity of molten salts containing calcium nitrate displayed curvature, rather than linearity, when plotted in Arrhenius format. Viscosity data for these mixtures were correlated by the Vogel-Fulcher-Tammann-Hesse equation.
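For reference, the Vogel-Fulcher-Tammann-Hesse form is η(T) = A exp[B/(T − T₀)], which produces curvature in Arrhenius coordinates when T₀ > 0. The sketch below fits that form to synthetic viscosity data; the generated "data" and parameter values are placeholders, not the measured values reported here.

```python
# Hedged sketch: fitting the Vogel-Fulcher-Tammann-Hesse form
#     eta(T) = A * exp(B / (T - T0))
# to viscosity data. The data here are synthetic, not the report's measurements.
import numpy as np
from scipy.optimize import curve_fit

def vfth(T, A, B, T0):
    return A * np.exp(B / (T - T0))

T = np.linspace(420.0, 500.0, 9)                    # K, illustrative temperature range
rng = np.random.default_rng(2)
eta = vfth(T, 0.05, 900.0, 300.0) * (1 + 0.02 * rng.normal(size=T.size))  # mPa*s, synthetic

popt, _ = curve_fit(vfth, T, eta, p0=(0.1, 800.0, 280.0))
print(dict(zip(("A", "B", "T0"), popt)))            # nonzero T0 gives the observed curvature
```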
In a multiyear research agreement with Tenix Investments Pty. Ltd., Sandia has been developing field deployable technologies for detection of biotoxins in water supply systems. The unattended water sensor or UWS employs microfluidic chip based gel electrophoresis for monitoring biological analytes in a small integrated sensor platform. This instrument collects, prepares, and analyzes water samples in an automated manner. Sample analysis is done using the µChemLab™ analysis module. This report uses analysis results of two datasets collected using the UWS to estimate performance of the device. The first dataset is made up of samples containing ricin at varying concentrations and is used for assessing instrument response and detection probability. The second dataset is comprised of analyses of water samples collected at a water utility which are used to assess the false positive probability. The analyses of the two sets are used to estimate the Receiver Operating Characteristic or ROC curves for the device at one set of operational and detection algorithm parameters. For these parameters and based on a statistical estimate, the ricin probability of detection is about 0.9 at a concentration of 5 nM for a false positive probability of 1 × 10⁻⁶.
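The sketch below illustrates how detection-algorithm scores from spiked and blank samples combine into an ROC curve and an operating point. The score distributions are synthetic stand-ins; the report's quoted detection probability of about 0.9 at a false positive probability of 1 × 10⁻⁶ comes from a statistical estimate on the actual UWS datasets.

```python
# Hedged sketch: building an ROC curve from detection scores for spiked (ricin)
# samples and blank utility-water samples. Scores here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(4)
scores_pos = rng.normal(3.0, 1.0, 200)      # hypothetical scores, ricin-spiked samples
scores_neg = rng.normal(0.0, 1.0, 5000)     # hypothetical scores, utility-water blanks

thresholds = np.linspace(-3.0, 7.0, 201)
pd = np.array([(scores_pos > th).mean() for th in thresholds])   # detection probability
pfa = np.array([(scores_neg > th).mean() for th in thresholds])  # false positive probability

# Operating point: pick the threshold closest to a target false positive rate (1e-3 is
# the smallest rate resolvable with this toy sample size; the report extrapolates to
# 1e-6 with a statistical model) and read off the corresponding detection probability.
idx = int(np.argmin(np.abs(pfa - 1.0e-3)))
print(thresholds[idx], pd[idx])
```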
Predicting the response of energetic materials during accidents, such as fire, is important for high consequence safety analysis. We hypothesize that responses of energetic materials before and after ignition depend on factors that cause thermal and chemical damage. We have previously correlated violence from PETN to the extent of decomposition at ignition, determined as the time when the maximum Damkoehler number exceeds a threshold value. We seek to understand whether our method of violence correlation applies universally to other explosives, starting with RDX.
The image created in reflected light DIC can often be interpreted as a true three-dimensional representation of the surface geometry, provided a clear distinction can be realized between raised and lowered regions in the specimen. It may be helpful if our definition of saliency embraces work on the human visual system (HVS) as well as the more abstract work on saliency, as it is certain that understanding by humans will always stand between the recording of a useful signal from all manner of sensors and so-called actionable intelligence. A current DARPA/DSO program lays down this requirement (Kruse 2010): 'The vision for the Neurotechnology for Intelligence Analysts (NIA) Program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Current computer-based target detection capabilities cannot process vast volumes of imagery with the speed, flexibility, and precision of the human visual system.'
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
The generation of all-hexahedral finite element meshes has been an area of ongoing research for the past two decades and remains an open problem. Unconstrained plastering is a new method for generating all-hexahedral finite element meshes on arbitrary volumetric geometries. Starting from an unmeshed volume boundary, unconstrained plastering generates the interior mesh topology without the constraints of a pre-defined boundary mesh. Using advancing fronts, unconstrained plastering forms partially defined hexahedral dual sheets by decomposing the geometry into simple shapes, each of which can be meshed with simple meshing primitives. By breaking from the tradition of previous advancing-front algorithms, which start from pre-meshed boundary surfaces, unconstrained plastering demonstrates that for the tested geometries, high quality, boundary aligned, orientation insensitive, all-hexahedral meshes can be generated automatically without pre-meshing the boundary. Examples are given for meshes from both solid mechanics and geotechnical applications.
The objective of this project is to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. At the present time, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurately modeling this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. Dramatic illustrations of fracture-induced phenomena most notably include the recent collapse of ice shelves in Antarctica (e.g., the partial collapse of the Wilkins shelf in March of 2008 and the diminishing extent of the Larsen B shelf from 1998 to 2002). Other fracture examples include ice calving (fracture of icebergs), which is presently approximated in simplistic ways within ice sheet models, and the draining of supraglacial lakes through a complex network of cracks, a so-called ice sheet plumbing system, that is believed to cause accelerated ice sheet flows due essentially to lubrication of the contact surface with the ground. These dramatic changes are emblematic of the ongoing change in the Earth's polar regions and highlight the important role of fracturing ice. To model ice fracture, a simulation capability will be designed centered around extended finite elements and solved by specialized multigrid methods on parallel computers. In addition, appropriate dynamic load balancing techniques will be employed to ensure an approximately equal amount of work for each processor.