Publications


Uncertainty quantification for large-scale ocean circulation predictions

Safta, Cosmin S.; Sargsyan, Khachik S.; Debusschere, Bert D.; Najm, H.N.

Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have discontinuous character. Our approach is two-fold. First, we detect the discontinuity location with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in the presence of arbitrarily distributed input parameter values. Second, we develop a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.
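A minimal sketch of the piecewise-surrogate idea described in this abstract, assuming a one-dimensional toy model, a known discontinuity location, and low-order Legendre (PC-like) least-squares fits on each side; the published method instead detects the discontinuity curve by Bayesian inference and builds full Polynomial Chaos expansions in the uncertain parameters.

```python
# Sketch: piecewise polynomial surrogate for a discontinuous response.
# Hypothetical 1-D example; the published method uses Bayesian detection of the
# discontinuity and averaged-PC expansions in several uncertain parameters.
import numpy as np

def model(x):
    """Toy forward model with a jump at x = 0.3 (stand-in for the MOC response)."""
    return np.where(x < 0.3, np.sin(2 * x), 2.0 + 0.5 * x)

rng = np.random.default_rng(0)
x_data = rng.uniform(0.0, 1.0, 40)           # sparse, arbitrarily distributed samples
y_data = model(x_data)

x_disc = 0.3                                  # assume the discontinuity location is known
left, right = x_data < x_disc, x_data >= x_disc

# Low-order polynomial (PC-like) expansion fitted independently on each side.
coeff_left = np.polynomial.legendre.legfit(x_data[left], y_data[left], deg=3)
coeff_right = np.polynomial.legendre.legfit(x_data[right], y_data[right], deg=3)

def surrogate(x):
    """Evaluate the piecewise surrogate; branch selection uses the detected location."""
    x = np.asarray(x, dtype=float)
    y_l = np.polynomial.legendre.legval(x, coeff_left)
    y_r = np.polynomial.legendre.legval(x, coeff_right)
    return np.where(x < x_disc, y_l, y_r)

x_test = np.linspace(0.0, 1.0, 9)
print(np.max(np.abs(surrogate(x_test) - model(x_test))))   # surrogate error check
```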

More Details

Peridynamic modeling of fracture in elastomers and composites

Silling, Stewart A.

The peridynamic model of solid mechanics is a mathematical theory designed to provide consistent mathematical treatment of deformations involving discontinuities, especially cracks. Unlike the partial differential equations (PDEs) of the standard theory, the fundamental equations of the peridynamic theory remain applicable on singularities such as crack surfaces and tips. These basic relations are integro-differential equations that do not require the existence of spatial derivatives of the deformation, or even continuity of the deformation. In the peridynamic theory, material points in a continuous body separated from each other by finite distances can interact directly through force densities. The interaction between each pair of points is called a bond. The dependence of the force density in a bond on the deformation provides the constitutive model for a material. By allowing the force density in a bond to depend on the deformation of other nearby bonds, as well as its own deformation, a wide spectrum of material response can be modelled. Damage is included in the constitutive model through the irreversible breakage of bonds according to some criterion. This criterion determines the critical energy release rate for a peridynamic material. In this talk, we present a general discussion of the peridynamic method and recent progress in its application to penetration and fracture in nonlinearly elastic solids. Constitutive models are presented for rubbery materials, including damage evolution laws. The deformation near a crack tip is discussed and compared with results from the standard theory. Examples demonstrating the spontaneous nucleation and growth of cracks are presented. It is also shown how the method can be applied to anisotropic media, including fiber reinforced composites. Examples show prediction of impact damage in composites and comparison against experimental measurements of damage and delamination.
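A minimal sketch of the bond-based force computation described above, assuming a 1-D bar, a linear microelastic bond law, and irreversible breakage at an arbitrary critical stretch; it is an illustrative toy, not the elastomer and composite constitutive models presented in the talk.

```python
# Sketch: 1-D bond-based peridynamics with critical-stretch bond breakage.
# Toy material model; the micro-modulus c and critical stretch s_crit are made-up numbers.
import numpy as np

n, dx, horizon = 101, 0.01, 0.03           # nodes, spacing, peridynamic horizon
c, s_crit = 1.0e6, 0.02                    # bond micro-modulus and failure stretch
x = np.arange(n) * dx                      # reference positions
u = 0.001 * x                              # imposed linear displacement field

# Pairs of points within the horizon interact directly through "bonds".
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(x[j] - x[i]) <= horizon]
broken = {p: False for p in pairs}         # damage = irreversible bond breakage

def internal_force(u):
    """Sum pairwise bond force densities (no spatial derivatives of the deformation)."""
    f = np.zeros(n)
    for (i, j) in pairs:
        if broken[(i, j)]:
            continue
        xi = x[j] - x[i]                   # reference bond length (positive since j > i)
        eta = u[j] - u[i]                  # relative displacement
        stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
        if abs(stretch) > s_crit:          # breakage criterion sets the energy release rate
            broken[(i, j)] = True
            continue
        fb = c * stretch * dx              # linear microelastic bond force (toy units)
        f[i] += fb
        f[j] -= fb
    return f

print(internal_force(u)[:5])
```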

More Details

CO2 interaction with geomaterials

Cygan, Randall T.

This work compares the sorption and swelling processes associated with CO2-coal and CO2-clay interactions. We investigated the mechanisms of interaction related to CO2 adsorption in micropores, intercalation into sub-micropores, dissolution in the solid matrix, the role of water, and the associated changes in reservoir permeability, for applications in CO2 sequestration and enhanced coal bed methane recovery. The structural changes caused by CO2 have been investigated. A high-pressure micro-dilatometer was used to investigate the effect of CO2 pressure on the thermoplastic properties of coal. Using an identical dilatometer, Rashid Khan (1985) performed experiments with CO2 that revealed a dramatic reduction in the softening temperature of coal when exposed to high-pressure CO2. A set of experiments was designed for -20+45-mesh samples of Argonne Premium Pocahontas No. 3 coal, which is similar in proximate and ultimate analysis to the Lower Kittanning seam coal that Khan used in his experiments. No dramatic decrease in coal softening temperature has been observed in high-pressure CO2 that would corroborate the prior work of Khan. Thus, conventional polymer (or 'geopolymer') theories may not be directly applicable to CO2 interaction with coals. Clays are similar to coals in that they represent abundant geomaterials with well-developed microporous structure. We evaluated the CO2 sequestration potential of clays relative to coals and investigated the factors that affect the sorption capacity, rates, and permanence of CO2 trapping. For the geomaterials comparison studies, we used source clay samples from The Clay Minerals Society. Preliminary results showed that expandable clays have CO2 sorption capacities comparable to those of coal. We analyzed sorption isotherms, XRD, DRIFTS (infrared reflectance spectra at non-ambient conditions), and TGA-MS (thermal gravimetric analysis) data to compare the effects of various factors on CO2 trapping. In montmorillonite, CO2 molecules may remain trapped for several months following several hours of exposure to high pressure (supercritical conditions), high temperature (above the boiling point of water), or both. Such trapping is well preserved in either inert gas or the ambient environment and appears to eventually result in carbonate formation. We performed computer simulations of CO2 interaction with free cations (normal modes of CO2 and Na+CO2 were calculated using B3LYP / aug-cc-pVDZ and MP2 / aug-cc-pVDZ methods) and with clay structures containing interlayer cations (MD simulations with Clayff potentials for clay and a modified CO2 potential). Additionally, interaction of CO2 with hydrated Na-montmorillonite was studied using density functional theory with dispersion corrections. The sorption energies and the swelling behavior were investigated. Preliminary modeling results and experimental observations indicate that the presence of water molecules in the interlayer region is necessary for intercalation of CO2. Our preliminary conclusion is that CO2 molecules may intercalate into the interlayer region of swelling clay and stay there via coordination to the interlayer cations.

More Details

RCM and application at Sandia National Labs

Williams, Edward J.

Reliability-Centered Maintenance (RCM) is a process used to determine what must be done to ensure that any physical asset continues to do whatever its users want it to do in its present operating context. There are 7 basic questions of RCM: (1) what are the functions of the asset; (2) in what ways does it fail to fulfill its functions; (3) what causes each functional failure; (4) what happens when each failure occurs; (5) in what way does each failure matter; (6) what can be done to predict or prevent each failure; and (7) what should be done if a suitable proactive task cannot be found. SNL's RCM experiences: (1) acid exhaust system - (a) reduced risk of system failure (safety and operational consequences), (b) reduced annual corrective maintenance hours from 138 in FY06 to zero in FY07, FY08, FY09, FY10 and FY11 so far, (c) identified single point of failure, mitigated risk, and recommended a permanent solution; (2) fire alarm system - (a) reduced false alarms, which cause costly evacuations, (b) prevented 1- to 2-day evacuation by identifying and obtaining a critical spare for a network card; (3) heating water system - (a) reduced PM hours on fire-tube boilers by 60%, (b) developed operator tasks and PM plan for modular boilers, which can be applied to many installations; and (4) GIF source elevator system - (a) reduced frequency of PM tasks from 6 months to 1 year, (b) established predictive maintenance task that identified overheating cabinet and prevented potential electrical failure or fire.

More Details

Scheduling error correction operations for a quantum computer

Phillips, Cynthia A.; Carr, Robert D.; Ganti, Anand G.; Landahl, Andrew J.

In a (future) quantum computer a single logical quantum bit (qubit) will be made of multiple physical qubits. These extra physical qubits implement mandatory extensive error checking. The efficiency of error correction will fundamentally influence the performance of a future quantum computer, both in latency/speed and in error threshold (the worst error tolerated for an individual gate). Executing this quantum error correction requires scheduling the individual operations subject to architectural constraints. Since our last talk on this subject, a team of researchers at Sandia National Laboratories has designed a logical qubit architecture that considers all relevant architectural issues including layout, the effects of supporting classical electronics, and the types of gates that the underlying physical qubit implementation supports most naturally. This is a two-dimensional system where 2-qubit operations occur locally, so there is no need to calculate more complex qubit/information transportation. Using integer programming, we found a schedule of qubit operations that obeys the hardware constraints, implements the local-check code in the native gate set, and minimizes qubit idle periods. Even with an optimal schedule, however, parallel Monte Carlo simulation shows that there is no finite error probability for the native gates such that the error-correction system would be beneficial. However, by adding dynamic decoupling, a series of timed pulses that can reverse some errors, we found that there may be a threshold. Thus finding optimal schedules for increasingly-refined scheduling problems has proven critical for the overall design of the logical qubit system. We describe the evolving scheduling problems and the ideas behind the integer programming-based solution methods. This talk assumes no prior knowledge of quantum computing.
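A minimal sketch of the integer-programming scheduling idea, using the open-source PuLP modeler with a hypothetical four-gate circuit, slot-exclusivity and precedence constraints, and a simple makespan objective standing in for minimizing qubit idle periods; the actual Sandia model encodes a full logical-qubit error-correction cycle and its hardware constraints.

```python
# Sketch: scheduling a handful of qubit operations with integer programming (PuLP).
# The gates, qubits, precedence list, and slot count below are hypothetical.
import pulp

ops = {                       # operation -> physical qubits it occupies
    "prep_a": ["a1"],
    "cnot_1": ["a1", "d1"],
    "cnot_2": ["a1", "d2"],
    "meas_a": ["a1"],
}
precedence = [("prep_a", "cnot_1"), ("cnot_1", "cnot_2"), ("cnot_2", "meas_a")]
slots = range(6)              # discrete time slots available in one cycle

prob = pulp.LpProblem("qec_schedule", pulp.LpMinimize)
x = {(o, t): pulp.LpVariable(f"x_{o}_{t}", cat=pulp.LpBinary)
     for o in ops for t in slots}
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                                   # objective: finish the cycle early

for o in ops:                                      # each operation runs exactly once...
    prob += pulp.lpSum(x[o, t] for t in slots) == 1
    prob += pulp.lpSum(t * x[o, t] for t in slots) <= makespan   # ...before the makespan

qubits = {q for qs in ops.values() for q in qs}
for q in qubits:                                   # a physical qubit does one thing per slot
    for t in slots:
        prob += pulp.lpSum(x[o, t] for o in ops if q in ops[o]) <= 1

for before, after in precedence:                   # respect the circuit's gate ordering
    prob += (pulp.lpSum(t * x[after, t] for t in slots)
             >= pulp.lpSum(t * x[before, t] for t in slots) + 1)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for o in ops:
    slot = [t for t in slots if x[o, t].value() > 0.5][0]
    print(o, "-> slot", slot)
```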

More Details

FAA Airworthiness Assurance NDI Validation Center (AANC) operated by Sandia National Laboratories

Hartman, Roger D.; Roach, D.

Airworthiness Assurance NDI Validation Center (AANC) objectives are: (1) Enhance aircraft safety and reliability; (2) Aid in developing advanced aircraft designs and maintenance techniques; (3) Provide our customers with comprehensive, independent, and quantitative/qualitative evaluations of new and enhanced inspection, maintenance, and repair techniques; (4) Facilitate transferring effective technologies into the aviation industry; (5) Support the FAA rulemaking process by providing guidance on content & necessary tools to meet requirements or recommendations of FARs, ADs, ACs, SBs, SSIDs, CPCP, and WFD; and (6) Coordinate with and respond to the Airworthiness Assurance Working Group (AAWG) in support of the FAA Aviation Rulemaking Advisory Committee (ARAC).

More Details

Development, sensitivity analysis, and uncertainty quantification of high-fidelity arctic sea ice models

Bochev, Pavel B.; Paskaleva, Biliana S.

Arctic sea ice is an important component of the global climate system and due to feedback effects the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameter effects on the solution.
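A minimal sketch of a one-at-a-time parameter sensitivity study of the kind described above, assuming a hypothetical stand-in function run_sea_ice_model in place of a CICE or MPM run; the reported work drove the real codes through DAKOTA and also examined parameters in combination.

```python
# Sketch: one-at-a-time sensitivity of model outputs to physical parameters.
# run_sea_ice_model is a hypothetical stand-in for a single-year CICE/MPM run.
import numpy as np

def run_sea_ice_model(params):
    """Toy response returning (ice_volume, ice_extent); replace with a real model driver."""
    return (params["albedo"] * 10.0 - params["drag"] * 2.0,
            params["albedo"] * 3.0 + params["strength"] * 0.5)

nominal = {"albedo": 0.6, "drag": 1.2, "strength": 27.5}
outputs = ["ice_volume", "ice_extent"]

base = np.array(run_sea_ice_model(nominal))
for name in nominal:
    perturbed = dict(nominal)
    perturbed[name] *= 1.10                       # +10% one-at-a-time perturbation
    resp = np.array(run_sea_ice_model(perturbed))
    # Normalized sensitivity: relative output change per relative parameter change.
    sens = (resp - base) / base / 0.10
    print(name, dict(zip(outputs, np.round(sens, 3))))
```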

More Details

A stress-state modified strain based failure criterion for evaluating the structural integrity of an inner eutectic barrier

Heitman, Lili A.; Yoshimura, Richard H.; Miller, David R.

A slight modification of a package to transport solid metal contents requires inclusion of a thin titanium liner to protect against possible eutectic formation in 10 CFR 71.74 regulatory fire accident conditions. Under severe transport regulatory impact conditions, the package contents could impart high localized loading of the liner, momentarily pinching it between the contents and the thick containment vessel, and inducing some plasticity near the contact point. Actuator and drop table testing of simulated contents impacts against liner/containment vessel structures nearly bounded the potential plastic strain and stress triaxiality conditions, without any ductile tearing of the eutectic barrier. Additional bounding was necessary in some cases beyond the capability of the actuator and drop table tests, and in these cases a stress-modified evolution integral over the plastic strain history was successfully used as a failure criterion to demonstrate that structural integrity was maintained. The Heaviside brackets only allow the evolution integral to accumulate value when the maximum principal stress is positive, since failure is never observed under pure hydrostatic pressure, where the maximum principal stress is negative. Detailed finite element analyses of myriad possible impact orientations and locations between package contents and the thin eutectic barrier under regulatory impact conditions have shown that not even the initiation of a ductile tear occurs. Although localized plasticity does occur in the eutectic barrier, it is not the primary containment boundary and is thus not subject to ASME stress allowables from NRC Regulatory Guide 7.6. These analyses were used to successfully demonstrate that structural integrity of the eutectic barrier was maintained in all 10 CFR 71.73 and 71.74 regulatory accident conditions. The NRC is currently reviewing the Safety Analysis Report.
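A minimal sketch of a stress-state-modified, strain-based damage accumulation of the general form described above, with a Heaviside gate on the maximum principal stress; the triaxiality weighting function and critical value are illustrative placeholders, not the calibrated criterion used in the package analysis.

```python
# Sketch: damage accumulation over the plastic strain history, gated by the sign of
# the maximum principal stress. Weighting function and d_crit are made-up placeholders.
import numpy as np

def heaviside(s):
    """Heaviside gate: accumulate damage only when the maximum principal stress is positive."""
    return 1.0 if s > 0.0 else 0.0

def damage_integral(history, d_crit=0.5):
    """history: list of (delta_eps_p, sigma_max, triaxiality) per load increment."""
    d = 0.0
    for deps_p, sigma_max, eta in history:
        weight = np.exp(1.5 * eta)           # placeholder stress-triaxiality weighting
        d += heaviside(sigma_max) * weight * deps_p
        if d >= d_crit:
            return d, True                   # ductile tearing predicted to initiate
    return d, False

# Example increments from a hypothetical pinch event (plastic strain, MPa, triaxiality).
hist = [(0.005, 250.0, 0.4), (0.010, -80.0, -0.7), (0.008, 310.0, 0.9)]
print(damage_integral(hist))
```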

More Details

LDRD final report : a lightweight operating system for multi-core capability class supercomputers

Pedretti, Kevin T.T.; Levenhagen, Michael J.; Ferreira, Kurt; Brightwell, Ronald B.; Kelly, Suzanne M.; Bridges, Patrick G.

The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system (OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.

More Details

Optimal recovery sequencing for critical infrastructure resilience assessment

Vugrin, Eric D.; Brown, Nathanael J.

Critical infrastructure resilience has become a national priority for the U. S. Department of Homeland Security. System resilience has been studied for several decades in many different disciplines, but no standards or unifying methods exist for critical infrastructure resilience analysis. This report documents the results of a late-start Laboratory Directed Research and Development (LDRD) project that investigated the identification of optimal recovery strategies that maximize resilience. To this goal, we formulate a bi-level optimization problem for infrastructure network models. In the 'inner' problem, we solve for network flows, and we use the 'outer' problem to identify the optimal recovery modes and sequences. We draw from the literature of multi-mode project scheduling problems to create an effective solution strategy for the resilience optimization model. We demonstrate the application of this approach to a set of network models, including a national railroad model and a supply chain for Army munitions production.
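A minimal sketch of the bi-level structure described above, assuming a tiny hypothetical network: the inner problem is a max-flow computation (via networkx) and the outer problem enumerates repair orders and scores each by how quickly flow is restored; the report's formulation uses multi-mode project-scheduling techniques rather than brute-force enumeration.

```python
# Sketch: brute-force recovery sequencing over a toy network. Edges, capacities, and
# the damaged-link set are hypothetical; the resilience score simply weights flow
# restored at earlier steps more heavily.
from itertools import permutations
import networkx as nx

edges = {("s", "a"): 10, ("a", "t"): 10, ("s", "b"): 8, ("b", "t"): 8}
damaged = [("s", "a"), ("b", "t")]            # links knocked out by the disruption

def network_flow(repaired):
    """Inner problem: max s-t flow given which damaged links are back in service."""
    g = nx.DiGraph()
    for (u, v), cap in edges.items():
        if (u, v) not in damaged or (u, v) in repaired:
            g.add_edge(u, v, capacity=cap)
    if "s" not in g or "t" not in g:
        return 0.0
    value, _ = nx.maximum_flow(g, "s", "t")
    return value

best = None
for order in permutations(damaged):           # outer problem: recovery sequence
    repaired, score = set(), 0.0
    for step, link in enumerate(order):
        repaired.add(link)
        score += network_flow(repaired) / (step + 1)   # earlier restoration counts more
    if best is None or score > best[1]:
        best = (order, score)
print("best recovery order:", best)
```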

More Details

Identifying emerging smart grid impacts to upstream and midstream natural gas operations

McIntyre, Annie M.

The Smart Grid has come to describe a next-generation electrical power system that is typified by the increased use of communications and information technology in the generation, delivery and consumption of electrical energy. Much of the present Smart Grid analysis focuses on utility and consumer interaction, e.g., smart appliances, home automation systems, rate structures, consumer demand response, etc. An identified need is to assess the upstream and midstream operations of natural gas as a result of the Smart Grid. The nature of the Smart Grid, including demand response and the role of information, may require changes in upstream and midstream natural gas operations to ensure availability and efficiency. Utility reliance on natural gas will continue and likely increase, given the backup requirements for intermittent renewable energy sources. Efficient generation and delivery of electricity on the Smart Grid could affect how natural gas is utilized. Things that we already know about the Smart Grid are: (1) The role of information and data integrity is increasingly important. (2) The Smart Grid includes a fully distributed system with two-way communication. (3) The Smart Grid, a complex network, may change the way energy is supplied, stored, and demanded. (4) The Smart Grid has evolved through consumer-driven decisions. (5) The Smart Grid and the US critical infrastructure will include many intermittent renewables.

More Details

Multiscale schemes for the predictive description and virtual engineering of materials

von Lilienfeld-Toal, Otto A.

This report documents research carried out by the author throughout his three-year Truman fellowship. The overarching goal consisted of developing multiscale schemes which permit not only the predictive description but also the computational design of improved materials. Identifying new materials through changes in atomic composition and configuration requires the use of versatile first principles methods, such as density functional theory (DFT). The predictive reliability of DFT has been investigated with respect to pseudopotential construction, band gaps, van der Waals forces, and nuclear quantum effects. Continuous variation of chemical composition and derivation of accurate energy gradients in compound space has been developed within a DFT framework for free energies of solvation, reaction energetics, and frontier orbital eigenvalues. Similar variations have been leveraged within classical molecular dynamics in order to address thermal properties of molten salt candidates for heat transfer fluids used in solar thermal power facilities. Finally, a combination of DFT and statistical methods has been used to devise quantitative structure property relationships for the rapid prediction of charge mobilities in polyaromatic hydrocarbons.

More Details

LDRD final report : managing shared memory data distribution in hybrid HPC applications

Pedretti, Kevin T.T.

MPI is the dominant programming model for distributed memory parallel computers, and is often used as the intra-node programming model on multi-core compute nodes. However, application developers are increasingly turning to hybrid models that use threading within a node and MPI between nodes. In contrast to MPI, most current threaded models do not require application developers to deal explicitly with data locality. With the increasing core counts and deeper NUMA hierarchies seen in the upcoming LANL/SNL 'Cielo' capability supercomputer, data distribution places an upper bound on intra-node scalability within threaded applications. Data locality therefore has to be identified at runtime using static memory allocation policies such as first-touch or next-touch, or specified by the application user at launch time. We evaluate several existing techniques for managing data distribution using micro-benchmarks on an AMD 'Magny-Cours' system with 24 cores among 4 NUMA domains and argue for the adoption of a dynamic runtime system implemented at the kernel level, employing a novel page table replication scheme to gather per-NUMA domain memory access traces.

More Details

Uncertainty quantification and validation of combined hydrological and macroeconomic analyses

Hernandez, Jacquelynne H.; Kaplan, Paul G.; Conrad, Stephen H.

Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come and uncertainty in the local to national impacts means perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.

More Details

ParaText : scalable solutions for processing and searching very large document collections : final LDRD report

Dunlavy, Daniel D.; Crossno, Patricia J.

This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

More Details

Remote safeguards and monitoring of reactors with antineutrinos

Reyna, David R.; Cabrera-Palmer, Belkis C.; Kiff, Scott D.

The current state-of-the-art in antineutrino detection is such that it is now possible to remotely monitor the operational status, power levels and fissile content of nuclear reactors in real-time. This non-invasive and incorruptible technique has been demonstrated at civilian power reactors in both Russia and the United States and has been of interest to the IAEA Novel Technologies Unit for several years. Experts' meetings were convened at IAEA headquarters in 2003 and again in 2008. The latter produced a report in which antineutrino detection was called a 'highly promising technology for safeguards applications' at nuclear reactors, and several near-term goals and suggested developments were identified to facilitate wider applicability. Over the last few years, we have been working to achieve some of these goals and improvements. Specifically, we have already demonstrated the successful operation of non-toxic detectors and, most recently, we are testing a transportable, above-ground detector system, which is fully contained within a standard 6-meter ISO container. If successful, such a system could allow easy deployment at any reactor facility around the world. In addition, our previously demonstrated ability to remotely monitor the data and respond in real-time to reactor operational changes could allow the verification of operator declarations without the need for costly site visits. As the global nuclear power industry expands around the world, the burden of maintaining operational histories and safeguarding inventories will increase greatly. Such a system for providing remote data to verify operator declarations could greatly reduce the need for frequent site inspections while still providing a robust warning of anomalies requiring further investigation.

More Details

Geometric comparison of popular mixture-model distances

Mitchell, Scott A.

More Details

Modeling of general 1-D periodic leaky-wave antennas in layered media using EIGER

Langston, William L.; Basilio, Lorena I.

This paper presents a mixed-potential integral-equation formulation for analyzing 1-D periodic leaky-wave antennas in layered media. The structures are periodic in one dimension and finite in the other two dimensions. The unit cell consists of an arbitrary-shaped metallic/dielectric structure. The formulation has been implemented in the EIGER™ code in order to obtain the real and complex propagation wavenumbers of the bound and leaky modes of such structures. Validation results presented here include a 1-D periodic planar leaky-wave antenna and a fully 3-D waveguide test case.

More Details

Dynamics of discontinuous coating and drying of nanoparticulate films

Brinker, C.J.; Schunk, Randy

Heightened interest in micro-scale and nano-scale patterning by imprinting, embossing, and nano-particulate suspension coating stems from a recent surge in development of higher-throughput manufacturing methods for integrated devices. Energy applications addressing alternative, renewable energy sources offer many examples of the need for improved manufacturing technology for micro- and nano-structured films. In this presentation we address one approach to micro- and nano-patterned coating using film deposition and differential wetting of nanoparticle suspensions. Rather than print nanoparticle or colloidal inks in discontinuous patches, which typically employs ink jet printing technology, patterns can be formed with controlled dewetting of a continuously coated film. Here we report the dynamics of a volatile organic solvent laden with nanoparticles dispensed on the surfaces of water droplets, whose contact angles (surface energy) and perimeters are defined by lithographic patterning of initially (super)hydrophobic surfaces. The lubrication flow equation, together with an averaged particle transport equation, is employed to predict the film thickness and particle average concentration profiles during subsequent drying of the organic and water solvents. The predictions are validated by contact angle measurements, in situ grazing incidence small angle x-ray scattering experiments, and TEM images of the final nanoparticle assemblies.

More Details

PDV modifications

Dolan, Daniel H.

External modifications can transform a conventional photonic Doppler velocimetry (PDV) system into other useful configurations, such as non-standard probes and frequency-conversion measurements. This approach is easier than supporting every conceivable measurement in the core PDV design. Circulator specifications may be important: -30 dB isolation (common) is probably not enough, -50 dB isolation is available, and some bench testing may be needed.

More Details

A computational study of nodal-based tetrahedral element behavior

Gullerud, Arne S.

This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.

More Details

Magnetically applied pressure-shear : a new technique for direct strength measurement at high pressure (final report for LDRD project 117856)

Alexander, Charles S.; Haill, Thomas A.; Lamppa, Derek C.

A new experimental technique to measure material shear strength at high pressures has been developed for use on magneto-hydrodynamic (MHD) drive pulsed power platforms. By applying an external static magnetic field to the sample region, the MHD drive directly induces a shear stress wave in addition to the usual longitudinal stress wave. Strength is probed by passing this shear wave through a sample material where the transmissible shear stress is limited to the sample strength. The magnitude of the transmitted shear wave is measured via a transverse VISAR system from which the sample strength is determined.

More Details

Toward exascale computing through neuromorphic approaches

Forsythe, James C.; Branch, Darren W.; McKenzie, Amber T.

While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

More Details

Adagio 4.18 user's guide

Spencer, Benjamin S.

Adagio is a Lagrangian, three-dimensional, implicit code for the analysis of solids and structures. It uses a multi-level iterative solver, which enables it to solve problems with large deformations, nonlinear material behavior, and contact. It also has a versatile library of continuum and structural elements, and an extensive library of material models. Adagio is written for parallel computing environments, and its solvers allow for scalable solutions of very large problems. Adagio uses the SIERRA Framework, which allows for coupling with other SIERRA mechanics codes. This document describes the functionality and input structure for Adagio.

More Details

Presto 4.18 user's guide

Spencer, Benjamin S.

Presto is a Lagrangian, three-dimensional explicit, transient dynamics code that is used to analyze solids subjected to large, suddenly applied loads. The code is designed for a parallel computing environment and for problems with large deformations, nonlinear material behavior, and contact. Presto also has a versatile element library that incorporates both continuum elements and structural elements. This user's guide describes the input for Presto that gives users access to all the current functionality in the code. The environment in which Presto is built allows it to be coupled with other engineering analysis codes. Using a concept called scope, the input structure reflects the fact that Presto can be used in a coupled environment. The user's guide describes how scope is implemented from the outermost to the innermost scopes. Within a given scope, the descriptions of input commands are grouped based on functionality of the code. For example, all material input command lines are described in a chapter of the user's guide for all the material models that can be used in Presto.

More Details

Computing contingency statistics in parallel

Pebay, Philippe P.; Bennett, Janine C.

Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
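A minimal sketch of the workflow described above, assuming two hypothetical data blocks tabulated independently ("map") and merged ("reduce") before deriving joint/marginal probabilities, pointwise mutual information, and a chi-squared statistic; the paper's implementation is a parallel, open-source version of this idea.

```python
# Sketch: contingency-table statistics with a map-reduce-style merge. The data and the
# two-block split are hypothetical; all four category combinations happen to be observed.
from collections import Counter
from math import log2

def tabulate(pairs):
    """Map step: build a local contingency table (embarrassingly parallel)."""
    return Counter(pairs)

def merge(tables):
    """Reduce step: contingency tables add elementwise; unlike moment updates, this
    communication volume grows with the number of distinct (x, y) categories."""
    total = Counter()
    for t in tables:
        total.update(t)
    return total

block_a = [("sunny", "dry")] * 30 + [("rain", "wet")] * 25 + [("sunny", "wet")] * 5
block_b = [("rain", "dry")] * 10 + [("sunny", "dry")] * 20 + [("rain", "wet")] * 10

table = merge([tabulate(block_a), tabulate(block_b)])
n = sum(table.values())
px, py = Counter(), Counter()
for (x, y), c in table.items():
    px[x] += c
    py[y] += c

chi2 = 0.0
for (x, y), c in table.items():
    p_xy, p_x, p_y = c / n, px[x] / n, py[y] / n
    pmi = log2(p_xy / (p_x * p_y))                 # point-wise mutual information
    expected = p_x * p_y * n
    chi2 += (c - expected) ** 2 / expected         # chi-squared contribution
    print(f"({x},{y}): joint={p_xy:.3f} pmi={pmi:+.2f}")
print("chi-squared:", round(chi2, 2))
```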

More Details

Peridynamics as a rigorous coarse-graining of atomistics for multiscale materials design

Aidun, John B.; Kamm, James R.; Lehoucq, Richard B.; Parks, Michael L.; Sears, Mark P.; Silling, Stewart A.

This report summarizes activities undertaken during FY08-FY10 for the LDRD Peridynamics as a Rigorous Coarse-Graining of Atomistics for Multiscale Materials Design. The goal of our project was to develop a coarse-graining of finite temperature molecular dynamics (MD) that successfully transitions from statistical mechanics to continuum mechanics. Our coarse-graining overcomes the intrinsic limitation of coupling atomistics with classical continuum mechanics via the FEM (finite element method), SPH (smoothed particle hydrodynamics), or MPM (material point method); namely, that classical continuum mechanics assumes a local force interaction that is incompatible with the nonlocal force model of atomistic methods. Therefore FEM, SPH, and MPM inherit this limitation. This seemingly innocuous dichotomy has far reaching consequences; for example, classical continuum mechanics cannot resolve the short wavelength behavior associated with atomistics. Other consequences include spurious forces, invalid phonon dispersion relationships, and irreconcilable descriptions/treatments of temperature. We propose a statistically based coarse-graining of atomistics via peridynamics and so develop a first-of-a-kind mesoscopic capability to enable consistent, thermodynamically sound, atomistic-to-continuum (AtC) multiscale material simulation. Peridynamics (PD) is a microcontinuum theory that assumes nonlocal forces for describing long-range material interaction. The force interactions occurring at finite distances are naturally accounted for in PD. Moreover, PD's nonlocal force model is entirely consistent with those used by atomistic methods, in stark contrast to classical continuum mechanics. Hence, PD can be employed for mesoscopic phenomena that are beyond the realms of classical continuum mechanics and atomistic simulations, e.g., molecular dynamics and density functional theory (DFT). The latter two atomistic techniques are handicapped by the onerous length and time scales associated with simulating mesoscopic materials. Simulating such mesoscopic materials is likely to require, and greatly benefit from, multiscale simulations coupling DFT, MD, PD, and explicit transient dynamics finite element methods (e.g., Presto). The proposed work fills the gap needed to enable multiscale materials simulations.

More Details

Impact of defects on the electrical transport, optical properties and failure mechanisms of GaN nanowires

Armstrong, Andrew A.; Bogart, Katherine B.; Li, Qiming L.; Wang, George T.; Jones, Reese E.; Zhou, Xiaowang Z.; Huang, Jian Y.; Harris, Charles T.; Siegal, Michael P.; Shaner, Eric A.

We present the results of a three year LDRD project that focused on understanding the impact of defects on the electrical, optical and thermal properties of GaN-based nanowires (NWs). We describe the development and application of a host of experimental techniques to quantify and understand the physics of defects and thermal transport in GaN NWs. We also present the development of analytical models and computational studies of thermal conductivity in GaN NWs. Finally, we present an atomistic model for GaN NW electrical breakdown supported with experimental evidence. GaN-based nanowires are attractive for applications requiring compact, high-current density devices such as ultraviolet laser arrays. Understanding GaN nanowire failure at high-current density is crucial to developing nanowire (NW) devices. Nanowire device failure is likely more complex than thin film due to the prominence of surface effects and enhanced interaction among point defects. Understanding the impact of surfaces and point defects on nanowire thermal and electrical transport is the first step toward rational control and mitigation of device failure mechanisms. However, investigating defects in GaN NWs is extremely challenging because conventional defect spectroscopy techniques are unsuitable for wide-bandgap nanostructures. To understand NW breakdown, the influence of pre-existing and emergent defects during high current stress on NW properties will be investigated. Acute sensitivity of NW thermal conductivity to point-defect density is expected due to the lack of threading dislocation (TD) gettering sites, and enhanced phonon-surface scattering further inhibits thermal transport. Excess defect creation during Joule heating could further degrade thermal conductivity, producing a vicious cycle culminating in catastrophic breakdown. To investigate these issues, a unique combination of electron microscopy, scanning luminescence and photoconductivity implemented at the nanoscale will be used in concert with sophisticated molecular-dynamics calculations of surface and defect-mediated NW thermal transport. This proposal seeks to elucidate long standing material science questions for GaN while addressing issues critical to realizing reliable GaN NW devices.

More Details

End of FY10 report - used fuel disposition technical bases and lessons learned : legal and regulatory framework for high-level waste disposition in the United States

Rechard, Robert P.; Weiner, Ruth F.

This report examines the current policy, legal, and regulatory framework pertaining to used nuclear fuel and high level waste management in the United States. The goal is to identify potential changes that, if made, could add flexibility and possibly improve the chances of successfully implementing technical aspects of a nuclear waste policy. Experience suggests that the regulatory framework should be established prior to initiating future repository development. Concerning specifics of the regulatory framework, reasonable expectation as the standard of proof was successfully implemented and could be retained in the future; yet, the current classification system for radioactive waste, including hazardous constituents, warrants reexamination. Whether or not multiple sites are considered simultaneously in the future, inclusion of mechanisms such as deliberate use of performance assessment to manage site characterization would be wise. Because of experience gained here and abroad, diversity of geologic media is not particularly necessary as a criterion in site selection guidelines for multiple sites. Stepwise development of the repository program that includes flexibility also warrants serious consideration. Furthermore, integration of the waste management system, from storage and transportation through disposition, should be examined and would be facilitated by integration of the legal and regulatory framework. Finally, in order to enhance acceptability of future repository development, the national policy should be cognizant of those policy and technical attributes that enhance initial acceptance, and those policy and technical attributes that maintain and broaden credibility.

More Details

Injection-locked composite lasers for mm-wave modulation : LDRD 117819 final report

Vawter, Gregory A.; Skogen, Erik J.; Chow, Weng W.; Overberg, Mark E.; Peake, Gregory M.; Wendt, J.R.

This report summarizes a 3-year LDRD program at Sandia National Laboratories exploring mutual injection locking of composite-cavity lasers for enhanced modulation responses. The program focused on developing a fundamental understanding of the frequency enhancement previously demonstrated for optically injection locked lasers. This was then applied to the development of a theoretical description of strongly coupled laser microsystems. This understanding was validated experimentally with a novel 'photonic lab bench on a chip'.

More Details

Exploration of cloud computing late start LDRD #149630 : Raincoat. v. 2.1

Edgett, Patrick G.; Gabert, Kasimir G.; Echeverria, Victor T.; Metral, Michael D.; Leger, Michelle A.; Thai, Tan Q.

This report contains documentation from an interoperability study conducted under the Late Start LDRD 149630, Exploration of Cloud Computing. A small late-start LDRD from last year resulted in a study (Raincoat) on using Virtual Private Networks (VPNs) to enhance security in a hybrid cloud environment. Raincoat initially explored the use of OpenVPN on IPv4 and demonstrated that it is possible to secure the communication channel between two small 'test' clouds (a few nodes each) at New Mexico Tech and Sandia. We extended the Raincoat study to add IPSec support via Vyatta routers, to interface with a public cloud (Amazon Elastic Compute Cloud (EC2)), and to be significantly more scalable than the previous iteration. The study contributed to our understanding of interoperability in a hybrid cloud.

More Details

Reduced order models for thermal analysis : final report : LDRD Project No. 137807

Hogan, Roy E.

This LDRD Senior's Council Project is focused on the development, implementation and evaluation of Reduced Order Models (ROM) for application in the thermal analysis of complex engineering problems. Two basic approaches to developing a ROM for combined thermal conduction and enclosure radiation problems are considered. As a prerequisite to a ROM, a fully coupled solution method for conduction/radiation models is required; a parallel implementation is explored for this class of problems. High-fidelity models of large, complex systems are now used routinely to verify design and performance. However, there are applications where the high-fidelity model is too large to be used repetitively in a design mode. One such application is the design of a control system that oversees the functioning of the complex, high-fidelity model. Examples include control systems for manufacturing processes such as brazing and annealing furnaces as well as control systems for the thermal management of optical systems. A reduced order model (ROM) seeks to reduce the number of degrees of freedom needed to represent the overall behavior of the large system without a significant loss in accuracy. The reduction in the number of degrees of freedom of the ROM leads to immediate increases in computational efficiency and allows many design parameters and perturbations to be quickly and effectively evaluated. Reduced order models are routinely used in solid mechanics, where techniques such as modal analysis have reached a high state of refinement. Similar techniques have recently been applied to standard thermal conduction problems, though the general use of ROM for heat transfer is not yet widespread. One major difficulty with the development of ROM for general thermal analysis is the need to include the very nonlinear effects of enclosure radiation in many applications. Many ROM methods have considered only linear or mildly nonlinear problems. In the present study a reduced order model is considered for application to the combined problem of thermal conduction and enclosure radiation. The main objective is to develop a procedure that can be implemented in an existing thermal analysis code. The main analysis objective is to allow thermal controller software to be used in the design of a control system for a large optical system that resides within a complex, radiation-dominated enclosure. In the remainder of this section a brief outline of ROM methods is provided. The following chapter describes the fully coupled conduction/radiation method that is required prior to considering a ROM approach. Considerable effort was expended to implement and test the combined solution method; the ROM project ended shortly after the completion of this milestone and thus the ROM results are incomplete. The report concludes with some observations and recommendations.
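A minimal sketch of one standard way to build such a reduced order model, assuming snapshots from a generic linear heat-conduction problem and a proper orthogonal decomposition (POD) basis obtained by SVD with Galerkin projection; the report's ROM must additionally handle the strongly nonlinear enclosure-radiation coupling, which is not shown here.

```python
# Sketch: snapshot-based reduced order model via proper orthogonal decomposition (POD).
# A generic linear conduction example with made-up operators; not the report's method.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snap, r = 200, 40, 6                 # full DOFs, snapshots, reduced basis size

# Hypothetical full-order operators: A ~ conduction matrix, b ~ thermal load vector.
A = -2.0 * np.eye(n_dof)
A += np.diag(np.ones(n_dof - 1), 1) + np.diag(np.ones(n_dof - 1), -1)
b = rng.normal(size=n_dof)

# Collect snapshots of the full-order transient solution (explicit Euler here).
T = np.zeros(n_dof)
snapshots = []
for _ in range(n_snap):
    T = T + 1e-3 * (A @ T + b)
    snapshots.append(T.copy())
S = np.array(snapshots).T                     # columns are temperature snapshots

# POD basis: leading left singular vectors capture most of the snapshot energy.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                                # n_dof x r reduction basis

# Galerkin projection gives a small r x r system that is cheap to evaluate repeatedly.
A_r, b_r = Phi.T @ A @ Phi, Phi.T @ b
T_r = np.zeros(r)
for _ in range(n_snap):
    T_r = T_r + 1e-3 * (A_r @ T_r + b_r)

print("ROM reconstruction error:", np.linalg.norm(Phi @ T_r - T) / np.linalg.norm(T))
```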

More Details

Transportation scenarios for risk analysis

Weiner, Ruth F.

Transportation risk, like any risk, is defined by the risk triplet: what can happen (the scenario), how likely it is (the probability), and the resulting consequences. This paper evaluates the development of transportation scenarios, the associated probabilities, and the consequences. The most likely radioactive materials transportation scenario is routine, incident-free transportation, which has a probability indistinguishable from unity. Accident scenarios in radioactive materials transportation are of three different types: accidents in which there is no impact on the radioactive cargo, accidents in which some gamma shielding may be lost but there is no release of radioactive material, and accidents in which radioactive material may potentially be released. Accident frequencies, obtainable from recorded data validated by the U.S. Department of Transportation, are considered equivalent to accident probabilities in this study. Probabilities of different types of accidents are conditional probabilities, conditional on an accident occurring, and are developed from event trees. Development of all of these probabilities and the associated highway and rail accident event trees are discussed in this paper.
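A minimal sketch of how the three accident-scenario types combine with accident frequencies through an event tree; all branch probabilities, consequences, and the accident rate below are illustrative placeholders, not values from DOT data or from the paper.

```python
# Sketch: conditional accident-type probabilities from a simple event tree.
accident_frequency_per_km = 3.0e-7            # hypothetical recorded accident rate

# Event tree: given an accident, branch on cargo impact, shielding loss, and release.
event_tree = {
    "no_cargo_impact":           0.95,                  # cargo unaffected
    "shielding_loss_no_release": 0.05 * 0.8,            # some gamma shielding lost
    "potential_release":         0.05 * 0.2,            # material may be released
}
assert abs(sum(event_tree.values()) - 1.0) < 1e-12      # conditional probs sum to 1

consequences_person_sv = {                    # hypothetical consequence per scenario
    "no_cargo_impact": 0.0,
    "shielding_loss_no_release": 1.0e-3,
    "potential_release": 5.0e-2,
}

shipment_km = 2500.0
for scenario, p_cond in event_tree.items():
    # Unconditional scenario probability = accident frequency x distance x branch prob.
    p = accident_frequency_per_km * shipment_km * p_cond
    risk = p * consequences_person_sv[scenario]
    print(f"{scenario}: probability={p:.2e} risk={risk:.2e} person-Sv")
```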

More Details

LDRD final report : leveraging multi-way linkages on heterogeneous data

Dunlavy, Daniel D.; Kolda, Tamara G.

This report is a summary of the accomplishments of the 'Leveraging Multi-way Linkages on Heterogeneous Data' LDRD, which ran from FY08 through FY10. The goal was to investigate scalable and robust methods for multi-way data analysis. We developed a new optimization-based method called CPOPT for fitting a particular type of tensor factorization to data; CPOPT was compared against existing methods and found to be more accurate than any faster method and faster than any equally accurate method. We extended this method to computing tensor factorizations for problems with incomplete data; our results show that you can recover scientifically meaningful factorizations with large amounts of missing data (50% or more). The project has involved 5 members of the technical staff, 2 postdocs, and 1 summer intern. It has resulted in a total of 13 publications, 2 software releases, and over 30 presentations. Several follow-on projects have already begun, with more potential projects in development.
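A minimal sketch of fitting a CP tensor factorization to data with roughly 50% missing entries, using an EM-style imputation step wrapped around standard CP-ALS updates; this is a generic illustration of the idea, not the CPOPT algorithm developed in the project.

```python
# Sketch: rank-R CP factorization with ~50% missing data. Missing entries are imputed
# with the current model, then ordinary CP-ALS updates are applied. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
I, J, K, R = 10, 12, 8, 3

A0, B0, C0 = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)              # synthetic low-rank tensor
W = (rng.uniform(size=X.shape) > 0.5).astype(float)     # 1 = observed, 0 = missing

def als_update(Xf, M1, M2, subs):
    """One CP-ALS least-squares update for a single factor matrix."""
    gram = (M1.T @ M1) * (M2.T @ M2)                     # Hadamard product of Grams
    mttkrp = np.einsum(subs, Xf, M1, M2)                 # matricized tensor times KR product
    return np.linalg.solve(gram.T, mttkrp.T).T

A, B, C = (rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R)))
for it in range(200):
    M = np.einsum("ir,jr,kr->ijk", A, B, C)
    Xf = W * X + (1.0 - W) * M                           # impute missing entries (EM step)
    A = als_update(Xf, B, C, "ijk,jr,kr->ir")
    B = als_update(Xf, A, C, "ijk,ir,kr->jr")
    C = als_update(Xf, A, B, "ijk,ir,jr->kr")

M = np.einsum("ir,jr,kr->ijk", A, B, C)
print("relative error on missing entries:",
      np.linalg.norm((1 - W) * (X - M)) / np.linalg.norm((1 - W) * X))
```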

More Details

Generalized high order compact methods

Spotz, William S.

The fundamental ideas of the high order compact method are combined with the generalized finite difference method. The result is a finite difference method that works on unstructured, nonuniform grids, and is more accurate than one would classically expect from the number of grid points employed.
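A minimal sketch of the generalized finite-difference ingredient, assuming a nonuniform 1-D stencil and derivative weights obtained by matching Taylor terms; the paper's high order compact construction builds on this kind of building block but is not reproduced here.

```python
# Sketch: generalized finite-difference weights on a nonuniform 1-D stencil, obtained by
# requiring exactness for low-order polynomials about the evaluation point x0.
from math import factorial
import numpy as np

def fd_weights(nodes, x0, deriv):
    """Weights w with sum_i w_i f(nodes_i) ~ f^(deriv)(x0), exact for polynomials."""
    n = len(nodes)
    V = np.array([[(x - x0) ** p for x in nodes] for p in range(n)])   # moment matrix
    rhs = np.zeros(n)
    rhs[deriv] = factorial(deriv)
    return np.linalg.solve(V, rhs)

# Nonuniform, unstructured-looking point cloud around x0 = 0.0.
nodes = [-0.13, -0.04, 0.0, 0.07, 0.19]
w = fd_weights(nodes, 0.0, deriv=2)

f = np.exp                                  # test function, f''(0) = 1 exactly
approx = sum(wi * f(xi) for wi, xi in zip(w, nodes))
print("approx f''(0) =", approx, " exact =", 1.0)
```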

More Details

Modeling cortical circuits

Rothganger, Fredrick R.; Rohrer, Brandon R.; Verzi, Stephen J.; Xavier, Patrick G.

The neocortex is perhaps the highest region of the human brain, where auditory and visual perception take place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software which runs on parallel computing hardware.
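A minimal sketch of a perceptron-like unit written as a difference equation, in the spirit described above; the sizes, weights, and leak constant are arbitrary illustrations, not the project's actual cortical-circuit algorithms.

```python
# Sketch: a leaky, perceptron-like unit updated by a discrete-time difference equation.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, alpha = 8, 4, 0.2                  # inputs, units, leak/update rate

W = rng.normal(scale=0.5, size=(n_out, n_in))   # feed-forward weights (arbitrary)
b = np.zeros(n_out)
y = np.zeros(n_out)                             # unit state, persistent between steps

def step(y, x):
    """One update: y[t+1] = (1 - alpha) * y[t] + alpha * f(W x + b)."""
    drive = W @ x + b
    return (1.0 - alpha) * y + alpha * np.tanh(drive)

for t in range(5):                              # drive the circuit with random input
    x = rng.normal(size=n_in)
    y = step(y, x)
    print(f"t={t} mean activity={y.mean():+.3f}")
```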

More Details

Peer-to-peer architectures for exascale computing : LDRD final report

Mayo, Jackson M.; Vorobeychik, Yevgeniy V.; Armstrong, Robert C.; Minnich, Ronald G.; Rudish, Don W.

The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: The Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has been previously paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use.
The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future. We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately ≥ 10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation solver for the heat equation, and the closely related technique of value iteration in optimization. We aimed to apply P2P concepts in designing implementations capable of surviving an unreliable machine of 10^6 nodes.
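A minimal sketch of the fault-oblivious idea applied to the Jacobi example mentioned above: in each sweep a random subset of nodes simply fails to update and the relaxation proceeds regardless; this serial toy only illustrates the error-tolerance argument, not the project's P2P or Emulytics implementation.

```python
# Sketch: "fault-oblivious" Jacobi relaxation for the 1-D steady heat equation, where a
# random 10% of nodes skip their update each sweep (emulating unreliable peers).
import numpy as np

rng = np.random.default_rng(4)
n, failure_rate, sweeps = 64, 0.10, 5000

u = np.zeros(n)
u[0], u[-1] = 1.0, 0.0                      # fixed boundary temperatures

for _ in range(sweeps):
    alive = rng.uniform(size=n) > failure_rate      # nodes that respond this sweep
    new = u.copy()
    interior = np.arange(1, n - 1)
    upd = interior[alive[1:-1]]                     # Jacobi update of "alive" nodes only
    new[upd] = 0.5 * (u[upd - 1] + u[upd + 1])
    u = new

exact = np.linspace(1.0, 0.0, n)            # steady-state solution is linear
print("max error despite failures:", np.max(np.abs(u - exact)))
```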

More Details

Long-Term Environmental Stewardship (LTES) life-cycle material management at Sandia National Laboratories

Nagy, Michael D.

The Long-Term Environmental Stewardship (LTES) mission is to ensure long-term protection of human health and the environment, and proactive management toward sustainable use and protection of natural and cultural resources affected by any Sandia National Laboratories (SNL) operations and operational legacies. The primary objectives of the LTES program are to: (1) Protect the environment from present and future operations; (2) Preserve and protect natural and cultural resources; and (3) Apply environmental life-cycle management to SNL operations.

More Details

Parallel octree-based hexahedral mesh generation for Eulerian to Lagrangian conversion

Owen, Steven J.; Staten, Matthew L.

Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced. A new primal-contouring approach is introduced for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also introduced as well as an extension of the algorithm to handle tetrahedral meshes. We also describe new scalable MPI-based implementations of these procedures. We describe a new software module, Sculptor, which has been developed for use as an embedded component of CTH. We also describe its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.
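A minimal sketch of the starting point for Eulerian-to-Lagrangian conversion: turning cell-centered volume-fraction data on a structured grid into an all-hexahedral mesh by keeping the "mostly full" cells; the primal contouring, refinement, smoothing, and non-manifold handling described in the paper are omitted, and the data here are hypothetical.

```python
# Sketch: trivial all-hex mesh from cell-centered volume fractions on a structured grid.
import numpy as np

vf = np.zeros((4, 4, 4))
vf[1:3, 1:3, 1:3] = 0.9                      # hypothetical material blob

dx = 1.0
nodes, node_id, hexes = {}, {}, []

def get_node(i, j, k):
    """Assign a unique id to each grid corner the first time it is referenced."""
    if (i, j, k) not in node_id:
        node_id[(i, j, k)] = len(node_id)
        nodes[node_id[(i, j, k)]] = (i * dx, j * dx, k * dx)
    return node_id[(i, j, k)]

for i, j, k in zip(*np.where(vf > 0.5)):     # one hex per "mostly full" Eulerian cell
    corners = [(i, j, k), (i + 1, j, k), (i + 1, j + 1, k), (i, j + 1, k),
               (i, j, k + 1), (i + 1, j, k + 1), (i + 1, j + 1, k + 1), (i, j + 1, k + 1)]
    hexes.append([get_node(*p) for p in corners])

print(len(nodes), "nodes,", len(hexes), "hex elements")
```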

More Details

International physical protection self-assessment tool for chemical facilities

Stiles, Linda L.; Tewell, Craig R.; Burdick, Brent B.; Lindgren, Eric R.

This report is the final report for Laboratory Directed Research and Development (LDRD) Project No. 130746, International Physical Protection Self-Assessment Tool for Chemical Facilities. The goal of the project was to develop an exportable, low-cost, computer-based risk assessment tool for small- to medium-sized chemical facilities. The tool would assist facilities in improving their physical protection posture while protecting their proprietary information. In FY2009, the project team proposed a comprehensive evaluation of safety and security regulations in the target geographical area, Southeast Asia. This approach was later modified, and the team worked instead on developing a methodology for identifying potential targets at chemical facilities. Milestones proposed for FY2010 included characterizing the international/regional regulatory framework, finalizing the target identification and consequence analysis methodology, and developing, reviewing, and piloting the software tool. The project team accomplished the initial goal of developing potential target categories for chemical facilities; however, the additional milestones proposed for FY2010 were not pursued, and the LDRD funding was therefore redirected.

More Details

Hydrogen effects on materials for CNG/H2 blends

Somerday, Brian P.; Keller, Jay O.

No concerns were identified for Hydrogen-Enriched Compressed Natural Gas (HCNG) in steel storage tanks if the material strength is below 950 MPa. We recommend evaluating H₂-assisted fatigue cracking in higher-strength steels at the H₂ partial pressure of the blend. Limited fatigue testing on higher-strength steel cylinders in H₂ shows promising results. Impurities in Compressed Natural Gas (CNG), e.g., CO, may provide an extrinsic mechanism for mitigating H₂-assisted fatigue cracking in steel tanks.

More Details

Quantification of margins and uncertainty for risk-informed decision analysis

Alvin, Kenneth F.

QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistency in integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety. The implementation of QMU varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. QMU provides input into a risk-informed decision-making process. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties, it is important to pay attention to the distinction between those arising from incomplete knowledge ('epistemic' or systematic) and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying epistemic uncertainties such as poorly modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiments. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by an ongoing, transparent QMU-based documentation and analysis in order to permit a confidence level necessary for eventual certification.

More Details

In-situ observation of ErD2 formation during D2 loading via neutron diffraction

Rodriguez, Marko A.; Snow, Clark S.; Wixom, Ryan R.

In an effort to better understand the structural changes occurring during hydrogen loading of erbium target materials, we have performed in situ D₂ loading of erbium metal (powder) at temperature (450 °C) with simultaneous neutron diffraction analysis. This experiment tracked the conversion of Er metal to the α erbium deuteride (solid-solution) phase and then into the β (fluorite) phase. Complete conversion to ErD2.0 was accomplished at 10 Torr D₂ pressure, with deuterium fully occupying the tetrahedral sites in the fluorite lattice.

More Details

Novel detection methods for radiation-induced electron-hole pairs

Cich, Michael C.; Derzon, Mark S.; Martinez, Marino M.; Nordquist, Christopher N.; Vawter, Gregory A.

Most common ionizing radiation detectors typically rely on one of two general methods: collection of charge generated by the radiation, or collection of light produced by recombination of excited species. Substantial efforts have been made to improve the performance of materials used in these types of detectors, e.g., to raise the operating temperature or to improve the energy resolution, timing, or tracking ability. However, regardless of the material used, all these detectors are limited in performance by statistical variation in the collection efficiency, for charge or photons. We examine three alternative schemes for detecting ionizing radiation that do not rely on traditional direct collection of the carriers or photons produced by the radiation. The first method detects refractive index changes in a resonator structure. The second looks at alternative means to sense the chemical changes caused by radiation in a scintillator-type material. The final method examines the possibility of sensing the perturbation that radiation causes in the transmission of an RF transmission-line structure. Aspects of the feasibility of each approach are examined and recommendations made for further work.

More Details

Aerosol cluster impact and break-up: II. Atomic and Cluster Scale Models

Lechman, Jeremy B.

Understanding the interaction of aerosol particle clusters/flocs with surfaces is an area of interest for a number of processes in chemical, pharmaceutical, and powder manufacturing, as well as in steam-tube rupture in nuclear power plants. Developing predictive capabilities for these applications involves coupled phenomena on multiple length and time scales, from the macroscopic process scale (~1 m) to the multi-cluster interaction scale (1 mm–0.1 m) to the single-cluster scale (~1000–10000 particles) to particle-scale (10 nm–10 µm) interactions, and on down to sub-particle, atomic-scale interactions. The focus of this report is on the single-cluster scale, although work directed toward developing better models of particle-particle interactions by considering sub-particle-scale interactions and phenomena is also described. In particular, results of mesoscale (i.e., particle to single-cluster scale) discrete element method (DEM) simulations of aerosol cluster impact with rigid walls are presented. The particle-particle interaction model is based on JKR adhesion theory and is implemented as an enhancement to the granular package in the LAMMPS code. The theory behind the model is outlined and preliminary results are shown. Additionally, as mentioned, results from atomistic classical molecular dynamics simulations are also described as a means of developing higher-fidelity models of particle-particle interactions. Ultimately, the results from these and other studies at various scales must be collated to provide systems-level models with accurate 'sub-grid' information for design, analysis, and control of the underlying system processes.
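For reference, the particle-particle model mentioned above builds on JKR adhesive contact; the standard textbook relations between contact radius, applied load, and work of adhesion (not quoted from the report) are:

```latex
% JKR adhesive contact between a sphere of radius R and a flat surface
% (use the reduced radius for two spheres). Standard textbook relations.
% a: contact radius, F: applied load, W: work of adhesion,
% K = (4/3) E^{*}  with  1/E^{*} = (1-\nu_1^2)/E_1 + (1-\nu_2^2)/E_2.
\[
  a^{3} = \frac{R}{K}\left( F + 3\pi W R
      + \sqrt{\,6\pi W R\,F + \left(3\pi W R\right)^{2}} \right),
  \qquad
  F_{c} = -\tfrac{3}{2}\,\pi W R ,
\]
% where F_c is the pull-off (critical detachment) force.
```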

More Details

Sensitivity analysis techniques for models of human behavior

Naugle, Asmeret B.

Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, several promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
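The abstract does not name the specific global methods used; as a generic illustration of a variance-based (Sobol-type) analysis that accounts for input interactions, here is a minimal pick-freeze estimator applied to a toy model (the estimator is the standard Saltelli scheme; the toy model and all names are illustrative assumptions, not the project's model).

```python
import numpy as np

def sobol_indices(model, d, n=10000, rng=None):
    """First-order and total Sobol indices via the Saltelli pick-freeze scheme.

    model : callable mapping an (n, d) array of inputs in [0, 1] to n outputs
    d     : number of input factors
    (Generic textbook estimator; illustrative only, not the project's method.)
    """
    rng = rng or np.random.default_rng(0)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                              # freeze all inputs but i
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var          # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total effect
    return S1, ST

# toy model with an interaction between x0 and x1
toy = lambda X: X[:, 0] + 2.0 * X[:, 1] + 5.0 * X[:, 0] * X[:, 1] + 0.1 * X[:, 2]
first_order, total = sobol_indices(toy, d=3)
```

A large gap between the total and first-order index for an input signals that its influence comes mainly through interactions, the kind of insight the abstract attributes to global methods.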

More Details

Enterprise analytics

Spomer, Judith E.

Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search never seems to find anything useful or relevant. The challenge is to provide results that are ranked according to the user's definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences, re-ranking results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the user's search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
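The abstract does not describe the model's internals; a minimal sketch of one common approach, boosting engine scores by a smoothed click-through rate learned from query/click history, is shown below (class, parameter, and document names are hypothetical and not Sandia's implementation).

```python
from collections import defaultdict

class ClickRanker:
    """Re-rank search results using historical clicks (illustrative sketch only)."""

    def __init__(self, alpha=1.0, beta=10.0, boost=2.0):
        self.clicks = defaultdict(int)       # (query, doc) -> click count
        self.impressions = defaultdict(int)  # (query, doc) -> times shown
        self.alpha, self.beta, self.boost = alpha, beta, boost

    def record(self, query, shown_docs, clicked_doc=None):
        for doc in shown_docs:
            self.impressions[(query, doc)] += 1
        if clicked_doc is not None:
            self.clicks[(query, clicked_doc)] += 1

    def rerank(self, query, results):
        """results: list of (doc_id, engine_score); returns the re-ranked list."""
        def adjusted(doc, score):
            k = (query, doc)
            # Beta-smoothed click-through rate so rarely shown documents are not penalized
            ctr = (self.clicks[k] + self.alpha) / (self.impressions[k] + self.alpha + self.beta)
            return score * (1.0 + self.boost * ctr)
        return sorted(results, key=lambda r: adjusted(*r), reverse=True)

# usage: record yesterday's clicks, then re-rank today's results
ranker = ClickRanker()
ranker.record("travel policy", ["doc_a", "doc_b", "doc_c"], clicked_doc="doc_c")
reranked = ranker.rerank("travel policy", [("doc_a", 3.2), ("doc_b", 2.9), ("doc_c", 2.5)])
```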

More Details

Detection of exposure damage in composite materials using Fourier transform infrared technology

Roach, D.; Duvall, Randy L.

Goal: to detect the subtle changes in laminate composite structures brought about by thermal, chemical, ultraviolet, and moisture exposure, and to compare the sensitivity of an array of NDI methods, including Fourier Transform Infrared Spectroscopy (FTIR), in detecting subtle differences in composite materials due to deterioration. Inspection methods applied: ultrasonic pulse echo, through-transmission ultrasonics, thermography, resonance testing, mechanical impedance analysis, eddy current, low-frequency bond testing, and FTIR. Comparisons between the NDI methods are being used to establish the potential of FTIR to provide the necessary sensitivity to non-visible, yet significant, damage in the resin and fiber matrix of composite structures. Comparison of NDI results with short beam shear (SBS) tests is being used to relate NDI sensitivity to reduction in structural performance. FTIR is a chemical analysis technique that measures the infrared intensity versus wavelength of light reflected from the surface of a structure, yielding chemical and physical information via this signature. Advances in instrumentation have resulted in hand-held portable devices that allow for field use (a few seconds per scan). The technique shows promise for production quality assurance and in-service applications on composite aircraft structures (scarfed repairs). Statistical analysis of the frequency spectra produced by FTIR interrogations is being used to produce an NDI technique for assessing material integrity. Conclusions are: (1) NDI can be used to assess loss of composite laminate integrity brought about by thermal, chemical, ultraviolet, and moisture exposure. (2) Degradation trends between SBS strength and exposure levels (temperature and time) have been established for different materials. (3) Various NDI methods have been applied to evaluate damage and relate this to loss of integrity; pulse-echo ultrasonics shows the greatest sensitivity. (4) FTIR shows promise for damage detection and calibration to predict structural integrity (short beam shear). (5) Detection of damage at medium exposure levels (possibly resin matrix degradation only) is more difficult and requires additional study. (6) These are initial results only; the program is continuing with additional heat, UV, chemical, and water exposure test specimens.

More Details

Quantifying the debonding of inclusions through tomography and computational homology

Foulk, James W.; Jin, Huiqing J.; Lu, Wei-Yang L.; Mota, Alejandro M.

This report describes a Laboratory Directed Research and Development (LDRD) project to use synchrotron-radiation computed tomography (SRCT) data to determine the conditions and mechanisms that lead to void nucleation in rolled alloys. The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL) has provided SRCT data for several specimens of 7075-T7351 aluminum plate (widely used for aerospace applications) stretched to failure, loaded in directions perpendicular and parallel to the rolling direction. The resolution of the SRCT data is 900 nm, which allows elucidation of the mechanisms governing void growth and coalescence. This resolution is not fine enough, however, for nucleation. We propose the use of statistics and image processing techniques to obtain sub-resolution-scale information from these data, and thus determine where in the specimen and when during the loading program nucleation occurs and the mechanisms that lead to it. Quantitative analysis of the tomography data, however, leads to the conclusion that the reconstruction process compromises the information obtained from the scans. Alternate, more powerful reconstruction algorithms are needed to address this problem, but those fall beyond the scope of this project.

More Details

Multivariate analysis of progressive thermal desorption coupled gas chromatography-mass spectrometry

Van Benthem, Mark V.; Borek, Theodore T.; Mowry, Curtis D.; Kotula, Paul G.

Thermal decomposition of polydimethylsiloxane compounds, Sylgard® 184 and 186, was examined using thermal desorption coupled gas chromatography-mass spectrometry (TD/GC-MS) and multivariate analysis. This work describes a method of producing multiway data using a stepped thermal desorption. The technique involves sequentially heating a sample of the material of interest with subsequent analysis in a commercial GC/MS system. The decomposition chromatograms were analyzed using multivariate analysis tools including principal component analysis (PCA), factor rotation employing the varimax criterion, and multivariate curve resolution. The results of the analysis show seven components related to offgassing of various fractions of siloxanes that vary as a function of temperature. Thermal desorption coupled with gas chromatography-mass spectrometry (TD/GC-MS) is a powerful analytical technique for analyzing chemical mixtures. It has great potential in numerous analytic areas, including materials analysis, sports medicine, the detection of designer drugs, and biological research for metabolomics. Data analysis is complicated, far from automated, and can result in high false positive or false negative rates. We have demonstrated a step-wise TD/GC-MS technique that removes more volatile compounds from a sample before extracting the less volatile compounds. This creates an additional dimension of separation before the GC column, while simultaneously generating three-way data. Sandia's proven multivariate analysis methods, when applied to these data, have several advantages over current commercial options. The technique has also demonstrated potential for success in finding and enabling identification of trace compounds. Several challenges remain, however, including understanding the sources of noise in the data, outlier detection, improving the data pretreatment and analysis methods, developing a software tool for ease of use by the chemist, and demonstrating our belief that this multivariate analysis will enable superior differentiation capabilities. In addition, noise and system artifacts challenge the analysis of GC-MS data collected on lower-cost equipment, ubiquitous in commercial laboratories. This research has the potential to affect many areas of analytical chemistry, including materials analysis, medical testing, and environmental surveillance. It could also provide a method to measure adsorption parameters for chemical interactions on various surfaces by measuring desorption as a function of temperature for mixtures. We have presented results of a novel method for examining offgas products of a common PDMS material. Our method involves utilizing a stepped TD/GC-MS data acquisition scheme that may be almost totally automated, coupled with multivariate analysis schemes. This method of data generation and analysis can be applied to a number of materials aging and thermal degradation studies.
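The multivariate tools named above are standard; as a minimal illustration of the PCA step on stepped-desorption data, the sketch below decomposes a hypothetical matrix of chromatograms, one row per temperature step (synthetic stand-in data; varimax rotation and multivariate curve resolution are not shown).

```python
import numpy as np
from sklearn.decomposition import PCA

# 'tic' is a hypothetical array of total-ion chromatograms:
# one row per desorption temperature step (shape: n_steps x n_retention_times).
rng = np.random.default_rng(1)
tic = np.abs(rng.normal(size=(8, 500)))   # synthetic stand-in for real TD/GC-MS data

# mean-center each retention-time channel before decomposition
X = tic - tic.mean(axis=0)

pca = PCA(n_components=3)
scores = pca.fit_transform(X)    # how strongly each temperature step expresses each component
loadings = pca.components_       # chromatographic profile associated with each component
print(pca.explained_variance_ratio_)
```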

More Details

Application of the DG-1199 methodology to the ESBWR and ABWR

Kalinich, Donald A.; Walton, Fotini W.; Gauntt, Randall O.

Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors', provides guidance, applicable to RADTRAD MSIV leakage models, for scaling containment aerosol concentration to the expected steam-dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study, Economic Simplified Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; it should be noted, however, that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence and importance of key input parameters and features of the models.

More Details

Hardware authentication using transmission spectra modified optical fiber

Romero, Juan A.; Grubbs, Robert K.

The ability to authenticate the source and integrity of data is critical to the monitoring and inspection of special nuclear materials, including hardware related to weapons production. Current methods rely on electronic encryption/authentication codes housed in monitoring devices. This invites questions about how authentication information is implemented and protected in an electronic component, which may necessitate EMI shielding and possibly an on-board power source to maintain the information in memory. By using atomic layer deposition (ALD) techniques on photonic band gap (PBG) optical fibers, we will explore the potential to randomly manipulate the output spectrum and intensity of an input light source. This randomization could produce unique signatures authenticating devices, with the potential to authenticate data. An external light source projected through the fiber, with a spectrometer at the exit, would 'read' the unique signature. No internal power or computational resources would be required.
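The report does not specify how an exit spectrum would be compared with an enrolled signature; one simple, purely illustrative criterion is a normalized correlation against the stored spectrum (function and threshold are hypothetical).

```python
import numpy as np

def spectra_match(measured, enrolled, threshold=0.98):
    """Compare a measured transmission spectrum to an enrolled signature using
    normalized cross-correlation (illustrative criterion; not from the report).

    measured, enrolled : 1-D arrays of intensity sampled on the same wavelength grid
    """
    m = (measured - measured.mean()) / measured.std()
    e = (enrolled - enrolled.mean()) / enrolled.std()
    similarity = float(np.dot(m, e) / len(m))   # Pearson correlation in [-1, 1]
    return similarity >= threshold, similarity
```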

More Details

A bio-synthetic interface for discovery of viral entry mechanisms

Negrete, Oscar N.; Hayden, Carl C.

Understanding and defending against pathogenic viruses is an important public health and biodefense challenge. The focus of our LDRD project has been to uncover the mechanisms enveloped viruses use to identify and invade host cells. We have constructed interfaces between viral particles and synthetic lipid bilayers. This approach provides a minimal setting for investigating the initial events of host-virus interaction - (i) recognition of, and (ii) entry into the host via membrane fusion. This understanding could enable rational design of therapeutics that block viral entry as well as future construction of synthetic, non-proliferating sensors that detect live virus in the environment. We have observed fusion between synthetic lipid vesicles and Vesicular Stomatitis virus particles, and we have observed interactions between Nipah virus-like particles and supported lipid bilayers and giant unilamellar vesicles.

More Details

Biomolecular transport and separation in nanotubular networks

Sasaki, Darryl Y.; Wang, Julia W.; Hayden, Carl C.; Stachowiak, Jeanne C.; Branda, Steven B.; Bachand, George B.; Meagher, Robert M.; Stevens, Mark J.; Robinson, David R.; Zendejas, Frank Z.

Cell membranes are dynamic substrates that achieve a diverse array of functions through multi-scale reconfigurations. We explore the morphological changes that occur upon protein interaction with model membrane systems, which induce deformation of their planar structure to yield nanotube assemblies. In the two examples shown in this report, we describe the use of membrane adhesion and particle trajectories to form lipid nanotubes via mechanical stretching, and protein adsorption onto domains to induce membrane curvature through steric pressure. Through this work, the relationships between membrane bending rigidity, protein affinity, and the line tension of phase-separated structures were examined, and their role in biological membranes explored.

More Details

QMU as an approach to strengthening the predictive capabilities of complex models

Gray, Genetha A.; Boggs, Paul T.; Grace, Matthew G.

Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber, and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation (M&S) technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability, and performance requirements have been met after a system has been developed. In this sense, QMU is used as a check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (e.g., the Internet, electrical distribution grids), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third, Bayesian methods for optimal testing in the QMU framework were developed. The completion of this project represents an increased understanding of how to apply and use the QMU process as a means of improving model predictions of the behavior of complex systems.

More Details

Studies of the viscoelastic properties of water confined between surfaces of specified chemical nature

Moore, Nathan W.; Feibelman, Peter J.; Grest, Gary S.

This report summarizes the work completed under the Laboratory Directed Research and Development (LDRD) project 10-0973 of the same title. Understanding the molecular origin of the no-slip boundary condition remains vitally important for understanding molecular transport in biological, environmental and energy-related processes, with broad technological implications. Moreover, the viscoelastic properties of fluids in nanoconfinement or near surfaces are not well-understood. We have critically reviewed progress in this area, evaluated key experimental and theoretical methods, and made unique and important discoveries addressing these and related scientific questions. Thematically, the discoveries include insight into the orientation of water molecules on metal surfaces, the premelting of ice, the nucleation of water and alcohol vapors between surface asperities and the lubricity of these molecules when confined inside nanopores, the influence of water nucleation on adhesion to salts and silicates, and the growth and superplasticity of NaCl nanowires.

More Details

Chemical strategies for die/wafer submicron alignment and bonding

Rohwer, Lauren E.; Chu, Dahwey C.; Martin, James E.

This late-start LDRD explores chemical strategies that will enable sub-micron alignment accuracy of dies and wafers by exploiting the interfacial energies of chemical ligands. We have micropatterned commensurate features, such as 2-d arrays of micron-sized gold lines on the die to be bonded. Each gold line is functionalized with alkanethiol ligands before the die are brought into contact. The ligand interfacial energy is minimized when the lines on the die are brought into registration, due to favorable interactions between the complementary ligand tails. After registration is achieved, standard bonding techniques are used to create precision permanent bonds. We have computed the alignment forces and torque between two surfaces patterned with arrays of lines or square pads to illustrate how best to maximize the tendency to align. We also discuss complex, aperiodic patterns such as rectilinear pad assemblies, concentric circles, and spirals that point the way towards extremely precise alignment.

More Details

Use of technology assessment databases to identify the issues associated with adoption of structural health monitoring practices

Roach, D.

The goal is to create a systematic method and structure to compile, organize, and summarize SHM-related data in order to identify the level of maturity and rate of evolution of the technology, and to enable a quick, ongoing evaluation of the current state of SHM among research institutions and industry. Hundreds of technical publications and conference proceedings were read and analyzed to compile the database. Microsoft Excel was used to create a usable interface that can be filtered to compare any of the entered data fields.

More Details

Silicon carbide tritium permeation barrier for steel structural components

Buchenauer, D.A.; Kolasinski, Robert K.; Youchison, Dennis L.; Garde, J.; Holschuh, Thomas V.

Chemical vapor deposited (CVD) silicon carbide (SiC) has superior resistance to tritium permeation, even after irradiation. Prior work has shown Ultramet foam to be forgiving when bonded to substrates with large CTE differences. The technical objectives are to: (1) evaluate foams of vanadium, niobium, and molybdenum metals and SiC for CTE mitigation between a dense SiC barrier and a steel structure; (2) perform thermostructural modeling of the SiC TPB/Ultramet foam/ferritic steel architecture; (3) evaluate deuterium permeation of CVD SiC; (4) for deuterium testing, construct a new higher-temperature (>1000 °C) permeation testing system and develop improved sealing techniques; (5) fabricate a prototype tube, similar to that shown, approximately 7 cm in diameter and 35 cm long; and (6) perform tritium and hermeticity testing of the prototype tube.

More Details

Using after-action review based on automated performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow-up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three domain-specific performance metrics.

More Details

Performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83-99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three metrics. Future work will focus on extending these developments for automated assessment of teamwork.

More Details

The high current, fast, 100ns, Linear Transformer Driver (LTD) developmental project at Sandia Laboratories and HCEI

Mazarakis, Michael G.; Fowler, William E.; Matzen, M.K.; McDaniel, Dillon H.; McKee, George R.; Savage, Mark E.; Struve, Kenneth W.; Stygar, William A.; Woodworth, Joseph R.

Sandia National Laboratories, Albuquerque, N.M., USA, in collaboration with the High Current Electronic Institute (HCEI), Tomsk, Russia, is developing a new paradigm in pulsed power technology: the Linear Transformer Driver (LTD) technology. This technological approach can provide very compact devices that deliver very fast, high-current and high-voltage pulses straight out of the cavity, without any complicated pulse-forming and pulse-compression network. Through multistage inductively insulated voltage adders, the output pulse, increased in voltage amplitude, can be applied directly to the load. The load may be a vacuum electron diode, a z-pinch wire array, a gas puff, a liner, an isentropic compression load (ICE) to study material behavior under very high magnetic fields, or an inertial fusion energy (IFE) target, because the output pulse rise time and width can be easily tailored to the specific application needs. In this paper we briefly summarize the developmental work done at Sandia and HCEI during the last few years, and describe our new MYKONOS Sandia High Current LTD Laboratory. An extensive evaluation of the LTD technology is being performed at SNL and the High Current Electronic Institute (HCEI) in Tomsk, Russia. Two types of High Current LTD cavities (LTD I-II and 1-MA LTD) were constructed and tested individually and in a voltage adder configuration (1-MA cavity only). All cavities performed remarkably well, and the experimental results are in full agreement with analytical and numerical calculation predictions. A two-cavity voltage adder has been assembled and is currently undergoing evaluation. This is the first step toward the completion of the 10-cavity, 1-TW module. This MYKONOS voltage adder will be the first-ever inductive voltage adder (IVA) built with a transmission line insulated with deionized water. The LTD II cavity, renamed LTD III, will serve as a test bed for evaluating a number of different types of switches, resistors, alternative capacitor configurations, cores, and other cavity components. Experimental results will be presented at the conference and in future publications.

More Details

High-efficiency high-energy Ka source for the critically-required maximum illumination of x-ray optics on Z using Z-petawatt-driven laser-breakout-afterburner accelerated ultrarelativistic electrons LDRD

Bennett, Guy R.; Sefkow, Adam B.

Under the auspices of the Science of Extreme Environments LDRD program, a theoretical- and computational-physics study of less than two years' duration (LDRD Project 130805) was performed by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than simple direct laser irradiation of a flat foil, i.e., direct-foil irradiation (DFI). The computational studies, which are still ongoing at this writing, were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher-efficiency Kα emitter is clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application or experiment, this would imply, for example, that the system had reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.

More Details

Dynamic tensile characterization of a 4330-V steel with Kolsky bar techniques

Song, Bo S.; Connelly, Kevin C.

There has been increasing demand to understand the stress-strain response, as well as the damage and failure mechanisms, of materials under impact loading conditions. Dynamic tensile characterization has been an efficient approach to acquiring satisfactory information on mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only a precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended to dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high-speed tensile pulses in the bars. Moreover, instead of directly sandwiching a cylindrical specimen between the bars, as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen-gripping systems in Kolsky tension bar techniques often lead to disturbances in stress wave propagation in the bars, requiring appropriate interpretation of the experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show a photograph and a schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing a tensile pulse in the gun barrel and the incident bar.
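For context, the standard one-wave Kolsky-bar data reduction (textbook relations, not necessarily the exact analysis used in this paper) recovers the specimen strain rate, strain, and stress from the reflected and transmitted bar signals:

```latex
% Standard one-wave Kolsky (split Hopkinson) bar analysis.
% c_0, E, A_b: bar wave speed, elastic modulus, and cross-sectional area;
% L_s, A_s: specimen gauge length and cross-sectional area;
% \varepsilon_r, \varepsilon_t: reflected and transmitted bar strain signals.
\[
  \dot{\varepsilon}_s(t) = -\frac{2 c_0}{L_s}\,\varepsilon_r(t),
  \qquad
  \varepsilon_s(t) = -\frac{2 c_0}{L_s}\int_0^{t}\varepsilon_r(\tau)\,d\tau,
  \qquad
  \sigma_s(t) = \frac{E A_b}{A_s}\,\varepsilon_t(t).
\]
```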

More Details

Use of nanofiltration to reduce cooling tower water usage

Altman, Susan J.; Jensen, Richard P.; Everett, Randy L.

Nanofiltration (NF) can effectively treat cooling-tower water to reduce water consumption and maximize the water-use efficiency of thermoelectric power plants. A pilot is being run to verify theoretical calculations: a side stream of water from a 900 gpm cooling tower is treated by NF, with the permeate returning to the cooling tower and the concentrate being discharged. Membrane efficiency exceeds 50%. Salt rejection ranges from 77% to 97%, with higher rejection for divalent ions. The pilot has demonstrated a reduction in makeup water of almost 20% and a reduction in discharge of over 50%.
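For context, the textbook cooling-tower water balance (not taken from the pilot study) shows why treatment that permits operating at higher cycles of concentration reduces both blowdown and makeup:

```latex
% Textbook cooling-tower water balance (for context; not from the pilot study).
% M: makeup, E: evaporation, B: blowdown, D: drift, C: cycles of concentration.
\[
  M = E + B + D, \qquad
  C \approx \frac{M}{B + D}, \qquad
  B \approx \frac{E}{C - 1}\ \ (\text{drift neglected}),
\]
% so a side-stream treatment that returns low-salinity permeate to the basin
% allows a higher C, shrinking both the blowdown B and the makeup M.
```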

More Details

Co-design in ACES and exascale

The Alliance for Computing at the Extreme Scale (ACES) is a Los Alamos and Sandia collaboration encompassing not only HPC procurements and operations, but also computer science and architecture research and development. One area of focus within ACES relates to the critical technology developments needed for future high performance computing systems and the applications that would run on them, and ACES is heavily involved in the proposed DOE Exascale Initiative. The proposed Exascale Initiative emphasizes the need for co-design, which is the three-way collaborative and concurrent design of HPC hardware, software, and the applications themselves. Transformational changes will occur not only in HPC hardware but also in the applications space, and taken together these will require transformational changes in the overall software layers supporting the programming models, tools, runtimes, file systems, and operating systems. Co-design involving all three areas of hardware, software, and applications will be the key to success. This talk will outline key aspects of the Exascale Initiative and its emphasis on co-design. It will provide some examples from LANL and Sandia experiences in co-design, including aspects of the innovative Roadrunner architecture and software, an ACES-Cray project studying advanced interconnects within Cray, and Sandia's work in computer system simulators and mini-applications.

More Details

Clustering of graphs with multiple edge types

Pinar, Ali P.; Rocklin, Matthew D.

We study clustering on graphs with multiple edge types. Our main motivation is that similarities between objects can be measured by many different metrics. For instance, the similarity between two papers can be based on common authors, where they were published, keyword similarity, citations, etc. As such, a graph with multiple edge types is a more accurate model of the similarities between objects. Each edge type/metric provides only partial information about the data; recovering the full information requires aggregating all the similarity metrics. Clustering becomes much more challenging in this context, since in addition to the difficulties of the traditional clustering problem, we have to deal with a space of clusterings. We generalize the concept of clustering in single-edge graphs to multi-edged graphs and investigate questions such as: Can we find a clustering that remains good even if we change the relative weights of the metrics? How can we describe the space of clusterings efficiently? Can we find unexpected clusterings (a good clustering that is distant from all given clusterings)? If given the ground-truth clustering, can we recover how the weights for the edge types were aggregated?
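The paper's own formulations are not reproduced here; a minimal illustration of the aggregation question is to combine per-metric similarity matrices with chosen weights and hand the result to an off-the-shelf clustering routine (the weights, data, and use of spectral clustering below are toy assumptions, not the cited work's method).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_multi_edge(adjacencies, weights, n_clusters=3, seed=0):
    """Cluster a graph whose similarity is measured by several edge types.

    adjacencies : list of symmetric (n, n) similarity matrices, one per edge type
    weights     : per-edge-type aggregation weights
    (Simple weighted aggregation plus spectral clustering; illustrative only.)
    """
    combined = sum(w * A for w, A in zip(weights, adjacencies))
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               random_state=seed)
    return model.fit_predict(combined)

# toy usage: two edge types (e.g., co-authorship and keyword similarity)
rng = np.random.default_rng(0)
A1 = rng.random((30, 30)); A1 = (A1 + A1.T) / 2
A2 = rng.random((30, 30)); A2 = (A2 + A2.T) / 2
labels = cluster_multi_edge([A1, A2], weights=[0.7, 0.3], n_clusters=3)
```

Sweeping the weights and checking how much the resulting labels change is one simple way to probe whether a clustering "remains good" as the relative importance of the metrics shifts.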

More Details

Nexus of technologies: international safeguards, physical protection and arms control

Jordan, Sabina E.; Blair, Dianna S.; Smartt, Heidi A.

New technologies have been, and continue to be, developed for safeguards, arms control, and physical protection. Application spaces and technical requirements are evolving, and overlaps among these areas are developing. Lessons learned from the IAEA's extensive experience could benefit other communities, and technologies developed for other applications may benefit safeguards, offering inherent cost benefits and improvements in procurement security processes.

More Details
Results 70001–70200 of 96,771