Sandia National Laboratories (SNL) has developed a number of security risk assessment methodologies (RAMs) for various infrastructures, including dams, water systems, electrical transmission, chemical facilities, and communities. All of these RAMs consider potential malevolent attacks from different threats, identify possible undesired events and their consequences, and determine potential adversary success. They focus on assessing these infrastructures to identify security weaknesses and to develop measures that help mitigate the consequences of possible adversary attacks. This paper focuses on RAM-C, the security risk assessment methodology for communities. There are many reasons for a community to conduct a security risk assessment: it provides a way to identify vulnerabilities, helps a community be better prepared in the event of an adversary attack, provides justification for resources to address identified vulnerabilities, and supports planning for future projects. RAM-C provides a systematic, risk-based approach usable by public safety and emergency planners to determine relative risk, and it provides useful information for making security risk decisions. RAM-C consists of a number of steps: a screening step that selects facilities based on a documented process; characterization of the community and facilities; determination of the severity of consequences for identified undesired events; determination of the community protection goals and definition of the threat; definition of existing baseline safeguard measures; analysis of protection system effectiveness against identified scenarios; determination of a relative risk; and finally a decision on whether that risk is too high. If the risk is too high, possible countermeasures and mitigation measures are considered. RAM-C has been used by a number of communities within the United States, and these assessments have produced a range of results. Some communities have been surprised by the vulnerabilities identified; others have recognized the need to test procedures and responses to many different situations, the need for redundancy in certain systems, and which people within their community are valuable resources. The RAM-C process is a systematic way to assess vulnerabilities and make decisions based on risk, and it has provided valuable information to community planners.
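As an illustration of the kind of relative-risk roll-up such an assessment produces, the sketch below assumes the multiplicative form R = P_A x (1 - P_E) x C used in several Sandia RAM methodologies (an assumption here, not a statement of RAM-C's exact formula); the scenarios, values, and decision threshold are hypothetical.

```python
# Minimal sketch of a RAM-style relative-risk roll-up (illustrative only).
# Assumes the multiplicative form R = P_A * (1 - P_E) * C, where
#   P_A = likelihood of adversary attack,
#   P_E = protection-system effectiveness,
#   C   = normalized consequence severity (0..1).
# Scenario names, values, and the decision threshold are hypothetical.

scenarios = {
    "vehicle bomb at water treatment plant": {"P_A": 0.3, "P_E": 0.60, "C": 0.9},
    "intrusion at emergency operations center": {"P_A": 0.5, "P_E": 0.85, "C": 0.6},
    "cyber attack on dispatch system": {"P_A": 0.7, "P_E": 0.40, "C": 0.5},
}

def relative_risk(p_attack, p_effectiveness, consequence):
    """Risk is high when attack is likely, safeguards are weak, and consequences are severe."""
    return p_attack * (1.0 - p_effectiveness) * consequence

for name, s in scenarios.items():
    r = relative_risk(s["P_A"], s["P_E"], s["C"])
    decision = "consider countermeasures" if r > 0.15 else "risk accepted"
    print(f"{name:42s} R = {r:.3f}  -> {decision}")
```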
Measurements of the single-particle density of states (DOS) near T = 0 K in Si:B are used to construct an energy-density phase diagram of Coulomb interactions across the critical density n{sub c} of the metal-insulator transition. Insulators and metals are found to be distinguishable only below a phase boundary, set by the Coulomb energy, that depends on |n/n{sub c}-1|. Above this boundary is a mixed state where metals and insulators equidistant from n{sub c} cannot be distinguished from their DOS structure. The data imply a diverging screening radius at n{sub c}, which may signal an interaction-driven thermodynamic state change.
We are extending the existing features of Aspen, a powerful economic modeling tool, and introducing new features to simulate the role of confidence in economic activity. The new model is built from a collection of autonomous agents that represent households, firms, and other relevant entities like financial exchanges and governmental authorities. We simultaneously model several interrelated markets, including those for labor, products, stocks, and bonds. We also model economic tradeoffs, such as decisions of households and firms regarding spending, savings, and investment. In this paper, we review some of the basic principles and model components and describe our approach and development strategy for emulating consumer, investor, and business confidence. The model of confidence is explored within the context of economic disruptions, such as those resulting from disasters or terrorist events.
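To make the agent-based structure concrete, the following is a minimal sketch of confidence-driven household spending; it is illustrative only and does not reflect Aspen's actual agents, update rules, or parameters.

```python
# Minimal agent-based sketch of confidence-driven spending (illustrative only;
# not the actual Aspen implementation). Household confidence falls after a
# disruption and recovers gradually; spending is a confidence-weighted share
# of income. All parameters and update rules are hypothetical.
import random

class Household:
    def __init__(self, income):
        self.income = income
        self.confidence = 1.0          # 1.0 = fully confident

    def update_confidence(self, disruption):
        if disruption:
            self.confidence *= 0.6     # sharp loss of confidence after the event
        else:
            self.confidence += 0.1 * (1.0 - self.confidence)  # slow recovery

    def spend(self):
        base_share = 0.8               # propensity to consume when fully confident
        return self.income * base_share * self.confidence

random.seed(0)
households = [Household(income=random.uniform(30_000, 90_000)) for _ in range(1000)]
for period in range(12):
    disruption = (period == 3)         # a single disruptive event in period 3
    total = 0.0
    for h in households:
        h.update_confidence(disruption)
        total += h.spend()
    print(f"period {period:2d}  aggregate spending = {total / 1e6:.1f} M")
```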
The authors provide a detailed overview of an ongoing, multinational test program that is developing aerosol data for spent fuel sabotage scenarios involving spent fuel transport and storage casks. Experiments are being performed to quantify the aerosolized materials and volatilized fission products generated from actual spent fuel and surrogate-material test rods due to impact by a high-energy-density device. The program participants, in the United States, Germany, France, and the United Kingdom, are part of the international Working Group for Sabotage Concerns of Transport and Storage Casks (WGSTSC) and have strongly supported and coordinated this research program. Sandia National Laboratories has the lead role for conducting this research program; test program support is provided by both the US Department of Energy and the US Nuclear Regulatory Commission. The authors provide a summary of the overall, multiphase test design and a description of all explosive containment and aerosol collection test components used. They focus on the recently initiated tests on 'surrogate' spent fuel (unirradiated depleted uranium oxide) and the forthcoming tests on actual spent fuel, and they briefly summarize the corresponding results from the completed surrogate tests that used non-radioactive, sintered cerium oxide ceramic pellets in test rods.
We conducted sets of experiments with three diameters of concrete targets that had an average compressive strength of 23 MPa (3.3 ksi) and 76.2-mm-diameter, 3.0-caliber-radius-head, 13-kg projectiles. The three target diameters were D = 1.83, 1.37, and 0.91 m, so the ratios of target diameter to projectile diameter were D/d = 24, 18, and 12. The ogive-nose projectiles were machined from 4340 R{sub c} 45 steel and designed to contain a single-channel acceleration data recorder. Thus, we recorded acceleration during launch and deceleration during penetration. An 83-mm-diameter powder gun launched the 13-kg projectiles to striking velocities between 160 and 340 m/s. Measured penetration depths and deceleration-time data were analyzed with a previously published model. We measured negligible changes in penetration depth and only small decreases in deceleration magnitude as the target diameters were reduced.
As part of meeting the GPRA (Government Performance and Results Act) requirements and to provide input to Sandia's annual Performance Evaluation Assessment Report (PEAR) to the National Nuclear Security Administration in FY2004, a 14-member external review committee chaired by Dr. Alvin Trivelpiece was convened by Sandia National Laboratories (SNL) on May 4-6, 2004 to review Sandia National Laboratories' Pulsed Power Programs. The scope of the review included activities in high energy density physics (HEDP), inertial confinement fusion (ICF), radiation/weapon physics, the petawatt laser initiative (PW) and fast ignition, equation-of-state studies, radiation effects science and lethality, x-ray radiography, ZR development, basic research and pulsed power technology research and development, as well as electromagnetics and work for others. In his charge to the Committee, Dr. Jeffrey P. Quintenz, Director of Pulsed Power Sciences (Org. 1600), asked that the evaluation and feedback be based on three criteria: (1) quality of technical activities in science, technology, and engineering, (2) programmatic performance, management, and planning, and (3) relevance to national needs and agency missions. In addition, the director posed specific programmatic questions. The accompanying report, produced as a SAND document, is the report of the Committee's findings.
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. Each cell in the table represents a different arrangement of strengths and weaknesses, and those arrangements shift gradually as one moves through the table, each cell being optimal for a particular situation. The intention of this report is to enable informed use of the methods so that the method chosen is optimal for the situation at hand. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. The matrix, with type names in the cells, is introduced in Table 2. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though often we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough in meaning that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for this report: what a 'method' is and where it fits. In Section 3 we present background for our classification scheme: what other schemes we have found, the fundamental nature of methods, and their necessary incompleteness. In Section 4 we present our classification scheme in the form of a matrix, then we present an analogy that should aid understanding of the scheme, concluding with an explanation of the two dimensions and the nine types in our scheme. In Section 5 we present examples of each of our classification types. In Section 6 we present conclusions.
Critical infrastructures are formed by a large number of components that interact within complex networks. As a rule, infrastructures contain strong feedbacks, either explicitly through the action of hardware/software control or implicitly through the action/reaction of people. Individual infrastructures influence others and grow, adapt, and thus evolve in response to their multifaceted physical, economic, cultural, and political environments. Simply put, critical infrastructures are complex adaptive systems. In the Advanced Modeling and Techniques Investigations (AMTI) subgroup of the National Infrastructure Simulation and Analysis Center (NISAC), we are studying infrastructures as complex adaptive systems. In one of AMTI's efforts, we are focusing on cascading failure as it can occur, with devastating results, within and between infrastructures. Over the past year we have synthesized and extended the large variety of abstract cascade models developed in the field of complexity science and have started to apply them to specific infrastructures that might experience cascading failure. In this report we introduce our comprehensive model, Polynet, which simulates cascading failure over a wide range of network topologies, interaction rules, and adaptive responses as well as multiple interacting and growing networks. We first demonstrate Polynet for the classical Bak, Tang, and Wiesenfeld (BTW) sand-pile in several network topologies. We then apply Polynet to two very different critical infrastructures: the high-voltage electric power transmission system, which relays electricity from generators to groups of distribution-level consumers, and Fedwire, which is a Federal Reserve service for sending large-value payments between banks and other large financial institutions. For these two applications, we tailor interaction rules to represent appropriate unit behavior and consider the influence of random transactions within two stylized networks: a regular homogeneous array and a heterogeneous scale-free (fractal) network. For the stylized electric power grid, our initial simulations demonstrate that the addition of geographically unrestricted random transactions can eventually push a grid to cascading failure, thus supporting the hypothesis that the actions of unrestrained power markets (without proper security coordination on market actions) can undermine large-scale system stability. We also find that network topology greatly influences system robustness. Homogeneous, 'fish-net'-like networks can withstand many more transaction perturbations before cascading than can scale-free networks. Interestingly, when the homogeneous network finally cascades, it tends to fail in its entirety, while the scale-free network tends to compartmentalize failure and thus leads to smaller, more restricted outages. In the case of stylized Fedwire, initial simulations show that as banks adaptively set their individual reserves in response to random transactions, the ratio of the total volume of transactions to individual reserves, or 'turnover ratio', increases with increasing volume. The removal of a bank from interaction within the network then creates a cascade, its speed of propagation increasing as the turnover ratio increases. We also find that propagation is accelerated by patterned transactions (as expected to occur within real markets) and, in scale-free networks, by the 'attack' of the most highly connected bank.
These results suggest that the time scale for intervention by the Federal Reserve to divert a cascade in Fedwire may be quite short. Ongoing work in our cascade analysis effort is building on both these specific stylized applications to enhance their fidelity as well as embracing new applications. We are implementing markets and additional network interactions (e.g., social, telecommunication, information gathering, and control) that can impose structured drives (perturbations) comparable to those seen in real systems. Understanding the interaction of multiple networks, their interdependencies, and in particular, the underlying mechanisms for their growth/evolution is paramount. With this understanding, appropriate public policy can be identified to guide the evolution of present infrastructures to withstand the demands and threats of the future.
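For readers unfamiliar with the abstract cascade models referenced above, the following is a minimal sketch of the classical BTW sandpile on a regular 2D grid; it is illustrative only and is not the Polynet implementation.

```python
# Minimal Bak-Tang-Wiesenfeld (BTW) sandpile on a regular 2D grid (illustrative;
# not the Polynet code). Grains are added at random sites; a site topples when
# it reaches a threshold, shedding one grain to each neighbor. Grains that fall
# off the edge are lost, which is what lets the system reach a steady state.
import random

N, THRESHOLD = 20, 4
grid = [[0] * N for _ in range(N)]

def topple(i, j, avalanche):
    """Relax site (i, j) and propagate the cascade; return the set of toppled sites."""
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < THRESHOLD:
            continue
        grid[x][y] -= 4
        avalanche.add((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:     # edge sites dissipate grains
                grid[nx][ny] += 1
                if grid[nx][ny] >= THRESHOLD:
                    stack.append((nx, ny))
    return avalanche

random.seed(1)
sizes = []
for step in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    if grid[i][j] >= THRESHOLD:
        sizes.append(len(topple(i, j, set())))
print("largest avalanche touched", max(sizes), "of", N * N, "sites")
```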
High-energy ion tracks (374 MeV Au{sup 26+}) in thin films were examined with transmission electron microscopy to investigate nanopore formation. Tracks in quartz and mica showed diffraction contrast. Tracks in sapphire and mica showed craters formed at the positions of ion incidence and exit, with a lower-density track connecting them. Direct nanopore formation by ions (without chemical etching) would appear to require film thicknesses less than 10 nm.
The purpose of this presentation is to provide an overview of the science-based materials modeling activities at Sandia National Laboratories, California. The main mission driver for the work is the development of predictive modeling and simulation capabilities leveraging high performance computing software and hardware. The presentation will highlight research accomplishments in several specific topics of current interest. Sandia/California has been engaged in the development of high performance computing based predictive modeling and simulation capabilities in support of the Science-Based Stockpile Stewardship Program of the U.S. Department of Energy. Of particular interest is the development of constitutive models that can efficiently and accurately predict post-failure material response and load redistribution in systems and components. Fracture and failure are inherently multi-scale, and our philosophy is to include the required physics in our models at all appropriate scales. We approach the problems from the continuum point of view and intend to provide continuum models that include dominant subscale mechanisms. Moreover, numerical algorithms are needed to allow implementation of physical models in high performance computing codes such that large-scale modeling and simulation can be conducted. Other drivers of our effort include the emerging application of micro- and nano-systems and the increasing interest in biotechnology. In this presentation, we will present our research in fracture and failure modeling, atomistic-continuum coupling code development, exploration of microstructure-property relationships, and advancement of general continuum theories. Where appropriate, examples will be given to demonstrate the utility of the models.
- HF/DFT are one-particle approximations to the Schrödinger equation
- The one-particle, mean-field approaches are what lead to the nonlinear eigenvalue problem
- DFT includes a parameterized XC functional that reproduces many-electron effects
  - Very accurate ground-state structures and energies
  - Problematic for excited states and band gaps
We are investigating the use of face-to-face porphyrin (FTF) materials as potential oxygen reduction catalysts in fuel cells. The FTF materials were popularized by Anson and Collman and have the interesting property that varying the spacing between the porphyrin rings changes the chemistry they catalyze from a two-electron reduction of oxygen to a four-electron reduction of oxygen. Our goal is to understand how changes in the structure of the FTF materials lead to either two-electron or four-electron reductions. This understanding of FTF catalysis is important because of the potential use of these materials as fuel cell electrocatalysts. Furthermore, the laccase family of enzymes, which has been proposed as an electrocatalytic enzyme in biofuel cell applications, also has family members that display either two-electron or four-electron reduction of oxygen, and we believe that an understanding of the structure-function relationships in the FTF materials may lead to an understanding of the behavior of laccase and other enzymes. We will report the results of B3LYP density functional theory studies with implicit solvent models of the reduction of oxygen in several members of the cobalt FTF family.
This paper investigates the nonlinear behavior of coupled lasers. A composite-cavity-mode approach and a class-B description of the active medium are used to describe nonlinearities associated with population dynamics and optical coupling. The multimode equations are studied using bifurcation analysis to identify regions of stable locking, periodic oscillations, and complicated dynamics in the parameter space of coupling-mirror transmission T and normalized cavity-length mismatch dL/{lambda}. We further investigate the evolution of the key bifurcations with the linewidth enhancement factor {alpha}. In particular, our analysis reveals the formation of a gap in the locking band that is gradually occupied by instabilities. We also investigate the effects of the cavity length on the chaotic dynamics.
Nature combines hard and soft materials, often in hierarchical architectures, to get synergistic, optimized properties with proven, complex functionalities. Emulating such natural designs in robust engineering materials using efficient processing approaches represents a fundamental challenge to materials chemists. This presentation will review progress on understanding so-called 'evaporation-induced silica/surfactant self-assembly' (EISA) as a simple, general means to prepare porous thin-film nanostructures. Such porous materials are of interest for membranes, low-dielectric-constant (low-k) insulators, and even 'nano-valves' that open and close in response to an external stimulus. EISA can also be used to simultaneously organize hydrophilic and hydrophobic precursors into hybrid nanocomposites that are optically or chemically polymerizable, patternable, or adjustable. In constructing composite structures, a significant challenge is how to controllably organize or define multiple materials on multiple length scales. To address this challenge, we have combined sol-gel chemistry with molecular self-assembly in several evaporation-driven processing procedures collectively referred to as evaporation-induced self-assembly (EISA). EISA starts with a silica/water/surfactant system diluted with ethanol to create a homogeneous solution. We rely on ethanol and water evaporation during dip-coating (or other coating methods) to progressively concentrate surfactant and silica in the depositing film, driving micelle formation and subsequent continuous self-assembly of silica/surfactant thin film mesophases. One of the crucial aspects of this process, in terms of the sol-gel chemistry, is to work under conditions where the condensation rate of the hydrophilic silicic acid precursors (Si-OH) is minimized. The idea is to avoid gelation that would kinetically trap the system at an intermediate non-equilibrium state. We want the structure to self-assemble and then solidify, with the addition of a siloxane condensation catalyst or by heating, to form the desired mesostructured product. Operating at an acidic pH (pH = 2) minimizes the condensation rate of silanols to form siloxanes (Si-O-Si). In addition, hydrogen bonding and electrostatic interactions between silanols and hydrophilic surfactant head groups can further reduce the condensation rate. These combined factors maintain the depositing film in a fluid state, even beyond the point where ethanol and water are largely evaporated. This allows the deposited film to be self-healing and enables the use of virtually any evaporation-driven process (spin-coating, inkjet printing, or aerosol processing) to create ordered nanostructured films, patterns, or particles.
Vapor detection of explosives continues to be a technological basis for security applications. This study began experimental work to measure the chemical emanation rates of pure explosive materials as a basis for determining emanation rates of security threats containing explosives. Sublimation rates for TNT were determined with thermogravimetric analysis using two different techniques. Data were compared with other literature values to provide sublimation rates from 25 to 70 C. The enthalpy of sublimation for the combined data was found to be 115 kJ/mol, which corresponds well with previously reported data from vapor pressure determinations. A simple Gaussian atmospheric dispersion model was used to estimate downrange concentrations based on continuous, steady-state conditions at 20, 45 and 62 C for a nominal exposed block of TNT under low wind conditions. Recommendations are made for extension of the experimental vapor emanation rate determinations and development of turbulent-flow computational fluid dynamics based atmospheric dispersion estimates of standoff vapor concentrations.
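For orientation, a minimal steady-state Gaussian plume estimate of the kind described might look like the sketch below; the emission rate, wind speed, and dispersion coefficients are assumptions chosen for illustration and are not the values used in the study.

```python
# Sketch of a steady-state Gaussian plume estimate of downwind vapor
# concentration from a continuously emanating ground-level source, evaluated
# at a ground-level centerline receptor:
#   C(x) = Q / (pi * sigma_y(x) * sigma_z(x) * u)
# The emission rate, wind speed, and Briggs-type dispersion coefficients below
# are hypothetical and only illustrate the form of the calculation.
import math

Q = 1.0e-9          # hypothetical emission rate, g/s (strongly temperature dependent)
u = 1.0             # wind speed, m/s (low-wind case)

def sigma_y(x):     # approximate Briggs-type horizontal dispersion, m
    return 0.08 * x * (1 + 0.0001 * x) ** -0.5

def sigma_z(x):     # approximate Briggs-type vertical dispersion, m
    return 0.06 * x * (1 + 0.0015 * x) ** -0.5

for x in (10, 50, 100, 500):                          # downwind distance, m
    c = Q / (math.pi * sigma_y(x) * sigma_z(x) * u)   # concentration, g/m^3
    print(f"x = {x:4d} m   C ~ {c:.2e} g/m^3")
```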
A quasi-spherical z-pinch may directly compress foam or deuterium and tritium in three dimensions as opposed to a cylindrical z-pinch, which compresses an internal load in two dimensions only. Because of compression in three dimensions the quasi-spherical z-pinch is more efficient at doing pdV work on an internal fluid than a cylindrical pinch. Designs of quasi-spherical z-pinch loads for the 28 MA 100 ns driver ZR, results from zero-dimensional (0D) circuit models of quasi-spherical implosions, and results from 1D hydrodynamic simulations of quasi-spherical implosions heating internal fluids will be presented. Applications of the quasi-spherical z-pinch implosions include a high radiation temperature source for radiation driven experiments, a source of neutrons for treating radioactive waste, and a source of fusion energy for a power generator.
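For context, the cylindrical thin-shell ("slug") equation of motion underlying standard 0D implosion models is sketched below; the quasi-spherical geometry and the actual ZR drive differ from this, and all parameter values here are hypothetical.

```python
# Sketch of a standard 0D thin-shell ("slug") implosion model for a cylindrical
# z-pinch, the kind of calculation that quasi-spherical 0D circuit models
# generalize. Equation of motion: m * dv/dt = -mu0 * I(t)^2 * L / (4*pi*r).
# Drive current, mass, and dimensions are hypothetical, not ZR values.
import math

MU0 = 4e-7 * math.pi
I_peak, t_rise = 20e6, 100e-9      # A, s (illustrative drive)
L, r0, m = 0.02, 0.02, 2e-6        # shell length (m), initial radius (m), mass (kg)

def current(t):                    # simple sin^2 current rise, held at peak afterward
    return I_peak * math.sin(0.5 * math.pi * min(t, t_rise) / t_rise) ** 2

r, v, t, dt = r0, 0.0, 0.0, 1e-11
while r > 0.1 * r0:                # integrate until 10:1 radial convergence
    a = -MU0 * current(t) ** 2 * L / (4 * math.pi * r * m)
    v += a * dt
    r += v * dt
    t += dt
print(f"10:1 convergence at t = {t * 1e9:.1f} ns, implosion velocity = {abs(v) / 1e3:.0f} km/s")
```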
Addition of fullerenes (C60 or buckyballs) to a linear polymer has been found to eliminate dewetting when a thin (~50 nm) film is exposed to solvent vapor. Based on neutron reflectivity measurements, it is found that the fullerenes form a coherent layer approximately 2 nm thick at the substrate/polymer film interface during the spin-coating process. The thickness and relative fullerene concentration (~29 vol%) are not altered during solvent vapor annealing, and it is thought this layer forms a solid-like buffer shielding the adverse van der Waals forces promoted by the underlying substrate. Several polymer films produced by spin- or spray-coating were tested on both silicon wafers and live surface acoustic wave sensors, demonstrating that fullerenes stabilize many different polymer types prepared by different procedures and on various surfaces. Further, the fullerenes drastically improve sensor performance since dewetted films produce a sensor that is effectively inoperable.
A general, approximate expression is described that can be used to predict the thermophoretic force on a free-molecular, motionless, spherical particle suspended in a quiescent gas with a temperature gradient. The thermophoretic force is equal to the product of an order-unity coefficient, the gas-phase translational heat flux, the particle cross-sectional area, and the inverse of the mean molecular speed. Numerical simulations are used to test the accuracy of this expression for monatomic gases, polyatomic gases, and mixtures thereof. Both continuum and noncontinuum conditions are examined; in particular, the effects of low pressure, wall proximity, and high heat flux are investigated. The direct simulation Monte Carlo (DSMC) method is used to calculate the local molecular velocity distribution, and the force-Green's-function method is used to calculate the thermophoretic force. The approximate expression is found to predict the calculated thermophoretic force to within 10% for all cases examined.
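Written out, the expression described above takes the following form (the symbol names are ours; the sign convention places the force along the heat-flux direction, i.e., from hot to cold):

\[
\mathbf{F}_{\mathrm{th}} \approx \phi \,\frac{\pi R^{2}}{\bar{c}}\,\mathbf{q}_{\mathrm{tr}},
\qquad
\bar{c} = \sqrt{\frac{8 k_{B} T}{\pi m}},
\]

where \(\phi\) is the order-unity coefficient, \(\mathbf{q}_{\mathrm{tr}}\) is the gas-phase translational heat flux, \(\pi R^{2}\) is the particle cross-sectional area, and \(\bar{c}\) is the mean molecular speed of the gas.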
The regulatory compliance determination for the Waste Isolation Pilot Plant includes the consideration of room closure. Elements of the geomechanical processes include salt creep, gas generation and mechanical deformation of the waste residing in the rooms. The WIPP was certified as complying with regulatory requirements based in part on the implementation of room closure and material models for the waste. Since the WIPP began receiving waste in 1999, waste packages have been identified that are appreciably more robust than the 55-gallon drums characterized for the initial calculations. The pipe overpack comprises one such waste package. This report develops material model parameters for the pipe overpack containers by using axisymmetrical finite element models. Known material properties and structural dimensions allow well constrained models to be completed for uniaxial, triaxial, and hydrostatic compression of the pipe overpack waste package. These analyses show that the pipe overpack waste package is far more rigid than the originally certified drum. The model parameters developed in this report are used subsequently to evaluate the implications to performance assessment calculations.
MatSeis's infrasound analysis tool, Infra Tool, uses frequency-slowness processing to deconstruct the array data into three outputs per processing step: correlation, azimuth, and slowness. Until now, infrasound signal detection was accomplished manually by an experienced analyst trained to recognize a pattern in the signal-processing outputs. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above the background value. Together, these two conditions describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with the associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file lists the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency-slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg), and trace velocity (km/s). As an example, a ground truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event is known as the Watusi chemical explosion, which occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array locations, the array-to-event distance was computed to be approximately 890 km. This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively, and the detection window correlated closely with the theoretical stratospheric arrival time. Further testing will be required to tune the detection threshold parameters for different types of infrasound events.
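A minimal sketch of this flat-azimuth, high-correlation detection logic is given below; the window length and thresholds are hypothetical, and the production Infra Tool detector differs in detail.

```python
# Sketch of the detection logic described above (illustrative only): declare a
# detection when the azimuth trend over a sliding window is flat (near-zero
# least-squares slope, low variance) while correlation is elevated.
import numpy as np

def detect(times, azimuth, correlation, win=10,
           max_slope=0.5, max_az_var=4.0, min_corr=0.5):
    """Return indices of window centers where an infrasound arrival is declared."""
    hits = []
    for i in range(len(times) - win):
        t = times[i:i + win]
        az = azimuth[i:i + win]
        corr = correlation[i:i + win]
        slope = np.polyfit(t - t[0], az, 1)[0]        # inverse-slope style trend fit
        if (abs(slope) < max_slope and np.var(az) < max_az_var
                and corr.mean() > min_corr):
            hits.append(i + win // 2)
    return hits

# Synthetic example: 200 processing steps with a correlated, flat-azimuth
# arrival between steps 80 and 120.
rng = np.random.default_rng(0)
t = np.arange(200.0)
az = rng.uniform(0.0, 360.0, 200)
corr = rng.uniform(0.0, 0.3, 200)
az[80:120] = 282.0 + rng.normal(0.0, 1.0, 40)
corr[80:120] = 0.8
print("detections near steps:", detect(t, az, corr)[:5], "...")
```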
The LIGA process has the ability to fabricate very precise, high-aspect-ratio mesoscale structures with microscale features [1]. The process consists of multiple steps before a final part is produced. Materials native to the LIGA process include metals and photoresists. These structures are routinely measured for quality control and process improvement. However, metrology of LIGA structures is challenging because of their high aspect ratio and edge topography. For the scale of LIGA structures, a programmable optical microscope is well suited for lateral (XY) critical dimension measurements. Using grayscale gradient image processing with sub-pixel interpolation, edges are detected and measurements are performed. As with any measurement, understanding measurement uncertainty is necessary so that appropriate conclusions are drawn from the data. Therefore, the abilities of the inspection tool and the obstacles presented by the structures under inspection should be well understood so that precision may be quantified. This report presents an inspection method for LIGA microstructures, including a comprehensive assessment of the uncertainty for each inspection scenario.
This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
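For illustration of the underlying technique (not the LHS code's interface, input format, or random number streams), a generic Latin hypercube sample on the unit hypercube can be generated as follows:

```python
# Generic Latin hypercube sampling sketch in NumPy (illustrates the technique
# only; it does not reproduce the LHS code's input format or its
# correlation-control features).
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Return an (n_samples, n_vars) array of LHS points on the unit hypercube."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_samples, n_vars))            # jitter within each stratum
    strata = np.arange(n_samples)[:, None]               # one stratum per sample per variable
    samples = (strata + u) / n_samples
    for j in range(n_vars):                              # decouple the columns
        samples[:, j] = rng.permutation(samples[:, j])
    return samples

pts = latin_hypercube(10, 3, seed=42)
print(pts)
# Each column has exactly one point per decile; map the columns through the
# inverse CDFs of the desired input distributions to obtain the final sample.
```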
A new Z-pinch driver is being planned by Sandia National Laboratories (SNL) that will provide up to 16 MJ of X-ray radiation. Two load designs are being considered. One is a double Z-pinch configuration, with each load providing 7 MJ radiation. The other is a single Z-pinch configuration that produces 16 MJ. Both configurations require 100 to 120 ns implosion times, and radiation pulse widths of less than 10 ns. These requirements translate into two 40 MA drivers for the double-sided load, and a 60 MA driver for the single-load configuration. The design philosophy for this machine is to work from the load out. Radiation requirements determine the current, pulsewidth, and load-inductance requirements. These parameters set the drive wave-form and insulator voltage, which in turn determine the insulator-stack design. The goal is to choose a drive wave-form that meets the load requirements while optimizing efficiency and minimizing breakdown risk.
The purpose of this report is to provide guidance, from the open literature, on developing a set of 'measures of effectiveness' (MoEs) and using them to evaluate a system. Approximately twenty papers and books are reviewed. The papers that provide the clearest understanding of MoEs are identified (Sproles [46], [48], [50]). The seminal work on value-focused thinking (VFT), an approach that bridges the gap between MoEs and a system, is also identified (Keeney [25]). Finally, three examples of the use of VFT in evaluating a system based on MoEs are identified (Jackson et al. [21], Kerchner & Deckro [27], and Doyle et al. [14]). Notes are provided on the papers and books to pursue in order to take this study to the next level of detail.
This test plan describes the testing strategy for the ITS (Integrated-TIGER-Series) suite of codes. The processes and procedures for performing both verification and validation tests are described. ITS Version 5.0 was developed under the NNSA's ASC program and supports Sandia's stockpile stewardship mission.
Parabolic trough power systems that utilize concentrated solar energy to generate electricity are a proven technology. Industry and laboratory research efforts are now focusing on integration of thermal energy storage as a viable means to enhance dispatchability of concentrated solar energy. One option to significantly reduce costs is to use thermocline storage systems, low-cost filler materials as the primary thermal storage medium, and molten nitrate salts as the direct heat transfer fluid. Prior thermocline evaluations and thermal cycling tests at the Sandia National Laboratories' National Solar Thermal Test Facility identified quartzite rock and silica sand as potential filler materials. An expanded series of isothermal and thermal cycling experiments were planned and implemented to extend those studies in order to demonstrate the durability of these filler materials in molten nitrate salts over a range of operating temperatures for extended timeframes. Upon test completion, careful analyses of filler material samples, as well as the molten salt, were conducted to assess long-term durability and degradation mechanisms in these test conditions. Analysis results demonstrate that the quartzite rock and silica sand appear able to withstand the molten salt environment quite well. No significant deterioration that would impact the performance or operability of a thermocline thermal energy storage system was evident. Therefore, additional studies of the thermocline concept can continue armed with confidence that appropriate filler materials have been identified for the intended application.
Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response metrics of interest. In particular, they compute approximate response function distribution statistics (probability, reliability, and response levels) based on specified input random variable probability distributions. In this paper, a number of algorithmic variations are explored for both the forward reliability analysis of computing probabilities for specified response levels (the reliability index approach, RIA) and the inverse reliability analysis of computing response levels for specified probabilities (the performance measure approach, PMA). These variations include limit state linearizations, probability integrations, warm starting, and optimization algorithm selections. The resulting RIA/PMA reliability algorithms for uncertainty quantification are then employed within bi-level and sequential reliability-based design optimization approaches. The relative performance of these uncertainty quantification and reliability-based design optimization algorithms is presented for a number of computational experiments performed using the DAKOTA/UQ software.
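As a minimal illustration of the forward (RIA) and inverse (PMA) analyses, consider a linear limit state with independent normal variables, for which the first-order result is exact; the numbers are hypothetical, and general nonlinear limit states require the MPP searches and algorithmic variations discussed in the paper.

```python
# Minimal sketch of forward (RIA) and inverse (PMA) reliability calculations
# for a linear limit state g = R - S with independent normal R (capacity) and
# S (demand). For this case the first-order result is exact:
#   beta = (muR - muS) / sqrt(sR^2 + sS^2).
# Values are hypothetical; this is not the DAKOTA/UQ implementation.
from math import sqrt
from statistics import NormalDist

muR, sR = 10.0, 1.0     # capacity mean, standard deviation
muS, sS = 6.0, 1.5      # demand mean, standard deviation
Z = NormalDist()

# RIA: probability that g <= 0 (failure) for the specified response level g = 0.
beta = (muR - muS) / sqrt(sR**2 + sS**2)
p_fail = Z.cdf(-beta)
print(f"RIA: beta = {beta:.3f}, P[g <= 0] = {p_fail:.2e}")

# PMA: response level z such that P[g <= z] equals a target probability.
p_target = 1.0e-3
beta_target = -Z.inv_cdf(p_target)
mu_g, s_g = muR - muS, sqrt(sR**2 + sS**2)
z_level = mu_g - beta_target * s_g
print(f"PMA: response level with P[g <= z] = {p_target:.0e} is z = {z_level:.3f}")
```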
A study was undertaken to validate the 'capability' computing needs of DOE's Office of Science. More than seventy members of the community provided information about algorithmic scaling laws, so that the impact of having access to Petascale capability computers could be assessed. We have concluded that the Office of Science community has described credible needs for Petascale capability computing.
The energies of adsorbed H and D recoiled from tungsten surfaces during bombardment with 3 keV Ne{sup +} at oblique angles of incidence were measured. The energy spectra show structure that extends above the elastic recoil energy. We find that the high-energy structure results from multiple collisions, namely recoil of a H isotope followed by scattering from an adjacent W atom, and vice versa. This scattering assisted recoil process is especially prevalent for H isotopes adsorbed on W, owing to the large mass difference between the scattering partners. Such processes will tend to enhance H isotope recycling from plasma-facing W surfaces and reduce energy transfer to the W substrate.
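For reference, the single-collision elastic recoil energy against which the measured spectra are compared follows standard binary-collision kinematics (assuming the usual laboratory recoil-angle convention):

\[
E_{r} = \frac{4\,M_{1} M_{2}}{(M_{1}+M_{2})^{2}}\, E_{0} \cos^{2}\phi,
\]

where \(E_{0}\) is the 3 keV Ne{sup +} energy, \(M_{1}\) and \(M_{2}\) are the projectile and target (H or D) masses, and \(\phi\) is the laboratory recoil angle; spectral structure above this energy therefore requires the multiple-collision sequences described above.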
Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The first scheme enriches the prolongation operator with new eigenvectors, while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criterion of principal angles between the subspaces spanned by the previous and current prolongation operators. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.
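The subspace-comparison criterion can be sketched as follows; the reuse threshold is problem-dependent, and the matrices below are synthetic stand-ins for prolongation operators.

```python
# Sketch of the subspace-comparison criterion mentioned above: principal angles
# between the column spaces of two prolongation operators, computed from the
# SVD of Q1^T Q2 (illustrative; reuse thresholds are problem-dependent).
import numpy as np

def principal_angles(P_old, P_new):
    """Principal angles (radians) between the ranges of two tall matrices."""
    Q1, _ = np.linalg.qr(P_old)
    Q2, _ = np.linalg.qr(P_new)
    sigma = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

rng = np.random.default_rng(0)
P_old = rng.standard_normal((500, 8))
P_new = P_old + 0.05 * rng.standard_normal((500, 8))   # slightly perturbed eigenspace
angles = principal_angles(P_old, P_new)
# A small largest principal angle suggests the previous prolongation can be
# reused (MGGB-style) instead of recomputing the eigenspace.
print("largest principal angle (deg):", np.degrees(angles.max()))
```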
We consider linear systems arising from the use of the finite element method for solving scalar linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and in particular it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the support number bounds, which control the number of iterations in the preconditioned iterative solver, depend on mesh quality measures but not on the problem size or shape of the domain.
A family of microporous phases with compositions Na{sub 2}Nb{sub 2-x}Ti{sub x}O{sub 6-x}(OH){sub x} {center_dot} H{sub 2}O (0 {le} x {le} 0.4) transform to Na{sub 2}Nb{sub 2-x}Ti{sub x}O{sub 6-0.5x} perovskites upon heating. In this study, we have measured the enthalpies of formation of the microporous phases and their corresponding perovskites from the constituent oxides and from the elements by drop solution calorimetry in 3Na{sub 2}O {center_dot} 4MoO{sub 3} solvent at 974 K. As Ti/Nb increases, the enthalpies of formation for the microporous phases become less exothermic up to x = {approx}0.2 but then more exothermic thereafter. In contrast, the formation enthalpies for the corresponding perovskites become less exothermic across the series. The energetic disparity between the two series can be attributed to their different mechanisms of ionic substitutions: Nb{sup 5+} + O{sup 2-} {yields} Ti{sup 4+} + OH{sup -} for the microporous phases and Nb{sup 5+} {yields} Ti{sup 4+} + 0.5 V{sub O}** for the perovskites. From the calorimetric data for the two series, the enthalpies of the dehydration reaction, Na{sub 2}Nb{sub 2-x}Ti{sub x}O{sub 6-x}(OH){sub x} {center_dot} H{sub 2}O {yields} Na{sub 2}Nb{sub 2-x}Ti{sub x}O{sub 6-0.5X} + H{sub 2}O, have been derived, and their implications for phase stability at the synthesis conditions are discussed.
Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but are not useful for computations of large numbers of modes. A closed form solution is presented here for all the eigenvalues and eigenfunctions for a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.
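The separable structure expected for greased (lubricated) walls, i.e., zero normal displacement and zero tangential traction on each face, is sketched below; consult the report itself for the complete mode count, degeneracies, and normalization.

\[
\mathbf{k}_{lmn} = \left(\frac{l\pi}{a}, \frac{m\pi}{b}, \frac{n\pi}{c}\right),
\qquad
\rho\,\omega_{\mathrm{dil}}^{2} = (\lambda + 2\mu)\,\lvert\mathbf{k}_{lmn}\rvert^{2},
\qquad
\rho\,\omega_{\mathrm{shear}}^{2} = \mu\,\lvert\mathbf{k}_{lmn}\rvert^{2},
\]

for a cuboid of dimensions \(a \times b \times c\) with Lamé parameters \(\lambda, \mu\), density \(\rho\), and non-negative integer mode indices \((l, m, n)\); the dilatational eigenfunctions are proportional to \(\nabla[\cos(l\pi x/a)\cos(m\pi y/b)\cos(n\pi z/c)]\), with additional equivoluminal (shear) families sharing the same wave vectors.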
We report on unique particle-in-cell simulations to understand the relativistic electron beam thermalization and subsequent heating of highly compressed plasmas. The simulations yield heated core parameters in good agreement with the GEKKO-PW experimental measurements, given reasonable assumptions of laser-to-electron coupling efficiency and the distribution function of laser-produced electrons. The classical range of the hot electrons exceeds the mass density-core diameter product {rho}L by a factor of several. Anomalous stopping appears to be present and is created by the growth and saturation of an electromagnetic filamentation mode that generates a strong back-EMF impeding hot electrons on the injection side of the density maxima.
An empirical model for investigating the behavior of CaCO{sub 3} polymorphs incorporating a shell model for oxygen has been created. The model was constructed by fitting to: the structure of aragonite and calcite; their elastic, static and high-frequency dielectric constants; phonon frequencies at the wave vectors [1/2 0 2] and [0 0 0] of calcite; and vibrational frequencies of the carbonate deformation modes of calcite. The high-pressure phase transition between calcite I and II is observed. The potentials for the CO{sub 3} group were transferred to other carbonates, by refitting the interaction between CO{sub 3} and the cation to both the experimental structures and their bulk modulus, creating a set of potentials for calculating the properties of a wide range of carbonate materials. Defect energies of substitutional cation defects were analyzed for calcite and aragonite phases. The results were rationalized by studying the structure of calcite and aragonite in greater detail.
Recent experiments have shown that in the oxygen isotopic exchange reaction for O({sup 1}D) + CO{sub 2} the elastic channel is approximately 50% that of the inelastic channel [Perri et al., 2003]. We propose an analogous oxygen atom exchange reaction for the isoelectronic O({sup 1}D) + N{sub 2}O system to explain the mass-independent isotopic fractionation (MIF) in atmospheric N{sub 2}O. We apply quantum chemical methods to compute the energetics of the potential energy surfaces on which the O({sup 1}D) + N{sub 2}O reaction occurs. Preliminary modeling results indicate that oxygen isotopic exchange via O({sup 1}D) + N{sub 2}O can account for the MIF oxygen anomaly if the oxygen atom isotopic exchange rate is 30-50% that of the total rate for the reactive channels.
We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.
The collapse of the Soviet Union in 1991 left the legacy of the USSR weapons complex with an estimated 50 nuclear, chemical, and biological weapons cities containing facilities responsible for research, production, maintenance, and destruction of the weapons stockpile. The Russian Federation acquired ten such previously secret, closed nuclear weapons complex cities. Unfortunately, a lack of government funding to support these facilities resulted in non-payment of salaries to employees and even plant closures, which led to an international fear of weapons material and knowledge proliferation. This dissertation analyzes migration in 33 regions of the Russian Federation, six of which contain the ten closed nuclear weapons complex cities. This study finds that the presence of a closed nuclear city does not significantly influence migration. However, the factors that do influence migration are statistically different in regions containing closed nuclear cities compared to regions without closed nuclear cities. Further, these results show that the net rate of migration has changed across the years since the break up of the Soviet Union, and that the push and pull factors for migration have changed across time. Specifically, personal and residential factors had a significant impact on migration immediately following the collapse of the Soviet Union, but economic infrastructure and societal factors became significant in later years. Two significant policy conclusions are derived from this research. First, higher levels of income are found to increase outmigration from regions, implying that programs designed to prevent migration by increasing incomes for closed city residents may be counter-productive. Second, this study finds that programs designed to increase capital and build infrastructure in the new Russian Federation will be more effective for employing scientists and engineers from the weapons complex, and consequently reduce the potential for emigration of potential proliferants.
It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will make progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10{sup 21} FLOPS (1 Zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards at 10{sup 16}-10{sup 19} FLOPS (100 Petaflops to 10 Exaflops), depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach will quantify the maximum performance permitted by the laws of physics for extension of current technology and then find a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the 'advanced' features would provide a boost to the limits of performance.
Plasma and sheath structure around a rf excited stepped electrode is investigated. Laser-induced fluorescence dip spectroscopy is used to spatially resolve sheath fields in an argon discharge while optical emission and laser-induced fluorescence are used to measure the spatial structure of the surrounding discharge for various discharge conditions and step-junction configurations. The presence of the step perturbs the spatial structure of the fields around the step as well as the excitation in the region above the step.
Its large cross section for absorption of thermal neutrons has made {sup 10}B a frequent candidate for use in neutron detectors. Here a boron-carbide-based thermoelectric device for the detection of a thermal-neutron flux is proposed. The very high melting temperatures and the radiation tolerance of boron carbides make them suitable for use within hostile environments (e.g., within nuclear reactors). The large anomalous Seebeck coefficients of boron carbides are exploited in proposing a relatively sensitive detector of the local heating that follows the absorption of a neutron by a {sup 10}B nucleus in a boron carbide.
Flash x-ray radiography has undergone a transformation in recent years with the resurgence of interest in compact, high intensity pulsed-power-driven electron beam sources. The radiographic requirements and the choice of a consistent x-ray source determine the accelerator parameters, which can be met by demonstrated Induction Voltage Adder technologies. This paper reviews the state of the art and the recent advances which have improved performance by over an order of magnitude in beam brightness and radiographic utility.
We compare inexact Newton and coordinate descent methods for optimizing the quality of a mesh by repositioning the vertices, where quality is measured by the harmonic mean of the mean-ratio metric. The effects of problem size, element size heterogeneity, and various vertex displacement schemes on the performance of these algorithms are assessed for a series of tetrahedral meshes.
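For reference, one common algebraic statement of the mean-ratio quality of a tetrahedron is the following (Knupp-style convention; the paper's exact normalization may differ). Here \(T\) is the 3x3 matrix mapping the ideal equilateral reference element to the physical element:

\[
q(T) = \frac{3\,(\det T)^{2/3}}{\lVert T \rVert_{F}^{2}}, \qquad 0 < q \le 1,
\]

so that \(q = 1\) for the ideal element and \(q \to 0\) as the element degenerates; the objective optimized by vertex repositioning is then the harmonic mean of \(q\) over all elements of the mesh.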
We performed atomistic simulations to study the effect of free surfaces on the yielding of gold nanowires. Tensile surface stresses on the surfaces of the nanowires cause them to contract along the length with respect to the bulk face-centered cubic lattice and induce compressive stress in the interior. When the cross-sectional area of a (100) nanowire is less than 2.45 nm x 2.45 nm, the wire yields under its surface stresses. Under external forces and surface stresses, nanowires yield via the nucleation and propagation of {l_brace}111{r_brace}<112> partial dislocations. The magnitudes of the tensile and compressive yield stress of (100) nanowires increase and decrease, respectively, with a decrease of the wire width. The magnitude of the tensile yield stress is much larger than that of the compressive yield stress for small (100) nanowires, while for small <111> nanowires, tensile and compressive yield stresses have similar magnitudes. The critical resolved shear stress (RSS) due to external forces depends on wire width, orientation, and loading condition (tension vs. compression). However, the critical RSS in the interior of the nanowires, which is exerted by both the external force and the surface-stress-induced compressive stress, does not change significantly with wire width for the same orientation and loading condition, and can thus serve as a 'local' criterion. This local criterion is invoked to explain the observed size dependence of yield behavior and tensile/compressive yield stress asymmetry, considering surface stress effects and the different slip systems active in tensile and compressive yielding.
Calore is the ASC code developed to model steady and transient thermal diffusion with chemistry and dynamic enclosure radiation. An integral part of the software development process is code verification, which addresses the question 'Are we correctly solving the model equations?' This process aids the developers in that it identifies potential software bugs and gives the thermal analyst confidence that a properly prepared input will produce satisfactory output. Grid refinement studies have been performed on problems for which we have analytical solutions. In this talk, the code verification process is overviewed and recent results are presented. Recent verification studies have focused on transient nonlinear heat conduction and on verifying algorithms associated with (tied) contact and adaptive mesh refinement. In addition, an approach to measure the coverage of the verification test suite relative to intended code applications is discussed.
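A minimal sketch of the observed-order computation used in such grid refinement studies is shown below; the mesh sizes and error norms are hypothetical.

```python
# Sketch of the observed-order-of-convergence check used in grid refinement
# studies: compare the discretization error on successively refined meshes
# against an analytical solution. Data values here are hypothetical.
import math

# (mesh size h, error norm ||T_h - T_exact||) from three refinements
results = [(0.100, 4.1e-3), (0.050, 1.05e-3), (0.025, 2.7e-4)]

for (h1, e1), (h2, e2) in zip(results, results[1:]):
    r = h1 / h2                                  # refinement ratio
    p = math.log(e1 / e2) / math.log(r)          # observed order of accuracy
    print(f"h: {h1:.3f} -> {h2:.3f}   observed order p = {p:.2f}")
# A second-order-accurate scheme should give p near 2 as the mesh is refined.
```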
Three complex target penetration scenarios are run with a model developed by the U. S. Army Engineer Waterways Experiment Station, called PENCURV. The results are compared with both test data and a Zapotec model to evaluate PENCURV's suitability for conducting broad-based scoping studies on a variety of targets to give first order solutions to the problem of G-loading. Under many circumstances, the simpler, empirically based PENCURV model compares well with test data and the much more sophisticated Zapotec model. The results suggest that, if PENCURV were enhanced to include rotational acceleration in its G-loading computations, it would provide much more accurate solutions for a wide variety of penetration problems. Data from an improved PENCURV program would allow for faster, lower cost optimization of targets, test parameters and penetration bodies as Sandia National Laboratories continues in its evaluation of the survivability requirements for earth penetrating sensors and weapons.
There is currently a large research and development effort within the high-performance computing community on advanced parallel programming models. This research can potentially have an impact on parallel applications, system software, and computing architectures in the next several years. Given Sandia's expertise and unique perspective in these areas, particularly on very large-scale systems, there are many areas in which Sandia can contribute to this effort. This technical report provides a survey of past and present parallel programming model research projects and provides a detailed description of the Partitioned Global Address Space (PGAS) programming model. The PGAS model may offer several improvements over the traditional distributed memory message passing model, which is the dominant model currently being used at Sandia. This technical report discusses these potential benefits and outlines specific areas where Sandia's expertise could contribute to current research activities. In particular, we describe several projects in the areas of high-performance networking, operating systems and parallel runtime systems, compilers, application development, and performance evaluation.
We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp_tensor and tucker_tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
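To illustrate the matricization operation itself (not the MATLAB classes' API), a mode-n unfolding can be written in a few lines of NumPy; note that column-ordering conventions differ between implementations.

```python
# Sketch of mode-n matricization (unfolding) of a dense tensor in NumPy,
# illustrating the operation the MATLAB classes provide (not the classes' API).
# Column ordering conventions vary between implementations.
import numpy as np

def matricize(X, mode):
    """Unfold tensor X along `mode`: the result has X.shape[mode] rows."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

X = np.arange(24).reshape(2, 3, 4)        # a small 2 x 3 x 4 tensor
for n in range(X.ndim):
    print(f"mode-{n} unfolding has shape {matricize(X, n).shape}")
```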
Geostatistical and non-geostatistical noise-filtering methodologies (factorial kriging and a low-pass filter, respectively) and a region-growing method are applied to analytic-signal magnetometer images at two UXO-contaminated sites to delineate UXO target areas. Overall delineation performance is improved by removing background noise. Factorial kriging slightly outperforms the low-pass filter, but there is no distinct difference between them in terms of finding anomalies of interest.
We give processor-allocation algorithms for grid architectures, where the objective is to select processors from a set of available processors to minimize the average number of communication hops. The associated clustering problem is as follows: given n points in R{sup d}, find a size-k subset with minimum average pairwise L{sub 1} distance. We present a natural approximation algorithm and show that it is a 7/4-approximation for 2D grids. In d dimensions, the approximation guarantee is 2 - 1/(2d), which is tight. We also give a polynomial-time approximation scheme (PTAS) for constant dimension d and report on experimental results.
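A natural heuristic of the kind described can be sketched as follows, under the assumption that each candidate cluster consists of a point and its k-1 nearest neighbors in L{sub 1} distance; whether this matches the paper's algorithm exactly is not claimed.

```python
# Sketch of a natural approximation heuristic for the stated clustering problem:
# for each point, form the cluster of that point plus its k-1 nearest neighbors
# in L1 distance, and keep the cluster with the smallest average pairwise L1
# distance. (Whether this is exactly the paper's algorithm is not claimed.)
import itertools
import random

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def avg_pairwise(cluster):
    pairs = list(itertools.combinations(cluster, 2))
    return sum(l1(p, q) for p, q in pairs) / len(pairs)

def natural_cluster(points, k):
    best = None
    for center in points:
        cluster = sorted(points, key=lambda p: l1(p, center))[:k]
        if best is None or avg_pairwise(cluster) < avg_pairwise(best):
            best = cluster
    return best

random.seed(0)
free = [(random.randrange(16), random.randrange(16)) for _ in range(40)]  # free processors on a 2D grid
print(natural_cluster(free, k=8))
```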
One critical aspect of any denuclearization of the Democratic People's Republic of Korea (DPRK) involves dismantlement of its nuclear facilities and management of their associated radioactive wastes. The decommissioning problem for its two principal operational plutonium facilities at Yongbyun, the 5 MWe nuclear reactor and the Radiochemical Laboratory reprocessing facility, alone presents a formidable challenge. Dismantling those facilities will create radioactive waste in addition to existing inventories of spent fuel and reprocessing wastes. Negotiations with the DPRK, such as the Six Party Talks, need to appreciate the enormous scale of the radioactive waste management problem resulting from dismantlement. The two operating plutonium facilities, along with their legacy wastes, will result in anywhere from 50 to 100 metric tons of uranium spent fuel, as much as 500,000 liters of liquid high-level waste, as well as miscellaneous high-level waste sources from the Radiochemical Laboratory. A substantial quantity of intermediate-level waste will result from disposing of 600 metric tons of graphite from the reactor, an undetermined quantity of chemical decladding liquid waste from reprocessing, and hundreds of tons of contaminated concrete and metal from facility dismantlement. Various facilities for dismantlement, decontamination, waste treatment and packaging, and storage will be needed. The shipment of spent fuel and liquid high-level waste out of the DPRK is also likely to be required. Nuclear facility dismantlement and radioactive waste management in the DPRK are all the more difficult because of nuclear nonproliferation constraints, including the call by the United States for 'complete, verifiable and irreversible dismantlement', or 'CVID'. It is desirable to accomplish dismantlement quickly, but many aspects of radioactive waste management cannot be achieved without careful assessment, planning and preparation, sustained commitment, and long completion times. The radioactive waste management problem in fact offers a prospect for international participation to engage the DPRK constructively. DPRK nuclear dismantlement, when accompanied by a concerted effort for effective radioactive waste management, can be a mutually beneficial goal.
Sandia and Rontec have developed an annular, 12-element, 60 mm², Peltier-cooled, translatable silicon drift detector called the SDD-12. The body of the SDD-12 is only 22.8 mm in total thickness and easily fits between the sample and the upstream wall of the Sandia microbeam chamber. At a working distance of 1 mm, the solid angle is 1.09 sr. The energy resolution is 170 eV at count rates <40 kcps and 200 eV at 1 Mcps. X-ray count rates must be maintained below 50 kcps when protons are allowed to strike the full area of the SDD. Another innovation in this new µPIXE system is that the data are analyzed using Sandia's Automated eXpert Spectral Image Analysis (AXSIA).
This project, under IMF, will attempt to develop a new family of inorganic crystalline porous materials that will improve energy efficiency and productivity via improved separations. Initially the project will focus on materials for separating linear from branched hydrocarbons. However, it is anticipated that the results will provide the knowledge base needed to apply this technology to additional hydrocarbon and chemical separations. Industrial involvement from Goodyear and Burns & McDonnell provides needed direction for solving real industrial problems, which will find application throughout the US chemical and petroleum industries.
A microswitch utilizing thermoelectric MEMS actuators is being designed, fabricated, and characterized. The switch is intended to switch >1000 VDC with over 100 gigaohms of off-state resistance. The main challenge in designing these switches is determining a contact electrode configuration that can stand off high voltages while still allowing the MEMS actuators to bridge the contact gap. Extensive high-voltage breakdown testing has confirmed that the breakdown response of planar MEMS polysilicon devices is similar to the published response of larger metal electrodes across single small air gaps (0.5 to 10 µm). Investigations of breakdown response in planar electrode configurations with multiple gaps show promising results for high-voltage switching.
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
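As a toy illustration of the enumerate-and-filter idea described above (an exhaustive search over discrete packing placements pruned by sparse distance constraints), consider the following Python sketch. All names and the tolerance value are hypothetical, and this is not the published algorithm; a real implementation enumerates orientations and positions of whole secondary structure elements rather than single points.

```python
from itertools import product
import math

def satisfies(constraints, coords, tol=2.0):
    """Check sparse pairwise distance constraints (i, j, target_distance)
    against a candidate conformation given as {segment_id: (x, y, z)}."""
    for i, j, target in constraints:
        if abs(math.dist(coords[i], coords[j]) - target) > tol:
            return False
    return True

def enumerate_conformations(segment_ids, candidate_positions, constraints):
    """Exhaustively place each segment at every candidate position and keep
    only placements consistent with the constraints (toy illustration)."""
    hits = []
    for placement in product(candidate_positions, repeat=len(segment_ids)):
        coords = dict(zip(segment_ids, placement))
        if satisfies(constraints, coords):
            hits.append(coords)
    return hits
```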
Autonomous bio-chemical agent detectors require sample preparation involving multiplexed fluid control. We have developed a portable microfluidic pump array for metering sub-microliter volumes at flow rates of 1-100 µL/min. Each pump is composed of an electrokinetic (EK) pump and a high-voltage power supply with 15 Hz feedback from flow sensors. The combination of high pump fluid impedance and active control results in precise fluid metering with nanoliter accuracy. Automated sample preparation will be demonstrated by labeling proteins with fluorescamine and subsequent injection into a capillary gel electrophoresis (CGE) chip.
Natural fractures in Jurassic through Tertiary rock units of the Raton Basin locally include conjugate shear fractures that are mechanically compatible with associated extension fractures; that is, the bisector of their acute angle is parallel to the strike of the associated extension fractures and normal to the thrust front at the western margin of the basin. Both sets of fractures are therefore interpreted to have formed during Laramide-age, west-to-east thrusting that formed the Sangre de Cristo Mountains and subsequently the foreland Raton Basin, and that imposed strong east-west compressive stresses on the strata filling the basin. This pattern is not universal, however. Anomalous NNE-SSW-striking fractures locally dominate strata close to the thrust front, and fracture patterns are irregular in strata associated with anticlinal structures within the basin. Of special interest are strike-slip-style conjugate shear fractures within Dakota Sandstone outcrops 60 miles east of the thrust front. Mohr-Coulomb failure diagrams are used to describe how these formed, and how two distinctly different types of fractures could form in the same basin, under the same regional tectonic setting, and at the same time. The primary controls in this interpretation are simply the mechanical properties of the specific rock units and the depth of burial, rather than significant changes in the applied stress.
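For reference, the standard Mohr-Coulomb failure criterion underlying such diagrams (quoted here in its textbook form, not from this work) is

$$\tau = c + \sigma_n \tan\phi,$$

where $\tau$ is the shear stress on the potential failure plane, $c$ is the cohesion, $\sigma_n$ is the normal stress (which increases with burial depth), and $\phi$ is the angle of internal friction. Shear failure occurs when the Mohr circle of the local stress state touches this envelope, which is why rock properties ($c$, $\phi$) and burial depth alone can determine whether extension or shear fractures form under the same regional stress field.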
We provide an algorithm and analysis of a high-order projection scheme for time integration of the incompressible Navier-Stokes equations (NSE). The method is based on a projection onto the subspace of divergence-free (incompressible) functions interleaved with a Krylov-based exponential time integrator (KBEI). These time integration methods provide a high-order accurate, stable approach with many of the advantages of explicit methods, and can reduce the computational resources required relative to conventional methods. The method is scalable in the sense that the computational costs grow linearly with problem size. Exponential integrators, typically used to solve systems of ODEs, require the product of the exponential of the Jacobian with a vector. For large systems, this product can be approximated efficiently by Krylov subspace methods. In contrast to explicit methods, however, KBEIs are not restricted by a stability limit on the time step. And while implicit methods require the solution of a linear system with the Jacobian, KBEIs require only matrix-vector products with the Jacobian. Furthermore, these methods are based on linearization, so there is no nonlinear system to solve at each time step. Differential-algebraic equations (DAEs) are ordinary differential equations (ODEs) subject to algebraic constraints. The discretized NSE constitute a system of DAEs, where the incompressibility condition is the algebraic constraint. Exponential integrators can be extended to DAEs with linear constraints imposed via a projection onto the constraint manifold. This results in a projected ODE that is integrated by a KBEI. In this approach, the Krylov subspace satisfies the constraint, so the solution at the advanced time step automatically satisfies the constraint as well. For the NSE, the projection onto the constraint is typically achieved by a projection induced by the L^2 inner product. We examine this L^2 projection and an H^1 projection induced by the H^1 semi-inner product. The H^1 projection has an advantage over the L^2 projection in that it retains tangential Dirichlet boundary conditions for the flow. Both the H^1 and L^2 projections are solutions to saddle point problems that are efficiently solved by a preconditioned Uzawa algorithm.
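The core primitive in a Krylov-based exponential integrator is the approximation of the action of a matrix exponential on a vector. The Python sketch below shows the standard Arnoldi approximation exp(tau*A)v ≈ beta*V_m*exp(tau*H_m)*e_1 with a fixed subspace size; it is a minimal illustration of that building block, not the authors' implementation, which would additionally handle the divergence-free projection and matrix-free Jacobian products.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, tau, m=30):
    """Approximate exp(tau*A) @ v with an m-dimensional Arnoldi (Krylov) basis.
    Minimal dense sketch; production codes use matrix-free products of A,
    adaptive subspace sizes, and error estimates."""
    n = v.shape[0]
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]                   # only matrix-vector products with A are needed
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # "happy breakdown": result is exact in the subspace
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta * (V[:, :k] @ (expm(tau * H[:k, :k]) @ e1))
```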
Wireless networking can provide a cost-effective and convenient method for installing and operating an unattended or remote monitoring system in an established facility. There is concern, however, that wireless devices can interfere with each other and with other radio systems within the facility. Additionally, there is concern that these devices add a potential risk to the security of the network. Since all data are transmitted over the air, it is possible for an unauthorized user to intercept data transmissions and/or insert data onto the network if proper security is not in place. This paper describes a study being undertaken to highlight the benefits of wireless networking, evaluate interference and methods for its mitigation, recommend security architectures, and present the results of a wireless network demonstration between Sandia National Laboratories (SNL) and the Joint Research Centre (JRC).
The magnitude and structure of the ion wakefield potential below a single negatively charged dust particle levitated in the plasma sheath region were calculated and measured. Attractive and repulsive components of the interaction force were extracted from a trajectory analysis of low-energy collisions between particles of different mass in a well-defined electrostatic potential.
The Trilinos™ Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries. AztecOO™ is a package within Trilinos that enables the use of the Aztec solver library [19] with Epetra™ [13] objects. AztecOO provides access to Aztec preconditioners and solvers by implementing the Aztec 'matrix-free' interface using Epetra. While Aztec is written in C and procedure-oriented, AztecOO is written in C++ and is object-oriented. In addition to providing access to Aztec capabilities, AztecOO also provides significant new functionality. In particular, it provides an extensible status testing capability that allows expression of the sophisticated stopping criteria needed in production use of iterative solvers. AztecOO also provides mechanisms for using Ifpack [2], ML [20], and AztecOO itself as preconditioners.
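To illustrate in a compact way the kind of extensible, composable status testing mentioned above, here is a conceptual Python sketch of stopping criteria that can be combined. AztecOO's actual status tests are C++ classes with a different interface, so every name below is illustrative only.

```python
class StatusTest:
    """Conceptual base class for a composable stopping criterion."""
    def converged(self, iteration, residual_norm):
        raise NotImplementedError

class ResidualNormTest(StatusTest):
    def __init__(self, tol):
        self.tol = tol
    def converged(self, iteration, residual_norm):
        return residual_norm < self.tol

class MaxItersTest(StatusTest):
    def __init__(self, max_iters):
        self.max_iters = max_iters
    def converged(self, iteration, residual_norm):
        return iteration >= self.max_iters

class AnyOf(StatusTest):
    """Stop when any member test is satisfied (an OR-combination)."""
    def __init__(self, *tests):
        self.tests = tests
    def converged(self, iteration, residual_norm):
        return any(t.converged(iteration, residual_norm) for t in self.tests)

# e.g. stop on a 1e-8 residual norm or after 500 iterations, whichever comes first
stop = AnyOf(ResidualNormTest(1e-8), MaxItersTest(500))
```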
Over the last few years, a variety of experiments studying higher-photon-energy (>4 keV) radiators have been performed, primarily at the Z accelerator. In this paper, the results of experiments designed to study the effects of initial load diameter on the radiated output of stainless steel wire arrays are presented. Stainless steel is primarily iron, which radiates in the K-shell at 6.7 keV. Nested wire arrays with initial outer diameters from 45 mm to 80 mm were fielded at the Z accelerator. A nested array consists of two wire arrays, with the inner concentric to the outer. All of the arrays fielded for this work had a 2:1 (outer:inner) mass and diameter ratio, and the arrays were designed to have the same implosion time. A degradation of K-shell output (pulse shape and power) was observed for the smallest and largest diameter arrays, suggesting a region in which optimal conditions exist for K-shell output. The degradation at small diameters results from the reduced eta value due to low implosion velocity; eta is defined as the kinetic energy per ion divided by the energy required to reach the K-shell. At large diameters, a dramatic degradation of output is observed not just for the K-shell but also for the lower-energy X-rays. This may be the result of the low mass required to maintain an appropriate implosion time: there simply are not many radiators available to participate. Another possibility is that the higher acceleration necessary at large diameters to achieve the same implosion time results in additional instability growth. Also necessary to consider are the effects of interwire gap: due to the limited wire sizes available, the interwire gap on the large-diameter loads is large, in one case more than 3 mm. Comparisons of the trends observed in the experiments (radiated yield, pulse shape, and spectra) will be made to calculations previously benchmarked to K-shell data obtained at Z. The reproducibility of the arrays, advanced imaging diagnostics fielded, current diagnostics, and sensitivities of the calculations are also discussed.
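Written as a formula, the definition of eta given above is

$$\eta = \frac{E_{\text{kinetic, per ion}}}{E_{\text{K-shell}}},$$

where $E_{\text{kinetic, per ion}}$ is the implosion kinetic energy per ion and $E_{\text{K-shell}}$ is the energy required to reach the K-shell; a low implosion velocity at small initial diameter therefore lowers $\eta$ directly.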
Z-pinch plasmas are susceptible to the magnetic Rayleigh-Taylor (MRT) instability. The Z-pinch dynamic hohlraum (ZPDH), as implemented on the Z machine at Sandia National Laboratories, is composed of an annular tungsten plasma that implodes onto a coaxial foam convertor. The collision between the tungsten Z pinch and the convertor launches a strong shock in the foam. Shock heating generates radiation that is trapped by the tungsten Z pinch. The radiation can be used to implode a fuel-filled inertial confinement fusion capsule. Hence, it is important to understand the influence that the MRT instability has on shock generation. This paper presents results of an investigation to determine the effect that the MRT instability has on characteristics of the radiating shock in a ZPDH. Experiments were conducted on Z in which a 1.5-cm-tall nested tungsten wire-array plasma (two arrays with initial diameters of 2.0 and 4.0 cm) imploded onto a 5 mg/cc CH2 foam convertor to create a ~135 eV dynamic hohlraum. X-ray pinhole cameras viewing along the ZPDH axis recorded time- and space-resolved images of emission produced by the radiating shock. These measurements showed that the shock remained circular to within ±30-60 µm as it propagated toward the axis, and that it was highly uniform along its height. The measured emission intensities are compared with synthetic x-ray images obtained by postprocessing two-dimensional radiation-magnetohydrodynamic simulations in which the amplitude of MRT perturbations is varied. These simulations accurately reproduce the measured shock trajectory and the spatial profiles of the dynamic hohlraum interior emission as a function of time, even for large MRT amplitudes. Furthermore, the radiating shock remains relatively uniform in the axial direction regardless of the MRT amplitude because nonuniformities are tamped by the interaction of the tungsten Z-pinch plasma with the foam. These results suggest that inertial confinement fusion implosions driven by a ZPDH should be relatively free of random radiation symmetry variations produced by Z-pinch instabilities.
Sundance is a system of software components that allows construction of an entire parallel simulator and its derivatives using a high-level symbolic language. With this high-level problem description, it is possible to specify a weak formulation of a PDE and its discretization method in a small amount of user-level code; furthermore, because derivatives are easily available, a simulation in Sundance is immediately suitable for accelerated PDE-constrained optimization algorithms. This paper is a tutorial for setting up and solving linear and nonlinear PDEs in Sundance. With several simple examples, we show how to set up mesh objects, geometric regions for BC application, the weak form of the PDE, and boundary conditions. Each example then illustrates use of an appropriate solver and solution visualization.
We present the first comprehensive study of high-wire-number, wire-array Z-pinch dynamics at 14-18 MA using x-ray backlighting and optical shadowgraphy diagnostics. The cylindrical arrays retain slowly expanding, dense wire cores at the initial position for up to 60% of the total implosion time. Azimuthally correlated instabilities at the array edge appear during this stage and continue to grow in amplitude and wavelength after the start of bulk motion, resulting in measurable trailing mass that does not arrive on axis before peak x-ray emission.
An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.