The first viscous compressible three-dimensional BiGlobal linear instability analysis of leading-edge boundary layer flow has been performed. Results have been obtained by independent application of asymptotic analysis and numerical solution of the appropriate partial-differential eigenvalue problem. It has been shown that the classification of three-dimensional linear instabilities of the related incompressible flow [13] into symmetric and antisymmetric mode expansions in the chordwise coordinate persists in the compressible, subsonic flow regime at sufficiently large Reynolds numbers.
Techniques for mitigating the adsorption of ¹³⁷Cs and ⁶⁰Co on metal surfaces (e.g., RAM packages) exposed to contaminated water (e.g., spent-fuel pools) have been developed and experimentally verified. The techniques are also effective in removing some of the ⁶⁰Co and ¹³⁷Cs that may have been adsorbed on the surfaces after removal from the contaminated water. The ¹³⁷Cs mitigation technique is based upon ion-exchange processes. In contrast, ⁶⁰Co contamination primarily resides in minute particles of crud that become lodged on cask surfaces. Crud is an insoluble Fe-Ni-Cr oxide that forms colloidal-sized particles as reactor cooling systems corrode. Because of the chemical similarity between Ni²⁺ and Co²⁺, crud scavenges and retains traces of cobalt as it forms. A number of organic compounds have a high specificity for combining with nickel and cobalt. Ongoing research is investigating the effectiveness of the chemical complexing agent EDTA in dissolving the host phase (crud), thereby liberating the entrained ⁶⁰Co into solution where it can be rinsed away.
The National Spent Nuclear Fuel Program, located at the Idaho National Laboratory (INL), coordinates and integrates national efforts in management and disposal of US Department of Energy (DOE)-owned spent nuclear fuel. These management functions include development of standardized systems for long-term disposal in the proposed Yucca Mountain repository. Nuclear criticality control measures are needed in these systems to avoid restrictive fissile loading limits because of the enrichment and total quantity of fissile material in some types of DOE spent nuclear fuel. This need is being addressed by development of corrosion-resistant, neutron-absorbing structural alloys for nuclear criticality control. This paper outlines results of a metallurgical development program that is investigating the alloying of gadolinium into a nickel-chromium-molybdenum alloy matrix. Gadolinium has been chosen as the neutron-absorbing alloying element due to its high thermal neutron absorption cross section and low solubility in the expected repository environment. The nickel-chromium-molybdenum alloy family was chosen for its known corrosion performance, mechanical properties, and weldability. The workflow of this program includes chemical composition definition, primary and secondary melting studies, ingot conversion processes, properties testing, and national consensus codes and standards work. The microstructural investigation of these alloys shows that the gadolinium addition is present in the alloy as a gadolinium-rich second phase. The mechanical strength values are similar to those expected for commercial Ni-Cr-Mo alloys. The alloys have been corrosion tested with acceptable results. The initial results of weldability tests have also been acceptable. Neutronic testing in a moderated critical array has generated favorable results. An American Society for Testing and Materials material specification has been issued for the alloy and a Code Case has been submitted to the American Society of Mechanical Engineers for code qualification.
This report describes work carried out under a Sandia National Laboratories Excellence in Engineering Fellowship in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Our research group (at UIUC) is developing an intelligent robot and attempting to teach it language. While there are many aspects of this research, for the purposes of this report the most important are the following ideas. Language is primarily based on semantics, not syntax. To truly learn meaning, the language engine must be part of an embodied intelligent system, one capable of using associative learning to form concepts from the perception of experiences in the world, and further capable of manipulating those concepts symbolically. In the work described here, we explore the use of hidden Markov models (HMMs) in this capacity. HMMs are capable of automatically learning and extracting the underlying structure of continuous-valued inputs and representing that structure in the states of the model. These states can then be treated as symbolic representations of the inputs. We describe a composite model consisting of a cascade of HMMs that can be embedded in a small mobile robot and used to learn correlations among sensory inputs to create symbolic concepts. These symbols can then be manipulated linguistically and used for decision making. This is the project final report for the University Collaboration LDRD project 'A Robotic Framework for Semantic Concept Learning'.
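To make the symbol-extraction idea concrete, the following minimal sketch fits a Gaussian HMM to a synthetic continuous-valued sensor stream and treats the decoded state sequence as a stream of symbols. The hmmlearn library and the data are illustrative stand-ins for the report's custom models and robot sensors.

```python
# Minimal sketch: HMM states as discrete "symbols" learned from a
# continuous sensor stream. hmmlearn is an assumed, convenient stand-in
# for the composite models described in the report.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Synthetic 2-D "sensor" stream that wanders among three regimes.
segments = [rng.normal(loc=m, scale=0.3, size=(100, 2))
            for m in ([0, 0], [2, 2], [0, 3], [2, 2], [0, 0])]
X = np.vstack(segments)

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X)

# The decoded state sequence is the symbolic representation of the input;
# downstream, these symbols can be correlated across modalities.
symbols = model.predict(X)
print("learned symbol stream (first 20):", symbols[:20])
```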
Finding the central sets, such as the center and median sets, of a network topology is a fundamental step in the design and analysis of complex distributed systems. This paper presents distributed synchronous algorithms for finding central sets in general tree structures. Our algorithms are distinguished from previous work in that they use only qualitative information, thus reducing the constants hidden in the asymptotic notation, and in that all vertices of the topology know the central sets upon termination.
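The classical leaf-pruning characterization of a tree's center underlies such algorithms; a centralized sketch is given below. In the distributed synchronous setting, each vertex would carry out the same pruning logic using only messages from its neighbors; this version is merely illustrative.

```python
# Centralized sketch of the leaf-pruning idea behind distributed
# tree-center algorithms: repeatedly strip the current leaves; the final
# one or two vertices form the center (minimum-eccentricity set).
from collections import defaultdict

def tree_center(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(adj)
    leaves = [v for v in remaining if len(adj[v]) <= 1]
    while len(remaining) > 2:
        new_leaves = []
        for leaf in leaves:                 # prune one full round of leaves
            remaining.discard(leaf)
            for nbr in adj[leaf]:
                adj[nbr].discard(leaf)
                if nbr in remaining and len(adj[nbr]) == 1:
                    new_leaves.append(nbr)
        leaves = new_leaves
    return remaining

print(tree_center([(1, 2), (2, 3), (3, 4), (3, 5), (5, 6)]))  # -> {3}
```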
Extremely short collision mean free paths and near-singular elastic and inelastic differential cross sections (DCS) make analog Monte Carlo simulation an impractical tool for charged particle transport. The widely used alternative, the condensed history method, while efficient, also suffers from several limitations arising from the use of precomputed smooth distributions for sampling. There is much interest in developing computationally efficient algorithms that implement the correct transport mechanics. Here we present a nonanalog transport-based method that incorporates the correct transport mechanics and is computationally efficient for implementation in single-event Monte Carlo codes. Our method systematically preserves important physics and is mathematically rigorous. It builds on higher-order Fokker-Planck and Boltzmann Fokker-Planck representations of the scattering and energy-loss process, and we accordingly refer to it as a Generalized Boltzmann Fokker-Planck (GBFP) approach. We postulate the existence of nonanalog single-collision scattering and energy-loss distributions (differential cross sections) and impose the constraint that the first few momentum-transfer and energy-loss moments be identical to the corresponding analog values. This is effected through a decomposition or hybridizing scheme wherein the singular, forward-peaked, small-energy-transfer collisions are isolated and de-singularized using different moment-preserving strategies, while the large-angle, large-energy-transfer collisions are described by the exact (analog) DCS or approximated to a high degree of accuracy. The inclusion of the latter component allows the higher angular and energy-loss moments to be accurately captured. This procedure yields a regularized transport model characterized by longer mean free paths and smoother scattering and energy-transfer kernels than the analog model. In practice, acceptable accuracy is achieved with two rigorously preserved moments, but accuracy can be systematically increased to the analog level by preserving successively higher moments with almost no change to the algorithm. Details of specific moment-preserving strategies are described, and results are presented for dose in heterogeneous media due to a pencil beam and a line source of monoenergetic electrons. Errors and runtimes of our nonanalog formulations are contrasted against condensed history implementations.
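As an illustration of the moment-preserving idea (not the full GBFP hybrid scheme), the sketch below replaces an illustrative screened-Rutherford DCS with a single discrete scattering line chosen so that the first two momentum-transfer moments are preserved exactly; the resulting total cross section is much smaller, i.e., the mean free path is much longer.

```python
# Sketch of moment preservation: replace a singular, forward-peaked
# screened-Rutherford DCS with one discrete line sigma0*delta(mu - mu0)
# matching the first two momentum-transfer moments. The screening
# parameter and normalization are illustrative, not evaluated data.
import numpy as np
from scipy.integrate import quad

eta = 1.0e-4                      # screening parameter (illustrative)
dcs = lambda mu: 1.0 / (1.0 - mu + 2.0 * eta) ** 2

# Momentum-transfer moments M_n = integral of (1-mu)^n * dcs(mu) dmu.
M = [quad(lambda mu, n=n: (1 - mu) ** n * dcs(mu), -1.0, 1.0, limit=200)[0]
     for n in (0, 1, 2)]

# Matching M1 and M2 exactly fixes the discrete line:
mu0 = 1.0 - M[2] / M[1]
sigma0 = M[1] ** 2 / M[2]

print(f"analog total cross section : {M[0]:.3e}")
print(f"discrete cross section     : {sigma0:.3e}  (longer mean free path)")
print(f"discrete scattering cosine : {mu0:.6f}")
```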
The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We present a continuation of the theoretical study by Zhang and Sejnowski, in which the influence of neuronal tuning properties on encoding accuracy is analyzed using information theory. When a finite stimulus space is considered, we show that the encoding accuracy improves with narrow tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.
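The result can be reproduced numerically for a population of Poisson neurons with Gaussian tuning curves on a finite stimulus cube; the sketch below (with illustrative parameter choices, not the paper's) computes the Fisher information as a function of tuning width.

```python
# Numerical sketch: Fisher information of Poisson neurons with Gaussian
# tuning curves centered on a lattice in a finite stimulus cube. Peak
# rate, grid spacing, and stimulus range are illustrative choices.
import itertools
import numpy as np

def fisher_at_origin(sigma, D, spacing=0.1, half_width=1.0, peak=10.0):
    axis = np.arange(-half_width, half_width + 1e-9, spacing)
    centers = np.array(list(itertools.product(axis, repeat=D)))
    r2 = np.sum(centers ** 2, axis=1)
    rates = peak * np.exp(-r2 / (2 * sigma ** 2))
    # J_11 for Poisson neurons: sum_i (df_i/dx1)^2 / f_i at the origin.
    return np.sum(rates * centers[:, 0] ** 2) / sigma ** 4

sigmas = np.linspace(0.1, 1.0, 10)
for D in (1, 2, 3):
    J = [fisher_at_origin(s, D) for s in sigmas]
    best = sigmas[int(np.argmax(J))]
    print(f"D={D}: Fisher information is largest at sigma = {best:.2f}")
# Narrow tuning is favored for D = 1 and 2, while an interior optimum
# emerges for D = 3, consistent with the finite-stimulus-space result.
```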
Human behavior is a function of an iterative interaction between the stimulus environment and past experience. It is not simply a matter of the current stimulus environment activating the appropriate experience or rule from memory (e.g., if it is dark and I hear a strange noise outside, then I turn on the outside lights and investigate). Rather, it is a dynamic process that takes into account not only things one would generally do in a given situation, but things that have recently become known (e.g., there have recently been coyotes seen in the area and one is known to be rabid), as well as other immediate environmental characteristics (e.g., it is snowing outside, I know my dog is outside, I know the police are already outside, etc.). All of these factors combine to inform me of the most appropriate behavior for the situation. If it were the case that humans had a rule for every possible contingency, the amount of storage that would be required to enable us to fluidly deal with most situations we encounter would rapidly become biologically untenable. We can all deal with contingencies like the one above with fairly little effort, but if it isn't based on rules, what is it based on? The assertion of the Cognitive Systems program at Sandia for the past 5 years is that at the heart of this ability to effectively navigate the world is an ability to discriminate between different contexts (i.e., Dynamic Context Discrimination, or DCD). While this assertion in and of itself might not seem earthshaking, it is compelling that this ability and its components show up in a wide variety of paradigms across different subdisciplines in psychology. We begin by outlining, at a high functional level, the basic ideas of DCD. We then provide evidence from several different literatures and paradigms that support our assertion that DCD is a core aspect of cognitive functioning. Finally, we discuss DCD and the computational model that we have developed as an instantiation of DCD in more detail. Before commencing with our overview of DCD, we should note that DCD is not necessarily a theory in the classic sense. Rather, it is a description of cognitive functioning that seeks to unify highly similar findings across a wide variety of literatures. Further, we believe that such convergence warrants a central place in efforts to computationally emulate human cognition. That is, DCD is a general principle of cognition. It is also important to note that while we are drawing parallels across many literatures, these are functional parallels and are not necessarily structural ones. That is, we are not saying that the same neural pathways are involved in these phenomena. We are only saying that the different neural pathways that are responsible for the appearance of these various phenomena follow the same functional rules - the mechanisms are the same even if the physical parts are distinct. Furthermore, DCD is not a causal mechanism - it is an emergent property of the way the brain is constructed. DCD is the result of neurophysiology (cf. John, 2002, 2003). Finally, it is important to note that we are not proposing a generic learning mechanism such that one biological algorithm can account for all situation interpretation. Rather, we are pointing out that there are strikingly similar empirical results across a wide variety of disciplines that can be understood, in part, by similar cognitive processes. 
It is entirely possible, even assumed in some cases (i.e., primary language acquisition) that these more generic cognitive processes are complemented and constrained by various limits which may or may not be biological in nature (cf. Bates & Elman, 1996; Elman, in press).
Dual-frequency reactors employ source rf power supplies to generate plasma and bias supplies to extract ions. There is debate over the choice of source and bias frequencies. Higher frequencies facilitate plasma generation, but their shorter wavelengths may cause spatial variations in plasma properties. The electrical nonlinearity of plasma sheaths causes harmonic generation and mixing of the source and bias frequencies. These processes, and the resulting spectrum of frequencies, are as much dependent on the electrical characteristics of the matching networks and on chamber geometry as on plasma sheath properties. We investigated such electrical effects in a 300-mm Applied Materials plasma reactor. Data were taken for a 13.56-MHz bias frequency (chuck) and for source frequencies from 30 to 160 MHz (upper electrode). An rf-magnetic-field probe (B-dot loop) was used to measure the radial variation of the fields inside the plasma. We describe the results of this work.
A laser hazard analysis and safety assessment was performed for the LH-40 IR Laser Rangefinder based on the 2000 versions of the American National Standards Institute's standards Z136.1, Safe Use of Lasers, and Z136.6, Safe Use of Lasers Outdoors. The LH-40 IR laser is central to the Long Range Reconnaissance and Observation System (LORROS). The LORROS is being evaluated by the Department 4149 Group to determine its capability as a long-range assessment tool. The manufacturer lists the laser rangefinder as 'eye safe' (a Class 1 laser as classified under the CDRH Compliance Guide for Laser Products and the 21 CFR 1040 Laser Product Performance Standard). It was necessary that SNL validate this prior to its use involving the general public. A formal laser hazard analysis is presented for the typical mode of operation.
This report surveys the needs associated with environmental monitoring and long-term environmental stewardship. Emerging sensor technologies are reviewed to identify compatible technologies for various environmental monitoring applications. The contaminants that are considered in this report are grouped into the following categories: (1) metals, (2) radioisotopes, (3) volatile organic compounds, and (4) biological contaminants. Regulatory drivers are evaluated for different applications (e.g., drinking water, storm water, pretreatment, and air emissions), and sensor requirements are derived from these regulatory metrics. Sensor capabilities are then summarized according to contaminant type, and the applicability of the different sensors to various environmental monitoring applications is discussed.
This report describes both a general methodology and some specific examples of passive radio receivers. A passive radio receiver uses no direct electrical power but makes sole use of the power available in the radio spectrum. These radio receivers are suitable as low data-rate receivers or passive alerting devices for standard, high power radio receivers. Some zero-power radio architectures exhibit significant improvements in range with the addition of very low power amplifiers or signal processing electronics. These ultra-low power radios are also discussed and compared to the purely zero-power approaches.
We modeled the effects of temperature, degree of polymerization, and surface coverage on the equilibrium structure of tethered poly(N-isopropylacrylamide) chains immersed in water. We employed a numerical self-consistent field theory in which the experimental phase diagram was used as input. At low temperatures, the composition profiles are approximately parabolic and extend into the solvent. In contrast, at temperatures above the lower critical solution temperature (LCST) of the bulk solution, the polymer profiles are collapsed near the surface. The layer thickness and the effective monomer fraction within the layer undergo what appears to be a first-order change at a temperature that depends on surface coverage and chain length. Our results suggest that, as a result of the tethering constraint, the phase diagram becomes distorted relative to that of the bulk polymer solution and exhibits closed-loop behavior. As a consequence, we find that the relative magnitude of the layer thickness change between 20 and 40 °C is a nonmonotonic function of surface coverage, with a maximum that shifts to lower surface coverage as the chain length increases, in qualitative agreement with experiment.
Sandia National Laboratories, under contract to the Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption, using multivariate statistical and geostatistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical and geostatistical interpolation techniques to the geologic data provided by NUMO. A workshop report on volcanism produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) lists the most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region from the data provided by NUMO revealed that data are not available at the same locations for all of the important geologic variables; in other words, the geologic variable vectors were found to be spatially incomplete. However, complete geologic variable vectors are necessary to perform multivariate statistical analyses. As a first step toward constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid were selected. The data available for each geologic variable on a geographic coordinate system were transferred to this grid, and the recorded data on volcanic activity for the Sengan region were produced on the same grid. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables most important for volcanism; in the regionalized classification procedure, this step is known as variable selection. The following variables were determined to be most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH, presence of volcanic rocks, and presence of hydrothermal alteration. The data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate values for each variable at the 23,949 centers of the 1 km grid cells that represent the Sengan region. These values formed complete geologic variable vectors at each of the 23,949 cell centers.
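For reference, a minimal ordinary-kriging sketch of the kind of interpolation used to populate the grid-cell centers is given below; the variogram model and data values are illustrative only, not the Sengan data.

```python
# Ordinary-kriging sketch: estimate a variable at a cell center from
# scattered observations using an exponential variogram. The sill,
# range, coordinates, and values are illustrative stand-ins.
import numpy as np

def gamma(h, sill=1.0, a=5.0):               # exponential variogram model
    return sill * (1.0 - np.exp(-3.0 * h / a))

def ordinary_krige(xy, z, target):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                             # Lagrange-multiplier row/col
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z
    variance = w @ b                          # kriging variance
    return estimate, variance

xy = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 4.0]])
z = np.array([1.2, 0.9, 1.7, 0.4])            # e.g., geothermal gradient
print(ordinary_krige(xy, z, target=np.array([1.5, 1.5])))
```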
Sandia has developed and tested mockups armored with W rods over the last decade and pioneered the initial development of W rod armor for the International Thermonuclear Experimental Reactor (ITER) in the 1990s. We have also developed 2D and 3D thermal and stress models of W rod-armored plasma facing components (PFCs) and test mockups, and we are applying the models both to short pulses, i.e., edge-localized modes (ELMs), and to steady-state thermal performance for applications in C-MOD, DiMES testing, and ITER. This paper briefly describes the 2D and 3D models and their applications, with emphasis on modeling for an ongoing test program that simulates repeated heat loads from ITER ELMs.
In recent dynamic hohlraum experiments on the Z facility, Al and MgF₂ tracer layers were embedded in cylindrical CH₂ foam targets to provide K-shell lines in the keV spectral region for diagnosing the conditions of the interior hohlraum plasma. The position of the tracers was varied: sometimes they were placed 2 mm from the ends of the foam cylinder and sometimes at the ends of the cylinder. The composition of the tracers was also varied, in that pure Al layers, pure MgF₂ layers, or mixtures of the elements were employed on various shots. Time-resolved K-shell spectra of both Al and Mg show mostly absorption lines. These data can be analyzed with detailed-configuration atomic models of carbon, aluminum, and magnesium in which spectra are calculated by solving the radiation transport equation for as many as 4100 frequencies. We report results from shot Z1022 to illustrate the basic radiation physics and the capabilities as well as the limitations of this diagnostic method.
We present a formulation for coupling atomistic and continuum simulation methods for application to both quasistatic and dynamic analyses. In our formulation, a coarse-scale continuum discretization is assumed to cover all parts of the computational domain with atomistic crystals introduced only in regions of interest. The geometry of the discretization and crystal are allowed to overlap arbitrarily. Our approach uses interpolation and projection operators to link the kinematics of each region, which are then used to formulate a system potential energy from which we derive coupled expressions for the forces acting in each region. A hyperelastic constitutive formulation is used to compute the stress response of the defect-free continuum with constitutive properties derived from the Cauchy-Born rule. A correction to the Cauchy-Born rule is introduced in the overlap region to minimize fictitious boundary effects. Features of our approach will be demonstrated with simulations in one, two and three dimensions.
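A one-dimensional illustration of the Cauchy-Born constitutive idea is sketched below: the continuum strain energy density follows from summing an interatomic potential (here Lennard-Jones, an illustrative choice) over a homogeneously deformed lattice, and stress follows by differentiation.

```python
# 1-D illustration of the Cauchy-Born rule used for the continuum
# constitutive response: deform a perfect chain homogeneously, sum the
# pair energy per unit reference length, and differentiate to get stress.
import numpy as np

eps, sig = 1.0, 1.0
a0 = 2.0 ** (1.0 / 6.0)                       # LJ equilibrium spacing

def lj(r):
    return 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

def energy_density(F, n_shells=10):
    # Strain energy per unit reference length under uniform stretch F.
    W = sum(lj(F * k * a0) for k in range(1, n_shells + 1))
    return W / a0

def stress(F, dF=1e-6):                       # 1st Piola-Kirchhoff, P = dW/dF
    return (energy_density(F + dF) - energy_density(F - dF)) / (2.0 * dF)

for F in (0.98, 1.00, 1.02):
    print(f"F = {F:.2f}:  W = {energy_density(F):+.4f},  P = {stress(F):+.4f}")
```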
With increased terrorist threats in the past few years, it is no longer feasible to feel confident that a facility is well protected with a static security system. Potential adversaries often research their targets, examining procedural and system changes, in order to attack at a vulnerable time. Such system changes may include scheduled sensor maintenance, scheduled or unscheduled changes in the guard force, facility alert level changes, sensor failures or degradation, etc. All of these changes impact the system effectiveness and can make a facility more vulnerable. Currently, a standard analysis of system effectiveness is performed approximately every six months using a vulnerability assessment tool called ASSESS (Analytical Systems and Software for Evaluating Safeguards and Systems). New standards for determining a facility's system effectiveness will be defined by tools that are currently under development, such as ATLAS (Adversary Time-line Analysis System) and NextGen (Next Generation Security Simulation). Although these tools are useful to model analyses at different spatial resolutions and can support some sensor dynamics using statistical models, they are limited in that they require a static system state as input. They cannot account for the dynamics of the system through day-to-day operations. The emphasis of this project was to determine the feasibility of dynamically monitoring the facility security system and performing an analysis as changes occur. Hence, the system effectiveness is known at all times, greatly assisting time-critical decisions in response to a threat or a potential threat.
We have successfully demonstrated selective trapping, concentration, and release of various biological organisms and inert beads by insulator-based dielectrophoresis within a polymeric microfluidic device. The microfluidic channels and internal features, in this case arrays of insulating posts, were initially created through standard wet-etch techniques in glass. This glass chip was then transformed into a nickel stamp through electroplating, and the resultant nickel stamp was used as the replication tool to produce the polymeric devices through injection molding. The polymeric devices were made of Zeonor® 1060R, a polyolefin copolymer resin selected for its superior chemical resistance and optical properties. These devices were then optically aligned with another polymeric substrate that had been machined to form fluidic vias, and the two substrates were bonded together through thermal diffusion bonding. The sealed devices were used to selectively separate and concentrate biological pathogen simulants, including spores, which were selectively concentrated and released by simply applying DC voltages across the plastic replicates via platinum electrodes in the inlet and outlet reservoirs. The dielectrophoretic response of the organisms is observed to be a function of the applied electric field and of post size, geometry, and spacing. Cells were selectively trapped against a background of labeled polystyrene beads and spores to demonstrate that samples of interest can be separated from a diverse background. We have also implemented and demonstrated a methodology to determine the concentration factors obtained in these devices.
The effects of ionizing and neutron radiation on the characteristics and performance of laser diodes are reviewed, and the formation mechanisms for nonradiative recombination centers, the primary type of radiation damage in laser diodes, are discussed. Additional topics include the detrimental effects of aluminum in the active (lasing) volume, the transient effects of high-dose-rate pulses of ionizing radiation, and a summary of ways to improve the radiation hardness of laser diodes. Radiation effects on laser diodes emitting in the wavelength region around 808 nm are emphasized.
We find that small temperature changes cause steps on the NiAl(110) surface to move. We show that this step motion occurs because mass is transferred between the bulk and the surface as the concentration of bulk thermal defects (i.e., vacancies) changes with temperature. Since the change in an island's area with a temperature change is found to scale strictly with the island's step length, the thermally generated defects are created (annihilated) very near the surface steps. To quantify the bulk/surface exchange, we oscillate the sample temperature and measure the amplitude and phase lag of the system response, i.e., the change in an island's area normalized to its perimeter. Using a one-dimensional model of defect diffusion through the bulk in a direction perpendicular to the surface, we determine the migration and formation energies of the bulk thermal defects. During surface smoothing, we show that there is no flow of material between islands on the same terrace and that all islands in a stack shrink at the same rate. We conclude that smoothing occurs by mass transport through the bulk of the crystal rather than via surface diffusion. Based on the measured relative sizes of the activation energies for island decay, defect migration, and defect formation, we show that attachment/detachment at the steps is the rate-limiting step in smoothing.
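The essence of the one-dimensional model can be illustrated with a simple finite-difference sketch: an oscillating surface defect concentration drives diffusion into the bulk, and the surface flux (the analog of the measured island-area response) exhibits a characteristic amplitude and phase. All parameters below are illustrative, not the fitted NiAl values.

```python
# 1-D diffusion sketch: oscillating surface concentration, zero-flux
# bottom boundary; measure the phase of the surface flux relative to
# the forcing. In the semi-infinite limit the flux leads by 45 degrees.
import numpy as np

D, L, nz = 1.0, 20.0, 200            # diffusivity, depth, grid points
omega = 0.5                           # forcing frequency
dz = L / nz
dt = 0.2 * dz * dz / D                # stable explicit time step
c = np.zeros(nz)

t, flux, phase_ref = 0.0, [], []
n_steps = int(10 * 2 * np.pi / omega / dt)
for step in range(n_steps):
    c0 = np.sin(omega * t)            # oscillating surface concentration
    lap = np.empty(nz)
    lap[0] = (c0 - 2 * c[0] + c[1]) / dz**2
    lap[1:-1] = (c[:-2] - 2 * c[1:-1] + c[2:]) / dz**2
    lap[-1] = (c[-2] - c[-1]) / dz**2          # zero-flux bottom boundary
    c += D * lap * dt
    t += dt
    flux.append(-D * (c[0] - c0) / dz)         # flux into the bulk
    phase_ref.append(omega * t)

# Project the later cycles onto sin/cos to extract the phase.
f = np.array(flux[n_steps // 2:])
p = np.array(phase_ref[n_steps // 2:])
a, b = 2 * np.mean(f * np.sin(p)), 2 * np.mean(f * np.cos(p))
print(f"flux leads forcing by {np.degrees(np.arctan2(b, a)):.1f} deg "
      "(45 deg expected in the semi-infinite limit)")
```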
The goal of z-pinch inertial fusion energy (IFE) is to extend the single-shot z-pinch inertial confinement fusion (ICF) results on Z to a repetitive-shot z-pinch power plant concept for the economical production of electricity. Z produces up to 1.8 MJ of x-rays at powers as high as 230 TW. Recent target experiments on Z have demonstrated capsule implosion convergence ratios of 14-21 with a double-pinch driven target, and DD neutron yields up to 8×10¹⁰ with a dynamic hohlraum target. For z-pinch IFE, a power plant concept is discussed that uses high-yield IFE targets (3 GJ) with a low rep-rate per chamber (0.1 Hz). The concept includes a repetitive driver at 0.1 Hz, a Recyclable Transmission Line (RTL) to connect the driver to the target, high-yield targets, and a thick-liquid-wall chamber. Recent funding of $4M for FY04 from a U.S. Congressional initiative is supporting research on RTLs, repetitive pulsed power drivers, shock mitigation, planned full-RTL-cycle experiments, high-yield IFE targets, and z-pinch power plant technologies. Recent results of research in all of these areas are discussed, and a Road Map for Z-Pinch IFE is presented.
This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight into how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both in the past and for future insight. The analysis concentrates specifically on the system dynamics paradigm rather than the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, along with recommendations for future attempts at modeling conflict.
As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance, and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high-bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate-communication-bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.
As part of DARPA's Software for Distributed Robotics Program within the Information Processing Technologies Office (IPTO), Sandia National Laboratories was tasked with identifying military airborne and maritime missions that require cooperative behaviors as well as identifying generic collective behaviors and performance metrics for these missions. This report documents this study. A prioritized list of general military missions applicable to land, air, and sea has been identified. From the top eight missions, nine generic reusable cooperative behaviors have been defined. A common mathematical framework for cooperative controls has been developed and applied to several of the behaviors. The framework is based on optimization principles and has provably convergent properties. A three-step optimization process is used to develop the decentralized control law that minimizes the behavior's performance index. A connective stability analysis is then performed to determine constraints on the communication sample period and the local control gains. Finally, the communication sample period for four different network protocols is evaluated based on the network graph, which changes throughout the task. Using this mathematical framework, two metrics for evaluating these behaviors are defined. The first metric is the residual error in the global performance index that is used to create the behavior. The second metric is communication sample period between robots, which affects the overall time required for the behavior to reach its goal state.
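A toy example of the optimization-based approach is sketched below: a global performance index defined over communication links yields, by gradient descent, a decentralized law in which each robot uses only its neighbors' states (a standard consensus law, shown here for illustration rather than as one of the report's nine behaviors). The residual performance index, the report's first metric, is printed at the end.

```python
# Toy decentralized control from a global performance index:
# J = sum over links (i, j) of |x_i - x_j|^2. Each robot descends its
# local gradient using only neighbor states; for a connected graph and
# a small enough gain, positions provably converge to a common point.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # communication graph
x = np.array([[0.0, 0.0], [4.0, 1.0], [5.0, 5.0], [1.0, 6.0]])
gain = 0.1                                          # within stability bound

for step in range(200):
    dx = np.zeros_like(x)
    for i, j in edges:                              # local, per-link terms
        dx[i] += x[j] - x[i]
        dx[j] += x[i] - x[j]
    x += gain * dx

J = sum(np.sum((x[i] - x[j]) ** 2) for i, j in edges)
print("residual performance index:", J)             # first metric above
print("final positions:\n", x.round(3))
```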
When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. A new migration-correction autofocus algorithm has been developed that estimates the migration and applies phase and frequency corrections to properly focus the image.
This report is a collection of documents written by the members of the Engineering Sciences Research Foundation (ESRF) Laboratory Directed Research and Development (LDRD) project titled 'A Robust, Coupled Approach to Atomistic-Continuum Simulation'. Presented in this document are: (1) the development of a formulation for performing quasistatic, coupled, atomistic-continuum simulation that includes cross terms in the equilibrium equations arising from the kinematic coupling, together with corrections to the system potential energy that account for continuum elements overlapping regions containing atomic bonds; (2) evaluations of thermo-mechanical continuum quantities calculated within atomistic simulations, including measures of stress, temperature, and heat flux; (3) calculations to determine the spatial and time averaging necessary for these atomistically defined expressions to have the same physical meaning as their continuum counterparts; and (4) a formulation to quantify a continuum 'temperature field', the first step toward constructing a coupled atomistic-continuum approach capable of finite-temperature and dynamic analyses.
This document describes the modeling of the physics (and eventually the features) in the Integrated TIGER Series (ITS) codes [Franke 04]. The material is largely drawn from various sources in the open literature (especially [Seltzer 88], [Seltzer 91], [Lorence 89], and [Halbleib 92]), although those sources often describe the ETRAN code, from which the physics engine of ITS is derived but with which it is not necessarily identical. This is meant to be an evolving document, with more coverage and detail added over time; as such, entire sections are still incomplete. Presently, this document covers the continuous-energy ITS codes, with more complete coverage of photon transport (though electron transport is not completely ignored). In particular, this document does not cover the Multigroup code, MCODES (externally applied electromagnetic fields), or high-energy phenomena (photon pair production). In this version, equations are largely left to the references, though they may be pulled in over time.
An experiment at Sandia National Laboratories confirmed that a ternary salt (Flinabe, a ternary mixture of LiF, BeF₂, and NaF) had a sufficiently low melting temperature (~305 °C) to be useful for the flowing-molten-salt first wall and blanket applications investigated in the Advanced Power Extraction (APEX) Program [1]. In the experiment, the salt pool was contained in a stainless steel crucible under vacuum. One thermocouple was placed in the salt and two others were embedded in the crucible. The results and observations from the experiment are reported in the companion paper [2]. The paper presented here covers a 3-D finite element thermal analysis of the salt pool and crucible. The analysis was done to evaluate the thermal gradients in the salt pool and crucible and to compare the temperatures of the three thermocouples. One salt mixture appeared to melt and solidify as a eutectic, with a visible plateau in the cooling curve (i.e., time versus temperature for the thermocouple in the salt pool). This behavior was reproduced with the thermal model. Cases were run with several values of the thermal conductivity and latent heat of fusion to see the parametric effects of these changes on the respective cooling curves. The crucible was heated by an electrical heater in an inverted well at its base. It lost heat primarily by radiation from the outer surfaces of the crucible and the top surface of the salt. The primary independent factors in the model were the emissivity of the crucible (and of the salt) and the fraction of the heater power coupled into the crucible. The model was 'calibrated' using thermocouple data and heating power from runs in which the crucible contained no salt.
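The origin of the plateau can be illustrated with a lumped cooling model in which the latent heat is released over a narrow freezing interval (the apparent-heat-capacity device also used in finite element codes); the property values below are illustrative, not the measured Flinabe data.

```python
# Why a eutectic plateau appears in the cooling curve: a lumped thermal
# model where latent heat is released over a narrow freezing interval
# (apparent-heat-capacity method). All property values are illustrative.
m, cp = 1.0, 2000.0            # mass [kg], specific heat [J/kg-K]
L_fus = 3.0e5                  # latent heat of fusion [J/kg]
T_melt, dT_mushy = 305.0, 2.0  # freezing point and interval [C]
h_loss = 15.0                  # linearized surface loss [W/K]
T_amb = 25.0

def heat_capacity(T):
    cap = m * cp
    if abs(T - T_melt) < dT_mushy / 2:        # freezing: add latent term
        cap += m * L_fus / dT_mushy
    return cap

T, t, dt = 400.0, 0.0, 0.1
history = []
while T > 200.0:
    T -= h_loss * (T - T_amb) / heat_capacity(T) * dt
    t += dt
    history.append((t, T))

# The cooling curve T(t) lingers near T_melt, reproducing the plateau
# signature seen by the thermocouple in the salt pool.
plateau = [p for p in history if abs(p[1] - T_melt) < dT_mushy / 2]
print(f"time spent near {T_melt} C: {len(plateau) * dt:.0f} s "
      f"of {t:.0f} s total")
```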
This paper analyzes the relationship between current renewable energy technology costs and cumulative production, research, development and demonstration expenditures, and other institutional influences. Combining the theoretical framework of 'learning by doing' and developments in 'learning by searching' with the fields of organizational learning and institutional economics offers a complete methodological framework to examine the underlying capital cost trajectory when developing electricity cost estimates used in energy policy planning models. Sensitivities of the learning rates for global wind and solar photovoltaic technologies to changes in the model parameters are tested. The implications of the results indicate that institutional policy instruments play an important role for these technologies to achieve cost reductions and further market adoption.
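The core regression is the two-factor learning curve, C = a Q^-b K^-c, estimated in log space; a sketch on synthetic data (not the wind/PV series analyzed here) is given below, including the conversion from elasticity to learning rate, 1 - 2^-b.

```python
# Two-factor learning-curve regression: "learning by doing" (cumulative
# capacity Q) plus "learning by searching" (cumulative RD&D stock K).
# The data are synthetic stand-ins, not the paper's wind/PV series.
import numpy as np

rng = np.random.default_rng(1)
Q = np.logspace(2, 5, 25)                     # cumulative capacity [MW]
K = np.logspace(1, 3, 25) * rng.lognormal(0, 0.3, 25)  # RD&D stock
b_true, c_true = 0.25, 0.10
C = 5000.0 * Q ** -b_true * K ** -c_true * rng.lognormal(0, 0.05, 25)

# Linear regression in log space: ln C = ln a - b ln Q - c ln K.
X = np.column_stack([np.ones_like(Q), -np.log(Q), -np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
ln_a, b, c = coef

print(f"doing elasticity b = {b:.3f} -> learning rate {1 - 2**-b:.1%}")
print(f"searching elasticity c = {c:.3f} -> learning rate {1 - 2**-c:.1%}")
print(f"(true values: b = {b_true}, c = {c_true})")
```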
Waste characterization is probably the most costly part of radioactive waste management. An important part of this characterization is the measurement of headspace gas in waste containers in order to demonstrate compliance with Resource Conservation and Recovery Act (RCRA) or transportation requirements. The traditional chemical analysis methods, which include the steps of gas sampling, sample shipment, and laboratory analysis, are expensive and time-consuming and increase workers' exposure to hazardous environments. Therefore, an alternative technique that can provide quick, in-situ, real-time detection of headspace gas compositions is highly desirable. This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Potential Application of Microsensor Technology in Radioactive Waste Management with Emphasis on Headspace Gas Detection'. The objective of this project is to bridge the technical gap between the current status of microsensor development and the intended applications of these sensors in nuclear waste management. The major results are summarized below:
- A literature review was conducted on the regulatory requirements for headspace gas sampling and analysis in waste characterization and monitoring. The most relevant gaseous species and the related physicochemical environments were identified. It was found that preconcentrators might be needed in order for chemiresistor sensors to meet the desired detection limits.
- A long-term stability test was conducted for a polymer-based chemiresistor sensor array. Significant drifts were observed over a duration of one month. Such drifts should be taken into account for long-term in-situ monitoring.
- Several techniques were explored to improve the performance of sensor polymers. It has been demonstrated that freeze deposition of carbon black (CB)-polymer composite can effectively eliminate the so-called 'coffee ring' effect and lead to a desirably uniform distribution of CB particles in sensing polymer films. The optimal CB/polymer ratio has been determined. UV irradiation has been shown to improve sensor sensitivity.
- From a large set of commercially available polymers, five were selected to form a sensor array able to provide optimal responses to six target volatile organic compounds (VOCs). A series of tests of the sensor array response to various VOC concentrations has been performed. Linear sensor responses have been observed over the tested concentration ranges, although the responses over the whole concentration range are generally nonlinear.
- Inverse models have been developed for identifying individual VOCs based on sensor array responses. A linear solvation energy model is particularly promising for identifying an unknown VOC in a single-component system. It has been demonstrated that a sensor array such as the one we developed is able to discriminate waste containers by their total VOC concentrations and therefore can be used as a screening tool for reducing the existing headspace gas sampling rate.
- Various VOC preconcentrators have been fabricated using Carboxen 1000 as an absorbent. Extensive tests have been conducted to obtain optimal configurations and parameter ranges for preconcentrator performance. It has been shown that the use of preconcentrators can reduce the detection limits of chemiresistors by two orders of magnitude. The life span of preconcentrators under various physicochemical conditions has also been evaluated.
- The performance of Pd film-based H₂ sensors in the presence of VOCs has been evaluated. Interference of sensor readings by VOCs has been observed, which can be attributed to interference of the VOC with the H₂-O₂ reaction on the Pd alloy surface. This interference can be eliminated by coating a layer of silicon dioxide on the sensing film surface.
Our work has demonstrated a wide range of applications of gas microsensors in radioactive waste management. Such applications can potentially lead to significant cost savings and risk reduction for waste characterization.
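A minimal sketch of the inverse-model step is given below: with linear responses, the array reading is R = S x for a sensitivity matrix S, and a single-component unknown is identified by the analyte whose response pattern best fits the observation in a least-squares sense. The sensitivities and readings are hypothetical numbers, not our measured data.

```python
# Inverse-model sketch: linear array response R = S @ x, where S holds
# each polymer's sensitivity to each VOC. A single-component unknown is
# identified by the best least-squares fit. Values are hypothetical.
import numpy as np

vocs = ["toluene", "acetone", "TCE"]
S = np.array([[0.8, 0.1, 0.3],    # rows: 5 polymers in the array
              [0.2, 0.9, 0.1],    # cols: sensitivity to each VOC
              [0.5, 0.4, 0.7],
              [0.1, 0.3, 0.9],
              [0.6, 0.2, 0.2]])

observed = 12.0 * S[:, 2] + np.array([0.1, -0.2, 0.1, 0.0, -0.1])  # noisy TCE

best = None
for k, name in enumerate(vocs):
    conc, *_ = np.linalg.lstsq(S[:, [k]], observed, rcond=None)
    residual = np.linalg.norm(S[:, [k]] @ conc - observed)
    if best is None or residual < best[0]:
        best = (residual, name, conc[0])

print(f"identified as {best[1]} at ~{best[2]:.1f} (arb. concentration units)")
```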
In the mid-1990s, breakthroughs were achieved at Sandia with z-pinches for high energy density physics on the Saturn machine. These initial tests led to the modification of the PBFA II machine to provide high current rather than the high voltage it was initially designed for. The success of z-pinch high energy density physics experiments ensured a new mission for the converted accelerator, known as Z since 1997. Z now provides a unique capability to a number of basic science communities and has expanded its mission to include radiation effects research, inertial confinement fusion, and material properties research. To achieve continued success, the physics community has requested that higher peak current, better precision, and pulse-shaping versatility be incorporated into the refurbishment of the Z machine, known as ZR. In addition to the ZR performance specification of a peak current of 26 MA with an implosion time of 100 ns, the machine also has a reliability specification of 400 shots per year. While changes to the basic architecture of the Z machine are minor, the vast majority of its components have been redesigned. Moreover, the increase in peak current from the present 18 MA to ZR's 26 MA at nominal operating parameters requires significantly higher voltages. These higher voltages, along with the reliability requirement, mandate that a system assessment be performed to ensure the requirements have been met. This paper describes the System Assessment Test Program (SATPro) for the ZR project and reports on the results.
Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations, with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations, after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA), previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach to deal with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed, and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. The resulting classification and uncertainty maps are compared among all non-collocated approaches as well as to the original collocated approach. The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection-point approach and the non-hierarchical classification of the estimated vectors.
We show theoretically that an electric current in a high-mobility quasi-two-dimensional electron layer induces a significant drift of excitons in an adjacent layer through the interlayer Coulomb interaction. The exciton gas is shown to drift with a velocity that can be a significant fraction of the electron drift velocity at low temperatures. The estimated drift length during the typical exciton lifetime is of the order of micrometers or larger for GaAs/AlₓGa₁₋ₓAs double quantum wells. A possible enhancement of the exciton radiative lifetime due to the drift is discussed.
The U.S. Department of Energy's (DOE's) GeoPowering the West (GPW) program works with the U.S. geothermal industry, power companies, industrial and residential consumers, and federal, state, and local officials to provide technical and institutional support and limited, cost-shared funding to state-level activities. By demonstrating the benefits of geothermal energy, GPW increases state and regional awareness of opportunities to enhance local economies and strengthen our nation's energy security while minimizing environmental impact. By identifying barriers to development and working with others to eliminate them, GPW helps a state or region create a regulatory and economic environment that is more favorable for geothermal and other renewable energy development. Electricity is produced using expanding steam or very hot water from the underground reservoir to spin a conventional turbine-generator. Geothermal power plants operate at high capacity factors (70-100%), with availability factors typically greater than 95%. Geothermal plants are among the cleanest sources of electric power available. Direct use applications directly pipe hot water from geothermal resources to provide heat for industrial processes, crop drying, greenhouses, aquaculture, recreation, sidewalk snow-melting, and buildings. Geothermal district heating systems supply heat to multiple buildings through a network of pipes carrying the hot geothermal water.
The effects of wall interaction on combustion and soot formation processes of a diesel fuel jet were investigated in an optically-accessible constant-volume combustion vessel at experimental conditions typical of a diesel engine. At identical ambient and injector conditions, soot processes were studied in free jets, plane wall jets, and 'confined' wall jets (a box-shaped geometry simulating secondary interaction with adjacent walls and jets in an engine). The investigation showed that soot levels are significantly lower in a plane wall jet compared to a free jet. At some operating conditions, sooting free jets become soot-free as plane wall jets. Possible mechanisms to explain the reduced or delayed soot formation upon wall interaction include an increased fuel-air mixing rate and a wall-jet-cooling effect. However, in a confined-jet configuration, there is an opposite trend in soot formation. Jet confinement causes combustion gases to be redirected towards the incoming jet, causing the lift-off length to shorten and soot to increase. This effect can be avoided by ending fuel injection prior to the time of significant interaction with redirected combustion gases. For a fixed confined-wall geometry, an increase in ambient gas density delays jet interaction, allowing longer injection durations with no increase in soot. Jet interaction with redirected combustion products may also be avoided using reduced ambient oxygen concentration because of an increased ignition delay. Although simplified geometries were employed, the identification of important mechanisms affecting soot formation after the time of wall interaction is expected to be useful for understanding these processes in more complex and realistic diesel engine geometries.
One challenge faced by engineers today is replicating an operating environment, such as transportation, in a test lab. This paper focuses on the process of identifying sine-on-random content in an aircraft transportation environment, although the methodology can be applied to other events. The ultimate goal of this effort was to develop an automated way to identify significant peaks in the power spectral densities (PSDs) of the operating data, catalog the peaks, and determine whether each peak was sinusoidal or random in nature. This information helps design a test environment that accurately reflects the operating environment. A series of Matlab functions has been developed to achieve this goal with a relatively high degree of accuracy. The software is able to distinguish between sine-on-random and random-on-random peaks in most cases. This paper describes the approach taken for converting the time history segments to the frequency domain, identifying peaks from the resulting PSD, and filtering the time histories to determine each peak's amplitude and characteristics. The approach is validated on contrived data and then applied to actual test data. Observations and conclusions, including limitations of this process, are also presented.
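A compact Python rendering of the same processing chain (the report's implementation is in Matlab) is sketched below: estimate the PSD with Welch's method, detect significant peaks, and flag a peak as sinusoidal when it is narrow and far above the local random background; the test signal and thresholds are illustrative.

```python
# Sketch of the processing chain: Welch PSD, peak detection, then a
# sine-vs-random decision from peak-to-background ratio and bandwidth.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 2000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x = rng.normal(0, 1.0, t.size) + 0.8 * np.sin(2 * np.pi * 120 * t)

f, pxx = welch(x, fs=fs, nperseg=4096)
peaks, props = find_peaks(pxx, prominence=5 * np.median(pxx))

for p in peaks:
    # Local background from nearby bins, excluding the peak itself.
    lo, hi = max(p - 40, 0), min(p + 40, pxx.size)
    background = np.median(np.r_[pxx[lo:p - 3], pxx[p + 4:hi]])
    ratio = pxx[p] / background
    # Half-power bandwidth in bins: a sinusoid concentrates in ~1 bin.
    width = np.sum(pxx[lo:hi] > pxx[p] / 2)
    kind = "sine-on-random" if ratio > 20 and width <= 3 else "random-on-random"
    print(f"peak at {f[p]:7.1f} Hz: ratio {ratio:6.1f}, {kind}")
```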
Dielectric spectroscopy, lattice structure, and thermal properties have revealed the relaxor dielectric response of Ba-substituted lead zirconate titanate (PZT) having the composition (Pb0.71Ba0.29)(Zr0.71Ti0.29)O3 and containing 2 at. % Bi as an additive. The relaxor behavior is attributed to the compositional disorder introduced by the substitution of Ba²⁺ at the A site and Bi³⁺/⁵⁺ at the B site (and possibly the A site) of the ABO3 PZT host lattice. Analysis of the results gives clear evidence for the nucleation of polar nanodomains at a temperature much higher than that of the peak (Tm) in the dielectric susceptibility. These nanodomains grow in size as their correlation length increases with decreasing temperature, and ultimately their dipolar fluctuations slow down below Tm, leading to the formation of the relaxor state. The influence of hydrostatic pressure on the dielectric susceptibility and the relaxation dynamics of the polar nanodomains was investigated and can be understood in terms of the decrease in the size of the nanodomains with pressure. The influence of dc electrical bias on the susceptibility was also investigated. Physical models of the relaxor response of this material are discussed.
Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
The Arsenic Water Technology Partnership (AWTP) program is a multi-year program funded by a congressional appropriation through the Department of Energy to develop and test innovative technologies that have the potential to reduce the costs of arsenic removal from drinking water. The AWTP members include Sandia National Laboratories, the American Water Works Association (Awwa) Research Foundation, and WERC (A Consortium for Environmental Education and Technology Development). The program is designed to move technologies from bench-scale tests to field demonstrations. The Awwa Research Foundation is managing bench-scale research programs; Sandia National Laboratories is conducting the pilot demonstration program; and WERC will evaluate the economic feasibility of the technologies investigated and conduct technology transfer activities. The objective of the Sandia Arsenic Treatment Technology Demonstration project (SATTD) is the field demonstration testing of both commercial and innovative technologies. The scope of this work includes: (1) identification of sites for pilot demonstrations; (2) accelerated identification of candidate technologies through Vendor Forums, proof-of-principle laboratory and local pilot-scale studies, collaboration with the Awwa Research Foundation bench-scale research program, and consultation with relevant advisory panels; and (3) pilot testing of multiple technologies at several sites throughout the country, gathering information on: (a) performance, as measured by arsenic removal; (b) costs, including capital and operation and maintenance (O&M) costs; (c) O&M requirements, including personnel requirements and level of operator training; and (d) waste residuals generation. The New Mexico Environment Department has identified over 90 public water systems that currently exceed the 10 µg/L MCL for arsenic. The Sandia Arsenic Treatment Technology Demonstration project is currently operating pilots at three sites in New Mexico. The cities of Socorro, Anthony, and Rio Rancho vary in population, water chemistry, and source of arsenic. Figure 1 shows the locations of each city. The following pages summarize the work being performed at each site. At each site, the owner (e.g., a city utility) provides access to the site, water, electricity, a means to discharge treated water, and daily operational checks. Daily checks include filling out a log sheet with information on flow rates, pressure drops, and flow adjustments (when needed), and notifying Sandia personnel if a leak is present. Sandia owns all equipment and is responsible for the disposal of spent media and other waste streams. Sandia also performs all field tests and collects water samples for laboratory analysis.
We present theoretical performance estimates for nanotube optoelectronic devices under bias. Current-voltage characteristics of illuminated nanotube p-n junctions are calculated using a self-consistent nonequilibrium Green's function approach. Energy conversion rates reaching tens of percent are predicted for incident photon energies near the band gap energy. In addition, the energy conversion rate increases as the diameter of the nanotube is reduced, even though the quantum efficiency shows little dependence on nanotube radius. These results indicate that the quantum efficiency is not a limiting factor for use of nanotubes in optoelectronics.
Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance.
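For readers unfamiliar with the method, a minimal tabu-search skeleton is sketched below on a toy permutation problem (not Taillard's JSP algorithm, whose moves come from the disjunctive graph); it makes explicit the tenure parameter and the role of the initial solution discussed above.

```python
# Minimal tabu-search skeleton over an adjacent-swap neighborhood on a
# toy permutation-cost problem. Illustrative only; not Taillard's JSP
# algorithm. Note the small tenure, per the paper's finding.
import random

def tabu_search(cost, n, tenure=7, iters=2000, seed=0):
    rnd = random.Random(seed)
    x = list(range(n))
    rnd.shuffle(x)                 # per the paper, the start barely matters
    best, best_cost = x[:], cost(x)
    tabu = {}                      # move -> iteration until which it's tabu
    for it in range(iters):
        candidates = []
        for i in range(n - 1):     # neighborhood: adjacent swaps
            y = x[:]
            y[i], y[i + 1] = y[i + 1], y[i]
            c = cost(y)
            # Forbid recent moves unless they beat the best (aspiration).
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, i, y))
        if not candidates:
            continue
        c, i, x = min(candidates)
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

weights = [3, 1, 4, 1, 5, 9, 2, 6]  # toy cost: weighted completion order
cost = lambda p: sum((k + 1) * weights[j] for k, j in enumerate(p))
print(tabu_search(cost, len(weights)))
```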
Contaminant dispersal models for use at scales ranging from meters to miles are widely used for planning sensor locations, first-responder actions for release scenarios, etc. and are constantly being improved. Applications range from urban contaminant dispersal to locating buried targets from an exhaust signature. However, these models need detailed data for model improvement and validation. A small Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program was funded in FY04 to examine the feasibility and usefulness of a scale-model capability for quantitative characterization of flow and contaminant dispersal in complex environments. This report summarizes the work performed in that LDRD. The basics of atmospheric dispersion and dispersion modeling are reviewed. We examine the need for model scale data, and the capability of existing model test methods. Currently, both full-scale and model scale experiments are performed in order to collect validation data for numerical models. Full-scale experiments are expensive, are difficult to repeat, and usually produce relatively sparse data fields. Model scale tests often employ wind tunnels, and the data collected is, in many cases, derived from single point measurements. We review the scaling assumptions and methods that are used to relate model and full scale flows. In particular, we examine how liquid flows may be used to examine the process of atmospheric dispersion. The scaling between liquid and gas flows is presented. Use of liquid as the test fluid has some advantages in terms of achieving fully turbulent Reynolds numbers and in seeding the flow with neutrally buoyant tracer particles. In general, using a liquid flow instead of a gas flow somewhat simplifies the use of full field diagnostics, such as Particle Image Velocimetry and Laser Induced Fluorescence. It is also possible to create stratified flows through mixtures of fluids (e.g., water, alcohol, and brine). Lastly, we describe our plan to create a small prototype water flume for the modeling of stratified atmospheric flows around complex objects. The incoming velocity profile could be tailored to produce a realistic atmospheric boundary layer for flow-in-urban-canyon measurements. The water tunnel would allow control of stratification to produce, for example, stable and unstable atmospheric conditions. Models ranging from a few buildings to cityscapes would be used as the test section. Existing noninvasive diagnostics would be applied, including particle image velocimetry for detailed full-field velocity measurement, and laser induced fluorescence for noninvasive concentration measurement. This scale-model facility will also be used as a test-bed for data acquisition and model testing related to the inverse problem, i.e., determination of source location from distributed, sparse measurement locations. In these experiments the velocity field would again be measured and data from single or multiple concentration monitors would be used to locate the continuous or transient source.
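As a worked illustration of the liquid-gas scaling discussed above (representative fluid properties, not values from the report): dynamic similarity requires matching the Reynolds number \mathrm{Re} = UL/\nu. With \nu_{\mathrm{air}} \approx 1.5\times10^{-5}\ \mathrm{m^2/s} and \nu_{\mathrm{water}} \approx 1.0\times10^{-6}\ \mathrm{m^2/s}, the kinematic viscosity ratio is about 15, so a 1:15 scale model operated in water at the full-scale wind speed preserves the Reynolds number:

\mathrm{Re}_m = \frac{U_m L_m}{\nu_{\mathrm{water}}} = \frac{U_f\,(L_f/15)}{\nu_{\mathrm{air}}/15} = \frac{U_f L_f}{\nu_{\mathrm{air}}} = \mathrm{Re}_f \qquad (U_m = U_f).

This is why a compact water flume can reach fully turbulent conditions that a comparably sized wind tunnel cannot.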
We present a model for optimizing the placement of sensors in municipal water networks to detect maliciously injected contaminants. An optimal sensor configuration minimizes the expected fraction of the population at risk. We formulate this problem as a mixed-integer program, which can be solved with generally available solvers. We find optimal sensor placements for three test networks with synthetic risk and population data. Our experiments illustrate that this formulation can be solved relatively quickly and that the predicted sensor configuration is relatively insensitive to uncertainties in the data used for prediction.
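One simplified way such a placement problem can be posed as a mixed-integer program is sketched below (a hedged stand-in, not the authors' formulation; all node, scenario, and population data are hypothetical). Each scenario is counted against the objective only if no sensor covers it:

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    nodes = range(5)               # candidate sensor locations
    scenarios = range(3)           # equally likely injection scenarios
    pop_at_risk = [400, 250, 600]  # population harmed if scenario goes undetected
    # detects[j][i] = 1 if a sensor at node i would detect scenario j in time
    detects = [[1, 0, 0, 1, 0],
               [0, 1, 1, 0, 0],
               [0, 0, 0, 1, 1]]
    p = 2                          # sensor budget

    s = [LpVariable(f"s{i}", cat=LpBinary) for i in nodes]      # place sensor at i?
    u = [LpVariable(f"u{j}", cat=LpBinary) for j in scenarios]  # scenario j undetected?

    prob = LpProblem("sensor_placement", LpMinimize)
    # Expected population at risk over equally likely scenarios.
    prob += lpSum(pop_at_risk[j] / len(scenarios) * u[j] for j in scenarios)
    prob += lpSum(s) <= p
    for j in scenarios:
        # If no covering node holds a sensor, u[j] is forced to 1.
        prob += u[j] + lpSum(detects[j][i] * s[i] for i in nodes) >= 1

    prob.solve()
    print([int(v.value()) for v in s], prob.objective.value())

The binary u variables keep the objective linear, which is what lets generally available MIP solvers handle the problem directly.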
As Sandia looks toward petaflops computing and other advanced architectures, it is necessary to provide a programming environment that can exploit this additional computing power while supporting reasonable development time for applications. Thus, we evaluate the Partitioned Global Address Space (PGAS) programming model as implemented in Unified Parallel C (UPC) for its applicability. We report on our experiences in implementing sorting and minimum spanning tree algorithms on a test system, a Cray T3E, with UPC support. We describe several macros that could serve as language extensions and several building-block operations that could serve as a foundation for a PGAS programming library. We analyze the limitations of the UPC implementation available on the test system, and suggest improvements necessary before UPC can be used in a production environment.
The SSP is a hardware implementation of a subset of the JVM for use in high-consequence embedded applications. In this context, a majority of the activities belonging to class loading, as it is defined in the specification of the JVM, can be performed statically. Static class loading has the net result of dramatically simplifying the design of the SSP as well as increasing its performance. Due to the high-consequence nature of its applications, strong evidence must be provided that all aspects of the SSP have been implemented correctly. This includes the class loader. This article explores the possibility of formally verifying a class loader for the SSP, implemented in the strategic programming language TL. Specifically, an implementation of the core activities of an abstract class loader is presented and its verification in ACL2 is considered.
The mathematical and physical foundations and domain of applicability of Sandia's GeoModel are presented along with descriptions of the source code and user instructions. The model is designed to be used in conventional finite element architectures, and (to date) it has been installed in five host codes without requiring customization of the model subroutines for any of these different installations. Although developed for application to geological materials, the GeoModel actually applies to a much broader class of materials, including rock-like engineered materials (such as concretes and ceramics) and even to metals when simplified parameters are used. Nonlinear elasticity is supported through an empirically fitted function that has been found to be well-suited to a wide variety of materials. Fundamentally, the GeoModel is a generalized plasticity model. As such, it includes a yield surface, but the term 'yield' is generalized to include any form of inelastic material response, including microcrack growth and pore collapse. The GeoModel supports deformation-induced anisotropy in a limited capacity through kinematic hardening (in which the initially isotropic yield surface is permitted to translate in deviatoric stress space to model Bauschinger effects). Aside from kinematic hardening, however, the governing equations are otherwise isotropic. The GeoModel is a genuine unification and generalization of simpler models. The GeoModel can employ up to 40 material input and control parameters in the rare case when all features are used. Simpler idealizations (such as linear elasticity, or Von Mises yield, or Mohr-Coulomb failure) can be replicated by simply using fewer parameters. For high-strain-rate applications, the GeoModel supports rate dependence through an overstress model.
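For orientation only, a generic textbook form (not the GeoModel's actual yield function) of a kinematically hardening yield condition with Perzyna-type overstress is

f(\boldsymbol{\sigma}, \boldsymbol{\alpha}) = \sqrt{J_2(\mathbf{s} - \boldsymbol{\alpha})} - k \le 0,
\qquad
\dot{\boldsymbol{\varepsilon}}^{p} = \frac{\langle f \rangle}{\eta}\,\frac{\partial f}{\partial \boldsymbol{\sigma}},

where \mathbf{s} is the deviatoric stress, the backstress \boldsymbol{\alpha} translates the yield surface in deviatoric stress space (the Bauschinger effect noted above), \eta is a viscosity controlling rate dependence, and \langle\cdot\rangle is the Macaulay bracket that activates inelastic flow only for positive overstress.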
An integrated system for the fusion of product and process sensors and controls for production of flat glass was envisioned, having as its objective the maximization of throughput and product quality subject to emission limits, furnace refractory wear, and other constraints. Although the project was prematurely terminated, stopping the work short of its goal, the tasks that were completed show the value of the approach and objectives. Though the demonstration was to have been done on a flat glass production line, the approach is applicable to control of production in the other sectors of the glass industry. Furthermore, the system architecture is also applicable in other industries utilizing processes in which product uniformity is determined by ability to control feed composition, mixing, heating and cooling, chemical reactions, and physical processes such as distillation, crystallization, drying, etc. The first phase of the project, with Visteon Automotive Systems as industrial partner, was focused on simulation and control of the glass annealing lehr. That work produced the analysis and computer code that provide the foundation for model-based control of annealing lehrs during steady state operation and through color and thickness changes. In the second phase of the work, with PPG Industries as the industrial partner, the emphasis was on control of temperature and combustion stoichiometry in the melting furnace, to provide a wider operating window, improve product yield, and increase energy efficiency. A program of experiments with the furnace, CFD modeling and simulation, flow measurements, and sensor fusion was undertaken to provide the experimental and theoretical basis for an integrated, model-based control system utilizing the new infrastructure installed at the demonstration site for the purpose. Even though the project was terminated during the first year of the second phase of the work, the results of these first steps toward implementation of model-based control were sufficient to demonstrate the value of the approach to improving the productivity of glass manufacture.
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL.
One problem facing today's nuclear power industry is flow-accelerated corrosion and erosion in pipe elbows. The Korean Atomic Energy Research Institute (KAERI) is performing experiments in their Flow-Accelerated Corrosion (FAC) test loop to better characterize these phenomena, and develop advanced sensor technologies for the condition monitoring of critical elbows on a continuous basis. In parallel with these experiments, Sandia National Laboratories is performing Computational Fluid Dynamic (CFD) simulations of the flow in one elbow of the FAC test loop. The simulations are being performed using the FLUENT commercial software developed and marketed by Fluent, Inc. The model geometry and mesh were created using the GAMBIT software, also from Fluent, Inc. This report documents the results of the simulations that have been made to date; baseline results employing the RNG k-{var_epsilon} turbulence model are presented. The predicted value for the diametrical pressure coefficient is in reasonably good agreement with published correlations. Plots of the velocities, pressure field, wall shear stress, and turbulent kinetic energy adjacent to the wall are shown within the elbow section. Somewhat to our surprise, these indicate that the maximum values of both wall shear stress and turbulent kinetic energy occur near the elbow entrance, on the inner radius of the bend. Additional simulations were performed for the same conditions, but with the RNG k-{var_epsilon} model replaced by either the standard k-{var_epsilon} or the realizable k-{var_epsilon} turbulence model. The predictions using the standard k-{var_epsilon} model are quite similar to those obtained in the baseline simulation. However, with the realizable k-{var_epsilon} model, more significant differences are evident. The maximums in both wall shear stress and turbulent kinetic energy now appear on the outer radius, near the elbow exit, and are {approx}11% and 14% greater, respectively, than those predicted in the baseline calculation; secondary maxima in both quantities still occur near the elbow entrance on the inner radius. Which set of results better reflects reality must await experimental corroboration. Additional calculations demonstrate that whether or not FLUENT's radial equilibrium pressure distribution option is used in the PRESSURE OUTLET boundary condition has no significant impact on the flowfield near the elbow. Simulations performed with and without the chemical sensor and associated support bracket that were present in the experiments demonstrate that the latter have a negligible influence on the flow in the vicinity of the elbow. The fact that the maxima in wall shear stress and turbulent kinetic energy occur on the inner radius is therefore not an artifact of having introduced the sensor into the flow.
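For reference, one common definition of the diametrical pressure coefficient for an elbow (the report's exact convention may differ) is

C_p = \frac{p_{\mathrm{outer}} - p_{\mathrm{inner}}}{\tfrac{1}{2}\,\rho\,U_b^{2}},

where the pressures are taken at the outer and inner radii of the same cross section and U_b is the bulk velocity.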
This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics of photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
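The full Sandia model uses empirically fitted electrical, thermal, and optical coefficients for each module type; the sketch below is only a minimal illustration of the kind of calculation involved (a linear temperature derate applied to an irradiance ratio), with hypothetical parameter values, not the model's actual equation set.

    def pv_power_estimate(g_poa, t_cell, p_stc=300.0, gamma=-0.004):
        """Rough DC power estimate from plane-of-array irradiance (W/m^2)
        and cell temperature (deg C); p_stc is nameplate power at standard
        test conditions, gamma a per-degree power temperature coefficient."""
        return p_stc * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

    print(pv_power_estimate(g_poa=850.0, t_cell=45.0))  # about 235 W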
The Integrated TIGER Series (ITS) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
The Z machine at Sandia National Laboratories (SNL) is the world's largest and most powerful laboratory X-ray source. The Z Refurbishment Project (ZR) is presently underway to provide improved precision, greater shot capacity, and a higher current capability. The ZR upgrade has a total output current requirement of at least 26 MA for a 100-ns standard Z-pinch load. To accomplish this with minimal impact on the surrounding hardware, the 60 high-energy discharge capacitors in each of the existing 36 Marx generators must be replaced with identical size units but with twice the capacitance. Before the six-month shut down and transition from Z to ZR occurs, 2500 of these capacitors will be delivered. We chose to undertake an ambitious vendor qualification program to reduce the risk of not meeting ZR performance goals, to encourage the pulsed-power industry to revisit the design and development of high-energy discharge capacitors, and to meet the cost and delivery schedule within the ZR project plans. Five manufacturers were willing to fabricate and sell SNL samples of six capacitors each to be evaluated. The 8000-shot qualification test phase of the evaluation effort is now complete. This paper summarizes how the 0.279 x 0.356 x 0.635-m (11 x 14 x 25-in) stainless steel can, Scyllac-style insulator bushing, 2.65-{micro}F, < 30-nH, 100-kV, 35%-reversal capacitor lifetime specifications were determined, briefly describes the nominal 260-kJ test facility configuration, presents the test results of the most successful candidates, and discusses acceptance testing protocols that balance available resources against performance, cost, and schedule risk. We also summarize the results of our accelerated lifetime testing of the selected General Atomics P/N 32896 capacitor. We have completed lifetime tests with twelve capacitors at 100 kV and with fourteen capacitors at 110-kV charge voltage. The means of the fitted Weibull distributions for these two cases are about 17,000 and 10,000 shots, respectively. As a result of this effort plus the rigorous vendor testing prior to shipping, we are confident in the high reliability of these capacitors and have acquired information pertaining to their lifetime dependence on the operating voltage. One result of the analysis is that, for these capacitors, lifetime scales inversely with voltage to the 6.28 {+-} 0.91 power, over this 100 to 110-kV voltage range. Accepting the assumptions leading to this outcome allows us to predict the overall ZR system Marx generator capacitor reliability at the expected lower operating voltage of about 85 kV.
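Taking the numbers above at face value, the inverse-power lifetime model is

N(V) = N_{100}\left(\frac{100\ \mathrm{kV}}{V}\right)^{6.28}.

With N_{100} \approx 17{,}000 shots, this reproduces the 110-kV result (17,000 x (100/110)^{6.28} \approx 9,300 shots, versus the measured mean of about 10,000) and extrapolates to roughly 17,000 x (100/85)^{6.28} \approx 47,000 shots at the expected 85-kV operating point. This extrapolation is our arithmetic on the stated figures and is subject to the quoted {+-} 0.91 uncertainty in the exponent.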
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
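In the current treatment described above, the singly differential cross section is the Klein-Nishina form weighted by the incoherent scattering function,

\frac{d\sigma}{d\Omega} = \frac{d\sigma_{KN}}{d\Omega}\,S(q, Z),

whereas the impulse-approximation upgrade samples a doubly differential cross section d^2\sigma/(d\Omega\,dE') in which the momentum distribution of each bound state (the Compton profile) Doppler-broadens the Compton line. These are the standard forms, quoted here for orientation; see Ribberfors and Brusa et al. for the exact expressions adopted.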
A Mueller matrix imaging polarimeter is used to acquire polarization-sensitive images of seven different manmade samples in multiple scattering geometries. Successive Mueller matrix images of a sample with changing incidence and scatter angles are used to develop a Mueller matrix bidirectional reflectance distribution function for the sample in one plane of measurement. The Mueller matrix bidirectional reflectance distribution functions are compared, and patterns are noted. The most significant data for the scattering samples measured occurs along the diagonal of the respective Mueller matrices, indicating significant depolarization effects. Reduced depolarization data in the form of the average degree of polarization (of exiting light) for each sample is examined as a function of changing scattering geometry. Five of seven manmade samples exhibit an inverted Gaussian profile of depolarization with changing scattering geometry, the shape of which may prove useful for measuring sample properties (e.g. roughness) and for classifying or categorizing samples in a remote sensing scheme. Depolarization differences for each sample in response to changing incident polarization states are also examined, and a new metric, the degree of polarization surface, has been developed to visualize all such data simultaneously.
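For reference, the Mueller matrix M maps the incident to the exiting Stokes vector, and the degree of polarization of the exiting light follows from the standard definition

\mathbf{S}_{\mathrm{out}} = M\,\mathbf{S}_{\mathrm{in}},
\qquad
\mathrm{DoP} = \frac{\sqrt{S_1^{2} + S_2^{2} + S_3^{2}}}{S_0},

so a strongly depolarizing sample is one whose off-diagonal Mueller elements are small and whose diagonal elements attenuate S_1, S_2, and S_3 relative to S_0.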
This report provides soil evaluation and characterization testing for the submarine bases at Kings Bay, Georgia, and Bangor, Washington, using triaxial testing at high confining pressures with different moisture contents. In general, the samples from the Bangor and Kings Bay sites appeared to be stronger than a previously used reference soil. Assuming the samples were representative of the material found at the sites, they should be adequate for use in the planned construction. Since soils can vary greatly over even a small site, a soil specification for the construction contractor would be needed to ensure that soil variations found at the site would meet or exceed the requirements. A suggested specification for the Bangor and Kings Bay soils was presented, based on information gathered from references plus data obtained from this study, which could be used as a basis for design by the construction contractor.
The need for fresh water has increased exponentially during the last several decades due to the continuous growth of human population and of industrial and agricultural activities, yet many existing water resources are of limited use because of their high salinity. This unfavorable situation requires the development of new, long-term strategies and alternative technologies for desalinating saline waters that are presently unused, to supply the population growth occurring in arid regions. We have developed a novel environmentally friendly method for desalinating inland brackish waters. This process can be applied to either brackish ground water or produced waters (i.e., coal-bed methane or oil and gas produced waters). Using a set of ion exchange and sorption materials, our process effectively removes anions and cations in separate steps. The ion exchange materials were chosen because of their specific selectivity for ions of interest, and for their ability to work in the temperature and pH regions necessary for cost and energy effectiveness. For anion exchange, we have focused on hydrotalcite (HTC), a layered hydroxide similar to clay in structure. For cation exchange, we have developed an amorphous silica material that has enhanced cation (in particular Na{sup +}) selectivity. In the case of produced waters with high concentrations of Ca{sup 2+}, a lime softening step is included.
Conclusions of this paper are: (1) Adsorption/desorption on bulk unmodified zeolites showed isoprene adsorbed by zeolite-L and n-pentane adsorbed by zeolite-Y and ZSM-5; (2) Bulk carbonization is used to passivate zeolite activity toward organic adsorption/decomposition; (3) Based on the bulk modified zeolite separation results, we have determined that the MFI type has the most potential for isoprene enrichment; (4) Modified MFI type membranes are jointly made by Sandia and the Univ. of Colorado. Separation experiments are performed by Goodyear Chemical; (5) Isoprene/n-pentane separations have been demonstrated by using both zeolite membranes and modified bulk zeolites at various temperatures on the Goodyear Pilot-scale unit; and (6) Target zeolite membrane separations values of 6.7% isoprene enrichment have been established by economic analysis calculations by Burns & McDonnell.
It is shown that for any material or structural model expressible as a Masing model, there exists a unique parallel-series (displacement-based) Iwan system that characterizes that model as a function of displacement history. This offers advantages both in terms of more convenient force evaluation in arbitrary deformation histories and in terms of model inversion. Characterization as an Iwan system is demonstrated through the inversion of the Ramberg-Osgood model, a force- (stress-) based material model that is not explicitly invertible. An implication of the inversion process is that direct, rigorous comparisons of different Masing models, regardless of the ability to invert their constitutive relationships, can be achieved through the comparison of their associated Iwan distribution densities.
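One common form of the Ramberg-Osgood relation illustrates why inversion is nontrivial:

\varepsilon = \frac{\sigma}{E} + \alpha\,\frac{\sigma_0}{E}\left(\frac{\sigma}{\sigma_0}\right)^{n}.

This gives strain explicitly in terms of stress, but for general n it cannot be solved for \sigma(\varepsilon) in closed form; the equivalent parallel-series Iwan system supplies the displacement-driven evaluation directly.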
This report summarizes numerical analyses conducted to assess the relative importance, for penetration depth calculations, of rock constitutive model features representing microscale flaws (such as porosity and networks of microcracks) and of rock mass structural features. Three-dimensional, nonlinear, transient dynamic finite element penetration simulations are made with a realistic geomaterial constitutive model to determine which features have the most influence on penetration depth calculations. A baseline penetration calculation is made with a representative set of material parameters evaluated from measurements made in laboratory experiments conducted on a familiar sedimentary rock. Then, a sequence of perturbations of various material parameters allows an assessment to be made of the main penetration effects. A cumulative probability distribution function is calculated with the use of an advanced reliability method that makes use of this sensitivity database, probability density functions, and coefficients of variation of the key controlling parameters for penetration depth predictions. Thus the variability of the calculated penetration depth is known as a function of the variability of the input parameters. This simulation modeling capability should significantly improve the tools needed to design enhanced penetrator systems, support weapons effects studies, and directly address proposed hard and deeply buried target (HDBT) defeat scenarios.
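The reliability calculation above propagates input variability to a distribution of predicted depths. As a generic illustration only (the response function, distributions, and numbers below are hypothetical stand-ins, not the finite element model or the advanced reliability method), a direct Monte Carlo version of the same idea looks like this:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    # Hypothetical inputs, each defined by a mean and coefficient of variation.
    strength = rng.normal(50e6, 0.10 * 50e6, n)     # rock strength, Pa (COV 10%)
    density = rng.normal(2300.0, 0.05 * 2300.0, n)  # density, kg/m^3 (COV 5%)

    # Stand-in response: depth falls with strength, rises weakly with density.
    depth = 4.0 * (50e6 / strength) ** 0.7 * (density / 2300.0) ** 0.2  # meters

    print("median depth:", np.median(depth))
    print("90th percentile:", np.quantile(depth, 0.9))
    print("P(depth <= 4 m):", (depth <= 4.0).mean())

Advanced reliability methods reach the same cumulative distribution function with far fewer response evaluations, which matters when each evaluation is a transient dynamic finite element run.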
Pioneering x-ray imaging has been undertaken on a number of AWE's and Sandia National Laboratories' radiation effects x-ray simulators. These simulators typically yield a single very short (<50 ns) pulse of high-energy (MeV endpoint energy bremsstrahlung) x-ray radiation with doses in the kilorad (krad(Si)) region. X-ray source targets vary in size from 2 to 25 cm diameter, depending upon the particular simulator. Electronic imaging of the source x-ray emission under dynamic conditions yields valuable information on how the simulator is performing. The resultant images are of interest to the simulator designer, who may configure new x-ray source converter targets and diode designs. The images can provide quantitative information about machine performance during radiation effects testing of components under active conditions. The effects testing program is a valuable interface for validation of high performance computer codes and models for the radiation effects community. A novel high-energy x-ray imaging spectrometer is described whereby the spectral energy (0.1 to 2.5 MeV) profile may be discerned from the digitally recorded and viewable images via a pinhole/scintillator/CCD imaging system and knowledge of the filtration parameters. Unique images, analysis, and a preliminary evaluation of the capability of the spectrometer are presented. Further, a novel time-resolved imaging system is described that captures a sequence of high spatial resolution temporal images, with zero interframe time, in the nanosecond timeframe, of our source x-rays.
One of the most important types of data in the National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Knowledge Base (KB) is parametric grid (PG) data. PG data can be used to improve signal detection, signal association, and event discrimination, but so far their greatest use has been for improving event location by providing ground-truth-based corrections to travel-time base models. In this presentation we discuss the latest versions of the complete suite of Knowledge Base PG tools developed by NNSA to create, access, manage, and view PG data. The primary PG population tool is the Knowledge Base calibration integration tool (KBCIT). KBCIT is an interactive computer application to produce interpolated calibration-based information that can be used to improve monitoring performance by improving precision of model predictions and by providing proper characterizations of uncertainty. It is used to analyze raw data and produce kriged correction surfaces that can be included in the Knowledge Base. KBCIT not only produces the surfaces but also records all steps in the analysis for later review and possible revision. New features in KBCIT include a new variogram autofit algorithm; the storage of database identifiers with a surface; the ability to merge surfaces; and improved surface-smoothing algorithms. The Parametric Grid Library (PGL) provides the interface to access the data and models stored in a PGL file database. The PGL represents the core software library used by all the GNEM R&E tools that read or write PGL data (e.g., KBCIT and LocOO). The library provides data representations and software models to support accurate and efficient seismic phase association and event location. Recent improvements include conversion of the flat-file database (FDB) to an Oracle database representation; automatic access of station/phase tagged models from the FDB during location; modification of the core geometric data representations; a new multimodel representation for combining separate seismic data models that partially overlap; and a port of PGL to the Microsoft Windows platform. The Data Manager (DM) tool provides access to PG data for purposes of managing the organization of the generated PGL file database, or for perusing the data for visualization and informational purposes. It is written as a graphical user interface (GUI) that can directly access objects stored in any PGL file database and display it in an easily interpreted textual or visual format. New features include enhanced station object processing; low-level conversion to a new core graphics visualization library, the visualization toolkit (VTK); additional visualization support for most of the PGL geometric objects; and support for the Environmental Systems Research Institute (ESRI) shape files (which are used to enhance the geographical context during visualization). The Location Object-Oriented (LocOO) tool computes seismic event locations and associated uncertainty based on travel time, azimuth, and slowness observations. It uses a linearized least-squares inversion algorithm (the Geiger method), enhanced with Levenberg-Marquardt damping to improve performance in highly nonlinear regions of model space. LocOO relies on PGL for all predicted quantities and is designed to fully exploit all the capabilities of PGL that are relevant to seismic event location. 
New features in LocOO include a redesigned internal architecture implemented to enhance flexibility and to support simultaneous multiple event location. Database communication has been rewritten using new object-relational features available in Oracle 9i.
To make use of some portions of the National Nuclear Security Administration (NNSA) Knowledge Base (KB) for which no current operational monitoring applications were available, Sandia National Laboratories has developed a set of prototype regional analysis tools (MatSeis, EventID Tool, CodaMag Tool, PhaseMatch Tool, Dendro Tool, Infra Tool, etc.), and we continue to maintain and improve these. Individually, these tools have proven effective in addressing specific monitoring tasks, but collectively their number and variety tend to overwhelm KB users, so we developed another application - the KB Navigator - to launch the tools and facilitate their use for real monitoring tasks. The KB Navigator is a flexible, extensible Java application that includes a browser for KB data content, as well as support to launch any of the regional analysis tools. In this paper, we will discuss the latest versions of KB Navigator and the regional analysis tools, with special emphasis on the new overarching inter-tool communication methodology that we have developed to make the KB Navigator and the tools function together seamlessly. We use a peer-to-peer communication model, which allows any tool to communicate with any other. The messages themselves are passed as serialized XML, and the conversion from Java to XML (and vice versa) is done using Java Architecture for XML Binding (JAXB).
TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) of moving objects using captured single- or dual-view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user-selected image details, such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real-world coordinates and rates using user-specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long-range tracking of high-speed targets, where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.
The West Hackberry salt dome, in southwestern Louisiana, is one of four underground oil-storage facilities managed by the U. S. Department of Energy Strategic Petroleum Reserve (SPR) Program. Sandia National Laboratories, as the geotechnical advisor to the SPR, conducts site-characterization investigations and other longer-term geotechnical and engineering studies in support of the program. This report describes the conversion of two-dimensional geologic interpretations of the West Hackberry site into three-dimensional geologic models. The new models include the geometry of the salt dome, the surrounding sedimentary layers, mapped faults, and a portion of the oil storage caverns at the site. This work provides a realistic and internally consistent geologic model of the West Hackberry site that can be used in support of future work.
The breacher's training aid described in this report was designed to simulate features of magazine and steel-plate doors. The training aid enables breachers to practice using their breaching tools on components that they may encounter when attempting to enter a facility. Two types of fixtures were designed and built: (1) a large fixture incorporates simulated hinges, hasps, lock shrouds, and pins, and (2) a small fixture simulates the cross section of magazine and steel-plate doors. The small fixture consists of steel plates on either side of a structural member, such as an I-beam. The report contains detailed descriptions and photographs of the training aids, assembly instructions, and drawings.
Approximate formulas are constructed and numerical simulations are carried out for electric field derivative probes that have the form of flush-mounted monopoles. Effects such as rounded edges are included. A method is introduced to make results from two-dimensional conformal mapping analyses accurately apply to the three-dimensional axisymmetric probe geometry.
The need for a Combustion and Melting Research Facility focused on the solution of glass manufacturing problems common to all segments of the glass industry was given high priority in the earliest version of the Glass Industry Technology Roadmap (Eisenhauer et al., 1997). Visteon Glass Systems and, later, PPG Industries proposed to meet this requirement, in partnership with the DOE/OIT Glass Program and Sandia National Laboratories, by designing and building a research furnace equipped with state-of-the-art diagnostics in the DOE Combustion Research Facility located at the Sandia site in Livermore, CA. Input on the configuration and objectives of the facility was sought from the entire industry by a variety of routes: (1) through a survey distributed to industry leaders by GMIC, (2) by conducting an open workshop following the OIT Glass Industry Project Review in September 1999, (3) from discussions with numerous glass engineers, scientists, and executives, and (4) during visits to glass manufacturing plants and research centers. The recommendations from industry were that the melting tank be made large enough to reproduce the essential processes and features of industrial furnaces yet flexible enough to be operated in as many as possible of the configurations found in industry as well as in ways never before attempted in practice. Realization of these objectives, while still providing access to the glass bath and combustion space for optical diagnostics and measurements using conventional probes, was the principal challenge in the development of the tank furnace design. The present report describes a facility having the requirements identified as important by members of the glass industry and equipped to do the work that the industry recommended should be the focus of research. The intent is that the laboratory would be available to U.S. glass manufacturers for collaboration with Sandia scientists and engineers on both precompetitive basic research and the solution of proprietary glass production problems. As a consequence of the substantial increase in scale and scope of the initial furnace concept in response to industry recommendations, constraints on funding of industrial programs by DOE, and reorientation of the Department's priorities, the OIT Glass Program is unable to provide the support for construction of such a facility. However, it is the present investigators' hope that a group of industry partners will emerge to carry the project forward, taking advantage of the detailed furnace design presented in this report. The engineering, including complete construction drawings, bill of materials, and equipment specifications, is complete. The project is ready to begin construction as soon as the quotations are updated. The design of the research melter closely follows the most advanced industrial practice, firing by natural gas with oxygen. The melting area is 13 ft x 6 ft, with a glass depth of 3 ft and an average height in the combustion space of 3 ft. The maximum pull rate is 25 tons/day, ranging from 100% batch to 100% cullet, continuously fed, with variable batch composition, particle size distribution, and raft configuration. The tank is equipped with bubblers to control glass circulation. 
The furnace can be fired in three modes: (1) using a single large burner mounted on the front wall, (2) by six burners in a staggered/opposed arrangement, three in each breast wall, and (3) by down-fired burners mounted in the crown in any combination with the front wall or breast-wall-mounted burners. Horizontal slots are provided between the tank blocks and tuck stones and between the breast wall and skewback blocks, running the entire length of the furnace on both sides, to permit access to the combustion space and the surface of the glass for optical measurements and sampling probes. Vertical slots in the breast walls provide additional access for measurements and sampling. The furnace and tank are to be fully instrumented with standard measuring equipment, such as flow meters, thermocouples, continuous gas composition analyzers, optical pyrometers, and a video camera. The output from the instruments is to be continuously recorded and simultaneously made available to other researchers via the Internet. A unique aspect of the research facility would be its access to the expertise in optical measurements in flames and high temperature reacting flows residing in the Sandia Combustion Research Facility. Development of new techniques for monitoring and control of glass melting would be a major focus of the work. The lab would be equipped with conventional and laser light sources and detectors for optical measurements of gas temperature, velocity, and gaseous species and, using new techniques to be developed in the Research Facility itself, glass temperature and glass composition.
Transmission electron microscope (TEM) tomography provides three-dimensional structural information from tilt series with nanoscale resolution. We have collected TEM projection data sets to study the internal structure of photocatalytic nanoparticles. Multiple cross-sectional slices of the nanoparticles are reconstructed using an algebraic reconstruction technique (ART) and then assembled to form a 3D rendering of the object. We recently upgraded our TEM with a new sample holder having a tilt range of +/-70{sup o} and have collected tomography data over a range of 125{sup o}. Simulations were performed to study the effects of field-of-view displacement (shift and rotation), limited tilt angle range, hollow (missing) projections, stage angle accuracy, and number of projections on the reconstructed image quality. This paper discusses our experimental and computational approaches, presents some examples of TEM tomography, and considers future directions.
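The reconstruction step can be illustrated with the basic ART (Kaczmarz) update, sketched below for a toy system; this is the generic algorithm, not the authors' tomography pipeline, and the projection matrix here is a hypothetical stand-in for real tilt-series ray sums.

    import numpy as np

    def art(A, b, n_sweeps=200, relax=0.5):
        """Kaczmarz/ART: sequentially project x onto each row's hyperplane."""
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                a = A[i]
                denom = a @ a
                if denom > 0:
                    x += relax * (b[i] - a @ x) / denom * a
        return x

    # Toy example: recover a 4-pixel "image" from four ray sums.
    A = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [1., 0., 1., 0.],
                  [1., 0., 0., 1.]])
    x_true = np.array([1., 2., 3., 4.])
    print(art(A, A @ x_true))  # approaches [1, 2, 3, 4]

Missing projections and a limited tilt range make the real system underdetermined, which is why the simulations described above probe how those factors degrade the reconstruction.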
The magnetic-pressure drive technique allows single-shot measurements of compression isentropes. We have used this method to measure the pressure-volume isentropes of bulk lead, single-crystal lead, and a lead-antimony alloy to {approx}400 kbar. The isentrope pressure-volume curves were found from integration of the experimentally deduced Lagrangian sound speed as a function of particle velocity. A characteristics calculation method was used to convert time-resolved free-surface velocity measurements to corresponding in situ particle-velocity histories, from which the Lagrangian sound speed was determined from the times for samples of different thicknesses to reach the same particle velocity. Use of multiple velocity interferometry probes decreased the uncertainty due to random errors by allowing multiple measurements. Our results have errors of 4% to 6% in pressure and {approx}1% to 1.5% in volume, depending on the number of measurements, and are consistent with existing isotherm and Hugoniot data and models for lead.
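The integration referred to above follows the standard Lagrangian simple-wave relations: along the isentrope,

dP = \rho_0\,c_L(u)\,du, \qquad d\!\left(\frac{V}{V_0}\right) = -\frac{du}{c_L(u)},

so that

P(u) = P_0 + \rho_0 \int_0^{u} c_L(u')\,du', \qquad \frac{V(u)}{V_0} = 1 - \int_0^{u} \frac{du'}{c_L(u')},

where \rho_0 and V_0 are the initial density and specific volume and c_L is the Lagrangian sound speed deduced from the thickness-to-time measurements described above.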
Relative integrated cross sections are measured for rotationally inelastic scattering of NO({sup 2}{pi}{sub 1/2}), hexapole selected in the upper {Lambda}-doublet level of the ground rotational state (j = 0.5), in collisions with He at a nominal energy of 514 cm{sup -1}. Application of a static electric field E in the scattering region, directed parallel or antiparallel to the relative velocity vector v, allows the state-selected NO molecule to be oriented with either the N end or the O end towards the incoming He atom. Laser-induced fluorescence detection of the final state of the NO molecule is used to determine the experimental steric asymmetry, SA {triple_bond} ({sigma}{sub v}{up_arrow}{down_arrow}{sub E}-{sigma}{sub v}{up_arrow}{up_arrow}{sub E})/({sigma}{sub v}{up_arrow}{down_arrow}{sub E} + {sigma}{sub v}{up_arrow}{up_arrow}{sub E}), which is equal to within a factor of (-1) to the molecular steric effect, S{sub i {yields} f} {triple_bond} ({sigma}{sub He {yields} NO} - {sigma}{sub He {yields} ON})/({sigma}{sub He {yields} NO} + {sigma}{sub He {yields} ON}). The dependence of the integral inelastic cross section on the incoming {lambda}-doublet component is also observed as a function of the final rotational (j{prime}), spin-orbit ({Omega}{prime}), and {Lambda}-doublet ({epsilon}{prime}) state. The measured steric asymmetries are significantly larger than previously observed for NO-Ar scattering, supporting earlier proposals that the repulsive part of the interaction potential is responsible for the steric asymmetry. In contrast to the case of scattering with Ar, the steric asymmetry of NO-He collisions is not very sensitive to the value of {Omega}{prime} . However, the {Lambda}-doublet propensities are very different for [{Omega} = 0.5(F{sub 1}) {yields} {Omega}{prime} = 0.5(F{sub 1})] transitions. Spin-orbit manifold conserving collisions exhibit a propensity for parity conservation at low {Delta}{sub j}, but spin-orbit manifold changing collisions do not show this propensity. In conjunction with the experiments, state-to-state cross sections for scattering of oriented NO({sup 2}{pi}) molecules with He atoms are predicted from close-coupling calculations on restricted coupled-cluster methods including single, double, and noniterated triple excitations [J. Klos, G. Chalasinski, M. T. Berry, R. Bukowski, and S. M. Cybulski, J. Chem. Phys. 112, 2195 (2000)] and correlated electron-pair approximation [M. Yang and M. H. Alexander, J. Chem. Phys. 103, 6973 (1995)] potential energy surfaces. The calculated steric asymmetry S{sub i {yields} f} of the inelastic cross sections at E{sub tr} = 514 cm{sup -1} is in reasonable agreement with that derived from the present experimental measurements for both spin-manifold conserving (F{sub 1} {yields} F{sub 1}) and spin-manifold changing (F{sub 1} {yields} F{sub 2}) collisions, except that the overall sign of the effect is opposite. Additionally, calculated field-free integral cross sections for collisions at E{sub tr} = 508 cm{sup -1} are compared to the experimental data of Joswig et al. [J. Chem. Phys. 85, 1904 (1986)]. Finally, the calculated differential cross section for collision energy E{sub tr} = 491 cm{sup -1} is compared to experimental data of Westley et al. [J. Chem. Phys. 114, 2669 (2001)] for the spin-orbit conserving transition F{sub 1} (j = 0.5) {yields} F{sub 1}f(j{prime} = 3.5).
At the direction of the Department of Defense Explosives Safety Board (DDESB), a Peer Review Team was established to review the status of development of the risk-based explosives safety siting process and criteria as currently implemented in the software 'Safety Assessment for Explosive Risk (SAFER)' Version 2.1. The objective of the Peer Review Team was to provide an independent evaluation of the components of the SAFER model, the ongoing development of the model and the risk assessment process and criteria. This peer review report addressed procedures; protocols; physical and statistical science algorithms; related documents; and software quality assurance, validation and verification. Overall, the risk-based method in SAFER represents a major improvement in the Department of Defense (DoD) approach to explosives safety management. The DDESB and Risk Based Explosives Safety Criteria Team (RBESCT) have made major strides in developing a methodology, which over time may become a worldwide model. The current status of all key areas of the SAFER code has been logically developed and is defensible. Continued improvement and refinement can be expected as implementation proceeds. A consistent approach to addressing and refining uncertainty in each of the primary areas (probability of event, consequences of event and exposure) will be a very beneficial future activity.
The length scale of stress domain patterns formed at solid surfaces is usually calculated using isotropic elasticity theory. Because this length depends exponentially on elastic constants, deviations between isotropic and anisotropic elasticity can lead to large errors. Another inaccuracy of isotropic elasticity theory is that it neglects the dependence of elastic relaxations on stripe orientation. To remove these inaccuracies, we calculate the energy of striped domain patterns using anisotropic elasticity theory for an extensive set of surfaces encountered in experimental studies of self-assembly. We present experimental and theoretical evidence that elastic anisotropy is large enough to determine the stripe orientation when Pb is deposited on Cu(111). Our analytical and numerical results should be useful for analysis of a broad range of experimental systems.
The formation of HO{sub 2} in the reactions of C{sub 2}H{sub 5}, n-C{sub 3}H{sub 7}, and i-C{sub 3}H{sub 7} radicals with O{sub 2} is investigated using the technique of laser photolysis/long-path frequency-modulation spectroscopy. The alkyl radicals are formed by 266 nm photolysis of alkyl iodides. The formation of HO{sub 2} from the subsequent reaction of the alkyl radicals with O{sub 2} is followed by infrared frequency-modulation spectroscopy. The concentration of I atoms is simultaneously monitored by direct absorption of a second laser probe on the spin-orbit transition. The measured profiles are compared to a kinetic model taken from time-resolved master-equation results based on previously published ab initio characterizations of the relevant stationary points on the potential-energy surface. The ab initio energies are adjusted to produce agreement with the present experimental data and with available literature studies. The isomer specificity of the present results enables refinement of the model for i-C{sub 3}H{sub 7} + O{sub 2} and improved agreement with experimental measurements of HO{sub 2} production in propane oxidation.
This manual describes the theory behind many of the constructs in Salinas. For a more detailed description of how to use Salinas, we refer the reader to the Salinas User's Notes. Many of the constructs in Salinas are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Salinas are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer's-notes manual, the User's Notes, and of course the material in the open literature.
Salinas provides a massively parallel implementation of structural dynamics finite element analysis. This capability is required for high fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. General capabilities for modal, statics and transient dynamics are provided. Salinas is similar to commercial codes like Nastran or Abaqus. It has some nonlinear capability, but excels in linear computation. It differs from the above commercial codes in that it is designed to operate efficiently in a massively parallel environment. Even for an experienced analyst, running a new finite element package can be a challenge. This little primer is intended to make part of this task easier by presenting the basic steps in a simple way. The analyst is referred to the theory manual for details of the mathematics behind the work. The User's Notes should be used for more complex inputs, and will have more details about the process (as well as many more examples). More information can be found on our web pages. Finite element analysis can be deceptive. Any software can give the wrong answers if used improperly, and occasionally even when used properly. Certainly a solid background in structural mechanics is necessary to build an adequate finite element model and interpret the results. This primer should provide a quick start in answering some of the more common questions that come up in using Salinas.
This work presents an experimental evaluation of patch repair of solid laminated composites. The study was focused on destructive and nondestructive tests of full-scale repaired panels under static tension loading conditions. The testing program consisted of ten panels: three pristine, three damaged, three repaired, and one repaired with a mismatched fiber orientation patch. The evaluated panels were 300 mm x 675 mm in size and consisted of 6-ply ((-60/60/0){sub s}) quasi-isotropic laminates. The destructive tests were performed by North Carolina A&T State University and the nondestructive tests were performed by Iowa State University using pulse-echo C-scan, air-coupled TTU, and Auto-Tap. Sandia National Laboratories validated the NDT tests by implementing NDE field methods. Based on the evaluation performed in this study, it appears that patch repair is an effective means of retrofitting damaged solid composite laminates.
The peridynamic theory of continuum mechanics allows damage, fracture, and long-range forces to be treated as natural components of the deformation of a material. In this paper, the peridynamic approach is applied to small-thickness two- and one-dimensional structures. For membranes, a constitutive model is described appropriate for rubbery sheets that can form cracks. This model is used to perform numerical simulations of the stretching and dynamic tearing of membranes. A similar approach is applied to one-dimensional string-like structures that undergo stretching, bending, and failure. Long-range forces similar to van der Waals interactions at the nanoscale influence the equilibrium configurations of these structures, how they deform, and possibly their self-assembly.
A laser hazard analysis to determine the Extended Ocular Hazard Distances associated with possible aided intrabeam viewing of the Sandia Remote Sensing System (SRSS) airborne AURA laser (Big Sky Laser Technology) was performed based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for the Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, for the Safe Use of Lasers Outdoors. The AURA lidar system is installed in the instrument pod of a Proteus airframe and is used to perform laser interaction experiments and tests at various national test sites. The targets are located at various distances (ranges) from the airborne platform. Nominal Ocular Hazard Distance (NOHD) and maximum ''eye-safe'' dwell times for various operational altitudes associated with unaided intrabeam exposure of ground personnel were determined and presented in a previous SAND report. Although the target areas are controlled and the use of viewing aids is prohibited, there is the possibility of the unauthorized use of viewing aids such as binoculars. This aided viewing hazard analysis is supplemental to the previous SAND report for the laser hazard analysis of the airborne AURA.
We present all-atom molecular dynamics simulations of biologically realistic transmembrane potential gradients across a DMPC bilayer. These simulations are the first to model this gradient in all-atom detail, with the field generated solely by explicit ion dynamics. Unlike traditional bilayer simulations that have one bilayer per unit cell, we simulate a 170 mV potential gradient by using a unit cell consisting of three salt-water baths separated by two bilayers, with full three-dimensional periodicity. The study shows that current computational resources are powerful enough to generate a truly electrified interface, as we show the predicted effect of the field on the overall charge distribution. Additionally, starting from Poisson's equation, we show a new derivation of the double integral equation for calculating the potential profile in systems with this type of periodicity.
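The starting point mentioned above is the one-dimensional Poisson equation, \varepsilon_0\,\phi''(z) = -\rho(z); integrating twice across the cell gives the familiar double-integral form (the new derivation in this work adds the corrections required by the two-bilayer periodic geometry, which are not reproduced here):

\phi(z) - \phi(0) = -\frac{1}{\varepsilon_0}\int_0^{z} dz' \int_0^{z'} \rho(z'')\,dz''.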