Hydrogen energy may provide the means to an environmentally friendly future. One of the problems related to its application for transportation is 'on-board' storage. Hydrogen storage in solids has long been recognized as one of the most practical approaches for this purpose. The hydrogen capacity of interstitial hydrides of most metals and alloys is limited to below 2.5% by weight, which is unsatisfactory for on-board transportation applications. Magnesium hydride is an exception, with a hydrogen capacity of {approx}8.2 wt.%; however, its operating temperature, above 350 C, is too high for practical use. Sodium alanate (NaAlH{sub 4}) absorbs hydrogen up to 5.6 wt.% theoretically; however, its reaction kinetics and partial reversibility do not completely meet the new targets for transportation applications. Recently Chen et al. [1] reported that (Li{sub 3}N + 2H{sub 2} {leftrightarrow} LiNH{sub 2} + 2LiH) provides a storage material with a possible high capacity, up to 11.5 wt.%, although this material is still too stable to meet the operating pressure/temperature requirement. Here we report a new approach to destabilize the lithium imide system by partial substitution of lithium by magnesium in the (LiNH{sub 2} + LiH {leftrightarrow} Li{sub 2}NH + H{sub 2}) system with minimal capacity loss. This Mg-substituted material can reversibly absorb 5.2 wt.% hydrogen at a pressure of 30 bar and 200 C. This is a very promising material for on-board hydrogen storage applications. It is interesting to observe that the starting material (2LiNH{sub 2} + MgH{sub 2}) converts to (Mg(NH{sub 2}){sub 2} + 2LiH) after a desorption/re-absorption cycle.
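For context on the capacities quoted above, the theoretical gravimetric capacity of the unsubstituted amide/imide couple follows directly from standard atomic masses; the short calculation below is illustrative only and assumes the common convention that capacity is expressed as released H{sub 2} mass per unit mass of the hydrogenated solid.

```latex
% Illustrative arithmetic (not from the report): gravimetric capacity of the
% amide/imide couple, using standard atomic masses.
\begin{align*}
  \mathrm{LiNH_2 + LiH} &\leftrightarrow \mathrm{Li_2NH + H_2} \\
  w_{\mathrm{H_2}} &= \frac{M_{\mathrm{H_2}}}{M_{\mathrm{LiNH_2}} + M_{\mathrm{LiH}}}
                    = \frac{2.02}{22.96 + 7.95} \approx 6.5\ \mathrm{wt.\%}
\end{align*}
```

The 5.2 wt.% reversible capacity reported above for the Mg-substituted system is therefore close to the theoretical value of the parent couple, consistent with the claim of minimal capacity loss.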
This report documents state-of-the-art methods, tools, and data for the conduct of a fire Probabilistic Risk Assessment (PRA) for a commercial nuclear power plant (NPP) application. The methods have been developed under the Fire Risk Re-quantification Study. This study was conducted as a joint activity between EPRI and the U. S. NRC Office of Nuclear Regulatory Research (RES) under the terms of an EPRI/RES Memorandum of Understanding [RS.1] and an accompanying Fire Research Addendum [RS.2]. Industry participants supported demonstration analyses and provided peer review of this methodology. The documented methods are intended to support future applications of Fire PRA, including risk-informed regulatory applications. The documented method reflects state-of-the-art fire risk analysis approaches. The primary objective of the Fire Risk Study was to consolidate recent research and development activities into a single state-of-the-art fire PRA analysis methodology. Methodological issues raised in past fire risk analyses, including the Individual Plant Examination of External Events (IPEEE) fire analyses, have been addressed to the extent allowed by the current state-of-the-art and the overall project scope. Methodological debates were resolved through a consensus process between experts representing both EPRI and RES. The consensus process included a provision whereby each major party (EPRI and RES) could maintain differing technical positions if consensus could not be reached. No cases were encountered where this provision was invoked. While the primary objective of the project was to consolidate existing state-of-the-art methods, in many areas, the newly documented methods represent a significant advancement over previously documented methods. In several areas, this project has, in fact, developed new methods and approaches. Such advances typically relate to areas of past methodological debate.
In flight tests, certain finned bodies of revolution firing lateral jets experience slower spin rates than expected. The primary cause of the reduced spin rate is the interaction between the lateral jets and the freestream air flowing past the body. This interaction produces vortices that interact with the fins (vortex-fin interaction, VFI), altering the pressure distribution over the fins and creating a torque that counteracts the desired spin (counter torque). The current task is to develop an automated procedure for analyzing the pressures measured at an array of points on the fin surfaces of a body tested in a production-scale wind tunnel, determining the VFI-induced roll torque, and comparing it to the roll torque experimentally measured with an aerodynamic balance. Basic pressure, force, and torque relationships were applied to finite elements defined by the pressure measurement locations and integrated across the fin surface. The integrated fin pressures help assess the distinct contributions of the individual fins to the counter torque and aid in correlating the counter torque with the positions and strengths of the vortices. The methodology produced comparisons of the effects of VFI for varying flow conditions such as freestream Mach number and dynamic pressure. The results show that for some cases the calculated counter torque agreed with the measured counter torque; however, the results were less consistent at higher freestream Mach numbers and dynamic pressures.
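A minimal sketch of the integration step described above, with hypothetical variable names and placeholder data: each pressure tap is assigned a finite-element area and a signed moment arm about the roll axis, and the element contributions are summed per fin and in total.

```python
import numpy as np

def roll_torque_from_pressures(p, area, arm, fin_id):
    """Approximate the roll torque from discrete fin-surface pressure taps.

    p      : measured pressures at each tap (Pa)
    area   : surface area assigned to each tap's finite element (m^2)
    arm    : signed moment arm of each element about the roll (body) axis (m)
    fin_id : integer fin label for each tap, used to split contributions

    Returns total torque and a per-fin breakdown (N*m).
    """
    element_torque = p * area * arm            # force times moment arm per element
    total = element_torque.sum()
    per_fin = {f: element_torque[fin_id == f].sum() for f in np.unique(fin_id)}
    return total, per_fin

# Hypothetical example: 4 fins x 25 taps each, placeholder (not measured) data
rng = np.random.default_rng(0)
n = 100
p = rng.normal(2.0e3, 5.0e2, n)                # Pa
area = np.full(n, 1.0e-4)                      # m^2 per element
arm = rng.uniform(0.02, 0.08, n)               # m
fin_id = np.repeat(np.arange(4), 25)
total, per_fin = roll_torque_from_pressures(p, area, arm, fin_id)
print(total, per_fin)
```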
The analytical model for the depth of correlation (measurement depth) of a microscopic particle image velocimetry (micro-PIV) experiment derived by Olsen and Adrian (Exp. Fluids, 29, pp. S166-S174, 2000) has been modified to be applicable to experiments using high numerical aperture optics. A series of measurements is presented that experimentally quantifies the depth of correlation of micro-PIV velocity measurements employing high numerical aperture and magnification optics. These measurements demonstrate that the modified analytical model is quite accurate in estimating the depth of correlation in micro-PIV measurements using this class of optics. Additionally, it was found that the Gaussian particle approximation made in this model does not significantly affect the model's performance. It is also demonstrated that this modified analytical model readily predicts the depth of correlation when viewing into a medium with an index of refraction different from that of the immersion medium.
Solid-state {sup 1}H NMR relaxometry studies were conducted on a hydroxyl-terminated polybutadiene (HTPB) based polyurethane elastomer thermo-oxidatively aged at 80 C. The {sup 1}H T{sub 1}, T{sub 2}, and T{sub 1{rho}} relaxation times of samples thermally aged for various periods of time were determined as a function of NMR measurement temperature. The response of each measurement was calculated from a best-fit linear function of relaxation time vs. aging time. It was found that the T{sub 2,H} and T{sub 1{rho},H} relaxation times exhibited the largest response to thermal degradation, whereas T{sub 1,H} showed minimal change. All of the NMR relaxation measurements on solid samples showed significantly less sensitivity to thermal aging than the T{sub 2,H} relaxation times of solvent-swollen samples.
The microstructure and mechanical properties of niobium-modified lead zirconate titanate (PNZT) 95/5 ceramics, where 95/5 refers to the ratio of lead zirconate to lead titanate, were evaluated as a function of lead (Pb) stoichiometry. Chemically prepared PNZT 95/5 is produced at Sandia National Laboratories by the Ceramics and Glass Processing Department (14154) for use as voltage elements in ferroelectric neutron generator power supplies. PNZT 95/5 was prepared according to the nominal formulation of Pb{sub 0.991+x}(Zr{sub 0.955}Ti{sub 0.045}){sub 0.982}Nb{sub 0.018}O{sub 3+x}, where x (-0.0274 {approx}< x {approx}< 0.0297) refers to the mole fraction of Pb and O that deviated from the stoichiometric value. The Pb concentrations were determined from calcined powders; no adjustments were made to Pb compositions due to weight loss during sintering. The microstructure (second phases, fracture mode, and grain size) varied appreciably with Pb stoichiometry, whereas the mechanical properties (hardness, fracture toughness, strength, and Weibull parameters) exhibited modest variation. Specimens deficient in Pb by 2.74% (x = -0.0274) and 2.15% (x = -0.0215) had a high area fraction of a zirconia (ZrO{sub 2}) second phase, on the order of 0.02. As the Pb content in solid solution increased, the ZrO{sub 2} content decreased; no ZrO{sub 2} was observed for the specimen containing 2.97% excess Pb (x = 0.0297). Over the range of Pb stoichiometry most specimens fractured predominantly transgranularly; however, 2.97% Pb excess PNZT 95/5 fractured predominantly intergranularly. No systematic changes in hardness or Weibull modulus were observed as a function of Pb content. Fracture toughness decreased slightly from 1.8 MPa{center_dot}m{sup 1/2} for Pb deficient specimens to 1.6 MPa{center_dot}m{sup 1/2} for specimens with excess Pb. Although there are microstructural differences with changes in Pb content, the mechanical properties did not vary substantially. However, the average failure stress and fracture toughness for PNZT 95/5 containing 2.97% excess Pb decreased slightly. It is expected that additional increases in Pb content would result in further mechanical property degradation. The decrease in mechanical properties for the 2.97% Pb excess ceramics could be the result of a weaker PbO-rich grain boundary phase present in the material. If better mechanical properties are desired, it is recommended that PNZT 95/5 ceramics be processed by a method whereby any excess Pb is depleted from the final sintered ceramic so that near-stoichiometric values of Pb concentration are reached. Otherwise, a PbO-rich grain boundary phase may exist in the ceramic which could potentially be detrimental to the mechanical properties of PNZT 95/5 ceramics.
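As an illustration of how Weibull parameters such as those reported are commonly extracted from a set of fracture strengths (a standard median-rank linearization, not necessarily the exact procedure used in this study), a sketch with placeholder data follows.

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate the Weibull modulus m and characteristic strength sigma_0 from
    fracture strengths using the linearized form
    ln(ln(1/(1-P))) = m*ln(sigma) - m*ln(sigma_0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    prob = (np.arange(1, n + 1) - 0.5) / n          # median-rank failure probability
    y = np.log(np.log(1.0 / (1.0 - prob)))
    x = np.log(s)
    m, intercept = np.polyfit(x, y, 1)              # slope is the Weibull modulus
    sigma_0 = np.exp(-intercept / m)
    return m, sigma_0

# Hypothetical strengths in MPa (placeholder values, not measured data)
m, sigma_0 = weibull_fit([78, 85, 92, 96, 101, 105, 110, 118, 123, 131])
print(f"Weibull modulus ~ {m:.1f}, characteristic strength ~ {sigma_0:.0f} MPa")
```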
This report represents the completion of a Laboratory-Directed Research and Development (LDRD) program to develop and fabricate geometric test structures for the measurement of transport properties in bulk GaN and AlGaN/GaN heterostructures. A large part of this study was spent examining fabrication issues related to the test structures used in these measurements, because GaN processing is still in its infancy. One such issue was surface passivation. Test samples without surface passivation often failed at electric fields below 50 kV/cm due to surface breakdown. A silicon nitride passivation layer of approximately 200 nm was used to reduce the effects of surface states and premature surface breakdown. Another issue was obtaining quality contacts for the material, especially in the case of the AlGaN/GaN heterostructure samples. Poor contact performance in the heterostructures plagued the test structures with lower than expected velocities due to carrier injection from the contacts themselves. Using a titanium-rich ohmic contact reduced the contact resistance and stopped the carrier injection. The final test structures had an etched constriction with varying lengths and widths (8x2, 10x3, 12x3, 12x4, 15x5, and 16x4 {micro}m) and massive contacts. A pulsed voltage input and a four-point measurement in a 50 {Omega} environment were used to determine the current through and the voltage dropped across the constriction. From these measurements, the drift velocity as a function of the applied electric field was calculated and thus the velocity-field characteristics in n-type bulk GaN and AlGaN/GaN test structures were determined. These measurements show an apparent saturation velocity near 2.5x10{sup 7} cm/s at 180 kV/cm and 3.1x10{sup 7} cm/s at a field of 140 kV/cm for the bulk GaN and AlGaN/GaN heterostructure samples, respectively. These experimental drift velocities mark the highest velocities measured in these materials to date and confirm the predictions of previous theoretical models using ensemble Monte Carlo simulations.
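A minimal sketch of the data reduction implied above, with placeholder values for the carrier density and constriction geometry (the actual analysis accounts for device-specific details not shown here): each pulsed four-point reading yields one field-velocity point via E = V/L and v_d = I/(q n A).

```python
import numpy as np

Q = 1.602e-19          # electron charge (C)

def velocity_field_point(i_amps, v_volts, n_cm3, width_um, thick_um, length_um):
    """Convert one pulsed four-point measurement on a constriction into a
    (field, drift velocity) pair: E = V/L and v_d = I/(q*n*A).
    Carrier density and cross-section are assumed known (placeholders here)."""
    area_cm2 = (width_um * 1e-4) * (thick_um * 1e-4)     # cross-section in cm^2
    length_cm = length_um * 1e-4
    e_field = v_volts / length_cm                        # V/cm
    v_drift = i_amps / (Q * n_cm3 * area_cm2)            # cm/s
    return e_field, v_drift

# Hypothetical numbers for illustration only (not measured values)
E, v = velocity_field_point(i_amps=0.025, v_volts=150.0,
                            n_cm3=1e17, width_um=3.0, thick_um=2.0, length_um=10.0)
print(f"E ~ {E:.2e} V/cm, v_d ~ {v:.2e} cm/s")
```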
The ability to precisely place nanomaterials at predetermined locations is necessary for realizing applications using these new materials. Using an organic template, we demonstrate directed growth of zinc oxide (ZnO) nanorods on silver films from aqueous solution. Spatial organization of ZnO nanorods in prescribed arbitrary patterns was achieved, with unprecedented control in selectivity, crystal orientation, and nucleation density. Surprisingly, we found that carboxylate endgroups of {omega}-alkanethiol molecules strongly inhibit ZnO nucleation. The mechanism for this observed selectivity is discussed.
The first viscous compressible three-dimensional BiGlobal linear instability analysis of leading-edge boundary layer flow has been performed. Results have been obtained by independent application of asymptotic analysis and numerical solution of the appropriate partial-differential eigenvalue problem. It has been shown that the classification of three-dimensional linear instabilities of the related incompressible flow [13] into symmetric and antisymmetric mode expansions in the chordwise coordinate persists in the compressible, subsonic flow regime at sufficiently large Reynolds numbers.
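As a reminder of what a BiGlobal analysis assumes (a standard modal form, not taken from the paper, with illustrative notation), the perturbation has two-dimensional amplitude functions in the resolved plane and a wave ansatz in the remaining homogeneous direction:

```latex
% Schematic BiGlobal ansatz: two-dimensional amplitude functions in the resolved
% plane (x,y), wavelike dependence in the homogeneous direction z with
% wavenumber beta and complex frequency omega.
\mathbf{q}'(x,y,z,t) \;=\; \hat{\mathbf{q}}(x,y)\, e^{\,i(\beta z - \omega t)} + \mathrm{c.c.},
\qquad \mathrm{Im}(\omega) > 0 \;\Rightarrow\; \text{temporal instability.}
```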
Techniques for mitigating the adsorption of {sup 137}Cs and {sup 60}Co on metal surfaces (e.g., RAM packages) exposed to contaminated water (e.g., spent-fuel pools) have been developed and experimentally verified. The techniques are also effective in removing some of the {sup 60}Co and {sup 137}Cs that may have been adsorbed on the surfaces after removal from the contaminated water. The principle of the {sup 137}Cs mitigation technique is based upon ion-exchange processes. In contrast, {sup 60}Co contamination primarily resides in minute particles of crud that become lodged on cask surfaces. Crud is an insoluble Fe-Ni-Cr oxide that forms colloidal-sized particles as reactor cooling systems corrode. Because of the similarity between Ni{sup 2+} and Co{sup 2+}, crud is able to scavenge and retain traces of cobalt as it forms. A number of organic compounds have great specificity for combining with nickel and cobalt. Ongoing research is investigating the effectiveness of the chemical complexing agent EDTA with regard to its ability to dissolve the host phase (crud), thereby liberating the entrained {sup 60}Co into a solution where it can be rinsed away.
The National Spent Nuclear Fuel Program, located at the Idaho National Laboratory (INL), coordinates and integrates national efforts in management and disposal of US Department of Energy (DOE)-owned spent nuclear fuel. These management functions include development of standardized systems for long-term disposal in the proposed Yucca Mountain repository. Nuclear criticality control measures are needed in these systems to avoid restrictive fissile loading limits because of the enrichment and total quantity of fissile material in some types of the DOE spent nuclear fuel. This need is being addressed by development of corrosion-resistant, neutron-absorbing structural alloys for nuclear criticality control. This paper outlines results of a metallurgical development program that is investigating the alloying of gadolinium into a nickel-chromium-molybdenum alloy matrix. Gadolinium has been chosen as the neutron absorption alloying element due to its high thermal neutron absorption cross section and low solubility in the expected repository environment. The nickel-chromium-molybdenum alloy family was chosen for its known corrosion performance, mechanical properties, and weldability. The workflow of this program includes chemical composition definition, primary and secondary melting studies, ingot conversion processes, properties testing, and national consensus codes and standards work. The microstructural investigation of these alloys shows that the gadolinium addition is present in the alloy as a gadolinium-rich second phase. The mechanical strength values are similar to those expected for commercial Ni-Cr-Mo alloys. The alloys have been corrosion tested with acceptable results. The initial results of weldability tests have also been acceptable. Neutronic testing in a moderated critical array has generated favorable results. An American Society for Testing and Materials material specification has been issued for the alloy and a Code Case has been submitted to the American Society of Mechanical Engineers for code qualification.
This report describes work carried out under a Sandia National Laboratories Excellence in Engineering Fellowship in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Our research group (at UIUC) is developing an intelligent robot and attempting to teach it language. While there are many aspects of this research, for the purposes of this report the most important are the following ideas. Language is primarily based on semantics, not syntax. To truly learn meaning, the language engine must be part of an embodied intelligent system, one capable of using associative learning to form concepts from the perception of experiences in the world, and further capable of manipulating those concepts symbolically. In the work described here, we explore the use of hidden Markov models (HMMs) in this capacity. HMMs are capable of automatically learning and extracting the underlying structure of continuous-valued inputs and representing that structure in the states of the model. These states can then be treated as symbolic representations of the inputs. We describe a composite model consisting of a cascade of HMMs that can be embedded in a small mobile robot and used to learn correlations among sensory inputs to create symbolic concepts. These symbols can then be manipulated linguistically and used for decision making. This is the final report for the University Collaboration LDRD project, 'A Robotic Framework for Semantic Concept Learning'.
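A minimal sketch of the underlying idea, assuming the hmmlearn library as stand-in tooling (an assumption of ours, not the project's actual code): an HMM is fit to a continuous-valued sensor stream and its decoded hidden states are used as discrete symbols for downstream association and decision making.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed tooling; not the project's code

# Fake continuous-valued sensor stream: two channels, three latent "situations"
rng = np.random.default_rng(1)
segments = [rng.normal(m, 0.3, size=(200, 2)) for m in (0.0, 2.0, 4.0)]
X = np.vstack(segments)

# Fit an HMM; each learned hidden state acts as a symbolic label for a region
# of the sensory input space.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100,
                    random_state=0)
model.fit(X)
symbols = model.predict(X)             # discrete symbol sequence from raw inputs

# Downstream, the symbol stream (not the raw floats) would feed association,
# decision making, or a higher-level HMM in a cascade.
print(np.unique(symbols), symbols[:10])
```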
Finding the central sets, such as center and median sets, of a network topology is a fundamental step in the design and analysis of complex distributed systems. This paper presents distributed synchronous algorithms for finding central sets in general tree structures. Our algorithms are distinguished from previous work in that they use only qualitative information, thus reducing the constants hidden in the asymptotic notation, and in that all vertices of the topology know the central sets upon termination.
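To make the target quantity concrete, the sketch below gives the standard centralized leaf-peeling characterization of the tree center; this is not the distributed synchronous algorithm of the paper, although each peeling round loosely mirrors one synchronous step in which the current leaves retire.

```python
from collections import defaultdict

def tree_center(n, edges):
    """Return the center set (1 or 2 vertices) of a tree on vertices 0..n-1
    by repeatedly peeling off the current leaves."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    leaves = [v for v in remaining if len(adj[v]) <= 1]
    while len(remaining) > 2:
        next_leaves = []
        for leaf in leaves:
            remaining.discard(leaf)
            for nb in adj[leaf]:
                adj[nb].discard(leaf)
                if nb in remaining and len(adj[nb]) == 1:
                    next_leaves.append(nb)
            adj[leaf].clear()
        leaves = next_leaves
    return remaining

# Example: a path 0-1-2-3-4 plus a branch 2-5; the center is {2}
print(tree_center(6, [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)]))
```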
Extremely short collision mean free paths and near-singular elastic and inelastic differential cross sections (DCS) make analog Monte Carlo simulation an impractical tool for charged particle transport. The widely used alternative, the condensed history method, while efficient, also suffers from several limitations arising from the use of precomputed smooth distributions for sampling. There is much interest in developing computationally efficient algorithms that implement the correct transport mechanics. Here we present a nonanalog transport-based method that incorporates the correct transport mechanics and is computationally efficient for implementation in single event Monte Carlo codes. Our method systematically preserves important physics and is mathematically rigorous. It builds on higher order Fokker-Planck and Boltzmann Fokker-Planck representations of the scattering and energy-loss process, and we accordingly refer to it as a Generalized Boltzmann Fokker-Planck (GBFP) approach. We postulate the existence of nonanalog single collision scattering and energy-loss distributions (differential cross sections) and impose the constraint that the first few momentum transfer and energy loss moments be identical to corresponding analog values. This is effected through a decomposition or hybridizing scheme wherein the singular forward peaked, small energy-transfer collisions are isolated and de-singularized using different moment-preserving strategies, while the large angle, large energy-transfer collisions are described by the exact (analog) DCS or approximated to a high degree of accuracy. The inclusion of the latter component allows the higher angle and energy-loss moments to be accurately captured. This procedure yields a regularized transport model characterized by longer mean free paths and smoother scattering and energy transfer kernels than analog. In practice, acceptable accuracy is achieved with two rigorously preserved moments, but accuracy can be systematically increased to analog level by preserving successively higher moments with almost no change to the algorithm. Details of specific moment-preserving strategies will be described and results presented for dose in heterogeneous media due to a pencil beam and a line source of monoenergetic electrons. Error and runtimes of our nonanalog formulations will be contrasted against condensed history implementations.
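The moment constraints described above can be written schematically as follows; the notation is illustrative, and the exact sets of preserved momentum-transfer and energy-loss moments in the actual GBFP formulation may differ in detail.

```latex
% Schematic moment-preserving constraint: the nonanalog (smoother,
% longer-mean-free-path) differential cross section sigma*(mu) reproduces the
% first N momentum-transfer moments of the analog DCS sigma(mu); an analogous
% set of constraints applies to the energy-loss moments.
\int_{-1}^{1} (1-\mu)^{n}\,\sigma^{*}(\mu)\,d\mu
  \;=\; \int_{-1}^{1} (1-\mu)^{n}\,\sigma(\mu)\,d\mu,
  \qquad n = 1,\dots,N .
```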
The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We present a continuation of the theoretical study by Zhang and Sejnowski, in which the influence of neuronal tuning properties on encoding accuracy is analyzed using information theory. When a finite stimulus space is considered, we show that the encoding accuracy improves with narrow tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.
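Encoding accuracy in such analyses is typically quantified with Fisher information; as a reminder (a textbook expression, not a result derived here), for a population of independent Poisson-spiking neurons with tuning curves f_i(x) observed over a time window T:

```latex
% Standard Fisher-information expression for independent Poisson neurons and
% the associated Cramer-Rao bound on the accuracy of any unbiased estimator.
J(x) \;=\; T \sum_{i} \frac{\bigl[f_i'(x)\bigr]^{2}}{f_i(x)},
\qquad \mathrm{Var}(\hat{x}) \;\ge\; \frac{1}{J(x)} .
```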
Human behavior is a function of an iterative interaction between the stimulus environment and past experience. It is not simply a matter of the current stimulus environment activating the appropriate experience or rule from memory (e.g., if it is dark and I hear a strange noise outside, then I turn on the outside lights and investigate). Rather, it is a dynamic process that takes into account not only things one would generally do in a given situation, but things that have recently become known (e.g., there have recently been coyotes seen in the area and one is known to be rabid), as well as other immediate environmental characteristics (e.g., it is snowing outside, I know my dog is outside, I know the police are already outside, etc.). All of these factors combine to inform me of the most appropriate behavior for the situation. If it were the case that humans had a rule for every possible contingency, the amount of storage that would be required to enable us to fluidly deal with most situations we encounter would rapidly become biologically untenable. We can all deal with contingencies like the one above with fairly little effort, but if it isn't based on rules, what is it based on? The assertion of the Cognitive Systems program at Sandia for the past 5 years is that at the heart of this ability to effectively navigate the world is an ability to discriminate between different contexts (i.e., Dynamic Context Discrimination, or DCD). While this assertion in and of itself might not seem earthshaking, it is compelling that this ability and its components show up in a wide variety of paradigms across different subdisciplines in psychology. We begin by outlining, at a high functional level, the basic ideas of DCD. We then provide evidence from several different literatures and paradigms that support our assertion that DCD is a core aspect of cognitive functioning. Finally, we discuss DCD and the computational model that we have developed as an instantiation of DCD in more detail. Before commencing with our overview of DCD, we should note that DCD is not necessarily a theory in the classic sense. Rather, it is a description of cognitive functioning that seeks to unify highly similar findings across a wide variety of literatures. Further, we believe that such convergence warrants a central place in efforts to computationally emulate human cognition. That is, DCD is a general principle of cognition. It is also important to note that while we are drawing parallels across many literatures, these are functional parallels and are not necessarily structural ones. That is, we are not saying that the same neural pathways are involved in these phenomena. We are only saying that the different neural pathways that are responsible for the appearance of these various phenomena follow the same functional rules - the mechanisms are the same even if the physical parts are distinct. Furthermore, DCD is not a causal mechanism - it is an emergent property of the way the brain is constructed. DCD is the result of neurophysiology (cf. John, 2002, 2003). Finally, it is important to note that we are not proposing a generic learning mechanism such that one biological algorithm can account for all situation interpretation. Rather, we are pointing out that there are strikingly similar empirical results across a wide variety of disciplines that can be understood, in part, by similar cognitive processes. 
It is entirely possible, and even assumed in some cases (i.e., primary language acquisition), that these more generic cognitive processes are complemented and constrained by various limits which may or may not be biological in nature (cf. Bates & Elman, 1996; Elman, in press).
Dual-frequency reactors employ source rf power supplies to generate plasma and bias supplies to extract ions. There is debate over the choices for the source and bias frequencies. Higher frequencies facilitate plasma generation, but their shorter wavelengths may cause spatial variations in plasma properties. Electrical nonlinearity of plasma sheaths causes harmonic generation and mixing of the source and bias frequencies. These processes, and the resulting spectrum of frequencies, are as much dependent on the electrical characteristics of the matching networks and on chamber geometry as on plasma sheath properties. We investigated such electrical effects in a 300-mm Applied Materials plasma reactor. Data were taken for a 13.56-MHz bias frequency (chuck) and for source frequencies from 30 to 160 MHz (upper electrode). An rf magnetic-field probe (B-dot loop) was used to measure the radial variation of the fields inside the plasma. We will describe the results of this work.
A laser hazard analysis and safety assessment was performed for the LH-40 IR Laser Rangefinder based on the 2000 versions of the American National Standards Institute's Standard Z136.1, Safe Use of Lasers, and Standard Z136.6, Safe Use of Lasers Outdoors. The LH-40 IR laser is central to the Long Range Reconnaissance and Observation System (LORROS). The LORROS is being evaluated by the Department 4149 Group to determine its capability as a long-range assessment tool. The manufacturer lists the laser rangefinder as 'eye safe' (a Class 1 laser classified under the CDRH Compliance Guide for Laser Products and the 21 CFR 1040 Laser Product Performance Standard). It was necessary that SNL validate this prior to its use involving the general public. A formal laser hazard analysis is presented for the typical mode of operation.
This report surveys the needs associated with environmental monitoring and long-term environmental stewardship. Emerging sensor technologies are reviewed to identify compatible technologies for various environmental monitoring applications. The contaminants that are considered in this report are grouped into the following categories: (1) metals, (2) radioisotopes, (3) volatile organic compounds, and (4) biological contaminants. Regulatory drivers are evaluated for different applications (e.g., drinking water, storm water, pretreatment, and air emissions), and sensor requirements are derived from these regulatory metrics. Sensor capabilities are then summarized according to contaminant type, and the applicability of the different sensors to various environmental monitoring applications is discussed.
This report describes both a general methodology and some specific examples of passive radio receivers. A passive radio receiver uses no direct electrical power but makes sole use of the power available in the radio spectrum. These radio receivers are suitable as low data-rate receivers or passive alerting devices for standard, high power radio receivers. Some zero-power radio architectures exhibit significant improvements in range with the addition of very low power amplifiers or signal processing electronics. These ultra-low power radios are also discussed and compared to the purely zero-power approaches.
We modeled the effects of temperature, degree of polymerization, and surface coverage on the equilibrium structure of tethered poly(N-isopropylacrylamide) chains immersed in water. We employed a numerical self-consistent field theory in which the experimental phase diagram was used as input to the theory. At low temperatures, the composition profiles are approximately parabolic and extend into the solvent. In contrast, at temperatures above the lower critical solution temperature (LCST) of the bulk solution, the polymer profiles are collapsed near the surface. The layer thickness and the effective monomer fraction within the layer undergo what appears to be a first-order change at a temperature that depends on surface coverage and chain length. Our results suggest that, as a result of the tethering constraint, the phase diagram becomes distorted relative to the bulk polymer solution and exhibits closed-loop behavior. As a consequence, we find that the relative magnitude of the layer thickness change at 20 and 40 C is a nonmonotonic function of surface coverage, with a maximum that shifts to lower surface coverage as the chain length increases, in qualitative agreement with experiment.
Sandia National Laboratories, under contract to the Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geostatistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques to the geologic data provided by NUMO. A workshop report on volcanism produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) lists a set of the most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region in Japan from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables. In other words, the geologic variable vectors were found to be spatially incomplete. However, it is necessary to have complete geologic variable vectors to perform multivariate statistical analyses. As a first step towards constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to this grid system. The recorded data on volcanic activity for the Sengan region were also produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism. In the regionalized classification procedure, this step is known as the variable selection step. The following variables were determined to be most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH value, presence of volcanic rocks, and presence of hydrothermal alteration. Data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate values for each variable at the 23,949 centers of the chosen 1 km cell grid system that represents the Sengan region. These values formed complete geologic variable vectors at each of the 23,949 1-km cell centers.
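A minimal sketch of the ordinary-kriging step used to fill grid cells from scattered observations of a single variable is given below; the exponential variogram and its parameters are placeholders, not the models fitted to the Sengan data.

```python
import numpy as np

def exponential_cov(h, sill=1.0, rng=20.0, nugget=0.05):
    """Covariance implied by an exponential variogram (placeholder parameters)."""
    return nugget * (h == 0) + (sill - nugget) * np.exp(-h / rng)

def ordinary_krige(xy_data, values, xy_target):
    """Ordinary kriging of one variable at one target location."""
    n = len(values)
    d = np.linalg.norm(xy_data[:, None, :] - xy_data[None, :, :], axis=2)
    # Kriging system with a Lagrange multiplier enforcing unit-sum weights
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exponential_cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exponential_cov(np.linalg.norm(xy_data - xy_target, axis=1))
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ values
    variance = exponential_cov(0.0) - w @ b      # ordinary kriging variance
    return estimate, variance

# Hypothetical sample: geothermal-gradient-like values at scattered points (km)
gen = np.random.default_rng(2)
pts = gen.uniform(0, 50, size=(30, 2))
vals = 3.0 + 0.05 * pts[:, 0] + gen.normal(0, 0.2, 30)
print(ordinary_krige(pts, vals, np.array([25.0, 25.0])))
```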
Sandia has developed and tested mockups armored with W rods over the last decade and pioneered the initial development of W rod armor for the International Thermonuclear Experimental Reactor (ITER) in the 1990s. We have also developed 2D and 3D thermal and stress models of W rod-armored plasma facing components (PFCs) and test mockups and are applying the models both to short pulses, i.e., edge localized modes (ELMs), and to thermal performance in steady state for applications in C-MOD, DiMES testing, and ITER. This paper briefly describes the 2D and 3D models and their applications, with emphasis on modeling for an ongoing test program that simulates repeated heat loads from ITER ELMs.
In recent dynamic hohlraum experiments on the Z facility, Al and MgF{sub 2} tracer layers were embedded in cylindrical CH{sub 2} foam targets to provide K-shell lines in the keV spectral region for diagnosing the conditions of the interior hohlraum plasma. The position of the tracers was varied: sometimes they were placed 2 mm from the ends of the foam cylinder and sometimes at the ends of the cylinder. Also varied was the composition of the tracers in the sense that pure Al layers, pure MgF{sub 2} layers, or mixtures of the elements were employed on various shots. Time-resolved K-shell spectra of both Al and Mg show mostly absorption lines. These data can be analyzed with detailed configuration atomic models of carbon, aluminum, and magnesium in which spectra are calculated by solving the radiation transport equation for as many as 4100 frequencies. We report results from shot Z1022 to illustrate the basic radiation physics and the capabilities as well as limitations of this diagnostic method.
We present a formulation for coupling atomistic and continuum simulation methods for application to both quasistatic and dynamic analyses. In our formulation, a coarse-scale continuum discretization is assumed to cover all parts of the computational domain, with atomistic crystals introduced only in regions of interest. The geometries of the discretization and the crystal are allowed to overlap arbitrarily. Our approach uses interpolation and projection operators to link the kinematics of each region, which are then used to formulate a system potential energy from which we derive coupled expressions for the forces acting in each region. A hyperelastic constitutive formulation is used to compute the stress response of the defect-free continuum with constitutive properties derived from the Cauchy-Born rule. A correction to the Cauchy-Born rule is introduced in the overlap region to minimize fictitious boundary effects. Features of our approach are demonstrated with simulations in one, two, and three dimensions.
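For context, the Cauchy-Born rule referenced above can be stated as follows (schematic notation of ours, not the report's): the continuum strain-energy density at a point with deformation gradient F is the energy per unit reference volume of the underlying crystal deformed homogeneously by F.

```latex
% Cauchy-Born constitutive rule (schematic): lattice vectors A_i are mapped to
% F A_i, and the resulting cell energy per reference volume defines W(F).
W(\mathbf{F}) \;=\; \frac{1}{\Omega_{0}}\,
  E_{\mathrm{cell}}\!\left(\{\mathbf{F}\,\mathbf{A}_{i}\}\right),
\qquad
\mathbf{P} \;=\; \frac{\partial W}{\partial \mathbf{F}}
\quad \text{(first Piola--Kirchhoff stress).}
```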
With increased terrorist threats in the past few years, it is no longer feasible to feel confident that a facility is well protected with a static security system. Potential adversaries often research their targets, examining procedural and system changes, in order to attack at a vulnerable time. Such system changes may include scheduled sensor maintenance, scheduled or unscheduled changes in the guard force, facility alert level changes, sensor failures or degradation, etc. All of these changes impact the system effectiveness and can make a facility more vulnerable. Currently, a standard analysis of system effectiveness is performed approximately every six months using a vulnerability assessment tool called ASSESS (Analytical Systems and Software for Evaluating Safeguards and Systems). New standards for determining a facility's system effectiveness will be defined by tools that are currently under development, such as ATLAS (Adversary Time-line Analysis System) and NextGen (Next Generation Security Simulation). Although these tools are useful to model analyses at different spatial resolutions and can support some sensor dynamics using statistical models, they are limited in that they require a static system state as input. They cannot account for the dynamics of the system through day-to-day operations. The emphasis of this project was to determine the feasibility of dynamically monitoring the facility security system and performing an analysis as changes occur. Hence, the system effectiveness is known at all times, greatly assisting time-critical decisions in response to a threat or a potential threat.
We have successfully demonstrated selective trapping, concentration, and release of various biological organisms and inert beads by insulator-based dielectrophoresis within a polymeric microfluidic device. The microfluidic channels and internal features, in this case arrays of insulating posts, were initially created through standard wet-etch techniques in glass. This glass chip was then transformed into a nickel stamp through the process of electroplating. The resultant nickel stamp was then used as the replication tool to produce the polymeric devices through injection molding. The polymeric devices were made of Zeonor{reg_sign} 1060R, a polyolefin copolymer resin selected for its superior chemical resistance and optical properties. These devices were then optically aligned with another polymeric substrate that had been machined to form fluidic vias. The two polymeric substrates were then bonded together through thermal diffusion bonding. The sealed devices were utilized to selectively separate and concentrate biological pathogen simulants, including spores that were selectively concentrated and released by simply applying DC voltages across the plastic replicates via platinum electrodes in the inlet and outlet reservoirs. The dielectrophoretic response of the organisms is observed to be a function of the applied electric field and the post size, geometry, and spacing. Cells were selectively trapped against a background of labeled polystyrene beads and spores to demonstrate that samples of interest can be separated from a diverse background. We have implemented and demonstrated here a methodology to determine the concentration factors obtained in these devices.
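As a reminder of the physics governing trapping near the insulating posts (a textbook point-dipole expression, included for context rather than taken from this work):

```latex
% Standard DEP force on a spherical particle of radius r in a medium of
% permittivity eps_m, with Clausius-Mossotti factor f_CM.
\mathbf{F}_{\mathrm{DEP}} \;=\; 2\pi\,\varepsilon_{m}\, r^{3}\,
  \mathrm{Re}\!\bigl[f_{\mathrm{CM}}\bigr]\,\nabla\lvert\mathbf{E}\rvert^{2},
\qquad
f_{\mathrm{CM}} \;=\; \frac{\varepsilon_{p}^{*}-\varepsilon_{m}^{*}}
                           {\varepsilon_{p}^{*}+2\varepsilon_{m}^{*}} .
```

In the DC limit relevant here, the complex permittivities reduce to the conductivities, so the sign of f_CM (whether particles are drawn toward or pushed away from the high-field regions at the post edges) is set by the particle and medium conductivities.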
The effects of ionizing and neutron radiation on the characteristics and performance of laser diodes are reviewed, and the formation mechanisms for nonradiative recombination centers, the primary type of radiation damage in laser diodes, are discussed. Additional topics include the detrimental effects of aluminum in the active (lasing) volume, the transient effects of high-dose-rate pulses of ionizing radiation, and a summary of ways to improve the radiation hardness of laser diodes. Radiation effects on laser diodes emitting in the wavelength region around 808 nm are emphasized.
We find that small temperature changes cause steps on the NiAl(110) surface to move. We show that this step motion occurs because mass is transferred between the bulk and the surface as the concentration of bulk thermal defects (i.e., vacancies) changes with temperature. Since the change in an island's area with a temperature change is found to scale strictly with the island's step length, the thermally generated defects are created (annihilated) very near the surface steps. To quantify the bulk/surface exchange, we oscillate the sample temperature and measure the amplitude and phase lag of the system response, i.e., the change in an island's area normalized to its perimeter. Using a one-dimensional model of defect diffusion through the bulk in a direction perpendicular to the surface, we determine the migration and formation energies of the bulk thermal defects. During surface smoothing, we show that there is no flow of material between islands on the same terrace and that all islands in a stack shrink at the same rate. We conclude that smoothing occurs by mass transport through the bulk of the crystal rather than via surface diffusion. Based on the measured relative sizes of the activation energies for island decay, defect migration, and defect formation, we show that attachment/detachment at the steps is the rate-limiting step in smoothing.
The goal of z-pinch inertial fusion energy (IFE) is to extend the single-shot z-pinch inertial confinement fusion (ICF) results on Z to a repetitive-shot z-pinch power plant concept for the economical production of electricity. Z produces up to 1.8 MJ of x-rays at powers as high as 230 TW. Recent target experiments on Z have demonstrated capsule implosion convergence ratios of 14-21 with a double-pinch driven target, and DD neutron yields up to 8x10{sup 10} with a dynamic hohlraum target. For z-pinch IFE, a power plant concept is discussed that uses high-yield IFE targets (3 GJ) with a low rep-rate per chamber (0.1 Hz). The concept includes a repetitive driver at 0.1 Hz, a Recyclable Transmission Line (RTL) to connect the driver to the target, high-yield targets, and a thick-liquid wall chamber. Recent funding by a U.S. Congressional initiative for $4M for FY04 is supporting research on RTLs, repetitive pulsed power drivers, shock mitigation, full RTL cycle planned experiments, high-yield IFE targets, and z-pinch power plant technologies. Recent results of research in all of these areas are discussed, and a Road Map for Z-Pinch IFE is presented.
This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents the modeling efforts, built on limited data and working literature paradigms, along with recommendations for future attempts at modeling conflict.
The Z-Pinch Power Plant uses the results from Sandia National Laboratories Z accelerator in a power plant application to generate energy pulses using inertial confinement fusion. A collaborative project has been initiated by Sandia to investigate the scientific principles of a power generation system using this technology. Research is under way to develop an integrated concept that describes the operational issues of a 1000 MW electrical power plant. Issues under consideration include: 1-20 gigajoule fusion pulse containment, repetitive mechanical connection of heavy hardware, generation of terawatt pulses every 10 seconds, recycling of ten thousand tons of steel, and manufacturing of millions of hohlraums and capsules per year. Additionally, waste generation and disposal issues are being examined. This paper describes the current concept for the plant and also the objectives for future research.
As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high-bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate-communication-bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.
As part of DARPA's Software for Distributed Robotics Program within the Information Processing Technologies Office (IPTO), Sandia National Laboratories was tasked with identifying military airborne and maritime missions that require cooperative behaviors as well as identifying generic collective behaviors and performance metrics for these missions. This report documents this study. A prioritized list of general military missions applicable to land, air, and sea has been identified. From the top eight missions, nine generic reusable cooperative behaviors have been defined. A common mathematical framework for cooperative controls has been developed and applied to several of the behaviors. The framework is based on optimization principles and has provably convergent properties. A three-step optimization process is used to develop the decentralized control law that minimizes the behavior's performance index. A connective stability analysis is then performed to determine constraints on the communication sample period and the local control gains. Finally, the communication sample period for four different network protocols is evaluated based on the network graph, which changes throughout the task. Using this mathematical framework, two metrics for evaluating these behaviors are defined. The first metric is the residual error in the global performance index that is used to create the behavior. The second metric is communication sample period between robots, which affects the overall time required for the behavior to reach its goal state.
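A toy instance of the optimization-based framework is sketched below (illustrative only; not one of the nine documented behaviors): robots descend the gradient of a quadratic global performance index using only their neighbors' states, sampled once per communication period, and the residual index value corresponds to the first metric defined above. Graph, gains, and positions are placeholders.

```python
import numpy as np

# Decentralized gradient descent on J(x) = 0.5 * sum_{(i,j) in E} ||x_i - x_j||^2
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # placeholder communication graph
n, gain = 4, 0.2                                   # robots, local control gain
x = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])  # positions (m)

neighbors = {i: set() for i in range(n)}
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

for step in range(200):                            # one update per sample period
    # Each robot i uses only its neighbors' last-sampled positions
    grad = np.array([sum(x[i] - x[j] for j in neighbors[i]) for i in range(n)])
    x = x - gain * grad                            # decentralized gradient step
residual = 0.5 * sum(np.sum((x[a] - x[b]) ** 2) for a, b in edges)
print("residual performance index:", residual)     # first metric named in the text
```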
When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. A new migration-correction autofocus algorithm has been developed that estimates the migration and applies phase and frequency corrections to properly focus the image.
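A much-simplified illustration of the correction step only (the estimation step of the actual algorithm is not shown, and names and data are placeholders): given per-pulse estimates of residual range migration, each range profile is re-aligned with a linear phase ramp in the range-frequency domain via the Fourier shift theorem.

```python
import numpy as np

def remove_range_migration(range_profiles, shift_bins):
    """Shift each pulse's range profile left by shift_bins (possibly fractional)
    range bins via a linear phase ramp in the range-frequency domain, so a
    migrating point response is re-aligned across pulses.
    range_profiles: (num_pulses, num_bins) complex array."""
    num_bins = range_profiles.shape[1]
    freqs = np.fft.fftfreq(num_bins)                        # cycles per bin
    spectra = np.fft.fft(range_profiles, axis=1)
    ramp = np.exp(2j * np.pi * freqs[None, :] * np.asarray(shift_bins)[:, None])
    return np.fft.ifft(spectra * ramp, axis=1)

# Synthetic check: a point response drifting linearly across range bins
num_pulses, num_bins = 64, 256
t = np.arange(num_bins)
migration = np.linspace(0.0, 6.0, num_pulses)               # bins of residual migration
data = np.array([np.sinc(t - 100 - m) for m in migration]).astype(complex)
fixed = remove_range_migration(data, migration)
peaks = np.abs(fixed).argmax(axis=1)
print(peaks.min(), peaks.max())                             # both should be ~100
```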
This report is a collection of documents written by the group members of the Engineering Sciences Research Foundation (ESRF) Laboratory Directed Research and Development (LDRD) project titled 'A Robust, Coupled Approach to Atomistic-Continuum Simulation'. Presented in this document are: the development of a formulation for performing quasistatic, coupled, atomistic-continuum simulation that includes cross terms in the equilibrium equations arising from kinematic coupling and corrections to the calculation of system potential energy to account for continuum elements that overlap regions containing atomic bonds; evaluations of thermo-mechanical continuum quantities calculated within atomistic simulations, including measures of stress, temperature, and heat flux; calculations used to determine the appropriate spatial and time averaging necessary for these atomistically defined expressions to have the same physical meaning as their continuum counterparts; and a formulation to quantify a continuum 'temperature field', the first step towards constructing a coupled atomistic-continuum approach capable of finite-temperature and dynamic analyses.
This document describes the modeling of the physics (and eventually the features) in the Integrated TIGER Series (ITS) codes [Franke 04]. The description is largely drawn from various sources in the open literature (especially [Seltzer 88], [Seltzer 91], [Lorence 89], [Halbleib 92]), although those sources often describe the ETRAN code, from which the physics engine of ITS is derived but with which it is not necessarily identical. This is meant to be an evolving document, with more coverage and detail added as time goes on; as such, entire sections are still incomplete. Presently, this document covers the continuous-energy ITS codes, with more complete coverage of photon transport (though electron transport is not completely ignored). In particular, this document does not cover the multigroup code, MCODES (externally applied electromagnetic fields), or high-energy phenomena (photon pair production). In this version, equations are largely left to the references, though they may be pulled in over time.
An experiment at Sandia National Laboratories confirmed that a ternary salt (Flinabe, a ternary mixture of LiF, BeF{sub 2} and NaF) had a sufficiently low melting temperature ({approx}305 C) to be useful for first wall and blanket applications using flowing molten salts that were investigated in the Advanced Power Extraction (APEX) Program.[1] In the experiment, the salt pool was contained in a stainless steel crucible under vacuum. One thermocouple was placed in the salt and two others were embedded in the crucible. The results and observations from the experiment are reported in the companion paper.[2] The paper presented here covers a 3-D finite element thermal analysis of the salt pool and crucible. The analysis was done to evaluate the thermal gradients in the salt pool and crucible and to compare the temperatures of the three thermocouples. One salt mixture appeared to melt and to solidify as a eutectic, with a visible plateau in the cooling curve (i.e., the time versus temperature trace for the thermocouple in the salt pool). This behavior was reproduced with the thermal model. Cases were run with several values of the thermal conductivity and latent heat of fusion to see the parametric effects of these changes on the respective cooling curves. The crucible was heated by an electrical heater in an inverted well at the base of the crucible. It lost heat primarily by radiation from the outer surfaces of the crucible and the top surface of the salt. The primary independent factors in the model were the emissivity of the crucible (and of the salt) and the fraction of the heater power coupled into the crucible. The model was 'calibrated' using thermocouple data and heating power from runs in which the crucible contained no salt.
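A lumped-parameter analogue of the thermal model (the report used a 3-D finite element model) illustrates how a latent-heat plateau appears in the computed cooling curve; all property values below are placeholders rather than the calibrated ones.

```python
import numpy as np

SIGMA = 5.670e-8                      # Stefan-Boltzmann constant (W/m^2/K^4)

def cooling_curve(t_end=6000.0, dt=1.0, T0=900.0, T_env=300.0,
                  mass=0.5, cp=1800.0, L=400e3, T_melt=578.0, dT_melt=5.0,
                  emissivity=0.4, area=0.02, power_in=0.0):
    """Lumped cooling of a salt pool with radiative loss and a latent-heat
    plateau smeared over [T_melt, T_melt + dT_melt] via an effective cp.
    All property values are illustrative placeholders (temperatures in kelvin)."""
    T = T0
    out = []
    for t in np.arange(0.0, t_end, dt):
        cp_eff = cp + (L / dT_melt if T_melt <= T <= T_melt + dT_melt else 0.0)
        q_rad = emissivity * SIGMA * area * (T**4 - T_env**4)
        T += dt * (power_in - q_rad) / (mass * cp_eff)
        out.append((t, T))
    return np.array(out)

curve = cooling_curve()
# The arrest near T_melt mimics the visible plateau seen in the measured
# time-temperature trace as the salt solidifies.
print(curve[::600])
```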
This paper analyzes the relationship between current renewable energy technology costs and cumulative production, research, development and demonstration expenditures, and other institutional influences. Combining the theoretical framework of 'learning by doing' and developments in 'learning by searching' with the fields of organizational learning and institutional economics offers a complete methodological framework to examine the underlying capital cost trajectory when developing electricity cost estimates used in energy policy planning models. Sensitivities of the learning rates for global wind and solar photovoltaic technologies to changes in the model parameters are tested. The implications of the results indicate that institutional policy instruments play an important role for these technologies to achieve cost reductions and further market adoption.
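A minimal sketch of the two-factor learning-curve regression ('learning by doing' plus 'learning by searching') that underlies such learning-rate estimates is given below, with synthetic placeholder data rather than the wind or PV series analyzed in the paper.

```python
import numpy as np

def two_factor_learning_fit(cost, cum_capacity, cum_rd):
    """Fit log(C) = a - b1*log(Q) - b2*log(K) by ordinary least squares, where
    Q is cumulative installed capacity (doing) and K is cumulative RD&D
    spending (searching). Returns the two learning rates, 1 - 2**-b."""
    X = np.column_stack([np.ones_like(cost), np.log(cum_capacity), np.log(cum_rd)])
    y = np.log(cost)
    a, neg_b1, neg_b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    b1, b2 = -neg_b1, -neg_b2
    return 1 - 2.0 ** -b1, 1 - 2.0 ** -b2

# Placeholder series (illustrative only, not actual wind/PV data)
Q = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)        # cumulative GW
K = np.array([1.0, 1.3, 1.8, 2.5, 3.3, 4.2, 5.5])           # cumulative RD&D (B$)
C = 4000 * Q ** -0.15 * K ** -0.10                          # synthetic cost ($/kW)
ldr, lsr = two_factor_learning_fit(C, Q, K)
print(f"learning-by-doing rate ~ {ldr:.1%}, learning-by-searching rate ~ {lsr:.1%}")
```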
Waste characterization is probably the most costly part of radioactive waste management. An important part of this characterization is the measurement of headspace gas in waste containers in order to demonstrate compliance with Resource Conservation and Recovery Act (RCRA) or transportation requirements. The traditional chemical analysis methods, which include all steps of gas sampling, sample shipment, and laboratory analysis, are expensive and time-consuming and increase workers' exposure to hazardous environments. Therefore, an alternative technique that can provide quick, in-situ, real-time detection of headspace gas compositions is highly desirable. This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Potential Application of Microsensor Technology in Radioactive Waste Management with Emphasis on Headspace Gas Detection'. The objective of this project is to bridge the technical gap between the current status of microsensor development and the intended applications of these sensors in nuclear waste management. The major results are summarized below: {sm_bullet} A literature review was conducted on the regulatory requirements for headspace gas sampling/analysis in waste characterization and monitoring. The most relevant gaseous species and the related physiochemical environments were identified. It was found that preconcentrators might be needed in order for chemiresistor sensors to meet desired detection limits. {sm_bullet} A long-term stability test was conducted for a polymer-based chemiresistor sensor array. Significant drifts were observed over a duration of one month. Such drifts should be taken into account for long-term in-situ monitoring. {sm_bullet} Several techniques were explored to improve the performance of sensor polymers. It has been demonstrated that freeze deposition of carbon black (CB)-polymer composites can effectively eliminate the so-called 'coffee ring' effect and lead to a desirable uniform distribution of CB particles in sensing polymer films. The optimal CB/polymer ratio has been determined. UV irradiation has been shown to improve sensor sensitivity. {sm_bullet} From a large set of commercially available polymers, five polymers were selected to form a sensor array that was able to provide optimal responses to six target volatile organic compounds (VOCs). A series of tests on the response of the sensor array to various VOC concentrations has been performed. Linear sensor responses have been observed over the tested concentration ranges, although the responses over a whole concentration range are generally nonlinear. {sm_bullet} Inverse models have been developed for identifying individual VOCs based on sensor array responses (a schematic least-squares inversion is sketched after this summary). A linear solvation energy model is particularly promising for identifying an unknown VOC in a single-component system. It has been demonstrated that a sensor array such as the one we developed is able to discriminate waste containers by their total VOC concentrations and therefore can be used as a screening tool for reducing the existing headspace gas sampling rate. {sm_bullet} Various VOC preconcentrators have been fabricated using Carboxen 1000 as an adsorbent. Extensive tests have been conducted in order to obtain optimal configurations and parameter ranges for preconcentrator performance. It has been shown that use of preconcentrators can reduce the detection limits of chemiresistors by two orders of magnitude.
The life span of preconcentrators under various physiochemical conditions has also been evaluated. {sm_bullet} The performance of Pd film-based H{sub 2} sensors in the presence of VOCs has been evaluated. Interference of the sensor readings by VOCs has been observed, which can be attributed to the interference of VOCs with the H{sub 2}-O{sub 2} reaction on the Pd alloy surface. This interference can be eliminated by coating a layer of silicon dioxide on the sensing film surface. Our work has demonstrated a wide range of applications of gas microsensors in radioactive waste management. Such applications can potentially lead to significant cost savings and risk reduction for waste characterization.
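The following is a schematic of the array inverse-model idea referenced in the summary above: in the linear-response regime the array reading is approximately r = S c for a sensitivity matrix S, and a least-squares inversion recovers the component concentrations. The sensitivity values are placeholders, not calibration data, and this is not the linear solvation energy model itself.

```python
import numpy as np

# Hypothetical sensitivity matrix S (5 polymers x 3 VOCs): relative resistance
# change per ppm of each analyte. Values are placeholders, not calibration data.
S = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.1],
              [0.5, 0.4, 0.7],
              [0.1, 0.2, 0.9],
              [0.6, 0.6, 0.2]]) * 1e-4

true_conc = np.array([120.0, 40.0, 0.0])          # ppm of the three VOCs
reading = S @ true_conc + np.random.default_rng(3).normal(0, 2e-4, 5)  # noisy array response

# Least-squares inversion of the over-determined linear response model
conc_hat, *_ = np.linalg.lstsq(S, reading, rcond=None)
print(np.round(conc_hat, 1))                       # should recover roughly [120, 40, 0]
```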
In the mid-1990s, breakthroughs were achieved at Sandia with z-pinches for high energy density physics on the Saturn machine. These initial tests led to the modification of the PBFA II machine to provide high currents rather than the high voltage it was initially designed for. The success of z-pinches for high energy density physics experiments ensured a new mission for the converted accelerator, known as Z since 1997. Z now provides a unique capability to a number of basic science communities and has expanded its mission to include radiation effects research, inertial confinement fusion, and material properties research. To achieve continued success, the physics community has requested that higher peak current, better precision, and pulse-shaping versatility be incorporated into the refurbishment of the Z machine, known as ZR. In addition to the ZR performance specification of a peak current of 26 MA with an implosion time of 100 ns, the machine also has a reliability specification of 400 shots per year. While changes to the basic architecture of the Z machine are minor, the vast majority of its components have been redesigned. Moreover, the increase in peak current from the present 18 MA to ZR's peak current of 26 MA at nominal operating parameters requires significantly higher voltages. These higher voltages, along with the reliability requirement, mandate that a system assessment be performed to ensure the requirements have been met. This paper will describe the System Assessment Test Program (SATPro) for the ZR project and report on the results.
Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA) and previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach. The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection point and the non-hierarchical classification of the estimated vectors.