We modeled the effects of temperature, degree of polymerization, and surface coverage on the equilibrium structure of tethered poly(N-isopropylacrylamide) chains immersed in water. We employed a numerical self-consistent field theory in which the experimental phase diagram was used as input to the theory. At low temperatures, the composition profiles are approximately parabolic and extend into the solvent. In contrast, at temperatures above the LCST of the bulk solution, the polymer profiles are collapsed near the surface. The layer thickness and the effective monomer fraction within the layer undergo what appears to be a first-order change at a temperature that depends on surface coverage and chain length. Our results suggest that, as a result of the tethering constraint, the phase diagram becomes distorted relative to that of the bulk polymer solution and exhibits closed-loop behavior. As a consequence, we find that the relative magnitude of the layer thickness change between 20 and 40 °C is a nonmonotonic function of surface coverage, with a maximum that shifts to lower surface coverage as the chain length increases, in qualitative agreement with experiment.
Sandia National Laboratories, under contract to the Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geostatistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical and geostatistical interpolation techniques to the geologic data provided by NUMO. A workshop report on volcanism produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) lists a set of the most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables; in other words, the geologic variable vectors were found to be spatially incomplete. However, complete geologic variable vectors are necessary to perform multivariate statistical analyses. As a first step toward constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to this grid system, and the recorded data on volcanic activity for the Sengan region were produced on the same grid. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables most important for volcanism; in the regionalized classification procedure, this step is known as the variable selection step. The following variables were determined to be most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH value, presence of volcanic rocks, and presence of hydrothermal alteration. Data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate values for each variable at the 23,949 centers of the chosen 1 km cell grid system that represents the Sengan region. These values formed complete geologic variable vectors at each of the 23,949 one-km cell centers.
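The estimation step can be illustrated compactly. The sketch below is a minimal ordinary-kriging example in Python/NumPy, assuming an isotropic exponential variogram and made-up sample values; the report's actual directional variogram models and the NUMO data are not reproduced here.

```python
import numpy as np

def exp_variogram(h, sill=1.0, corr_range=15.0, nugget=0.05):
    """Exponential variogram; gamma(0) = 0, nugget applies for h > 0."""
    return np.where(h > 0, nugget + sill * (1.0 - np.exp(-3.0 * h / corr_range)), 0.0)

def ordinary_krige(xy_obs, z_obs, xy_tgt):
    """Estimate z and its kriging variance at one target point."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))       # kriging system bordered by a Lagrange
    A[:n, :n] = exp_variogram(d)      # multiplier row/column that enforces
    A[n, n] = 0.0                     # unbiasedness (weights sum to 1)
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy_obs - xy_tgt, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z_obs, w @ b       # estimate, kriging variance

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [12.0, 9.0]])  # sample km coords
z = np.array([3.1, 2.4, 4.0, 2.8])                                  # e.g., geothermal gradient
print(ordinary_krige(xy, z, np.array([5.0, 5.0])))                  # one 1 km cell center
```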
Sandia has developed and tested mockups armored with W rods over the last decade and pioneered the initial development of W rod armor for the International Thermonuclear Experimental Reactor (ITER) in the 1990s. We have also developed 2D and 3D thermal and stress models of W rod-armored plasma facing components (PFCs) and test mockups, and we are applying the models both to short pulses, i.e., edge-localized modes (ELMs), and to steady-state thermal performance for applications in C-MOD, DiMES testing, and ITER. This paper briefly describes the 2D and 3D models and their applications, with emphasis on modeling for an ongoing test program that simulates repeated heat loads from ITER ELMs.
In recent dynamic hohlraum experiments on the Z facility, Al and MgF₂ tracer layers were embedded in cylindrical CH₂ foam targets to provide K-shell lines in the keV spectral region for diagnosing the conditions of the interior hohlraum plasma. The position of the tracers was varied: sometimes they were placed 2 mm from the ends of the foam cylinder and sometimes at the ends of the cylinder. The composition of the tracers was also varied, with pure Al layers, pure MgF₂ layers, or mixtures of the two elements employed on various shots. Time-resolved K-shell spectra of both Al and Mg show mostly absorption lines. These data can be analyzed with detailed-configuration atomic models of carbon, aluminum, and magnesium in which spectra are calculated by solving the radiation transport equation for as many as 4100 frequencies. We report results from shot Z1022 to illustrate the basic radiation physics and the capabilities as well as the limitations of this diagnostic method.
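As a toy illustration of why the tracer lines appear in absorption, the Python sketch below evaluates the formal solution of the 1-D radiative transfer equation, I_ν = I_back e^(−τ_ν) + S(1 − e^(−τ_ν)), for a backlit slab. The line centers, optical depths, and source function are invented; the detailed-configuration atomic models of the actual analysis are far beyond this sketch.

```python
import numpy as np

nu = np.linspace(0.95, 1.05, 4096)      # frequency grid (keV-scale, arbitrary)
lines = [(0.98, 3.0), (1.00, 8.0), (1.02, 2.0)]  # (center, peak optical depth)
width = 0.002
tau = sum(t0 * np.exp(-0.5 * ((nu - c) / width) ** 2) for c, t0 in lines)

I_back = 1.0                            # backlighter (hohlraum core) intensity
S = 0.3                                 # tracer-layer source function, S < I_back
# Lines are in absorption whenever the layer's source function falls below
# the backlighter intensity; they would flip to emission for S > I_back.
I = I_back * np.exp(-tau) + S * (1.0 - np.exp(-tau))
print(f"deepest line: {I.min():.3f} of continuum at nu = {nu[I.argmin()]:.3f}")
```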
We present a formulation for coupling atomistic and continuum simulation methods for application to both quasistatic and dynamic analyses. In our formulation, a coarse-scale continuum discretization is assumed to cover all parts of the computational domain, with atomistic crystals introduced only in regions of interest. The geometries of the discretization and the crystal are allowed to overlap arbitrarily. Our approach uses interpolation and projection operators to link the kinematics of each region, which are then used to formulate a system potential energy from which we derive coupled expressions for the forces acting in each region. A hyperelastic constitutive formulation is used to compute the stress response of the defect-free continuum, with constitutive properties derived from the Cauchy-Born rule. A correction to the Cauchy-Born rule is introduced in the overlap region to minimize fictitious boundary effects. Features of our approach are demonstrated with simulations in one, two, and three dimensions.
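The Cauchy-Born constitutive evaluation can be sketched briefly: the continuum stress at a point follows from deforming a representative crystal's bond vectors by the local deformation gradient and summing the interatomic forces. The Python example below is illustrative only (a 2-D triangular lattice with a nearest-neighbor Lennard-Jones potential, not the paper's crystal or potential).

```python
import numpy as np

def lj_dphi(r, eps=1.0, sigma=1.0):
    """Derivative of the Lennard-Jones pair potential phi(r)."""
    return 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)

# Nearest-neighbor bond vectors of a 2D triangular lattice, spacing a0.
a0 = 2.0 ** (1.0 / 6.0)                      # LJ equilibrium spacing
angles = np.arange(6) * np.pi / 3.0
bonds = a0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
V0 = (np.sqrt(3.0) / 2.0) * a0**2            # area per atom

def cauchy_born_piola(F):
    """First Piola-Kirchhoff stress P = dW/dF for W(F) = (1/2V0) sum phi(|F R|)."""
    P = np.zeros((2, 2))
    for R in bonds:
        r_vec = F @ R
        r = np.linalg.norm(r_vec)
        P += lj_dphi(r) * np.outer(r_vec, R) / r
    return P / (2.0 * V0)

print(cauchy_born_piola(np.eye(2)))          # undeformed lattice: ~zero stress
print(cauchy_born_piola(np.diag([1.01, 1.0])))   # 1% uniaxial stretch
```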
With increased terrorist threats in the past few years, it is no longer reasonable to assume that a facility is well protected by a static security system. Potential adversaries often research their targets, examining procedural and system changes, in order to attack at a vulnerable time. Such system changes may include scheduled sensor maintenance, scheduled or unscheduled changes in the guard force, facility alert level changes, and sensor failures or degradation. All of these changes affect system effectiveness and can make a facility more vulnerable. Currently, a standard analysis of system effectiveness is performed approximately every six months using a vulnerability assessment tool called ASSESS (Analytic System and Software for Evaluating Safeguards and Security). New standards for determining a facility's system effectiveness will be defined by tools currently under development, such as ATLAS (Adversary Time-line Analysis System) and NextGen (Next Generation Security Simulation). Although these tools are useful for modeling analyses at different spatial resolutions and can support some sensor dynamics using statistical models, they are limited in that they require a static system state as input; they cannot account for the dynamics of the system through day-to-day operations. The emphasis of this project was to determine the feasibility of dynamically monitoring the facility security system and performing an analysis as changes occur, so that the system effectiveness is known at all times, greatly assisting time-critical decisions in response to a threat or a potential threat.
We have successfully demonstrated selective trapping, concentration, and release of various biological organisms and inert beads by insulator-based dielectrophoresis within a polymeric microfluidic device. The microfluidic channels and internal features, in this case arrays of insulating posts, were initially created through standard wet-etch techniques in glass. This glass chip was then transformed into a nickel stamp through electroplating, and the resultant stamp was used as the replication tool to produce the polymeric devices by injection molding. The polymeric devices were made of Zeonor® 1060R, a polyolefin copolymer resin selected for its superior chemical resistance and optical properties. These devices were then optically aligned with another polymeric substrate that had been machined to form fluidic vias, and the two substrates were bonded together through thermal diffusion bonding. The sealed devices were used to selectively separate and concentrate biological pathogen simulants, including spores, which were selectively concentrated and released simply by applying DC voltages across the plastic replicates via platinum electrodes in the inlet and outlet reservoirs. The dielectrophoretic response of the organisms is observed to be a function of the applied electric field and of post size, geometry, and spacing. Cells were selectively trapped against a background of labeled polystyrene beads and spores to demonstrate that samples of interest can be separated from a diverse background. We have also implemented and demonstrated a methodology to determine the concentration factors obtained in these devices.
The effects of ionizing and neutron radiation on the characteristics and performance of laser diodes are reviewed, and the formation mechanisms for nonradiative recombination centers, the primary type of radiation damage in laser diodes, are discussed. Additional topics include the detrimental effects of aluminum in the active (lasing) volume, the transient effects of high-dose-rate pulses of ionizing radiation, and a summary of ways to improve the radiation hardness of laser diodes. Radiation effects on laser diodes emitting in the wavelength region around 808 nm are emphasized.
We find that small temperature changes cause steps on the NiAl(110) surface to move. We show that this step motion occurs because mass is transferred between the bulk and the surface as the concentration of bulk thermal defects (i.e., vacancies) changes with temperature. Since the change in an island's area with a temperature change is found to scale strictly with the island's step length, the thermally generated defects are created (annihilated) very near the surface steps. To quantify the bulk/surface exchange, we oscillate the sample temperature and measure the amplitude and phase lag of the system response, i.e., the change in an island's area normalized to its perimeter. Using a one-dimensional model of defect diffusion through the bulk in a direction perpendicular to the surface, we determine the migration and formation energies of the bulk thermal defects. During surface smoothing, we show that there is no flow of material between islands on the same terrace and that all islands in a stack shrink at the same rate. We conclude that smoothing occurs by mass transport through the bulk of the crystal rather than via surface diffusion. Based on the measured relative sizes of the activation energies for island decay, defect migration, and defect formation, we show that attachment/detachment at the steps is the rate-limiting step in smoothing.
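A minimal numerical version of this analysis, with invented parameters rather than the measured NiAl values, is sketched below in Python: vacancies diffuse in one dimension between the bulk and a surface whose concentration is pinned at the local equilibrium value, the temperature oscillates sinusoidally, and the amplitude and phase lag of the surface flux are extracted by lock-in demodulation.

```python
import numpy as np

kB = 8.617e-5                         # Boltzmann constant (eV/K)
Ef, Em, D0 = 1.0, 1.0, 1e-6           # formation/migration energies (eV), prefactor (cm^2/s)
T0, dT, period = 1100.0, 10.0, 200.0  # mean temperature (K), oscillation amplitude, period (s)
omega = 2.0 * np.pi / period

Dbar = D0 * np.exp(-Em / (kB * T0))
delta = np.sqrt(2.0 * Dbar / omega)   # diffusive penetration depth
nz = 120
dz = 6.0 * delta / nz                 # slab a few penetration depths deep
Dmax = D0 * np.exp(-Em / (kB * (T0 + dT)))
dt = 0.4 * dz**2 / Dmax               # explicit stability limit

c = np.full(nz, np.exp(-Ef / (kB * T0)))   # equilibrium vacancy fraction at T0
t, I_sin, I_cos = 0.0, 0.0, 0.0
t_skip, t_end = 10.0 * period, 20.0 * period
while t < t_end:
    T = T0 + dT * np.sin(omega * t)
    D = D0 * np.exp(-Em / (kB * T))
    c[0] = np.exp(-Ef / (kB * T))     # steps pin the near-surface concentration
    c[1:-1] += D * dt / dz**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[-1] = c[-2]                     # zero flux deep in the bulk
    flux = -D * (c[1] - c[0]) / dz    # bulk-to-surface exchange rate
    if t >= t_skip:                   # skip the transient, then demodulate
        I_sin += flux * np.sin(omega * t) * dt
        I_cos += flux * np.cos(omega * t) * dt
    t += dt

amp = 2.0 / (t_end - t_skip) * np.hypot(I_sin, I_cos)
lag = np.degrees(np.arctan2(-I_cos, I_sin))
print(f"flux amplitude = {amp:.3e}, phase lag = {lag:.1f} deg")
```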
The goal of z-pinch inertial fusion energy (IFE) is to extend the single-shot z-pinch inertial confinement fusion (ICF) results on Z to a repetitive-shot z-pinch power plant concept for the economical production of electricity. Z produces up to 1.8 MJ of x-rays at powers as high as 230 TW. Recent target experiments on Z have demonstrated capsule implosion convergence ratios of 14-21 with a double-pinch driven target, and DD neutron yields up to 8×10¹⁰ with a dynamic hohlraum target. For z-pinch IFE, a power plant concept is discussed that uses high-yield IFE targets (3 GJ) with a low rep-rate per chamber (0.1 Hz). The concept includes a repetitive driver at 0.1 Hz, a Recyclable Transmission Line (RTL) to connect the driver to the target, high-yield targets, and a thick-liquid wall chamber. Recent funding by a U.S. Congressional initiative for $4M for FY04 is supporting research on RTLs, repetitive pulsed power drivers, shock mitigation, full RTL cycle planned experiments, high-yield IFE targets, and z-pinch power plant technologies. Recent results of research in all of these areas are discussed, and a Road Map for Z-Pinch IFE is presented.
This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight into how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for predictive insight. The analysis concentrates specifically on the system dynamics paradigm rather than the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents modeling efforts built on limited data and working literature paradigms, along with recommendations for future attempts at modeling conflict.
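For a concrete flavor of the paradigm, the sketch below integrates Richardson's classic two-nation arms-race equations, an early system-dynamics-style conflict model. It is an illustration of the modeling style, not one of the models surveyed in this study, and all coefficients are arbitrary.

```python
import numpy as np

def richardson(x0, y0, k=0.9, l=0.8, a=0.5, b=0.4, g=0.2, h=0.1,
               dt=0.01, t_end=30.0):
    """Integrate dx/dt = k*y - a*x + g, dy/dt = l*x - b*y + h (forward Euler).

    x, y are armament levels; k, l are reaction coefficients, a, b economic
    fatigue terms, and g, h standing grievances."""
    x, y, traj = x0, y0, []
    for _ in range(int(t_end / dt)):
        x, y = x + dt * (k * y - a * x + g), y + dt * (l * x - b * y + h)
        traj.append((x, y))
    return np.array(traj)

traj = richardson(1.0, 1.0)
# With k*l > a*b the system is unstable and armaments escalate without bound,
# a stylized path toward conflict; with k*l < a*b they settle to equilibrium.
print(traj[-1])
```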
The Z-Pinch Power Plant concept uses the results from Sandia National Laboratories' Z accelerator in a power plant application to generate energy pulses using inertial confinement fusion. A collaborative project has been initiated by Sandia to investigate the scientific principles of a power generation system based on this technology. Research is under way to develop an integrated concept that describes the operational issues of a 1000 MW electrical power plant. Issues under consideration include: containment of 1-20 gigajoule fusion pulses, repetitive mechanical connection of heavy hardware, generation of terawatt pulses every 10 seconds, recycling of ten thousand tons of steel, and manufacturing of millions of hohlraums and capsules per year. Additionally, waste generation and disposal issues are being examined. This paper describes the current concept for the plant as well as the objectives for future research.
As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance, and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed and must not rely on high-bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate-communication-bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.
As part of DARPA's Software for Distributed Robotics Program within the Information Processing Technology Office (IPTO), Sandia National Laboratories was tasked with identifying military airborne and maritime missions that require cooperative behaviors, as well as identifying generic collective behaviors and performance metrics for these missions. This report documents that study. A prioritized list of general military missions applicable to land, air, and sea has been identified. From the top eight missions, nine generic reusable cooperative behaviors have been defined. A common mathematical framework for cooperative controls has been developed and applied to several of the behaviors. The framework is based on optimization principles and has provably convergent properties. A three-step optimization process is used to develop the decentralized control law that minimizes the behavior's performance index. A connective stability analysis is then performed to determine constraints on the communication sample period and the local control gains. Finally, the communication sample period for four different network protocols is evaluated based on the network graph, which changes throughout the task. Using this mathematical framework, two metrics for evaluating these behaviors are defined. The first metric is the residual error in the global performance index used to create the behavior. The second metric is the communication sample period between robots, which affects the overall time required for the behavior to reach its goal state.
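A minimal Python sketch of this optimization-based pattern is given below, with an invented formation task: a global performance index J sums squared link errors over the communication graph, each robot descends only the gradient terms it can compute from its neighbors, and the residual J serves as the first metric. The graph, gains, and sample period are illustrative, not the report's values.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                 # communication graph
d = {(0, 1): np.array([1.0, 0.0]), (1, 2): np.array([0.0, 1.0]),
     (2, 3): np.array([-1.0, 0.0]), (3, 0): np.array([0.0, -1.0])}  # desired offsets

x = np.random.default_rng(0).normal(size=(4, 2))         # initial robot positions

def residual(x):
    """Metric 1: residual error in the global performance index J."""
    return 0.5 * sum(np.sum((x[i] - x[j] - d[(i, j)])**2) for i, j in edges)

gain, T_s = 0.2, 1.0                 # local control gain, communication sample period
for _ in range(200):                 # one decentralized update per sample
    g = np.zeros_like(x)
    for i, j in edges:               # each link contributes only to its endpoints,
        e = x[i] - x[j] - d[(i, j)]  # so robot i needs only its neighbors' states
        g[i] += e
        g[j] -= e
    x -= gain * T_s * g
print(f"residual J after 200 samples: {residual(x):.2e}")
```

For this convex index, gradient descent converges whenever gain*T_s is below a bound set by the graph Laplacian, which is the kind of constraint the connective stability analysis produces.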
When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. A new migration-correction autofocus algorithm has been developed that estimates the migration and applies phase and frequency corrections to properly focus the image.
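One simple way to realize the idea is sketched below in Python with a toy point-target scene (an illustration of migration correction in general, not the published algorithm): the pulse-to-pulse range shift is estimated by cross-correlating each range profile against a reference, then removed with a linear phase ramp in the range-frequency domain via the Fourier shift theorem.

```python
import numpy as np

def estimate_shifts(profiles):
    """Cross-correlate each pulse's range profile with the first pulse's."""
    ref = np.fft.fft(np.abs(profiles[0]))
    shifts = []
    for p in profiles:
        xc = np.fft.ifft(np.fft.fft(np.abs(p)) * np.conj(ref))
        k = int(np.argmax(np.abs(xc)))
        n = len(p)
        shifts.append(k if k <= n // 2 else k - n)   # wrap to a signed shift
    return np.array(shifts, dtype=float)

def apply_shift(profiles, shifts):
    """Shift each profile right by s bins: multiply spectrum by e^{-j2*pi*f*s}."""
    n = profiles.shape[1]
    f = np.fft.fftfreq(n)
    ph = np.exp(-2j * np.pi * f[None, :] * shifts[:, None])
    return np.fft.ifft(np.fft.fft(profiles, axis=1) * ph, axis=1)

# Toy data: a point target drifting through several range bins over 64 pulses.
n_pulse, n_rng = 64, 256
profiles = np.zeros((n_pulse, n_rng), complex)
for m in range(n_pulse):
    profiles[m, 100 + int(round(0.25 * m))] = 1.0    # > one bin of migration
shifts = estimate_shifts(profiles)
aligned = apply_shift(profiles, -shifts)             # undo the estimated drift
print(np.abs(aligned).argmax(axis=1)[:8])            # target stays in bin 100
```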
This report is a collection of documents written by the group members of the Engineering Sciences Research Foundation (ESRF) Laboratory Directed Research and Development (LDRD) project titled 'A Robust, Coupled Approach to Atomistic-Continuum Simulation'. Presented in this document are: (1) the development of a formulation for performing quasistatic, coupled, atomistic-continuum simulation that includes cross terms in the equilibrium equations arising from kinematic coupling, together with corrections to the system potential energy that account for continuum elements overlapping regions containing atomic bonds; (2) evaluations of thermo-mechanical continuum quantities calculated within atomistic simulations, including measures of stress, temperature, and heat flux; (3) calculations used to determine the spatial and time averaging necessary for these atomistically defined expressions to have the same physical meaning as their continuum counterparts; and (4) a formulation to quantify a continuum 'temperature field', the first step toward constructing a coupled atomistic-continuum approach capable of finite-temperature and dynamic analyses.
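As a small illustration of the atomistically defined continuum measures in item (2), the Python sketch below computes the kinetic temperature and the virial stress for a toy Lennard-Jones cell. The configuration and units are invented, and such instantaneous values acquire continuum meaning only after the spatial and time averaging the report quantifies.

```python
import numpy as np

N, L = 64, 8.0                            # atoms, periodic box edge (LJ units)
g = np.arange(4) * 2.0
pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # 4x4x4 cubic lattice
vel = np.random.default_rng(5).normal(scale=0.5, size=(N, 3))
vel -= vel.mean(axis=0)                   # keep only thermal (peculiar) motion

def temperature(vel, m=1.0, kB=1.0):
    """Kinetic temperature: (3/2) N kB T = sum (1/2) m v^2."""
    return m * np.sum(vel**2) / (3.0 * N * kB)

def virial_stress(pos, vel, m=1.0, eps=1.0, sig=1.0, rcut=2.5):
    """-(1/V) [ sum_i m v_i x v_i + sum_{i<j} (dphi/dr / r) r_ij x r_ij ]."""
    stress = -m * np.einsum('ia,ib->ab', vel, vel)       # kinetic part
    for i in range(N - 1):
        rij = pos[i + 1:] - pos[i]
        rij -= L * np.round(rij / L)                     # minimum image
        r2 = np.sum(rij**2, axis=1)
        rv = rij[r2 < rcut**2]
        r2 = r2[r2 < rcut**2]
        sr6 = (sig**2 / r2)**3
        fac = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2     # = -(dphi/dr)/r
        stress -= np.einsum('p,pa,pb->ab', fac, rv, rv)  # pair (virial) part
    return stress / L**3

print(f"T = {temperature(vel):.3f}")
print("virial stress =\n", virial_stress(pos, vel).round(4))
```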
This document describes the modeling of the physics (and eventually the features) in the Integrated TIGER Series (ITS) codes [Franke 04]. The description is largely drawn from various sources in the open literature (especially [Seltzer 88], [Seltzer 91], [Lorence 89], and [Halbleib 92]), although those sources often describe the ETRAN code from which the physics engine of ITS is derived, which is not necessarily identical. This is meant to be an evolving document, with more coverage and detail added over time; as such, entire sections are still incomplete. Presently, this document covers the continuous-energy ITS codes, with more complete coverage of photon transport (though electron transport is not completely ignored). In particular, this document does not cover the Multigroup code, MCODES (externally applied electromagnetic fields), or high-energy phenomena (photon pair production). In this version, equations are largely left to the references, though they may be pulled in over time.
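For orientation, a heavily simplified photon history loop in the Monte Carlo style of such codes is sketched below. The homogeneous slab, constant cross sections, and isotropic scattering are stand-ins for the tabulated data and Klein-Nishina sampling the real codes use.

```python
import numpy as np

rng = np.random.default_rng(1)
SIG_ABS, SIG_SCAT = 0.05, 0.15          # macroscopic cross sections (1/cm), made up
SIG_TOT = SIG_ABS + SIG_SCAT
THICK = 10.0                            # slab thickness (cm)

def history():
    """Track one photon; return 'absorbed', 'transmitted', or 'reflected'."""
    z, mu = 0.0, 1.0                    # depth and direction cosine
    while True:
        s = -np.log(rng.random()) / SIG_TOT     # sample the free path
        z += mu * s
        if z < 0.0:
            return "reflected"
        if z > THICK:
            return "transmitted"
        if rng.random() < SIG_ABS / SIG_TOT:    # choose the interaction type
            return "absorbed"
        mu = 2.0 * rng.random() - 1.0           # isotropic re-emission (toy)

tallies = {"absorbed": 0, "transmitted": 0, "reflected": 0}
for _ in range(100_000):
    tallies[history()] += 1
print({k: v / 100_000 for k, v in tallies.items()})
```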
An experiment at Sandia National Laboratories confirmed that a ternary salt (Flinabe, a ternary mixture of LiF, BeF₂, and NaF) had a sufficiently low melting temperature (≈305 °C) to be useful for the flowing molten-salt first wall and blanket applications investigated in the Advanced Power Extraction (APEX) Program [1]. In the experiment, the salt pool was contained in a stainless steel crucible under vacuum. One thermocouple was placed in the salt and two others were embedded in the crucible. The results and observations from the experiment are reported in the companion paper [2]. The paper presented here covers a 3-D finite element thermal analysis of the salt pool and crucible. The analysis was done to evaluate the thermal gradients in the salt pool and crucible and to compare the temperatures of the three thermocouples. One salt mixture appeared to melt and solidify as a eutectic, with a visible plateau in the cooling curve (i.e., time versus temperature for the thermocouple in the salt pool). This behavior was reproduced with the thermal model. Cases were run with several values of the thermal conductivity and latent heat of fusion to see the parametric effects of these changes on the respective cooling curves. The crucible was heated by an electrical heater in an inverted well at its base and lost heat primarily by radiation from the outer surfaces of the crucible and the top surface of the salt. The primary independent factors in the model were the emissivity of the crucible (and of the salt) and the fraction of the heater power coupled into the crucible. The model was 'calibrated' using thermocouple data and heating power from runs in which the crucible contained no salt.
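The plateau mechanism can be reproduced with a few lines of Python. The lumped-capacitance sketch below (illustrative properties, not the Flinabe analysis itself) radiates heat from a melt whose temperature is pinned at the melting point while its enthalpy crosses the latent-heat interval, producing the plateau between the two cooling legs.

```python
import numpy as np

sigma = 5.67e-8                  # Stefan-Boltzmann constant (W/m^2/K^4)
eps, A = 0.8, 1e-3               # emissivity, radiating area (m^2), assumed
m, cp = 0.1, 1500.0              # salt mass (kg), specific heat (J/kg/K), assumed
Lf, Tm = 3.0e5, 578.0            # latent heat (J/kg), melting point (~305 C, in K)
T_env = 300.0

def temp_from_enthalpy(H):
    """Piecewise T(H), with H = 0 defined as solid salt at Tm."""
    if H <= 0.0:                 # solid, at or below the melting point
        return Tm + H / (m * cp)
    if H >= m * Lf:              # fully molten
        return Tm + (H - m * Lf) / (m * cp)
    return Tm                    # two-phase: temperature pinned at Tm

H = m * Lf + m * cp * 80.0       # start fully molten, 80 K above Tm
t, dt, log = 0.0, 1.0, []
while temp_from_enthalpy(H) > Tm - 100.0:
    T = temp_from_enthalpy(H)
    H -= eps * sigma * A * (T**4 - T_env**4) * dt   # radiative heat loss
    t += dt
    log.append((t, T))
# The (t, T) record shows the plateau at Tm between the two cooling legs.
print(len(log), log[len(log) // 2])
```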
This paper analyzes the relationship between current renewable energy technology costs and cumulative production, research, development and demonstration expenditures, and other institutional influences. Combining the theoretical framework of 'learning by doing' and developments in 'learning by searching' with the fields of organizational learning and institutional economics offers a complete methodological framework to examine the underlying capital cost trajectory when developing electricity cost estimates used in energy policy planning models. Sensitivities of the learning rates for global wind and solar photovoltaic technologies to changes in the model parameters are tested. The implications of the results indicate that institutional policy instruments play an important role for these technologies to achieve cost reductions and further market adoption.
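The underlying two-factor learning-curve relation, C = C0 (Q/Q0)^(−a) (K/K0)^(−b) with Q the cumulative production ('learning by doing') and K the cumulative RD&D stock ('learning by searching'), reduces to ordinary least squares in log space. The Python sketch below fits synthetic data; the paper's wind and photovoltaic datasets are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30
Q = np.logspace(0, 3, n)                          # cumulative installed capacity
K = np.logspace(0, 2, n) * np.exp(rng.normal(scale=0.3, size=n))  # cumulative RD&D
# (RD&D path jittered so the two log-regressors are not perfectly collinear)
a_true, b_true, C0 = 0.20, 0.10, 1000.0
C = C0 * Q**-a_true * K**-b_true * np.exp(0.05 * rng.normal(size=n))

# log C = log C0 - a log Q - b log K : ordinary least squares in log space
X = np.column_stack([np.ones(n), np.log(Q), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
a_hat, b_hat = -coef[1], -coef[2]
print(f"learning-by-doing elasticity a = {a_hat:.3f}, "
      f"learning-by-searching elasticity b = {b_hat:.3f}")
print(f"learning rate per doubling of production: {1 - 2**-a_hat:.1%}")
```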
Waste characterization is probably the most costly part of radioactive waste management. An important part of this characterization is the measurement of headspace gas in waste containers in order to demonstrate compliance with Resource Conservation and Recovery Act (RCRA) or transportation requirements. The traditional chemical analysis methods, which include all steps of gas sampling, sample shipment, and laboratory analysis, are expensive and time-consuming, and they increase workers' exposure to hazardous environments. Therefore, an alternative technique that can provide quick, in-situ, real-time detection of headspace gas compositions is highly desirable. This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Potential Application of Microsensor Technology in Radioactive Waste Management with Emphasis on Headspace Gas Detection'. The objective of this project is to bridge the technical gap between the current status of microsensor development and the intended applications of these sensors in nuclear waste management. The major results are summarized below:
- A literature review was conducted on the regulatory requirements for headspace gas sampling/analysis in waste characterization and monitoring. The most relevant gaseous species and the related physiochemical environments were identified. It was found that preconcentrators might be needed in order for chemiresistor sensors to meet the desired detection limits.
- A long-term stability test was conducted for a polymer-based chemiresistor sensor array. Significant drifts were observed over a duration of one month. Such drifts should be taken into account for long-term in-situ monitoring.
- Several techniques were explored to improve the performance of sensor polymers. It has been demonstrated that freeze deposition of carbon black (CB)-polymer composite can effectively eliminate the so-called 'coffee ring' effect and lead to a desirably uniform distribution of CB particles in sensing polymer films. The optimal CB/polymer ratio has been determined, and UV irradiation has been shown to improve sensor sensitivity.
- From a large set of commercially available polymers, five polymers were selected to form a sensor array able to provide optimal responses to six target volatile organic compounds (VOCs). A series of tests of the sensor array's response to various VOC concentrations was performed. Linear sensor responses were observed over the tested concentration ranges, although the responses over a whole concentration range are generally nonlinear.
- Inverse models have been developed for identifying individual VOCs based on sensor array responses. A linear solvation energy model is particularly promising for identifying an unknown VOC in a single-component system. It has been demonstrated that a sensor array such as the one we developed is able to discriminate waste containers by their total VOC concentrations and can therefore be used as a screening tool for reducing the existing headspace gas sampling rate.
- Various VOC preconcentrators have been fabricated using Carboxen 1000 as an adsorbent. Extensive tests have been conducted to obtain optimal configurations and parameter ranges for preconcentrator performance. It has been shown that the use of preconcentrators can reduce the detection limits of chemiresistors by two orders of magnitude. The life span of preconcentrators under various physiochemical conditions has also been evaluated.
- The performance of Pd film-based H₂ sensors in the presence of VOCs has been evaluated. Interference of sensor readings by VOCs has been observed, which can be attributed to interference of the VOCs with the H₂-O₂ reaction on the Pd alloy surface. This interference can be eliminated by coating a layer of silicon dioxide on the sensing film surface.
Our work has demonstrated a wide range of applications of gas microsensors in radioactive waste management. Such applications can potentially lead to significant cost savings and risk reduction for waste characterization.
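The inverse-model step can be sketched as follows, with an invented sensitivity matrix and hypothetical VOC names (the project's calibrated responses are not reproduced): in the linear regime each sensor's response is R_i = Σ_j S_ij c_j, so concentrations follow from non-negative least squares, and a single unknown VOC is identified by which one-column fit best explains the response vector.

```python
import numpy as np
from scipy.optimize import nnls

S = np.array([                      # 5 polymers x 3 VOCs, illustrative values
    [1.2, 0.3, 0.1],
    [0.4, 1.5, 0.2],
    [0.2, 0.5, 1.1],
    [0.9, 0.8, 0.3],
    [0.1, 0.2, 1.4]])
vocs = ["acetone", "toluene", "TCE"]

c_true = np.array([0.0, 2.0, 0.0])  # toluene only, ppm-scale
R = S @ c_true + 0.02 * np.random.default_rng(3).normal(size=5)

c_est, _ = nnls(S, R)               # multi-component concentration estimate
# Single-component identification: smallest residual over one-column fits.
resid = [np.linalg.norm(R - S[:, j] * (S[:, j] @ R) / (S[:, j] @ S[:, j]))
         for j in range(3)]
print(dict(zip(vocs, np.round(c_est, 2))), "->", vocs[int(np.argmin(resid))])
```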
In the mid-1990s, breakthroughs were achieved at Sandia with z-pinches for high energy density physics on the Saturn machine. These initial tests led to the modification of the PBFA II machine to provide high currents rather than the high voltage it was initially designed for. The success of z-pinch high energy density physics experiments ensured a new mission for the converted accelerator, known as Z since 1997. Z now provides a unique capability to a number of basic science communities and has expanded its mission to include radiation effects research, inertial confinement fusion, and material properties research. To achieve continued success, the physics community has requested that higher peak current, better precision, and pulse-shaping versatility be incorporated into the refurbishment of the Z machine, known as ZR. In addition to the ZR performance specification of a 26 MA peak current with a 100 ns implosion time, the machine also has a reliability specification of 400 shots per year. While changes to the basic architecture of the Z machine are minor, the vast majority of its components have been redesigned. Moreover, the increase in peak current from the present 18 MA to ZR's 26 MA at nominal operating parameters requires significantly higher voltages. These higher voltages, along with the reliability requirement, mandate that a system assessment be performed to ensure the requirements have been met. This paper describes the System Assessment Test Program (SATPro) for the ZR project and reports on the results.
Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations, with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations, after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set from the upper Dakota aquifer in Kansas (USA) that was previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach to deal with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed, and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. The resulting classification and uncertainty maps are compared among all non-collocated approaches as well as to the original collocated approach. The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection-point approach and the non-hierarchical classification of the estimated vectors.
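Approach (2) can be sketched compactly on toy data (illustrative variogram, synthetic samples, and k-means standing in for the paper's classification schemes): each attribute is kriged independently to every grid node, the kriging variances are retained as a reliability measure, and the completed vectors are classified non-hierarchically.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.cluster.vq import kmeans2

def gamma(h, sill=1.0, rng_=20.0):          # exponential variogram, no nugget
    return sill * (1.0 - np.exp(-3.0 * h / rng_))

def krige(xy_obs, z, xy_grid):
    """Ordinary kriging; returns estimates and kriging variances at all nodes."""
    n = len(z)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(cdist(xy_obs, xy_obs))
    A[n, n] = 0.0                           # Lagrange multiplier block
    B = np.ones((n + 1, len(xy_grid)))
    B[:n] = gamma(cdist(xy_obs, xy_grid))
    W = np.linalg.solve(A, B)
    return W[:n].T @ z, np.einsum('ij,ij->j', W, B)

rng = np.random.default_rng(7)
grid = np.array([(i, j) for i in range(25) for j in range(25)], float)
est, var = [], []
for _ in range(3):                          # three heterotopic attributes, each
    xy = rng.uniform(0, 25, size=(40, 2))   # sampled at its own locations
    z = np.sin(xy[:, 0] / 5.0) + rng.normal(scale=0.1, size=40)
    e, v = krige(xy, z, grid)
    est.append(e)
    var.append(v)
vectors = np.column_stack(est)              # complete vectors at every node
_, labels = kmeans2(vectors, 3, seed=0, minit='++')
total_var = np.sum(var, axis=0)             # flags the least reliable nodes
print(labels[:10], total_var.max().round(2))
```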