Publications

Results 86201–86300 of 96,771

Leading-edge boundary layer flow: Prandtl's vision, current developments and future perspectives

The first viscous compressible three-dimensional BiGlobal linear instability analysis of leading-edge boundary layer flow has been performed. Results have been obtained by independent application of asymptotic analysis and numerical solution of the appropriate partial-differential eigenvalue problem. It has been shown that the classification of three-dimensional linear instabilities of the related incompressible flow [13] into symmetric and antisymmetric mode expansions in the chordwise coordinate persists for the compressible, subsonic flow regime at sufficiently large Reynolds numbers.

Mitigation of cesium and cobalt contamination on the surfaces of RAM packages

Krumhansl, James L.; Bonhomme, F.

Techniques for mitigating the adsorption of {sup 137}Cs and {sup 60}Co on metal surfaces (e.g. RAM packages) exposed to contaminated water (e.g. spent-fuel pools) have been developed and experimentally verified. The techniques are also effective in removing some of the {sup 60}Co and {sup 137}Cs that may have been adsorbed on the surfaces after removal from the contaminated water. The principle for the {sup 137}Cs mitigation technique is based upon ion-exchange processes. In contrast, {sup 60}Co contamination primarily resides in minute particles of crud that become lodged on cask surfaces. Crud is an insoluble Fe-Ni-Cr oxide that forms colloidal-sized particles as reactor cooling systems corrode. Because of the similarity between Ni{sup 2+} and Co{sup 2+}, crud is able to scavenge and retain traces of cobalt as it forms. A number of organic compounds have a high specificity for combining with nickel and cobalt. Ongoing research is investigating the effectiveness of the chemical complexing agent EDTA with regard to its ability to dissolve the host phase (crud), thereby liberating the entrained {sup 60}Co into a solution where it can be rinsed away.

Nickel-based gadolinium alloy for neutron absorption applications in RAM packages

Robino, Charles V.

The National Spent Nuclear Fuel Program, located at the Idaho National Laboratory (INL), coordinates and integrates national efforts in management and disposal of US Department of Energy (DOE)-owned spent nuclear fuel. These management functions include development of standardized systems for long-term disposal in the proposed Yucca Mountain repository. Nuclear criticality control measures are needed in these systems to avoid restrictive fissile loading limits because of the enrichment and total quantity of fissile material in some types of the DOE spent nuclear fuel. This need is being addressed by development of corrosion-resistant, neutron-absorbing structural alloys for nuclear criticality control. This paper outlines results of a metallurgical development program that is investigating the alloying of gadolinium into a nickel-chromium-molybdenum alloy matrix. Gadolinium has been chosen as the neutron absorption alloying element due to its high thermal neutron absorption cross section and low solubility in the expected repository environment. The nickel-chromium-molybdenum alloy family was chosen for its known corrosion performance, mechanical properties, and weldability. The workflow of this program includes chemical composition definition, primary and secondary melting studies, ingot conversion processes, properties testing, and national consensus codes and standards work. The microstructural investigation of these alloys shows that the gadolinium addition is present in the alloy as a gadolinium-rich second phase. The mechanical strength values are similar to those expected for commercial Ni-Cr-Mo alloys. The alloys have been corrosion tested with acceptable results. The initial results of weldability tests have also been acceptable. Neutronic testing in a moderated critical array has generated favorable results. 
An American Society for Testing and Materials material specification has been issued for the alloy and a Code Case has been submitted to the American Society of Mechanical Engineers for code qualification.

A robotic framework for semantic concept learning

Xavier, Patrick G.

This report describes work carried out under a Sandia National Laboratories Excellence in Engineering Fellowship in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Our research group (at UIUC) is developing an intelligent robot and attempting to teach it language. While there are many aspects of this research, the ideas most important for the purposes of this report are the following. Language is primarily based on semantics, not syntax. To truly learn meaning, the language engine must be part of an embodied intelligent system, one capable of using associative learning to form concepts from the perception of experiences in the world, and further capable of manipulating those concepts symbolically. In the work described here, we explore the use of hidden Markov models (HMMs) in this capacity. HMMs are capable of automatically learning and extracting the underlying structure of continuous-valued inputs and representing that structure in the states of the model. These states can then be treated as symbolic representations of the inputs. We describe a composite model consisting of a cascade of HMMs that can be embedded in a small mobile robot and used to learn correlations among sensory inputs to create symbolic concepts. These symbols can then be manipulated linguistically and used for decision making. This is the project final report for the University Collaboration LDRD project, 'A Robotic Framework for Semantic Concept Learning'.
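
The core HMM computation behind such models is evaluating how well a model explains an observation sequence. A minimal sketch of the standard forward algorithm for a discrete HMM is shown below; the two-state model and all probability values are illustrative toys, not the report's robot model.

```python
# Minimal discrete-HMM forward pass: computes P(observations | model).
# All matrices are illustrative toy values, not the report's robot model.

def hmm_forward(pi, A, B, obs):
    """pi: initial state probabilities, A[i][j]: transition probabilities,
    B[i][k]: emission probabilities, obs: list of observed symbol indices."""
    n = len(pi)
    # Initialize with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Toy 2-state model with 2 observation symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = hmm_forward(pi, A, B, [0, 1, 0])
```

In a cascade such as the one described, the state index inferred by one HMM can serve as the discrete observation symbol fed to the next model in the chain.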

Finding central sets of tree structures in synchronous distributed systems

Finding the central sets, such as center and median sets, of a network topology is a fundamental step in the design and analysis of complex distributed systems. This paper presents distributed synchronous algorithms for finding central sets in general tree structures. Our algorithms are distinguished from previous work in that they take only qualitative information, thus reducing the constants hidden in the asymptotic notation, and all vertices of the topology know the central sets upon their termination.
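
The leaf-pruning idea underlying center-finding on trees can be sketched sequentially: peel off all leaves round by round, and the last one or two surviving vertices are the center set. Each round corresponds to one time step of a synchronous distributed execution; the sketch below is illustrative, not the paper's algorithm.

```python
# Center of a tree by repeated leaf pruning: remove the full layer of
# leaves each round; the last one or two remaining vertices are the
# center set. Each round mirrors one step of a synchronous execution.

from collections import defaultdict

def tree_center(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(adj)
    if len(remaining) <= 2:
        return remaining
    leaves = [v for v in remaining if len(adj[v]) == 1]
    while len(remaining) > 2:
        next_leaves = []
        for leaf in leaves:
            remaining.discard(leaf)
            for nbr in adj[leaf]:
                adj[nbr].discard(leaf)
                if nbr in remaining and len(adj[nbr]) == 1:
                    next_leaves.append(nbr)
        leaves = next_leaves
    return remaining

# Path 1-2-3-4-5: the unique center is vertex 3.
center = tree_center([(1, 2), (2, 3), (3, 4), (4, 5)])
```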

A moment-preserving nonanalog method for charged particle transport

Franke, Brian C.

Extremely short collision mean free paths and near-singular elastic and inelastic differential cross sections (DCS) make analog Monte Carlo simulation an impractical tool for charged particle transport. The widely used alternative, the condensed history method, while efficient, also suffers from several limitations arising from the use of precomputed smooth distributions for sampling. There is much interest in developing computationally efficient algorithms that implement the correct transport mechanics. Here we present a nonanalog transport-based method that incorporates the correct transport mechanics and is computationally efficient for implementation in single event Monte Carlo codes. Our method systematically preserves important physics and is mathematically rigorous. It builds on higher order Fokker-Planck and Boltzmann Fokker-Planck representations of the scattering and energy-loss process, and we accordingly refer to it as a Generalized Boltzmann Fokker-Planck (GBFP) approach. We postulate the existence of nonanalog single collision scattering and energy-loss distributions (differential cross sections) and impose the constraint that the first few momentum transfer and energy loss moments be identical to corresponding analog values. This is effected through a decomposition or hybridizing scheme wherein the singular forward peaked, small energy-transfer collisions are isolated and de-singularized using different moment-preserving strategies, while the large angle, large energy-transfer collisions are described by the exact (analog) DCS or approximated to a high degree of accuracy. The inclusion of the latter component allows the higher angle and energy-loss moments to be accurately captured. This procedure yields a regularized transport model characterized by longer mean free paths and smoother scattering and energy transfer kernels than analog. 
In practice, acceptable accuracy is achieved with two rigorously preserved moments, but accuracy can be systematically increased to analog level by preserving successively higher moments with almost no change to the algorithm. Details of specific moment-preserving strategies will be described and results presented for dose in heterogeneous media due to a pencil beam and a line source of monoenergetic electrons. Error and runtimes of our nonanalog formulations will be contrasted against condensed history implementations.

Optimal neuronal tuning for finite stimulus spaces

Proposed for publication in Neural Computation.

Brown, William M.; Backer, Alejandro B.

The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We present a continuation of a theoretical study presented by Zhang and Sejnowski, where the influence of neuronal tuning properties on encoding accuracy is analyzed using information theory. When a finite stimulus space is considered, we show that the encoding accuracy improves with narrow tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.

Dynamic context discrimination: psychological evidence for the Sandia Cognitive Framework

Speed, Ann S.

Human behavior is a function of an iterative interaction between the stimulus environment and past experience. It is not simply a matter of the current stimulus environment activating the appropriate experience or rule from memory (e.g., if it is dark and I hear a strange noise outside, then I turn on the outside lights and investigate). Rather, it is a dynamic process that takes into account not only things one would generally do in a given situation, but things that have recently become known (e.g., there have recently been coyotes seen in the area and one is known to be rabid), as well as other immediate environmental characteristics (e.g., it is snowing outside, I know my dog is outside, I know the police are already outside, etc.). All of these factors combine to inform me of the most appropriate behavior for the situation. If it were the case that humans had a rule for every possible contingency, the amount of storage that would be required to enable us to fluidly deal with most situations we encounter would rapidly become biologically untenable. We can all deal with contingencies like the one above with fairly little effort, but if it isn't based on rules, what is it based on? The assertion of the Cognitive Systems program at Sandia for the past 5 years is that at the heart of this ability to effectively navigate the world is an ability to discriminate between different contexts (i.e., Dynamic Context Discrimination, or DCD). While this assertion in and of itself might not seem earthshaking, it is compelling that this ability and its components show up in a wide variety of paradigms across different subdisciplines in psychology. We begin by outlining, at a high functional level, the basic ideas of DCD. We then provide evidence from several different literatures and paradigms that support our assertion that DCD is a core aspect of cognitive functioning. 
Finally, we discuss DCD and the computational model that we have developed as an instantiation of DCD in more detail. Before commencing with our overview of DCD, we should note that DCD is not necessarily a theory in the classic sense. Rather, it is a description of cognitive functioning that seeks to unify highly similar findings across a wide variety of literatures. Further, we believe that such convergence warrants a central place in efforts to computationally emulate human cognition. That is, DCD is a general principle of cognition. It is also important to note that while we are drawing parallels across many literatures, these are functional parallels and are not necessarily structural ones. That is, we are not saying that the same neural pathways are involved in these phenomena. We are only saying that the different neural pathways that are responsible for the appearance of these various phenomena follow the same functional rules - the mechanisms are the same even if the physical parts are distinct. Furthermore, DCD is not a causal mechanism - it is an emergent property of the way the brain is constructed. DCD is the result of neurophysiology (cf. John, 2002, 2003). Finally, it is important to note that we are not proposing a generic learning mechanism such that one biological algorithm can account for all situation interpretation. Rather, we are pointing out that there are strikingly similar empirical results across a wide variety of disciplines that can be understood, in part, by similar cognitive processes. It is entirely possible, even assumed in some cases (i.e., primary language acquisition) that these more generic cognitive processes are complemented and constrained by various limits which may or may not be biological in nature (cf. Bates & Elman, 1996; Elman, in press).

Potentials and fields in a 300-mm dual-frequency reactor

Miller, Paul A.; Barnat, Edward V.; Hebner, Gregory A.

Dual-frequency reactors employ source rf power supplies to generate plasma and bias supplies to extract ions. There is debate over choices for the source and bias frequencies. Higher frequencies facilitate plasma generation, but their shorter wavelengths may cause spatial variations in plasma properties. Electrical nonlinearity of plasma sheaths causes harmonic generation and mixing of source and bias frequencies. These processes, and the resulting spectrum of frequencies, are as much dependent on electrical characteristics of matching networks and on chamber geometry as on plasma sheath properties. We investigated such electrical effects in a 300-mm Applied Materials plasma reactor. Data were taken for 13.56-MHz bias frequency (chuck) and for source frequencies from 30 to 160 MHz (upper electrode). An rf-magnetic-field probe (B-dot loop) was used to measure the radial variation of fields inside the plasma. We will describe the results of this work.

The Long Range Reconnaissance and Observation System (LORROS) with the Kollsman, Inc. Model LH-40, Infrared (Erbium) Laser Rangefinder hazard analysis and safety assessment

Augustoni, Arnold L.

A laser hazard analysis and safety assessment was performed for the LH-40 IR Laser Rangefinder based on the 2000 version of the American National Standards Institute's Standard Z136.1, for the Safe Use of Lasers, and Z136.6, for the Safe Use of Lasers Outdoors. The LH-40 IR Laser is central to the Long Range Reconnaissance and Observation System (LORROS). The LORROS is being evaluated by the Department 4149 Group to determine its capability as a long-range assessment tool. The manufacturer lists the laser rangefinder as 'eye safe' (Class 1 laser classified under the CDRH Compliance Guide for Laser Products and 21 CFR 1040 Laser Product Performance Standard). It was necessary that SNL validate this prior to its use involving the general public. A formal laser hazard analysis is presented for the typical mode of operation.

Sensors for environmental monitoring and long-term environmental stewardship

Ho, Clifford K.; Robinson, Alex L.; Miller, David R.

This report surveys the needs associated with environmental monitoring and long-term environmental stewardship. Emerging sensor technologies are reviewed to identify compatible technologies for various environmental monitoring applications. The contaminants that are considered in this report are grouped into the following categories: (1) metals, (2) radioisotopes, (3) volatile organic compounds, and (4) biological contaminants. Regulatory drivers are evaluated for different applications (e.g., drinking water, storm water, pretreatment, and air emissions), and sensor requirements are derived from these regulatory metrics. Sensor capabilities are then summarized according to contaminant type, and the applicability of the different sensors to various environmental monitoring applications is discussed.

A zero-power radio receiver

Brocato, Robert W.

This report describes both a general methodology and some specific examples of passive radio receivers. A passive radio receiver uses no direct electrical power but makes sole use of the power available in the radio spectrum. These radio receivers are suitable as low data-rate receivers or passive alerting devices for standard, high power radio receivers. Some zero-power radio architectures exhibit significant improvements in range with the addition of very low power amplifiers or signal processing electronics. These ultra-low power radios are also discussed and compared to the purely zero-power approaches.

Computational modeling of the temperature-induced structural changes of tethered Poly(N-isopropylacrylamide) with self-consistent field theory

Proposed for publication in Macromolecules.

Curro, John G.

We modeled the effects of temperature, degree of polymerization, and surface coverage on the equilibrium structure of tethered poly(N-isopropylacrylamide) chains immersed in water. We employed a numerical self-consistent field theory where the experimental phase diagram was used as input to the theory. At low temperatures, the composition profiles are approximately parabolic and extend into the solvent. In contrast, at temperatures above the LCST of the bulk solution, the polymer profiles are collapsed near the surface. The layer thickness and the effective monomer fraction within the layer undergo what appears to be a first-order change at a temperature that depends on surface coverage and chain length. Our results suggest that as a result of the tethering constraint, the phase diagram becomes distorted relative to the bulk polymer solution and exhibits closed-loop behavior. As a consequence, we find that the relative magnitude of the layer thickness change at 20 and 40 °C is a nonmonotonic function of surface coverage, with a maximum that shifts to lower surface coverage as the chain length increases, in qualitative agreement with experiment.

Hierarchical probabilistic regionalization of volcanism for Sengan region in Japan using multivariate statistical techniques and geostatistical interpolation techniques

Mckenna, Sean A.

Sandia National Laboratories, under contract to the Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geostatistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques to the geologic data provided by NUMO. A workshop report produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) on volcanism lists a set of the most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region in Japan from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables. In other words, the geologic variable vectors were found to be incomplete spatially. However, it is necessary to have complete geologic variable vectors to perform multivariate statistical analyses. As a first step towards constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to the aforementioned grid system. Also, the recorded data on volcanic activity for the Sengan region were produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism. In the regionalized classification procedure, this step is known as the variable selection step.
The following variables were determined to be most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH value, presence of volcanic rocks, and presence of hydrothermal alteration. Data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate values for each variable at the 23,949 centers of the chosen 1 km cell grid system that represents the Sengan region. These values formed complete geologic variable vectors at each of the 23,949 1 km cell centers.

Thermal modeling of W rod armor

Nygren, Richard E.

Sandia has developed and tested mockups armored with W rods over the last decade and pioneered the initial development of W rod armor for the International Thermonuclear Experimental Reactor (ITER) in the 1990s. We have also developed 2D and 3D thermal and stress models of W rod-armored plasma facing components (PFCs) and test mockups and are applying the models to both short pulses, i.e. edge localized modes (ELMs), and thermal performance in steady state for applications in C-MOD, DiMES testing, and ITER. This paper briefly describes the 2D and 3D models and their applications, with emphasis on modeling for an ongoing test program that simulates repeated heat loads from ITER ELMs.

Diagnosing dynamic hohlraums with tracer absorption line spectroscopy

Proposed for publication in Physics of Plasmas.

Sanford, Thomas W.; Nash, Thomas J.

In recent dynamic hohlraum experiments on the Z facility, Al and MgF{sub 2} tracer layers were embedded in cylindrical CH{sub 2} foam targets to provide K-shell lines in the keV spectral region for diagnosing the conditions of the interior hohlraum plasma. The position of the tracers was varied: sometimes they were placed 2 mm from the ends of the foam cylinder and sometimes at the ends of the cylinder. Also varied was the composition of the tracers in the sense that pure Al layers, pure MgF{sub 2} layers, or mixtures of the elements were employed on various shots. Time-resolved K-shell spectra of both Al and Mg show mostly absorption lines. These data can be analyzed with detailed configuration atomic models of carbon, aluminum, and magnesium in which spectra are calculated by solving the radiation transport equation for as many as 4100 frequencies. We report results from shot Z1022 to illustrate the basic radiation physics and the capabilities as well as limitations of this diagnostic method.

Coupled atomistic-continuum simulation using arbitrary overlapping domains

Proposed for publication in Journal of Computational Physics.

Zimmerman, Jonathan A.; Klein, Patrick A.

We present a formulation for coupling atomistic and continuum simulation methods for application to both quasistatic and dynamic analyses. In our formulation, a coarse-scale continuum discretization is assumed to cover all parts of the computational domain with atomistic crystals introduced only in regions of interest. The geometry of the discretization and crystal are allowed to overlap arbitrarily. Our approach uses interpolation and projection operators to link the kinematics of each region, which are then used to formulate a system potential energy from which we derive coupled expressions for the forces acting in each region. A hyperelastic constitutive formulation is used to compute the stress response of the defect-free continuum with constitutive properties derived from the Cauchy-Born rule. A correction to the Cauchy-Born rule is introduced in the overlap region to minimize fictitious boundary effects. Features of our approach will be demonstrated with simulations in one, two and three dimensions.
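
The Cauchy-Born rule referenced above derives a continuum strain energy density by deforming the underlying lattice homogeneously with the deformation gradient. A toy one-dimensional illustration, assuming a nearest-neighbor Lennard-Jones chain with illustrative parameters (not the paper's 3D hyperelastic model), is:

```python
# Toy 1D Cauchy-Born rule: the continuum strain energy density W(F) is
# obtained by stretching the underlying lattice homogeneously with the
# deformation gradient F and evaluating the interatomic pair energy.
# Nearest-neighbor Lennard-Jones chain; all parameters are illustrative.

EPS = 1.0                  # LJ well depth
SIG = 1.0                  # LJ length scale
A0 = 2 ** (1 / 6) * SIG    # equilibrium bond length of the chain

def lj(r):
    """Lennard-Jones pair potential."""
    return 4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)

def energy_density(F):
    # Each atom owns one bond of deformed length F * A0; divide by the
    # reference spacing to get energy per unit reference length.
    return lj(F * A0) / A0

def stress(F, h=1e-6):
    # 1st Piola-Kirchhoff stress P = dW/dF, by central difference.
    return (energy_density(F + h) - energy_density(F - h)) / (2 * h)
```

At F = 1 the chain sits at its equilibrium spacing, so the stress vanishes; stretching (F > 1) gives tension and compressing (F < 1) gives compression, which is the consistency the continuum constitutive model inherits from the atomistic potential.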

Dynamic vulnerability assessment

Nelson, Cynthia L.

With increased terrorist threats in the past few years, it is no longer realistic to assume that a facility is well protected by a static security system. Potential adversaries often research their targets, examining procedural and system changes, in order to attack at a vulnerable time. Such system changes may include scheduled sensor maintenance, scheduled or unscheduled changes in the guard force, facility alert level changes, sensor failures or degradation, etc. All of these changes impact the system effectiveness and can make a facility more vulnerable. Currently, a standard analysis of system effectiveness is performed approximately every six months using a vulnerability assessment tool called ASSESS (Analytical Systems and Software for Evaluating Safeguards and Systems). New standards for determining a facility's system effectiveness will be defined by tools that are currently under development, such as ATLAS (Adversary Time-line Analysis System) and NextGen (Next Generation Security Simulation). Although these tools are useful to model analyses at different spatial resolutions and can support some sensor dynamics using statistical models, they are limited in that they require a static system state as input. They cannot account for the dynamics of the system through day-to-day operations. The emphasis of this project was to determine the feasibility of dynamically monitoring the facility security system and performing an analysis as changes occur. Hence, the system effectiveness is known at all times, greatly assisting time-critical decisions in response to a threat or a potential threat.

Polymeric insulation post electrodeless dielectrophoresis (iDEP) for the monitoring of water-borne pathogens

Mcgraw, Gregory J.; Brazzle, John D.; Cummings, Eric B.; Shediac, Renee S.; Fintschenko, Yolanda F.; Davalos, Rafael V.; Ceremuga, Joseph T.; Chames, Jeffery M.; Hunter, Marion C.; Fiechtner, Gregory J.

We have successfully demonstrated selective trapping, concentration, and release of various biological organisms and inert beads by insulator-based dielectrophoresis within a polymeric microfluidic device. The microfluidic channels and internal features, in this case arrays of insulating posts, were initially created through standard wet-etch techniques in glass. This glass chip was then transformed into a nickel stamp through the process of electroplating. The resultant nickel stamp was then used as the replication tool to produce the polymeric devices through injection molding. The polymeric devices were made of Zeonor{reg_sign} 1060R, a polyolefin copolymer resin selected for its superior chemical resistance and optical properties. These devices were then optically aligned with another polymeric substrate that had been machined to form fluidic vias. These two polymeric substrates were then bonded together through thermal diffusion bonding. The sealed devices were utilized to selectively separate and concentrate biological pathogen simulants, including spores that were selectively concentrated and released by simply applying DC voltages across the plastic replicates via platinum electrodes in inlet and outlet reservoirs. The dielectrophoretic response of the organisms is observed to be a function of the applied electric field and of post size, geometry, and spacing. Cells were selectively trapped against a background of labeled polystyrene beads and spores to demonstrate that samples of interest can be separated from a diverse background. Finally, we have implemented and demonstrated a methodology to determine the concentration factors obtained in these devices.

Effects of radiation on laser diodes

Phifer, Carol P.

The effects of ionizing and neutron radiation on the characteristics and performance of laser diodes are reviewed, and the formation mechanisms for nonradiative recombination centers, the primary type of radiation damage in laser diodes, are discussed. Additional topics include the detrimental effects of aluminum in the active (lasing) volume, the transient effects of high-dose-rate pulses of ionizing radiation, and a summary of ways to improve the radiation hardness of laser diodes. Radiation effects on laser diodes emitting in the wavelength region around 808 nm are emphasized.

Surface dynamics dominated by bulk thermal defects -- the case of NiAl (110)

Proposed for publication in Physical Review B.

Nobel, Jan A.; Bartelt, Norman C.

We find that small temperature changes cause steps on the NiAl(110) surface to move. We show that this step motion occurs because mass is transferred between the bulk and the surface as the concentration of bulk thermal defects (i.e., vacancies) changes with temperature. Since the change in an island's area with a temperature change is found to scale strictly with the island's step length, the thermally generated defects are created (annihilated) very near the surface steps. To quantify the bulk/surface exchange, we oscillate the sample temperature and measure the amplitude and phase lag of the system response, i.e., the change in an island's area normalized to its perimeter. Using a one-dimensional model of defect diffusion through the bulk in a direction perpendicular to the surface, we determine the migration and formation energies of the bulk thermal defects. During surface smoothing, we show that there is no flow of material between islands on the same terrace and that all islands in a stack shrink at the same rate. We conclude that smoothing occurs by mass transport through the bulk of the crystal rather than via surface diffusion. Based on the measured relative sizes of the activation energies for island decay, defect migration, and defect formation, we show that attachment/detachment at the steps is the rate-limiting step in smoothing.

Progress on Z-pinch inertial fusion energy

Olson, Craig L.

The goal of z-pinch inertial fusion energy (IFE) is to extend the single-shot z-pinch inertial confinement fusion (ICF) results on Z to a repetitive-shot z-pinch power plant concept for the economical production of electricity. Z produces up to 1.8 MJ of x-rays at powers as high as 230 TW. Recent target experiments on Z have demonstrated capsule implosion convergence ratios of 14-21 with a double-pinch driven target, and DD neutron yields up to 8 x 10{sup 10} with a dynamic hohlraum target. For z-pinch IFE, a power plant concept is discussed that uses high-yield IFE targets (3 GJ) with a low rep-rate per chamber (0.1 Hz). The concept includes a repetitive driver at 0.1 Hz, a Recyclable Transmission Line (RTL) to connect the driver to the target, high-yield targets, and a thick-liquid wall chamber. Recent funding by a U.S. Congressional initiative of $4M for FY04 is supporting research on RTLs, repetitive pulsed power drivers, shock mitigation, planned full-RTL-cycle experiments, high-yield IFE targets, and z-pinch power plant technologies. Recent results of research in all of these areas are discussed, and a Road Map for Z-Pinch IFE is presented.

Modeling conflict : research methods, quantitative modeling, and lessons learned

Malczynski, Leonard A.; Kobos, Peter H.; Rexroth, Paul E.; Hendrickson, Gerald A.; McNamara, Laura A.

This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both in the past and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

More Details

Analysis and control of distributed cooperative systems

Feddema, John T.; Schoenwald, David A.; Parker, Eric P.; Wagner, John S.

As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance, and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high-bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate-communication-bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.

More Details

Military airborne and maritime application for cooperative behaviors

Byrne, Raymond H.; Robinett, R.D.

As part of DARPA's Software for Distributed Robotics Program within the Information Processing Technologies Office (IPTO), Sandia National Laboratories was tasked with identifying military airborne and maritime missions that require cooperative behaviors, as well as identifying generic collective behaviors and performance metrics for these missions. This report documents this study. A prioritized list of general military missions applicable to land, air, and sea has been identified. From the top eight missions, nine generic reusable cooperative behaviors have been defined. A common mathematical framework for cooperative controls has been developed and applied to several of the behaviors. The framework is based on optimization principles and has provably convergent properties. A three-step optimization process is used to develop the decentralized control law that minimizes the behavior's performance index. A connective stability analysis is then performed to determine constraints on the communication sample period and the local control gains. Finally, the communication sample period for four different network protocols is evaluated based on the network graph, which changes throughout the task. Using this mathematical framework, two metrics for evaluating these behaviors are defined. The first metric is the residual error in the global performance index that is used to create the behavior. The second metric is the communication sample period between robots, which affects the overall time required for the behavior to reach its goal state.
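As a sketch of the kind of decentralized, provably convergent behavior the framework above describes, the example below implements a simple consensus (rendezvous) behavior: each robot takes a gradient step on a quadratic performance index using only its communication neighbors' positions. The function names, gain, and toy network are illustrative assumptions, not taken from the report.

```python
import numpy as np

def consensus_step(positions, adjacency, gain=0.1):
    """One decentralized gradient step on the quadratic performance
    index J = sum over edges of |x_i - x_j|^2: each robot moves
    toward the average of its communication neighbors."""
    new = positions.copy()
    for i in range(len(positions)):
        nbrs = np.nonzero(adjacency[i])[0]
        if len(nbrs):
            new[i] += gain * np.sum(positions[nbrs] - positions[i], axis=0)
    return new

# Toy rendezvous: 4 robots on a fully connected network converge
# to the centroid of their initial positions, here (2, 2).
pos = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
A = np.ones((4, 4)) - np.eye(4)
for _ in range(100):
    pos = consensus_step(pos, A)
print(pos.round(2))
```

The communication sample period discussed in the abstract corresponds here to how often `consensus_step` is executed; too large a `gain` relative to the network connectivity destroys the convergence guarantee.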

More Details

A robust, coupled approach for atomistic-continuum simulation

Zimmerman, Jonathan A.; Aubry, Sylvie A.; Bammann, Douglas J.; Hoyt, Jeffrey J.; Jones, Reese E.; Kimmer, Christopher J.; Klein, Patrick A.; Webb, Edmund B.

This report is a collection of documents written by members of the Engineering Sciences Research Foundation (ESRF) Laboratory Directed Research and Development (LDRD) project titled 'A Robust, Coupled Approach to Atomistic-Continuum Simulation'. This document presents: the development of a formulation for performing quasistatic, coupled, atomistic-continuum simulation, including cross terms in the equilibrium equations that arise from kinematic coupling, together with corrections to the calculation of system potential energy that account for continuum elements overlapping regions containing atomic bonds; evaluations of thermo-mechanical continuum quantities calculated within atomistic simulations, including measures of stress, temperature, and heat flux; calculations used to determine the spatial and time averaging necessary for these atomistically defined expressions to have the same physical meaning as their continuum counterparts; and a formulation to quantify a continuum 'temperature field', the first step toward constructing a coupled atomistic-continuum approach capable of finite-temperature and dynamic analyses.

More Details

ITS5 theory manual

Kensek, Ronald P.; Franke, Brian C.; Laub, Thomas W.

This document describes the modeling of the physics (and eventually the features) in the Integrated TIGER Series (ITS) codes [Franke 04]. The material is largely drawn from sources in the open literature (especially [Seltzer 88], [Seltzer 91], [Lorence 89], and [Halbleib 92]), although those sources often describe the ETRAN code, from which the physics engine of ITS is derived but with which it is not necessarily identical. This is meant to be an evolving document, with more coverage and detail added over time; as such, entire sections are still incomplete. Presently, this document covers the continuous-energy ITS codes, with more complete coverage of photon transport (though electron transport is not entirely ignored). In particular, this document does not cover the Multigroup code, MCODES (externally applied electromagnetic fields), or high-energy phenomena (photon pair production). In this version, equations are largely left to the references, though they may be pulled in over time.

More Details

Thermal modeling of the Sandia Flinabe (LiF-BeF2-NaF) experiment

Nygren, Richard E.

An experiment at Sandia National Laboratories confirmed that a ternary salt (Flinabe, a ternary mixture of LiF, BeF{sub 2} and NaF) had a sufficiently low melting temperature ({approx}305 C) to be useful for first wall and blanket applications using flowing molten salts that were investigated in the Advanced Power Extraction (APEX) Program.[1] In the experiment, the salt pool was contained in a stainless steel crucible under vacuum. One thermocouple was placed in the salt and two others were embedded in the crucible. The results and observations from the experiment are reported in the companion paper.[2] The paper presented here covers a 3-D finite element thermal analysis of the salt pool and crucible. The analysis was done to evaluate the thermal gradients in the salt pool and crucible and to compare the temperatures of the three thermocouples. One salt mixture appeared to melt and to solidify as a eutectic, with a visible plateau in the cooling curve (i.e., time versus temperature for the thermocouple in the salt pool). This behavior was reproduced with the thermal model. Cases were run with several values of the thermal conductivity and latent heat of fusion to see the parametric effects of these changes on the respective cooling curves. The crucible was heated by an electrical heater in an inverted well at the base of the crucible. It lost heat primarily by radiation from the outer surfaces of the crucible and the top surface of the salt. The primary independent factors in the model were the emissivity of the crucible (and of the salt) and the fraction of the heater power coupled into the crucible. The model was 'calibrated' using thermocouple data and heating power from runs in which the crucible contained no salt.

More Details

Technological learning and renewable energy costs: implications for U.S. renewable energy policy

Proposed for publication in Energy Policy.

Kobos, Peter H.; Drennen, Thomas E.; Drennen, Thomas E.

This paper analyzes the relationship between current renewable energy technology costs and cumulative production, research, development and demonstration expenditures, and other institutional influences. Combining the theoretical framework of 'learning by doing' and developments in 'learning by searching' with the fields of organizational learning and institutional economics offers a complete methodological framework to examine the underlying capital cost trajectory when developing electricity cost estimates used in energy policy planning models. Sensitivities of the learning rates for global wind and solar photovoltaic technologies to changes in the model parameters are tested. The implications of the results indicate that institutional policy instruments play an important role for these technologies to achieve cost reductions and further market adoption.
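The 'learning by doing' / 'learning by searching' relationship described above is commonly expressed as a two-factor learning curve, where capital cost declines as a power law of both cumulative production and cumulative R&D expenditures. The sketch below is a minimal illustration with hypothetical parameter values; the coefficients actually estimated in the paper are not reproduced here.

```python
def two_factor_cost(c0, cum_production, cum_rd, alpha, beta):
    """Two-factor learning curve: cost falls with cumulative
    production (learning-by-doing, exponent alpha) and cumulative
    R&D spending (learning-by-searching, exponent beta)."""
    return c0 * cum_production**(-alpha) * cum_rd**(-beta)

def learning_rate(exponent):
    """Fractional cost reduction per doubling of the driver."""
    return 1 - 2**(-exponent)

# Illustrative values only: an exponent of 0.32 corresponds to
# roughly a 20% cost reduction per doubling of cumulative capacity.
print(round(learning_rate(0.32), 3))
```

Sensitivity tests like those in the paper amount to re-fitting `alpha` and `beta` under perturbed data and observing how the implied learning rates, and hence the projected cost trajectories, shift.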

More Details

Potential application of microsensor technology in radioactive waste management with emphasis on headspace gas detection

Wang, Yifeng

Waste characterization is probably the most costly part of radioactive waste management. An important part of this characterization is the measurement of headspace gas in waste containers in order to demonstrate compliance with Resource Conservation and Recovery Act (RCRA) or transportation requirements. The traditional chemical analysis methods, which include the steps of gas sampling, sample shipment, and laboratory analysis, are expensive and time-consuming and increase workers' exposure to hazardous environments. Therefore, an alternative technique that can provide quick, in-situ, real-time detection of headspace gas compositions is highly desirable. This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Potential Application of Microsensor Technology in Radioactive Waste Management with Emphasis on Headspace Gas Detection'. The objective of this project is to bridge the technical gap between the current status of microsensor development and the intended applications of these sensors in nuclear waste management. The major results are summarized below:
- A literature review was conducted on the regulatory requirements for headspace gas sampling/analysis in waste characterization and monitoring. The most relevant gaseous species and the related physiochemical environments were identified. It was found that preconcentrators might be needed in order for chemiresistor sensors to meet the desired detection limits.
- A long-term stability test was conducted for a polymer-based chemiresistor sensor array. Significant drifts were observed over a duration of one month. Such drifts should be taken into account for long-term in-situ monitoring.
- Several techniques were explored to improve the performance of sensing polymers. It has been demonstrated that freeze deposition of carbon black (CB)-polymer composites can effectively eliminate the so-called 'coffee ring' effect and lead to a desirably uniform distribution of CB particles in sensing polymer films. The optimal CB/polymer ratio has been determined. UV irradiation has been shown to improve sensor sensitivity.
- From a large set of commercially available polymers, five were selected to form a sensor array able to provide optimal responses to six target volatile organic compounds (VOCs). A series of tests of the sensor array's response to various VOC concentrations has been performed. Linear sensor responses have been observed over the tested concentration ranges, although the responses over a whole concentration range are generally nonlinear.
- Inverse models have been developed for identifying individual VOCs based on sensor array responses. A linear solvation energy model is particularly promising for identifying an unknown VOC in a single-component system. It has been demonstrated that a sensor array such as the one we developed is able to discriminate among waste containers by their total VOC concentrations and therefore can be used as a screening tool for reducing the existing headspace gas sampling rate.
- Various VOC preconcentrators have been fabricated using Carboxen 1000 as an absorbent. Extensive tests have been conducted in order to obtain optimal configurations and parameter ranges for preconcentrator performance. It has been shown that use of preconcentrators can reduce the detection limits of chemiresistors by two orders of magnitude. The life span of preconcentrators under various physiochemical conditions has also been evaluated.
- The performance of Pd film-based H{sub 2} sensors in the presence of VOCs has been evaluated. Interference with the sensor readings by VOCs has been observed, which can be attributed to interference of the VOCs with the H{sub 2}-O{sub 2} reaction on the Pd alloy surface. This interference can be eliminated by coating a layer of silicon dioxide on the sensing film surface.

Our work has demonstrated a wide range of applications of gas microsensors in radioactive waste management. Such applications can potentially lead to significant cost savings and risk reduction for waste characterization.

More Details

SATPro: the system assessment test program for Z-R

Lehr, J.M.

In the mid-1990s, breakthroughs were achieved at Sandia with z-pinches for high energy density physics on the Saturn machine. These initial tests led to the modification of the PBFA II machine to provide high currents rather than the high voltage for which it was initially designed. The success of z-pinches for high energy density physics experiments ensured a new mission for the converted accelerator, known as Z since 1997. Z now provides a unique capability to a number of basic science communities and has expanded its mission to include radiation effects research, inertial confinement fusion, and material properties research. To achieve continued success, the physics community has requested that higher peak current, better precision, and pulse-shaping versatility be incorporated into the refurbishment of the Z machine, known as ZR. In addition to the performance specification for ZR of a peak current of 26 MA with an implosion time of 100 ns, the machine also has a reliability specification of 400 shots per year. While changes to the basic architecture of the Z machine are minor, the vast majority of its components have been redesigned. Moreover, the increase in peak current from the present 18 MA to ZR's peak current of 26 MA at nominal operating parameters requires significantly higher voltages. These higher voltages, along with the reliability requirement, mandate that a system assessment be performed to ensure the requirements have been met. This paper will describe the System Assessment Test Program (SATPro) for the ZR project and report on the results.

More Details

Evaluating techniques for multivariate classification of non-collocated spatial data

Mckenna, Sean A.

Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA) and previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach. 
The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection point and the non-hierarchical classification of the estimated vectors.
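To illustrate the cross-validation step described above, which re-estimates each sampled value from the remaining samples to emulate non-collocated data, the sketch below performs leave-one-out estimation. Inverse-distance weighting stands in for the kriging estimator to keep the example short; the function names and toy data are assumptions, not from the paper.

```python
import numpy as np

def idw_estimate(xy, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` -- a simplified
    stand-in for the kriging estimator used in the paper."""
    d = np.linalg.norm(xy - target, axis=1)
    if np.any(d == 0):
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

def leave_one_out(xy, values):
    """Re-estimate each sampled value from the other samples,
    mimicking the cross-validation used to emulate non-collocated
    data from a fully collocated data set."""
    n = len(values)
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        out[i] = idw_estimate(xy[mask], values[mask], xy[i])
    return out

# Toy example: 5 sample locations of one chemistry attribute.
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
vals = np.array([1.0, 2.0, 1.5, 2.5, 1.8])
print(leave_one_out(xy, vals).round(2))
```

In the paper's workflow, the re-estimated vectors (one per attribute) would then feed the regionalized classification, with the estimation variance at each location available as the uncertainty measure used by the sequential discrimination constraint.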

More Details

Exciton drag and drift by a two-dimensional electron gas

Proposed for publication in Physical Review B.

Lyo, S.K.

We show theoretically that an electric current in a high-mobility quasi-two-dimensional electron layer induces a significant drift of excitons in an adjacent layer through interlayer Coulomb interaction. The exciton gas is shown to drift with a velocity which can be a significant fraction of the electron drift velocity at low temperatures. The estimated drift length is of the order of micrometers or larger during the typical exciton lifetime for GaAs/Al{sub x}Ga{sub 1-x}As double quantum wells. A possible enhancement of the exciton radiative lifetime due to the drift is discussed.

More Details

GeoPowering the west

Hill, Roger

The U.S. Department of Energy's (DOE's) GeoPowering the West (GPW) program works with the U.S. geothermal industry, power companies, industrial and residential consumers, and federal, state, and local officials to provide technical and institutional support and limited, cost-shared funding to state-level activities. By demonstrating the benefits of geothermal energy, GPW increases state and regional awareness of opportunities to enhance local economies and strengthen our nation's energy security while minimizing environmental impact. By identifying barriers to development and working with others to eliminate them, GPW helps a state or region create a regulatory and economic environment that is more favorable for geothermal and other renewable energy development. Electricity is produced using expanding steam or very hot water from the underground reservoir to spin a conventional turbine-generator. Geothermal power plants operate at high capacity factors (70-100%), with availability factors typically greater than 95%. Geothermal plants are among the cleanest sources of electric power available. Direct use applications directly pipe hot water from geothermal resources to provide heat for industrial processes, crop drying, greenhouses, aquaculture, recreation, sidewalk snow-melting, and buildings. Geothermal district heating systems supply heat to multiple buildings through a network of pipes carrying the hot geothermal water.
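The capacity factor quoted above is simply the energy a plant actually produces divided by the maximum it could produce over the same period. A minimal sketch, using a hypothetical plant (the figures below are illustrative, not from the program):

```python
def capacity_factor(energy_mwh, rated_mw, hours=8760):
    """Fraction of the year's maximum possible output actually
    produced (8760 hours in a non-leap year)."""
    return energy_mwh / (rated_mw * hours)

# Hypothetical 50 MW geothermal plant producing 394,200 MWh/yr
# operates at a 90% capacity factor, consistent with the 70-100%
# range cited for geothermal plants.
print(round(capacity_factor(394_200, 50), 2))
```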

More Details

Jet-wall interaction effects on diesel combustion and soot formation

Pickett, Lyle M.

The effects of wall interaction on combustion and soot formation processes of a diesel fuel jet were investigated in an optically-accessible constant-volume combustion vessel at experimental conditions typical of a diesel engine. At identical ambient and injector conditions, soot processes were studied in free jets, plane wall jets, and 'confined' wall jets (a box-shaped geometry simulating secondary interaction with adjacent walls and jets in an engine). The investigation showed that soot levels are significantly lower in a plane wall jet compared to a free jet. At some operating conditions, sooting free jets become soot-free as plane wall jets. Possible mechanisms to explain the reduced or delayed soot formation upon wall interaction include an increased fuel-air mixing rate and a wall-jet-cooling effect. However, in a confined-jet configuration, there is an opposite trend in soot formation. Jet confinement causes combustion gases to be redirected towards the incoming jet, causing the lift-off length to shorten and soot to increase. This effect can be avoided by ending fuel injection prior to the time of significant interaction with redirected combustion gases. For a fixed confined-wall geometry, an increase in ambient gas density delays jet interaction, allowing longer injection durations with no increase in soot. Jet interaction with redirected combustion products may also be avoided using reduced ambient oxygen concentration because of an increased ignition delay. Although simplified geometries were employed, the identification of important mechanisms affecting soot formation after the time of wall interaction is expected to be useful for understanding these processes in more complex and realistic diesel engine geometries.

More Details

An automated approach to identifying sine-on-random content from short duration aircraft flight operating data

Cap, Jerome S.

One challenge faced by engineers today is replicating an operating environment, such as transportation, in a test lab. This paper focuses on the process of identifying sine-on-random content in an aircraft transportation environment, although the methodology can be applied to other events. The ultimate goal of this effort was to develop an automated way to identify significant peaks in the power spectral densities (PSDs) of the operating data, catalog the peaks, and determine whether each peak was sinusoidal or random in nature. This information helps in designing a test environment that accurately reflects the operating environment. A series of MATLAB functions has been developed to achieve this goal with a relatively high degree of accuracy. The software is able to distinguish between sine-on-random and random-on-random peaks in most cases. This paper describes the approach taken for converting the time history segments to the frequency domain, identifying peaks from the resulting PSD, and filtering the time histories to determine each peak's amplitude and characteristics. The approach is validated with contrived data and then applied to actual test data. Observations and conclusions, including limitations of the process, are also presented.
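A minimal sketch of the core idea: compute a PSD from a time history and flag bins that stand well above the broadband floor as candidate sinusoidal tones. This simplifies the paper's method, which also filters the time histories to characterize each peak; the windowing choice, thresholds, and signal parameters below are illustrative assumptions.

```python
import numpy as np

def psd(x, fs):
    """One-sided Hann-windowed periodogram PSD of a time history."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft(x * w)
    p = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))
    p[1:-1] *= 2  # fold negative frequencies into the one-sided PSD
    f = np.fft.rfftfreq(n, 1 / fs)
    return f, p

def find_tonal_peaks(f, p, ratio=50.0):
    """Flag PSD bins far above the median broadband floor -- a crude
    proxy for distinguishing sine-on-random from random-on-random."""
    floor = np.median(p)
    return f[p > ratio * floor]

# Contrived data: a 100 Hz tone buried in broadband random noise.
fs = 1024.0
t = np.arange(8192) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(len(t))
f, p = psd(x, fs)
peaks = find_tonal_peaks(f, p)
print(peaks)  # flagged frequencies cluster around 100 Hz
```

A genuinely random peak raises the PSD over a band of bins rather than a few isolated ones, which is one way an automated classifier can separate the two cases.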

More Details

The relaxor properties of compositionally disordered perovskites: Ba- and Bi-substituted Pb(Zr1-xTix)O3

Proposed for publication in Physical Review B.

Samara, George A.

Dielectric spectroscopy, lattice structure, and thermal properties have revealed the relaxor dielectric response of Ba-substituted lead zirconate/titanate (PZT) having the composition (Pb{sub 0.71}Ba{sub 0.29})(Zr{sub 0.71}Ti{sub 0.29})O{sub 3} and containing 2 at. % Bi as an additive. The relaxor behavior is attributed to the compositional disorder introduced by the substitution of Ba{sup 2+} at the A site and Bi{sup 3+/5+} at the B site (and possibly A site) of the ABO{sub 3} PZT host lattice. Analysis of the results gives clear evidence for the nucleation of polar nanodomains at a temperature much higher than the peak (T{sub m}) in the dielectric susceptibility. These nanodomains grow in size as their correlation length increases with decreasing temperature, and ultimately their dipolar fluctuations slow down below T{sub m}, leading to the formation of the relaxor state. The influences of hydrostatic pressure on the dielectric susceptibility and the dynamics of the relaxation of the polar nanodomains were investigated and can be understood in terms of the decrease in the size of the nanodomains with pressure. The influence of dc electrical bias on the susceptibility was also investigated. Physical models of the relaxor response of this material are discussed.

More Details

Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

Roberts, Barry L.; Arnold, Bill W.; Mckenna, Sean A.

Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.

More Details