Optimization-based constrained modeling: a new transport paradigm
Abstract not provided.
Abstract not provided.
The increased demand for Liquefied Natural Gas (LNG) as a fuel source in the U.S. has prompted a study to improve our capability to predict cascading damage to LNG tankers from cryogenic spills and subsequent fire. To support this large modeling and simulation effort, a suite of experiments was conducted on two tanker steels, ABS Grade A steel and ABS Grade EH steel. A thorough understanding of the mechanical behavior of the tanker steels was developed that was heretofore unavailable over the span of temperatures of interest, from cryogenic to fire temperatures. This was accomplished by conducting several types of experiments, including tension, notched tension and Charpy impact tests, at fourteen temperatures over the range of -191 C to 800 C. Several custom fixtures and special techniques were developed for testing at the various temperatures. The experimental techniques developed and the resulting data will be presented, along with a complete description of the material behavior over the temperature span.
Systems and Control Letters
We develop a switched feedback controller that optimizes the rate of convergence of the state trajectories to the origin for a class of second order LTI systems. Specifically, we derive an algorithm which optimizes the rate of convergence by employing a controller that switches between symmetric gains. As a byproduct of our investigation, we find that, in general, the controllers which optimize the rate of convergence switch between two linear subsystems, one of which is unstable. The algorithm we investigate will design optimal switching laws for the specific case of second order LTI plants of relative degree two.
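A minimal numerical sketch of such a switched state feedback, assuming a double-integrator plant (relative degree two) and an illustrative region-based switching rule between two gain vectors; the gains, switching law, and simulation parameters are hypothetical stand-ins, not the optimal design derived by the algorithm in the paper:

```python
import numpy as np

# Hypothetical second-order LTI plant of relative degree two: a double
# integrator, xdot = A x + B u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Two illustrative state-feedback gains to switch between (not the
# optimized gains constructed by the paper's algorithm).
K1 = np.array([[4.0, 0.5]])   # lightly damped closed loop
K2 = np.array([[4.0, 6.0]])   # heavily damped closed loop

def switching_gain(x):
    # Illustrative region-based switching law on the state plane.
    return K1 if x[0] * x[1] > 0.0 else K2

def simulate(x0, dt=1e-3, steps=20000):
    # Forward-Euler integration of the switched closed loop.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        u = -(switching_gain(x) @ x)      # scalar control, shape (1,)
        x = x + dt * (A @ x + B @ u)
    return x

x_final = simulate([1.0, 0.0])            # trajectory converges to the origin
```

Even when one of the subsystems being switched among is individually unstable, as the paper notes for the optimal case, a suitably chosen switching law can still drive the state to the origin; the sketch above uses two stable gains only for simplicity.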
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Instrumented, fully coupled thermal-mechanical experiments were conducted to provide validation data for finite element simulations of failure in pressurized, high temperature systems. The design and implementation of the experimental methodology is described in another paper of this conference. Experimental coupling was accomplished on tubular 304L stainless steel specimens by mechanical loading imparted by internal pressurization and thermal loading by side radiant heating. Experimental parameters, including temperature and pressurization ramp rates, maximum temperature and pressure, phasing of the thermal and mechanical loading, and specimen geometry details were studied. Experiments were conducted to increasing degrees of deformation, up to and including failure. Mechanical characterization experiments of the 304L stainless steel tube material were also completed for development of a thermal elastic-plastic material constitutive model used in the finite element simulations of the validation experiments. The material was characterized in tension at a strain rate of 0.001/s from room temperature to 800 C. The tensile behavior of the tube material was found to differ substantially from 304L bar stock material, with the plasticity characteristics and strain to failure differing at every test temperature.
Coupled thermal-mechanical experiments with well-defined, controlled boundary conditions were designed through an iterative process involving a team of experimentalists, material modelers and computational analysts. First the basic experimental premise was selected: an axisymmetric tubular specimen mechanically loaded by internal pressurization and thermally loaded asymmetrically by side radiant heating. Then several integrated experimental-analytical steps were taken to determine the experimental details. The boundary conditions were mostly thermally driven and were chosen so they could be modeled accurately; the experimental fixtures were designed to ensure that the boundary conditions were met. Preliminary, uncoupled analyses were used to size the specimen diameter, height and thickness with experimental consideration of maximum pressure loads and fixture design. Iterations of analyses and experiments were used to efficiently determine heating parameters including lamp and heating shroud design, set off distance between the lamps and shroud and between the shroud and specimen, obtainable ramp rates, and the number and spatial placement of thermocouples. The design process and the experimental implementation of the final coupled thermomechanical failure experiment design will be presented.
The blades of a modern wind turbine are critical components central to capturing and transmitting most of the load experienced by the system. They are complex structural items composed of many layers of fiber and resin composite material and typically, one or more shear webs. Large turbine blades being developed today are beyond the point of effective trial-and-error design of the past and design for reliability is always extremely important. Section analysis tools are used to reduce the three-dimensional continuum blade structure to a simpler beam representation for use in system response calculations to support full system design and certification. One model simplification approach is to analyze the two-dimensional blade cross sections to determine the properties for the beam. Another technique is to determine beam properties using static deflections of a full three-dimensional finite element model of a blade. This paper provides insight into discrepancies observed in outputs from each approach. Simple two-dimensional geometries and three-dimensional blade models are analyzed in this investigation. Finally, a subset of computational and experimental section properties for a full turbine blade are compared.
Abstract not provided.
Abstract not provided.
Resonant plasmonic detectors are potentially important for terahertz (THz) spectroscopic imaging. We have fabricated and characterized antenna-coupled detectors that integrate a broad-band antenna, which improves coupling of THz radiation. The vertex of the antenna contains the tuning gates and the bolometric barrier gate. Incident THz radiation may excite 2D plasmons with wave-vectors defined by either a periodic grating gate or a plasmonic cavity determined by ohmic contacts and gate terminals. The latter approach of exciting plasmons in a cavity defined by a short micron-scale channel appears most promising. With this short-channel geometry, we have observed multiple harmonics of THz plasmons. At 20 K with the detector bias optimized, we report an on-resonance responsivity of 2.5 kV/W and an NEP of 5 × 10^-10 W/Hz^(1/2).
Separation distances are used in hydrogen refueling stations to protect people, structures, and equipment from the consequences of accidental hydrogen releases. Specifically, hydrogen jet flames resulting from ignition of unintended releases can be extensive in length and pose significant radiation and impingement hazards. Depending on the leak diameter and source pressure, the resulting separation distances can be unacceptably large. One possible mitigation strategy to reduce exposure to hydrogen flames is to incorporate barriers around hydrogen storage, process piping, and delivery equipment. The effectiveness of barrier walls to reduce hazards at hydrogen facilities has been previously evaluated using experimental and modeling information developed at Sandia National Laboratories. The effect of barriers on the risk from different types of hazards including direct flame contact, radiation heat fluxes, and overpressures associated with delayed hydrogen ignition has subsequently been evaluated and used to identify potential reductions in separation distances in hydrogen facilities. Both the frequency and consequences used in this risk assessment and the risk results are described. The results of the barrier risk analysis can also be used to help establish risk-informed barrier design requirements for use in hydrogen codes and standards.
Abstract not provided.
This study investigates a pathway to nanoporous structures created by hydrogen and helium implantation in aluminum. Previous experiments for fusion applications have indicated that hydrogen and helium ion implantations are capable of producing bicontinuous nanoporous structures in a variety of metals. This study focuses specifically on implantations of hydrogen and helium ions at 25 keV in aluminum. The hydrogen and helium systems result in remarkably different nanostructures of aluminum at the surface. Scanning electron microscopy, focused ion beam, and transmission electron microscopy show that both implantations result in porosity that persists approximately 200 nm deep. However, hydrogen implantations tend to produce larger and more irregular voids that preferentially reside at defects. Implantations of helium at a fluence of 10^18 cm^-2 produce much smaller porosity on the order of 10 nm that is regular and creates a bicontinuous structure in the porous region. The primary difference driving the formation of the contrasting structures is likely the relatively high mobility of hydrogen and the ability of hydrogen to form alanes that are capable of desorbing and etching Al (111) faces.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
There is an increasing need to assess the performance of high consequence systems using a modeling and simulation based approach. Central to this approach are the need to quantify the uncertainties present in the system and to compare the system response to an expected performance measure. At Sandia National Laboratories, this process is referred to as quantification of margins and uncertainties or QMU. Depending on the outcome of the assessment, there might be a need to increase the confidence in the predicted response of a system model; thus a need to understand where resources need to be allocated to increase this confidence. This paper examines the problem of resource allocation done within the context of QMU. An optimization based approach to solving the resource allocation is considered and sources of aleatoric and epistemic uncertainty are included in the calculations.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physics or Materials Journals, to be determined
Abstract not provided.
The relatively recent development of short (nsec) and ultra-short (fsec) pulsed laser systems has introduced process capabilities which are particularly suited for micro-manufacturing applications. Micrometer feature resolutions and minimal heat affected zones are commonly cited benefits, although unique material interactions also prove attractive for many applications. A background of short and ultra-short pulsed laser system capabilities and material interactions will be presented for micro-scale processing. Processing strengths and limitations will be discussed and demonstrated within the framework of applications related to micro-machining, material surface modifications, and fundamental material science research.
Abstract not provided.
Abstract not provided.
Charge collection measured by the single-photon absorption and two-photon absorption laser testing techniques has been directly compared using specially made SOI diodes. Details of this comparison are discussed.
Abstract not provided.
Journal of Chemical Physics
Abstract not provided.
Abstract not provided.
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
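For orientation, a minimal LAMMPS input deck in the style of the standard bundled "melt" example (a 3-D Lennard-Jones fluid); each line is a standard LAMMPS input-script command, and the numerical values follow that bundled example rather than any particular study:

```
# 3-D Lennard-Jones melt: minimal LAMMPS input deck (after the bundled example)
units        lj                       # reduced Lennard-Jones units
atom_style   atomic
lattice      fcc 0.8442               # fcc lattice at reduced density 0.8442
region       box block 0 10 0 10 0 10
create_box   1 box
create_atoms 1 box
mass         1 1.0
velocity     all create 3.0 87287     # initial velocities at reduced T = 3.0
pair_style   lj/cut 2.5               # LJ potential with 2.5 sigma cutoff
pair_coeff   1 1 1.0 1.0 2.5
neighbor     0.3 bin                  # spatial-binning neighbor lists
fix          1 all nve                # constant-NVE time integration
thermo       100
run          1000
```

A deck like this is parsed top to bottom; the spatial-decomposition parallelism described above is automatic when the same script is run under MPI.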
Abstract not provided.
Physics of Plasmas
Abstract not provided.
Abstract not provided.
We report reflectivity, design and laser damage comparisons of our AR coatings for use at 1054 nm and/or 527 nm, and at angles of incidence between 0 and 45 degrees.
Abstract not provided.
Axial Ge/Si heterostructure nanowires (NWs) allow energy band-edge engineering along the axis of the NW, which is the charge transport direction, and the realization of asymmetric devices for novel device architectures. This work reports on two significant advances in the area of heterostructure NWs and tunnel FETs: (i) the realization of 100% compositionally modulated Si/Ge axial heterostructure NWs with lengths suitable for device fabrication and (ii) the design and implementation of Schottky barrier tunnel FETs on these NWs for high on-currents and suppressed ambipolar behavior. Initial prototype devices with a 10 nm PECVD SiN_x gate dielectric resulted in a very high current drive in excess of 100 µA/µm (I/πD) and 10^5 I_on/I_off ratios. Prior work on the synthesis of Ge/Si axial NW heterostructures through the VLS mechanism has resulted in axial Si/Si_(1-x)Ge_x NW heterostructures with x_max ~ 0.3; more recently, 100% composition modulation was achieved with a solid growth catalyst. In this latter case, the thickness of the heterostructure cannot exceed a few atomic layers due to the slow axial growth rate and concurrent radial deposition on the NW sidewalls, leading to a mixture of axial and radial deposition, which imposes a major challenge for fabricating useful devices from these NWs in the near future. Here, we report the VLS growth of 100% doping- and composition-modulated axial Ge/Si heterostructure NWs with lengths appropriate for device fabrication by devising a growth procedure that eliminates Au diffusion on the NW sidewalls and minimizes random kinking in the heterostructure NWs, as deduced from detailed microscopy analysis. Fig. 1a shows a cross-sectional SEM image of epitaxial Ge/Si axial NW heterostructures grown on a Ge(111) surface. The interface abruptness in these Ge/Si heterostructure NWs is of the order of the NW diameter.
Some of these NWs develop a crystallographic kink that is ~20° off the <111> axis at about 300 nm away from the Ge/Si interface. This provides a natural marker for placing the gate contact electrodes and gate metal at the appropriate location for the desired high on-current and reduced ambipolarity, as shown in Fig. 2. The 1D heterostructures allow band-edge engineering in the transport direction, not easily accessible in planar devices, providing an additional degree of freedom for designing tunnel FETs (TFETs). For instance, a Ge tunnel source can be used for efficient electron/hole tunneling and a Si drain can be used for reduced back-tunneling and ambipolar behavior. Interface abruptness, on the other hand (particularly for doping), imposes challenges in these and other structures for realizing high-performance TFETs in p-i-n junctions. Since metal-semiconductor contacts provide a sharp interface with band-edge control, we use properly designed Schottky contacts (aided by 3D Silvaco simulations) as the tunnel barriers at both the source and drain and utilize the asymmetry in the Ge/Si channel bandgap to reduce the ambipolar transport behavior generally observed in TFETs. Fig. 3 shows the room-temperature transfer curves of a Ge/Si heterostructure TFET (H-TFET) for different V_DS values, showing a maximum on-current of ~7 µA, an ~170 mV/decade inverse subthreshold slope, and 5 orders of magnitude I_on/I_off ratios for all V_DS biases considered here. This high on-current value is ~1750× higher than that obtained with Si p-i-n+ NW TFETs and ~35× higher than that obtained with CNT TFETs. The I_on/I_off ratio and inverse subthreshold slope compare favorably to those of Si TFETs (~10^3 I_on/I_off and ~800 mV/decade) but lag behind those of CNT TFETs due to poor PECVD nitride gate oxide quality (ε_r ~ 3-4).
The asymmetry in the Schottky barrier heights used here eliminates the stringent requirements of abrupt doped interfaces used in p-i-n based TFETs, which is hard to achieve both in thin-film and in NW growth. These initial promising results are expected to be further improved by using a high-k gate dielectric.
Numerical simulations indicate that significant fusion yields (>100 kJ) may be obtained by pulsed-power-driven implosions of cylindrical metal liners onto magnetized and preheated deuterium-tritium fuel. The primary physics risk to this approach is the Magneto-Rayleigh-Taylor (MRT) instability, which operates during both the acceleration and deceleration phases of the liner implosion. We have designed and performed experiments to study the MRT during the acceleration phase, where the light fluid is purely magnetic. Results from our first series of experiments and plans for future experiments will be presented. According to simulations, an initial axial magnetic field of 10 T is compressed to >100 MG within the liner during the implosion. The magnetic pressure becomes comparable to the plasma pressure during deceleration, which could significantly affect the growth of the MRT instability at the fuel/liner interface. The MRT instability is also important in some astronomical objects such as the Crab Nebula (NGC 1952). In particular, the morphological structure of the observed filaments may be determined by the ratio of the magnetic to material pressure and the alignment of the magnetic field with the direction of acceleration [Hester, ApJ 456, 225 (1996)]. Potential experiments to study this MRT behavior using the Z facility will be presented.
3-D cubic unit cell arrays containing split ring resonators (SRRs) were fabricated and characterized. The unit cells are ~3 orders of magnitude smaller than microwave SRR-based metamaterials and exhibit both electrically and magnetically excited resonances for normally incident TEM waves, in addition to showing improved isotropic response.
Abstract not provided.
An AlN MEMS resonator technology has been developed, enabling massively parallel filter arrays on a single chip. Low-loss filter banks covering the 10 MHz-10 GHz frequency range have been demonstrated, as has monolithic integration with inductors and CMOS circuitry. The high level of integration enables miniature multi-band, spectrally aware, and cognitive radios.
Physics of Plasmas
Abstract not provided.
We describe a time-domain spectroscopy system in the thermal infrared used for complete transmission and reflection characterization of metamaterials in amplitude and phase. The system uses a triple-output near-infrared ultrafast fiber laser, phase-locked difference frequency generation and phase-matched electro-optic sampling. We will present measurements of several metamaterials designs.
Abstract not provided.
Abstract not provided.
Four approaches to modeling multi-junction concentrating photovoltaic system performance are assessed by comparing modeled performance to measured performance. Measured weather, irradiance, and system performance data were collected on two systems over a one month period. Residual analysis is used to assess the models and to identify opportunities for model improvement. Large photovoltaic systems are typically developed as projects which supply electricity to a utility and are owned by independent power producers. Obtaining financing at favorable rates and attracting investors requires confidence in the projected energy yield from the plant. In this paper, various performance models for projecting annual energy yield from Concentrating Photovoltaic (CPV) systems are assessed by comparing measured system output to model predictions based on measured weather and irradiance data. The results are statistically analyzed to identify systematic error sources.
Problem Statement: (1) Uncertainties in PV system performance and reliability impact business decisions - project cost and financing estimates, pricing service contracts and guarantees, and developing deployment and O&M strategies; (2) understanding and reducing these uncertainties will help make the PV industry more competitive; (3) performance has typically been estimated without much attention to reliability of components; and (4) tools are needed to assess all inputs to the value proposition (e.g., LCOE, cash flow, reputation, etc.). Goals and objectives are: (1) develop a stochastic simulation model (in GoldSim) that can represent PV system performance as a function of system design, weather, reliability, and O&M policies; (2) evaluate performance for an example system to quantify sources of uncertainty and identify dominant parameters via a sensitivity study; and (3) example system - 1 inverter, 225 kW DC array at latitude tilt (90 strings of 12 modules, 1080 modules total), weather from Tucumcari, NM (TMY2 with annual uncertainty).
Abstract not provided.
The Fracture-Matrix Transport (FMT) code developed at Sandia National Laboratories solves chemical equilibrium problems using the Pitzer activity coefficient model with a database containing actinide species. The code is capable of predicting actinide solubilities at 25 C in various ionic-strength solutions, from dilute groundwaters to high-ionic-strength brines. The code uses oxidation-state analogies: Am(III) is used to predict solubilities of actinides in the +III oxidation state, Th(IV) for actinides in the +IV state, and Np(V) for actinides in the +V state. This code was qualified for predicting actinide solubilities for the Waste Isolation Pilot Plant (WIPP) Compliance Certification Application in 1996 and the Compliance Re-Certification Applications in 2004 and 2009. We have established revised actinide-solubility uncertainty ranges and probability distributions for Performance Assessment (PA) by comparing actinide solubilities predicted by the FMT code with solubility data in various solutions from the open literature. The literature data used in this study include solubilities in simple solutions (NaCl, NaHCO3, Na2CO3, NaClO4, KCl, K2CO3, etc.), binary mixing solutions (NaCl+NaHCO3, NaCl+Na2CO3, KCl+K2CO3, etc.), ternary mixing solutions (NaCl+Na2CO3+KCl, NaHCO3+Na2CO3+NaClO4, etc.), and multi-component synthetic brines relevant to the WIPP.
Abstract not provided.
Abstract not provided.
Journal of Alloys and Compounds
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Engineering analysis of systems in which thermal energy is transported primarily by conduction is a common need. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, forcing the analyst to rely on some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from large, general-purpose commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS, to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the simpler individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that is easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program in detail and is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported in the user's manual before using the program.
COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in COYOTE are also outlined. Instructions for use of the code are documented in SAND2010-0714.
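The class of boundary value problem COYOTE addresses can be illustrated with a deliberately simple sketch: steady-state conduction in a 1-D rod discretized with linear two-node finite elements. This is a generic textbook formulation, not COYOTE's input format or internal algorithms:

```python
import numpy as np

# 1-D steady-state heat conduction, -d/dx(k dT/dx) = 0, with fixed
# end temperatures, assembled from linear two-node finite elements.
# Geometry, conductivity, and boundary values are illustrative.
def solve_rod(n_el=10, length=1.0, k=1.0, T_left=100.0, T_right=0.0):
    n = n_el + 1
    h = length / n_el
    K = np.zeros((n, n))
    ke = (k / h) * np.array([[1.0, -1.0],
                             [-1.0, 1.0]])   # element conductance matrix
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += ke            # assemble global matrix
    f = np.zeros(n)
    # Impose Dirichlet boundary temperatures by row replacement.
    for node, T in ((0, T_left), (n - 1, T_right)):
        K[node, :] = 0.0
        K[node, node] = 1.0
        f[node] = T
    return np.linalg.solve(K, f)             # nodal temperatures

T = solve_rod()
# With constant conductivity and no source term, the exact profile is
# linear, and linear elements reproduce it exactly at the nodes.
```

Nonlinear conduction of the kind COYOTE handles (temperature-dependent k, radiation boundary conditions) would wrap an iteration loop around this same assemble-and-solve structure.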
The viscosity of molten salts comprising ternary and quaternary mixtures of the nitrates of sodium, potassium, lithium and calcium was determined experimentally. Viscosity was measured over the temperature range from near the relatively low liquidus temperatures of the individual mixtures to 200 C. Molten salt mixtures that do not contain calcium nitrate exhibited relatively low viscosity and an Arrhenius temperature dependence. Molten salt mixtures that contained calcium nitrate were relatively more viscous, and viscosity increased as the proportion of calcium nitrate increased. The temperature dependence of viscosity of molten salts containing calcium nitrate displayed curvature, rather than linearity, when plotted in Arrhenius format. Viscosity data for these mixtures were correlated by the Vogel-Fulcher-Tammann-Hesse equation.
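The Vogel-Fulcher-Tammann-Hesse correlation mentioned above has the form eta = A exp(B / (T - T0)); a sketch follows, where the parameter values are illustrative placeholders, not the fitted constants from this study:

```python
import math

# Vogel-Fulcher-Tammann-Hesse (VFT) viscosity correlation. A, B, and T0
# below are hypothetical placeholders, not fitted values from the study.
def vft_viscosity(T, A=0.05, B=600.0, T0=300.0):
    """Viscosity (arbitrary units) at absolute temperature T (K)."""
    return A * math.exp(B / (T - T0))

# With T0 > 0 the Arrhenius plot (ln eta vs. 1/T) is curved: the local
# slope steepens on cooling, the behavior reported for calcium-nitrate-
# bearing melts. T0 = 0 recovers a straight Arrhenius line.
slope_cold = (math.log(vft_viscosity(400.0)) - math.log(vft_viscosity(500.0))) \
             / (1 / 400.0 - 1 / 500.0)
slope_hot = (math.log(vft_viscosity(500.0)) - math.log(vft_viscosity(600.0))) \
            / (1 / 500.0 - 1 / 600.0)
```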
Abstract not provided.
In a multiyear research agreement with Tenix Investments Pty. Ltd., Sandia has been developing field-deployable technologies for detection of biotoxins in water supply systems. The unattended water sensor, or UWS, employs microfluidic chip-based gel electrophoresis for monitoring biological analytes in a small integrated sensor platform. This instrument collects, prepares, and analyzes water samples in an automated manner. Sample analysis is done using the µChemLab™ analysis module. This report uses analysis results of two datasets collected using the UWS to estimate performance of the device. The first dataset is made up of samples containing ricin at varying concentrations and is used for assessing instrument response and detection probability. The second dataset is comprised of analyses of water samples collected at a water utility, which are used to assess the false positive probability. The analyses of the two sets are used to estimate the Receiver Operating Characteristic (ROC) curves for the device at one set of operational and detection-algorithm parameters. For these parameters, and based on a statistical estimate, the ricin probability of detection is about 0.9 at a concentration of 5 nM for a false positive probability of 1 × 10^-6.
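The ROC construction described above can be sketched with synthetic detector scores; the Gaussian score distributions and the threshold are stand-ins, not UWS data:

```python
import numpy as np

# Illustrative ROC estimate: detector scores for blank samples (no
# analyte, standing in for utility water) vs. signal-present samples
# (standing in for ricin at one fixed concentration).
rng = np.random.default_rng(0)
blank = rng.normal(0.0, 1.0, 100_000)    # scores with no analyte present
signal = rng.normal(4.0, 1.0, 100_000)   # scores with analyte present

def roc_point(threshold):
    """(false positive probability, detection probability) at one threshold."""
    pfa = float(np.mean(blank >= threshold))
    pd = float(np.mean(signal >= threshold))
    return pfa, pd

pfa, pd = roc_point(2.0)   # sweeping the threshold traces out the ROC curve
```

Reaching a false positive probability as low as the 1 × 10^-6 quoted above cannot be verified from a modest empirical sample; it requires a parametric fit to the blank-score distribution, which is the sense in which the report's figure is a statistical estimate.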
Abstract not provided.
Abstract not provided.
Abstract not provided.
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images has a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true for high-speed applications, where actual test images are often less than ideal. Work on the mathematical underpinnings of DIC uncertainty quantification has previously been completed and published; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.
Abstract not provided.
A mesoscale dimensional artifact based on silicon bulk micromachining fabrication has been developed and manufactured with the intention of evaluating the artifact both on a high precision coordinate measuring machine (CMM) and video-probe based measuring systems. This hybrid artifact has features that can be located by both a touch probe and a video probe system with a k=2 uncertainty of 0.4 µm, more than twice as good as a glass reference artifact. We also present evidence that this uncertainty could be lowered to as little as 50 nm (k=2). While video-probe based systems are commonly used to inspect mesoscale mechanical components, a video-probe system's certified accuracy is generally much worse than its repeatability. To solve this problem, an artifact has been developed which can be calibrated using a commercially available high-accuracy tactile system and then be used to calibrate typical production vision-based measurement systems. This allows for error mapping to a higher degree of accuracy than is possible with a glass reference artifact. Details of the designed features and manufacturing process of the hybrid dimensional artifact are given and a comparison of the designed features to the measured features of the manufactured artifact is presented and discussed. Measurement results from vision and touch probe systems are compared and evaluated to determine the capability of the manufactured artifact to serve as a calibration tool for video-probe systems. An uncertainty analysis for calibration of the artifact using a CMM is presented.
Forward radiation transport is the problem of calculating the radiation field given a description of the radiation source and transport medium. In contrast, inverse transport is the problem of inferring the configuration of the radiation source and transport medium from measurements of the radiation field. As such, the identification and characterization of special nuclear materials (SNM) is a problem of inverse radiation transport, and numerous techniques to solve this problem have been previously developed. The authors have developed a solver based on nonlinear regression applied to deterministic coupled neutron-photon transport calculations. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5-kg sphere of alpha-phase, weapons-grade plutonium. The source was measured in six different configurations: bare, and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses of 1.27, 2.54, 3.81, 7.62, and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to characterize the solver's ability to correctly infer the configuration of the source from its measured signatures.
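The nonlinear-regression view of inverse transport can be illustrated schematically: infer an unknown source/medium parameter by minimizing the misfit between a forward model and measured signatures. The one-parameter exponential "forward model" below is a toy stand-in for the deterministic coupled neutron-photon transport calculation, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy inverse transport: infer a reflector thickness t (cm) by fitting a
# hypothetical forward model to "measured" signatures. The exponential
# shape is a stand-in, not a transport solver.
def forward(t):
    # predicted [neutron count rate, photon count rate] for thickness t
    return np.array([100.0 * np.exp(-0.2 * t),
                     50.0 * np.exp(-0.05 * t)])

t_true = 3.81                        # one of the HDPE shell thicknesses
measured = forward(t_true)           # synthetic "measurement", no noise

def residuals(params):
    # misfit between model prediction and measurement
    return forward(params[0]) - measured

fit = least_squares(residuals, x0=[1.0])   # recovers t near 3.81
```

In the actual solver each residual evaluation requires a full deterministic transport calculation, and several source and reflector parameters are inferred simultaneously, but the regression structure is the same.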
Abstract not provided.
We present Poblano v1.0, a Matlab toolbox for solving gradient-based unconstrained optimization problems. Poblano implements three optimization methods (nonlinear conjugate gradients, limited-memory BFGS, and truncated Newton) that require only first order derivative information. In this paper, we describe the Poblano methods, provide numerous examples on how to use Poblano, and present results of Poblano used in solving problems from a standard test collection of unconstrained optimization problems.
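Poblano's interface is Matlab-specific, but the character of the methods it implements can be sketched in Python: the first-order methods below (nonlinear conjugate gradients and limited-memory BFGS, two of Poblano's three methods) need only the objective value and gradient, here supplied for the standard Rosenbrock test function via SciPy. This is a rough analogue for orientation, not Poblano's actual API:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function, a standard unconstrained optimization test problem
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Analytic gradient: first-order methods require nothing more than this
def grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

x0 = np.array([-1.2, 1.0])
res_cg = minimize(f, x0, jac=grad, method='CG')        # nonlinear conjugate gradients
res_lbfgs = minimize(f, x0, jac=grad, method='L-BFGS-B')  # limited-memory BFGS
```

Both runs converge to the minimizer (1, 1) using only first-derivative information, which is the defining restriction of the Poblano methods.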
Abstract not provided.
The difficulty of calculating the ambient properties of molecular crystals, such as the explosive PETN, has long hampered much needed computational investigations of these materials. One reason for the shortcomings is that the exchange-correlation functionals available for Density Functional Theory (DFT) based calculations do not correctly describe the weak intermolecular van der Waals forces present in molecular crystals. However, this weak interaction also poses other challenges for the computational schemes used. We will discuss these issues in the context of calculations of the lattice constants and structure of PETN with a number of different functionals, and also discuss whether these limitations can be circumvented for studies at non-ambient conditions.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Composite materials are now extensively utilized in many military and industrial applications. For example, the newest Boeing 787 uses 50% composite (mostly carbon fiber reinforced plastic) in its construction. However, the weak delamination strength of fiber reinforced composites subjected to external impact, such as ballistic impact, has always posed a potentially serious threat to passenger safety. Dynamic fracture toughness is a critical indicator of delamination performance in such impact events. Quasi-static experimental techniques for measuring fracture toughness are well developed; for example, the end notched flexure (ENF) technique, illustrated in Fig. 1, has become a standard method for determining the mode-II fracture toughness of composites under quasi-static loading conditions. Dynamic fracture characterization of composites, however, remains challenging, which has led to conflicting and confusing conclusions regarding strain rate effects on the fracture toughness of composites.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Sound and Vibration
Abstract not provided.
Abstract not provided.
Nuclear News
Abstract not provided.
Liquid foams are viscoelastic liquids, exhibiting a fast relaxation attributed to local bubble motions and a slow response due to structural evolution of the intrinsically unstable system. In this work, these processes are examined in unique organic foams that differ from the typically investigated aqueous systems in two major ways: the organic foams (1) possess a much higher continuous phase viscosity and (2) exhibit a coarsening response that involves coalescence of cells. The transient and dynamic relaxation responses of the organic foams are evaluated and discussed in relation to the response of aqueous foams. The change in the foam response with increasing gas fraction, from that of a Newtonian liquid to one that is strongly viscoelastic, is also presented. In addition, the temporal dependencies of the linear viscoelastic response are assessed in the context of the foam structural evolution. These foams and characterization techniques provide a basis for testing stabilization mechanisms in epoxy-based foams for encapsulation applications.
Abstract not provided.
Alkali nitrate eutectic mixtures are finding application as industrial heat transfer fluids in concentrated solar power generation systems. An important property for such applications is the melting point, or phase coexistence temperature. We have computed melting points for lithium, sodium and potassium nitrate from molecular dynamics simulations using a recently developed method, which uses thermodynamic integration to compute the free energy difference between the solid and liquid phases. The computed melting point for NaNO3 was within 15 K of its experimental value, while for LiNO3 and KNO3, the computed melting points were within 100 K of the experimental values [4]. We are currently extending the approach to calculate melting temperatures for binary mixtures of lithium and sodium nitrate.
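The cited method involves full molecular dynamics runs, but the quadrature step at its core is simple. As a toy sketch of thermodynamic integration (with a made-up smooth curve standing in for the ensemble averages an MD code would supply at each coupling value):

```python
import numpy as np

# Thermodynamic integration: the free energy difference along a
# coupling parameter lambda in [0, 1] is
#     dF = integral_0^1 <dU/dlambda>_lambda dlambda
# In the real calculation each <dU/dlambda> value is an ensemble
# average from an MD simulation equilibrated at that lambda; here a
# hypothetical quadratic stands in for those averages (kJ/mol).
lam = np.linspace(0.0, 1.0, 11)
dU_dlam = 10.0 - 4.0 * lam + 2.0 * lam**2

# Trapezoidal quadrature over the lambda grid
delta_F = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam))
# Analytic value of this integral is 10 - 2 + 2/3 = 8.667 kJ/mol
```

Repeating such an integration at a series of temperatures and locating where the solid-liquid free energy difference changes sign gives the melting point.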
The annual program report provides detailed information about all aspects of the SNL/CA Pollution Prevention Program for a given calendar year. It functions as supporting documentation to the SNL/CA Environmental Management System Program Manual. The program report describes the activities undertaken during the past year, and activities planned in future years to implement the Pollution Prevention Program, one of six programs that support environmental management at SNL/CA.
Abstract not provided.
Development of silicon enhancement-mode nanostructures for solid-state quantum computing will be described. A primary motivation of this research is the recent unprecedented manipulation of single electron spins in GaAs quantum dots, which has been used to demonstrate a quantum bit. Long spin decoherence times are predicted to be possible in silicon qubits. This talk will focus on silicon enhancement-mode quantum dot structures that emulate the GaAs lateral quantum dot qubit but use an enhancement-mode field effect transistor (FET) structure. One critical concern for silicon quantum dots that use oxides as insulators in the FET structure is that defects in the metal-oxide-semiconductor (MOS) stack can produce both detrimental electrostatic and paramagnetic effects on the qubit. Understanding the implications of defects in the Si MOS system is also relevant for other qubit architectures that have nearby dielectric-passivated surfaces. Stable, lithographically defined, single-period Coulomb blockade and single-electron charge sensing in a quantum dot nanostructure using a MOS stack will be presented. A combination of defect characterization, modeling, and consideration of modified approaches that incorporate SiGe or donors provides guidance about the enhancement-mode MOS approach for future qubits and quantum circuit micro-architecture.
The microscopic Polymer Reference Interaction Site Model theory has been applied to spherical and rodlike fillers dissolved in three types of chemically heterogeneous polymer melts: an alternating AB copolymer, random AB copolymers, and an equimolar blend of two homopolymers. In each case, one monomer species adsorbs more strongly on the filler, mimicking a specific attraction, while all inter-monomer potentials are hard core, which precludes macrophase or microphase separation. Qualitative differences in the filler potential of mean force are predicted relative to the homopolymer case. The adsorbed bound layer for alternating copolymers exhibits a spatial modulation or layering effect but is otherwise similar to that of the homopolymer system. Random copolymers and the polymer blend mediate a novel strong, long-range bridging interaction between fillers at moderate to high adsorption strengths. The bridging strength is a non-monotonic function of random copolymer composition, reflecting subtle competing enthalpic and entropic considerations.
Abstract not provided.
Chemistry of Materials
Abstract not provided.
The low-energy properties of the Anderson model for a single impurity coupled to two leads are studied using the GW approximation. We find that quantities such as the spectral function at zero temperature, the linear-response conductance as a function of temperature, and the differential conductance as a function of bias voltage exhibit universal scaling behavior in the Kondo regime. We show how the form of the GW scaling functions relates to the form of the scaling functions obtained from the exact solution at equilibrium. We also compare the energy scale that enters the GW scaling functions with the exact Kondo temperature, for a broad range of Coulomb interaction strengths in the asymptotic regime. This analysis helps clarify an open question in the literature, namely whether or not the GW solution captures the Kondo resonance.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Applied Physics Letters
Abstract not provided.
American Ceramics Society Transactions
Abstract not provided.
Abstract not provided.
FRMAC was born out of circumstances 25 years ago, when 17 federal agencies descended on the states with good intentions during the Three Mile Island nuclear power plant incident. It quickly became evident that a better way was needed to support state and local governments in their time of emergency and through the recovery process. FRMAC's single voice of Federal support coordinates the multiple agencies that respond to a radiological event. Over the years, FRMAC has exercised, evaluated, and honed its ability to quickly respond to the needs of our communities. As the times have changed, FRMAC has expanded its focus from nuclear power plant incidents, to threats of a terrorist radiological dispersal device (RDD), to the unthinkable - an improvised nuclear device (IND). And just as having the right tools is part of any trade, FRMAC's tool set has evolved and continues to evolve to meet contemporary challenges - not just to improve the time it takes to collect data and assess the situation, but to provide a quality, comprehensive product that supports a stressed decision maker responsible for the protection of the public. Innovations in the movement of data and information have changed our everyday lives; so too, FRMAC is capitalizing on industry innovations to improve the flow of information: from the early predictive models, to streamlining the process of getting data out of the field, to improving the time it takes to get assessed products into the hands of decision makers. FRMAC is focusing on the future through the digital age of electronic data processing. Public protective action and dose avoidance is the challenge.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
In many industrial processes, gaseous moisture is undesirable, as it can lead to metal corrosion, polymer degradation, and other materials aging processes. However, generating and measuring precise moisture concentrations is challenging due to the need to cover a broad concentration range (parts-per-billion to percent) and the affinity of moisture for a wide range of surfaces and materials. This document discusses the techniques employed by the Mass Spectrometry Laboratory of the Materials Reliability Department at Sandia National Laboratories to generate and measure known gaseous moisture concentrations. It highlights the use of a chilled mirror and a primary standard humidity generator for the characterization of aluminum oxide moisture sensors. The data presented show an excellent correlation in frost point measured between the two instruments, providing an accurate and reliable platform for characterizing moisture sensors and performing other moisture-related experiments.
The goal of z-pinch inertial fusion energy (IFE) is to extend the single-shot z-pinch inertial confinement fusion (ICF) results on Z to a repetitive-shot z-pinch power plant concept for the economical production of electricity. Z produces up to 1.8 MJ of x-rays at powers as high as 230 TW. Recent target experiments on Z have demonstrated capsule implosion convergence ratios of 14-21 with a double-pinch driven target, and DD neutron yields up to 8x10^10 with a dynamic hohlraum target. For z-pinch IFE, a power plant concept is discussed that uses high-yield IFE targets (3 GJ) with a low rep-rate per chamber (0.1 Hz). The concept includes a repetitive driver at 0.1 Hz, a Recyclable Transmission Line (RTL) to connect the driver to the target, high-yield targets, and a thick-liquid-wall chamber. Recent funding of $4M for FY04 from a U.S. Congressional initiative is supporting research on RTLs, repetitive pulsed power drivers, shock mitigation, planned full-RTL-cycle experiments, high-yield IFE targets, and z-pinch power plant technologies. Recent results of research in all of these areas are discussed, and a Road Map for Z-Pinch IFE is presented.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This paper discusses implications and appropriate treatment of systematic uncertainty in experiments and modeling. Systematic uncertainty exists when experimental conditions, measurement bias errors, and/or bias contributed by post-processing the data are constant over the set of experiments, but the particular values of the conditions and/or biases are unknown to within some specified uncertainty. Systematic uncertainties in experiments do not automatically show up in the output data, unlike random uncertainty, which is revealed when multiple experiments are performed. Therefore, the output data must be properly 'conditioned' to reflect important sources of systematic uncertainty in the experiments. In industrial-scale experiments the systematic uncertainty in experimental conditions (especially boundary conditions) is often large enough that the inference error on how the experimental system maps inputs to outputs is quite substantial. Any such inference error, and the uncertainty thereof, also has implications in model validation and calibration/conditioning; ignoring systematic uncertainty in experiments can lead to 'Type X' error in these procedures. Apart from any considerations of modeling and simulation, reporting of uncertainty associated with experimental results should include the effects of any significant systematic uncertainties in the experiments. This paper describes and illustrates the treatment of multivariate systematic uncertainties of interval and/or probabilistic natures, and combined cases. The paper also outlines a practical and versatile 'real-space' framework and methodology within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in model validation, calibration/conditioning, hierarchical modeling, and extrapolative prediction.
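The distinction drawn above can be made concrete with a minimal toy model (invented for illustration, not the paper's framework): random uncertainty reveals itself in the scatter of repeated measurements, while an interval-valued systematic bias never does, and must instead be propagated by an explicit outer sweep over its interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy measurement: true response 2*x, plus a fixed-but-unknown
# systematic bias b (known only to lie in [-0.5, 0.5]) and zero-mean
# random noise that is resampled on every repeat experiment.
def measure(x, bias):
    return 2.0 * x + bias + rng.normal(0.0, 0.1, size=x.size)

x = np.linspace(0.0, 1.0, 20)

# Random uncertainty shows up directly in the scatter of repeats...
spread = np.std([measure(x, bias=0.3).mean() for _ in range(200)])

# ...but the systematic bias does not; it must be propagated by
# sweeping the interval it is known to lie in (an epistemic outer loop).
offsets = [measure(x, b).mean() - 2.0 * x.mean()
           for b in np.linspace(-0.5, 0.5, 11)]
interval = (min(offsets), max(offsets))
```

The resulting `interval` spans roughly the full width of the bias interval, even though `spread`, the visible repeat-to-repeat scatter, is an order of magnitude smaller.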
Abstract not provided.
Abstract not provided.
The goals of this project are to understand the fundamental principles that govern the formation and function of novel nanoscale and nanocomposite materials. Specific scientific issues being addressed include: design and synthesis of complex molecular precursors with controlled architectures, controlled synthesis of nanoclusters and nanoparticles, development of robust two or three-dimensionally ordered nanocomposite materials with integrated functionalities that can respond to internal or external stimuli through specific molecular interactions or phase transitions, fundamental understanding of molecular self-assembly mechanisms on multiple length scales, and fundamental understanding of transport, electronic, optical, magnetic, catalytic and photocatalytic properties derived from the nanoscale phenomena and unique surface and interfacial chemistry for DOE's energy mission.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Reliability Engineering and System Safety
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
With the Lemnos framework, interoperability of control security equipment becomes straightforward. Today, to obtain interoperability between proprietary security appliance units, one or both vendors must write cumbersome 'translation code,' and if one party changes something, the translation code breaks. The Lemnos project is developing and testing a framework that uses widely available open-source security functions and protocols, such as IPsec to form a secure communications channel and Syslog to exchange security log messages. Using this model, security appliances from two or more vendors can clearly and securely exchange information, helping to better protect the total system. The framework also simplifies regulatory compliance in a complicated security environment. An electric utility struggling to implement the NERC CIP standards and other regulations need no longer weigh the misery of multiple management interfaces against committing to a ubiquitous single-vendor solution: when vendors build their security appliances to interoperate using the Lemnos framework, it becomes practical to match best-of-breed offerings from an assortment of vendors to specific control system needs.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The internal structure of stars depends on the radiative opacity of the stellar matter. However, opacity models have never been experimentally tested at the conditions that exist inside stars. Experiments at the Sandia Z facility are underway to measure the x-ray transmission of iron, an important stellar constituent, at temperature and density high enough to evaluate the physical underpinnings of stellar opacity models. Initial experiments provided information on the charge state distribution and the energy level structure for the iron ions that exist at the solar radiation/convection boundary. Data analysis and new experiments at higher densities and temperatures will be described.
International Journal of Multiscale Computational Engineering
Abstract not provided.
Abstract not provided.
Abstract not provided.
Changing paradigms from paper laboratory notebooks to electronic ones creates challenges. Meeting regulatory requirements in an R&D environment demands thorough documentation, and creating complete experimental records is easier with electronic laboratory notebooks. Supporting investigations by re-creating experimental conditions is also greatly facilitated by an ELN.
Pandemic influenza has become a serious global health concern; in response, governments around the world have allocated increasing funds to containment of public health threats from this disease. Pandemic influenza is also recognized to have serious economic implications: illness and absenteeism reduce worker productivity and economic output, and the associated mortality robs nations of their most valuable asset - human resources. This paper reports two studies that investigate both the short- and long-term economic implications of a pandemic influenza outbreak; the resulting policy implications are also discussed. Policy makers can use the growing number of economic impact estimates to decide how much to spend to combat pandemic influenza outbreaks. The research uses the Regional Economic Modeling, Inc. (REMI) Policy Insight + Model, which provides a dynamic, regional, North American Industry Classification System (NAICS) industry-structured framework for forecasting. It is supported by a population dynamics model that is well adapted to investigating the macroeconomic implications of pandemic influenza, including possible demand-side effects. The studies reported in this paper exercise all of these capabilities.
Abstract not provided.
Abstract not provided.
This document provides common best practices for the efficient utilization of parallel file systems for analysts and application developers. A multi-program, parallel supercomputer provides effective compute power by aggregating a host of lower-power processors over a network. In general, one either constructs the application to distribute parts of the work to the different nodes and processors and then collects the result (a parallel application), or one launches a large number of small jobs, each doing similar work on different subsets of the data (a campaign). The I/O system on these machines is usually implemented as a tightly coupled parallel application itself, providing the host applications with the concept of a 'file': an addressable store of bytes whose address space is global in nature. Beyond the simple reality that the I/O system is normally composed of a small, less capable collection of hardware, that global address space will cause problems if not carefully utilized. How much of a problem, and the ways in which those problems manifest, will vary, but that it is problem prone has been well established. Worse, the file system is a shared resource on the machine - a system service. What an application does when it uses the file system impacts all users. No portion of the available resource is reserved for a particular application; instead, the I/O system responds to requests by scheduling and queuing based on instantaneous demand. Using the system well contributes to the overall throughput of the machine, and from a solely self-centered perspective, it reduces the time that the application or campaign is subject to impact by others.
The developer's goal should be to accomplish I/O in a way that minimizes interaction with the I/O system, maximizes the amount of data moved per call, and provides the I/O system the most information about the I/O transfer per request.
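That guidance can be made concrete with a small sketch (a generic illustration, not tied to any particular parallel file system): writing the same data record-by-record issues one request per record, while aggregating in memory first reduces the work to a single large request:

```python
import os
import tempfile

class CountingFile:
    """Wraps a file object and counts how many write() calls it receives."""
    def __init__(self, f):
        self.f = f
        self.calls = 0
    def write(self, data):
        self.calls += 1
        return self.f.write(data)

records = [b"%08d\n" % i for i in range(100_000)]  # 9 bytes each
tmpdir = tempfile.mkdtemp()

# Naive: one write request per record, issued by the application.
with open(os.path.join(tmpdir, "naive.dat"), "wb") as raw:
    cf = CountingFile(raw)
    for r in records:
        cf.write(r)
naive_calls = cf.calls

# Aggregated: build the buffer in memory, then one large request.
with open(os.path.join(tmpdir, "agg.dat"), "wb") as raw:
    cf = CountingFile(raw)
    cf.write(b"".join(records))
agg_calls = cf.calls

same_size = (os.path.getsize(os.path.join(tmpdir, "naive.dat"))
             == os.path.getsize(os.path.join(tmpdir, "agg.dat")))
```

The same principle carries over to collective I/O libraries such as MPI-IO or HDF5, where aggregation happens across processes rather than within one.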
Abstract not provided.
Abstract not provided.
Experimental Mechanics
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.