Publications

International perspectives on mitigating laboratory biorisks

Salazar, Carlos S.; Pinard, William J.

The International Perspectives on Mitigating Laboratory Biorisks workshop, held at the Renaissance Polat Istanbul Hotel in Istanbul, Republic of Turkey, from October 25 to 27, 2010, sought to promote discussion between experts and stakeholders from around the world on issues related to the management of biological risk in laboratories. The event was organized by Sandia National Laboratories' International Biological Threat Reduction program, on behalf of the US Department of State's Biosecurity Engagement Program and the US Department of Defense's Cooperative Biological Engagement Program. The workshop came about as a response to US Under Secretary of State Ellen O. Tauscher's statements in Geneva on December 9, 2009, during the Annual Meeting of the States Parties to the Biological Weapons Convention (BWC). Pursuant to those remarks, the workshop was intended to provide a forum for interested countries to share information on biorisk management training, standards, and needs. Over the course of the meeting's three days, participants discussed diverse topics such as the role of risk assessment in laboratory biorisk management, strategies for mitigating risk, measurement of performance and upkeep, international standards, training and building workforce competence, and the important role of government and regulation. The meeting concluded with affirmations of the utility of international cooperation in this sphere and recognition of positive prospects for the future. The workshop was organized as a series of short presentations by international experts in the field of biorisk management, followed by breakout sessions in which participants were divided into four groups and urged to discuss a particular topic with the aid of a facilitator and a set of guiding questions. 
Rapporteurs were present during the plenary session as well as the breakout sessions and were tasked in particular with taking notes during discussions and reporting back to the assembled participants a brief summary of the points discussed. The presentations and breakout sessions were divided into five topic areas: 'Challenges in Biorisk Management,' 'Risk Assessment and Mitigation Measures,' 'Biorisk Management System Performance,' 'Training,' and 'National Oversight and Regulations.' The topics and questions were chosen by the organizers through consultation with US Government sponsors. The Chatham House Rule on non-attribution was in effect during question-and-answer periods and breakout session discussions.

The nature of airbursts and their contribution to the impact threat

Boslough, Mark B.

Ongoing simulations of low-altitude airbursts from hypervelocity asteroid impacts have led to a re-evaluation of the impact hazard that accounts for the enhanced damage potential relative to the standard point-source approximations. Computational models demonstrate that the altitude of maximum energy deposition is not a good estimate of the equivalent height of a point explosion, because the center of mass of an exploding projectile maintains a significant fraction of its initial momentum and is transported downward in the form of a high-temperature jet of expanding gas. This 'fireball' descends to a depth well beneath the burst altitude before its velocity becomes subsonic. The time scale of this descent is similar to the time scale of the explosion itself, so the jet simultaneously couples both its translational and its radial kinetic energy to the atmosphere. Because of this downward flow, larger blast waves and stronger thermal radiation pulses are experienced at the surface than would be predicted for a nuclear explosion of the same yield at the same burst height. For impacts with a kinetic energy below some threshold value, the hot jet of vaporized projectile loses its momentum before it can make contact with the Earth's surface. The 1908 Tunguska explosion is the largest observed example of this first type of airburst. For impacts above the threshold, the fireball descends all the way to the ground, where it expands radially, driving supersonic winds and radiating thermal energy at temperatures that can melt silicate surface materials. The Libyan Desert Glass event, 29 million years ago, may be an example of this second, larger, and more destructive type of airburst. The kinetic energy threshold that demarcates these two airburst types depends on asteroid velocity, density, strength, and impact angle. 
Airburst models, combined with a reexamination of the surface conditions at Tunguska in 1908, have revealed that several assumptions from the earlier analyses led to erroneous conclusions, resulting in an overestimate of the size of the Tunguska event. Because there is no evidence that the Tunguska fireball descended to the surface, the yield must have been about 5 megatons or lower. Better understanding of airbursts, combined with the diminishing number of undiscovered large asteroids, leads to the conclusion that airbursts represent a large and growing fraction of the total impact threat.

Ultrafast 25 keV backlighting for experiments on Z

Geissel, Matthias G.; Schollmeier, Marius; Kimmel, Mark W.; Pitts, Todd A.; Rambo, Patrick K.; Schwarz, Jens S.; Sefkow, Adam B.; Atherton, B.W.

To extend the backlighting capabilities for Sandia's Z-Accelerator, Z-Petawatt, a laser that can provide 500 fs pulses of up to 120 J (100 TW target area) or up to 450 J (Z/Petawatt target area), has been built over the last several years. The main mission of this facility is the generation of high-energy X-rays, such as tin Kα at 25 keV, in ultra-short bursts. Achieving 25 keV radiographs with good resolution and contrast required addressing multiple problems, such as blocking of hot electrons, minimization of the source size, development of suitable filters, and optimization of laser intensity. Due to the violent environment inside Z, an additional and very challenging task is devising massive debris- and radiation-protection measures without losing the functionality of the backlighting system. We will present the first experiments on 25 keV backlighting, including an analysis of image quality and X-ray efficiency.

The evolution of instabilities during magnetically driven liner implosions

Slutz, Stephen A.; Sinars, Daniel S.; McBride, Ryan D.; Jennings, Christopher A.; Herrmann, Mark H.; Cuneo, M.E.

Numerical simulations [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)] indicate that fuel magnetization and preheat could enable cylindrical liner implosions to become an efficient means of generating fusion conditions. A series of simulations has been performed to study the stability of magnetically driven liner implosions. These simulations exhibit the initial growth and saturation of an electrothermal instability. The Rayleigh-Taylor instability further amplifies the resultant density perturbations, developing a spectrum of modes initially peaked at short wavelengths. With time, the spectrum evolves toward longer wavelengths in an inverse cascade. The effects of mode coupling, the radial dependence of the magnetic pressure, and the initial surface roughness will be discussed.

Network discovery, characterization, and prediction : a grand challenge LDRD final report

Kegelmeyer, William P.

This report is the final summation of Sandia's Grand Challenge LDRD Project No. 119351, 'Network Discovery, Characterization and Prediction' (the 'NGC'), which ran from FY08 to FY10. The aim of the NGC, in a nutshell, was to research, develop, and evaluate relevant analysis capabilities that address adversarial networks. Unlike some Grand Challenge efforts, that ambition created cultural subgoals as well as technical and programmatic ones, as the insistence on 'relevancy' required that the Sandia informatics research communities and the analyst user communities come to appreciate each other's needs and capabilities in a very deep and concrete way. The NGC generated a number of technical, programmatic, and cultural advances, detailed in this report. There were new algorithmic insights and research that resulted in fifty-three refereed publications and presentations; this report concludes with an abstract-annotated bibliography pointing to them all. The NGC generated three substantial prototypes that not only achieved their intended goals of testing our algorithmic integration, but also served as vehicles for customer education and program development. The NGC, as intended, has catalyzed future work in this domain; by its end it had already brought in as much new funding as had been invested in it. Finally, the NGC knit together previously disparate research staff and user expertise in a fashion that not only addressed our immediate research goals, but promises to have created an enduring cultural legacy of mutual understanding, in service of Sandia's national security responsibilities in cybersecurity and counterproliferation.

Characterization, propagation and analysis of aleatory and epistemic uncertainty in the 2008 performance assessment for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hansen, Clifford W.; Helton, Jon C.; Sallaberry, Cedric J.

The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: (1) a probability space that characterizes aleatory uncertainty; (2) a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and (3) a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA. © 2010 Springer-Verlag Berlin Heidelberg.
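The double-loop sampling structure implied by these three entities can be sketched in a few lines; the distributions and parameters below are invented purely for illustration and are not taken from the YM PA models:

```python
import numpy as np

def double_loop(n_epistemic=200, n_aleatory=1000, seed=0):
    """Toy double-loop Monte Carlo: the outer loop samples the epistemic
    probability space, the inner loop samples the aleatory probability
    space, and the consequence function maps aleatory realizations to
    outcomes. The result is an epistemic distribution over aleatory
    expectations."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_epistemic):
        lam = rng.lognormal(mean=-2.0, sigma=0.5)      # epistemic: uncertain event rate
        n_events = rng.poisson(lam, size=n_aleatory)   # aleatory: how many events occur
        size = rng.exponential(1.0, size=n_aleatory)   # aleatory: how large each one is
        consequence = n_events * size                  # consequence function
        results.append(consequence.mean())             # expectation over aleatory space
    return np.array(results)

r = double_loop()
# Percentiles of r summarize epistemic uncertainty in the expected consequence.
print(np.percentile(r, [5, 50, 95]))
```

Each entry of `r` corresponds to one realization of the epistemic probability space; the spread across entries is what a PA of this structure reports as uncertainty in the risk estimate.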

Poly(methacrylic acid) polymer hydrogel capsules: Drug carriers, sub-compartmentalized microreactors, artificial organelles

Small

Zelikin, Alexander N.; Städler, Brigitte; Price, Andrew D.

Multilayered polymer capsules attract significant research attention and are proposed as candidate materials for diverse biomedical applications, from targeted drug delivery to microencapsulated catalysis and sensors. Despite tremendous efforts, studies that extend beyond proof of concept and report on the use of polymer capsules in drug delivery are few, as are developments in encapsulated catalysis with the use of these carriers. In this Concept article, the recent successes of poly(methacrylic acid) hydrogel capsules as carrier vessels for delivery of therapeutic cargo, creation of microreactors, and assembly of sub-compartmentalized cell mimics are discussed. The developed technologies are outlined, successful applications of these capsules are highlighted, capsule properties that contribute to their performance in diverse applications are discussed, and further directions and plausible developments in the field are suggested. © Copyright 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Time-resolved picosecond pure-rotational coherent anti-Stokes Raman spectroscopy for thermometry and species concentration in flames

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Kliewer, Christopher J.; Farrow, Roger L.; Settersten, Thomas B.; Kiefer, Johannes; Patterson, Brian D.; Gao, Yi

Time-resolved picosecond pure-rotational coherent anti-Stokes Raman spectroscopy is demonstrated for thermometry and species concentration determination in flames. Time-delaying the probe pulse enables successful suppression of unwanted signals. A theoretical model is under development. ©2010 Optical Society of America.

Silicon microring modulator with integrated heater and temperature sensor for thermal control

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

DeRose, Christopher T.; Watts, Michael W.; Trotter, Douglas C.; Luck, David L.; Nielson, Gregory N.; Young, Ralph W.

The first demonstration of a silicon microring modulator with both an integrated resistive heater and a diode-based temperature sensor is shown. The temperature sensor exhibits a linear response over an external temperature range of more than 85 °C. ©2010 Optical Society of America.

Low-power high-speed silicon microdisk modulators

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Zortman, William A.; Watts, Michael W.; Trotter, Douglas C.; Young, Ralph W.; Lentine, Anthony L.

A novel silicon microdisk modulator with "error-free" ∼3 femtojoule-per-bit modulation at 12.5 Gb/s has been demonstrated. Modulation with a 1-volt swing allows for compatibility with current and future digital logic CMOS electronics. ©2010 IEEE.

Parametric results of the AlGaInAs quantum-well saturable absorber for use as a passive Q-switch

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Bender, Daniel A.; Cederberg, Jeffrey G.; Hebner, Gregory A.

We have successfully designed, built, and operated a microlaser based on an AlGaInAs multiple quantum well (MQW) semiconductor saturable absorber (SESA). Optical characterization of the semiconductor absorber, as well as the microlaser output, is presented. © 2010 Optical Society of America.

Dual-wavelength pumped 1.550μm high-power optical parametric chirped-pulse amplifier system

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Law, Ryan L.; Nelson, T.R.; Kohl, I.T.; Lovesee, Alex L.; Rudd, J.V.; Buckley, J.R.

A 1.550 μm OPCPA utilizing a dual-wavelength pumping scheme has been constructed. The system incorporates LBO and KTA for the first- and second-stage amplifiers. Peak powers >310 GW (60 mJ, 180 fs) at 10 Hz have been achieved. ©2010 Optical Society of America.

Photofragmentation approaches for the detection of polyatomic molecules

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Reichardt, Thomas A.; Hoops, Alexandra A.; Headrick, Jeffrey M.; Farrow, Roger L.; Settersten, Thomas B.; Bisson, Scott E.; Kulp, Thomas J.

We review three photofragmentation detection approaches, describing the detection of (1) vapor-phase mercuric chloride by photofragment emission, (2) vapor-phase nitro-containing compounds by photofragmentation-ionization, and (3) surface-bound organophosphonate compounds by photofragmentation-laser-induced fluorescence. © 2010 Optical Society of America.

The effect of dispersion on spectral broadening of incoherent continuous-wave light in optical fibers

Optics Express

Soh, Daniel B.; Koplow, Jeffrey P.; Moore, Sean M.; Schroder, Kevin L.; Hsu, Wen L.

In addition to fiber nonlinearity, fiber dispersion plays a significant role in the spectral broadening of incoherent continuous-wave light. In this paper we perform a numerical analysis of the spectral broadening of incoherent light based on a fully stochastic model. Under a wide range of operating conditions, these numerical simulations exhibit striking features such as damped oscillatory spectral broadening (during the initial stages of propagation) and eventual convergence to a stationary, steady-state spectral distribution at sufficiently long propagation distances. In this study we analyze the important role of fiber dispersion in such phenomena. We also derive an analytical rate-equation expression for spectral broadening. © 2010 Optical Society of America.
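As a rough illustration of the kind of stochastic propagation involved, here is a minimal first-order split-step Fourier sketch of a scalar field evolving under group-velocity dispersion and Kerr nonlinearity, seeded with band-limited noise to mimic incoherent light; the equation form and every parameter are generic assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 4096, 1.0
freq = np.fft.fftfreq(n, d=dt)

# Incoherent, narrowband input: band-limited complex Gaussian noise,
# normalized to unit mean power.
u = np.fft.ifft((np.abs(freq) < 0.01) *
                (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
u /= np.sqrt(np.mean(np.abs(u) ** 2))

def rms_width(field):
    """Intensity-weighted rms spectral width."""
    s = np.abs(np.fft.fft(field)) ** 2
    mean = np.sum(s * freq) / np.sum(s)
    return np.sqrt(np.sum(s * (freq - mean) ** 2) / np.sum(s))

w_in = rms_width(u)
beta2, gamma, dz, nz = 0.5, 1.0, 0.05, 100          # illustrative parameters
disp = np.exp(0.5j * beta2 * (2 * np.pi * freq) ** 2 * dz)
for _ in range(nz):
    u = np.fft.ifft(disp * np.fft.fft(u))           # linear (dispersion) step
    u *= np.exp(1j * gamma * np.abs(u) ** 2 * dz)   # Kerr nonlinearity step
w_out = rms_width(u)
print(w_in, w_out)
```

Both sub-steps conserve total power, so any change in `rms_width` comes from the interplay of dispersion and nonlinearity rather than gain or loss; tracking the width step by step is one simple way to observe the transient broadening behavior described above.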

Synthesis of an ionic liquid with an iron coordination cation

Dalton Transactions

Anderson, Travis M.; Ingersoll, David I.; Hensley, Alyssa H.; Staiger, Chad S.; Leonard, Jonathan C.

An iron-based ionic liquid, Fe((OHCH2CH2)2NH)6(CF3SO3)3, is synthesized in a single-step complexation reaction. Infrared and Raman data suggest that NH(CH2CH2OH)2 coordinates to Fe(III) primarily through its alcohol groups. The compound has Tg and Td values of -64 °C and 260 °C, respectively. Cyclic voltammetry reveals quasi-reversible Fe(III)/Fe(II) reduction waves. © 2010 The Royal Society of Chemistry.

Asynchronous parallel hybrid optimization combining DIRECT and GSS

Optimization Methods and Software

Griffin, Joshua D.; Kolda, Tamara G.

In this paper, we explore hybrid parallel global optimization using Dividing Rectangles (DIRECT) and asynchronous generating set search (GSS). Both DIRECT and GSS are derivative-free and so require only objective function values; this makes these methods applicable to a wide variety of science and engineering problems. DIRECT is a global search method that strategically divides the search space into ever-smaller rectangles, sampling the objective function at the centre point for each rectangle. GSS is a local search method that samples the objective function at trial points around the current best point, i.e. the point with the lowest function value. Latin hypercube sampling can be used to seed GSS with a good starting point. Using a set of global optimization test problems, we compare the parallel performance of DIRECT and GSS with hybrids that combine the two methods. Our experiments suggest that the hybrid methods are much faster than DIRECT and scale better when more processors are added. This improvement in performance is achieved without any sacrifice in the quality of the solution - the hybrid methods find the global optimum whenever DIRECT does. © 2010 Taylor & Francis.
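For intuition, the GSS polling loop can be sketched in serial form; this is a deliberate simplification of the asynchronous parallel method studied in the paper, and the function name and parameter choices below are illustrative assumptions:

```python
import numpy as np

def gss_minimize(f, x0, delta=1.0, tol=1e-8, max_evals=10000):
    """Serial sketch of generating set search (GSS): poll the 2n coordinate
    directions around the current best point, accept any improving trial,
    and contract the step size when no trial improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    n = len(x)
    while delta > tol and evals < max_evals:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # generating set {±e_i}
            trial = x + delta * d
            ft = f(trial)
            evals += 1
            if ft < fx:                                # accept improving trial
                x, fx, improved = trial, ft, True
                break
        if not improved:
            delta *= 0.5                               # contract on poll failure
    return x, fx

# Usage: minimize a shifted quadratic from a poor starting point.
sphere = lambda z: float(np.sum((z - 3.0) ** 2))
x, fx = gss_minimize(sphere, np.zeros(2))
print(x, fx)
```

In the parallel, asynchronous variant, the trial points are evaluated concurrently and accepted as results arrive; seeding `x0` from a Latin hypercube sample, as the abstract notes, gives the local search a good starting point.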

Estimating the degree of nonlinearity in transient responses with zeroed early-time fast Fourier transforms

Mechanical Systems and Signal Processing

Allen, Matthew S.; Mayes, R.L.

This work presents time-frequency signal processing methods for detecting and characterizing nonlinearity in transient response measurements. The methods are intended for systems whose response becomes increasingly linear as the response amplitude decays. The discrete Fourier transform of the response data is found with various sections of the initial response set to zero. These frequency responses, dubbed zeroed early-time fast Fourier transforms (ZEFFTs), acquire the usual shape of linear frequency response functions (FRFs) as more of the initial nonlinear response is nullified. Hence, nonlinearity is evidenced by a qualitative change in the shape of the ZEFFT as the length of the initial nullified section is varied. These spectra are shown to be sensitive to nonlinearity, revealing its presence even if it is active in only the first few cycles of a response, as may be the case with macro-slip in mechanical joints. They also give insight into the character of the nonlinearity, potentially revealing nonlinear energy transfer between modes or the modal amplitudes below which a system behaves linearly. In some cases one can identify a linear model from the late time, linear response, and use it to reconstruct the response that the system would have executed at previous times if it had been linear. This gives an indication of the severity of the nonlinearity and its effect on the measured response. The methods are demonstrated on both analytical and experimental data from systems with slip and impact nonlinearities. © 2010 Elsevier Ltd. All rights reserved.
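The core operation is simple enough to sketch; the synthetic decaying sinusoid below, whose frequency settles toward 40 Hz as the amplitude decays, is an invented stand-in for measured transient data, not an example from the paper:

```python
import numpy as np

def zefft(x, fs, n_zero):
    """Zeroed early-time FFT (ZEFFT): null the first n_zero samples of a
    transient response, then take the magnitude spectrum of what remains."""
    y = x.copy()
    y[:n_zero] = 0.0                       # discard the early, nonlinear portion
    f = np.fft.rfftfreq(len(y), d=1.0 / fs)
    return f, np.abs(np.fft.rfft(y))

# Synthetic transient: a decaying sinusoid whose instantaneous frequency
# settles toward 40 Hz (a crude stand-in for amplitude-dependent stiffness).
fs = 1024.0
t = np.arange(0.0, 8.0, 1.0 / fs)
f_inst = 40.0 + 5.0 * np.exp(-4.0 * t)     # instantaneous frequency, Hz
x = np.exp(-0.5 * t) * np.sin(2.0 * np.pi * np.cumsum(f_inst) / fs)

# As more of the initial nonlinear response is nulled, the spectrum takes on
# the shape of a linear FRF peaked at the settled frequency.
for n_zero in (0, 512, 2048):
    f, mag = zefft(x, fs, n_zero)
    print(n_zero, f[np.argmax(mag)])
```

Comparing the spectra as `n_zero` grows is the qualitative test described above: a shape that keeps changing indicates the nulled window still contains nonlinear response.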

Pulsed- and DC-charged PCSS-based trigger generators

IEEE Transactions on Plasma Science

Glover, Steven F.; Zutavern, Fred J.; Swalby, Michael S.; Cich, Michael C.; Loubriel, Guillermo M.; Mar, Alan M.; White, Forest E.

Prior to this research, we developed high-gain GaAs photoconductive semiconductor switches (PCSSs) to trigger 50-300 kV high-voltage switches (HVSs). We have demonstrated that PCSSs can trigger a variety of pulsed-power switches operating at 50-300 kV by locating the trigger generator (TG) directly at the HVS. This was demonstrated for two types of dc-charged trigatrons and two types of field distortion midplane switches, including a ±100 kV dc switch produced by the High Current Electronics Institute and used in the linear transformer driver. The lowest rms jitter obtained from triggering an HVS with a PCSS was 100 ps from a 300 kV pulse-charged trigatron. PCSSs are the key component in these independently timed, fiber-optically controlled, low-jitter TGs for HVSs. TGs are critical subsystems for reliable and efficient pulsed-power facilities because they control the timing synchronization and amplitude variation of the multiple pulse-forming lines that combine to produce the total system output. Future facility-scale pulsed-power systems are even more dependent on triggering, as they are composed of many more triggered HVSs and produce shaped pulses by independent timing of the HVSs. As pulsed-power systems become more complex, the complexity of the associated trigger systems also increases. One way to reduce this complexity is to allow the trigger system to be charged directly from the voltage appearing across the HVS. However, for slow or dc-charged pulsed-power systems, this can be particularly challenging as the dc hold-off of the PCSS dramatically declines. This paper presents results addressing HVS performance requirements over large operating ranges by triggering with a pulse-charged PCSS-based TG. Switch operating conditions as low as 45% of self-break were achieved. A dc-charged PCSS-based TG is also introduced and demonstrated over a 39-61 kV operating range. 
DC-charged PCSS allows the TG to be directly charged from slow or dc-charged pulsed-power systems. GaAs and neutron-irradiated GaAs (n-GaAs) PCSSs were used to investigate the dc-charged operation. © 2010 IEEE.

Design of dynamic Hohlraum opacity samples to increase measured sample density on Z

Review of Scientific Instruments

Nash, Thomas J.; Rochau, G.A.; Bailey, James E.

We are attempting to measure the transmission of iron on Z at plasma temperatures and densities relevant to the boundary of the solar radiation and convection zones. The opacity data published by us to date have been taken at an electron density about a factor of 10 below the 9×10^22 cm^-3 electron density of this boundary. We present results of two-dimensional (2D) simulations of the heating and expansion of an opacity sample driven by the dynamic Hohlraum radiation source on Z. The aim of the simulations is to design foil samples that provide opacity data at increased density. The inputs or source terms for the simulations are spatially and temporally varying radiation temperatures with a Lambertian angular distribution. These temperature profiles were inferred on Z with on-axis time-resolved pinhole cameras, x-ray diodes, and bolometers. A typical sample is 0.3 μm of magnesium and 0.078 μm of iron sandwiched between 10 μm layers of plastic. The 2D LASNEX simulations indicate that to increase the density of the sample one should increase the thickness of the plastic backing. © 2010 American Institute of Physics.

A theory-based logic model for innovation policy and evaluation

Research Evaluation

Jordan, Gretchen B.

Current policy and program rationale, objectives, and evaluation use a fragmented picture of the innovation process. This presents a challenge since in the United States officials in both the executive and legislative branches of government see innovation, whether that be new products or processes or business models, as the solution to many of the problems the country faces. The logic model is a popular tool for developing and describing the rationale for a policy or program and its context. This article sets out to describe generic logic models of both the R&D process and the diffusion process, building on existing theory-based frameworks. Then a combined, theory-based logic model for the innovation process is presented. Examples of the elements of the logic, each a possible leverage point or intervention, are provided, along with a discussion of how this comprehensive but simple model might be useful for both evaluation and policy development. © Beech Tree Publishing 2010.

Two-dimensional mapping of electron densities and temperatures using laser-collisional induced fluorescence

Plasma Sources Science and Technology

Barnat, Edward V.; Frederickson, K.

We discuss the application of the laser-collisional induced fluorescence (LCIF) technique to produce two-dimensional maps of both electron densities and electron temperatures in a helium plasma. A collisional-radiative model (CRM) is used to describe the evolution of electronic states after laser excitation. We discuss generalizations of the time-dependent results which are useful for simplifying data acquisition and analysis. LCIF measurements are performed in plasmas with electron densities ranging from ∼10^9 cm^-3 to approaching 10^11 cm^-3, and comparison is made between the predictions of the CRM and the measurements. Finally, the spatial and temporal evolution of an ion sheath formed during a pulsed bias is measured to demonstrate the technique. © 2010 IOP Publishing Ltd.

Shifted power method for computing tensor eigenpairs

Kolda, Tamara G.; Dunlavy, Daniel D.

Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
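The SS-HOPM iteration is compact enough to sketch for a dense order-3 symmetric tensor; the test tensor and the shift rule below are illustrative choices, not the paper's examples:

```python
import numpy as np

def sshopm(T, alpha, x0, tol=1e-8, max_iter=20000):
    """Shifted symmetric higher-order power method (SS-HOPM) for an
    order-3 symmetric tensor T: repeat x <- normalize(T x^2 + alpha*x).
    A large enough positive shift alpha makes the iteration converge
    monotonically to some tensor eigenpair (lambda, x)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        Txx = np.einsum('ijk,j,k->i', T, x, x)     # T x^{m-1} with m = 3
        x_new = Txx + alpha * x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)     # eigenvalue = T x^m
    return lam, x

# Arbitrary 4x4x4 symmetric test tensor (symmetrized Gaussian noise).
rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4, 4))
T = sum(G.transpose(p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0

# A shift of twice the mode-1 unfolding's spectral norm is safely "large enough".
alpha = 2.0 * np.linalg.norm(T.reshape(4, 16), 2)
lam, x = sshopm(T, alpha, rng.standard_normal(4))
residual = np.linalg.norm(np.einsum('ijk,j,k->i', T, x, x) - lam * x)
print(lam, residual)   # residual should be near zero at an eigenpair
```

Larger shifts guarantee convergence but slow it down, which is why the paper's fixed-point analysis of admissible shifts matters in practice.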

Methods for modeling impact-induced reactivity changes in small reactors

Smith, J.A.; Villa, Daniel V.; Radel, Ross R.; Radel, Tracy R.; Tallman, Tyler N.; Lipinski, Ronald J.

This paper describes techniques for determining impact deformation and the subsequent reactivity change for a space reactor impacting the ground following a potential launch accident or for large fuel bundles in a shipping container following an accident. This technique could be used to determine the margin of subcriticality for such potential accidents. Specifically, the approach couples a finite element continuum mechanics model (Pronto3D or Presto) with a neutronics code (MCNP). DAGMC, developed at the University of Wisconsin-Madison, is used to enable MCNP geometric queries to be performed using Pronto3D output. This paper summarizes what has been done historically for reactor launch analysis, describes the impact criticality analysis methodology, and presents preliminary results using representative reactor designs.

Biosafety Risk Assessment Methodology

Caskey, Susan A.; Gaudioso, Jennifer M.; Salerno, Reynolds M.

Laboratories that work with biological agents need to manage their safety risks to persons working in the laboratories and to the human and animal communities in the surrounding areas. Biosafety guidance defines a wide variety of biosafety risk mitigation measures, which fall under the following categories: engineering controls, procedural and administrative controls, and the use of personal protective equipment; the determination of which mitigation measures should be used to address specific laboratory risks depends upon a risk assessment. Ideally, a risk assessment should be conducted in a standardized and systematic manner that makes it repeatable and comparable. A risk assessment should clearly define the risk being assessed and avoid overcomplication.

Computational modeling of composite material fires

Dodd, Amanda B.; Hubbard, Joshua A.; Erickson, Kenneth L.

Composite materials behave differently from conventional fuel sources and have the potential to smolder and burn for extended time periods. As the amount of composite material on modern aircraft continues to increase, understanding the response of composites in fire environments becomes increasingly important. An effort is ongoing to enhance the capability to simulate composite material response in fires, including the decomposition of the composite and its interaction with a fire. To adequately model composite material in a fire, two physical model development tasks are necessary: first, the decomposition model for the composite material, and second, its interaction with a fire. A porous-media approach for the decomposition model, including a time-dependent formulation with the effects of heat, mass, species, and momentum transfer in the porous solid and gas phases, is being implemented in an engineering code, ARIA. ARIA is a Sandia National Laboratories multiphysics code with a range of capabilities, including the incompressible Navier-Stokes equations, energy transport equations, species transport equations, non-Newtonian fluid rheology, linear elastic solid mechanics, and electrostatics. To simulate the fire, FUEGO, also a Sandia National Laboratories code, is coupled to ARIA. FUEGO represents the turbulent, buoyantly driven incompressible flow, heat transfer, mass transfer, and combustion. FUEGO and ARIA are uniquely able to solve this problem because they were designed using a common architecture (SIERRA) that enhances multiphysics coupling, and both codes are capable of massively parallel calculations, enhancing performance. The decomposition reaction model is developed from small-scale experimental data, including thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) in both nitrogen and air over a range of heating rates, and from available data in the literature. 
The response of the composite material subject to a radiant heat flux boundary condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experimental efforts to obtain adequate data for validating model predictions are ongoing. The goal of this paper is to demonstrate the progress of this capability for a typical composite material and emphasize the path forward.

High fidelity nuclear energy system optimization towards an environmentally benign, sustainable, and secure energy source

Rochau, Gary E.; Rodriguez, Salvador B.

A new high-fidelity integrated system method and analysis approach was developed and implemented for consistent and comprehensive evaluations of advanced fuel cycles leading to minimized Transuranic (TRU) inventories. The method has been implemented in a code system integrating capabilities of Monte Carlo N-Particle eXtended (MCNPX) for high-fidelity fuel cycle component simulations. In this report, a Nuclear Energy System (NES) configuration was developed to take advantage of used fuel recycling and transmutation capabilities in waste management scenarios leading to minimized TRU waste inventories, long-term activities, and radiotoxicities. The reactor systems and fuel cycle components that make up the NES were selected for their ability to perform in tandem to produce clean, safe, and dependable energy in an environmentally conscious manner. The diversity in performance and spectral characteristics was used to enhance TRU waste elimination while efficiently utilizing uranium resources and providing an abundant energy source. A computational modeling approach was developed for integrating the individual models of the NES. A general approach was utilized allowing for the Integrated System Model (ISM) to be modified in order to provide simulation for other systems with similar attributes. By utilizing this approach, the ISM is capable of performing system evaluations under many different design parameter options. Additionally, the predictive capabilities of the ISM and its computational time efficiency allow for system sensitivity/uncertainty analysis and the implementation of optimization techniques.

More Details

Ion beam energy spectrum calculation via dosimetry data deconvolution

Sharp, Andrew C.; Harper-Slaboszewicz, V.H.

The energy spectrum of a H⁺ beam generated within the HERMES III accelerator is calculated from dosimetry data to refine future experiments. Multiple layers of radiochromic film are exposed to the beam. A graphic user interface was written in MATLAB to align the film images and calculate the beam's dose-depth profile. Singular value regularization is used to stabilize the unfolding and provide the H⁺ beam's energy spectrum. The beam was found to have major contributions from 1 MeV and 8.5 MeV protons. The HERMES III accelerator is typically used as a pulsed photon source to experimentally obtain the photon impulse response of systems to high energy photons. A series of experiments was performed to explore the use of HERMES III to generate an intense pulsed proton beam. Knowing the beam energy spectrum allows for greater precision in experiment predictions and beam model verification.
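As a sketch of the unfolding step, the following illustrates truncated-SVD (singular value) regularization on a toy problem; the response matrix, energy bins, and cutoff are invented for illustration and are not the HERMES III dosimetry data:

```python
import numpy as np

# Hypothetical response matrix R: R[i, j] = dose deposited in film layer i
# by a unit fluence of protons in energy bin j (values are illustrative).
R = np.array([
    [1.0, 0.6, 0.3],
    [0.1, 0.8, 0.5],
    [0.0, 0.2, 0.9],
])
spectrum_true = np.array([5.0, 1.0, 3.0])   # fluence per energy bin
dose = R @ spectrum_true                    # "measured" dose-depth profile

# Truncated-SVD unfolding: discard singular values below a cutoff so that
# noise-dominated components do not destabilize the inversion.
U, s, Vt = np.linalg.svd(R)
cutoff = 1e-3 * s[0]
s_inv = np.array([1.0 / x if x > cutoff else 0.0 for x in s])
spectrum_est = Vt.T @ (s_inv * (U.T @ dose))

print(np.allclose(spectrum_est, spectrum_true))  # noiseless case recovers the spectrum
```

With noisy data, raising the cutoff trades resolution for stability, which is the point of the regularization.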

More Details

Super-resolution microscopy reveals protein spatial reorganization in early innate immune responses

Carson, Bryan C.; Timlin, Jerilyn A.

Over the past decade, optical approaches have been introduced that effectively break the diffraction barrier. Of particular note were the introductions of Stimulated Emission Depletion (STED) microscopy, Photo-Activated Localization Microscopy (PALM), and the closely related Stochastic Optical Reconstruction Microscopy (STORM). STORM represents an attractive method for researchers, as it does not require highly specialized optical setups, can be implemented using commercially available dyes, and is more easily amenable to multicolor imaging. We implemented a simultaneous dual-color, direct-STORM imaging system through the use of an objective-based TIRF microscope and a filter-based image splitter. This system allows for excitation and detection of two fluorophores simultaneously, via projection of each fluorophore's signal onto separate regions of a detector. We imaged the sub-resolution organization of the TLR4 receptor, a key mediator of the innate immune response, after challenge with lipopolysaccharide (LPS), a bacteria-specific antigen. While distinct forms of LPS have evolved among various bacteria, only some LPS variants (such as that derived from E. coli) typically result in a significant cellular immune response. Others (such as that from the plague bacterium Y. pestis) do not, despite binding affinity for TLR4. We will show that challenge with LPS antigens produces a statistically significant increase in TLR4 receptor clusters on the cell membrane, presumably due to recruitment of receptors to lipid rafts. These changes, however, are only detectable below the diffraction limit and are not evident using conventional imaging methods. Furthermore, we will compare the spatiotemporal behavior of TLR4 receptors in response to different LPS chemotypes in order to elucidate possible routes by which pathogens such as Y. pestis are able to circumvent the innate immune system.
Finally, we will exploit the dual-color STORM capabilities to simultaneously image LPS and TLR4 receptors in the cellular membrane at resolutions at or below 40 nm.
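Cluster detection in STORM localization data can be sketched as distance-threshold grouping of (x, y) localizations; the coordinates, 40 nm radius, and minimum cluster size below are hypothetical, and the abstract does not specify the actual analysis pipeline:

```python
import math

def cluster_localizations(points, radius=40.0, min_size=3):
    """Group 2D localizations (nm) into clusters: points within `radius`
    of each other share a cluster; keep clusters of at least `min_size`."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= radius:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(points[i])
    return [g for g in groups.values() if len(g) >= min_size]

# Two tight hypothetical TLR4 clusters plus one background localization.
locs = [(0, 0), (10, 5), (20, 0),            # cluster A
        (500, 500), (510, 495), (505, 510),  # cluster B
        (1000, 1000)]                        # isolated background point
print(len(cluster_localizations(locs)))  # → 2
```

Comparing cluster counts per cell before and after LPS challenge would then feed the statistical test the abstract describes.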

More Details

Radiation effects on the electrical properties of hafnium oxide based MOS capacitors

Bielejec, Edward S.

Hafnium oxide-based MOS capacitors were investigated to determine the electrical property response to radiation environments. In situ capacitance versus voltage measurements were analyzed to identify voltage shifting as a result of changes to trapped charge with increasing dose of gamma, neutron, and ion radiation. In situ measurements required investigation and optimization of capacitor fabrication, including dicing, cleaning, metallization, packaging, and wire bonding. A top metal contact of 200 angstroms of titanium followed by 2800 angstroms of gold allowed for repeatable wire bonding and proper electrical response. Gamma and ion irradiations of atomic layer deposited hafnium oxide on silicon devices both resulted in a midgap voltage shift of no more than 0.2 V toward less positive voltages. This shift indicates recombination of radiation-induced positive charge with negative trapped charge in the bulk oxide. Silicon ion irradiation caused interface effects in addition to oxide trap effects that resulted in a flatband voltage shift of approximately 0.6 V, also toward less positive voltages. Additionally, no bias-dependent voltage shifts were observed with gamma irradiation, and strong room-temperature annealing of the oxide capacitance was observed after ion irradiation. These characteristics, in addition to the small voltage shifts observed, demonstrate the radiation hardness of hafnium oxide and its applicability for use in space systems.
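Shifts of this kind can be related to trapped oxide charge through the textbook flatband relation ΔV_fb = −ΔQ_ot/C_ox. The numbers below (oxide thickness, permittivity, charge density) are assumed for illustration and are not the device parameters from this study:

```python
# Flatband-voltage shift from oxide trapped charge (textbook relation,
# not taken from the report): dV_fb = -dQ_ot / C_ox, with C_ox = eps / t_ox.
EPS0 = 8.854e-12             # vacuum permittivity, F/m
K_HFO2 = 20.0                # assumed relative permittivity of HfO2
t_ox = 10e-9                 # assumed 10 nm oxide thickness

c_ox = K_HFO2 * EPS0 / t_ox  # oxide capacitance per area, F/m^2

# Assumed net positive trapped-charge change; a positive dQ_ot drives the
# flatband voltage negative, i.e. toward less positive voltages.
dq_ot = 3.5e-7               # C/cm^2 (illustrative)
dv_fb = -(dq_ot * 1e4) / c_ox  # convert C/cm^2 -> C/m^2; result in volts

print(round(dv_fb, 2))       # ≈ -0.2 V shift toward less positive voltages
```

The same relation run in reverse lets measured shifts be converted into effective trapped-charge densities.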

More Details

Dosimetry experiments at the MEDUSA Facility (Little Mountain)

Harper-Slaboszewicz, V.H.; Hartman, Elmer F.; Shaneyfelt, Marty R.; Schwank, James R.; Sheridan, Timothy J.

A series of experiments on the MEDUSA linear accelerator radiation test facility were performed to evaluate the difference in dose measured using different methods. Significant differences in dosimeter-measured radiation dose were observed for the different dosimeter types for the same radiation environments, and the results are compared and discussed in this report.

More Details

Transportation implications of a closed fuel cycle

Weiner, Ruth F.; Sorenson, Ken B.; Dennis, Matthew L.

Transportation for each step of a closed fuel cycle is analyzed in consideration of the availability of appropriate transportation infrastructure. The United States has both experience and certified casks for transportation that may be required by this cycle, except for the transport of fresh and used MOX fuel and fresh and used Advanced Burner Reactor (ABR) fuel. Packaging that had been used for other fuel with somewhat similar characteristics may be appropriate for these fuels, but would be inefficient. Therefore, the required neutron and gamma shielding, heat dissipation, and criticality were calculated for MOX and ABR fresh and spent fuel. Criticality would not be an issue, but the packaging design would need to balance neutron shielding and regulatory heat dissipation requirements.

More Details

Parallel mesh management using interoperable tools

Devine, Karen D.

This presentation included a discussion of challenges arising in parallel mesh management, as well as demonstrated solutions. It also described the broad range of software for mesh management and modification developed by the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) team, and highlighted applications successfully using the ITAPS tool suite.

More Details

Fire tests and analyses of a rail cask-sized calorimeter

Figueroa Faria, Victor G.

Three large open pool fire experiments involving a calorimeter the size of a spent fuel rail cask were conducted at Sandia National Laboratories' Lurance Canyon Burn Site. These experiments were performed to study the heat transfer between a very large fire and a large cask-like object. In all of the tests, the calorimeter was located at the center of a 7.93-meter diameter fuel pan, elevated 1 meter above the fuel pool. The relative pool size and positioning of the calorimeter conformed to the required positioning of a package undergoing certification fire testing. Approximately 2000 gallons of JP-8 aviation fuel were used in each test. The first two tests had relatively light winds and lasted 40 minutes, while the third had stronger winds and consumed the fuel in 25 minutes. Wind speed and direction, calorimeter temperature, fire envelope temperature, vertical gas plume speed, and radiant heat flux near the calorimeter were measured at several locations in all tests. Fuel regression rate data were also acquired. The experimental setup and certain fire characteristics observed during the tests are described in this paper. Results from three-dimensional fire simulations performed with the Container Analysis Fire Environment (CAFE) fire code are also presented. Comparisons of the thermal response of the calorimeter as measured in each test to the results obtained from the CAFE simulations are presented and discussed.

More Details

Regulatory fire test requirements for plutonium air transport packages : JP-4 or JP-5 vs. JP-8 aviation fuel

Figueroa Faria, Victor G.; Nicolette, Vernon F.

For certification, packages used for the transportation of plutonium by air must survive the hypothetical thermal environment specified in 10CFR71.74(a)(5). This regulation specifies that 'the package must be exposed to luminous flames from a pool fire of JP-4 or JP-5 aviation fuel for a period of at least 60 minutes.' This regulation was developed when jet propellant (JP) 4 and 5 were the standard jet fuels. However, JP-4 and JP-5 currently are of limited availability in the United States of America. JP-4 is very hard to obtain as it is not used much anymore. JP-5 may be easier to obtain than JP-4, but only through a military supplier. The purpose of this paper is to illustrate that readily available JP-8 fuel is a possible substitute for the aforementioned certification test. Comparisons between the properties of the three fuels are given. Results from computer simulations that compared large JP-4 to JP-8 pool fires using Sandia's VULCAN fire model are shown and discussed. Additionally, the Container Analysis Fire Environment (CAFE) code was used to compare the thermal response of a large calorimeter exposed to engulfing fires fueled by these three jet propellants. The paper then recommends JP-8 as an alternate fuel that complies with the thermal environment implied in 10CFR71.74.

More Details

Construction of an unyielding target for large horizontal impacts

Ammerman, Douglas J.; Davie, Neil T.; Kalan, Robert K.

Sandia National Laboratories has constructed an unyielding target at the end of its 2000-foot rocket sled track. This target is made up of approximately 5 million pounds of concrete, an embedded steel load spreading structure, and a steel armor plate face that varies from 10 inches thick at the center to 4 inches thick at the left and right edges. The target/track combination will allow horizontal impacts at regulatory speeds of very large objects, such as a full-scale rail cask, or high-speed impacts of smaller packages. The load-spreading mechanism in the target is based upon the proven design that has been in use for over 20 years at Sandia's aerial cable facility. That target, with a weight of 2 million pounds, has successfully withstood impact forces of up to 25 million pounds. It is expected that the new target will be capable of withstanding impact forces of more than 70 million pounds. During construction various instrumentation was placed in the target so that the response of the target during severe impacts can be monitored. This paper will discuss the construction of the target and provide insights on the testing capabilities at the sled track with this new target.

More Details

Flat plate puncture test convergence study

Ammerman, Douglas J.

The ASME Task Group on Computational Mechanics for Explicit Dynamics is investigating the types of finite element models needed to accurately solve various problems that occur frequently in cask design. One type of problem is the 1-meter impact onto a puncture spike. The work described in this paper considers this impact for a relatively thin-walled shell, represented as a flat plate. The effects of mesh refinement, friction coefficient, material models, and finite element code will be discussed. The actual punch, as defined in the transport regulations, is 15 cm in diameter with a corner radius of no more than 6 mm. The punch used in the initial part of this study has the same diameter, but has a corner radius of 25 mm. This more rounded punch was used to allow convergence of the solution with a coarser mesh. A future task will be to investigate the effect of having a punch with a smaller corner radius. The 25-cm thick type 304 stainless steel plate that represents the cask wall is 1 meter in diameter and has added mass on the edge to represent the remainder of the cask. The amount of added mass to use was calculated using Nelm's equation, an empirically derived relationship between weight, wall thickness, and ultimate strength that prevents punch-through. The outer edge of the plate is restrained so that it can only move in the direction parallel to the axis of the punch. Results that are compared include the deflection at the edge of the plate, the deflection at the center of the plate, the plastic strains at radii r = 50 cm and r = 100 cm, and, qualitatively, the distribution of plastic strains. The strains of interest are those on the surface of the plate, not the integration point strains. Because cask designers are using analyses of this type to determine whether the shell will puncture, a failure theory, including the effect of the tri-axial nature of the stress state, is also discussed.
The results of this study will help to determine what constitutes an adequate finite element model for analyzing the puncture hypothetical accident.
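Convergence studies of this kind are often summarized by an observed order of convergence computed from solutions on three successively refined meshes. The following is a generic Richardson-type estimate with illustrative strain values, not the task group's procedure:

```python
import math

def observed_order(f_coarse, f_med, f_fine, r):
    """Observed order of convergence from solutions on three meshes with a
    constant refinement ratio r (standard Richardson-type estimate)."""
    return math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)

# Illustrative peak plastic strains from coarse/medium/fine meshes,
# with each mesh half the element size of the previous one (r = 2).
p = observed_order(0.120, 0.105, 0.101, 2.0)
print(round(p, 2))  # near 2 suggests the solution is in the asymptotic range
```

An observed order close to the element formulation's theoretical order is one practical criterion for calling a mesh "adequate."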

More Details

Storing carbon dioxide in saline formations : analyzing extracted water treatment and use for power plant cooling

Kobos, Peter H.; Roach, Jesse D.; Klise, Geoffrey T.; Krumhansl, James L.; Dewers, Thomas D.; Heath, Jason; Dwyer, Brian P.; Borns, David J.

In an effort to address the potential to scale up carbon dioxide (CO₂) capture and sequestration in United States saline formations, an assessment model is being developed using a national database and modeling tool. This tool builds upon the existing NatCarb database as well as supplemental geological information to address the scale-up potential for carbon dioxide storage within these formations. The focus of the assessment model is to address the question, 'Where are opportunities to couple CO₂ storage and extracted water use for existing and expanding power plants, and what are the economic impacts of these systems relative to traditional power systems?' Initial findings, drawn from updated NatCarb data, indicate that less than 20% of the existing complete saline formation well data points meet the working depth, salinity, and formation-intersection criteria for combined CO₂ storage and extracted water treatment systems. This finding, while preliminary, suggests that the combined use of saline formations for CO₂ storage and extracted water use may be limited by the selection criteria chosen. A second preliminary finding is that some of the data required for this analysis are not present in all of the NatCarb records. This analysis represents the beginning of a larger, in-depth study of all existing coal and natural gas power plants and saline formations in the U.S. for the purpose of potential CO₂ storage and water reuse for supplemental cooling.
Additionally, the analysis offers policy insight into the combined institutional (regulatory) and physical (engineered geological sequestration and extracted water system) constraints across the United States. Finally, a representative scenario for a 1,800 MW subcritical coal-fired power plant (among other types, including supercritical coal, integrated gasification combined cycle, natural gas turbine, and natural gas combined cycle) can draw on existing and new carbon capture, transportation, compression, and sequestration technologies, along with a suite of water extraction and treatment technologies, to assess the system's overall physical and economic viability. Such a plant with 90% capture will reduce net CO₂ emissions (the original emissions less those attributable to the energy required to power the carbon capture and water treatment systems) by less than 90%, and its water demands will increase by approximately 50%. These systems may increase the plant's LCOE by approximately 50% or more. This representative example suggests that scaling up CO₂ capture and sequestration technologies to many plants throughout the country could increase water demands substantially at the regional, and possibly national, level. These scenarios for all power plants and saline formations throughout the U.S. can incorporate new information as it becomes available for planning potential new plant build-out.
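The point that 90% capture yields less than a 90% net emissions reduction follows from simple parasitic-load arithmetic, sketched below with an assumed 25% parasitic fraction (an illustrative number, not the study's model input):

```python
# Back-of-the-envelope parasitic-load argument (illustrative numbers only).
gross_mw = 1800.0
capture_fraction = 0.90      # share of stack CO2 captured
parasitic_fraction = 0.25    # assumed share of gross output consumed by capture

net_mw = gross_mw * (1 - parasitic_fraction)

# Emissions per net MWh rise because the same fuel burn now serves less net
# load, so the net reduction falls short of the 90% capture rate.
emissions_per_net_mwh = (1 - capture_fraction) / (1 - parasitic_fraction)
net_reduction = 1 - emissions_per_net_mwh

print(net_mw, round(net_reduction, 3))  # net output drops; reduction < 0.90
```

The same bookkeeping extends naturally to the water-demand and LCOE penalties the abstract quotes.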

More Details

Design implementation and migration of security systems as an extreme project

Scharmer, Carol S.

Decision trees, algorithms, software code, risk management, reports, plans, drawings, change control, presentations, and analysis - all are useful tools and efforts, but they are time consuming, resource intensive, and potentially costly for projects that have absolute schedule and budget constraints. What are necessary and prudent efforts when a customer calls with a major security problem that needs to be fixed with a proven, off-the-approval-list, multi-layered integrated system with high visibility and limited funding that expires at the end of the fiscal year? Whether driven by budget cycles, safety, or management decree, many such projects begin with generic scopes and funding allocated based on a rapid management 'guestimate.' A Project Manager (PM) is then assigned a project with a predefined and potentially limited scope, a compressed schedule, and potentially insufficient funding. The PM is tasked to rapidly and cost-effectively coordinate a requirements-based design, implementation, test, and turnover of a fully operational system to the customer, all while the customer is operating and maintaining an existing security system. Many project management manuals call this an impossible project that should not be attempted. However, security is serious business, and the reality is that rapid deployment of proven systems via an 'Extreme Project' is sometimes necessary. Extreme Projects can be wildly successful but require a dedicated team of security professionals led by an experienced project manager using a highly tailored and agile project management process, with management support at all levels, all combined with significant interface with the customer. This paper does not advocate such projects or condone eliminating valuable analysis and project management techniques. Indeed, having worked on a well-planned project provides the basis for experienced team members to complete Extreme Projects.
This paper does, however, provide insight into what it takes for projects to be successfully implemented and accepted when completed under extreme conditions.

More Details

Source physics experiments at the Nevada Test Site

Corbell, Bobby H.

The U.S. capability to monitor foreign underground nuclear test activities relies heavily on measurement of explosion phenomena, including characteristic seismic, infrasound, radionuclide, and acoustic signals. Despite recent advances in each of these fields, empirical, rather than physics-based, approaches are used to predict and explain observations. Seismologists rely on prior knowledge of the variations of teleseismic and regional seismic parameters such as P- and S-wave arrivals, from simple one-dimensional models for the teleseismic case to somewhat more complicated enhanced two-dimensional models for the regional case. Likewise, radionuclide experts rely on empirical results from a handful of limited experiments to determine the radiological source terms present at the surface after an underground test. To make the next step in the advancement of the science of monitoring, we need to transform these fields to enable predictive, physics-based modeling and analysis. The Nevada Test Site Source Physics Experiments (N-SPE) provide a unique opportunity to gather precise data from well-designed experiments to improve physics-based modeling capability. In the seismic experiments, data collection will include time domain reflectometry to measure explosive performance and yield, free-field accelerometers, extensive seismic arrays, and infrasound and acoustic measurements. The improved modeling capability that we will develop using these data should enable important advances in our ability to monitor worldwide for nuclear testing. The first of a series of source physics experiments will be conducted in the granite of Climax Stock at the NTS, near the locations of the HARD HAT and PILE DRIVER nuclear tests. This site not only provides a fairly homogeneous and well-documented geology, but also an opportunity to improve our understanding of how fractures, joints, and faults affect seismic wave generation and propagation.
The Climax Stock experiments will consist of a 220 lb (TNT equivalent) calibration shot and a 2200 lb (TNT equivalent) over-buried shot conducted in the same emplacement hole. An identical 2200 lb shot at the same location will follow to investigate the effects of pre-conditioning. These experiments also provide an opportunity to advance capabilities for near-field monitoring, and on-site inspections (OSIs) of suspected testing sites. In particular, geologic, physical, and cultural signatures of underground testing can be evaluated using the N-SPE activities as case studies. Furthermore, experiments to measure the migration of radioactive noble gases to the surface from underground explosions will enable development of higher fidelity radiological source term models that can predict migration through a variety of geologic conditions. Because the detection of short-lived radionuclides is essential to determining if an explosion was nuclear or conventional, a better understanding of the gaseous and particulate radionuclide source terms that reach the surface from underground testing is critical to development of OSI capability.

More Details

Modeling needs for very large systems

Stein, Joshua S.

Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.
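The point that plant-wide averages can mask instantaneous differences between sensors can be illustrated with synthetic data (the numbers below are invented, not the plant measurements behind Figure 22):

```python
# Toy illustration: two irradiance sensors at opposite ends of a large array
# agree on average, but a passing cloud makes instantaneous values diverge.
sensor_a = [900, 900, 300, 300, 900, 900]   # W/m^2, cloud over sensor A first
sensor_b = [900, 900, 900, 900, 300, 300]   # same cloud reaches sensor B later

mean_a = sum(sensor_a) / len(sensor_a)
mean_b = sum(sensor_b) / len(sensor_b)
max_diff = max(abs(a - b) for a, b in zip(sensor_a, sensor_b))

print(mean_a == mean_b, max_diff)  # averages match, instants differ sharply
```

A point-sensor model driven by either series alone would miss the partial-shading state the plant actually experienced.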

More Details

Results of model intercomparison : predicted vs. measured system performance

Stein, Joshua S.

This is a blind modeling study to illustrate the variability expected between PV performance model results. Objectives are to answer: (1) What is the modeling uncertainty; (2) Do certain models do better than others; (3) How can performance modeling be improved; and (4) What are the sources of uncertainty? Some preliminary conclusions are: (1) Large variation seen in model results; (2) Variation not entirely consistent across systems; (3) Uncertainty in assigning derates; (4) Discomfort when components are not included in database - Is there comfort when the components are in the database?; and (5) Residual analysis will help to uncover additional patterns in the models.

More Details

A standardized approach to PV system performance model validation

Cameron, Christopher P.

PV performance models are used to predict how much energy a PV system will produce at a given location and subject to prescribed weather conditions. These models are commonly used by project developers to choose between module technologies and array designs (e.g., fixed tilt vs. tracking) for a given site or to choose between different geographic locations, and are used by the financial community to establish project viability. Available models can differ significantly in their underlying mathematical formulations and assumptions and in the options available to the analyst for setting up a simulation. Some models lack complete documentation and transparency, which can result in confusion on how to properly set up, run, and document a simulation. Furthermore, the quality and associated uncertainty of the available data upon which these models rely (e.g., irradiance, module parameters, etc.) is often quite variable and frequently undefined. For these reasons, many project developers and other industry users of these simulation tools have expressed concerns related to the confidence they place in PV performance model results. To address this problem, we propose a standardized method for the validation of PV system-level performance models and a set of guidelines for setting up these models and reporting results. This paper describes the basic elements for a standardized model validation process adapted especially for PV performance models, suggests a framework to implement the process, and presents an example of its application to a number of available PV performance models.
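A validation framework of this kind typically reports summary error statistics. The sketch below computes normalized mean bias error and normalized RMSE for modeled versus measured energy; the metric choice and numbers are illustrative assumptions, not the specific procedure proposed in the paper:

```python
import math

def validation_metrics(predicted, measured):
    """Mean bias error and RMSE, both normalized by the mean measured value --
    common summary statistics for PV performance model validation."""
    n = len(measured)
    mean_meas = sum(measured) / n
    mbe = sum(p - m for p, m in zip(predicted, measured)) / n
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)
    return mbe / mean_meas, rmse / mean_meas

pred = [105.0, 98.0, 110.0, 95.0]   # modeled daily energy, kWh (illustrative)
meas = [100.0, 100.0, 100.0, 100.0] # measured daily energy, kWh
nmbe, nrmse = validation_metrics(pred, meas)
print(round(nmbe, 3), round(nrmse, 3))
```

Reporting both statistics separates systematic bias (which derate choices can hide) from scatter.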

More Details

Expanding the Trilinos developer community

Heroux, Michael A.

The Trilinos Project started approximately nine years ago as a small effort to enable research, development and ongoing support of small, related solver software efforts. The 'Tri' in Trilinos was intended to indicate the eventual three packages we planned to develop. In 2007 the project expanded its scope to include any package that was an enabling technology for technical computing. Presently the Trilinos repository contains over 55 packages covering a broad spectrum of reusable tools for constructing full-featured scalable scientific and engineering applications. Trilinos usage is now worldwide, and many applications have an explicit dependence on Trilinos for essential capabilities. Users come from other US laboratories, universities, industry and international research groups. Awareness and use of Trilinos is growing rapidly outside of Sandia. Members of the external research community are becoming more familiar with Trilinos, its design and collaborative nature. As a result, the Trilinos project is receiving an increasing number of requests from external community members who want to contribute to Trilinos as developers. To-date we have worked with external developers in an ad hoc fashion. Going forward, we want to develop a set of policies, procedures, tools and infrastructure to simplify interactions with external developers. As we go forward with multi-laboratory efforts such as CASL and X-Stack, and international projects such as IESP, we will need a more streamlined and explicit process for making external developers 'first-class citizens' in the Trilinos development community. This document is intended to frame the discussion for expanding the Trilinos community to all strategically important external members, while at the same time preserving Sandia's primary leadership role in the project.

More Details

Use of a hybrid technology in a critical security system

Trujillo, David J.

Assigning an acceptable level of power reliability in a security system environment requires a methodical approach to design when considering the alternatives tied to the reliability and life of the system. The downtime for a piece of equipment, be it for failure, routine maintenance, replacement, or refurbishment or connection of new equipment is a major factor in determining the reliability of the overall system. In addition to these factors is the condition where the system is static or dynamic in its growth. Most highly reliable security power source systems are supplied by utility power with uninterruptable power source (UPS) and generator backup. The combination of UPS and generator backup with a reliable utility typically provides full compliance to security requirements. In the energy market and from government agencies, there is growing pressure to utilize alternative sources of energy other than fossil fuel to increase the number of local generating systems to reduce dependence on remote generating stations and cut down on carbon effects to the environment. There are also conditions where a security system may be limited on functionality due to lack of utility power in remote locations. One alternative energy source is a renewable energy hybrid system including a photovoltaic or solar system with battery bank and backup generator set. This is a viable source of energy in the residential and commercial markets where energy management schemes can be incorporated and systems are monitored and maintained regularly. But, the reliability of this source could be considered diminished when considering the security system environment where stringent uptime requirements are required.

More Details

Discontinuous Galerkin finite element methods for gradient plasticity

Ostien, Jakob O.

In this report we apply discontinuous Galerkin finite element methods to the equations of an incompatibility based formulation of gradient plasticity. The presentation is motivated with a brief overview of the description of dislocations within a crystal lattice. A tensor representing a measure of the incompatibility with the lattice is used in the formulation of a gradient plasticity model. This model is cast in a variational formulation, and discontinuous Galerkin machinery is employed to implement the formulation into a finite element code. Finally numerical examples of the model are shown.
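For reference, a common incompatibility measure in this class of models is the dislocation density (Nye) tensor, taken as the curl of the plastic distortion; sign and index conventions vary between authors, and the report's exact definition may differ:

```latex
\boldsymbol{\alpha} \;=\; \operatorname{curl}\,\boldsymbol{\beta}^{p},
\qquad
\alpha_{ij} \;=\; \epsilon_{jkl}\,\partial_{k}\,\beta^{p}_{il} .
```

For a compatible plastic distortion (one expressible as a gradient), this curl vanishes, so the tensor measures exactly the lattice incompatibility carried by dislocations; the jump terms it generates across element boundaries are what the discontinuous Galerkin machinery handles naturally.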

More Details

Thermal modeling of carbon-epoxy laminates in fire environments

Dodd, Amanda B.

A thermal model is developed for the response of carbon-epoxy composite laminates in fire environments. The model is based on a porous media description that includes the effects of gas transport within the laminate along with swelling. Model comparisons are conducted against the data from Quintere et al. Simulations are conducted for both coupon-level and intermediate-scale one-sided heating tests. Comparisons of the heat release rate (HRR) as well as the final products (mass fractions, volume percentages, porosity, etc.) are conducted. Overall, the agreement between the available data and the model is excellent, considering the simplified approximations used to account for flame heat flux. A sensitivity study using a newly developed swelling model shows the importance of accounting for laminate expansion in the prediction of burnout. Excellent agreement is observed between the model and data for the final product composition, including porosity, mass fractions, and volume expansion ratio.

More Details

A life cycle cost analysis framework for geologic storage of hydrogen : a scenario analysis

Lord, Anna S.; Kobos, Peter H.; Borns, David J.

The U.S. Department of Energy has an interest in large scale hydrogen geostorage, which would offer substantial buffer capacity to meet possible disruptions in supply. Geostorage options being considered are salt caverns, depleted oil/gas reservoirs, aquifers, and potentially hard rock caverns. DOE has an interest in assessing the geological, geomechanical, and economic viability of these types of hydrogen storage options. This study has developed an economic analysis methodology to address the costs entailed in developing and operating an underground geologic storage facility. This year the tool was updated specifically to (1) provide a fully arrayed version such that all four types of geologic storage options can be assessed at the same time, (2) incorporate specific scenarios illustrating the model's capability, and (3) incorporate more accurate model input assumptions for the wells and storage site modules. Drawing from the knowledge gained from underground large scale geostorage of natural gas and petroleum in the U.S. and from the potential to store relatively large volumes of CO₂ in geological formations, the hydrogen storage assessment modeling will continue to build on these strengths while maintaining modeling transparency, such that other modeling efforts may draw from this project.
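The cost side of such a tool can be sketched as a discounted life-cycle cost per delivered tonne of hydrogen; the function and all facility numbers below are illustrative assumptions, not the project's actual model:

```python
def levelized_cost(capex, annual_opex, annual_output, rate, years):
    """Discounted life-cycle cost per unit of product delivered -- a generic
    levelized-cost sketch, not the project's storage-economics tool."""
    disc_cost = capex + sum(annual_opex / (1 + rate) ** t
                            for t in range(1, years + 1))
    disc_out = sum(annual_output / (1 + rate) ** t
                   for t in range(1, years + 1))
    return disc_cost / disc_out

# Hypothetical salt-cavern facility: $20M capex, $1M/yr O&M,
# 5,000 tonnes H2 delivered per year, 8% discount rate, 30-year life.
cost_per_tonne = levelized_cost(20e6, 1e6, 5000.0, 0.08, 30)
print(round(cost_per_tonne))  # $/tonne of delivered hydrogen
```

Running the same function over the four geostorage types with type-specific capex and O&M is essentially what a fully arrayed comparison amounts to.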

Potential underground risks associated with CAES

Bauer, Stephen J.

CAES in geologic media has been proposed to help 'firm' renewable energy sources (wind and solar) by providing a means to store energy when excess energy is available and to provide an energy source during non-productive renewable energy periods. Such a storage medium may experience hourly (perhaps small) pressure swings. Salt caverns represent the only proven underground storage used for CAES, but not in a mode where renewable energy sources are supported. Reservoirs, both depleted natural gas reservoirs and aquifers, represent other potential underground storage vessels for CAES; however, neither has yet been demonstrated as a functional/operational storage medium for CAES.

Characterization of the surface changes during the activation process of erbium/erbium oxide for hydrogen storage

Brumbach, Michael T.; Zavadil, Kevin R.; Snow, Clark S.; Ohlhausen, J.A.

Erbium is known to load effectively with hydrogen when held at high temperature in a hydrogen atmosphere. To make the storage of hydrogen kinetically feasible, a thermal activation step is required. Activation is a routine practice, but very little is known about the physical, chemical, and/or electronic processes that occur during activation. This work presents in situ characterization of erbium activation using variable energy photoelectron spectroscopy at various stages of the activation process. Modification of the passive surface oxide plays a significant role in activation. The chemical and electronic changes observed from core-level and valence band spectra will be discussed along with corroborating ion scattering spectroscopy measurements.

High performance semantic factoring of giga-scale semantic graph databases

Goodman, Eric G.; Mackey, Greg

As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system for semantic graph database processing that combines the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.
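One of the semantic factors listed above, connected components, can be illustrated at toy scale. The sketch below runs union-find over the subject-object links of a handful of made-up triples; the actual analysis ran over the Billion Triple dataset on the Cray XMT, not in pure Python.

```python
# Toy connected-components pass over RDF-style triples via union-find.

def connected_components(triples):
    """Group nodes of the graph whose edges are (subject, object) pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees flat
            x = parent[x]
        return x

    for s, _predicate, o in triples:
        root_s, root_o = find(s), find(o)
        if root_s != root_o:
            parent[root_s] = root_o        # merge the two components

    components = {}
    for node in parent:
        components.setdefault(find(node), set()).add(node)
    return list(components.values())

triples = [
    ("a", "knows", "b"), ("b", "knows", "c"),   # one component {a, b, c}
    ("x", "cites", "y"),                        # another component {x, y}
]
print(sorted(len(c) for c in connected_components(triples)))  # [2, 3]
```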

Premature ignition of a rocket motor

Moore, Darlene R.

During preparation for a rocket sled track (RST) event on October 9, 2008, there was an unexpected ignition of a Zuni rocket motor. Three Sandia staff and a contractor were involved in the accident; the contractor was seriously injured and made a full recovery. The data recorder battery energized the low-energy initiator in the rocket.

Foundations to the unified psycho-cognitive engine

Backus, George A.

This document outlines the key features of the SNL psychological engine. The engine is designed to be a generic presentation of cognitive entities interacting among themselves and with the external world. The engine combines the most accepted theories of behavioral psychology with those of behavioral economics to produce a unified simulation of human response from stimuli through executed behavior. The engine explicitly recognizes emotive and reasoned contributions to behavior and simulates the dynamics associated with cue processing, learning, and choice selection. Most importantly, the model parameterization can come from available media or survey information, as well subject-matter-expert information. The framework design allows the use of uncertainty quantification and sensitivity analysis to manage confidence in using the analysis results for intervention decisions.

Split Hopkinson bar experiments of preloaded interfaces

Luk, Vincent K.

Preloads are routinely applied to stiffen structural members in many applications. However, preloaded structural members have been observed to lose a significant portion of the imposed load due to internal relaxation mechanisms during impulsive impact events. This paper describes the design and initial experiments for a novel Hopkinson bar configuration intended to investigate the effect of preloads on stress wave propagation across the interface between the incident and transmission bars. Dynamic responses are measured by a variety of sensors, including accelerometers, strain gages, and a laser vibrometer. The transmissibility of a titanium incident bar is measured to establish the baseline frequency response between the input and the test interface. Wave transmission across a titanium-aluminum interface is also examined by analyzing the frequency response function, transmission efficiency, and transmissibility between the incident and transmitted waves. The presence of vacuum grease is shown to strongly influence the dynamic behavior of the system.

XeF2 vapor phase silicon etch used in the fabrication of movable SOI structures

Shul, Randy J.; Bauer, Todd B.; Plut, Thomas A.; Sanchez, Carlos A.

Vapor phase XeF{sub 2} has been used in the fabrication of various types of devices including MEMS, resonators, RF switches, and micro-fluidics, and for wafer level packaging. In this presentation we demonstrate the use of XeF{sub 2} Si etch in conjunction with deep reactive ion etch (DRIE) to release single crystal Si structures on Silicon On Insulator (SOI) wafers. XeF{sub 2} vapor phase etching is conducive to the release of movable SOI structures due to the isotropy of the etch, the high etch selectivity to silicon dioxide (SiO{sub 2}) and fluorocarbon (FC) polymer etch masks, and the ability to undercut large structures at high rates. Also, since XeF{sub 2} etching is a vapor phase process, stiction problems often associated with wet chemical release processes are avoided. Monolithic single crystal Si features were fabricated by etching continuous trenches in the device layer of an SOI wafer using a DRIE process optimized to stop on the buried SiO{sub 2}. The buried SiO{sub 2} was then etched to handle Si using an anisotropic plasma etch process. The sidewalls of the device Si features were then protected with a conformal passivation layer of either FC polymer or SiO{sub 2}. FC polymer was deposited from a C{sub 4}F{sub 8} gas precursor in an inductively coupled plasma reactor, and SiO{sub 2} was deposited by plasma enhanced chemical vapor deposition (PECVD). A relatively high ion energy, directional reactive ion etch (RIE) plasma was used to remove the passivation film on surfaces normal to the direction of the ions while leaving the sidewall passivation intact. After the bottom of the trench was cleared to the underlying Si handle wafer, XeF{sub 2} was used to isotropically etch the handle Si, thus undercutting and releasing the features patterned in the device Si layer. The released device Si structures were not etched by the XeF{sub 2} due to protection from the top SiO{sub 2} mask, sidewall passivation, and the buried SiO{sub 2} layer. 
Optimization of the XeF{sub 2} process and the sidewall passivation layers will be discussed. The advantages of releasing SOI devices with XeF{sub 2} include avoiding stiction, maintaining the integrity of the buried SiO{sub 2}, and simplifying the fabrication flow for thermally actuated devices.

The energy-water nexus and the role of carbon capture and sequestration

Malczynski, Leonard A.; Kobos, Peter H.; Castillo, Cesar R.

There is growing evidence of human-induced climate change, and various legislation has been introduced to cap carbon emissions. Fossil-fueled electric generation is responsible for over 30% of U.S. emissions, and Carbon Capture and Sequestration (CCS) technology is water and energy intensive. The project's objectives are to: (1) explore the water consumption implications associated with full deployment of a CCS future; (2) identify vulnerable areas in which water resources may be too limited to enable full deployment of CCS technology; and (3) implement the project with the cooperation of the National Energy Technology Laboratory (NETL) and the DOE Office of Policy and International Affairs. Thermoelectric water consumption is projected to increase by 3.7 billion gallons per day (BGD) due to CCS by 2035, a doubling over 2004 levels; this increase is equivalent to the projected growth in consumption by all other sectors. Demand is not equally distributed across the U.S.: 18.5% of this future demand is located in watersheds prone to surface and groundwater stress, and 30% of current and future demand is located in watersheds prone to drought stress.

Nanocrystal-enabled solid state bonding

Holm, Elizabeth A.; Puskar, J.D.; Reece, Mark R.; Tikare, Veena T.

In this project, we performed a preliminary set of sintering experiments to examine nanocrystal-enabled diffusion bonding (NEDB) in Ag-on-Ag and Cu-on-Cu using Ag nanoparticles. The experimental test matrix included the effects of material system, temperature, pressure, and particle size. The nanoparticle compacts were bonded between plates using a customized hot press, tested in shear, and examined post mortem using microscopy techniques. NEDB was found to be a feasible mechanism for low-temperature, low-pressure, solid-state bonding of like materials, creating bonded interfaces that were able to support substantial loads. The maximum supported shear strength varied substantially within sample cohorts due to variation in bonded area; however, systematic variation with fabrication conditions was also observed. Mesoscale sintering simulations were performed in order to understand whether sintering models can aid in understanding the NEDB process. A pressure-assisted sintering model was incorporated into the SPPARKS kinetic Monte Carlo sintering code. Results reproduce most of the qualitative behavior observed in experiments, indicating that simulation can augment experiments during the development of the NEDB process. Because NEDB offers a promising route to low-temperature, low-pressure, solid-state bonding, we recommend further research and development with a goal of devising new NEDB bonding processes to support Sandia's customers.
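The pressure-assisted sintering simulations mentioned above rest on the kinetic Monte Carlo event loop that codes like SPPARKS implement. A minimal Gillespie-style sketch follows; the two competing "events" and their rates are illustrative placeholders, not SPPARKS inputs or the project's actual model.

```python
import math
import random

# Schematic kinetic Monte Carlo step: choose an event with probability
# proportional to its rate, then advance time by an exponentially
# distributed increment (the Gillespie algorithm).

def kmc_step(rates, rng=random.random):
    """Select one event index and a waiting time for the given rates."""
    total = sum(rates)
    pick = rng() * total
    cumulative = 0.0
    for i, rate in enumerate(rates):
        cumulative += rate
        if pick < cumulative:
            break
    dt = -math.log(1.0 - rng()) / total   # exponential waiting time
    return i, dt

random.seed(0)
counts = [0, 0]
for _ in range(10_000):
    event, _dt = kmc_step([3.0, 1.0])   # e.g. a surface-diffusion event 3x
    counts[event] += 1                  # more likely than densification
print(counts)  # close to [7500, 2500]
```

Over many steps the events fire in proportion to their rates, which is the property mesoscale sintering codes rely on to evolve microstructure in physical time.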

Structural simulations of nanomaterials self-assembled from ionic macrocycles

Van Swol, Frank

Recent research at Sandia has discovered a new class of organic binary ionic solids with tunable optical, electronic, and photochemical properties. These nanomaterials, consisting of a novel class of organic binary ionic solids, are currently being developed at Sandia for applications in batteries, supercapacitors, and solar energy technologies. They are composed of self-assembled oligomeric arrays of very large anions and large cations, but their crucial internal arrangement is thus far unknown. This report describes (a) the development of a relevant model of nonconvex particles decorated with ions interacting through short-ranged Yukawa potentials, and (b) the results of initial Monte Carlo simulations of the self-assembly of these binary ionic solids.

Model electrode structures for studies of electrocatalyst degradation

Goeke, Ronald S.

Proton exchange membrane fuel cells are being extensively studied as power sources because of their technological advantages, such as high energy efficiency and environmental friendliness. The most effective catalyst in these systems consists of nanoparticles of Pt or Pt-based alloys on carbon supports. Understanding the role of nanoparticle size and structure in catalytic activity and degradation is needed to optimize fuel cell performance and reduce the noble metal loading. One of the more significant causes of fuel cell performance degradation is cathode catalyst deactivation. Four mechanisms are considered relevant to the loss of electrochemically active Pt surface area in fuel cell electrodes: catalyst particle sintering (e.g., Ostwald ripening), particle migration and coalescence, carbon corrosion, and catalyst dissolution. Most approaches to studying this catalyst degradation utilize membrane electrode assemblies (MEAs), which results in a complex system in which it is difficult to deconvolute the effects of the metal nanoparticles. Our research addresses catalyst degradation by taking a fundamental approach to studying electrocatalysts using model supports. Nanostructured particle arrays are engineered directly onto planar glassy carbon electrodes. These model electrocatalyst structures are applied to electrochemical activity measurements using a rotating disk electrode and surface characterization by scanning electron microscopy. Sample transfer between these measurement techniques enables examination of the same catalyst area before and after electrochemical cycling. This is useful for probing relationships between electrochemical activity and catalyst structure, such as particle size and spacing. These model systems are applied to accelerated aging studies of activity degradation. 
We will present our work demonstrating the mechanistic aspects of catalyst degradation using this simplified geometric system. The active surface area loss observed in repeated cyclic voltammetry is explained through characterization and imaging of the same RDE electrode structures throughout the aging process.

Cloud computing security

Claycomb, William R.; Urias, Vincent U.

Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

Ultrafast 25 keV backlighting for experiments on Z

Geissel, Matthias G.; Atherton, B.W.; Pitts, Todd A.; Schollmeier, Marius; Headley, Daniel I.; Kimmel, Mark W.; Rambo, Patrick K.; Robertson, Grafton K.; Sefkow, Adam B.; Schwarz, Jens S.; Speas, Christopher S.

To extend the backlighting capabilities of Sandia's Z-Accelerator, Z-Petawatt, a laser that can provide pulses of 500 fs length and up to 120 J (100 TW target area) or up to 450 J (Z-Petawatt target area), has been built over the last few years. The main mission of this facility is the generation of high energy X-rays, such as tin K{alpha} at 25 keV, in ultra-short bursts. Achieving 25 keV radiographs with decent resolution and contrast required addressing multiple problems, such as blocking hot electrons, minimizing the source size, developing suitable filters, and optimizing laser intensity. Due to the violent environment inside Z, an additional and very challenging task is devising massive debris- and radiation-protection measures without losing the functionality of the backlighting system. We will present the first experiments on 25 keV backlighting, including an analysis of image quality and X-ray efficiency.

The theory of diversity and redundancy in information system security : LDRD final report

Mayo, Jackson M.; Armstrong, Robert C.; Allan, Benjamin A.; Walker, Andrea M.

The goal of this research was to explore first principles associated with mixing of diverse implementations in a redundant fashion to increase the security and/or reliability of information systems. Inspired by basic results in computer science on the undecidable behavior of programs and by previous work on fault tolerance in hardware and software, we have investigated the problem and solution space for addressing potentially unknown and unknowable vulnerabilities via ensembles of implementations. We have obtained theoretical results on the degree of security and reliability benefits from particular diverse system designs, and mapped promising approaches for generating and measuring diversity. We have also empirically studied some vulnerabilities in common implementations of the Linux operating system and demonstrated the potential for diversity to mitigate these vulnerabilities. Our results provide foundational insights for further research on diversity and redundancy approaches for information systems.
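The redundancy idea investigated above can be illustrated with a toy N-version voting scheme: run diverse implementations of the same function and accept the majority answer, so a fault or vulnerability in any single implementation is outvoted. The three "implementations" below are trivial stand-ins, not the diverse system designs studied in the project.

```python
from collections import Counter

# Stand-in "diverse implementations" of the same parsing task.
def parse_v1(s): return int(s)        # reference implementation
def parse_v2(s): return int(s, 10)    # independently written variant
def parse_v3(s): return -int(s)       # deliberately faulty variant

def voted(implementations, value):
    """Return the majority answer across diverse implementations."""
    results = Counter(impl(value) for impl in implementations)
    answer, votes = results.most_common(1)[0]
    if votes <= len(implementations) // 2:
        raise RuntimeError("no majority: implementations disagree")
    return answer

print(voted([parse_v1, parse_v2, parse_v3], "42"))  # 42: the faulty
                                                    # variant is outvoted
```

The interesting theoretical questions, as the report notes, are how much security such ensembles actually buy when failures are correlated and how diversity between implementations can be generated and measured.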

Measurements of radiative material properties for astrophysical plasmas

Bailey, James E.

The new generation of z-pinch, laser, and XFEL facilities opens the possibility of producing astrophysically-relevant laboratory plasmas with energy densities beyond what was previously possible. Furthermore, macroscopic plasmas with uniform conditions can now be created, enabling more accurate determination of material properties. This presentation will provide an overview of our research at the Z facility investigating stellar interior opacities, AGN warm-absorber photoionized plasmas, and white dwarf photospheres. Atomic physics in plasmas heavily influences these topics. Stellar opacities are an essential ingredient of stellar models, and they affect what we know about the structure and evolution of stars. Opacity models have become highly sophisticated, but laboratory tests have not been done at the conditions existing inside stars. Our research is presently focused on measuring Fe at conditions relevant to the base of the solar convection zone, where the electron temperature and density are believed to be 190 eV and 9 x 10{sup 22} e/cc, respectively. The second project is aimed at testing atomic kinetics models for photoionized plasmas. Photoionization is an important process in many astrophysical plasmas, and the spectral signatures are routinely used to infer astrophysical objects' characteristics. However, the spectral synthesis models at the heart of these interpretations have been the subject of very limited experimental tests. Our current research examines photoionization of neon plasma subjected to radiation flux similar to the warm absorber that surrounds active galactic nuclei. The third project is a recent initiative aimed at producing a white dwarf photosphere in the laboratory. Emergent spectra from the photosphere are used to infer the star's effective temperature and surface gravity. The results depend on knowledge of H, He, and C spectral line profiles under conditions where complex physics such as quasi-molecule formation may be important. 
These profiles have been studied in past experiments, but puzzles emerging from recent white dwarf analysis have raised questions about the accuracy of the line profile models. Proof-of-principle data have been acquired indicating that radiation-heated quiescent plasmas can be produced with {approx} 1 eV temperatures and 10{sup 17}-10{sup 19} e/cc densities in an {approx} 20 cm{sup 3} volume. Such plasmas would provide a valuable platform for investigating numerous line profile questions.

Doppler effects on 3-D non-LTE radiation transport and emission spectra

Hansen, Stephanie B.; Jones, Brent M.; Ampleford, David A.; Bailey, James E.; Rochau, G.A.; Coverdale, Christine A.; Jennings, Christopher A.; Cuneo, M.E.

Spatially and temporally resolved X-ray emission lines contain information about temperatures, densities, velocities, and the gradients in a plasma. Extracting this information from optically thick lines emitted from complex ions in dynamic, three-dimensional, non-LTE plasmas requires self-consistent accounting for both non-LTE atomic physics and non-local radiative transfer. We present a brief description of a hybrid-structure spectroscopic atomic model coupled to an iterative tabular on-the-spot treatment of radiative transfer that can be applied to plasmas of arbitrary material composition, conditions, and geometries. The effects of Doppler line shifts on the self-consistent radiative transfer within the plasma and the emergent emission and absorption spectra are included in the model. Sample calculations for a two-level atom in a uniform cylindrical plasma are given, showing reasonable agreement with more sophisticated transport models and illustrating the potential complexity - or richness - of radially resolved emission lines from an imploding cylindrical plasma. Also presented is a comparison of modeled L- and K-shell spectra to temporally and radially resolved emission data from a Cu:Ni plasma. Finally, some shortcomings of the model and possible paths for improvement are discussed.
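For a two-level atom, the on-the-spot style of closure referenced above reduces to the classic escape-probability result: with J_bar approximated as (1 - beta)S, the source function becomes S = eps*B / (eps + beta - eps*beta). The sketch below uses a simple single-flight beta(tau) as a placeholder; the paper's iterative tabular treatment, Doppler shifts, and geometry are not represented.

```python
import math

# Escape-probability closure for a two-level atom (illustrative only).

def escape_probability(tau):
    """Crude angle-averaged single-flight escape probability."""
    return 1.0 if tau < 1e-8 else (1.0 - math.exp(-tau)) / tau

def source_function(eps, planck_b, tau):
    """Two-level-atom source function, S = eps*B / (eps + beta - eps*beta)."""
    beta = escape_probability(tau)
    return eps * planck_b / (eps + beta - eps * beta)

# Optically thin: photons escape freely and S collapses to eps*B.
print(source_function(eps=1e-2, planck_b=1.0, tau=0.0))    # 0.01
# Optically thick: photon trapping drives S back toward the Planck function.
print(source_function(eps=1e-2, planck_b=1.0, tau=1e4))    # ~0.99
```

The two limits bracket the behavior a transfer model must reproduce; everything between them, including line shifts from bulk velocity, is where the self-consistent treatment in the paper does its work.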

Aerosol cluster impact and break-up : model and implementation

Lechman, Jeremy B.

In this report a model for simulating aerosol cluster impact with rigid walls is presented. The model is based on JKR adhesion theory and is implemented as an enhancement to the granular (DEM) package within the LAMMPS code. The theory behind the model is outlined and preliminary results are shown. Modeling the interactions of small particles is relevant to a number of applications (e.g., soils, powders, colloidal suspensions, etc.). Modeling the behavior of aerosol particles during agglomeration and cluster dynamics upon impact with a wall is of particular interest. In this report we describe preliminary efforts to develop and implement physical models for aerosol particle interactions. Future work will consist of deploying these models to simulate aerosol cluster behavior upon impact with a rigid wall for the purpose of developing relationships for impact speed and probability of stick/bounce/break-up, as well as to assess the distribution of cluster sizes if break-up occurs. These relationships will be developed consistent with the need for inputs into system-level codes. Section 2 gives background and details on the physical model as well as implementation issues. Section 3 presents some preliminary results, which lead to discussion in Section 4 of future plans.
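The JKR theory underlying the model admits a quick back-of-envelope check: the pull-off (critical adhesion) force needed to separate two elastic spheres is F_c = (3/2)*pi*w*R_eff, with w the work of adhesion and R_eff the reduced radius. The numbers below are illustrative, not LAMMPS inputs from the report.

```python
import math

# JKR pull-off force between two adhered elastic spheres (illustrative).

def jkr_pulloff_force(work_of_adhesion, r1, r2):
    """JKR critical force (N) to separate two spheres of radii r1, r2 (m)."""
    r_eff = r1 * r2 / (r1 + r2)   # reduced radius of the contact
    return 1.5 * math.pi * work_of_adhesion * r_eff

# Two 1-micron-radius particles with an assumed w = 50 mJ/m^2:
f = jkr_pulloff_force(work_of_adhesion=0.05, r1=1e-6, r2=1e-6)
print(f"pull-off force ~ {f:.2e} N")
```

Notably, the JKR pull-off force depends on adhesion energy and geometry but not on the elastic moduli, which is one reason it is a convenient anchor for DEM contact models of sticky particles.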

Shock compression of liquid helium and helium-hydrogen mixtures : development of a cryogenic capability for shock compression of liquid helium on Z, final report for LDRD Project 141536

Hanson, David L.; Lopez, A.; Shelton, Keegan P.; Knudson, Marcus D.

This final report on SNL/NM LDRD Project 141536 summarizes progress made toward the development of a cryogenic capability to generate liquid helium (LHe) samples for high accuracy equation-of-state (EOS) measurements on the Z current drive. Accurate data on He properties at Mbar pressures are critical to understanding giant planetary interiors and for validating first principles density functional simulations, but it is difficult to condense LHe samples at very low temperatures (<3.5 K) for experimental studies on gas guns, magnetic and explosive compression devices, and lasers. We have developed a conceptual design for a cryogenic LHe sample system to generate quiescent superfluid LHe samples at 1.5-1.8 K. This cryogenic system adapts the basic elements of a continuously operating, self-regulating {sup 4}He evaporation refrigerator to the constraints of shock compression experiments on Z. To minimize heat load, the sample holder is surrounded by a double layer of thermal radiation shields cooled with LHe to 5 K. Delivery of LHe to the pumped-He evaporator bath is controlled by a flow impedance. The LHe sample holder assembly features modular components and simplified fabrication techniques to reduce cost and complexity to levels required of an expendable device. Prototypes have been fabricated, assembled, and instrumented for initial testing.

OVIS 3.2 user's guide

Brandt, James M.; Gentile, Ann C.; Houf, Catherine A.; Mayo, Jackson M.; Pebay, Philippe P.; Roe, Diana C.; Thompson, David C.; Wong, Matthew H.

This document describes how to obtain, install, use, and enjoy a better life with OVIS version 3.2. The OVIS project targets scalable, real-time analysis of very large data sets. We characterize the behaviors of elements and aggregations of elements (e.g., across space and time) in data sets in order to detect meaningful conditions and anomalous behaviors. We are particularly interested in determining anomalous behaviors that can be used as advance indicators of significant events of which notification can be made or upon which action can be taken or invoked. The OVIS open source tool (BSD license) is available for download at ovis.ca.sandia.gov. While we intend for it to support a variety of application domains, the OVIS tool was initially developed for, and continues to be primarily tuned for, the investigation of High Performance Computing (HPC) cluster system health. In this application it is intended to be both a system administrator tool for monitoring and a system engineer tool for exploring the system state in depth. OVIS 3.2 provides a variety of statistical tools for examining the behavior of elements in a cluster (e.g., nodes, racks) and associated resources (e.g., storage appliances and network switches). It provides an interactive 3-D physical view in which the cluster elements can be colored by raw or derived element values (e.g., temperatures, memory errors). The visual display allows the user to easily determine abnormal or outlier behaviors. Additionally, it provides search capabilities for certain scheduler logs. The OVIS capabilities were designed to be highly interactive - for example, the job search may drive an analysis which in turn may drive the user generation of a derived value which would then be examined on the physical display. The OVIS project envisions the capabilities of its tools applied to compute cluster monitoring. 
In the future, integration with the scheduler or resource manager will be included in a release to enable intelligent resource utilization. For example, nodes that are deemed less healthy (i.e., nodes that exhibit outlier behavior with respect to some set of variables shown to be correlated with future failure) can be discovered and assigned to shorter duration or less important jobs. Further, HPC applications with fault-tolerant capabilities would respond to changes in resource health and other OVIS notifications as needed, rather than undertaking preventative measures (e.g. checkpointing) at regular intervals unnecessarily.
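The outlier screening described above can be sketched with a robust (median/MAD) score across a cohort of nodes. OVIS itself offers a range of statistical tools; this particular modified z-score is used here only for illustration, and the node temperatures are made up.

```python
# Robust outlier flagging across cluster elements (illustrative sketch).

def outliers(values, threshold=3.5):
    """Return indices whose modified z-score exceeds the threshold."""
    ordered = sorted(values)
    median = ordered[len(ordered) // 2]            # (upper) median
    mad = sorted(abs(v - median) for v in values)[len(values) // 2]
    if mad == 0:
        return []   # no spread at all: nothing can be flagged
    # 0.6745 rescales MAD to a normal-distribution standard deviation.
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - median) / mad) > threshold]

node_temps = [41.0, 42.5, 40.8, 41.9, 42.1, 68.3, 41.4]  # node 5 runs hot
print(outliers(node_temps))  # [5]
```

A median/MAD score is preferred over a mean/stddev z-score in this setting because a single badly misbehaving node would otherwise inflate the statistics used to judge it.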

Metal fires and their implications for advanced reactors

Hewson, John C.; Nowlen, Steven P.; Figueroa Faria, Victor G.; Blanchat, Tom; Olivier, Tara J.

This report details the primary results of the Laboratory Directed Research and Development project (LDRD 08-0857), Metal Fires and Their Implications for Advanced Reactors. Advanced reactors may employ liquid metal coolants, typically sodium, because of their many desirable qualities. This project addressed some of the significant challenges associated with the use of liquid metal coolants, primary among these being the extremely rapid oxidation (combustion) that occurs at the high operating temperatures in reactors. The project has identified a number of areas for which gaps existed in knowledge pertinent to reactor safety analyses. Experimental and analysis capabilities were developed in these areas to varying degrees. In conjunction with team participation in a DOE gap analysis panel, focus was on the oxidation of spilled sodium on thermally massive surfaces. These are spills onto surfaces that substantially cool the sodium during the oxidation process, and they are relevant because standard risk mitigation procedures seek to move spill environments into this regime through rapid draining of spilled sodium. While the spilled sodium is not quenched, the burning mode is different in that there is a transition to a smoldering mode that has not been comprehensively described previously. Prior work has described spilled sodium as a pool fire, but there is a crucial, experimentally-observed transition to a smoldering mode of oxidation. A series of experimental measurements has comprehensively described the thermal evolution of this type of sodium fire for the first time. A new physics-based model has been developed that also predicts the thermal evolution of this type of sodium fire for the first time. The model introduces smoldering oxidation through porous oxide layers to go beyond traditional pool fire analyses carried out previously and to predict experimentally observed trends. 
Combined, these developments add significantly to the safety analysis capabilities of the advanced-reactor community for directly relevant scenarios. Beyond the focus on the thermally-interacting and smoldering sodium pool fires, experimental and analysis capabilities for sodium spray fires have also been developed in this project.

Synthesis and electrical analysis of nano-crystalline barium titanate nanocomposites for use in high-energy density applications

DiAntonio, Christopher D.; Monson, Todd M.; Winter, Michael R.; Roesler, Alexander R.; Chavez, Tom C.; Yang, Pin Y.

Ceramic-based nanocomposites have recently demonstrated the ability to provide enhanced permittivity, increased dielectric breakdown strength, and reduced electromechanical strain, making them potential materials systems for high energy density applications. This work presents the systematic characterization and optimization of barium titanate and lead lanthanum zirconate titanate (PLZT) nanoparticle-based composites employing a glass or polymer matrix to yield a high energy density component. The nanoparticles have been synthesized using solution and pH-based synthesis processing routes and employed to fabricate polycrystalline ceramic and nanocomposite-based components. The dielectric/ferroelectric properties of these various components have been gauged by impedance analysis and electromechanical response and will be discussed.

Results 69401–69600 of 96,771