Publications

Thiol functionalized polymers for dielectric materials

Appelhans, Leah; Dirk, Shawn M.

The development of functionalized polymer dielectrics based on poly(norbornene) and poly(PhONDI) (PhONDI = N-phenyl-7-oxanorbornene-5,6-dicarboximide) is presented. Functionalization of the polymer backbones by the thiol-ene reaction was examined to determine if thiol addition improved dielectric properties. Poly(norbornene) was not amenable to functionalization due to the propensity to crosslink under the reaction conditions studied. Poly(PhONDI) could be successfully functionalized, and the functionalized polymer was found to have increased breakdown strength as well as improved solution stability. Initial studies on the development of thiol-functionalized silica/poly(PhONDI) nanocomposites and their dielectric properties will also be discussed.

More Details

Polynorbornene as a low loss matrix material for IR metamaterial applications

Rasberry, Roger D.; Ginn, James C.; Hines, Paul H.; Arrington, Christian L.; Sinclair, Michael B.; Clem, Paul; Dirk, Shawn M.

Novel low-loss photopatternable matrix materials for IR metamaterial applications were synthesized using the ring-opening metathesis polymerization (ROMP) of norbornene followed by a partial hydrogenation to remove most of the IR-absorbing olefin groups, which absorb in the 8-12 μm range. Photopatterning was achieved via crosslinking of the remaining olefin groups with α,ω-dithiols via the thiol-ene coupling reaction. Since ROMP is a living polymerization, the molecular weight of the polymer can be controlled simply by varying the ratio of catalyst to monomer. In order to determine the optimum photopatternable IR matrix material, we varied the amount of olefin remaining after the partial hydrogenation. Hydrogenation was accomplished using tosyl hydrazide. The degree of hydrogenation can be controlled by altering the reaction time or reaction stoichiometry, and the by-products can be easily removed during workup by precipitation into ethanol. Several polymers have been prepared using this reduction scheme, including two polymers with 54% and 68% olefin remaining. Free-standing films (approx. 12 μm) were prepared from the 68% olefin material using a draw-down technique and subsequently irradiated with a UV lamp (365 nm) for thirty minutes to induce crosslinking via the thiol-ene reaction. After crosslinking, the olefin IR-absorption band disappeared and the Tg of the matrix material increased, both desirable properties for IR metamaterial applications. The polymer system has inherent photopatternable behavior primarily because of solubility differences between the pre-polymer and the cross-linked matrix. Photopatterned structures using the 54% as well as the 68% olefin material were easily obtained. The synthesis, processing, and IR absorption data and the ramifications for dielectric metamaterials will be discussed.

More Details

Making silicon stronger

Boyce, Brad L.

Silicon microfabrication has seen many decades of development, yet the structural reliability of microelectromechanical systems (MEMS) is far from optimized. The fracture strength of Si MEMS is limited by a combination of poor toughness and nanoscale etch-induced defects. A MEMS-based microtensile technique has been used to characterize the fracture strength distributions of both standard and custom microfabrication processes. Recent improvements permit thousands of test replicates, revealing subtle but important deviations from the commonly assumed 2-parameter Weibull statistical model. Subsequent failure analysis through a combination of microscopy and numerical simulation reveals salient aspects of nanoscale flaw control. Grain boundaries, for example, suffer from preferential attack during etch-release, thereby forming failure-critical grain-boundary grooves. We will discuss ongoing efforts to quantify the various factors that affect the strength of polycrystalline silicon, and how weakest-link theory can be used to make worst-case estimates for design.
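The weakest-link reasoning behind these worst-case estimates can be sketched numerically. A minimal illustration (not the authors' analysis; the parameter values are hypothetical, not measured MEMS data) of how a two-parameter Weibull survival probability scales with the number of independent flaw-containing links:

```python
import numpy as np

def weibull_survival(stress, m, sigma0):
    """Two-parameter Weibull survival probability at a given stress:
    P_s = exp(-(stress/sigma0)^m), with shape m and scale sigma0."""
    return np.exp(-(stress / sigma0) ** m)

def weakest_link_survival(stress, m, sigma0, n_links):
    """Weakest-link scaling: a specimen of n independent links survives
    only if every link survives, so P_s is raised to the n-th power."""
    return weibull_survival(stress, m, sigma0) ** n_links

# Illustrative numbers: a larger specimen (more links, hence more chances
# to contain a critical flaw) is weaker at the same applied stress.
p_single = weibull_survival(2.0, m=8.0, sigma0=3.0)
p_chain = weakest_link_survival(2.0, m=8.0, sigma0=3.0, n_links=100)
```

The same scaling argument underlies size effects in brittle fracture generally: the design-relevant quantity is the survival probability of the whole structure, not of a single test coupon.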

More Details

International perspectives on mitigating laboratory biorisks

Salazar, Carlos; Pinard, William J.

The International Perspectives on Mitigating Laboratory Biorisks workshop, held at the Renaissance Polat Istanbul Hotel in Istanbul, Republic of Turkey, from October 25 to 27, 2010, sought to promote discussion between experts and stakeholders from around the world on issues related to the management of biological risk in laboratories. The event was organized by Sandia National Laboratories International Biological Threat Reduction program, on behalf of the US Department of State Biosecurity Engagement Program and the US Department of Defense Cooperative Biological Engagement Program. The workshop came about as a response to US Under Secretary of State Ellen O. Tauscher's statements in Geneva on December 9, 2009, during the Annual Meeting of the States Parties to the Biological Weapons Convention (BWC). Pursuant to those remarks, the workshop was intended to provide a forum for interested countries to share information on biorisk management training, standards, and needs. Over the course of the meeting's three days, participants discussed diverse topics such as the role of risk assessment in laboratory biorisk management, strategies for mitigating risk, measurement of performance and upkeep, international standards, training and building workforce competence, and the important role of government and regulation. The meeting concluded with affirmations of the utility of international cooperation in this sphere and recognition of positive prospects for the future. The workshop was organized as a series of short presentations by international experts on the field of biorisk management, followed by breakout sessions in which participants were divided into four groups and urged to discuss a particular topic with the aid of a facilitator and a set of guiding questions. 
Rapporteurs were present during the plenary session as well as breakout sessions and in particular were tasked with taking notes during discussions and reporting back to the assembled participants a brief summary of points discussed. The presentations and breakout sessions were divided into five topic areas: 'Challenges in Biorisk Management,' 'Risk Assessment and Mitigation Measures,' 'Biorisk Management System Performance,' 'Training,' and 'National Oversight and Regulations.' The topics and questions were chosen by the organizers through consultation with US Government sponsors. The Chatham House Rule on non-attribution was in effect during question and answer periods and breakout session discussions.

More Details

The nature of airbursts and their contribution to the impact threat

Boslough, Mark

Ongoing simulations of low-altitude airbursts from hypervelocity asteroid impacts have led to a re-evaluation of the impact hazard that accounts for the enhanced damage potential relative to the standard point-source approximations. Computational models demonstrate that the altitude of maximum energy deposition is not a good estimate of the equivalent height of a point explosion, because the center of mass of an exploding projectile maintains a significant fraction of its initial momentum and is transported downward in the form of a high-temperature jet of expanding gas. This 'fireball' descends to a depth well beneath the burst altitude before its velocity becomes subsonic. The time scale of this descent is similar to the time scale of the explosion itself, so the jet simultaneously couples both its translational and its radial kinetic energy to the atmosphere. Because of this downward flow, larger blast waves and stronger thermal radiation pulses are experienced at the surface than would be predicted for a nuclear explosion of the same yield at the same burst height. For impacts with a kinetic energy below some threshold value, the hot jet of vaporized projectile loses its momentum before it can make contact with the Earth's surface. The 1908 Tunguska explosion is the largest observed example of this first type of airburst. For impacts above the threshold, the fireball descends all the way to the ground, where it expands radially, driving supersonic winds and radiating thermal energy at temperatures that can melt silicate surface materials. The Libyan Desert Glass event, 29 million years ago, may be an example of this second, larger, and more destructive type of airburst. The kinetic energy threshold that demarcates these two airburst types depends on asteroid velocity, density, strength, and impact angle. 
Airburst models, combined with a reexamination of the surface conditions at Tunguska in 1908, have revealed that several assumptions from the earlier analyses led to erroneous conclusions, resulting in an overestimate of the size of the Tunguska event. Because there is no evidence that the Tunguska fireball descended to the surface, the yield must have been about 5 megatons or lower. Better understanding of airbursts, combined with the diminishing number of undiscovered large asteroids, leads to the conclusion that airbursts represent a large and growing fraction of the total impact threat.

More Details

Ultrafast 25 keV backlighting for experiments on Z

Geissel, Matthias; Schollmeier, Marius; Kimmel, Mark; Pitts, Todd A.; Rambo, Patrick K.; Schwarz, Jens; Sefkow, Adam B.; Atherton, B.

To extend the backlighting capabilities for Sandia's Z-Accelerator, Z-Petawatt, a laser that can provide pulses of 500 fs length and up to 120 J (100 TW target area) or up to 450 J (Z/Petawatt target area), has been built in recent years. The main mission of this facility focuses on the generation of high-energy X-rays, such as tin Kα at 25 keV, in ultra-short bursts. Achieving 25 keV radiographs with decent resolution and contrast required addressing multiple problems such as blocking of hot electrons, minimization of the source size, development of suitable filters, and optimization of laser intensity. Due to the violent environment inside Z, an additional very challenging task is implementing massive debris and radiation protection measures without losing the functionality of the backlighting system. We will present the first experiments on 25 keV backlighting, including an analysis of image quality and X-ray efficiency.

More Details

The evolution of instabilities during magnetically driven liner implosions

Slutz, Stephen A.; Sinars, Daniel; Mcbride, Ryan; Jennings, Christopher A.; Herrmann, Mark H.; Cuneo, Michael E.

Numerical simulations [S.A. Slutz et al., Phys. Plasmas 17, 056303 (2010)] indicate that fuel magnetization and preheat could enable cylindrical liner implosions to become an efficient means to generate fusion conditions. A series of simulations has been performed to study the stability of magnetically driven liner implosions. These simulations exhibit the initial growth and saturation of an electro-thermal instability. The Rayleigh-Taylor instability further amplifies the resultant density perturbations, developing a spectrum of modes initially peaked at short wavelengths. With time the spectrum of modes evolves towards longer wavelengths, developing an inverse cascade. The effects of mode coupling, the radial dependence of the magnetic pressure, and the initial surface roughness will be discussed.

More Details

Network discovery, characterization, and prediction : a grand challenge LDRD final report

Kegelmeyer, William P.

This report is the final summation of Sandia's Grand Challenge LDRD project No. 119351, 'Network Discovery, Characterization and Prediction' (the 'NGC'), which ran from FY08 to FY10. The aim of the NGC, in a nutshell, was to research, develop, and evaluate relevant analysis capabilities that address adversarial networks. Unlike some Grand Challenge efforts, the NGC's ambition created cultural subgoals as well as technical and programmatic ones, as the insistence on 'relevancy' required that the Sandia informatics research communities and the analyst user communities come to appreciate each other's needs and capabilities in a very deep and concrete way. The NGC generated a number of technical, programmatic, and cultural advances, detailed in this report. There were new algorithmic insights and research that resulted in fifty-three refereed publications and presentations; this report concludes with an abstract-annotated bibliography pointing to them all. The NGC generated three substantial prototypes that not only achieved their intended goals of testing our algorithmic integration but also served as vehicles for customer education and program development. The NGC, as intended, has catalyzed future work in this domain; by its end it had already brought in as much new funding as had been invested in it. Finally, the NGC knit together previously disparate research staff and user expertise in a fashion that not only addressed our immediate research goals but promises to have created an enduring cultural legacy of mutual understanding, in service of Sandia's national security responsibilities in cybersecurity and counterproliferation.

More Details

Characterization, propagation and analysis of aleatory and epistemic uncertainty in the 2008 performance assessment for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hansen, Clifford W.; Helton, Jon C.; Sallaberry, Cedric J.

The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: a probability space that characterizes aleatory uncertainty; a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA. © 2010 Springer-Verlag Berlin Heidelberg.
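The three-entity structure described here maps naturally onto a nested sampling loop: an outer loop over the epistemic probability space and, for each epistemic realization, an inner expectation over the aleatory space. A minimal sketch under assumed toy distributions (the consequence function and its parameters are hypothetical placeholders, not the YM PA models):

```python
import numpy as np

rng = np.random.default_rng(0)

def consequence(a, e):
    # Hypothetical consequence function y(a | e): depends on an aleatory
    # realization a and an epistemic parameter vector e = (e0, e1).
    return e[0] * a + e[1]

# Outer loop: epistemic uncertainty (imperfect knowledge of fixed parameters),
# sampled from an assumed uniform box.
epistemic_samples = rng.uniform([0.5, 0.0], [1.5, 2.0], size=(200, 2))

expected_consequences = []
for e in epistemic_samples:
    # Inner loop: expectation over aleatory (stochastic) variability,
    # here an assumed exponential distribution.
    aleatory = rng.exponential(scale=1.0, size=2000)
    expected_consequences.append(consequence(aleatory, e).mean())

# The spread of expected consequences across the outer loop quantifies the
# epistemic uncertainty in the aleatory expectation.
lo, hi = np.percentile(expected_consequences, [5, 95])
```

The output of such a nested analysis is not a single risk number but a distribution of risk results, which is exactly the characterization-and-propagation structure the abstract describes.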

More Details

Poly(methacrylic acid) polymer hydrogel capsules: Drug carriers, sub-compartmentalized microreactors, artificial organelles

Small

Zelikin, Alexander N.; Städler, Brigitte; Price, Andrew D.

Multilayered polymer capsules attract significant research attention and are proposed as candidate materials for diverse biomedical applications, from targeted drug delivery to microencapsulated catalysis and sensors. Despite tremendous efforts, the studies which extend beyond proof of concept and report on the use of polymer capsules in drug delivery are few, as are the developments in encapsulated catalysis with the use of these carriers. In this Concept article, the recent successes of poly(methacrylic acid) hydrogel capsules as carrier vessels for delivery of therapeutic cargo, creation of microreactors, and assembly of sub-compartmentalized cell mimics are discussed. The developed technologies are outlined, successful applications of these capsules are highlighted, capsule properties which contribute to their performance in diverse applications are discussed, and further directions and plausible developments in the field are suggested. © Copyright 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

More Details

Time-resolved picosecond pure-rotational coherent anti-Stokes Raman spectroscopy for thermometry and species concentration in flames

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Kliewer, Christopher; Farrow, Roger L.; Settersten, Thomas B.; Kiefer, Johannes; Patterson, Brian; Gao, Yi

Time-resolved picosecond pure-rotational coherent anti-Stokes Raman spectroscopy is demonstrated for thermometry and species concentration determination in flames. Time-delaying the probe pulse enables successful suppression of unwanted signals. A theoretical model is under development. ©2010 Optical Society of America.

More Details

Silicon microring modulator with integrated heater and temperature sensor for thermal control

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Derose, Christopher; Watts, Michael W.; Trotter, Douglas C.; Luck, David L.; Nielson, Gregory N.; Young, Ralph W.

The first demonstration of a silicon microring modulator with both an integrated resistive heater and diode-based temperature sensor is shown. The temperature-sensor exhibits a linear response for more than an 85 °C external temperature range. ©2010 Optical Society of America.

More Details

Low-power high-speed silicon microdisk modulators

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Zortman, William A.; Watts, Michael W.; Trotter, Douglas C.; Young, Ralph W.; Lentine, Anthony L.

A novel silicon microdisk modulator with "error-free" ∼3 femtojoule/bit modulation at 12.5 Gb/s has been demonstrated. Modulation with a 1-volt swing allows for compatibility with current and future digital-logic CMOS electronics. ©2010 IEEE.

More Details

Parametric results of the AlGaInAs quantum-well saturable absorber for use as a passive Q-switch

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Cederberg, Jeffrey G.; Hebner, Gregory A.

We have successfully designed, built, and operated a microlaser based on an AlGaInAs multiple-quantum-well (MQW) semiconductor saturable absorber (SESA). Optical characterization of the semiconductor absorber, as well as the microlaser output, is presented. © 2010 Optical Society of America.

More Details

Dual-wavelength pumped 1.550 μm high-power optical parametric chirped-pulse amplifier system

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Law, Ryan; Nelson, T.R.; Kohl, I.T.; Lovesee, Alex L.; Rudd, J.V.; Buckley, J.R.

A 1.550 μm OPCPA utilizing a dual-wavelength pumping scheme has been constructed. The system incorporates LBO and KTA for the first- and second-stage amplifiers. Peak powers >310 GW (60 mJ, 180 fs) at 10 Hz have been achieved. ©2010 Optical Society of America.

More Details

Photofragmentation approaches for the detection of polyatomic molecules

Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010

Reichardt, Thomas A.; Hoops, Alexandra A.; Headrick, Jeffrey M.; Farrow, Roger L.; Settersten, Thomas B.; Bisson, Scott E.; Kulp, Thomas J.

We review three photofragmentation detection approaches, describing the detection of (1) vapor-phase mercuric chloride by photofragment emission, (2) vapor-phase nitro-containing compounds by photofragmentation-ionization, and (3) surface-bound organophosphonate compounds by photofragmentation-laser-induced fluorescence. © 2010 Optical Society of America.

More Details

The effect of dispersion on spectral broadening of incoherent continuous-wave light in optical fibers

Optics Express

Soh, Daniel B.S.; Koplow, Jeffrey; Moore, Sean W.; Schroder, Kevin L.; Hsu, Wen L.

In addition to fiber nonlinearity, fiber dispersion plays a significant role in the spectral broadening of incoherent continuous-wave light. In this paper we perform a numerical analysis of the spectral broadening of incoherent light based on a fully stochastic model. Under a wide range of operating conditions, these numerical simulations exhibit striking features such as damped oscillatory spectral broadening (during the initial stages of propagation) and eventual convergence to a stationary, steady-state spectral distribution at sufficiently long propagation distances. In this study we analyze the important role of fiber dispersion in these phenomena. We also derive an analytical rate-equation expression for the spectral broadening. © 2010 Optical Society of America.
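The standard numerical workhorse for this kind of propagation problem is the split-step Fourier method, alternating a dispersion step in the frequency domain with a Kerr nonlinear phase step in the time domain. A crude sketch (the grid, fiber parameters, and white-noise stand-in for the incoherent CW field are our own illustrative choices, not the paper's stochastic model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 4096, 0.05                                  # time grid (ps)
beta2, gamma, dz, steps = -20e-3, 2e-3, 10.0, 200   # ps^2/m, 1/(W m), m, -

w = 2 * np.pi * np.fft.fftfreq(n, dt)               # angular frequency grid
half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))  # half-step dispersion

# Incoherent CW field modeled as unit-power complex Gaussian noise
field = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

for _ in range(steps):
    # Symmetrized split step: half dispersion, full Kerr phase, half dispersion
    field = np.fft.ifft(half_disp * np.fft.fft(field))
    field *= np.exp(1j * gamma * np.abs(field) ** 2 * dz)
    field = np.fft.ifft(half_disp * np.fft.fft(field))

spectrum = np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2
```

Both sub-steps are pure phase operations, so total power is conserved while the interplay of dispersion and nonlinearity reshapes the spectrum, which is the mechanism the paper analyzes.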

More Details

Synthesis of an ionic liquid with an iron coordination cation

Dalton Transactions

Anderson, Travis M.; Ingersoll, David; Hensley, Alyssa H.; Staiger, Chad L.; Leonard, Jonathan C.

An iron-based ionic liquid, Fe((OHCH2CH2)2NH)6(CF3SO3)3, is synthesized in a single-step complexation reaction. Infrared and Raman data suggest NH(CH2CH2OH)2 primarily coordinates to Fe(III) through alcohol groups. The compound has Tg and Td values of -64 °C and 260 °C, respectively. Cyclic voltammetry reveals quasi-reversible Fe(III)/Fe(II) reduction waves. © 2010 The Royal Society of Chemistry.

More Details

Asynchronous parallel hybrid optimization combining DIRECT and GSS

Optimization Methods and Software

Griffin, Joshua D.; Kolda, Tamara G.

In this paper, we explore hybrid parallel global optimization using Dividing Rectangles (DIRECT) and asynchronous generating set search (GSS). Both DIRECT and GSS are derivative-free and so require only objective function values; this makes these methods applicable to a wide variety of science and engineering problems. DIRECT is a global search method that strategically divides the search space into ever-smaller rectangles, sampling the objective function at the centre point for each rectangle. GSS is a local search method that samples the objective function at trial points around the current best point, i.e. the point with the lowest function value. Latin hypercube sampling can be used to seed GSS with a good starting point. Using a set of global optimization test problems, we compare the parallel performance of DIRECT and GSS with hybrids that combine the two methods. Our experiments suggest that the hybrid methods are much faster than DIRECT and scale better when more processors are added. This improvement in performance is achieved without any sacrifice in the quality of the solution - the hybrid methods find the global optimum whenever DIRECT does. © 2010 Taylor & Francis.
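To make the local-search half of the hybrid concrete, here is a minimal serial compass-search sketch of GSS (illustrative only: the paper's implementation is asynchronous and parallel, and the function and parameter names here are ours):

```python
import numpy as np

def gss_minimize(f, x0, step=1.0, step_tol=1e-8, max_polls=100000):
    """Minimal serial compass-search sketch of generating set search (GSS):
    poll the objective at the current best point +/- step along each
    coordinate direction, contracting the step when no trial improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    # Generating set: the positive and negative coordinate directions
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    polls = 0
    while step > step_tol and polls < max_polls:
        improved = False
        for d in directions:
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:          # accept the first improving trial point
                x, fx = trial, f_trial
                improved = True
                break
        if not improved:
            step *= 0.5               # unsuccessful poll: contract the step
        polls += 1
    return x, fx

# Quadratic test problem with minimum at (1.5, 1.5)
x_best, f_best = gss_minimize(lambda x: float(np.sum((x - 1.5) ** 2)), [0.0, 0.0])
```

Because each poll needs only objective values at independent trial points, the poll step parallelizes naturally, which is what the asynchronous parallel GSS in the paper exploits.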

More Details

Estimating the degree of nonlinearity in transient responses with zeroed early-time fast Fourier transforms

Mechanical Systems and Signal Processing

Allen, Matthew S.; Mayes, Randall L.

This work presents time-frequency signal processing methods for detecting and characterizing nonlinearity in transient response measurements. The methods are intended for systems whose response becomes increasingly linear as the response amplitude decays. The discrete Fourier transform of the response data is found with various sections of the initial response set to zero. These frequency responses, dubbed zeroed early-time fast Fourier transforms (ZEFFTs), acquire the usual shape of linear frequency response functions (FRFs) as more of the initial nonlinear response is nullified. Hence, nonlinearity is evidenced by a qualitative change in the shape of the ZEFFT as the length of the initial nullified section is varied. These spectra are shown to be sensitive to nonlinearity, revealing its presence even if it is active in only the first few cycles of a response, as may be the case with macro-slip in mechanical joints. They also give insight into the character of the nonlinearity, potentially revealing nonlinear energy transfer between modes or the modal amplitudes below which a system behaves linearly. In some cases one can identify a linear model from the late time, linear response, and use it to reconstruct the response that the system would have executed at previous times if it had been linear. This gives an indication of the severity of the nonlinearity and its effect on the measured response. The methods are demonstrated on both analytical and experimental data from systems with slip and impact nonlinearities. © 2010 Elsevier Ltd. All rights reserved.
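The core ZEFFT computation is simple to sketch: zero progressively longer initial sections of the measured response and compare the resulting spectra. A minimal illustration on a synthetic ring-down signal (the helper name and interface are ours, not the authors'):

```python
import numpy as np

def zefft(response, dt, zero_times):
    """Zeroed early-time FFTs (ZEFFTs): spectra of a transient response with
    progressively longer initial sections set to zero."""
    freqs = np.fft.rfftfreq(len(response), dt)
    spectra = {}
    for t0 in zero_times:
        y = np.array(response, dtype=float)
        y[: int(round(t0 / dt))] = 0.0   # null the early (nonlinear) portion
        spectra[t0] = np.fft.rfft(y)
    return freqs, spectra

# Synthetic decaying transient standing in for a measured ring-down
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
signal = np.exp(-2.0 * t) * np.sin(2 * np.pi * 20.0 * t)
freqs, spectra = zefft(signal, dt, zero_times=[0.0, 0.1, 0.5])
```

As more of the early-time data is nulled, the ZEFFT of a system that linearizes at low amplitude approaches the shape of a linear FRF, and a qualitative change in shape with the nulled length is the nonlinearity indicator the paper describes.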

More Details

Pulsed- and DC-charged PCSS-based trigger generators

IEEE Transactions on Plasma Science

Zutavern, Fred J.; Swalby, Michael E.; Cich, Michael J.; Loubriel, Guillermo M.; Mar, Alan

Prior to this research, we developed high-gain GaAs photoconductive semiconductor switches (PCSSs) to trigger 50-300 kV high-voltage switches (HVSs). We have demonstrated that PCSSs can trigger a variety of pulsed-power switches operating at 50-300 kV by locating the trigger generator (TG) directly at the HVS. This was demonstrated for two types of dc-charged trigatrons and two types of field-distortion midplane switches, including a ±100 kV DC switch produced by the High Current Electronics Institute used in the linear transformer driver. The lowest rms jitter obtained from triggering an HVS with a PCSS was 100 ps from a 300 kV pulse-charged trigatron. PCSSs are the key component in these independently timed, fiber-optically controlled, low-jitter TGs for HVSs. TGs are critical subsystems for reliable and efficient pulsed-power facilities because they control the timing synchronization and amplitude variation of multiple pulse-forming lines that combine to produce the total system output. Future facility-scale pulsed-power systems are even more dependent on triggering, as they are composed of many more triggered HVSs and they produce shaped pulses by independent timing of the HVSs. As pulsed-power systems become more complex, the complexity of the associated trigger systems also increases. One means to reduce this complexity is to allow the trigger system to be charged directly from the voltage appearing across the HVS. However, for slow or dc-charged pulsed-power systems, this can be particularly challenging as the dc hold-off of the PCSS dramatically declines. This paper presents results that seek to address HVS performance requirements over large operating ranges by triggering with a pulse-charged PCSS-based TG. Switch operating conditions as low as 45% of self-break were achieved. A dc-charged PCSS-based TG is also introduced and demonstrated over a 39-61 kV operating range. 
DC charging allows the TG to be charged directly from slow or dc-charged pulsed-power systems. GaAs and neutron-irradiated GaAs (n-GaAs) PCSSs were used to investigate the dc-charged operation. © 2010 IEEE.

More Details

Design of dynamic Hohlraum opacity samples to increase measured sample density on Z

Review of Scientific Instruments

Nash, Thomas J.; Rochau, G.A.; Bailey, James E.

We are attempting to measure the transmission of iron on Z at plasma temperatures and densities relevant to the boundary between the solar radiation and convection zones. The opacity data we have published to date were taken at an electron density about a factor of 10 below the 9×10^22/cm^3 electron density of this boundary. We present results of two-dimensional (2D) simulations of the heating and expansion of an opacity sample driven by the dynamic Hohlraum radiation source on Z. The aim of the simulations is to design foil samples that provide opacity data at increased density. The inputs, or source terms, for the simulations are spatially and temporally varying radiation temperatures with a Lambertian angular distribution. These temperature profiles were inferred on Z with on-axis time-resolved pinhole cameras, x-ray diodes, and bolometers. A typical sample is 0.3 μm of magnesium and 0.078 μm of iron sandwiched between 10 μm layers of plastic. The 2D LASNEX simulations indicate that to increase the density of the sample one should increase the thickness of the plastic backing. © 2010 American Institute of Physics.

More Details

A theory-based logic model for innovation policy and evaluation

Research Evaluation

Jordan, Gretchen B.

Current policy and program rationale, objectives, and evaluation use a fragmented picture of the innovation process. This presents a challenge since in the United States officials in both the executive and legislative branches of government see innovation, whether that be new products or processes or business models, as the solution to many of the problems the country faces. The logic model is a popular tool for developing and describing the rationale for a policy or program and its context. This article sets out to describe generic logic models of both the R&D process and the diffusion process, building on existing theory-based frameworks. Then a combined, theory-based logic model for the innovation process is presented. Examples of the elements of the logic, each a possible leverage point or intervention, are provided, along with a discussion of how this comprehensive but simple model might be useful for both evaluation and policy development. © Beech Tree Publishing 2010.

More Details

Two-dimensional mapping of electron densities and temperatures using laser-collisional induced fluorescence

Plasma Sources Science and Technology

Barnat, Edward; Frederickson, K.

We discuss the application of the laser-collisional induced fluorescence (LCIF) technique to produce two-dimensional maps of both electron densities and electron temperatures in a helium plasma. A collisional-radiative model (CRM) is used to describe the evolution of electronic states after laser excitation. We discuss generalizations of the time-dependent results which are useful for simplifying data acquisition and analysis. LCIF measurements are performed in plasmas with electron densities ranging from ∼10^9 cm^-3 to approaching 10^11 cm^-3, and comparison is made between the CRM predictions and the measurements. Finally, the spatial and temporal evolution of an ion sheath formed during a pulsed bias is measured to demonstrate the technique. © 2010 IOP Publishing Ltd.

More Details

Shifted power method for computing tensor eigenpairs

Kolda, Tamara G.; Dunlavy, Daniel M.

Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
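For the order-3 case the SS-HOPM iteration is short enough to sketch directly. This is a toy serial version with our own shift and tolerance choices (see the paper for the shift-selection theory and convergence analysis):

```python
import numpy as np

def ss_hopm(A, alpha=5.0, tol=1e-12, max_iter=50000, seed=0):
    """Shifted symmetric higher-order power method (SS-HOPM) sketch for an
    order-3 symmetric tensor A of shape (n, n, n).  Iterates
    x <- normalize(A x^(m-1) + alpha x); for a sufficiently large positive
    shift alpha this converges to a tensor eigenpair A x^(m-1) = lambda x
    with ||x|| = 1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # A x^(m-1) with m = 3
        y = Ax2 + alpha * x                      # shifted power step
        x_new = y / np.linalg.norm(y)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # eigenvalue lambda
    return lam, x

# Symmetrize a random order-3 tensor and compute one eigenpair
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4, 4))
A = sum(T.transpose(p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
lam, x = ss_hopm(A)
residual = np.einsum('ijk,j,k->i', A, x, x) - lam * x
```

With the shift set to zero this reduces to the symmetric higher-order power method, mirroring the generalization relationship stated in the abstract.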

More Details

Methods for modeling impact-induced reactivity changes in small reactors

Smith, Jeffrey A.; Villa, Daniel L.; Radel, Ross F.; Radel, Tracy E.; Tallman, Tyler N.; Lipinski, Ronald

This paper describes techniques for determining impact deformation and the subsequent reactivity change for a space reactor impacting the ground following a potential launch accident or for large fuel bundles in a shipping container following an accident. This technique could be used to determine the margin of subcriticality for such potential accidents. Specifically, the approach couples a finite element continuum mechanics model (Pronto3D or Presto) with a neutronics code (MCNP). DAGMC, developed at the University of Wisconsin-Madison, is used to enable MCNP geometric queries to be performed using Pronto3D output. This paper summarizes what has been done historically for reactor launch analysis, describes the impact criticality analysis methodology, and presents preliminary results using representative reactor designs.

More Details

Biosafety Risk Assessment Methodology

Caskey, Susan; Gaudioso, Jennifer M.; Salerno, Reynolds M.

Laboratories that work with biological agents need to manage the safety risks they pose to persons working in the laboratories and to the human and animal communities in the surrounding areas. Biosafety guidance defines a wide variety of risk mitigation measures, which fall under the following categories: engineering controls, procedural and administrative controls, and the use of personal protective equipment; determining which mitigation measures should be used to address specific laboratory risks depends upon a risk assessment. Ideally, a risk assessment should be conducted in a standardized and systematic manner, which allows it to be repeatable and comparable. A risk assessment should clearly define the risk being assessed and avoid overcomplication.

More Details

Computational modeling of composite material fires

Dodd, Amanda B.; Hubbard, Joshua A.; Erickson, Kenneth L.

Composite materials behave differently from conventional fuel sources and have the potential to smolder and burn for extended time periods. As the amount of composite material on modern aircraft continues to increase, understanding the response of composites in fire environments becomes increasingly important. An effort is ongoing to enhance the capability to simulate composite material response in fires, including the decomposition of the composite and its interaction with the fire. Adequately modeling composite material in a fire requires two physical model development tasks: first, a decomposition model for the composite material, and second, a model of its interaction with the fire. A porous-media decomposition model, with a time-dependent formulation of the heat, mass, species, and momentum transfer in the porous solid and gas phases, is being implemented in an engineering code, ARIA. ARIA is a Sandia National Laboratories multiphysics code whose capabilities include the incompressible Navier-Stokes equations, energy transport equations, species transport equations, non-Newtonian fluid rheology, linear elastic solid mechanics, and electrostatics. To simulate the fire, FUEGO, also a Sandia National Laboratories code, is coupled to ARIA. FUEGO represents the turbulent, buoyantly driven incompressible flow, heat transfer, mass transfer, and combustion. FUEGO and ARIA are uniquely suited to this problem because they were designed on a common architecture (SIERRA) that enhances multiphysics coupling, and both codes are capable of massively parallel calculations, enhancing performance. The decomposition reaction model is developed from small-scale experimental data, including thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) in both nitrogen and air over a range of heating rates, and from available data in the literature. The response of the composite material subject to a radiant heat flux boundary condition is examined to study the propagation of the decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experimental efforts to obtain data adequate for validating model predictions are ongoing. The goal of this paper is to demonstrate the progress of this capability for a typical composite material and to outline the path forward.

More Details

High fidelity nuclear energy system optimization towards an environmentally benign, sustainable, and secure energy source

Rochau, Gary E.; Rodriguez, Salvador B.

A new high-fidelity integrated system method and analysis approach was developed and implemented for consistent and comprehensive evaluations of advanced fuel cycles leading to minimized transuranic (TRU) inventories. The method has been implemented in a code system integrating the capabilities of Monte Carlo N-Particle eXtended (MCNPX) for high-fidelity fuel cycle component simulations. In this report, a Nuclear Energy System (NES) configuration was developed to take advantage of used fuel recycling and transmutation capabilities in waste management scenarios leading to minimized TRU waste inventories, long-term activities, and radiotoxicities. The reactor systems and fuel cycle components that make up the NES were selected for their ability to perform in tandem to produce clean, safe, and dependable energy in an environmentally conscious manner. Their diversity in performance and spectral characteristics was used to enhance TRU waste elimination while efficiently utilizing uranium resources and providing an abundant energy source. A computational modeling approach was developed for integrating the individual models of the NES. A general approach was utilized, allowing the Integrated System Model (ISM) to be modified to simulate other systems with similar attributes. By utilizing this approach, the ISM is capable of performing system evaluations under many different design parameter options. Additionally, the predictive capabilities of the ISM and its computational time efficiency allow for system sensitivity/uncertainty analysis and the implementation of optimization techniques.

More Details

Ion beam energy spectrum calculation via dosimetry data deconvolution

Sharp, Andrew C.; Harper-Slaboszewicz, V.

The energy spectrum of an H⁺ beam generated within the HERMES III accelerator is calculated from dosimetry data to refine future experiments. Multiple layers of radiochromic film are exposed to the beam. A graphical user interface was written in MATLAB to align the film images and calculate the beam's dose-depth profile. Singular value regularization is used to stabilize the unfolding and provide the H⁺ beam's energy spectrum. The beam was found to have major contributions from 1 MeV and 8.5 MeV protons. The HERMES III accelerator is typically used as a pulsed photon source to experimentally obtain the impulse response of systems to high-energy photons. A series of experiments was performed to explore the use of HERMES III to generate an intense pulsed proton beam. Knowing the beam energy spectrum allows for greater precision in experiment predictions and beam model verification.
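The abstract does not specify the exact regularization used; truncated SVD is one common form of singular value regularization for this kind of unfolding. The sketch below uses an invented toy response matrix (dose per film layer per unit fluence in each energy bin), not the HERMES III response data.

```python
import numpy as np

def tsvd_unfold(R, d, k):
    """Unfold a spectrum f from depth-dose data d by solving R f = d,
    where R[i, j] is the dose deposited in film layer i per unit fluence
    in energy bin j. Truncating to the k largest singular values
    suppresses the noise amplification typical of this inversion."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]            # invert retained singular values only
    return Vt.T @ (s_inv * (U.T @ d))
```

With noisy film data one would choose k smaller than the full rank, trading energy resolution for stability of the unfolded spectrum.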

More Details

Super-resolution microscopy reveals protein spatial reorganization in early innate immune responses

Carson, Bryan; Timlin, Jerilyn A.

Over the past decade, optical approaches have been introduced that effectively break the diffraction barrier. Of particular note were the introductions of Stimulated Emission Depletion (STED) microscopy, Photo-Activated Localization Microscopy (PALM), and the closely related Stochastic Optical Reconstruction Microscopy (STORM). STORM represents an attractive method for researchers, as it does not require highly specialized optical setups, can be implemented using commercially available dyes, and is more easily amenable to multicolor imaging. We implemented a simultaneous dual-color, direct-STORM imaging system using an objective-based TIRF microscope and a filter-based image splitter. This system allows for excitation and detection of two fluorophores simultaneously, via projection of each fluorophore's signal onto separate regions of a detector. We imaged the sub-resolution organization of the TLR4 receptor, a key mediator of the innate immune response, after challenge with lipopolysaccharide (LPS), a bacteria-specific antigen. While distinct forms of LPS have evolved among various bacteria, only some LPS variants (such as that derived from E. coli) typically elicit a significant cellular immune response. Others (such as that from the plague bacterium Y. pestis) do not, despite their affinity for TLR4. We will show that challenge with LPS antigens produces a statistically significant increase in TLR4 receptor clusters on the cell membrane, presumably due to recruitment of receptors to lipid rafts. These changes, however, are only detectable below the diffraction limit and are not evident using conventional imaging methods. Furthermore, we will compare the spatiotemporal behavior of TLR4 receptors in response to different LPS chemotypes in order to elucidate possible routes by which pathogens such as Y. pestis are able to circumvent the innate immune system. Finally, we will exploit the dual-color STORM capabilities to simultaneously image LPS and TLR4 receptors in the cellular membrane at resolutions at or below 40 nm.

More Details

Radiation effects on the electrical properties of hafnium oxide based MOS capacitors

Bielejec, Edward S.

Hafnium oxide-based MOS capacitors were investigated to determine the response of their electrical properties to radiation environments. In situ capacitance-versus-voltage measurements were analyzed to identify voltage shifts resulting from changes in trapped charge with increasing dose of gamma, neutron, and ion radiation. In situ measurements required investigation and optimization of capacitor fabrication, including dicing, cleaning, metallization, packaging, and wire bonding. A top metal contact of 200 angstroms of titanium followed by 2800 angstroms of gold allowed for repeatable wire bonding and proper electrical response. Gamma and ion irradiations of atomic-layer-deposited hafnium oxide on silicon devices both resulted in a midgap voltage shift of no more than 0.2 V toward less positive voltages. This shift indicates recombination of radiation-induced positive charge with negative trapped charge in the bulk oxide. Silicon ion irradiation caused interface effects in addition to oxide trap effects, resulting in a flatband voltage shift of approximately 0.6 V, also toward less positive voltages. Additionally, no bias-dependent voltage shifts were observed with gamma irradiation, and strong room-temperature annealing of the oxide capacitance was observed after ion irradiation. These characteristics, in addition to the small voltage shifts observed, demonstrate the radiation hardness of hafnium oxide and its applicability for use in space systems.

More Details

Dosimetry experiments at the MEDUSA Facility (Little Mountain)

Harper-Slaboszewicz, V.; Hartman, Elmer F.; Shaneyfelt, Marty R.; Schwank, James R.; Sheridan, Timothy J.

A series of experiments was performed at the MEDUSA linear accelerator radiation test facility to evaluate the difference in dose measured using different methods. Significant differences in dosimeter-measured radiation dose were observed among the dosimeter types for the same radiation environments; the results are compared and discussed in this report.

More Details

Transportation implications of a closed fuel cycle

Weiner, Ruth F.; Sorenson, Ken B.; Dennis, Matthew L.

Transportation for each step of a closed fuel cycle is analyzed in consideration of the availability of appropriate transportation infrastructure. The United States has both experience and certified casks for transportation that may be required by this cycle, except for the transport of fresh and used MOX fuel and fresh and used Advanced Burner Reactor (ABR) fuel. Packaging that had been used for other fuel with somewhat similar characteristics may be appropriate for these fuels, but would be inefficient. Therefore, the required neutron and gamma shielding, heat dissipation, and criticality were calculated for MOX and ABR fresh and spent fuel. Criticality would not be an issue, but the packaging design would need to balance neutron shielding and regulatory heat dissipation requirements.

More Details

Parallel mesh management using interoperable tools

Devine, Karen

This presentation included a discussion of challenges arising in parallel mesh management, as well as demonstrated solutions. It also described the broad range of software for mesh management and modification developed by the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) team, and highlighted applications successfully using the ITAPS tool suite.

More Details

Fire tests and analyses of a rail cask-sized calorimeter

Figueroa Faria, Victor G.

Three large open-pool fire experiments involving a calorimeter the size of a spent fuel rail cask were conducted at Sandia National Laboratories' Lurance Canyon Burn Site. These experiments were performed to study the heat transfer between a very large fire and a large cask-like object. In all of the tests, the calorimeter was located at the center of a 7.93-meter-diameter fuel pan, elevated 1 meter above the fuel pool. The relative pool size and positioning of the calorimeter conformed to the required positioning of a package undergoing certification fire testing. Approximately 2000 gallons of JP-8 aviation fuel were used in each test. The first two tests had relatively light winds and lasted 40 minutes, while the third had stronger winds and consumed the fuel in 25 minutes. Wind speed and direction, calorimeter temperature, fire envelope temperature, vertical gas plume speed, and radiant heat flux near the calorimeter were measured at several locations in all tests. Fuel regression rate data were also acquired. The experimental setup and certain fire characteristics observed during the tests are described in this paper. Results from three-dimensional fire simulations performed with the Container Analysis Fire Environment (CAFE) fire code are also presented. Comparisons of the thermal response of the calorimeter as measured in each test to the results obtained from the CAFE simulations are presented and discussed.

More Details

Regulatory fire test requirements for plutonium air transport packages : JP-4 or JP-5 vs. JP-8 aviation fuel

Figueroa Faria, Victor G.; Nicolette, Vernon F.

For certification, packages used for the transportation of plutonium by air must survive the hypothetical thermal environment specified in 10CFR71.74(a)(5). This regulation specifies that 'the package must be exposed to luminous flames from a pool fire of JP-4 or JP-5 aviation fuel for a period of at least 60 minutes.' The regulation was developed when jet propellant (JP) 4 and 5 were the standard jet fuels. However, JP-4 and JP-5 are now of limited availability in the United States: JP-4 is very difficult to obtain because it is rarely used anymore, and JP-5 is available only through military suppliers. The purpose of this paper is to illustrate that readily available JP-8 fuel is a possible substitute for the aforementioned certification test. Comparisons between the properties of the three fuels are given. Results from computer simulations that compared large JP-4 and JP-8 pool fires using Sandia's VULCAN fire model are shown and discussed. Additionally, the Container Analysis Fire Environment (CAFE) code was used to compare the thermal response of a large calorimeter exposed to engulfing fires fueled by each of these three jet propellants. The paper then recommends JP-8 as an alternate fuel that complies with the thermal environment implied in 10CFR71.74.

More Details

Construction of an unyielding target for large horizontal impacts

Ammerman, Douglas; Davie, Neil T.; Kalan, Robert J.

Sandia National Laboratories has constructed an unyielding target at the end of its 2000-foot rocket sled track. The target is made up of approximately 5 million pounds of concrete, an embedded steel load-spreading structure, and a steel armor plate face that varies from 10 inches thick at the center to 4 inches thick at the left and right edges. The target/track combination will allow horizontal impacts of very large objects, such as a full-scale rail cask, at regulatory speeds, or high-speed impacts of smaller packages. The load-spreading mechanism in the target is based upon the proven design that has been in use for over 20 years at Sandia's aerial cable facility. That target, with a weight of 2 million pounds, has successfully withstood impact forces of up to 25 million pounds. It is expected that the new target will be capable of withstanding impact forces of more than 70 million pounds. During construction, various instrumentation was placed in the target so that its response during severe impacts can be monitored. This paper discusses the construction of the target and provides insights into the testing capabilities the new target adds to the sled track.

More Details

Flat plate puncture test convergence study

Ammerman, Douglas

The ASME Task Group on Computational Mechanics for Explicit Dynamics is investigating the types of finite element models needed to accurately solve various problems that occur frequently in cask design. One such problem is the 1-meter impact onto a puncture spike. The work described in this paper considers this impact for a relatively thin-walled shell, represented as a flat plate. The effects of mesh refinement, friction coefficient, material models, and finite element code are discussed. The actual punch, as defined in the transport regulations, is 15 cm in diameter with a corner radius of no more than 6 mm. The punch used in the initial part of this study has the same diameter, but a corner radius of 25 mm. This more rounded punch was used to allow convergence of the solution with a coarser mesh; a future task will be to investigate the effect of a punch with a smaller corner radius. The 25-cm-thick type 304 stainless steel plate that represents the cask wall is 1 meter in diameter and has added mass on the edge to represent the remainder of the cask. The amount of added mass was calculated using Nelm's equation, an empirically derived relationship between weight, wall thickness, and ultimate strength that prevents punch-through. The outer edge of the plate is restrained so that it can only move in the direction parallel to the axis of the punch. Results that are compared include the deflection at the edge of the plate, the deflection at the center of the plate, the plastic strains at radii r = 50 cm and r = 100 cm, and, qualitatively, the distribution of plastic strains. The strains of interest are those on the surface of the plate, not the integration-point strains. Because cask designers are using analyses of this type to determine whether a shell will puncture, a failure theory, including the effect of the triaxial nature of the stress state, is also discussed. The results of this study will help determine what constitutes an adequate finite element model for analyzing the puncture hypothetical accident.

More Details

Storing carbon dioxide in saline formations : analyzing extracted water treatment and use for power plant cooling

Kobos, Peter; Roach, Jesse D.; Klise, Geoffrey T.; Krumhansl, James L.; Dewers, Thomas; Heath, Jason E.; Dwyer, Brian P.; Borns, David J.

In an effort to address the potential to scale up carbon dioxide (CO₂) capture and sequestration in United States saline formations, an assessment model is being developed using a national database and modeling tool. This tool builds upon the existing NatCarb database, as well as supplemental geological information, to address the scale-up potential for carbon dioxide storage within these formations. The focus of the assessment model is to address the question, 'Where are opportunities to couple CO₂ storage and extracted water use for existing and expanding power plants, and what are the economic impacts of these systems relative to traditional power systems?' Initial findings, drawn from updated NatCarb data, indicate that less than 20% of the existing complete saline formation well data points meet the working depth, salinity, and formation criteria for combined CO₂ storage and extracted water treatment systems. This finding, while preliminary, suggests that the combined use of saline formations for CO₂ storage and extracted water use may be limited by the selection criteria chosen. A second preliminary finding is that some of the data required for this analysis are not present in all of the NatCarb records. This analysis represents the beginning of a larger, in-depth study of all existing coal and natural gas power plants and saline formations in the U.S. for the purpose of potential CO₂ storage and water reuse for supplemental cooling. Additionally, it allows for policy insight into the difficult nature of combined institutional (regulatory) and physical (engineered geological sequestration and extracted water system) constraints across the United States. Finally, in a representative scenario, a 1,800 MW subcritical coal-fired power plant (among other types, including supercritical coal, integrated gasification combined cycle, natural gas turbine, and natural gas combined cycle) can draw on existing and new carbon capture, transportation, compression, and sequestration technologies, along with a suite of water extraction and treatment technologies, to assess the system's overall physical and economic viability. Such a plant with 90% capture will reduce its net CO₂ emissions (original emissions less those attributable to the energy required to power the carbon capture and water treatment systems) by less than 90%, and its water demands will increase by approximately 50%. These systems may increase the plant's levelized cost of electricity (LCOE) by approximately 50% or more. This representative example suggests that scaling up these CO₂ capture and sequestration technologies to many plants throughout the country could increase water demands substantially at the regional, and possibly national, level. These scenarios for all power plants and saline formations throughout the U.S. can incorporate new information as it becomes available for planning potential new plant build-out.

More Details

Design implementation and migration of security systems as an extreme project

Scharmer, Carol

Decision trees, algorithms, software code, risk management, reports, plans, drawings, change control, presentations, and analysis: all are useful tools and efforts, but they are time consuming, resource intensive, and potentially costly for projects with absolute schedule and budget constraints. What efforts are necessary and prudent when a customer calls with a major security problem that must be fixed with a proven, off-the-approval-list, multi-layered integrated system, with high visibility and limited funding that expires at the end of the fiscal year? Whether driven by budget cycles, safety, or management decree, many such projects begin with generic scopes and funding allocated on the basis of a rapid management 'guesstimate.' A Project Manager (PM) is then assigned a project with a predefined and potentially limited scope, a compressed schedule, and potentially insufficient funding. The PM is tasked to rapidly and cost-effectively coordinate a requirements-based design, implementation, test, and turnover of a fully operational system to the customer, all while the customer is operating and maintaining an existing security system. Many project management manuals call this an impossible project that should not be attempted. However, security is serious business, and the reality is that rapid deployment of proven systems via an 'Extreme Project' is sometimes necessary. Extreme Projects can be wildly successful, but they require a dedicated team of security professionals led by an experienced project manager using a highly tailored and agile project management process, with management support at all levels, combined with significant interaction with the customer. This paper does not advocate such projects or condone eliminating valuable analysis and project management techniques; indeed, experience on well-planned projects provides the basis for team members to complete Extreme Projects. This paper does, however, provide insight into what it takes for projects to be successfully implemented and accepted when completed under extreme conditions.

More Details

Source physics experiments at the Nevada Test Site

Corbell, Bobby H.

The U.S. capability to monitor foreign underground nuclear test activities relies heavily on measurement of explosion phenomena, including characteristic seismic, infrasound, radionuclide, and acoustic signals. Despite recent advances in each of these fields, empirical, rather than physics-based, approaches are used to predict and explain observations. Seismologists rely on prior knowledge of the variations of teleseismic and regional seismic parameters such as p- and s-wave arrivals, from simple one-dimensional models for the teleseismic case to somewhat more complicated enhanced two-dimensional models for the regional case. Likewise, radionuclide experts rely on empirical results from a handful of limited experiments to determine the radiological source terms present at the surface after an underground test. To take the next step in the advancement of the science of monitoring, we need to transform these fields to enable predictive, physics-based modeling and analysis. The Nevada Test Site Source Physics Experiments (N-SPE) provide a unique opportunity to gather precise data from well-designed experiments to improve physics-based modeling capability. In the seismic experiments, data collection will include time domain reflectometry to measure explosive performance and yield, free-field accelerometers, extensive seismic arrays, and infrasound and acoustic measurements. The improved modeling capability that we will develop using these data should enable important advances in our ability to monitor worldwide for nuclear testing. The first of a series of source physics experiments will be conducted in the granite of Climax Stock at the NTS, near the locations of the HARD HAT and PILE DRIVER nuclear tests. This site not only provides a fairly homogeneous and well-documented geology, but also an opportunity to improve our understanding of how fractures, joints, and faults affect seismic wave generation and propagation. The Climax Stock experiments will consist of a 220 lb (TNT equivalent) calibration shot and a 2200 lb (TNT equivalent) over-buried shot conducted in the same emplacement hole. An identical 2200 lb shot at the same location will follow to investigate the effects of pre-conditioning. These experiments also provide an opportunity to advance capabilities for near-field monitoring and on-site inspections (OSIs) of suspected testing sites. In particular, geologic, physical, and cultural signatures of underground testing can be evaluated using the N-SPE activities as case studies. Furthermore, experiments to measure the migration of radioactive noble gases to the surface from underground explosions will enable development of higher-fidelity radiological source term models that can predict migration through a variety of geologic conditions. Because the detection of short-lived radionuclides is essential to determining whether an explosion was nuclear or conventional, a better understanding of the gaseous and particulate radionuclide source terms that reach the surface from underground testing is critical to development of OSI capability.

More Details

Modeling needs for very large systems

Stein, Joshua

Most system performance models assume a point measurement for irradiance and assume that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly averaged simulations, these may be reasonable assumptions. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large-system factors. In large, multi-MW arrays, passing clouds may block sunlight from one portion of the array while never affecting another. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to the associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving a greater dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.
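A minimal illustration of the point-sensor issue described above, using entirely synthetic numbers (the clear-sky level and fluctuation magnitudes are assumptions, not plant data): averaging uncorrelated cloud-driven fluctuations across a large array reduces ramp variability relative to what a single point sensor reports.

```python
import numpy as np

def ramp_std(g, dt=1):
    """Standard deviation of irradiance ramps (changes over dt samples)."""
    g = np.asarray(g, dtype=float)
    return float(np.std(g[dt:] - g[:-dt]))

# Hypothetical 1-second irradiance (W/m^2) at opposite ends of a large
# plant: a shared, slowly varying clear-sky level plus independent
# cloud-driven fluctuations at each sensor.
rng = np.random.default_rng(1)
t = np.arange(600)
clear = 900.0 + 50.0 * np.sin(2 * np.pi * t / 600.0)
g_east = clear + 80.0 * rng.standard_normal(t.size)
g_west = clear + 80.0 * rng.standard_normal(t.size)
g_plant = 0.5 * (g_east + g_west)   # spatial average over the array

# Averaging uncorrelated fluctuations shrinks ramp variability (~1/sqrt(2)),
# so a single point sensor overstates the variability the whole plant sees.
assert ramp_std(g_plant) < ramp_std(g_east)
```

This is one reason hourly, single-sensor simulations can be adequate for small arrays yet misleading for multi-MW plants, where sub-hourly spatial smoothing matters.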

More Details