To provide input to numerical models for hazard and vulnerability analyses, the thermal decomposition of eight polymers has been examined in both nitrogen and air atmospheres. Experiments have been done with poly(methyl methacrylate), poly(diallyl phthalate), Norwegian spruce, poly(vinyl chloride), polycarbonate, poly(phenylene sulphide), and two polyurethanes. Polymers that formed a substantial amount of carbonaceous char during decomposition in a nitrogen atmosphere were completely consumed in an air atmosphere. However, in the case of the polyurethanes, complete consumption did not occur until temperatures of 700 °C or higher. Furthermore, to varying degrees, the presence of oxygen appeared to alter the decomposition processes in all of the materials studied.
The deployment of optical fibers in adverse radiation environments, such as those encountered in a low-Earth-orbit space setting, makes it critical to understand the effect of large accumulated ionizing-radiation doses on optical components and systems. In particular, gamma radiation is known to considerably affect the performance of optical components by inducing absorbing centers in the materials. Such radiation is present both as primary background radiation and as secondary radiation induced by proton collisions with spacecraft material. This paper examines the effects of gamma radiation on erbium-, ytterbium-, and Yb/Er co-doped optical fibers by exposing a suite of such fibers to radiation from a Co-60 source over long periods of time while monitoring the temporal and spectral decrease in transmittance of a reference signal. For the same total doses, results show increased photodarkening in erbium-doped fibers relative to ytterbium-doped fibers, as well as significant radiation resistance of the co-doped fibers over wavelengths of 1.0-1.6 microns. All three types of fibers were seen to exhibit dose-rate dependences.
A firing set capable of charging a 0.05 μF capacitor to 1.7 kV has been constructed using a 2.5 mm diameter Series Connected Photovoltaic Array (SCPA) in lieu of a transformer as the method of high-voltage generation. The source of illumination is a fiber-coupled 3 W, 808 nm laser diode. This paper discusses the performance and PSpice modeling of an SCPA used in a firing set application.
Split grating-gate field-effect transistor (FET) detectors made from high-mobility quantum-well two-dimensional electron gas material have been shown to exhibit greatly improved tunable resonant photoresponse compared to single grating-gate detectors, due to the formation of a 'diode-like' element by the split-gate structure. These detectors are relatively large for FETs (1 mm × 1 mm area or larger) to match typical focused THz beam spot sizes. In the case where the focused THz spot size is smaller than the detector area, we have found evidence, through positional scanning of the detector element, that only a small portion of the detector is active. To further investigate this situation, detectors with the same channel width (1 mm) but various channel lengths were fabricated and tested. The results indicate that, indeed, only a small portion of the split grating-gate FET is active. This finding opens up the possibility of further enhancing detector sensitivity by increasing the active area.
Sandia National Laboratories has developed a means of manufacturing high-precision aspheric lenslet arrays turned on-center. An innovative chucking and indexing mechanism was designed and implemented that allows the part to be indexed in two orthogonal directions parallel to the spindle face. This system was designed to meet a need for center-to-center positioning of 2 μm and form error of λ/10. The part utilizes scribed orthogonal sets of grooves that locate the part on the chuck. The averaging of the grooves increases the repeatability of the system. The part is moved an integral number of grooves across the chuck by means of a vacuum chuck on a tool post that is mated to the part and holds the part while the chuck repositions to receive it. The current setup is designed to create as many as 169 lenslets distributed over a 3 mm square area while holding a true-position tolerance of 1 μm for all lenslets.
The rapid autonomous detection of pathogenic microorganisms and bioagents by field-deployable platforms is critical to human health and safety. To achieve a high level of sensitivity for fluidic detection applications, we have developed a 330 MHz Love wave acoustic biosensor on 36° YX lithium tantalate (LTO). Each die has four delay-line detection channels, permitting simultaneous measurement of multiple analytes or parallel detection of samples containing a single analyte. Crucial to our biosensor was the development of a transducer that excites the shear horizontal (SH) mode; optimization of the transducer minimized propagation losses and reduced undesirable modes. Detection was achieved by comparing the reference phase of an input signal to the phase shift from the biosensor using an integrated electronic multi-readout system connected to a laptop computer or PDA. The Love wave acoustic arrays were centered at 330 MHz, shifting to 325-328 MHz after application of the silicon dioxide waveguides. The insertion loss was -6 dB with an out-of-band rejection of 35 dB. The amplitude and phase ripple were 2.5 dB p-p and 2-3° p-p, respectively. Time-domain gating confirmed propagation of the SH mode while showing suppression of the triple transit. Antigen capture and mass detection experiments demonstrate a sensitivity of 7.19 ± 0.74° mm2/ng with a detection limit of 6.7 ± 0.40 pg/mm2 for each channel.
Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, they also present significant challenges in data partitioning and load balancing. As the mesh adapts to the solution, the partitioning requirements change. By explicitly considering these dynamic conditions, the scalability for large, realistic simulations could possibly be significantly improved. Our hypothesis is that adaptive partitioning, meaning dynamic and automatic switching of partitioning techniques, based on the current run-time state, can be beneficial for these simulations. However, switching partitioners can be expensive due to differences in the algorithms' native mapping of data onto processors. We suggest forcing a uniform starting point for all included partitioners. We present a penalty-based method for determining whether switching is beneficial. We study the effects on data migration, as well as on overall cost, of using the uniform starting point and the switching-penalties to select the best partitioning algorithm, among a set of graph-based and geometric partitioning algorithms, for each adaptive time-step for four different adaptive scientific applications. The results show that data migration can be significantly reduced and that adaptive partitioning indeed can be effective for unstructured adaptive applications.
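The penalty-based switching decision described above can be sketched briefly; the cost estimates, partitioner names, and penalty value below are hypothetical illustrations, not figures from the study.

```python
# Sketch of penalty-based selection among candidate partitioners.
# Candidate costs and the switching penalty are assumed inputs,
# e.g. estimated from the current run-time state.
def select_partitioner(candidates, current, switch_penalty):
    """Pick the partitioner minimizing estimated cost, charging a
    penalty (roughly, the data-migration cost) for switching away
    from the currently active algorithm."""
    best_name, best_cost = None, float("inf")
    for name, est_cost in candidates.items():
        total = est_cost + (0.0 if name == current else switch_penalty)
        if total < best_cost:
            best_name, best_cost = name, total
    return best_name

# Stay with the current method unless another beats it by more
# than the switching penalty.
choice = select_partitioner(
    {"graph": 10.0, "rcb": 8.5}, current="graph", switch_penalty=2.0)
```

With a penalty of 2.0 the cheaper geometric partitioner ("rcb") does not win, since 8.5 + 2.0 exceeds the incumbent's 10.0; lowering the penalty flips the decision.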
We have developed a system of differential-output monitors that diagnose current and voltage in the vacuum section of a 20-MA 3-MV pulsed-power accelerator. The system includes 62 gauges: 3 current and 6 voltage monitors that are fielded on each of the accelerator's 4 vacuum-insulator stacks, 6 current monitors on each of the accelerator's 4 outer magnetically insulated transmission lines (MITLs), and 2 current monitors on the accelerator's inner MITL. The inner-MITL monitors are located 6 cm from the axis of the load. Each of the stack and outer-MITL current monitors comprises two separate B-dot sensors, each of which consists of four 3-mm-diameter wire loops wound in series. The two sensors are separately located within adjacent cavities machined out of a single piece of copper. The high electrical conductivity of copper minimizes penetration of magnetic flux into the cavity walls, which minimizes changes in the sensitivity of the sensors on the 100-ns time scale of the accelerator's power pulse. A model of flux penetration has been developed and is used to correct (to first order) the B-dot signals for the penetration that does occur. The two sensors are designed to produce signals with opposite polarities; hence, each current monitor may be regarded as a single detector with differential outputs. Common-mode-noise rejection is achieved by combining these signals in a 50-{Omega} balun. The signal cables that connect the B-dot monitors to the balun are chosen to provide reasonable bandwidth and acceptable levels of Compton drive in the bremsstrahlung field of the accelerator. A single 50-{Omega} cable transmits the output signal of each balun to a double-wall screen room, where the signals are attenuated, digitized (0.5 ns/sample), numerically compensated for cable losses, and numerically integrated. By contrast, each inner-MITL current monitor contains only a single B-dot sensor. These monitors are fielded in opposite-polarity pairs.
The two signals from a pair are not combined in a balun; they are instead numerically processed for common-mode-noise rejection after digitization. All the current monitors are calibrated on a 76-cm-diameter axisymmetric radial transmission line that is driven by a 10-kA current pulse. The reference current is measured by a current-viewing resistor (CVR). The stack voltage monitors are also differential-output gauges, consisting of one 1.8-cm-diameter D-dot sensor and one null sensor. Hence, each voltage monitor is also a differential detector with two output signals, processed as described above. The voltage monitors are calibrated in situ at 1.5 MV on dedicated accelerator shots with a short-circuit load. Faraday's law of induction is used to generate the reference voltage: currents are obtained from calibrated outer-MITL B-dot monitors, and inductances from the system geometry. In this way, both current and voltage measurements are traceable to a single CVR. Dependable and consistent measurements are thus obtained with this system of calibrated diagnostics. On accelerator shots that deliver 22 MA to a low-impedance z-pinch load, the peak lineal current densities at the stack, outer-MITL, and inner-MITL monitor locations are 0.5, 1, and 58 MA/m, respectively. On such shots the peak currents measured at these three locations agree to within 1%.
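The numerical common-mode rejection applied to the opposite-polarity sensor pairs can be illustrated with a minimal sketch; the synthetic waveforms below are our assumption, not the accelerator's actual processing chain.

```python
import numpy as np

# Two sensors wound with opposite polarity see the true dB/dt
# signal with opposite signs, while common-mode noise couples with
# the same sign into both channels.  Differencing the digitized
# records therefore recovers the signal and cancels the noise.
def reject_common_mode(s_pos, s_neg):
    """Return the differential signal from an opposite-polarity
    sensor pair."""
    return 0.5 * (np.asarray(s_pos) - np.asarray(s_neg))

t = np.linspace(0.0, 100e-9, 200)            # 100-ns power pulse
signal = np.sin(2 * np.pi * 1e7 * t)         # true dB/dt (arbitrary units)
noise = 0.3 * np.random.default_rng(0).standard_normal(t.size)
recovered = reject_common_mode(signal + noise, -signal + noise)
```

Because the noise enters both records identically in this idealization, the recovered trace equals the true signal exactly; in practice the cancellation is limited by sensor matching.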
The authors have developed a chip-scale atomic clock (CSAC) for portable battery-powered applications requiring atomic timing accuracy. At PTTI/FCS 2005, they reported on the demonstration of a prototype CSAC with an overall size of 10 cm{sup 3}, power consumption < 150 mW, and short-term stability σ{sub y}(τ) < 1 × 10{sup -9} τ{sup -1/2}. Since that report, they have completed the development of the CSAC, including provision for autonomous lock acquisition and a calibrated output at 10.0 MHz, in addition to modifications to the physics package and system architecture to improve performance and manufacturability.
Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity.
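A minimal one-dimensional particle-tracking sketch (not RWHet itself; the velocity field and dispersion coefficient are hypothetical) illustrates how residence time distributions arise from advection through a heterogeneous velocity field plus a random-walk dispersion step.

```python
import numpy as np

# Advect particles through a column of cells with spatially variable
# velocity, add a Gaussian random-walk step for local dispersion,
# and record the time each particle takes to exit at x = L.
def residence_times(v, dx, n_particles, dt, D, L, rng):
    """v gives the velocity in each cell of width dx."""
    times = np.zeros(n_particles)
    for p in range(n_particles):
        x, t = 0.0, 0.0
        while x < L:
            cell = min(int(x / dx), v.size - 1)
            x += v[cell] * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt))
            x = max(x, 0.0)              # reflect at the inlet
            t += dt
        times[p] = t
    return times

rng = np.random.default_rng(1)
v = rng.uniform(0.5, 2.0, size=50)       # heterogeneous velocity field
rt = residence_times(v, dx=0.02, n_particles=100,
                     dt=0.001, D=1e-4, L=1.0, rng=rng)
```

The spread and tailing of the resulting residence time distribution reflect the velocity heterogeneity; replacing `v` with a field derived from lidar-informed conductivities is the analogue of the comparison described above.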
This paper presents the conceptual framework that is being used to define quantification of margins and uncertainties (QMU) for application in the nuclear weapons (NW) work conducted at Sandia National Laboratories. The conceptual framework addresses the margins and uncertainties throughout the NW life cycle and includes the definition of terms related to QMU and to figures of merit. Potential applications of QMU consist of analyses based on physical data and on modeling and simulation. Appendix A provides general guidelines for addressing cases in which significant and relevant physical data are available for QMU analysis. Appendix B gives the specific guidance that was used to conduct QMU analyses in cycle 12 of the annual assessment process. Appendix C offers general guidelines for addressing cases in which appropriate models are available for use in QMU analysis. Appendix D contains an example that highlights the consequences of different treatments of uncertainty in model-based QMU analyses.
An innovative helium-3 high-pressure gas detection system, made possible by Sandia's expertise in microelectromechanical fluidic systems, is proposed. It appears to offer many beneficial performance characteristics for making neutron measurements in the high-bremsstrahlung and electrically noisy environments found in High Energy Density Physics experiments, and especially in the very high noise environment generated by the fast pulsed-power experiments performed at Sandia. The same system may also dramatically improve active WMD and contraband detection when employed with ultrafast (10-50 ns) pulsed neutron sources.
We present the results of a three-year LDRD project focused on the development of novel, compact, ultraviolet solid-state sources and fluorescence-based sensing platforms that apply such devices to the sensing of biological and nuclear materials. We describe our development of 270-280 nm AlGaN-based semiconductor UV LEDs with performance suitable for evaluation in biosensor platforms, as well as our development efforts toward the realization of a 340 nm AlGaN-based laser diode technology. We further review our sensor development efforts, including evaluation of the efficacy of using modulated LED excitation and phase-sensitive detection techniques for fluorescence detection of biomolecules and uranyl-containing compounds.
A sulfuric acid catalytic decomposer section was assembled and tested for the Integrated Laboratory Scale experiments of the Sulfur-Iodine Thermochemical Cycle. This cycle is being studied as part of the U.S. Department of Energy Nuclear Hydrogen Initiative. Tests confirmed that the 54-inch long silicon carbide bayonet could produce in excess of the design objective of 100 liters/hr of SO{sub 2} at 2 bar. Furthermore, at 3 bar the system produced 135 liters/hr of SO{sub 2} with only 31 mol% acid. The gas production rate was close to the theoretical maximum determined by equilibrium, which indicates that the design provides adequate catalyst contact and heat transfer. Several design improvements were also implemented to minimize leakage of SO{sub 2} from the apparatus. The primary modifications were a separate additional enclosure within the skid enclosure and replacement of Teflon tubing with glass-lined steel pipes.
The geometry of ray paths through realistic Earth models can be extremely complex due to the vertical and lateral heterogeneity of the velocity distribution within the models. Calculation of high fidelity ray paths and travel times through these models generally involves sophisticated algorithms that require significant assumptions and approximations. To test such algorithms it is desirable to have available analytic solutions for the geometry and travel time of rays through simpler velocity distributions against which the more complex algorithms can be compared. Also, in situations where computational performance requirements prohibit implementation of full 3D algorithms, it may be necessary to accept the accuracy limitations of analytic solutions in order to compute solutions that satisfy those requirements. Analytic solutions are described for the geometry and travel time of infinite frequency rays through radially symmetric 1D Earth models characterized by an inner sphere where the velocity distribution is given by the function V(r) = A - Br{sup 2}, optionally surrounded by some number of spherical shells of constant velocity. The mathematical basis of the calculations is described, sample calculations are presented, and results are compared to the TauP Toolkit of Crotwell et al. (1999). These solutions are useful for evaluating the fidelity of sophisticated 3D travel time calculators and in situations where performance requirements preclude the use of more computationally intensive calculators. It should be noted that most of the solutions presented are only quasi-analytic. Exact, closed form equations are derived, but computation of solutions to specific problems generally requires application of numerical integration or root finding techniques, which, while approximations, can be calculated to very high accuracy. Tolerances are set in the numerical algorithms such that computed travel time accuracies are better than 1 microsecond.
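For the purely radial (vertical-incidence) special case, the travel time through the inner sphere reduces to the integral of dr/V(r), which can be checked against a closed form; the values of A, B, and the radii below are illustrative, not parameters from the report.

```python
import math

# Travel time of a radial ray through V(r) = A - B*r**2, computed by
# midpoint-rule integration of dt = dr / V(r).
def radial_travel_time(A, B, r1, r2, n=100000):
    dr = (r2 - r1) / n
    return sum(dr / (A - B * (r1 + (i + 0.5) * dr) ** 2)
               for i in range(n))

# Closed form for comparison:
#   integral dr / (A - B r^2) = artanh(r * sqrt(B/A)) / sqrt(A*B)
def radial_travel_time_exact(A, B, r1, r2):
    s = math.sqrt(B / A)
    f = lambda r: math.atanh(r * s) / math.sqrt(A * B)
    return f(r2) - f(r1)

# Illustrative numbers: A in km/s, B in 1/(km*s), radii in km.
t_num = radial_travel_time(11.0, 1e-7, 0.0, 3000.0)
t_exact = radial_travel_time_exact(11.0, 1e-7, 0.0, 3000.0)
```

The agreement between the numerical and closed-form values mirrors the report's point that quasi-analytic solutions, though computed by numerical integration, can be driven to very high accuracy.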
Mode-stirred chamber and anechoic chamber measurements were made on two sets of canonical test objects (cylindrical and rectangular) with varying numbers of thin slot apertures. The shielding effectiveness was compared to determine the level of correction needed to compensate the mode-stirred data to levels commensurate with anechoic data from the same test object.
Er(D,T){sub 2-x} {sup 3}He{sub x}, erbium di-tritide, films of thicknesses 500 nm, 400 nm, 300 nm, 200 nm, and 100 nm were grown and analyzed by Transmission Electron Microscopy, X-Ray Diffraction, and Ion Beam Analysis to determine variations in film microstructure as a function of film thickness and age, due to the time-dependent build-up of {sup 3}He in the film from the radioactive decay of tritium. Several interesting features were observed: One, the amount of helium released as a function of film thickness is relatively constant. This suggests that the helium is being released only from the near-surface region and that the helium is not diffusing to the surface from the bulk of the film. Two, lenticular helium bubbles are observed as a result of the radioactive decay of tritium into {sup 3}He. These bubbles grow along the [111] crystallographic direction. Three, a helium-bubble-free zone, or 'denuded zone', is observed near the surface. The size of this region is independent of film thickness. Four, an analysis of secondary diffraction spots in the Transmission Electron Microscopy study indicates that small erbium oxide precipitates, 5-10 nm in size, exist throughout the film. Further, all of the films had large erbium oxide inclusions; in many cases these inclusions span the depth of the film.
The objectives of this project were to develop a new scientific tool for studies of chemical processes at the single molecule level, and to provide enhanced capabilities for multiplexed, ultrasensitive separations and immunoassays. We have combined microfluidic separation techniques with our newly developed technology for spectrally and temporally resolved detection of single molecules. The detection of individual molecules can reveal fluctuations in molecular conformations, which are obscured in ensemble measurements, and allows detailed studies of reaction kinetics such as ligand or antibody binding. Detection near the single molecule level also enables the use of correlation techniques to extract information, such as diffusion rates, from the fluorescence signal. The micro-fluidic technology offers unprecedented control of the chemical environment and flow conditions, and affords the unique opportunity to study biomolecules without immobilization. For analytical separations, the fluorescence lifetime and spectral resolution of the detection makes it possible to use multiple parameters for identification of separation products to improve the certainty of identification. We have successfully developed a system that can measure fluorescence spectra, lifetimes and diffusion constants of the components of mixtures separated in a microfluidic electrophoresis chip.
Veloce is a medium-voltage, high-current, compact pulsed power generator developed for isentropic and shock compression experiments. Because of its increased availability and ease of operation, Veloce is well suited for studying isentropic compression experiments (ICE) in much greater detail than previously allowed with larger pulsed power machines such as the Z accelerator. Since compact pulsed power technology has not previously been used for dynamic material experiments, it is necessary to examine several key issues to ensure that accurate results are obtained. In the present experiments, issues such as panel and sample preparation, uniformity of loading, and edge effects were extensively examined. In addition, magnetohydrodynamic (MHD) simulations using the ALEGRA code were performed to interpret the experimental results and to design improved sample/panel configurations. Examples of recent ICE studies on aluminum are presented.
This document is a mechanical design best-practice guide for new and experienced designers alike. The contents consist of topics related to using Computer Aided Design (CAD) software, performing basic analyses, and using configuration management. The details specific to a particular topic have been weighed against existing Product Realization Standard (PRS) and Technical Business Practice (TBP) requirements while maintaining alignment with sound engineering and design practices. This document is dynamic: subsequent updates will be reflected in the main title, and each update will be published on an annual basis.
Results from recent experimental studies suggest that the N vacancy (V{sub N}) may compensate Mg acceptors in GaN in addition to the compensation arising from H introduced during growth. To investigate this possibility further, density-functional-theory calculations were performed to determine the interactions of V{sub N} with H, Mg, and the MgH center in GaN, and modeling was performed to determine the state populations at elevated temperatures. The results indicate that V{sub N}H and MgV{sub N}H complexes with H inside the vacancy are highly stable in p-type GaN and act to compensate or passivate Mg acceptors. Furthermore, barriers for formation of these complexes were investigated, and the results indicate that they can readily form at temperatures > 400 °C, which is well below temperatures typically used for GaN growth. Overall, the results indicate that the V{sub N} compensation behavior suggested by experiments arises not from isolated V{sub N}, but rather from V{sub N}H and MgV{sub N}H complexes with H located inside the vacancy.
This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
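A one-dimensional collocation rule of the kind from which Smolyak sparse grids are assembled can be sketched briefly; the toy "solution" u(y) = exp(y) and the uniform random input below are our illustration, not the paper's model problem.

```python
import numpy as np

# Stochastic collocation in one random dimension: evaluate the
# (deterministic) solution at Gauss-Legendre nodes and combine with
# quadrature weights to estimate a solution statistic, here E[u(Y)]
# for Y uniform on [-1, 1].
def collocation_mean(u, n_nodes):
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    return 0.5 * np.dot(weights, u(nodes))   # 0.5 = uniform density

u = np.exp
exact = 0.5 * (np.e - np.exp(-1.0))          # analytic E[exp(Y)] = sinh(1)
approx = collocation_mean(u, n_nodes=5)
```

With only five deterministic solves the error is far below what a Monte Carlo estimate of comparable cost achieves for a smooth dependence on the random input, which is the efficiency comparison the paper formalizes.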
Alegra is an ALE (Arbitrary Lagrangian-Eulerian) multi-material finite element code that emphasizes large deformations and strong shock physics. The Lagrangian continuum dynamics package in Alegra uses a Galerkin finite element spatial discretization and an explicit central-difference stepping method in time. The goal of this report is to describe in detail the characteristics of this algorithm, including the conservation and stability properties. The details provided should help both researchers and analysts understand the underlying theory and numerical implementation of the Alegra continuum hydrodynamics algorithm.
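The explicit central-difference (leapfrog) update can be illustrated on a single mass-spring oscillator; this is a schematic of the time integrator only, not Alegra's finite-element implementation.

```python
# Central-difference stepping: velocities live at half time steps,
# positions at whole steps.  Parameters below are illustrative.
def central_difference(m, k, x0, v0, dt, n_steps):
    x = x0
    v_half = v0 + 0.5 * dt * (-k * x0 / m)   # initial half-step kick
    xs = [x]
    for _ in range(n_steps):
        x += dt * v_half                     # advance position
        v_half += dt * (-k * x / m)          # advance velocity
        xs.append(x)
    return xs

# Stability requires dt below 2/omega; with m = k = 1 (omega = 1),
# dt = 0.1 is well inside the stable region, and after roughly one
# period (t ~ 2*pi) the position returns near its starting value.
xs = central_difference(m=1.0, k=1.0, x0=1.0, v0=0.0,
                        dt=0.1, n_steps=63)
```

The bounded oscillation of the numerical solution reflects the conservation and stability properties the report analyzes for the full continuum algorithm.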
The purpose of this nine-week project was to advance the understanding of low-altitude airbursts by developing the means to model them at extremely high resolution in order to span the scales of entry physics as well as blast wave and plume formation. Small asteroid impacts on Earth are a recognized hazard, but the full nature of the threat is still not well understood. We used shock physics codes to discover emergent phenomena associated with low-altitude airbursts such as the Siberian Tunguska event of 1908 and the Egyptian glass-forming event 29 million years ago. The planetary defense community is beginning to recognize the significant threat from such airbursts. Low-altitude airbursts are the only class of impacts that have a significant probability of occurring within a planning time horizon. There is roughly a 10% chance of a megaton-scale low-altitude airburst event in the next decade. The first part of this LDRD final project report is a preprint of our proceedings paper associated with the plenary presentation at the Hypervelocity Impact Society 2007 Symposium in Williamsburg, Virginia (International Journal of Impact Engineering, in press). The paper summarizes discoveries associated with a series of 2D axially-symmetric CTH simulations. The second part of the report contains slides from an invited presentation at the American Geophysical Union Fall 2007 meeting in San Francisco. The presentation summarizes the results of a series of 3D oblique impact simulations of the 1908 Tunguska explosion. Because of the brevity of this late-start project, the 3D results have not yet been written up for a peer-reviewed publication. We anticipate the opportunity to eventually run simulations that include the actual topography at Tunguska, at which time these results will be published.
The objective of this short-term LDRD project was to acquire the tools needed to use our chemical imaging precision mass analyzer (ChIPMA) instrument to analyze tissue samples. This effort was an outgrowth of discussions with oncologists on the need to find the cellular origin of signals in mass spectra of serum samples, which provide biomarkers for ovarian cancer. The ultimate goal would be to collect chemical images of biopsy samples allowing the chemical images of diseased and nondiseased sections of a sample to be compared. The equipment needed to prepare tissue samples have been acquired and built. This equipment includes an cyro-ultramicrotome for preparing thin sections of samples and a coating unit. The coating unit uses an electrospray system to deposit small droplets of a UV-photo absorbing compound on the surface of the tissue samples. Both units are operational. The tissue sample must be coated with the organic compound to enable matrix assisted laser desorption/ionization (MALDI) and matrix enhanced secondary ion mass spectrometry (ME-SIMS) measurements with the ChIPMA instrument Initial plans to test the sample preparation using human tissue samples required development of administrative procedures beyond the scope of this LDRD. Hence, it was decided to make two types of measurements: (1) Testing the spatial resolution of ME-SIMS by preparing a substrate coated with a mixture of an organic matrix and a bio standard and etching a defined pattern in the coating using a liquid metal ion beam, and (2) preparing and imaging C. elegans worms. Difficulties arose in sectioning the C. elegans for analysis and funds and time to overcome these difficulties were not available in this project. The facilities are now available for preparing biological samples for analysis with the ChIPMA instrument. Some further investment of time and resources in sample preparation should make this a useful tool for chemical imaging applications.
This production process was generated for the satellite system program cables/interconnects group, which in essence had no well-defined production process. The driver for developing a formalized process was the setbacks, problem areas, challenges, and needed improvements faced within the program at Sandia National Laboratories. In addition, the formal production process was developed as part of the Master's program in Engineering Management at the New Mexico Institute of Mining and Technology in Socorro, New Mexico, and submitted as a thesis to meet the institute's graduation requirements.
Contrary to popular opinion, fully resolved speckles may not be the best option for interferometric applications, where it is often advantageous to have unresolved speckles with up to hundreds of speckles in a single camera pixel. This paper seeks to elucidate the effect of unresolved speckles on electronic speckle pattern interferometry (ESPI) and laser Doppler velocimetry (LDV). Related techniques such as temporal speckle pattern interferometry (TSPI) and ultrasonic imaging can also benefit from the ideas presented in this paper. Speckle statistics will be briefly outlined as background to the main topic of optimizing speckle fields for use in interferometry. The complementary speckle-size analysis for LDV is compared to previously published results on ESPI.
As engineering challenges grow in the ever-shrinking world of nano-design, methods of making dynamic measurements of these materials and systems will become important. Electron microscopes have imaged these extremely small samples for years, but are incapable of measuring dynamic events. A means of measuring these nano-scale dynamic events is envisioned by converting an electron microscope into a Doppler velocimeter. This idea proceeds from the analogous concept of laser Doppler velocimetry. However, the obvious solution of using a laser to probe at the nano-scale is not feasible because the diffraction limit of light is orders of magnitude larger than the samples of interest. This paper investigates the theoretical underpinnings of using electron beams for Doppler measurements. Potential issues and their solutions, including electron beam coherence and interference, will be presented. If answers to these problems can be found, the invention of the Doppler electron velocimeter could yield a completely new measurement concept at atomistic scales.
Proceedings of the SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2007
Raghavendra, Prasad; Paez, Thomas L.
Real mechanical shocks are nonstationary random processes, and it is becoming feasible to model these sources based on practical amounts of measured data. An efficient framework for modeling nonstationary random processes is the Karhunen-Loeve (KL) expansion. It simulates a random source as a mean function plus random deviation from the mean. We have developed an efficient technique for modeling non-Gaussian random processes in the KL framework, and we use it to characterize an electronic component's shock response to the random process. The random process is, in turn, used to create a shock test specification for this electronic component. Numerical results are presented in connection with the random process and test specifications. The electronic component is a speaker and is assembled inside a mobile phone. The test specification will be used in qualifying and selecting a robust component design (or manufacturer) for use inside a mobile phone.
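A discrete Karhunen-Loeve model of the kind described can be sketched from sample records; the synthetic toy data below are our illustration, and the component's measured shock records are not reproduced.

```python
import numpy as np

# Build a discrete KL model: mean function plus the dominant
# eigenvectors of the sample covariance, then simulate new
# realizations as mean + random combination of modes.
def kl_model(records, n_modes):
    """records: array of shape (n_samples, n_time)."""
    mean = records.mean(axis=0)
    cov = np.cov(records, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_modes]   # largest modes first
    sigmas = np.sqrt(np.clip(vals[order], 0.0, None))
    return mean, vecs[:, order], sigmas

def kl_realization(mean, modes, sigmas, rng):
    xi = rng.standard_normal(sigmas.size)      # Gaussian KL coefficients
    return mean + modes @ (sigmas * xi)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64)
records = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal((200, 64))
mean, modes, sigmas = kl_model(records, n_modes=5)
sample = kl_realization(mean, modes, sigmas, rng)
```

Note the Gaussian coefficients here are a simplification: the technique described in the paper replaces them with non-Gaussian deviates fitted to the measured shock data.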
Various techniques and heating methods have been employed to characterize the compressive and tensile behavior of 304L stainless steel over a wide range of test temperatures. Depending on the test temperature, the experimental apparatus required to produce uniform temperatures in the specimens varied significantly. Compression experiments imposed additional difficulty in achieving a uniform temperature throughout the specimen, but uniform temperatures were attainable using secondary heating of the test platens. The 304L material was characterized in tension at quasi-static rates and in compression over an extensive range of strain rates, up to the very high strain rate regime. Strain rate effects were experimentally determined, and a reversal in the strain rate effect was discovered at some temperature and strain rate combinations. Dynamic recrystallization was observed in some temperature and strain rate regimes.
Proceedings of the SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2007
Song, Bo; Antoun, Bonnie R.; Chen, Weinong
A split Hopkinson pressure bar (SHPB) was modified to characterize the dynamic compressive behavior of a 304L stainless steel at high temperatures. The shapes of the loading pulses were controlled such that the specimen deformed under dynamic equilibrium at constant strain rates. A heating chamber was used to heat the specimens to 815 °C and 927 °C during dynamic experiments. In order to investigate recrystallization and other microstructural changes, the SHPB was also modified to load the specimen only once during a test. Moreover, the specimens were quenched 6 and 30 seconds after the dynamic loading was applied. Dynamic compressive stress-strain data at high temperatures for the 304L alloy were experimentally obtained.
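The data reduction behind such SHPB experiments follows the standard one-wave relations, in which the reflected bar strain gives the specimen strain rate and the transmitted bar strain gives the specimen stress. The sketch below uses hypothetical bar and specimen parameters, not the authors' specific apparatus:

```python
# Standard one-wave SHPB data reduction at a single time instant, valid
# once the specimen is in dynamic stress equilibrium. All numerical
# values are hypothetical, for illustration only.
E_bar = 200e9        # bar elastic modulus, Pa
c0 = 5000.0          # elastic wave speed in the bar, m/s
A_bar = 3.0e-4       # bar cross-sectional area, m^2
A_spec = 1.0e-4      # specimen cross-sectional area, m^2
L_spec = 0.005       # specimen gauge length, m

def shpb_point(eps_reflected, eps_transmitted):
    """One-wave analysis: strain rate from the reflected pulse,
    stress from the transmitted pulse."""
    strain_rate = -2.0 * c0 * eps_reflected / L_spec
    stress = E_bar * (A_bar / A_spec) * eps_transmitted
    return strain_rate, stress

# Example signals (hypothetical): compressive reflected strain of -1e-3
# and transmitted strain of 5e-4.
rate, sigma = shpb_point(eps_reflected=-1.0e-3, eps_transmitted=5.0e-4)
```

Specimen strain is then obtained by integrating the strain rate over the loading duration; pulse shaping, as described in the abstract, keeps `rate` nearly constant during that window.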
Micro Nano Technology-Based Systems (MNT-Based Systems) are expected to provide unprecedented capabilities for aerospace applications. However, we have not sufficiently addressed the reliability of such systems, for a number of reasons. For example, our foundational understanding of such systems is incomplete at the basic physics level, and our understanding of how individual subsystems interact is much less than we originally assumed. In addition, the manner in which we operate during the product realization cycle has large implications for the ultimate reliability we can expect to achieve. Currently it is quite difficult to determine the reliability of MNT-Based Systems, a difficulty borne out by a number of estimates we have seen that are unsatisfactory. We shall discuss a number of issues that have slowed our progress in developing MNT-Based Systems and have deterred us from effectively ascertaining the true "reliability" of such systems.
On the first inertial-confinement-fusion ignition facility, the target capsule will be DT-filled through a long, narrow tube inserted into the shell. We have measured μg-scale shell perturbations Δm′ arising from multiple 10–50 μm-diameter hollow SiO2 tubes on x-ray-driven, ignition-scale, 1-mg capsules on a subignition device. Simulations compare well with observation, corroborating that Δm′ arises from early x-ray shadowing by the tube rather than from tube mass coupling to the shell, and indicating that 10–20 μm tubes will negligibly affect fusion yield on a full-ignition facility.
In this study, the (CFx)n cathode reaction during discharge has been investigated using in situ X-ray diffraction (XRD). Mathematical treatment of the in situ XRD data set was performed using multivariate curve resolution with alternating least squares (MCR-ALS), a multivariate analysis technique. MCR-ALS analysis successfully separated the relatively weak XRD signal intensity due to the chemical reaction from the signals of the other, inert cell components. The resulting dynamic reaction component revealed the loss of (CFx)n cathode signal together with the simultaneous appearance of LiF by-product intensity. Careful examination of the XRD data set revealed an additional dynamic component, which may be associated with the formation of an intermediate compound during the discharge process.
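The core of MCR-ALS is a bilinear factorization D ≈ CS, where the rows of S are component "spectra" (here, diffraction patterns) and the columns of C are their concentration profiles over time, refined by alternating least squares under a nonnegativity constraint. A minimal sketch on synthetic rank-two data follows; the concentrations and spectra are hypothetical, not the paper's XRD data set:

```python
# Minimal MCR-ALS sketch on synthetic data (illustrative only).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def clip0(A):
    # Nonnegativity constraint: clip negative entries to zero.
    return [[max(x, 0.0) for x in row] for row in A]

# Synthetic rank-2 data: two components, five time steps, four channels.
C_true = [[4.0, 0.5], [3.0, 1.0], [2.0, 2.0], [1.0, 3.0], [0.5, 4.0]]
S_true = [[1.0, 2.0, 0.0, 1.0], [0.0, 1.0, 3.0, 1.0]]
D = matmul(C_true, S_true)

# Alternating least squares, initialized from rough (perturbed) estimates
# of the pure spectra, as is typical in MCR-ALS.
S = [[1.2, 1.9, 0.1, 1.0], [0.1, 1.1, 2.8, 1.1]]
for _ in range(20):
    St = transpose(S)
    C = clip0(matmul(matmul(D, St), inv2(matmul(S, St))))   # C-update
    Ct = transpose(C)
    S = clip0(matmul(inv2(matmul(Ct, C)), matmul(Ct, D)))   # S-update

D_fit = matmul(C, S)
residual = max(abs(D[i][j] - D_fit[i][j])
               for i in range(len(D)) for j in range(len(D[0])))
```

In a real analysis the inert cell-component signals appear as additional, time-invariant components, which is how MCR-ALS isolates the weak dynamic reaction signal described above.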
Niobium doped Lead Zirconate Titanate (PZT) with a Zr/Ti ratio of 95/5 (i.e., PZT 95/5-2Nb) is a ferroelectric with a rhombohedral structure at room temperature. A crystal (or a subdomain within a crystal) exhibits a spontaneous polarization in any one of eight crystallographically equivalent directions. Such a material becomes polarized when subjected to a large electric field. When the electric field is removed, a remanent polarization remains and a bound charge is stored. A displacive phase transition from a rhombohedral ferroelectric phase to an orthorhombic anti-ferroelectric phase can be induced with the application of a mechanical load. When this occurs, the material becomes depoled and the bound charge is released. The polycrystalline character of PZT 95/5-2Nb leads to highly non-uniform fields at the grain scale. These local fields lead to very complex material behavior during mechanical depoling that has important implications for device design and performance. This paper presents a microstructurally based numerical model that describes the 3D non-linear behavior of ferroelectric ceramics. The model resolves the structure of polycrystals directly in the topology of the problem domain and uses the extended finite element method (X-FEM) to solve the governing equations of electromechanics. The material response is computed from anisotropic single crystal constants and the volume fractions of the various polarization variants (i.e., three variants for the anti-ferroelectric and eight for the rhombohedral ferroelectric ceramic). Evolution of the variant volume fractions is governed by the minimization of internally stored energy and accounts for ferroelectric and ferroelastic domain switching and phase transitions in response to the applied loads. The developed model is used to examine hydrostatic depoling in PZT 95/5-2Nb.
Making use of polypropylene samples that are selectively labeled with carbon-13 at each of the three unique positions within the repeating unit, we are conducting mass spectral analyses of the volatile organic oxidation products that are produced when the polymer is subjected to elevated temperature in the presence of air. By examination of both the parent and fragmentation ion peaks in the mass spectrum, we are able to identify the positioning of the C-13 labels within the volatile compounds, and thereby map each compound onto its site of origin from within the macromolecular structure of polypropylene. Most of the organic oxidation products are remarkably specific in terms of their genesis from the polymer. The structural results are discussed in terms of the oxidation chemistry of the macromolecule.
This report details the work completed under the TX-100 blade manufacturing portion of the Carbon-Hybrid Blade Developments: Standard and Twist-Coupled Prototype project. The TX-100 blade is a 9 meter prototype blade designed with bend-twist coupling to augment the mitigation of peak loads during normal turbine operation. This structural coupling was achieved by locating off-axis carbon fiber in the outboard portion of the blade skins. The report presents the tooling selection, blade production, blade instrumentation, blade shipping, and adapter plate design and fabrication. The baseline blade used for this project was the ERS-100 (Revision D) wind turbine blade. The molds used for the production of the TX-100 were originally built for the production of the CX-100 blade. The same high pressure and low pressure skin molds were used to manufacture the TX-100 skins. In order to compensate for the difference in skin thickness between the CX-100 and the TX-100, however, a new TX-100 shear web plug and mold were required. Both the blade assembly fixture and the root stud insertion fixture used for the CX-100 blades could be utilized for the TX-100 blades. A production run of seven TX-100 prototype blades was undertaken at TPI Composites during the month of October 2004. Of those seven blades, four were instrumented with strain gauges before final assembly. After production at the TPI Composites facility in Rhode Island, the blades were shipped to various test sites: two blades to the National Wind Technology Center at the National Renewable Energy Laboratory in Boulder, Colorado; two blades to Sandia National Laboratories in Albuquerque, New Mexico; and three blades to the United States Department of Agriculture turbine field test facility in Bushland, Texas. An adapter plate was designed to allow the TX-100 blades to be installed on existing Micon 65/13M turbines at the USDA site.
The program concluded with the kick-off of TX-100 blade testing at the three test facilities.
This new program at Sandia is focused on iodine waste form development for GNEP cycle needs. Our research has a general theme of 'Waste Forms by Design', in which we focus on silver-loaded zeolite waste forms and related metal-loaded zeolites that can be validated for chosen GNEP cycle designs. Within that theme, we are interested in flexibility with respect to the iodine feed stream and the sequestration material (in a sense, the ability to develop a universal material independent of the waste stream composition). We are also designing for the flexibility to work in a variety of repository or storage scenarios. This is possible by studying the structure/property relationships of existing waste forms and optimizing them to our current needs. Furthermore, by understanding the properties of the waste and the storage forms, we may be able to predict their long-term behavior and stability. Finally, we are working collaboratively with the Waste Form Development Campaign to ensure materials durability and stability testing.
The vapor-liquid-solid growth process for synthesis of group-IV semiconducting nanowires using silane, germane, disilane and digermane precursor gases has been investigated. The nanowire growth process combines in situ gold seed formation by vapor deposition on atomically clean silicon (111) surfaces, in situ growth from the gaseous precursor(s), and real-time monitoring of nanowire growth as a function of temperature and pressure by a novel optical reflectometry technique. A significant dependence on precursor pressure and growth temperature for the synthesis of silicon and germanium nanowires is observed, depending on the stability of the specific precursor used. Also, the presence of a nucleation time for the onset of nanowire growth has been found using our new in situ optical reflectometry technique.
This report provides strategies for minimizing machining distortion in future designs of aluminum alloy satellite boxes, based in part on key findings from this investigation. The report outlines types of aluminum alloys and how they are heat treated, how residual stresses develop during heat treatment of age hardening alloys, ways residual stresses can be minimized, and the design of machining approaches to minimize distortion in parts that contain residual stresses. Specific recommendations are made regarding alloy selection, heat treatment, stress relieving, and machining procedures for boxes requiring various strength levels with emphasis on 6061 and 7075 aluminum alloys.
Distortion frequently occurs during machining of age hardening aluminum alloys due to residual stresses introduced during the quenching step in the heat treatment process. This report quantifies, compares, and discusses the effectiveness of several methods for minimizing residual stresses and machining distortion in aluminum alloys 7075 and 6061.
This report describes the development of a Stable Local Oscillator (StaLO) multi-chip module (MCM). It is a follow-on report to SAND2006-6414, Stable Local Oscillator Microcircuit. The StaLO accepts a 100 MHz input signal and produces output signals at 1.2, 3.3, and 3.6 GHz. The circuit is built as an MCM because it makes use of integrated circuit technologies in silicon and lithium niobate as well as discrete passive components. The complete StaLO was fabricated on an alumina thick-film hybrid substrate.
The purpose of this LDRD was to study the effect of steady-state neutron and gamma irradiation on the transmission of waveguides designed to operate well in the near- or mid-IR region of the electromagnetic spectrum. In this context, near-IR refers to the region between 1.3 μm and about 2.4 μm, and mid-IR between 3.0 μm and 4.5 μm. Such radiation environments could exist in nuclear power plants or nuclear weapons. Pulsed and steady-state radiation effects had been extensively studied on silica-based optical fibers because they have been the most readily available, most widely used in communications and sensing, and the least expensive. However, silica-based fibers do not transmit well beyond about 1.8 μm and they are virtually opaque in the mid-IR. The mid-IR, as defined above, and beyond, is where vibrational spectroscopy is carried out. This type of sensing is one important application of infrared optical fibers.
Within reactive geochemical transport modeling, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g., sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use, that is, when this variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models; it currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given.
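The simplicity of the K_D approach is that a single constant distribution coefficient folds into a constant retardation factor, R = 1 + (ρ_b/θ)·K_D, which uniformly slows solute transport, whereas an SCM lets sorption vary with chemical conditions. A minimal sketch with hypothetical parameter values (not GEOQUIMICO code):

```python
# K_D (linear sorption) approach: a constant distribution coefficient
# yields a constant retardation factor R = 1 + (rho_b / theta) * K_D.
# All parameter values are hypothetical, chosen only for illustration.
rho_b = 1.6    # bulk density, kg/L
theta = 0.3    # porosity (volumetric water content)
K_D = 0.5      # distribution coefficient, L/kg

R = 1.0 + (rho_b / theta) * K_D   # retardation factor (dimensionless)

v_water = 1.0                     # groundwater velocity, m/yr
v_solute = v_water / R            # retarded solute velocity, m/yr
```

The limitation noted in the abstract is visible here: `K_D` (and hence `R`) is a fixed number, with no dependence on location, time, pH, or competing species; an SCM replaces it with a chemistry-dependent sorption calculation at each grid cell and time step.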
Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of this LDRD have spanned both theory and code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling versus a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites, leveraging existing functionality to make these coupling strategies available now. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, as well as the capability to handle Jacobian-free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact of this LDRD is that we have shown how to enable, and have delivered strategies for, strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multiphysics applications.
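The appeal of Jacobian-free Newton-Krylov for code coupling is that the Krylov solver needs only Jacobian-vector products, and these can be approximated by a finite difference of the coupled nonlinear residual, so no participating code has to expose its Jacobian. A minimal sketch of that core idea, using a hypothetical two-field residual rather than any NOX or Sierra interface:

```python
# Matrix-free Jacobian-vector product, the kernel of Jacobian-free
# Newton-Krylov (JFNK): J(u) @ v is approximated by a one-sided finite
# difference of the residual F, so F can be an opaque coupled code.

def F(u):
    # Hypothetical coupled residual: two equations sharing both unknowns.
    x, y = u
    return [x * x + y - 3.0, x + y * y - 5.0]

def jacvec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v via (F(u + eps*v) - F(u)) / eps."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - f0) / eps for fp, f0 in zip(Fp, Fu)]

# Verify against the analytic Jacobian J = [[2x, 1], [1, 2y]] at u = (1, 2).
u, v = [1.0, 2.0], [0.3, -0.4]
Jv_exact = [2 * u[0] * v[0] + v[1], v[0] + 2 * u[1] * v[1]]
Jv_fd = jacvec(F, u, v)
```

Inside a JFNK solve, `jacvec` is handed to a Krylov method such as GMRES at each Newton step; this is the mechanism that lets multiple codes be linked through their residuals alone, preserving their modularity.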
This report specifies the way in which Gauss points shall be named and ordered when storing them in an EXODUS II file so that they may be properly interpreted by visualization tools. This naming convention covers hexahedra and tetrahedra. Future revisions of this document will cover quadrilaterals, triangles, and shell elements.
We explored the potential of Quasi-Spherical Direct Drive (QSDD) to reduce the cost and risk of a future fusion driver for Inertial Confinement Fusion (ICF) and to produce megajoule thermonuclear yield on the renovated Z Machine with a pulse-shortening Magnetically Insulated Current Amplifier (MICA). Analytic relationships for constant implosion velocity and constant pusher stability have been derived and show that the required current scales as the implosion time. Therefore, a MICA is necessary to drive QSDD capsules with hot-spot ignition on Z. We have optimized the LASNEX parameters for QSDD with realistic walls and mitigated many of the risks. Although the mix-degraded 1D yield is computed to be ~30 MJ on Z, unmitigated wall expansion under the >100 gigabar pressure just before burn prevents ignition in the 2D simulations. A squeezer system of adjacent implosions may mitigate the wall expansion and permit the plasma to burn.