We describe a method of performing trilinear analysis on large data sets using a modification of the PARAFAC-ALS algorithm. Our method iteratively decomposes the data matrix into a core matrix and three loading matrices based on the Tucker1 model. The algorithm is particularly useful for data sets that are too large to load into a computer's main memory. While the performance advantage of our algorithm depends on the number of data elements and the dimensions of the data array, we have seen a significant performance improvement over operating PARAFAC-ALS on the full data set. In one case involving hyperspectral images from a confocal microscope, our method of analysis was approximately 60 times faster than operating on the full data set, while obtaining essentially equivalent results. Published in 2008 by John Wiley & Sons, Ltd.
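For orientation, the sketch below shows one sweep of the standard, in-memory PARAFAC-ALS algorithm for a three-way array; it is not the authors' out-of-core Tucker1-based modification, and all sizes and names are illustrative (assumes NumPy).

```python
# Minimal sketch of one conventional PARAFAC-ALS sweep for a three-way array
# X of shape (I, J, K) and R components. Illustrative only; the paper's
# out-of-core Tucker1-based variant differs.
import numpy as np

def parafac_als_sweep(X, A, B, C):
    """One alternating-least-squares update of the loading matrices A, B, C."""
    # Each update is a matricized-tensor-times-Khatri-Rao product (via einsum)
    # times the pseudo-inverse of the Hadamard product of Gram matrices.
    A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

rng = np.random.default_rng(0)
I, J, K, R = 64, 32, 100, 3
X = rng.random((I, J, K))
A, B, C = (rng.random((n, R)) for n in (I, J, K))
for _ in range(50):
    A, B, C = parafac_als_sweep(X, A, B, C)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))   # relative fit of the rank-R model
```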
Chemical solution deposition has been used to fabricate continuous ultrathin lead lanthanum zirconate titanate (PLZT) films as thin as 20 nm. Further, multilayer capacitor structures with as many as 10 dielectric layers have been fabricated from these ultrathin PLZT films by alternating spin-coated dielectric layers with sputtered platinum electrodes. Integrating a photolithographically defined wet etch step into the fabrication process enabled the production of functional multilayer stacks with capacitance values exceeding 600 nF. Such ultrathin multilayer capacitors offer tremendous advantages for further miniaturization of integrated passive components.
The author will describe two-photon-resonant LIF detection of CO, O, and H. Application of these techniques in flames frequently suffers from significant photolytic interferences caused by the intense UV excitation pulses required to produce measurable signal. When compared to nanosecond excitation, the use of short pulse (picosecond) excitation can significantly reduce the effect of the photolytic interference. Results of recent atomic oxygen imaging experiments using picosecond- and nanosecond-duration laser pulses will be presented, and potential improvements to CO and H imaging will be discussed.
Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance about the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence in our regulatory assessment.
In this work, we present two methods for solving overdetermined systems of the Time of Arrival (TOA) geolocation equations that achieve the minimum possible variance in all cases, not just when the satellites are at large equal radii. One of these techniques gives two solutions, and the other gives four solutions.
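As a point of reference, the sketch below shows a standard iterative least-squares (Gauss-Newton) treatment of an overdetermined TOA system; it is only a generic baseline, not the closed-form minimum-variance solutions described in the abstract, and the satellite geometry is made up for illustration.

```python
# Generic Gauss-Newton baseline for overdetermined TOA geolocation, with
# measured ranges r_i = ||x - s_i|| from known satellite positions s_i.
# Illustrative only; not the paper's closed-form minimum-variance methods.
import numpy as np

def toa_gauss_newton(sats, ranges, x0, iters=10):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - sats                      # (N, 3) vectors to each satellite
        pred = np.linalg.norm(diffs, axis=1)  # predicted ranges
        J = diffs / pred[:, None]             # Jacobian of ||x - s_i|| w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x = x + dx
    return x

sats = np.array([[20e6, 0, 5e6], [0, 20e6, 5e6],
                 [-20e6, 0, 5e6], [0, -20e6, 5e6], [10e6, 10e6, 15e6]])
truth = np.array([1.2e6, -0.8e6, 0.3e6])
ranges = np.linalg.norm(truth - sats, axis=1)
print(toa_gauss_newton(sats, ranges, x0=[0.0, 0.0, 0.0]))
```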
American Nuclear Society Embedded Topical Meeting - 2007 International Topical Meeting on Safety and Technology of Nuclear Hydrogen Production, Control, and Management
As part of the US DOE Nuclear Hydrogen Initiative, Sandia National Laboratories is designing and constructing a process for the decomposition of sulfuric acid to produce sulfur dioxide. This process is part of the thermochemical Sulfur-Iodine (S-I) cycle that produces hydrogen from water. The Sandia process will be integrated with other sections of the S-I cycle in the near future to complete a demonstration-scale S-I process. In the Sandia process, sulfuric acid is concentrated by vacuum distillation and then catalytically decomposed at high temperature (850 °C) to produce sulfur dioxide, oxygen, and water. Major problems in the process, namely corrosion and failure of high-temperature connections between process equipment, have been eliminated through the development of an integrated acid decomposer constructed of silicon carbide. The unit integrates acid boiling, superheating, and decomposition into a single unit operation and provides exceptional heat recuperation. The design of the acid decomposition process, the new acid decomposer, other process units, and the materials of construction for the process are described and discussed.
American Nuclear Society Embedded Topical Meeting - 2007 International Topical Meeting on Safety and Technology of Nuclear Hydrogen Production, Control, and Management
A preliminary study was conducted that considered capturing carbon dioxide from fossil-fired power plants and combining it with nuclear hydrogen to produce alternative liquid fuels for transportation. Among the alternative liquid fuels that can be used in internal combustion engines, the two most promising are methanol and ethanol. We chose these two because they are relatively simple compounds and can be used with only minor changes to the fuel systems of most automobiles today. In fact, some vehicles today can operate with any combination of conventional gasoline, ethanol, or methanol. We estimated the quantity of carbon dioxide that would be emitted by fossil-fired power plants in the future. We then used this information to determine how much ethanol or methanol could be produced if enough hydrogen were made available. Using the quantity of hydrogen required and the thermodynamics of the reactions involved, we estimated the nuclear power that would be needed to produce the liquid fuel. This amount of liquid fuel was then used to estimate the effect of such a program on conventional gasoline usage, the need for foreign oil, and the decrease in CO2 emissions.
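As a purely illustrative stoichiometric check (not the paper's estimates), the snippet below computes the hydrogen and carbon dioxide required per tonne of methanol from the hydrogenation reaction CO2 + 3 H2 -> CH3OH + H2O.

```python
# Illustrative stoichiometry only: feedstock per tonne of methanol via
# CO2 + 3 H2 -> CH3OH + H2O. These are not the paper's numbers.
M_CO2, M_H2, M_MEOH = 44.01, 2.016, 32.04          # molar masses, g/mol
meoh_tonnes = 1.0
h2_tonnes = meoh_tonnes * 3.0 * M_H2 / M_MEOH       # ~0.19 t H2 per t CH3OH
co2_tonnes = meoh_tonnes * 1.0 * M_CO2 / M_MEOH     # ~1.37 t CO2 per t CH3OH
print(f"{h2_tonnes:.3f} t H2 and {co2_tonnes:.3f} t CO2 per t CH3OH")
```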
The dimensionless extinction coefficient, Ke, was measured for soot produced in 2 m JP-8 pool fires. Light extinction and gravimetric sampling measurements were performed simultaneously at wavelengths of 635 and 1310 nm at three heights in the flame zone and in the overfire region. Measured average Ke values of 8.4 ± 1.2 at 635 nm and 8.7 ± 1.1 at 1310 nm in the overfire region agree well with values of 8-10 recently reported for different fuels and flame conditions. The overfire Ke values are also relatively independent of wavelength, in agreement with recent findings for JP-8 soot in smaller flames. Ke was nearly constant at 635 nm for all sampling locations in the large fires. However, at 1310 nm, the overfire Ke was higher than in the flame zone. Chemical analysis of physically sampled soot shows variations in carbon-to-hydrogen (C/H) ratio and polycyclic aromatic hydrocarbon (PAH) concentration that may account for the smaller Ke values measured in the flame zone. Rayleigh-Debye-Gans theory of scattering for polydisperse fractal aggregates (RDG-PFA), applied using measured aggregate fractal dimensions, was found to under-predict the extinction coefficient by 17-30% at 635 nm using commonly accepted refractive indices of soot, but agreed well with the experiments using the more recently published refractive index of 1.99-0.89i. This study represents the first measurements of soot chemistry, morphology, and optical properties in the flame zone of large, fully turbulent pool fires, and it emphasizes the importance of accurate measurements of optical properties in both the flame zone and overfire regions for models of radiative transport and for interpretation of laser-based diagnostics of soot volume fraction and temperature.
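To illustrate how the assumed refractive index m enters such calculations, the sketch below evaluates the standard Rayleigh-limit absorption and scattering functions E(m) and F(m) used in RDG theory; this is only a rough, absorption-plus-scattering-function comparison under an assumed sign convention, not the paper's full RDG-PFA computation.

```python
# Rough illustration: refractive-index functions used in RDG soot optics,
# E(m) = Im[(m^2 - 1)/(m^2 + 2)] and F(m) = |(m^2 - 1)/(m^2 + 2)|^2.
# The imaginary part of m is taken positive here so that E(m) > 0.
# In the Rayleigh (absorption-only) limit, Ke_abs = 6*pi*E(m); aggregate
# scattering (the "PFA" part) adds to this. Illustrative only.
import math

def ratio(m):
    return (m**2 - 1.0) / (m**2 + 2.0)

for m in (complex(1.57, 0.56), complex(1.99, 0.89)):   # common vs. newer index
    E = ratio(m).imag
    F = abs(ratio(m))**2
    print(m, "E(m) =", round(E, 3), "F(m) =", round(F, 3),
          "6*pi*E(m) =", round(6 * math.pi * E, 2))
```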
Cities without an early warning system of indwelling sensors can consider monitoring their networks manually, especially during times of heightened security levels. We consider the problem of calculating an optimal schedule for manual sampling in a municipal water network. Preliminary computations with a small-scale example indicate that during normal times, manual sampling can provide some benefit, but it is far inferior to an indwelling sensor network. However, given information that significantly constrains the nature of an imminent threat, manual sampling can perform as well as a small sensor network designed to handle normal threats. Copyright ASCE 2006.
Accurate material models are fundamental to predictive structural finite element models. Because potting foams are routinely used to mitigate shock and vibration of encapsulated components in electromechanical systems, accurate material models of foams are needed. A linear-viscoelastic foam constitutive model has been developed to represent the foam's stiffness and damping throughout an application space defined by temperature, strain rate (or frequency), and strain level. Validation of this linear-viscoelastic model, which is integrated into the Salinas structural dynamics code, is being achieved by modeling and testing a series of structural geometries of increasing complexity that have been designed to ensure sensitivity to material parameters. Both experimental and analytical uncertainties are being quantified to ensure a fair assessment of model validity. Quantitative model validation metrics are being developed to provide a means of comparing analytical model predictions to observations made in the experiments. This paper is one of several recent papers documenting the validation process for simple to complex structures with foam-encapsulated components. This paper focuses specifically on model validation over a wide temperature range, using a simple dumbbell structure for modal testing and simulation. Material variations of density and modulus have been included. A double-blind validation process is described that brings together test data with model predictions.
This paper presents computational simulations and experiments of water flow and contaminant transport through pipes with incomplete mixing at pipe joints. The hydraulics and contaminant transport were modeled using computational fluid dynamics software that solves the continuity, momentum, energy, and species equations (laminar and turbulent) using finite-element methods. Simulations were performed of experiments consisting of individual and multiple pipe joints in which tracer and clean water were separately introduced into the pipe junction. Results showed that the incoming flow streams generally remained separated within the junction, leading to incomplete mixing of the tracer. Simulations of the mixing matched the experimental results when appropriate scaling of the tracer diffusivity (via the turbulent Schmidt number) was calibrated based on the results of single-joint experiments using cross and double-T configurations. Results showed that a turbulent Schmidt number between approximately 0.001 and 0.01 was able to account for the enhanced mixing caused by instabilities along the interface of impinging flows. Unequal flow rates within the network were also shown to affect the outlet concentration at each pipe junction, with "enhanced" or "reduced" mixing possible depending on the relative flow rates entering the junction. Copyright ASCE 2006.
In this paper we present the results of a study to quantify uncertainty in experimental modal parameters due to test setup uncertainty, measurement uncertainty, and data analysis uncertainty. Uncertainty quantification is required to accomplish a number of tasks, including model updating, model validation, and assessment of unit-to-unit variation. We consider uncertainty in the modal parameters due to a number of sources, including force input location/direction, force amplitude, instrumentation bias, support conditions, and the analysis method (algorithmic variation). We compute the total uncertainty due to all of these sources and discuss the importance of proper characterization of bias errors on the total uncertainty. This uncertainty quantification was applied to modal tests designed to assess modeling capabilities for emerging designs of wind turbine blades. In an example, we show that unit-to-unit variation of the modal parameters of two nominally identical wind turbine blades is successfully assessed by performing uncertainty quantification. This study aims to demonstrate the importance of proper pre-test design and analysis for understanding the uncertainty in modal parameters, in particular uncertainty due to bias error.
In order to predict blast damage on structures, it is current industry practice to decouple shock calculations from computational structural dynamics calculations. Pressure-time histories from experimental tests were used to assess computational models developed using a shock physics code (CTH) and a structural dynamics code (PRONTO3D). CTH was shown to be able to reproduce three independent characteristics of a blast wave: arrival time, peak overpressure, and decay time. Excellent agreement was achieved for early times, where the rigid wall assumptions used in the model analysis were valid. A one-way coupling was performed for this blast-structure interaction problem by taking the pressure-time history from the shock physics simulation and applying it to the structure at the corresponding locations in the PRONTO3D simulation to capture the structural deformation. In general, the one-way coupling was shown to be a cost-effective means of predicting the structural response when the time duration of the load was less than the response time of the structure. Therefore, the computational models were successfully evaluated for the internal blast problems studied herein.
This paper investigates methods for coupling analytical dynamic models of subcomponents with experimentally derived models in order to predict the response of the combined system, focusing on modal substructuring, or Component Mode Synthesis (CMS), the experimental analog to the ubiquitous Craig-Bampton method. While the basic methods for combining experimental and analytical models have been available for many years, they do not appear to be applied successfully very often. The CMS theory is presented along with a new strategy, dubbed the Maximum Rank Coordinate Choice (MRCC), which ensures that the constrained degrees of freedom can be found from the unconstrained ones without encountering numerical ill conditioning. The experimental modal substructuring approach is also compared with frequency response function coupling, sometimes called admittance or impedance coupling. These methods are used both to analytically remove models of a test fixture (required to include rotational degrees of freedom) and to predict the response of the coupled beams. Both rigid and elastic models for the fixture are considered. Similar results are obtained using either method, although the modal substructuring method yields a more compact database and allows one to more easily interrogate the resulting system model to assure that physically meaningful results have been obtained. A method for coupling the fixture model to experimental measurements, dubbed the Modal Constraint for Fixture and Subsystem (MCFS), is presented that greatly improves the accuracy and robustness of the results when an elastic fixture model is used.
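For reference, a commonly used generic form of the frequency response function (admittance/impedance) coupling step is shown below; this is the standard Lagrange-multiplier frequency-based substructuring expression, not necessarily the exact formulation used in the paper.

```latex
% Generic Lagrange-multiplier frequency-based substructuring (FRF coupling):
% H(\omega) is the block-diagonal matrix of uncoupled subsystem FRFs and B is
% the signed Boolean matrix enforcing interface compatibility B u = 0.
\[
  H_{\mathrm{coupled}}(\omega) \;=\; H(\omega)
  \;-\; H(\omega)\, B^{\mathsf{T}}
  \left[\, B\, H(\omega)\, B^{\mathsf{T}} \right]^{-1}
  B\, H(\omega)
\]
```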
Like other interfaces, equilibrium grain boundaries are smooth at low temperature and rough at high temperature; however, little attention has been paid to roughening except for faceting boundaries. Using molecular dynamics simulations of face-centered cubic Ni, we studied two closely related grain boundaries with different boundary planes. In spite of their similarity, their boundary roughening temperatures differ by several hundred degrees, and boundary mobility is much larger above the roughening temperature. This has important implications for microstructural development during metallurgical processes.
We have discussed the key areas of the IR process that should not be circumvented if an organization is to achieve a high level of assurance in high-dollar, high-risk cost estimates; lessons learned; and possible solutions to improve the process. In summary, the best practices described are to do the following: develop a corporate policy for review of cost estimates based on TPC and potential financial and reputation risk; develop a database of qualified, experienced personnel who can perform well as IR team members; spell out the process for approval of review team members, including the executive approval process; address review team availability by developing review team member alternates; increase lead-time notice on high-dollar, high-risk estimates by developing an advance notice system with internal organizations; improve coordination of the estimating team's responses to the review team's questions and concerns; and develop alternatives, such as representatives and electronic briefings, to alleviate challenges in scheduling executives for cost estimate briefings. Each organization has its own needs, culture, and level of maturity. If you have an IR process that works, great! If not, we hope that we have sparked your interest in developing a process that works for your company. The goal is to continuously improve and further refine the process to meet the needs of both external and internal customers. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
17th Annual International Symposium of the International Council on Systems Engineering, INCOSE 2007 - Systems Engineering: Key to Intelligent Enterprises
When a system design approach is applied to wind turbine blades, manufacturing and structural requirements are included along with aerodynamic considerations in the design optimization. The resulting system-driven design includes several innovative structural features such as flat-back airfoils, a constant-thickness carbon spar-cap, and a thin, large-diameter root. Subscale blades were manufactured to evaluate the as-built integrated performance. The design resulted in a 22% reduction in mass, yet the blade withstood over 300% of its design load during testing. Compressive strains of nearly 0.9% were measured in the carbon spar-cap. The test results from this and an earlier design are compared, as are finite element models of each design. Included in the analysis is a review of the acoustic emission events that were detected through the use of surface-mounted microphones.
When measuring the structural dynamic response of test objects, the desired data is sometimes combined with some type of undesired periodic data. This can occur due to N-per-revolution excitation in systems with rotating components or when dither excitation is used. The response due to these (typically unmeasured) periodic excitations causes spikes in system frequency response functions (FRFs) and poor coherence. This paper describes a technique to remove these periodic components from the measured data. The data must be measured as a continuous time history which is initially processed as a single, long record. Given an initial guess for the periodic signal's fundamental frequency, an automated search will identify the actual fundamental frequency to very high accuracy. Then the fundamental and a user-specified number of harmonics are removed from the acquired data to create new time histories. These resulting time histories can then be processed using standard signal processing techniques. An example of this technique will be presented from a test where a vehicle is dithered with a fixed-frequency, sinusoidal force to linearize the behavior of the shock absorbers, while measuring the acceleration responses due to a random force applied elsewhere on the vehicle.
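The snippet below sketches the underlying idea, refining a guessed fundamental frequency and then least-squares fitting and subtracting the fundamental plus harmonics from the record; it illustrates the concept only, and the paper's automated search and processing details may differ.

```python
# Generic fit-and-subtract sketch for removing a periodic component and its
# harmonics from a long time record. Illustrative only.
import numpy as np

def remove_harmonics(y, fs, f0_guess, n_harm=5, search_halfwidth=0.5, n_search=501):
    t = np.arange(len(y)) / fs
    # Refine f0 by maximizing the projection of the record onto a complex tone.
    candidates = f0_guess + np.linspace(-search_halfwidth, search_halfwidth, n_search)
    power = [np.abs(np.sum(y * np.exp(-2j * np.pi * f * t))) for f in candidates]
    f0 = candidates[int(np.argmax(power))]
    # Least-squares fit of cos/sin pairs at f0 and its harmonics, then subtract.
    cols = []
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coeffs, f0

fs = 1024.0
t = np.arange(int(30 * fs)) / fs
rng = np.random.default_rng(1)
y = rng.standard_normal(t.size) + 3 * np.sin(2 * np.pi * 25.3 * t) + np.sin(2 * np.pi * 50.6 * t)
clean, f0 = remove_harmonics(y, fs, f0_guess=25.0)
print(f0)   # recovered fundamental, close to 25.3 Hz
```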
Partitioned global address space (PGAS) programming models have been identified as one of the few viable approaches for dealing with emerging many-core systems. These models tend to generate many small messages, which requires specific support from the network interface hardware to enable efficient execution. In the past, Cray included E-registers on the Cray T3E to support the SHMEM API; however, with the advent of multi-core processors, the balance of computation to communication capabilities has shifted toward computation. This paper explores the message rates that are achievable with multi-core processors and simplified PGAS support on a more conventional network interface. For message rate tests, we find that simple network interface hardware is more than sufficient. We also find that even typical data distributions, such as cyclic or block-cyclic, do not need specialized hardware support. Finally, we assess the impact of such support on the well known RandomAccess benchmark. (c) 2007 ACM.
In this paper, we present an optimal method for calculating turning maneuvers for an unmanned aerial vehicle (UAV) developed for ecological research. The algorithm calculates several possible solutions using vectors represented in complex notation, and selects the shortest turning path given constraints determined by the aircraft. This algorithm considers the UAV's turning capabilities, generating a two-dimensional path that is feasible for the UAV to fly. We generate a test flight path and show that the UAV is capable of following the turn maneuvers.
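A small illustration of the complex-number bookkeeping involved is given below; the turning radius and turn-center expressions are standard coordinated-turn geometry with assumed airspeed and bank angle, not the paper's path-selection algorithm.

```python
# Illustrative turn geometry in complex notation (assumed values throughout).
# Heading is a unit complex number h = exp(i*theta); for minimum turn radius
# r, the left/right turn centers sit at p + i*r*h and p - i*r*h.
import cmath, math

g = 9.81
v = 20.0                  # airspeed, m/s (assumed)
bank = math.radians(30)   # maximum bank angle (assumed)
r_min = v**2 / (g * math.tan(bank))    # coordinated-turn minimum radius

p = complex(0.0, 0.0)                  # current position (x + iy)
h = cmath.exp(1j * math.radians(45))   # current heading as a unit complex number
c_left = p + 1j * r_min * h            # center of a left (counter-clockwise) turn
c_right = p - 1j * r_min * h           # center of a right (clockwise) turn
print(round(r_min, 1), c_left, c_right)
```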
Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, they also present significant challenges in data partitioning and load balancing. As the mesh adapts to the solution, the partitioning requirements change. By explicitly considering these dynamic conditions, the scalability for large, realistic simulations could possibly be significantly improved. Our hypothesis is that adaptive partitioning, meaning dynamic and automatic switching of partitioning techniques, based on the current run-time state, can be beneficial for these simulations. However, switching partitioners can be expensive due to differences in the algorithms' native mapping of data onto processors. We suggest forcing a uniform starting point for all included partitioners. We present a penalty-based method for determining whether switching is beneficial. We study the effects on data migration, as well as on overall cost, of using the uniform starting point and the switching-penalties to select the best partitioning algorithm, among a set of graph-based and geometric partitioning algorithms, for each adaptive time-step for four different adaptive scientific applications. The results show that data migration can be significantly reduced and that adaptive partitioning indeed can be effective for unstructured adaptive applications.
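A hypothetical sketch of the switching decision is shown below: switch partitioners only when the predicted per-step gain, amortized over the remaining adaptive steps, outweighs the one-time migration penalty. All quantities and names are illustrative, not the paper's actual cost model.

```python
# Hypothetical penalty-based switching rule for adaptive partitioning.
def should_switch(current_step_cost, candidate_step_cost,
                  migration_penalty, steps_remaining):
    """Return True if switching partitioners is predicted to pay off."""
    gain = (current_step_cost - candidate_step_cost) * steps_remaining
    return gain > migration_penalty

# e.g., a geometric partitioner is cheaper per step, but switching away from
# the graph-based partitioner would force a large one-time data migration:
print(should_switch(current_step_cost=12.0, candidate_step_cost=10.5,
                    migration_penalty=40.0, steps_remaining=50))   # True
```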
17th Annual International Symposium of the International Council on Systems Engineering, INCOSE 2007 - Systems Engineering: Key to Intelligent Enterprises
In this paper, we introduce EXACT, the EXperimental Algorithmics Computational Toolkit. EXACT is a software framework for describing, controlling, and analyzing computer experiments. It provides the experimentalist with convenient software tools to ease and organize the entire experimental process, including the description of factors and levels, the design of experiments, the control of experimental runs, the archiving of results, and analysis of results. As a case study for EXACT, we describe its interaction with FAST, the Sandia Framework for Agile Software Testing. EXACT and FAST now manage the nightly testing of several large software projects at Sandia. We also discuss EXACT's advanced features, which include a driver module that controls complex experiments such as comparisons of parallel algorithms. Copyright 2007 ACM.
Proceedings of SPIE - The International Society for Optical Engineering
Harrison, M.J.; Doty, F.P.
Lanthanide halide alloys have recently enabled scintillating gamma-ray spectrometers comparable to room-temperature semiconductors (< 3% FWHM energy resolution at 662 keV). However, brittle fracture of these materials hinders the growth of large-volume crystals. Efforts to improve the strength through non-lanthanide alloy substitution, while preserving scintillation, are being pursued. Isovalent alloys of nominal composition Ce0.9Al0.1Br3, Ce0.9Ga0.1Br3, Ce0.9Sc0.1Br3, Ce0.9In0.1Br3, and Ce0.8Y0.2Br3, as well as aliovalent alloys of nominal composition (CeBr3)0.99(CdCl2)0.01, (CeBr3)0.99(CdBr2)0.01, (CeBr3)0.99(ZnBr2)0.01, (CeBr3)0.99(CaBr2)0.01, (CeBr3)0.99(SrBr2)0.01, (CeBr3)0.99(PbBr2)0.01, (CeBr3)0.99(ZrBr4)0.01, and (CeBr3)0.99(HfBr4)0.01, were prepared. All of these alloys exhibit bright fluorescence under UV excitation, with varying shifts in spectral peaks and intensities relative to pure CeBr3. Further, these alloys scintillate when coupled to a photomultiplier tube (PMT) and exposed to 137Cs gamma rays. These data and the potential for improved crystal growth will be discussed.
Fracture of materials has huge consequences in our daily lives, ranging from structural damage to loss of life. Understanding the mechanisms of crack initiation and propagation in materials is therefore very important. Great effort, both theoretical and experimental, has been made to understand the nature of crack propagation in crystalline materials. However, crack propagation in disordered systems such as highly cross-linked polymers (e.g., epoxies) is less understood. Many composites, such as carbon-fiber composites, have an epoxy matrix, and thus it is important to understand the properties of the epoxy itself. We study fracture in highly cross-linked polymer networks bonded to a solid surface using large-scale molecular dynamics simulations. An initial crack is created by forbidding bonds to form on a fraction of the solid surface up to a crack tip. The time and length scales involved in this process dictate the use of a coarse-grained bead-spring model of the epoxy network. In order to avoid unwanted boundary effects, large systems of up to 300 000 particles are used. Stress-strain curves are determined for each system from tensile-pull molecular dynamics simulations. We find that crack propagation, as well as the formation of voids ahead of the crack, is directly related to the network structure.
A growing number of applications involve the transmission of high-intensity laser pulses through optical fibers. Previously, our particular interests led to a series of studies on single-fiber transmission of Q-switched, 1064 nm pulses from multimode Nd:YAG lasers through step-index, multimode, fused silica fibers. The maximum pulse energy that could be transmitted through a given fiber was limited by the onset of laser-induced breakdown or damage. Breakdown at the fiber entrance face was often the first limiting process encountered, but other mechanisms were observed that could result in catastrophic damage at either fiber face, within the initial "entry" segment of the fiber, and at other internal sites along the fiber path. These studies examined system elements that can govern the relative importance of different damage mechanisms, including laser characteristics, the design and alignment of laser-to-fiber injection optics, fiber end-face preparation, and fiber routing. In particular, criteria were established for injection optics in order to maximize margins between transmission requirements and thresholds for laser-induced damage. Recent interests have led us to examine laser injection into multiple fibers. Effective methods for generating multiple beams are available, but the resulting beam geometry can lead to challenges in applying the criteria for optimum injection optics. To illustrate these issues, we have examined a three-fiber injection system consisting of a beam-shaping element, a primary injection lens, and a grating beamsplitter. Damage threshold characteristics were established by testing fibers using the injection geometry imposed by this system design.
We have synthesized and tested new highly fluorescent metal organic framework (MOF) materials based on stilbene dicarboxylic acid as a linker. The crystal structure and porosity of the product depend on the synthetic conditions and choice of solvent, and a low-density cubic form has been identified by X-ray diffraction. In this work we report experiments demonstrating the scintillation properties of these crystals. Bright proton-induced luminescence with a large shift relative to the fluorescence excitation spectra was recorded, peaking near 475 nm. Tolerance to fast proton radiation was evaluated by monitoring this radioluminescence to absorbed doses of several hundred Mrad.
To provide input to numerical models for hazard and vulnerability analyses, the thermal decomposition of eight polymers has been examined in both nitrogen and air atmospheres. Experiments have been performed with poly(methyl methacrylate), poly(diallyl phthalate), Norwegian spruce, poly(vinyl chloride), polycarbonate, poly(phenylene sulphide), and two polyurethanes. Polymers that formed a substantial amount of carbonaceous char during decomposition in a nitrogen atmosphere were completely consumed in an air atmosphere. However, in the case of the polyurethanes, complete consumption did not occur until temperatures of 700 °C or higher. Furthermore, to varying degrees, the presence of oxygen appeared to alter the decomposition processes in all of the materials studied.
The deployment of optical fibers in adverse radiation environments, such as those encountered in a low-Earth-orbit space setting, makes it critical to develop an understanding of the effect of large accumulated ionizing-radiation doses on optical components and systems. In particular, gamma radiation is known to considerably affect the performance of optical components by inducing absorbing centers in the materials. Such radiation is present both as primary background radiation and as secondary radiation induced by proton collisions with spacecraft material. This paper examines the effects of gamma radiation on erbium-, ytterbium-, and Yb/Er co-doped optical fibers by exposing a suite of such fibers to radiation from a Co-60 source over long periods of time while monitoring the temporal and spectral decrease in transmittance of a reference signal. For the same total doses, the results show increased photodarkening in erbium-doped fibers relative to ytterbium-doped fibers, as well as significant radiation resistance of the co-doped fibers over wavelengths of 1.0-1.6 microns. All three types of fibers were seen to exhibit dose-rate dependences.
A firing set capable of charging a 0.05 μF capacitor to 1.7 kV is constructed using a 2.5 mm diameter Series Connected Photovoltaic Array (SCPA) in lieu of a transformer as the method of high voltage generation. The source of illumination is a fiber coupled 3 W 808 nm laser diode. This paper discusses the performance and PSpice modeling of an SCPA used in a firing set application.
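A quick back-of-the-envelope check (not from the paper) of the energy stored in the capacitor at full charge follows from E = CV²/2.

```python
# Energy stored in the firing-set capacitor at full charge, E = C*V^2/2.
C = 0.05e-6      # farads
V = 1.7e3        # volts
print(0.5 * C * V**2)   # ~0.072 J
```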
This paper presents the conceptual framework that is being used to define quantification of margins and uncertainties (QMU) for application in the nuclear weapons (NW) work conducted at Sandia National Laboratories. The conceptual framework addresses the margins and uncertainties throughout the NW life cycle and includes the definition of terms related to QMU and to figures of merit. Potential applications of QMU consist of analyses based on physical data and on modeling and simulation. Appendix A provides general guidelines for addressing cases in which significant and relevant physical data are available for QMU analysis. Appendix B gives the specific guidance that was used to conduct QMU analyses in cycle 12 of the annual assessment process. Appendix C offers general guidelines for addressing cases in which appropriate models are available for use in QMU analysis. Appendix D contains an example that highlights the consequences of different treatments of uncertainty in model-based QMU analyses.
Split grating-gate field-effect transistor (FET) detectors made from high-mobility quantum well two-dimensional electron gas material have been shown to exhibit greatly improved tunable resonant photoresponse compared to single grating-gate detectors, due to the formation of a 'diode-like' element by the split-gate structure. These detectors are relatively large for FETs (1 mm × 1 mm area or larger) to match typical focused THz beam spot sizes. In the case where the focused THz spot size is smaller than the detector area, we have found evidence, through positional scanning of the detector element, that only a small portion of the detector is active. To further investigate this situation, detectors with the same channel width (1 mm) but various channel lengths were fabricated and tested. The results indicate that, indeed, only a small portion of the split grating-gate FET is active. This finding opens up the possibility of further enhancing detector sensitivity by increasing the active area.
Sandia National Laboratories has developed a means of manufacturing high-precision aspheric lenslet arrays turned on-center. An innovative chucking and indexing mechanism was designed and implemented that allows the part to be indexed in two orthogonal directions parallel to the spindle face. This system was designed to meet a need for center-to-center positioning of 2 μm and form error of λ/10. The part utilizes scribed orthogonal sets of grooves that locate the part on the chuck. The averaging of the grooves increases the repeatability of the system. The part is moved an integral number of grooves across the chuck by means of a vacuum chuck on a tool post that mates to the part and holds it while the chuck repositions to receive it. The current setup is designed to create as many as 169 lenslets distributed over a 3 mm square area while holding a true position tolerance of 1 μm for all lenslets.
The rapid autonomous detection of pathogenic microorganisms and bioagents by field-deployable platforms is critical to human health and safety. To achieve a high level of sensitivity for fluidic detection applications, we have developed a 330 MHz Love wave acoustic biosensor on 36° YX lithium tantalate (LTO). Each die has four delay-line detection channels, permitting simultaneous measurement of multiple analytes or parallel detection of samples containing a single analyte. Crucial to our biosensor was the development of a transducer that excites the shear horizontal (SH) mode, achieved through optimization of the transducer to minimize propagation losses and reduce undesirable modes. Detection was achieved by comparing the reference phase of an input signal to the phase shift from the biosensor using an integrated electronic multi-readout system connected to a laptop computer or PDA. The Love wave acoustic arrays were centered at 330 MHz, shifting to 325-328 MHz after application of the silicon dioxide waveguides. The insertion loss was -6 dB with an out-of-band rejection of 35 dB. The amplitude and phase ripple were 2.5 dB p-p and 2-3° p-p, respectively. Time-domain gating confirmed propagation of the SH mode while showing suppression of the triple transit. Antigen capture and mass detection experiments demonstrate a sensitivity of 7.19 ± 0.74° mm²/ng with a detection limit of 6.7 ± 0.40 pg/mm² for each channel.
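As a simple illustration of how the reported sensitivity converts a measured phase shift into a bound mass density, consider the arithmetic below; the phase shift used is hypothetical, only the sensitivity and detection limit come from the abstract.

```python
# Illustrative phase-to-mass conversion using the reported sensitivity.
sensitivity = 7.19        # degrees per (ng/mm^2), from the abstract
phase_shift = 0.5         # degrees (hypothetical measurement)
mass_density = phase_shift / sensitivity           # ng/mm^2
print(mass_density * 1e3, "pg/mm^2")               # ~70 pg/mm^2, well above the ~6.7 pg/mm^2 limit
```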
An innovative helium-3 high-pressure gas detection system, made possible by Sandia's expertise in microelectromechanical fluidic systems, is proposed. The system appears to have many beneficial performance characteristics for making neutron measurements in the high-bremsstrahlung, high-electrical-noise environments found in High Energy Density Physics experiments, and especially in the very high noise environment generated by the fast pulsed-power experiments performed at Sandia. The same system may also dramatically improve active WMD and contraband detection when employed with ultrafast (10-50 ns) pulsed neutron sources.
We present the results of a three-year LDRD project that focused on the development of novel, compact, ultraviolet solid-state sources and fluorescence-based sensing platforms that apply such devices to the sensing of biological and nuclear materials. We describe our development of 270-280 nm AlGaN-based semiconductor UV LEDs with performance suitable for evaluation in biosensor platforms, as well as our development efforts toward the realization of a 340 nm AlGaN-based laser diode technology. We further review our sensor development efforts, including evaluation of the efficacy of using modulated LED excitation and phase-sensitive detection techniques for fluorescence detection of biomolecules and uranyl-containing compounds.
We have developed a system of differential-output monitors that diagnose current and voltage in the vacuum section of a 20-MA 3-MV pulsed-power accelerator. The system includes 62 gauges: 3 current and 6 voltage monitors that are fielded on each of the accelerator's 4 vacuum-insulator stacks, 6 current monitors on each of the accelerator's 4 outer magnetically insulated transmission lines (MITLs), and 2 current monitors on the accelerator's inner MITL. The inner-MITL monitors are located 6 cm from the axis of the load. Each of the stack and outer-MITL current monitors comprises two separate B-dot sensors, each of which consists of four 3-mm-diameter wire loops wound in series. The two sensors are separately located within adjacent cavities machined out of a single piece of copper. The high electrical conductivity of copper minimizes penetration of magnetic flux into the cavity walls, which minimizes changes in the sensitivity of the sensors on the 100-ns time scale of the accelerator's power pulse. A model of flux penetration has been developed and is used to correct (to first order) the B-dot signals for the penetration that does occur. The two sensors are designed to produce signals with opposite polarities; hence, each current monitor may be regarded as a single detector with differential outputs. Common-mode-noise rejection is achieved by combining these signals in a 50-Ω balun. The signal cables that connect the B-dot monitors to the balun are chosen to provide reasonable bandwidth and acceptable levels of Compton drive in the bremsstrahlung field of the accelerator. A single 50-Ω cable transmits the output signal of each balun to a double-wall screen room, where the signals are attenuated, digitized (0.5 ns/sample), numerically compensated for cable losses, and numerically integrated. By contrast, each inner-MITL current monitor contains only a single B-dot sensor. These monitors are fielded in opposite-polarity pairs. The two signals from a pair are not combined in a balun; they are instead numerically processed for common-mode-noise rejection after digitization. All the current monitors are calibrated on a 76-cm-diameter axisymmetric radial transmission line that is driven by a 10-kA current pulse. The reference current is measured by a current-viewing resistor (CVR). The stack voltage monitors are also differential-output gauges, consisting of one 1.8-cm-diameter D-dot sensor and one null sensor. Hence, each voltage monitor is also a differential detector with two output signals, processed as described above. The voltage monitors are calibrated in situ at 1.5 MV on dedicated accelerator shots with a short-circuit load. Faraday's law of induction is used to generate the reference voltage: currents are obtained from calibrated outer-MITL B-dot monitors, and inductances from the system geometry. In this way, both current and voltage measurements are traceable to a single CVR. Dependable and consistent measurements are thus obtained with this system of calibrated diagnostics. On accelerator shots that deliver 22 MA to a low-impedance z-pinch load, the peak lineal current densities at the stack, outer-MITL, and inner-MITL monitor locations are 0.5, 1, and 58 MA/m, respectively. On such shots the peak currents measured at these three locations agree to within 1%.
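The sketch below illustrates, conceptually, the differential processing chain described above: difference the opposite-polarity B-dot signals to reject common-mode noise, then numerically integrate dB/dt to recover current. The calibration constants and geometry factor are placeholders, not the accelerator's actual values.

```python
# Conceptual differential B-dot processing: common-mode rejection plus
# numerical time integration. All constants are illustrative placeholders.
import numpy as np

def current_from_bdots(v_plus, v_minus, dt, sensitivity):
    """v_plus, v_minus: digitized signals from opposite-polarity sensors.
    sensitivity: effective sensor constant in V per (T/s)."""
    differential = 0.5 * (v_plus - v_minus)   # rejects common-mode noise
    dB_dt = differential / sensitivity        # field rate of change, T/s
    B = np.cumsum(dB_dt) * dt                 # numerical time integration
    geometry_factor = 2.0e5                   # A per T at the sensor, placeholder
    return B * geometry_factor

dt = 0.5e-9                                   # 0.5 ns/sample, as in the text
t = np.arange(0, 200e-9, dt)
signal = np.sin(np.pi * t / 200e-9)           # stand-in differential waveform
noise = 0.2 * np.sin(2 * np.pi * 5e7 * t)     # common-mode pickup
I = current_from_bdots(signal + noise, -signal + noise, dt, sensitivity=1.0e-4)
print(I[-1])
```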
The authors have developed a chip-scale atomic clock (CSAC) for portable, battery-powered applications requiring atomic timing accuracy. At PTTI/FCS 2005, they reported on the demonstration of a prototype CSAC with an overall size of 10 cm³, power consumption < 150 mW, and short-term stability σy(τ) < 1 × 10⁻⁹ τ^(−1/2). Since that report, they have completed the development of the CSAC, including provision for autonomous lock acquisition and a calibrated output at 10.0 MHz, in addition to modifications to the physics package and system architecture to improve performance and manufacturability.
Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle-tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle-tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle-tracking simulations through conductivity fields produced from the lidar images are then compared to particle-tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity.
A sulfuric acid catalytic decomposer section was assembled and tested for the Integrated Laboratory Scale experiments of the Sulfur-Iodine Thermochemical Cycle. This cycle is being studied as part of the U.S. Department of Energy Nuclear Hydrogen Initiative. Tests confirmed that the 54-inch-long silicon carbide bayonet could produce in excess of the design objective of 100 liters/hr of SO2 at 2 bar. Furthermore, at 3 bar the system produced 135 liters/hr of SO2 with only 31 mol% acid. The gas production rate was close to the theoretical maximum determined by equilibrium, which indicates that the design provides adequate catalyst contact and heat transfer. Several design improvements were also implemented to greatly minimize leakage of SO2 out of the apparatus. The primary modifications were a separate additional enclosure within the skid enclosure and replacement of Teflon tubing with glass-lined steel pipes.
The geometry of ray paths through realistic Earth models can be extremely complex due to the vertical and lateral heterogeneity of the velocity distribution within the models. Calculation of high-fidelity ray paths and travel times through these models generally involves sophisticated algorithms that require significant assumptions and approximations. To test such algorithms it is desirable to have available analytic solutions for the geometry and travel time of rays through simpler velocity distributions against which the more complex algorithms can be compared. Also, in situations where computational performance requirements prohibit implementation of full 3D algorithms, it may be necessary to accept the accuracy limitations of analytic solutions in order to compute solutions that satisfy those requirements. Analytic solutions are described for the geometry and travel time of infinite-frequency rays through radially symmetric 1D Earth models characterized by an inner sphere where the velocity distribution is given by the function V(r) = A - Br², optionally surrounded by some number of spherical shells of constant velocity. The mathematical basis of the calculations is described, sample calculations are presented, and results are compared to the TauP Toolkit of Crotwell et al. (1999). These solutions are useful for evaluating the fidelity of sophisticated 3D travel time calculators and in situations where performance requirements preclude the use of more computationally intensive calculators. It should be noted that most of the solutions presented are only quasi-analytic. Exact, closed-form equations are derived, but computation of solutions to specific problems generally requires application of numerical integration or root-finding techniques, which, while approximations, can be calculated to very high accuracy. Tolerances are set in the numerical algorithms such that computed travel time accuracies are better than 1 microsecond.
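To illustrate the kind of numerical integration involved, the sketch below evaluates travel time and epicentral distance for a turning ray in a radially symmetric model with V(r) = A - Br², using the standard spherical-Earth integrals with u(r) = r/V(r) and ray parameter p; the velocity constants are illustrative, and this is not the report's code (assumes SciPy is available).

```python
# Travel time and distance for a turning ray in V(r) = A - B*r^2 via the
# standard integrals dDelta/dr = p / (r*sqrt(u^2 - p^2)) and
# dT/dr = u^2 / (r*sqrt(u^2 - p^2)), with u(r) = r / V(r). Illustrative only.
import numpy as np
from scipy.integrate import quad

A, B, R = 8.0, 1.0e-7, 6371.0           # illustrative model: km/s, km/s/km^2, km
u = lambda r: r / (A - B * r**2)

p = 0.8 * u(R)                           # ray parameter below the surface value u(R)
# Turning radius from u(r_t) = p  =>  B*p*r^2 + r - A*p = 0
r_t = (-1.0 + np.sqrt(1.0 + 4.0 * A * B * p**2)) / (2.0 * B * p)

dDelta = lambda r: p / (r * np.sqrt(u(r)**2 - p**2))
dT = lambda r: u(r)**2 / (r * np.sqrt(u(r)**2 - p**2))
eps = 1e-6 * (R - r_t)                   # stand just clear of the integrable singularity
Delta = 2.0 * quad(dDelta, r_t + eps, R)[0]   # radians, source and receiver at r = R
T = 2.0 * quad(dT, r_t + eps, R)[0]           # seconds

print(np.degrees(Delta), T)
```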
Mode-stirred chamber and anechoic chamber measurements were made on two sets of canonical test objects (cylindrical and rectangular) with varying numbers of thin slot apertures. The shielding effectiveness was compared to determine the level of correction needed to compensate the mode-stirred data to levels commensurate with anechoic data from the same test object.
Er(D,T)2-x(3He)x (erbium di-tritide) films of thicknesses 500 nm, 400 nm, 300 nm, 200 nm, and 100 nm were grown and analyzed by transmission electron microscopy, X-ray diffraction, and ion beam analysis to determine variations in film microstructure as a function of film thickness and age, due to the time-dependent build-up of 3He in the film from the radioactive decay of tritium. Several interesting features were observed. First, the amount of helium released as a function of film thickness is relatively constant. This suggests that the helium is being released only from the near-surface region and that the helium is not diffusing to the surface from the bulk of the film. Second, lenticular helium bubbles are observed as a result of the radioactive decay of tritium into 3He. These bubbles grow along the [111] crystallographic direction. Third, a helium-bubble-free zone, or 'denuded zone', is observed near the surface. The size of this region is independent of film thickness. Fourth, an analysis of secondary diffraction spots in the transmission electron microscopy study indicates that small erbium oxide precipitates, 5-10 nm in size, exist throughout the film. Further, all of the films had large erbium oxide inclusions; in many cases these inclusions span the depth of the film.
The objectives of this project were to develop a new scientific tool for studies of chemical processes at the single-molecule level, and to provide enhanced capabilities for multiplexed, ultrasensitive separations and immunoassays. We have combined microfluidic separation techniques with our newly developed technology for spectrally and temporally resolved detection of single molecules. The detection of individual molecules can reveal fluctuations in molecular conformations, which are obscured in ensemble measurements, and allows detailed studies of reaction kinetics such as ligand or antibody binding. Detection near the single-molecule level also enables the use of correlation techniques to extract information, such as diffusion rates, from the fluorescence signal. The microfluidic technology offers unprecedented control of the chemical environment and flow conditions, and affords the unique opportunity to study biomolecules without immobilization. For analytical separations, the fluorescence lifetime and spectral resolution of the detection make it possible to use multiple parameters for identification of separation products to improve the certainty of identification. We have successfully developed a system that can measure fluorescence spectra, lifetimes, and diffusion constants of the components of mixtures separated in a microfluidic electrophoresis chip.
Veloce is a medium-voltage, high-current, compact pulsed power generator developed for isentropic and shock compression experiments. Because of its increased availability and ease of operation, Veloce is well suited for studying isentropic compression experiments (ICE) in much greater detail than previously allowed with larger pulsed power machines such as the Z accelerator. Because this compact pulsed power technology has not previously been used for dynamic material experiments, it is necessary to examine several key issues to ensure that accurate results are obtained. In the present experiments, issues such as panel and sample preparation, uniformity of loading, and edge effects were extensively examined. In addition, magnetohydrodynamic (MHD) simulations using the ALEGRA code were performed to interpret the experimental results and to design improved sample/panel configurations. Examples of recent ICE studies on aluminum are presented.
This document is a mechanical design best-practice guide for new and experienced designers alike. The contents consist of topics related to using Computer Aided Design (CAD) software, performing basic analyses, and using configuration management. The details specific to each topic have been checked against existing Product Realization Standard (PRS) and Technical Business Practice (TBP) requirements while maintaining alignment with sound engineering and design practices. This document is dynamic in that subsequent updates will be reflected in the main title, and each update will be published on an annual basis.
Results from recent experimental studies suggest that the N vacancy (VN) may compensate Mg acceptors in GaN in addition to the compensation arising from H introduced during growth. To investigate this possibility further, density-functional-theory calculations were performed to determine the interactions of VN with H, Mg, and the MgH center in GaN, and modeling was performed to determine the state populations at elevated temperatures. The results indicate that VNH and MgVNH complexes with H inside the vacancy are highly stable in p-type GaN and act to compensate or passivate Mg acceptors. Furthermore, barriers for formation of these complexes were investigated and the results indicate that they can readily form at temperatures > 400 °C, which is well below temperatures typically used for GaN growth. Overall, the results indicate that the VN compensation behavior suggested by experiments arises not from isolated VN, but rather from VNH and MgVNH complexes with H located inside the vacancy.
This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
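The toy example below illustrates the collocation idea in one dimension: statistics of a solution functional are computed from deterministic solves at a small set of quadrature nodes and compared with Monte Carlo sampling. It is only a one-dimensional caricature of the approach; the sparse (Smolyak) grid of the paper generalizes this to many random dimensions.

```python
# One-dimensional illustration of stochastic collocation vs. Monte Carlo
# for E[u(Y)] with Y ~ Uniform(-1, 1). Toy model; not the paper's method.
import numpy as np

def u(y):
    # Toy "solution functional" depending smoothly on a random input y.
    return 1.0 / (1.0 + 0.5 * y**2)

# Gauss-Legendre collocation: deterministic solves at 5 quadrature nodes.
nodes, weights = np.polynomial.legendre.leggauss(5)
collocation_mean = 0.5 * np.sum(weights * u(nodes))    # 0.5 = uniform density

# Plain Monte Carlo with many more samples.
rng = np.random.default_rng(0)
mc_mean = u(rng.uniform(-1.0, 1.0, 100_000)).mean()

exact = np.sqrt(2.0) * np.arctan(1.0 / np.sqrt(2.0))   # analytic mean
print(collocation_mean, mc_mean, exact)
```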
Alegra is an ALE (Arbitrary Lagrangian-Eulerian) multi-material finite element code that emphasizes large deformations and strong shock physics. The Lagrangian continuum dynamics package in Alegra uses a Galerkin finite element spatial discretization and an explicit central-difference time-stepping method. The goal of this report is to describe in detail the characteristics of this algorithm, including its conservation and stability properties. The details provided should help both researchers and analysts understand the underlying theory and numerical implementation of the Alegra continuum hydrodynamics algorithm.
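For readers unfamiliar with the time integrator, the sketch below shows a minimal explicit central-difference (leapfrog) scheme for M a + f_int(x) = f_ext on a single degree of freedom; it is a generic textbook form, not Alegra's implementation.

```python
# Minimal explicit central-difference (leapfrog) time stepping for
# M*a + f_int(x) = f_ext(t), with half-step velocities. Illustrative only.
import numpy as np

def central_difference(x0, v0, mass, f_int, f_ext, dt, n_steps):
    x, v = np.array(x0, float), np.array(v0, float)
    a = (f_ext(0.0) - f_int(x)) / mass
    v = v + 0.5 * dt * a                 # start-up: shift velocity to t = dt/2
    history = [x.copy()]
    for n in range(1, n_steps + 1):
        x = x + dt * v                   # position update at integer steps
        a = (f_ext(n * dt) - f_int(x)) / mass
        v = v + dt * a                   # velocity update at half steps
        history.append(x.copy())
    return np.array(history)

# Single-DOF linear spring; dt is well below the stability limit 2/omega.
k, m = 100.0, 1.0
xs = central_difference(x0=[1.0], v0=[0.0], mass=m,
                        f_int=lambda x: k * x, f_ext=lambda t: 0.0,
                        dt=0.01, n_steps=1000)
print(xs[-1])   # oscillates at angular frequency ~ sqrt(k/m) = 10 rad/s
```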
The purpose of this nine-week project was to advance the understanding of low-altitude airbursts by developing the means to model them at extremely high resolution in order to span the scales of entry physics as well as blast wave and plume formation. Small asteroid impacts on Earth are a recognized hazard, but the full nature of the threat is still not well understood. We used shock physics codes to discover emergent phenomena associated with low-altitude airbursts such as the Siberian Tunguska event of 1908 and the Egyptian glass-forming event 29 million years ago. The planetary defense community is beginning to recognize the significant threat from such airbursts. Low-altitude airbursts are the only class of impacts that have a significant probability of occurring within a planning time horizon; there is roughly a 10% chance of a megaton-scale low-altitude airburst event in the next decade. The first part of this LDRD final project report is a preprint of our proceedings paper associated with the plenary presentation at the Hypervelocity Impact Society 2007 Symposium in Williamsburg, Virginia (International Journal of Impact Engineering, in press). The paper summarizes discoveries associated with a series of 2D axially symmetric CTH simulations. The second part of the report contains slides from an invited presentation at the American Geophysical Union Fall 2007 meeting in San Francisco. The presentation summarizes the results of a series of 3D oblique impact simulations of the 1908 Tunguska explosion. Because of the brevity of this late-start project, the 3D results have not yet been written up for a peer-reviewed publication. We anticipate the opportunity to eventually run simulations that include the actual topography at Tunguska, at which time these results will be published.