This thesis presents the theory, design, fabrication, and testing of the microvalves and columns necessary in a pressure- and temperature-programmed micro gas chromatograph (µGC). Two microcolumn designs are investigated: a bonded Si-glass column having a rectangular cross section and a vapor-deposited silicon oxynitride (SiON) column having a roughly circular cross section. Both microcolumns contain integrated heaters and sensors for rapid, controlled heating. The 3.2 cm × 3.2 cm, 3 m-long Si-glass column, coated with a non-polar polydimethylsiloxane (PDMS) stationary phase, separates 30 volatile organic compounds (VOCs) in less than 6 min. This is the most efficient micromachined column reported to date, producing greater than 4000 plates/m. The 2.7 mm × 1.4 mm SiON column eliminates the glass sealing plate and silicon substrate using deposited dielectrics and is the lowest-power and fastest GC column reported to date; it requires only 11 mW to raise the column temperature by 100 °C and has a response time of 11 s and a natural temperature ramp rate of 580 °C/min. A 1 m-long PDMS-coated SiON microcolumn separates 10 VOCs in 52 s. A system-based design approach was used for both columns.
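The reported thermal figures imply a simple lumped thermal model. As a rough illustration (a first-order thermal-circuit assumption, not a calculation from the thesis itself), the 11 mW needed for a 100 °C rise and the 11 s response time together fix a lumped thermal conductance and heat capacity:

```python
# Back-of-envelope check of the SiON column's reported thermal figures,
# assuming a first-order lumped thermal circuit (an assumption, not the
# thesis's model).
P = 11e-3            # W, heater power for a 100 C steady-state rise
dT = 100.0           # C, temperature rise
tau = 11.0           # s, reported thermal response time

G = P / dT           # lumped thermal conductance to ambient, W/C
C = G * tau          # lumped heat capacity, J/C
```

The small conductance (about 0.1 mW/°C) is consistent with the abstract's point that removing the glass plate and silicon substrate minimizes the heated thermal mass.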
Computer Methods in Applied Mechanics and Engineering
Bartlett, Roscoe B.; Heinkenschloss, Matthias; Ridzal, Denis; van Bloemen Waanders, Bart G.
We present an optimization-level domain decomposition (DD) preconditioner for the solution of advection-dominated elliptic linear-quadratic optimal control problems, which arise in many science and engineering applications. The DD preconditioner is based on a decomposition of the optimality conditions for the elliptic linear-quadratic optimal control problem into smaller subdomain optimality conditions with Dirichlet boundary conditions for the states and the adjoints on the subdomain interfaces. These subdomain optimality conditions are coupled through Robin transmission conditions for the states and the adjoints. The parameters in the Robin transmission condition depend on the advection. This decomposition leads to a Schur complement system in which the unknowns are the state and adjoint variables on the subdomain interfaces. The Schur complement operator is the sum of subdomain Schur complement operators, the application of which is shown to correspond to the solution of subdomain optimal control problems, which are essentially smaller copies of the original optimal control problem. We show that, under suitable conditions, the application of the inverse of the subdomain Schur complement operators requires the solution of a subdomain elliptic linear-quadratic optimal control problem with Robin boundary conditions for the state. Numerical tests for problems with distributed and with boundary control show that the dependence of the preconditioners on mesh size and subdomain size is comparable to that of their counterparts applied to a single advection-dominated equation. These tests also show that the preconditioners are insensitive to the size of the control regularization parameter.
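The interface elimination described above can be illustrated on a generic block linear system. The sketch below is a toy dense example, not the paper's optimality system: it forms the Schur complement in the interface unknowns, solves for them, and back-substitutes for the interior unknowns.

```python
import numpy as np

# Toy illustration of a Schur complement solve: eliminate the interior
# unknowns of a block system to obtain a smaller system in the interface
# unknowns. The matrix is a random diagonally-shifted example, not an
# optimal-control operator.
rng = np.random.default_rng(0)
n_i, n_g = 6, 2                                  # interior / interface sizes
A = rng.standard_normal((n_i + n_g, n_i + n_g)) + 5 * np.eye(n_i + n_g)
b = rng.standard_normal(n_i + n_g)

Aii, Aig = A[:n_i, :n_i], A[:n_i, n_i:]
Agi, Agg = A[n_i:, :n_i], A[n_i:, n_i:]
bi, bg = b[:n_i], b[n_i:]

S = Agg - Agi @ np.linalg.solve(Aii, Aig)        # Schur complement
xg = np.linalg.solve(S, bg - Agi @ np.linalg.solve(Aii, bi))  # interface solve
xi = np.linalg.solve(Aii, bi - Aig @ xg)         # back-substitute interior

x_direct = np.linalg.solve(A, b)                 # reference full solve
```

The reassembled solution `(xi, xg)` matches the direct solve; in the paper's setting the interior solves correspond to subdomain optimal control problems.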
With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables to sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that we believe should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
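A minimal sketch of a confidence-interval-based comparison is shown below. It is in the spirit of, but not identical to, the metrics developed here: the t critical value, the measurements, and the model prediction are all illustrative placeholders.

```python
import math
import statistics

def validation_metric(model_value, measurements, t_crit=2.776):
    """Compare a model prediction with repeated experimental measurements.

    Returns the estimated model error (|model - experimental mean|) together
    with the half-width of the confidence interval on that mean. t_crit
    defaults to the two-sided 95% t value for 5 measurements (4 dof) -- an
    illustrative choice, not the paper's exact construction.
    """
    n = len(measurements)
    mean = statistics.fmean(measurements)
    half_width = t_crit * statistics.stdev(measurements) / math.sqrt(n)
    return abs(model_value - mean), half_width

# Hypothetical data: one model prediction, five repeat measurements.
error, hw = validation_metric(10.0, [9.2, 9.8, 10.4, 9.5, 9.9])
```

Reporting the error alongside the interval half-width makes the role of experimental measurement uncertainty in the accuracy assessment explicit, which is the interpretability point the abstract emphasizes.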
Pulsed Arrested Spark Discharge (PASD) is a Sandia National Laboratories patented, non-destructive wiring-system diagnostic developed to detect defects in aging wiring systems in the commercial aircraft fleet. PASD was previously demonstrated on relatively controlled-geometry wiring, such as coaxial cables and shielded twisted-pair wiring, through a contract with the U.S. Navy, and is discussed in a Sandia National Laboratories report, SAND2001-3225, "Pulsed Arrested Spark Discharge (PASD) Diagnostic Technique for the Location of Defects in Aging Wiring Systems". This report describes an expansion of the earlier work by applying the PASD technique to unshielded twisted-pair and discrete wire configurations commonly found in commercial aircraft. This wiring is characterized by higher impedances as well as relatively non-uniform impedance profiles, which have proven challenging for existing aircraft wiring diagnostics. Under a three-year contract let by the Federal Aviation Administration, Interagency Agreement DTFA-03-00X90019, the technology was further developed for application to aging commercial aircraft wiring systems. This report describes the results of the FAA program, with discussion of previous work conducted under U.S. Department of Defense funding.
Contemporary three-dimensional numerical sediment transport models are often computationally expensive because of their complexity and thus a compromise must be struck between accurately modeling sediment transport and the number of effective sediment grain (particle) size classes to represent in such a model. The Environmental Fluid Dynamics Code (EFDC) was used to simulate the experimental results of previous researchers who investigated sediment erosion and gradation around a 180° bend subject to transient flow. The EFDC model was first calibrated using the eight distinct particle size classes reported in the physical experiment to find the best erosion formulations to use. Once the best erosion formulations and parameters were ascertained, numerical simulations were carried out for each experimental run using a single effective particle size. Four techniques for evaluating the effective particle size were investigated. Each procedure yields comparable effective particle sizes within a factor of 1.5 of the others. Model results indicate that particle size as determined by the weighted critical shear velocity most faithfully reproduced the experimental results for erosion and deposition depths. Subsequently, model runs were conducted with different numbers of effective particle size classes to determine the optimal number that yields an accurate estimate for noncohesive sediment transport. Optimal, herein, means that numerical model results are reasonably representative of the experimental data with the fewest effective particle size classes used, thereby maximizing computational efficiency. Although modeling with more size classes can be equally accurate, results from this study indicate that using three effective particle size classes to estimate the distribution of sediment sizes is optimum.
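Two of the many possible ways to collapse a grain-size distribution to a single effective size can be sketched as mass-fraction-weighted arithmetic and geometric mean diameters. The diameters and fractions below are invented, and the study's preferred weighting by critical shear velocity is not reproduced here.

```python
import math

# Hypothetical eight-class grain-size distribution (diameters in mm and
# mass fractions are placeholders, not the experiment's values).
diameters_mm = [0.06, 0.12, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
mass_fracs   = [0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.07, 0.03]

# Mass-fraction-weighted arithmetic mean diameter.
d_arith = sum(f * d for f, d in zip(mass_fracs, diameters_mm))

# Mass-fraction-weighted geometric mean diameter.
d_geom = math.exp(sum(f * math.log(d) for f, d in zip(mass_fracs, diameters_mm)))
```

As the abstract notes for the four techniques actually investigated, different definitions give comparable but not identical effective sizes; here the arithmetic and geometric means differ by about a factor of two for this skewed distribution.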
This report represents the final product of a background literature review conducted for the Nuclear Waste Management Organization of Japan (NUMO) by Sandia National Laboratories, Albuquerque, New Mexico, USA. Internationally, research on hydrological and transport processes in the context of high-level waste (HLW) repository performance has been extensive. However, most of these studies have been conducted for sites within tectonically stable regions. Therefore, in support of NUMO's goal of selecting a site for an HLW repository, this literature review was conducted to assess the applicability of the output of some of these studies to the geological environment of Japan. Specifically, the review consists of two main tasks. The first was to review the major documents of the main HLW repository programs around the world to identify the most important hydrologic and transport parameters and processes in each of these programs. This review was to assess the relative importance of processes and measured parameters to site characterization through interpretation of the existing sensitivity analyses and expert judgment in these documents. The second task was to convene a workshop to discuss the findings of Task 1 and to prioritize hydrologic and transport parameters in the context of the geology of Japan. This report details the results and conclusions of both tasks.
An atmospheric pressure approach to growth of bulk group III-nitrides is outlined. Native III-nitride substrates for optoelectronic and high-power, high-frequency electronics are desirable to enhance the performance and reliability of these devices; currently, such materials are available in research quantities only for GaN, and are unavailable in the case of InN. The thermodynamics and kinetics of the reactions associated with traditional crystal growth techniques place these activities on the extreme edges of experimental physics. The technique described herein relies on the production of the nitride precursor (N³⁻) by chemical and/or electrochemical methods in a molten halide salt. This nitride ion is then reacted with group III metals in such a manner as to form the bulk nitride material. The work performed during the period of funding (July 2004-September 2005) focused on the initial measurement of the solubility of GaN in molten LiCl as a function of temperature, the construction of electrochemical cells, the modification of a commercial glove box (required for handling very hygroscopic LiCl), and on securing intellectual property for the technique.
A polyurethane foam used in the H1616 shipping container provides impact energy absorption and fire protection in hypothetical accident conditions. This study was undertaken to determine the estimated lifetime of the foam as a function of temperature. The foams were aged at temperatures ranging from 65 °C to 95 °C for periods ranging from 6 months to 6 years. Both destructive and nondestructive Dynamic Mechanical Analyses (DMA) were used to evaluate the performance of the foams as a function of time and temperature. In addition, color changes and weight loss were recorded. Three properties of the foam show a definite trend with aging time: weight loss, nondestructive G' (measured at 100 °C), and glassy G'. A time-temperature superposition analysis shows a reasonable trend with temperature for both the weight loss and the glassy G'. The acceleration factors for weight loss and glassy G' did not, however, correlate with each other. A prediction of the behavior of G' as a function of aging time at 25 °C was derived from an extrapolated value of the acceleration factor. In addition to providing a quantitative estimate of the aging process, the curve also provides a description of its qualitative features. First, the aging process appears to proceed smoothly as a function of aging time: there are no discontinuities or sharp breaks in the glassy G' as a function of aging time at any of the temperatures. Second, the rate of change of the glassy G' appears to decrease as the aging time increases.
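The extrapolation of an acceleration factor from the aging temperatures down to 25 °C can be sketched with an Arrhenius-style model. The activation energy used below is a placeholder, not the report's fitted value.

```python
import math

def acceleration_factor(T_aging_C, T_use_C=25.0, Ea_kJ_per_mol=90.0):
    """Arrhenius-style acceleration factor between an aging temperature
    and a service temperature. Ea is an illustrative assumption, not the
    value fitted from the H1616 foam data."""
    R = 8.314e-3                                   # kJ/(mol K)
    Ta = T_aging_C + 273.15
    Tu = T_use_C + 273.15
    return math.exp((Ea_kJ_per_mol / R) * (1.0 / Tu - 1.0 / Ta))

af_95 = acceleration_factor(95.0)   # aging at 95 C relative to service at 25 C
af_65 = acceleration_factor(65.0)   # aging at 65 C relative to service at 25 C
```

With this (assumed) activation energy, a 95 °C test accelerates aging by roughly three orders of magnitude relative to 25 °C service, which is why multi-decade lifetimes can be probed with a six-year test matrix.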
The Department of Energy has assigned to Sandia National Laboratories the responsibility of producing a Safety Analysis Report (SAR) for the plutonium-dioxide fueled Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) proposed to be used in the Mars Science Laboratory (MSL) mission. The National Aeronautics and Space Administration (NASA) is anticipating a launch in fall of 2009, and the SAR will play a critical role in the launch approval process. As in past safety evaluations of MMRTG missions, a wide range of potential accident conditions differing widely in probability and severity must be considered, and the resulting risk to the public will be presented in the form of probability distribution functions of health effects in terms of latent cancer fatalities. The basic descriptions of accident cases will be provided by NASA in the MSL SAR Databook for the mission, and on the basis of these descriptions, Sandia will apply a variety of sophisticated computational simulation tools to evaluate the potential release of plutonium dioxide, its transport to human populations, and the consequent health effects. The first step in carrying out this project is to evaluate the existing computational analysis tools (computer codes) for suitability to the analysis and, when appropriate, to identify areas where modifications or improvements are warranted. The overall calculation of health risks can be divided into three levels of analysis. Level A involves detailed simulations of the interactions of the MMRTG or its components with the broad range of insults (e.g., shrapnel, blast waves, fires) posed by the various accident environments. There are a number of candidate codes for this level; they are typically high-resolution computational simulation tools that capture details of each type of interaction and that can predict damage and plutonium dioxide release for a range of choices of controlling parameters.
Level B utilizes these detailed results to study many thousands of possible event sequences and to build up a statistical representation of the releases for each accident case. A code to carry out this process will have to be developed or adapted from previous MMRTG missions. Finally, Level C translates the release (or "source term") information from Level B into public risk by applying models for atmospheric transport and the health consequences of exposure to the released plutonium dioxide. A number of candidate codes for this level of analysis are available. This report surveys the range of available codes and tools for each of these levels and makes recommendations for which choices are best for the MSL mission. It also identifies areas where improvements to the codes are needed. In some cases a second tier of codes may be identified to provide supporting or clarifying insight about particular issues. The main focus of the methodology assessment is to identify a suite of computational tools that can produce a high-quality SAR that can be successfully reviewed by external bodies (such as the Interagency Nuclear Safety Review Panel) on the schedule established by NASA and DOE.
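The Level B idea of building a statistical representation of releases over many event sequences is, at its core, Monte Carlo sampling. The toy sketch below uses an invented damage probability and release-fraction distribution purely to show the structure of such a calculation.

```python
import random

# Toy Monte Carlo over accident event sequences. The 5% damage
# probability and uniform release-fraction model are placeholders,
# not values from any SAR analysis.
random.seed(1)

def sample_release():
    damaged = random.random() < 0.05       # assumed per-sequence damage probability
    if not damaged:
        return 0.0
    return random.uniform(0.0, 1e-3)       # assumed conditional release fraction

releases = [sample_release() for _ in range(100_000)]

# Summary statistics of the sampled source-term distribution.
p_release = sum(r > 0 for r in releases) / len(releases)
max_release = max(releases)
```

In the real analysis each "sequence" would draw on the Level A damage predictions, and the resulting release distribution would feed the Level C transport and consequence models.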
This report describes key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions at Sandia National Laboratories. While QMU is a broad process and methodology for generating critical technical information to be used in stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, we discuss the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, the need to separate aleatory and epistemic uncertainty in QMU, and the risk-informed decision making that is best suited for decisive application of QMU. The paper is written at a high level, but provides a systematic bibliography of useful papers for the interested reader to deepen their understanding of these ideas.
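The separation of aleatory and epistemic uncertainty mentioned above is often implemented as nested sampling: an outer loop over epistemic parameters and an inner loop over aleatory variability at each fixed epistemic state. A minimal illustrative sketch, with arbitrary distributions that are not drawn from any stockpile analysis:

```python
import random
import statistics

# Nested (two-loop) sampling sketch: the outer loop varies an
# epistemically uncertain parameter (here, an unknown mean); the inner
# loop samples aleatory variability around it. All distributions are
# illustrative placeholders.
random.seed(0)

outer_means = []
for _ in range(200):                                   # epistemic loop
    mu = random.uniform(9.0, 11.0)                     # uncertain mean
    inner = [random.gauss(mu, 0.5) for _ in range(500)]  # aleatory loop
    outer_means.append(statistics.fmean(inner))

# The spread across outer_means reflects epistemic uncertainty; the
# scatter within each inner sample reflects aleatory variability.
lo, hi = min(outer_means), max(outer_means)
```

Keeping the two loops distinct is what lets a Best Estimate Plus Uncertainty statement report a margin against a family of distributions rather than a single blended one.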
Pulsed radar systems suffer from range ambiguities; that is, echoes from pulses transmitted at different times can arrive at the receiver simultaneously. Conventional mitigation techniques are not always adequate. However, pulse modulation schemes exist that separate ambiguous ranges in Doppler space, allowing straightforward filtering of problematic ambiguous returns.
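One such scheme is pulse-to-pulse phase coding. The sketch below (with arbitrary demonstration parameters, not taken from this report) applies a quadratic transmit-phase sequence so that, after demodulation, second-time-around echoes land half the pulse repetition frequency away from first-trip echoes in Doppler.

```python
import numpy as np

# Pulse-to-pulse quadratic phase coding: phase of pulse n is (pi/2)*n^2.
# A first-trip echo carries the current pulse's phase; a second-trip
# (range-ambiguous) echo carries the previous pulse's phase. After
# demodulating by the transmit phase, the first-trip echo is constant
# (Doppler bin 0) while the second-trip echo alternates sign and lands
# in bin N/2, i.e. offset by half the PRF.
N = 64
n = np.arange(N)
phase = 0.5 * np.pi * n**2

first_trip  = np.exp(1j * phase)                  # echo of current pulse
second_trip = np.exp(1j * 0.5 * np.pi * (n - 1)**2)  # echo of previous pulse

rx = (first_trip + 0.5 * second_trip) * np.exp(-1j * phase)  # demodulate
spectrum = np.abs(np.fft.fft(rx))                 # slow-time Doppler spectrum
```

Once the ambiguous return is isolated at the half-PRF offset, a simple Doppler filter removes it, which is the "easy filtering" the abstract refers to.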