Analytical solutions: flat duct problem
Abstract not provided.
The ability to detect Weapons of Mass Destruction biological agents rapidly and sensitively is vital to homeland security, spurring development of compact detection systems at Sandia and elsewhere. One such system is Sandia's microseparations-based µChemLab. Many bio-agents are serious health threats even at extremely low concentrations. A universal challenge for detection systems is therefore the efficient collection and selective transport of highly dilute bio-agents against the enormous background of benign particles and species ever present in the ambient environment. We have investigated development of a "front end" system for the collection, preconcentration, and selective transport of aerosolized biological agents from dilute (1-10 active particles per liter of air) atmospheric samples, to ultimate concentrations of approximately 20 active particles per microliter of liquid, for interface with microfluidic analysis and detection systems. Our approach employs a Sandia-developed aerosol particle-focusing microseparator array to focus size-selected particles into a mating microimpinger array of open microfluidic transport channels. Upon collection (i.e., impingement, submergence, and liquid suspension), microfluidic dielectrophoretic particle concentrators and sorters can be employed to further concentrate and selectively transport bio-agent particles to the sample-preparation stages of microfluidic analysis and detection systems. This report documents results in experimental testing, modeling and analysis, component design, and materials fabrication critical to establishing proof of principle for this collection "front end". Outstanding results have been achieved for the aerodynamic microseparator and for the post-collection dielectrophoretic concentrator and sorter. Results have also been obtained for the microimpinger, but particle trapping by surface tension at liquid surfaces has proven difficult to overcome.
Subsequent particle submergence into liquid suspension for microfluidic transport has been demonstrated, but only inefficiently, despite significant and varied effort. Importantly, the separate technologies whose development is described here (the inertial microseparator and the dielectrophoretic corduroy concentrator/sorter) should each prove independently useful in a variety of additional applications.
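The dielectrophoretic concentration step above relies on the standard time-averaged DEP force on a spherical particle. The sketch below is textbook DEP theory, not the corduroy device's actual design: the sign of the Clausius-Mossotti factor determines whether particles are drawn toward or pushed away from field maxima, which is the basis for trapping and sorting.

```python
import math

def clausius_mossotti(eps_p, eps_m):
    """Real-valued Clausius-Mossotti factor for a sphere in a medium.

    eps_p, eps_m: relative permittivities of particle and medium.
    Positive -> positive DEP (particle pulled toward high-field regions);
    negative -> negative DEP (particle pushed toward low-field regions).
    """
    return (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

def dep_force_scale(radius_m, eps_m_abs, cm_factor, grad_E2):
    """Time-averaged DEP force magnitude: F = 2*pi*r^3 * eps_m * Re(CM) * grad|E|^2."""
    return 2.0 * math.pi * radius_m**3 * eps_m_abs * cm_factor * grad_E2
```

Because the force scales with the particle radius cubed, micron-scale bio-particles respond far more strongly than sub-micron background species, which is one reason DEP suits this selective-transport role.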
Earth Power Resources, Inc. recently completed a combined rotary/core hole to a depth of 3,813 feet at its Hot Sulphur Springs Tuscarora Geothermal Power Project lease area located 70 miles north of Elko, Nevada. Previous geothermal exploration data were combined with geologic mapping and newly acquired seismic-reflection data to identify a northerly trending horst-graben structure approximately 2,000 feet wide by at least 6,000 feet long with up to 1,700 feet of vertical offset. The well (HSS-2) was successfully drilled through a shallow, thick sequence of altered Tertiary volcanics where previous exploration wells had severe hole-caving problems. The "tight-hole" drilling problems were reduced using drilling fluids consisting of polymer-based mud mixed with 2% potassium chloride (KCl) to reduce smectite-type clay swelling. Core from the 330 °F fractured geothermal reservoir system at depths of 2,950 feet indicated that 30% smectite-type clays existed in a fault-gouge zone where total loss of circulation occurred during coring. Smectite-type clays are not typically expected at temperatures above 300 °F. The fracture zone at 2,950 feet exhibited skin damage during injection testing, suggesting that the drilling fluids may have caused clay swelling and subsequent geothermal reservoir formation damage. The recent well drilling experience indicates that drilling problems in the shallow clays at Hot Sulphur Springs can be reduced. In addition, average penetration rates through the caprock system can be on the order of 25 to 35 feet per hour. This information has greatly reduced the original estimated well costs, which were based on previous exploration drilling efforts. Successful production formation drilling will depend on finding drilling fluids that will not cause formation damage in the smectite-rich fractured geothermal reservoir system. Information obtained at Hot Sulphur Springs may apply to other geothermal systems developed in volcanic settings.
The AlGaInN material system is used for virtually all advanced solid-state lighting and short-wavelength optoelectronic devices. Although metal-organic chemical vapor deposition (MOCVD) has proven to be the workhorse deposition technique, several outstanding scientific and technical challenges remain, which hinder progress and keep RD&A costs high. The three most significant MOCVD challenges are: (1) accurate temperature measurement; (2) reliable and reproducible p-doping (Mg); and (3) low dislocation density GaN material. To address challenge (1) we designed and tested (on a reactor mockup) a multiwafer, dual-wavelength, emissivity-correcting pyrometer (ECP) for AlGaInN MOCVD. This system simultaneously measures the reflectance (at 405 and 550 nm) and emissivity-corrected temperature for each individual wafer, with the platen signal entirely rejected. To address challenge (2) we measured the MgCp2 + NH3 adduct condensation phase diagram from 65-115 °C, at typical MOCVD concentrations. Results indicate that temperatures of 80-100 °C are required to prevent MgCp2 + NH3 adduct condensation. Modification and testing of our research reactor will not be complete until FY2005. A new commercial Veeco reactor was installed in early FY2004, and after qualification, growth experiments were conducted to improve the GaN quality using a delayed-recovery technique, which addresses challenge (3). Using this technique, the dislocation densities determined from x-ray diffraction were reduced from 2 x 10^9 cm^-2 to 4 x 10^8 cm^-2. We have also developed a model to simulate reflectance waveforms for GaN growth on sapphire.
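The reflectance-waveform modeling mentioned above can be illustrated with the standard single-film (Airy) interference formula at normal incidence. This is a sketch, not the authors' actual simulation code; the refractive indices used in the test (about 2.45 for GaN and 1.77 for sapphire near 405 nm) are assumed nominal values.

```python
import cmath
import math

def film_reflectance(n_film, n_sub, thickness_nm, wavelength_nm, n_ambient=1.0):
    """Normal-incidence reflectance of a non-absorbing film on a substrate.

    Standard two-interface (Airy) formula: interference between the
    ambient/film and film/substrate reflections makes R oscillate as the
    film grows, which is what an in-situ reflectance monitor records.
    """
    r12 = (n_ambient - n_film) / (n_ambient + n_film)   # ambient/film Fresnel coefficient
    r23 = (n_film - n_sub) / (n_film + n_sub)           # film/substrate Fresnel coefficient
    beta = 2.0 * math.pi * n_film * thickness_nm / wavelength_nm  # one-pass phase thickness
    phase = cmath.exp(-2j * beta)
    r = (r12 + r23 * phase) / (1.0 + r12 * r23 * phase)
    return abs(r) ** 2
```

At zero thickness the formula reduces exactly to the bare-substrate Fresnel reflectance, and the oscillation period in thickness is λ/(2n_film), the usual way growth rate is extracted from a reflectance trace.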
Proposed for publication in Physical Review B.
Structured adaptive mesh refinement (SAMR) methods are widely used in computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex, three-dimensional applications, but achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to use the underlying computer's resources efficiently. The goal of our research project is to improve scalability for general SAMR applications executing on general parallel computers. We are engineering a dynamically adaptive meta-partitioner, able to select and configure the most appropriate partitioning method at run-time based on system and application state. This presentation gives an overview of our project, reports on recent achievements, and discusses the project's significance in a wider scientific context.
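A meta-partitioner of this kind can be pictured as a run-time dispatch over candidate partitioning methods. The sketch below is purely hypothetical: the metrics, thresholds, and method names are illustrative and are not taken from the project.

```python
# Hypothetical sketch of run-time partitioner selection from coarse
# application/system state. Thresholds and names are illustrative only.

def select_partitioner(n_patches, imbalance, comm_fraction):
    """Pick a repartitioning strategy from run-time metrics.

    imbalance: max processor load divided by mean processor load.
    comm_fraction: fraction of step time spent in communication.
    """
    if imbalance < 1.05:
        return "none"                  # already balanced; skip repartitioning cost
    if comm_fraction > 0.5:
        return "space_filling_curve"   # cheap and locality-preserving
    if n_patches > 10_000:
        return "diffusion"             # incremental; limits data migration
    return "graph_partitioning"        # highest quality, highest cost
```

The point of the design is exactly this: no single partitioner wins in all regimes, so the selection logic itself becomes the engineered artifact, driven by measured state rather than a static choice.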
The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in the design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages over a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE and describing CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.
The formulation, implementation, and usage of a numerical solution verification code are described. This code uses the Richardson extrapolation procedure to estimate the order of accuracy and the error of a computational program's solution. It evaluates multiple solutions from numerical grid convergence studies to verify a numerical algorithm implementation. Analyses are performed on both structured and unstructured grid codes, and both finite volume and finite element discretization programs are examined. Two- and three-dimensional solutions are evaluated. Steady-state and transient solution analysis capabilities are present in the verification code. Multiple input databases are accepted. Benchmark options are included to allow for minimal solution validation capability as well as verification.
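For three solutions f_coarse, f_medium, f_fine on grids refined by a constant ratio r, Richardson extrapolation gives the observed order of accuracy p = ln[(f_coarse - f_medium)/(f_medium - f_fine)] / ln(r) and a fine-grid error estimate (f_medium - f_fine)/(r^p - 1). A minimal sketch of that calculation follows; the function names are illustrative, not the verification code's actual API.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three solutions on grids refined
    by a constant ratio r (monotone convergence assumed)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_error(f_medium, f_fine, r, p):
    """Richardson estimate of the fine-grid discretization error."""
    return (f_medium - f_fine) / (r**p - 1.0)
```

For example, a quantity behaving as f(h) = 1 + 0.5 h^2 sampled at h = 0.4, 0.2, 0.1 yields an observed order of exactly 2 and an error estimate of 0.005, matching the true fine-grid error.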
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem, in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This report presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
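The deterministic formulation the report criticizes can be made concrete with a minimal sketch: calibration as plain least-squares minimization, with no accounting for model or data error. The function name and the grid-search approach are illustrative, not from the report.

```python
def calibrate_least_squares(model, xs, ys, thetas):
    """Deterministic calibration: return the candidate parameter that
    minimizes the sum of squared differences between model output and
    data. This treats the model as the "true" representation of reality,
    with no error bars on either side -- exactly the formulation the
    report argues is incomplete."""
    def sse(theta):
        return sum((model(x, theta) - y) ** 2 for x, y in zip(xs, ys))
    return min(thetas, key=sse)
```

CUU replaces the single best-fit point this returns with a characterization of parameter uncertainty (e.g., a Bayesian posterior), so that calibrated predictions inherit honest error bars.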
This report presents a modification of a previous model for the statistical distribution of linear antenna impedance. With this modification a simple formula is determined which yields accurate results for all ratios of modal spectral width to spacing. It is shown that the reactance formula approaches the known unit Lorentzian in the lossless limit.
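In the lossless limit the reactance distribution approaches the unit Lorentzian, P(x) = 1/[pi (1 + x^2)]. A quick numerical sketch confirming that this density is properly normalized; the quadrature scheme and integration bounds are illustrative.

```python
import math

def unit_lorentzian(x):
    """Unit Lorentzian density, P(x) = 1 / (pi * (1 + x^2))."""
    return 1.0 / (math.pi * (1.0 + x * x))

def integrate(f, a, b, n=200_000):
    """Simple midpoint-rule quadrature on [a, b] with n panels."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h
```

The heavy tails are the notable feature: the truncated integral over [-1000, 1000] still falls short of 1 by about (2/pi)/1000, and the distribution has no finite variance, which matters when interpreting measured impedance statistics.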
Sandia National Laboratories performs many expensive tests using inertial measurement units (IMUs): systems that use accelerometers, gyroscopes, and other sensors to measure flight dynamics in three dimensions. For the purposes of this report, the metrics used to evaluate an IMU are cost, size, performance, resolution, upgradeability, and testing. A precision IMU is very expensive, costing up to hundreds of thousands of dollars. The goals and results of this project are thus as follows: (1) Examine the data flow in an IMU and determine a generic IMU design. (2) Discuss a high-cost IMU implementation and its theoretically achievable results. (3) Discuss design modifications that would save money in suitable applications. (4) Design and implement a low-cost IMU and discuss its theoretically achievable results. (5) Test the low-cost IMU and compare theoretical results with empirical results. (6) Construct a more streamlined printed circuit board design, reducing noise, increasing capabilities, and producing a self-contained unit. Using these results, we can compare a high-cost IMU with a low-cost IMU using the metrics above. Further, we can examine and suggest situations where a low-cost IMU could be used instead of a high-cost IMU to save cost, size, or both.
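The generic IMU data flow of goal (1) reduces, at its core, to integrating sensor samples into velocity and position. Below is a deliberately minimal one-axis sketch using simple Euler integration; sensor bias, noise, and rotation, which dominate real IMU error growth and drive the cost difference between units, are omitted.

```python
def dead_reckon_1d(accels, dt, v0=0.0, x0=0.0):
    """Integrate a stream of accelerometer samples twice (Euler steps)
    to recover velocity and position along one axis.

    accels: acceleration samples in m/s^2, taken every dt seconds.
    Returns the velocity and position histories as two lists.
    """
    v, x = v0, x0
    vs, xs = [], []
    for a in accels:
        v += a * dt   # acceleration -> velocity
        x += v * dt   # velocity -> position
        vs.append(v)
        xs.append(x)
    return vs, xs
```

Because position comes from a double integration, any constant accelerometer bias grows quadratically in time, which is why high-end IMUs spend most of their cost budget on sensor stability rather than raw sensitivity.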
The absence of agreed definitions and metrics for supercomputer RAS (reliability, availability, and serviceability) obscures meaningful discussion of the issues involved and hinders their solution. This paper seeks to foster a common basis for communication about supercomputer RAS by proposing a system state model, definitions, and measurements. These are modeled after the SEMI E10 specification, which is widely used in the semiconductor manufacturing industry.
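A state-model measurement of the kind proposed can be sketched as availability accounting over logged state durations, in the spirit of SEMI E10. The state names and up/down classification below are illustrative and are not the paper's actual definitions.

```python
def equipment_availability(state_hours):
    """Availability as the fraction of total time spent in 'up' states,
    given a log of hours accumulated per system state. The partition of
    states into up vs. down is illustrative (E10-style), not definitive.
    """
    up_states = ("productive", "standby", "engineering")
    total = sum(state_hours.values())
    uptime = sum(h for s, h in state_hours.items() if s in up_states)
    return uptime / total
```

The value of fixing such a state model is that two sites logging the same events then compute the same availability number, which is exactly the common basis for communication the paper argues is missing.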