Sandia Labs software development success
Proposed for publication in Software Development Magazine.
Abstract not provided.
Proposed for publication in Software Development Magazine.
Abstract not provided.
Proposed for publication in Analytical Chemistry.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS
A classification system is developed to identify driving situations from labeled examples of previous occurrences. The purpose of the classifier is to provide physical context to a separate system that mitigates unnecessary distractions, allowing the driver to maintain focus during periods of high difficulty. We asked different users to indicate their perceptions of the current situation while they watched videos of driving. We then trained a classifier to emulate the human recognition of driving situations using the Sandia Cognitive Framework. In unstructured conditions, such as driving in urban areas and on the German autobahn, the classifier was able to correctly predict human perceptions of driving situations over 95% of the time. This paper focuses on the learning algorithms used to train the driving-situation classifier. Future work will reduce the human effort needed to train the system. © 2005 IEEE.
43rd AIAA Aerospace Sciences Meeting and Exhibit - Meeting Papers
A new multidomain/multiphysics computational framework for optimal control of aeroacoustic noise has been developed based on a near-field compressible Navier-Stokes solver coupled with a far-field linearized Euler solver, both based on a discontinuous Galerkin formulation. In this approach, the coupling of near- and far-field domains is achieved by weakly enforcing continuity of normal fluxes across a coupling surface that encloses all nonlinearities and noise sources. For optimal control, gradient information is obtained by the solution of an appropriate adjoint problem that involves the propagation of adjoint information from the far field to the near field. This computational framework has been successfully applied to study optimal boundary control of blade-vortex interaction, which is a significant noise source for helicopters on approach to landing. In the model problem presented here, the noise propagated toward the ground is reduced by 12 dB.
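For readers less familiar with adjoint-based optimization, the generic pattern behind the gradient computation described above is summarized below (a standard sketch, not the paper's specific discretization or coupling; J, R, u, and g are generic symbols for the objective, discretized flow residual, state, and control):

```latex
% Generic adjoint-gradient pattern (illustrative, not the paper's formulation):
% minimize a noise objective J(u,g) over controls g subject to R(u,g) = 0.
\min_{g}\; J(u,g) \quad \text{s.t.} \quad R(u,g)=0,
\qquad
\left(\frac{\partial R}{\partial u}\right)^{T} \lambda
   = -\left(\frac{\partial J}{\partial u}\right)^{T},
\qquad
\frac{dJ}{dg} = \frac{\partial J}{\partial g} + \lambda^{T}\frac{\partial R}{\partial g}.
```

One adjoint solve therefore supplies the gradient with respect to all boundary-control parameters at roughly the cost of one additional flow solve, which is what makes gradient-based optimal control of the coupled near-/far-field problem tractable.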
Proposed for publication in AIAA.
Abstract not provided.
Abstract not provided.
An electrically programmable surface acoustic wave (SAW) correlator was recently completed, from design through small-scale production, in support of low-power space-based communications for NASA. Three different versions of this RF microsystem were built to satisfy design requirements and to overcome packaging and system-reliability issues. Flip-chip packaging and conventional thick-film hybrid assembly techniques are compared in the fabrication of this microsystem.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in TMS Letters.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in the Physical Review B.
The apparent metal-insulator transition is observed in a high-quality two-dimensional electron system (2DES) in the strained Si quantum well of a Si/Si₁₋ₓGeₓ heterostructure with mobility μ = 1.9 × 10⁵ cm²/V·s at density n = 1.45 × 10¹¹ cm⁻². The critical density, at which the thermal coefficient of the low-T resistivity changes sign, is n_c ≈ 0.32 × 10¹¹ cm⁻², a very low value obtained in Si-based 2D systems. The in-plane magnetoresistivity ρ(B_ip) was measured in the density range 0.35 × 10¹¹ < n < 1.45 × 10¹¹ cm⁻², where the 2DES shows the metallic-like behavior. It first increases and then saturates to a finite value ρ(B_c) for B_ip > B_c, with B_c the full spin-polarization field. Surprisingly, ρ(B_c)/ρ(0) ≈ 1.8 for all the densities, even down to n = 0.35 × 10¹¹ cm⁻², only 10% higher than n_c. This is different from that in clean Si metal-oxide-semiconductor field-effect transistors, where the enhancement is strongly density dependent and ρ(B_c)/ρ(0) appears to diverge as n → n_c. Finally, we show that in the fully spin-polarized regime, dependent on the 2DES density, the temperature dependence of ρ(B_ip) can be either metallic-like or insulating.
This article describes recent improvements in mapping a highly representative set of the world-wide scientific literature. The process described in this article extends existing work in this area in three major ways. First, we argue that a separate structural analysis of current literature vs. reference literature is required for R&D planning. Second, visualization software is used to improve coverage of the literature while maintaining structural integrity. Third, quantitative techniques for measuring the structural integrity of a map are introduced. Maps with high structural integrity, covering far more of the available literature, are presented.
Proposed for publication in the Journal of Quantitative Spectroscopy and Radiative Transfer.
A dynamic hohlraum is created when an annular z-pinch plasma implodes onto a cylindrical 0.014 g/cc, 6-mm-diameter CH₂ foam. The impact launches a radiating shock that propagates toward the axis at ≈350 μm/ns. The radiation trapped by the tungsten z-pinch plasma forms a ≈200 eV hohlraum that provides X-rays for indirect drive inertial confinement fusion capsule implosion experiments. We are developing the ability to diagnose the hohlraum interior using emission and absorption spectroscopy of Si atoms added as a tracer to the central portion of the foam. Time- and space-resolved Si spectra are recorded with an elliptical crystal spectrometer viewing the cylindrical hohlraum end-on. A rectangular aperture at the end of the hohlraum restricts the field of view so that the 1D spectrometer resolution corresponds approximately to the hohlraum radial direction. This enables distinguishing between spectra from the unshocked radiation-heated foam and from the shocked foam. Typical spectral lines observed include the Si Lyα with its He-like satellites and the He-like resonance sequence including Heα, Heβ, and Heγ, along with some of their associated Li-like satellites. Work is in progress to infer the hohlraum conditions using collisional-radiative modeling that accounts for the radiation environment and includes both opacity effects and detailed Stark broadening calculations. These 6-mm-scale radiation-heated plasmas might eventually also prove suitable for testing Stark broadening line profile calculations or for opacity measurements.
Abstract not provided.
Proposed for publication in the Physical Review B.
Abstract not provided.
Abstract not provided.
IFPACK provides a suite of object-oriented algebraic preconditioners for use with preconditioned iterative solvers. IFPACK constructors expect the (distributed) real sparse matrix to be an Epetra RowMatrix object. IFPACK can be used to define point and block relaxation preconditioners, various flavors of incomplete factorizations for symmetric and non-symmetric matrices, and one-level additive Schwarz preconditioners with variable overlap. Exact LU factorizations of the local submatrix can be accessed through the AMESOS package. IFPACK, as part of the Trilinos Solver Project, interacts well with other Trilinos packages. In particular, IFPACK objects can be used as preconditioners for AZTECOO and as smoothers for ML. IFPACK is written mainly in C++, but only a limited subset of C++ features is used in order to enhance portability.
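As an illustration of the workflow described above, here is a minimal C++ sketch that creates an IFPACK preconditioner through its factory and passes it to AZTECOO; the preconditioner type, overlap, and fill level shown are illustrative choices, not recommendations from the package authors.

```cpp
// Minimal IFPACK + AztecOO usage sketch (illustrative parameter values; consult
// the IFPACK documentation for the full set of options).
#include "Epetra_CrsMatrix.h"
#include "Epetra_LinearProblem.h"
#include "Ifpack.h"
#include "Ifpack_Preconditioner.h"
#include "AztecOO.h"
#include "Teuchos_ParameterList.hpp"
#include "Teuchos_RCP.hpp"

void SolveWithIfpack(Epetra_CrsMatrix& A, Epetra_LinearProblem& problem)
{
  Ifpack Factory;                               // factory for IFPACK preconditioners
  Teuchos::RCP<Ifpack_Preconditioner> Prec =
      Teuchos::rcp(Factory.Create("ILU", &A, /*overlap=*/1));

  Teuchos::ParameterList List;
  List.set("fact: level-of-fill", 1);           // incomplete-factorization fill level
  Prec->SetParameters(List);
  Prec->Initialize();                           // symbolic (graph-only) setup
  Prec->Compute();                              // numeric factorization

  AztecOO Solver(problem);                      // problem holds A, x, and b
  Solver.SetPrecOperator(Prec.get());           // IFPACK object used as the preconditioner
  Solver.SetAztecOption(AZ_solver, AZ_gmres);
  Solver.Iterate(1000, 1e-8);                   // max iterations, residual tolerance
}
```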
Proposed for publication in the Biophysical Journal.
Fluorescence correlation spectroscopy (FCS) is used to examine mobility of labeled probes at specific sites in supported bilayers consisting of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) lipid domains in 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC). Those sites are mapped beforehand with simultaneous atomic force microscopy and submicron confocal fluorescence imaging, allowing characterization of probe partitioning between gel DPPC and disordered liquid DOPC domains with corresponding topography of domain structure. We thus examine the relative partitioning and mobility in gel and disordered liquid phases for headgroup- and tailgroup-labeled GM1 ganglioside probes and for headgroup- and tailgroup-labeled phospholipid probes. For the GM1 probes, large differences in mobility between fluid and gel domains are observed; whereas unexpected mobility is observed in submicron gel domains for the phospholipid probes. We attribute the latter to domain heterogeneities that could be induced by the probe. Furthermore, fits to the FCS data for the phospholipid probes in the DOPC fluid phase require two components (fast and slow). Although proximity to the glass substrate may be a factor, local distortion of the probe by the fluorophore could also be important. Overall, we observe nonideal aspects of phospholipid probe mobility and partitioning that may not be restricted to supported bilayers.
Abstract not provided.
Proposed for publication in the SIAM Journal on Matrix Analysis.
We consider linear systems arising from the use of the finite element method for solving a certain class of linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and in particular it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the quality of our approximation, which controls the number of iterations in the preconditioned iterative solver, depends primarily on a mesh quality measure but not on the problem size or shape of the domain.
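For background, the support-theory quantities referred to above can be summarized as follows (a generic statement of the framework, not the paper's specific bound):

```latex
% Generic support-theory background (not this paper's specific result):
% for symmetric positive semidefinite A and B with compatible null spaces,
\sigma(A,B) = \min\{\, t : tB - A \succeq 0 \,\},
\qquad
\kappa(A,B) = \sigma(A,B)\,\sigma(B,A),
% where kappa(A,B) bounds the generalized condition number that governs the
% convergence of the preconditioned iteration.
```

In these terms, the result stated above amounts to exhibiting a symmetric diagonally dominant B for which σ(A,B)σ(B,A) is controlled by a mesh quality measure rather than by the problem size or the shape of the domain.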
The authors explore various possible approaches for generating lowest order and higher order bases for modeling surface currents and their divergence for moment method application to integral equations. The bases developed are defined on curved triangular and quadrilateral elements. All the bases are conveniently defined in parent element coordinates, and each expansion function spans one or two patches.
This article examines how well one can predict the importance of a current paper (a paper that is recently published in the literature). We look at three factors--journal importance, reference importance and author reputation. Citation-based measures of importance are used for all variables. We find that journal importance is the best predictor (explaining 22.3% out of a potential 29.1% of the variance in the data), and that this correlation value varies significantly by discipline. Journal importance is a better predictor of citation in Computer Science than in any other discipline. While the finding supports the present policy of using journal impact statistics as a surrogate for the importance of current papers, it calls into question the present policy of equally weighting current documents in text-based analyses. We suggest that future researchers take into account the expected importance of a document when attempting to describe the cognitive structure of a field.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Lecture Notes in Computer Science
In large-scale parallel applications a graph coloring is often carried out to schedule computational tasks. In this paper, we describe a new distributed-memory algorithm for doing the coloring itself in parallel. The algorithm operates in an iterative fashion; in each round vertices are speculatively colored based on limited information, and then a set of incorrectly colored vertices, to be recolored in the next round, is identified. Parallel speedup is achieved in part by reducing the frequency of communication among processors. Experimental results on a PC cluster using up to 16 processors show that the algorithm is scalable. © Springer-Verlag Berlin Heidelberg 2005.
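The speculate-then-repair structure can be illustrated with the small serial sketch below; the distributed algorithm additionally partitions vertices across processor memories and exchanges boundary colors with MPI between rounds, none of which is shown here. In the serial setting the first round already yields a valid coloring, so the conflict list stays empty; conflicts arise only when stale information about remotely owned neighbors is used.

```cpp
// Serial illustration of iterative speculative coloring with conflict repair.
#include <cstdio>
#include <set>
#include <vector>

using Graph = std::vector<std::vector<int>>;   // adjacency lists

// Smallest color not used by any already-colored neighbor of v.
int smallestFreeColor(const Graph& g, const std::vector<int>& color, int v) {
  std::set<int> used;
  for (int u : g[v])
    if (color[u] >= 0) used.insert(color[u]);
  int c = 0;
  while (used.count(c)) ++c;
  return c;
}

std::vector<int> iterativeColoring(const Graph& g) {
  const int n = static_cast<int>(g.size());
  std::vector<int> color(n, -1), pending(n);
  for (int v = 0; v < n; ++v) pending[v] = v;

  while (!pending.empty()) {
    // Speculative phase: color every pending vertex (in the distributed
    // algorithm this step may rely on out-of-date colors of ghost vertices).
    for (int v : pending) color[v] = smallestFreeColor(g, color, v);

    // Conflict-detection phase: if two adjacent vertices share a color, the
    // higher-numbered one is uncolored and queued for the next round.
    std::vector<int> conflicts;
    for (int v : pending)
      for (int u : g[v])
        if (color[u] == color[v] && v > u) { conflicts.push_back(v); color[v] = -1; break; }
    pending.swap(conflicts);
  }
  return color;
}

int main() {
  Graph g = {{1, 2}, {0, 2}, {0, 1, 3}, {2}};   // small test graph
  std::vector<int> c = iterativeColoring(g);
  for (std::size_t v = 0; v < c.size(); ++v)
    std::printf("vertex %zu -> color %d\n", v, c[v]);
  return 0;
}
```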
Physics of Fluids
The Knudsen layer is an important rarefaction phenomenon in gas flows in and around microdevices. Its accurate and efficient modeling is of critical importance in the design of such systems and in predicting their performance. In this paper we investigate the potential that higher-order continuum equations may have to model the Knudsen layer, and compare their predictions to high-accuracy DSMC (direct simulation Monte Carlo) data, as well as a standard result from kinetic theory. We find that, for a benchmark case, the most common higher-order continuum equation sets (Grad's 13 moment, Burnett, and super-Burnett equations) cannot capture the Knudsen layer. Variants of these equation families have, however, been proposed and some of them can qualitatively describe the Knudsen layer structure. To make quantitative comparisons, we obtain additional boundary conditions (needed for unique solutions to the higher-order equations) from kinetic theory. However, we find the quantitative agreement with kinetic theory and DSMC data is only slight. © 2005 American Institute of Physics.
Materials Research Society Symposium Proceedings
Conventional wisdom contends that high-energy nanosecond UV laser sources operate near the optical damage thresholds of their constituent materials. This notion is particularly true for nonlinear frequency converters like optical parametric oscillators, where poor beam quality combined with high intra-cavity fluence leads to catastrophic failure of crystals and optical coatings. The collective disappointment of many researchers supports this contention. However, we're challenging this frustrating paradigm by developing high-energy nanosecond UV sources that are efficient, mechanically robust, and most important, resistant to optical damage. Based on sound design principles developed through numerical modeling and rigorous laboratory testing, our sources generate 8-10 ns, 190 mJ pulses at 320 nm with fluences ≤ 1 J/cm². Using the second harmonic of a Q-switched, injection-seeded Nd:YAG laser as the pump source, we convert the near-IR Nd:YAG fundamental to UV with optical-to-optical efficiency exceeding 21%. © 2005 Materials Research Society.
American Rock Mechanics Association - 40th US Rock Mechanics Symposium, ALASKA ROCKS 2005: Rock Mechanics for Energy, Mineral and Infrastructure Development in the Northern Regions
Stress measurements have been obtained from within the Norton Mine in support of site characterization activities intended to determine the in situ stress field around the mine. These results together with other measurements in the area permit an estimate of the principal stresses at the mine. Based on the most recent measurements, the maximum (σHmax) and minimum (σHmin) stresses acting in the horizontal plane are oriented nearly east-west and north-south, respectively, and their magnitudes are 5330 psi and 4100 psi, respectively. These values are expected to be essentially uniform within a few hundred feet vertically above and below the mine elevation. The stress acting in the vertical direction has a magnitude of 3270 psi at the mine level. This measured vertical stress is related to the overburden weight according to σv = 1.26ρgh (where ρ is the overburden density, g the acceleration of gravity, and h the overburden depth); that is, the measured vertical stress exceeds the stress calculated from overburden weight alone by a factor of 1.26. These in situ stresses are assumed to be principal stresses and, as a result, the vertical stress is the minimum principal stress. These measurements are generally consistent in magnitude and direction with two other much older sets of measurements taken in the mine, and they are consistent with the east-west trend of the regional in situ principal stress direction. The average of all three sets of measurements, recent and old, in the mine gives a maximum horizontal stress of 6110 psi, a minimum horizontal stress of 3630 psi, and a vertical stress of 3030 psi. The directions of the mine excavation development, which normally are oriented according to the principal stresses, are also consistent with the current and past measurements.
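A quick consistency check of the quoted gradient relation, using only the numbers given above:

```latex
% Consistency check using the quoted values:
\sigma_v = 1.26\,\rho g h = 3270~\text{psi}
\;\;\Rightarrow\;\;
\rho g h = \frac{3270~\text{psi}}{1.26} \approx 2600~\text{psi},
```

i.e., the weight of the overburden alone would produce roughly 2600 psi at the mine level; the measured 3270 psi exceeds this by the stated factor of 1.26.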
Proceedings of the ASME/Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems: Advances in Electronic Packaging 2005
Optical MEMS devices are commonly interfaced with lasers for communication, switching, or imaging applications. Dissipation of the absorbed energy in such devices is often limited by dimensional constraints which may lead to overheating and damage of the component. Surface micromachined, optically powered thermal actuators fabricated from two 2.25 μm thick polycrystalline silicon layers were irradiated with 808 nm continuous wave laser light with a 100 μm diameter spot under increasing power levels to assess their resistance to laser-induced damage. Damage occurred immediately after laser irradiation at laser powers above 275 mW and 295 mW for 150 μm diameter circular and 194 μm by 150 μm oval targets, respectively. At laser powers below these thresholds, the exposure time required to damage the actuators increased linearly and steeply as the incident laser power decreased. Increasing the area of the connections between the two polycrystalline silicon layers of the actuator target decreases the extent of the laser damage. Additionally, an optical thermal actuator target with 15 μm × 15 μm posts withstood 326 mW for over 16 minutes without exhibiting damage to the surface. Copyright © 2005 by ASME.
Proceedings of the 14th International Meshing Roundtable, IMR 2005
Unconstrained Plastering is a new algorithm with the goal of generating a conformal all-hexahedral mesh on any solid geometry assembly. Paving [1] has proven reliable for quadrilateral meshing on arbitrary surfaces. However, the 3D corollary, Plastering [2][3][4][5], is unable to resolve the unmeshed center voids because it is over-constrained by a pre-existing boundary mesh. Unconstrained Plastering attempts to leverage the benefits of Paving and Plastering without the over-constrained nature of Plastering. Unconstrained Plastering uses advancing fronts to inwardly project unconstrained hexahedral layers from an unmeshed boundary. Only when three layers cross is a hex element formed. Resolving the final voids is easier since closely spaced, randomly oriented quadrilaterals do not over-constrain the problem. Implementation of Unconstrained Plastering has begun; however, proof of its reliability is still forthcoming. © 2005 Springer-Verlag Berlin Heidelberg.
Abstract not provided.
WIT Transactions on the Built Environment
Strategies for risk assessment and management of high consequence operations are often based on factors such as physical analysis, analysis of software and other logical processing, and analysis of statistically determined human actions. Conventional analysis methods work well for processing objective information. However, in practical situations, much or most of the data available are subjective. Also, there are potential resultant pitfalls where conventional analysis might be unrealistic, such as improperly using event tree and fault tree failure descriptions where failures or events are soft (partial) rather than crisp (binary), neglecting or misinterpreting dependence (positive, negative, correlation), and aggregating nonlinear contributions linearly. There are also personnel issues that transcend basic human factors statistics. For example, sustained productivity and safety in critical operations can depend on the morale of involved personnel. In addition, motivation is significantly influenced by "latent effects," which are pre-occurring influences. This paper addresses these challenges and proposes techniques for subjective risk analysis, latent effects risk analysis and a hybrid analysis that also includes objective risk analysis. The goal is an improved strategy for risk management. © 2005 WIT Press.
Proposed for publication in Nature.
Abstract not provided.
Materials Research Society Symposium Proceedings
Control of the synthesis of nanomaterials to produce morphologies exhibiting quantized properties will enable device integration of several novel applications including biosensors, catalysis, and optical devices. In this work, solvothermal routes to produce zinc oxide nanorods are explored. Much previous work has relied on the addition of growth directing/inhibiting agents to control morphology. It was found in coarsening studies that zinc oxide nanodots will ripen to nanorod morphologies at temperatures of 90 to 120°C. The resulting nanorods have average widths of 9-12 nm, smaller than those produced by current nanorod synthesis methods. Use of nanodots as nuclei may be an approach that allows for controlled growth of higher-aspect-ratio nanorods. © 2005 Materials Research Society.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
In this paper we describe a data analysis toolkit constructed to meet the needs of data discovery in large scale spatio-temporal data. The toolkit is a C library of building blocks that can be assembled into data analyses. Our goals were to build a toolkit which is easy to use, is applicable to a wide variety of science domains, supports feature-based analysis, and minimizes low-level processing. The discussion centers on the design of a data model and interface that best supports these goals and we present three usage examples. © Springer-Verlag Berlin Heidelberg 2005.
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to “prescreen” face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
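For reference, the directed Hausdorff fraction used for prescreening is typically defined as follows (generic notation, with A the extracted feature points, B the scene points, and τ a match tolerance; the symbols are not taken from the paper):

```latex
% Directed Hausdorff fraction of point set A relative to point set B at
% tolerance tau (generic definition):
f_{\tau}(A,B) =
  \frac{\bigl|\{\, a \in A : \min_{b \in B} \lVert a-b \rVert \le \tau \,\}\bigr|}{|A|}.
```

Storing B as a range image lets each inner minimum be evaluated over a fixed-size pixel neighborhood of the projected point, which is one way such a fraction can be computed in linear time.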
Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005
A variety of methods exist for the assembly of microscale devices. One such strategy uses microscale force-fit pin insertion to assemble LIGA parts together. One of the challenges associated with this strategy is the handling of small pins which are 170 microns in diameter and with lengths ranging from 500 to 1000 microns. In preparation for insertion, a vibratory micro-pin feeder has been used to successfully singulate and manipulate the pins into a pin storage magazine. This paper presents the development of a deterministic model, simulation tool, and methodology in order to identify and analyze key performance attributes of the vibratory micro-pin feeder system. A brief parametric study was conducted to identify the effects of changing certain system parameters on the bulk behavior of the system, namely the capture rate of the pins. Results showing trends have been obtained for a few specific cases. These results indicate that different system parameters can be chosen to yield better system performance. Copyright © 2005 by ASME.
SAE Technical Papers
The accurate quantification and control of mixture stoichiometry is critical in many applications using new combustion strategies and fuels (e.g., homogeneous charge compression ignition, gasoline direct injection, and oxygenated fuels). The parameter typically used to quantify mixture stoichiometry (i.e., the proximity of a reactant mixture to its stoichiometric condition) is the equivalence ratio, φ. The traditional definition of φ is based on the relative amounts of fuel and oxidizer molecules in a mixture. This definition provides an accurate measure of mixture stoichiometry when the fuel molecule does not contain oxidizer elements and when the oxidizer molecule does not contain fuel elements. However, the traditional definition of φ leads to problems when the fuel molecule contains an oxidizer element, as is the case when an oxygenated fuel is used, or once reactions have started and the fuel has begun to oxidize. The problems arise because an oxidizer element in a fuel molecule is counted as part of the fuel, even though it is an oxidizer element. Similarly, if an oxidizer molecule contains fuel elements, the fuel elements in the oxidizer molecule are misleadingly lumped in with the oxidizer in the traditional definition of φ. In either case, use of the traditional definition of φ to quantify the mixture stoichiometry can lead to significant errors. This paper introduces the oxygen equivalence ratio, φΩ, a parameter that properly characterizes the instantaneous mixture stoichiometry for a broader class of reactant mixtures than does φ. Because it is an instantaneous measure of mixture stoichiometry, φΩ can be used to track the time-evolution of stoichiometry as a reaction progresses. The relationship between φΩ and φ is shown. Errors are involved when the traditional definition of φ is used as a measure of mixture stoichiometry with fuels that contain oxidizer elements or oxidizers that contain fuel elements; φΩ is used to quantify these errors. Proper usage of φΩ is discussed, and φΩ is used to interpret results in a practical example. Copyright © 2005 SAE International.
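One element-based way to express such a measure, shown purely to illustrate the idea (the paper's exact definition of φΩ should be taken from the paper itself), compares the oxygen required for complete oxidation of the carbon and hydrogen present with the oxygen actually available, counting atoms regardless of which molecules carry them:

```latex
% Illustrative element-based stoichiometry measure (an assumption for
% illustration, not necessarily the paper's exact definition of phi_Omega):
% with n_C, n_H, n_O the instantaneous moles of C, H, and O atoms in the mixture,
\phi_{\Omega} \;\sim\; \frac{2\,n_C + \tfrac{1}{2}\,n_H}{n_O},
% i.e., oxygen needed to form CO2 and H2O divided by oxygen available.
```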
Proceedings of the 2005 IEEE International Workshop on Advanced Methods for Uncertainty Estimation in Measurement, AMUEM 2005
Proposed Supplement 1 to the GUM outlines a "propagation of distributions" approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The Supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals. © 2005 IEEE.
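A minimal sketch of the two-stage idea is given below. The input distributions, sample statistics, parametric-bootstrap draws, and measurement equation (y = x1·x2) are all illustrative assumptions, not taken from the paper or from the GUM Supplement.

```cpp
// Two-stage Monte Carlo sketch: the outer loop draws plausible parameters for
// each input distribution given its finite sample (normal/chi-square parametric
// bootstrap assumed here), the inner loop propagates inputs through the
// measurement equation. All numbers are illustrative.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
  std::mt19937 rng(12345);
  // Illustrative sample statistics for two inputs: mean, std. dev., sample size.
  const double xbar[2] = {1.00, 2.00}, s[2] = {0.05, 0.10};
  const int    n[2]    = {8, 12};

  std::vector<double> y;
  const int outer = 2000, inner = 500;
  for (int i = 0; i < outer; ++i) {
    double mu[2], sigma[2];
    for (int k = 0; k < 2; ++k) {
      // Stage 1: draw a plausible (mu, sigma) for input k given its small sample:
      // sigma^2 ~ (n-1) s^2 / chi2_{n-1}, then mu ~ N(xbar, sigma/sqrt(n)).
      std::chi_squared_distribution<double> chi2(n[k] - 1);
      sigma[k] = std::sqrt((n[k] - 1) * s[k] * s[k] / chi2(rng));
      std::normal_distribution<double> m(xbar[k], sigma[k] / std::sqrt(double(n[k])));
      mu[k] = m(rng);
    }
    // Stage 2: propagate random inputs through the measurement equation.
    std::normal_distribution<double> d0(mu[0], sigma[0]), d1(mu[1], sigma[1]);
    for (int j = 0; j < inner; ++j) y.push_back(d0(rng) * d1(rng));
  }
  std::sort(y.begin(), y.end());
  std::printf("approx. 95%% interval for y: [%.4f, %.4f]\n",
              y[static_cast<std::size_t>(0.025 * y.size())],
              y[static_cast<std::size_t>(0.975 * y.size())]);
  return 0;
}
```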
SAE Technical Papers
Optical engines are often skip-fired to maintain optical components at acceptable temperatures and to reduce window fouling. Although many different skip-fired sequences are possible, if exhaust emissions data are required, the skip-firing sequence ought to consist of a single fired cycle followed by a series of motored cycles (referred to here as singleton skip-firing). This paper compares a singleton skip-firing sequence with continuous firing at the same inlet conditions, and shows that combustion performance trends with equivalence ratio are similar. However, as expected, reactant temperatures are lower with skip-firing, resulting in retarded combustion phasing, and lower pressures and combustion efficiency. LIF practitioners often employ a homogeneous charge of known composition to create calibration images for converting raw signal to equivalence ratio. Homogeneous in-cylinder mixtures are typically obtained by premixing fuel and air upstream of the engine; however, premixing usually precludes skip-firing. Data are presented demonstrating that using continuously-fired operation to calibrate skip-fired data leads to over-prediction of local equivalence ratio. This is due to a combination of lower reactant temperatures for skip- versus continuous-fired operation, and a fluorescence yield that decreases with temperature. It is further demonstrated that early direct injection can be used as an alternative approach to provide calibration images. The influence of hardware modifications made to optical engines on performance is also examined. Copyright © 2005 SAE International.
SAE Technical Papers
Experiments were conducted in an optically accessible constant-volume combustion vessel to investigate soot formation at diesel combustion conditions in a high exhaust-gas recirculation (EGR) environment. The ambient oxygen concentration was decreased systematically from 21% to 8% to simulate a wide range of EGR conditions. Quantitative measurements of in-situ soot in quasi-steady n-heptane and #2 diesel fuel jets were made by using laser extinction and planar laser-induced incandescence (PLII) measurements. Flame lift-off length measurements were also made in support of the soot measurements. At constant ambient temperature, results show that the equivalence ratio estimated at the lift-off length does not vary with the use of EGR, implying an equal amount of fuel-air mixing prior to combustion. Soot measurements show that the soot volume fraction decreases with increasing EGR. The regions of soot formation are effectively "stretched out" to longer axial and radial distances from the injector with increasing EGR, according to the dilution in ambient oxygen. However, the axial soot distribution and location of maximum soot collapses if plotted in terms of a "flame coordinate", where the relative fuel-oxygen mixture is equivalent. The total soot in the jet cross-section at the maximum axial soot location initially increases and then decreases to zero as the oxygen concentration decreases from 21% to 8%. The trend is caused by competition between soot formation rates and increasing residence time. Soot formation rates decrease with decreasing oxygen concentration because of the lower combustion temperatures. At the same time, the residence time for soot formation increases, allowing more time for accumulation of soot. Increasing the ambient temperature above nominal diesel engine conditions leads to a rapid increase in soot for high-EGR conditions when compared to conditions with no EGR. This result emphasizes the importance of EGR cooling and its beneficial effect on mitigating soot formation. The effect of EGR is consistent for different fuels but soot levels depend on the sooting propensity of the fuel. Specifically, #2 diesel fuel produces soot levels more than ten times higher than those of n-heptane. Copyright © 2005 SAE International.
Proceedings of the ACM/IEEE 2005 Supercomputing Conference, SC'05
Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported. © 2005 IEEE.
SAE Technical Papers
Double-pulse laser-induced desorption with elastic laser scattering (LIDELS) is a diagnostic technique capable of making time-resolved, in situ measurements of the volatile fraction of diesel particulate matter (PM). The technique uses two laser pulses of comparable energy, separated in time by an interval sufficiently short to freeze the flow field, to measure the change in PM volume caused by laser-induced desorption of the volatile fraction. The first laser pulse of a pulse-pair produces elastic laser scattering (ELS) that gives the total PM volume, and also deposits the energy to desorb the volatiles. ELS from the second pulse gives the volume of the remaining solid portion of the PM, and the ratio of these two measurements is the quantitative solid volume fraction. In an earlier study, we used a single laser to make real-time LIDELS measurements during steady-state operation of a diesel engine. In this paper, we discuss the advantages and disadvantages of the two LIDELS techniques and present measurements made in real diesel exhaust and simulated diesel exhaust created by coating diffusion-flame soot with single-component hydrocarbons. Comparison with analysis of PM collected on quartz filters reveals that LIDELS considerably under-predicts the volatile fraction. We discuss reasons for this discrepancy and recommend future directions for LIDELS research. Copyright © 2005 SAE International.
Journal of Computer-Aided Molecular Design
Physical, chemical and biological properties are the ultimate information of interest for chemical compounds. Molecular descriptors that map structural information to activities and properties are obvious candidates for information sharing. In this paper, we consider the feasibility of using molecular descriptors to safely exchange chemical information in such a way that the original chemical structures cannot be reverse engineered. To investigate the safety of sharing such descriptors, we compute the degeneracy (the number of structures matching a descriptor value) of several 2D descriptors, and use various methods to search for and reverse engineer structures. We examine degeneracy in the entire chemical space taking descriptor values from the alkane isomer series and the PubChem database. We further use a stochastic search to retrieve structures matching specific topological index values. Finally, we investigate the safety of exchanging fragmental descriptors using deterministic enumeration. © Springer 2005.
Proceedings - IEEE Military Communications Conference MILCOM
The fast and unrelenting spread of wireless telecommunication devices has changed the landscape of the telecommunication world as we know it. Today we find that most users have access to both wireline and wireless communication devices. This widespread availability of alternate modes of communication adds redundancy to networks on the one hand, yet creates cross-network impacts during overloads and disruptions on the other. This being the case, it behooves network designers and service providers to understand how this redundancy works so that it can be better utilized in emergency conditions where the need for redundancy is critical. In this paper, we examine the scope of this redundancy as expressed by telecommunications availability to users under different failure scenarios. We quantify the interaction of wireline and wireless networks during network failures and traffic overloads. Developed as part of a Department of Homeland Security Infrastructure Protection (DHS IP) project, the Network Simulation Modeling and Analysis Research Tool (N-SMART) was used to perform this study. The product of close technical collaboration between the National Infrastructure Simulation and Analysis Center (NISAC) and Lucent Technologies, N-SMART supports detailed wireline and wireless network simulations and detailed modeling of user calling behavior.
Journal of Parallel and Distributed Computing
The traditional, serial, algorithm for finding the strongly connected components in a graph is based on depth first search and has complexity which is linear in the size of the graph. Depth first search is difficult to parallelize, which creates a need for a different parallel algorithm for this problem. We describe the implementation of a recently proposed parallel algorithm that finds strongly connected components in distributed graphs, and discuss how it is used in a radiation transport solver. © 2005 Elsevier Inc. All rights reserved.
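For context, distributed SCC algorithms of this kind are commonly built on a divide-and-conquer forward/backward-reachability idea, sketched serially below; the distributed data layout, trimming optimizations, and the radiation-transport coupling discussed in the paper are not represented.

```cpp
// Serial sketch of divide-and-conquer SCC via forward/backward reachability:
// the intersection of a pivot's descendants and ancestors is one SCC, and the
// remaining vertices split into three independent subproblems.
#include <cstdio>
#include <set>
#include <vector>

using Graph = std::vector<std::vector<int>>;   // directed adjacency lists

// Vertices of 'verts' reachable from 'src' using only edges within 'verts'.
std::set<int> reach(const Graph& g, const std::set<int>& verts, int src) {
  std::set<int> seen{src};
  std::vector<int> stack{src};
  while (!stack.empty()) {
    int v = stack.back(); stack.pop_back();
    for (int u : g[v])
      if (verts.count(u) && !seen.count(u)) { seen.insert(u); stack.push_back(u); }
  }
  return seen;
}

void scc(const Graph& g, const Graph& gT, const std::set<int>& verts,
         std::vector<std::set<int>>& out) {
  if (verts.empty()) return;
  int pivot = *verts.begin();
  std::set<int> fwd = reach(g, verts, pivot);    // descendants of the pivot
  std::set<int> bwd = reach(gT, verts, pivot);   // ancestors of the pivot
  std::set<int> comp, onlyFwd, onlyBwd, rest;
  for (int v : verts) {
    const bool f = fwd.count(v) > 0, b = bwd.count(v) > 0;
    if (f && b) comp.insert(v);                  // the pivot's SCC
    else if (f) onlyFwd.insert(v);
    else if (b) onlyBwd.insert(v);
    else rest.insert(v);
  }
  out.push_back(comp);
  scc(g, gT, onlyFwd, out);                      // three independent subproblems
  scc(g, gT, onlyBwd, out);
  scc(g, gT, rest, out);
}

int main() {
  Graph g = {{1}, {2}, {0, 3}, {4}, {3}};        // SCCs: {0,1,2} and {3,4}
  Graph gT(g.size());
  for (std::size_t v = 0; v < g.size(); ++v)     // build the transpose graph
    for (int u : g[v]) gT[u].push_back(static_cast<int>(v));
  std::set<int> all;
  for (std::size_t v = 0; v < g.size(); ++v) all.insert(static_cast<int>(v));
  std::vector<std::set<int>> comps;
  scc(g, gT, all, comps);
  for (const auto& c : comps) {
    std::printf("SCC:");
    for (int v : c) std::printf(" %d", v);
    std::printf("\n");
  }
  return 0;
}
```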
2nd International Conference on Cybernetics and Information Technologies, Systems and Applications, CITSA 2005, 11th International Conference on Information Systems Analysis and Synthesis, ISAS 2005
Shock Physics codes in use at many Department of Energy (DOE) and Department of Defense (DoD) laboratories can be divided into two classes: Lagrangian codes (where the computational mesh is 'attached' to the materials) and Eulerian codes (where the computational mesh is 'fixed' in space and the materials flow through the mesh). These two classes of codes exhibit different advantages and disadvantages. Lagrangian codes are good at keeping material interfaces well defined, but suffer when the materials undergo extreme distortion, which leads to severe reductions in the time steps. Eulerian codes are better able to handle severe material distortion (since the mesh is fixed the time steps are not as severely reduced), but these codes do not keep track of material interfaces very well. So in an Eulerian code the developers must design algorithms to track or reconstruct accurate interfaces between materials as the calculation progresses. However, there are classes of calculations where an interface is not desired between some materials, for instance between materials that are intimately mixed (dusty air or multiphase materials). In these cases a material interface reconstruction scheme is needed that will keep this mixture separated from other materials in the calculation, but will maintain the mixture attributes. This paper will describe the Sandia National Laboratories Eulerian Shock Physics Code known as CTH, and the specialized isotropic material interface reconstruction scheme designed to keep mixed material groups together while keeping different groups separated during the remap step.
Proceedings - International Carnahan Conference on Security Technology
For over a decade, Sandia National Laboratories has collaborated with domestic and international partners in the development of intelligent Radio Frequency (RF) loop seals and sensor technologies for multiple applications. Working with US industry, the International Atomic Energy Agency and Russian institutes; the Sandia team continues to utilize gains in technology performance to develop and deploy increasingly sophisticated platforms. Seals of this type are typically used as item monitors to detect unauthorized actions and malicious attacks in storage and transportation applications. The spectrum of current seal technologies at Sandia National Laboratories ranges from Sandia's initial T-1 design incorporating bi-directional RF communication with a loop seal and tamper indicating components to the highly flexible Secure Sensor Platform (SSP). Sandia National Laboratories is currently pursuing the development of the next generation fiber optic loop seal. This new device is based upon the previously designed multi-mission electronic sensor and communication platform that launched the development of the T-1A which is currently in production at Honeywell FM&T for the Savannah River Site. The T-1A is configured as an active fiber optic seal with authenticated, bi-directional RF communications capable of supporting a number of sensors. The next generation fiber optic loop seal, the Secure Sensor Platform (SSP), is enhancing virtually all of the existing capabilities of the T-1A and is adding many new features and capabilities. The versatility of this new device allows the capabilities to be selected and tailored to best fit the specific application. This paper discusses the capabilities of this new generation fiber optic loop seal as well as the potential application theater which can range from rapid, remotely-monitored, temporary deployments to long-term item storage monitoring supporting International nuclear non-proliferation. This next generation technology suite addresses the combination of sealing requirements with requirements in unique materials' identification, environmental monitoring, and remote long-term secure communications. © 2005 IEEE.
WIT Transactions on the Built Environment
In this overview we present a reactive multiphase flow model to describe the physical processes associated with enhanced blast. This model is incorporated into CTH, a shock physics code, using a variant of the Baer and Nunziato nonequilibrium multiphase mixture to describe shock-driven reactive flow including the effects of interphase mass exchange, particulate drag, heat transfer and secondary combustion of multiphase mixtures. This approach is applied to address the various aspects of the reactive behavior of enhanced blast including detonation and the subsequent expansion of reactive products. The latter stage of reactive explosion involves shock-driven multiphase flow that produces instabilities which are the prelude to the generation of turbulence and subsequent mixing of surrounding air to cause secondary combustion. Turbulent flow is modeled in the context of Large Eddy Simulation (LES) with the formalism of multiphase PDF theory including a mechanistic model of metal combustion. © 2005 WIT Press.
American Rock Mechanics Association - 40th US Rock Mechanics Symposium, ALASKA ROCKS 2005: Rock Mechanics for Energy, Mineral and Infrastructure Development in the Northern Regions
Sandia National Laboratories has partnered with industry on a multifaceted, baseline experimental study that supports the development of improved drag cutters for advanced drill bits. Different nonstandard cutter lots were produced and subjected to laboratory tests that evaluated the influence of selected design and processing parameters on cutter loads, wear, and durability pertinent to the penetration of hard rock with mechanical properties representative of formations encountered in geothermal or deep oil/gas drilling environments. The focus was on cutters incorporating ultrahard PDC (polycrystalline diamond compact) overlays (i.e., diamond tables) on tungsten-carbide substrates. Parameter variations included changes in cutter geometry, material composition, and processing conditions. Geometric variables were the diamond-table thickness, the cutting-edge profile, and the PDC/substrate interface configuration. Material and processing variables for the diamond table were, respectively, the diamond particle size and the sintering pressure applied during cutter fabrication. Complementary drop-impact, granite-log abrasion, linear cutting-force, and rotary-drilling tests examined the response of cutters from each lot. Substantial changes in behavior were observed from lot to lot, allowing the identification of features contributing major (factor of 10+) improvements in cutting performance for hard-rock applications. Recent field demonstrations highlight the advantages of employing enhanced cutter technology during challenging drilling operations.
Proceedings of the International Symposium on Superalloys and Various Derivatives
The Specialty Metals Processing Consortium (SMPC) was established in 1990 with the goal of advancing the technology of melting and remelting nickel and titanium alloys. In recent years, the SMPC technical program has focused on developing technology to improve control over the final ingot remelting and solidification processes to alleviate conditions that lead to the formation of inclusions and positive and negative segregation. A primary objective is the development of advanced monitoring and control techniques for application to vacuum arc remelting (VAR), with special emphasis on VAR of Alloy 718. This has led to the development of an accurate, low-order electrode melting model for this alloy as well as an advanced process estimator that provides real-time estimates of important process variables such as electrode temperature distribution, instantaneous melt rate, process efficiency, fill ratio, and voltage bias. This, in turn, has enabled the development and industrial application of advanced VAR process monitoring and control systems. The technology is based on the simple idea that the set of variables describing the state of the process must be self-consistent as required by the dynamic process model. The output of the process estimator comprises the statistically optimal estimate of this self-consistent set. Process upsets such as those associated with glows and cracked electrodes are easily identified using estimator-based methods.
Proceedings of SPIE - The International Society for Optical Engineering
Optical firing sets need miniature, robust, reliable pulsed laser sources for a variety of triggering functions. In many cases, these lasers must withstand high transient radiation environments. In this paper we describe a monolithic passively Q-switched microlaser constructed using Cr:Nd:GSGG as the gain material and Cr4+:YAG as the saturable absorber, both of which are radiation-hard crystals. This laser consists of a 1-mm-long piece of undoped YAG, a 7-mm-long piece of Cr:Nd:GSGG, and a 1.5-mm-long piece of Cr4+:YAG diffusion bonded together. The ends of the assembly are polished flat and parallel and dielectric mirrors are coated directly on the ends to form a compact, rugged, monolithic laser. When end pumped with a diode laser emitting at ∼807.6 nm, this passively Q-switched laser produces ∼1.5-ns-wide pulses. While the unpumped flat-flat cavity is geometrically unstable, thermal lensing and gain guiding produce a stable cavity with a TEM00 Gaussian output beam over a wide range of operating parameters. The output energy of the laser is scalable and dependent on the cross-sectional area of the pump beam. This laser has produced Q-switched output energies from several μJ per pulse to several hundred μJ per pulse with excellent beam quality. Its short pulse length and good beam quality result in the high peak power density required for many applications such as optically triggering sprytrons. In this paper we discuss the design, construction, and characterization of this monolithic laser as well as energy scaling of the laser up to several hundred μJ per pulse.
Micro Total Analysis Systems - Proceedings of MicroTAS 2005 Conference: 9th International Conference on Miniaturized Systems for Chemistry and Life Sciences
We report an automated on-chip clinical diagnostic that integrates analyte mixing, preconcentration, and subsequent detection using native polyacrylamide gel electrophoresis (PAGE) immunoanalysis. Sample proteins are concentrated > 100-fold with an in situ polymerized size exclusion membrane. The membrane also facilitates rapid mixing of reagents and sample prior to analysis. The integrated system was used to rapidly (minutes) detect immune-response markers in saliva acquired from periodontal diseased patients. Copyright © 2005 by the Transducer Research Foundation, Inc.
Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005
Micro mirrors have emerged as key components for optical microelectromechanical system (MEMS) applications. Electrostatic vertical comb drives are attractive because they can be fabricated underneath the mirror, allowing for arrays with a high fill factor. Also, vertical comb drives are more easily controlled than parallel plate actuators, making them the better choice for analog scanning devices. The device presented in this paper is a one-degree-of-freedom vertical comb drive fabricated using Sandia National Laboratories' SUMMiT™ five-level surface micromachining process. The electrostatic performance of the device is investigated using finite element analysis to determine the capacitance for a unit cell of the comb drive as the position of the device is varied. This information is then used to design a progressive linkage intended to alleviate or eliminate the effects of instability. The goal of this research is to develop an electrostatic model of the vertical comb drive mirror's behavior and then use it to design a progressive linkage that can delay or eliminate the pull-in instability. Copyright © 2005 by ASME.
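The link between the computed unit-cell capacitance and the actuation behavior follows the standard electrostatic-energy argument (a generic relation, not the specific model developed in the paper):

```latex
% Generic electrostatic actuation relation: for a comb drive at voltage V whose
% unit-cell capacitance C varies with the displacement (or rotation) coordinate x,
F_{e}(x) = \frac{1}{2}\,V^{2}\,\frac{dC}{dx},
% so the finite-element C(x) data determine the drive force (or torque), and
% pull-in occurs where the electrostatic stiffness exceeds the mechanical
% restoring stiffness of the suspension.
```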
Health Physics
Abstract not provided.
IEEE Vehicular Technology Conference
High-power 18650 Li-ion cells have been developed for hybrid electric vehicle applications as part of the DOE FreedomCAR Advanced Technology Development (ATD) program. Cells have been developed to meet high-power, long-life, low-cost, and abuse-tolerance requirements. The thermal abuse response of advanced materials and cells was measured and compared. Cells were constructed for abuse-tolerance testing to determine the thermal runaway response and the flammability of gas products evolved during venting. Advanced cathode and anode materials were evaluated for improved tolerance under abusive conditions. Calorimetric methods were used to measure the thermal response and properties of the cells and cell materials up to 450 °C. Improvements in thermal runaway response have been shown using combinations of these materials.
Fall Technical Meeting of the Western States Section of the Combustion Institute 2005, WSS/CI 2005 Fall Meeting
We report results from an investigation of the two-color polarization spectroscopy (TC-PS) and two-color six-wave mixing (TC-SWM) techniques for the measurement of atomic hydrogen in flames. The 243-nm two-photon pumping of the 1S-2S transition of the H atom was followed by single-photon probing of the 2S-3P transition at 656 nm. The necessary laser radiation was generated using two distributed feedback dye lasers (DFDLs) pumped by two regeneratively amplified, picosecond, Nd:YAG lasers. The DFDL pulses are nearly Fourier transform limited and have a pulse width of approximately 80 ps. The effects of pump and probe beam polarizations on the TC-PS and TC-SWM signals were studied in detail. The collisional dynamics of the H(2l) level were also investigated in an atmospheric-pressure hydrogen-air flame by scanning the time delay between the pump and probe pulses. An increase in signal intensity of approximately a factor of 100 was observed in the TC-SWM geometry as compared to the TC-PS geometry.
Technology in Cancer Research and Treatment
Currently, pathologists rely on labor-intensive microscopic examination of tumor cells using century-old staining methods that can give false readings. Emerging BioMicroNano-technologies have the potential to provide accurate, real-time, high-throughput screening of tumor cells without the need for time-consuming sample preparation. These rapid, nano-optical techniques may play an important role in advancing early detection, diagnosis, and treatment of disease. In this report, we show that laser scanning confocal microscopy can be used to identify a previously unknown property of certain cancer cells that distinguishes them, with single-cell resolution, from closely related normal cells. This property is the correlation of light scattering and the spatial organization of mitochondria. In normal liver cells, mitochondria are highly organized within the cytoplasm and highly scattering, yielding a highly correlated signal. In cancer cells, mitochondria are more chaotically organized and poorly scattering. These differences correlate with important bioenergetic disturbances that are hallmarks of many types of cancer. In addition, we review recent work that exploits the new technology of nanolaser spectroscopy using the biocavity laser to characterize the unique spectral signatures of normal and transformed cells. These optical methods represent powerful new tools that hold promise for detecting cancer at an early stage and may help to limit delays in diagnosis and treatment. © Adenine Press (2005).
Abstract not provided.
Proposed for publication in Geochimica et Cosmochimica Acta.
Abstract not provided.
Journal of Crystal Growth
Optical reflectance and atomic force microscopy (AFM) are used to develop a detailed description of GaN nucleation layer (NL) evolution upon annealing in ammonia and hydrogen to 1050°C. For the experiments, the GaN NLs were grown to a thickness of 30 nm at 540°C, and then heated to 1050°C, followed by holding at 1050°C for additional time. As the temperature, T, is increased, the NL decomposes uniformly beginning at 850°C up to 980°C as observed by the decrease in the optical reflectance signal and the absence of change in the NL AFM images. Decomposition of the original NL material drives the formation of GaN nuclei on top of the NL, which begin to appear on the NL near 1000°C, increasing the NL roughness. The GaN nuclei are formed by gas-phase transport of Ga atoms generated during the NL decomposition that recombine with ambient NH3. The gas-phase mechanism responsible for forming the GaN nuclei is demonstrated in two ways. First, the NL decomposition kinetics has an activation energy, EA, of 2.7 eV and this EA is observed in the NL roughening as the GaN nuclei increase in size. Second, the power spectral density functions measured with atomic force microscopy reveal that the GaN nuclei grow via an evaporation and recondensation mechanism. Once the original NL material is fully decomposed, the GaN nuclei stop growing in size and begin to decompose. For the 30 nm thick NLs used in this study, approximately 1/3 of the NL Ga atoms are reincorporated into GaN nuclei. A detailed description of the NL evolution as it is heated to high temperature is presented, along with recommendations on how to enhance or reduce the NL decomposition and nuclei formation before high-T GaN growth. © 2004 Elsevier B.V. All rights reserved.
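The quoted activation energy enters through the usual Arrhenius form (shown generically; the prefactor and exact rate law are not given in the abstract):

```latex
% Arrhenius form for the thermally activated NL decomposition rate
% (prefactor k_0 not specified in the abstract):
k(T) = k_{0}\exp\!\left(-\frac{E_{A}}{k_{B}T}\right),
\qquad E_{A} \approx 2.7~\text{eV}.
```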
Proceedings - International Carnahan Conference on Security Technology
Sandia has been investigating the use of "intelligent sensors" and their integration into "Smart Networks" for security applications. Intelligent sensors include devices that assess various phenomenologies such as radiation, chem-bio agents, radars, and video/video-motion detection. The main problem experienced with these intelligent sensors is in integrating the output from these various sensors into a system that reports the data to users in a manner that enables an efficient response to potential threats. The overall systems engineering is a critical part of bringing these intelligent sensors on-line and is important to ensuring that these systems are successfully deployed. The systems engineering effort includes designing and deploying computer networks, interfaces to make systems inter-operable, and training users to ensure that these intelligent sensors can be deployed properly. This paper focuses on Sandia's efforts to investigate the systems architecture for "smart" networks and the various interfaces required between "smart" sensors to implement these "Smart Networks." © 2004 IEEE.
Abstracts of the Pacific Basin Nuclear Conference
One effect noted during the March 1975 fire at the Browns Ferry plant is that fire-induced cable damage caused a range of unanticipated circuit faults including spurious reactor status signals and the apparent spurious operation of plant systems and components. Current USNRC regulations require that licensees conduct a post-fire safe shutdown analysis that includes consideration of such circuit effects. Post-fire circuit analysis continues to be an area of both technical challenge and regulatory focus. This paper discusses risk perspectives related to post-fire circuit analysis. An opening background discussion outlines the issues, concerns, and technical challenges. The paper then focuses on current risk insights and perspectives relevant to the circuit analysis problem. This includes a discussion of the available experimental data on cable failure modes and effects, a discussion of fire events that illustrate potential fire-induced circuit faults, and a discussion of risk analysis approaches currently being developed and implemented.
Inertial Fusion Sciences and Applications 2003
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Astrophysical Journal
The turbulent convection that takes place in a Chandrasekhar-mass white dwarf during the final few minutes before it explodes determines where and how frequently it ignites. Numerical simulations have shown that the properties of the subsequent Type Ia supernova are sensitive to these ignition conditions. A heuristic model of the turbulent convection is explored. The results suggest that supernova ignition is likely to occur at a radius of order 100 km, rather than at the center of the star.
Abstract not provided.
34th AIAA Fluid Dynamics Conference and Exhibit
In this study, we extend self-similar, far-field turbulent wake concepts to estimate the two-dimensional drag coefficient for a range of bluff-body problems. The self-similar wake velocity defect, which is normally independent of the near-field wake (and hence of body geometry), is modified using a combined approximate Green's function/Gram-Charlier series approach so that the body geometry information is retained. Formally, a near-field velocity defect profile is created using small-disturbance theory and the inviscid flow field associated with the body of interest. The defect solution is then used as an initial condition in the approximate Green's function solution. Finally, the Green's function solution is matched to the Gram-Charlier series, yielding profiles that are integrated to give the net form drag on the bluff body. Preliminary results indicate that drag estimates computed using this method are within approximately 15% of published values for flows with large separation. This methodology may serve as a supplement to CFD and experiment, reducing the heavy computational and experimental burden of estimating drag coefficients for bluff-body flows in preliminary design studies. © 2004 by the American Institute of Aeronautics and Astronautics, Inc.
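As a point of reference for how a wake velocity-defect profile translates into a drag value, the sketch below applies the standard far-wake momentum integral, D = ρ∫u(U∞ − u)dy, to an assumed Gaussian defect. It is not the Green's function/Gram-Charlier construction described above; the freestream speed, defect amplitude, and wake width are all assumed values.

```python
import numpy as np

# Momentum-integral drag estimate from a far-wake velocity-defect profile.
# This is NOT the paper's Green's function / Gram-Charlier construction,
# only the standard relation D = rho * int u*(U_inf - u) dy applied to an
# assumed Gaussian defect, to show how a wake profile maps to a drag value.
rho = 1.225                 # air density, kg/m^3 (assumed)
U_inf = 10.0                # freestream speed, m/s (assumed)
chord = 1.0                 # reference length for the drag coefficient, m (assumed)
w0, b = 0.15 * U_inf, 0.5   # assumed defect amplitude (m/s) and wake half-width (m)

y = np.linspace(-10.0, 10.0, 4001)       # cross-wake coordinate, m
dy = y[1] - y[0]
defect = w0 * np.exp(-(y / b) ** 2)      # assumed Gaussian velocity defect
u = U_inf - defect                       # streamwise velocity in the wake

drag_per_span = rho * np.sum(u * (U_inf - u)) * dy     # N per unit span
cd = drag_per_span / (0.5 * rho * U_inf ** 2 * chord)
print(f"sectional drag coefficient ~ {cd:.3f}")
```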
Abstract not provided.
Proposed for publication in the International Union of Theoretical and Applied Mechanics Symposium: One Hundred Years of Boundary L.
Abstract not provided.
37th AIAA Thermophysics Conference
Three-dimensional Direct Simulation Monte Carlo simulations of Columbia Shuttle Orbiter flight STS-107 are presented. The aim of this work is to determine the aerodynamic and heating behavior of the Orbiter during aerobraking maneuvers and to provide piecewise integration of key scenario events to assess the plausibility of the candidate failure scenarios. The flight of the Orbiter is examined at two altitudes: 350 kft and 300 kft. The flowfield around the Orbiter and the heat transfer to it are calculated for the undamaged configuration. The flow inside the wing is studied for an assumed leading-edge damage in the form of a 10-inch hole.
Abstract not provided.
Transactions - Geothermal Resources Council
This report covers the basic design of the Sandia downhole geothermal reservoir monitoring system. The monitoring system can operate continuously at temperatures up to 240°C (464°F) while measuring small pressure and temperature changes in reservoirs. Future improvements in the existing system will come from research and development programs by such agencies as NASA, JPL, USAF and NETL. An explanation of the benefits of this research to the Geothermal HT electronics program will be given.
Transactions of the American Nuclear Society
Abstract not provided.
Proceedings - International Carnahan Conference on Security Technology
The system design of the Advanced Exterior Sensor (AES), test data, and Sandia National Laboratories' current work on the AES are described. The AES integrates three sensor technologies (thermal infrared waveband, visible waveband, and microwave radar) in a Remote Sensor Module communicating with three motion-detection target trackers and a sensor-fusion software module in the Data Processor Module to achieve higher performance than single-technology devices. Wide areas are covered by continuously scanning the three sensors through 360 degrees in about one second. The images from the infrared and visible detector sets and the radar range data are updated as the sensors rotate each second. The radar provides range data with approximately one-meter resolution. Panoramic imagery is generated for immediate visual assessment of alarms using the Display Control Module. There is great potential for site security enhancement using the AES, which was designed for low cost, ease of use, and rapid deployment to cover wide areas beyond typical perimeters, possibly in place of typical perimeter sensors, and for tactical applications around fixed or temporary high-value assets. Commercial off-the-shelf (COTS) systems have neither the three sensor technologies nor the imaging sensor resolution. Cost and performance are discussed for different scenarios. ©2004 IEEE.
Proceedings - Electrochemical Society
Recent attempts to fabricate free-standing MEMS structures using electrodeposited Ni have run into difficulty due to the curvatures that result from stress gradients intrinsic to electrodeposition. We have investigated the intrinsic stress behavior during electrodeposition of Ni from an additive-free sulfamate bath. It was determined that the stress during the first 1000 Å of growth depended only on the substrate materials, whereas the stress beyond that point depended on the deposition rate. Additionally, the stress in this later region was found to be independent of the stress state of the underlying material. Therefore, by dynamically varying the deposition rate during plating, it is possible to reduce or eliminate the curvature in Ni MEMS structures.
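For readers unfamiliar with how intrinsic film stress is tied to the substrate curvature mentioned above, the sketch below applies the standard Stoney relation. The substrate properties, curvature change, and film thickness are assumed illustrative values, not measurements from this work.

```python
# Stoney-equation sketch relating average film stress to substrate curvature.
# Not taken from the paper; just the standard relation commonly used to back
# out intrinsic stress in electrodeposited films. All numerical values are assumed.
E_S = 169e9          # substrate Young's modulus, Pa (e.g. Si, assumed)
NU_S = 0.22          # substrate Poisson ratio (assumed)
T_S = 500e-6         # substrate thickness, m (assumed)

def stoney_stress(curvature: float, film_thickness: float) -> float:
    """Average film stress (Pa) from a measured curvature change (1/m)."""
    return E_S * T_S ** 2 * curvature / (6.0 * (1.0 - NU_S) * film_thickness)

# Example: a 0.02 1/m curvature change after plating 5 um of Ni (assumed numbers).
print(f"average film stress ~ {stoney_stress(0.02, 5e-6) / 1e6:.1f} MPa")
```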
Conference Record of the International Power Modulator Symposium and High Voltage Workshop
Paraxial diodes have been a mainstay of high-brightness, flash x-ray radiography. In the traditional configuration, an electron beam impinges on an anode foil and enters a gas-filled transport cell. Within the cell, the beam is focused to a small spot on a high-Z target to generate x-rays for radiography. Simulations using Lsp, a particle-in-cell code, have shown that within the gas-filled focusing cell the location of the electron beam spot sweeps axially during the course of the beam pulse, resulting in a larger radiographic spot than is desirable. Lsp has also shown that replacing the gas-filled cell with a fully ionized plasma on the order of 10^16 cm^-3 prevents significant beam sweeping, resulting in a smaller, more stable radiographic spot size. Sandia National Laboratories (SNL) is developing a plasma-filled focusing cell for future paraxial diode experiments. A z-discharge in a hydrogen fill is used to generate a uniform, highly ionized plasma. Laser interferometry is the key diagnostic for determining electron density, both in a light-lab setting and during future paraxial diode shots on SNL's RITS-3 accelerator. A time-resolved spot diagnostic will also be implemented during diode shots to measure the change in spot size during the course of the pulse. © 2004 IEEE.
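The fringe-shift sketch below illustrates the scale of signal laser interferometry would register for a plasma near the quoted 10^16 cm^-3 density. The probe wavelength and path length are assumed values, and the uniform-density, underdense-plasma phase relation used here is generic rather than specific to the RITS-3 diagnostic.

```python
import math

# Rough interferometry sketch: phase shift accumulated by a probe laser
# crossing an underdense plasma, delta_phi ~ r_e * lambda * integral(n_e dl).
# Wavelength, path length, and density are assumed values for illustration.
R_E = 2.818e-15            # classical electron radius, m
LAMBDA = 532e-9            # probe laser wavelength, m (assumed)
PATH_LENGTH = 0.10         # plasma path length, m (assumed)

def fringe_shift(n_e_cm3: float) -> float:
    """Fringe shift for a uniform plasma of density n_e (cm^-3)."""
    n_e_m3 = n_e_cm3 * 1e6
    delta_phi = R_E * LAMBDA * n_e_m3 * PATH_LENGTH
    return delta_phi / (2.0 * math.pi)

print(f"~{fringe_shift(1e16):.2f} fringes at 1e16 cm^-3")
```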
Proceedings of the 2004 World Water and Environmental Resources Congress: Critical Transitions in Water and Environmental Resources Management
The effect of variable demands at short time scales on the transport of a solute through a water distribution network has not previously been studied. We simulate flow and transport in a small water distribution network using EPANET to explore the effect of variable demand on solute transport across a range of hydraulic time step scales from 1 minute to 2 hours. We show that variable demands at short time scales can smooth a pulse of tracer injected into the network and increase the variability of both the transport pathway and the transport timing through the network. Variable demands are simulated at these different time step sizes using a previously developed Poisson rectangular pulse (PRP) demand generator, which models demand at a node as a combination of exponentially distributed arrival times with log-normally distributed intensities and durations. Solute is introduced at a tank and at three different network nodes, and concentrations are modeled through the system using the Lagrangian transport scheme within EPANET. The transport equations within EPANET assume perfect mixing of the solute within a parcel of water, so physical dispersion cannot occur; however, variation in demands along the solute transport path contributes to both removal and distortion of the injected pulse. The model performance measures examined are the distribution of the Reynolds number, the variation in the center of mass of the solute across time, and the transport path and timing of the solute through the network. Variation in all three performance measures is greatest at the shortest time step sizes. As the time step scale increases, the variability in these performance measures decreases, and the largest time steps produce results that are inconsistent with those produced by the smaller time steps.
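A minimal sketch of the Poisson rectangular pulse idea described above is given below: exponentially distributed inter-arrival times with log-normally distributed intensities and durations, accumulated onto a nodal demand time series. All parameter values are assumed for illustration; this is not the published PRP generator.

```python
import numpy as np

# Minimal sketch of a Poisson rectangular pulse (PRP) demand generator:
# pulses arrive with exponentially distributed inter-arrival times and carry
# log-normally distributed intensities and durations, as described above.
# Parameter values are assumed for illustration, not taken from the paper.
rng = np.random.default_rng(0)

def prp_demand(total_s=3600.0, dt=60.0, mean_arrival_s=120.0,
               mu_int=-1.0, sig_int=0.5,    # assumed log-normal intensity params (L/s)
               mu_dur=3.5, sig_dur=0.6):    # assumed log-normal duration params (s)
    """Return a nodal demand time series (L/s) on a grid of step dt."""
    t_grid = np.arange(0.0, total_s, dt)
    demand = np.zeros_like(t_grid)
    t = rng.exponential(mean_arrival_s)
    while t < total_s:
        intensity = rng.lognormal(mu_int, sig_int)
        duration = rng.lognormal(mu_dur, sig_dur)
        # Add the rectangular pulse onto every time step it overlaps.
        active = (t_grid + dt > t) & (t_grid < t + duration)
        demand[active] += intensity
        t += rng.exponential(mean_arrival_s)
    return t_grid, demand

times, q = prp_demand()
print(f"mean demand {q.mean():.2f} L/s over {len(times)} steps")
```

Shrinking dt in a sketch like this is what exposes the short-time-scale variability that the coarser hydraulic time steps average away.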
Proceedings of SPIE - The International Society for Optical Engineering
In multispectral imaging, automated cross-spectral (band-to-band) image registration is difficult to achieve with a reliability approaching 100%. This is particularly true when registering infrared to visible imagery, where contrast reversals are common and similarity is often lacking. Algorithms that use mutual information as a similarity measure have been shown to work well in the presence of contrast reversal. However, weak similarity between the long-wave infrared (LWIR) bands and shorter wavelengths remains a problem. A method is presented in this paper for registering multiple images simultaneously rather than one pair at a time using a multivariate extension of the mutual information measure. This approach improves the success rate of automated registration by making use of the information available in multiple images rather than a single pair. This approach is further enhanced by including a cyclic consistency check, for example registering band A to B, B to C, and C to A. The cyclic consistency check provides an automated measure of success allowing a different combination of bands to be used in the event of a failure. Experiments were conducted using imagery from the Department of Energy's Multispectral Thermal Imager satellite. The results show a significantly improved success rate.
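For reference, the sketch below computes the standard pairwise mutual-information measure from a joint histogram, which is what makes registration robust to the contrast reversals noted above. It does not implement the multivariate extension or the cyclic consistency check, and the image data are synthetic.

```python
import numpy as np

# Sketch of the mutual-information similarity measure used for cross-spectral
# registration. This is the standard pairwise form, not the paper's
# multivariate extension or cyclic-consistency machinery; images are synthetic.
def mutual_information(img_a, img_b, bins=64):
    """Mutual information (nats) between two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(1)
band_a = rng.random((128, 128))
band_b = 1.0 - band_a            # contrast-reversed copy: MI stays high
band_c = rng.random((128, 128))  # unrelated band: MI near zero
print(mutual_information(band_a, band_b), mutual_information(band_a, band_c))
```

The contrast-reversed band yields a mutual information close to its maximum while the unrelated band yields a value near zero, which is why the measure tolerates the reversals that defeat correlation-based metrics.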
Proceedings of SPIE - The International Society for Optical Engineering
Calcium fluoride is a desirable material for optical design of space systems in the ultraviolet, visible, and infrared bands. Modern calcium fluoride materials fabricated for the photolithography industry are highly resistant to space radiation. The wide wavelength band and low dispersion are also desirable properties. Unfortunately, calcium fluoride has a host of significant material-property issues that hinder its use in the space environment: low hardness, susceptibility to thermal and mechanical shock, and a large coefficient of thermal expansion present significant challenges during development of opto-mechanical designs. Sandia National Laboratories' Monitoring Systems and Technology Center has fielded a calcium fluoride-based optical system for use in space. The Sandia design solution is based upon a spring-loaded mount that uses no volatile organic compounds. The theory of the Sandia solution is developed and design rules are presented. The Sandia design solution is illustrated for a specific example, and example design and margin calculations are shown. Finally, lessons learned from our design realization and qualification testing efforts are shared for the benefit of the community.
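A back-of-envelope calculation along these lines shows why the coefficient of thermal expansion dominates the mount design: the sketch below estimates the radial clearance change between a CaF2 lens and an aluminum cell over a temperature swing, using nominal handbook-style CTE values and assumed dimensions rather than the actual Sandia design parameters.

```python
# Back-of-envelope CTE-mismatch sketch for mounting a CaF2 lens in a metal cell.
# Nominal/assumed values only; illustrates why the large CaF2 expansion
# coefficient drives the spring-loaded mount design discussed above.
ALPHA_CAF2 = 18.9e-6      # CaF2 CTE, 1/K (nominal)
ALPHA_AL = 23.6e-6        # aluminum cell CTE, 1/K (nominal)
DIAMETER = 0.050          # lens diameter, m (assumed)
DELTA_T = 80.0            # survival temperature swing, K (assumed)

diameter_change = (ALPHA_AL - ALPHA_CAF2) * DIAMETER * DELTA_T
print(f"radial clearance change ~ {diameter_change * 1e6 / 2:.1f} um over {DELTA_T:.0f} K")
```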
Proceedings of SPIE - The International Society for Optical Engineering
While hyperspectral imaging systems are increasingly used in remote sensing and offer enhanced scene characterization relative to univariate and multispectral technologies, it has proven difficult in practice to extract all of the useful information from these systems due to overwhelming data volume, confounding atmospheric effects, and limited a priori knowledge regarding the scene. The need exists for rapid and comprehensive data exploitation of remotely sensed hyperspectral imagery. To address this need, this paper describes the application of a fast and rigorous multivariate curve resolution (MCR) algorithm to remotely sensed thermal infrared hyperspectral images. Employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant-abundance constraint for the atmospheric upwelling component, we demonstrate that MCR can successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, in turn, transmittance effects. We take a semi-synthetic approach to obtaining image data containing gas plumes by adding emission gas signals onto real hyperspectral images. MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of an ammonia gas plume component added near the minimum detectable quantity.
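The sketch below shows the core of a non-negativity-constrained alternating least squares (MCR-ALS) loop on synthetic data, to make the curve-resolution step concrete. It omits the constant-abundance constraint on the atmospheric upwelling component and is a generic illustration rather than the algorithm used in the paper.

```python
import numpy as np

# Minimal MCR-ALS sketch: alternate least-squares updates of spectra S and
# abundances C under non-negativity, for a data matrix D ~ C @ S.T.
# Generic illustration with synthetic data; not the paper's algorithm and
# without its constant-abundance atmospheric constraint.
rng = np.random.default_rng(2)
n_pix, n_chan, n_comp = 200, 60, 3
C_true = rng.random((n_pix, n_comp))
S_true = rng.random((n_chan, n_comp))
D = C_true @ S_true.T + 0.01 * rng.standard_normal((n_pix, n_chan))

C = rng.random((n_pix, n_comp))          # initial abundance guess
for _ in range(100):
    # Solve for spectra given abundances, then clip to enforce non-negativity.
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    # Solve for abundances given spectra, again with non-negativity.
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative reconstruction error: {residual:.3f}")
```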