Publications


Unconstrained paving & plastering: A new idea for all hexahedral mesh generation

Proceedings of the 14th International Meshing Roundtable, IMR 2005

Staten, Matthew L.; Owen, Steven J.; Blacker, Teddy D.

Unconstrained Plastering is a new algorithm with the goal of generating a conformal all-hexahedral mesh on any solid geometry assembly. Paving [1] has proven reliable for quadrilateral meshing on arbitrary surfaces. However, its 3D corollary, Plastering [2][3][4][5], is unable to resolve the unmeshed center voids because it is over-constrained by a pre-existing boundary mesh. Unconstrained Plastering attempts to combine the benefits of Paving and Plastering without the over-constrained nature of Plastering. It uses advancing fronts to project unconstrained hexahedral layers inward from an unmeshed boundary. Only where three layers cross is a hex element formed. Resolving the final voids is easier since closely spaced, randomly oriented quadrilaterals do not over-constrain the problem. Implementation of Unconstrained Plastering has begun; however, proof of its reliability is still forthcoming. © 2005 Springer-Verlag Berlin Heidelberg.

A mathematically guided strategy for risk assessment and management

WIT Transactions on the Built Environment

Cooper, James A.

Strategies for risk assessment and management of high-consequence operations are often based on factors such as physical analysis, analysis of software and other logical processing, and analysis of statistically determined human actions. Conventional analysis methods work well for processing objective information. However, in practical situations, much or most of the available data are subjective. Pitfalls also arise where conventional analysis can be unrealistic, such as improperly using event tree and fault tree failure descriptions where failures or events are soft (partial) rather than crisp (binary), neglecting or misinterpreting dependence (positive, negative, correlation), and aggregating nonlinear contributions linearly. There are also personnel issues that transcend basic human factors statistics. For example, sustained productivity and safety in critical operations can depend on the morale of involved personnel. In addition, motivation is significantly influenced by "latent effects," which are pre-occurring influences. This paper addresses these challenges and proposes techniques for subjective risk analysis, latent effects risk analysis, and a hybrid analysis that also includes objective risk analysis. The goal is an improved strategy for risk management. © 2005 WIT Press.

Solvothermal routes for synthesis of zinc oxide nanorods

Materials Research Society Symposium Proceedings

Bell, Nelson S.

Control of the synthesis of nanomaterials to produce morphologies exhibiting quantized properties will enable device integration of several novel applications, including biosensors, catalysis, and optical devices. In this work, solvothermal routes to produce zinc oxide nanorods are explored. Much previous work has relied on the addition of growth-directing or growth-inhibiting agents to control morphology. Coarsening studies found that zinc oxide nanodots ripen to nanorod morphologies at temperatures of 90 to 120°C. The resulting nanorods have average widths of 9-12 nm, smaller than those produced by current nanorod synthesis methods. Using nanodots as nuclei may allow controlled growth of higher-aspect-ratio nanorods. © 2005 Materials Research Society.

FCLib: A library for building data analysis and data discovery tools

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Doyle, Wendy S.K.; Kegelmeyer, William P.

In this paper we describe a data analysis toolkit constructed to meet the needs of data discovery in large-scale spatio-temporal data. The toolkit is a C library of building blocks that can be assembled into data analyses. Our goals were to build a toolkit that is easy to use, is applicable to a wide variety of science domains, supports feature-based analysis, and minimizes low-level processing. The discussion centers on the design of a data model and interface that best support these goals, and we present three usage examples. © Springer-Verlag Berlin Heidelberg 2005.

A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Koudelka, Melissa L.; Koch, Mark W.; Russ, Trina D.

Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real-world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to "prescreen" face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank-6 recognition rate of 100%.
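The linear-time claim rests on precomputing nearest-distance information once over the range-image grid, after which the Hausdorff fraction is one lookup per model point. A minimal sketch of that idea (hypothetical, not the paper's implementation) using a multi-source BFS on the grid as the distance transform:

```python
from collections import deque

def distance_grid(occupied, h, w):
    # Multi-source BFS: grid (Manhattan) distance from every cell to the
    # nearest occupied cell, computed once in O(h * w) time.
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for (r, c) in occupied:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def hausdorff_fraction(model_pts, dist, tau):
    # Fraction of model points lying within tau of the test set:
    # one table lookup per point, hence linear in the number of points.
    near = sum(1 for (r, c) in model_pts if dist[r][c] <= tau)
    return near / len(model_pts)
```

The grid, point sets, and distance metric here are illustrative assumptions; the key structural point is that the per-query cost collapses to O(n) once the distance field exists.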

Modeling and analysis of a vibratory micro-pin feeder using impulse-based simulation

Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005

Weir, Nathan; Cipra, Raymond J.

A variety of methods exist for the assembly of microscale devices. One such strategy uses microscale force-fit pin insertion to assemble LIGA parts. One of the challenges associated with this strategy is the handling of small pins, which are 170 microns in diameter with lengths ranging from 500 to 1000 microns. In preparation for insertion, a vibratory micro-pin feeder has been used to successfully singulate and manipulate the pins into a pin storage magazine. This paper presents the development of a deterministic model, simulation tool, and methodology to identify and analyze key performance attributes of the vibratory micro-pin feeder system. A brief parametric study was conducted to identify the effects of changing certain system parameters on the bulk behavior of the system, namely the capture rate of the pins. Results showing trends have been obtained for a few specific cases. These results indicate that different system parameters can be chosen to yield better system performance. Copyright © 2005 by ASME.

The quantification of mixture stoichiometry when fuel molecules contain oxidizer elements or oxidizer molecules contain fuel elements

SAE Technical Papers

Mueller, Charles J.

The accurate quantification and control of mixture stoichiometry is critical in many applications using new combustion strategies and fuels (e.g., homogeneous charge compression ignition, gasoline direct injection, and oxygenated fuels). The parameter typically used to quantify mixture stoichiometry (i.e., the proximity of a reactant mixture to its stoichiometric condition) is the equivalence ratio, φ. The traditional definition of φ is based on the relative amounts of fuel and oxidizer molecules in a mixture. This definition provides an accurate measure of mixture stoichiometry when the fuel molecule does not contain oxidizer elements and when the oxidizer molecule does not contain fuel elements. However, the traditional definition of φ leads to problems when the fuel molecule contains an oxidizer element, as is the case when an oxygenated fuel is used, or once reactions have started and the fuel has begun to oxidize. The problems arise because an oxidizer element in a fuel molecule is counted as part of the fuel, even though it is an oxidizer element. Similarly, if an oxidizer molecule contains fuel elements, the fuel elements in the oxidizer molecule are misleadingly lumped in with the oxidizer in the traditional definition of φ. In either case, use of the traditional definition of φ to quantify the mixture stoichiometry can lead to significant errors. This paper introduces the oxygen equivalence ratio, φΩ, a parameter that properly characterizes the instantaneous mixture stoichiometry for a broader class of reactant mixtures than does φ. Because it is an instantaneous measure of mixture stoichiometry, φΩ can be used to track the time-evolution of stoichiometry as a reaction progresses. The relationship between φΩ and φ is shown. Errors are involved when the traditional definition of φ is used as a measure of mixture stoichiometry with fuels that contain oxidizer elements or oxidizers that contain fuel elements; φΩ is used to quantify these errors. 
Proper usage of φΩ is discussed, and φΩ is used to interpret results in a practical example. Copyright © 2005 SAE International.
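The element-based idea can be illustrated with a small sketch. The formula below is a plausible reconstruction of an oxygen-based equivalence ratio (oxygen atoms required for complete combustion to CO2 and H2O, divided by oxygen atoms present, counted over the whole mixture regardless of which molecule carries each atom), not necessarily the paper's exact definition of φΩ:

```python
def phi_omega(n_C, n_H, n_O):
    # Hypothetical element-based stoichiometry measure: O atoms needed for
    # complete combustion to CO2 (2 per C atom) and H2O (1/2 per H atom),
    # divided by O atoms actually present in the mixture.
    return (2 * n_C + 0.5 * n_H) / n_O

# Stoichiometric methane/oxygen, CH4 + 2 O2: the ratio is exactly 1.
print(phi_omega(1, 4, 4))        # → 1.0
# Dimethyl ether (an oxygenated fuel, C2H6O) + 3 O2: fuel-bound O counts too,
# which is exactly the case where the traditional fuel/oxidizer split misleads.
print(phi_omega(2, 6, 1 + 6))    # → 1.0
```

Because the count is over elements rather than over "fuel" and "oxidizer" molecules, the same expression remains meaningful after reactions have partially consumed the fuel.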

A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes

Proceedings of the 2005 IEEE International Workshop on Advanced Methods for Uncertainty Estimation in Measurement, AMUEM 2005

Crowder, Stephen V.; Moyer, Robert D.

Proposed Supplement 1 to the GUM outlines a "propagation of distributions" approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The Supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals. © 2005 IEEE.
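The two-stage structure is easy to sketch: an outer loop re-draws the input distribution's parameters in a way consistent with the finite sample, and an inner loop propagates draws through the measurement equation. Everything below (a single normal input, posterior-style parameter draws, the `model` callable) is an illustrative assumption, not the authors' procedure:

```python
import random
import statistics

def two_stage_mc(sample, model, n_outer=200, n_inner=200, seed=1):
    # Stage 1: draw (mu, sigma) values consistent with the finite sample
    # (chi-square draw for the variance, normal draw for the mean).
    # Stage 2: propagate input draws through the measurement equation.
    rng = random.Random(seed)
    n = len(sample)
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)
    out = []
    for _ in range(n_outer):
        chi2 = rng.gammavariate((n - 1) / 2, 2)      # chi-square, n-1 dof
        sigma = s * ((n - 1) / chi2) ** 0.5
        mu = rng.gauss(xbar, sigma / n ** 0.5)
        for _ in range(n_inner):
            out.append(model(rng.gauss(mu, sigma)))
    return out
```

Treating the sample parameters as exact would amount to skipping Stage 1; the spread of the returned values then understates the uncertainty, which is the failure mode the paper targets.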

Acquisition of corresponding fuel distribution and emissions measurements in HCCI engines

SAE Technical Papers

De Zilwa, Shane R.; Steeper, Richard R.

Optical engines are often skip-fired to maintain optical components at acceptable temperatures and to reduce window fouling. Although many different skip-fired sequences are possible, if exhaust emissions data are required, the skip-firing sequence ought to consist of a single fired cycle followed by a series of motored cycles (referred to here as singleton skip-firing). This paper compares a singleton skip-firing sequence with continuous firing at the same inlet conditions, and shows that combustion performance trends with equivalence ratio are similar. However, as expected, reactant temperatures are lower with skip-firing, resulting in retarded combustion phasing, and lower pressures and combustion efficiency. Laser-induced fluorescence (LIF) practitioners often employ a homogeneous charge of known composition to create calibration images for converting raw signal to equivalence ratio. Homogeneous in-cylinder mixtures are typically obtained by premixing fuel and air upstream of the engine; however, premixing usually precludes skip-firing. Data are presented demonstrating that using continuously fired operation to calibrate skip-fired data leads to over-prediction of local equivalence ratio. This is due to a combination of lower reactant temperatures for skip- versus continuous-fired operation, and a fluorescence yield that decreases with temperature. It is further demonstrated that early direct injection can be used as an alternative approach to provide calibration images. The influence of hardware modifications made to optical engines on performance is also examined. Copyright © 2005 SAE International.

Soot formation in diesel combustion under high-EGR conditions

SAE Technical Papers

Idicheria, Cherian I.; Pickett, Lyle M.

Experiments were conducted in an optically accessible constant-volume combustion vessel to investigate soot formation at diesel combustion conditions in a high exhaust-gas recirculation (EGR) environment. The ambient oxygen concentration was decreased systematically from 21% to 8% to simulate a wide range of EGR conditions. Quantitative measurements of in-situ soot in quasi-steady n-heptane and #2 diesel fuel jets were made by using laser extinction and planar laser-induced incandescence (PLII) measurements. Flame lift-off length measurements were also made in support of the soot measurements. At constant ambient temperature, results show that the equivalence ratio estimated at the lift-off length does not vary with the use of EGR, implying an equal amount of fuel-air mixing prior to combustion. Soot measurements show that the soot volume fraction decreases with increasing EGR. The regions of soot formation are effectively "stretched out" to longer axial and radial distances from the injector with increasing EGR, according to the dilution in ambient oxygen. However, the axial soot distribution and location of maximum soot collapse when plotted in terms of a "flame coordinate", where the relative fuel-oxygen mixture is equivalent. The total soot in the jet cross-section at the maximum axial soot location initially increases and then decreases to zero as the oxygen concentration decreases from 21% to 8%. The trend is caused by competition between soot formation rates and increasing residence time. Soot formation rates decrease with decreasing oxygen concentration because of the lower combustion temperatures. At the same time, the residence time for soot formation increases, allowing more time for accumulation of soot. Increasing the ambient temperature above nominal diesel engine conditions leads to a rapid increase in soot for high-EGR conditions when compared to conditions with no EGR.
This result emphasizes the importance of EGR cooling and its beneficial effect on mitigating soot formation. The effect of EGR is consistent for different fuels but soot levels depend on the sooting propensity of the fuel. Specifically, #2 diesel fuel produces soot levels more than ten times higher than those of n-heptane. Copyright © 2005 SAE International.

A scalable distributed parallel breadth-first search algorithm on BlueGene/L

Proceedings of the ACM/IEEE 2005 Supercomputing Conference, SC'05

Yoo, Andy; Chow, Edmond; Henderson, Keith; McLendon, William; Hendrickson, Bruce A.; Çatalyürek, Ümit

Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported. © 2005 IEEE.
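At its core, a distributed BFS of this kind expands one frontier per level; the 2D (edge) partitioning determines which processors exchange frontier data at each expansion. A serial sketch of that level-synchronous skeleton (the distribution details are the paper's contribution and are only noted in comments here as assumptions):

```python
def bfs_levels(adj, root):
    # Level-synchronous BFS: each pass expands the entire current frontier.
    # In a 2D-partitioned implementation, this expansion step is what gets
    # distributed across a processor grid, with communication confined to
    # processor rows and columns; here it runs serially for clarity.
    level = {root: 0}
    frontier = {root}
    depth = 0
    while frontier:
        depth += 1
        nxt = set()
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in level:
                    level[v] = depth
                    nxt.add(v)
        frontier = nxt
    return level
```

The memory-scalability concern in the abstract maps onto the `frontier` and `level` structures: in a distributed setting each processor holds only its partition's share of them.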

Dual-laser LIDELS: An optical diagnostic for time-resolved volatile fraction measurements of diesel particulate emissions

SAE Technical Papers

Witze, Peter O.; Gershenzon, Michael; Michelsen, Hope A.

Double-pulse laser-induced desorption with elastic laser scattering (LIDELS) is a diagnostic technique capable of making time-resolved, in situ measurements of the volatile fraction of diesel particulate matter (PM). The technique uses two laser pulses of comparable energy, separated in time by an interval sufficiently short to freeze the flow field, to measure the change in PM volume caused by laser-induced desorption of the volatile fraction. The first laser pulse of a pulse-pair produces elastic laser scattering (ELS) that gives the total PM volume, and also deposits the energy to desorb the volatiles. ELS from the second pulse gives the volume of the remaining solid portion of the PM, and the ratio of these two measurements is the quantitative solid volume fraction. In an earlier study, we used a single laser to make real-time LIDELS measurements during steady-state operation of a diesel engine. In this paper, we discuss the advantages and disadvantages of the two LIDELS techniques and present measurements made in real diesel exhaust and simulated diesel exhaust created by coating diffusion-flame soot with single-component hydrocarbons. Comparison with analysis of PM collected on quartz filters reveals that LIDELS considerably under-predicts the volatile fraction. We discuss reasons for this discrepancy and recommend future directions for LIDELS research. Copyright © 2005 SAE International.

Reverse engineering chemical structures from molecular descriptors: How many solutions?

Journal of Computer-Aided Molecular Design

Faulon, Jean-Loup M.; Brown, W.M.; Martin, Shawn

Physical, chemical and biological properties are the ultimate information of interest for chemical compounds. Molecular descriptors that map structural information to activities and properties are obvious candidates for information sharing. In this paper, we consider the feasibility of using molecular descriptors to safely exchange chemical information in such a way that the original chemical structures cannot be reverse engineered. To investigate the safety of sharing such descriptors, we compute the degeneracy (the number of structures matching a descriptor value) of several 2D descriptors, and use various methods to search for and reverse engineer structures. We examine degeneracy in the entire chemical space, taking descriptor values from the alkane isomer series and the PubChem database. We further use a stochastic search to retrieve structures matching specific topological index values. Finally, we investigate the safety of exchanging fragmental descriptors using deterministic enumeration. © Springer 2005.
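Degeneracy counting itself is simple to state in code. The toy descriptor below (atom count, bond count) is a deliberately weak stand-in for the paper's 2D descriptors, and the "structures" are just bond lists; both are illustrative assumptions:

```python
from collections import defaultdict

def degeneracy(structures, descriptor):
    # Degeneracy of a descriptor value = how many structures map to it.
    # High degeneracy means the value reveals little about the structure,
    # i.e. it is safer to share.
    buckets = defaultdict(list)
    for s in structures:
        buckets[descriptor(s)].append(s)
    return {value: len(matches) for value, matches in buckets.items()}

# Toy example: the three pentane isomers as bond (edge) lists.
pentanes = [
    [(0, 1), (1, 2), (2, 3), (3, 4)],   # n-pentane
    [(0, 1), (1, 2), (2, 3), (1, 4)],   # 2-methylbutane
    [(0, 1), (0, 2), (0, 3), (0, 4)],   # 2,2-dimethylpropane
]
atoms_bonds = lambda bonds: (len({a for b in bonds for a in b}), len(bonds))
```

All three isomers collapse to the value (5, 4), so this descriptor has degeneracy 3 here and says nothing about connectivity; the paper's question is how close real 2D descriptors come to the opposite, degeneracy-one, regime.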

Wireless and wireline network interactions in disaster scenarios

Proceedings - IEEE Military Communications Conference MILCOM

Jrad, Ahmad; Uzunalioglu, Huseyin; Houck, David J.; O'Reilly, Gerard; Conrad, Stephen H.; Beyeler, Walter E.

The fast and unrelenting spread of wireless telecommunication devices has changed the landscape of the telecommunication world as we know it. Today we find that most users have access to both wireline and wireless communication devices. This widespread availability of alternate modes of communication adds redundancy to networks on one hand, yet causes cross-network impacts during overloads and disruptions on the other. This being the case, it behooves network designers and service providers to understand how this redundancy works so that it can be better utilized in emergency conditions, where the need for redundancy is critical. In this paper, we examine the scope of this redundancy as expressed by telecommunications availability to users under different failure scenarios. We quantify the interaction of wireline and wireless networks during network failures and traffic overloads. Developed as part of a Department of Homeland Security Infrastructure Protection (DHS IP) project, the Network Simulation Modeling and Analysis Research Tool (N-SMART) was used to perform this study. The product of close technical collaboration between the National Infrastructure Simulation and Analysis Center (NISAC) and Lucent Technologies, N-SMART supports detailed wireline and wireless network simulations and detailed modeling of user calling behavior.

Finding strongly connected components in distributed graphs

Journal of Parallel and Distributed Computing

McLendon, William; Hendrickson, Bruce A.; Plimpton, Steven J.; Rauchwerger, Lawrence

The traditional serial algorithm for finding the strongly connected components of a graph is based on depth-first search and has complexity linear in the size of the graph. Depth-first search is difficult to parallelize, which creates a need for a different parallel algorithm for this problem. We describe the implementation of a recently proposed parallel algorithm that finds strongly connected components in distributed graphs, and discuss how it is used in a radiation transport solver. © 2005 Elsevier Inc. All rights reserved.
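The abstract does not name the algorithm; a common parallel-friendly alternative to depth-first search is divide-and-conquer on forward/backward reachability (one SCC is the intersection of the pivot's forward and backward reachable sets), sketched here serially. Treating this as the algorithm in question is an assumption:

```python
def reach(adj, start):
    # Iterative reachability: all vertices reachable from `start` via `adj`.
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def scc_fwbw(nodes, adj, radj):
    # Pick a pivot; its SCC is (forward reachable) & (backward reachable).
    # The three leftover sets are independent subproblems, which is what
    # makes the scheme attractive for parallel and distributed settings.
    if not nodes:
        return []
    pivot = next(iter(nodes))
    fwd = reach(adj, pivot) & nodes
    bwd = reach(radj, pivot) & nodes
    scc = fwd & bwd
    out = [scc]
    for part in (fwd - scc, bwd - scc, nodes - fwd - bwd):
        sub_adj = {u: [v for v in adj.get(u, ()) if v in part] for u in part}
        sub_radj = {u: [v for v in radj.get(u, ()) if v in part] for u in part}
        out.extend(scc_fwbw(part, sub_adj, sub_radj))
    return out
```

Each recursive call needs only set intersections and traversals, not a global DFS ordering, which is why such schemes distribute more naturally than Tarjan's algorithm.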

An isotropic material remap scheme for Eulerian Codes

2nd International Conference on Cybernetics and Information Technologies, Systems and Applications, CITSA 2005, 11th International Conference on Information Systems Analysis and Synthesis, ISAS 2005

Bell, Raymond L.

Shock physics codes in use at many Department of Energy (DOE) and Department of Defense (DoD) laboratories can be divided into two classes: Lagrangian codes (where the computational mesh is 'attached' to the materials) and Eulerian codes (where the computational mesh is 'fixed' in space and the materials flow through the mesh). These two classes of codes exhibit different advantages and disadvantages. Lagrangian codes are good at keeping material interfaces well defined, but suffer when the materials undergo extreme distortion, which leads to severe reductions in the time steps. Eulerian codes are better able to handle severe material distortion (since the mesh is fixed, the time steps are not as severely reduced), but these codes do not keep track of material interfaces very well. So in an Eulerian code the developers must design algorithms to track or reconstruct accurate interfaces between materials as the calculation progresses. However, there are classes of calculations where an interface is not desired between some materials, for instance between materials that are intimately mixed (dusty air or multiphase materials). In these cases a material interface reconstruction scheme is needed that will keep such a mixture separated from other materials in the calculation, but will maintain the mixture attributes. This paper describes the Sandia National Laboratories Eulerian shock physics code known as CTH, and the specialized isotropic material interface reconstruction scheme designed to keep mixed material groups together while keeping different groups separated during the remap step.

Secure Sensor Platform (SSP) for materials' sealing and monitoring applications

Proceedings - International Carnahan Conference on Security Technology

Schoeneman, Barry D.; Blankenau, Steven J.

For over a decade, Sandia National Laboratories has collaborated with domestic and international partners in the development of intelligent radio frequency (RF) loop seals and sensor technologies for multiple applications. Working with US industry, the International Atomic Energy Agency, and Russian institutes, the Sandia team continues to utilize gains in technology performance to develop and deploy increasingly sophisticated platforms. Seals of this type are typically used as item monitors to detect unauthorized actions and malicious attacks in storage and transportation applications. The spectrum of current seal technologies at Sandia National Laboratories ranges from Sandia's initial T-1 design, incorporating bi-directional RF communication with a loop seal and tamper-indicating components, to the highly flexible Secure Sensor Platform (SSP). Sandia National Laboratories is currently pursuing the development of the next-generation fiber optic loop seal. This new device is based upon the previously designed multi-mission electronic sensor and communication platform that launched the development of the T-1A, which is currently in production at Honeywell FM&T for the Savannah River Site. The T-1A is configured as an active fiber optic seal with authenticated, bi-directional RF communications capable of supporting a number of sensors. The next-generation fiber optic loop seal, the Secure Sensor Platform (SSP), enhances virtually all of the existing capabilities of the T-1A and adds many new features and capabilities. The versatility of this new device allows its capabilities to be selected and tailored to best fit the specific application. This paper discusses the capabilities of this new-generation fiber optic loop seal as well as the potential application theater, which can range from rapid, remotely monitored, temporary deployments to long-term item storage monitoring supporting international nuclear non-proliferation. 
This next-generation technology suite addresses the combination of sealing requirements with requirements in unique materials' identification, environmental monitoring, and remote long-term secure communications. © 2005 IEEE.

Modeling enhanced blast explosives using a multiphase mixture approach

WIT Transactions on the Built Environment

Baer, M.R.; Schmitt, R.G.; Hertel, E.S.; DesJardin, P.E.

In this overview we present a reactive multiphase flow model to describe the physical processes associated with enhanced blast. This model is incorporated into CTH, a shock physics code, using a variant of the Baer and Nunziato nonequilibrium multiphase mixture model to describe shock-driven reactive flow, including the effects of interphase mass exchange, particulate drag, heat transfer, and secondary combustion of multiphase mixtures. This approach is applied to address the various aspects of the reactive behavior of enhanced blast, including detonation and the subsequent expansion of reactive products. The latter stage of reactive explosion involves shock-driven multiphase flow that produces instabilities which are the prelude to the generation of turbulence and subsequent mixing of surrounding air to cause secondary combustion. Turbulent flow is modeled in the context of Large Eddy Simulation (LES) with the formalism of multiphase PDF theory, including a mechanistic model of metal combustion. © 2005 WIT Press.

Geometry and material choices govern hard-rock drilling performance of PDC drag cutters

American Rock Mechanics Association - 40th US Rock Mechanics Symposium, ALASKA ROCKS 2005: Rock Mechanics for Energy, Mineral and Infrastructure Development in the Northern Regions

Wise, Jack L.

Sandia National Laboratories has partnered with industry on a multifaceted, baseline experimental study that supports the development of improved drag cutters for advanced drill bits. Different nonstandard cutter lots were produced and subjected to laboratory tests that evaluated the influence of selected design and processing parameters on cutter loads, wear, and durability pertinent to the penetration of hard rock with mechanical properties representative of formations encountered in geothermal or deep oil/gas drilling environments. The focus was on cutters incorporating ultrahard PDC (polycrystalline diamond compact) overlays (i.e., diamond tables) on tungsten-carbide substrates. Parameter variations included changes in cutter geometry, material composition, and processing conditions. Geometric variables were the diamond-table thickness, the cutting-edge profile, and the PDC/substrate interface configuration. Material and processing variables for the diamond table were, respectively, the diamond particle size and the sintering pressure applied during cutter fabrication. Complementary drop-impact, granite-log abrasion, linear cutting-force, and rotary-drilling tests examined the response of cutters from each lot. Substantial changes in behavior were observed from lot to lot, allowing the identification of features contributing major (factor of 10+) improvements in cutting performance for hard-rock applications. Recent field demonstrations highlight the advantages of employing enhanced cutter technology during challenging drilling operations.

Advancing alloy 718 vacuum arc remelting technology through developing model-based controls

Proceedings of the International Symposium on Superalloys and Various Derivatives

Williamson, Rodney L.; Beaman, Joseph J.; Zanner, Frank J.; Debarbadillo, John J.

The Specialty Metals Processing Consortium (SMPC) was established in 1990 with the goal of advancing the technology of melting and remelting nickel and titanium alloys. In recent years, the SMPC technical program has focused on developing technology to improve control over the final ingot remelting and solidification processes to alleviate conditions that lead to the formation of inclusions and positive and negative segregation. A primary objective is the development of advanced monitoring and control techniques for application to vacuum arc remelting (VAR), with special emphasis on VAR of Alloy 718. This has led to the development of an accurate, low-order electrode melting model for this alloy as well as an advanced process estimator that provides real-time estimates of important process variables such as electrode temperature distribution, instantaneous melt rate, process efficiency, fill ratio, and voltage bias. This, in turn, has enabled the development and industrial application of advanced VAR process monitoring and control systems. The technology is based on the simple idea that the set of variables describing the state of the process must be self-consistent as required by the dynamic process model. The output of the process estimator comprises the statistically optimal estimate of this self-consistent set. Process upsets such as those associated with glows and cracked electrodes are easily identified using estimator-based methods.

Laser-induced damage of polycrystalline silicon optically powered MEMS actuators

Proceedings of the ASME/Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems: Advances in Electronic Packaging 2005

Serrano, Justin R.; Brooks, Carlton F.; Phinney, Leslie

Optical MEMS devices are commonly interfaced with lasers for communication, switching, or imaging applications. Dissipation of the absorbed energy in such devices is often limited by dimensional constraints which may lead to overheating and damage of the component. Surface micromachined, optically powered thermal actuators fabricated from two 2.25 μm thick polycrystalline silicon layers were irradiated with 808 nm continuous wave laser light with a 100 μm diameter spot under increasing power levels to assess their resistance to laser-induced damage. Damage occurred immediately after laser irradiation at laser powers above 275 mW and 295 mW for 150 μm diameter circular and 194 μm by 150 μm oval targets, respectively. At laser powers below these thresholds, the exposure time required to damage the actuators increased linearly and steeply as the incident laser power decreased. Increasing the area of the connections between the two polycrystalline silicon layers of the actuator target decreases the extent of the laser damage. Additionally, an optical thermal actuator target with 15 μm × 15 μm posts withstood 326 mW for over 16 minutes without exhibiting damage to the surface. Copyright © 2005 by ASME.

More Details

Monolithic passively Q-switched Cr:Nd:GSGG microlaser

Proceedings of SPIE - The International Society for Optical Engineering

Schmitt, Randal L.

Optical firing sets need miniature, robust, reliable pulsed laser sources for a variety of triggering functions. In many cases, these lasers must withstand high transient radiation environments. In this paper we describe a monolithic passively Q-switched microlaser constructed using Cr:Nd:GSGG as the gain material and Cr4+:YAG as the saturable absorber, both of which are radiation hard crystals. This laser consists of a 1-mm-long piece of undoped YAG, a 7-mm-long piece of Cr:Nd:GSGG, and a 1.5-mm-long piece of Cr4+:YAG diffusion bonded together. The ends of the assembly are polished flat and parallel and dielectric mirrors are coated directly on the ends to form a compact, rugged, monolithic laser. When end pumped with a diode laser emitting at ∼807.6 nm, this passively Q-switched laser produces ∼1.5-ns-wide pulses. While the unpumped flat-flat cavity is geometrically unstable, thermal lensing and gain guiding produce a stable cavity with a TEM00 Gaussian output beam over a wide range of operating parameters. The output energy of the laser is scalable and dependent on the cross-sectional area of the pump beam. This laser has produced Q-switched output energies from several μJ per pulse to several hundred μJ per pulse with excellent beam quality. Its short pulse length and good beam quality result in the high peak power density required for many applications, such as optically triggering sprytrons. In this paper we discuss the design, construction, and characterization of this monolithic laser as well as energy scaling of the laser up to several hundred μJ per pulse.
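The peak-power claim is easy to sanity-check with back-of-envelope arithmetic; a sketch using the stated ∼1.5 ns pulse width and the upper ∼100 μJ pulse energy (the focused spot diameter is a hypothetical value for illustration, not from the paper):

```python
import math

# Back-of-envelope peak power for the Q-switched pulses described above.
# Pulse width and energy are from the abstract; the focused spot diameter
# is a hypothetical value chosen only to illustrate power density.
energy_J = 100e-6        # ~100 uJ per pulse (upper end quoted)
width_s = 1.5e-9         # ~1.5 ns pulse width
peak_W = energy_J / width_s
print(f"{peak_W / 1e3:.0f} kW peak")      # ~67 kW

spot_d_m = 100e-6        # hypothetical 100 um focused spot
area_m2 = math.pi * (spot_d_m / 2) ** 2
density_W_cm2 = peak_W / (area_m2 * 1e4)  # convert m^2 -> cm^2
print(f"{density_W_cm2:.2e} W/cm^2")
```

Tens of kilowatts of peak power focused into a ∼100 μm spot yields power densities near 10⁹ W/cm², which is why the short pulse and good beam quality matter for triggering applications.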

More Details

On-chip preconcentration of proteins for picomolar detection in oral fluids

Micro Total Analysis Systems - Proceedings of MicroTAS 2005 Conference: 9th International Conference on Miniaturized Systems for Chemistry and Life Sciences

Hatch, A.V.; Herr, A.E.; Throckmorton, Daniel J.; Brennan, J.P.; Giannobile, W.V.; Singh, Anup K.

We report an automated on-chip clinical diagnostic that integrates analyte mixing, preconcentration, and subsequent detection using native polyacrylamide gel electrophoresis (PAGE) immunoanalysis. Sample proteins are concentrated > 100-fold with an in situ polymerized size exclusion membrane. The membrane also facilitates rapid mixing of reagents and sample prior to analysis. The integrated system was used to rapidly (within minutes) detect immune-response markers in saliva acquired from patients with periodontal disease. Copyright © 2005 by the Transducer Research Foundation, Inc.

More Details

Modeling and alleviating instability in a mems vertical comb drive using a progressive linkage

Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005

Bronson, Jessica R.; Wiens, Gloria J.; Allen, James J.

Micro mirrors have emerged as key components for optical microelectromechanical system (MEMS) applications. Electrostatic vertical comb drives are attractive because they can be fabricated underneath the mirror, allowing for arrays with a high fill factor. Vertical comb drives are also more easily controlled than parallel plate actuators, making them the better choice for analog scanning devices. The device presented in this paper is a one-degree-of-freedom vertical comb drive fabricated using Sandia National Laboratories' SUMMiT™ five-level surface micromachining process. The electrostatic performance of the device is investigated using finite element analysis to determine the capacitance of a comb-drive unit cell as the device position is varied. The resulting electrostatic model of the vertical comb drive mirror's behavior is then used to design a progressive linkage that can delay or eliminate the pull-in instability. Copyright © 2005 by ASME.
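The pull-in analysis implied above reduces to a torque balance: equilibrium requires the electrostatic torque ½V²·dC/dθ to equal the restoring torque, and the equilibrium is stable only while the restoring stiffness exceeds the electrostatic stiffness. A minimal sketch, assuming a hypothetical fitted capacitance curve and a linear torsional spring (illustrative values, not SUMMiT™ device data):

```python
import numpy as np

# Hedged sketch of the stability analysis described above. C(theta) is a
# hypothetical capacitance fit; k is a hypothetical torsional stiffness.

def C(theta):
    """Hypothetical capacitance [F] vs. tilt angle [rad]."""
    return 1e-12 * (1 + 40 * theta + 600 * theta ** 2)

def dC(theta, h=1e-6):    # numerical dC/dtheta
    return (C(theta + h) - C(theta - h)) / (2 * h)

def d2C(theta, h=1e-6):   # numerical d2C/dtheta2
    return (C(theta + h) - 2 * C(theta) + C(theta - h)) / h ** 2

k = 1e-7                   # linear torsional stiffness [N*m/rad]

def stable_equilibrium(V, thetas=np.linspace(0, 0.2, 2001)):
    """First stable equilibrium tilt at drive voltage V, or None if pulled in."""
    for th in thetas:
        net = 0.5 * V ** 2 * dC(th) - k * th     # net torque at tilt th
        if net <= 0:                             # equilibrium crossed
            stiff = k - 0.5 * V ** 2 * d2C(th)   # effective stiffness
            return th if stiff > 0 else None
    return None                                  # no equilibrium in range
```

A progressive linkage replaces the linear k·θ restoring term with a stiffening nonlinear one, raising the voltage at which the effective stiffness goes negative and thereby delaying (or eliminating) pull-in.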

More Details
Results 87301–87325 of 99,299