Publications




Unconstrained paving & plastering: A new idea for all hexahedral mesh generation

Proceedings of the 14th International Meshing Roundtable, IMR 2005

Staten, Matthew L.; Owen, Steven J.; Blacker, Teddy D.

Unconstrained Plastering is a new algorithm with the goal of generating a conformal all-hexahedral mesh on any solid geometry assembly. Paving [1] has proven reliable for quadrilateral meshing on arbitrary surfaces. However, its 3D analog, Plastering [2][3][4][5], is unable to resolve the unmeshed center voids because it is over-constrained by a pre-existing boundary mesh. Unconstrained Plastering attempts to leverage the benefits of Paving and Plastering without the over-constrained nature of Plastering. It uses advancing fronts to inwardly project unconstrained hexahedral layers from an unmeshed boundary; only where three layers cross is a hex element formed. Resolving the final voids is easier because closely spaced, randomly oriented quadrilaterals do not over-constrain the problem. Implementation of Unconstrained Plastering has begun; however, proof of its reliability is still forthcoming. © 2005 Springer-Verlag Berlin Heidelberg.


A mathematically guided strategy for risk assessment and management

WIT Transactions on the Built Environment

Cooper, James A.

Strategies for risk assessment and management of high consequence operations are often based on factors such as physical analysis, analysis of software and other logical processing, and analysis of statistically determined human actions. Conventional analysis methods work well for processing objective information. However, in practical situations, much or most of the data available are subjective. Also, there are potential resultant pitfalls where conventional analysis might be unrealistic, such as improperly using event tree and fault tree failure descriptions where failures or events are soft (partial) rather than crisp (binary), neglecting or misinterpreting dependence (positive, negative, correlation), and aggregating nonlinear contributions linearly. There are also personnel issues that transcend basic human factors statistics. For example, sustained productivity and safety in critical operations can depend on the morale of involved personnel. In addition, motivation is significantly influenced by "latent effects," which are pre-occurring influences. This paper addresses these challenges and proposes techniques for subjective risk analysis, latent effects risk analysis and a hybrid analysis that also includes objective risk analysis. The goal is an improved strategy for risk management. © 2005 WIT Press.


Solvothermal routes for synthesis of zinc oxide nanorods

Materials Research Society Symposium Proceedings

Bell, Nelson S.

Control of the synthesis of nanomaterials to produce morphologies exhibiting quantized properties will enable device integration of several novel applications including biosensors, catalysis, and optical devices. In this work, solvothermal routes to produce zinc oxide nanorods are explored. Much previous work has relied on the addition of growth directing/inhibiting agents to control morphology. It was found in coarsening studies that zinc oxide nanodots will ripen to nanorod morphologies at temperatures of 90 to 120°C. The resulting nanorods have widths of 9-12 nm average dimension, which is smaller than current methods for nanorod synthesis. Use of nanodots as nuclei may be an approach that will allow for controlled growth of higher aspect ratio nanorods. © 2005 Materials Research Society.


FCLib: A library for building data analysis and data discovery tools

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Doyle, Wendy S.K.; Kegelmeyer, William P.

In this paper we describe a data analysis toolkit constructed to meet the needs of data discovery in large scale spatio-temporal data. The toolkit is a C library of building blocks that can be assembled into data analyses. Our goals were to build a toolkit which is easy to use, is applicable to a wide variety of science domains, supports feature-based analysis, and minimizes low-level processing. The discussion centers on the design of a data model and interface that best supports these goals and we present three usage examples. © Springer-Verlag Berlin Heidelberg 2005.


A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Koudelka, Melissa L.; Koch, Mark W.; Russ, Trina D.

Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to “prescreen” face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
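The linear-time Hausdorff-fraction idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a 2D grid stands in for the range image, a two-pass chamfer distance transform (linear in the number of cells) stands in for the exact distance computation, and the tolerance `tau` is an assumed parameter.

```python
# Sketch: Hausdorff fraction on a grid via a chamfer distance transform.
# All names and the grid representation are illustrative assumptions.

INF = float("inf")

def chamfer_distance_map(grid_shape, points):
    """Approximate distance from every cell to the nearest cell in `points`,
    computed in two linear passes (city-block chamfer approximation)."""
    rows, cols = grid_shape
    d = [[INF] * cols for _ in range(rows)]
    for (r, c) in points:
        d[r][c] = 0.0
    # Forward pass: propagate distances from top-left.
    for r in range(rows):
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1.0)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1.0)
    # Backward pass: propagate distances from bottom-right.
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1.0)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1.0)
    return d

def hausdorff_fraction(model_pts, scene_pts, grid_shape, tau):
    """Fraction of model points lying within distance tau of the scene."""
    d = chamfer_distance_map(grid_shape, scene_pts)
    close = sum(1 for (r, c) in model_pts if d[r][c] <= tau)
    return close / len(model_pts)
```

Because the distance map is precomputed once over the grid, scoring a probe against a gallery face costs one lookup per feature point, which is what makes the fraction usable as a fast prescreener.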


Modeling and analysis of a vibratory micro-pin feeder using impulse-based simulation

Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005

Weir, Nathan; Cipra, Raymond J.

A variety of methods exist for the assembly of microscale devices. One such strategy uses microscale force-fit pin insertion to assemble LIGA parts together. One of the challenges associated with this strategy is the handling of small pins, which are 170 microns in diameter with lengths ranging from 500 to 1000 microns. In preparation for insertion, a vibratory micro-pin feeder has been used to successfully singulate and manipulate the pins into a pin storage magazine. This paper presents the development of a deterministic model, simulation tool, and methodology in order to identify and analyze key performance attributes of the vibratory micro-pin feeder system. A brief parametric study was conducted to identify the effects of changing certain system parameters on the bulk behavior of the system, namely the capture rate of the pins. Results showing trends have been obtained for a few specific cases. These results indicate that different system parameters can be chosen to yield better system performance. Copyright © 2005 by ASME.


The quantification of mixture stoichiometry when fuel molecules contain oxidizer elements or oxidizer molecules contain fuel elements

SAE Technical Papers

Mueller, Charles J.

The accurate quantification and control of mixture stoichiometry is critical in many applications using new combustion strategies and fuels (e.g., homogeneous charge compression ignition, gasoline direct injection, and oxygenated fuels). The parameter typically used to quantify mixture stoichiometry (i.e., the proximity of a reactant mixture to its stoichiometric condition) is the equivalence ratio, φ. The traditional definition of φ is based on the relative amounts of fuel and oxidizer molecules in a mixture. This definition provides an accurate measure of mixture stoichiometry when the fuel molecule does not contain oxidizer elements and when the oxidizer molecule does not contain fuel elements. However, the traditional definition of φ leads to problems when the fuel molecule contains an oxidizer element, as is the case when an oxygenated fuel is used, or once reactions have started and the fuel has begun to oxidize. The problems arise because an oxidizer element in a fuel molecule is counted as part of the fuel, even though it is an oxidizer element. Similarly, if an oxidizer molecule contains fuel elements, the fuel elements in the oxidizer molecule are misleadingly lumped in with the oxidizer in the traditional definition of φ. In either case, use of the traditional definition of φ to quantify the mixture stoichiometry can lead to significant errors. This paper introduces the oxygen equivalence ratio, φΩ, a parameter that properly characterizes the instantaneous mixture stoichiometry for a broader class of reactant mixtures than does φ. Because it is an instantaneous measure of mixture stoichiometry, φΩ can be used to track the time-evolution of stoichiometry as a reaction progresses. The relationship between φΩ and φ is shown. Errors are involved when the traditional definition of φ is used as a measure of mixture stoichiometry with fuels that contain oxidizer elements or oxidizers that contain fuel elements; φΩ is used to quantify these errors. 
Proper usage of φΩ is discussed, and φΩ is used to interpret results in a practical example. Copyright © 2005 SAE International.
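The core idea of counting oxygen wherever it appears can be illustrated with a short sketch. The formula below, oxygen atoms required for complete combustion to CO2 and H2O divided by oxygen atoms present, is our illustration of an oxygen-based stoichiometry measure under that assumption, not a transcription of the paper's definition of φΩ.

```python
# Sketch of an oxygen-based equivalence ratio: the mixture is reduced to
# total elemental C, H, and O atom counts, regardless of whether the O
# atoms arrive bound in the fuel or in the oxidizer. Illustrative only.

def phi_omega(n_c, n_h, n_o):
    """Oxygen atoms needed to fully oxidize all C (to CO2) and H (to H2O),
    divided by the oxygen atoms actually present in the mixture."""
    if n_o == 0:
        raise ValueError("mixture contains no oxygen")
    return (2.0 * n_c + 0.5 * n_h) / n_o
```

Under this accounting, a stoichiometric methane-air charge (CH4 + 2 O2) gives 1.0, and an oxygenated fuel's in-molecule oxygen is credited to the denominator rather than miscounted as fuel, which is the failure mode of the traditional φ described above.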


A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes

Proceedings of the 2005 IEEE International Workshop on Advanced Methods for Uncertainty Estimation in Measurement, AMUEM 2005

Crowder, Stephen V.; Moyer, Robert D.

Proposed Supplement 1 to the GUM outlines a "propagation of distributions" approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The Supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals. © 2005 IEEE.
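The two-stage structure can be sketched as follows: an outer loop resamples the input-distribution parameters from their finite-sample sampling distributions, and an inner loop performs the usual propagation of distributions with those perturbed parameters. The normal sampling model, the measurement equation y = x1·x2, and the sample data are illustrative assumptions, not taken from the paper's case study.

```python
# Sketch of a two-stage Monte Carlo for finite-sample uncertainty.
# Measurement equation and distributional assumptions are illustrative.
import random
import statistics

def two_stage_mc(sample1, sample2, n_outer=200, n_inner=200, seed=1):
    rng = random.Random(seed)
    n1, n2 = len(sample1), len(sample2)
    m1, s1 = statistics.mean(sample1), statistics.stdev(sample1)
    m2, s2 = statistics.mean(sample2), statistics.stdev(sample2)
    results = []
    for _ in range(n_outer):
        # Stage 1: perturb the estimated means to reflect that they were
        # themselves estimated from only n1 (resp. n2) observations.
        mu1 = rng.gauss(m1, s1 / n1 ** 0.5)
        mu2 = rng.gauss(m2, s2 / n2 ** 0.5)
        for _ in range(n_inner):
            # Stage 2: ordinary propagation of distributions through
            # the (non-linear) measurement equation y = x1 * x2.
            x1 = rng.gauss(mu1, s1)
            x2 = rng.gauss(mu2, s2)
            results.append(x1 * x2)
    results.sort()
    lo = results[int(0.025 * len(results))]
    hi = results[int(0.975 * len(results))]
    return lo, hi  # approximate 95% interval for the measurand
```

Treating the estimated distributions as exact corresponds to skipping Stage 1 (setting mu1 = m1, mu2 = m2), which yields intervals that are too narrow when the samples are small.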


Acquisition of corresponding fuel distribution and emissions measurements in HCCI engines

SAE Technical Papers

De Zilwa, Shane R.; Steeper, Richard R.

Optical engines are often skip-fired to maintain optical components at acceptable temperatures and to reduce window fouling. Although many different skip-fired sequences are possible, if exhaust emissions data are required, the skip-firing sequence ought to consist of a single fired cycle followed by a series of motored cycles (referred to here as singleton skip-firing). This paper compares a singleton skip-firing sequence with continuous firing at the same inlet conditions, and shows that combustion performance trends with equivalence ratio are similar. However, as expected, reactant temperatures are lower with skip-firing, resulting in retarded combustion phasing, and lower pressures and combustion efficiency. LIF practitioners often employ a homogeneous charge of known composition to create calibration images for converting raw signal to equivalence ratio. Homogeneous in-cylinder mixtures are typically obtained by premixing fuel and air upstream of the engine; however, premixing usually precludes skip-firing. Data are presented demonstrating that using continuously-fired operation to calibrate skip-fired data leads to over-prediction of local equivalence ratio. This is due to a combination of lower reactant temperatures for skip- versus continuous-fired operation, and a fluorescence yield that decreases with temperature. It is further demonstrated that early direct injection can be used as an alternative approach to provide calibration images. The influence of hardware modifications made to optical engines on performance is also examined. Copyright © 2005 SAE International.


Soot formation in diesel combustion under high-EGR conditions

SAE Technical Papers

Idicheria, Cherian I.; Pickett, Lyle M.

Experiments were conducted in an optically accessible constant-volume combustion vessel to investigate soot formation at diesel combustion conditions in a high exhaust-gas recirculation (EGR) environment. The ambient oxygen concentration was decreased systematically from 21% to 8% to simulate a wide range of EGR conditions. Quantitative measurements of in-situ soot in quasi-steady n-heptane and #2 diesel fuel jets were made by using laser extinction and planar laser-induced incandescence (PLII) measurements. Flame lift-off length measurements were also made in support of the soot measurements. At constant ambient temperature, results show that the equivalence ratio estimated at the lift-off length does not vary with the use of EGR, implying an equal amount of fuel-air mixing prior to combustion. Soot measurements show that the soot volume fraction decreases with increasing EGR. The regions of soot formation are effectively "stretched out" to longer axial and radial distances from the injector with increasing EGR, according to the dilution in ambient oxygen. However, the axial soot distribution and location of maximum soot collapse when plotted in terms of a "flame coordinate", where the relative fuel-oxygen mixture is equivalent. The total soot in the jet cross-section at the maximum axial soot location initially increases and then decreases to zero as the oxygen concentration decreases from 21% to 8%. The trend is caused by competition between soot formation rates and increasing residence time. Soot formation rates decrease with decreasing oxygen concentration because of the lower combustion temperatures. At the same time, the residence time for soot formation increases, allowing more time for accumulation of soot. Increasing the ambient temperature above nominal diesel engine conditions leads to a rapid increase in soot for high-EGR conditions when compared to conditions with no EGR. 
This result emphasizes the importance of EGR cooling and its beneficial effect on mitigating soot formation. The effect of EGR is consistent for different fuels but soot levels depend on the sooting propensity of the fuel. Specifically, #2 diesel fuel produces soot levels more than ten times higher than those of n-heptane. Copyright © 2005 SAE International.


A scalable distributed parallel breadth-first search algorithm on BlueGene/L

Proceedings of the ACM/IEEE 2005 Supercomputing Conference, SC'05

Yoo, Andy; Chow, Edmond; Henderson, Keith; McLendon, William; Hendrickson, Bruce A.; Çatalyürek, Ümit

Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported. © 2005 IEEE.
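The level-synchronous structure that underlies such a distributed BFS can be sketched in serial form. In the distributed version, each level's frontier expansion and duplicate removal are the operations that the 2D edge partitioning spreads across the processor grid; here a single process with a dictionary-based adjacency list stands in for the whole machine, purely as an illustration.

```python
# Serial sketch of level-synchronous BFS (the building block that a
# 2D-partitioned distributed BFS parallelizes). Names are illustrative.

def bfs_levels(adj, source):
    """Return {vertex: level} for every vertex reachable from source."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        next_frontier = set()
        # In the distributed scheme, this loop is split by edge blocks
        # and next_frontier is merged via collective communication.
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in level:          # duplicate removal
                    level[v] = depth
                    next_frontier.add(v)
        frontier = next_frontier
    return level
```

The memory-scalability concern in the abstract shows up even here: `frontier` and `next_frontier` can hold a large share of the vertices at the peak level, so the distributed version must keep per-processor copies of these sets bounded.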


Dual-laser LIDELS: An optical diagnostic for time-resolved volatile fraction measurements of diesel particulate emissions

SAE Technical Papers

Witze, Peter O.; Gershenzon, Michael; Michelsen, Hope A.

Double-pulse laser-induced desorption with elastic laser scattering (LIDELS) is a diagnostic technique capable of making time-resolved, in situ measurements of the volatile fraction of diesel particulate matter (PM). The technique uses two laser pulses of comparable energy, separated in time by an interval sufficiently short to freeze the flow field, to measure the change in PM volume caused by laser-induced desorption of the volatile fraction. The first laser pulse of a pulse-pair produces elastic laser scattering (ELS) that gives the total PM volume, and also deposits the energy to desorb the volatiles. ELS from the second pulse gives the volume of the remaining solid portion of the PM, and the ratio of these two measurements is the quantitative solid volume fraction. In an earlier study, we used a single laser to make real-time LIDELS measurements during steady-state operation of a diesel engine. In this paper, we discuss the advantages and disadvantages of the two LIDELS techniques and present measurements made in real diesel exhaust and simulated diesel exhaust created by coating diffusion-flame soot with single-component hydrocarbons. Comparison with analysis of PM collected on quartz filters reveals that LIDELS considerably under-predicts the volatile fraction. We discuss reasons for this discrepancy and recommend future directions for LIDELS research. Copyright © 2005 SAE International.


Reverse engineering chemical structures from molecular descriptors: How many solutions?

Journal of Computer-Aided Molecular Design

Faulon, Jean-Loup M.; Brown, W.M.; Martin, Shawn

Physical, chemical and biological properties are the ultimate information of interest for chemical compounds. Molecular descriptors that map structural information to activities and properties are obvious candidates for information sharing. In this paper, we consider the feasibility of using molecular descriptors to safely exchange chemical information in such a way that the original chemical structures cannot be reverse engineered. To investigate the safety of sharing such descriptors, we compute the degeneracy (the number of structures matching a descriptor value) of several 2D descriptors, and use various methods to search for and reverse engineer structures. We examine degeneracy in the entire chemical space taking descriptor values from the alkane isomer series and the PubChem database. We further use a stochastic search to retrieve structures matching specific topological index values. Finally, we investigate the safety of exchanging fragmental descriptors using deterministic enumeration. © Springer 2005.


Wireless and wireline network interactions in disaster scenarios

Proceedings - IEEE Military Communications Conference MILCOM

Jrad, Ahmad; Uzunalioglu, Huseyin; Houck, David J.; O'Reilly, Gerard; Conrad, Stephen H.; Beyeler, Walter E.

The fast and unrelenting spread of wireless telecommunication devices has changed the landscape of the telecommunication world as we know it. Today we find that most users have access to both wireline and wireless communication devices. This widespread availability of alternate modes of communication adds redundancy to networks on the one hand, yet creates cross-network impacts during overloads and disruptions on the other. This being the case, it behooves network designers and service providers to understand how this redundancy works so that it can be better utilized in emergency conditions where the need for redundancy is critical. In this paper, we examine the scope of this redundancy as expressed by telecommunications availability to users under different failure scenarios. We quantify the interaction of wireline and wireless networks during network failures and traffic overloads. Developed as part of a Department of Homeland Security Infrastructure Protection (DHS IP) project, the Network Simulation Modeling and Analysis Research Tool (N-SMART) was used to perform this study. The product of close technical collaboration between the National Infrastructure Simulation and Analysis Center (NISAC) and Lucent Technologies, N-SMART supports detailed wireline and wireless network simulations and detailed user calling behavior.


Finding strongly connected components in distributed graphs

Journal of Parallel and Distributed Computing

McLendon, William; Hendrickson, Bruce A.; Plimpton, Steven J.; Rauchwerger, Lawrence

The traditional, serial, algorithm for finding the strongly connected components in a graph is based on depth first search and has complexity which is linear in the size of the graph. Depth first search is difficult to parallelize, which creates a need for a different parallel algorithm for this problem. We describe the implementation of a recently proposed parallel algorithm that finds strongly connected components in distributed graphs, and discuss how it is used in a radiation transport solver. © 2005 Elsevier Inc. All rights reserved.
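A serial sketch of the divide-and-conquer forward-backward idea commonly used for parallel SCC detection is given below, under the assumption that this is the family of algorithm the paper implements: pick a pivot, intersect its forward- and backward-reachable sets to obtain one SCC, then recurse on the three remaining vertex subsets, which share no SCC and can therefore be processed independently (in parallel, in the distributed setting). All names are illustrative.

```python
# Sketch of forward-backward (FW-BW) SCC decomposition, serial form.
# adj maps each vertex to its successors; radj to its predecessors.

def reachable(adj, start, allowed):
    """Vertices in `allowed` reachable from start via edges in adj."""
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v in allowed and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def scc_fw_bw(adj, radj, vertices):
    """Return a list of strongly connected components (as sets)."""
    if not vertices:
        return []
    pivot = next(iter(vertices))
    fw = reachable(adj, pivot, vertices)   # forward-reachable set
    bw = reachable(radj, pivot, vertices)  # backward-reachable set
    scc = fw & bw                          # the pivot's SCC
    comps = [scc]
    # The three remainders contain no SCC spanning two of them, so each
    # recursive call is independent work (parallelizable).
    for part in (fw - scc, bw - scc, vertices - (fw | bw)):
        comps.extend(scc_fw_bw(adj, radj, part))
    return comps
```

Unlike the depth-first-search formulation, every step here is built from reachability sweeps, which parallelize naturally over a distributed graph.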


An isotropic material remap scheme for Eulerian Codes

2nd International Conference on Cybernetics and Information Technologies, Systems and Applications, CITSA 2005, 11th International Conference on Information Systems Analysis and Synthesis, ISAS 2005

Bell, Raymond L.

Shock Physics codes in use at many Department of Energy (DOE) and Department of Defense (DoD) laboratories can be divided into two classes: Lagrangian codes (where the computational mesh is 'attached' to the materials) and Eulerian codes (where the computational mesh is 'fixed' in space and the materials flow through the mesh). These two classes of codes exhibit different advantages and disadvantages. Lagrangian codes are good at keeping material interfaces well defined, but suffer when the materials undergo extreme distortion which leads to severe reductions in the time steps. Eulerian codes are better able to handle severe material distortion (since the mesh is fixed the time steps are not as severely reduced), but these codes do not keep track of material interfaces very well. So in an Eulerian code the developers must design algorithms to track or reconstruct accurate interfaces between materials as the calculation progresses. However, there are classes of calculations where an interface is not desired between some materials, for instance between materials that are intimately mixed (dusty air or multiphase materials). In these cases a material interface reconstruction scheme is needed that will keep this mixture separated from other materials in the calculation, but will maintain the mixture attributes. This paper will describe the Sandia National Laboratories Eulerian Shock Physics Code known as CTH, and the specialized isotropic material interface reconstruction scheme designed to keep mixed material groups together while keeping different groups separated during the remap step.


Secure Sensor Platform (SSP) for materials' sealing and monitoring applications

Proceedings - International Carnahan Conference on Security Technology

Schoeneman, Barry D.; Blankenau, Steven J.

For over a decade, Sandia National Laboratories has collaborated with domestic and international partners in the development of intelligent Radio Frequency (RF) loop seals and sensor technologies for multiple applications. Working with US industry, the International Atomic Energy Agency and Russian institutes, the Sandia team continues to utilize gains in technology performance to develop and deploy increasingly sophisticated platforms. Seals of this type are typically used as item monitors to detect unauthorized actions and malicious attacks in storage and transportation applications. The spectrum of current seal technologies at Sandia National Laboratories ranges from Sandia's initial T-1 design incorporating bi-directional RF communication with a loop seal and tamper indicating components to the highly flexible Secure Sensor Platform (SSP). Sandia National Laboratories is currently pursuing the development of the next generation fiber optic loop seal. This new device is based upon the previously designed multi-mission electronic sensor and communication platform that launched the development of the T-1A, which is currently in production at Honeywell FM&T for the Savannah River Site. The T-1A is configured as an active fiber optic seal with authenticated, bi-directional RF communications capable of supporting a number of sensors. The next generation fiber optic loop seal, the Secure Sensor Platform (SSP), enhances virtually all of the existing capabilities of the T-1A and adds many new features and capabilities. The versatility of this new device allows the capabilities to be selected and tailored to best fit the specific application. This paper discusses the capabilities of this new generation fiber optic loop seal as well as the potential application theater, which can range from rapid, remotely-monitored, temporary deployments to long-term item storage monitoring supporting international nuclear non-proliferation. 
This next generation technology suite addresses the combination of sealing requirements with requirements in unique materials' identification, environmental monitoring, and remote long-term secure communications. © 2005 IEEE.


Modeling enhanced blast explosives using a multiphase mixture approach

WIT Transactions on the Built Environment

Baer, M.R.; Schmitt, R.G.; Hertel, E.S.; DesJardin, P.E.

In this overview we present a reactive multiphase flow model to describe the physical processes associated with enhanced blast. This model is incorporated into CTH, a shock physics code, using a variant of the Baer and Nunziato nonequilibrium multiphase mixture to describe shock-driven reactive flow including the effects of interphase mass exchange, particulate drag, heat transfer and secondary combustion of multiphase mixtures. This approach is applied to address the various aspects of the reactive behavior of enhanced blast including detonation and the subsequent expansion of reactive products. The latter stage of reactive explosion involves shock-driven multiphase flow that produces instabilities which are the prelude to the generation of turbulence and subsequent mixing of surrounding air to cause secondary combustion. Turbulent flow is modeled in the context of Large Eddy Simulation (LES) with the formalism of multiphase PDF theory including a mechanistic model of metal combustion. © 2005 WIT Press.


Geometry and material choices govern hard-rock drilling performance of PDC drag cutters

American Rock Mechanics Association - 40th US Rock Mechanics Symposium, ALASKA ROCKS 2005: Rock Mechanics for Energy, Mineral and Infrastructure Development in the Northern Regions

Wise, Jack L.

Sandia National Laboratories has partnered with industry on a multifaceted, baseline experimental study that supports the development of improved drag cutters for advanced drill bits. Different nonstandard cutter lots were produced and subjected to laboratory tests that evaluated the influence of selected design and processing parameters on cutter loads, wear, and durability pertinent to the penetration of hard rock with mechanical properties representative of formations encountered in geothermal or deep oil/gas drilling environments. The focus was on cutters incorporating ultrahard PDC (polycrystalline diamond compact) overlays (i.e., diamond tables) on tungsten-carbide substrates. Parameter variations included changes in cutter geometry, material composition, and processing conditions. Geometric variables were the diamond-table thickness, the cutting-edge profile, and the PDC/substrate interface configuration. Material and processing variables for the diamond table were, respectively, the diamond particle size and the sintering pressure applied during cutter fabrication. Complementary drop-impact, granite-log abrasion, linear cutting-force, and rotary-drilling tests examined the response of cutters from each lot. Substantial changes in behavior were observed from lot to lot, allowing the identification of features contributing major (factor of 10+) improvements in cutting performance for hard-rock applications. Recent field demonstrations highlight the advantages of employing enhanced cutter technology during challenging drilling operations.


Advancing alloy 718 vacuum arc remelting technology through developing model-based controls

Proceedings of the International Symposium on Superalloys and Various Derivatives

Williamson, Rodney L.; Beaman, Joseph J.; Zanner, Frank J.; Debarbadillo, John J.

The Specialty Metals Processing Consortium (SMPC) was established in 1990 with the goal of advancing the technology of melting and remelting nickel and titanium alloys. In recent years, the SMPC technical program has focused on developing technology to improve control over the final ingot remelting and solidification processes to alleviate conditions that lead to the formation of inclusions and positive and negative segregation. A primary objective is the development of advanced monitoring and control techniques for application to vacuum arc remelting (VAR), with special emphasis on VAR of Alloy 718. This has led to the development of an accurate, low-order electrode melting model for this alloy as well as an advanced process estimator that provides real-time estimates of important process variables such as electrode temperature distribution, instantaneous melt rate, process efficiency, fill ratio, and voltage bias. This, in turn, has enabled the development and industrial application of advanced VAR process monitoring and control systems. The technology is based on the simple idea that the set of variables describing the state of the process must be self-consistent as required by the dynamic process model. The output of the process estimator comprises the statistically optimal estimate of this self-consistent set. Process upsets such as those associated with glows and cracked electrodes are easily identified using estimator-based methods.


Laser-induced damage of polycrystalline silicon optically powered MEMS actuators

Proceedings of the ASME/Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems: Advances in Electronic Packaging 2005

Serrano, Justin R.; Brooks, Carlton F.; Phinney, Leslie

Optical MEMS devices are commonly interfaced with lasers for communication, switching, or imaging applications. Dissipation of the absorbed energy in such devices is often limited by dimensional constraints which may lead to overheating and damage of the component. Surface micromachined, optically powered thermal actuators fabricated from two 2.25 μm thick polycrystalline silicon layers were irradiated with 808 nm continuous wave laser light with a 100 μm diameter spot under increasing power levels to assess their resistance to laser-induced damage. Damage occurred immediately after laser irradiation at laser powers above 275 mW and 295 mW for 150 μm diameter circular and 194 μm by 150 μm oval targets, respectively. At laser powers below these thresholds, the exposure time required to damage the actuators increased linearly and steeply as the incident laser power decreased. Increasing the area of the connections between the two polycrystalline silicon layers of the actuator target decreases the extent of the laser damage. Additionally, an optical thermal actuator target with 15 μm × 15 μm posts withstood 326 mW for over 16 minutes without exhibiting damage to the surface. Copyright © 2005 by ASME.

More Details

Monolithic passively Q-switched Cr:Nd:GSGG microlaser

Proceedings of SPIE - The International Society for Optical Engineering

Schmitt, Randal L.

Optical firing sets need miniature, robust, reliable pulsed laser sources for a variety of triggering functions. In many cases, these lasers must withstand high transient radiation environments. In this paper we describe a monolithic passively Q-switched microlaser constructed using Cr:Nd:GSGG as the gain material and Cr4+:YAG as the saturable absorber, both of which are radiation hard crystals. This laser consists of a 1-mm-long piece of undoped YAG, a 7-mm-long piece of Cr:Nd:GSGG, and a 1.5-mm-long piece of Cr4+:YAG diffusion bonded together. The ends of the assembly are polished flat and parallel and dielectric mirrors are coated directly on the ends to form a compact, rugged, monolithic laser. When end pumped with a diode laser emitting at ∼807.6 nm, this passively Q-switched laser produces ∼1.5-ns-wide pulses. While the unpumped flat-flat cavity is geometrically unstable, thermal lensing and gain guiding produce a stable cavity with a TEM00 Gaussian output beam over a wide range of operating parameters. The output energy of the laser is scalable and dependent on the cross sectional area of the pump beam. This laser has produced Q-switched output energies from several μJ per pulse to several hundred μJ per pulse with excellent beam quality. Its short pulse length and good beam quality result in the high peak power density required for many applications such as optically triggering sprytrons. In this paper we discuss the design, construction, and characterization of this monolithic laser as well as energy scaling of the laser up to several hundred μJ per pulse.

More Details

On-chip preconcentration of proteins for picomolar detection in oral fluids

Micro Total Analysis Systems - Proceedings of MicroTAS 2005 Conference: 9th International Conference on Miniaturized Systems for Chemistry and Life Sciences

Hatch, A.V.; Herr, A.E.; Throckmorton, Daniel J.; Brennan, J.P.; Giannobile, W.V.; Singh, Anup K.

We report an automated on-chip clinical diagnostic that integrates analyte mixing, preconcentration, and subsequent detection using native polyacrylamide gel electrophoresis (PAGE) immunoanalysis. Sample proteins are concentrated > 100-fold with an in situ polymerized size exclusion membrane. The membrane also facilitates rapid mixing of reagents and sample prior to analysis. The integrated system was used to rapidly (minutes) detect immune-response markers in saliva acquired from periodontal diseased patients. Copyright © 2005 by the Transducer Research Foundation, Inc.

More Details

Modeling and alleviating instability in a MEMS vertical comb drive using a progressive linkage

Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005

Bronson, Jessica R.; Wiens, Gloria J.; Allen, James J.

Micro mirrors have emerged as key components for optical microelectromechanical system (MEMS) applications. Electrostatic vertical comb drives are attractive because they can be fabricated underneath the mirror, allowing for arrays with a high fill factor. Also, vertical comb drives are more easily controlled than parallel plate actuators, making them the better choice for analog scanning devices. The device presented in this paper is a one-degree-of-freedom vertical comb drive fabricated using Sandia National Laboratories' SUMMiT™ five-level surface micromachining process. The electrostatic performance of the device is investigated using finite element analysis to determine the capacitance of a unit cell of the comb drive as the position of the device is varied. This information is used to develop an electrostatic model of the vertical comb drive's behavior and to design a progressive linkage that can delay or eliminate the pull-in instability. Copyright © 2005 by ASME.
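The physics behind the pull-in instability mentioned above can be sketched numerically. The numbers, capacitance fit, and spring constant below are invented for illustration and are not the paper's FEA results: electrostatic torque is 0.5·V²·dC/dθ, and pull-in occurs when that torque outgrows the restoring spring over the travel range.

```python
import math

def capacitance(theta):
    """Hypothetical fit (F) of comb-drive unit-cell capacitance versus
    rotation angle theta (rad); stands in for an FEA lookup table."""
    return 1e-12 * (1.0 + 40.0 * theta + 300.0 * theta ** 2)

def dC_dtheta(theta, h=1e-6):
    # central-difference derivative of the capacitance fit
    return (capacitance(theta + h) - capacitance(theta - h)) / (2.0 * h)

def equilibrium_angle(v, k=2e-8, steps=20000):
    """Relax theta until the electrostatic torque 0.5*V^2*dC/dtheta
    balances the spring torque k*theta; None on runaway (pull-in)."""
    theta = 0.0
    for _ in range(steps):
        torque = 0.5 * v ** 2 * dC_dtheta(theta) - k * theta
        theta += 1e5 * torque        # crude relaxation iteration
        if theta > 0.1:              # far past the travel range: pull-in
            return None
    return theta

# Sweep the drive voltage to locate the pull-in threshold
for v in (1.0, 2.0, 4.0, 8.0):
    th = equilibrium_angle(v)
    print(v, "pull-in" if th is None else round(math.degrees(th), 3))
```

A progressive linkage, in these terms, would stiffen the restoring torque as deflection grows so that no voltage in the operating range produces a runaway.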

More Details

Development and characterization of Li-ion batteries for the FreedomCAR Advanced Technology Development program

IEEE Vehicular Technology Conference

Roth, Emanuel P.; Doughty, Daniel H.

High-power 18650 Li-ion cells have been developed for hybrid electric vehicle applications as part of the DOE FreedomCAR Advanced Technology Development (ATD) program. Cells have been developed for high-power, long-life, low-cost and abuse tolerance conditions. The thermal abuse response of advanced materials and cells was measured and compared. Cells were constructed for abuse tolerance testing to determine the thermal runaway response and the flammability of gas products evolved during venting. Advanced cathode and anode materials were evaluated for improved tolerance under abusive conditions. Calorimetric methods were used to measure the thermal response and properties of the cells and cell materials up to 450 °C. Improvements in thermal runaway response have been demonstrated using combinations of these materials.

More Details

Investigation of two-color polarization spectroscopy (TC-PS) and two-color six-wave mixing (TC-SWM) for detection of atomic hydrogen

Fall Technical Meeting of the Western States Section of the Combustion Institute 2005, WSS/CI 2005 Fall Meeting

Kulatilaka, W.D.; Lucht, R.P.; Settersten, T.B.

We report results from an investigation of the two-color polarization spectroscopy (TC-PS) and two-color six-wave mixing (TC-SWM) techniques for the measurement of atomic hydrogen in flames. The 243-nm two-photon pumping of the 1S-2S transition of the H-atom was followed by single-photon probing of the 2S-3P transition at 656 nm. The necessary laser radiation was generated using two distributed feedback dye lasers (DFDLs) pumped by two regeneratively amplified, picosecond Nd:YAG lasers. The DFDL pulses are nearly Fourier-transform limited and have a pulse width of approximately 80 ps. The effects of pump and probe beam polarizations on the TC-PS and TC-SWM signals were studied in detail. The collisional dynamics of the H(2l) level were also investigated in an atmospheric-pressure hydrogen-air flame by scanning the time delay between the pump and probe pulses. An increase in signal intensity of approximately a factor of 100 was observed in the TC-SWM geometry as compared to the TC-PS geometry.

More Details

Mitochondrial correlation microscopy and nanolaser spectroscopy - New tools for biophotonic detection of cancer in single cells

Technology in Cancer Research and Treatment

Gourley, Paul L.; Hendricks, Judy K.; Mcdonald, Anthony; Copeland, Robert; Barrett, Keith E.; Gourley, Cheryl R.; Singh, Keshav K.; Naviaux, Robert K.

Currently, pathologists rely on labor-intensive microscopic examination of tumor cells using century-old staining methods that can give false readings. Emerging BioMicroNano-technologies have the potential to provide accurate, real-time, high-throughput screening of tumor cells without the need for time-consuming sample preparation. These rapid, nano-optical techniques may play an important role in advancing early detection, diagnosis, and treatment of disease. In this report, we show that laser scanning confocal microscopy can be used to identify a previously unknown property of certain cancer cells that distinguishes them, with single-cell resolution, from closely related normal cells. This property is the correlation of light scattering and the spatial organization of mitochondria. In normal liver cells, mitochondria are highly organized within the cytoplasm and highly scattering, yielding a highly correlated signal. In cancer cells, mitochondria are more chaotically organized and poorly scattering. These differences correlate with important bioenergetic disturbances that are hallmarks of many types of cancer. In addition, we review recent work that exploits the new technology of nanolaser spectroscopy using the biocavity laser to characterize the unique spectral signatures of normal and transformed cells. These optical methods represent powerful new tools that hold promise for detecting cancer at an early stage and may help to limit delays in diagnosis and treatment. ©Adenine Press (2005).

More Details

External Second Gate-Fourier Transform Ion Mobility Spectrometry

Tarver III, Edward E.

Ion mobility spectrometry (IMS) is recognized as one of the most sensitive and versatile techniques for the detection of trace levels of organic vapors. IMS is widely used for detecting contraband narcotics, explosives, toxic industrial compounds and chemical warfare agents. The increasing threat of terrorist attacks, the proliferation of narcotics, Chemical Weapons Convention treaty verification, and humanitarian de-mining efforts have mandated that equal importance be placed on the analysis time as well as the quality of the analytical data (1). IMS is unrivaled when both speed of response and sensitivity have to be considered (2). With conventional (signal averaging) IMS systems, the gating limits the number of available ions contributing to the measured signal to less than 1%. Furthermore, the signal averaging process incorporates scan-to-scan variations, decreasing resolution. With external second gate Fourier Transform ion mobility spectrometry (FT-IMS), the entrance gate frequency is variable and can be altered in conjunction with other data acquisition parameters to increase the spectral resolution. The FT-IMS entrance gate operates with a 50% duty cycle and so affords a 7 to 10-fold increase in sensitivity. Recent data on high explosives are presented to demonstrate the parametric optimization in sensitivity and resolution of our system.
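The Fourier-transform principle can be illustrated with a toy simulation (the peak positions, widths, and sweep parameters below are invented, not data from this work): modulating the gates at a swept frequency maps the drift-time spectrum into a cosine-transform interferogram, and the inverse transform recovers the spectrum.

```python
import numpy as np

# Hypothetical arrival-time spectrum: two Gaussian mobility peaks (s)
t = np.linspace(0.0, 20e-3, 4000)
dt = t[1] - t[0]
spectrum = (np.exp(-((t - 8e-3) / 2e-4) ** 2)
            + 0.5 * np.exp(-((t - 12e-3) / 2e-4) ** 2))

# Sweeping the gate modulation frequency yields an interferogram:
# the cosine transform of the arrival-time distribution.
freqs = np.arange(0.0, 2000.0, 5.0)              # gate sweep (Hz)
df = freqs[1] - freqs[0]
basis = np.cos(2 * np.pi * np.outer(freqs, t))   # (n_freq, n_time)
interferogram = basis @ spectrum * dt

# The inverse cosine transform recovers the drift-time spectrum
recovered = basis.T @ interferogram * df
peak_time = t[np.argmax(recovered)]              # should land near 8 ms
```

Because the gate is open half the time during the sweep, far more ions reach the detector than with a conventional narrow gate pulse, which is the origin of the sensitivity gain quoted above.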

More Details

Evolutionary complexity for protection of critical assets

Chandross, Michael E.; Battaile, Corbett C.

This report summarizes the work performed as part of a one-year LDRD project, 'Evolutionary Complexity for Protection of Critical Assets.' A brief introduction is given to the topics of genetic algorithms and genetic programming, followed by a discussion of relevant results obtained during the project's research, and finally the conclusions drawn from those results. The focus is on using genetic programming to evolve solutions for relatively simple algebraic equations as a prototype application for evolving complexity in computer codes. The results were obtained using the lil-gp genetic programming system, a C code for evolving solutions to user-defined problems and functions. These results suggest that genetic programs are not well-suited to evolving complexity for critical asset protection because they cannot efficiently evolve solutions to complex problems, and introduce unacceptable performance penalties into solutions for simple ones.
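For a flavor of the approach (a generic toy, not the lil-gp code used in the project): a tree-based genetic program evolves expression trees against a squared-error fitness for a simple algebraic target such as x² + x, the kind of problem the report describes.

```python
import random
import operator

random.seed(1)
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    # grow a random expression tree of bounded depth
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1),
            random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(tree):
    # squared error against the target x^2 + x on a sample grid
    xs = [i / 10.0 for i in range(-20, 21)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

def mutate(tree):
    # replace a random subtree with a fresh random one
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(2)
    op, a, b = tree
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

pop = [random_tree() for _ in range(200)]
init_best = min(fitness(t) for t in pop)
for gen in range(60):
    pop.sort(key=fitness)
    survivors = pop[:50]                 # elitist truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(150)]
best = min(pop, key=fitness)
```

Even this minimal mutation-only variant will usually recover small targets; the report's point is that the cost of this search grows badly as the target expressions get more complex.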

More Details

Sandia National Laboratories corporate mentor program : program review, May 2004

Stephens, James R.; Dudeck, William; Bristol, Colette; Tarro, Talitha; Pegues, Tiffany T.

The Sandia National Laboratories Corporate Mentor Program provides a mechanism for the development and retention of Sandia's people and knowledge. The relationships formed among staff members at different stages in their careers offer benefits to all. These relationships can provide experienced employees with new ideas and insight and give less experienced employees knowledge of Sandia's culture, strategies, and programmatic direction. The program volunteer coordinators are dedicated to the satisfaction of the participants, who come from every area of Sandia. Since its inception in 1995, the program has sustained steady growth and excellent customer satisfaction. This report summarizes the accomplishments, activities, enhancements, and evaluation data for the Corporate Mentor Program for the 2003/2004 program year ending May 1, 2004.

More Details

Genomes to life project quarterly report June 2004

Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark D.

This SAND report provides the technical progress through June 2004 of the Sandia-led project, ''Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling'', funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and include significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes.
In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight into the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution. Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex, more heterogeneous, and require coupling to ever increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort.

More Details

A user's guide to radiation transport in ALEGRA-HEDP : version 4.6

Mehlhorn, Thomas A.

This manual describes the input syntax for the ALEGRA-HEDP radiation transport package; all input and output variables are defined, as well as all algorithmic controls. The ALEGRA manual [2] describes how to run the code and the general input syntax. The ALEGRA-HEDP manual [13] describes the input for other physics used in high energy density physics simulations, as well as the opacity models used by this radiation package. An emission model, which is the lowest order radiation transport approximation, is also described in the ALEGRA-HEDP manual. This document is meant to be used with these other manuals.

More Details

Automated visual direction : LDRD 38623 final report

Anderson, Robert J.

Mobile manipulator systems used by emergency response operators consist of an articulated robot arm, a remotely driven base, a collection of cameras, and a remote communications link. Typically the system is completely teleoperated, with the operator using live video feedback to monitor and assess the environment, plan task activities, and to conduct the operations via remote control input devices. The capabilities of these systems are limited, and operators rarely attempt sophisticated operations such as retrieving and utilizing tools, deploying sensors, or building up world models. This project has focused on methods to utilize this video information to enable monitored autonomous behaviors for the mobile manipulator system, with the goal of improving the overall effectiveness of the human/robot system. Work includes visual servoing, visual targeting, utilization of embedded video in 3-D models, and improved methods of camera utilization and calibration.
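As a sketch of the visual-servoing idea mentioned above (generic proportional image-based control, not this project's implementation — the gain and idealized camera response are assumptions): the pixel error between a tracked feature and its goal is fed back as a velocity command until the error vanishes.

```python
def visual_servo_step(feature_px, target_px, gain=0.5):
    """One proportional image-based visual servoing update: return a
    camera velocity command that drives the pixel error toward zero
    (a diagonal image Jacobian is assumed for this sketch)."""
    ex = target_px[0] - feature_px[0]
    ey = target_px[1] - feature_px[1]
    return (gain * ex, gain * ey)

# Simulate driving a tracked feature to the image center, assuming an
# idealized unit-gain camera/robot response to the velocity command.
feat = [40.0, -25.0]
for _ in range(30):
    vx, vy = visual_servo_step(feat, (0.0, 0.0))
    feat[0] += vx
    feat[1] += vy
```

In a real system the scalar gain is replaced by the inverse of a calibrated image Jacobian, which is where the project's camera-calibration work comes in.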

More Details

Assessment of disinfectants in explosive destruction system for biological agent destruction : LDRD final report FY04

Buffleben, George M.; Crooker, Paul J.; Didlake, John E.; Simmons, Blake; Bradshaw, Robert W.

Treatment systems that can neutralize biological agents are needed to mitigate risks from novel and legacy biohazards. Tests with Bacillus thuringiensis and Bacillus stearothermophilus spores were performed in a 190-liter, 1-1/2 lb TNT equivalent rated Explosive Destruction System (EDS) to evaluate its capability to treat and destroy biological agents. Five tests were conducted using three different agents to kill the spores. The EDS was operated in steam autoclave, gas fumigation, and liquid decontamination modes. The first three tests used the EDS as an autoclave, which uses pressurized steam to kill the spores. Autoclaving was performed at 130-140 °C for up to 2 hours. Tests with chlorine dioxide at 750 ppm concentration for 1 hour and 10% (vol) aqueous chlorine bleach solution for 1 hour were also performed. All tests resulted in complete neutralization of the bacterial spores, based on no bacterial growth in post-treatment incubations. Explosively opening a glass container to expose the bacterial spores for treatment with steam was demonstrated and could easily be done for chlorine dioxide gas or liquid bleach.

More Details

Model-building codes for membrane proteins

Brown, William M.; Faulon, Jean-Loup M.; Gray, Genetha A.; Hunt, Thomas W.; Schoeniger, Joseph S.; Slepoy, Alexander S.; Young, Malin M.

We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
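The core optimization — placing helices so that sparse pairwise distances are satisfied — can be sketched as penalty minimization. The constraint set and penalty below are invented for illustration; the paper's penalty additionally scores candidates against statistics of solved membrane protein structures.

```python
import random
import math

random.seed(2)

# Hypothetical sparse constraints (i, j, target distance in angstroms)
# between four helix centers, standing in for MS3-D/EPR/FRET distances.
constraints = [(0, 1, 10.0), (1, 2, 10.0), (2, 3, 10.0),
               (0, 3, 10.0), (0, 2, 14.14), (1, 3, 14.14)]

coords = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(4)]

def penalty(pts):
    """Sum of squared violations of the distance constraints."""
    return sum((math.dist(pts[i], pts[j]) - d) ** 2
               for i, j, d in constraints)

init_pen = penalty(coords)
h, lr = 1e-4, 0.02
for _ in range(5000):                  # finite-difference gradient descent
    for i in range(4):
        for k in range(3):
            coords[i][k] += h
            up = penalty(coords)
            coords[i][k] -= 2.0 * h
            down = penalty(coords)
            coords[i][k] += h          # restore, then take the step
            coords[i][k] -= lr * (up - down) / (2.0 * h)
```

From a random start the four points relax into the square geometry implied by the constraints; the paper's method does the analogous refinement on full helix placements and then adds loops and side-chains.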

More Details

Quantum squeezed light for probing mitochondrial membranes and study of neuroprotectants

Gourley, Paul L.; Copeland, Robert; Mcdonald, Anthony

We report a new nanolaser technique for measuring characteristics of human mitochondria. Because mitochondria are so small, it has been difficult to study large populations using standard light microscope or flow cytometry techniques. We recently discovered a nano-optical transduction method for high-speed analysis of submicron organelles that is well suited to mitochondrial studies. This ultrasensitive detection technique uses nano-squeezing of light into photon modes imposed by the ultrasmall organelle dimensions in a semiconductor biocavity laser. In this paper, we use the method to study the lasing spectra of normal and diseased mitochondria. We find that the diseased mitochondria exhibit larger physical diameter and standard deviation. These morphological differences are also revealed in the lasing spectra. The diseased specimens have a larger spectral linewidth than the normal, and have more variability in their statistical distributions.

More Details

Biomolecular decision-making process for self assembly

Osbourn, Gordon C.

The brain is often identified with decision-making processes in the biological world. In fact, single cells, single macromolecules (proteins) and populations of molecules also make simple decisions. These decision processes are essential to survival and to the biological self-assembly and self-repair processes that we seek to emulate. How do these tiny systems make effective decisions? How do they make decisions in concert with a cooperative network of other molecules or cells? How can we emulate the decision-making behaviors of small-scale biological systems to program and self-assemble microsystems? This LDRD supported research to answer these questions. Our work included modeling and simulation of protein populations to help us understand, mimic, and categorize molecular decision-making mechanisms that nonequilibrium systems can exhibit. This work is an early step towards mimicking such nanoscale and microscale biomolecular decision-making processes in inorganic systems.

More Details

Near net shape forming processes for chemically prepared zinc oxide varistors

Bell, Nelson S.; Lockwood, Steven J.; Voigt, James A.; Tuttle, Bruce

Chemically prepared zinc oxide powders are fabricated for the production of high aspect ratio varistor components. Colloidal processing in water was performed to reduce agglomerates to primary particles, form a high solids loading slurry, and prevent dopant migration. The milled and dispersed powder exhibited a viscoelastic to elastic behavioral transition at a volume loading of 43-46%. The origin of this transition was studied using acoustic spectroscopy, zeta potential measurements and oscillatory rheology. The phenomenon occurs due to a volume fraction solids dependent reduction in the zeta potential of the solid phase. It is postulated to result from divalent ion binding within the polyelectrolyte dispersant chain, and was mitigated using a polyethylene glycol plasticizing additive. Chemically prepared zinc oxide powders were processed for the production of high aspect ratio varistor components. Near net shape casting methods including slip casting and agarose gelcasting were evaluated for effectiveness in achieving a uniform green microstructure and density values near the theoretical maximum during sintering. The structure of the green parts was examined by mercury porosimetry. Agarose gelcasting produced green parts with low solids loading values and did not achieve high fired density. Isopressing the agarose cast parts after drying raised the fired density to greater than 95%, but the parts exhibited catastrophic shorting during electrical testing. Slip casting produced high green density parts, which exhibited high fired density values. The electrical characteristics of slip cast parts are comparable with dry pressed powder compacts. Alternative methods for near net shape forming of ceramic dispersions were investigated for use with the chemically prepared ZnO material. Recommendations for further investigation to achieve a viable production process are presented.

More Details

ALEGRA : version 4.6

Wong, Michael K.; Brunner, Thomas A.; Garasi, Christopher J.; Haill, Thomas A.; Mehlhorn, Thomas A.; Drake, Richard R.; Hensinger, David M.; Robbins, Joshua; Robinson, Allen C.; Summers, Randall M.; Voth, Thomas E.

ALEGRA is an arbitrary Lagrangian-Eulerian multi-material finite element code used for modeling solid dynamics problems involving large distortion and shock propagation. This document describes the basic user input language and instructions for using the software.

More Details

MiniSAR composite gimbal arm development

Klarer, Paul R.

An exploratory effort in the application of carbon epoxy composite structural materials to a multi-axis gimbal arm design is described. An existing design in aluminum was used as a baseline for a functionally equivalent redesigned outer gimbal arm using a carbon epoxy composite material. The existing arm was analyzed using finite element techniques to characterize performance in terms of strength, stiffness, and weight. A new design was virtually prototyped using the same tools, producing a design with similar stiffness and strength but reduced overall weight compared to the original arm. The new design was prototyped using Rapid Prototyping technology, which was subsequently used to produce molds for fabricating the carbon epoxy composite parts. The design tools, process, and results are discussed.

More Details

Sensitivity technologies for large scale simulation

Bartlett, Roscoe; Collis, Samuel S.; Keiter, Eric R.; Ober, Curtis C.; Smith, Thomas M.

Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered, however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed level problems and show how these formulation are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large scale optimization. The implementation of our optimization libraries into multiple production simulation codes in which each code has their own linear algebra interface becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework and the goal is to promote the use of these interfaces especially with new developments. Finally, an adjoint based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. 
Error estimation is usually conducted with continuous adjoints but if discrete adjoints are available it may be possible to reuse the discrete version for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints through a simple example.
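Since derivative calculations are, as the abstract notes, at the root of sensitivity analysis, the forward mode of automatic differentiation is worth illustrating. This toy dual-number class is generic and unrelated to the project's hybrid AD tools: each value carries its derivative along through the arithmetic, so no symbolic or finite-difference step is needed.

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        # product rule: (fg)' = f'g + fg'
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def sin(self):
        # chain rule: (sin f)' = cos(f) * f'
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    # example objective: x*sin(x) + x^2
    return x * x.sin() + x * x

x = Dual(2.0, 1.0)          # seed dx/dx = 1
y = f(x)
# y.val = f(2); y.dot = f'(2) = sin(2) + 2*cos(2) + 4
```

Forward mode costs one pass per input direction, which is why adjoint (reverse) methods like those in the report are preferred when there are many design variables and few outputs.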

More Details

Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan. Part 2, Mappings for the ASC software quality engineering practices. Version 1.0

Boucheron, Edward A.; Schofield, Joseph R.; Drake, Richard R.; Minana, Molly A.; Forsythe, Christi A.; Heaphy, Robert T.; Hodges, Ann L.; Pavlakos, Constantine; Sturtevant, Judith E.

The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress toward achieving their software quality goals.

More Details

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0

Boucheron, Edward A.; Schofield, Joseph R.; Drake, Richard R.; Edwards, Harold C.; Minana, Molly A.; Forsythe, Christi A.; Heaphy, Robert T.; Hodges, Ann L.; Pavlakos, Constantine; Sturtevant, Judith E.

The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

More Details

LDRD final report on engineered superconductivity in electron-hole bilayers

Lilly, Michael; Bielejec, Edward S.; Seamons, John; Dunn, Roberto G.; Lyo, Sungkwun K.; Reno, John L.; Stephenson, Larry L.; Baca, Wes E.; Simmons, Jerry A.

Macroscopic quantum states such as superconductors, Bose-Einstein condensates and superfluids are some of the most unusual states in nature. In this project, we proposed to design a semiconductor system with a 2D layer of electrons separated from a 2D layer of holes by a narrow (but high) barrier. Under certain conditions, the electrons would pair with the nearby holes and form excitons. At low temperature, these excitons could condense to a macroscopic quantum state either through a Bose-Einstein condensation (for weak exciton interactions) or a BCS transition to a superconductor (for strong exciton interactions). While the theoretical predictions have been around since the 1960s, experimental realization of electron-hole bilayer systems has been extremely difficult due to technical challenges. We identified four characteristics that, if successfully incorporated into a device, would give the best chances for excitonic condensation to be observed. These characteristics are closely spaced layers, low disorder, low density, and independent contacts to allow transport measurements. We demonstrated each of these characteristics separately, and then incorporated all of them into a single electron-hole bilayer device. The key to the sample design is using undoped GaAs/AlGaAs heterostructures processed in a field-effect transistor geometry. In such samples, the density of single 2D layers of electrons could be varied from an extremely low value of 2 x 10^9 cm^-2 to high values of 3 x 10^11 cm^-2. The extremely low densities that we achieved in single-layer 2D electrons allowed us to make important contributions to the problem of the metal-insulator transition in two dimensions, while at the same time providing a critical base for understanding low density 2D systems to be used in the electron-hole bilayer experiments.
In this report, we describe the processing advances to fabricate single and double layer undoped samples, the low density results on single layers, and evidence for gateable undoped bilayers.

More Details

Advanced polychromator systems for remote chemical sensing (LDRD project 52575)

Allen, James J.; Sinclair, Michael B.; Pfeifer, Kent B.

The objective of this LDRD project was to develop a programmable diffraction grating fabricated in SUMMiT V™. Two types of grating elements (vertical and rotational) were designed and demonstrated. The vertical grating element utilized compound leveraged bending and the rotational grating element used vertical comb drive actuation. This work resulted in two technical advances and one patent application. A new optical configuration of the Polychromator was also demonstrated. The new configuration improved the optical efficiency of the system without degrading any other aspect of the system, and it also relaxed some constraints on the programmable diffraction grating.

More Details

Graduated embodiment for sophisticated agent evolution and optimization

Boslough, Mark; Peters, Michael D.; Pierson, Arthurine R.

We summarize the results of a project to develop evolutionary computing methods for the design of behaviors of embodied agents in the form of autonomous vehicles. We conceived and implemented a strategy called graduated embodiment. This method allows high-level behavior algorithms to be developed using genetic programming methods in a low-fidelity, disembodied modeling environment for migration to high-fidelity, complex embodied applications. This project applies our methods to the problem domain of robot navigation using adaptive waypoints, which allow navigation behaviors to be ported among autonomous mobile robots with different degrees of embodiment, using incremental adaptation and staged optimization. Our approach to biomimetic behavior engineering is a hybrid of human design and artificial evolution, with the application of evolutionary computing in stages to preserve building blocks and limit search space. The methods and tools developed for this project are directly applicable to other agent-based modeling needs, including climate-related conflict analysis, multiplayer training methods, and market-based hypothesis evaluation.

More Details

System of systems modeling and analysis

Campbell, James E.; Anderson, Dennis J.; Shirah, Donald N.

This report documents the results of an LDRD program entitled 'System of Systems Modeling and Analysis' that was conducted during FY 2003 and FY 2004. Systems that themselves consist of multiple systems (referred to here as System of Systems or SoS) introduce a level of complexity to systems performance analysis and optimization that is not readily addressable by existing capabilities. The objective of the 'System of Systems Modeling and Analysis' project was to develop an integrated modeling and simulation environment that addresses the complex SoS modeling and analysis needs. The approach to meeting this objective involved two key efforts. First, a static analysis approach, called state modeling, has been developed that is useful for analyzing the average performance of systems over defined use conditions. The state modeling capability supports analysis and optimization of multiple systems and multiple performance measures or measures of effectiveness. The second effort involves time simulation which represents every system in the simulation using an encapsulated state model (State Model Object or SMO). The time simulation can analyze any number of systems including cross-platform dependencies and a detailed treatment of the logistics required to support the systems in a defined mission.

More Details

Simultaneous time and frequency resolved fluorescence microscopy of single molecules

Luong, A.K.; Gradinaru, Claudiu C.; Chandler, David; Hayden, Carl C.

Single molecule fluorophores were studied for the first time with a new confocal fluorescence microscope that allows the wavelength and emission time to be simultaneously measured with single molecule sensitivity. In this apparatus, the photons collected from the sample are imaged through a dispersive optical system onto a time and position sensitive detector. This detector records the wavelength and emission time of each detected photon relative to an excitation laser pulse. A histogram of many events for any selected spatial region or time interval can generate a full fluorescence spectrum and correlated decay plot for the given selection. At the single molecule level, this approach makes entirely new types of temporal and spectral correlation spectroscopy possible. This report presents the results of simultaneous time- and frequency-resolved fluorescence measurements of single rhodamine 6G (R6G), tetramethylrhodamine (TMR), and Cy3 molecules embedded in thin films of polymethylmethacrylate (PMMA).
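The photon-by-photon histogramming described above can be sketched with synthetic data; the photon stream, the R6G-like peak wavelength (~560 nm), and the ~4 ns fluorescence lifetime below are assumed values for illustration only, not measured results from the apparatus.

```python
import numpy as np

# Each detected photon carries a (wavelength, delay-after-excitation) pair;
# histogramming a selected subset yields both a fluorescence spectrum and a
# decay curve, as described in the abstract.  Synthetic, assumed parameters.
rng = np.random.default_rng(1)
n_photons = 50_000
wavelength_nm = rng.normal(560.0, 15.0, n_photons)   # emission spectrum
delay_ns = rng.exponential(4.0, n_photons)           # exponential decay

# 2D histogram over (wavelength, delay); the marginals give the two 1D plots.
counts, wl_edges, t_edges = np.histogram2d(
    wavelength_nm, delay_ns, bins=(64, 64),
    range=((500.0, 620.0), (0.0, 25.0)))
spectrum = counts.sum(axis=1)   # fluorescence spectrum vs wavelength
decay = counts.sum(axis=0)      # fluorescence decay vs time

# Spectral peak bin sits near 560 nm; the decay curve peaks in its first bin.
print(spectrum.argmax(), decay.argmax())
```

The same two marginals can be recomputed for any spatial or temporal subset of the photon stream simply by masking the event arrays before histogramming.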

More Details

Si-based RF MEMS components

Dyck, Christopher; Stewart, Harold D.; Fleming, J.G.; Stevens, James E.; Baker, Michael S.; Nordquist, Christopher D.

Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

More Details

Preconditioning of elliptic saddle point systems by substructuring and a penalty approach

Dohrmann, Clark R.

The focus of this paper is a penalty-based strategy for preconditioning elliptic saddle point systems. As the starting point, we consider the regularization approach of Axelsson in which a related linear system, differing only in the (2,2) block of the coefficient matrix, is introduced. By choosing this block to be negative definite, the dual unknowns of the related system can be eliminated resulting in a positive definite primal Schur complement. Rather than solving the Schur complement system exactly, an approximate solution is obtained using a substructuring preconditioner. The approximate primal solution together with the recovered dual solution then define the preconditioned residual for the original system. The approach can be applied to a variety of different saddle point problems. Although the preconditioner itself is symmetric and indefinite, all the eigenvalues of the preconditioned system are real and positive if certain conditions hold. Stronger conditions also ensure that the eigenvalues are bounded independently of mesh parameters. An interesting feature of the approach is that conjugate gradients can be used as the iterative solution method rather than GMRES. The effectiveness of the overall strategy hinges on the preconditioner for the primal Schur complement. Interestingly, the primary condition ensuring real and positive eigenvalues is satisfied automatically in certain instances if a Balancing Domain Decomposition by Constraints (BDDC) preconditioner is used. Following an overview of BDDC, we show how its constraints can be chosen to ensure insensitivity to parameter choices in the (2,2) block for problems with a divergence constraint. Examples for different saddle point problems are presented and comparisons made with other approaches.
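The regularization-and-elimination step described above can be sketched in a few lines of NumPy. This is an illustrative dense-matrix toy with assumed small random blocks A and B; the paper replaces the exact Schur complement solve below with a substructuring (BDDC) preconditioner.

```python
import numpy as np

# Replace the (2,2) zero block of the saddle point matrix [[A, B^T], [B, 0]]
# with -eps*I, eliminate the dual unknowns, and solve the resulting
# positive definite primal Schur complement.  Dense solves are illustrative.
rng = np.random.default_rng(0)
n, m, eps = 8, 3, 1e-6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))    # constraint block, full row rank
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Primal Schur complement of the regularized system [[A, B^T], [B, -eps*I]]:
S = A + (B.T @ B) / eps            # symmetric positive definite
u = np.linalg.solve(S, f + (B.T @ g) / eps)
p = (B @ u - g) / eps              # recovered dual solution

# (u, p) solves the regularized system; both residuals should vanish.
r1 = A @ u + B.T @ p - f
r2 = B @ u - eps * p - g
print(np.linalg.norm(r1), np.linalg.norm(r2))
```

Because the eliminated system is symmetric positive definite, conjugate gradients (rather than GMRES) can serve as the outer iteration, which is the point emphasized in the abstract.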

More Details

Modeling local chemistry in the presence of collective phenomena

Chandross, Michael E.

Confinement within the nanoscale pores of a zeolite strongly modifies the behavior of small molecules. Typical of many such interesting and important problems, realistic modeling of this phenomenon requires simultaneously capturing the detailed behavior of chemical bonds and the possibility of collective dynamics occurring in a complex unit cell (672 atoms in the case of Zeolite-4A). Classical simulations alone cannot reliably model the breaking and formation of chemical bonds, while quantum methods alone are incapable of treating the extended length and time scales characteristic of complex dynamics. We have developed a robust and efficient model in which a small region treated with Kohn-Sham density functional theory is embedded within a larger system represented with classical potentials. This model has been applied in concert with first-principles electronic structure calculations and classical molecular dynamics and Monte Carlo simulations to study the behavior of water, ammonia, the hydroxide ion, and the ammonium ion in Zeolite-4A. Understanding this behavior is important to the predictive modeling of the aging of zeolite-based desiccants. In particular, we have studied the absorption of these molecules, interactions between water and the ammonium ion, and reactions between the hydroxide ion and the zeolite cage. We have shown that interactions with the extended zeolite cage strongly modify these local chemical phenomena, and thereby we have proven our hypothesis that capturing both local chemistry and collective phenomena is essential to realistic modeling of this system. Based on our results, we have been able to identify two possible mechanisms for the aging of zeolite-based desiccants.

More Details

Characterizing the effects of scale and heating rate on micro-scale explosive ignition criteria

Hafenrichter, Everett S.; Pahl, Robert J.

Laser diode ignition experiments were conducted in an effort to characterize the effects of scale and heating rate on micro-scale explosive ignition criteria. Over forty experiments were conducted with various laser power densities and laser spot sizes. In addition, relatively simple analytical and numerical calculations were performed to assist with interpretation of the experimental data and characterization of the explosive ignition criteria.

More Details

Fast ignition breakeven scaling

Proposed for publication in Physics of Plasmas.

Slutz, Stephen A.; Vesey, Roger A.

A series of numerical simulations have been performed to determine scaling laws for fast ignition breakeven of a hot spot formed by energetic particles created by a short pulse laser. Hot spot breakeven is defined to be when the fusion yield is equal to the total energy deposited in the hot spot through both the initial compression and the subsequent heating. In these simulations, only a small portion of a previously compressed mass of deuterium-tritium fuel is heated on a short time scale, i.e., the hot spot is tamped by the cold dense fuel which surrounds it. The hot spot tamping reduces the minimum energy required to obtain breakeven as compared to the situation where the entire fuel mass is heated, as was assumed in a previous study [S. A. Slutz, R. A. Vesey, I. Shoemaker, T. A. Mehlhorn, and K. Cochrane, Phys. Plasmas 7, 3483 (2004)]. The minimum energy required to obtain hot spot breakeven is given approximately by the scaling law E_T = 7.5(ρ/100)^-1.87 kJ for tamped hot spots, as compared to the previously reported scaling of E_UT = 15.3(ρ/100)^-1.5 kJ for untamped hot spots. The size of the compressed fuel mass and the focusability of the particles generated by the short pulse laser determine which scaling law to use for an experiment designed to achieve hot spot breakeven.
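As a quick numerical illustration, the two scaling laws quoted above can be evaluated directly. The density ρ is assumed here to be in g/cm^3 (with 100 g/cm^3 as the reference value), which the abstract does not state explicitly.

```python
# Hot-spot breakeven scaling laws from the abstract (rho in g/cm^3, E in kJ):
#   tamped:   E_T  = 7.5  * (rho/100) ** -1.87
#   untamped: E_UT = 15.3 * (rho/100) ** -1.5

def breakeven_energy_kj(rho, tamped=True):
    """Minimum energy (kJ) for hot-spot breakeven at fuel density rho (g/cm^3)."""
    if tamped:
        return 7.5 * (rho / 100.0) ** -1.87
    return 15.3 * (rho / 100.0) ** -1.5

# At the reference density of 100 g/cm^3 the laws reduce to their prefactors:
print(breakeven_energy_kj(100.0, tamped=True))    # 7.5 kJ
print(breakeven_energy_kj(100.0, tamped=False))   # 15.3 kJ
# Tamping pays off increasingly at higher compression, e.g. 300 g/cm^3:
print(breakeven_energy_kj(300.0, tamped=True))
```

The steeper tamped exponent (-1.87 vs -1.5) is why the advantage of tamping grows with fuel density.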

More Details

Electroporation: bio-electrochemical mass transfer at the nano scale

Davalos, Rafael V.

This article provides a brief review of the field of electroporation and introduces a new microdevice that facilitates studies to test theories, gain understanding, and control this important biomedical technology. Electroporation, a bio-electrochemical process whose fundamentals are not yet understood, is a means of permeating the cell membrane by applying a voltage across the cell and forming nano-scale pores in the membrane. It has become an important field in biotechnology and medicine for the controlled introduction of macromolecules, such as gene constructs and drugs, into various cells. It is viewed as an engineering alternative to biological techniques for the genetic engineering of cells. To study and control electroporation, we have created a low-cost microelectroporation chip that incorporates a live biological cell with an electric circuit. The device revealed an important behavior of cells in electrical fields. They produce measurable electrical information about the electroporation state of the cell that may enable precise control of the process. The device can be used to facilitate fundamental studies of electroporation and can become useful in providing precise control over biotechnological processes.

More Details

RadCat 2.0 User Guide

Osborn, Douglas; Weiner, Ruth F.; Mills, G.S.

This document provides a detailed discussion and a guide for the use of the RadCat 2.0 Graphical User Interface input file generator for the RADTRAN 5.5 code. The differences between RadCat 2.0 and RadCat 1.0 can be attributed to the differences between RADTRAN 5 and RADTRAN 5.5, as well as to clarification of some of the input parameters.

More Details

Synthesis of a photoresponsive polymer and its incorporation into an organic superlattice

Mcelhanon, James R.; Cole, Phillip J.; Rondeau, Chris J.

The synthesis of a photoswitchable polymer by grafting an azobenzene dye to methacrylate followed by polymerization is presented. The azobenzene dye undergoes a trans-cis photoisomerization that causes a persistent change in the refractive index of cast polymer films. This novel polymer was incorporated into superlattices prepared by spin casting and the optical activity of the polymer was maintained. A modified coextruder that allows the rapid production of soft matter superlattices was designed and fabricated.

More Details

LDRD final report: on the development of hybrid level-set/particle methods for modeling surface evolution during feature-scale etching and deposition processes

Schmidt, Rodney C.

Two methods for creating a hybrid level-set (LS)/particle method for modeling surface evolution during feature-scale etching and deposition processes are developed and tested. The first method supplements the LS method by introducing Lagrangian marker points in regions of high curvature. Once both the particle set and the LS function are advanced in time, minimization of certain objective functions adjusts the LS function so that its zero contour is in closer alignment with the particle locations. It was found that the objective-minimization problem was unexpectedly difficult to solve, and even when a solution could be found, the acquisition of it proved more costly than simply expanding the basis set of the LS function. The second method explored is a novel explicit marker-particle method that we have named the grid point particle (GPP) approach. Although not a LS method, the GPP approach has strong procedural similarities to certain aspects of the LS approach. A key aspect of the method is a surface rediscretization procedure--applied at each time step and based on a global background mesh--that maintains a representation of the surface while naturally adding and subtracting surface discretization points as the surface evolves in time. This method was coded in 2D, and tested on a variety of surface evolution problems by using it in the ChISELS computer code. Results shown for 2D problems illustrate the effectiveness of the method and highlight some notable advantages in accuracy over the LS method. Generalizing the method to 3D is discussed but not implemented.

More Details

Statistical characterization of multi-conductor cables using large numbers of measurements

Higgins, Matthew B.

Understanding and characterizing the electrical properties of multi-conductor shielded and unshielded cables is an important endeavor for many diverse applications, including airlines, land-based communications, nuclear weapons, and any piece of hardware containing multi-conductor cabling. Determining the per-unit-length capacitance and inductance based on the geometry of the conductors, the number of conductors, and the characteristics of the shield can prove quite difficult. Relating the inductance and capacitance to shielding effectiveness can be even more difficult. An exceedingly large number of measurements were taken to characterize eight multi-conductor cables: four 3-conductor cables and four 18-conductor cables. Each set of four contained a shielded and an unshielded cable with the inner conductors twisted together, and a shielded and an unshielded cable with the inner conductors left straight. Male LJT connectors were attached on either end, and each cable had a finished length of 22.5 inches. The measurements performed were self and mutual inductance, self and mutual capacitance, and effective height. For the 18-conductor case, the measurements yielded an 18 x 18 inductance matrix (with the self-inductance terms lying on the diagonal) and an 18 x 18 capacitance matrix. The effective height of each cable was measured over a frequency range from 220 MHz to 18 GHz in a mode-stirred chamber. The effective height of each conductor was measured individually and with all conductors shorted together, producing 19 frequency responses for each 18-conductor cable. Shielding effectiveness was calculated using the effective heights from the shielded and unshielded cables. The results of these measurements and the statistical analysis of the data will be presented, along with a brief comparison with numerical models.
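The abstract states that shielding effectiveness was computed from the shielded and unshielded effective heights but does not give the formula; one conventional form, assumed here for illustration, expresses it as the decibel ratio of the two heights.

```python
import math

# Assumed (not stated in the abstract): SE in dB as the ratio of the
# unshielded to shielded effective heights at a given frequency,
#   SE(dB) = 20 * log10(h_unshielded / h_shielded)

def shielding_effectiveness_db(h_unshielded, h_shielded):
    """Shielding effectiveness in dB from two effective-height measurements."""
    return 20.0 * math.log10(h_unshielded / h_shielded)

# Hypothetical values (meters): a 100x reduction in effective height
# corresponds to 40 dB of shielding.
print(shielding_effectiveness_db(1.0e-2, 1.0e-4))  # 40.0
```

Applied per frequency point across the 220 MHz to 18 GHz sweep, this yields a shielding effectiveness curve for each shielded/unshielded cable pair.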

More Details

ODTLES: a model for 3D turbulent flow based on one-dimensional turbulence modeling concepts

Schmidt, Rodney C.; Kerstein, Alan R.

This report describes an approach for extending the one-dimensional turbulence (ODT) model of Kerstein [6] to treat turbulent flow in three-dimensional (3D) domains. This model, here called ODTLES, can also be viewed as a new LES model. In ODTLES, 3D aspects of the flow are captured by embedding three, mutually orthogonal, one-dimensional ODT domain arrays within a coarser 3D mesh. The ODTLES model is obtained by developing a consistent approach for dynamically coupling the different ODT line sets to each other and to the large scale processes that are resolved on the 3D mesh. The model is implemented computationally and its performance is tested and evaluated by performing simulations of decaying isotropic turbulence, a standard turbulent flow benchmarking problem.

More Details

Measurement and modeling of energetic material mass transfer to soil pore water: Project CP-1227: FY04 annual technical report

Webb, Stephen W.; Stein, Joshua

Military test and training ranges operate with live fire engagements to provide realism important to the maintenance of key tactical skills. Ordnance detonations during these operations typically produce minute residues of parent explosive chemical compounds. Occasional low order detonations also disperse solid phase energetic material onto the surface soil. These detonation remnants are implicated in chemical contamination impacts to groundwater on a limited set of ranges where environmental characterization projects have occurred. Key questions arise regarding how these residues and the environmental conditions (e.g., weather and geostratigraphy) contribute to groundwater pollution impacts. This report documents interim results of a mass transfer model evaluating mass transfer processes from solid phase energetics to soil pore water based on experimental work obtained earlier in this project. This mass transfer numerical model has been incorporated into the porous media simulation code T2TNT. Next year, the energetic material mass transfer model will be developed further using additional experimental data.

More Details

Identification of chemical hazards for security risk analysis activities

Jaeger, Calvin D.

The presentation outline of this paper is: (1) How identification of chemical hazards fits into a security risk analysis approach; (2) Techniques for target identification; and (3) Identification of chemical hazards by different organizations. The summary is: (1) There are a number of different methodologies used within the chemical industry which identify chemical hazards: (a) Some develop a manual listing of potential targets based on published lists of hazardous chemicals or chemicals of concern, 'expert opinion' or known hazards. (b) Others develop a prioritized list based on chemicals found at a facility and consequence analysis (offsite release affecting population, theft of material, product tampering). (2) Identification of chemical hazards should include not only intrinsic properties of the chemicals but also potential reactive chemical hazards and potential use for activities off-site.

More Details

Nanofluidic devices for rapid detection of virus particles

Gourley, Paul L.; Mcdonald, Anthony

Technologies that could quickly detect and identify virus particles would play a critical role in fighting bioterrorism and help to contain the rapid spread of disease. Of special interest is the ability to detect the presence and movement of virions without chemically modifying them by attaching molecular probes. This would be useful for rapid detection of pathogens in food or water supplies without the use of expensive chemical reagents. Such detection requires new devices to quickly screen for the presence of tiny pathogens. To develop such a device, we fabricated nanochannels to transport virus particles through ultrashort laser cavities and measured the lasing output as a sensor for virions. To understand this transduction mechanism, we also investigated light scattering from virions, both to determine the magnitude of the scattered signal and to use it to investigate the motion of virions.

More Details

Why well monitoring instruments fail

Normann, Randy A.; Henfling, Joseph A.

This overview is intended to provide the reader with insight into basic reliability issues often confronted when designing long-term geothermal well monitoring equipment. Rather than examining any single system, it presents general examples of long-term reliability from other industries, along with examples of reliability issues involving electronic components and sensors, fiber optic sensors, and cables. This paper will aid in building systems where a long operating life is required. However, as no introductory paper can cover all reliability issues, only basic assembly practices and testing concepts are presented.

More Details

Robust algebraic preconditioners using IFPACK 3.0

Sala, Marzio; Heroux, Michael A.

IFPACK provides a suite of object-oriented algebraic preconditioners for use with iterative solvers. IFPACK constructors expect the (distributed) real sparse matrix to be an Epetra RowMatrix object. IFPACK can be used to define point and block relaxation preconditioners, various flavors of incomplete factorizations for symmetric and non-symmetric matrices, and one-level additive Schwarz preconditioners with variable overlap. Exact LU factorizations of the local submatrix can be accessed through the Amesos package. IFPACK, as part of the Trilinos Solver Project, interacts well with other Trilinos packages. In particular, IFPACK objects can be used as preconditioners for AztecOO and as smoothers for ML. IFPACK is mainly written in C++, but only a limited subset of C++ features is used in order to enhance portability.

More Details

Self-metallization of photocatalytic porphyrin nanotubes

Journal of the American Chemical Society

Medforth, Craig J.; Shelnutt, John A.

Porphyrin nanotubes represent a new class of nanostructures for which the molecular building blocks can be altered to control their structural and functional properties. Nanotubes containing tin(IV) porphyrins are photocatalytically active and can reduce metal ions from aqueous solution. The metal is deposited selectively onto tube surfaces, producing novel composite nanostructures that have potential applications as nanodevices. Two examples presented here are nanotubes with a continuous gold wire in the core and a gold ball at the end and nanotubes coated with platinum nanoparticles mainly on their outer surfaces. The latter are capable of photocatalytic reduction of water to hydrogen. Copyright © 2004 American Chemical Society.

More Details

Understanding GaN nucleation layer evolution on sapphire

Journal of Crystal Growth

Koleske, D.D.; Coltrin, Michael E.; Cross, K.C.; Mitchell, Christine C.; Allerman, A.A.

Optical reflectance and atomic force microscopy (AFM) are used to develop a detailed description of GaN nucleation layer (NL) evolution upon annealing in ammonia and hydrogen to 1050°C. For the experiments, the GaN NLs were grown to a thickness of 30 nm at 540°C, then heated to 1050°C, followed by holding at 1050°C for additional time. As the temperature, T, is increased, the NL decomposes uniformly from 850°C up to 980°C, as observed by the decrease in the optical reflectance signal and the absence of change in the NL AFM images. Decomposition of the original NL material drives the formation of GaN nuclei on top of the NL, which begin to appear near 1000°C, increasing the NL roughness. The GaN nuclei are formed by gas-phase transport of Ga atoms generated during the NL decomposition that recombine with ambient NH3. The gas-phase mechanism responsible for forming the GaN nuclei is demonstrated in two ways. First, the NL decomposition kinetics has an activation energy, EA, of 2.7 eV, and this EA is also observed in the NL roughening as the GaN nuclei increase in size. Second, the power spectral density functions measured with AFM reveal that the GaN nuclei grow via an evaporation and recondensation mechanism. Once the original NL material is fully decomposed, the GaN nuclei stop growing in size and begin to decompose. For the 30 nm thick NLs used in this study, approximately 1/3 of the NL Ga atoms are reincorporated into GaN nuclei. A detailed description of the NL evolution as it is heated to high temperature is presented, along with recommendations on how to enhance or reduce the NL decomposition and nuclei formation before high-T GaN growth. © 2004 Elsevier B.V. All rights reserved.
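As a rough illustration of what the measured activation energy of 2.7 eV implies, an Arrhenius estimate of the relative decomposition rate between the onset and endpoint temperatures of uniform NL decomposition can be computed. The abstract gives no prefactor, so only rate ratios between temperatures are meaningful here.

```python
import math

# Arrhenius rate ratio using the reported activation energy E_A = 2.7 eV
# for GaN nucleation-layer decomposition.  Relative rates only (no prefactor).
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_rate_ratio(e_a_ev, t1_k, t2_k):
    """Ratio of decomposition rates r(t2)/r(t1) for activation energy e_a_ev."""
    return math.exp(-e_a_ev / K_B_EV * (1.0 / t2_k - 1.0 / t1_k))

# Relative rate between the onset (~850 C) and end (~980 C) of uniform
# NL decomposition reported above:
ratio = arrhenius_rate_ratio(2.7, 850.0 + 273.0, 980.0 + 273.0)
print(ratio)  # roughly an order of magnitude faster at 980 C
```

Such a strong temperature dependence is consistent with the narrow temperature window over which the NL decomposition and nuclei formation are observed.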

More Details

Phase diagram of Coulomb interactions across the metal-insulator transition in Si:B

Physical Review Letters

Lee, Mark

The evolution of Coulomb interactions across the metal-insulator transition (MIT) in a 3D localized conductor was experimentally investigated. The data were used to construct a phase diagram of the macroscopic Coulomb-correlated states as a function of single-particle energy and density. The phase diagram shows the existence of a phase boundary that separates distinctive low-energy metallic or insulating behavior from a higher-energy mixed state. The data indicate a diverging screening radius at the critical density, which may signal an interaction-driven thermodynamic state change.

More Details

2004 Biological Opinion

Baker, Alexandra M.; Manger, Trevor J.

This document transmits the U.S. Fish and Wildlife Service's (Service) biological and conference opinions based on our review of National Nuclear Security Administration's (NNSA) proposed Maximum Operations Alternative at Sandia National Laboratories (SNL/CA), Alameda County, California.

More Details

Smart sensor integration into security networks

Proceedings - International Carnahan Conference on Security Technology

Cano, Lester A.

Sandia has been investigating the use of "intelligent sensors" and their integration into "Smart Networks" for security applications. Intelligent sensors include devices that assess various phenomenologies such as radiation, chem-bio agents, radars, and video/video-motion detection. The main problem experienced with these intelligent sensors is integrating the output from the various sensors into a system that reports the data to users in a manner that enables an efficient response to potential threats. The overall systems engineering is a critical part of bringing these intelligent sensors on-line and is important to ensuring that these systems are successfully deployed. The systems engineering effort includes designing and deploying computer networks, interfaces to make systems inter-operable, and training users to ensure that these intelligent sensors can be deployed properly. This paper focuses on Sandia's efforts to investigate the systems architecture for "smart" networks and the various interfaces required between "smart" sensors to implement these "Smart Networks." ©2004 IEEE.

More Details

Cable failure modes and effects risk analysis perspectives

Abstracts of the Pacific Basin Nuclear Conference

Nowlen, Steven P.

One effect noted during the March 1975 fire at the Browns Ferry plant is that fire-induced cable damage caused a range of unanticipated circuit faults including spurious reactor status signals and the apparent spurious operation of plant systems and components. Current USNRC regulations require that licensees conduct a post-fire safe shutdown analysis that includes consideration of such circuit effects. Post-fire circuit analysis continues to be an area of both technical challenge and regulatory focus. This paper discusses risk perspectives related to post-fire circuit analysis. An opening background discussion outlines the issues, concerns, and technical challenges. The paper then focuses on current risk insights and perspectives relevant to the circuit analysis problem. This includes a discussion of the available experimental data on cable failure modes and effects, a discussion of fire events that illustrate potential fire-induced circuit faults, and a discussion of risk analysis approaches currently being developed and implemented.

More Details

ALEGRA-HEDP three dimensional simulations of Z-pinch related physics

Inertial Fusion Sciences and Applications 2003

Garasi, Christopher J.

More Details

Convection and off-center ignition in Type Ia supernovae

Astrophysical Journal

Wunsch, Scott E.; Woosley, S.E.

The turbulent convection that takes place in a Chandrasekhar-mass white dwarf during the final few minutes before it explodes determines where and how frequently it ignites. Numerical simulations have shown that the properties of the subsequent Type Ia supernova are sensitive to these ignition conditions. A heuristic model of the turbulent convection is explored. The results suggest that supernova ignition is likely to occur at a radius of order 100 km, rather than at the center of the star.

More Details