Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real-world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to “prescreen” face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
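The linear-time claim rests on precomputing a distance transform over the range-image grid, after which each model point costs a constant-time lookup. A minimal sketch under these assumptions (a two-pass 3-4 chamfer transform as a cheap Euclidean approximation; all names are illustrative, not the paper's code):

```python
import numpy as np

def chamfer_distance_transform(occupied):
    """Two-pass 3-4 chamfer distance transform; O(n) in the number of pixels."""
    INF = 10**9
    h, w = occupied.shape
    d = np.where(occupied, 0, INF).astype(np.int64)
    for r in range(h):                      # forward pass
        for c in range(w):
            if c > 0:
                d[r, c] = min(d[r, c], d[r, c - 1] + 3)
            if r > 0:
                d[r, c] = min(d[r, c], d[r - 1, c] + 3)
                if c > 0:
                    d[r, c] = min(d[r, c], d[r - 1, c - 1] + 4)
                if c < w - 1:
                    d[r, c] = min(d[r, c], d[r - 1, c + 1] + 4)
    for r in range(h - 1, -1, -1):          # backward pass
        for c in range(w - 1, -1, -1):
            if c < w - 1:
                d[r, c] = min(d[r, c], d[r, c + 1] + 3)
            if r < h - 1:
                d[r, c] = min(d[r, c], d[r + 1, c] + 3)
                if c < w - 1:
                    d[r, c] = min(d[r, c], d[r + 1, c + 1] + 4)
                if c > 0:
                    d[r, c] = min(d[r, c], d[r + 1, c - 1] + 4)
    return d / 3.0                          # roughly Euclidean pixel units

def hausdorff_fraction(model_points, occupied, tau):
    """Fraction of model points within tau pixels of an occupied image pixel."""
    dist = chamfer_distance_transform(occupied)
    hits = sum(1 for (r, c) in model_points if dist[r, c] <= tau)
    return hits / len(model_points)
```

The one-time transform dominates the cost, so matching a small set of discriminating points against many database images stays cheap.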
The fast and unrelenting spread of wireless telecommunication devices has changed the landscape of the telecommunication world as we know it. Today we find that most users have access to both wireline and wireless communication devices. This widespread availability of alternate modes of communication adds redundancy to networks on the one hand, yet on the other hand creates cross-network impacts during overloads and disruptions. This being the case, it behooves network designers and service providers to understand how this redundancy works so that it can be better utilized in emergency conditions where the need for redundancy is critical. In this paper, we examine the scope of this redundancy as expressed by telecommunications availability to users under different failure scenarios. We quantify the interaction of wireline and wireless networks during network failures and traffic overloads. Developed as part of a Department of Homeland Security Infrastructure Protection (DHS IP) project, the Network Simulation Modeling and Analysis Research Tool (N-SMART) was used to perform this study. The product of close technical collaboration between the National Infrastructure Simulation and Analysis Center (NISAC) and Lucent Technologies, N-SMART supports detailed wireline and wireless network simulations, including detailed user calling behavior.
2nd International Conference on Cybernetics and Information Technologies, Systems and Applications, CITSA 2005, 11th International Conference on Information Systems Analysis and Synthesis, ISAS 2005
Shock physics codes in use at many Department of Energy (DOE) and Department of Defense (DoD) laboratories can be divided into two classes: Lagrangian codes (where the computational mesh is 'attached' to the materials) and Eulerian codes (where the computational mesh is 'fixed' in space and the materials flow through the mesh). These two classes of codes exhibit different advantages and disadvantages. Lagrangian codes are good at keeping material interfaces well defined, but suffer when the materials undergo extreme distortion, which leads to severe reductions in the time steps. Eulerian codes are better able to handle severe material distortion (since the mesh is fixed, the time steps are not as severely reduced), but these codes do not keep track of material interfaces very well. So in an Eulerian code the developers must design algorithms to track or reconstruct accurate interfaces between materials as the calculation progresses. However, there are classes of calculations where an interface is not desired between some materials, for instance between materials that are intimately mixed (dusty air or multiphase materials). In these cases a material interface reconstruction scheme is needed that will keep this mixture separated from other materials in the calculation, but will maintain the mixture attributes. This paper will describe the Sandia National Laboratories Eulerian shock physics code known as CTH, and the specialized isotropic material interface reconstruction scheme designed to keep mixed material groups together while keeping different groups separated during the remap step.
American Rock Mechanics Association - 40th US Rock Mechanics Symposium, ALASKA ROCKS 2005: Rock Mechanics for Energy, Mineral and Infrastructure Development in the Northern Regions
Sandia National Laboratories has partnered with industry on a multifaceted, baseline experimental study that supports the development of improved drag cutters for advanced drill bits. Different nonstandard cutter lots were produced and subjected to laboratory tests that evaluated the influence of selected design and processing parameters on cutter loads, wear, and durability pertinent to the penetration of hard rock with mechanical properties representative of formations encountered in geothermal or deep oil/gas drilling environments. The focus was on cutters incorporating ultrahard PDC (polycrystalline diamond compact) overlays (i.e., diamond tables) on tungsten-carbide substrates. Parameter variations included changes in cutter geometry, material composition, and processing conditions. Geometric variables were the diamond-table thickness, the cutting-edge profile, and the PDC/substrate interface configuration. Material and processing variables for the diamond table were, respectively, the diamond particle size and the sintering pressure applied during cutter fabrication. Complementary drop-impact, granite-log abrasion, linear cutting-force, and rotary-drilling tests examined the response of cutters from each lot. Substantial changes in behavior were observed from lot to lot, allowing the identification of features contributing major (factor of 10+) improvements in cutting performance for hard-rock applications. Recent field demonstrations highlight the advantages of employing enhanced cutter technology during challenging drilling operations.
Proceedings of the International Symposium on Superalloys and Various Derivatives
Williamson, Rodney L.; Beaman, Joseph J.; Zanner, Frank J.; Debarbadillo, John J.
The Specialty Metals Processing Consortium (SMPC) was established in 1990 with the goal of advancing the technology of melting and remelting nickel and titanium alloys. In recent years, the SMPC technical program has focused on developing technology to improve control over the final ingot remelting and solidification processes to alleviate conditions that lead to the formation of inclusions and positive and negative segregation. A primary objective is the development of advanced monitoring and control techniques for application to vacuum arc remelting (VAR), with special emphasis on VAR of Alloy 718. This has led to the development of an accurate, low order electrode melting model for this alloy as well as an advanced process estimator that provides real-time estimates of important process variables such as electrode temperature distribution, instantaneous melt rate, process efficiency, fill ratio, and voltage bias. This, in turn, has enabled the development and industrial application of advanced VAR process monitoring and control systems. The technology is based on the simple idea that the set of variables describing the state of the process must be self-consistent as required by the dynamic process model. The output of the process estimator comprises the statistically optimal estimate of this self-consistent set. Process upsets such as those associated with glows and cracked electrodes are easily identified using estimator-based methods.
Proceedings of the ASME/Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems: Advances in Electronic Packaging 2005
Optical firing sets need miniature, robust, reliable pulsed laser sources for a variety of triggering functions. In many cases, these lasers must withstand high transient radiation environments. In this paper we describe a monolithic passively Q-switched microlaser constructed using Cr:Nd:GSGG as the gain material and Cr4+:YAG as the saturable absorber, both of which are radiation-hard crystals. This laser consists of a 1-mm-long piece of undoped YAG, a 7-mm-long piece of Cr:Nd:GSGG, and a 1.5-mm-long piece of Cr4+:YAG diffusion bonded together. The ends of the assembly are polished flat and parallel, and dielectric mirrors are coated directly on the ends to form a compact, rugged, monolithic laser. When end pumped with a diode laser emitting at ∼807.6 nm, this passively Q-switched laser produces ∼1.5-ns-wide pulses. While the unpumped flat-flat cavity is geometrically unstable, thermal lensing and gain guiding produce a stable cavity with a TEM00 Gaussian output beam over a wide range of operating parameters. The output energy of the laser is scalable and dependent on the cross-sectional area of the pump beam. This laser has produced Q-switched output energies from several μJ per pulse to several hundred μJ per pulse with excellent beam quality. Its short pulse length and good beam quality result in the high peak power density required for many applications such as optically triggering sprytrons. In this paper we discuss the design, construction, and characterization of this monolithic laser as well as energy scaling of the laser up to several hundred μJ per pulse.
Micro Total Analysis Systems - Proceedings of MicroTAS 2005 Conference: 9th International Conference on Miniaturized Systems for Chemistry and Life Sciences
High-power 18650 Li-ion cells have been developed for hybrid electric vehicle applications as part of the DOE FreedomCAR Advanced Technology Development (ATD) program. The cells were developed to meet high-power, long-life, low-cost, and abuse-tolerance requirements. The thermal abuse responses of advanced materials and cells were measured and compared. Cells were constructed to determine the thermal runaway response and the flammability of gas products evolved during venting. Advanced cathode and anode materials were evaluated for improved tolerance under abusive conditions. Calorimetric methods were used to measure the thermal response and properties of the cells and cell materials up to 450 °C. Improvements in thermal runaway response have been shown using combinations of these materials.
Fall Technical Meeting of the Western States Section of the Combustion Institute 2005, WSS/CI 2005 Fall Meeting
Kulatilaka, W.D.; Lucht, R.P.; Settersten, T.B.
We report results from an investigation of the two-color polarization spectroscopy (TC-PS) and two-color six-wave mixing (TC-SWM) techniques for the measurement of atomic hydrogen in flames. The 243-nm two-photon pumping of the 1S-2S transition of the H atom was followed by single-photon probing of the 2S-3P transition at 656 nm. The necessary laser radiation was generated using two distributed feedback dye lasers (DFDLs) pumped by two regeneratively amplified, picosecond Nd:YAG lasers. The DFDL pulses are nearly Fourier-transform limited and have a pulse width of approximately 80 ps. The effects of pump and probe beam polarizations on the TC-PS and TC-SWM signals were studied in detail. The collisional dynamics of the H(2l) level were also investigated in an atmospheric-pressure hydrogen-air flame by scanning the time delay between the pump and probe pulses. An increase in signal intensity by a factor of approximately 100 was observed in the TC-SWM geometry as compared to the TC-PS geometry.
Ion mobility spectrometry (IMS) is recognized as one of the most sensitive and versatile techniques for the detection of trace levels of organic vapors. IMS is widely used for detecting contraband narcotics, explosives, toxic industrial compounds, and chemical warfare agents. The increasing threat of terrorist attacks, the proliferation of narcotics, Chemical Weapons Convention treaty verification, and humanitarian de-mining efforts have mandated that equal importance be placed on analysis time as well as on the quality of the analytical data. IMS is unrivaled when both speed of response and sensitivity have to be considered. With conventional (signal-averaging) IMS systems, less than 1% of the available ions contribute to the measured signal. Furthermore, the signal-averaging process incorporates scan-to-scan variations, decreasing resolution. With external second gate Fourier transform ion mobility spectrometry (FT-IMS), the entrance gate frequency is variable and can be altered in conjunction with other data acquisition parameters to increase the spectral resolution. The FT-IMS entrance gate operates with a 50% duty cycle and so affords a 7- to 10-fold increase in sensitivity. Recent data on high explosives are presented to demonstrate the parametric optimization in sensitivity and resolution of our system.
This report summarizes the work performed as part of a one-year LDRD project, 'Evolutionary Complexity for Protection of Critical Assets.' A brief introduction is given to the topics of genetic algorithms and genetic programming, followed by a discussion of relevant results obtained during the project's research, and finally the conclusions drawn from those results. The focus is on using genetic programming to evolve solutions for relatively simple algebraic equations as a prototype application for evolving complexity in computer codes. The results were obtained using the lil-gp genetic program, a C code for evolving solutions to user-defined problems and functions. These results suggest that genetic programs are not well-suited to evolving complexity for critical asset protection because they cannot efficiently evolve solutions to complex problems, and introduce unacceptable performance penalties into solutions for simple ones.
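The flavor of the experiments described above can be conveyed with a toy tree-based genetic program for symbolic regression. This is an illustrative miniature in Python, not the lil-gp C code used in the project, and all function names are hypothetical:

```python
import operator
import random

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth, rng):
    """Grow a random expression tree over {x, small ints, +, -, *}."""
    if depth == 0 or rng.random() < 0.3:
        return 'x' if rng.random() < 0.7 else rng.randint(-2, 2)
    op = rng.choice(sorted(OPS))
    return (op, random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at a value of x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, a, b = tree
    return OPS[op](evaluate(a, x), evaluate(b, x))

def error(tree, target, xs):
    """Sum-of-squares fitness against the target function."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def mutate(tree, rng):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or rng.random() < 0.3:
        return random_tree(2, rng)
    op, a, b = tree
    if rng.random() < 0.5:
        return (op, mutate(a, rng), b)
    return (op, a, mutate(b, rng))

def evolve(target, xs, pop_size=50, gens=30, seed=0):
    """Mutation-only GP with elitism; returns the best tree found."""
    rng = random.Random(seed)
    pop = [random_tree(3, rng) for _ in range(pop_size)]
    best = min(pop, key=lambda t: error(t, target, xs))
    for _ in range(gens):
        scored = sorted(pop, key=lambda t: error(t, target, xs))
        if error(scored[0], target, xs) < error(best, target, xs):
            best = scored[0]
        parents = scored[:pop_size // 5]
        pop = [mutate(rng.choice(parents), rng) for _ in range(pop_size)]
        pop[0] = best        # elitism: the champion always survives
    return best
```

Even for simple algebraic targets such as x² + x, a run of this kind makes the report's conclusion tangible: progress is erratic and heavily dependent on random seeding, which is the efficiency concern the project identified.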
The Sandia National Laboratories Corporate Mentor Program provides a mechanism for the development and retention of Sandia's people and knowledge. The relationships formed among staff members at different stages in their careers offer benefits to all. These relationships can provide experienced employees with new ideas and insight and give less experienced employees knowledge of Sandia's culture, strategies, and programmatic direction. The program volunteer coordinators are dedicated to the satisfaction of the participants, who come from every area of Sandia. Since its inception in 1995, the program has sustained steady growth and excellent customer satisfaction. This report summarizes the accomplishments, activities, enhancements, and evaluation data for the Corporate Mentor Program for the 2003/2004 program year ending May 1, 2004.
This SAND report provides the technical progress through June 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling", funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes.
In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution. Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex and heterogeneous and that require coupling to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort.
This manual describes the input syntax to the ALEGRA radiation transport package. All input and output variables are defined, as well as all algorithmic controls. This manual describes the radiation input syntax for ALEGRA-HEDP. The ALEGRA manual[2] describes how to run the code and general input syntax. The ALEGRA-HEDP manual[13] describes the input for other physics used in high energy density physics simulations, as well as the opacity models used by this radiation package. An emission model, which is the lowest order radiation transport approximation, is also described in the ALEGRA-HEDP manual. This document is meant to be used with these other manuals.
ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation in inviscid fluids and solids. This document describes user options for modeling resistive magnetohydrodynamic, thermal conduction, and radiation emission effects.
Mobile manipulator systems used by emergency response operators consist of an articulated robot arm, a remotely driven base, a collection of cameras, and a remote communications link. Typically the system is completely teleoperated, with the operator using live video feedback to monitor and assess the environment, plan task activities, and conduct the operations via remote control input devices. The capabilities of these systems are limited, and operators rarely attempt sophisticated operations such as retrieving and utilizing tools, deploying sensors, or building up world models. This project has focused on methods to utilize this video information to enable monitored autonomous behaviors for the mobile manipulator system, with the goal of improving the overall effectiveness of the human/robot system. Work includes visual servoing, visual targeting, utilization of embedded video in 3-D models, and improved methods of camera utilization and calibration.
Treatment systems that can neutralize biological agents are needed to mitigate risks from novel and legacy biohazards. Tests with Bacillus thuringiensis and Bacillus stearothermophilus spores were performed in a 190-liter, 1-1/2 lb TNT equivalent rated Explosive Destruction System (EDS) to evaluate its capability to treat and destroy biological agents. Five tests were conducted using three different agents to kill the spores. The EDS was operated in steam autoclave, gas fumigation, and liquid decontamination modes. The first three tests used the EDS as an autoclave, which uses pressurized steam to kill the spores. Autoclaving was performed at 130-140 °C for up to 2 hours. Tests with chlorine dioxide at 750 ppm concentration for 1 hour and 10% (vol) aqueous chlorine bleach solution for 1 hour were also performed. All tests resulted in complete neutralization of the bacterial spores, based on no bacterial growth in post-treatment incubations. Explosively opening a glass container to expose the bacterial spores for treatment with steam was demonstrated and could easily be done for chlorine dioxide gas or liquid bleach.
We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformational search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
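The spirit of the constraint-driven refinement step can be illustrated with a minimal sketch: gradient descent on the sum of squared distance-constraint violations. This omits the statistical terms of the authors' penalty function, and all names are hypothetical:

```python
import numpy as np

def penalty(x, constraints):
    """Sum of squared violations of pairwise distance constraints.
    x: (n, 3) coordinates; constraints: list of (i, j, target_distance)."""
    return sum((np.linalg.norm(x[i] - x[j]) - d) ** 2 for i, j, d in constraints)

def refine(x0, constraints, steps=500, lr=0.05, eps=1e-5):
    """Naive gradient descent using a forward-difference gradient."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        base = penalty(x, constraints)
        g = np.zeros_like(x)
        for k in np.ndindex(*x.shape):
            xp = x.copy()
            xp[k] += eps                 # perturb one coordinate at a time
            g[k] = (penalty(xp, constraints) - base) / eps
        x -= lr * g
    return x
```

For example, three points started in a collapsed configuration relax toward an equilateral triangle when all three pairwise targets are set to 1.0. A real refinement would use analytic gradients and many more constraints, but the structure of the objective is the same.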
We report a new nanolaser technique for measuring characteristics of human mitochondria. Because mitochondria are so small, it has been difficult to study large populations using standard light microscope or flow cytometry techniques. We recently discovered a nano-optical transduction method for high-speed analysis of submicron organelles that is well suited to mitochondrial studies. This ultrasensitive detection technique uses nano-squeezing of light into photon modes imposed by the ultrasmall organelle dimensions in a semiconductor biocavity laser. In this paper, we use the method to study the lasing spectra of normal and diseased mitochondria. We find that the diseased mitochondria exhibit a larger physical diameter and standard deviation. These morphological differences are also revealed in the lasing spectra. The diseased specimens have a larger spectral linewidth than the normal specimens and have more variability in their statistical distributions.
The brain is often identified with decision-making processes in the biological world. In fact, single cells, single macromolecules (proteins) and populations of molecules also make simple decisions. These decision processes are essential to survival and to the biological self-assembly and self-repair processes that we seek to emulate. How do these tiny systems make effective decisions? How do they make decisions in concert with a cooperative network of other molecules or cells? How can we emulate the decision-making behaviors of small-scale biological systems to program and self-assemble microsystems? This LDRD supported research to answer these questions. Our work included modeling and simulation of protein populations to help us understand, mimic, and categorize molecular decision-making mechanisms that nonequilibrium systems can exhibit. This work is an early step towards mimicking such nanoscale and microscale biomolecular decision-making processes in inorganic systems.
Chemically prepared zinc oxide powders are fabricated for the production of high aspect ratio varistor components. Colloidal processing in water was performed to reduce agglomerates to primary particles, form a high solids loading slurry, and prevent dopant migration. The milled and dispersed powder exhibited a viscoelastic to elastic behavioral transition at a volume loading of 43-46%. The origin of this transition was studied using acoustic spectroscopy, zeta potential measurements and oscillatory rheology. The phenomenon occurs due to a volume fraction solids dependent reduction in the zeta potential of the solid phase. It is postulated to result from divalent ion binding within the polyelectrolyte dispersant chain, and was mitigated using a polyethylene glycol plasticizing additive. Chemically prepared zinc oxide powders were processed for the production of high aspect ratio varistor components. Near net shape casting methods including slip casting and agarose gelcasting were evaluated for effectiveness in achieving a uniform green microstructure achieving density values near the theoretical maximum during sintering. The structure of the green parts was examined by mercury porosimetry. Agarose gelcasting produced green parts with low solids loading values and did not achieve high fired density. Isopressing the agarose cast parts after drying raised the fired density to greater than 95%, but the parts exhibited catastrophic shorting during electrical testing. Slip casting produced high green density parts, which exhibited high fired density values. The electrical characteristics of slip cast parts are comparable with dry pressed powder compacts. Alternative methods for near net shape forming of ceramic dispersions were investigated for use with the chemically prepared ZnO material. Recommendations for further investigation to achieve a viable production process are presented.
ALEGRA is an arbitrary Lagrangian-Eulerian multi-material finite element code used for modeling solid dynamics problems involving large distortion and shock propagation. This document describes the basic user input language and instructions for using the software.
An exploratory effort in the application of carbon epoxy composite structural materials to a multi-axis gimbal arm design is described. An existing design in aluminum was used as a baseline for a functionally equivalent redesigned outer gimbal arm using a carbon epoxy composite material. The existing arm was analyzed using finite element techniques to characterize performance in terms of strength, stiffness, and weight. A new design was virtually prototyped using the same tools to produce a design with stiffness and strength similar to the original arm but with reduced overall weight. The new design was prototyped using Rapid Prototyping technology, which was subsequently used to produce molds for fabricating the carbon epoxy composite parts. The design tools, process, and results are discussed.
Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed in which different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large scale optimization. The implementation of our optimization libraries into multiple production simulation codes, in which each code has its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote the use of these interfaces, especially in new developments. Finally, an adjoint-based a posteriori error estimator has been developed for a discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates.
Error estimation is usually conducted with continuous adjoints but if discrete adjoints are available it may be possible to reuse the discrete version for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints through a simple example.
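A standard toy version of such an example, assuming a linear forward model (an illustrative sketch, not the ALEGRA/Premo implementation): for J(b) = 0.5*u·u with A u = b, the discrete adjoint solve Aᵀλ = u yields dJ/db = λ exactly, which can be checked against finite differences. All names here are hypothetical:

```python
import numpy as np

def forward(A, b):
    """Forward problem: solve A u = b."""
    return np.linalg.solve(A, b)

def objective(u):
    """J(u) = 0.5 * u . u"""
    return 0.5 * float(u @ u)

def adjoint_gradient(A, b):
    """Discrete adjoint: solve A^T lam = dJ/du (= u); then dJ/db = lam."""
    u = forward(A, b)
    return np.linalg.solve(A.T, u)

def fd_gradient(A, b, eps=1e-6):
    """Forward-difference gradient of J with respect to b, for checking."""
    g = np.zeros_like(b)
    J0 = objective(forward(A, b))
    for i in range(len(b)):
        bp = b.copy()
        bp[i] += eps
        g[i] = (objective(forward(A, bp)) - J0) / eps
    return g
```

The appeal of the discrete adjoint is visible even here: one extra linear solve gives the entire gradient, whereas finite differencing requires one forward solve per parameter, and the discrete adjoint is consistent with the discretized objective to machine precision rather than to discretization order.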
The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.
The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.
Macroscopic quantum states such as superconductors, Bose-Einstein condensates and superfluids are some of the most unusual states in nature. In this project, we proposed to design a semiconductor system with a 2D layer of electrons separated from a 2D layer of holes by a narrow (but high) barrier. Under certain conditions, the electrons would pair with the nearby holes and form excitons. At low temperature, these excitons could condense to a macroscopic quantum state either through a Bose-Einstein condensation (for weak exciton interactions) or a BCS transition to a superconductor (for strong exciton interactions). While the theoretical predictions have been around since the 1960s, experimental realization of electron-hole bilayer systems has been extremely difficult due to technical challenges. We identified four characteristics that, if successfully incorporated into a device, would give the best chances for excitonic condensation to be observed. These characteristics are closely spaced layers, low disorder, low density, and independent contacts to allow transport measurements. We demonstrated each of these characteristics separately, and then incorporated all of them into a single electron-hole bilayer device. The key to the sample design is using undoped GaAs/AlGaAs heterostructures processed in a field-effect transistor geometry. In such samples, the density of single 2D layers of electrons could be varied from an extremely low value of 2 × 10^9 cm^-2 to high values of 3 × 10^11 cm^-2. The extremely low densities that we achieved in single-layer 2D electrons allowed us to make important contributions to the problem of the metal-insulator transition in two dimensions, while at the same time providing a critical base for understanding low density 2D systems to be used in the electron-hole bilayer experiments.
In this report, we describe the processing advances to fabricate single and double layer undoped samples, the low density results on single layers, and evidence for gateable undoped bilayers.
The objective of this LDRD project was to develop a programmable diffraction grating fabricated in SUMMiT V™. Two types of grating elements (vertical and rotational) were designed and demonstrated. The vertical grating element utilized compound leveraged bending and the rotational grating element used vertical comb drive actuation. This work resulted in two technical advances and one patent application. A new optical configuration of the Polychromator was also demonstrated. The new optical configuration improved the optical efficiency of the system without degrading any other aspect of the system. The new configuration also relaxes some constraints on the programmable diffraction grating.
We summarize the results of a project to develop evolutionary computing methods for the design of behaviors of embodied agents in the form of autonomous vehicles. We conceived and implemented a strategy called graduated embodiment. This method allows high-level behavior algorithms to be developed using genetic programming methods in a low-fidelity, disembodied modeling environment for migration to high-fidelity, complex embodied applications. This project applies our methods to the problem domain of robot navigation using adaptive waypoints, which allow navigation behaviors to be ported among autonomous mobile robots with different degrees of embodiment, using incremental adaptation and staged optimization. Our approach to biomimetic behavior engineering is a hybrid of human design and artificial evolution, with the application of evolutionary computing in stages to preserve building blocks and limit search space. The methods and tools developed for this project are directly applicable to other agent-based modeling needs, including climate-related conflict analysis, multiplayer training methods, and market-based hypothesis evaluation.
This report documents the results of an LDRD program entitled 'System of Systems Modeling and Analysis' that was conducted during FY 2003 and FY 2004. Systems that themselves consist of multiple systems (referred to here as System of Systems or SoS) introduce a level of complexity to systems performance analysis and optimization that is not readily addressable by existing capabilities. The objective of the 'System of Systems Modeling and Analysis' project was to develop an integrated modeling and simulation environment that addresses the complex SoS modeling and analysis needs. The approach to meeting this objective involved two key efforts. First, a static analysis approach, called state modeling, has been developed that is useful for analyzing the average performance of systems over defined use conditions. The state modeling capability supports analysis and optimization of multiple systems and multiple performance measures or measures of effectiveness. The second effort involves time simulation which represents every system in the simulation using an encapsulated state model (State Model Object or SMO). The time simulation can analyze any number of systems including cross-platform dependencies and a detailed treatment of the logistics required to support the systems in a defined mission.
Single molecule fluorophores were studied for the first time with a new confocal fluorescence microscope that allows the wavelength and emission time to be simultaneously measured with single molecule sensitivity. In this apparatus, the photons collected from the sample are imaged through a dispersive optical system onto a time and position sensitive detector. This detector records the wavelength and emission time of each detected photon relative to an excitation laser pulse. A histogram of many events for any selected spatial region or time interval can generate a full fluorescence spectrum and correlated decay plot for the given selection. At the single molecule level, this approach makes entirely new types of temporal and spectral correlation spectroscopy possible. This report presents the results of simultaneous time- and frequency-resolved fluorescence measurements of single rhodamine 6G (R6G), tetramethylrhodamine (TMR), and Cy3 molecules embedded in thin films of polymethylmethacrylate (PMMA).
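As an illustration of the data reduction described above, the sketch below histograms a list of (wavelength, delay-since-pulse) photon events into a fluorescence spectrum and a spectrally gated decay curve. All numbers here (emission band, lifetime, bin counts) are synthetic stand-ins for detector data and do not come from the measurements reported in this work.

```python
import numpy as np

# Each detected photon is one (wavelength, delay) event; histogramming any
# selected subset yields a spectrum and a correlated decay curve.
rng = np.random.default_rng(1)
n_photons = 50_000
wavelength_nm = rng.normal(560.0, 15.0, n_photons)  # assumed emission band
delay_ns = rng.exponential(4.0, n_photons)          # assumed fluorescence decay

# Spectrum: histogram of wavelengths over all events
spec_counts, spec_edges = np.histogram(wavelength_nm, bins=64, range=(500, 620))

# Decay curve: histogram of delays, restricted to a spectral window
window = (wavelength_nm > 550) & (wavelength_nm < 570)
decay_counts, decay_edges = np.histogram(delay_ns[window], bins=64, range=(0, 25))

print(spec_counts.sum(), decay_counts.sum())
```

The same event list supports either reduction, which is the point of recording photons individually rather than integrating on the detector.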
Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.
The focus of this paper is a penalty-based strategy for preconditioning elliptic saddle point systems. As the starting point, we consider the regularization approach of Axelsson in which a related linear system, differing only in the (2,2) block of the coefficient matrix, is introduced. By choosing this block to be negative definite, the dual unknowns of the related system can be eliminated, resulting in a positive definite primal Schur complement. Rather than solving the Schur complement system exactly, an approximate solution is obtained using a substructuring preconditioner. The approximate primal solution together with the recovered dual solution then define the preconditioned residual for the original system. The approach can be applied to a variety of different saddle point problems. Although the preconditioner itself is symmetric and indefinite, all the eigenvalues of the preconditioned system are real and positive if certain conditions hold. Stronger conditions also ensure that the eigenvalues are bounded independently of mesh parameters. An interesting feature of the approach is that conjugate gradients can be used as the iterative solution method rather than GMRES. The effectiveness of the overall strategy hinges on the preconditioner for the primal Schur complement. Interestingly, the primary condition ensuring real and positive eigenvalues is satisfied automatically in certain instances if a Balancing Domain Decomposition by Constraints (BDDC) preconditioner is used. Following an overview of BDDC, we show how its constraints can be chosen to ensure insensitivity to parameter choices in the (2,2) block for problems with a divergence constraint. Examples for different saddle point problems are presented and comparisons made with other approaches.
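A minimal numerical sketch of the regularization step described above, using dense NumPy linear algebra in place of the substructuring preconditioner: the (2,2) block of the saddle point system is replaced by -εI, the dual unknowns are eliminated, and the resulting positive definite primal Schur complement system is solved. The matrices are random stand-ins, not a discretized saddle point problem from the paper.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
n, m, eps = 8, 3, 1e-2

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))    # constraint block (full row rank w.h.p.)
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Regularized system:  A x + B^T y = f,   B x - eps y = g.
# Eliminating y = (B x - g)/eps gives the positive definite primal
# Schur complement system  (A + B^T B / eps) x = f + B^T g / eps.
S = A + B.T @ B / eps
x = solve(S, f + B.T @ g / eps)
y = (B @ x - g) / eps              # recovered dual solution

# Verify (x, y) solves the regularized saddle point system.
resid1 = np.linalg.norm(A @ x + B.T @ y - f)
resid2 = np.linalg.norm(B @ x - eps * y - g)
print(resid1, resid2)
```

Because S is symmetric positive definite, the Schur complement solve here could equally be done with conjugate gradients, which is the property the paper exploits.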
Confinement within the nanoscale pores of a zeolite strongly modifies the behavior of small molecules. Typical of many such interesting and important problems, realistic modeling of this phenomenon requires simultaneously capturing the detailed behavior of chemical bonds and the possibility of collective dynamics occurring in a complex unit cell (672 atoms in the case of Zeolite-4A). Classical simulations alone cannot reliably model the breaking and formation of chemical bonds, while quantum methods alone are incapable of treating the extended length and time scales characteristic of complex dynamics. We have developed a robust and efficient model in which a small region treated with the Kohn-Sham density functional theory is embedded within a larger system represented with classical potentials. This model has been applied in concert with first-principles electronic structure calculations and classical molecular dynamics and Monte Carlo simulations to study the behavior of water, ammonia, the hydroxide ion, and the ammonium ion in Zeolite-4A. Understanding this behavior is important to the predictive modeling of the aging of zeolite-based desiccants. In particular, we have studied the absorption of these molecules, interactions between water and the ammonium ion, and reactions between the hydroxide ion and the zeolite cage. We have shown that interactions with the extended zeolite cage strongly modify these local chemical phenomena, and thereby we have proven our hypothesis that capturing both local chemistry and collective phenomena is essential to realistic modeling of this system. Based on our results, we have been able to identify two possible mechanisms for the aging of zeolite-based desiccants.
Laser diode ignition experiments were conducted in an effort to characterize the effects of scale and heating rate on micro-scale explosive ignition criteria. Over forty experiments were conducted with various laser power densities and laser spot sizes. In addition, relatively simple analytical and numerical calculations were performed to assist with interpretation of the experimental data and characterization of the explosive ignition criteria.
A series of numerical simulations has been performed to determine scaling laws for fast ignition break-even of a hot spot formed by energetic particles created by a short pulse laser. Hot spot break-even is defined to be when the fusion yield is equal to the total energy deposited in the hot spot through both the initial compression and the subsequent heating. In these simulations, only a small portion of a previously compressed mass of deuterium-tritium fuel is heated on a short time scale, i.e., the hot spot is tamped by the cold dense fuel which surrounds it. The hot spot tamping reduces the minimum energy required to obtain break-even as compared to the situation where the entire fuel mass is heated, as was assumed in a previous study [S. A. Slutz, R. A. Vesey, I. Shoemaker, T. A. Mehlhorn, and K. Cochrane, Phys. Plasmas 7, 3483 (2004)]. The minimum energy required to obtain hot spot break-even is given approximately by the scaling law E_T = 7.5 (ρ/100)^(-1.87) kJ for tamped hot spots, as compared to the previously reported scaling of E_UT = 15.3 (ρ/100)^(-1.5) kJ for untamped hot spots. The size of the compressed fuel mass and the focusability of the particles generated by the short pulse laser determine which scaling law to use for an experiment designed to achieve hot spot break-even.
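The two scaling laws quoted above can be compared directly; the sketch below evaluates them at a few fuel densities (ρ in g/cm³, which is assumed from the ρ/100 normalization in the text).

```python
# Hot-spot break-even scaling laws from the text.
# rho is the compressed fuel density in g/cm^3 (assumed units).

def e_tamped_kJ(rho):
    """Tamped hot spot: E_T = 7.5 * (rho/100)**-1.87 kJ."""
    return 7.5 * (rho / 100.0) ** -1.87

def e_untamped_kJ(rho):
    """Untamped hot spot (earlier study): E_UT = 15.3 * (rho/100)**-1.5 kJ."""
    return 15.3 * (rho / 100.0) ** -1.5

if __name__ == "__main__":
    for rho in (100.0, 300.0, 1000.0):
        print(f"rho = {rho:6.0f} g/cm^3: tamped {e_tamped_kJ(rho):7.2f} kJ, "
              f"untamped {e_untamped_kJ(rho):7.2f} kJ")
```

At ρ = 100 g/cm³ the tamped requirement (7.5 kJ) is roughly half the untamped value (15.3 kJ), and the steeper tamped exponent widens that gap at higher densities.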
This article provides a brief review of the field of electroporation and introduces a new microdevice that facilitates studies to test theories, gain understanding, and control this important biomedical technology. Electroporation, a bio-electrochemical process whose fundamentals are not yet understood, is a means of permeating the cell membrane by applying a voltage across the cell and forming nano-scale pores in the membrane. It has become an important field in biotechnology and medicine for the controlled introduction of macromolecules, such as gene constructs and drugs, into various cells. It is viewed as an engineering alternative to biological techniques for the genetic engineering of cells. To study and control electroporation, we have created a low-cost microelectroporation chip that incorporates a live biological cell with an electric circuit. The device revealed an important behavior of cells in electric fields: they produce measurable electrical information about the electroporation state of the cell that may enable precise control of the process. The device can be used to facilitate fundamental studies of electroporation and can become useful in providing precise control over biotechnological processes.
This document provides a detailed discussion and a guide for the use of the RadCat 2.0 Graphical User Interface input file generator for the RADTRAN 5.5 code. The differences between RadCat 2.0 and RadCat 1.0 can be attributed to the differences between RADTRAN 5 and RADTRAN 5.5 as well as clarification for some of the input parameters.
The synthesis of a photoswitchable polymer by grafting an azobenzene dye to methacrylate followed by polymerization is presented. The azobenzene dye undergoes a trans-cis photoisomerization that causes a persistent change in the refractive index of cast polymer films. This novel polymer was incorporated into superlattices prepared by spin casting, and the optical activity of the polymer was maintained. A modified coextruder that allows the rapid production of soft matter superlattices was designed and fabricated.
Two methods for creating a hybrid level-set (LS)/particle method for modeling surface evolution during feature-scale etching and deposition processes are developed and tested. The first method supplements the LS method by introducing Lagrangian marker points in regions of high curvature. Once both the particle set and the LS function are advanced in time, minimization of certain objective functions adjusts the LS function so that its zero contour is in closer alignment with the particle locations. It was found that the objective-minimization problem was unexpectedly difficult to solve, and even when a solution could be found, the acquisition of it proved more costly than simply expanding the basis set of the LS function. The second method explored is a novel explicit marker-particle method that we have named the grid point particle (GPP) approach. Although not a LS method, the GPP approach has strong procedural similarities to certain aspects of the LS approach. A key aspect of the method is a surface rediscretization procedure, applied at each time step and based on a global background mesh, that maintains a representation of the surface while naturally adding and subtracting surface discretization points as the surface evolves in time. This method was coded in 2D, and tested on a variety of surface evolution problems by using it in the ChISELS computer code. Results shown for 2D problems illustrate the effectiveness of the method and highlight some notable advantages in accuracy over the LS method. Generalizing the method to 3D is discussed but not implemented.
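For context on the level-set side of the comparison, here is a minimal LS sketch (not the GPP method or the ChISELS implementation): a circular front is evolved under a uniform outward normal speed F via the equation phi_t + F|∇phi| = 0 on a Cartesian grid, using simple central differences, which suffice for this smooth expanding front.

```python
import numpy as np

# Signed-distance level-set function for a circle of radius 0.3,
# advanced under uniform normal speed F = 1 for a total time of 0.2,
# so the zero contour should end up near radius 0.5.
N, F, dt, steps = 101, 1.0, 0.005, 40
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.3
h = x[1] - x[0]

for _ in range(steps):
    gx, gy = np.gradient(phi, h)                 # grad(phi) by central differences
    phi = phi - dt * F * np.sqrt(gx**2 + gy**2)  # explicit update of phi_t = -F|grad phi|

# Locate the front: average radius of grid points within one cell of phi = 0.
r = np.sqrt(X**2 + Y**2)
front_radius = r[np.abs(phi) < h].mean()
print(front_radius)
```

The implicit representation handles topology changes for free, but, as the text notes, resolving fine surface detail requires expanding the grid resolution, which is the cost the marker-particle approaches try to avoid.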
Understanding and characterizing the electrical properties of multi-conductor shielded and unshielded cables is an important endeavor for many diverse applications, including airlines, land-based communications, nuclear weapons, and any piece of hardware containing multi-conductor cabling. Determining the per-unit-length capacitance and inductance based on the geometry of the conductors, number of conductors, and characteristics of the shield can prove quite difficult. Relating the inductance and capacitance to shielding effectiveness can be even more difficult. An exceedingly large number of measurements were taken to characterize eight multi-conductor cables, of which four were 3-conductor cables and four were 18-conductor cables. Each set of four cables contained a shielded cable and an unshielded cable with the inner conductors twisted together, and a shielded cable and an unshielded cable with the inner conductors not twisted together (or straight). Male LJT connectors were attached on either end of the cable, and each cable had a finished length of 22.5 inches. The measurements performed were self and mutual inductance, self and mutual capacitance, and effective height. For the 18-conductor cables, the measurements yield an 18 × 18 inductance matrix (with the self-inductance terms lying on the diagonal) and an 18 × 18 capacitance matrix. The effective height of each cable was measured over a frequency range from 220 MHz to 18 GHz in a Mode-Stirred Chamber. The effective height of each conductor of each cable was measured individually and with all conductors shorted together, producing 19 frequency responses for each 18-conductor cable. Shielding effectiveness was calculated using the effective heights from the shielded and unshielded cables. The results of these measurements and the statistical analysis of the data are presented, along with a brief comparison with numerical models.
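A sketch of the final calculation described above. It assumes the common field-ratio definition SE(dB) = 20·log10(h_unshielded / h_shielded); the frequency grid and effective-height values below are invented for illustration and are not the report's measured data.

```python
import numpy as np

# Paired effective heights (meters) for matched shielded and unshielded
# cables at a few sample frequencies within the 220 MHz - 18 GHz range.
freq_GHz       = np.array([0.22, 1.0, 5.0, 10.0, 18.0])
h_unshielded_m = np.array([0.10, 0.08, 0.05, 0.04, 0.03])
h_shielded_m   = np.array([0.010, 0.004, 0.001, 0.0008, 0.0009])

# Shielding effectiveness as the dB ratio of coupled field responses.
se_dB = 20.0 * np.log10(h_unshielded_m / h_shielded_m)
for f, se in zip(freq_GHz, se_dB):
    print(f"{f:5.2f} GHz: SE = {se:5.1f} dB")
```

Repeating this division bin by bin over the measured frequency responses yields the shielding-effectiveness curve for each cable pair.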
This report describes an approach for extending the one-dimensional turbulence (ODT) model of Kerstein [6] to treat turbulent flow in three-dimensional (3D) domains. This model, here called ODTLES, can also be viewed as a new large-eddy simulation (LES) model. In ODTLES, 3D aspects of the flow are captured by embedding three, mutually orthogonal, one-dimensional ODT domain arrays within a coarser 3D mesh. The ODTLES model is obtained by developing a consistent approach for dynamically coupling the different ODT line sets to each other and to the large scale processes that are resolved on the 3D mesh. The model is implemented computationally and its performance is tested and evaluated by performing simulations of decaying isotropic turbulence, a standard turbulent flow benchmarking problem.
Military test and training ranges operate with live fire engagements to provide realism important to the maintenance of key tactical skills. Ordnance detonations during these operations typically produce minute residues of parent explosive chemical compounds. Occasional low order detonations also disperse solid phase energetic material onto the surface soil. These detonation remnants are implicated in chemical contamination impacts to groundwater on a limited set of ranges where environmental characterization projects have occurred. Key questions arise regarding how these residues and the environmental conditions (e.g., weather and geostratigraphy) contribute to groundwater pollution impacts. This report documents interim results of a mass transfer model evaluating mass transfer processes from solid phase energetics to soil pore water based on experimental work obtained earlier in this project. This mass transfer numerical model has been incorporated into the porous media simulation code T2TNT. Next year, the energetic material mass transfer model will be developed further using additional experimental data.
This paper covers: (1) how identification of chemical hazards fits into a security risk analysis approach; (2) techniques for target identification; and (3) identification of chemical hazards by different organizations. In summary: (1) A number of different methodologies are used within the chemical industry to identify chemical hazards: (a) some develop a manual listing of potential targets based on published lists of hazardous chemicals or chemicals of concern, 'expert opinion', or known hazards; (b) others develop a prioritized list based on chemicals found at a facility and consequence analysis (offsite release affecting population, theft of material, product tampering). (2) Identification of chemical hazards should include not only intrinsic properties of the chemicals but also potential reactive chemical hazards and potential use for off-site activities.
Technologies that could quickly detect and identify virus particles would play a critical role in fighting bioterrorism and help to contain the rapid spread of disease. Of special interest is the ability to detect the presence and movement of virions without chemically modifying them by attaching molecular probes. This would be useful for rapid detection of pathogens in food or water supplies without the use of expensive chemical reagents. Such detection requires new devices to quickly screen for the presence of tiny pathogens. To develop such a device, we fabricated nanochannels to transport virus particles through ultrashort laser cavities and measured the lasing output as a sensor for virions. To understand this transduction mechanism, we also investigated light scattering from virions, both to determine the magnitude of the scattered signal and to use it to investigate the motion of virions.
This overview is intended to provide the reader with insight into basic reliability issues often confronted when designing long-term geothermal well monitoring equipment. Rather than examining any single system, it presents general examples of long-term reliability from other industries, along with examples of reliability issues involving electronic components and sensors as well as fiber optic sensors and cables. This paper will aid in building systems where a long operating life is required. However, as no introductory paper can cover all reliability issues, only basic assembly practices and testing concepts are presented.
IFPACK provides a suite of object-oriented algebraic preconditioners for use with iterative solvers. IFPACK constructors expect the (distributed) real sparse matrix to be an Epetra RowMatrix object. IFPACK can be used to define point and block relaxation preconditioners, various flavors of incomplete factorizations for symmetric and non-symmetric matrices, and one-level additive Schwarz preconditioners with variable overlap. Exact LU factorizations of the local submatrix can be accessed through the AMESOS package. IFPACK, as part of the Trilinos Solver Project, interacts well with other Trilinos packages. In particular, IFPACK objects can be used as preconditioners for AZTECOO and as smoothers for ML. IFPACK is written mainly in C++, but only a limited subset of C++ features is used, in order to enhance portability.
The evolution of Coulomb interactions across the metal insulator transition (MIT) in a 3D localized conductor was experimentally investigated. The data were used to construct a phase diagram of the macroscopic Coulomb-correlated states as a function of single-particle energy and density. The phase diagrams show the existence of a phase boundary that separates low-energy distinctive metallic or insulating behavior from a higher energy mixed state. The data indicate a diverging screening radius at the critical density, which may signal an interaction-driven thermodynamic state change.
This document transmits the U.S. Fish and Wildlife Service's (Service) biological and conference opinions based on our review of National Nuclear Security Administration's (NNSA) proposed Maximum Operations Alternative at Sandia National Laboratories (SNL/CA), Alameda County, California.
One effect noted during the March 1975 fire at the Browns Ferry plant is that fire-induced cable damage caused a range of unanticipated circuit faults including spurious reactor status signals and the apparent spurious operation of plant systems and components. Current USNRC regulations require that licensees conduct a post-fire safe shutdown analysis that includes consideration of such circuit effects. Post-fire circuit analysis continues to be an area of both technical challenge and regulatory focus. This paper discusses risk perspectives related to post-fire circuit analysis. An opening background discussion outlines the issues, concerns, and technical challenges. The paper then focuses on current risk insights and perspectives relevant to the circuit analysis problem. This includes a discussion of the available experimental data on cable failure modes and effects, a discussion of fire events that illustrate potential fire-induced circuit faults, and a discussion of risk analysis approaches currently being developed and implemented.
The complexity associated with the dynamics of wire arrays, from individual wire ablation to wire-wire interaction and finally stagnation, has been observed with relatively recent advances in experimental diagnostics. These experimental snapshots illustrate the existence of three-dimensional effects (e.g., wire precursor ablation and stagnation, array mass left behind, current density redistribution, multiple stagnations) that have a significant impact on the total radiation output. A detailed understanding of the magnitude and impact of these perturbations is lacking, especially those perturbations in three dimensions. Sandia National Laboratories has developed a new multi-physics simulation framework tailored to high energy density physics (HEDP) environments. ALEGRA-HEDP[1] has begun to simulate this environment and has produced the highest fidelity, two-dimensional simulations of wire-array precursor ablation to date. The three-dimensional code capability now provides the ability to solve for the magnetic field and current density distribution associated with both the wire array and the complex current return structure. With this new capability the impact that experimental view-ports (e.g., slots in the current return can and radial spokes) have on the magnetic field surrounding the array can be investigated. Specifically, the impact that the perturbed magnetic field has on an idealized cylindrical liner implosion has been investigated.
The turbulent convection that takes place in a Chandrasekhar-mass white dwarf during the final few minutes before it explodes determines where and how frequently it ignites. Numerical simulations have shown that the properties of the subsequent Type Ia supernova are sensitive to these ignition conditions. A heuristic model of the turbulent convection is explored. The results suggest that supernova ignition is likely to occur at a radius of order 100 km, rather than at the center of the star.