EXCITATION OF FLUTE MODE TURBULENCE IN HIGH BETA CURRENT-CARRYING Z-PINCH PLASMAS
The objective of this report is to promote increased understanding of decision-making processes and thereby to enable improved decision making regarding high-consequence, highly sophisticated technological systems. This report brings together insights regarding risk perception and decision making from domains including nuclear power technology safety, cognitive psychology, economics, science education, public policy, and neuroscience, among others. It forms them into a unique, coherent, concise framework and a list of strategies to aid decision making. It is suggested that all decision makers, whether ordinary citizens, academics, or political leaders, ought to cultivate their ability to separate the wheat from the chaff in these types of decisions. The wheat includes proper data sources and helpful human decision-making heuristics; these should be sought. The chaff includes unhelpful biases that hinder proper interpretation of available data and lead people unwittingly toward inappropriate decision-making strategies; these should be avoided. It is further proposed that successfully separating the wheat from the chaff is difficult yet achievable. This report aims to expose, and to facilitate navigation away from, the decision-making traps that often ensnare the unwary. Furthermore, it is emphasized that one's personal decision-making biases can be examined, and tools can be provided that offer better means to generate, evaluate, and select among decision options. Many examples in this report are tailored to the energy domain (especially nuclear power for electricity generation), but the decision-making framework and approach presented here are applicable to any high-consequence, highly sophisticated technological system.
Anisotropic carbon/glass hybrid composite laminates have been fabricated, tested, and analyzed. The laminates were fabricated using vacuum-assisted resin transfer molding (VARTM). Five fiber complexes and a two-part epoxy resin system were used in the study to fabricate panels of twenty different laminate constructions. These panels were subjected to physical testing to measure density, fiber volume fraction, and void fraction. Coupons machined from these panels were also subjected to mechanical testing to measure the elastic properties and strength of the laminates using tensile, compressive, transverse tensile, and in-plane shear tests. Interlaminar shear strength was also measured. Out-of-plane displacement, axial strain, transverse strain, and in-plane shear strain were measured using photogrammetry data obtained during edgewise compression tests. The test data were reduced to characterize the elastic properties and strength of the laminates. Constraints imposed by test fixtures might be expected to affect measurements of the moduli of anisotropic materials; classical lamination theory has been used to assess the magnitude of such effects and to correct the experimental data for them. The tensile moduli generally correlate well with experiment without correction, indicating that factors other than end constraints dominate. The results suggest that the shear moduli of the anisotropic materials are affected by end constraints. Classical lamination theory has also been used to characterize the level of extension-shear coupling in the anisotropic laminates. Three factors affecting the coupling have been examined: the volume fraction of unbalanced off-axis layers, the angle of the off-axis layers, and the composition of the fibers (i.e., carbon or glass) used as the axial reinforcement. The results indicate that extension-shear coupling is maximized, with the least loss in axial tensile stiffness, by using carbon fibers oriented 15° from the long axis for approximately two-thirds of the laminate volume (discounting skin layers), with reinforcing carbon fibers oriented axially comprising the remaining one-third of the volume. Finite element analysis of each laminate has been performed to examine first-ply failure. Three failure criteria--maximum stress, maximum strain, and Tsai-Wu--have been compared. Failure predicted by all three criteria proves generally conservative, with the stress-based criteria the most conservative. For laminates that respond nonlinearly to loading, large error is observed in the prediction of failure using maximum strain as the criterion. This report documents the methods and results in two volumes. Volume 1 contains descriptions of the laminates, their fabrication and testing, the methods of analysis, the results, and the conclusions and recommendations. Volume 2 contains a comprehensive summary of the individual test results for all laminates.
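Where classical lamination theory enters the analysis above, the coupling calculation reduces to assembling the in-plane stiffness matrix A and inspecting the A16/A11 ratio. The Python sketch below is a minimal illustration under assumed lamina properties, ply thickness, and layup; it is not the report's analysis or its material data.

```python
import numpy as np

def lamina_Q(E1, E2, G12, nu12):
    """Reduced stiffness matrix of a unidirectional lamina (plane stress)."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return np.array([[E1 / d, nu12 * E2 / d, 0.0],
                     [nu12 * E2 / d, E2 / d, 0.0],
                     [0.0, 0.0, G12]])

def rotate_Q(Q, theta_deg):
    """Transform lamina stiffness to laminate axes (standard CLT rotation)."""
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[m * m, n * n, 2 * m * n],
                  [n * n, m * m, -2 * m * n],
                  [-m * n, m * n, m * m - n * n]])
    R = np.diag([1.0, 1.0, 2.0])  # Reuter matrix for engineering strains
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

# Hypothetical carbon/epoxy lamina properties in GPa -- placeholders only.
Q = lamina_Q(E1=140.0, E2=10.0, G12=5.0, nu12=0.3)

# Two-thirds of the thickness at +15 degrees, one-third axial, echoing the
# trend described above; 0.2 mm ply thickness is assumed.
layup = [15.0] * 8 + [0.0] * 4
t_ply = 0.2e-3
A = sum(rotate_Q(Q, th) * t_ply for th in layup)

print(f"A16/A11 coupling ratio: {A[0, 2] / A[0, 0]:.3f}")
```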
The thermal challenge problem was developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology for assessing the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, uncertainty in physical quantities, and model uncertainties. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to obtain the posterior distribution of model output through Markov chain Monte Carlo sampling are discussed in detail. The use of the BBN to compute a Bayes factor is demonstrated.
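The report's BBN machinery is not reproduced here, but the final step, turning prior and posterior samples of model output into a Bayes factor, can be sketched. This toy Python version uses a Savage-Dickey-style density ratio evaluated at the measured value, which is one common way to realize such a metric; the sample arrays and the measured value are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Stand-ins for MCMC output: model-predicted temperature (arbitrary units)
# sampled from the prior and from the posterior after conditioning on data.
rng = np.random.default_rng(0)
prior_samples = rng.normal(500.0, 40.0, size=20_000)      # hypothetical
posterior_samples = rng.normal(520.0, 15.0, size=20_000)  # hypothetical
measured = 525.0                                          # hypothetical datum

# Bayes factor as the ratio of posterior to prior density at the observation.
bayes_factor = gaussian_kde(posterior_samples)(measured)[0] / \
               gaussian_kde(prior_samples)(measured)[0]
print(f"Bayes factor: {bayes_factor:.2f}  (>1 favors the model)")
```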
The oxidation of zirconium alloys is one of the most studied processes in the nuclear industry. The purpose of this report is to provide, in concise form, a review of the oxidation process of zirconium alloys in the moderate-temperature regime. In the initial "pre-transition" phase, the surface oxide is dense and protective. After the oxide layer has grown to a thickness of 2 to 3 µm, the oxidation process enters the "post-transition" phase, in which the density of the layer decreases and it becomes less protective. A compilation of relevant data suggests that a single expression can be used to describe the post-transition oxidation rate of most zirconium alloys during exposure to oxygen, air, or water vapor: Oxidation Rate = 13.9 g/(cm^2·s·atm^(1/6)) × exp(−1.47 eV/kT) × P^(1/6), with P in atm.
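As a quick numeric check of the expression above, it can be evaluated directly in Python; the 600 K, 1 atm operating point below is an arbitrary example, not a condition singled out by the report.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def oxidation_rate(T_kelvin, P_atm):
    """Post-transition oxidation rate in g/(cm^2 s) from the expression above."""
    return 13.9 * math.exp(-1.47 / (K_B * T_kelvin)) * P_atm ** (1.0 / 6.0)

# Example: 600 K in 1 atm of oxidizing gas (illustrative values only).
print(f"{oxidation_rate(600.0, 1.0):.3e} g/(cm^2 s)")
```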
The film stress of Ni films deposited at near-ambient temperatures from sulfamate electrolytes was studied. The particulate filtering of the electrolyte, a routine industrial practice, becomes an important deposition parameter at lower bath temperatures. At 28 °C, elevated tensile film stress develops at low current densities (<10 mA/cm^2) if the electrolyte is filtered. Filtering at higher current densities has a negligible effect on film stress. A similar though less pronounced trend is observed at 32 °C. Sulfate-based Ni plating baths display a similar film stress sensitivity to filtering, suggesting that this is a general effect for Ni electrodeposition. It is shown that filtering does not significantly change the current efficiency or the pH near the surface during deposition. The observed changes in film stress are thus attributed not to adsorbed hydrogen but instead to the effects of filtering on the formation and concentration of polyborate species, due to the decreased solubility of boric acid at near-ambient temperatures.
A microflame-based detector suite has been developed for sensing a broad range of chemical analytes. This detector combines calorimetry, flame ionization detection (FID), nitrogen-phosphorus detection (NPD), and flame photometric detection (FPD) modes into one convenient platform based on a microcombustor. The microcombustor consists of a micromachined microhotplate with a catalyst or low-work-function material added to its surface. For the NPD mode, a low-work-function material selectively ionizes chemical analytes; for all other modes, a supported catalyst such as platinum/alumina is used. The microcombustor design permits rapid, efficient heating of the deposited film at low power. To perform calorimetric detection of analytes, the change in power required to maintain the resistive microhotplate heater at a constant temperature is measured. For the FID and NPD modes, electrodes are placed around the microcombustor flame zone and an electrometer circuit measures the production of ions. For FPD, the flame zone is optically interrogated for light emission indicative of de-excitation of flame-produced analyte compounds. The calorimetric and FID modes respond generally to all hydrocarbons, while sulfur compounds alarm only in the calorimetric mode, providing speciation. The NPD mode provides 10,000:1 selectivity for nitrogen and phosphorus compounds over hydrocarbons. The FPD can distinguish between sulfur and phosphorus compounds. Importantly, all detection modes can be established on one convenient microcombustor platform; in fact, the calorimetric, FID, and FPD modes can be achieved simultaneously on a single microcombustor. Therefore, it is possible to make a highly versatile chemical detector array with as few as two microcombustor elements. A demonstration of the performance of the microcombustor in each of the detection modes is provided herein.
Functional organic nanostructures such as well-formed tubes or fibers that can easily be fabricated into electronic and photonic devices are needed in many applications. Especially desirable from a national security standpoint are nanostructures that have enhanced sensitivity for the detection of chemical and biological (CB) agents and other environmental stimuli. We recently discovered the first class of highly responsive and adaptive porphyrin-based nanostructures that may satisfy these requirements. These novel porphyrin nanostructures, which are formed by ionic self-assembly of two oppositely charged porphyrins, may function as conductors, semiconductors, or photoconductors, and they have additional properties that make them suitable for device fabrication (e.g., as ultrasensitive colorimetric CB microsensors). Preliminary studies with porphyrin nanotubes have shown that these nanostructures have novel optical and electronic properties, including strong resonant light scattering, quenched fluorescence, and electrical conductivity. In addition, they are photochemically active and capable of light harvesting and photosynthesis; they may also have nonlinear optical properties. Remarkably, the nanotubes and potentially other porphyrin nanostructures are mechanically responsive and adaptive (e.g., the rigidity of the micrometers-long nanotubes is altered by light, ultrasound, or chemicals), and they self-heal upon removal of the environmental stimulus. Given the tremendous degree of structural variation possible in the porphyrin subunits, additional types of nanostructures and greater control over their morphology can be anticipated. Molecular modification also provides a means of controlling their electronic, photonic, and other functional properties. In this work, we have greatly broadened the range of ionic porphyrin nanostructures that can be made and determined the optical and responsivity properties of the nanotubes and other porphyrin nanostructures. We have also explored means for controlling their morphology, size, and placement on surfaces. This research lays the groundwork for the use of these remarkable porphyrin nanostructures in micro- and nanoscale devices by providing a more detailed understanding of their molecular structure and the factors that control their structural, photophysical, and chemical properties.
In this investigation, we conducted a literature study of the best experimental and theoretical data available on THz radiation propagation, from 0.1 to 10 THz, through thin and thick atmospheres. We determined that for thick atmospheres no data exist beyond 450 GHz; for thin atmospheres, data exist from 0.35 to 1.2 THz. We successfully used the FASE code with the HITRAN database to simulate the THz transmission spectrum for Mauna Kea from 0.1 to 2 THz. Lastly, we successfully measured the THz transmission spectra of laboratory atmospheres at relative humidities of 18% and 27%. In general, we found that an increase in the water content of the atmosphere led to a decrease in THz transmission. We identified two potential windows for THz propagation in an Albuquerque atmosphere: the regions from 1.2 to 1.4 THz and from 1.4 to 1.6 THz.
The formation and functions of living materials and organisms are fundamentally different from those of synthetic materials and devices. Synthetic materials tend to have static structures and are not capable of adapting to the functional needs of changing environments. In contrast, living systems utilize energy to create, heal, reconfigure, and dismantle materials in a dynamic, non-equilibrium fashion. The overall goal of the project was to organize and reconfigure functional assemblies of nanoparticles using strategies that mimic those found in living systems. Active assembly of nanostructures was studied using active biomolecules to drive the organization and assembly of nanocomposite materials. In this system, kinesin motor proteins and microtubules were used to direct the transport and interactions of nanoparticles at synthetic interfaces. In addition, the kinesin/microtubule transport system was used to actively assemble nanocomposite materials capable of storing significant elastic energy. Novel biophysical measurement tools were also developed for measuring the collective force generated by kinesin motor proteins, which will provide insight into the mechanical constraints of active assembly processes. Responsive reconfiguration of nanostructures was studied by using active biomolecules to mediate the optical properties of quantum dot (QD) arrays through modulation of inter-particle spacing and the associated energy-transfer interactions. Design rules were developed for kinesin-based transport of a wide range of nanoscale cargo (e.g., nanocrystal quantum dots, micron-sized polymer spheres). Three-dimensional microtubule organizing centers were assembled in which the polar orientation of the microtubules was controlled by a multi-staged assembly process. Overall, a number of enabling technologies were developed over the course of this project that will drive the exploitation of energy-driven processes to regulate the assembly, disassembly, and dynamic reorganization of nanomaterials.
Polymers and fiber-reinforced polymer matrix composites play an important role in many Defense Program applications. Recently, an advanced nonlinear viscoelastic model for polymers was developed and incorporated into ADAGIO, Sandia's SIERRA-based quasi-static analysis code. Standard linear elastic shell and continuum models for fiber-reinforced polymer-matrix composites have also been added to ADAGIO. This report details the use of these models for advanced adhesive joint and composite simulations carried out as part of an Advanced Simulation and Computing Advanced Deployment (ASC AD) project. More specifically, the thermo-mechanical response of an adhesive joint loaded during repeated thermal cycling is simulated, the response of composite rings under internal pressurization is calculated, and the performance of a composite container subjected to internal pressurization, thermal loading, and distributed mechanical loading is determined. Finally, general comparisons between the continuum and shell element approaches for modeling composites in ADAGIO are given.
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes, or "lipid rafts," are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications but also will impact Sandia's effort to develop membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double-tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further elucidate the fundamental membrane organization principles.
Deoxyribonucleic acid (DNA) molecules represent Nature's genetic database, encoding the information necessary for all cellular processes. From a materials engineering perspective, DNA represents a nanoscale scaffold with highly refined structure, stability across a wide range of environmental conditions, and the ability to interact with a range of biomolecules. The ability to mass-manufacture functionalized DNA strands with Angstrom-level resolution through DNA replication technology, however, has not been explored. The long-term goal of the work presented in this report is to exploit DNA and in vitro DNA replication processes to mass-manufacture nanocomposite materials. The specific objectives of this project were to (1) develop methods for replicating DNA strands that incorporate nucleotides with "chemical handles" and (2) demonstrate attachment of nanocrystal quantum dots (nQDs) to functionalized DNA strands. Polymerase chain reaction (PCR) and primer extension methodologies were used to successfully synthesize amine-, thiol-, and biotin-functionalized DNA molecules. Significant variability in the efficiency of modified-nucleotide incorporation was observed and attributed to the intrinsic properties of the modified nucleotides. Noncovalent attachment of streptavidin-coated nQDs to biotin-modified DNA synthesized using the primer extension method was observed by epifluorescence microscopy. Data regarding covalent attachment of nQDs to amine- and thiol-functionalized DNA were generally inconclusive; alternative characterization tools are necessary to fully evaluate these attachment methods. Full realization of this technology may facilitate new approaches to manufacturing materials at the nanoscale. In addition, composite nQD-DNA materials may serve as novel recognition elements in sensor devices or be used as diagnostic tools for forensic analyses. This report summarizes the results obtained over the course of this 1-year project.
Nearly every manufacturing process, and many technologies central to Sandia's business, involve physical processes controlled by interfacial wetting. Interfacial forces, e.g., conjoining/disjoining pressure, electrostatics, and capillary condensation, are ubiquitous and can surpass and even dominate bulk inertial or viscous effects on a continuum level. Moreover, the statics and dynamics of three-phase contact lines exhibit a wide range of complex behavior, such as contact angle hysteresis due to surface roughness, surface reaction, or compositional heterogeneities. These thermodynamically and kinetically driven interactions are essential to the development of new materials and processes. A detailed understanding of the factors controlling wettability in multicomponent systems was developed using computational modeling tools and experimental diagnostics for systems and processes dominated by interfacial effects. Wettability was probed by dynamic advancing and receding contact angle measurements, ellipsometry, and direct determination of the capillary and disjoining forces. Molecular-scale experiments determined the relationships between the fundamental interactions of molecular species with each other and with the substrate. Atomistic simulations studied the equilibrium concentration profiles near the solid and vapor interfaces and tested the basic assumptions used in the continuum approaches. These simulations provide guidance in developing constitutive equations that more accurately take into account the effects of surface-induced phase separation and concentration gradients near the three-phase contact line. The development of these accurate models for dynamic multicomponent wetting allows improvement in science-based engineering of manufacturing processes previously developed through costly trial and error involving material formulation and geometry modification.
The objective of this project is the investigation of compliant membranes for the development of a MicroElectroMechanical Systems (MEMS) microphone using the Sandia Ultraplanar, Multilevel MEMS Technology (SUMMiT V) fabrication process. The microphone is a dual-backplate capacitive microphone utilizing electrostatic force feedback. It consists of a diaphragm and two porous backplates, one on either side of the diaphragm, forming a capacitor between the diaphragm and each backplate. As the incident pressure deflects the diaphragm, the value of each capacitor changes, resulting in an electrical output. Feedback may be used in this device by applying a voltage between the diaphragm and the backplates to balance the incident pressure, keeping the diaphragm stationary. The SUMMiT V fabrication process is unique in that it can meet the fabrication requirements of this project. All five layers of polysilicon are used in the fabrication of this device. The SUMMiT V process has been optimized to provide low-stress mechanical layers that are ideal for the construction of the microphone's diaphragm. The use of chemical mechanical polishing in the SUMMiT V process results in extremely flat structural layers and uniform spacing between the layers, both of which are critical to the successful fabrication of the MEMS microphone. The MEMS capacitive microphone was fabricated at Sandia National Laboratories and post-processed, packaged, and tested at the University of Florida. The microphone demonstrates a flat frequency response, a linear response up to the designed limit, and a sensitivity that is close to the designed value. Future work will focus on characterization of additional devices, extending the frequency response measurements, and investigating the use of other types of interface circuitry.
The natural gas industry seeks inexpensive sensors and instrumentation to rapidly measure gas heating value in widely distributed locations. For gas pipelines, this will improve gas quality during transfer and blending and will expedite accurate financial accounting. Industrial end users will benefit through continuous feedback of physical gas properties to improve combustion efficiency during use. To meet this need, Sandia has developed a natural gas heating value monitoring instrument using existing and modified microfabricated components. The instrument consists of a silicon microfabricated gas chromatography column in conjunction with a catalytic micro-calorimeter sensor. A reference thermal conductivity sensor provides diagnostics and surety. This combination allows continuous calorimetric determination with a 1-minute analysis time and a 1.5-minute cycle time using air as the carrier gas. This system will find application at remote natural gas mining stations, pipeline switching and metering stations, turbine generators, and other industrial user sites. Microfabrication techniques will allow the analytical components to be manufactured in production quantities at a low per-unit cost.
Sodium aluminum hydride, NaAlH₄, has been studied for use as a hydrogen storage material. The effect of Ti, added at a few mol % as a dopant to increase the kinetics of hydrogen sorption, is studied with respect to changes in the lattice structure of the crystal. No Ti substitution is found in the crystal lattice. Electronic structure calculations indicate that the NaAlH₄ and Na₃AlH₆ structures are complex ionic hydrides with Na⁺ cations and AlH₄⁻ and AlH₆³⁻ anions, respectively. Compound formation studies indicate that the primary Ti compound formed when doping the material at 33 at. % is TiAl₃, with Ti-Al compounds likely at lower doping rates. A general study of the sorption kinetics of NaAlH₄ doped with a variety of Ti-halide compounds indicates a uniform response, with similar kinetics for all dopants. NMR multiple-quantum studies of solution-doped samples indicate solvent interaction with the doped alanate. Raman spectroscopy was used to study the lattice dynamics of NaAlH₄ and illustrated the molecular-ionic nature of the lattice as a separation of vibrational modes between AlH₄⁻ anion modes and lattice modes. In-situ Raman measurements indicate that the AlH₄⁻ anion remains stable up to the melting temperature of NaAlH₄, implying that Ti dopants must affect the Al-H bond strength.
The design, simulation, fabrication, packaging, electrical characterization, and testing of a microfabricated cylindrical ion trap (µCIT) array are presented. Several versions of microfabricated cylindrical ion traps were designed and fabricated. The final design of the individual trap array element consisted of two end-cap electrodes, one ring electrode, and a detector plate, fabricated in seven tungsten metal layers by molding tungsten around silicon dioxide (SiO₂) features. Each layer of tungsten was then polished back in damascene fashion. The SiO₂ was removed using a standard release process to realize a free-hung structure. Five different sized traps were fabricated, with inner radii of 1, 1.5, 2, 5, and 10 µm and heights ranging from 3 to 24 µm. Simulations examined the effects of ion and neutral temperature, the pressure and nature of the cooling gas, ion mass, trap voltage and frequency, space charge, fabrication defects, and other parameters on the ability of micrometer-sized traps to store ions. The electrical characteristics of the ion trap arrays were determined. The capacitance was 2-500 pF for the various sized traps and arrays. The resistance was on the order of 1-2 Ω. The inductance of the arrays was calculated to be 10-1500 pH, depending on the trap and array sizes. The ion traps' field emission characteristics were assessed; it was determined that the traps could be operated up to 125 V while maintaining field emission currents below 1 × 10⁻¹⁵ A. The testing focused on using the 5-µm CITs to trap toluene (C₇H₈). Ion ejection from the traps was induced by termination of the RF voltage applied to the ring electrode, and current measured on the collector electrode suggested trapping of ions in 1-10% of the traps. Improvements to the design of the traps were defined to minimize voltage drop to the substrate, thereby increasing the trapping voltage applied to the ring electrode, and to allow for electron injection into, ion ejection from, and optical access to the trapping region.
Piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large-aperture space-based telescopes as adaptive or smart materials. Dimensional adjustments of adaptive polymer films depend on controlled charge deposition. Predicting their long-term performance requires a detailed understanding of the piezoelectric material features, which are expected to suffer from space environmental degradation. Hence, the degradation and performance of PVDF and its copolymers under the various stress environments expected in low Earth orbit have been reviewed and investigated. Experiments were conducted to expose these polymers to elevated temperature, vacuum UV, γ-radiation, and atomic oxygen, and the resulting degradative processes were evaluated. The overall materials performance is governed by a combination of chemical and physical degradation processes. Molecular changes are primarily induced via radiative damage, while physical damage from temperature and atomic oxygen exposure is evident as depoling, loss of orientation, and surface erosion. Combined vacuum UV radiation and atomic oxygen produced the expected surface erosion and pitting rates that determine the lifetime of thin films. Interestingly, the piezo responsiveness of the underlying bulk material remained largely unchanged. This study has delivered a comprehensive framework for material properties and degradation sensitivities, with variations in individual polymer performance clearly apparent. The results provide guidance for material selection, qualification, optimization strategies, feedback for manufacturing and processing, and alternative materials. Further material qualification should be conducted via experiments under actual space conditions.
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications in fluid flow simulation and electronic circuit simulation. Results show that Broyden's method converged in some cases where the Jacobian was inaccurate or could not be computed and Newton's method failed to converge. We identify conditions under which Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes about twice as long as Newton-GMRES to solve general problems because it solves two linear systems at each iteration. We discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
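To make the low-order model concrete, here is a minimal dense-matrix sketch of Broyden's method in Python. It seeds the Jacobian approximation with finite differences and then applies the rank-one secant update; the report's limited-memory variant avoids storing this full matrix, and the 2x2 test system below is invented for illustration.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """Finite-difference Jacobian, used only to seed the approximation."""
    f0 = F(x)
    J = np.empty((len(f0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with Broyden's 'good' rank-one secant update."""
    x = np.asarray(x0, dtype=float)
    J = fd_jacobian(F, x)
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J, -f)       # no new Jacobian evaluations
        x_new, f_new = x + dx, F(x + dx)
        # Secant condition: the updated J must map dx to f_new - f.
        J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)
        x, f = x_new, f_new
    return x

# Invented 2x2 test system: a circle intersected with the line x = y.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
print(broyden(F, [1.0, 2.0]))  # approaches (sqrt(2), sqrt(2))
```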
This work covers three distinct aspects of deformation and fracture during indentation: verification of nanoindentation-induced film fracture in hard-film/soft-substrate systems, including the ability to perform these experiments in harsh environments; methods by which the resulting deformation can be quantified and correlated to computational simulations; and the onset of plasticity during indentation testing. First, nanoindentation was utilized to induce fracture of brittle thin oxide films on compliant substrates. During the indentation, a load is applied and the penetration depth is continuously measured. A sudden discontinuity, indicative of film fracture, was observed in the loading portion of the load-depth curve. The mechanical properties of thermally grown oxide films on various substrates were calculated using two different numerical methods. The first utilized a plate-bending approach, modeling the thin film as an axisymmetric circular plate on a compliant foundation. The second measured the applied energy for fracture; the crack extension force and the applied stress intensity at fracture were then determined from the energy measurements. Second, slip steps form on the free surface around indentations in most crystalline materials when dislocations reach the free surface. Analysis of these slip steps provides information about the deformation taking place in the material. Techniques have now been developed to allow accurate and consistent measurement of slip steps, and the effects of crystal orientation and tip geometry are characterized. These techniques are described and compared to results from dislocation dynamics simulations.
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, and uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify the parameters and processes most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and to perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses. At least one hundred realizations were simulated for each scenario defined in the performance assessment. Conservative values and assumptions were used to define the values and distributions of uncertain input parameters when site data were not available. Results showed that exposure to tritium via the air pathway exceeded the regulatory metric of 10 mrem/year in about 2% of the simulated realizations when the receptor was located at the MWL (continuously exposed to the air directly above the MWL). Simulations showed that peak radon gas fluxes exceeded the design standard of 20 pCi/m²/s in about 3% of the realizations if up to 1% of the containers of sealed radium-226 sources were assumed to completely degrade in the future. If up to 100% of the containers of radium-226 sources were assumed to completely degrade, 30% of the realizations yielded radon surface fluxes that exceeded the design standard. For the groundwater pathway, simulations showed that none of the radionuclides or heavy metals (lead and cadmium) reached the groundwater during the 1,000-year evaluation period. Tetrachloroethylene (PCE) was used as a proxy for other VOCs because of its mobility and its potential to exceed maximum contaminant levels in the groundwater relative to other VOCs. Simulations showed that PCE reached the groundwater, but only 1% of the realizations yielded aquifer concentrations that exceeded the regulatory metric of 5 µg/L. Based on these results, monitoring triggers have been proposed for the air, surface soil, vadose zone, and groundwater at the MWL. Specific triggers include numerical thresholds for radon concentrations in the air, tritium concentrations in surface soil, infiltration through the vadose zone, and uranium and select VOC concentrations in groundwater. The proposed triggers are based on U.S. Environmental Protection Agency and Department of Energy regulatory standards. If a trigger is exceeded, a trigger evaluation process will be initiated, allowing sufficient data to be collected to assess trends and recommend corrective actions if necessary.
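The exceedance percentages quoted above come from counting realizations against a metric; a minimal Python sketch of that bookkeeping follows. The dose distribution here is fabricated for illustration, not the MWL model output.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for performance assessment output: peak tritium air-pathway dose
# (mrem/yr) from each realization; the lognormal parameters are invented.
peak_dose = rng.lognormal(mean=0.0, sigma=1.5, size=1000)

METRIC = 10.0  # regulatory metric, mrem/yr
exceed = np.mean(peak_dose > METRIC) * 100.0
print(f"{exceed:.1f}% of realizations exceed {METRIC} mrem/yr")
```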
LDRD Project 86361 provided support to upgrade the chemical and material spectral signature measurement and detection capabilities of Sandia National Laboratories in the terahertz (THz) portion of the electromagnetic spectrum, which includes frequencies between 0.1 and 10 THz. Under this project, a THz time-domain spectrometer was completed. This instrument measures sample absorption spectra coherently, obtaining both the magnitude and phase of the absorption signal, and has shown an operating signal-to-noise ratio of 10⁴. Additionally, various gas cells and a reflectometer were added to an existing high-resolution THz Fourier transform spectrometer, greatly extending the functionality of that spectrometer. Finally, preliminary efforts to design an integrated THz transceiver based on a quantum cascade laser were begun.
We have developed a new nanotagging technology for detecting and imaging the self-organization of proteins and other components of membranes at nanometer resolution, for the purpose of investigating cell signaling and other membrane-mediated biological processes. We used protein-, lipid-, or drug-bound porphyrin photocatalysts to grow in-situ nanometer-sized metal particles, which reveal the location of the porphyrin-labeled molecules by electron microscopy. We initially used photocatalytic nanotagging to image assembled multi-component proteins and to monitor the distribution of lipids and porphyrin labels in liposomes. For example, by exchanging the heme molecules in hemoproteins with a photocatalytic tin porphyrin, a nanoparticle was grown at each heme site of the protein. The result obtained from electron microscopy for a tagged multi-subunit protein such as hemoglobin is a symmetric constellation of a specific number of nanoparticle tags, four in the case of the hemoglobin tetramer. Methods for covalently linking photocatalytic porphyrin labels to lipids and proteins were also developed to detect and image the self-organization of lipids, protein-protein supercomplexes, and membrane-protein complexes. Procedures for making photocatalytic porphyrin-drug, porphyrin-lipid, and porphyrin-protein hybrids for non-porphyrin-binding proteins and membrane components were pursued, and the first porphyrin-labeled lipids were investigated in liposomal membrane models. Our photocatalytic nanotagging technique may ultimately allow membrane self-organization and cell signaling processes to be imaged in living cells. Fluorescence and plasmonic spectra of the tagged proteins might also provide additional information about protein association and membrane organization. In addition, a porphyrin-aspirin or other porphyrin-NSAID hybrid may be used to grow metal nanotags for the pharmacologically important COX enzymes in membranes, so that the distribution of the protein can be imaged at the nanometer scale.
Several engineering obstacles must be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication when timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work, we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.
The original LDRD proposal was to use a nonlinear diffusion solver to compute estimates for the material temperature that could then be used in an Implicit Monte Carlo (IMC) calculation. At the end of the first year of the project, it was determined that this approach was not going to be effective, partially due to the concept and partially because the radiation diffusion package was not as efficient as it could be. The second and final year of the project focused on improving the robustness and computational efficiency of the radiation diffusion package in ALEGRA. To this end, several new multigroup diffusion methods were developed and implemented in ALEGRA. While these methods have been implemented, their effectiveness in reducing overall simulation run time has not been fully tested. Additionally, a comprehensive suite of verification problems was developed for the diffusion package to ensure that it has been implemented correctly. This process took considerable time but exposed significant bugs in both the previous and new diffusion packages, the linear solver packages, and even the NEVADA Framework's parser. To manage this large suite of problems, a new tool called Tampa was developed; it is a general tool for automating the process of running and analyzing many simulations. Ryan McClarren at the University of Michigan has been developing a spherical harmonics capability for unstructured meshes. While still in the early phases of development, this promises to bridge the gap in accuracy between a full transport solution using IMC and the diffusion approximation.
The late-time phase of electrical breakdown in water is investigated for the purpose of improving understanding of the discharge characteristics. One-dimensional simulations, in addition to a zero-dimensional lumped model, are used to study the spark discharge. The goal is to provide better electrical models for water switches used in the pulse compression section of pulsed power systems. It is found that temperatures in the discharge channel under representative drive conditions, assuming small initial radii from earlier phases of development, reach levels as much as an order of magnitude larger than those used to model discharges in atmospheric gases. This increased temperature, coupled with a conductivity that rises more rapidly with temperature than in air, results in a lower resistance characteristic than in preceding models. A simple modification is proposed for the existing model to enable the approximate calculation of channel temperature and to incorporate the resulting conductivity increase into the electrical circuit for the discharge channel. Comparisons are made between the theoretical predictions and recent experiments at Sandia. Although present and past experiments indicated that preceding late-time channel models overestimated channel resistance, the calculations in this report seem to underestimate the resistance relative to recent experiments. Some possible reasons for this discrepancy are discussed.
This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn.
We present an active method for mixing fluid streams in microchannels at low Reynolds number with no dead volume. To overcome diffusion-limited mixing in microchannels, surface acoustic wave streaming offers an extremely effective approach to rapidly homogenize fluids. This is a pivotal improvement over mixers based on complex 3D microchannels, which have significant dead volume resulting in trapping or loss of sample. Our micromixer is integrable and highly adaptable for use within existing microfluidic devices. Surface acoustic wave devices fabricated on 128° YX LiNbO3 permitted rapid mixing of flow streams, as evidenced by fluorescence microscopy. Longitudinal waves created at the solid-liquid interface were capable of inducing strong nonlinear gradients within the bulk fluid. In the highly laminar regime (Re = 2), devices achieved over 93% mixing efficacy in less than a second. Micro-particle image velocimetry was used to determine the mixing behavior in the microchannels and indicated that the liquid velocity can be controlled by varying the input power. Fluid velocities in excess of 3 cm/s were measured in the main excitation region at low power levels (2.8 mW). We believe that this technology will be pivotal in the development and advancement of microfluidic devices and applications.
This report describes a methodology for estimating the power and energy capacities of electricity energy storage systems that can be used to defer costly upgrades to fully overloaded, or nearly overloaded, transmission and distribution (T&D) nodes. This "sizing" methodology may be used to estimate the amount of storage needed so that a T&D upgrade may be deferred for one year, and the same methodology can be used to estimate the characteristics of storage needed for subsequent years of deferral.
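The report's sizing procedure is not reproduced here, but the core idea, that the storage power rating must cover the load above the node's carrying capacity and the energy rating must cover the integral of that overload, can be sketched in Python. The hourly load profile and capacity below are fabricated for illustration.

```python
import numpy as np

# Fabricated hourly load (MW) at a nearly overloaded T&D node for one peak day.
load = np.array([62, 60, 58, 57, 58, 61, 67, 74, 80, 85, 88, 90,
                 91, 92, 93, 94, 95, 96, 94, 90, 84, 77, 70, 65], dtype=float)
CAPACITY = 90.0  # assumed node carrying capacity, MW

overload = np.clip(load - CAPACITY, 0.0, None)
power_mw = overload.max()     # storage discharge rating needed
energy_mwh = overload.sum()   # hourly samples integrate directly to MWh

print(f"Storage sized for deferral: {power_mw:.1f} MW, {energy_mwh:.1f} MWh")
```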
The MATLAB language has become a standard for rapid prototyping throughout all disciplines of engineering because the environment is easy to understand and use. Many of the basic functions included in MATLAB are those operations needed to carry out larger algorithms such as the chirp z-transform spectral zoom. These functions include, but are not limited to, mathematical operators, logical operators, array indexing, and the Fast Fourier Transform (FFT). However, despite its ease of use, MATLAB's technical computing language is interpreted and thus is not always capable of the memory management and performance of a compiled language. There are, however, several optimizations that can be made within the chirp z-transform spectral zoom algorithm itself, and also to the MATLAB implementation, to take full advantage of the computing environment, lower processing time, and improve memory usage. To that end, this document's purpose is twofold: first, to demonstrate how to perform a chirp z-transform spectral zoom, along with an optimization within the algorithm that improves performance and memory usage; and second, to demonstrate a minor MATLAB language usage technique that can reduce overhead memory costs and improve performance.
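For readers without MATLAB, the operation being optimized can be sketched language-neutrally. The Python/NumPy fragment below evaluates the discrete-time Fourier transform directly over a zoomed frequency band, which is what the chirp z-transform spectral zoom computes; the CZT's FFT-based Bluestein factorization is the fast version the document discusses. The signal parameters are invented.

```python
import numpy as np

def spectral_zoom(x, fs, f1, f2, m):
    """Evaluate the DTFT of x at m points between f1 and f2 Hz (direct
    O(N*m) form of the chirp z-transform spectral zoom)."""
    n = np.arange(len(x))
    freqs = np.linspace(f1, f2, m)
    # Each row of the kernel evaluates x at one zoomed frequency point.
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ x

# Example: resolve two tones 1 Hz apart by zooming into the 95-105 Hz band.
fs = 1000.0
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 99.5 * t) + np.sin(2 * np.pi * 100.5 * t)
freqs, X = spectral_zoom(x, fs, 95.0, 105.0, m=512)
print(freqs[np.abs(X).argmax()])  # peak lands near one of the tones
```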
This report describes the research accomplishments achieved under the LDRD project "Leaky-mode VCSELs for photonic logic circuits." Leaky-mode vertical-cavity surface-emitting lasers (VCSELs) offer new possibilities for integrating microcavity lasers to create optical microsystems. A leaky-mode VCSEL output-couples light laterally, in the plane of the semiconductor wafer, which allows the light to interact with adjacent lasers, modulators, and detectors on the same wafer. The fabrication of leaky-mode VCSELs based on effective index modification was proposed and demonstrated at Sandia in 1999 but was not adequately developed for use in applications. The aim of this LDRD has been to advance the design and fabrication of leaky-mode VCSELs to the point where initial applications can be attempted. In the first and second years of this LDRD, we concentrated on overcoming previous difficulties in the epitaxial growth and fabrication of these advanced VCSELs. In the third year, we focused on applications of leaky-mode VCSELs, such as all-optical processing circuits based on gain quenching.
With the increasing reliance on cyber technology to operate and control physical security system components, methods are needed to assess and model the interactions between the cyber system and the physical security system, in order to understand the effects of cyber technology on overall security system effectiveness. This paper evaluates two methodologies for their applicability to the combined cyber and physical security problem. The comparison metrics include the probabilities of detection (P_D), interruption (P_I), and neutralization (P_N), which contribute to calculating the probability of system effectiveness (P_E): the probability that the system can thwart an adversary attack. P_E is well understood in practical applications of physical security, but when the cyber security component is added, system behavior becomes more complex and difficult to model. This paper examines two approaches, the Bounding Analysis Approach (BAA) and the Expected Value Approach (EVA), to determine their applicability to the combined physical and cyber security issue. These methods were assessed for a variety of security system characteristics to determine whether reasonable security decisions could be made based on their results. The assessments provided insight into an adversary's behavior depending on which part of the physical security system is cyber-controlled. Analysis showed that the BAA is better suited to facility analyses than the EVA because it can identify and model an adversary's most desirable attack path.
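As a worked example of the effectiveness arithmetic referenced above: in the usual physical-security formulation, P_E is the product of the interruption and neutralization probabilities. The Python fragment below uses invented numbers, and its simple detection-times-assessment factorization of P_I is an illustrative simplification; real analyses derive P_I from detection points and adversary timelines along each path.

```python
# Illustrative numbers only; not values from the paper.
P_D = 0.90            # probability a sensor detects the adversary
P_ASSESS = 0.95       # probability the alarm is assessed in time
P_I = P_D * P_ASSESS  # interruption needs timely detection AND assessment
P_N = 0.85            # probability the response force neutralizes

P_E = P_I * P_N       # probability the system thwarts the attack
print(f"P_I = {P_I:.3f}, P_E = {P_E:.3f}")
```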
In order to optically vary the magnification of an imaging system, continuous mechanical zoom lenses require multiple optical elements and use fine mechanical motion to precisely adjust the separations between individual or groups of lenses. By incorporating active elements into the optical design, we have designed and demonstrated imaging systems that are capable of variable optical magnification with no macroscopic moving parts. Changing the effective focal length and magnification of an imaging system can be accomplished by adeptly positioning two or more active optics in the optical design and appropriately adjusting the optical power of those elements. In this application, the active optics (e.g., liquid crystal spatial light modulators or deformable mirrors) serve as variable focal-length lenses. Unfortunately, the range over which currently available devices can operate (i.e., their dynamic range) is relatively small. Therefore, the key to this concept is to create large changes in the effective focal length of the system with very small changes in the focal lengths of individual elements by leveraging the optical power of conventional optical elements surrounding the active optics. By appropriately designing the optical system, these variable focal-length lenses can provide the flexibility necessary to change the overall system focal length, and therefore magnification, that is normally accomplished with mechanical motion in conventional zoom lenses.
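The leverage described above can be seen in the two-thin-lens combination formula 1/f = 1/f1 + 1/f2 − d/(f1·f2): the separation d between a conventional lens and a weakly powered active element sets both the magnitude and the sign of the system focal length response. The small Python illustration below uses invented numbers, not a design from this work.

```python
def efl(f1, f2, d):
    """Effective focal length of two thin lenses separated by d (mm)."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

f1 = 50.0  # conventional fixed lens, illustrative value
for d in (10.0, 40.0, 60.0):
    # Active element swings from essentially no power (f ~ infinite)
    # to a weak 500 mm focal length; note how d scales the EFL shift.
    change = efl(f1, 500.0, d) - efl(f1, 1e12, d)
    print(f"d = {d:4.0f} mm: system EFL shifts by {change:+.2f} mm")
```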
The SNL/CA Cultural Resources Management Plan satisfies the site's Environmental Management System requirement to promote long-term stewardship of cultural resources. The plan summarizes the cultural and historical setting of the site, identifies existing procedures and processes that support protection and preservation of resources, and outlines actions that would be initiated if cultural resources were discovered onsite in the future.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We present the results of a one year LDRD program that has focused on evaluating the use of newly developed deep ultraviolet LEDs in water purification. We describe our development efforts that have produced an LED-based water exposure set-up and enumerate the advances that have been made in deep UV LED performance throughout the project. The results of E. coli inactivation with 270-295 nm LEDs are presented along with an assessment of the potential for applying deep ultraviolet LED-based water purification to mobile point-of-use applications as well as to rural and international environments where the benefits of photovoltaic-powered systems can be realized.
This report describes an investigation of the piezoelectric field in strained bulk GaAs. The bound charge distribution is calculated and suitable electrode configurations are proposed for (1) uniaxial and (2) biaxial strain. The screening of the piezoelectric field is studied for different impurity concentrations and sample lengths. Electric current due to the piezoelectric field is calculated for the cases of (1) fixed strain and (2) strain varying in time at a constant rate.
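For orientation, zinc-blende symmetry leaves GaAs with a single independent piezoelectric constant, e{sub 14} (a commonly quoted value is about -0.16 C/m{sup 2}), so only shear strains generate a piezoelectric polarization; the sketch below, with assumed strain values, illustrates the bookkeeping behind the bound charge calculation:

    # Hypothetical numbers for strained bulk GaAs (zinc-blende): a single
    # piezoelectric constant e14 couples only to shear strain, giving
    # P_x = 2*e14*eps_yz, P_y = 2*e14*eps_xz, P_z = 2*e14*eps_xy.
    E14 = -0.16                      # C/m^2, commonly quoted for GaAs

    def polarization(eps_yz, eps_xz, eps_xy):
        """Piezoelectric polarization vector (C/m^2) from tensor shear strains."""
        return (2 * E14 * eps_yz, 2 * E14 * eps_xz, 2 * E14 * eps_xy)

    # A 0.1% shear strain in the yz plane polarizes along x:
    print(polarization(1e-3, 0.0, 0.0))     # ~(-3.2e-4, 0.0, 0.0)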
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
High-purity AlPt thin films prepared by self-propagating, high-temperature combustion synthesis show evidence for a new rhombohedral phase. Sputter-deposited Al/Pt multilayers of various designs are reacted at different rates in air and in vacuum, and each forms a new trigonal/hexagonal aluminide phase with unit cell parameters a = 15.571(8) {angstrom}, c = 5.304(1) {angstrom}, space group R-3 (148), and Z = 39 formula units per unit cell. The lattice is isostructural to the AlPd R-3 lattice reported by Matkovic and Schubert (Matkovic, 1977). Reacted films have a random in-plane crystallographic texture, a modest out-of-plane (001) texture, and equiaxed grains with dimensions on the order of the film thickness.
Abstract not provided.
As information systems become increasingly complex and pervasive, they become inextricably intertwined with the critical infrastructure of national, public, and private organizations. The problem of recognizing and evaluating threats against these complex, heterogeneous networks of cyber and physical components is a difficult one, yet a solution is vital to ensuring security. In this paper we investigate profile-based anomaly detection techniques that can be used to address this problem. We focus primarily on the area of network anomaly detection, but the approach could be extended to other problem domains. We investigate using several data analysis techniques to create profiles of network hosts and perform anomaly detection using those profiles. The ''profiles'' reduce multi-dimensional vectors representing ''normal behavior'' into fewer dimensions, thus allowing pattern and cluster discovery. New events are compared against the profiles, producing a quantitative measure of how ''anomalous'' the event is. Most network intrusion detection systems (IDSs) detect malicious behavior by searching for known patterns in the network traffic. This approach suffers from several weaknesses, including a lack of generalizability, an inability to detect stealthy or novel attacks, and a lack of flexibility regarding alarm thresholds. Our research focuses on enhancing current IDS capabilities by addressing some of these shortcomings. We identify and evaluate promising techniques for data mining and machine learning. The algorithms are ''trained'' by providing them with a series of data points from ''normal'' network traffic. A successful algorithm can be trained automatically and efficiently, will have a low error rate (low false alarm and miss rates), and will be able to identify anomalies in ''pseudo real-time'' (i.e., while the intrusion is still in progress, rather than after the fact). We also build a prototype anomaly detection tool that demonstrates how the techniques might be integrated into an operational intrusion detection framework.
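The following minimal sketch (not the report's algorithms) illustrates the profile idea with principal component analysis: ''normal'' feature vectors define a low-dimensional subspace, and a new event's distance from that subspace serves as its anomaly score:

    # Minimal profile-based anomaly scoring sketch: reduce "normal" vectors
    # to a PCA subspace, then score events by reconstruction error.
    import numpy as np

    rng = np.random.default_rng(0)
    normal = rng.normal(size=(500, 12))             # training vectors: normal traffic features
    mean = normal.mean(axis=0)
    _, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
    basis = vt[:3]                                  # keep 3 principal directions as the "profile"

    def anomaly_score(x):
        """Distance between an event and its projection onto the profile subspace."""
        centered = x - mean
        recon = (centered @ basis.T) @ basis
        return np.linalg.norm(centered - recon)

    typical = rng.normal(size=12)
    odd = typical + 8.0 * rng.normal(size=12)       # a deliberately deviant event
    print(anomaly_score(typical), anomaly_score(odd))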
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Over the last decade, dairy farms in New Mexico have become an important component of the economy of many rural ranching and farming communities. Dairy operations are water intensive and use groundwater that would otherwise be used for irrigation purposes. Most dairies reuse their process/green water three times and utilize lined lagoons for temporary storage of green water. Leakage of water from lagoons can pose a risk to groundwater quality. Groundwater resource protection infrastructures at dairies are regulated by the New Mexico Environment Department (NMED), which currently relies on monitoring wells installed in the saturated zone for detecting leakage through waste water lagoon liners. Here we present a proposal to monitor the unsaturated zone beneath the lagoons with soil water solution samplers to provide early detection of leaking liners. Early detection of leaking liners along with rapid repair can minimize contamination of aquifers and reduce dairy liability for aquifer remediation. Additionally, acceptance of vadose zone monitoring as an NMED requirement over saturated zone monitoring would very likely significantly reduce dairy startup and expansion costs. Acknowledgment: Funding for this project was provided by the Sandia National Laboratories Small Business Assistance Program.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report summarizes results generated in a 5-year cable-aging program that constituted part of the Nuclear Energy Plant Optimization (NEPO) program, an effort cosponsored by the U.S. Department of Energy (DOE) and the Electric Power Research Institute (EPRI). The NEPO cable-aging effort concentrated on two important issues: the development of better lifetime prediction methods and the development and testing of novel cable condition-monitoring (CM) techniques. To address improved life prediction methods, we first describe the use of time-temperature superposition principles, indicating how this approach improves the testing of the Arrhenius model by utilizing all of the experimentally generated data instead of a few selected and processed data points. Although reasonable superposition is often found, we show several cases where non-superposition is evident, a situation that violates the constant acceleration assumption normally used in accelerated aging studies. Long-term aging results over extended temperature ranges allow us to show that curvature in Arrhenius plots for elongation is a common occurrence. In all cases the curvature results in a lowering of the Arrhenius activation energy at lower temperatures, implying that typical extrapolation of high-temperature results overestimates material lifetimes. The long-term results also allow us to test the significance of extrapolating through the crystalline melting point of semi-crystalline materials. By utilizing ultrasensitive oxygen consumption (UOC) measurements, we show that it is possible to probe the low-temperature extrapolation region normally inaccessible to conventional accelerated aging studies. This allows quantitative testing of the often-used Arrhenius extrapolation assumption. Such testing indicates that many materials again show evidence of ''downward'' curvature (E{sub a} values drop as the aging temperature is lowered), consistent with the limited elongation results and many literature results. It is also shown how the UOC approach allows the probing of temperatures that cross through the crystalline melting point region of semi-crystalline materials such as XLPO and EPR cable insulations. New results on combined-environment aging of neoprene and hypalon cable jacketing materials are presented and offer additional evidence in support of our time-temperature-dose rate (t-T-DR) superposition approach, which had been used successfully in the past for such situations.
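For readers unfamiliar with the extrapolation at issue, the short sketch below (illustrative activation energies, not the report's data) shows how a drop in E{sub a} at lower temperatures shrinks the acceleration factor inferred from a high-temperature fit, i.e., why naive Arrhenius extrapolation over-predicts service life:

    # Hedged illustration of Arrhenius extrapolation: lifetime ~ A*exp(Ea/kT).
    # If Ea effectively drops at low temperature ("downward curvature"), a
    # high-temperature fit over-predicts life at the service temperature.
    import math

    K_B = 8.617e-5                                 # Boltzmann constant, eV/K

    def lifetime(T, Ea, A=1e-8):
        """Arrhenius lifetime (arbitrary units); Ea in eV, T in kelvin."""
        return A * math.exp(Ea / (K_B * T))

    T_accel, T_use = 393.0, 323.0                  # 120 C aging oven vs. 50 C service
    Ea_fit, Ea_low = 1.0, 0.8                      # invented high-T fit vs. reduced low-T value
    af_fit = lifetime(T_use, Ea_fit) / lifetime(T_accel, Ea_fit)
    af_true = lifetime(T_use, Ea_low) / lifetime(T_accel, Ea_low)
    print(f"extrapolated acceleration factor: {af_fit:.0f}")
    print(f"with reduced low-T Ea:            {af_true:.0f}  (shorter predicted life)")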
This research continues previous efforts to re-focus the question of penetrability away from the behavior of the penetrator itself and toward understanding the dynamic, possibly strain-rate dependent, behavior of the affected materials. A modified split Hopkinson pressure bar technique is prototyped to determine the value of reproducing, within a laboratory setting, the stress states and mechanical responses of geomaterials observed in actual penetrator tests. Conceptually, this technique simulates the passage of the penetrator surface past any fixed point in the penetrator trajectory by allowing a controlled stress-time function to be transmitted into a sample, thereby mimicking the 1D radial projection inherent to analyses of the cavity expansion problem. Test results from a suite of weak (unconfined compressive strength, or UCS, of 22 MPa) concrete samples, with incident strain rates of 100-250 s{sup -1}, show that the complex mechanical response includes both plastic and anelastic wave propagation and is critically dependent on incident particle velocity and saturation state. For instance, examination of the transmitted stress-time data and post-test volumetric measurements of pulverized material provide independent estimates of the plasticized zone length (1-2 cm) formed for an incident particle velocity of {approx}16.7 m/s. The results also shed light on the elastic or energy propagation property changes that occur in the concrete. For example, the pre- and post-test zero-stress elastic wave propagation velocities show that the Young's modulus drops from {approx}19 GPa to <8 GPa for material within the first centimeter from the plastic transition front, while the Young's modulus of the dynamically confined, axially-stressed (in the 6-18 MPa range) plasticized material drops to 0.5-0.6 GPa. The data also suggest that the critical particle velocity for formation of a plastic zone in the weak concrete is 13-15 m/s, with increased saturation tending to increase the critical particle velocity limit. Overall, the data produced from these experiments suggest that further pursuit of this approach is warranted, not only for penetration research but also as a potential new method for dynamic testing of materials.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in Optics Express.
Abstract not provided.
Abstract not provided.
Wireless computer networks are increasing exponentially around the world. They are being implemented in both the unlicensed radio frequency (RF) spectrum (IEEE 802.11a/b/g) and the licensed spectrum (e.g., Firetide [1] and Motorola Canopy [2]). Wireless networks operating in the unlicensed spectrum are by far the most popular wireless computer networks in existence. The open (i.e., non-proprietary) nature of the IEEE 802.11 protocols and the availability of ''free'' RF spectrum have encouraged many producers of enterprise and common off-the-shelf (COTS) computer networking equipment to jump into the wireless arena. Competition between these companies has driven down the price of 802.11 wireless networking equipment and has improved user experiences with such equipment. The end result has been an increased adoption of the equipment by businesses and consumers, the establishment of the Wi-Fi Alliance [3], and widespread use of the Alliance's ''Wi-Fi'' moniker to describe these networks. Consumers use 802.11 equipment at home to reduce the burden of running wires in existing construction, facilitate the sharing of broadband Internet services with roommates or neighbors, and increase their range of ''connectedness''. Private businesses and government entities (at all levels) are deploying wireless networks to reduce wiring costs, increase employee mobility, enable non-employees to access the Internet, and create an added revenue stream for their existing business models (coffee houses, airports, hotels, etc.). Municipalities (Philadelphia; San Francisco; Grand Haven, MI) are deploying wireless networks so they can bring broadband Internet access to places lacking such access; offer limited-speed broadband access to impoverished communities; offer broadband in places, such as marinas and state parks, that are passed over by traditional broadband providers; and provide themselves with higher quality, more complete network coverage for use by emergency responders and other municipal agencies. In short, these Wi-Fi networks are being deployed everywhere. Much thought has been and is being put into cost-benefit analyses of wired vs. wireless networks and into issues such as how to effectively cover an office building or municipality, how to efficiently manage a large network of wireless access points (APs), and how to save money by replacing an Internet service provider (ISP) with 802.11 technology. In comparison, very little thought and money are being focused on wireless security and monitoring for security purposes.
Abstract not provided.
Traditional polar format image formation for Synthetic Aperture Radar (SAR) requires a large amount of processing power and memory to accomplish in real time. These requirements can preclude the use of interpreted language environments such as MATLAB. However, with trapezoidal aperture phase history collection and changes to the traditional polar format algorithm, certain optimizations make MATLAB a possible tool for image formation. This document's purpose is thus two-fold. First, it outlines a change to the existing polar format MATLAB implementation utilizing the Chirp Z-Transform that improves performance and memory usage, achieving near real-time results for smaller apertures. Second, it adds two new image formation options that perform a more traditional interpolation-style image formation. These options allow continued exploration of possible interpolation methods for image formation, and some preliminary results comparing image quality are given.
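As a hedged illustration of the Chirp Z-Transform's role (a generic zoom-DFT demonstration, not Sandia's implementation), the snippet below evaluates one line of synthetic phase history on a rescaled frequency grid in a single pass, the operation that can replace explicit interpolation:

    # CZT "zoom" evaluation: the z-transform is computed on an arbitrary
    # contour, so a line of data can be rescaled without interpolation.
    import numpy as np
    from scipy.signal import czt            # available in SciPy >= 1.8

    n = 256
    x = np.exp(2j * np.pi * 0.123 * np.arange(n))   # synthetic phase-history line
    scale = 0.8                                     # assumed per-line rescaling factor
    w = np.exp(-2j * np.pi * scale / n)             # shrunken frequency step
    X = czt(x, m=n, w=w, a=1 + 0j)                  # DFT on the rescaled grid, one pass
    print(np.abs(X).argmax())                       # peak near bin 0.123*n/scale ~ 39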
This SAND report provides the technical progress through April 2005 of the Sandia-led project, ''Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling'', funded by the DOE Office of Science Genomics:GTL Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO{sub 2} are important terms in the global environmental response to anthropogenic atmospheric inputs of CO{sub 2} and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex and more heterogeneous, and that must be coupled to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort.
Abstract not provided.
We report on successful attempts to trigger high-voltage pressurized gas switches by transporting a laser beam through 1 M{Omega}-cm deionized water. We have investigated Nd:YAG laser triggering of a 6 MV, SF{sub 6}-insulated gas switch for a range of laser and switch parameters. Laser pulses at a wavelength of 532 nm with nominal lengths of 10 ns full width half maximum (FWHM) were used to trigger the switch. The laser beam was transported through a 67 cm-long cell of 1 M{Omega}-cm deionized water constructed with antireflection UV-grade fused silica windows, and then focused to form a breakdown arc in the gas between the switch electrodes. Less than 10 ns jitter in the operation of the switch was obtained for laser pulse energies of 80-110 mJ. Breakdown arcs more than 35 mm long were produced by using a 70 cm focusing optic.
Abstract not provided.
Abstract not provided.
Proposed for publication.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report summarizes research performed at Sandia National Laboratories (SNL) in collaboration with the Environmental Protection Agency (EPA) to assess microarray quality on arrays from two platforms of interest to the EPA. Custom microarrays from two novel, commercially produced array platforms were imaged with SNL's unique hyperspectral imaging technology, and multivariate data analysis was performed to investigate sources of emission on the arrays. No extraneous sources of emission were evident in any of the array areas scanned. This led to the conclusion that either of these array platforms could produce high-quality, reliable microarray data for the EPA toxicology programs. Hyperspectral imaging results are presented, and recommendations for microarray analyses using these platforms are detailed within the report.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Raman spectroscopic imaging is a powerful technique for visualizing chemical differences within a variety of samples based on the interaction of a substance's molecular vibrations with laser light. While Raman imaging can provide a unique view of samples such as residual stress within silicon devices, chemical degradation, material aging, and sample heterogeneity, the Raman scattering process is often weak and thus requires very sensitive collection optics and detectors. Many commercial instruments (including ones owned here at Sandia National Laboratories) generate Raman images by raster scanning a point-focused laser beam across a sample, a process that can expose a sample to extreme levels of laser light and requires lengthy acquisition times. Our previous research efforts have led to the development of a state-of-the-art two-dimensional hyperspectral imager for fluorescence imaging applications such as microarray scanning. This report details the design, integration, and characterization of a line-scan Raman imaging module added to this efficient hyperspectral fluorescence microscope. The original hyperspectral fluorescence instrument serves as the framework for excitation and sample manipulation for the Raman imaging system, while a more appropriate axial transmissive Raman imaging spectrometer and detector are utilized for collection of the Raman scatter. The result is a unique and flexible dual-modality fluorescence and Raman imaging system capable of high-speed imaging at high spatial and spectral resolutions. Care was taken throughout the design and integration process not to hinder any of the fluorescence imaging capabilities. For example, an operator can switch between the fluorescence and Raman modalities without need for extensive optical realignment. The instrument performance has been characterized and sample data are presented.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Advances in the Astronautical Sciences
Physical mechanisms responsible for single-event effects are reviewed, concentrating on silicon MOS devices and digital integrated circuits. A brief historical overview of single-event effects in space and terrestrial systems is given. Single-event upset mechanisms in SRAMs are briefly described, as is the initiation of single-event latchup in CMOS structures. Techniques for mitigating single-event effects are described, including the impact of technology trends on mitigation efficacy. Future challenges are briefly explored.
Proceedings of SPIE - The International Society for Optical Engineering
Sandia National Laboratories designs and builds Synthetic Aperture Radar (SAR) systems capable of forming high-quality, exceptionally fine-resolution images. During the spring of 2004, a series of test flights was completed with a Ka-band testbed SAR on Sandia's DeHavilland DHC-6 Twin Otter aircraft. A large data set was collected, including real-time fine-resolution images of a variety of target scenes. This paper offers a sampling of high-quality images representative of the output of Sandia's Ka-band testbed radar, with resolutions as fine as 4 inches. Images are annotated with descriptions of collection geometries and other relevant image parameters.
Proceedings of SPIE - The International Society for Optical Engineering
Airborne synthetic aperture radar (SAR) imaging systems have reached a degree of accuracy and sophistication that requires the validity of the free-space approximation for radio-wave propagation to be questioned. Based on the thin-lens approximation, a closed-form model for the focal length of a gravity wave-modulated refractive-index interface in the lower troposphere is developed. The model corroborates the suggestion that mesoscale, quasi-deterministic variations of the clear-air radio refractive-index field can cause diffraction patterns on the ground that are consistent with reflectivity artifacts occasionally seen in SAR images, particularly in those collected at long ranges, short wavelengths, and small grazing angles.
Proceedings of SPIE - The International Society for Optical Engineering
An unattended ground sensor (UGS) that attempts to perform target identification without providing some corresponding estimate of confidence level is of limited utility. In this context, a confidence level is a measure of probability that the detected vehicle is of a particular target class. Many identification methods attempt to match features of a detected vehicle to each of a set of target templates. Each template is formed empirically from features collected from vehicles known to be members of the particular target class. The nontarget class is inherent in this formulation and must be addressed in providing a confidence level. Often, it is difficult to adequately characterize the nontarget class empirically by feature collection, so assumptions must be made about the nontarget class. An analyst tasked with deciding how to use the confidence level of the classifier decision should have an accurate understanding of the meaning of the confidence level given. This paper compares several definitions of confidence level by considering the assumptions that are made in each, how these assumptions affect the meaning, and giving examples of implementing them in a practical acoustic UGS.
Physical Review B - Condensed Matter and Materials Physics
Heteroepitaxial growth of GeSi alloys on Si (001) under deposition conditions that partially limit surface mobility leads to an unusual form of strain-induced surface morphological evolution. We discuss a kinetic growth regime wherein pits form in a thick metastable wetting layer and, with additional deposition, evolve into a quantum dot molecule: a symmetric assembly of four quantum dots bound by the central pit. We discuss the size selection and scaling of quantum dot molecules. We then examine the key mechanism, preferred pit formation, in detail, using ex situ atomic force microscopy, in situ scanning tunneling microscopy, and kinetic Monte Carlo simulations. A picture emerges wherein localized pits appear to arise from a damped instability. When pits are annealed, they extend into an array of highly anisotropic surface grooves via a one-dimensional growth instability. Subsequent deposition on this grooved film results in a fascinating structure where compact quantum dots and molecules, as well as highly ramified quantum wires, are all simultaneously self-assembled. © 2005 The American Physical Society.
Physical Review Letters
We track individual twin boundaries in Ag films on Ru(0001) using low-energy electron microscopy. The twin boundaries, which separate film regions whose close-packed planes are stacked differently, move readily during film growth but relatively little during annealing. The growth-driven motion of twin boundaries occurs as film steps advance across the surface: as a new atomic Ag layer reaches an fcc twin boundary, the advancing step edge carries along the boundary. This coupling of the microstructural defect (twin boundary) and the surface step during growth can produce film regions over 10 μm wide that are twin free. © 2005 The American Physical Society.
Proceedings of SPIE - The International Society for Optical Engineering
The shape control of thin, flexible structures has been studied primarily for edge-supported thin plates. For applications involving reconfigurable apertures such as membrane optics and active RF surfaces, corner-supported configurations may prove more applicable. Corner-supported adaptive structures allow for parabolic geometries, greater flexibility, and larger achievable deflections when compared to edge-supported geometries under similar actuation conditions. Preliminary models have been developed for corner-supported thin plates actuated by isotropic piezoelectric actuators. However, typical piezoelectric materials are known to be orthotropic. This paper extends a previously developed isotropic model for a corner-supported, thin, rectangular bimorph to a more general orthotropic model for a bimorph actuated by a two-dimensional array of segmented PVDF laminates. First, a model determining the deflected shape of an orthotropic laminate for a given distribution of voltages over the actuator array is derived. Second, symmetric actuation of a bimorph consisting of orthotropic material is simulated using orthogonally oriented laminae. Finally, the results of the model are shown to agree well with layered-shell finite element simulations for simple and complex voltage distributions.
Proposed for publication in Nanoletters.
Abstract not provided.
Journal of the American Ceramic Society
The ability to predict and control organic decomposition of a material under arbitrary thermal treatments is one of the main objectives of thermogravimetric studies. The development of this ability provides significant potential to ensure reliability and reproducibility for a given processing method and can be used in planning optimized thermal treatment strategies. In this report, master sintering curve theory is successfully extended to similar kinetically controlled phenomena. The theory has been applied to organic decomposition reaction kinetics to develop a master organic decomposition curve. The fundamental kinetics are assumed to be governed by an Arrhenius-type reaction rate, making master sintering and decomposition curves analogous to one another. The formulation and construction of a master decomposition curve are given in this paper. Simultaneous thermogravimetric and differential thermal analysis of a low-temperature co-fire glass/ceramic dielectric tape (Dupont 951 Green Tape™) is analyzed and used to demonstrate this new concept. The results reveal two independent organic decomposition reactions, the first occurring at ≈245°C and the second at ≈365°C. The analysis is used to produce a master decomposition curve and to calculate the activation energies for these reactions, at 86 ± 6 and 142 ± 4 kJ/mol, respectively. In addition, the weight loss of product and the rate of decomposition can be predicted under varying thermal paths (time-temperature trajectories) following a minimal set of preliminary experiments. © 2005 The American Ceramic Society.
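A minimal sketch of the master-curve bookkeeping, with an invented thermal path and an activation energy near the report's second reaction, is given below; under the Arrhenius assumption, weight loss is a unique function of the work integral Θ, so computing Θ for any planned time-temperature trajectory locates the material on the master decomposition curve:

    # Master-curve work integral Theta(t) = integral exp(-Ea/(R*T(t'))) dt',
    # evaluated by the trapezoid rule for a sampled time-temperature path.
    import numpy as np

    R, EA = 8.314, 142e3                    # J/(mol K); Ea near the ~142 kJ/mol reaction

    def theta(t_s, T_K):
        """Work integral for a sampled T(t) trajectory (times in s, temps in K)."""
        t = np.asarray(t_s, dtype=float)
        rate = np.exp(-EA / (R * np.asarray(T_K, dtype=float)))
        return float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t)))

    t = np.linspace(0.0, 7200.0, 1000)      # a two-hour schedule, in seconds
    ramp = 300.0 + (340.0 / 7200.0) * t     # invented 300 K -> 640 K linear ramp
    print(f"Theta for this ramp: {theta(t, ramp):.3e}")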
This report describes the test and evaluation (T&E) methods by which the Teraflops Operating System, or TOS, that resides on Sandia's massively parallel computer Janus is verified for production release. Also discussed are methods used to build TOS before testing and evaluating, miscellaneous utility scripts, a sample test plan, and a proposed post-test method for quickly examining the large number of test results. The purpose of the report is threefold: (1) to provide a guide to T&E procedures, (2) to aid and guide others who will run T&E procedures on the new ASCI Red Storm machine, and (3) to document some of the history of evaluation and testing of TOS. This report is not intended to serve as an exhaustive manual for testers to conduct T&E procedures.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
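A toy calculation (hypothetical numbers, not the report's algebraic problem or the MBS itself) makes the motivating point concrete: when inputs are known only to within intervals, assigning them subjective uniform distributions and propagating by Monte Carlo makes possible extremes look improbable, whereas interval bounds retain them:

    # Toy comparison: interval (bounding) propagation vs. Monte Carlo with
    # assumed uniform inputs, for y = a*b with a, b known only to lie in [1, 3].
    import random

    random.seed(0)
    samples = [random.uniform(1, 3) * random.uniform(1, 3) for _ in range(100_000)]
    print("interval (bounding) result:", (1 * 1, 3 * 3))
    print("MC fraction above y = 8:", sum(s > 8 for s in samples) / len(samples))

The interval result honestly reports that y = 9 is possible, while the Monte Carlo run assigns only about 1% of its mass above y = 8, an artifact of the assumed uniform distributions rather than of the available knowledge.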
Proposed for publication in Applied Geochemistry.
Abstract not provided.
Tensile and compressive stress-strain experiments on metals at strain rates in the range of 1-1000 s{sup -1} are relevant to many applications, such as gravity-dropped munitions and airplane accidents. While conventional test methods cover strain rates up to {approx}10 s{sup -1} and split-Hopkinson and other techniques cover strain rates in excess of {approx}1000 s{sup -1}, there are no well-defined techniques for the intermediate or ''Sub-Hopkinson'' strain-rate regime. The current work outlines many of the challenges of testing in the Sub-Hopkinson regime and establishes methods for addressing these challenges. The resulting technique for obtaining intermediate-rate stress-strain data is demonstrated in tension on a high-strength, high-toughness steel alloy (Hytuf) that could be a candidate alloy for earth-penetrating munitions, and in compression on a Au-Cu braze alloy.
We conducted broadband absorption measurements of atmospheric water vapor in the ground state, X {sup 1}A{sub 1} (000), from 0.4 to 2.7 THz with a pressure broadening-limited resolution of 6.2 GHz using pulsed, terahertz time-domain spectroscopy (THz-TDS). We measured a total of seventy-two absorption lines and forty-nine lines were identified as H{sub 2}{sup 16}O resonances. All the H{sub 2}{sup 16}O lines identified were confirmed by comparing their center frequencies to experimental values available in the literature.
Friction and wear are major concerns in the performance and reliability of micromechanical (MEMS) devices. While a variety of lubricant and wear-resistant coatings are known that we might consider for application to MEMS devices, the severe geometric constraints of many micromechanical systems (high aspect ratios, shadowed surfaces) make most deposition methods for friction- and wear-resistant coatings impossible. In this program we have produced and evaluated highly conformal, tribological coatings, deposited by atomic layer deposition (ALD), for use on surface micromachined (SMM) and LIGA structures. ALD is a chemical vapor deposition process using sequential exposure of reagents and self-limiting surface chemistry, saturating at a maximum of one monolayer per exposure cycle. The self-limiting chemistry results in conformal coating of high-aspect-ratio structures, with monolayer precision. ALD of a wide variety of materials is possible, but there have been no studies of the structural, mechanical, and tribological properties of these films. We have developed processes for depositing thin (<100 nm) conformal coatings of selected hard and lubricious films (Al{sub 2}O{sub 3}, ZnO, WS{sub 2}, W, and W/Al{sub 2}O{sub 3} nanolaminates) and measured their chemical, physical, mechanical, and tribological properties. A significant challenge in this program was to develop instrumentation and quantitative test procedures, which did not exist, for friction, wear, film/substrate adhesion, elastic properties, stress, etc., of extremely thin films and nanolaminates. New scanning probe and nanoindentation techniques have been employed along with detailed mechanics-based models to evaluate these properties at small loads characteristic of microsystem operation. We emphasize deposition processes and fundamental properties of ALD materials; however, we have also evaluated applications and film performance for model SMM and LIGA devices.
This multinational test program is quantifying the aerosol particulates produced when a high energy density device (HEDD) impacts surrogate material and actual spent fuel test rodlets. The experimental work, performed in four consecutive test phases, has been in progress for several years. The overall program provides needed data that are relevant to some sabotage scenarios in relation to spent fuel transport and storage casks, and associated risk assessments. This program also provides significant political benefits in international cooperation for nuclear security related evaluations. The spent fuel sabotage--aerosol test program is coordinated with the international Working Group for Sabotage Concerns of Transport and Storage Casks (WGSTSC), and supported by both the U.S. Department of Energy and Nuclear Regulatory Commission. This report summarizes the preliminary, Phase 1 work performed in 2001 and 2002 at Sandia National Laboratories and the Fraunhofer Institute, Germany, and documents the experimental results obtained, observations, and preliminary interpretations. Phase 1 testing included: performance quantifications of the HEDD devices; characterization of the HEDD or conical shaped charge (CSC) jet properties with multiple tests; refinement of the aerosol particle collection apparatus being used; and, CSC jet-aerosol tests using leaded glass plates and glass pellets, serving as representative brittle materials. Phase 1 testing was quite important for the design and performance of the following Phase 2 test program and test apparatus.
Due to the nature of many infectious agents, such as anthrax, symptoms may either take several days to manifest or resemble those of less serious illnesses, leading to misdiagnosis. Thus, bioterrorism attacks that include the release of such agents are particularly dangerous and potentially deadly. For this reason, a system is needed for the quick and correct identification of disease outbreaks. The Real-time Outbreak Disease Surveillance System (RODS), initially developed by Carnegie Mellon University and the University of Pittsburgh, was created to meet this need. The RODS software implements different classifiers for pertinent health surveillance data in order to determine whether or not an outbreak has occurred. In an effort to improve the capability of RODS to detect outbreaks, we incorporate a data fusion method. Data fusion is used to improve the results of a single classification by combining the output of multiple classifiers. This paper documents the first stages of the development of a data fusion system that can combine the output of the classifiers included in RODS.
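The sketch below is a minimal illustration of the fusion idea, assuming each classifier emits an outbreak probability that is combined by a weighted average; it is not the RODS implementation, and the streams and weights are invented:

    # Minimal classifier-fusion illustration: each detector emits an outbreak
    # probability; a weighted average plus a threshold gives the fused decision.

    def fuse(probs, weights, threshold=0.5):
        """Weighted-average fusion of per-classifier outbreak probabilities."""
        score = sum(w * p for p, w in zip(probs, weights)) / sum(weights)
        return score, score > threshold

    # Three hypothetical surveillance streams: syndromic counts, OTC drug
    # sales, and emergency department visits.
    print(fuse([0.62, 0.35, 0.71], weights=[2.0, 1.0, 1.5]))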
Large complex teams (e.g., DOE labs) must achieve sustained productivity in critical operations (e.g., weapons and reactor development) while maintaining safety for involved personnel, the public, and physical assets, as well as security for property and information. This requires informed management decisions that depend on tradeoffs of factors such as the mode and extent of personnel protection, potential accident consequences, the extent of information and physical asset protection, and communication with and motivation of involved personnel. All of these interact (and potentially interfere) with each other and must be weighed against financial resources and implementation time. Existing risk analysis tools can successfully treat physical response, component failure, and routine human actions. However, many ''soft'' factors involving human motivation and interaction among weakly related factors have proved analytically problematic. There has been a need for an effective software tool capable of quantifying these tradeoffs and helping make rational choices. This type of tool, developed during this project, facilitates improvements in safety, security, and productivity, and enables measurement of improvements as a function of resources expended. Operational safety, security, and motivation are significantly influenced by ''latent effects'', which are pre-occurring influences. One example of these is that an atmosphere of excessive fear can suppress open and frank disclosures, which can in turn hide problems, impede correction, and prevent lessons learned. Another is that a cultural mind-set of commitment, self-responsibility, and passion for an activity is a significant contributor to the activity's success. This project pursued an innovative approach for quantitatively analyzing latent effects in order to link the above types of factors, aggregating available information into quantitative metrics that can contribute to strategic management decisions, and measuring the results. The approach also evaluates the inherent uncertainties, and allows for tracking dynamics for early response and assessing developing trends. The model development is based on how factors combine and influence other factors in real time and over extended time periods. Potential strategies for improvement can be simulated and measured. Input information can be determined by quantification of qualitative information in a structured derivation process. This has proved to be a promising new approach for research and development applied to personnel performance and risk management.
Saliency detection in images is an important outstanding problem both in machine vision design and in the understanding of human vision mechanisms. Recently, seminal work by Itti and Koch resulted in an effective saliency-detection algorithm. We reproduce the original algorithm in a software application, Vision, and explore its limitations. We propose extensions to the algorithm that promise to improve performance in the case of difficult-to-detect objects.
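The center-surround contrast at the heart of the Itti-Koch model can be sketched in a few lines (a simplification with synthetic data, not the Vision application): a feature map is blurred at a fine and a coarse scale, and the absolute difference highlights locally conspicuous regions:

    # Center-surround contrast, the core operation of Itti-Koch-style saliency.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    image = rng.random((128, 128))
    image[60:68, 60:68] += 3.0                   # a small conspicuous target

    center = gaussian_filter(image, sigma=2)     # fine ("center") scale
    surround = gaussian_filter(image, sigma=8)   # coarse ("surround") scale
    saliency = np.abs(center - surround)         # center-surround contrast map
    print(np.unravel_index(saliency.argmax(), saliency.shape))   # near the target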
A previously-developed experimental facility has been used to determine gas-surface thermal accommodation coefficients from the pressure dependence of the heat flux between parallel plates of similar material but different surface finish. Heat flux between the plates is inferred from measurements of temperature drop between the plate surface and an adjacent temperature-controlled water bath. Thermal accommodation measurements were determined from the pressure dependence of the heat flux for a fixed plate separation. Measurements of argon and nitrogen in contact with standard machined (lathed) or polished 304 stainless steel plates are indistinguishable within experimental uncertainty. Thus, the accommodation coefficient of 304 stainless steel with nitrogen and argon is estimated to be 0.80 {+-} 0.02 and 0.87 {+-} 0.02, respectively, independent of the surface roughness within the range likely to be encountered in engineering practice. Measurements of the accommodation of helium showed a slight variation with 304 stainless steel surface roughness: 0.36 {+-} 0.02 for a standard machine finish and 0.40 {+-} 0.02 for a polished finish. Planned tests with carbon-nanotube-coated plates will be performed when 304 stainless-steel blanks have been successfully coated.
Two different Sandia MEMS devices have been tested in a high-g environment to determine their performance and survivability. The first test was performed using a drop table to produce a peak acceleration load of 1792 g's over a period of 1.5 ms. For the second test, the MEMS devices were assembled in a gun-fired penetrator and shot into a cement target at the Army Waterways Experiment Station in Vicksburg, Mississippi. This test resulted in a peak acceleration of 7191 g's for a duration of 5.5 ms. The MEMS devices were instrumented using the MEMS Diagnostic Extraction System (MDES), which is capable of driving the devices and recording the device output data during the high-g event, providing in-flight data to assess the device performance. A total of six devices were monitored during the experiments: four mechanical non-volatile memory (MNVM) devices and two Silicon Reentry Switches (SiRES). All six devices functioned properly before, during, and after each high-g test without a single failure. This is the first known test under flight conditions of an active, powered MEMS device at Sandia.
Optoelectronic microsystems are increasingly prevalent as researchers seek to increase transmission bandwidths, implement electrical isolation, enhance security, or take advantage of sensitive optical sensing methods. Board-level photonic integration techniques continue to improve, but photonic microsystems and fiber interfaces remain problematic, especially upon size reduction. Optical fiber is unmatched as a transmission medium for distances ranging from tens of centimeters to kilometers. The difficulty with using optical fiber is the small size of the core (approximately 9 {micro}m for single-mode telecommunications fiber) and the tight requirements on spot size and input numerical aperture (NA). Coupling to devices such as vertical cavity surface emitting lasers (VCSELs) and photodetectors presents further difficulties, since these elements work in a plane orthogonal to the electronics board and typically require additional optics. This leads to the need for a packaging solution that can incorporate dissimilar materials while maintaining the tight alignment tolerances required by the optics. Over the course of this LDRD project, we have examined the capabilities of components such as VCSELs and photodetectors for high-speed operation and investigated the alignment tolerances required by the optical system. A solder reflow process has been developed to help fulfill these packaging requirements, and the results of that work are presented here.
This report examines a number of hardware circuit design issues associated with implementing certain functions in FPGA and ASIC technologies. Here we show circuit designs for AES and SHA-1 that have an extremely small hardware footprint, yet show reasonably good performance characteristics as compared to the state of the art designs found in the literature. Our AES performance numbers are fueled by an optimized composite field S-box design for the Stratix chipset. Our SHA-1 designs use register packing and feedback functionalities of the Stratix LE, which reduce the logic element usage by as much as 72% as compared to other SHA-1 designs.
Proposed for publication in IEEE Transactions on Antennas and Propagation.
Abstract not provided.
While isentropic compression experiment (ICE) techniques have proved useful in deducing the high-pressure compressibility of a wide range of materials, they have encountered difficulties where large-volume phase transitions exist. The present study sought to apply graded-density impactor methods for producing isentropic loading in planar impact experiments to a selection of such problems. Cerium was chosen due to its 20% compression between 0.7 and 1.0 GPa. A model was constructed based on limited earlier dynamic data and applied to the design of a suite of experiments. A capability for handling this material was installed. Two experiments were executed using shock/reload techniques with available samples, loading initially to near the gamma-alpha transition, then reloading. As well, two graded-density impactor experiments were conducted with alumina. A method for interpreting ICE data was developed and validated; this uses a wavelet construction for the ramp wave and includes corrections for the ''diffraction'' of wavelets by releases or reloads reflected from the sample/window interface. Alternate methods for constructing graded-density impactors are discussed.
Water is the critical natural resource of the new century. Significant improvements in traditional water treatment processes require novel approaches based on a fundamental understanding of nanoscale and atomic interactions at interfaces between aqueous solution and materials. To better understand these critical issues and to promote an open dialog among leading international experts in water-related specialties, Sandia National Laboratories sponsored a workshop on April 24-26, 2005 in Santa Fe, New Mexico. The ''Frontiers of Interfacial Water Research Workshop'' provided attendees with a critical review of water technologies and emphasized the new advances in surface and interfacial microscopy, spectroscopy, diffraction, and computer simulation needed for the development of new materials for water treatment.
Recent interest in reprocessing nuclear fuel in the U.S. has led to advanced separations processes that employ continuous processing and multiple extraction steps. These advanced plants will need to be designed with state-of-the-art instrumentation for materials accountancy and control. This research examines current and upcoming instrumentation for nuclear materials accountancy to identify those instruments most suited to the reprocessing environment. Though this topic has received attention time and again in the past, new technologies and changing world conditions require a renewed look at this subject. The needs of the advanced UREX+ separations concept are first identified, and then a literature review of current and upcoming measuring techniques is presented. The report concludes with a preliminary list of recommended instruments and measurement locations.
This report contains the summary of LDRD project 91312, titled ''Binary Electrokinetic Separation of Target DNA from Background DNA Primers''. This work is the first product of a collaboration with Columbia University and the Northeast BioDefense Center of Excellence. In conjunction with Ian Lipkin's lab, we are developing a technique to reduce false positive events, due to the detection of unhybridized reporter molecules, in a sensitive and multiplexed detection scheme for nucleic acids developed by the Lipkin lab. Such false positives are the most significant problem in the operation of their capability. As they are developing the tools for rapidly detecting the entire panel of hemorrhagic fevers, this technology will immediately serve an important national need. The goal of this work was to attempt to separate nucleic acid from a preprocessed sample. We demonstrated the preconcentration of kilobase-pair-length double-stranded DNA targets, and observed little preconcentration of 60 base-pair-length single-stranded DNA probes. These objectives were accomplished in microdevice formats that are compatible with larger detection systems for sample pre-processing. Combined with Columbia's expertise, this technology would enable a unique, fast, and potentially compact method for detecting/identifying genetically modified organisms and multiplexed rapid nucleic acid identification. A competing approach, the DARPA-funded IRIS Pharmaceutical TIGER platform, requires many hours of operation and an $800k piece of equipment that fills a room. The Columbia/SNL system could provide a result in 30 minutes, at a cost of a few thousand dollars for the platform, and would be the size of a shoebox or smaller.
Abstract not provided.
This report documents the investigation regarding the failure of CPVC piping that was used to connect a solar hot water system to standard plumbing in a home. Details of the failure are described along with numerous pictures and diagrams. A potential failure mechanism is described and recommendations are outlined to prevent such a failure.
Political borders are controversial and contested spaces. In an attempt to better understand movement along and through political borders, this project applied the metaphor of a membrane to look at how people, ideas, and things ''move'' through a border. More specifically, the research team employed this metaphor in a system dynamics framework to construct a computer model to assess legal and illegal migration on the US-Mexico border. Employing a metaphor can be helpful, as it was in this project, to gain different perspectives on a complex system. In addition to the metaphor, the multidisciplinary team utilized an array of methods to gather data, including traditional literature searches, an experts workshop, a focus group, interviews, and culling expertise from the individuals on the research team. Results from the qualitative efforts revealed strong social as well as economic drivers that motivate individuals to cross the border legally. Based on the information gathered, the team concluded that legal migration dynamics were of a scope we did not want to consider; hence, available demographic models sufficiently capture migration at the local level. Results from both the quantitative and qualitative data searches were used to modify a 1977 border model to demonstrate the dynamic nature of illegal migration. Model runs reveal that current US policies based on neo-classical economic theory have proven ineffective in curbing illegal migration, and that proposed enforcement policies are also likely to be ineffective. We suggest, based on model results, that improvement in economic conditions within Mexico may have the biggest impact on illegal migration to the U.S. The modeling also supports the views expressed in the current literature suggesting that demographic and economic changes within Mexico are likely to slow illegal migration by 2060 with no special interventions made by either government.
Current Joint Test Assembly (JTA) neutron monitors rely on knock-on proton type detectors that are susceptible to X-rays and low-energy gamma rays. We investigated two novel plastic scintillating fiber directional neutron detector prototypes. One prototype used a fiber selected such that the fiber width was less than 2.1 mm, which is the range of a proton in plastic. The difference in the distribution of recoil proton energy deposited in the fiber was used to determine the incident neutron direction. The second prototype measured both the recoil proton energy and direction. The neutron direction was determined from the kinematics of single neutron-proton scatters. This report describes the development and performance of these detectors.
A turbulence model for buoyant flows has been developed in the context of a k-{var_epsilon} turbulence modeling approach. A production term is added to the turbulent kinetic energy equation based on dimensional reasoning, using an appropriate time scale for buoyancy-induced turbulence taken from the vorticity conservation equation. The resulting turbulence model is calibrated against far-field helium-air spread rate data and validated with near-source, strongly buoyant helium plume data sets. This model is more numerically stable and gives better predictions over a much broader range of mesh densities than the standard k-{var_epsilon} model for these strongly buoyant flows.
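For orientation, many k-{var_epsilon} codes add a gradient-diffusion buoyant production term to the k equation; the sketch below shows that textbook form only, since the report's term, built on a vorticity-based time scale, is the novel contribution and is not reproduced here:

    # Textbook gradient-diffusion buoyant production term for the k equation
    # (assumed standard form, not the report's new model):
    #   G_b = -(nu_t / (rho * Pr_t)) * g_i * d(rho)/dx_i,  gravity along -z.

    def buoyant_production(nu_t, rho, drho_dz, g=9.81, pr_t=0.9):
        """Returns G_b; positive values add turbulent kinetic energy."""
        return -(nu_t / (rho * pr_t)) * (-g) * drho_dz

    # Heavier air above a light helium-air mixture (unstable): d(rho)/dz > 0.
    print(buoyant_production(nu_t=1e-3, rho=0.8, drho_dz=2.0))   # positive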
Because of the inevitable depletion of fossil fuels and the corresponding release of carbon to the environment, the global energy future is complex. Some of the consequences may be politically and economically disruptive, and expensive to remedy. For the next several centuries, fuel requirements will increase with population, land use, and ecosystem degradation. Current or projected levels of aggregated energy resource use will not sustain civilization as we know it beyond a few more generations. At the same time, issues of energy security, reliability, sustainability, recoverability, and safety need attention. We supply a top-down, qualitative model--the surety model--to balance expenditures of limited resources to assure success while at the same time avoiding catastrophic failure. Looking at U.S. energy challenges from a surety perspective offers new insights on possible strategies for developing solutions to challenges. The energy surety model with its focus on the attributes of security and sustainability could be extrapolated into a global energy system using a more comprehensive energy surety model than that used here. In fact, the success of the energy surety strategy ultimately requires a more global perspective. We use a 200 year time frame for sustainability because extending farther into the future would almost certainly miss the advent and perfection of new technologies or changing needs of society.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in Groundwater.
Abstract not provided.
UNIPROCESSOR PERFORMANCE ANALYSIS OF A REPRESENTATIVE WORKLOAD OF SANDIA NATIONAL LABORATORIES' SCIENTIFIC APPLICATIONS. Master of Science in Electrical Engineering, New Mexico State University, Las Cruces, New Mexico, 2005. Dr. Jeanine Cook, Chair. Throughout the last decade, computer performance analysis has become absolutely necessary to maximizing the performance of some workloads. Sandia National Laboratories (SNL), located in Albuquerque, New Mexico, is no different: to achieve maximum performance of large scientific parallel workloads, performance analysis is needed at the uni-processor level. A representative workload has been chosen as the basis of a computer performance study to determine optimal processor characteristics in order to better specify the next generation of supercomputers. Cube3, a finite element test problem developed at SNL, is representative of its scientific workloads. This workload has been studied at the uni-processor level to understand characteristics of the microarchitecture that will lead to overall performance improvement at the multi-processor level. The goal of studying this workload at the uni-processor level is to build a performance prediction model that will be integrated into a multi-processor performance model currently under development at SNL. Through the use of performance counters on the Itanium 2 microarchitecture, performance statistics are studied to determine bottlenecks in the microarchitecture and/or changes in the application code that will maximize performance. From source code analysis, a performance-degrading loop kernel was identified, and through the use of compiler optimizations a performance gain of around 20% was achieved.
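The abstract does not publish the offending Cube3 kernel; the following is a hypothetical illustration of the class of source change involved, hoisting a loop-invariant expression (and a repeated row lookup) out of an inner loop, the sort of transformation a restructuring compiler can also perform:

    import time

    # Hypothetical kernel, not Cube3's actual loop: an expression invariant
    # in j is recomputed on every inner iteration, versus the same kernel
    # with the invariant hoisted.
    N = 1500
    a = [[(i * N + j) % 97 / 97.0 for j in range(N)] for i in range(N)]
    scale = [1.0 + i / N for i in range(N)]

    def kernel_naive():
        total = 0.0
        for i in range(N):
            for j in range(N):
                total += a[i][j] * (scale[i] * scale[i] + 1.0)  # invariant in j
        return total

    def kernel_hoisted():
        total = 0.0
        for i in range(N):
            s = scale[i] * scale[i] + 1.0   # hoisted out of the inner loop
            row = a[i]                      # hoist the row lookup as well
            for j in range(N):
                total += row[j] * s
        return total

    for f in (kernel_naive, kernel_hoisted):
        t0 = time.perf_counter()
        f()
        print(f"{f.__name__}: {time.perf_counter() - t0:.2f} s")

On real hardware the thesis locates such kernels with performance counters rather than by inspection; the timing harness above only demonstrates the payoff of the transformation itself.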
Proposed for publication in Nuclear Instruments and Methods in Physics Research.
Abstract not provided.
We present a new ab initio method for electronic structure calculations of materials at finite temperature (FT) based on the all-electron quasiparticle self-consistent GW (QPscGW) approximation and the Keldysh time-loop Green's function approach. We apply the method to Si, Ge, GaAs, InSb, and diamond and show that the band gaps of these materials universally decrease with temperature, in contrast with the local density approximation (LDA) of density functional theory (DFT), where the band gaps universally increase. At temperatures of a few eV, the difference between quasiparticle energies obtained with the FT-QPscGW and FT-LDA approaches is significantly reduced. This result suggests that existing simulations of very high temperature materials based on the FT-LDA are more justified than might appear from the well-known LDA band gap errors at zero temperature.
The Use of Ion Mobility Spectrometry (IMS) in the Detection of Contraband. Sandia researchers use ion mobility spectrometers for trace chemical detection and analysis in a variety of projects and applications. Products developed in recent years based on IMS technology include explosives-detection personnel portals, the Material Area Access (MAA) checkpoint of the future, an explosives-detection vehicle portal, hand-held detection systems such as the Hound and Hound II (all 6400), micro-IMS sensors (1700), ordnance detection (2500), and Fourier transform IMS technology (8700). The emphasis to date has been on explosives detection, but the detection of chemical agents has also been pursued (8100 and 6400).
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report began with a Laboratory-Directed Research and Development (LDRD) project to improve Sandia National Laboratories' multidisciplinary capabilities in energy systems analysis. The aim is to understand how various electricity generating options can best serve needs in the United States. The initial product is documented in a series of white papers that span a broad range of topics, including the successes and failures of past modeling studies, sustainability, oil dependence, energy security, and nuclear power. Summaries of these projects are included here. These projects have provided a background and discussion framework for the Energy Systems Analysis LDRD team to carry out an intercomparison of many of the commonly available electric power sources in present use and of the efforts needed to realize progress on those options. A computer aid has been developed to compare the options based on cost and other attributes such as technological, social, and policy constraints. The Energy Systems Analysis team has developed a multi-criteria framework that allows comparison of energy options with a set of metrics usable across all technologies. This report discusses several evaluation techniques and introduces the set of criteria developed for this LDRD.
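The report's actual criteria, scores, and weights are not reproduced in this abstract; the sketch below only illustrates the general shape of such a multi-criteria comparison. All options, metric values, and weights are invented placeholders.

    # Minimal weighted multi-criteria scoring sketch (illustrative values only).
    options = {
        #              $/MWh  tCO2/GWh  capacity_factor  policy_risk (1-10)
        "coal":       (40.0,   900.0,   0.85,            7.0),
        "gas_cc":     (55.0,   400.0,   0.80,            4.0),
        "nuclear":    (60.0,    10.0,   0.90,            8.0),
        "wind":       (70.0,    10.0,   0.35,            3.0),
    }
    weights = (0.35, 0.25, 0.25, 0.15)        # assumed metric importance
    lower_is_better = (True, True, False, True)

    def normalize(column, invert):
        # Map each metric onto [0, 1] with 1 = best, so weights are comparable.
        lo, hi = min(column), max(column)
        span = (hi - lo) or 1.0
        return [(hi - x) / span if invert else (x - lo) / span for x in column]

    cols = list(zip(*options.values()))
    norm = [normalize(c, inv) for c, inv in zip(cols, lower_is_better)]
    scores = {
        name: sum(w * norm[m][i] for m, w in enumerate(weights))
        for i, name in enumerate(options)
    }
    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:8s} {s:.3f}")

A framework of this kind makes the trade-offs explicit: changing the weights (a policy judgment) reranks the options transparently rather than implicitly.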
Abstract not provided.
Abstract not provided.
A Markov process model has been used for the DART systems analysis study. The basic design-through-analysis process is not immediately describable as a Markov process, but we show how a true Markov process can be derived and analyzed. We also show how sensitivities of the model with respect to the input values can be computed efficiently. This is useful in understanding how the results of this model can be used to determine strategies for investment that will improve the design-through-analysis process.
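The report's states and transition probabilities are not given in this abstract; the sketch below shows the standard machinery such an analysis rests on, using an invented absorbing chain for a design-through-analysis loop. The expected steps to completion follow from the fundamental matrix, t = (I - Q)^{-1} 1; the report computes sensitivities efficiently (presumably analytically), but central finite differences suffice for illustration.

    import numpy as np

    # Hypothetical absorbing Markov chain (not DART's): states 0-3 are
    # transient (design, mesh, analyze, rework); state 4 ("done") absorbs.
    P = np.array([
        #  design  mesh  analyze rework  done
        [0.00,   0.90,  0.00,   0.10,  0.00],   # design
        [0.00,   0.00,  0.85,   0.15,  0.00],   # mesh
        [0.00,   0.00,  0.00,   0.40,  0.60],   # analyze
        [0.70,   0.20,  0.10,   0.00,  0.00],   # rework
        [0.00,   0.00,  0.00,   0.00,  1.00],   # done (absorbing)
    ])

    def expected_steps(P, n_transient=4):
        """Expected steps to absorption from each transient state:
        t = (I - Q)^{-1} 1, with Q the transient-to-transient block."""
        Q = P[:n_transient, :n_transient]
        return np.linalg.solve(np.eye(n_transient) - Q, np.ones(n_transient))

    t = expected_steps(P)
    print("expected steps from 'design':", t[0])

    # Sensitivity of completion time to one input, here the analyze-success
    # probability P[2,4], shifting mass from rework to keep rows stochastic.
    h = 1e-6
    for sign in (+1, -1):
        Pp = P.copy()
        Pp[2, 4] += sign * h
        Pp[2, 3] -= sign * h
        if sign > 0:
            t_hi = expected_steps(Pp)[0]
        else:
            t_lo = expected_steps(Pp)[0]
    print("d(steps)/d(analyze-success):", (t_hi - t_lo) / (2 * h))

Sensitivities of this kind identify which stage of the process repays investment: a large negative derivative with respect to a success probability marks the bottleneck worth improving.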