This paper presents a new map representing the structure of all of science, based on journal articles and covering both the natural and social sciences. Like cartographic maps of our world, the map of science provides a bird's-eye view of today's scientific landscape. It can be used to visually identify major areas of science, their size, similarity, and interconnectedness. To be useful, the map needs to be accurate on both a local and a global scale. While our recent work has focused on the former, this paper summarizes results on how to achieve structural (global) accuracy. Eight alternative measures of journal similarity were applied to a data set of 7,121 journals covering over 1 million documents in the combined Science Citation and Social Science Citation Indexes. For each journal similarity measure we generated two-dimensional spatial layouts using the force-directed graph layout tool VxOrd. Next, mutual information values were calculated for each graph at different clustering levels to give a measure of structural accuracy for each map. The best co-citation and inter-citation maps according to local and structural accuracy were selected, and are presented and characterized. These two maps are compared to establish robustness. The inter-citation map is then used to examine linkages between disciplines. Biochemistry appears as the most interdisciplinary discipline in science.
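To make the structural-accuracy step above concrete, here is a minimal sketch computing the mutual information between two cluster assignments of the same journal set; the function and the toy labels are illustrative assumptions, not the paper's actual clustering levels or normalization.

```python
import numpy as np
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Mutual information (bits) between two clusterings of the same items."""
    n = len(labels_a)
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    count_ab = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), n_ab in count_ab.items():
        p_ab = n_ab / n
        mi += p_ab * np.log2(p_ab / ((count_a[a] / n) * (count_b[b] / n)))
    return mi

# toy example: two alternative cluster assignments of six "journals"
print(mutual_information([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1]))
```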
The Z accelerator [R.B. Spielman, W.A. Stygar, J.F. Seamen et al., Proceedings of the 11th International Pulsed Power Conference, Baltimore, MD, 1997, edited by G. Cooperstein and I. Vitkovitsky (IEEE, Piscataway, NJ, 1997), Vol. 1, p. 709] at Sandia National Laboratories delivers ~20 MA load currents to create high magnetic fields (>1000 T) and high pressures (megabar to gigabar). In a z-pinch configuration, the magnetic pressure (the Lorentz force) supersonically implodes a plasma created from a cylindrical wire array, which at stagnation typically generates a plasma with energy densities of about 10 MJ/cm³ and temperatures >1 keV at 0.1% of solid density. These plasmas produce x-ray energies approaching 2 MJ at powers >200 TW for inertial confinement fusion (ICF) and high energy density physics (HEDP) experiments. In an alternative configuration, the large magnetic pressure directly drives isentropic compression experiments to pressures >3 Mbar and accelerates flyer plates to >30 km/s for equation of state (EOS) experiments at pressures up to 10 Mbar in aluminum. Development of multidimensional radiation-magnetohydrodynamic codes, coupled with more accurate material models (e.g., quantum molecular dynamics calculations with density functional theory), has produced synergy between validating the simulations and guiding the experiments. Z is now routinely used to drive ICF capsule implosions (focusing on implosion symmetry and neutron production) and to perform HEDP experiments (including radiation-driven hydrodynamic jets, EOS, phase transitions, strength of materials, and detailed behavior of z-pinch wire-array initiation and implosion). This research is performed in collaboration with many other groups from around the world. A five year project to enhance the capability and precision of Z, to be completed in 2007, will result in x-ray energies of nearly 3 MJ at x-ray powers >300 TW.
Terahertz radiation from optically-induced plasmas on metal, semiconductor, and dielectric surfaces is compared to electron-hole plasma radiation from GaAs and Ge. Electro-optic sampling and electric-field probes measure radiated field waveforms and distributions to 0.350 THz.
Gold nanocrystal (NC)/silica films are synthesized through self-assembly of water-soluble gold nanocrystal micelles and silica by sol-gel processing. Absorption and transmission spectra show a strong surface plasmon resonance (SPR) absorption peak at ~520 nm. Angular excitation spectra of the surface plasmon show a steep dip in the reflectivity curve at ~65°, depending on the thickness and refractive index of the gold NC/silica film. A potential SPR sensor with enhanced sensitivities can be realized based on these gold NC/silica films.
Cold spray, a new member of the thermal spray process family, can be used to prepare dense, thick metal coatings. It has tremendous potential as a spray-forming process. However, it is well known that significant cold work occurs during the cold spray deposition process. This cold work results in hard coatings but relatively brittle bulk deposits. This work investigates the mechanical properties of cold-sprayed aluminum and the effect of annealing on those properties. Cold spray coatings approximately 1 cm thick were prepared using three different feedstock powders: Valimet H-10; Valimet H-20; and Brodmann Flomaster. ASTM E8 tensile specimens were machined from these coatings and tested using standard tensile testing procedures. Each material was tested in two conditions: as-sprayed; and after a 300 C, 22 h air anneal. The as-sprayed material showed high ultimate strength and low ductility, with <1% elongation. The annealed samples showed a reduction in ultimate strength but a dramatic increase in ductility, with up to 10% elongation. The annealed samples exhibited mechanical properties that were similar to those of wrought 1100 H14 aluminum. Microstructural examination and fractography clearly showed a change in fracture mechanism between the as-sprayed and annealed materials. These results indicate good potential for cold spray as a bulk-forming process.
Laser-induced breakdown spectroscopy (LIBS) was used in the evaluation of aerosol concentration in the exhaust of an oxygen/natural-gas glass furnace. Experiments showed that for a delay time of 10 µs and a gate width of 50 µs, the presence of CO₂ and changes in gas temperature affect the intensity of both continuum emission and the Na D lines. The intensity increased for the neutral Ca and Mg lines in the presence of 21% CO₂ when compared to 100% N₂, whereas the intensity of the Mg and Ca ionic lines decreased. An increase in temperature from 300 to 730 K produced an increase in both continuum emission and Na signal. These laboratory measurements were consistent with measurements in the glass furnace exhaust. Time-resolved analysis of the spark radiation suggested that differences in continuum radiation resulting from changes in bath composition are only apparent at long delay times. The changes in the intensity of ionic and neutral lines in the presence of CO₂ are believed to result from higher free electron number density caused by lower ionization energies of species formed during the spark decay process in the presence of CO₂. For the high Na concentration observed in the glass furnace exhaust, self-absorption of the spark radiation occurred. Power law regression was used to fit laboratory Na LIBS calibration data for sodium loadings, gas temperatures, and a CO₂ content representative of the furnace exhaust. Improvement of the LIBS measurement in this environment may be possible by evaluation of Na lines with weaker emission and through the use of shorter gate delay times.
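A minimal sketch of the power-law regression step mentioned above, assuming a calibration of the form signal = a * concentration^b; the sample data points and variable names are hypothetical, not values from the study.

```python
import numpy as np

# hypothetical Na calibration points: sodium loading (ppm) vs. LIBS line intensity (a.u.)
loading = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
intensity = np.array([120.0, 310.0, 820.0, 1900.0, 4200.0])

# fit intensity = a * loading**b by linear regression in log-log space
b, log_a = np.polyfit(np.log(loading), np.log(intensity), 1)
a = np.exp(log_a)
print(f"I = {a:.1f} * C^{b:.2f}")  # self-absorption typically shows up as b < 1
```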
The junction temperature of AlGaN ultraviolet light-emitting diodes emitting at 295 nm is measured by using the temperature coefficients of the diode forward voltage and emission peak energy. The high-energy slope of the spectrum is used to measure the carrier temperature. A linear relation between junction temperature and current is found. Analysis of the experimental methods reveals that the diode forward voltage is the most accurate (±3 C). A theoretical model for the dependence of the diode forward voltage (V_f) on junction temperature (T_j) is developed that takes into account the temperature dependence of the energy gap. A thermal resistance of 87.6 K/W is obtained with the device mounted with thermal paste on a heat sink.
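As a hedged numerical illustration of the forward-voltage method described above, the sketch below estimates junction temperature from a measured forward-voltage shift and then cross-checks it against the reported thermal resistance; the coefficient, voltages, and drive current are placeholder assumptions, not data from the paper.

```python
# placeholder values for illustration only (not data from the paper)
dVf_dT = -2.3e-3   # V/K, forward-voltage temperature coefficient
Vf_cal = 5.60      # V, forward voltage at the calibration temperature
T_cal = 25.0       # C, calibration (heat-sink) temperature
Vf_run = 5.56      # V, forward voltage under steady DC drive
I_run = 0.036      # A, drive current

# junction temperature from the forward-voltage shift
T_j = T_cal + (Vf_run - Vf_cal) / dVf_dT
print(f"T_j from V_f shift: {T_j:.1f} C")

# cross-check using the thermal resistance value quoted in the abstract
R_th = 87.6        # K/W
T_j_thermal = T_cal + R_th * I_run * Vf_run
print(f"T_j from R_th:      {T_j_thermal:.1f} C")
```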
The effect of temperature on the tensile properties of annealed 304L stainless steel and HERF 304L stainless steel forgings was determined by completing experiments over the moderate range of -40 F to 160 F. Temperature effects were more significant in the annealed material than the HERF material. The tensile yield strength of the annealed material at -40 F averaged twenty two percent above the room temperature value and at 160 F averaged thirteen percent below. The tensile yield strength for the three different geometry HERF forgings at -40 F and 160 F changed less than ten percent from room temperature. The ultimate tensile strength was more temperature dependent than the yield strength. The annealed material averaged thirty six percent above and fourteen percent below the room temperature ultimate strength at -40 F and 160 F, respectively. The HERF forgings exhibited similar, slightly lower changes in ultimate strength with temperature. For completeness and illustrative purposes, the stress-strain curves are included for each of the tensile experiments conducted. The results of this study prompted a continuation study to determine tensile property changes of welded 304L stainless steel material with temperature, documented separately.
Several groups of plastic-molded CD4011 devices were electrically tested as part of an Army dormant storage program. For this test, the parts had been in storage in missile containers for 4.5 years. Eight of the parts (out of 1200) failed the electrical tests and were subsequently analyzed to determine the cause of the failures. The root cause was found to be corrosion of the unpassivated Al bondpads. No significant attack of the passivated Al traces was found. Seven of the eight failures occurred in parts stored on a prepositioning ship (the Jeb Stuart), suggesting a link between the external environment and the observed corrosion.
Social and ecological scientists emphasize that effective natural resource management depends in part on understanding the dynamic relationship between the physical and non-physical processes associated with resource consumption. In this case, the physical processes include hydrological, climatological, and ecological dynamics, and the non-physical processes include social, economic, and cultural dynamics among the humans who consume the resources. This project represents a case study aimed at modeling coupled social and physical processes in a single decision support system. In central New Mexico, individual land use decisions over the past five decades have resulted in the gradual transformation of the Middle Rio Grande Valley from a primarily rural agricultural landscape to a largely urban one. In the arid southwestern U.S., the aggregate impact of individual decisions about land use is uniquely important to understand, because scarce hydrological resources will likely limit the viability of resulting growth and development trajectories. This decision support tool is intended to help planners in the area look forward in their efforts to create a collectively defined 'desired' social landscape in the Middle Rio Grande. Our research question explored the ways in which socio-cultural values affect decisions regarding that landscape and associated land use. Because of the constraints hydrological resources place on land use, we first assumed that water use, as embodied in water rights, was a reasonable surrogate for land use. We expected that modeling the movement of water rights over time and across water source types (surface and ground) would provide planners with insight into the possibilities for certain types of decisions regarding social landscapes, and the impact those same decisions would have on those landscapes. We found that water rights transfer data in New Mexico are too incomplete and inaccurate to use as the basis for the model. Furthermore, because of this lack of accuracy and completeness, water rights ownership was a poor indicator of water and land usage habits and patterns. We also found that the commitment among users in the Middle Rio Grande Valley is to an agricultural lifestyle, not to a community or place. This commitment is conditioned primarily by generational cohort and past experience. If conditions warrant, many would be willing to practice the lifestyle elsewhere. A related finding was that sometimes the pressure to sell came not from the putative price of the land, but from the taxes on the land. These taxes were, in turn, a function of the level of urbanization of the neighborhood, and this urbanization degraded the quality of the agricultural lifestyle. The project also yielded some valuable lessons regarding the model development process. A facilitative and collaborative style (rather than a top-down, directive style) was most productive with the interdisciplinary, inter-institutional team that worked on the project. This allowed for the emergence of a process model that combined small, discipline- and/or task-specific subgroups with larger, integrating team meetings. The project objective was to develop a model that could be used to run test scenarios exploring the potential impact of different policy options. We achieved that objective, although not with the level of success or modeling fidelity for which we had hoped.
This report describes the results of the test scenarios only superficially, since a more complete analysis of the scenarios would require additional time and effort. Our greatest obstacle to the successful completion of the project was that the required data were sparse, of poor quality, or completely nonexistent. Moreover, we found no similar modeling or research efforts taking place at either the state or local level. This leads to a key finding of this project: state and local policy decisions regarding land use, development, urbanization, and water resource allocation are being made with minimal data and without the benefit of economic or social policy analysis.
Warm dense matter is the region in the phase space of density and temperature where the thermal, Fermi, and Coulomb energies are approximately equal. The lack of a dominating scale and physical behavior makes it challenging to model the physics with high fidelity. For Sandia, a fundamental understanding of this region is important because our experimental HEDP programs need high-fidelity descriptive and predictive modeling. We show that multi-scale simulations of macroscopic physical phenomena now have predictive capability even for difficult but ubiquitous materials such as stainless steel, a transition-metal alloy.
Imagine free-standing flexible membranes with highly aligned arrays of carbon nanotubes (CNTs) running through their thickness. Perhaps with both ends of the CNTs open for highly controlled nanofiltration? Or CNTs at heights uniformly above a polymer membrane for a flexible array of nanoelectrodes or field-emitters? How about CNT films with enormous amounts of accessible surface area for analyte adsorption? These self-assembled crystalline nanotubes consist of multiple layers of graphene sheets rolled into concentric cylinders. Tube diameters (3-300 nm), inner-bore diameters (2-15 nm), and lengths (nanometers to microns) are controlled to tailor physical, mechanical, and chemical properties. We proposed to explore growth and characterize nanotube arrays to help determine their functionality for Sandia applications. Thermal chemical vapor deposition growth in a furnace nucleates from a metal catalyst. Ordered arrays grow using templates from self-assembled hexagonal arrays of nanopores in anodized aluminum oxide. Polymeric binders can mechanically hold the CNTs in place for polishing, lift-off, and membrane formation. The stiffness and the electrical and thermal conductivities of CNTs make them ideally suited for a wide variety of possible applications. Large-area, highly accessible gas-adsorbing carbon surfaces, superb cold-cathode field emission, and unique nanoscale geometries can lead to advanced microsensors using analyte adsorption, arrays of functionalized nanoelectrodes for enhanced electrochemical detection of biological/explosive compounds, or mass-ionizers for gas-phase detection. Materials studies involving membrane formation may lead to breakthroughs in nanofiltration/nanochromatography for the separation of chemical and biological agents. With controlled nanofilter sizes, ultrafiltration will be viable to separate and preconcentrate viruses and many strains of bacteria for 'down-stream' analysis.
Chemical microsensors rely on partitioning of airborne chemicals into films to collect and measure trace quantities of hazardous vapors. The polymer sensor coatings used today are typically slow to respond and difficult to apply reproducibly. The objective of this project was to produce a durable sensor coating material based on graphitic nanoporous carbon (NPC), a new material first studied at Sandia, for collection and detection of volatile organic compounds (VOC), toxic industrial chemicals (TIC), chemical warfare agents (CWA), and nuclear processing precursors (NPP). Preliminary studies using NPC films on exploratory surface-acoustic-wave (SAW) devices and as a µChemLab membrane preconcentrator suggested that NPC may outperform existing, irreproducible coatings for SAW sensor and µChemLab preconcentrator applications. Success of this project will provide a strategic advantage to the development of a robust, manufacturable, highly sensitive chemical microsensor for public health, industrial, and national security needs. We use pulsed-laser deposition to grow NPC films at room temperature with negligible residual stress; hence, they can be deposited onto nearly any substrate material to any thickness. Controlled deposition yields reproducible NPC density, morphology, and porosity, without any discernible variation in surface chemistry. NPC coatings >20 µm thick with density <5% that of graphite have been demonstrated. NPC can be 'doped' with nearly any metal during growth to provide further enhancements in analyte detection and selectivity. Optimized NPC-coated SAW devices were compared directly to commonly used polymer-coated SAWs for sensitivity to a variety of VOC, TIC, CWA, and NPP. For every analyte, NPC outperforms each polymer coating by multiple orders of magnitude in detection sensitivity, with improvements ranging from 10³ to 10⁸ times greater detection sensitivity. NPC-coated SAW sensors appear capable of detecting most analytes tested at concentrations below parts-per-billion. In addition, the graphitic nature of NPC enables thermal stability >600 C, several hundred degrees higher than the polymers. This superior thermal stability will enable higher-temperature preconcentrator operation as well as greatly prolonged device reliability, since polymers tend to degrade with time and repeated thermal cycling.
The convergence of nanoscience and biotechnology has opened the door to the integration of a wide range of biological molecules and processes with synthetic materials and devices. A primary biomolecule of interest has been DNA, based upon its role as information storage in living systems as well as its ability to withstand a wide range of environmental conditions. DNA also offers unique chemistries and interacts with a range of biomolecules, making it an ideal component in biological sensor applications. The primary goal of this project was to develop methods that utilize in vitro DNA synthesis to provide spatial localization of nanocrystal quantum dots (nQDs). To accomplish this goal, three specific technical objectives were addressed: (1) attachment of nQDs to DNA nucleotides, (2) demonstrating the synthesis of nQD-DNA strands in bulk solution, and (3) optimizing the ratio of unlabeled to nQD-labeled nucleotides. DNA nucleotides were successfully attached to nQDs using the biotin-streptavidin linkage. Synthesis of 450-nm long, nQD-coated DNA strands was demonstrated using a DNA template and the polymerase chain reaction (PCR)-based method of DNA amplification. Modifications in the synthesis process and conditions were subsequently used to synthesize 2-µm long linear nQD-DNA assemblies. In the case of the 2-µm structures, both the ratio of streptavidin-coated nQDs to biotinylated dCTP and the ratio of streptavidin-coated nQD-dCTPs to unlabeled dCTPs affected the ability to synthesize the nQD-DNA assemblies. Overall, these proof-of-principle experiments demonstrated the successful synthesis of nQD-DNA using DNA templates and in vitro replication technologies. Continued development of this technology may enable rapid, spatial patterning of semiconductor nanoparticles with Angstrom-level resolution, as well as optically active probes for DNA and other biomolecular analyses.
A combined experimental/modeling study was conducted to better understand the critical role of gas-surface interactions in rarefied gas flows. An experimental chamber and supporting diagnostics were designed and assembled to allow simultaneous measurements of gas heat flux and inter-plate gas density profiles in an axisymmetric, parallel-plate geometry. Measurements of gas density profiles and heat flux are made under identical conditions, eliminating an important limitation of earlier studies. The use of in situ, electron-beam fluorescence is demonstrated as a means to measure gas density profiles although additional work is required to improve the accuracy of this technique. Heat flux is inferred from temperature-drop measurements using precision thermistors. The system can be operated with a variety of gases (monatomic, diatomic, polyatomic, mixtures) and carefully controlled, well-characterized surfaces of different types (metals, ceramics) and conditions (smooth, rough). The measurements reported here are for 304 stainless steel plates with a standard machined surface coupled with argon, helium, and nitrogen. The resulting heat-flux and gas-density-profile data are analyzed using analytic and computational models to show that a simple Maxwell gas-surface interaction model is adequate to represent all of the observations. Based on this analysis, thermal accommodation coefficients for 304 stainless steel coupled with argon, nitrogen, and helium are determined to be 0.88, 0.80, and 0.38, respectively, with an estimated uncertainty of ±0.02.
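A minimal sketch of the thermal accommodation coefficient underlying the Maxwell gas-surface interaction model mentioned above, assuming the standard definition in terms of incident, re-emitted, and wall temperatures; the numerical values are placeholders, not measurements from this study.

```python
def thermal_accommodation(T_incident, T_reemitted, T_wall):
    """Maxwell-model thermal accommodation coefficient:
    1.0 means full accommodation to the wall temperature,
    0.0 means purely specular reflection with no energy exchange."""
    return (T_reemitted - T_incident) / (T_wall - T_incident)

# placeholder temperatures (K) for illustration only
print(thermal_accommodation(T_incident=300.0, T_reemitted=330.0, T_wall=340.0))  # 0.75
```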
Time domain reflectometry (TDR) operates by propagating a radar-frequency electromagnetic pulse down a transmission line while monitoring the reflected signal. As the electromagnetic pulse propagates along the transmission line, its impedance is governed by the dielectric properties of the media along the line (e.g., air, water, sediment), and it is subject to reflection at dielectric discontinuities (e.g., the air-water or water-sediment interface) and attenuation by electrically conductive materials (e.g., salts, clays). Taken together, these characteristics provide a basis for integrated stream monitoring; specifically, concurrent measurement of stream stage, channel profile, and aqueous conductivity. Here, we make a novel application of TDR within the context of stream monitoring. Efforts toward this goal followed three critical phases. First, a means of extracting the desired stream parameters from measured TDR traces was required. Analysis was complicated by the fact that interface location and aqueous conductivity vary concurrently and multiple interfaces may be present at any time. For this reason a physically based multisection model employing the S11 scatter function and Cole-Cole parameters for dielectric dispersion and loss was developed to analyze the acquired TDR traces. Second, we explored the capability of this multisection modeling approach for interpreting TDR data acquired from complex environments, such as those encountered in stream monitoring. A series of laboratory tank experiments were performed in which the depth of water, depth of sediment, and conductivity were varied systematically. Comparisons between modeled and independently measured data indicate that TDR measurements can be made with an accuracy of ±3.4×10⁻³ m for sensing the location of an air/water or water/sediment interface and ±7.4% of actual for the aqueous conductivity. Third, monitoring stations were sited on the Rio Grande and Paria rivers to evaluate performance of the TDR system under normal field conditions. At the Rio Grande site (near Central Bridge in Albuquerque, New Mexico) continuous monitoring of stream stage and aqueous conductivity was performed for 6 months. Additionally, channel profile measurements were acquired at 7 locations across the river. At the Paria site (near Lee's Ferry, Arizona) stream stage and aqueous conductivity data were collected over a 4-month period. Comparisons drawn between our TDR measurements and USGS gage data indicate that the stream stage is accurate within ±0.88 cm, conductivity is accurate within ±11% of actual, and channel profile measurements agree within ±1.2 cm.
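A hedged sketch of the Cole-Cole dielectric relaxation term referenced above, with a DC-conductivity loss term added; the parameter values are generic, water-like placeholders, not the fitted values from this work.

```python
import numpy as np

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def cole_cole_permittivity(freq_hz, eps_inf, eps_s, tau, alpha, sigma_dc):
    """Complex relative permittivity: Cole-Cole relaxation plus DC-conductivity loss."""
    w = 2.0 * np.pi * freq_hz
    relaxation = (eps_s - eps_inf) / (1.0 + (1j * w * tau) ** (1.0 - alpha))
    return eps_inf + relaxation - 1j * sigma_dc / (w * EPS0)

# placeholder, water-like parameters evaluated at 500 MHz
print(cole_cole_permittivity(5e8, eps_inf=4.9, eps_s=78.4, tau=8.3e-12,
                             alpha=0.02, sigma_dc=0.05))
```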
Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods for understanding and minimizing human error are highly subjective, inconsistent across numerous dimensions, and difficult to demonstrate to be thorough. The proposed alternative method begins by leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations; it indicated systemic issues associated with operating procedures, which undergo notably less development rigor than other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.
Deep X-ray lithography on PMMA resist is used in the LIGA process. The resist is exposed to synchrotron X-rays through a patterned mask and then is developed in a liquid developer to make high-aspect-ratio microstructures. The limitations in dimensional accuracy of LIGA-generated microstructures originate from many sources, including synchrotron and X-ray physics, thermal and mechanical properties of the mask and resist, and the kinetics of the developer. This work addresses the thermal analysis and temperature rise of the mask-resist assembly during exposure in air at the Advanced Light Source (ALS) synchrotron. The concern is that dimensional errors generated at the mask and the resist due to thermal expansion will lower the accuracy of the lithography. We have developed a three-dimensional finite-element model of the mask and resist assembly that includes a mask with absorber, a resist with substrate, three metal holders, and a water-cooling block. We employed the LIGA exposure-development software LEX-D to calculate volumetric heat sources generated in the assembly by X-ray absorption, and the commercial software ABAQUS to calculate heat transfer, including thermal conduction inside the assembly, natural and forced convection, and thermal radiation at the assembly's outer and/or inner surfaces. The calculated maximum assembly temperatures have been compared with temperature measurements conducted at the ALS. In some of these experiments, additional cooling of the assembly was produced by forced nitrogen flow ('nitrogen jets') directed at the mask surface. The temperature rise in the silicon mask and the mask holder comes directly from X-ray absorption, but the nitrogen jets carry away a significant portion of the heat energy from the mask surface, while natural convection carries away a negligibly small amount of energy from the holder. The temperature rise in the PMMA resist is mainly from heat conducted from the silicon substrate backward to the resist and from the inner cavity air forward to the resist; the X-ray absorption is only secondary. Therefore, reducing the heat flow conducted from both the substrate and the cavity air to the resist is essential. An improved water-cooling block is expected to carry away most of the heat energy along the main heat-conduction path, leaving the resist at a favorable working temperature.
Acts of terrorism could have a range of broad impacts on an economy, including changes in consumer (or demand) confidence and the ability of productive sectors to respond to changes. As a first step toward a model of terrorism-based impacts, we develop here a model of production and employment that characterizes dynamics in ways useful toward understanding how terrorism-based shocks could propagate through the economy; subsequent models will introduce the role of savings and investment into the economy. We use Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate for validation purposes that a single-firm economy converges to the known monopoly equilibrium price, output, and employment levels, while multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment. However, we find that competition also leads to churn by consumers seeking lower prices, making it difficult for firms to optimize with respect to wages, prices, and employment levels. Thus, competitive firms generate market "noise" in the steady state as they search for prices and employment levels that will maximize profits. In the context of this model, not only could terrorism depress overall consumer confidence and economic activity but terrorist acts could also cause normal short-run dynamics to be misinterpreted by consumers as a faltering economy.
The overall purpose of this LDRD is multifold. First, we are interested in preparing new homogeneous catalysts that can be used in the oligomerization of ethylene and in better understanding commercially important systems. Second, we are interested in attempting to support these new homogeneous catalysts in the pores of nano- or mesoporous materials in order to force new and unusual distributions of α-olefins to be formed during the oligomerization. Thus the overall purpose is to prepare new catalytic species and to control the active site architecture in order to yield certain desired products during a catalytic reaction, much as nature does with enzymes. To rationally synthesize catalysts it is imperative to understand the function of the various components of the catalyst. In heterogeneous systems, it is of utmost importance to know how a support interacts with the active site of the catalyst. In fact, in the catalysis world this lack of fundamental understanding of the relationship between active site and support is the single largest reason catalysis is considered an 'empirical' or 'black box' science rather than a well-understood one. In this work we will prepare novel ethylene oligomerization catalysts, which are normally P-O chelated homogeneous complexes, with new ligands that replace P with a stable carbene. We will also examine a commercial catalyst system and investigate its active site via X-ray crystallography. We will also attempt to support these materials inside the pores of nano- and mesoporous materials. Essentially, we will be tailoring the size and scale of the catalyst active site and its surrounding environment to match the size of the molecular product(s) we wish to make. The overall purpose of the study is to prepare new homogeneous catalysts and, if we succeed in supporting them, to examine the effects that steric constraints and pore structures can have on growing oligomer chains.
As part of the U.S. Department of Energy (DOE) Office of Industrial Technologies (OIT) Industries of the Future (IOF) Forest Products research program, the mechanisms of particle deposition and properties of deposits that form in the convection passes of recovery boilers were investigated. Research from experimental facilities at Sandia National Laboratories, the Institute of Paper Science and Technology (IPST), and the University of Toronto (U of T) was coordinated into a single effort to define the controlling mechanisms and rates of deposition. Deposition rates were recorded on a volumetric and mass basis in a Sandia facility for particle sizes in the range of 0.1 to 150 µm. Deposit thickness, mass, spectral emissivity, thermal conductivity, surface temperature, and apparent density were monitored simultaneously and in situ on instrumented probes that allow determination of heat flux and probe surface temperature. Particle composition and mass deposition rates were also recorded in a U of T facility for particle sizes in the range of 100 to 600 µm. These measurements allowed determination of the liquid content and sticking efficiency of carryover particles that inertially impact on a deposition probe. In addition, information on particulates, stable gas species, gas temperature and velocity were obtained from field tests in an operating recovery boiler. The results were used to develop algorithms appropriate for use in computer codes that simulate recovery boilers. Representative calculations were performed using B&W's comprehensive recovery boiler model to demonstrate the use of the algorithms in such computer codes. Comparisons between observations in commercial systems and model predictions were made to identify algorithm strengths and weaknesses.
Property-based testing is a testing technique that evaluates executions of a program. The method checks that specifications, called properties, hold throughout the execution of the program. TASpec is a language used to specify these properties. This paper compares some attributes of the language with the specification patterns used for model-checking languages, and then presents some descriptions of properties that can be used to detect common security flaws in programs. This report describes the results of a one-year research project at the University of California, Davis, which was funded by a University Collaboration LDRD entitled "Property-based Testing for Cyber Security Assurance".
The Matrixed Business Support Comparison Study reviewed the current matrixed Chief Financial Officer (CFO) division staff models at Sandia National Laboratories. There were two primary drivers of this analysis: (1) the increasing number of financial staff matrixed to mission customers and (2) the desire to further understand the matrix process and the opportunities and challenges it creates.
At Sandia National Laboratories, miniaturization dominates future hardware designs, and technologies that address the manufacture of micro-scale to nano-scale features are in demand. Currently, Sandia is developing technologies such as photolithography/etching (e.g. silicon MEMS), LIGA, micro-electro-discharge machining (micro-EDM), and focused ion beam (FIB) machining to fulfill some of the component design requirements. Some processes are more encompassing than others, but each process has its niche, since no single technology can meet all performance requirements. For example, micro-EDM creates highly accurate micro-scale features but the choice of materials is limited to conductive materials. With silicon-based MEMS technology, highly accurate nano-scale integrated devices are fabricated but the mechanical performance may not meet the requirements. Femtosecond laser processing has the potential to fulfill a broad range of design demands, both in terms of feature resolution and material choices, thereby improving fabrication of micro-components. One of the unique features of femtosecond lasers is the ability to ablate nearly all materials with little heat transfer to the surrounding material, and therefore little melting or damage, resulting in highly accurate micro-scale features. Another unique aspect of femtosecond radiation is the ability to create localized structural changes through nonlinear absorption processes. By scanning the focal point within a transparent material, we can create three-dimensional waveguides for biological sensors and optical components. In this report, we utilized the special characteristics of femtosecond laser processing for microfabrication. Special emphasis was placed on the laser-material interactions to gain a science-based understanding of the process and to determine the process parameter space for laser processing of metals and glasses. Two areas were investigated: laser ablation of ferrous alloys, and direct-write optical waveguides and integrated optics in bulk glass. The effects of laser and environmental parameters on such aspects as removal rate, feature size, feature definition, and ablation angle during the ablation process of metals were studied. In addition, the manufacturing requirements for component fabrication, including precision and reproducibility, were investigated. The effect of laser processing conditions on the optical properties of direct-written waveguides and an unusual laser-induced birefringence in an optically isotropic glass are reported. Several integrated optical devices, including a Y coupler, directional coupler, and Mach-Zehnder interferometer, were made to demonstrate the simplicity and flexibility of this technique in comparison to conventional waveguide fabrication processes.
We have studied the feasibility of using the 3D fully electromagnetic implicit hybrid particle code LSP (Large Scale Plasma) to study laser plasma interactions with dense, compressed plasmas like those created with Z, and which might be created with the planned ZR. We have determined that with the proper additional physics and numerical algorithms developed during the LDRD period, LSP was transformed into a unique platform for studying such interactions. Its uniqueness stems from its ability to consider realistic compressed densities and low initial target temperatures (if required), an ability that conventional PIC codes do not possess. Through several test cases, validations, and applications to next generation machines described in this report, we have established the suitability of the code to look at fast ignition issues for ZR, as well as other high-density laser plasma interaction problems relevant to the HEDP program at Sandia (e.g. backlighting).
The combination of phase diversity and adaptive optics offers great flexibility. Phase diverse images can be used to diagnose aberrations and then provide feedback control to the optics to correct the aberrations. Alternatively, phase diversity can be used to partially compensate for aberrations during post-detection image processing. The adaptive optic can produce simple defocus or more complex types of phase diversity. This report presents an analysis, based on numerical simulations, of the efficiency of different modes of phase diversity with respect to compensating for specific aberrations during post-processing. It also comments on the efficiency of post-processing versus direct aberration correction. The construction of a bench top optical system that uses a membrane mirror as an active optic is described. The results of characterization tests performed on the bench top optical system are presented. The work described in this report was conducted to explore the use of adaptive optics and phase diversity imaging for responsive space applications.
Sandia National Laboratories was tasked with developing the Defense Nuclear Material Stewardship Integrated Inventory Information Management System (IIIMS) with the sponsorship of NA-125.3 and the concurrence of DOE/NNSA field and area offices. The purpose of IIIMS was to modernize nuclear materials management information systems at the enterprise level. Projects over the course of several years attempted to spearhead this modernization. The scope of IIIMS was broken into broad enterprise-oriented materials management and materials forecasting. The IIIMS prototype was developed to allow multiple participating user groups to explore nuclear material requirements and needs in detail. The purpose of material forecasting was to determine nuclear material availability over a 10 to 15 year period in light of the dynamic nature of nuclear materials management. Formal DOE Directives (requirements) were needed to direct IIIMS efforts but were never issued, and the project has been halted. When the project is restarted, duplicating or re-engineering the activities from 1999 to 2003 will be unnecessary; in fact, future initiatives can build on the previous work. IIIMS requirements should be structured to provide high confidence that discrepancies are detected and that classified information is not divulged. Enterprise-wide materials management systems maintained by the military can serve as overall models on which to base IIIMS implementation concepts.
This report documents the author's efforts in the deterministic modeling of copper-sulfidation corrosion on non-planar substrates such as diodes and electrical connectors. A new framework based on Goma was developed for multi-dimensional modeling of atmospheric copper-sulfidation corrosion on non-planar substrates. In this framework, the moving sulfidation front is explicitly tracked by treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and repeatedly performing re-meshing using CUBIT and re-mapping using MAPVAR. Three one-dimensional studies were performed for verifying the framework in asymptotic regimes. Limited model validation was also carried out by comparing computed copper-sulfide thickness with experimental data. The framework was first demonstrated in modeling one-dimensional copper sulfidation with charge separation. It was found that both the thickness of the space-charge layers and the electrical potential at the sulfidation surface decrease rapidly as the Cu₂S layer thickens initially but eventually reach equilibrium values as the Cu₂S layer becomes sufficiently thick; it was also found that electroneutrality is a reasonable approximation and that the electro-migration flux may be estimated by using the equilibrium potential difference between the sulfidation and annihilation surfaces when the Cu₂S layer is sufficiently thick. The framework was then employed to model copper sulfidation in the solid-state-diffusion controlled regime (i.e. stage II sulfidation) on a prototypical diode until a continuous Cu₂S film was formed on the diode surface. The framework was also applied to model copper sulfidation on an intermittent electrical contact between a gold-plated copper pin and a gold-plated copper pad; the presence of Cu₂S was found to raise the effective electrical resistance drastically. Lastly, future research needs in modeling atmospheric copper sulfidation are discussed.
Wind turbine system reliability is a critical factor in the success of a wind energy project. Poor reliability directly affects both the project's revenue stream through increased operation and maintenance (O&M) costs and reduced availability to generate power due to turbine downtime. Indirectly, the acceptance of wind-generated power by the financial and developer communities as a viable enterprise is influenced by the risk associated with the capital equipment reliability; increased risk, or at least the perception of increased risk, is generally accompanied by increased financing fees or interest rates. Cost of energy (COE) is a key project evaluation metric, both in commercial applications and in the U.S. federal wind energy program. To reflect this commercial reality, the wind energy research community has adopted COE as a decision-making and technology evaluation metric. The COE metric accounts for the effects of reliability through levelized replacement cost and unscheduled maintenance cost parameters. However, unlike the other cost contributors, such as initial capital investment and scheduled maintenance and operating expenses, costs associated with component failures are necessarily speculative. They are based on assumptions about the reliability of components that in many cases have not been operated for a complete life cycle. Due to the logistical and practical difficulty of replacing major components in a wind turbine, unanticipated failures (especially serial failures) can have a large impact on the economics of a project. The uncertainty associated with long-term component reliability has direct bearing on the confidence level associated with COE projections. In addition, wind turbine technology is evolving. New materials and designs are being incorporated in contemporary wind turbines with the ultimate goal of reducing weight, controlling loads, and improving energy capture. While the goal of these innovations is reduction in the COE, there is a potential impact on reliability whenever new technologies are introduced. While some of these innovations may ultimately improve reliability, in the short term, the technology risks and the perception of risk will increase. The COE metric used by researchers to evaluate technologies does not address this issue. This paper outlines the issues relevant to wind turbine reliability for wind turbine power generation projects. The first sections describe the current state of the industry, identify the cost elements associated with wind farm O&M and availability and discuss the causes of uncertainty in estimating wind turbine component reliability. The latter sections discuss the means for reducing O&M costs and propose O&M related research and development efforts that could be pursued by the wind energy research community to reduce COE.
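A hedged sketch of the levelized cost-of-energy bookkeeping described above, showing how levelized replacement cost and unscheduled maintenance enter the metric alongside capital and scheduled O&M; the fixed-charge-rate form and all numbers below are placeholder assumptions, not program values.

```python
def cost_of_energy(fcr, icc, aep_kwh, om, lrc, unscheduled):
    """Simplified COE ($/kWh): annualized capital plus annual scheduled O&M,
    levelized replacement, and unscheduled maintenance costs, per kWh produced."""
    return (fcr * icc + om + lrc + unscheduled) / aep_kwh

# placeholder values for a single turbine and one year of production
print(cost_of_energy(fcr=0.095, icc=1.8e6, aep_kwh=4.5e6,
                     om=45_000.0, lrc=15_000.0, unscheduled=20_000.0))
```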
This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
The Hydrogen Futures Simulation Model (H₂Sim) is a high-level, internally consistent, strategic tool for exploring the options of a hydrogen economy. Once the user understands how to use the basic functions, H₂Sim can be used to examine a wide variety of scenarios, such as testing different options for the hydrogen pathway, altering key assumptions regarding hydrogen production, storage, transportation, and end use costs, and determining the effectiveness of various options on carbon mitigation. This User's Guide explains to the first-time user how to run the model.
In preparation for developing a Z-pinch IFE power plant, the interaction of ferritic steel with the coolant, FLiBe, must be explored. Sandia National Laboratories' Fusion Technology Department was asked to drop molten ferritic steel and FLiBe in a vacuum system and determine the gas byproducts and the ability to recycle the steel. We tried various methods of resistive heating of ferritic steel using available power supplies and easily obtained heaters. Although we could melt the steel, we could not cause a drop to fall. This report describes the various experiments that were performed and includes some suggestions and materials needed to be successful. Although the steel was easily melted, it was not possible to drip the molten steel into a FLiBe pool. Levitation melting of the drop is likely to be more successful.
To establish strength criteria of Big Hill salt, a series of quasi-static triaxial compression tests have been completed. This report summarizes the test methods, set-up, relevant observations, and results. The triaxial compression tests established dilatant damage criteria for Big Hill salt in terms of stress invariants (I₁ and J₂) and principal stresses (σ_a,d and σ₃), respectively: √J₂ (psi) = 1746 - 1320.5 exp(-0.00034 I₁ (psi)); σ_a,d (psi) = 2248 + 1.25 σ₃ (psi). For the confining pressure of 1,000 psi, the dilatant damage strength of Big Hill salt is identical to the typical salt strength (√J₂ = 0.27 I₁). However, for higher confining pressure, the typical strength criterion overestimates the damage strength of Big Hill salt.
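A minimal sketch evaluating the two dilatant-damage criteria quoted above for a triaxial compression state at the 1,000 psi confining pressure mentioned in the abstract; the use of the standard axisymmetric relations I₁ = σ₁ + 2σ₃ and √J₂ = (σ₁ - σ₃)/√3 to build the example stress state is an illustrative assumption, not part of the report.

```python
import math

def sqrtJ2_damage(I1_psi):
    """Big Hill salt dilatant damage criterion in stress-invariant form (psi)."""
    return 1746.0 - 1320.5 * math.exp(-0.00034 * I1_psi)

def axial_damage(sigma3_psi):
    """Big Hill salt dilatant damage criterion as axial stress at a given confining pressure (psi)."""
    return 2248.0 + 1.25 * sigma3_psi

# triaxial compression state at the onset of damage, confining pressure 1,000 psi
sigma3 = 1000.0
sigma1 = axial_damage(sigma3)                # axial stress at dilation
I1 = sigma1 + 2.0 * sigma3                   # first stress invariant
sqrtJ2 = (sigma1 - sigma3) / math.sqrt(3.0)  # sqrt(J2) for axisymmetric compression

print(f"sqrt(J2) at dilation      : {sqrtJ2:7.1f} psi")
print(f"invariant-form criterion  : {sqrtJ2_damage(I1):7.1f} psi")
print(f"typical salt, 0.27*I1     : {0.27 * I1:7.1f} psi")
```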
Poly(ethylene oxide) (PEO) is the quintessential biocompatible polymer. Due to its ability to form hydrogen bonds, it is soluble in water, and yet is uncharged and relatively inert. It is being investigated for use in a wide range of biomedical and biotechnical applications, including the prevention of protein adhesion (biofouling), controlled drug delivery, and tissue scaffolds. PEO has also been proposed for use in novel polymer hydrogel nanocomposites with superior mechanical properties. However, the phase behavior of PEO in water is highly anomalous and is not addressed by current theories of polymer solutions. The effective interactions between PEO and water are very concentration dependent, unlike other polymer/solvent systems, due to water-water and water-PEO hydrogen bonds. An understanding of this anomalous behavior requires a careful examination of PEO liquids and solutions on the molecular level. We performed massively parallel molecular dynamics simulations and self-consistent Polymer Reference Interaction Site Model (PRISM) calculations on PEO liquids. We also initiated MD studies on PEO/water solutions with and without an applied electric field. This work is summarized in three parts devoted to: (1) A comparison of MD simulations, theory and experiment on PEO liquids; (2) The implementation of water potentials into the LAMMPS MD code; and (3) A theoretical analysis of the effect of an applied electric field on the phase diagram of polymer solutions.
A vegetation study was conducted in Technical Area 3 at Sandia National Laboratories, Albuquerque, New Mexico in 2003 to assist in the design and optimization of vegetative soil covers for hazardous, radioactive, and mixed waste landfills at Sandia National Laboratories/New Mexico and Kirtland Air Force Base. The objective of the study was to obtain site-specific, vegetative input parameters for the one-dimensional code UNSAT-H and to identify suitable, diverse native plant species for use on vegetative soil covers that will persist indefinitely as a climax ecological community with little or no maintenance. The identification and selection of appropriate native plant species is critical to the proper design and long-term performance of vegetative soil covers. Major emphasis was placed on the acquisition of representative, site-specific vegetation data. Vegetative input parameters measured in the field during this study include root depth, root length density, and percent bare area. Site-specific leaf area index (LAI) was not obtained in the area because there was no suitable platform from which to measure leaf area during the 2003 growing season, due to the severe drought that has persisted in New Mexico since 1999. Regional LAI data were obtained from two unique desert biomes in New Mexico: the Sevilleta National Wildlife Refuge and the Jornada Research Station.
A decomposition chemistry and heat transfer model to predict the response of removable epoxy foam (REF) exposed to fire-like heat fluxes is described. The epoxy foam was created using a perfluorohexane blowing agent with a surfactant. The model includes desorption of the blowing agent and surfactant, thermal degradation of the epoxy polymer, polymer fragment transport, and vapor-liquid equilibrium. An effective thermal conductivity model describes changes in thermal conductivity with reaction extent. Pressurization is modeled assuming: (1) no strain in the condensed-phase, (2) no resistance to gas-phase transport, (3) spatially uniform stress fields, and (4) no mass loss from the system due to venting. The model has been used to predict mass loss, pressure rise, and decomposition front locations for various small-scale and large-scale experiments performed by others. The framework of the model is suitable for polymeric foams with absorbed gases.
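A hedged sketch of the closed-system pressurization bookkeeping implied by assumptions (1)-(4) above: with no venting and no condensed-phase strain, gas generated by decomposition and desorption raises pressure through an equation of state. Ideal-gas behavior and all numerical values below are placeholder assumptions, not the report's actual property models.

```python
R = 8.314  # J/(mol K), universal gas constant

def closed_system_pressure(n_gas_mol, T_kelvin, free_volume_m3):
    """Ideal-gas pressure of decomposition/desorption gases confined in a fixed free volume."""
    return n_gas_mol * R * T_kelvin / free_volume_m3

# placeholder: 0.05 mol of blowing agent plus degradation products at 600 K in 100 cm^3
print(closed_system_pressure(0.05, 600.0, 100e-6) / 1e6, "MPa")
```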
In the search for "good" parallel programming environments for Sandia's current and future parallel architectures, we revisit a long-standing open question: can the PRAM parallel algorithms designed by theoretical computer scientists over the last two decades be implemented efficiently? This open question has co-existed with ongoing efforts in the HPC community to develop practical parallel programming models that can simultaneously provide ease of use, expressiveness, performance, and scalability. Unfortunately, no single model has met all these competing requirements. Here we propose a parallel programming environment, PRAM C, to bridge the gap between theory and practice. This is an attempt to provide an affirmative answer to the PRAM question and to satisfy these competing practical requirements. The environment consists of a new thin runtime layer and an ANSI C extension. The C extension has two control constructs and one additional data type concept, "shared". This C extension should enable easy translation from PRAM algorithms to real parallel programs, much like the translation from sequential algorithms to C programs. The thin runtime layer bundles fine-grained communication requests into coarse-grained communication to be served by message passing. Although the PRAM represents SIMD-style fine-grained parallelism, a stand-alone PRAM C environment can support both fine-grained and coarse-grained parallel programming in either a MIMD or SPMD style, interoperate with existing MPI libraries, and use existing hardware. The PRAM C model can also be integrated easily with existing models. Unlike related efforts proposing innovative hardware with the goal of realizing the PRAM, ours can be a pure software solution whose purpose is to provide a practical programming environment for existing parallel machines; it also has the potential to perform well on future parallel architectures.
A laser hazard analysis and safety assessment was performed for the LASIRIS™ Model MAG-501L-670M-1000-45°-K diode laser associated with the High Resolution Pulse Scanner, based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers, and ANSI Standard Z136.6-2000, American National Standard for the Safe Use of Lasers Outdoors. The laser was evaluated for both indoor and outdoor use.
A particular engineering aspect of distributed sensor networks that has not received adequate attention is the system level hardware architecture of the individual nodes of the network. A novel hardware architecture based on an idea of task specific modular computing is proposed to provide for both the high flexibility and low power consumption required for distributed sensing solutions. The power consumption of the architecture is mathematically analyzed against a traditional approach, and guidelines are developed for application scenarios that would benefit from using this new design. Furthermore a method of decentralized control for the modular system is developed and analyzed. Finally, a few policies for power minimization in the decentralized system are proposed and analyzed.
The effect of polymer-polymer and solvent-polymer interactions on the interdiffusion of a solvent into an entangled polymer matrix was studied. The state of the polymer was changed from melt to glassy by varying the polymer-polymer interaction. From simulations of the equilibrated solvent-polymer solution, it was found that the glassy system, with Berthelot's rule applied for the cross term, is immiscible except in the dilute limit. Increasing the solvent-polymer interaction enhanced the solubility of the system without changing the nature of the diffusion process.
The high-mobility two-dimensional electron system in the second Landau level (LL) is discussed. In the second LL, the larger extent of the wave function as compared to the lowest LL, together with its additional zero, allows a much broader range of electron correlations to be favorable. An example of the electron correlations encountered in the second LL is the even-denominator ν = 2 + 1/2 fractional quantum Hall effect (FQHE) state. As the filling factor is varied, quantum liquids of different origins are observed to compete with several insulating phases, leading to an irregular pattern in the transport parameters.
The Design through Analysis Realization Team (DART) will provide analysts with a complete toolset that reduces the time to create, generate, analyze, and manage the data generated in a computational analysis. The toolset will be both easy to learn and easy to use. The DART Roadmap Vision provides for progressive improvements that will reduce the Design through Analysis (DTA) cycle time by 90 percent over a three-year period while improving both the quality and accountability of the analyses.
We present a two-step approach to modeling the transmembrane-spanning helical bundles of integral membrane proteins using only sparse distance constraints, such as those derived from chemical cross-linking, dipolar EPR, and FRET experiments. In Step 1, using an algorithm we developed, the conformational space of membrane protein folds matching a set of distance constraints is explored to provide initial structures for local conformational searches. In Step 2, these structures are refined against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. We begin by describing the statistical analysis of the solved membrane protein structures from which the theoretical portion of the penalty function was derived. We then describe the penalty function and, using a set of six test cases, demonstrate that it is capable of distinguishing helical bundles that are close to the native bundle from those that are far from it. Finally, using a set of only 27 distance constraints extracted from the literature, we show that our method successfully recovers the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the inter-variable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts concerning dependence in probabilistic models.
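One of the techniques the report reviews, simulating correlated variates for a given correlation measure and dependence model, can be illustrated with a normal (Gaussian) copula. The sketch below is a minimal Python illustration, not code from the report; the two marginal distributions and the copula correlation parameter are assumed for the example.

```python
import numpy as np
from scipy import stats

def correlated_variates(n, rho, marginals, seed=0):
    """Draw n samples whose dependence follows a bivariate normal copula with
    correlation rho, then map each coordinate through the inverse CDF of the
    requested marginal distribution."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    u = stats.norm.cdf(z)  # correlated, uniformly distributed marginals
    return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

# Example: a lognormal and a triangular marginal coupled with rho = 0.7
samples = correlated_variates(
    10000, 0.7, [stats.lognorm(s=0.5), stats.triang(c=0.5, loc=1, scale=4)])
print(stats.spearmanr(samples[:, 0], samples[:, 1]).correlation)
```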
A case study is reported to document the details of a validation process for assessing the accuracy of a mathematical model of experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process, which addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space, and the parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the verification of that code are presented. Experimental data from two activities are used to validate the mathematical models: the first experiment assesses the chemistry model alone, and the second assesses the coupled model of chemistry, conduction, and enclosure radiation. The model results for both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and the model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given; weaknesses in the process are discussed and lessons learned are summarized.
The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies for which the state occupancy probability is geometric. For a non-geometric process, we show how to use the effective amount of independent information to modify the decision process so that the remaining dependencies can be accounted for. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments; for example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness-of-fit (GOF) classifiers into the Markov SPRT and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
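For reference, the classical form of Wald's SPRT for independent observations, before the Markov-chain and effective-information modifications described above, is short enough to sketch. The thresholds use the standard approximations A ≈ (1 − β)/α and B ≈ β/(1 − α); the Gaussian class models in the example are assumptions, not the classifiers used in this work.

```python
import math
import numpy as np

def sprt(observations, logpdf_h1, logpdf_h0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for independent observations.
    Returns ('H1' or 'H0', samples used), or ('undecided', n) if data runs out."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this log-ratio
    lower = math.log(beta / (1 - alpha))   # accept H0 below this log-ratio
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += logpdf_h1(x) - logpdf_h0(x)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Example: decide between two unit-variance Gaussian means from streaming data
rng = np.random.default_rng(1)
data = rng.normal(loc=1.0, scale=1.0, size=200)    # truth is H1
h1 = lambda x: -0.5 * (x - 1.0) ** 2               # log N(1,1), constants cancel
h0 = lambda x: -0.5 * (x - 0.0) ** 2               # log N(0,1), constants cancel
print(sprt(data, h1, h0))
```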
Matching one set of 3D points to another is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity, and we demonstrate empirically that the approximation is very good when compared to actual Hausdorff distances.
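For comparison, the exact directed Hausdorff fraction can be computed with a k-d tree as sketched below. This straightforward version is shown only to make the quantity being approximated concrete; it is not the paper's linear-time, quadratic-space approximation.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_fraction(model_pts, scene_pts, tau):
    """Directed Hausdorff fraction: the fraction of model points whose nearest
    scene point lies within distance tau."""
    tree = cKDTree(scene_pts)
    d, _ = tree.query(model_pts, k=1)
    return np.mean(d <= tau)

# Example: a slightly perturbed copy of a point cloud matches itself well
rng = np.random.default_rng(0)
model = rng.random((500, 3))
scene = model + rng.normal(scale=0.01, size=model.shape)
print(hausdorff_fraction(model, scene, tau=0.05))   # close to 1.0
```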
A decomposition model has been developed to predict the response of removable syntactic foam (RSF) exposed to fire-like heat fluxes. RSF consists of glass micro-balloons (GMB) in a cured epoxy polymer matrix. A chemistry model is presented based on the chemical structure of the epoxy polymer, mass transport of polymer fragments to the bulk gas, and vapor-liquid equilibrium. Thermophysical properties were estimated from measurements. A bubble nucleation, growth, and coalescence model was used to describe changes in properties with the extent of reaction. Decomposition of a strand of syntactic foam exposed to high temperatures was simulated.
A coupled Euler-Lagrange solution approach is used to model the response of a buried reinforced concrete structure subjected to a close-in detonation of a high explosive charge. The coupling algorithm is discussed along with a set of benchmark calculations involving detonations in clay and sand.
Genetic programming (GP) has proved to be a highly versatile and useful tool for identifying relationships in data for which a more precise theoretical construct is unavailable. In this project, we use a GP search to develop trading strategies for agent-based economic models. These strategies use stock prices and technical indicators, such as the moving average convergence/divergence (MACD) and various exponentially weighted moving averages, to generate buy and sell signals. We analyze the effect of complexity constraints on the strategies as well as the relative performance of various indicators. We also present innovations in the classical genetic programming algorithm that appear to improve convergence for this problem. Technical strategies developed by our GP algorithm can be used to control the behavior of agents in economic simulation packages, such as ASPEN-D, adding variety to the current market-fundamentals approach. The exploitation of arbitrage opportunities by technical analysts may help increase the efficiency of the simulated stock market, as it does in the real world. By improving the behavior of simulated stock markets, we can better estimate the effects of shocks to the economy due to terrorism or natural disasters.
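As a concrete example of the kind of technical indicator the GP search composes, the sketch below computes classical MACD crossover buy/sell signals on a synthetic price series. The span parameters (12/26/9) are the conventional defaults, not values taken from this project.

```python
import numpy as np
import pandas as pd

def macd_signals(prices, fast=12, slow=26, signal=9):
    """Classical MACD rule: buy when the MACD line crosses above its signal
    line, sell when it crosses below."""
    p = pd.Series(prices)
    macd = p.ewm(span=fast, adjust=False).mean() - p.ewm(span=slow, adjust=False).mean()
    sig = macd.ewm(span=signal, adjust=False).mean()
    above = macd > sig
    buy = above & ~above.shift(1, fill_value=False)    # upward crossover
    sell = ~above & above.shift(1, fill_value=False)   # downward crossover
    return buy, sell

# Example on a synthetic random-walk price series
rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))
buy, sell = macd_signals(prices)
print(int(buy.sum()), "buy signals,", int(sell.sum()), "sell signals")
```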
The primary goals of the present study are to (1) determine how and why MEMS-scale friction differs from friction on the macro-scale and (2) begin to develop a capability to perform finite element simulations of MEMS materials and components that accurately predict response in the presence of adhesion and friction. Regarding the first goal, a newly developed nanotractor actuator was used to measure friction between molecular-monolayer-coated polysilicon surfaces. Amontons' law does indeed apply over a wide range of forces. However, at low loads, which are of relevance to MEMS, there is an important adhesive contribution to the normal load that cannot be neglected. More importantly, we found that at short sliding distances the concept of a coefficient of friction is not relevant; rather, one must invoke the notion of 'pre-sliding tangential deflections' (PSTD). Results of a simple 2-D model suggest that PSTD is a cascade of small-scale slips with a roughly constant number of contacts equilibrating the applied normal load. Regarding the second goal, an Adhesion Model and a Junction Model have been implemented in PRESTO, Sandia's transient dynamics finite element code, to enable asperity-level simulations. The Junction Model includes a tangential shear traction that opposes the relative tangential motion of contacting surfaces. An atomic force microscope (AFM)-based method was used to measure nano-scale, single-asperity friction forces as a function of normal force; these data are used to determine Junction Model parameters. An illustrative simulation demonstrates the use of the Junction Model in conjunction with a mesh generated directly from an AFM image to predict the frictional response of a sliding asperity. Also with regard to the second goal, grid-level, homogenized models were studied. One would like to perform a finite element analysis of a MEMS component assuming nominally flat surfaces and to include the effect of roughness by using homogenized contact and friction models. AFM measurements were made to determine statistical information on polysilicon surfaces with different roughnesses, and these data were used as input to a homogenized, multi-asperity contact model (the classical Greenwood and Williamson model). Extensions of the Greenwood and Williamson model are also discussed: one incorporates the effect of adhesion while the other modifies the theory so that it applies to the case of relatively few contacting asperities.
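The classical Greenwood and Williamson multi-asperity contact model mentioned above has a compact form in terms of integrals over the summit-height distribution; the sketch below evaluates it for illustrative, assumed surface parameters rather than the AFM-derived statistics used in the study.

```python
import numpy as np
from scipy import integrate, stats

def greenwood_williamson(d, eta, R, sigma, E_star):
    """Classical Greenwood-Williamson rough-surface contact model.
    d      : separation between the mean summit plane and the flat [m]
    eta    : areal density of asperity summits [1/m^2]
    R      : mean asperity tip radius [m]
    sigma  : standard deviation of summit heights [m]
    E_star : composite elastic modulus [Pa]
    Returns (contacting summits per area, real contact area fraction,
             nominal contact pressure)."""
    phi = stats.norm(scale=sigma).pdf
    n = eta * integrate.quad(lambda z: phi(z), d, np.inf)[0]
    area = np.pi * eta * R * integrate.quad(lambda z: (z - d) * phi(z), d, np.inf)[0]
    p = (4.0 / 3.0) * eta * E_star * np.sqrt(R) * \
        integrate.quad(lambda z: (z - d) ** 1.5 * phi(z), d, np.inf)[0]
    return n, area, p

# Illustrative (hypothetical) polysilicon-like parameters
print(greenwood_williamson(d=5e-9, eta=1e13, R=1e-7, sigma=1e-8, E_star=80e9))
```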
This report discusses a set of verification test cases for the frequency-domain, boundary-element, electromagnetics code Eiger based on the analytical solution of plane wave scattering from a sphere. Three cases will be considered: when the sphere is made of perfect electric conductor, when the sphere is made of lossless dielectric and when the sphere is made of lossy dielectric. We outline the procedures that must be followed in order to carefully compare the numerical solution to the analytical solution. We define an error criterion and demonstrate convergence behavior for both the analytical and numerical cases. These problems test the code's ability to calculate the surface current density and secondary quantities, such as near fields and far fields.
In this paper we present an analysis of a new configuration for achieving spin-stabilized magnetic levitation. In the classical configuration, the rotor spins about a vertical axis, and the spin stabilizes the lateral instability of the top in the magnetic field. In the new configuration, the rotor spins about a horizontal axis, and the spin stabilizes the axial instability of the top in the magnetic field.
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML is best suited to linear systems for which multigrid methods are known to work well (e.g., elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g., Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative packages [16]; however, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (such as elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
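ML itself is a C++ library; as a language-neutral illustration of the workflow it supports (building a smoothed-aggregation multigrid hierarchy and using it to precondition a Krylov solver), the sketch below uses the PyAMG package on a model Poisson problem. The calls shown are PyAMG's, not ML's.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Model elliptic problem: 2-D Poisson operator on a 500 x 500 grid
A = pyamg.gallery.poisson((500, 500), format='csr')
b = np.random.default_rng(0).random(A.shape[0])

# Build a smoothed-aggregation multigrid hierarchy and use it as a
# preconditioner for conjugate gradients (a Krylov method)
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')
x, info = cg(A, b, M=M, maxiter=100)

print(ml)                                         # summary of the multigrid hierarchy
print("residual norm:", np.linalg.norm(b - A @ x))
```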
As technical knowledge grows deeper, broader, and more interconnected, knowledge domains increasingly combine a number of sub-domains. More often than not, each of these sub-domains has its own community of specialists and forums for interaction. Hence, from a generalist's viewpoint, it is sometimes difficult to understand the relationships between the sub-domains within the larger domain; and, from a specialist's viewpoint, it may be difficult for those working in one sub-domain to keep abreast of knowledge gained in another. These difficulties can be especially important in the initial stages of creating new projects aimed at adding knowledge either at the domain or the sub-domain level. To circumvent these difficulties, one would ideally like to create a map of the knowledge domain: a map that would help clarify relationships between the various sub-domains and inform choices regarding investment in the production of knowledge at either the domain or the sub-domain level. In practice, creating such a map is non-trivial. First, relationships between knowledge sub-domains are complex and not likely to be easily simplified into a visualizable two- or few-dimensional map. Second, even if some of the relationships can be simplified, capturing them requires some degree of expert understanding of the knowledge domain, rendering impossible any fully automated method for creating the map. In this work, we accept these limitations and, within them, explore semi-automated methodologies for creating such a map. We chose as the knowledge domain for this case study 'displacement damage phenomena in Si junction devices'. This knowledge domain spans a particularly wide range of knowledge sub-domains, and hence is a particularly challenging one.
Microfluidic systems are becoming increasingly complicated as the number of applications grows. The use of microfluidic systems for chemical and biological agent detection, for example, requires that a given sample be subjected to many process steps, which requires microvalves to control the position and transport of the sample. Each microfluidic application has its own specific valve requirements and this has precipitated the wide variety of valve designs reported in the literature. Each of these valve designs has its strengths and weaknesses. The strength of the valve design proposed here is its simplicity, which makes it easy to fabricate, easy to actuate, and easy to integrate with a microfluidic system. It can be applied to either gas phase or liquid phase systems. This novel design uses a secondary fluid to stop the flow of the primary fluid in the system. The secondary fluid must be chosen based on the type of flow that it must stop. A dielectric fluid must be used for a liquid phase flow driven by electroosmosis, and a liquid with a large surface tension should be used to stop a gas phase flow driven by a weak pressure differential. Experiments were carried out investigating certain critical functions of the design. These experiments verified that the secondary fluid can be reversibly moved between its 'valve opened' and 'valve closed' positions, where the secondary fluid remained as one contiguous piece during this transport process. The experiments also verified that when Fluorinert is used as the secondary fluid, the valve can break an electric circuit. It was found necessary to apply a hydrophobic coating to the microchannels to stop the primary fluid, an aqueous electrolyte, from wicking past the Fluorinert and short-circuiting the valve. A simple model was used to develop valve designs that could be closed using an electrokinetic pump, and re-opened by simply turning the pump off and allowing capillary forces to push the secondary fluid back into its stowed position.
The goal of this study was first to establish the fitness for service of the carbon steel oil coolers presently located at the Bryan Mound and West Hackberry sites, and second to compare quantitatively the performance of two proposed corrosion mitigation strategies. To address these goals, a series of flow loops was constructed to simulate the conditions present within the oil coolers, allowing the performance of each corrosion mitigation strategy, as well as the baseline performance of the existing systems, to be assessed. Because prior experimentation had indicated that the corrosion and fouling were relatively uniform within the oil coolers, the hot and cold sides of the system were simulated, representing the extremes of temperature observed within a typical oil cooler. Upon completion of the experiment, the depth of localized attack observed on carbon steel was such that perforation of the tube walls would likely result within a 180-day drawdown procedure at West Hackberry. Furthermore, considering the average rate of wall recession (from linear polarization resistance (LPR) measurements) combined with the extensive localized attack (pitting) that occurred in both environments, the tubing wall thickness remaining after 180 days would be less than that required to contain the operating pressures of the oil coolers at both sites. Finally, the inhibitor package, while it did reduce the measured corrosion rate in the case of the West Hackberry solutions, did not provide a sufficient reduction in the observed attack to justify its use.
The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science contributed significantly to the vision of the resulting Seldon toolkit. The addition of an abstract agent category provided a means for capturing social concepts such as cliques and mosques in a manner that reflects their social conceptualization rather than treating them simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.
Natural gas is a clean fuel that will be the most important domestic energy resource for the first half of the 21st century. Ensuring a stable supply is essential for our national energy security. The research we have undertaken will maximize the extractable volume of gas while minimizing the environmental impact of surface disturbances associated with drilling and production. This report describes a methodology for comprehensive evaluation and modeling of the total gas system within a basin, focusing on problematic horizontal fluid-flow variability. This has been accomplished through extensive use of geophysical, core (rock sample), and outcrop data to interpret and predict directional flow and production trends. Side benefits include reduced environmental impact of drilling due to the reduced number of wells required for resource extraction. These results have been accomplished through a cooperative and integrated systems approach involving industry, government, academia, and a multi-organizational team within Sandia National Laboratories. Industry has provided essential in-kind support to this project in the form of extensive core data, production data, maps, seismic data, production analyses, engineering studies, plus equipment and staff for obtaining geophysical data. This approach provides innovative ideas and technologies to bring new resources to market and to reduce the overall environmental impact of drilling. More importantly, the products of this research are not location specific but can be extended to other areas of gas production throughout the Rocky Mountain region. Thus, this project is designed to solve problems associated with natural gas production at developing sites or at old sites under redevelopment.
A novel method employing machine-based learning to identify messages related to other messages is described and evaluated. This technique may enable an analyst to identify and correlate a small number of related messages from a large sample of individual messages. The classic machine learning techniques of decision trees and naive Bayes classification are seeded with a few (or no) messages of interest and 'learn' to identify other related messages. The performance of this approach and of these specific learning techniques is evaluated and generalized.
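A minimal sketch of the naive Bayes variant of this idea is shown below, using scikit-learn rather than the system described here; the seed messages, labels, and message pool are invented purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A handful of hand-labeled seed messages (1 = related to the topic of interest)
seed_messages = [
    "shipment delayed at the border crossing",
    "meet at the warehouse after midnight",
    "lunch menu for the office party",
    "quarterly budget spreadsheet attached",
]
seed_labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(seed_messages)      # bag-of-words features
clf = MultinomialNB().fit(X, seed_labels)

# Score a pool of unlabeled messages and surface the most related ones first
pool = [
    "the warehouse shipment arrives tonight",
    "please review the attached budget figures",
]
scores = clf.predict_proba(vectorizer.transform(pool))[:, 1]
for msg, s in sorted(zip(pool, scores), key=lambda t: -t[1]):
    print(f"{s:.2f}  {msg}")
```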
This report describes both a general methodology and specific examples of completely passive microwave tags. Surface acoustic wave (SAW) devices were used to make tags for both identification and sensing applications at different frequencies. SAW correlators were optimized for wireless identification, and SAW filters were developed to enable wireless remote sensing of physical properties. Identification tag applications and wireless remote measurement applications are discussed. Significant effort went into optimizing the SAW devices used for this work, and the lessons learned from that effort are reviewed.
Hydrogen has the potential to become an integral part of our energy transportation and heat and power sectors in the coming decades and offers a possible solution to many of the problems associated with a heavy reliance on oil and other fossil fuels. The Hydrogen Futures Simulation Model (H2Sim) was developed to provide a high-level, internally consistent, strategic tool for evaluating the economic and environmental trade-offs of alternative hydrogen production, storage, transport, and end-use options in the year 2020. Based on the model's default assumptions, estimated hydrogen production costs range from 0.68 $/kg for coal gasification to as high as 5.64 $/kg for centralized electrolysis using solar PV. Coal gasification remains the least-cost option if carbon capture and sequestration costs ($0.16/kg) are added. This result is fairly robust; for example, assumed coal prices would have to more than triple, or the assumed capital cost would have to increase by more than 2.5 times, for natural gas reformation to become the cheaper option. Alternatively, assumed natural gas prices would have to fall below $2/MBtu to compete with coal gasification. The electrolysis results are highly sensitive to electricity costs, but electrolysis only becomes cost competitive with the other options when electricity drops below 1 cent/kWh. Delivered 2020 hydrogen costs are likely to be double the estimated production costs because of the inherent difficulties of storing, transporting, and dispensing hydrogen, which stem from its low volumetric density. H2Sim estimates distribution costs ranging from 1.37 $/kg (short distance, low production volume) to 3.23 $/kg (long distance, high production volume, carbon sequestration). Distributed hydrogen production options, such as on-site natural gas reformation, would avoid some of these costs. H2Sim compares the expected 2020 per-mile driving costs (fuel, capital, maintenance, license, and registration) of current-technology internal combustion engine (ICE) vehicles (0.55 $/mile), hybrids (0.56 $/mile), and electric vehicles (0.82-0.84 $/mile) with 2020 fuel cell vehicles (FCVs) (0.64-0.66 $/mile), fuel cell vehicles with onboard gasoline reformation (FCVOB) (0.70 $/mile), and direct-combustion hydrogen hybrid vehicles (H2Hybrid) (0.55-0.59 $/mile). The results suggest that while the H2Hybrid vehicle may be competitive with ICE vehicles, it will be difficult for the FCV to compete without significant increases in gasoline prices, reductions in predicted vehicle costs, or stringent carbon policies, or unless it can offer the consumer something existing vehicles cannot, such as on-demand power, lower emissions, or better performance.
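The arithmetic behind the observation that delivered costs are roughly double the production costs can be reproduced directly from the figures quoted above; the pairing of production and distribution cases in the sketch below is illustrative and is not output from H2Sim.

```python
# Production and distribution cost figures quoted in the abstract ($/kg H2)
production = {
    "coal gasification": 0.68,
    "coal gasification + CCS": 0.68 + 0.16,
    "solar PV electrolysis": 5.64,
}
distribution = {
    "short distance, low volume": 1.37,
    "long distance, high volume + sequestration": 3.23,
}

# Delivered cost = production cost + distribution cost for each pairing
for p_name, p_cost in production.items():
    for d_name, d_cost in distribution.items():
        total = p_cost + d_cost
        print(f"{p_name:26s} + {d_name:44s} = {total:5.2f} $/kg")
```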
Specimens of poled 'chem-prep' PNZT ceramic from batch HF803 were tested under hydrostatic, uniaxial, and constant stress difference loading conditions at three temperatures of -55, 25, and 75 C and pressures up to 500 MPa. The objective of this experimental study was to obtain the electro-mechanical properties of the ceramic and the criteria of FE (Ferroelectric) to AFE (Antiferroelectric) phase transformations so that grain-scale modeling efforts can develop and test models and codes using realistic parameters. The poled ceramic undergoes anisotropic deformation during the transition from a FE to an AFE structure. The lateral strain measured parallel to the poling direction was typically 35 % greater than the strain measured perpendicular to the poling direction. The rates of increase in the phase transformation pressures per temperature changes were practically identical for both unpoled and poled PNZT HF803 specimens. We observed that the retarding effect of temperature on the kinetics of phase transformation appears to be analogous to the effect of shear stress. We also observed that the FE-to-AFE phase transformation occurs in poled ceramic when the normal compressive stress, acting perpendicular to a crystallographic plane about the polar axis, equals the hydrostatic pressure at which the transformation otherwise takes place.
As part of the Arsenic Water Technology Partnership program, Sandia National Laboratories will carry out field demonstration testing of innovative technologies that have the potential to substantially reduce the costs associated with arsenic removal from drinking water. The scope of this work includes: (1) selection of sites for pilot demonstrations, (2) identification of candidate technologies through Vendor Forums, proof-of-principle bench-scale studies managed by the American Water Works Association Research Foundation (AwwaRF) or the WERC design contest, and (3) pilot-scale studies involving side-by-side tests of innovative technologies. The goal of site selection is identification of a suite of sites that exhibit a sufficiently wide range of groundwater chemistries to allow examination of treatment processes and systems under conditions that are relevant to different geochemical settings throughout the country. A number of candidate sites have been identified through reviews of groundwater quality databases, conference proceedings, and discussions with state and local officials. These include sites in New Mexico, Arizona, Colorado, Oklahoma, Illinois, Michigan, Florida, Massachusetts, and New Hampshire. In New Mexico, discussions have been held with water utility board staffs in Chama, Jemez Pueblo, Placitas, Socorro, and several communities near Las Cruces to determine the suitability of those communities for pilot studies. The initial pilot studies will be carried out at Socorro and Jemez Pueblo; other communities will be included as the program progresses. The proposed pilot test at a hot spring water source near Socorro will provide an opportunity to test treatment technologies at relatively high temperatures. If approved by the Tribal Government, the proposed pilot at the Jemez Pueblo would provide an opportunity to test technologies that remove arsenic in the presence of relatively high concentrations of iron and manganese while leaving the beneficial levels of fluoride unchanged. Candidate technologies for the pilot tests are being reviewed by technical evaluation teams. The initial reviews will consider as many potential technologies as possible and screen out unsuitable ones by considering data from past performance testing, expected costs, complexity of operation, and maturity of the technology. The pilot test configurations will depend on site-specific conditions such as access, power availability, waste disposal options, and availability of permanent structures to house the test. Most of the treatment technologies that will be evaluated can be separated into two broad categories: (1) sorption processes that use fixed-bed adsorbents and (2) membrane processes. The latter include processes that involve formation of a floc or precipitate that contains the arsenic in a reactor, followed by separation of the solids from the water by filtration. Several innovations that could lead to lower treatment costs have been proposed for adsorptive media systems. These include: (1) higher capacity and selectivity using mixed oxides composed of iron and other transition metals, titanium- and zirconium-based oxides, or mixed resin-metal oxide composite media, (2) improved durability of virgin media and greater chemical stability of the spent media, and (3) use of inexpensive natural or recycled materials with a coating that has a high affinity for arsenic.
Improvements to filtration-based treatment systems include: (1) enhanced coagulation with iron compounds or polyelectrolytes and (2) improved filtration with nanocomposite materials. In the pilot tests, the innovative technologies will be evaluated in terms of: (1) their ability to reduce arsenic to levels below the EPA Maximum Contaminant Level (MCL) of 10 ppb, (2) site-specific adsorptive capacity, (3) robustness of performance with respect to likely changes in water quality parameters, including pH, TDS, and foulants such as Fe, Mn, silica, and organics, (4) the effect of competing ions such as other metals and radionuclides, and (5) potentially deleterious effects on the water system, such as pipe corrosion from low pH levels, fluoride removal, and generation of disinfection by-products. The new arsenic MCL will result in modification of many rural water systems that otherwise would not require treatment. Opportunities for improvement of water quality in systems that currently do not comply with other standards would be an added benefit of the new arsenic MCL that has both economic and public health value.
Understanding the dynamics of the membrane protein rhodopsin will have broad implications for other membrane proteins and cellular signaling processes. Rhodopsin (Rho) is a light-activated G-protein coupled receptor (GPCR). When activated by ligands, GPCRs bind and activate G-proteins residing within the cell and begin a signaling cascade that results in the cell's response to external stimuli. More than 50% of all current drugs are targeted toward G-proteins. Rho is the prototypical member of the class A GPCR superfamily. Understanding the activation of Rho and its interaction with its G-protein can therefore lead to a wider understanding of the mechanisms of GPCR activation and G-protein activation. Understanding the dark-to-light transition of Rho is fully analogous to the general ligand binding and activation problem for GPCRs. This transition depends on the lipid environment. The effect of lipids on membrane protein activity in general has received little attention, but evidence is beginning to show a significant role for lipids in membrane protein activity. Using the LAMMPS program and simulation methods benchmarked under the IBIG program, we perform a variety of all-atom molecular dynamics simulations of membrane proteins.
To investigate the performance of artificial frozen soil materials with a fused interface, split-tension (or 'Brazilian') tests and unconfined uniaxial compression tests were carried out in a low-temperature environmental chamber. Intact and fused specimens were fabricated from four different soil mixtures (962: clay-rich soil with bentonite; DNA1: clay-poor soil; DNA2: clay-poor soil with vermiculite; and DNA3: clay-poor soil with perlite). Based on the 'Brazilian' test results and density measurements, the DNA3 mixture was selected as most closely representing the mechanical properties of the Alaskan frozen soil. Healing the interface with a layer of the same soil sandwiched between two blocks of the same material yielded the highest 'Brazilian' tensile strength of the interface. Based on the unconfined uniaxial compression tests, the frictional strength of the fused DNA3 specimens with the same soil appears to exceed the shear strength of the intact specimen.
Laser-induced incandescence is used to measure time-resolved diesel particulate emissions for two lean-NOx-trap regeneration strategies that utilize intake throttling and in-cylinder fuel enrichment. The results show that when the main injection event is increased in duration and delayed 13 crank-angle degrees, particulate emissions are very high. For a repetitive pattern of 3 seconds of rich regeneration followed by 27 seconds of NOx-trap loading, we find a monotonic increase in particulate emissions during the loading intervals that approaches twice the initial baseline particulate level after 1000 seconds. In contrast, particulate emissions during the regeneration intervals are constant throughout the test sequence. For regeneration using an additional late injection event (post-injection), particulate emissions are about twice the baseline level for the first regeneration interval, but then decay with an exponential-like behavior over the repetitive test sequence, eventually reaching a level that is comparable to the baseline. In contrast, particulate emissions between regenerations decrease slowly throughout the test sequence, reaching a level 12 percent below the starting baseline value.
An Al{sub 85}Ni{sub 10}La{sub 5} amorphous alloy, produced via gas atomization, was selected to study the mechanisms of nanocrystallization induced by thermal exposure. High resolution transmission electron microscopy results indicated the presence of quenched-in Al nuclei in the amorphous matrix of the atomized powder. However, a eutectic-like reaction, which involved the formation of the Al, Al{sub 11}La{sub 3}, and Al{sub 3}Ni phases, was recorded in the first crystallization event (263 C) during differential scanning calorimetry continuous heating. Isothermal annealing experiments conducted below 263 C revealed that the formation of single fcc-Al phase occurred at 235 C. At higher temperatures, growth of the Al crystals occurred with formation of intermetallic phases, leading to a eutectic-like transformation behavior at 263 C. During the first crystallization stage, nanocrystals were developed in the size range of 5 - 30 nm. During the second crystallization event (283 C), a bimodal size distribution of nanocrystals was formed with the smaller size in the range of around 10 - 30 nm and the larger size around 100 nm. The influence of pre-existing quenched-in Al nuclei on the microstructural evolution in the amorphous Al{sub 85}Ni{sub 10}La{sub 5} alloy is discussed and the effect of the microstructural evolution on the hardening behavior is described in detail.
Fires pose the dominant risk to the safety and security of nuclear weapons, nuclear transport containers, and DOE and DoD facilities. The thermal hazard from these fires primarily results from radiant emission from high-temperature flame soot. Therefore, it is necessary to understand the local transport and chemical phenomena that determine the distributions of soot concentration, optical properties, and temperature in order to develop and validate constitutive models for large-scale, high-fidelity fire simulations. This report summarizes the findings of a Laboratory Directed Research and Development (LDRD) project devoted to obtaining the critical experimental information needed to develop such constitutive models. A combination of laser diagnostics and extractive measurement techniques has been employed in both steady and pulsed laminar diffusion flames of methane, ethylene, and JP-8 surrogate burning in air. For methane and ethylene, both slot and coannular flame geometries were investigated, as well as normal and inverse diffusion flame geometries. For the JP-8 surrogate, coannular normal diffusion flames were investigated. Soot concentrations, polycyclic aromatic hydrocarbon (PAH) laser-induced fluorescence (LIF) signals, hydroxyl radical (OH) LIF, acetylene and water vapor concentrations, soot zone temperatures, and the velocity field were all successfully measured in both steady and unsteady versions of these various flames. In addition, measurements were made of the soot microstructure, the soot dimensionless extinction coefficient, and the local radiant heat flux. Taken together, these measurements comprise a unique, extensive database for future development and validation of models of soot formation, transport, and radiation.
LIGA is an acronym for the German terms Lithographie, Galvanoformung, Abformung, which describe a microfabrication process for high-aspect-ratio structural parts based on electrodeposition of a metal into a poly-methyl-methacrylate (PMMA) mold. LIGA-produced parts have very high dimensional tolerances (on the order of a micron) and can vary in size from microns to centimeters. These properties make LIGA parts ideal for incorporation into MEMS devices or for other applications where strict tolerances must be met; however, functionality of the parts can only be maintained if they remain dimensionally stable throughout their lifetime. It follows that any form of corrosion attack (e.g., uniform dissolution, localized pitting, environmental cracking, etc.) cannot be tolerated. This presentation focuses on the pitting behavior of Ni electrodeposits, specifically addressing the influence of the following: grain structure, alloy composition, impurities, plating conditions, post-plating processing (including chemical and thermal treatment), galvanic interactions, and environment (aqueous vs. atmospheric). A small subset of these results is summarized. A typical LIGA part is shown in Figure 1. Due to the small size scale, electrochemical testing was performed using a capillary-based test system. Although very small test areas can be probed with this system (e.g., Figure 2), capillaries on the order of 80 to 90 {micro}m were typically used in the testing. All LIGA parts tested in the as-received condition had better pitting resistance than the high-purity wrought Ni material used as a control. In the case of LIGA-Ni and LIGA-Ni-Mn, no detrimental effects were observed due to aging at 700 C. Ni-S (approximately 500 ppm S) showed good as-received pitting behavior but decreased pitting resistance with thermal aging. Aged Ni-S showed dramatic increases in grain size (from a few {micro}m to hundreds of {micro}m) and significant segregation of S to the grain boundaries. The capillary test cell was used to measure pitting potentials at the boundaries and within grains (Figure 3), with the results clearly showing that the lowered pitting resistance is due to the S-rich boundaries. It is believed that the process used to release the LIGA parts from the Cu substrate acts as a pickling agent, resulting in removal of surface impurities and detrimental alloying additions. EIS data from freshly polished samples exposed to the release bath support this hypothesis; polarization resistance (R{sub P}) values for all LIGA materials and for wrought Ni continuously increase during exposure. Mechanical polishing of LIGA parts prior to electrochemical testing consistently lowered the pitting potentials to a range bounded by Ni 201 and high-purity Ni. The as-received vs. polished behavior also affects the galvanic interactions with noble metals. When as-produced material is coupled to Au, the LIGA material initially acts as the cathode, though eventually the behavior switches such that the LIGA becomes the anode. Overall, the LIGA-produced Ni and Ni alloys examined in this work demonstrated pitting behavior similar to wrought Ni, showing reduced resistance only when specific metallurgical and environmental conditions were met.
A series of experiments was performed to better characterize the boundary conditions from an Inconel heat source ('shroud') painted with Pyromark black paint. Quantifying uncertainties in this type of experimental setup is crucial to providing information for comparisons with code predictions. The characterization of this boundary condition has applications in many scenarios related to fire simulation experiments performed at Sandia National Laboratories' Radiant Heat Facility (RHF). Four phases of experiments were performed. Phase 1 results showed that a nominal 1000 C shroud temperature is repeatable to about 2 C. Repeatability of temperatures at individual points on the shroud shows that temperatures do not vary by more than 10 C from experiment to experiment. This variation results in a 6% difference in heat flux to a target 4 inches away. IR camera images showed the shroud was not at a uniform temperature, although the control temperature was constant to about {+-}2 C during a test. These images showed that a circular, flat shroud with its edges supported by an insulated plate has a temperature distribution with higher temperatures at the edges and lower temperatures in the center. Differences between the center and edge temperatures were up to 75 C. Phase 3 results showed that thermocouple (TC) bias errors are affected by coupling with the surrounding environment. The magnitude of TC error depends on the environment facing the TC. Phase 4 results were used to estimate correction factors for specific applications (40- and 63-mil-diameter, ungrounded-junction, mineral-insulated, metal-sheathed TCs facing a cold surface). Correction factors of about 3.0-4.5% are recommended for 40-mil-diameter TCs and 5.5-7.0% for 63-mil-diameter TCs. When mounted on the cold side of the shroud, TCs read lower than the 'true' shroud temperature, and they read high when on the hot side. An alternate method uses the average of a cold-side and a hot-side TC of the same size to estimate the true shroud temperature. Phase 2 results compared IR camera measurements with TC measurements and measured values of Pyromark emissivity; agreement was within the measured uncertainties of the Pyromark paint emissivity and IR camera temperatures.
Mathematical models are developed and used to study the properties of complex systems and/or to modify these systems to satisfy performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data are limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high-risk systems, where the consequences of using an inappropriate model can be disastrous. The decision-theoretic method for model selection is developed and applied to a series of complex and diverse applications. These include the selection of: (1) the optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) the optimal pressure load model to be applied to a spacecraft during atmospheric re-entry, and (3) the optimal design of a distributed sensor network for the purpose of vehicle tracking and identification.
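As a point of reference for the classical methods discussed above, the sketch below ranks candidate polynomial models on synthetic data by maximized Gaussian log-likelihood and by BIC; the decision-theoretic selection developed in this work would additionally weight each candidate through a use-specific utility function, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 40)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0, 0.1, x.size)   # true model is quadratic

for order in range(1, 6):
    coeffs = np.polyfit(x, y, order)                 # maximum-likelihood fit (Gaussian noise)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)                       # MLE of the noise variance
    n, k = x.size, order + 2                         # polynomial coefficients + noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    bic = k * np.log(n) - 2 * loglik                 # lower BIC is preferred
    print(f"order {order}:  log-likelihood = {loglik:7.2f}   BIC = {bic:7.2f}")
```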
The irradiation of thin insulating films by high-energy ions (374 MeV Au{sup +25} or 241 MeV I{sup +19}) was used to attempt to form nanometer-size pores through the films spontaneously. Such ions deposit a large amount of energy into the target materials ({approx}20 keV/nm), which significantly disrupts their atomic lattice and sputters material from the surfaces, and might produce nanopores for appropriate ion-material combinations. Transmission electron microscopy was used to examine the resulting ion tracks. Tracks were found in the crystalline oxides quartz, sapphire, and mica. Sapphire and mica showed ion tracks that are likely amorphous and exhibit pits 5 nm in diameter on the surface at the ion entrance and exit points. This suggests that nanopores might form in mica if the film thickness is less than {approx}10 nm. Tracks in quartz showed strain in the matrix around them. Tracks were not found in the amorphous thin films examined: 20 nm-SiN{sub x}, deposited SiOx, fused quartz (amorphous SiO{sub 2}), formvar and 3 nm-C. Other promising materials for nanopore formation were identified, including thin Au and SnO{sub 2} layers.
This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Investigation of Potential Applications of Self-Assembled Nanostructured Materials in Nuclear Waste Management'. The objectives of this project are to (1) provide a mechanistic understanding of the control of nanometer-scale structures on the ion sorption capability of materials and (2) develop appropriate engineering approaches to improving material properties based on such an understanding.
A radioactive sealed source is any radioactive material that is encased in a capsule designed to prevent leakage or escape of the radioactive material. Radioactive sealed sources are used for a wide variety of applications in hospitals, manufacturing, and research. Typical uses are in portable gauges to measure soil compaction and moisture or to determine physical properties of rock units in boreholes (well logging). Hospitals and clinics use radioactive sealed sources for teletherapy and brachytherapy. Oil exploration and medicine are the largest users. Accidental mismanagement of radioactive sealed sources each year results in a large number of people receiving very high or even fatal doses of ionizing radiation. Deliberate mismanagement is a growing international concern. Sealed sources must be managed and disposed of effectively in order to protect human health and the environment. Effective national safety and management infrastructures are prerequisites for efficient and safe transportation, treatment, storage, and disposal. The Integrated Management Program for Radioactive Sealed Sources in Egypt (IMPRSS) is a cooperative development agreement between the Egyptian Atomic Energy Authority (EAEA), the Egyptian Ministry of Health (MOH), Sandia National Laboratories (SNL), the University of New Mexico (UNM), and Agriculture Cooperative Development International (ACDI/VOCA). The EAEA, teaming with SNL, is conducting a Preliminary Safety Assessment (PSA) of intermediate-depth borehole disposal in thick arid alluvium in Egypt based on experience with U.S. Greater Confinement Disposal (GCD). GoldSim has been selected for the preliminary disposal system assessment for the Egyptian GCD study. The results of the PSA will then be used to decide whether Egypt wishes to implement such a disposal system.
We have studied the feasibility of an innovative device to sample single 1-ns, low-power current transients with a time resolution better than 10 ps. The new concept explored here is to close photoconductive semiconductor switches (PCSS) with a laser for a period of 10 ps. The PCSSs are arranged in series along a transmission line (TL). The transient propagates along the TL, allowing one to carry out a spatially resolved sampling of charge at a fixed time instead of the usual time-sampling of the current. The fabrication of such a digitizer was proven to be feasible but very difficult.
This paper presents solution verification studies applicable to a class of problems involving wave propagation, frictional contact, geometrical complexity, and localized incompressibility. The studies are in support of a validation exercise of a phenomenological screw failure model. The numerical simulations are performed using a fully explicit transient dynamics finite element code, employing both standard four-node tetrahedral and eight-node mean quadrature hexahedral elements. It is demonstrated that verifying the accuracy of the simulation involves not only consideration of the mesh discretization error, but also the effect of the hourglass control and the contact enforcement. In particular, the proper amount of hourglass control and the behavior of the contact search and enforcement algorithms depend greatly on the mesh resolution. We carry out the solution verification exercise using mesh refinement studies and describe our systematic approach to handling the complicating issues. It is shown that hourglassing and contact must both be carefully monitored as the mesh is refined, and it is often necessary to make adjustments to the hourglass and contact user input parameters to accommodate finer meshes. We introduce in this paper the hourglass energy, which is used as an 'error indicator' for the hourglass control. If the hourglass energy does not tend to zero with mesh refinement, then an hourglass control parameter is changed and the calculation is repeated.
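The hourglass-energy indicator described above amounts to bookkeeping: check that the ratio of hourglass energy to internal energy decreases under mesh refinement and flag runs where it does not. A minimal sketch of that check is shown below; the energy values are made up, standing in for quantities extracted from the finite element code's output.

```python
# Hypothetical (made-up) energies extracted from three successively refined runs
runs = [
    {"h": 0.25,   "hourglass_energy": 4.1e-3, "internal_energy": 1.9e-1},
    {"h": 0.125,  "hourglass_energy": 1.3e-3, "internal_energy": 2.0e-1},
    {"h": 0.0625, "hourglass_energy": 4.0e-4, "internal_energy": 2.0e-1},
]

tolerance = 0.01   # flag runs whose hourglass energy exceeds 1% of internal energy
for run in runs:
    ratio = run["hourglass_energy"] / run["internal_energy"]
    flag = "  <-- adjust hourglass control or refine further" if ratio > tolerance else ""
    print(f"h = {run['h']:7.4f}   hourglass/internal = {ratio:.4f}{flag}")
```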
We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant against some IV reuse. We offer a Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents, and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation that render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent, or at least slow down, reverse engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks and improvements on this method and demonstrate its implementation for various algorithms. We also examine cryptographic techniques to achieve obfuscation, including encrypted functions, and offer a new application to digital signature algorithms. To better understand the lack of security proofs for obfuscation techniques, we examine in detail general theoretical models of obfuscation. We explain the need for formal models in order to obtain provable security and the progress made in this direction thus far. Finally, we tackle the problem of verifying remote execution. We introduce some methods of verifying remote exponentiation computations and offer some insight into generic computation checking.
Microelectronic devices in satellites and spacecraft are exposed to high energy cosmic radiation. Furthermore, Earth-based electronics can be affected by terrestrial radiation. The radiation causes a variety of Single Event Effects (SEE) that can lead to failure of the devices. High energy heavy ion beams are being used to simulate both the cosmic and terrestrial radiation to study radiation effects and to ensure the reliability of electronic devices. Broad beam experiments can provide a measure of the radiation hardness of a device (SEE cross section) but they are unable to pinpoint the failing components in the circuit. A nuclear microbeam is an ideal tool to map SEE on a microscopic scale and find the circuit elements (transistors, capacitors, etc.) that are responsible for the failure of the device. In this paper a review of the latest radiation effects microscopy (REM) work at Sandia will be given. Different SEE mechanisms (Single Event Upset, Single Event Transient, etc.) and the methods to study them (Ion Beam Induced Charge (IBIC), Single Event Upset mapping, etc.) will be discussed. Several examples of using REM to study the basic effects of radiation in electronic devices and failure analysis of integrated circuits will be given.
An important challenge encountered during post-processing of finite element analyses is visualizing three-dimensional fields of real-valued second-order tensors. As finite element meshes become more complex and detailed, evaluation and presentation of the principal stresses become correspondingly problematic. In this paper, we describe techniques used to visualize simulations of perturbed in-situ stress fields associated with hypothetical salt bodies in the Gulf of Mexico. We present an adaptation of the Mohr diagram, a graphical paper-and-pencil method used by the material mechanics community for estimating coordinate transformations of stress tensors, as a new tensor glyph for dynamically exploring tensor variables within three-dimensional finite element models. This interactive glyph can be used as either a probe or a filter through brushing and linking.
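The quantities plotted on a Mohr diagram follow directly from the eigenvalues of the stress tensor; the sketch below computes the principal stresses and the three Mohr circles for an assumed, illustrative in-situ stress state.

```python
import numpy as np

def mohr_summary(stress):
    """Principal stresses and the quantities drawn on a Mohr diagram (the
    centers and radii of the three Mohr circles) for a symmetric 3x3 Cauchy
    stress tensor."""
    s1, s2, s3 = sorted(np.linalg.eigvalsh(stress), reverse=True)  # principal stresses
    circles = [((s1 + s3) / 2, (s1 - s3) / 2),   # outer circle: maximum shear
               ((s1 + s2) / 2, (s1 - s2) / 2),
               ((s2 + s3) / 2, (s2 - s3) / 2)]
    return (s1, s2, s3), circles

# Example in-situ-like stress state (MPa, compression negative)
sigma = np.array([[-40.0,   5.0,  0.0],
                  [  5.0, -60.0,  2.0],
                  [  0.0,   2.0, -80.0]])
principals, circles = mohr_summary(sigma)
print("principal stresses:", principals)
print("max shear stress  :", circles[0][1])
```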
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
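The matricization operation supported by the tensor_as_matrix class can be illustrated in a few lines of NumPy; note that the column ordering below follows NumPy's row-major convention and may differ from the ordering used by the MATLAB classes.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: rearrange an N-way array into a matrix whose rows
    are indexed by the chosen mode and whose columns sweep the other modes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given original shape."""
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

T = np.arange(24).reshape(2, 3, 4)          # a small 2 x 3 x 4 tensor
M = unfold(T, mode=1)                       # 3 x 8 matrix
assert np.array_equal(fold(M, 1, T.shape), T)
print(M.shape)
```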
We present the source code for three MATLAB classes for manipulating tensors, intended to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.
The rate coefficient has been measured under pseudo-first-order conditions for the Cl + CH{sub 3} association reaction at T = 202, 250, and 298 K and P = 0.3-2.0 Torr helium using the technique of discharge-flow mass spectrometry with low-energy (12-eV) electron-impact ionization and collision-free sampling. Cl and CH{sub 3} were generated rapidly and simultaneously by reaction of F with HCl and CH{sub 4}, respectively. Fluorine atoms were produced by microwave discharge in an approximately 1% mixture of F{sub 2} in He. The decay of CH{sub 3} was monitored under pseudo-first-order conditions with the Cl-atom concentration in large excess over the CH{sub 3} concentration ([Cl]{sub 0}/[CH{sub 3}]{sub 0} = 9-67). Small corrections were made for both axial and radial diffusion and minor secondary chemistry. The rate coefficient was found to be in the falloff regime over the range of pressures studied. For example, at T = 202 K, the rate coefficient increases from 8.4 x 10{sup -12} at P = 0.30 Torr He to 1.8 x 10{sup -11} at P = 2.00 Torr He, both in units of cm{sup 3} molecule{sup -1} s{sup -1}. A combination of ab initio quantum chemistry, variational transition-state theory, and master-equation simulations was employed in developing a theoretical model for the temperature and pressure dependence of the rate coefficient. Reasonable empirical representations of energy transfer and of the effect of spin-orbit interactions yield a temperature- and pressure-dependent rate coefficient that is in excellent agreement with the present experimental results. The high-pressure limiting rate coefficient from the RRKM calculations is k{sub 2} = 6.0 x 10{sup -11} cm{sup 3} molecule{sup -1} s{sup -1}, independent of temperature in the range from 200 to 300 K.
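A hedged sketch of the pseudo-first-order data reduction described above (omitting the diffusion and secondary-chemistry corrections applied in the actual analysis; names are illustrative): each CH{sub 3} decay at fixed excess [Cl] yields a first-order rate k', and the slope of k' versus [Cl] gives the bimolecular rate coefficient.

    import numpy as np

    def bimolecular_rate_from_decays(times, ch3_signals, cl_concentrations):
        """times, ch3_signals: one decay trace per Cl concentration.
        Returns (k, intercept): k in cm^3 molecule^-1 s^-1 if [Cl] is in
        molecule cm^-3; the intercept reflects first-order (e.g., wall) loss."""
        k_primes = []
        for t, s in zip(times, ch3_signals):
            slope, _ = np.polyfit(t, np.log(s), 1)   # ln S = ln S0 - k' t
            k_primes.append(-slope)
        k, intercept = np.polyfit(cl_concentrations, k_primes, 1)
        return k, intercept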
The purpose of the present work is to increase our understanding of which properties of geomaterials most influence the penetration process, with the goal of improving our predictive ability. Two primary approaches were followed: development of a realistic constitutive model for geomaterials, and design of an experimental approach to study penetration from the target's point of view. A realistic constitutive model, with parameters based on measurable properties, can be used for sensitivity analysis to determine the properties that are most important in influencing the penetration process. An immense literature is devoted to the problem of predicting penetration into geomaterials or similar man-made materials such as concrete. Various formulations have been developed that use an analytic or, more commonly, numerical solution for the spherical or cylindrical cavity expansion as a sort of Green's function to establish the forces acting on a penetrator. This approach has had considerable success in modeling the behavior of penetrators, both as to path and depth of penetration. However, the approach is not well adapted to the problem of understanding what is happening to the material being penetrated. Without a picture of the stress and strain state imposed on the highly deformed target material, it is not easy to determine which properties of the target are important in influencing the penetration process. We developed an experimental arrangement that allows greater control of the deformation than is possible in actual penetrator tests, yet approximates the deformation processes imposed by a penetrator. Using explosive line charges placed in a central borehole, we loaded cylindrical specimens in a manner equivalent to an increment of penetration, allowing measurement of the associated strains and accelerations and retrieval of specimens from the more-or-less intact cylinder. Results show clearly that the deformation zone is highly concentrated near the borehole, with almost no damage occurring beyond half a borehole diameter. This implies that penetration is not strongly influenced by anything but the material within a diameter or so of the penetration. For penetrator tests, target size should not matter strongly once target diameters exceed some small multiple of the penetrator diameter. Penetration into jointed rock should not be much affected unless a discontinuity is within a similar range. Accelerations measured at several points along a radius from the borehole are consistent with highly concentrated damage and energy absorption: at the borehole wall, accelerations were an order of magnitude higher than at half a diameter, but at the outer surface, 8 diameters away, accelerations were as expected for propagation through an elastic medium. Accelerations measured at the outer surface of the cylinders increased significantly with cure time for the concrete. As strength increased, less damage was observed near the explosively driven borehole wall, consistent with the lower energy absorption expected and observed for stronger concrete. Since it is the energy-absorbing properties of a target that ultimately stop a penetrator, we believe this may point the way to a more readily determined equivalent of the S number.
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results.
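A minimal sketch of steps (2), (3), and (5) for the simplest case of independent uniform inputs, with Spearman rank correlations as the sensitivity measure (a generic illustration; the function names and the example model are hypothetical, not code described in the review):

    import numpy as np

    def lhs_uniform(n_samples, bounds, rng=None):
        """Latin hypercube sample for independent uniform inputs (step 2).
        bounds is a list of (low, high) pairs, one per uncertain input."""
        rng = np.random.default_rng(rng)
        d = len(bounds)
        strata = np.tile(np.arange(n_samples), (d, 1))
        u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, d))) / n_samples
        lows = np.array([b[0] for b in bounds])
        highs = np.array([b[1] for b in bounds])
        return lows + u * (highs - lows)

    def rank_correlation_sensitivity(samples, outputs):
        """Spearman rank correlation of each input with the output (step 5),
        a common sampling-based sensitivity indicator (ties broken arbitrarily)."""
        ranks = lambda v: np.argsort(np.argsort(v)).astype(float)
        out_r = ranks(np.asarray(outputs))
        return [np.corrcoef(ranks(samples[:, j]), out_r)[0, 1]
                for j in range(samples.shape[1])]

    # Step 3, propagation: evaluate the analysis model at each sampled input.
    # X = lhs_uniform(100, [(0.0, 1.0), (10.0, 20.0)])
    # y = np.array([my_model(row) for row in X])   # my_model is hypothetical
    # print(rank_correlation_sensitivity(X, y))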
Similar to entangled ropes, polymer chains cannot slide through each other. These topological constraints, the so-called entanglements, dominate the viscoelastic behavior of high-molecular-weight polymeric liquids. Tube models of polymer dynamics and rheology are based on the idea that entanglements confine a chain to small fluctuations around a primitive path which follows the coarse-grained chain contour. To establish the microscopic foundation for these highly successful phenomenological models, we have recently introduced a method for identifying the primitive path mesh that characterizes the microscopic topological state of computer-generated conformations of long-chain polymer melts and solutions. Here we give a more detailed account of the algorithm and discuss several key aspects of the analysis that are pertinent for its successful use in analyzing the topology of the polymer configurations. We also present a slight modification of the algorithm that preserves the previously neglected self-entanglements and allows us to distinguish between local self-knots and entanglements between distant sections of the same chain. Our results indicate that the latter make a negligible contribution to the tube and that the contour length between local self-knots, N{sub 1k}, is significantly larger than the entanglement length N{sub e}.
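From the primitive-path mesh, the entanglement length is commonly estimated by treating each primitive path as a random walk that preserves the chain's end-to-end distance, i.e., N{sub e} {approx} (N-1)<R{sup 2}>/<L{sub pp}>{sup 2}. The sketch below implements that standard estimator as a generic illustration; it is not the analysis code used in this work, and other estimators differ in how the averages are taken.

    import numpy as np

    def entanglement_length(end_to_end_sq, primitive_path_lengths, n_beads):
        """Classic primitive-path estimator: N_e ~ (N - 1) * <R^2> / <L_pp>^2,
        with <R^2> the mean-square end-to-end distance and <L_pp> the mean
        contour length of the primitive paths for chains of N beads."""
        r2 = np.mean(end_to_end_sq)
        lpp = np.mean(primitive_path_lengths)
        return (n_beads - 1) * r2 / lpp**2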
Water resource scarcity around the world is driving the need for the development of simulation models that can assist in water resources management. Transboundary water resources are receiving special attention because of the potential for conflict over scarce shared water resources. The Rio Grande/Rio Bravo along the U.S./Mexican border is an example of a scarce, transboundary water resource over which conflict has already begun. The data collection and modeling effort described in this report aims to develop methods for international collaboration, data collection, data integration, and modeling for simulating geographically large and diverse international watersheds, with a special focus on the Rio Grande/Rio Bravo. This report describes the basin and the data collected. The data collection was spatially aggregated across five reaches: Fort Quitman to Presidio, the Rio Conchos, Presidio to Amistad Dam, Amistad Dam to Falcon Dam, and Falcon Dam to the Gulf of Mexico. This report represents a nine-month effort made in FY04, during which the model was not completed.
This report describes a project to develop both fixed and programmable surface acoustic wave (SAW) correlators for use in a low-power space communication network. This work was funded by NASA and performed at Sandia National Laboratories during the final part of fiscal year 2002 and fiscal years 2003 and 2004. The role of Sandia was to develop the SAW correlator component, although additional work pertaining to use of the component in a system and to system optimization was also done at Sandia. The potential of SAW correlator-based communication systems, the design and fabrication of SAW correlators, and general system utilization of those correlators are discussed here.
Drainage of water from the region between an advancing probe tip and a flat sample is reconsidered under the assumption that the tip and sample surfaces are both coated by a thin water 'interphase' (of width {approx}a few nm) whose viscosity is much higher than that of the bulk liquid. A formula derived by solving the Navier-Stokes equations allows one to extract an interphase viscosity of {approx}59 kPa s (or {approx}6.6x10{sup 7} times the viscosity of bulk water at 25 C) from Interfacial Force Microscope measurements in which both tip and sample were rendered hydrophilic by OH-terminated tri(ethylene glycol) undecylthiol self-assembled monolayers.
Current computing architectures are 'inherently insecure' because they are designed to execute any arbitrary sequence of instructions. As a result they are subject to subversion by malicious code. Our goal is to produce a cryptographic method of 'tamper-proofing' trusted code over a large portion of the software life cycle. We have developed a technique called 'faithful execution' to cryptographically protect instruction sequences from subversion. This paper presents an overview of, and the lessons learned from, our implementations of faithful execution in a Java virtual machine prototype and in a configurable soft-core processor implemented in a field programmable gate array (FPGA).
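A minimal software sketch of the idea (a hypothetical illustration, not the actual faithful-execution design, instruction format, or key management): each instruction word is encrypted with an address-derived keystream and carries an address-bound MAC, so a modified, replayed, or relocated instruction fails authentication at fetch time, before it can execute.

    import hmac, hashlib, secrets

    class FaithfulExecutionSketch:
        """Per-instruction protection bound to the instruction's address."""

        def __init__(self):
            self.k_enc = secrets.token_bytes(32)   # keystream-derivation key
            self.k_mac = secrets.token_bytes(32)   # authentication key

        def _pad(self, addr):
            return hmac.new(self.k_enc, addr.to_bytes(8, 'big'),
                            hashlib.sha256).digest()[:4]

        def protect(self, addr, instr):            # instr: 4-byte word
            ct = bytes(a ^ b for a, b in zip(instr, self._pad(addr)))
            tag = hmac.new(self.k_mac, addr.to_bytes(8, 'big') + ct,
                           hashlib.sha256).digest()[:8]
            return ct, tag

        def fetch(self, addr, ct, tag):
            """Authenticate and decrypt one instruction; raise on tampering."""
            expect = hmac.new(self.k_mac, addr.to_bytes(8, 'big') + ct,
                              hashlib.sha256).digest()[:8]
            if not hmac.compare_digest(tag, expect):
                raise ValueError('instruction failed authentication')
            return bytes(a ^ b for a, b in zip(ct, self._pad(addr)))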
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wavelength Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. This paper documents the innovations and advances of work first detailed in 'Photonic Encryption using All Optical Logic' [1]. A discussion of underlying concepts can be found in SAND2003-4474. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines S-SEED devices and how discrete logic elements can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of S-SEED devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element, as opposed to device-level computational modeling. By representing light intensity as voltage, we generate 'black box' models that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid the overall functional design. Each black box model takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time-delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. Demonstration circuits show how these logic elements can be used to form NAND, NOR, and XOR functions. This paper also presents a functional analysis of a serial, low-gate-count demonstration algorithm suitable for scrambling/encryption using S-SEED devices.
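The demonstration algorithm itself is not specified in this summary. As a hedged illustration of the kind of serial, low-gate-count function that can be built entirely from XOR elements and delays (the register width, seed, and tap positions below are arbitrary), a software model of an additive scrambler might look like the following; descrambling applies the same circuit with the same seed.

    def lfsr_scrambler(bits, taps=(0, 14), seed=0b101010101010101, width=15):
        """Additive (synchronous) scrambler: a shift register plus XOR taps
        generates a keystream that is XORed with the data bits."""
        state, out = seed, []
        for b in bits:
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1      # XOR of tapped register bits
            out.append(b ^ feedback)              # XOR keystream with data bit
            state = ((state << 1) | feedback) & ((1 << width) - 1)
        return out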
We observe the spontaneous formation of parallel oxide rods upon exposing a clean NiAl(110) surface to oxygen at elevated temperatures (850-1350 K). By following the self-assembly of individual nanorods in real time with low-energy electron microscopy (LEEM), we are able to investigate the processes by which the rods lengthen along their axes and thicken normal to the surface of the substrate. At a fixed temperature and O{sub 2} pressure, the rods lengthen along their axes at a constant rate. The exponential temperature dependence of this rate yields an activation energy for growth of 1.2 {+-} 0.1 eV. The rod growth rates do not change as their ends pass in close proximity (<40 nm) to each other, which suggests that they do not compete for diffusing flux in order to elongate. Both LEEM and scanning tunneling microscopy (STM) studies show that the rods can grow vertically in layer-by-layer fashion. The heights of the rods are extremely bias dependent in STM images, but occur in integer multiples of approximately 2-{angstrom}-thick oxygen-cation layers. As the rods elongate from one substrate terrace to the next, we commonly see sharp changes in their rates of elongation that result from their tendency to gain (lose) atomic layers as they descend (climb) substrate steps. Diffraction analysis and dark-field imaging with LEEM indicate that the rods are crystalline, with a lattice constant that is well matched to that of the substrate along their length. We discuss the factors that lead to the formation of these highly anisotropic structures.
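The quoted activation energy follows from an Arrhenius analysis of the measured lengthening rates. A minimal sketch of that fit (a generic illustration, not the authors' analysis script; rates may be in any consistent units):

    import numpy as np

    KB_EV = 8.617333e-5   # Boltzmann constant, eV/K

    def activation_energy(temperatures_K, rates):
        """Fit rate = A * exp(-Ea / (kB*T)): the slope of ln(rate) versus 1/T
        gives -Ea/kB.  Returns (Ea in eV, prefactor A)."""
        inv_T = 1.0 / np.asarray(temperatures_K, dtype=float)
        slope, intercept = np.polyfit(inv_T, np.log(rates), 1)
        return -slope * KB_EV, np.exp(intercept)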
The performance characteristics and material properties such as stress, microstructure, and composition of nickel coatings and electroformed components can be controlled over a wide range by the addition of small amounts of surface-active compounds to the electroplating bath. Saccharin is one compound that is widely utilized for its ability to reduce tensile stress and refine grain size in electrodeposited nickel. While the effects of saccharin on nickel electrodeposition have been studied by many authors in the past, there is still uncertainty over saccharin's mechanisms of incorporation, stress reduction, and grain refinement. In-situ scanning probe microscopy (SPM) is a tool that can be used to directly image the nucleation and growth of thin nickel films at nanometer length scales to help elucidate saccharin's role in the development and evolution of grain structure. In this study, in-situ atomic force microscopy (AFM) and scanning tunneling microscopy (STM) techniques are used to investigate the effects of saccharin on the morphological evolution of thin nickel films. By observing monoatomic-height nickel island growth with and without saccharin present, we conclude that saccharin has little effect on nickel surface mobility during deposition at low overpotentials, where growth occurs in a layer-by-layer mode. Saccharin was imaged on Au(111) terraces as condensed patches without a resolved packing structure. AFM measurements of the roughness evolution of nickel films up to 1200 nm thick on polycrystalline gold indicate that saccharin initially increases the roughness and surface skewness of the deposit, which at greater thickness becomes smoother than films deposited without saccharin. Faceting of the deposit morphology decreases as saccharin concentration increases, even for the thinnest films, which show 3-D growth.