This paper presents a new map representing the structure of all of science, based on journal articles, including both the natural and social sciences. Similar to cartographic maps of our world, the map of science provides a bird's eye view of today's scientific landscape. It can be used to visually identify major areas of science, their size, similarity, and interconnectedness. In order to be useful, the map needs to be accurate on a local and on a global scale. While our recent work has focused on the former aspect, this paper summarizes results on how to achieve structural accuracy. Eight alternative measures of journal similarity were applied to a data set of 7,121 journals covering over 1 million documents in the combined Science Citation and Social Science Citation Indexes. For each journal similarity measure we generated two-dimensional spatial layouts using the force-directed graph layout tool, VxOrd. Next, mutual information values were calculated for each graph at different clustering levels to give a measure of structural accuracy for each map. The best co-citation and inter-citation maps according to local and structural accuracy were selected and are presented and characterized. These two maps are compared to establish robustness. The inter-citation map is then used to examine linkages between disciplines. Biochemistry appears as the most interdisciplinary discipline in science.
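The abstract does not specify the exact mutual-information computation; as a minimal sketch under that caveat, the MI between a layout-derived clustering and a reference clustering of the same journals could be computed as follows (the cluster labels are illustrative, not the study's data):

```python
# Hedged sketch: mutual information between two clusterings of the same journals.
from sklearn.metrics import mutual_info_score

layout_clusters    = [0, 0, 1, 1, 2, 2, 2]  # cluster id per journal from a 2-D layout
reference_clusters = [0, 0, 1, 2, 2, 2, 1]  # cluster id per journal from a reference partition

mi = mutual_info_score(layout_clusters, reference_clusters)
print(f"mutual information: {mi:.3f} nats")
```

Higher MI between the layout clustering and the reference partition would indicate better structural (global) accuracy of the map.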
The Z accelerator [R.B. Spielman, W.A. Stygar, J.F. Seamen et al., Proceedings of the 11th International Pulsed Power Conference, Baltimore, MD, 1997, edited by G. Cooperstein and I. Vitkovitsky (IEEE, Piscataway, NJ, 1997), Vol. 1, p. 709] at Sandia National Laboratories delivers ≈20 MA load currents to create high magnetic fields (>1000 T) and high pressures (megabar to gigabar). In a z-pinch configuration, the magnetic pressure (the Lorentz force) supersonically implodes a plasma created from a cylindrical wire array, which at stagnation typically generates a plasma with energy densities of about 10 MJ/cm³ and temperatures >1 keV at 0.1% of solid density. These plasmas produce x-ray energies approaching 2 MJ at powers >200 TW for inertial confinement fusion (ICF) and high energy density physics (HEDP) experiments. In an alternative configuration, the large magnetic pressure directly drives isentropic compression experiments to pressures >3 Mbar and accelerates flyer plates to >30 km/s for equation of state (EOS) experiments at pressures up to 10 Mbar in aluminum. Development of multidimensional radiation-magnetohydrodynamic codes, coupled with more accurate material models (e.g., quantum molecular dynamics calculations with density functional theory), has produced synergy between validating the simulations and guiding the experiments. Z is now routinely used to drive ICF capsule implosions (focusing on implosion symmetry and neutron production) and to perform HEDP experiments (including radiation-driven hydrodynamic jets, EOS, phase transitions, strength of materials, and detailed behavior of z-pinch wire-array initiation and implosion). This research is performed in collaboration with many other groups from around the world. A five-year project to enhance the capability and precision of Z, to be completed in 2007, will result in x-ray energies of nearly 3 MJ at x-ray powers >300 TW.
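For orientation (standard magnetostatics, not stated in the abstract above), the quoted field and pressure scales are mutually consistent through the magnetic pressure:

\[
P_B = \frac{B^2}{2\mu_0}, \qquad B = 1000~\mathrm{T} \;\Rightarrow\; P_B \approx \frac{(10^3)^2}{2\,(4\pi\times 10^{-7}~\mathrm{H/m})} \approx 4\times 10^{11}~\mathrm{Pa} \approx 4~\mathrm{Mbar},
\]

which matches the megabar scale cited for the direct-drive configuration.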
Terahertz radiation from optically induced plasmas on metal, semiconductor, and dielectric surfaces is compared to electron-hole plasma radiation from GaAs and Ge. Electro-optic sampling and electric-field probes measure radiated field waveforms and distributions up to 0.350 THz.
Gold nanocrystal (NC)/silica films are synthesized through self-assembly of water-soluble gold nanocrystal micelles and silica by sol-gel processing. Absorption and transmission spectra show a strong surface plasmon resonance (SPR) absorption peak at ≈520 nm. Angular excitation spectra of the surface plasmon show a steep dip in the reflectivity curve at ≈65°, depending on the thickness and refractive index of the gold NC/silica film. A potential SPR sensor with enhanced sensitivity can be realized based on these gold NC/silica films.
Cold spray, a new member of the thermal spray process family, can be used to prepare dense, thick metal coatings. It has tremendous potential as a spray-forming process. However, it is well known that significant cold work occurs during the cold spray deposition process. This cold work results in hard coatings but relatively brittle bulk deposits. This work investigates the mechanical properties of cold-sprayed aluminum and the effect of annealing on those properties. Cold spray coatings approximately 1 cm thick were prepared using three different feedstock powders: Valimet H-10, Valimet H-20, and Brodmann Flomaster. ASTM E8 tensile specimens were machined from these coatings and tested using standard tensile testing procedures. Each material was tested in two conditions: as-sprayed, and after a 300 °C, 22 h air anneal. The as-sprayed material showed high ultimate strength and low ductility, with <1% elongation. The annealed samples showed a reduction in ultimate strength but a dramatic increase in ductility, with up to 10% elongation. The annealed samples exhibited mechanical properties similar to those of wrought 1100 H14 aluminum. Microstructural examination and fractography clearly showed a change in fracture mechanism between the as-sprayed and annealed materials. These results indicate good potential for cold spray as a bulk-forming process.
Laser-induced breakdown spectroscopy (LIBS) was used in the evaluation of aerosol concentration in the exhaust of an oxygen/natural-gas glass furnace. Experiments showed that for a delay time of 10 µs and a gate width of 50 µs, the presence of CO₂ and changes in gas temperature affect the intensity of both continuum emission and the Na D lines. The intensity increased for the neutral Ca and Mg lines in the presence of 21% CO₂ when compared to 100% N₂, whereas the intensity of the Mg and Ca ionic lines decreased. An increase in temperature from 300 to 730 K produced an increase in both continuum emission and Na signal. These laboratory measurements were consistent with measurements in the glass furnace exhaust. Time-resolved analysis of the spark radiation suggested that differences in continuum radiation resulting from changes in bath composition are only apparent at long delay times. The changes in the intensity of ionic and neutral lines in the presence of CO₂ are believed to result from a higher free electron number density caused by the lower ionization energies of species formed during the spark decay process in the presence of CO₂. For the high Na concentration observed in the glass furnace exhaust, self-absorption of the spark radiation occurred. Power law regression was used to fit laboratory Na LIBS calibration data for sodium loadings, gas temperatures, and a CO₂ content representative of the furnace exhaust. Improvement of the LIBS measurement in this environment may be possible by evaluation of Na lines with weaker emission and through the use of shorter gate delay times.
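The abstract does not give the fitted form explicitly; a power-law calibration of this type is conventionally written as

\[
S = a\,C^{\,b},
\]

where S is the measured Na line signal, C the sodium loading, and a and b fit parameters (b < 1 would be expected in the presence of the self-absorption noted above).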
The junction temperature of AlGaN ultraviolet light-emitting diodes emitting at 295 nm is measured by using the temperature coefficients of the diode forward voltage and emission peak energy. The high-energy slope of the spectrum is used to measure the carrier temperature. A linear relation between junction temperature and current is found. Analysis of the experimental methods reveals that the diode forward-voltage method is the most accurate (±3 °C). A theoretical model for the dependence of the diode forward voltage (V_f) on junction temperature (T_j) is developed that takes into account the temperature dependence of the energy gap. A thermal resistance of 87.6 K/W is obtained with the device mounted with thermal paste on a heat sink.
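As a worked check (the thermal-resistance relation is standard; the 87.6 K/W value is from the abstract, while the 100 mW operating point is illustrative), and neglecting the small optical output power of a deep-UV LED:

\[
R_{th} = \frac{T_j - T_{hs}}{I\,V_f}, \qquad R_{th} = 87.6~\mathrm{K/W} \;\Rightarrow\; \Delta T_j \approx 87.6 \times 0.1 \approx 8.8~\mathrm{K} \text{ at } 100~\mathrm{mW} \text{ dissipation},
\]

where T_hs is the heat-sink temperature and I V_f the dissipated electrical power.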
The effect of temperature on the tensile properties of annealed 304L stainless steel and HERF 304L stainless steel forgings was determined by completing experiments over the moderate temperature range of −40 °F to 160 °F. Temperature effects were more significant in the annealed material than in the HERF material. The tensile yield strength of the annealed material at −40 °F averaged 22% above the room-temperature value, and at 160 °F averaged 13% below it. The tensile yield strength of the three different-geometry HERF forgings at −40 °F and 160 °F changed less than 10% from the room-temperature value. The ultimate tensile strength was more temperature dependent than the yield strength. The annealed material averaged 36% above and 14% below the room-temperature ultimate strength at −40 °F and 160 °F, respectively. The HERF forgings exhibited similar, slightly smaller changes in ultimate strength with temperature. For completeness and illustrative purposes, the stress-strain curves are included for each of the tensile experiments conducted. The results of this study prompted a continuation study, documented separately, to determine the temperature dependence of the tensile properties of welded 304L stainless steel.
Several groups of plastic-molded CD4011 devices were electrically tested as part of an Army dormant storage program. For this test, parts had been in storage in missile containers for 4.5 years. Eight of the 1200 parts failed the electrical tests and were subsequently analyzed to determine the cause of the failures. The root cause was found to be corrosion of the unpassivated Al bondpads. No significant attack of the passivated Al traces was found. Seven of the eight failures occurred in parts stored on a prepositioning ship (the Jeb Stuart), suggesting a link between the external environment and the observed corrosion.
Social and ecological scientists emphasize that effective natural resource management depends in part on understanding the dynamic relationship between the physical and non-physical processes associated with resource consumption. In this case, the physical processes include hydrological, climatological and ecological dynamics, and the non-physical processes include social, economic and cultural dynamics among the humans who consume the resources. This project represents a case study aimed at modeling coupled social and physical processes in a single decision support system. In central New Mexico, individual land use decisions over the past five decades have resulted in the gradual transformation of the Middle Rio Grande Valley from a primarily rural agricultural landscape to a largely urban one. In the arid southwestern U.S., the aggregate impact of individual decisions about land use is uniquely important to understand, because scarce hydrological resources will likely limit the viability of resulting growth and development trajectories. This decision support tool is intended to help planners in the area look forward in their efforts to create a collectively defined 'desired' social landscape in the Middle Rio Grande. Our research question explored the ways in which socio-cultural values affect decisions regarding that landscape and associated land use. Because of the constraints hydrological resources place on land use, we first assumed that water use, as embodied in water rights, was a reasonable surrogate for land use. We expected that modeling the movement of water rights over time and across water source types (surface and ground) would provide planners with insight into the possibilities for certain types of decisions regarding social landscapes, and into the impact those same decisions would have on those landscapes. We found that water rights transfer data in New Mexico are too incomplete and inaccurate to use as the basis for the model. Furthermore, because of this lack of accuracy and completeness, water rights ownership was a poor indicator of water and land usage habits and patterns. We also found that the commitment among users in the Middle Rio Grande Valley is to an agricultural lifestyle, not to a community or place. This commitment is conditioned primarily by generational cohort and past experience; if conditions warrant, many would be willing to practice the lifestyle elsewhere. A related finding was that the pressure to sell sometimes came not from the putative price of the land but from the taxes on it, which were, in turn, a function of the level of urbanization of the neighborhood; this urbanization degraded the quality of the agricultural lifestyle. The project also yielded some valuable lessons regarding the model development process. A facilitative and collaborative style (rather than a top-down, directive style) was most productive with the interdisciplinary, inter-institutional team that worked on the project. This allowed for the emergence of a process model that combined small, discipline- and/or task-specific subgroups with larger, integrating team meetings. The project objective was to develop a model that could be used to run test scenarios exploring the potential impact of different policy options. We achieved that objective, although not with the level of success or modeling fidelity that we had hoped for.
This report describes the results of the test scenarios only superficially, since a more complete analysis would require more time and effort. Our greatest obstacle to successful completion of the project was that the required data were sparse, of poor quality, or nonexistent. Moreover, we found no similar modeling or research efforts taking place at either the state or local level. This leads to a key finding of this project: state and local policy decisions regarding land use, development, urbanization, and water resource allocation are being made with minimal data and without the benefit of economic or social policy analysis.
Warm dense matter is the region in the phase space of density and temperature where the thermal, Fermi, and Coulomb energies are approximately equal. The lack of a dominating scale or physical behavior makes it challenging to model the physics to high fidelity. For Sandia, a fundamental understanding of this region is important because our experimental high energy density physics (HEDP) programs need high-fidelity descriptive and predictive modeling. We show that multi-scale simulations of macroscopic physical phenomena now have predictive capability even for difficult but ubiquitous materials such as stainless steel, a transition-metal alloy.
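A common way to state this regime (standard definitions, not taken from the abstract above) is that the electron degeneracy parameter and the Coulomb coupling parameter are both of order unity:

\[
\theta = \frac{k_B T}{E_F} \sim 1, \qquad \Gamma = \frac{e^2/(4\pi\varepsilon_0 a)}{k_B T} \sim 1, \qquad a = \left(\frac{3}{4\pi n}\right)^{1/3},
\]

where E_F is the electron Fermi energy and a the mean interparticle spacing at number density n; it is precisely because no one of these energies dominates that standard plasma or condensed-matter expansions break down.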
Imagine free-standing flexible membranes with highly aligned arrays of carbon nanotubes (CNTs) running through their thickness. Perhaps with both ends of the CNTs open for highly controlled nanofiltration? Or CNTs at heights uniformly above a polymer membrane for a flexible array of nanoelectrodes or field emitters? How about CNT films with incredible amounts of accessible surface area for analyte adsorption? These self-assembled crystalline nanotubes consist of multiple layers of graphene sheets rolled into concentric cylinders. Tube diameters (3-300 nm), inner-bore diameters (2-15 nm), and lengths (nanometers to microns) are controlled to tailor physical, mechanical, and chemical properties. We proposed to explore growth and characterize nanotube arrays to help determine their exciting functionality for Sandia applications. Thermal chemical vapor deposition growth in a furnace nucleates from a metal catalyst. Ordered arrays grow using templates from self-assembled hexagonal arrays of nanopores in anodized aluminum oxide. Polymeric binders can mechanically hold the CNTs in place for polishing, lift-off, and membrane formation. The stiffness and the electrical and thermal conductivities of CNTs make them ideally suited for a wide variety of possible applications. Large-area, highly accessible gas-adsorbing carbon surfaces, superb cold-cathode field emission, and unique nanoscale geometries can lead to advanced microsensors using analyte adsorption, arrays of functionalized nanoelectrodes for enhanced electrochemical detection of biological/explosive compounds, or mass-ionizers for gas-phase detection. Materials studies involving membrane formation may lead to exciting breakthroughs in nanofiltration/nanochromatography for the separation of chemical and biological agents. With controlled nanofilter sizes, ultrafiltration will be viable to separate and preconcentrate viruses and many strains of bacteria for 'down-stream' analysis.
Chemical microsensors rely on partitioning of airborne chemicals into films to collect and measure trace quantities of hazardous vapors. Polymer sensor coatings used today are typically slow to respond and difficult to apply reproducibly. The objective of this project was to produce a durable sensor coating material based on graphitic nanoporous carbon (NPC), a new material first studied at Sandia, for collection and detection of volatile organic compounds (VOC), toxic industrial chemicals (TIC), chemical warfare agents (CWA), and nuclear processing precursors (NPP). Preliminary studies using NPC films on exploratory surface-acoustic-wave (SAW) devices and as a µChemLab membrane preconcentrator suggested that NPC may outperform existing, irreproducible coatings for SAW sensor and µChemLab preconcentrator applications. Success of this project will provide a strategic advantage to the development of a robust, manufacturable, highly sensitive chemical microsensor for public health, industrial, and national security needs. We use pulsed-laser deposition to grow NPC films at room temperature with negligible residual stress; hence, they can be deposited onto nearly any substrate material to any thickness. Controlled deposition yields reproducible NPC density, morphology, and porosity, without any discernible variation in surface chemistry. NPC coatings >20 µm thick with density <5% that of graphite have been demonstrated. NPC can be 'doped' with nearly any metal during growth to provide further enhancements in analyte detection and selectivity. Optimized NPC-coated SAW devices were compared directly to commonly used polymer-coated SAWs for sensitivity to a variety of VOC, TIC, CWA, and NPP. For every analyte tested, NPC outperforms each polymer coating by multiple orders of magnitude, with improvements ranging from 10³ to 10⁸ times greater detection sensitivity. NPC-coated SAW sensors appear capable of detecting most analytes tested at concentrations below parts-per-billion. In addition, the graphitic nature of NPC enables thermal stability >600 °C, several hundred degrees higher than the polymers. This superior thermal stability will enable higher-temperature preconcentrator operation, as well as greatly prolonged device reliability, since polymers tend to degrade with time and repeated thermal cycling.
The convergence of nanoscience and biotechnology has opened the door to the integration of a wide range of biological molecules and processes with synthetic materials and devices. A primary biomolecule of interest has been DNA, based upon its role as information storage in living systems as well as its ability to withstand a wide range of environmental conditions. DNA also offers unique chemistries and interacts with a range of biomolecules, making it an ideal component in biological sensor applications. The primary goal of this project was to develop methods that utilize in vitro DNA synthesis to provide spatial localization of nanocrystal quantum dots (nQDs). To accomplish this goal, three specific technical objectives were addressed: (1) attaching nQDs to DNA nucleotides, (2) demonstrating the synthesis of nQD-DNA strands in bulk solution, and (3) optimizing the ratio of unlabeled to nQD-labeled nucleotides. DNA nucleotides were successfully attached to nQDs using the biotin-streptavidin linkage. Synthesis of 450-nm-long, nQD-coated DNA strands was demonstrated using a DNA template and the polymerase chain reaction (PCR)-based method of DNA amplification. Modifications to the synthesis process and conditions were subsequently used to synthesize 2-µm-long linear nQD-DNA assemblies. In the case of the 2-µm structures, both the ratio of streptavidin-coated nQDs to biotinylated dCTP and the ratio of streptavidin-coated nQD-dCTPs to unlabeled dCTPs affected the ability to synthesize the nQD-DNA assemblies. Overall, these proof-of-principle experiments demonstrated the successful synthesis of nQD-DNA using DNA templates and in vitro replication technologies. Continued development of this technology may enable rapid, spatial patterning of semiconductor nanoparticles with angstrom-level resolution, as well as optically active probes for DNA and other biomolecular analyses.
A combined experimental/modeling study was conducted to better understand the critical role of gas-surface interactions in rarefied gas flows. An experimental chamber and supporting diagnostics were designed and assembled to allow simultaneous measurements of gas heat flux and inter-plate gas density profiles in an axisymmetric, parallel-plate geometry. Measurements of gas density profiles and heat flux are made under identical conditions, eliminating an important limitation of earlier studies. The use of in situ electron-beam fluorescence is demonstrated as a means to measure gas density profiles, although additional work is required to improve the accuracy of this technique. Heat flux is inferred from temperature-drop measurements using precision thermistors. The system can be operated with a variety of gases (monatomic, diatomic, polyatomic, mixtures) and carefully controlled, well-characterized surfaces of different types (metals, ceramics) and conditions (smooth, rough). The measurements reported here are for 304 stainless steel plates with a standard machined surface coupled with argon, helium, and nitrogen. The resulting heat-flux and gas-density-profile data are analyzed using analytic and computational models to show that a simple Maxwell gas-surface interaction model is adequate to represent all of the observations. Based on this analysis, thermal accommodation coefficients for 304 stainless steel coupled with argon, nitrogen, and helium are determined to be 0.88, 0.80, and 0.38, respectively, with an estimated uncertainty of ±0.02.
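For reference, the Maxwell gas-surface interaction model used in this analysis is standard: a fraction α of incident molecules is re-emitted diffusely, fully thermalized to the surface temperature, while the remaining fraction 1 − α reflects specularly. The thermal accommodation coefficient is defined by

\[
\alpha = \frac{E_i - E_r}{E_i - E_w},
\]

where E_i and E_r are the incident and reflected energy fluxes and E_w is the reflected flux that would result from complete accommodation to the wall temperature.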
Time domain reflectometry (TDR) operates by propagating a radar-frequency electromagnetic pulse down a transmission line while monitoring the reflected signal. As the electromagnetic pulse propagates along the transmission line, it is subject to velocity changes governed by the dielectric properties of the media along the line (e.g., air, water, sediment), to reflection at dielectric discontinuities (e.g., an air-water or water-sediment interface), and to attenuation by electrically conductive materials (e.g., salts, clays). Taken together, these characteristics provide a basis for integrated stream monitoring; specifically, concurrent measurement of stream stage, channel profile, and aqueous conductivity. Here, we make a novel application of TDR within the context of stream monitoring. Efforts toward this goal followed three critical phases. First, a means of extracting the desired stream parameters from measured TDR traces was required. Analysis was complicated by the fact that interface location and aqueous conductivity vary concurrently and multiple interfaces may be present at any time. For this reason, a physically based multisection model employing the S11 scattering function and Cole-Cole parameters for dielectric dispersion and loss was developed to analyze acquired TDR traces. Second, we explored the capability of this multisection modeling approach for interpreting TDR data acquired from complex environments, such as those encountered in stream monitoring. A series of laboratory tank experiments was performed in which the depth of water, depth of sediment, and conductivity were varied systematically. Comparisons between modeled and independently measured data indicate that TDR measurements can be made with an accuracy of ±3.4×10⁻³ m for sensing the location of an air/water or water/sediment interface and ±7.4% of actual for the aqueous conductivity. Third, monitoring stations were sited on the Rio Grande and Paria rivers to evaluate performance of the TDR system under normal field conditions. At the Rio Grande site (near Central Bridge in Albuquerque, New Mexico), continuous monitoring of stream stage and aqueous conductivity was performed for 6 months. Additionally, channel profile measurements were acquired at 7 locations across the river. At the Paria site (near Lee's Ferry, Arizona), stream stage and aqueous conductivity data were collected over a 4-month period. Comparisons drawn between our TDR measurements and USGS gage data indicate that the stream stage is accurate within ±0.88 cm, conductivity is accurate within ±11% of actual, and channel profile measurements agree within ±1.2 cm.
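For reference, the Cole-Cole dispersion referenced above is commonly written in the following form (one standard variant, including a conductive-loss term; the report's exact parameterization may differ):

\[
\varepsilon^*(\omega) = \varepsilon_\infty + \frac{\varepsilon_s - \varepsilon_\infty}{1 + (j\omega\tau)^{1-\alpha}} - j\,\frac{\sigma_{dc}}{\omega\varepsilon_0},
\]

where ε_s and ε_∞ are the static and high-frequency permittivities, τ the relaxation time, α the dispersion-broadening exponent, and σ_dc the static conductivity responsible for the attenuation exploited in the conductivity measurement.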
Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent across numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins by leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis performed using existing incident databases specific to Pantex weapons operations indicated systemic issues associated with operating procedures, which undergo notably less development rigor than other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation are delineated.
Deep X-ray lithography on PMMA resist is used in the LIGA process. The resist is exposed to synchrotron X-rays through a patterned mask and is then developed in a liquid developer to make high-aspect-ratio microstructures. The limitations in dimensional accuracy of LIGA-generated microstructures originate from many sources, including synchrotron and X-ray physics, thermal and mechanical properties of mask and resist, and the kinetics of the developer. This work addresses the thermal analysis and temperature rise of the mask-resist assembly during exposure in air at the Advanced Light Source (ALS) synchrotron. The concern is that dimensional errors generated at the mask and the resist due to thermal expansion will lower the accuracy of the lithography. We have developed a three-dimensional finite-element model of the mask and resist assembly that includes a mask with absorber, a resist with substrate, three metal holders, and a water-cooling block. We employed the LIGA exposure-development software LEX-D to calculate the volumetric heat sources generated in the assembly by X-ray absorption, and the commercial software ABAQUS to calculate heat transfer, including thermal conduction inside the assembly, natural and forced convection, and thermal radiation at the assembly's outer and inner surfaces. The calculated maximum assembly temperatures have been compared with temperature measurements conducted at the ALS. In some of these experiments, additional cooling of the assembly was produced by forced nitrogen flow ('nitrogen jets') directed at the mask surface. The temperature rise in the silicon mask and the mask holder comes directly from X-ray absorption, but the nitrogen jets carry away a significant portion of the heat energy from the mask surface, while natural convection carries away a negligibly small amount of energy from the holder. The temperature rise in the PMMA resist comes mainly from heat conducted from the silicon substrate backward to the resist and from the inner cavity air forward to the resist; direct X-ray absorption is only secondary. Therefore, reducing the heat conducted from both the substrate and the cavity air to the resist is essential. An improved water-cooling block is expected to carry away most of the heat energy along the main heat-conduction path, leaving the resist at a favorable working temperature.
Acts of terrorism could have a range of broad impacts on an economy, including changes in consumer (or demand) confidence and in the ability of productive sectors to respond to changes. As a first step toward a model of terrorism-based impacts, we develop here a model of production and employment that characterizes dynamics in ways useful toward understanding how terrorism-based shocks could propagate through the economy; subsequent models will introduce the role of savings and investment into the economy. We use Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate for validation purposes that a single-firm economy converges to the known monopoly equilibrium price, output, and employment levels, while multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment. However, we find that competition also leads to churn by consumers seeking lower prices, making it difficult for firms to optimize with respect to wages, prices, and employment levels. Thus, competitive firms generate market 'noise' in the steady state as they search for prices and employment levels that will maximize profits. In the context of this model, not only could terrorism depress overall consumer confidence and economic activity, but terrorist acts could also cause normal short-run dynamics to be misinterpreted by consumers as a faltering economy.
The purpose of this LDRD is multifold. First, we are interested in preparing new homogeneous catalysts that can be used in the oligomerization of ethylene and in better understanding commercially important systems. Second, we are interested in attempting to support these new homogeneous catalysts in the pores of nano- or mesoporous materials in order to force new and unusual distributions of α-olefins to be formed during the oligomerization. Thus the overall purpose is to prepare new catalytic species and possibly to control the active-site architecture in order to yield certain desired products during a catalytic reaction, much like nature does with enzymes. In order to rationally synthesize catalysts, it is imperative to comprehend the function of the various components of the catalyst. In heterogeneous systems, it is of utmost importance to know how a support interacts with the active site of the catalyst. In fact, in the catalysis world, this lack of fundamental understanding of the relationship between active site and support is the single largest reason catalysis is considered an 'empirical' or 'black box' science rather than a well-understood one. In this work we will prepare novel ethylene oligomerization catalysts, which are normally P-O chelated homogeneous complexes, with new ligands that replace P with a stable carbene. We will also examine a commercial catalyst system and investigate its active site via X-ray crystallography. We will also attempt to support these materials inside the pores of nano- and mesoporous materials. Essentially, we will tailor the size and scale of the catalyst active site and its surrounding environment to match the size of the molecular product(s) we wish to make. The overall goal of the study is to prepare new homogeneous catalysts and, if we succeed in supporting them, to examine the effects that steric constraints and pore structures can have on growing oligomer chains.
As part of the U.S. Department of Energy (DOE) Office of Industrial Technologies (OIT) Industries of the Future (IOF) Forest Products research program, the mechanisms of particle deposition and the properties of deposits that form in the convection passes of recovery boilers were investigated. Research from experimental facilities at Sandia National Laboratories, the Institute of Paper Science and Technology (IPST), and the University of Toronto (U of T) was coordinated into a single effort to define the controlling mechanisms and rates of deposition. Deposition rates were recorded on a volumetric and mass basis in a Sandia facility for particle sizes in the range of 0.1 to 150 µm. Deposit thickness, mass, spectral emissivity, thermal conductivity, surface temperature, and apparent density were monitored simultaneously and in situ on instrumented probes that allow determination of heat flux and probe surface temperature. Particle composition and mass deposition rates were also recorded in a U of T facility for particle sizes in the range of 100 to 600 µm. These measurements allowed determination of the liquid content and sticking efficiency of carryover particles that inertially impact a deposition probe. In addition, information on particulates, stable gas species, gas temperature, and velocity was obtained from field tests in an operating recovery boiler. The results were used to develop algorithms appropriate for use in computer codes that simulate recovery boilers. Representative calculations were performed using B&W's comprehensive recovery boiler model to demonstrate the use of the algorithms in such computer codes. Comparisons between observations in commercial systems and model predictions were made to identify algorithm strengths and weaknesses.
Property-based testing is a testing technique that evaluates executions of a program. The method checks that specifications, called properties, hold throughout the execution of the program. TASpec is a language used to specify these properties. This paper compares some attributes of the language with the specification patterns used for model-checking languages, and then presents descriptions of properties that can be used to detect common security flaws in programs. This report describes the results of a one-year research project at the University of California, Davis, funded by a University Collaboration LDRD entitled 'Property-based Testing for Cyber Security Assurance'.
The Matrixed Business Support Comparison Study reviewed the current matrixed Chief Financial Officer (CFO) division staff models at Sandia National Laboratories. There were two primary drivers of this analysis: (1) the increasing number of financial staff matrixed to mission customers and (2) the desire to further understand the matrix process and the opportunities and challenges it creates.
At Sandia National Laboratories, miniaturization dominates future hardware designs, and technologies that address the manufacture of micro-scale to nano-scale features are in demand. Currently, Sandia is developing technologies such as photolithography/etching (e.g., silicon MEMS), LIGA, micro-electro-discharge machining (micro-EDM), and focused ion beam (FIB) machining to fulfill some of the component design requirements. Some processes are more encompassing than others, but each process has its niche, and all performance characteristics cannot be met by one technology. For example, micro-EDM creates highly accurate micro-scale features, but the choice of materials is limited to conductive materials. With silicon-based MEMS technology, highly accurate nano-scale integrated devices are fabricated, but the mechanical performance may not meet the requirements. Femtosecond laser processing has the potential to fulfill a broad range of design demands, both in terms of feature resolution and material choices, thereby improving fabrication of micro-components. One of the unique features of femtosecond lasers is the ability to ablate nearly all materials with little heat transfer, and therefore little melting or damage, to the surrounding material, resulting in highly accurate micro-scale features. Another unique aspect of femtosecond radiation is the ability to create localized structural changes through nonlinear absorption processes. By scanning the focal point within a transparent material, we can create three-dimensional waveguides for biological sensors and optical components. In this report, we utilized the special characteristics of femtosecond laser processing for microfabrication. Special emphasis was placed on the laser-material interactions to gain a science-based understanding of the process and to determine the process parameter space for laser processing of metals and glasses. Two areas were investigated: laser ablation of ferrous alloys, and direct-write optical waveguides and integrated optics in bulk glass. The effects of laser and environmental parameters on aspects such as removal rate, feature size, feature definition, and ablation angle during the ablation of metals were studied. In addition, the manufacturing requirements for component fabrication, including precision and reproducibility, were investigated. The effect of laser processing conditions on the optical properties of direct-written waveguides, and an unusual laser-induced birefringence in an optically isotropic glass, are reported. Several integrated optical devices, including a Y coupler, a directional coupler, and a Mach-Zehnder interferometer, were made to demonstrate the simplicity and flexibility of this technique in comparison to conventional waveguide fabrication processes.
We have studied the feasibility of using the 3D fully electromagnetic implicit hybrid particle code LSP (Large Scale Plasma) to model laser-plasma interactions with dense, compressed plasmas like those created with Z, and like those that might be created with the planned ZR. We have determined that, with the proper additional physics and numerical algorithms developed during the LDRD period, LSP was transformed into a unique platform for studying such interactions. Its uniqueness stems from its ability to consider realistic compressed densities and low initial target temperatures (if required), an ability that conventional PIC codes do not possess. Through several test cases, validations, and applications to next-generation machines described in this report, we have established the suitability of the code for examining fast-ignition issues for ZR, as well as other high-density laser-plasma interaction problems relevant to the HEDP program at Sandia (e.g., backlighting).
The combination of phase diversity and adaptive optics offers great flexibility. Phase-diverse images can be used to diagnose aberrations and then provide feedback control to the optics to correct the aberrations. Alternatively, phase diversity can be used to partially compensate for aberrations during post-detection image processing. The adaptive optic can produce simple defocus or more complex types of phase diversity. This report presents an analysis, based on numerical simulations, of the efficiency of different modes of phase diversity with respect to compensating for specific aberrations during post-processing. It also comments on the efficiency of post-processing versus direct aberration correction. The construction of a benchtop optical system that uses a membrane mirror as an active optic is described, and the results of characterization tests performed on that system are presented. The work described in this report was conducted to explore the use of adaptive optics and phase diversity imaging for responsive space applications.
Sandia National Laboratories was tasked with developing the Defense Nuclear Material Stewardship Integrated Inventory Information Management System (IIIMS) with the sponsorship of NA-125.3 and the concurrence of DOE/NNSA field and area offices. The purpose of IIIMS was to modernize nuclear materials management information systems at the enterprise level. Projects over the course of several years attempted to spearhead this modernization. The scope of IIIMS was broken into broad enterprise-oriented materials management and materials forecasting. The IIIMS prototype was developed to allow multiple participating user groups to explore nuclear material requirements and needs in detail. The purpose of material forecasting was to determine nuclear material availability over a 10-to-15-year period in light of the dynamic nature of nuclear materials management. Formal DOE Directives (requirements) were needed to direct IIIMS efforts but were never issued, and the project has been halted. If the project is restarted, duplicating or re-engineering the activities from 1999 to 2003 will be unnecessary; in fact, future initiatives can build on previous work. IIIMS requirements should be structured to provide high confidence that discrepancies are detected and that classified information is not divulged. Enterprise-wide materials management systems maintained by the military can be used as overall models on which to base IIIMS implementation concepts.
This report documents the author's efforts in the deterministic modeling of copper-sulfidation corrosion on non-planar substrates such as diodes and electrical connectors. A new framework based on Goma was developed for multi-dimensional modeling of atmospheric copper-sulfidation corrosion on non-planar substrates. In this framework, the moving sulfidation front is explicitly tracked by treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and repeatedly performing re-meshing using CUBIT and re-mapping using MAPVAR. Three one-dimensional studies were performed to verify the framework in asymptotic regimes. Limited model validation was also carried out by comparing computed copper-sulfide thickness with experimental data. The framework was first demonstrated in modeling one-dimensional copper sulfidation with charge separation. It was found that both the thickness of the space-charge layers and the electrical potential at the sulfidation surface decrease rapidly as the Cu₂S layer thickens initially but eventually reach equilibrium values as the Cu₂S layer becomes sufficiently thick; it was also found that electroneutrality is a reasonable approximation and that the electro-migration flux may be estimated by using the equilibrium potential difference between the sulfidation and annihilation surfaces when the Cu₂S layer is sufficiently thick. The framework was then employed to model copper sulfidation in the solid-state-diffusion controlled regime (i.e., stage II sulfidation) on a prototypical diode until a continuous Cu₂S film was formed on the diode surface. The framework was also applied to model copper sulfidation on an intermittent electrical contact between a gold-plated copper pin and a gold-plated copper pad; the presence of Cu₂S was found to raise the effective electrical resistance drastically. Lastly, future research needs in modeling atmospheric copper sulfidation are discussed.
Wind turbine system reliability is a critical factor in the success of a wind energy project. Poor reliability directly affects both the project's revenue stream through increased operation and maintenance (O&M) costs and reduced availability to generate power due to turbine downtime. Indirectly, the acceptance of wind-generated power by the financial and developer communities as a viable enterprise is influenced by the risk associated with the capital equipment reliability; increased risk, or at least the perception of increased risk, is generally accompanied by increased financing fees or interest rates. Cost of energy (COE) is a key project evaluation metric, both in commercial applications and in the U.S. federal wind energy program. To reflect this commercial reality, the wind energy research community has adopted COE as a decision-making and technology evaluation metric. The COE metric accounts for the effects of reliability through levelized replacement cost and unscheduled maintenance cost parameters. However, unlike the other cost contributors, such as initial capital investment and scheduled maintenance and operating expenses, costs associated with component failures are necessarily speculative. They are based on assumptions about the reliability of components that in many cases have not been operated for a complete life cycle. Due to the logistical and practical difficulty of replacing major components in a wind turbine, unanticipated failures (especially serial failures) can have a large impact on the economics of a project. The uncertainty associated with long-term component reliability has direct bearing on the confidence level associated with COE projections. In addition, wind turbine technology is evolving. New materials and designs are being incorporated in contemporary wind turbines with the ultimate goal of reducing weight, controlling loads, and improving energy capture. While the goal of these innovations is reduction in the COE, there is a potential impact on reliability whenever new technologies are introduced. While some of these innovations may ultimately improve reliability, in the short term, the technology risks and the perception of risk will increase. The COE metric used by researchers to evaluate technologies does not address this issue. This paper outlines the issues relevant to wind turbine reliability for wind turbine power generation projects. The first sections describe the current state of the industry, identify the cost elements associated with wind farm O&M and availability and discuss the causes of uncertainty in estimating wind turbine component reliability. The latter sections discuss the means for reducing O&M costs and propose O&M related research and development efforts that could be pursued by the wind energy research community to reduce COE.
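For context, the COE metric discussed above has commonly been expressed in the federal wind program in a form like the following (a sketch using the conventional parameter names, not quoted from this paper):

\[
\mathrm{COE} = \frac{\mathrm{FCR}\cdot\mathrm{ICC} + \mathrm{LRC} + \mathrm{O\&M}}{\mathrm{AEP}},
\]

where FCR is the fixed charge rate, ICC the initial capital cost, LRC the levelized replacement cost, O&M the annual operations and maintenance cost, and AEP the annual energy production. Reliability enters through LRC and the unscheduled portion of O&M, which is why uncertainty in component failure rates translates directly into uncertainty in COE projections.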
This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
The Hydrogen Futures Simulation Model (H2Sim) is a high-level, internally consistent, strategic tool for exploring the options of a hydrogen economy. Once the user understands the basic functions, H2Sim can be used to examine a wide variety of scenarios, such as testing different options for the hydrogen pathway, altering key assumptions regarding hydrogen production, storage, transportation, and end-use costs, and determining the effectiveness of various options for carbon mitigation. This User's Guide explains to the first-time user how to run the model.
In preparation for developing a Z-pinch IFE power plant, the interaction of ferritic steel with the coolant, FLiBe, must be explored. The Sandia National Laboratories Fusion Technology Department was asked to drop molten ferritic steel into FLiBe in a vacuum system and to determine the gas byproducts and the ability to recycle the steel. We tried various methods of resistively heating ferritic steel using available power supplies and easily obtained heaters. Although we could melt the steel, we could not cause a drop to fall. This report describes the various experiments that were performed and includes some suggestions and materials needed for success. Although the steel was easily melted, it was not possible to drip the molten steel into a FLiBe pool. Levitation melting of the drop is likely to be more successful.
To establish strength criteria for Big Hill salt, a series of quasi-static triaxial compression tests was completed. This report summarizes the test methods, set-up, relevant observations, and results. The triaxial compression tests established dilatant damage criteria for Big Hill salt in terms of stress invariants (I₁ and J₂) and principal stresses (σ_a,d and σ₃), respectively: √J₂ = 1746 − 1320.5·exp(−0.00034·I₁) (psi); σ_a,d = 2248 + 1.25·σ₃ (psi). For a confining pressure of 1,000 psi, the dilatant damage strength of Big Hill salt is identical to the typical salt strength (√J₂ = 0.27·I₁). However, for higher confining pressures, the typical strength criterion overestimates the damage strength of Big Hill salt.
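A minimal numerical check of the two criteria as reported (the coefficients are from the abstract; the stress state chosen is illustrative):

```python
import math

def sqrtJ2_damage_psi(I1_psi):
    """Big Hill salt dilatant damage criterion in stress invariants (from the report)."""
    return 1746.0 - 1320.5 * math.exp(-0.00034 * I1_psi)

def sigma_ad_damage_psi(sigma3_psi):
    """Big Hill salt dilatant damage criterion in principal stresses (from the report)."""
    return 2248.0 + 1.25 * sigma3_psi

I1 = 3000.0  # illustrative first stress invariant, psi
print(f"Big Hill criterion: sqrt(J2) = {sqrtJ2_damage_psi(I1):.0f} psi")
print(f"Typical salt:       sqrt(J2) = {0.27 * I1:.0f} psi")
print(f"Axial damage stress at 1000 psi confinement: {sigma_ad_damage_psi(1000.0):.0f} psi")
```

At this illustrative stress state the typical-salt line sits below the Big Hill curve; the overestimation noted above appears only at higher confining pressures.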
Poly(ethylene oxide) (PEO) is the quintessential biocompatible polymer. Due to its ability to form hydrogen bonds, it is soluble in water, and yet it is uncharged and relatively inert. It is being investigated for use in a wide range of biomedical and biotechnical applications, including the prevention of protein adhesion (biofouling), controlled drug delivery, and tissue scaffolds. PEO has also been proposed for use in novel polymer hydrogel nanocomposites with superior mechanical properties. However, the phase behavior of PEO in water is highly anomalous and is not addressed by current theories of polymer solutions. The effective interactions between PEO and water are very concentration dependent, unlike those in other polymer/solvent systems, due to water-water and water-PEO hydrogen bonds. An understanding of this anomalous behavior requires a careful examination of PEO liquids and solutions at the molecular level. We performed massively parallel molecular dynamics (MD) simulations and self-consistent Polymer Reference Interaction Site Model (PRISM) calculations on PEO liquids. We also initiated MD studies on PEO/water solutions with and without an applied electric field. This work is summarized in three parts devoted to: (1) a comparison of MD simulations, theory, and experiment on PEO liquids; (2) the implementation of water potentials into the LAMMPS MD code; and (3) a theoretical analysis of the effect of an applied electric field on the phase diagram of polymer solutions.
A vegetation study was conducted in Technical Area 3 at Sandia National Laboratories, Albuquerque, New Mexico in 2003 to assist in the design and optimization of vegetative soil covers for hazardous, radioactive, and mixed waste landfills at Sandia National Laboratories/New Mexico and Kirtland Air Force Base. The objective of the study was to obtain site-specific vegetative input parameters for the one-dimensional code UNSAT-H and to identify suitable, diverse native plant species for use on vegetative soil covers that will persist indefinitely as a climax ecological community with little or no maintenance. The identification and selection of appropriate native plant species is critical to the proper design and long-term performance of vegetative soil covers. Major emphasis was placed on the acquisition of representative, site-specific vegetation data. Vegetative input parameters measured in the field during this study include root depth, root length density, and percent bare area. Site-specific leaf area index (LAI) was not obtained because there was no suitable platform for measuring leaf area during the 2003 growing season, due to the severe drought that has persisted in New Mexico since 1999. Regional LAI data were obtained from two unique desert biomes in New Mexico, the Sevilleta Wildlife Refuge and the Jornada Research Station.
A decomposition chemistry and heat transfer model to predict the response of removable epoxy foam (REF) exposed to fire-like heat fluxes is described. The epoxy foam was created using a perfluorohexane blowing agent with a surfactant. The model includes desorption of the blowing agent and surfactant, thermal degradation of the epoxy polymer, polymer fragment transport, and vapor-liquid equilibrium. An effective thermal conductivity model describes changes in thermal conductivity with reaction extent. Pressurization is modeled assuming: (1) no strain in the condensed-phase, (2) no resistance to gas-phase transport, (3) spatially uniform stress fields, and (4) no mass loss from the system due to venting. The model has been used to predict mass loss, pressure rise, and decomposition front locations for various small-scale and large-scale experiments performed by others. The framework of the model is suitable for polymeric foams with absorbed gases.
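Under the stated assumptions (no condensed-phase strain, spatially uniform stress, no venting), the pressurization can be sketched with an ideal-gas closure (a hedged reading, not necessarily the report's exact formulation):

\[
P(t) = \frac{R\,\bar{T}(t)\sum_i n_i(t)}{V_g(t)},
\]

where n_i(t) are the moles of each gas species produced by blowing-agent and surfactant desorption and polymer degradation, \(\bar{T}\) the gas temperature, and V_g the gas-accessible volume.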
In the search for 'good' parallel programming environments for Sandia's current and future parallel architectures, we revisit a long-standing open question: can the PRAM parallel algorithms designed by theoretical computer scientists over the last two decades be implemented efficiently? This open question has co-existed with ongoing efforts in the HPC community to develop practical parallel programming models that can simultaneously provide ease of use, expressiveness, performance, and scalability. Unfortunately, no single model has met all these competing requirements. Here we propose a parallel programming environment, PRAM C, to bridge the gap between theory and practice. This is an attempt to provide an affirmative answer to the PRAM question and to satisfy these competing practical requirements. The environment consists of a new thin runtime layer and an ANSI C extension. The C extension has two control constructs and one additional data type concept, 'shared'. This C extension should enable easy translation from PRAM algorithms to real parallel programs, much like the translation from sequential algorithms to C programs. The thin runtime layer bundles fine-grained communication requests into coarse-grained communication to be served by message passing. Although the PRAM represents SIMD-style fine-grained parallelism, a stand-alone PRAM C environment can support both fine-grained and coarse-grained parallel programming in either a MIMD or SPMD style, interoperate with existing MPI libraries, and use existing hardware. The PRAM C model can also be integrated easily with existing models. Unlike related efforts that propose innovative hardware with the goal of realizing the PRAM, ours can be a pure software solution intended to provide a practical programming environment for existing parallel machines; it also has the potential to perform well on future parallel architectures.
A laser hazard analysis and safety assessment was performed for the LASIRIS™ Model MAG-501L-670M-1000-45°-K diode laser associated with the High Resolution Pulse Scanner, based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers, and ANSI Standard Z136.6-2000, American National Standard for the Safe Use of Lasers Outdoors. The laser was evaluated for both indoor and outdoor use.
A particular engineering aspect of distributed sensor networks that has not received adequate attention is the system-level hardware architecture of the individual nodes of the network. A novel hardware architecture based on the idea of task-specific modular computing is proposed to provide both the high flexibility and the low power consumption required for distributed sensing solutions. The power consumption of the architecture is mathematically analyzed against a traditional approach, and guidelines are developed for application scenarios that would benefit from using this new design. Furthermore, a method of decentralized control for the modular system is developed and analyzed. Finally, a few policies for power minimization in the decentralized system are proposed and analyzed.
The effect of polymer-polymer and solvent-polymer interactions on the interdiffusion of a solvent into an entangled polymer matrix was studied. The state of the polymer was changed from melt to glassy by varying the polymer-polymer interaction. From simulation of the equilibrated solvent-polymer solution, it was found that the glassy system with Berthelot's rule applied to the cross term is immiscible except in the dilute limit. Increasing the solvent-polymer interaction enhanced the solubility of the system without changing the nature of the diffusion process.
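For reference, Berthelot's rule for the cross term is the standard geometric mean of the like-pair well depths:

\[
\varepsilon_{sp} = \sqrt{\varepsilon_{ss}\,\varepsilon_{pp}},
\]

where ε_ss, ε_pp, and ε_sp are the solvent-solvent, polymer-polymer, and solvent-polymer interaction strengths; "increasing the solvent-polymer interaction" above means raising ε_sp beyond this default.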
The high mobility of a two-dimensional electron system in the second Landau level (LL) was discussed. In the second LL, the larger extent of the wave function as compared to the lowest LL, together with its additional node, allows a much broader range of electron correlations to be favorable. An example of the electron correlations encountered in the second LL is the even-denominator ν = 2 + 1/2 fractional quantum Hall effect (FQHE) state. As the filling factor was varied, it was observed that quantum liquids of different origins compete with several insulating phases, leading to an irregular pattern in the transport parameters.
The Design through Analysis Realization Team (DART) will provide analysts with a complete toolset that reduces the time to create, generate, analyze, and manage the data generated in a computational analysis. The toolset will be both easy to learn and easy to use. The DART Roadmap Vision provides for progressive improvements that will reduce the Design through Analysis (DTA) cycle time by 90 percent over a three-year period while improving both the quality and accountability of the analyses.
We present a two-step approach to modeling the transmembrane-spanning helical bundles of integral membrane proteins using only sparse distance constraints, such as those derived from chemical cross-linking, dipolar EPR, and FRET experiments. In Step 1, using an algorithm we developed, the conformational space of membrane protein folds matching a set of distance constraints is explored to provide initial structures for local conformational searches. In Step 2, these structures are refined against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. We begin by describing the statistical analysis of the solved membrane protein structures from which the theoretical portion of the penalty function was derived. We then describe the penalty function and, using a set of six test cases, demonstrate that it is capable of distinguishing helical bundles that are close to the native bundle from those that are far from it. Finally, using a set of only 27 distance constraints extracted from the literature, we show that our method successfully recovers the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
This report summarizes methods to incorporate information (or the lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the inter-variable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
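A central example of such bounds (a standard result, stated here for orientation) is the Fréchet-Hoeffding inequality, which brackets any joint distribution when nothing is known about dependence beyond the marginals:

\[
\max\!\big(F_X(x) + F_Y(y) - 1,\; 0\big) \;\le\; F_{X,Y}(x,y) \;\le\; \min\!\big(F_X(x),\, F_Y(y)\big).
\]

Partial dependence information (e.g., a known correlation measure or a parametric dependence model) tightens these bounds.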
A case study is reported to document the details of a validation process assessing the accuracy of a mathematical model in representing experiments involving thermal decomposition of polyurethane foam. The focus of the report is working through the validation process, which addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space, and the parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the verification of that code are presented. Experimental data from two activities are used to validate the mathematical models: the first experiment assesses the chemistry model alone, and the second assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results for both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and the model results is quantified with various metrics. After these activities are addressed, an assessment of the process for the case study is given: weaknesses in the process are discussed and lessons learned are summarized.
The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process, we show how to use the effective amount of independent information to modify the decision process so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem, any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness-of-fit (GOF) classifiers into the Markov SPRT and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
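For orientation, a minimal sketch of the classical independent-observation SPRT that the Markov and GOF extensions above build on; thresholds use the standard Wald approximations, and the Gaussian likelihoods are illustrative, not the paper's models:

```python
import math
import random

def sprt(samples, alpha=0.01, beta=0.01, mu0=0.0, mu1=1.0, sigma=1.0):
    """Classical Wald SPRT for H1: N(mu1, sigma) vs H0: N(mu0, sigma)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 when the LLR rises above this
    lower = math.log(beta / (1 - alpha))   # accept H0 when the LLR falls below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(1000)]
print(sprt(data))  # expect ("H1", n) after only a few observations
```

The Markov SPRT replaces the per-observation likelihood ratio with one that conditions on the previous state; the GOF extension replaces the explicit nontarget likelihood with a worst-case model.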
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
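A hedged sketch of the quantity being approximated, computed exactly with a k-d tree (this is not the paper's linear-time algorithm, which is not reproduced here):

```python
# Directed Hausdorff fraction between two 3-D point sets (exact reference version).
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_fraction(model_pts, scene_pts, tau):
    """Fraction of model points within distance tau of some scene point."""
    tree = cKDTree(scene_pts)
    d, _ = tree.query(model_pts)  # nearest-scene-point distance for each model point
    return np.mean(d <= tau)

rng = np.random.default_rng(0)
scene = rng.random((500, 3))
model = scene[:100] + rng.normal(0, 0.01, (100, 3))  # noisy subset of the scene
print(hausdorff_fraction(model, scene, tau=0.05))    # close to 1.0 for a good match
```

A fraction near 1.0 indicates that nearly all model points find a nearby scene point, which is what makes the measure robust to partial obscuration and clutter.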