As part of the Arsenic Water Technology Partnership program, Sandia National Laboratories will carry out field demonstration testing of innovative technologies that have the potential to substantially reduce the costs associated with arsenic removal from drinking water. The scope for this work includes: (1) selection of sites for pilot demonstrations, (2) identification of candidate technologies through Vendor Forums, proof-of-principle bench-scale studies managed by the American Water Works Association Research Foundation (AwwaRF), or the WERC design contest, and (3) pilot-scale studies involving side-by-side tests of innovative technologies. The goal of site selection is to identify a suite of sites that exhibit a sufficiently wide range of groundwater chemistries to allow examination of treatment processes and systems under conditions relevant to different geochemical settings throughout the country. A number of candidate sites have been identified through reviews of groundwater quality databases, conference proceedings, and discussions with state and local officials. These include sites in New Mexico, Arizona, Colorado, Oklahoma, Illinois, Michigan, Florida, Massachusetts, and New Hampshire. In New Mexico, discussions have been held with water utility board staffs in Chama, Jemez Pueblo, Placitas, Socorro, and several communities near Las Cruces to determine the suitability of those communities for pilot studies. The initial pilot studies will be carried out at Socorro and Jemez Pueblo; other communities will be included as the program progresses. The proposed pilot test at a hot spring water source near Socorro will provide an opportunity to test treatment technologies at relatively high temperatures. If approved by the Tribal Government, the proposed pilot at the Jemez Pueblo would provide an opportunity to test technologies that remove arsenic in the presence of relatively high concentrations of iron and manganese while leaving beneficial levels of fluoride unchanged. Candidate technologies for the pilot tests are being reviewed by technical evaluation teams. The initial reviews will consider as many potential technologies as possible and screen out unsuitable ones on the basis of data from past performance testing, expected costs, complexity of operation, and maturity of the technology. The pilot test configurations will depend on site-specific conditions such as access, power availability, waste disposal options, and the availability of permanent structures to house the test. Most of the treatment technologies to be evaluated fall into two broad categories: (1) sorption processes that use fixed-bed adsorbents and (2) membrane processes; the latter include processes in which a floc or precipitate containing the arsenic is formed in a reactor and the solids are then separated from the water by filtration. Several innovations that could lead to lower treatment costs have been proposed for adsorptive media systems. These include: (1) higher capacity and selectivity using mixed oxides composed of iron and other transition metals, titanium- and zirconium-based oxides, or mixed resin-metal oxide composite media, (2) improved durability of virgin media and greater chemical stability of the spent media, and (3) use of inexpensive natural or recycled materials with a coating that has a high affinity for arsenic.
Improvements to filtration-based treatment systems include: (1) enhanced coagulation with iron compounds or polyelectrolytes and (2) improved filtration with nanocomposite materials. In the pilot tests, the innovative technologies will be evaluated in terms of: (1) their ability to reduce arsenic to levels below the EPA Maximum Contaminant Level (MCL) of 10 ppb, (2) site-specific adsorptive capacity, (3) robustness of performance with respect to likely changes in water quality parameters, including pH, TDS, foulants such as Fe, Mn, silica, and organics, (4) the effect of competing ions such as other metals and radionuclides, and (5) potentially deleterious effects on the water system, such as pipe corrosion from low pH levels, fluoride removal, and generation of disinfection by-products. The new arsenic MCL will result in modification of many rural water systems that otherwise would not require treatment. Opportunities to improve water quality in systems that currently do not comply with other standards would be an added benefit of the new arsenic MCL, one with both economic and public health value.
Understanding the dynamics of the membrane protein rhodopsin will have broad implications for other membrane proteins and cellular signaling processes. Rhodopsin (Rho) is a light-activated G-protein coupled receptor (GPCR). When activated by ligands, GPCRs bind and activate G-proteins residing within the cell and begin a signaling cascade that results in the cell's response to external stimuli. More than 50% of all current drugs are targeted toward G-proteins. Rho is the prototypical member of the class A GPCR superfamily. Understanding the activation of Rho and its interaction with its G-protein can therefore lead to a wider understanding of the mechanisms of GPCR activation and G-protein activation. The dark-to-light transition of Rho is fully analogous to the general ligand-binding and activation problem for GPCRs. This transition is dependent on the lipid environment. The effect of lipids on membrane protein activity in general has received little attention, but evidence is beginning to show a significant role for lipids in membrane protein activity. Using the LAMMPS program and simulation methods benchmarked under the IBIG program, we perform a variety of all-atom molecular dynamics simulations of membrane proteins.
To investigate the performance of artificial frozen soil materials with a fused interface, split tension (or 'Brazilian') tests and unconfined uniaxial compression tests were carried out in a low-temperature environmental chamber. Intact and fused specimens were fabricated from four different soil mixtures (962: clay-rich soil with bentonite; DNA1: clay-poor soil; DNA2: clay-poor soil with vermiculite; and DNA3: clay-poor soil with perlite). Based on the 'Brazilian' test results and density measurements, the DNA3 mixture was selected as most closely representing the mechanical properties of the Alaskan frozen soil. An interface healed with a layer of the same soil sandwiched between two blocks of the same material yielded the highest 'Brazilian' tensile strength. Based on the unconfined uniaxial compression tests, the frictional strength of fused DNA3 specimens with the same soil appears to exceed the shear strength of the intact specimen.
Laser-induced incandescence is used to measure time-resolved diesel particulate emissions for two lean NOx trap regeneration strategies that utilize intake throttling and in-cylinder fuel enrichment. The results show that when the main injection event is increased in duration and delayed by 13 crank-angle degrees, particulate emissions are very high. For a repetitive pattern of 3 seconds of rich regeneration followed by 27 seconds of NOx-trap loading, we find a monotonic increase in particulate emissions during the loading intervals that approaches twice the initial baseline particulate level after 1000 seconds. In contrast, particulate emissions during the regeneration intervals are constant throughout the test sequence. For regeneration using an additional late injection event (post-injection), particulate emissions are about twice the baseline level for the first regeneration interval, but then decay with an exponential-like behavior over the repetitive test sequence, eventually reaching a level that is comparable to the baseline. In contrast, particulate emissions between regenerations decrease slowly throughout the test sequence, reaching a level 12 percent below the starting baseline value.
An Al{sub 85}Ni{sub 10}La{sub 5} amorphous alloy, produced via gas atomization, was selected to study the mechanisms of nanocrystallization induced by thermal exposure. High resolution transmission electron microscopy results indicated the presence of quenched-in Al nuclei in the amorphous matrix of the atomized powder. However, a eutectic-like reaction, which involved the formation of the Al, Al{sub 11}La{sub 3}, and Al{sub 3}Ni phases, was recorded in the first crystallization event (263 C) during differential scanning calorimetry continuous heating. Isothermal annealing experiments conducted below 263 C revealed that the formation of a single fcc-Al phase occurred at 235 C. At higher temperatures, growth of the Al crystals occurred with formation of intermetallic phases, leading to a eutectic-like transformation behavior at 263 C. During the first crystallization stage, nanocrystals developed in the size range of 5-30 nm. During the second crystallization event (283 C), a bimodal size distribution of nanocrystals was formed, with the smaller sizes in the range of about 10-30 nm and the larger sizes around 100 nm. The influence of pre-existing quenched-in Al nuclei on the microstructural evolution in the amorphous Al{sub 85}Ni{sub 10}La{sub 5} alloy is discussed, and the effect of the microstructural evolution on the hardening behavior is described in detail.
Fires pose the dominant risk to the safety and security of nuclear weapons, nuclear transport containers, and DOE and DoD facilities. The thermal hazard from these fires primarily results from radiant emission from high-temperature flame soot. Therefore, it is necessary to understand the local transport and chemical phenomena that determine the distributions of soot concentration, optical properties, and temperature in order to develop and validate constitutive models for large-scale, high-fidelity fire simulations. This report summarizes the findings of a Laboratory Directed Research and Development (LDRD) project devoted to obtaining the critical experimental information needed to develop such constitutive models. A combination of laser diagnostics and extractive measurement techniques has been employed in both steady and pulsed laminar diffusion flames of methane, ethylene, and JP-8 surrogate burning in air. For methane and ethylene, both slot and coannular flame geometries were investigated, as well as normal and inverse diffusion flame geometries. For the JP-8 surrogate, coannular normal diffusion flames were investigated. Soot concentrations, polycyclic aromatic hydrocarbon (PAH) laser-induced fluorescence (LIF) signals, hydroxyl radical (OH) LIF, acetylene and water vapor concentrations, soot zone temperatures, and the velocity field were all successfully measured in both steady and unsteady versions of these various flames. In addition, measurements were made of the soot microstructure, the soot dimensionless extinction coefficient, and the local radiant heat flux. Taken together, these measurements comprise a unique, extensive database for future development and validation of models of soot formation, transport, and radiation.
LIGA is an acronym for the German terms Lithographie, Galvanoformung, Abformung, which describe a microfabrication process for high-aspect-ratio structural parts based on electrodeposition of a metal into a poly-methyl-methacrylate (PMMA) mold. LIGA-produced parts have very high dimensional tolerances (on the order of a micron) and can vary in size from microns to centimeters. These properties make LIGA parts ideal for incorporation into MEMS devices or for other applications where strict tolerances must be met; however, functionality of the parts can only be maintained if they remain dimensionally stable throughout their lifetime. It follows that any form of corrosion attack (e.g., uniform dissolution, localized pitting, environmental cracking, etc.) cannot be tolerated. This presentation focuses on the pitting behavior of Ni electrodeposits, specifically addressing the influence of the following: grain structure, alloy composition, impurities, plating conditions, post-plating processing (including chemical and thermal treatment), galvanic interactions, and environment (aqueous vs. atmospheric). A small subset of these results is summarized. A typical LIGA part is shown in Figure 1. Due to the small size scale, electrochemical testing was performed using a capillary-based test system. Although very small test areas can be probed with this system (e.g., Figure 2), capillaries on the order of 80 to 90 {micro}m were typically used in the testing. All LIGA parts tested in the as-received condition had better pitting resistance than the high-purity wrought Ni material used as a control. In the case of LIGA-Ni and LIGA-Ni-Mn, no detrimental effects were observed due to aging at 700 C. Ni-S (approximately 500 ppm S) showed good as-received pitting behavior but decreased pitting resistance with thermal aging. Aged Ni-S showed dramatic increases in grain size (from a few {micro}m to hundreds of {micro}m) and significant segregation of S to the boundaries. The capillary test cell was used to measure pitting potentials at the boundaries and within grains (Figure 3), with the results clearly showing that the lowered pitting resistance is due to the S-rich boundaries. It is believed that the process used to release the LIGA parts from the Cu substrate acts as a pickling agent, removing surface impurities and detrimental alloying additions. EIS data from freshly polished samples exposed to the release bath support this hypothesis; R{sub P} values for all LIGA materials and for wrought Ni continuously increase during exposure. Mechanical polishing of LIGA parts prior to electrochemical testing consistently lowered the pitting potentials to a range bounded by Ni 201 and high-purity Ni. The as-received vs. polished behavior also affects the galvanic interactions with noble metals. When as-produced material is coupled to Au, the LIGA material initially acts as the cathode, though eventually the behavior switches such that the LIGA becomes the anode. Overall, the LIGA-produced Ni and Ni alloys examined in this work demonstrated pitting behavior similar to wrought Ni, showing reduced resistance only when specific metallurgical and environmental conditions were met.
A series of experiments was performed to better characterize the boundary conditions from an inconel heat source ('shroud') painted with Pyromark black paint. Quantifying uncertainties in this type of experimental setup is crucial to providing information for comparisons with code predictions. The characterization of this boundary condition has applications in many scenarios related to fire simulation experiments performed at the Sandia National Laboratories Radiant Heat Facility (RHF). Four phases of experiments were performed. Phase 1 results showed that a nominal 1000 C shroud temperature is repeatable to about 2 C. Repeatability of temperatures at individual points on the shroud shows that temperatures do not vary more than 10 C from experiment to experiment. This variation results in a 6% difference in heat flux to a target 4 inches away. IR camera images showed the shroud was not at a uniform temperature, although the control temperature was constant to about {+-}2 C during a test. These images showed that a circular, flat shroud with its edges supported by an insulated plate has a temperature distribution with higher temperatures at the edges and lower temperatures in the center. Differences between the center and edge temperatures were up to 75 C. Phase 3 results showed that thermocouple (TC) bias errors are affected by coupling with the surrounding environment. The magnitude of TC error depends on the environment facing the TC. Phase 4 results were used to estimate correction factors for specific applications (40- and 63-mil diameter, ungrounded junction, mineral insulated, metal-sheathed TCs facing a cold surface). Correction factors of about 3.0-4.5% are recommended for 40-mil diameter TCs and 5.5-7.0% for 63-mil diameter TCs. When mounted on the cold side of the shroud, TCs read lower than the 'true' shroud temperature, and the TC reads high when on the hot side. An alternate method uses the average of a cold-side and a hot-side TC of the same size to estimate the true shroud temperature. Phase 2 results compared IR camera measurements with TC measurements and measured values of Pyromark emissivity. Agreement was within the measured uncertainties of the Pyromark paint emissivity and IR camera temperatures.
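As a concrete illustration of how the Phase 4 recommendations would be applied (a minimal sketch: the function names, the use of the Celsius reading as the correction basis, and the example values are assumptions for illustration, not the report's procedure):

```python
# Illustrative application of the thermocouple corrections described above.
# The report recommends correction factors of ~3.0-4.5% (40-mil TCs) and
# ~5.5-7.0% (63-mil TCs) for TCs facing a cold surface; whether the
# percentage applies to the Celsius reading or another basis is an
# assumption here.

def corrected_shroud_temp(t_measured_c, correction_fraction):
    """A cold-side TC reads low, so the correction is added to the reading."""
    return t_measured_c * (1.0 + correction_fraction)

def averaged_shroud_temp(t_cold_side_c, t_hot_side_c):
    """Alternate method: average same-size cold-side and hot-side TC readings."""
    return 0.5 * (t_cold_side_c + t_hot_side_c)

# Example: a 40-mil cold-side TC reading 960 C with a 4% correction.
print(corrected_shroud_temp(960.0, 0.04))   # ~998.4 C
print(averaged_shroud_temp(960.0, 1005.0))  # 982.5 C
```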
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be applied to a spacecraft during atmospheric re-entry, and (3) optimal design of a distributed sensor network for the purpose of vehicle tracking and identification.
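The decision-theoretic selection rule described above can be stated compactly. A minimal form, with notation assumed for illustration (the report's own notation and utility definitions may differ):

```latex
% Optimal model m* chosen from the candidate class M by maximizing the
% expected utility of using model m for the intended purpose:
\[
  m^{*} \;=\; \operatorname*{arg\,max}_{m \in \mathcal{M}}
  \; \mathbb{E}\!\left[\, u(m,\theta;\,\text{use}) \,\right]
  \;=\; \operatorname*{arg\,max}_{m \in \mathcal{M}}
  \int u(m,\theta;\,\text{use})\, d\pi(\theta),
\]
```

where \(\mathcal{M}\) is the class of candidate models, \(\pi\) encodes the available (possibly incomplete) information on the system, and the utility \(u\) carries the dependence on model use that classical maximum-likelihood and Bayesian selection ignore.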
The irradiation of thin insulating films by high-energy ions (374 MeV Au{sup +25} or 241 MeV I{sup +19}) was used to attempt to form nanometer-size pores through the films spontaneously. Such ions deposit a large amount of energy into the target materials ({approx}20 keV/nm), which significantly disrupts their atomic lattice and sputters material from the surfaces, and might produce nanopores for appropriate ion-material combinations. Transmission electron microscopy was used to examine the resulting ion tracks. Tracks were found in the crystalline oxides quartz, sapphire, and mica. Sapphire and mica showed ion tracks that are likely amorphous and exhibit pits 5 nm in diameter on the surface at the ion entrance and exit points. This suggests that nanopores might form in mica if the film thickness is less than {approx}10 nm. Tracks in quartz showed strain in the matrix around them. Tracks were not found in the amorphous thin films examined: 20-nm SiN{sub x}, deposited SiO{sub x}, fused quartz (amorphous SiO{sub 2}), formvar, and 3-nm C. Other promising materials for nanopore formation were identified, including thin Au and SnO{sub 2} layers.
This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Investigation of Potential Applications of Self-Assembled Nanostructured Materials in Nuclear Waste Management'. The objectives of this project are to (1) provide a mechanistic understanding of the control of nanometer-scale structures on the ion sorption capability of materials and (2) develop appropriate engineering approaches to improving material properties based on such an understanding.
A radioactive sealed source is any radioactive material encased in a capsule designed to prevent leakage or escape of the radioactive material. Radioactive sealed sources are used for a wide variety of applications at hospitals and in manufacturing and research. Typical uses are in portable gauges to measure soil compaction and moisture or to determine physical properties of rock units in boreholes (well logging). Hospitals and clinics use radioactive sealed sources for teletherapy and brachytherapy. Oil exploration and medicine are the largest users. Accidental mismanagement of radioactive sealed sources each year results in a large number of people receiving very high or even fatal doses of ionizing radiation. Deliberate mismanagement is a growing international concern. Sealed sources must be managed and disposed of effectively in order to protect human health and the environment. Effective national safety and management infrastructures are prerequisites for efficient and safe transportation, treatment, storage, and disposal. The Integrated Management Program for Radioactive Sealed Sources in Egypt (IMPRSS) is a cooperative development agreement between the Egyptian Atomic Energy Authority (EAEA), the Egyptian Ministry of Health (MOH), Sandia National Laboratories (SNL), the University of New Mexico (UNM), and Agriculture Cooperative Development International (ACDI/VOCA). The EAEA, teaming with SNL, is conducting a Preliminary Safety Assessment (PSA) of intermediate-depth borehole disposal in thick arid alluvium in Egypt, based on experience with U.S. Greater Confinement Disposal (GCD). GoldSim has been selected for the preliminary disposal system assessment for the Egyptian GCD study. The results of the PSA will then be used to decide whether Egypt desires to implement such a disposal system.
We have studied the feasibility of an innovative device to sample 1-ns, low-power, single current transients with a time resolution better than 10 ps. The new concept explored here is to close photoconductive semiconductor switches (PCSS) with a laser for a period of 10 ps. The PCSSs are arranged in series along a transmission line (TL). The transient propagates along the TL, allowing one to carry out a spatially resolved sampling of charge at a fixed time instead of the usual time-sampling of the current. The fabrication of such a digitizer was proven to be feasible but very difficult.
This paper presents solution verification studies applicable to a class of problems involving wave propagation, frictional contact, geometrical complexity, and localized incompressibility. The studies are in support of a validation exercise of a phenomenological screw failure model. The numerical simulations are performed using a fully explicit transient dynamics finite element code, employing both standard four-node tetrahedral and eight-node mean quadrature hexahedral elements. It is demonstrated that verifying the accuracy of the simulation involves not only consideration of the mesh discretization error, but also the effect of the hourglass control and the contact enforcement. In particular, the proper amount of hourglass control and the behavior of the contact search and enforcement algorithms depend greatly on the mesh resolution. We carry out the solution verification exercise using mesh refinement studies and describe our systematic approach to handling the complicating issues. It is shown that hourglassing and contact must both be carefully monitored as the mesh is refined, and it is often necessary to make adjustments to the hourglass and contact user input parameters to accommodate finer meshes. We introduce in this paper the hourglass energy, which is used as an 'error indicator' for the hourglass control. If the hourglass energy does not tend to zero with mesh refinement, then an hourglass control parameter is changed and the calculation is repeated.
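The refinement procedure described above can be summarized as a simple control loop. The following is a minimal Python sketch; run_simulation is a toy stand-in for the explicit transient dynamics code, and the scaling of the hourglass energy inside it is invented purely so the example runs:

```python
from dataclasses import dataclass

@dataclass
class Result:
    hourglass_energy: float
    internal_energy: float

def run_simulation(h, hourglass_parameter):
    """Toy stand-in for the explicit FE code: the hourglass energy is made
    to scale with element size h and the control parameter, purely to
    illustrate the monitoring logic (not a physical model)."""
    return Result(hourglass_energy=hourglass_parameter * h**2,
                  internal_energy=1.0)

def verify_with_refinement(element_sizes, hg_param=0.05, tol=1e-4, max_tries=5):
    """Refine the mesh; if the hourglass-energy fraction (the 'error
    indicator') does not tend to zero, adjust the hourglass control
    parameter and repeat the calculation, as described above."""
    for h in element_sizes:                      # coarse -> fine
        for _ in range(max_tries):
            r = run_simulation(h, hg_param)
            frac = r.hourglass_energy / r.internal_energy
            if frac < tol:                       # indicator acceptably small
                break
            hg_param *= 0.5                      # adjust control and rerun
        print(f"h={h:.3f}  hourglass fraction={frac:.2e}  param={hg_param}")

verify_with_refinement([0.1, 0.05, 0.025])
```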
We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant to some IV reuse. We offer the Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which supports variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption, which uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
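To make the general idea concrete (deriving authentication nearly for free from state words already computed during encryption), here is a toy, hash-based sketch in Python. It is emphatically not the Manticore or CS construction from the report; the block structure, state update, and tag derivation are all invented for illustration:

```python
import hashlib, hmac, os

# Toy CFB-style "cipher" whose per-block internal state words are folded
# into an authentication tag. Illustrative only; NOT Manticore/CS.
BLOCK = 32

def _keystream(key: bytes, prev_ct: bytes) -> bytes:
    return hashlib.sha256(key + prev_ct).digest()

def encrypt_and_tag(key: bytes, iv: bytes, msg: bytes):
    prev, ct = iv, bytearray()
    auth = hashlib.sha256()                      # absorbs internal state
    for i in range(0, len(msg), BLOCK):
        s = _keystream(key, prev)                # "internal state" block
        c = bytes(p ^ k for p, k in zip(msg[i:i+BLOCK], s))
        ct += c
        auth.update(s[:8])                       # cheap authentication input
        prev = c
    auth.update(prev)                            # bind final ciphertext block
    tag = hmac.new(key, iv + auth.digest(), hashlib.sha256).digest()[:16]
    return bytes(ct), tag

def decrypt_and_verify(key: bytes, iv: bytes, ct: bytes, tag: bytes):
    prev, pt = iv, bytearray()
    auth = hashlib.sha256()
    for i in range(0, len(ct), BLOCK):
        s = _keystream(key, prev)
        c = ct[i:i+BLOCK]
        pt += bytes(x ^ k for x, k in zip(c, s))
        auth.update(s[:8])
        prev = c
    auth.update(prev)
    expect = hmac.new(key, iv + auth.digest(), hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(pt)

key, iv = os.urandom(32), os.urandom(32)
ct, tag = encrypt_and_tag(key, iv, b"example message for the toy mode")
assert decrypt_and_verify(key, iv, ct, tag) == b"example message for the toy mode"
```

The point of the sketch is only that the tag is computed from intermediate values the encryption pass produces anyway, which is what keeps the authentication overhead minimal.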
If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed-software camp we examine some commonly proposed techniques, including Java, D'Agents, and Flask. For the specialized-hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation that render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent, or at least slow down, reverse engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks and improvements on this method and demonstrate its implementation for various algorithms. We also examine cryptographic techniques to achieve obfuscation, including encrypted functions, and offer a new application to digital signature algorithms. To better understand the lack of security proofs for obfuscation techniques, we examine in detail general theoretical models of obfuscation. We explain the need for formal models in order to obtain provable security and the progress made in this direction thus far. Finally, we tackle the problem of verifying remote execution. We introduce some methods of verifying remote exponentiation computations and some insight into generic computation checking.
Microelectronic devices in satellites and spacecraft are exposed to high-energy cosmic radiation. Furthermore, Earth-based electronics can be affected by terrestrial radiation. The radiation causes a variety of Single Event Effects (SEE) that can lead to failure of the devices. High-energy heavy-ion beams are being used to simulate both cosmic and terrestrial radiation to study radiation effects and to ensure the reliability of electronic devices. Broad-beam experiments can provide a measure of the radiation hardness of a device (the SEE cross section), but they are unable to pinpoint the failing components in the circuit. A nuclear microbeam is an ideal tool to map SEE on a microscopic scale and find the circuit elements (transistors, capacitors, etc.) that are responsible for the failure of the device. In this paper a review of the latest radiation effects microscopy (REM) work at Sandia will be given. Different SEE mechanisms (Single Event Upset, Single Event Transient, etc.) and the methods used to study them (Ion Beam Induced Charge (IBIC), Single Event Upset mapping, etc.) will be discussed. Several examples of using REM to study the basic effects of radiation in electronic devices and for failure analysis of integrated circuits will be given.
An important challenge encountered during post-processing of finite element analyses is the visualization of three-dimensional fields of real-valued second-order tensors. As finite element meshes become more complex and detailed, evaluation and presentation of the principal stresses becomes correspondingly problematic. In this paper, we describe techniques used to visualize simulations of perturbed in-situ stress fields associated with hypothetical salt bodies in the Gulf of Mexico. We present an adaptation of the Mohr diagram, a graphical paper-and-pencil method used by the material mechanics community for estimating coordinate transformations for stress tensors, as a new tensor glyph for dynamically exploring tensor variables within three-dimensional finite element models. This interactive glyph can be used as either a probe or a filter through brushing and linking.
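For reference, the classical relations the Mohr diagram encodes: for principal stresses \(\sigma_1 \ge \sigma_2 \ge \sigma_3\), the normal and shear tractions on a plane whose normal makes angle \(\theta\) with the major principal direction are

```latex
\[
  \sigma_n = \frac{\sigma_1+\sigma_3}{2} + \frac{\sigma_1-\sigma_3}{2}\cos 2\theta,
  \qquad
  \tau = \frac{\sigma_1-\sigma_3}{2}\sin 2\theta,
\]
```

so each plane maps to a point on a circle of center \((\sigma_1+\sigma_3)/2\) and radius \((\sigma_1-\sigma_3)/2\) in the \((\sigma_n,\tau)\) plane, and the full three-dimensional stress state traces the region bounded by the three circles built from the pairs \((\sigma_1,\sigma_2)\), \((\sigma_2,\sigma_3)\), and \((\sigma_1,\sigma_3)\). The glyph renders this construction at a probed point in the mesh.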
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
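To make the matricization operation concrete, here is a minimal NumPy sketch of mode-n unfolding and its inverse (the column ordering used here is one common convention; the MATLAB classes may order the columns differently):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: the mode-n fibers become the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given original shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full), 0, mode)

X = np.arange(24).reshape(2, 3, 4)
for n in range(X.ndim):
    assert np.array_equal(fold(unfold(X, n), n, X.shape), X)
print(unfold(X, 1).shape)  # (3, 8): mode-1 unfolding of a 2x3x4 tensor
```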
We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.
The rate coefficient has been measured under pseudo-first-order conditions for the Cl + CH{sub 3} association reaction at T = 202, 250, and 298 K and P = 0.3-2.0 Torr helium using the technique of discharge-flow mass spectrometry with low-energy (12-eV) electron-impact ionization and collision-free sampling. Cl and CH{sub 3} were generated rapidly and simultaneously by reaction of F with HCl and CH{sub 4}, respectively. Fluorine atoms were produced by microwave discharge in an approximately 1% mixture of F{sub 2} in He. The decay of CH{sub 3} was monitored under pseudo-first-order conditions with the Cl-atom concentration in large excess over the CH{sub 3} concentration ([Cl]{sub 0}/[CH{sub 3}]{sub 0} = 9-67). Small corrections were made for both axial and radial diffusion and minor secondary chemistry. The rate coefficient was found to be in the falloff regime over the range of pressures studied. For example, at T = 202 K, the rate coefficient increases from 8.4 x 10{sup -12} at P = 0.30 Torr He to 1.8 x 10{sup -11} at P = 2.00 Torr He, both in units of cm{sup 3} molecule{sup -1} s{sup -1}. A combination of ab initio quantum chemistry, variational transition-state theory, and master-equation simulations was employed in developing a theoretical model for the temperature and pressure dependence of the rate coefficient. Reasonable empirical representations of energy transfer and of the effect of spin-orbit interactions yield a temperature- and pressure-dependent rate coefficient that is in excellent agreement with the present experimental results. The high-pressure limiting rate coefficient from the RRKM calculations is k{sub 2} = 6.0 x 10{sup -11} cm{sup 3} molecule{sup -1} s{sup -1}, independent of temperature in the range from 200 to 300 K.
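For clarity, the pseudo-first-order analysis underlying the measurement is the standard one: with the Cl-atom concentration in large excess, the methyl decay reduces to a single exponential,

```latex
\[
  -\frac{d[\mathrm{CH_3}]}{dt} = k\,[\mathrm{Cl}][\mathrm{CH_3}]
  \;\approx\; k'\,[\mathrm{CH_3}],
  \qquad k' = k\,[\mathrm{Cl}]_0,
  \qquad
  [\mathrm{CH_3}](t) = [\mathrm{CH_3}]_0\, e^{-k' t},
\]
```

so the bimolecular rate coefficient \(k\) follows from the dependence of the measured first-order decay constant \(k'\) on \([\mathrm{Cl}]_0\), after the stated corrections for diffusion and secondary chemistry.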
The purpose of the present work is to increase our understanding of which properties of geomaterials most influence the penetration process, with the goal of improving our predictive ability. Two primary approaches were followed: development of a realistic constitutive model for geomaterials and design of an experimental approach to study penetration from the target's point of view. A realistic constitutive model, with parameters based on measurable properties, can be used for sensitivity analysis to determine the properties that are most important in influencing the penetration process. An immense literature exists devoted to the problem of predicting penetration into geomaterials or similar man-made materials such as concrete. Various formulations have been developed that use an analytic or, more commonly, numerical solution for spherical or cylindrical cavity expansion as a sort of Green's function to establish the forces acting on a penetrator. This approach has had considerable success in modeling the behavior of penetrators, both as to path and depth of penetration. However, the approach is not well adapted to the problem of understanding what is happening to the material being penetrated. Without a picture of the stress and strain state imposed on the highly deformed target material, it is not easy to determine which properties of the target are important in influencing the penetration process. We developed an experimental arrangement that allows greater control of the deformation than is possible in actual penetrator tests, yet approximates the deformation processes imposed by a penetrator. Using explosive line charges placed in a central borehole, we loaded cylindrical specimens in a manner equivalent to an increment of penetration, allowing the measurement of the associated strains and accelerations and the retrieval of specimens from the more-or-less intact cylinder. Results show clearly that the deformation zone is highly concentrated near the borehole, with almost no damage occurring beyond half a borehole diameter. This implies that penetration is not strongly influenced by anything but the material within a diameter or so of the penetration path. For penetrator tests, target size should not matter strongly once target diameters exceed some small multiple of the penetrator diameter. Penetration into jointed rock should not be much affected unless a discontinuity is within a similar range. Accelerations measured at several points along a radius from the borehole are consistent with highly concentrated damage and energy absorption: at the borehole wall, accelerations were an order of magnitude higher than at half a diameter, but at the outer surface, eight diameters away, accelerations were as expected for propagation through an elastic medium. Accelerations measured at the outer surface of the cylinders increased significantly with cure time for the concrete. As strength increased, less damage was observed near the explosively driven borehole wall, consistent with the lower energy absorption expected and observed for stronger concrete. As it is the energy-absorbing properties of a target that ultimately stop a penetrator, we believe this may point the way to a more readily determined equivalent of the S number.
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results.
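A minimal Python sketch of steps (2), (3), and (5) from the outline above, using a Latin hypercube sample, a placeholder analysis model, and rank (Spearman) correlations as the sensitivity measure (the specific distributions and model are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_vars, rng):
    """One draw per equal-probability stratum of each input, randomly
    paired across inputs (step 2 of the review's outline)."""
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        u[:, j] = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
    return u

# Sample two uncertain inputs (distributions chosen for illustration).
U = latin_hypercube(200, 2, rng)
x1 = 1.0 + 4.0 * U[:, 0]            # uniform on [1, 5]
x2 = 10.0 ** (2.0 * U[:, 1] - 1.0)  # log-uniform on [0.1, 10]

# Propagate the sampled inputs through a placeholder model (step 3).
y = x1**2 / x2

# Rank (Spearman) correlation of the output with each input (step 5),
# computed from ranks to be robust to monotone nonlinearity.
def spearman(a, b):
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

print("x1:", spearman(x1, y), " x2:", spearman(x2, y))
```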
Similar to entangled ropes, polymer chains cannot slide through each other. These topological constraints, the so-called entanglements, dominate the viscoelastic behavior of high-molecular-weight polymeric liquids. Tube models of polymer dynamics and rheology are based on the idea that entanglements confine a chain to small fluctuations around a primitive path which follows the coarse-grained chain contour. To establish the microscopic foundation for these highly successful phenomenological models, we have recently introduced a method for identifying the primitive path mesh that characterizes the microscopic topological state of computer-generated conformations of long-chain polymer melts and solutions. Here we give a more detailed account of the algorithm and discuss several key aspects of the analysis that are pertinent for its successful use in analyzing the topology of the polymer configurations. We also present a slight modification of the algorithm that preserves the previously neglected self-entanglements and allows us to distinguish between local self-knots and entanglements between distant sections of the same chain. Our results indicate that the latter make a negligible contribution to the tube and that the contour length between local self-knots, N{sub lk}, is significantly larger than the entanglement length N{sub e}.
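For context, a standard primitive-path estimator of the entanglement length used with this type of analysis is (a common form; the paper's exact estimator may differ):

```latex
\[
  N_e \;=\; (N-1)\,\frac{\langle R_{ee}^{2} \rangle}{\langle L_{pp} \rangle^{2}},
\]
```

where \(N\) is the number of monomers per chain, \(\langle R_{ee}^{2}\rangle\) is the mean-square end-to-end distance (unchanged by the primitive-path construction, which holds the chain ends fixed), and \(\langle L_{pp}\rangle\) is the mean contour length of the primitive paths.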
Water resource scarcity around the world is driving the need for the development of simulation models that can assist in water resources management. Transboundary water resources are receiving special attention because of the potential for conflict over scarce shared water resources. The Rio Grande/Rio Bravo along the U.S./Mexican border is an example of a scarce, transboundary water resource over which conflict has already begun. The data collection and modeling effort described in this report aims at developing methods for international collaboration, data collection, data integration, and modeling for simulating geographically large and diverse international watersheds, with a special focus on the Rio Grande/Rio Bravo. This report describes the basin and the data collected. The data were spatially aggregated across five reaches: Fort Quitman to Presidio, the Rio Conchos, Presidio to Amistad Dam, Amistad Dam to Falcon Dam, and Falcon Dam to the Gulf of Mexico. This report represents a nine-month effort made in FY04, during which time the model was not completed.
This report describes a project to develop both fixed and programmable surface acoustic wave (SAW) correlators for use in a low-power space communication network. This work was funded by NASA at Sandia National Laboratories for the final part of fiscal year 2002 and fiscal years 2003 and 2004. The role of Sandia was to develop the SAW correlator component, although additional work pertaining to use of the component in a system and to system optimization was also done at Sandia. The potential of SAW correlator-based communication systems, the design and fabrication of SAW correlators, and general system utilization of those correlators are discussed here.
Drainage of water from the region between an advancing probe tip and a flat sample is reconsidered under the assumption that the tip and sample surfaces are both coated by a thin water 'interphase' (of width {approx}a few nm) whose viscosity is much higher than that of the bulk liquid. A formula derived by solving the Navier-Stokes equations allows one to extract an interphase viscosity of {approx}59 kPa-sec (or {approx}6.6x10{sup 7} times the viscosity of bulk water at 25 C) from Interfacial Force Microscope measurements with both tip and sample rendered hydrophilic by OH-terminated tri(ethylene glycol) undecylthiol self-assembled monolayers.
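For orientation, the classical sphere-plane drainage (lubrication) force obtained from the Navier-Stokes equations with no-slip boundaries and a single uniform viscosity is

```latex
\[
  F_{\text{drain}} \;=\; \frac{6\pi \eta R^{2}}{D}\,\frac{dD}{dt},
\]
```

for a sphere of radius \(R\) approaching a flat at gap \(D\) through a fluid of viscosity \(\eta\). The report's formula generalizes this picture to surfaces coated by a thin, much more viscous interphase; fitting the measured drainage force with such a two-viscosity model is what yields the quoted interphase viscosity of {approx}59 kPa-sec (the classical single-viscosity form shown here is for reference, not the report's modified expression).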
Sandia National Laboratories has previously tested a capability to impose a 7.5 g-rms (30 g peak) radial vibration load up to 2 kHz on a 25-lb object with a superimposed 50-g acceleration at its centrifuge facility. This was accomplished by attaching a 3,000-lb Unholtz-Dickie mechanical shaker at the end of the centrifuge arm to create a 'Vibrafuge'. However, the combination of non-radial vibration directions and linear accelerations higher than 50 g is currently not possible because of the load capabilities of the shaker and the stresses on the internal shaker components due to the combined centrifuge acceleration. Therefore, a new technique using amplified piezo-electric actuators has been developed to surpass the limitations of the mechanical shaker system. The actuators are lightweight and modular and would overcome several limitations presented by the current shaker. They are 'scalable': adding more piezo-electric units in parallel or in series can support heavier test articles or wider displacement/frequency regimes. In addition, the units can be mounted on the centrifuge arm in various configurations to provide a variety of input directions. The design, along with test results, will be presented to demonstrate the capabilities and limitations of the new piezo-electric Vibrafuge.
Current computing architectures are 'inherently insecure' because they are designed to execute any arbitrary sequence of instructions. As a result, they are subject to subversion by malicious code. Our goal is to produce a cryptographic method of 'tamper-proofing' trusted code over a large portion of the software life cycle. We have developed a technique called 'faithful execution' to cryptographically protect instruction sequences from subversion. This paper presents an overview of, and the lessons learned from, our implementations of faithful execution in a Java virtual machine prototype and in a configurable soft-core processor implemented in a field-programmable gate array (FPGA).
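A toy Python sketch of the faithful-execution idea (not the report's JVM or FPGA implementation; the MAC construction and mini instruction set are invented for illustration): each instruction is stored with a tag bound to its address, and the interpreter authenticates every instruction before executing it.

```python
import hmac, hashlib

# Per-device secret that would live inside the physically protected CPU.
KEY = b"per-device secret held inside the protected processor"

def mac(addr: int, instr: str) -> bytes:
    """Bind each instruction to its address so code cannot be reordered."""
    return hmac.new(KEY, f"{addr}:{instr}".encode(), hashlib.sha256).digest()

def protect(program):
    """'Install-time' step: tag every instruction of the trusted program."""
    return [(instr, mac(addr, instr)) for addr, instr in enumerate(program)]

def run(protected):
    """Interpreter: authenticate, then execute; refuse subverted code."""
    acc = 0
    for addr, (instr, tag) in enumerate(protected):
        if not hmac.compare_digest(tag, mac(addr, instr)):
            raise RuntimeError(f"tampered instruction at {addr}")
        op, _, arg = instr.partition(" ")
        if op == "add":
            acc += int(arg)
        elif op == "print":
            print(acc)
    return acc

code = protect(["add 2", "add 40", "print"])
run(code)                               # prints 42
code[1] = ("add 41", code[1][1])        # attacker swaps an instruction...
# run(code)                             # ...would raise: tampered at 1
```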
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverse public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. This paper documents the innovations and advances of work first detailed in 'Photonic Encryption using All Optical Logic' [1]. A discussion of underlying concepts can be found in SAND2003-4474. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines S-SEED devices and how discrete logic elements can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of S-SEED devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element, as opposed to device-level computational modeling. By representing light intensity as voltage, we generated 'black box' models that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time-delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. Demonstration circuits show how these logic elements can be used to form NAND, NOR, and XOR functions. This paper also presents a functional analysis of a serial, low-gate-count demonstration algorithm suitable for scrambling/encryption using S-SEED devices.
We observe the spontaneous formation of parallel oxide rods upon exposing a clean NiAl(110) surface to oxygen at elevated temperatures (850-1350 K). By following the self-assembly of individual nanorods in real time with low-energy electron microscopy (LEEM), we are able to investigate the processes by which the rods lengthen along their axes and thicken normal to the surface of the substrate. At a fixed temperature and O{sub 2} pressure, the rods lengthen along their axes at a constant rate. The exponential temperature dependence of this rate yields an activation energy for growth of 1.2 {+-} 0.1 eV. The rod growth rates do not change as their ends pass in close proximity (<40 nm) to each other, which suggests that they do not compete for diffusing flux in order to elongate. Both LEEM and scanning tunneling microscopy (STM) studies show that the rods can grow vertically in layer-by-layer fashion. The heights of the rods are extremely bias dependent in STM images, but occur in integer multiples of approximately 2-{angstrom}-thick oxygen-cation layers. As the rods elongate from one substrate terrace to the next, we commonly see sharp changes in their rates of elongation that result from their tendency to gain (lose) atomic layers as they descend (climb) substrate steps. Diffraction analysis and dark-field imaging with LEEM indicate that the rods are crystalline, with a lattice constant that is well matched to that of the substrate along their length. We discuss the factors that lead to the formation of these highly anisotropic structures.
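The quoted activation energy follows from the usual Arrhenius analysis of the elongation rate:

```latex
\[
  v(T) \;=\; v_0\, e^{-E_a/k_B T},
  \qquad
  E_a \;=\; -k_B\,\frac{d\,\ln v}{d\,(1/T)} \;\approx\; 1.2 \pm 0.1\ \text{eV},
\]
```

obtained from the slope of \(\ln v\) versus \(1/T\) over 850-1350 K at fixed \(\mathrm{O_2}\) pressure.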
The performance characteristics and material properties such as stress, microstructure, and composition of nickel coatings and electroformed components can be controlled over a wide range by the addition of small amounts of surface-active compounds to the electroplating bath. Saccharin is one compound that is widely utilized for its ability to reduce tensile stress and refine grain size in electrodeposited nickel. While the effects of saccharin on nickel electrodeposition have been studied by many authors in the past, there is still uncertainty over saccharin's mechanisms of incorporation, stress reduction, and grain refinement. In-situ scanning probe microscopy (SPM) is a tool that can be used to directly image the nucleation and growth of thin nickel films at nanometer length scales to help elucidate saccharin's role in the development and evolution of grain structure. In this study, in-situ atomic force microscopy (AFM) and scanning tunneling microscopy (STM) techniques are used to investigate the effects of saccharin on the morphological evolution of thin nickel films. By observing monoatomic-height nickel island growth with and without saccharin present, we conclude that saccharin has little effect on nickel surface mobility during deposition at low overpotentials, where growth occurs in a layer-by-layer mode. Saccharin was imaged on Au(111) terraces as condensed patches without resolved packing structure. AFM measurements of the roughness evolution of nickel films up to 1200 nm thick on polycrystalline gold indicate that saccharin initially increases the roughness and surface skewness of the deposit, which at greater thicknesses becomes smoother than films deposited without saccharin. Faceting of the deposit morphology decreases as saccharin concentration increases, even for the thinnest films that exhibit 3-D growth.
The outline of this report is: (1) structures of hexagonal Er metal, ErH{sub 2} fluorite, and molybdenum; (2) texture issues and processing effects; (3) the idea of pole figure integration; and (4) promising neutron diffraction work. The summary of this report is: (1) ErD{sub 2} and ErT{sub 2} film microstructures are strongly affected by processing conditions; (2) both x-ray and neutron diffraction are being pursued to help diagnose structure/property issues regarding ErT{sub 2} films and their correlation with He retention/release; (3) texture issues pose great challenges for determination of site occupancy; and (4) work on pole-figure integration looks promising for addressing texture issues in ErD{sub 2} and ErT{sub 2} films.
Hydrogen energy may provide the means to an environmentally friendly future. One of the problems related to its application for transportation is 'on-board' storage. Hydrogen storage in solids has long been recognized as one of the most practical approaches for this purpose. The H-capacity in interstitial hydrides of most metals and alloys is limited to below 2.5% by weight, which is unsatisfactory for on-board transportation applications. Magnesium hydride is an exception, with a hydrogen capacity of {approx}8.2 wt.%; however, its operating temperature, above 350 C, is too high for practical use. Sodium alanate (NaAlH{sub 4}) absorbs hydrogen up to 5.6 wt.% theoretically; however, its reaction kinetics and partial reversibility do not completely meet the new target for transportation applications. Recently Chen et al. [1] reported that (Li{sub 3}N + 2H{sub 2} {leftrightarrow} LiNH{sub 2} + 2LiH) provides a storage material with a possible high capacity, up to 11.5 wt.%, although this material is still too stable to meet the operating pressure/temperature requirement. Here we report a new approach to destabilize the lithium imide system by partial substitution of lithium by magnesium in the (LiNH{sub 2} + LiH {leftrightarrow} Li{sub 2}NH + H{sub 2}) system with minimal capacity loss. This Mg-substituted material can reversibly absorb 5.2 wt.% hydrogen at a pressure of 30 bar at 200 C. This is a very promising material for on-board hydrogen storage applications. It is interesting to observe that the starting material (2LiNH{sub 2} + MgH{sub 2}) converts to (Mg(NH{sub 2}){sub 2} + 2LiH) after a desorption/re-absorption cycle.
Biosecurity must be implemented without impeding biomedical and bioscience research. Existing security literature and regulatory requirements do not present a comprehensive approach or clear model for biosecurity, nor do they wholly recognize the operational issues within laboratory environments. To help address these issues, the concept of Biosecurity Levels should be developed. Biosecurity Levels would have increasing levels of security protections depending on the attractiveness of the pathogens to adversaries. Pathogens and toxins would be placed in a Biosecurity Level based on their security risk. Specifically, the security risk would be a function of an agent's weaponization potential and consequences of use. To demonstrate the concept, examples of security risk assessments for several human, animal, and plant pathogens will be presented. Higher security than that currently mandated by federal regulations would be applied for those very few agents that represent true weapons threats and lower levels for the remainder.
This paper describes the analyses and the experimental mechanics program performed to support the National Aeronautics and Space Administration (NASA) investigation of the Shuttle Columbia accident. A synergism of the analysis and experimental effort is required to ensure that the final analysis is valid: the experimental program provides both the material behavior and a basis for validation, while the analysis is required to ensure that the experimental effort provides behavior in the correct loading regime. Preliminary scoping calculations of foam impact onto the Shuttle Columbia's wing leading edge determined whether enough energy was available to damage the leading-edge panel. These analyses also determined the strain-rate regimes for various materials to provide the material test conditions. Experimental testing of the reinforced carbon-carbon wing panels then proceeded to provide the material behavior in a variety of configurations and strain rates for flown or conditioned samples of the material. After determination of the important failure mechanisms of the material, validation experiments were designed to provide a basis of comparison for the analytical effort. Using this basis, the final analyses were used for test configuration, instrumentation location, and calibration definition in support of full-scale testing of the panels in June 2003. These tests subsequently confirmed the accident cause.
Photocatalytic porphyrins are used to reduce metal complexes from aqueous solution and, further, to control the deposition of metals onto porphyrin nanotubes and surfactant assembly templates to produce metal composite nanostructures and nanodevices. For example, surfactant templates lead to spherical platinum dendrites and foam-like nanomaterials composed of dendritic platinum nanosheets. Porphyrin nanotubes are reported for the first time, and photocatalytic porphyrin nanotubes are shown to reduce metal complexes and deposit the metal selectively onto the inner or outer surface of the tubes, leading to nanotube-metal composite structures that are capable of hydrogen evolution and other nanodevices.
This report describes the purpose and results of the two-year, Sandia-sponsored Laboratory Directed Research and Development (LDRD) project entitled 'Understanding Communication in Counterterrorism Crisis Management'. The purpose of this project was to facilitate the capture of key communications among team members in simulated training exercises and to learn how to improve communication in that domain. The first section of this document details the scenario development aspects of the simulation. The second section covers the new communication technologies that were developed and incorporated into the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of decision support tools. The third section provides an overview of the features of the simulation and highlights its communication aspects. The fourth section describes the Team Communication Study processes and methodologies. The fifth section discusses future directions and areas in which to apply the new technologies and study results obtained as a result of this LDRD.