This report summarizes the results of a Laboratory Directed Research and Development (LDRD) project entitled 'Investigation of Potential Applications of Self-Assembled Nanostructured Materials in Nuclear Waste Management'. The objectives of the project were to (1) develop a mechanistic understanding of how nanometer-scale structures control the ion sorption capability of materials and (2) develop engineering approaches for improving material properties based on that understanding.
A radioactive sealed source is any radioactive material that is encased in a capsule designed to prevent leakage or escape of the radioactive material. Radioactive sealed sources are used for a wide variety of applications in hospitals, manufacturing, and research. Typical uses are in portable gauges to measure soil compaction and moisture or to determine physical properties of rock units in boreholes (well logging). Hospitals and clinics use radioactive sealed sources for teletherapy and brachytherapy. Oil exploration and medicine are the largest users. Each year, accidental mismanagement of radioactive sealed sources results in a large number of people receiving very high, or even fatal, doses of ionizing radiation. Deliberate mismanagement is a growing international concern. Sealed sources must be managed and disposed of effectively in order to protect human health and the environment. Effective national safety and management infrastructures are prerequisites for efficient and safe transportation, treatment, storage, and disposal. The Integrated Management Program for Radioactive Sealed Sources in Egypt (IMPRSS) is a cooperative development agreement between the Egyptian Atomic Energy Authority (EAEA), the Egyptian Ministry of Health (MOH), Sandia National Laboratories (SNL), the University of New Mexico (UNM), and Agriculture Cooperative Development International (ACDI/VOCA). The EAEA, teaming with SNL, is conducting a Preliminary Safety Assessment (PSA) of intermediate-depth borehole disposal in thick, arid alluvium in Egypt, based on experience with U.S. Greater Confinement Disposal (GCD). GoldSim has been selected for the preliminary disposal system assessment for the Egyptian GCD Study. The results of the PSA will then be used to decide whether Egypt wishes to implement such a disposal system.
We have studied the feasibility of an innovative device to sample single, low-power, 1-ns current transients with a time resolution better than 10 ps. The new concept explored here is to close photoconductive semiconductor switches (PCSSs) with a laser for a period of 10 ps. The PCSSs are placed in series along a transmission line (TL). The transient propagates along the TL, allowing one to carry out a spatially resolved sampling of charge at a fixed time instead of the usual time-sampling of the current. The fabrication of such a digitizer was shown to be feasible but very difficult.
This paper presents solution verification studies applicable to a class of problems involving wave propagation, frictional contact, geometrical complexity, and localized incompressibility. The studies are in support of a validation exercise for a phenomenological screw failure model. The numerical simulations are performed using a fully explicit transient dynamics finite element code, employing both standard four-node tetrahedral and eight-node mean-quadrature hexahedral elements. It is demonstrated that verifying the accuracy of the simulation requires consideration not only of the mesh discretization error, but also of the effects of the hourglass control and the contact enforcement. In particular, the proper amount of hourglass control and the behavior of the contact search and enforcement algorithms depend greatly on the mesh resolution. We carry out the solution verification exercise using mesh refinement studies and describe our systematic approach to handling the complicating issues. It is shown that hourglassing and contact must both be carefully monitored as the mesh is refined, and it is often necessary to adjust the hourglass and contact user input parameters to accommodate finer meshes. In this paper we introduce the hourglass energy, which is used as an 'error indicator' for the hourglass control: if the hourglass energy does not tend to zero with mesh refinement, an hourglass control parameter is changed and the calculation is repeated.
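As an illustration of this refinement-monitoring procedure, the following Python sketch (with hypothetical element sizes, energies, and threshold, not values from the actual study) checks whether the hourglass-energy indicator keeps decreasing as the mesh is refined and flags the cases where the hourglass control parameters should be adjusted and the calculation repeated.

```python
# Minimal sketch of monitoring an hourglass-energy "error indicator" during a
# mesh refinement study. The element sizes, energies, and threshold below are
# hypothetical; a real study would pull these values from the analysis output.

def hourglass_fraction(hourglass_energy, internal_energy):
    """Return hourglass energy as a fraction of internal energy."""
    return hourglass_energy / internal_energy

# Hypothetical refinement sequence: (element size, hourglass energy, internal energy)
refinement_study = [
    (4.0, 1.2e-2, 0.50),
    (2.0, 6.0e-3, 0.52),
    (1.0, 7.0e-3, 0.53),   # indicator stops decreasing: revisit hourglass parameters
]

threshold = 0.05  # acceptable hourglass/internal energy ratio (assumed)
previous = None
for h, e_hg, e_int in refinement_study:
    frac = hourglass_fraction(e_hg, e_int)
    flag = ""
    if frac > threshold:
        flag = "exceeds threshold: adjust hourglass control and rerun"
    elif previous is not None and frac >= previous:
        flag = "not decreasing with refinement: adjust hourglass control and rerun"
    print(f"h = {h:4.1f}  hourglass/internal = {frac:.3f}  {flag}")
    previous = frac
```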
We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant to some IV reuse. We offer the Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption, which uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
Software that is designed to issue functions that move it from one computing platform to another is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques, including Java, D'Agents, and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation that render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent, or at least slow down, reverse engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks on and improvements to this method and demonstrate its implementation for various algorithms. We also examine cryptographic techniques to achieve obfuscation, including encrypted functions, and offer a new application to digital signature algorithms. To better understand the lack of security proofs for obfuscation techniques, we examine in detail general theoretical models of obfuscation. We explain the need for formal models in order to obtain provable security and the progress made in this direction thus far. Finally, we tackle the problem of verifying remote execution. We introduce some methods of verifying remote exponentiation computations and offer some insight into generic computation checking.
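As a toy illustration of the general white-boxing idea (and not of the specific technique analyzed in the report), the following Python sketch folds a made-up 4-bit key into a precomputed lookup table so that the deployed code never contains the key as an explicit value. Real white-box constructions compose many such tables with additional random encodings, and published attacks target exactly these tables.

```python
# Toy illustration (not the report's method) of the core idea behind
# "white-boxing": fold a secret key into precomputed lookup tables so the key
# never appears as an explicit value in the deployed code or its data.
# The 4-bit S-box and key below are made up for demonstration.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # a 4-bit permutation

def build_whitebox_table(key_nibble):
    """Precompute T[x] = SBOX[x XOR k]; only the table ships, not the key."""
    return [SBOX[x ^ key_nibble] for x in range(16)]

# "Offline" step performed by the software publisher:
TABLE = build_whitebox_table(key_nibble=0x7)

# Deployed code uses only the table; the key 0x7 appears nowhere in this function.
def round_function(x):
    return TABLE[x & 0xF]

# Reference (key-exposing) computation, for comparison only:
assert all(round_function(x) == SBOX[x ^ 0x7] for x in range(16))
print([hex(round_function(x)) for x in range(16)])
```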
Microelectronic devices in satellites and spacecraft are exposed to high-energy cosmic radiation, and Earth-based electronics can be affected by terrestrial radiation. This radiation causes a variety of Single Event Effects (SEE) that can lead to failure of the devices. High-energy heavy-ion beams are used to simulate both cosmic and terrestrial radiation in order to study radiation effects and to ensure the reliability of electronic devices. Broad-beam experiments can provide a measure of the radiation hardness of a device (the SEE cross section), but they are unable to pinpoint the failing components in the circuit. A nuclear microbeam is an ideal tool to map SEE on a microscopic scale and find the circuit elements (transistors, capacitors, etc.) that are responsible for the failure of the device. In this paper a review of the latest radiation effects microscopy (REM) work at Sandia will be given. Different SEE mechanisms (Single Event Upset, Single Event Transient, etc.) and the methods used to study them (Ion Beam Induced Charge (IBIC), Single Event Upset mapping, etc.) will be discussed. Several examples of using REM to study the basic effects of radiation in electronic devices and in failure analysis of integrated circuits will be given.
An important challenge encountered during post-processing of finite element analyses is the visualization of three-dimensional fields of real-valued second-order tensors. In particular, as finite element meshes become more complex and detailed, evaluation and presentation of the principal stresses become correspondingly problematic. In this paper, we describe techniques used to visualize simulations of perturbed in-situ stress fields associated with hypothetical salt bodies in the Gulf of Mexico. We present an adaptation of the Mohr diagram, a graphical paper-and-pencil method used by the material mechanics community for estimating coordinate transformations for stress tensors, as a new tensor glyph for dynamically exploring tensor variables within three-dimensional finite element models. This interactive glyph can be used as either a probe or a filter through brushing and linking.
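For readers unfamiliar with the Mohr diagram, the following minimal Python sketch (with an assumed symmetric stress tensor in MPa) computes the quantities such a glyph encodes: the principal stresses and the centers and radii of the three Mohr circles in the normal-stress/shear-stress plane.

```python
# Minimal sketch: principal stresses and Mohr-circle parameters from an assumed
# symmetric stress tensor (values in MPa are hypothetical).
import numpy as np

stress = np.array([[-30.0,   5.0,   2.0],
                   [  5.0, -45.0,  -3.0],
                   [  2.0,  -3.0, -60.0]])   # hypothetical in-situ stress state

# Principal stresses are the eigenvalues of the symmetric stress tensor.
principal = np.sort(np.linalg.eigvalsh(stress))[::-1]   # sigma1 >= sigma2 >= sigma3
s1, s2, s3 = principal

# Each pair of principal stresses defines one Mohr circle (center, radius).
circles = [((s1 + s3) / 2, (s1 - s3) / 2),   # outer circle: maximum shear stress
           ((s1 + s2) / 2, (s1 - s2) / 2),
           ((s2 + s3) / 2, (s2 - s3) / 2)]

print("principal stresses:", principal)
for center, radius in circles:
    print(f"center = {center:8.2f} MPa, radius = {radius:7.2f} MPa")
```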
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
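As an illustration of the matricization operation supported by the tensor_as_matrix class, the following NumPy sketch (not the MATLAB implementation described in the report) shows a mode-n unfolding, its inverse, and a mode-n tensor-times-matrix product built from them; the column ordering used here is one of several possible conventions.

```python
# Sketch of "matricization" (mode-n unfolding) and its use in a mode-n
# tensor-times-matrix product, written with NumPy for illustration only.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: rows indexed by dimension `mode`, columns by the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold: rebuild the full tensor from its mode-n unfolding."""
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

X = np.arange(24).reshape(2, 3, 4)          # a small 2 x 3 x 4 tensor
X1 = unfold(X, 1)                           # 3 x 8 matrix
assert np.array_equal(fold(X1, 1, X.shape), X)

# Mode-1 tensor-times-matrix product via unfold / multiply / fold:
M = np.random.rand(5, 3)
Y = fold(M @ unfold(X, 1), 1, (2, 5, 4))    # result is 2 x 5 x 4
print(X1.shape, Y.shape)
```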
We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.
The rate coefficient has been measured under pseudo-first-order conditions for the Cl + CH3 association reaction at T = 202, 250, and 298 K and P = 0.3-2.0 Torr helium using the technique of discharge-flow mass spectrometry with low-energy (12-eV) electron-impact ionization and collision-free sampling. Cl and CH3 were generated rapidly and simultaneously by reaction of F with HCl and CH4, respectively. Fluorine atoms were produced by microwave discharge in an approximately 1% mixture of F2 in He. The decay of CH3 was monitored under pseudo-first-order conditions with the Cl-atom concentration in large excess over the CH3 concentration ([Cl]0/[CH3]0 = 9-67). Small corrections were made for both axial and radial diffusion and minor secondary chemistry. The rate coefficient was found to be in the falloff regime over the range of pressures studied. For example, at T = 202 K, the rate coefficient increases from 8.4 x 10^-12 at P = 0.30 Torr He to 1.8 x 10^-11 at P = 2.00 Torr He, both in units of cm^3 molecule^-1 s^-1. A combination of ab initio quantum chemistry, variational transition-state theory, and master-equation simulations was employed in developing a theoretical model for the temperature and pressure dependence of the rate coefficient. Reasonable empirical representations of energy transfer and of the effect of spin-orbit interactions yield a temperature- and pressure-dependent rate coefficient that is in excellent agreement with the present experimental results. The high-pressure limiting rate coefficient from the RRKM calculations is k2 = 6.0 x 10^-11 cm^3 molecule^-1 s^-1, independent of temperature in the range 200-300 K.
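The pseudo-first-order analysis can be illustrated with a short sketch: with Cl in large excess, the CH3 signal decays exponentially with a rate k' proportional to [Cl]0, and the bimolecular rate coefficient follows as k = k'/[Cl]0. The decay profile and Cl concentration below are fabricated for illustration and omit the diffusion and secondary-chemistry corrections applied in the actual analysis.

```python
# Sketch (with made-up data) of a pseudo-first-order kinetics analysis:
# with [Cl] >> [CH3], the CH3 signal decays as exp(-k' t), and the bimolecular
# rate coefficient is k = k' / [Cl]0.
import numpy as np

# Hypothetical CH3 decay profile (reaction time in s, relative signal).
t = np.array([0.0, 2.0e-3, 4.0e-3, 6.0e-3, 8.0e-3, 10.0e-3])
signal = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.22])

# Linear fit of ln(signal) vs. t gives the pseudo-first-order decay rate k'.
slope, intercept = np.polyfit(t, np.log(signal), 1)
k_prime = -slope                      # s^-1

cl0 = 2.0e13                          # assumed excess Cl concentration, molecule cm^-3
k_bimolecular = k_prime / cl0         # cm^3 molecule^-1 s^-1
print(f"k' = {k_prime:.3e} s^-1, k = {k_bimolecular:.3e} cm^3 molecule^-1 s^-1")
```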
The purpose of the present work is to increase our understanding of which properties of geomaterials most influence the penetration process, with the goal of improving our predictive ability. Two primary approaches were followed: development of a realistic constitutive model for geomaterials and design of an experimental approach to study penetration from the target's point of view. A realistic constitutive model, with parameters based on measurable properties, can be used for sensitivity analysis to determine the properties that are most important in influencing the penetration process. An immense literature exists devoted to the problem of predicting penetration into geomaterials or similar man-made materials such as concrete. Various formulations have been developed that use an analytic or, more commonly, numerical solution for spherical or cylindrical cavity expansion as a sort of Green's function to establish the forces acting on a penetrator. This approach has had considerable success in modeling the behavior of penetrators, both as to path and depth of penetration. However, the approach is not well adapted to the problem of understanding what is happening to the material being penetrated. Without a picture of the stress and strain state imposed on the highly deformed target material, it is not easy to determine which properties of the target are important in influencing the penetration process. We developed an experimental arrangement that allows greater control of the deformation than is possible in actual penetrator tests, yet approximates the deformation processes imposed by a penetrator. Using explosive line charges placed in a central borehole, we loaded cylindrical specimens in a manner equivalent to an increment of penetration, allowing the measurement of the associated strains and accelerations and the retrieval of specimens from the more-or-less intact cylinder. Results show clearly that the deformation zone is highly concentrated near the borehole, with almost no damage occurring beyond half a borehole diameter. This implies that penetration is not strongly influenced by anything but the material within a diameter or so of the penetration path. For penetrator tests, target size should not matter strongly once target diameters exceed some small multiple of the penetrator diameter, and penetration into jointed rock should not be much affected unless a discontinuity lies within a similar range. Accelerations measured at several points along a radius from the borehole are consistent with highly concentrated damage and energy absorption: at the borehole wall, accelerations were an order of magnitude higher than at half a diameter, while at the outer surface, eight diameters away, accelerations were as expected for propagation through an elastic medium. Accelerations measured at the outer surface of the cylinders increased significantly with cure time for the concrete. As strength increased, less damage was observed near the explosively driven borehole wall, consistent with the lower energy absorption expected and observed for stronger concrete. As it is the energy-absorbing properties of a target that ultimately stop a penetrator, we believe this may point the way to a more readily determined equivalent of the S-number.
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results.
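A minimal sketch of this workflow, using Latin hypercube sampling and rank correlations as one simple sensitivity measure, is given below; the two-input model and the input distributions are purely illustrative.

```python
# Minimal sketch of the sampling-based workflow: (1) Latin hypercube sampling of
# uncertain inputs, (2) propagation through a (toy) model, (3) rank correlations
# as a simple sensitivity measure. The model y = 3*x1 + x2**2 and the input
# distributions are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_vars):
    """One point per equal-probability stratum in each dimension, randomly paired."""
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_vars)])
    return (strata + rng.random((n_samples, n_vars))) / n_samples   # uniform on (0, 1)

n = 200
u = latin_hypercube(n, 2)
x1 = 1.0 + 4.0 * u[:, 0]                    # input 1: uniform on [1, 5]
x2 = np.exp(0.5 * norm.ppf(u[:, 1]))        # input 2: lognormal via inverse CDF

y = 3.0 * x1 + x2**2                        # propagate samples through the analysis

def rank_corr(a, b):
    """Spearman rank correlation between an input and the output."""
    ranks = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

print("rank correlation of y with x1:", round(rank_corr(x1, y), 2))
print("rank correlation of y with x2:", round(rank_corr(x2, y), 2))
```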
Like entangled ropes, polymer chains cannot slide through each other. These topological constraints, the so-called entanglements, dominate the viscoelastic behavior of high-molecular-weight polymeric liquids. Tube models of polymer dynamics and rheology are based on the idea that entanglements confine a chain to small fluctuations around a primitive path which follows the coarse-grained chain contour. To establish the microscopic foundation for these highly successful phenomenological models, we recently introduced a method for identifying the primitive path mesh that characterizes the microscopic topological state of computer-generated conformations of long-chain polymer melts and solutions. Here we give a more detailed account of the algorithm and discuss several key aspects of the analysis that are pertinent to its successful use in analyzing the topology of the polymer configurations. We also present a slight modification of the algorithm that preserves the previously neglected self-entanglements and allows us to distinguish between local self-knots and entanglements between distant sections of the same chain. Our results indicate that the latter make a negligible contribution to the tube and that the contour length between local self-knots, N_1k, is significantly larger than the entanglement length N_e.
Water resource scarcity around the world is driving the need for simulation models that can assist in water resources management. Transboundary water resources are receiving special attention because of the potential for conflict over scarce shared water resources. The Rio Grande/Rio Bravo along the U.S./Mexican border is an example of a scarce, transboundary water resource over which conflict has already begun. The data collection and modeling effort described in this report aims at developing methods for international collaboration, data collection, data integration, and modeling for simulating geographically large and diverse international watersheds, with a special focus on the Rio Grande/Rio Bravo. This report describes the basin and the data collected. The data collection effort was spatially aggregated across five reaches: Fort Quitman to Presidio, the Rio Conchos, Presidio to Amistad Dam, Amistad Dam to Falcon Dam, and Falcon Dam to the Gulf of Mexico. This report represents a nine-month effort made in FY04, during which time the model was not completed.
This report describes a project to develop both fixed and programmable surface acoustic wave (SAW) correlators for use in a low-power space communication network. This work was funded by NASA at Sandia National Laboratories for the final part of fiscal year 2002 and for fiscal years 2003 and 2004. Sandia's role was to develop the SAW correlator component, although additional work pertaining to use of the component in a system and to system optimization was also done at Sandia. The potential of SAW correlator-based communication systems, the design and fabrication of SAW correlators, and general system utilization of those correlators are discussed here.
Drainage of water from the region between an advancing probe tip and a flat sample is reconsidered under the assumption that the tip and sample surfaces are both coated by a thin water 'interphase' (a few nm wide) whose viscosity is much higher than that of the bulk liquid. A formula derived by solving the Navier-Stokes equations allows one to extract an interphase viscosity of ~59 kPa-s (or ~6.6 x 10^7 times the viscosity of bulk water at 25 C) from Interfacial Force Microscope measurements in which both tip and sample were made hydrophilic by OH-terminated tri(ethylene glycol) undecylthiol self-assembled monolayers.
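As a quick arithmetic check of the quoted numbers (assuming a handbook value of about 0.89 mPa-s for bulk water at 25 C):

```python
# Consistency check: 59 kPa-s interphase viscosity relative to bulk water at 25 C.
eta_interphase = 59.0e3       # Pa-s
eta_bulk_water = 0.89e-3      # Pa-s at 25 C (handbook value)
print(f"ratio ~ {eta_interphase / eta_bulk_water:.1e}")   # ~6.6e7, as quoted
```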
Current computing architectures are 'inherently insecure' because they are designed to execute any arbitrary sequence of instructions; as a result, they are subject to subversion by malicious code. Our goal is to produce a cryptographic method of 'tamper-proofing' trusted code over a large portion of the software life cycle. We have developed a technique called 'faithful execution' to cryptographically protect instruction sequences from subversion. This paper presents an overview of, and the lessons learned from, our implementations of faithful execution in a Java virtual machine prototype and in a configurable soft-core processor implemented in a field programmable gate array (FPGA).
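The following toy Python sketch illustrates the general decrypt-and-authenticate-per-instruction idea behind faithful execution; the cipher construction, MAC truncation, key handling, and miniature 'instruction set' are all invented for illustration and do not reflect the Java virtual machine or FPGA prototypes described in the paper.

```python
# Toy sketch (not the actual prototype) of the idea behind faithful execution:
# each instruction is stored encrypted along with a MAC, and the fetch path
# decrypts and authenticates it before execution.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)          # would live inside the protected CPU

def protect(instruction: bytes, address: int) -> bytes:
    """'Install-time' step: encrypt the instruction and append an address-bound MAC."""
    pad = hashlib.sha256(KEY + address.to_bytes(8, "big")).digest()[: len(instruction)]
    ciphertext = bytes(a ^ b for a, b in zip(instruction, pad))
    tag = hmac.new(KEY, address.to_bytes(8, "big") + ciphertext, hashlib.sha256).digest()[:8]
    return ciphertext + tag

def fetch(protected: bytes, address: int) -> bytes:
    """'Run-time' step inside the protected boundary: verify, then decrypt."""
    ciphertext, tag = protected[:-8], protected[-8:]
    expected = hmac.new(KEY, address.to_bytes(8, "big") + ciphertext, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise RuntimeError("instruction failed authentication: refusing to execute")
    pad = hashlib.sha256(KEY + address.to_bytes(8, "big")).digest()[: len(ciphertext)]
    return bytes(a ^ b for a, b in zip(ciphertext, pad))

program = [b"LOAD r1", b"ADD r1 r2", b"STORE r1"]
image = [protect(ins, addr) for addr, ins in enumerate(program)]
print([fetch(word, addr) for addr, word in enumerate(image)])

# Tampering with the stored image is detected at fetch time:
image[1] = bytes([image[1][0] ^ 0xFF]) + image[1][1:]
try:
    fetch(image[1], 1)
except RuntimeError as err:
    print(err)
```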
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought advances in communication speeds as we move to 10 Gigabit Ethernet and beyond. There is, of course, a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. This paper documents the innovations and advances of work first detailed in 'Photonic Encryption using All Optical Logic' [1]; a discussion of the underlying concepts can be found in SAND2003-4474. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines S-SEED devices and how discrete logic elements can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at the circuit level, the functionality of S-SEED devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element, as opposed to device-level computational modeling. By representing light intensity as voltage, we generate 'black box' models that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the system level, one can incorporate system design tools and a simulation environment to aid in the overall functional design. Each black-box model takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time-delay characteristics. These black-box models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. Demonstration circuits show how these logic elements can be used to form NAND, NOR, and XOR functions. This paper also presents a functional analysis of a serial, low-gate-count demonstration algorithm suitable for scrambling/encryption using S-SEED devices.
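At the logic level, the cascading of a single primitive gate into the NAND, NOR, and XOR functions mentioned above can be sketched as follows (here in Python, with NAND standing in for an S-SEED gate and all optical behavior such as intensity, ripple, and delay abstracted away):

```python
# Logic-level sketch: building NOR and XOR by cascading a single NAND primitive.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def nor(a, b):
    return not_(nand(not_(a), not_(b)))      # OR from NAND, then invert

def xor(a, b):
    t = nand(a, b)                           # classic four-NAND XOR
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "NOR:", nor(a, b), "XOR:", xor(a, b))
```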
We observe the spontaneous formation of parallel oxide rods upon exposing a clean NiAl(110) surface to oxygen at elevated temperatures (850-1350 K). By following the self-assembly of individual nanorods in real time with low-energy electron microscopy (LEEM), we are able to investigate the processes by which the rods lengthen along their axes and thicken normal to the surface of the substrate. At a fixed temperature and O2 pressure, the rods lengthen along their axes at a constant rate. The exponential temperature dependence of this rate yields an activation energy for growth of 1.2 ± 0.1 eV. The rod growth rates do not change as their ends pass in close proximity (<40 nm) to each other, which suggests that they do not compete for diffusing flux in order to elongate. Both LEEM and scanning tunneling microscopy (STM) studies show that the rods can grow vertically in layer-by-layer fashion. The heights of the rods are extremely bias dependent in STM images, but occur in integer multiples of approximately 2-Å-thick oxygen-cation layers. As the rods elongate from one substrate terrace to the next, we commonly see sharp changes in their rates of elongation that result from their tendency to gain (lose) atomic layers as they descend (climb) substrate steps. Diffraction analysis and dark-field imaging with LEEM indicate that the rods are crystalline, with a lattice constant that is well matched to that of the substrate along their length. We discuss the factors that lead to the formation of these highly anisotropic structures.
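The activation-energy extraction follows the usual Arrhenius analysis: a linear fit of ln(rate) versus 1/T has slope -Ea/kB. A short sketch with fabricated rate data illustrates the procedure.

```python
# Sketch (with synthetic data) of extracting an activation energy from the
# exponential temperature dependence of a growth rate via an Arrhenius fit.
import numpy as np

kB = 8.617e-5                                             # Boltzmann constant, eV/K

T = np.array([900.0, 1000.0, 1100.0, 1200.0, 1300.0])     # K (hypothetical)
rate = 1.0e5 * np.exp(-1.2 / (kB * T))                    # nm/s, synthetic Ea = 1.2 eV

slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)   # ln(rate) vs 1/T
print(f"activation energy ~ {-slope * kB:.2f} eV")        # recovers ~1.2 eV
```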
The performance characteristics and material properties, such as stress, microstructure, and composition, of nickel coatings and electroformed components can be controlled over a wide range by the addition of small amounts of surface-active compounds to the electroplating bath. Saccharin is one compound that is widely utilized for its ability to reduce tensile stress and refine grain size in electrodeposited nickel. While the effects of saccharin on nickel electrodeposition have been studied by many authors in the past, there is still uncertainty over saccharin's mechanisms of incorporation, stress reduction, and grain refinement. In-situ scanning probe microscopy (SPM) is a tool that can be used to directly image the nucleation and growth of thin nickel films at nanometer length scales to help elucidate saccharin's role in the development and evolution of grain structure. In this study, in-situ atomic force microscopy (AFM) and scanning tunneling microscopy (STM) techniques are used to investigate the effects of saccharin on the morphological evolution of thin nickel films. By observing monoatomic-height nickel island growth with and without saccharin present, we conclude that saccharin has little effect on nickel surface mobility during deposition at low overpotentials, where growth occurs in a layer-by-layer mode. Saccharin was imaged on Au(111) terraces as condensed patches without resolved packing structure. AFM measurements of the roughness evolution of nickel films up to 1200 nm thick on polycrystalline gold indicate that saccharin initially increases the roughness and surface skewness of the deposit, but at greater thicknesses the deposit becomes smoother than films grown without saccharin. Faceting of the deposit morphology decreases as saccharin concentration increases, even for the thinnest films, which exhibit 3-D growth.