This report describes work done in FY2003 under Advanced and Exploratory Studies funding for Advanced Weapons Controllers. The contemporary requirements and missions for nuclear weapons are changing from the class of missions originally envisioned during development of the current stockpile. Technology available today in electronics, computing, and software provides capabilities that were not practical or even possible 20 years ago. This exploratory work examines how Weapon Electrical Systems can be improved to accommodate new missions and new technologies while maintaining or improving existing standards in nuclear safety and reliability.
A concurrent computational and experimental investigation of thermal transport is performed with the goal of improving understanding of, and predictive capability for, thermal transport in microdevices. The computational component involves Monte Carlo simulation of phonon transport. In these simulations, all acoustic modes are included and their properties are drawn from a realistic dispersion relation. Phonon-phonon and phonon-boundary scattering events are treated independently. A new set of phonon-phonon scattering coefficients is proposed, reflecting the removal from the simulation of assumptions present in earlier analytical work. The experimental component involves steady-state measurement of thermal conductivity on silicon films as thin as 340 nm over a range of temperatures. Agreement between experiment and simulation for single-crystal silicon thin films is excellent. Agreement for polycrystalline films is promising, but significant work remains before predictions can be made confidently. Knowledge gained from these efforts was used to construct improved semiclassical models with the goal of representing microscale effects in existing macroscale codes in a computationally efficient manner.
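As a rough illustration of the simulation approach described above, the following Python sketch advances a single phonon with phonon-boundary and phonon-phonon scattering checked independently at each time step. The rate expression, group velocity, and all parameter values are placeholders for illustration only, not the dispersion data or the scattering coefficients proposed in the report.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (placeholders, not the report's values)
    DT = 1.0e-12             # time step, s
    V_G = 6400.0             # group velocity, m/s (from an assumed dispersion branch)
    FILM_THICKNESS = 340e-9  # m

    def phonon_phonon_rate(omega, T):
        # Umklapp-like functional form used only for illustration
        return 1.0e-19 * omega**2 * T * np.exp(-140.0 / T)

    def step(x, direction, omega, T):
        """Advance one phonon by one time step; boundary and phonon-phonon
        scattering are checked independently, as in the simulations described above."""
        x_new = x + direction * V_G * DT
        # Boundary scattering: diffuse reflection when a film surface is crossed
        if x_new < 0.0 or x_new > FILM_THICKNESS:
            x_new = np.clip(x_new, 0.0, FILM_THICKNESS)
            direction = rng.choice([-1.0, 1.0])
        # Phonon-phonon scattering: probability 1 - exp(-dt/tau) within the step
        if rng.random() < 1.0 - np.exp(-DT * phonon_phonon_rate(omega, T)):
            direction = rng.choice([-1.0, 1.0])   # redistribute direction after scattering
        return x_new, direction

    x, d = FILM_THICKNESS / 2, 1.0
    for _ in range(1000):
        x, d = step(x, d, omega=2.0e13, T=300.0)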
The work discussed in this report was supported by a Campus Fellowship LDRD. The report contains three papers that were published by the fellowship recipient and these papers form the bulk of his dissertation. They are reproduced here to satisfy LDRD reporting requirements.
As MEMS transducers are scaled up in size, a threshold is quickly crossed beyond which magnetoquasistatic (MQS) transducers are superior to electroquasistatic (EQS) transducers for force production. Considerable progress has been made in increasing the force output of MEMS EQS transducers, but progress with MEMS MQS transducers has been more modest. A key reason for this has been the difficulty of implementing efficient lithographically fabricated magnetic coil structures. The contribution of this study is a planar multilayer polyphase coil architecture that provides for the lithographic implementation of efficient stator windings suitable for linear magnetic machines. A millimeter-scale linear actuator with complex stator windings was fabricated using this architecture. The stators of the actuator were fabricated using a BCB/Cu process, which does not require replanarization of the wafer between layers. The prototype stator was limited to thin copper layers (3 µm) because evaporated metal was used at the time of fabrication. Two layers of metal were implemented in the prototype, but the winding architecture naturally supports additional metal layer pairs. Laboratory tests found that the windings can support very high current densities of 4 × 10⁹ A/m² without damage. Force production normal to the stator was calculated to be 0.54 N/A. For thin stators such as this one, force production increases approximately linearly with the thickness of the windings, and a six-layer stator fabricated using a newly implemented electroplated BCB/Cu process (six layers of 15 µm thick metal) is projected to produce approximately 8.8 N/A.
A laser safety hazard evaluation and pertinent output measurements were performed (June 2003 through August 2003) on several VITAL-2 (Variable Intensity Tactical Aiming Light) infrared lasers associated with the Proforce M-4 system used in force-on-force exercises. The VITAL-2 contains two diode lasers that present an 'Extended Source' viewing hazard out to a range on the order of 1.3 meters before reverting to a 'Small Source' viewing hazard. The laser hazard evaluation was performed in accordance with ANSI Std. Z136.1-2000 for the safe use of lasers and ANSI Std. Z136.6-2000 for the safe use of lasers outdoors. The results of the laser hazard analysis for the VITAL-2 indicate that this tactical aiming IR laser presents a Class 1 laser hazard to personnel in the area of use. Field measurements performed on 71 units confirmed that the radiant outputs were at all times below the Allowable Emission Limit and that the irradiance of the laser spot was at all locations below the Maximum Exposure Limit. The system is eye safe and may be used under current SNL policy in force-on-force exercises. The VITAL-2 Variable Intensity Tactical Aiming Light does not present a laser hazard greater than Class 1 to aided viewing with binoculars.
Alloying element loss from the weld pool during laser spot welding of stainless steel was investigated experimentally and theoretically. The experimental work involved determination of work-piece weight loss and metal vapor composition for various welding conditions. The transient temperature and velocity fields in the weld pool were numerically simulated. The vaporization rates of the alloying elements were modeled using the computed temperature profiles. The fusion zone geometry could be predicted from the transient heat transfer and fluid flow model for various welding conditions. The laser power and the pulse duration were the most important variables in determining the transient temperature profiles. The velocity of the liquid metal in the weld pool increased with time during heating, and convection played an increasingly important role in the heat transfer. The peak temperature and velocity increased significantly with laser power density and pulse duration. At very high power densities, the computed temperatures were higher than the boiling point of 304 stainless steel. As a result, evaporation of alloying elements was driven by both the total pressure and the concentration gradients. The calculations showed that vaporization occurred mainly from a small region under the laser beam where the temperatures were very high. The computed vapor loss was found to be lower than the measured mass loss because of the ejection of tiny metal droplets owing to the recoil force exerted by the metal vapors. The ejection of metal droplets was predicted by the computations and verified by experiments.
Detailed experiments involving extensive high resolution transmission electron microscopy (TEM) revealed significant microstructural differences between Cu sulfides formed at low and high relative humidity (RH). It was known from prior experiments that the sulfide grows linearly with time at low RH up to a sulfide thickness approaching or exceeding one micron, while the sulfide initially grows linearly with time at high RH then becomes sub-linear at a sulfide thickness less than about 0.2 microns, with the sulfidation rate eventually approaching zero. TEM measurements of the Cu2S morphology revealed that the Cu2S formed at low RH has large grains (75 to more than 150 nm) that are columnar in structure with sharp, abrupt grain boundaries. In contrast, the Cu2S formed at high RH has small equiaxed grains of 20 to 50 nm in size. Importantly, the small grains formed at high RH have highly disordered grain boundaries with a high concentration of nano-voids. Two-dimensional diffusion modeling was performed to determine whether the existence of localized source terms at the Cu/Cu2S interface could be responsible for the suppression of Cu sulfidation at long times at high RH. The models indicated that the existence of static localized source terms would not predict the complete suppression of growth that was observed. Instead, the models suggest that the diffusion of Cu through Cu2S becomes restricted during Cu2S formation at high RH. The leading speculation is that the extensive voiding that exists at grain boundaries in this material greatly reduces the flux of Cu between grains, leading to a reduction in the rate of sulfide film formation. These experiments provide an approach for adding microstructural information to Cu sulfidation rate computer models. In addition to the microstructural studies, new micro-patterned test structures were developed in this LDRD to offer insight into the point defect structure of Cu2S and to permit measurement of surface reaction rates during Cu sulfidation. The surface reaction rate was measured by creating micropatterned Cu lines of widths ranging from 5 microns to 100 microns. When sulfidized, the edges of the Cu lines show greater sulfidation than the center, an effect known as microloading. Measurement of the sulfidation profile enables an estimate of the ratio of the diffusivity of H2S in the gas phase to the surface reaction rate constant, k. Our measurements indicated that the gas phase diffusivity exceeds k by a factor of more than 10 but less than 100. This is consistent with computer simulations of the sulfidation process. Other electrical test structures were developed to measure the electrical conductivity of Cu2S that forms on Cu. This information can be used to determine relative vacancy concentrations in the Cu2S layer as a function of RH. The test structures involved micropatterned Cu disks and thin films, and the initial measurements showed that the electrical approach is feasible for point defect studies in Cu2S.
We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
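A minimal sketch of the filter idea, assuming each trial point is summarized by an objective value f and an aggregate constraint violation h (this simple (f, h) summary and the function names are illustrative, not the paper's exact formulation): a candidate enters the filter only if no stored point dominates it, and entries the candidate dominates are pruned.

    def dominates(a, b):
        """Point a = (f, h) dominates b if it is no worse in both objective and violation."""
        return a[0] <= b[0] and a[1] <= b[1] and a != b

    def filter_accept(filter_set, candidate):
        """Accept a candidate (f, h) if no filter entry dominates it; prune dominated entries."""
        if any(dominates(entry, candidate) for entry in filter_set):
            return filter_set, False
        filter_set = [e for e in filter_set if not dominates(candidate, e)]
        filter_set.append(candidate)
        return filter_set, True

    # Example: objective f(x) = x0^2 + x1^2 with constraint x0 + x1 >= 1
    f = lambda x: x[0]**2 + x[1]**2
    h = lambda x: max(0.0, 1.0 - (x[0] + x[1]))   # aggregate constraint violation

    flt = []
    for x in [(1.0, 1.0), (0.6, 0.6), (0.2, 0.2), (0.5, 0.5)]:
        flt, accepted = filter_accept(flt, (f(x), h(x)))
        print(x, "accepted" if accepted else "rejected", flt)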
In the epitaxial lateral overgrowth of GaN, mass transport and the effects of crystal-growth kinetics lead to a wide range of observed feature growth rates depending on the dimensions of the masked and exposed regions. Based on a simple model, scaling relationships are derived that reveal the dynamic similarity of growth behavior across pattern designs. A time-like quantity is introduced that takes into account the varying transport effects, and provides a dimensionless time basis for analyzing crystal growth kinetics in this system. Illustrations of these scaling relationships are given through comparison with experiment. Published by Elsevier B.V.
Views of the state of the art in verification and validation (V&V) in computational physics are discussed. These views are described within a framework in which predictive capability relies on V&V as well as on other factors that affect predictive capability. Research topics addressed include the development of improved procedures for using the phenomena identification and ranking table (PIRT) to prioritize V&V activities, and the method of manufactured solutions for code verification. Also addressed are the development and use of hierarchical validation diagrams and the construction and use of validation metrics incorporating statistical measures.
Estimates of mass transfer timescales from 316 solute transport experiments reported in 35 publications are compared to the pore-water velocities and residence times, as well as the experimental durations. New tracer experiments were also conducted in columns of different lengths so that the velocity and the advective residence time could be varied independently. In both the experiments reported in the literature and the new experiments, the estimated mass transfer timescale (inverse of the mass-transfer rate coefficient) is better correlated to residence time and the experimental duration than to velocity. Of the measures considered, the experimental duration multiplied by 1 + β (where β is the capacity coefficient, defined as the ratio of masses in the immobile and mobile domains at equilibrium) best predicted the estimated mass transfer timescale. This relation is consistent with other work showing that aquifer and soil material commonly produce multiple timescales of mass transfer.
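Stated compactly (with notation assumed here for illustration), writing α for the mass-transfer rate coefficient and t_exp for the experimental duration, the best predictor identified above is

    \tau_{mt} \;=\; \frac{1}{\alpha} \;\approx\; t_{\mathrm{exp}}\,(1+\beta),
    \qquad
    \beta \;=\; \left.\frac{M_{\mathrm{immobile}}}{M_{\mathrm{mobile}}}\right|_{\mathrm{equilibrium}}

so, for example, a 10-day column experiment on material with β = 1 would be expected to yield an estimated mass transfer timescale of roughly 20 days.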
Given a finite set of points in Euclidean space, we ask for the minimum number of times a piecewise-linear path must change direction in order to pass through all of them. We prove some new upper and lower bounds for the rectilinear version of this problem, in which all motion is orthogonal to the coordinate axes. We also consider the more general case of arbitrary directions.
Deep X-ray lithography based techniques such as LIGA (a German acronym for Lithographie, Galvanoformung, Abformung) are currently being used to fabricate net-shape components for microelectromechanical systems (MEMS). Unlike other microfabrication techniques, LIGA lends itself to a broad range of materials, including metals, alloys, and polymers, as well as ceramics and composites. Currently, Ni and Ni alloys are the materials of choice for LIGA microsystems. While Ni alloys may meet the structural requirements for MEMS, their tribological (friction and wear) behavior poses great challenges for the reliable operation of LIGA-fabricated MEMS. Typical sidewall morphologies of LIGA-fabricated parts are described, and their role in the tribological behavior of MEMS is discussed. The adaptation of commercial plasma-enhanced chemical vapor deposition to coat the sidewalls of LIGA-fabricated parts with a diamond-like nanocomposite is described.
The propagation of a 30 kA, 3.5 MeV electron beam focused into gas- and plasma-filled cells is discussed. Such beams, produced by pulsed-power accelerators and used for X-ray radiography, are focused through a gas cell onto a high-atomic-number target to generate bremsstrahlung radiation. The effectiveness of beam focusing in neutral gas, partially ionized gas, and fully ionized (plasma-filled) cells was investigated using numerical simulation. In an optimized gas cell, an initial plasma density approaching 10¹⁶ cm⁻³ was found to be sufficient to prevent significant net currents and the subsequent beam sweep.
Our national security, economic prosperity, and national well-being are dependent upon a set of highly interdependent critical infrastructures. Examples of these infrastructures include the national electrical grid, oil and natural gas systems, telecommunication and information networks, transportation networks, water systems, and banking and financial systems. Given the importance of their reliable and secure operations, understanding the behavior of these infrastructures - particularly when stressed or under attack - is crucial. Models and simulations can provide considerable insight into the complex nature of their behaviors and operational characteristics. These models and simulations must include interdependencies among infrastructures if they are to provide accurate representations of infrastructure characteristics and operations. A number of modeling and simulation approaches under development today directly address interdependencies and offer considerable insight into the operational and behavioral characteristics of critical infrastructures.
A robust nonlinear adaptive control (NAC) system was designed for the rotational slewing of an active structure. Control laws were developed for both motor torque control and beam vibration control actuation. Experiments validated the control system performance. Robustness to parameter variations was tested by increasing the tip mass, which reduced the first-mode bending frequency; the control system performance was found to be similar to that of the zero-tip-mass case.
Acoustic testing using commercial sound system components is becoming more popular as a cost-effective way of generating the required environment both in and out of a reverberant chamber. This paper presents the development of such a sound system, which uses a state-of-the-art random vibration controller to perform closed-loop control in the reverberant chamber at Sandia National Laboratories. Test data are presented that demonstrate narrow-band controllability, performance, and some limitations of commercial sound generation equipment in a reverberant chamber.
This report summarizes a series of structural calculations that examine the effects of raising the Waste Isolation Pilot Plant repository horizon 2.43 meters above the original design level. These calculations allow evaluation of various features incorporated in conceptual models used for performance assessment. The material presented in this report supports the regulatory compliance re-certification and therefore begins by replicating the calculations used in the initial compliance certification application. Calculations are then repeated for grid changes appropriate for the new horizon raised to Clay Seam G. Results are presented in three main areas: (1) disposal room porosity, (2) disturbed rock zone characteristics, and (3) anhydrite marker bed failure. No change to the porosity surface for the compliance re-certification application is necessary to account for raising the repository horizon, because the new porosity surface is essentially identical. The disturbed rock zone evolution and devolution are charted in terms of a stress invariant criterion over the regulatory period. This model shows that the damage zone does not extend upward to MB 138, but does reach MB 139 below the repository. Damaged salt would be expected to heal in nominally 100 years. The anhydrite marker beds sustain states of stress that promote failure, and substantial marker bed deformation into the room ensures that fractured anhydrite will persist in the proximity of the disposal rooms.
Finite difference equations are derived for the simulation of dielectric waveguides using an Hz-Ez formulation defined on a nonuniform triangular grid. The resulting equations may be solved as a banded eigenproblem for waveguide structures of arbitrary shape composed of regions of piecewise constant isotropic dielectric, and all transverse fields then computed from the solutions. Benchmark comparisons are presented for problems with analytic solutions, as well as a sample calculation of the propagation loss of a hollow Bragg fiber.
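The banded-eigenproblem structure mentioned above can be illustrated with a much simpler scalar analogue (not the report's Hz-Ez vector formulation): a 1-D finite-difference mode solver for a step-index slab, in which the discretized operator d²/dx² + k₀²n(x)² is tridiagonal and its largest eigenvalues are the squared propagation constants of the guided modes. All parameter values below are illustrative.

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    # Scalar 1-D slab-waveguide analogue (illustrative only)
    wavelength = 1.55e-6
    k0 = 2 * np.pi / wavelength
    n_core, n_clad = 1.50, 1.45
    core_width = 2.0e-6

    N, width = 2001, 20e-6
    x = np.linspace(-width / 2, width / 2, N)
    dx = x[1] - x[0]
    n = np.where(np.abs(x) <= core_width / 2, n_core, n_clad)

    # Tridiagonal (banded) operator: d2/dx2 + k0^2 n(x)^2 with Dirichlet ends
    diag = -2.0 / dx**2 + (k0 * n)**2
    off = np.ones(N - 1) / dx**2
    beta2 = eigh_tridiagonal(diag, off, eigvals_only=True, select="i",
                             select_range=(N - 3, N - 1))   # three largest eigenvalues

    n_eff = np.sqrt(beta2[::-1]) / k0        # effective indices, largest first
    guided = n_eff[(n_eff > n_clad) & (n_eff < n_core)]
    print("guided-mode effective indices:", guided)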
This report documents the results obtained during a one-year Laboratory Directed Research and Development (LDRD) initiative aimed at investigating coupled structural acoustic interactions by means of algorithm development and experiment. Finite element acoustic formulations have been developed based on fluid velocity potential and fluid displacement. Domain decomposition and diagonal scaling preconditioners were investigated for parallel implementation. A formulation that includes fluid viscosity and that can simulate both pressure and shear waves in fluid was developed. An acoustic wave tube was built, tested, and shown to be an effective means of testing acoustic loading on simple test structures. The tube is capable of creating a semi-infinite acoustic field due to nonreflecting acoustic termination at one end. In addition, a micro-torsional disk was created and tested for the purposes of investigating acoustic shear wave damping in microstructures, and the slip boundary conditions that occur along the wet interface when the Knudsen number becomes sufficiently large.
Motivated by observations about job runtimes on the CPlant system, we use a trace-driven microsimulator to begin characterizing the performance of different classes of allocation algorithms on jobs with different communication patterns in space-shared parallel systems with mesh topology. We show that relative performance varies considerably with communication pattern. The Paging strategy using the Hilbert space-filling curve and the Best Fit heuristic performed best across several communication patterns.
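For illustration, a simplified sketch of the Paging strategy mentioned above: mesh nodes are ordered along a Hilbert space-filling curve, and a job is simply given the next free nodes in that order, so allocations tend to stay spatially compact. The index-to-coordinate conversion is the standard Hilbert-curve mapping; the allocator itself is a toy, not the CPlant implementation.

    def hilbert_d2xy(order, d):
        """Map a distance d along a Hilbert curve to (x, y) on a 2**order x 2**order grid."""
        x = y = 0
        s, t, n = 1, d, 1 << order
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                      # rotate/flip the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def allocate(order, free_nodes, job_size):
        """Paging-style allocation: take the first job_size free nodes in Hilbert order."""
        n = 1 << order
        chosen = []
        for d in range(n * n):
            xy = hilbert_d2xy(order, d)
            if xy in free_nodes:
                chosen.append(xy)
                if len(chosen) == job_size:
                    for node in chosen:
                        free_nodes.remove(node)
                    return chosen
        return None   # not enough free nodes

    free = {(x, y) for x in range(4) for y in range(4)}
    print(allocate(2, free, 5))   # five nodes contiguous along the curve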
We have made progress in developing a new statistical mechanics approach to designing self-organizing systems that is unique to SNL. The primary application target for this ongoing research has been the development of new kinds of nanoscale components and hardware systems. However, this research also enables an out-of-the-box connection to the field of software development. With appropriate modification, the collective-behavior physics ideas for enabling simple hardware components to self-organize may also provide design methods for a new class of software modules. Our current physics simulations suggest that populations of these special software components would be able to self-assemble into a variety of much larger and more complex software systems. If successful, this would provide a radical (disruptive technology) path to developing complex, high-reliability software unlike any known today. This high-risk, high-payoff opportunity does not fit well into existing SNL funding categories, as it is well outside the mainstreams of both conventional software development practices and the nanoscience research area that spawned it. This LDRD effort was aimed at developing and extending the capabilities of self-organizing/assembling software systems and at demonstrating the unique capabilities and advantages of this radical new approach to software development.
Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bind to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein binding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain the experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.
An estimate of the distribution of fatigue ranges or extreme loads for wind turbines may be obtained by separating the problem into two uncoupled parts: (1) a turbine-specific portion, independent of the site, and (2) a site-specific description of environmental variables. We consider contextually appropriate probability models to describe the turbine-specific response for extreme loads or fatigue. The site-specific portion is described by a joint probability distribution of a vector of environmental variables that characterize the wind process at the hub height of the wind turbine. Several approaches are considered for combining the two portions to obtain an estimate of the extreme load (e.g., 50-year loads) or fatigue damage. We assess the efficacy of these models in obtaining accurate estimates, including various levels of epistemic uncertainty, of the turbine response.
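One way to combine the two portions, sketched below under strong simplifying assumptions (a Weibull hub-height wind climate and Gumbel short-term extremes with made-up wind-dependent parameters), is to average the conditional exceedance probability of the load over the environmental distribution and then solve for the load whose long-term exceedance probability corresponds to a 50-year return period.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)

    # Site-specific portion: 10-minute mean hub-height wind speed, Weibull (assumed parameters)
    V = weibull_min.rvs(c=2.0, scale=8.0, size=200_000, random_state=rng)

    # Turbine-specific portion: short-term 10-minute extreme load given V,
    # modeled here as Gumbel with illustrative wind-speed-dependent parameters
    mu = lambda v: 100.0 + 8.0 * v          # location (kN*m), placeholder
    sigma = lambda v: 5.0 + 0.6 * v         # scale (kN*m), placeholder

    def longterm_exceedance(load):
        """Long-term exceedance probability per 10-minute period, averaged over the wind climate."""
        z = (load - mu(V)) / sigma(V)
        return np.mean(1.0 - np.exp(-np.exp(-z)))   # Gumbel exceedance, averaged over sampled V

    periods_50yr = 50 * 365.25 * 24 * 6             # number of 10-minute periods in 50 years
    target = 1.0 / periods_50yr

    load_50yr = brentq(lambda l: longterm_exceedance(l) - target, 100.0, 5000.0)
    print(f"50-year extreme load estimate: {load_50yr:.0f} kN*m")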
The quantitative analysis of ammonia binding sites in the Davison (Type 3A) zeolite desiccant using solid-state ¹⁵N MAS NMR spectroscopy is reported. By utilizing ¹⁵N-enriched ammonia (¹⁵NH₃) gas, the different adsorption/binding sites within the zeolite were investigated as a function of NH₃ loading. Using ¹⁵N MAS NMR, multiple sites were resolved that have distinct cross-polarization dynamics and chemical shift behavior. These differences in the ¹⁵N NMR were used to characterize the adsorption environments in both the pure 3A zeolite and the silicone-molded forms of the desiccant.
This SAND report provides the technical progress through October 2003 of the Sandia-led project, 'Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling,' funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and to identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex and more heterogeneous, and that must be coupled to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high-performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org
Military test and training ranges operate with live fire engagements to provide realism important to the maintenance of key tactical skills. Ordnance detonations during these operations typically produce minute residues of parent explosive chemical compounds. Occasional low order detonations also disperse solid phase energetic material onto the surface soil. These detonation remnants are implicated in chemical contamination impacts to groundwater on a limited set of ranges where environmental characterization projects have occurred. Key questions arise regarding how these residues and the environmental conditions (e.g., weather and geostratigraphy) contribute to groundwater pollution impacts. This report documents interim results of experimental work evaluating mass transfer processes from solid phase energetics to soil pore water. The experimental work is used as a basis to formulate a mass transfer numerical model, which has been incorporated into the porous media simulation code T2TNT. This report documents the results of the Phase III experimental effort, which evaluated the impacts of surface deposits versus buried deposits, energetic material particle size, and low order detonation debris. Next year, the energetic material mass transfer model will be refined and a 2-d screening model will be developed for initial site-specific applications. A technology development roadmap was created to show how specific R&D efforts are linked to technology and products for key customers.
A Micro Electro Mechanical System (MEMS) typically consists of micron-scale parts that move through a gas at atmospheric or reduced pressure. In this situation, the gas-molecule mean free path is comparable to the geometric features of the microsystem, so the gas flow is noncontinuum. When mean-free-path effects cannot be neglected, the Boltzmann equation must be used to describe the gas flow. Solution of the Boltzmann equation is difficult even for the simplest case because of its sevenfold dimensionality (one temporal dimension, three spatial dimensions, and three velocity dimensions) and because of the integral nature of the collision term. The Direct Simulation Monte Carlo (DSMC) method is the method of choice to simulate high-speed noncontinuum flows. However, since DSMC uses computational molecules to represent the gas, the inherent statistical noise must be minimized by sampling large numbers of molecules. Since typical microsystem velocities are low (< 1 m/s) compared to molecular velocities (≈400 m/s), the number of molecular samples required to achieve 1% precision can exceed 10¹⁰ per cell. The Discrete Velocity Gas (DVG) method, an approach motivated by radiation transport, provides another way to simulate noncontinuum gas flows. Unlike DSMC, the DVG method restricts molecular velocities to have only certain discrete values. The transport of the number density of a velocity state is governed by a discrete Boltzmann equation that has one temporal dimension and three spatial dimensions and a polynomial collision term. Specification and implementation of DVG models are discussed, and DVG models are applied to Couette flow and to Fourier flow. While the DVG results for these benchmark problems are qualitatively correct, the errors in the shear stress and the heat flux can be order-unity even for DVG models with 88 velocity states. It is concluded that the DVG method, as described herein, is not sufficiently accurate to simulate the low-speed gas flows that occur in microsystems.
CommAspen is a new agent-based model for simulating the interdependent effects of market decisions and disruptions in the telecommunications infrastructure on other critical infrastructures in the U.S. economy such as banking and finance, and electric power. CommAspen extends and modifies the capabilities of Aspen-EE, an agent-based model previously developed by Sandia National Laboratories to analyze the interdependencies between the electric power system and other critical infrastructures. CommAspen has been tested on a series of scenarios in which the communications network has been disrupted, due to congestion and outages. Analysis of the scenario results indicates that communications networks simulated by the model behave as their counterparts do in the real world. Results also show that the model could be used to analyze the economic impact of communications congestion and outages.
This report is the latest in a continuing series that highlights the recent technical accomplishments associated with the work being performed within the Materials and Process Sciences Center. Our research and development activities primarily address the materials-engineering needs of Sandia's Nuclear-Weapons (NW) program. In addition, we have significant efforts that support programs managed by the other laboratory business units. Our wide range of activities occurs within six thematic areas: Materials Aging and Reliability, Scientifically Engineered Materials, Materials Processing, Materials Characterization, Materials for Microsystems, and Materials Modeling and Simulation. We believe these highlights collectively demonstrate the importance that a strong materials-science base has on the ultimate success of the NW program and the overall DOE technology portfolio.
The catalytic combustion of natural gas has been the topic of much research over the past decade. Interest in this technology results from a desire to decrease or eliminate the emissions of harmful nitrogen oxides (NOx) from gas turbine power plants. A low-pressure drop catalyst support, such as a ceramic monolith, is ideal for this high-temperature, high-flow application. A drawback to the traditional honeycomb monoliths under these operating conditions is poor mass transfer to the catalyst surface in the straight-through channels. 'Robocasting' is a unique process developed at Sandia National Laboratories that can be used to manufacture ceramic monoliths with alternative 3-dimensional geometries, providing tortuous pathways to increase mass transfer while maintaining low pressure drops. This report details the mass transfer effects for novel 3-dimensional robocast monoliths, traditional honeycomb-type monoliths, and ceramic foams. The mass transfer limit is experimentally determined using the probe reaction of CO oxidation over a Pt/γ-Al₂O₃ catalyst, and the pressure drop is measured for each monolith sample. Conversion versus temperature data is analyzed quantitatively using well-known dimensionless mass transfer parameters. The results show that, relative to the honeycomb monolith support, considerable improvement in mass transfer efficiency is observed for robocast samples synthesized using an FCC-like geometry of alternating rods. Also, there is clearly a trade-off between enhanced mass transfer and increased pressure drop, which can be optimized depending on the particular demands of a given application.
This document introduces the use of Trilinos, version 3.1. Trilinos has been written to support, in a rigorous manner, the solver needs of the engineering and scientific applications at Sandia National Laboratories. The aim of this manuscript is to present the basic features of some of the Trilinos packages. The material presented includes the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel methods with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. With the help of several examples, some of the most important classes and methods are detailed for the inexperienced user. For the most part, each example is extensively commented throughout the text. Other comments can be found in the source of each example. This document is a companion to the Trilinos User's Guide and Trilinos Development Guides. Also, the documentation included in each of the Trilinos packages is of fundamental importance.
We report our conclusions in support of the FY 2003 Science and Technology Milestone ST03-3.5. The goal of the milestone was to develop a research plan for expanding Sandia's capabilities in materials modeling and simulation. From inquiries and discussion with technical staff during FY 2003, we conclude that it is premature to formulate the envisioned coordinated research plan. The more appropriate goal is to develop a set of computational tools for making scale transitions and to accumulate experience in applying these tools to real test cases, so as to enable us to attack each new problem with higher confidence of success.
Simulation-based life-cycle engineering and the ASCI program have resulted in models of unprecedented size and fidelity. The validation of these models requires high-resolution, multi-parameter diagnostics. Within the thermal-fluids disciplines, the need for detailed, high-fidelity measurements exceeds the limits of current engineering sciences capabilities and severely tests the state of the art. The focus of this LDRD is the development and application of filtered Rayleigh scattering (FRS) for high-resolution, nonintrusive measurement of gas-phase velocity and temperature. With FRS, the flow is laser-illuminated and Rayleigh scattering from naturally occurring sources is detected through a molecular filter. The filtered transmission may be interpreted to yield point or planar measurements of three-component velocities and/or thermodynamic state. Different experimental configurations may be employed to obtain compromises between spatial resolution, time resolution, and the quantity of simultaneously measured flow variables. In this report, we present the results of a three-year LDRD-funded effort to develop FRS combustion thermometry and Aerosciences velocity measurement systems. The working principles and details of our FRS opto-electronic system are presented in detail. For combustion thermometry we present 2-D, spatially correlated FRS results from nonsooting premixed and diffusion flames and from a sooting premixed flame. The FRS-measured temperatures are accurate to within ±50 K (3%) in a premixed CH4-air flame and within ±100 K for a vortex-strained, diluted CH4-air diffusion flame, where the FRS technique is severely tested by the large variation in scattering cross section. In the diffusion flame work, FRS has been combined with Raman imaging of the CH4 fuel molecule to correct for the local light scattering properties of the combustion gases. To our knowledge, this is the first extension of FRS to nonpremixed combustion and the first use of joint FRS-Raman imaging. FRS has been applied to a sooting C2H4-air flame and combined with LII to assess the upper sooting limit at which FRS may be utilized. The results from this sooting flame show that FRS has potential for quantitative temperature imaging for soot volume fractions of order 0.1 ppm. FRS velocity measurements have been performed in a Mach 3.7 overexpanded nitrogen jet. The FRS results are in good agreement with the velocities predicted by inviscid analysis of the jet flowfield. We have constructed a second FRS opto-electronic system for measurements at Sandia's hypersonic wind tunnel. The details of this second FRS system are provided here. This second system is currently being used for velocity characterization of these production hypersonic facilities.
Molecular analysis of cancer, at the genomic level, could lead to individualized patient diagnostics and treatments. The developments to follow will signal a significant paradigm shift in the clinical management of human cancer. Despite our initial hopes, however, it seems that simple analysis of microarray data cannot elucidate clinically significant gene functions and mechanisms. Extracting biological information from microarray data requires a complicated path involving multidisciplinary teams of biomedical researchers, computer scientists, mathematicians, statisticians, and computational linguists. The integration of the diverse outputs of each team is the limiting factor in the progress to discover candidate genes and pathways associated with the molecular biology of cancer. Specifically, one must deal with sets of significant genes identified by each method and extract whatever useful information may be found by comparing these different gene lists. Here we present our experience with such comparisons, and share methods developed in the analysis of an infant leukemia cohort studied on Affymetrix HG-U95A arrays. In particular, spatial gene clustering, hyper-dimensional projections, and computational linguistics were used to compare different gene lists. In spatial gene clustering, different gene lists are grouped together and visualized on a three-dimensional expression map, where genes with similar expressions are co-located. In another approach, projections from gene expression space onto a sphere clarify how groups of genes can jointly have more predictive power than groups of individually selected genes. Finally, online literature is automatically rearranged to present information about genes common to multiple groups, or to contrast the differences between the lists. The combination of these methods has improved our understanding of infant leukemia. While the complicated reality of the biology dashed our initial, optimistic hopes for simple answers from microarrays, we have made progress by combining very different analytic approaches.
A mine dog evaluation project initiated by the Geneva International Center for Humanitarian Demining is evaluating the capability and reliability of mine detection dogs. The performance of field-operational mine detection dogs will be measured in test minefields in Afghanistan containing actual, but unfused landmines. Repeated performance testing over two years through various seasonal weather conditions will provide data simulating near real world conditions. Soil samples will be obtained adjacent to the buried targets repeatedly over the course of the test. Chemical analysis results from these soil samples will be used to evaluate correlations between mine dog detection performance and seasonal weather conditions. This report documents the analytical chemical methods and results from the fifth batch of soils received. This batch contained samples from Kharga, Afghanistan collected in June 2003.
The structure of laminar inverse diffusion flames (IDFs) of methane and ethylene in air was studied using a cylindrical co-flowing burner. IDFs are similar to normal diffusion flames, except that the relative positions of the fuel and oxidizer are reversed. Radiation from soot surrounding the IDF masked the reaction zone in visible images; as a result, flame heights determined from visible images were overestimated, and the height of the reaction zone as indicated by OH LIF was a more relevant measure of height. The concentration and position of PAH and soot were observed using LIF and laser-induced incandescence (LII). PAH LIF and soot LII indicated that PAH and soot are present on the fuel side of the reaction zone, with soot located closer to the reaction zone than PAH. Ethylene flames produced significantly higher PAH LIF and soot LII signals than methane flames, consistent with the sooting propensity of ethylene. This is an abstract of a paper presented at the 30th International Symposium on Combustion (Chicago, IL, 7/25-30/2004).
Many practical combustion devices and uncontrolled fires involve high-Reynolds-number nonpremixed turbulent flames that feature non-equilibrium, finite-rate chemistry effects, e.g., local flame extinction and reignition, where enhanced transport of mass and heat away from the flame due to rapid turbulent mixing exceeds the local burning rate. Probability density function (PDF) methods have shown promise in predicting piloted nonpremixed CH4-air flames over a range of Reynolds numbers and varying degrees of flame extinction and reignition. A study was carried out to quantify and characterize the kinetics of localized extinction and reignition in the Sandia flames D, E, and F, for which detailed velocity and scalar data exist. A PDF method within large eddy simulation, based on the filtered mass density function (FMDF), was used. A simple idealized mixing simulation was performed of a nonpremixed turbulent fuel jet in an air co-flow. Mixing statistics from the Monte Carlo-based FMDF solution of the chemical species scalar were compared to those from a more traditional Eulerian mixing simulation using gradient-transport-based subgrid closure models. The FMDF solution will be performed with the Euclidean minimum spanning tree mixing model, which uses the phenomenological connection between physical space and state space for mixing events. This is an abstract of a paper presented at the 30th International Symposium on Combustion (Chicago, IL, 7/25-30/2004).
The synthesis, characterization, and separations capability of defect-free, thin-film zeolite membranes were presented. The one-micron-thick sodium-aluminosilicate films of Silicalite-1 and ZSM-5 were synthesized by hydrothermal methods on either disk or tube supports. Techniques for growing membranes on both Al2O3 substrates and oxide-coated stainless steel substrates were presented. The resulting defect-free zeolite films had high flux rates at room temperature (∼10⁻⁷ mol/(Pa·s·m²)) and showed selective separations (3-7) between pure gases of H2 and CH4, O2, N2, CO2, CO, and SF6. Results from mixed-gas studies showed flux rates similar to those of the pure gases, with enhanced selectivity (15-50) for H2. The selectivity through both Silicalite-1 and ZSM-5 membranes was compared and contrasted for several gas mixtures. Data comparisons for defect-free and "defect-filled" membranes were also discussed. Under operation, the flow through these membranes quickly reached its maximum value and was stable over long periods of time. Results from experiments at high temperatures, ≤ 300°C, were compared with the data obtained at room temperature. This is an abstract of a paper presented at the 228th ACS National Meeting (Philadelphia, PA, 8/22-26/2004).
Three-dimensional seismic wave propagation within a heterogeneous isotropic poroelastic medium is simulated with an explicit, time-domain, finite-difference algorithm. A system of thirteen, coupled, first-order partial differential equations is solved for the velocity vector components, stress tensor components, and pressure associated with solid and fluid constituents of the composite medium. A massively parallel computational implementation, utilizing the spatial domain decomposition strategy, allows investigation of large-scale earth models and/or broadband wave propagation within reasonable execution times.
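As a schematic of the staggered-grid velocity-stress finite-difference approach, the sketch below solves only a 1-D, single-phase analogue (two fields rather than the thirteen coupled poroelastic unknowns, and no domain decomposition or parallelism); it is illustrative only.

    import numpy as np

    # 1-D single-phase velocity-stress analogue (illustrative; the report's scheme solves
    # thirteen coupled poroelastic equations in 3-D with spatial domain decomposition)
    nx, nt = 400, 800
    dx = 5.0                  # grid spacing, m
    rho = 2200.0              # density, kg/m^3
    vp = 2500.0               # wave speed, m/s
    modulus = rho * vp**2     # elastic (P-wave) modulus in 1-D
    dt = 0.8 * dx / vp        # CFL-limited time step

    v = np.zeros(nx)          # particle velocity at integer grid points
    s = np.zeros(nx - 1)      # stress at half grid points (staggered)

    for it in range(nt):
        # Source: Gaussian pulse in time injected into the stress field at the domain center
        s[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)
        # Staggered-grid leapfrog updates (interior points only; ends held fixed)
        v[1:-1] += dt / rho * (s[1:] - s[:-1]) / dx
        s[:] += dt * modulus * (v[1:] - v[:-1]) / dx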
Statistical active contour models (aka statistical pressure snakes) have attractive properties for use in mobile manipulation platforms as both a method for use in visual servoing and as a natural component of a human-computer interface. Unfortunately, the constantly changing illumination expected in outdoor environments presents problems for statistical pressure snakes and for their image gradient-based predecessors. This paper introduces a new color-based variant of statistical pressure snakes that gives superior performance under dynamic lighting conditions and improves upon the previously published results of attempts to incorporate color imagery into active deformable models.
Equilibrated melts of long-chain polymers were prepared. The combination of molecular dynamics (MD) relaxation, double-bridging, and slow push-off allowed the efficient and controlled preparation of equilibrated melts of short, medium, and long chains, respectively. Results were obtained for an off-lattice bead-spring model with chain lengths up to N = 7000 beads.
Fluid flows that do not have local equilibrium are characteristic of some of the new frontiers in engineering and technology, for example, high-speed high-altitude aerodynamics and the development of micrometre-sized fluid pumps, turbines and other devices. However, this area of fluid dynamics is poorly understood from both the experimental and simulation perspectives, which hampers the progress of these technologies. This paper reviews some of the recent developments in experimental techniques and modelling methods for non-equilibrium gas flows, examining their advantages and drawbacks. We also present new results from our computational investigations into both hypersonic and microsystem flows using two distinct numerical methodologies: the direct simulation Monte Carlo method and extended hydrodynamics. While the direct simulation approach produces excellent results and is used widely, extended hydrodynamics is not as well developed but is a promising candidate for future more complex simulations. Finally, we discuss some of the other situations where these simulation methods could be usefully applied, and look to the future of numerical tools for non-equilibrium flows.
Multivariate curve resolution (MCR) using constrained alternating least squares algorithms represents a powerful analysis capability for the quantitative analysis of hyperspectral image data. We will demonstrate the application of MCR using data from a new hyperspectral fluorescence imaging microarray scanner for monitoring gene expression in cells from thousands of genes on the array. The new scanner collects the entire fluorescence spectrum from each pixel of the scanned microarray. Application of MCR with nonnegativity and equality constraints reveals several sources of undesired fluorescence that emit in the same wavelength range as the reporter fluorophores. MCR analysis of the hyperspectral images confirms that one of the sources of fluorescence is due to contaminant fluorescence under the printed DNA spots that is spot localized. Thus, traditional background subtraction methods used with data collected from the current commercial microarray scanners will lead to errors in determining the relative expression of low-expressed genes. With the new scanner and MCR analysis, we generate relative concentration maps of the background, impurity, and fluorescent labels over the entire image. Since the concentration maps of the fluorescent labels are relatively unaffected by the presence of background and impurity emissions, the accuracy and useful dynamic range of the gene expression data are both greatly improved over those obtained by commercial microarray scanners.
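A minimal sketch of MCR by constrained alternating least squares, assuming the hyperspectral image has been unfolded into a matrix D (pixels x wavelengths) and factored as D ≈ C Sᵀ with nonnegative concentrations C and spectra S; the equality constraints and the scanner-specific preprocessing described above are omitted, and the two-component test data are synthetic.

    import numpy as np
    from scipy.optimize import nnls

    def mcr_als(D, n_components, n_iter=50, seed=0):
        """Alternating least squares with nonnegativity: D (pixels x wavelengths) ~= C @ S.T."""
        rng = np.random.default_rng(seed)
        n_pix, n_wl = D.shape
        S = rng.random((n_wl, n_components))       # initial guess for pure-component spectra
        C = np.zeros((n_pix, n_components))
        for _ in range(n_iter):
            for i in range(n_pix):                  # nonnegative concentrations, pixel by pixel
                C[i], _ = nnls(S, D[i])
            for j in range(n_wl):                   # nonnegative spectra, wavelength by wavelength
                S[j], _ = nnls(C, D[:, j])
        return C, S

    # Synthetic two-component test: a label-like emission band plus a broad background
    wl = np.linspace(0, 1, 80)
    spectra = np.vstack([np.exp(-((wl - 0.4) / 0.05) ** 2),    # reporter-like fluorophore
                         0.5 + 0.5 * wl]).T                     # contaminant/background
    conc = np.random.default_rng(1).random((300, 2))
    D = conc @ spectra.T + 0.01 * np.random.default_rng(2).standard_normal((300, 80))

    C_hat, S_hat = mcr_als(D, n_components=2)
    print("reconstruction error:", np.linalg.norm(D - C_hat @ S_hat.T) / np.linalg.norm(D))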
Over the past decade, more women have become interested in renewable energy, particularly photovoltaics, but a suitable training environment is difficult to find. Approximately five years ago, Solar Energy International (SEI) started offering classes for women only. The premise is that a women-only class provides a friendly atmosphere for women to ask basic questions, take time working with tools and concepts, and practice hands-on activities in a supportive environment. Sandia National Labs has assisted SEI by providing technical content and hands-on instruction. The classes are split between the classroom and the field. This paper provides an overview of the technical training, safety and the importance of the National Electrical Code® (NEC®), and accomplishments of the students beyond these classes.
The existing IEEE stationary battery maintenance and testing standards fall into two basic categories: those associated with grid-tied standby applications and those associated with stand-alone photovoltaic cycling applications. These applications differ in several significant ways which in turn influence their associated standards. A review of the factors influencing the maintenance and testing of stationary battery systems provides the reasons for the differences between these standards and some of the hazards of using a standard inappropriate to the application. This review also provides a background on why these standards will need to be supplemented in the future to support emerging requirements of other applications, such as grid-tied cycling and photovoltaic hybrid applications.
The deformation of an infinite bar subjected to a self-equilibrated load distribution is investigated using the peridynamic formulation of elasticity theory. The peridynamic theory differs from the classical theory and other nonlocal theories in that it does not involve spatial derivatives of the displacement field. The bar problem is formulated as a linear Fredholm integral equation and solved using Fourier transform methods. The solution is shown to exhibit, in general, features that are not found in the classical result. Among these are decaying oscillations in the displacement field and progressively weakening discontinuities that propagate outside of the loading region. These features, when present, are guaranteed to decay provided that the wave speeds are real. This leads to a one-dimensional version of St. Venant's principle for peridynamic materials that ensures the increasing smoothness of the displacement field remotely from the loading region. The peridynamic result converges to the classical result in the limit of short-range forces. An example gives the solution to the concentrated load problem, and hence provides the Green's function for general loading problems.
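For reference, the governing equation of the peridynamic bar has the following general form (static case, with a micromodulus function C and body force density b; the notation is assumed here rather than copied from the report):

    \int_{-\infty}^{\infty} C(x'-x)\,\bigl[u(x') - u(x)\bigr]\,dx' \;+\; b(x) \;=\; 0

Because the displacement enters only through an integral of differences, no spatial derivatives of u are required, which is the distinction from the classical theory noted above; the classical second-derivative equation is recovered in the limit of short-range forces.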
The integration and approaches utilized in the various stages of the Advanced Dish Development System (ADDS) project are presented and described. Insights gained from integration of the ADDS are also discussed. The ADDS project focuses on developing a product that meets the needs of the remote power market and on identifying key technology development needs; this work has resulted in a system that is closer to commercialization. Persistence in solving problems, a lack of fear of breaking things, and hands-on involvement by design engineers were the key components leading to rapid improvement of the project.
ESTECH 2003: 49th Annual Technical Meeting and Exposition of the Institute of Environmental Sciences and Technology. Proceedings: Contamination Control; Design, Test, and Evaluation; Product Reliability.
Real physical systems subjected to dynamic environments all display nonlinear behavior, yet they are most frequently modeled in a linear framework. The main reasons are, first, that it is convenient and efficient to solve linear equations, and second, that the system behavior can often be accurately approximated using linear governing equations. Experience shows that much of the nonlinearity of system behavior arises from the dynamic action of mechanical joints in systems. When the linear framework is used, the stiffness of joints is modeled as linear, and the damping is modeled as linear and viscous. To model mechanical joints otherwise requires a nonlinear framework and mathematical finite element model that accommodates transient time domain analysis. This study investigates a particular mechanical joint energy dissipation model. It is the Iwan model for energy dissipation caused by microslip friction. The sensitivity of energy dissipation in a system due to variation of model parameters is studied. The results of a combined numerical/experimental example that uses a model calibrated to a sequence of experiments are presented.
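A minimal sketch of a parallel-series Iwan model of the kind studied here: the joint is represented by several Jenkins elements (a spring in series with a Coulomb slider) acting in parallel, and cycling the assembly produces a microslip hysteresis loop whose area is the energy dissipated per cycle. The stiffness and slip-strength values are placeholders, not calibrated parameters from the report.

    import numpy as np

    class JenkinsElement:
        """Spring (stiffness k) in series with a Coulomb slider of strength f."""
        def __init__(self, k, f):
            self.k, self.f, self.slider = k, f, 0.0

        def force(self, u):
            trial = self.k * (u - self.slider)
            if abs(trial) > self.f:                   # slider slips; force saturates at +/- f
                self.slider = u - np.sign(trial) * self.f / self.k
                trial = np.sign(trial) * self.f
            return trial

    # Parallel-series Iwan joint: Jenkins elements with a distribution of slip strengths
    elements = [JenkinsElement(k=1.0e6, f=fs) for fs in (5.0, 10.0, 20.0, 40.0)]

    u_cycle = 40e-6 * np.sin(np.linspace(0.0, 2.0 * np.pi, 2001))   # imposed displacement, m

    # One conditioning cycle so the hysteresis loop closes, then one measured cycle
    for u in u_cycle:
        for e in elements:
            e.force(u)
    force = np.array([sum(e.force(u) for e in elements) for u in u_cycle])

    dissipation = np.trapz(force, u_cycle)    # area of the force-displacement loop, J per cycle
    print(f"energy dissipated per cycle: {dissipation:.3e} J")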
Measurements revealed that a foam-target DH length of ∼8 mm maximizes the axial power. The decrease in axial power at an enhanced level for lengths longer than 10 mm may be due to instability effects and/or effects of the W/CH2 interface near the REH.
The spatial, spectral, and temporal properties of self-focusing 798-nm, 100-fs pulses in air were measured experimentally using high-resolution, single-shot techniques at a fixed propagation distance of 10.91 m. The data were taken over an extended energy range and can thus be used to test the validity of physical models. The experimental results show that significant spatial, spectral, and temporal changes occur at intensities lower than those required for strong ionization of air.
Modern high-performance Synthetic Aperture Radar (SAR) systems have evolved into highly versatile, robust, and reliable tactical sensors, offering images and information not available from other sensor systems. For example, real-time images are routinely formed by the Sandia-designed General Atomics (AN/APY-8) Lynx SAR yielding 4-inch resolution at 25 km range (representing better than arc-second resolutions) in clouds, smoke, and rain. Sandia's Real-Time Visualization (RTV) program operates an Interferometric SAR (IFSAR) system that forms three-dimensional (3-D) topographic maps in near real time with National Imagery and Mapping Agency (NIMA) Digital Terrain Elevation Data (DTED) level 4 performance (3-meter post spacing with 0.8-meter height accuracy) or better. When exported to 3-D rendering software, this data allows remarkable interactive fly-through experiences. Coherent Change Detection (CCD) allows detecting tire tracks on dirt roads, footprints, and other minor, otherwise indiscernible ground disturbances long after their originators have left the scene. Ground Moving Target Indicator (GMTI) radar modes allow detecting and tracking moving vehicles. A Sandia program known as "MiniSAR" is developing technologies that are expected to culminate in a fully functioning, high-performance, real-time SAR that weighs less than 20 lbs. The purpose of this paper is to provide an overview of recent technology developments, as well as current ongoing research and development efforts at Sandia National Laboratories.
Optimizing the design of the upgrade to the Z pulser at Sandia National Laboratories renewed interest in the ubiquitous Scyllac-cased capacitor. For the Z upgrade, the desired capacitance value in each case differs from those built before and is double that of the existing units in Z. The cost and fundamental importance of the Marx capacitors in pulsers like Z prompted the decision to build a test facility that could evaluate sample units from capacitor manufacturers. The number of interested vendors and the expected lifetime indicated about 350 thousand capacitor-shots for capacitors in a plus-minus configuration. The project schedule demanded that the initial testing be completed in a few months. These factors, and budget limitations, pointed to the need for a system that could test multiple pairs of capacitors at once without a full-time attendant. The system described here tests up to ten pairs of 2.6 μF capacitors charged to 100 kV in 90 seconds, then discharged at 150 kA and 35 percent reversal. Unattended operation requires sophisticated fault detection, so considerable attention has been paid to this aspect of the design. This paper will describe the system and its key components, including the control system, the switches, and the load resistors. The paper will also show lifetime and performance data from commonly used 200 kV spark gap switches.
Sandia National Laboratories' Z machine provides a unique capability to a number of National Nuclear Security Administration (NNSA) and basic science communities, and routinely produces x-ray power more than 5 times, and energy 50 times, greater than any other non-pulsed-power laboratory device. To address an increasing demand and widening range of research interests, Sandia's Z refurbishment (ZR) program intends to increase Z utilization by providing the capability to double the number of shots per year, improve overall precision for better reproducibility and enhanced data quality, and increase delivered current to provide additional performance capability. Reliability and operations analysis has been included from the onset of the ZR program to maximize performance and operations capacity. Preliminary analysis using a system-level reliability model highlighted the Z failure modes requiring reliability improvement to help meet the increased ZR requirements. Preliminary results from a Z and ZR operations simulation model identify, from an overall operations perspective that includes penalty costs and personnel resources, the scheduled maintenance activities and unscheduled repairs most in need of reduced time requirements and rates of occurrence.
A Vadose Zone Monitoring System (VZMS) was used for the long-term performance assessment of a corrective action management unit (CAMU) containment cell at Sandia National Laboratories, New Mexico. A cost saving of approximately $200 million was realized by utilizing the CAMU rather than off-site waste disposition. The VZMS permits the analysis of volatile organic compound (VOC) concentrations in the soil gas directly underlying the containment cell. The configuration of the VZMS allowed for changes in the requirements for selected monitoring components, monitoring frequency, and level of sensitivity.
Coherent stereo pairs from cross-track synthetic aperture radar (SAR) collects allow fully automated correlation matching using magnitude and phase data. Yet automated feature matching (correspondence) becomes more difficult when imaging rugged terrain with large stereo crossing angle geometries, because high-relief features can undergo significant spatial distortions. These distortions sometimes cause traditional, shift-only correlation matching to fail. This paper presents a possible solution to this difficulty. Changing the complex correlation maximization search from shift-only to shift-and-scaling using the downhill simplex method results in higher correlation. This is shown on eight coherent spotlight-mode cross-track stereo pairs with stereo crossing angles averaging 93.7° collected over terrain with slopes greater than 20°. The resulting digital elevation maps (DEMs) are compared to ground truth. Using the shift-scaling correlation approach to calculate disparity, height errors decrease and the number of reliable DEM posts increases.
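The shift-and-scaling search described above can be prototyped compactly with an off-the-shelf downhill simplex optimizer. The sketch below is only an illustration of the idea, not the paper's production matcher: the warp model, interpolation order, and chip handling are assumptions.

```python
# Illustrative sketch: maximize the magnitude of the complex correlation between
# a reference SAR chip and a warped search chip, where the warp is a shift plus
# per-axis scaling, using the downhill simplex (Nelder-Mead) method.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def complex_corr(ref, test):
    """Normalized complex cross-correlation magnitude of two same-size chips."""
    num = np.abs(np.vdot(ref, test))
    den = np.sqrt(np.vdot(ref, ref).real * np.vdot(test, test).real)
    return num / den

def warped(chip, shift_r, shift_c, scale_r, scale_c):
    """Resample a complex chip under a shift-and-scaling warp."""
    rows, cols = np.mgrid[0:chip.shape[0], 0:chip.shape[1]].astype(float)
    coords = [rows * scale_r + shift_r, cols * scale_c + shift_c]
    re = map_coordinates(chip.real, coords, order=1, mode="nearest")
    im = map_coordinates(chip.imag, coords, order=1, mode="nearest")
    return re + 1j * im

def match(ref, search, x0=(0.0, 0.0, 1.0, 1.0)):
    """Return the warp parameters that maximize the correlation magnitude."""
    obj = lambda p: -complex_corr(ref, warped(search, *p))
    res = minimize(obj, x0, method="Nelder-Mead")
    return res.x, -res.fun

# Trivial demonstration: matching a chip against itself recovers the identity warp.
ref = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
params, corr = match(ref, ref)
print(params, corr)
```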
The Detached Eddy Simulation (DES) and steady-state Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling approaches are examined for the incompressible flow over a square cross-section cylinder at a Reynolds number of 21,400. A compressible flow code is used which employs a second-order Roe upwind spatial discretization. Efforts are made to assess the numerical accuracy of the DES predictions with regard to statistical convergence, iterative convergence, and temporal and spatial discretization error. Three-dimensional DES simulations compared well with two-dimensional DES simulations, suggesting that the dominant vortex shedding mechanism is effectively two-dimensional. The two-dimensional simulations are validated via comparison to experimental data for mean and RMS velocities as well as Reynolds stress in the cylinder wake. The steady-state RANS models significantly overpredict the size of the recirculation zone, thus underpredicting the drag coefficient relative to the experimental value. The DES model is found to give good agreement with the experimental velocity data in the wake, the drag coefficient, and the recirculation zone length.
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built, depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates and other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
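The root-sum-of-squares combination is simple enough to capture in a few lines. The following sketch shows the bookkeeping only; the tolerances and weighting factors are illustrative placeholders rather than the values tabulated in the paper.

```python
# Minimal sketch of the root-sum-of-squares error budgeting described above.
import math

def rss_budget(tolerances, weights, coverage=2.0):
    """Combine independent error sources: sigma = sqrt(sum (w_i * t_i)^2).

    tolerances : 1-sigma (k=1) magnitudes of each error source
    weights    : sensitivity (weighting) factor for each source
    coverage   : k-factor, e.g. 1 for ~67% or 2 for ~95% confidence
    """
    var = sum((w * t) ** 2 for w, t in zip(weights, tolerances))
    return coverage * math.sqrt(var)

# Example with three hypothetical sources (axis wobble, encoder error, mounting tilt),
# all values in degrees and purely illustrative.
print(rss_budget(tolerances=[0.002, 0.005, 0.010], weights=[1.0, 0.7, 0.3]))
```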
The implementation of GeoPowering the West (GPW), a communication and outreach component of the Department of Energy (DOE) effort to bring geothermal heat and power to homes and businesses across the West, is discussed. GPW helps to overcome financial risks, environmental misconceptions, and transactional costs, creates public awareness, and defines the benefits of geothermal development. GPW complements the research and development activities conducted by the department and its national laboratories. GPW will continue to provide technical assistance to states that are considering implementing renewable energy policies.
This paper defines a process for selecting dosimetry-quality cross sections. The recommended cross-section evaluation depends on screening high-quality evaluations with quantified uncertainties, down-selecting based on comparison to experiments in standard neutron fields, and consistency checking in reference neutron fields. This procedure is illustrated for the {sup 23}Na(n,γ){sup 24}Na reaction.
Thermoluminescent dosimeters (TLDs), particularly CaF2:Mn, are often used as photon dosimeters in mixed (n/γ) field environments. In these mixed field environments, it is desirable to separate the photon response of a dosimeter from the neutron response. For passive dosimeters that measure an integral response, such as TLDs, the separation of the two components must be performed by postexperiment analysis because the TLD reading system cannot distinguish between photon- and neutron-produced response. Using a model of an aluminum-equilibrated TLD-400 (CaF2:Mn) chip, a systematic effort has been made to analytically determine the various components that contribute to the neutron response of a TLD reading. The calculations were performed for five measured reactor neutron spectra and one theoretical thermal neutron spectrum. The five measured reactor spectra all have experimental values for aluminum-equilibrated TLD-400 chips. Calculations were used to determine the percentage of the total TLD response produced by neutron interactions in the TLD and aluminum equilibrator. These calculations will aid the Sandia National Laboratories-Radiation Metrology Laboratory (SNL-RML) in the interpretation of the uncertainty for TLD dosimetry measurements in the mixed field environments produced by SNL reactor facilities.
A UV generation system consisting of a quasi-monolithic nonplanar-ring-oscillator image-rotating OPO, called the RISTRA OPO, is presented. High beam quality and the absence of mirror adjustments due to the monolithic design make this OPO well-suited for demanding applications such as satellite deployment. Initial tests of self seeding using low-quality flattop beams with poor spatial overlap between the OPO's cavity mode and the spatial mode of the injected signal pulse showed pump depletion of 63%.
The decoding of received error control encoded bit streams is fairly straightforward when the channel encoding algorithms are efficient and known. But if the encoding scheme is unknown or part of the data is missing, how would one design a viable decoder for the received transmission? Communication engineers may not frequently encounter this situation, but for computational biologists it is an immediate challenge as they attempt to decipher and understand the vast amount of sequence data produced by genome sequencing projects. Assuming the systematic parity check block code model of protein translation initiation, this work presents an approach for determining the generator matrix given a set of potential codewords. The resulting generators and corresponding parity matrices are applied to valid and invalid Escherichia coli K-12 MG1655 messenger RNA leader sequences. The generators constructed using strict subsets of the 16S ribosomal RNA performed better than those constructed using the (5,2) block code model in earlier work.
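To make the coding-theory framing concrete, the sketch below builds a parity-check matrix from a systematic generator matrix over GF(2) and tests candidate words against it. The toy (5, 2) generator shown is arbitrary and is not one of the matrices derived in the paper.

```python
# Hedged illustration of systematic block-code bookkeeping, not the genomic
# analysis itself. For a systematic (n, k) code with generator G = [I_k | P],
# the parity-check matrix is H = [P^T | I_(n-k)], and a word c is a codeword
# when H c^T = 0 (mod 2).
import numpy as np

def parity_check_from_generator(G):
    """Build H = [P^T | I] from a systematic generator G = [I | P] over GF(2)."""
    k, n = G.shape
    P = G[:, k:]
    return np.hstack([P.T, np.eye(n - k, dtype=int)]) % 2

def is_codeword(H, word):
    """True when the syndrome H * word^T vanishes modulo 2."""
    return not np.any((H @ np.asarray(word)) % 2)

# A toy (5, 2) systematic code, echoing the block length used in earlier work.
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])
H = parity_check_from_generator(G)
message = np.array([1, 1])
codeword = (message @ G) % 2
print(codeword, is_codeword(H, codeword))      # a valid codeword
print(is_codeword(H, [1, 0, 0, 0, 1]))         # an invalid word
```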
The Disturbed Rock Zone constitutes an important geomechanical element of the Waste Isolation Pilot Plant. The science and engineering underpinning the disturbed rock zone provide the basis for evaluating ongoing operational issues and their impact on performance assessment. Contemporary treatment of the disturbed rock zone applied to the evaluation of the panel closure system and to a new mining horizon improves the level of detail and quantitative elements associated with a damaged zone surrounding the repository openings. Technical advancement has been realized by virtue of ongoing experimental investigations and international collaboration. The initial portion of this document discusses the disturbed rock zone relative to operational issues pertaining to re-certification of the repository. The remaining sections summarize and document theoretical and experimental advances that quantify characteristics of the disturbed rock zone as applied to nuclear waste repositories in salt.
This report describes the complete revision of a deuterium equation of state (EOS) model published in 1972. It uses the same general approach as the 1972 EOS, i.e., the so-called 'chemical model,' but incorporates a number of theoretical advances that have taken place during the past thirty years. Three phases are included: a molecular solid, an atomic solid, and a fluid phase consisting of both molecular and atomic species. Ionization and the insulator-metal transition are also included. The most important improvements are in the liquid perturbation theory, the treatment of molecular vibrations and rotations, and the ionization equilibrium and mixture models. In addition, new experimental data and theoretical calculations are used to calibrate certain model parameters, notably the zero-Kelvin isotherms for the molecular and atomic solids, and the quantum corrections to the liquid phase. The report gives a general overview of the model, followed by detailed discussions of the most important theoretical issues and extensive comparisons with the many experimental data that have been obtained during the last thirty years. Questions about the validity of the chemical model are also considered. Implications for modeling the 'giant planets' are also discussed.
This report summarizes the development of new biocompatible self-assembly procedures enabling the immobilization of genetically engineered cells in a compact, self-sustaining, remotely addressable sensor platform. We used evaporation induced self-assembly (EISA) to immobilize cells within periodic silica nanostructures, characterized by unimodal pore sizes and pore connectivity, that can be patterned using ink-jet printing or photo patterning. We constructed cell lines for the expression of fluorescent proteins and induced reporter protein expression in immobilized cells. We investigated the role of the abiotic/biotic interface during cell-mediated self-assembly of synthetic materials.
This report documents work undertaken to endow the cognitive framework currently under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities have been demonstrated within the context of three separate problem areas. The first year of the project developed a capability whereby simulated robots were able to utilize a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions, providing a queriable record of interactions such that a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time and allowing a model of the user to build on itself based on observations of the user's behavior.
Epetra is a package of classes for the construction and use of serial and distributed parallel linear algebra objects. It is one of the base packages in Trilinos. This document describes guidelines for Epetra coding style. The issues discussed here go beyond correct C++ syntax to address issues that make code more readable and self-consistent. The guidelines presented here are intended to aid current and future development of Epetra specifically. They reflect design decisions that were made in the early development stages of Epetra. Some of the guidelines are contrary to more commonly used conventions, but we choose to continue these practices for the purposes of self-consistency. These guidelines are intended to be complementary to policies established in the Trilinos Developers Guide.
On October 22-24, 2003, about 40 experts involved in various aspects of homeland security from the United States and four other Pacific region countries met in Kihei, Hawaii to engage in a free-wheeling discussion and brainstorm (a 'fest') on the role that technology could play in winning the war on terrorism in the Pacific region. The result of this exercise is a concise and relatively thorough definition of the terrorism problem in the Pacific region, emphasizing the issues unique to island nations in the Pacific setting, along with an action plan for developing working demonstrators of advanced technological solutions to these issues. In this approach, the participants were asked to view the problem and their potential solutions from multiple perspectives, and then to identify barriers (especially social and policy barriers) to any proposed technological solution. The final step was to create a roadmap for further action. This roadmap includes plans to: (1) create a conceptual monitoring and tracking system for people and things moving around the region that would be 'scale free', and develop a simple concept demonstrator; (2) pursue the development of a system to improve local terrorism context information, perhaps through the creation of an information clearinghouse for Pacific law enforcement; (3) explore the implementation of a Hawaii-based pilot system to explore hypothetical terrorist scenarios and the development of fusion and analysis tools to work with these data (Sandia); and (4) share information concerning the numerous ongoing activities at various organizations around the understanding and modeling of terrorist behavior.
The goal of this LDRD was to investigate III-antimonide/nitride based materials for unique semiconductor properties and applications. Prior to this study, a lack of basic information concerning these alloys restricted their use in semiconductor devices. Long-wavelength emission on GaAs substrates is of critical importance to telecommunication applications for cost reduction and integration into microsystems. Currently InGaAsN, on a GaAs substrate, is being commercially pursued for the important 1.3 micrometer dispersion minimum of silica-glass optical fiber, due in large part to previous research at Sandia National Laboratories. However, InGaAsN has not shown great promise for 1.55 micrometer emission, which is the low-loss window of the single-mode optical fiber used in transatlantic links. Other important applications for the antimonide/nitride based materials include the base junction of an HBT, to reduce the operating voltage for wireless communication links, and improving the efficiency of multijunction solar cells. We have undertaken the first comprehensive theoretical, experimental, and device study of this material, with promising results. Theoretical modeling has identified GaAsSbN as a similar or potentially superior candidate to InGaAsN for long-wavelength emission on GaAs. We have confirmed these predictions by producing emission out to 1.66 micrometers and have achieved edge-emitting and VCSEL electroluminescence at 1.3 micrometers. We have also performed the first study of the transport properties of this material, including mobility, electron/hole mass, and exciton reduced mass. This study has increased the understanding of the III-antimonide/nitride materials enough to warrant consideration for all of the target device applications.
This report describes the research accomplishments achieved under the LDRD Project 'Radiation Hardened Optoelectronic Components for Space-Based Applications.' The aim of this LDRD has been to investigate the radiation hardness of vertical-cavity surface-emitting lasers (VCSELs) and photodiodes by looking at both the effects of total dose and of single-event upsets on the electrical and optical characteristics of VCSELs and photodiodes. These investigations were intended to provide guidance for the eventual integration of radiation hardened VCSELs and photodiodes with rad-hard driver and receiver electronics from an external vendor for space applications. During this one-year project, we have fabricated GaAs-based VCSELs and photodiodes, investigated ionization-induced transient effects due to high-energy protons, and measured the degradation of performance from both high-energy protons and neutrons.
This one-year feasibility study was aimed at developing finite element modeling capabilities for simulating nano-scale tests. The work focused on methods to model (1) the adhesion of a particle to a substrate and (2) the delamination of a thin film from a substrate. Adhesion was modeled as a normal attractive force that depends on the distance between opposing material surfaces, while delamination simulations used a cohesive zone model. Both of these surface interaction models had been implemented in a beta version of PRESTO, the three-dimensional, transient dynamics finite element code, and the present study verified that implementation. Numerous illustrative calculations have been performed using these models, and where possible comparisons were made with existing solutions. These capabilities are now available in PRESTO version 1.07.
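For illustration only, the sketch below writes down simple closed-form versions of the two kinds of surface-interaction law named above: a distance-dependent attractive traction standing in for particle adhesion, and a bilinear traction-separation (cohesive zone) law standing in for film delamination. The functional forms and parameter values are assumptions, not the laws implemented in PRESTO.

```python
# Sketch only: generic surface-interaction laws with made-up parameter values.
import numpy as np

def adhesion_traction(gap, sigma_max, gap_eq, decay):
    """Attractive normal traction that peaks near contact and decays with gap."""
    return sigma_max * np.exp(-(gap - gap_eq) / decay)

def bilinear_cohesive_traction(delta, delta_0, delta_f, sigma_max):
    """Bilinear cohesive law: linear loading to (delta_0, sigma_max), then
    linear softening to zero traction at the failure opening delta_f."""
    delta = np.asarray(delta, dtype=float)
    rising = sigma_max * delta / delta_0
    softening = sigma_max * (delta_f - delta) / (delta_f - delta_0)
    t = np.where(delta <= delta_0, rising, softening)
    return np.clip(t, 0.0, None)

# Tractions (Pa) at three openings (m) for an illustrative parameter set.
print(bilinear_cohesive_traction([0.1e-6, 0.5e-6, 2.0e-6],
                                 delta_0=0.5e-6, delta_f=2.0e-6, sigma_max=50e6))
```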
All ceramics and powder metals, including the ceramic components that Sandia uses in critical weapons components such as PZT voltage bars and current stacks, multi-layer ceramic METs, alumina-molybdenum cermets, and ZnO varistors, are manufactured by sintering. Sintering is a critical, possibly the most important, processing step during the manufacturing of ceramics. The microstructural evolution, macroscopic shrinkage, and shape distortions during sintering control the engineering performance of the resulting ceramic component. Yet modeling and prediction of sintering behavior is in its infancy, lagging far behind other manufacturing models, such as powder synthesis and powder compaction models, and behind models that predict engineering properties and reliability. In this project, we developed a model capable of simulating microstructural evolution during sintering and providing constitutive equations for macroscale simulation of shrinkage and distortion, and we developed a macroscale sintering simulation capability in JAS3D. The mesoscale model can simulate microstructural evolution in a complex powder compact of hundreds or even thousands of particles of arbitrary shape and size by (1) curvature-driven grain growth, (2) pore migration and coalescence by surface diffusion, and (3) vacancy formation, grain boundary diffusion, and annihilation. This model was validated by comparing simulation predictions to analytical predictions for simple geometries. The model was then used to simulate sintering in complex powder compacts. Sintering stresses and material viscous moduli were obtained from the simulations. These constitutive equations were then used in macroscopic FEM simulations of shrinkage and shape change. The continuum theory of sintering embodied in the constitutive description of Skorohod and Olevsky was combined with results from the microstructure evolution simulations to model shrinkage and deformation during sintering. The continuum portion is based on a finite element formulation that allows 3D components to be modeled using SNL's nonlinear large-deformation finite element code, JAS3D. This tool provides a capability to model sintering of complex three-dimensional components. The model was verified by comparison to simulation results published in the literature and was validated using experimental results from various laboratory experiments performed by Garino. In addition, the mesoscale simulations were used to study anisotropic shrinkage in aligned, elongated powder compacts. Anisotropic shrinkage occurred in all compacts with aligned, elongated particles; however, the direction of higher shrinkage was in some cases along the direction of elongation and in other cases perpendicular to it, depending on the details of the powder compact. In compacts of simple-packed, mono-sized, elongated particles, shrinkage was higher in the direction of elongation. In compacts of close-packed, mono-sized, elongated particles, and of elongated particles with a size and shape distribution, shrinkage was lower in the direction of elongation. We also explored the concept of a sintering stress tensor, rather than the traditional scalar sintering stress, for the case of anisotropic shrinkage; a thermodynamic treatment and a method to calculate the sintering stress tensor are presented. A user-friendly code that can simulate microstructural evolution during sintering in 2D and in 3D was developed.
This code can run on most UNIX platforms and has a Motif-based GUI. The microstructural evolution is displayed as the code runs, and many of the microstructural features, such as grain size, pore size, and the average grain boundary length (in 2D) or area (in 3D), are measured and recorded as a function of time. The overall density as a function of time is also recorded.
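To give a flavor of the mesoscale approach, the sketch below runs a few Monte Carlo sweeps of a Potts-type model in which sites copy neighboring grain identities under a Metropolis acceptance rule, a standard way of realizing curvature-driven grain growth on a grid. It captures only the grain-growth ingredient listed above (no pores or vacancies), and the lattice size, number of grain IDs, and temperature are arbitrary.

```python
# Illustrative Potts-model grain growth on a 2-D periodic grid; not the Sandia code.
import numpy as np

rng = np.random.default_rng(0)
N, Q = 64, 32                              # lattice size and number of grain IDs
spins = rng.integers(0, Q, size=(N, N))    # initial random microstructure
neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def site_energy(spins, i, j, s):
    """Count unlike nearest neighbors; boundary energy is proportional to this."""
    return sum(1 for di, dj in neighbors if spins[(i + di) % N, (j + dj) % N] != s)

def mc_sweep(spins, kT=0.5):
    """One Monte Carlo sweep: flip a site to a neighbor's grain ID with Metropolis
    probability, which on average reduces boundary curvature (grain growth)."""
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        di, dj = neighbors[rng.integers(4)]
        new = spins[(i + di) % N, (j + dj) % N]       # copy a neighboring grain ID
        dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            spins[i, j] = new
    return spins

for sweep in range(10):
    mc_sweep(spins)
print("grain IDs remaining:", len(np.unique(spins)))
```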
The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy, to enhance our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. The project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.
In this paper, the effect of viscous wave motion on a micro rotational resonator is discussed. This work shows the inadequacy of existing theory for representing energy losses due to shear motion in air. Existing theory predicts Newtonian losses with little slip at the interface; experiments, however, showed a smaller Newtonian-loss effect and elevated levels of slip for small gaps. Measured values of damping were much less than expected. Novel closed-form solutions for the response of the components are presented. The stiffness of the resonator is derived using Castigliano's theorem, and the viscous fluid motion above and below the resonator is derived using a wave approach. Analytical results are compared with experimental results to determine the utility of existing theory. It was found that existing macroscale and molecular theory is inadequate to describe the measured responses.
This report summarizes the Mentoring Program at Sandia National Laboratories (SNL), which has been an on-going success since its inception in 1995. The Mentoring Program provides a mechanism to develop a workforce able to respond to changing requirements and complex customer needs. The program objectives are to enhance employee contributions through increased knowledge of SNL culture, strategies, and programmatic direction. Mentoring is a proven mechanism for attracting new employees, retaining employees, and developing leadership. It helps to prevent the loss of corporate knowledge from attrition and retirement, and it increases the rate and level of contributions of new managers and employees, also spurring cross-organizational teaming. The Mentoring Program is structured as a one-year partnership between an experienced staff member or leader and a less experienced one. Mentors and mentees are paired according to mutual objectives and interests. Support is provided to the matched pairs from their management as well as division program coordinators in both New Mexico and California locations. In addition, bi-monthly large-group training sessions are held.
Large-scale finite element analysis often requires the iterative solution of equations with many unknowns. Preconditioners based on domain decomposition concepts have proven effective at accelerating the convergence of iterative methods like conjugate gradients for such problems. A study of two new domain decomposition preconditioners is presented here. The first is based on a substructuring approach and can be viewed as a primal counterpart of FETI-DP, the dual-primal variant of the finite element tearing and interconnecting method. The second uses an algebraic approach to construct a coarse problem for a classic overlapping Schwarz method. The numerical properties of both preconditioners are shown to scale well with problem size. Although developed primarily for structural mechanics applications, the preconditioners are also useful for other problem types. Detailed descriptions of the two preconditioners along with numerical results are included.
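For readers unfamiliar with the overlapping Schwarz idea behind the second preconditioner, the sketch below applies a one-level additive Schwarz preconditioner (local subdomain solves summed together) inside a conjugate gradient iteration. It is a minimal illustration on a 1-D Laplacian with an ad hoc subdomain partition, and it omits the coarse problem that the paper's algebraic construction provides.

```python
# One-level additive Schwarz preconditioner applied within CG; illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # 1-D Laplacian
b = np.ones(n)

# Overlapping index blocks covering 0..n-1 (arbitrary partition for illustration).
blocks = [np.arange(max(0, s - 5), min(n, s + 55)) for s in range(0, n, 50)]
local_solvers = [spla.splu(A[idx, :][:, idx].tocsc()) for idx in blocks]

def schwarz_apply(r):
    """Additive Schwarz: z = sum_i R_i^T (A_i)^{-1} R_i r."""
    z = np.zeros_like(r)
    for idx, solver in zip(blocks, local_solvers):
        z[idx] += solver.solve(r[idx])
    return z

M = spla.LinearOperator(A.shape, matvec=schwarz_apply, dtype=float)
x, info = spla.cg(A, b, M=M)
print("CG converged" if info == 0 else f"info={info}")
```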
Mobile wireless ad hoc networks that are resistant to adversarial manipulation are necessary for distributed systems used in military and security applications. Critical to the successful operation of these networks, which operate in the presence of adversarial stressors, are robust and efficient information assurance methods. In this report we describe necessary enhancements for a distributed certificate authority (CA) used in secure wireless network architectures. The cryptographic algorithms needed in distributed CAs are described, and implementation enhancements of these algorithms for mobile wireless ad hoc networks are developed. The enhancements support a network's ability to detect compromised nodes and facilitate distributed CA services. We provide insight into the impact the enhancements will have on network performance with timing diagrams and preliminary network simulation studies.
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, wide area network links to permit remote access to their supercomputer systems. The standard TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report evaluates alternative TCP congestion algorithms and compares them with the currently used algorithm. The goal was to determine whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms evaluated were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations: back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
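As background on how such an experiment can be instrumented on a Linux sender, the hedged sketch below selects a congestion-control algorithm per connection through the TCP_CONGESTION socket option; module names such as "highspeed" or "scalable" must be available in the kernel, and none of this is claimed to be the setup actually used in the lab tests.

```python
# Hedged sketch: per-socket congestion-control selection on Linux.
import socket

# Fall back to the Linux numeric constant if this Python build lacks the symbol.
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

def connect_with_cc(host, port, algorithm=b"highspeed"):
    """Open a TCP connection that requests a specific congestion-control algorithm."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, algorithm)
    s.connect((host, port))
    # Read back what the kernel actually applied.
    print("using:", s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16))
    return s
```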
As part of the Testing Evaluation and Qualification Project, which was contracted by Organization 9336, this paper compares three cubicle-class switches from various vendors to assess how well they would perform in the unclassified networks at Sandia National Laboratories. The switches tested were the SMC TigerSwitch 6709L2, the Cisco Catalyst 2950G-12, and the Extreme Summit 5i. Each switch was evaluated by testing performance, functionality, interoperability, security, and total cost of ownership. The results of this report show the SMC TigerSwitch as being the best choice for cubicle use because of its high performance and very low cost. The Cisco Catalyst is also rated highly for cubicle use and in some cases may be preferred over the SMC TigerSwitch. The Extreme Summit 5i is not recommended for cubicle use due to its size and extremely loud fans but is a full featured, high performance switch that would work very well for access layer switching.
The Unique Signal is a key constituent of Enhanced Nuclear Detonation Safety (ENDS). Although the Unique Signal approach is well prescribed and mathematically assured, there are numerous unsolved mathematical problems that could help assess the risk of deviations from the ideal approach. Some of the mathematics-based results shown in this report are: 1. The risk that two patterns with poor characteristics (easily generated by inadvertent processes) could be combined through exclusive-or mixing to generate an actual Unique Signal pattern has been investigated and found to be minimal (not significant when compared to the incompatibility metric of actual Unique Signal patterns used in nuclear weapons). 2. The risk of generating actual Unique Signal patterns with linear feedback shift registers is minimal, but the patterns in use are not as invulnerable to inadvertent generation by dependent processes as previously thought. 3. New methods of testing pair-wise incompatibility threats have resulted in no significant problems found for the set of Unique Signal patterns currently used. Any new patterns introduced would have to be carefully assessed for compatibility with existing patterns, since some new patterns under consideration were found to be deficient when associated with other patterns in use. 4. Markov models were shown to correspond to some of the engineered properties of Unique Signal sequences. This gives new support for the original design objectives. 5. Potential dependence among events (caused by a variety of communication protocols) has been studied. New evidence has been derived of the risk associated with combined communication of multiple events, and of the improvement in abnormal-environment safety that can be achieved through separate-event communication.
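Point 2 above concerns linear feedback shift registers. For readers who want a concrete picture of the kind of simple deterministic generator being assessed, the sketch below implements a generic Fibonacci LFSR; the register length and tap positions are arbitrary and bear no relation to any Unique Signal pattern.

```python
# Illustrative only: a generic Fibonacci linear feedback shift register (LFSR).
def lfsr_stream(seed, taps, nbits, length):
    """Yield `length` output bits from an LFSR with the given seed and tap positions."""
    state = seed & ((1 << nbits) - 1)
    for _ in range(length):
        yield state & 1
        feedback = 0
        for t in taps:                    # XOR the tapped bit positions
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))

# Taps (0, 1) on a 4-bit register correspond to x^4 + x + 1, a maximal-length
# generator whose output repeats with period 15.
bits = list(lfsr_stream(seed=0b1011, taps=(0, 1), nbits=4, length=15))
print(bits)
```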
To model the telecommunications infrastructure and its role and robustness to shocks, we must characterize the business and engineering of telecommunications systems in the year 2003 and beyond. By analogy to environmental systems modeling, we seek to develop a 'conceptual model' for telecommunications. Here, the conceptual model is a list of high-level assumptions consistent with the economic and engineering architectures of telecommunications suppliers and customers, both today and in the near future. We describe the present engineering architectures of the most popular service offerings, and describe the supplier markets in some detail. We also develop a characterization of the customer base for telecommunications services and project its likely response to disruptions in service, base-lining such conjectures against observed behaviors during 9/11.
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines two classes of all optical logic (SEED, gain competition) and how each discrete logic element can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of the SEED and gain competition devices in an optical circuit were modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in context of a logic element as opposed to device level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model of the SEED or gain competition device takes certain parameters (reflectance, intensity, input response), and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. We found that a low gate count, cascadable encryption algorithm is most feasible given device and processing constraints. The modeling and simulation of optical designs using these components is proceeding in parallel with efforts to perfect the physical devices and their interconnect. We have applied these techniques to the development of a 'toy' algorithm that may pave the way for more robust optical algorithms. These design/modeling/simulation techniques are now ready to be applied to larger optical designs in advance of our ability to implement such systems in hardware.
Prokaryotic single-cell microbes are the simplest of all self-sufficient living organisms. Yet microbes create and use much of the molecular machinery present in more complex organisms, and the macro-molecules in microbial cells interact in regulatory, metabolic, and signaling pathways that are prototypical of the reaction networks present in all cells. We have developed a simple simulation model of a prokaryotic cell that treats proteins, protein complexes, and other organic molecules as particles which diffuse via Brownian motion and react with nearby particles in accord with chemical rate equations. The code models protein motion and chemistry within an idealized cellular geometry. It has been used to simulate several simple reaction networks and compared to more idealized models which do not include spatial effects. In this report we describe an initial version of the simulation code that was developed with FY03 funding. We discuss the motivation for the model, highlight its underlying equations, and describe simulations of a 3-stage kinase cascade and a portion of the carbon fixation pathway in the Synechococcus microbe.
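The particle-based scheme described above can be caricatured in a few lines: molecules take Gaussian Brownian steps, and a bimolecular reaction fires when two species diffuse within a capture radius. The sketch below is such a caricature with placeholder parameters, not the Sandia simulation code.

```python
# Toy Brownian-dynamics reaction sketch with placeholder parameters.
import numpy as np

rng = np.random.default_rng(1)
box, dt, D = 1.0, 1e-3, 0.01          # box size, time step, diffusion coefficient
radius, p_react = 0.02, 0.5           # capture radius and per-encounter probability

A = rng.uniform(0, box, size=(200, 3))   # "substrate" molecules
B = rng.uniform(0, box, size=(200, 3))   # "enzyme" molecules

for step in range(500):
    # Brownian displacement with variance 2*D*dt per coordinate, periodic box.
    A = (A + rng.normal(0, np.sqrt(2 * D * dt), A.shape)) % box
    B = (B + rng.normal(0, np.sqrt(2 * D * dt), B.shape)) % box
    # Consume any A that lands within the capture radius of some B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    hit = (d < radius).any(axis=1) & (rng.random(len(A)) < p_react)
    A = A[~hit]

print("substrate molecules remaining:", len(A))
```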
Algorithms for higher order accuracy modeling of kinematic behavior within the ALEGRA framework are presented. These techniques improve the behavior of the code when kinematic errors are found, ensure orthonormality of the rotation tensor at each time step, and increase the accuracy of the Lagrangian stretch and rotation tensor update algorithm. The implementation of these improvements in ALEGRA is described. A short discussion of issues related to improving the accuracy of the stress update procedures is also included.
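One of the items above is ensuring orthonormality of the rotation tensor at each time step. As a hedged illustration of the general idea (not necessarily the algorithm implemented in ALEGRA), the sketch below projects a drifted rotation matrix back onto the nearest proper rotation using the singular value decomposition.

```python
# Restore exact orthonormality of a nearly-orthogonal rotation tensor via SVD.
import numpy as np

def nearest_rotation(R):
    """Project a nearly-orthogonal 3x3 matrix onto the closest proper rotation
    (in the Frobenius norm) using the singular value decomposition."""
    U, _, Vt = np.linalg.svd(R)
    Q = U @ Vt
    if np.linalg.det(Q) < 0:        # guard against an improper reflection
        U[:, -1] *= -1
        Q = U @ Vt
    return Q

# A rotation polluted by accumulated round-off drift:
R_drifted = np.array([[0.9999, -0.0101, 0.0],
                      [0.0099,  1.0002, 0.0],
                      [0.0,     0.0,    1.0001]])
R_clean = nearest_rotation(R_drifted)
print(np.allclose(R_clean @ R_clean.T, np.eye(3)))   # True
```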
A Simple Removable Epoxy Foam (SREF) decomposition chemistry model has been developed to predict the decomposition behavior of an epoxy foam encapsulant exposed to high temperatures. The foam is composed of an epoxy polymer, blowing agent, and surfactant. The model is based on a simple four-step mass loss model using distributed Arrhenius reaction rates. A single reaction was used to describe desorption of the blowing agent and surfactant (BAS). Three of the reactions were used to describe degradation of the polymer. The coordination number of the polymeric lattice was determined from the chemical structure of the polymer, and a lattice statistics model was used to describe the evolution of polymer fragments. The model lattice was composed of sites connected by octamethylcyclotetrasiloxane (OS) bridges, mixed product (MP) bridges, and bisphenol-A (BPA) bridges. The mixed products were treated as a single species, but are likely composed of phenols, cresols, and furan-type products. Eleven species are considered in the SREF model - (1) BAS, (2) OS, (3) MP, (4) BPA, (5) 2-mers, (6) 3-mers, (7) 4-mers, (8) nonvolatile carbon residue, (9) nonvolatile OS residue, (10) L-mers, and (11) XL-mers. The first seven of these species (VLE species) can either be in the condensed-phase or gas-phase as determined by a vapor-liquid equilibrium model based on the Rachford-Rice equation. The last four species always remain in the condensed-phase. The 2-mers, 3-mers, and 4-mers are polymer fragments that contain two, three, or four sites, respectively. The residue can contain C, H, N, O, and/or Si. The L-mer fraction consists of polymer fragments that contain at least five sites (5-mer) up to a user defined maximum mer size. The XL-mer fraction consists of polymer fragments greater than the user specified maximum mer size and can contain the infinite lattice if the bridge population is less than the critical bridge population. Model predictions are compared to 133 thermogravimetric analysis (TGA) experiments performed at 24 different conditions. The average RMS error between the model and the 133 experiments was 4.25%. The model was also used to predict the response of two other removable epoxy foams with different compositions as well as the pressure rise in a constant volume hot cell.
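The vapor-liquid split mentioned above rests on the Rachford-Rice equation. As a generic illustration (with placeholder compositions and K-values, not the SREF model's fitted quantities), the sketch below solves that equation for the vapor fraction and the phase compositions.

```python
# Generic Rachford-Rice flash calculation; placeholder inputs only.
import numpy as np
from scipy.optimize import brentq

def rachford_rice(z, K):
    """Solve sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0 for the vapor fraction V,
    then recover the liquid (x) and vapor (y) mole fractions."""
    z, K = np.asarray(z, float), np.asarray(K, float)
    f = lambda V: np.sum(z * (K - 1.0) / (1.0 + V * (K - 1.0)))
    # A root in (0, 1) exists when the mixture actually splits into two phases.
    V = brentq(f, 1e-12, 1.0 - 1e-12)
    x = z / (1.0 + V * (K - 1.0))      # condensed-phase mole fractions
    y = K * x                          # gas-phase mole fractions
    return V, x, y

V, x, y = rachford_rice(z=[0.3, 0.4, 0.3], K=[4.0, 1.2, 0.3])
print(f"vapor fraction = {V:.3f}")
```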
Recyclable transmission lines (RTLs) are being studied as a means to repetitively drive z pinches to generate fusion energy. We have shown previously that the RTL mass can be quite modest. Minimizing the RTL mass reduces recycling costs and the impulse delivered to the first wall of a fusion chamber. Despite this reduction in mass, a few seconds will be needed to reload an RTL after each shot, in contrast to other inertial fusion approaches that expect to fire up to ten capsules per second. Thus a larger fusion yield is needed to compensate for the slower repetition rate in a z-pinch driven fusion reactor. We present preliminary designs of z-pinch driven fusion capsules that provide an adequate yield of 1-4 GJ. We also present numerical simulations of the effect of these fairly large fusion yields on the RTL and the first wall of the reactor chamber. These simulations were performed with and without a neutron-absorbing blanket surrounding the fusion explosion. We find that the RTL will be fully vaporized out to a radius of about 3 meters, assuming normal incidence. However, at large enough radius the RTL will remain in either the liquid or solid state, and this portion of the RTL could fragment and become shrapnel. We show that a dynamic fragmentation theory can be used to estimate the size of these fragmented particles. We discuss how proper design of the RTL can allow this shrapnel to be directed away from the sensitive mechanical parts of the reactor chamber.
A comprehensive settlement of the North Korean nuclear issue may involve military, economic, political, and diplomatic components, many of which will require verification to ensure reciprocal implementation. This paper sets out potential verification methodologies that might address a wide range of objectives. The inspection requirements set by the International Atomic Energy Agency form the foundation, first as defined at the time of the Agreed Framework in 1994, and now as modified by the events since revelation of the North Korean uranium enrichment program in October 2002. In addition, refreezing the reprocessing facility and 5 MWe reactor, taking possession of possible weapons components and destroying weaponization capabilities add many new verification tasks. The paper also considers several measures for the short-term freezing of the North's nuclear weapon program during the process of negotiations, should that process be protracted. New inspection technologies and monitoring tools are applicable to North Korean facilities and may offer improved approaches over those envisioned just a few years ago. These are noted, and potential bilateral and regional verification regimes are examined.
The distributed data problem is characterized by the desire to bring together semantically related data from syntactically unrelated portions of a term. Two strategic combinators, dynamic and transient, are introduced in the context of a classical strategic programming framework. The impact of the resulting system on instances of the distributed data problem is then explored.
Artificially structured photonic lattice materials are commonly investigated for their unique ability to block and guide light. However, an exciting aspect of photonic lattices which has received relatively little attention is the extremely high refractive index dispersion within the range of frequencies capable of propagating within the photonic lattice material. In fact, it has been proposed that a negative refractive index may be realized with the correct photonic lattice configuration. This report summarizes our investigation, both numerically and experimentally, into the design and performance of such photonic lattice materials intended to optimize the dispersion of refractive index in order to realize new classes of photonic devices.
Chemical synthesis methods are being developed as a future source of PZT 95/5 powder for neutron generator voltage bar applications. Laboratory-scale powder processes were established to produce PZT billets from these powders. The interactions between calcining temperature, sintering temperature, and pore former content were studied to identify the conditions necessary to produce PZT billets of the desired density and grain size. Several binder systems and pressing aids were evaluated for producing uniform sintered billets with low open porosity. The development of these processes supported the powder synthesis efforts and enabled comparisons between different chem-prep routes.
In this work we demonstrated the fabrication of two different classes of devices that integrate simple MEMS structures with photonic structures. In the first class of devices, a suspended, movable Si waveguide was designed and fabricated. This waveguide was designed so that it could be actuated and brought into close proximity to a ring resonator or similar structure. In the course of this work we also devised a technique to improve the input coupling to the waveguide. While these structures were successfully fabricated, post-fabrication handling and testing involved a significant amount of manipulation of the devices, and because of their relatively flimsy nature our structures could not readily survive this extra handling. As a result we redesigned our devices so that, instead of moving the waveguides themselves, we moved a much smaller optical element into close proximity to the waveguides. Using this approach it was also possible to fabricate a much larger array of actively switched photonic devices: switches, ring resonators, couplers (which act as switches or splitters), and attenuators. We successfully fabricated all of these structures and were able to demonstrate splitters, switches, and attenuators. The quality of the SiN waveguides fabricated in this work was found to be qualitatively comparable to that of waveguides made using semiconductor materials.
Solidification and blood flow seemingly have little in common, but each involves a fluid in contact with a deformable solid. In these systems, the solid-fluid interface moves as the solid advects and deforms, often traversing the entire domain of interest. Currently, these problems cannot be simulated without innumerable expensive remeshing steps, mesh manipulations or decoupling the solid and fluid motion. Despite the wealth of progress recently made in mechanics modeling, this glaring inadequacy persists. We propose a new technique that tracks the interface implicitly and circumvents the need for remeshing and remapping the solution onto the new mesh. The solid-fluid boundary is tracked with a level set algorithm that changes the equation type dynamically depending on the phases present. This novel approach to coupled mechanics problems promises to give accurate stresses, displacements and velocities in both phases, simultaneously.
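To make the level set idea concrete, the toy sketch below advects a signed-distance function for a circular interface with a first-order upwind scheme under a prescribed uniform velocity. It illustrates only the implicit interface tracking; the dynamic equation-type switching and the coupled solid/fluid mechanics proposed above are not represented.

```python
# Toy level-set advection with a first-order upwind step; illustrative only.
import numpy as np

n, L = 128, 1.0
x, y = np.meshgrid(np.linspace(0, L, n), np.linspace(0, L, n), indexing="ij")
phi = np.sqrt((x - 0.3) ** 2 + (y - 0.5) ** 2) - 0.15   # circle: phi = 0 on the interface
u, v = np.full_like(phi, 0.5), np.zeros_like(phi)       # uniform rightward velocity
dx = L / (n - 1)
dt = 0.5 * dx / 0.5                                      # CFL-limited time step

def upwind_step(phi, u, v, dx, dt):
    """One explicit upwind step of d(phi)/dt + u . grad(phi) = 0 (periodic box)."""
    dpx_m = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
    dpx_p = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
    dpy_m = (phi - np.roll(phi, 1, axis=1)) / dx
    dpy_p = (np.roll(phi, -1, axis=1) - phi) / dx
    dphix = np.where(u > 0, dpx_m, dpx_p)
    dphiy = np.where(v > 0, dpy_m, dpy_p)
    return phi - dt * (u * dphix + v * dphiy)

for _ in range(100):
    phi = upwind_step(phi, u, v, dx, dt)

i, j = np.unravel_index(phi.argmin(), phi.shape)
print("circle centre has advected to x ~ %.2f" % x[i, j])
```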
High throughput instruments and analysis techniques are required in order to make good use of the genomic sequences that have recently become available for many species, including humans. These instruments and methods must work with tens of thousands of genes simultaneously, and must be able to identify the small subsets of those genes that are implicated in the observed phenotypes, or, for instance, in responses to therapies. Microarrays represent one such high throughput method, which continue to find increasingly broad application. This project has improved microarray technology in several important areas. First, we developed the hyperspectral scanner, which has discovered and diagnosed numerous flaws in techniques broadly employed by microarray researchers. Second, we used a series of statistically designed experiments to identify and correct errors in our microarray data to dramatically improve the accuracy, precision, and repeatability of the microarray gene expression data. Third, our research developed new informatics techniques to identify genes with significantly different expression levels. Finally, natural language processing techniques were applied to improve our ability to make use of online literature annotating the important genes. In combination, this research has improved the reliability and precision of laboratory methods and instruments, while also enabling substantially faster analysis and discovery.
The pump and actuator systems designed and built in the SUMMiT{trademark} process, Sandia's surface micromachining polysilicon MEMS (Micro-Electro-Mechanical Systems) fabrication technology, on the previous campus executive program LDRD (SAND2002-0704P) with FSU/FAMU (Florida State University/Florida Agricultural and Mechanical University) were characterized in this LDRD. These results demonstrated that the device would pump liquid against the flow resistance of a microfabricated channel, but the devices were determined to be underpowered for reliable pumping. As a result a new set of SUMMiT{trademark} pumps with actuators that generate greater torque will be designed and submitted for fabrication. In this document we will report details of dry actuator/pump assembly testing, wet actuator/pump testing, channel resistance characterization, and new pump/actuator design recommendations.
In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit, or qubit, has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting, cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with the surrounding environment, called quantum error correction, is emerging. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to cracking conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator,' similar to a 'floating point arithmetic accelerator,' interfaced to a conventional von Neumann computer architecture. This research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information and with getting data into and out of such a 'quantum computing accelerator.' We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by an understanding of the practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between organizations 9000, 6000, and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be proposed as a result of this work.
Present methods of air sampling for low concentrations of chemicals like explosives and bioagents involve noisy, power-hungry collectors with mechanical parts for moving large volumes of air. However, there are biological systems that are capable of detecting very low concentrations of molecules with no mechanical moving parts. An example is the silkworm moth antenna, a highly branched structure in which each of about 100 branches carries about 200 sensory 'hairs' with dimensions of 2 microns wide by 100 microns long. The hairs contain about 3000 pores, which are where the gas-phase molecules enter the aqueous (lymph) phase for detection. Simulations of molecular diffusion indicate that this 'forest' of hairs is 'designed' to maximize the extraction of the vapor-phase molecules. Since a typical molecule loses about 4 decades in diffusion constant upon entering the liquid phase, it is important to allow air diffusion to bring the molecule as close to the 'sensor' as possible. The moth acts on concentrations as low as 1000 molecules per cubic centimeter (about one part in 10{sup 16}). A 3-D collection system of these dimensions could be fabricated by micromachining techniques available at Sandia. This LDRD addresses the issues involved with extracting molecules from air onto micromachined structures and then delivering those molecules to microsensors for detection.
Particle image velocimetry data have been acquired in the far field of the interaction generated by an overexpanded axisymmetric supersonic jet exhausting transversely from a flat plate into a subsonic compressible crossflow. Mean velocity fields were found in the streamwise plane along the flowfield centerline for different values of the crossflow Mach number M{sub {infinity}} and the jet-to-freestream dynamic pressure ratio J. The magnitude of the streamwise velocity deficit and the vertical velocity component both decay with downstream distance and were observed to be greater for larger J while M{sub {infinity}} remained constant. Jet trajectories derived independently using the maxima of each of these two velocity components are not identical, but show increasing jet penetration for larger J. Similarity in the normalized velocity field was found for constant J at two different transonic M{sub {infinity}}, but at two lower M{sub {infinity}} the jet appeared to interact with the wall boundary layer and data did not collapse. The magnitude and width of the peak in the vertical velocity component both increase with J, suggesting that the strength and size of the counter-rotating vortex pair increase and, thus, may have a stronger influence on aerodynamic surfaces despite further jet penetration from the wall.
The sea presents unique possibilities for implementing confidence building measures (CBMs) between India and Pakistan that are currently not available along the contentious land borders surrounding Jammu and Kashmir. This is due to the nature of maritime issues, the common military culture of naval forces, and a less contentious history of maritime interaction between the two nations. Maritime issues of mutual concern provide a strong foundation for more far-reaching future CBMs on land, while addressing pressing security, economic, and humanitarian needs at sea in the near-term. Although Indian and Pakistani maritime forces currently have stronger opportunities to cooperate with one another than their counterparts on land, reliable mechanisms to alleviate tension or promote operational coordination remain non-existent. Therefore, possible maritime CBMs, as well as pragmatic mechanisms to initiate and sustain cooperation, require serious examination. This report reflects the unique joint research undertaking of two retired Senior Naval Officers from both India and Pakistan, sponsored by the Cooperative Monitoring Center of the International Security Center at Sandia National Laboratories. Research focuses on technology as a valuable tool to facilitate confidence building between states having a low level of initial trust. Technical CBMs not only increase transparency, but also provide standardized, scientific means of interacting on politically difficult problems. Admirals Vohra and Ansari introduce technology as a mechanism to facilitate consistent forms of cooperation and initiate discussion in the maritime realm. They present technical CBMs capable of being acted upon as well as high-level political recommendations regarding the following issues: (1) Delimitation of the maritime boundary between India and Pakistan and its relationship to the Sir Creek dispute; (2) Restoration of full shipping links and the security of ports and cargos; (3) Fishing within disputed areas and resolution of issues relating to arrest and repatriation of fishermen from both sides; and (4) Naval and maritime agency interaction and possibilities for cooperation.
The DIII-D research program is developing the scientific basis for advanced tokamak (AT) modes of operation in order to enhance the attractiveness of the tokamak as an energy producing system. Since the last international atomic energy agency (IAEA) meeting, we have made significant progress in developing the building blocks needed for AT operation: (1) we have doubled the magnetohydrodynamic (MHD) stable tokamak operating space through rotational stabilization of the resistive wall mode; (2) using this rotational stabilization, we have achieved {beta}{sub N}H{sub 89} {ge} 10 for 4{tau}{sub E} limited by the neoclassical tearing mode (NTM); (3) using real-time feedback of the electron cyclotron current drive (ECCD) location, we have stabilized the (m, n) = (3, 2) NTM and then increased {beta}{sub T} by 60%; (4) we have produced ECCD stabilization of the (2, 1) NTM in initial experiments; (5) we have made the first integrated AT demonstration discharges with current profile control using ECCD; (6) ECCD and electron cyclotron heating (ECH) have been used to control the pressure profile in high performance plasmas; and (7) we have demonstrated stationary tokamak operation for 6.5 s (36{tau}{sub E}) at the same fusion gain parameter of {beta}{sub N}H{sub 89}/q{sub 95}{sup 2} {approx_equal} as ITER but at much higher q{sub 95} = 4.2. We have developed general improvements applicable to conventional and AT operating modes: (1) we have an existence proof of a mode of tokamak operation, quiescent H-mode, which has no pulsed, edge localized modes (ELM) heat load to the divertor and which can run for long periods of time (3.8 s or 25{tau}{sub E}) with constant density and constant radiated power; (2) we have demonstrated real-time disruption detection and mitigation for vertical disruption events using high pressure gas jet injection of noble gases; (3) we have found that the heat and particle fluxes to the inner strike points of balanced, double-null divertors are much smaller than to the outer strike points. We have made detailed investigations of the edge pedestal and scrape-off layer (SOL): (1) atomic physics and plasma physics both play significant roles in setting the width of the edge density barrier in H-mode; (2) ELM heat flux conducted to the divertor decreases as density increases; (3) intermittent, bursty transport contributes to cross field particle transport in the SOL of H-mode and, especially, L-mode plasmas.
Buried landmines are often detected through their chemical signature in the thin air layer, or boundary layer, right above the soil surface by sensors or animals. Environmental processes play a significant role in the available chemical signature. Due to the shallow burial depth of landmines, the weather also influences the release of chemicals from the landmine, transport through the soil to the surface, and degradation processes in the soil. The effect of weather on the landmine chemical signature from a PMN landmine was evaluated with the T2TNT code for three different climates: Kabul, Afghanistan, Ft. Leonard Wood, Missouri, USA, and Napacala, Mozambique. Results for TNT gas-phase and solid-phase concentrations are presented as a function of time of the year.
A review is given of recent progress in three-dimensional (3D) all-metallic photonic crystals at near- and mid-infrared wavelengths. Results of optical spectroscopy of the samples will be described. Unique narrow-band light emission characteristics of the photonic crystal will also be presented. This new class of 3D all-metallic photonic crystal is promising for thermal photovoltaic power generation and for lighting applications.
Various tools and techniques leveraged from the IC industry were used for the failure analysis and qualification of MEMS. Resistive contrast imaging (RCI) was employed to analyze a wide variety of MEMS technologies. Multi-functional analytical tools can examine several samples in parallel and extract structural, chemical, and electrical information.
MEMS processes and components are rapidly changing in device design, processing, and, most importantly, application. This paper will discuss the future challenges facing MEMS failure analysis as the field of MEMS (fabrication, component design, and applications) grows. Specific areas of concern for the failure analyst will also be discussed.
Microelectromechanical Systems (MEMS) have gained acceptance as viable products for many commercial and government applications. MEMS are currently being used as displays for digital projection systems, sensors for airbag deployment systems, inkjet print head systems, and optical routers. This paper will discuss current and future MEMS applications.
MEMS components, by their very nature, have failure mechanisms that differ from those of their macroscopic counterparts. This paper discusses failure mechanisms observed in various MEMS components and technologies. MEMS devices fabricated using bulk and surface micromachining process technologies are emphasized.
This report summarizes the accomplishments of the Laboratory Directed Research and Development (LDRD) project 26546 at Sandia, during the period FY01 through FY03. The project team visited four DoD depots that support extensive aircraft maintenance in order to understand critical needs for automation, and to identify maintenance processes for potential automation or integration opportunities. From the visits, the team identified technology needs and application issues, as well as non-technical drivers that influence the application of automation in depot maintenance of aircraft. Software tools for automation facility design analysis were developed, improved, extended, and integrated to encompass greater breadth for eventual application as a generalized design tool. The design tools for automated path planning and path generation have been enhanced to incorporate those complex robot systems with redundant joint configurations, which are likely candidate designs for a complex aircraft maintenance facility. A prototype force-controlled actively compliant end-effector was designed and developed based on a parallel kinematic mechanism design. This device was developed for demonstration of surface finishing, one of many in-contact operations performed during aircraft maintenance. This end-effector tool was positioned along the workpiece by a robot manipulator, programmed for operation by the automated planning tools integrated for this project. Together, the hardware and software tools demonstrate many of the technologies required for flexible automation in a maintenance facility.
The Passive-legged, Multi-segmented Robotic Vehicle concept is a simple legged vehicle that is modular and scalable, and can be sized to fit through confined openings only slightly larger than the vehicle itself. A specific goal of this project was to be able to fit through an opening in the fabric of a chain-link fence. This terrain-agile robotic platform is composed of multiple segments, each equipped with appendages (legs) that resemble oars extending from a boat. Motion is achieved by pushing with these legs, which can also flex to fold next to the body when passing through a constricted area. Each segment is attached to the next using an actuated joint, and this joint represents the only actuation required for mobility. The major feature of this type of mobility is that the terrain-agility advantage of legs can be attained without the complexity of the multiple actuators normally required for the many joints of an active leg. The minimum number of segments is two, but some concepts require three or more segments. This report discusses several concepts for achieving this type of mobility, their design, and the results obtained for each.
This report describes the technical work carried out under a 2003 Laboratory Directed Research and Development project to develop a covert air vehicle. A mesoscale air vehicle that mimics a bird offers exceptional mobility and the possibility of remaining undetected during flight. Although some such vehicles exist, they are lacking in key areas: unassisted landing and launching, true mimicry of bird flight to remain covert, and a flapping flight time of any real duration. Current mainstream technology does not have the energy or power density necessary to achieve bird-like flight for any meaningful length of time; however, Sandia has unique combustion-powered linear actuators with the unprecedented energy and power density needed for bird-like flight. The small-scale, high-pressure valves and small-scale ignition needed to make this work have been developed at Sandia. We will study the feasibility of using these actuators to achieve vehicle takeoff and wing flapping for sustained flight. This type of vehicle has broad applications for reconnaissance and communications networks, and could prove invaluable for military and intelligence operations throughout the world. Initial tests were conducted on scaled versions of the combustion-powered linear actuator. The test results showed that heat transfer and friction effects dominate the combustion process at 'bird-like' sizes. The problems associated with micro-combustion must be solved before a true bird-like ornithopter can be developed.
Polyoxometalates (POMs) are ionic (usually anionic) metal-oxo clusters that are both functional entities for a variety of applications and structural units that can be used as building blocks if reacted under appropriate conditions. This is a powerful combination in that functionality can be built into materials or doped into matrices. Additionally, by assembling functional POMs into ordered materials, new collective behaviors may be realized. Further, the vast variety of achievable POM geometries, compositions, and charges gives this system a high degree of tunability. Processing conditions used to link POMs together into materials offer another vector of control, providing a nearly unlimited range of materials that can be nano-engineered from POM building blocks. POM functionalities that can be built into POM-based materials include catalysis, electro-optic and electro-chromic behavior, anti-viral activity, metal binding, and protein binding. We have begun to explore three approaches to developing this field of functional, nano-engineered POM-based materials, and this report summarizes the work carried out for these approaches to date. The three strategies are: (1) doping POMs into silica matrices using sol-gel science, (2) forming POM-surfactant arrays and metal-POM-surfactant arrays, and (3) using aerosol-spray pyrolysis of the POM-surfactant arrays to superimpose hierarchical architecture by self-assembly during aerosol processing. Doping POMs into silica matrices was successful, but the POMs were partially degraded upon attempts to remove the structure-directing templates. The POM-surfactant and metal-POM-surfactant array approach was highly successful and holds much promise as a novel route to nano-engineering new materials from structural and functional POM building blocks, as well as to forming metastable or unusual POM geometries that may not be obtained by other synthetic methods. The aerosol-assisted self-assembly approach is at a very preliminary stage of investigation, but also shows promise in that structured materials were formed, with structures that were altered by the aerosol processing. We will be seeking alternative funding to continue investigating the second synthetic strategy that we began to develop during this one-year project.
This research consisted of testing surface treatment processes for stainless steel and aluminum for the purpose of suppressing electron emission over large surface areas to improve the pulsed high-voltage hold-off capabilities of these metals. Improvements to hold-off would be beneficial to the operation of the vacuum-insulator grading rings and final self-magnetically insulated transmission line on the ZR-upgrade machine and other pulsed power applications such as flash radiography and pulsed-microwave machines. The treatments tested for stainless steel include the Z-protocol (chemical polish, HVFF, and gold coating), pulsed E-beam surface treatments by IHCE, Russia, and chromium oxide coatings. Treatments for aluminum were anodization and polymer coatings. Breakdown thresholds were also measured for a range of surface finishes and gap distances. The study found that: (1) Electrical conditioning and solvent cleaning in a filtered-air environment each improve HV hold-off by 30%. (2) Anodized coatings on aluminum give a factor of two improvement in high-voltage hold-off. However, anodized aluminum loses this improvement when the damage is severe. Chromium oxide coatings on stainless steel give a 40% and 20% improvement in hold-off before and after damage from many arcs. (3) Bare aluminum gives similar hold-off for surface roughness, R{sub a}, ranging from 0.08 to 3.2 {micro}m. (4) The various EBEST surfaces tested give high-voltage hold-off a factor of two better than typical machined surfaces and similar to that of R{sub a} = 0.05 {micro}m polished stainless steel surfaces. (5) For gaps > 2 mm the hold-off voltage increases as the square root of the gap for bare metal surfaces. This is inconsistent with the accepted model for metals that involves E-field induced electron emission from dielectric inclusions. Micro-particles accelerated across the gap during the voltage pulse would give the observed voltage dependence. However, the similarity in observed breakdown times for large and small gaps requires that the particles be of molecular size, which makes accelerated-micro-particle-induced breakdown seem improbable as well.
This report summarizes work performed to determine the capability of the Pinpoint Locator system, a commercial system designed and manufactured by RF Technologies. It is intended for use in finding people with locator badges in multi-story buildings. The Pinpoint system evaluated is a cell-based system, meaning it can only locate badges within an area bordered by its antennas.
The purpose of this program was to investigate methods to characterize the colloidal stability of nanoparticles during the synthesis reaction, and to characterize their organization as it relates to interparticle forces. Studies were attempted using Raman spectroscopy and ultrasonic attenuation to observe the nucleation and growth process while characterizing stability parameters such as the zeta potential. Applying the available techniques showed that the instrumentation is highly sensitive to the particle concentration of the system. Optical routes can be complicated by the scattering effects of colloidal suspensions, but dilution can lower the signal enough to prevent collection of data. Acoustic methods require a significant particle concentration, preventing the observation of nucleation events. Studies on the dispersion of nanoparticles show that electrostatic routes are unsuccessful with molecular surfactants at high particle concentration because the electrostatic interactions are collapsed by counterions. The study of molecular surfactants shows that steric lengths on the order of 2 nm are sufficient for dispersion of nanoparticle systems at high particle concentration, similar to dispersion with commercial polyelectrolyte surfactants.
In many applications, the ability to monitor the output of a capacitive discharge circuit is imperative to ensuring the reliability and accuracy of the unit. This monitoring is commonly accomplished with a Current Viewing Transformer (CVT). To calibrate the CVT, the circuit is assembled with a Current Viewing Resistor (CVR) in addition to the CVT, and the peak outputs are compared. However, difficulties encountered with the use of CVRs make it desirable to eliminate the CVR from the calibration process. This report describes a method for determining the calibration factor between the current throughput and the CVT voltage output in a capacitive discharge unit from the CVT ringdown data and the values of the initial voltage and capacitance of the circuit. Previous linear RLC fitting work for determining R, L, and C is adapted to return values of R, L, and the calibration factor, k. Separate solutions for the underdamped and overdamped cases are presented and implemented on real circuit data using MathCad software with positive results. This technique may also offer a unique approach to self-calibration of current measuring devices.
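The sketch below illustrates, for the underdamped case only, the kind of ringdown-fit calibration described in the abstract above. It uses a generic damped-sinusoid fit in Python rather than the MathCad implementation reported, and the circuit values, noise level, and synthetic record are illustrative assumptions rather than data from the report.

```python
import numpy as np
from scipy.optimize import curve_fit

# Known circuit quantities (illustrative values, not data from the report).
V0 = 2000.0     # initial capacitor voltage, V
C = 10.0e-6     # capacitance, F

def cvt_ringdown(t, A, alpha, omega, phi):
    """Damped-sinusoid model of the CVT output voltage (underdamped case)."""
    return A * np.exp(-alpha * t) * np.sin(omega * t + phi)

# t_data, v_cvt_data would be the digitized CVT ringdown record; a synthetic
# record with a known calibration factor stands in for real data here.
rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 2.0e-3, 2000)
true_alpha, true_omega, true_k = 2.0e3, 5.0e4, 100.0      # k in amperes per volt
L_true = 1.0 / (C * (true_omega**2 + true_alpha**2))
i_true = (V0 / (true_omega * L_true)) * np.exp(-true_alpha * t_data) * np.sin(true_omega * t_data)
v_cvt_data = i_true / true_k + rng.normal(0.0, 1e-2, t_data.size)

# Rough initial frequency estimate from zero crossings (half a period apart).
crossings = np.where(np.diff(np.signbit(v_cvt_data).astype(int)) != 0)[0]
omega_guess = np.pi / np.mean(np.diff(t_data[crossings]))

# Fit the ringdown shape to recover the damping rate and ringing frequency.
p0 = [np.max(np.abs(v_cvt_data)), 1.0e3, omega_guess, 0.0]
popt, _ = curve_fit(cvt_ringdown, t_data, v_cvt_data, p0=p0)
A_fit, alpha_fit, omega_fit, _ = popt

# Series RLC relations: omega^2 + alpha^2 = 1/(L*C) and alpha = R/(2*L).
L_fit = 1.0 / (C * (omega_fit**2 + alpha_fit**2))
R_fit = 2.0 * alpha_fit * L_fit

# Circuit theory gives i(t) = (V0/(omega*L)) * exp(-alpha*t) * sin(omega*t), so the
# calibration factor (current per volt of CVT output) follows from the amplitude ratio.
k_fit = (V0 / (omega_fit * L_fit)) / abs(A_fit)
print(f"R = {R_fit:.3f} ohm, L = {L_fit*1e6:.2f} uH, k = {k_fit:.1f} A/V")
```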
The Auxiliary Hot Cell Facility (AHCF) at Sandia National Laboratories, New Mexico (SNL/NM) is a Hazard Category 3 nuclear facility used to characterize, treat, and repackage radioactive and mixed material for reuse, recycling, or ultimate disposal. Mixed waste may also be handled at the AHCF. A significant upgrade to a previous facility, the Temporary Hot Cell, was required to perform this mission. A checklist procedure was used to perform a human-factors evaluation of the AHCF modifications. This evaluation resulted in two recommendations, both of which have been implemented.
The mouth of the Rio Grande has become silted up, obstructing its flow into the Gulf of Mexico. This is problematic in that it has created extensive flooding. The purpose of this study was to determine the erosion and transport potential of the sediments obstructing the flow of the Rio Grande by employing a unique Mobile High Shear Stress flume developed by Sandia's Carlsbad Programs Group for the US Army Corps of Engineers. The flume measures in-situ sediment erosion properties at shear stresses ranging from normal flow to flood conditions for a variable depth sediment core. The flume is in a self-contained trailer that can be placed on site in the field. Erosion rates and sediment grain size distributions were determined from sediment samples collected in and around the obstruction and were subsequently used to characterize the erosion potential of the sediments under investigation.
Diversionary devices such as flashbang grenades are used in a wide variety of military and law-enforcement operations. They function to distract and/or incapacitate adversaries in scenarios ranging from hostage rescue to covert strategic paralysis operations. There are a number of disadvantages associated with currently available diversionary devices. Serious injuries and fatalities have resulted from their use both operationally and in training. Because safety is of paramount importance, desired improvements to these devices include protection against inadvertent initiation, the elimination of the production of high-velocity fragments, less damaging decibel output and increased light output. Sandia National Laboratories has developed a next-generation diversionary flash-bang device that will provide the end user with these enhanced safety features.
It has been recognized that documentation for new customers of ASCI Red, aka janus or the Intel Teraflops at Sandia National Laboratories, has been sadly lacking. This document has been prepared by a team of subject matter experts to fill that void and to provide a starting point for providing a similar document for ASCI Red Storm in the future. This document is intended for SNL users who need to jumpstart their use of Janus and Janus-s.
We have investigated the possibility of constructing nanoscale metallic vehicles powered by biological motors or flagella that are activated and powered by visible light. The vehicle's body is to be composed of the surfactant bilayer of a liposome coated with metallic nanoparticles or nanosheets grown together into a porous single crystal. The diameter of the rigid metal vesicles ranges from about 50 nm to several microns. Illumination with visible light activates a photosynthetic system in the bilayer that can generate a pH gradient across the liposomal membrane. The proton gradient can fuel a molecular motor that is incorporated into the membrane. Some molecular motors require ATP to fuel active transport. The protein ATP synthase, when embedded in the membrane, will use the pH gradient across the membrane to produce ATP from ADP and inorganic phosphate. The nanoscale vehicle is thus composed of both natural biological components (ATPase, flagellum; actin-myosin, kinesin-microtubules) and biomimetic components (metal vehicle casing, photosynthetic membrane) as functional units. Only light and storable ADP, phosphate, water, and a weak electron donor are required as fuel components. These nano-vehicles are being constructed by self-assembly and photocatalytic and autocatalytic reactions. The nano-vehicles can potentially respond to chemical gradients and other factors such as light intensity and field gradients, in a manner similar to the way that magnetic bacteria navigate. The delivery package might include decision-making and guidance components, drugs or other biological and chemical agents, explosives, catalytic reactors, and structural materials. We expected in one year to be able only to assess the problems and major issues at each stage of construction of the vehicle and the likely success of fabricating viable nanovehicles with our biomimetic photocatalytic approach. Surprisingly, we have been able to demonstrate that metallized photosynthetic liposomes can indeed be made. We have completed the synthesis of metallized liposomes with photosynthetic function included and studied these structures by electron microscopy. Both platinum and palladium nanosheeting have been used to coat the micelles. The stability of the vehicles to mechanical stress and the solution environment is enhanced by the single-crystalline platinum or palladium coating on the vesicle. With analogous platinized micelles, it is possible to dry the vehicles and re-suspend them with full functionality. However, with the liposomes, drying on a TEM grid may cause the platinized liposomes to collapse, although they probably remain viable in solution. It remains to be shown whether a proton motive force across the metallized bilayer membrane can be generated and whether we will also be able to incorporate various functional capabilities including ATP synthesis and functional molecular motors. Future tasks to complete the nanovehicles would be the incorporation of ATP synthase into metallized liposomes and the incorporation of a molecular motor into metallized liposomes.
ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation in inviscid fluids and solids. This document describes user options for modeling magnetohydrodynamic, thermal conduction, and radiation emission effects.
This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.
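As an illustrative aside, the sketch below evaluates the classical Townsend self-sustainment criterion built from primary (alpha) and secondary (gamma) ionization coefficients for a uniform-field gap. The coefficient fit form and the numerical values of A, B, and gamma are generic air-like placeholders, not the fits developed in the report, and the report's nonuniform geometries require integrating alpha along field lines rather than the simple product used here.

```python
import math

def townsend_alpha(E, p, A=15.0, B=365.0):
    """Primary ionization coefficient from the fit alpha/p = A*exp(-B*p/E).
    A [1/(cm*torr)] and B [V/(cm*torr)] are illustrative air-like values."""
    return A * p * math.exp(-B * p / E)

def breakdown(V, gap_cm, p_torr, gamma=0.01):
    """Townsend self-sustainment criterion for a uniform-field gap:
    gamma * (exp(alpha*d) - 1) >= 1."""
    E = V / gap_cm                      # uniform field, V/cm
    alpha = townsend_alpha(E, p_torr)
    return gamma * (math.exp(alpha * gap_cm) - 1.0) >= 1.0

# Scan applied voltage to locate the breakdown threshold of a 1 cm gap at 760 torr.
for V in range(20000, 40001, 1000):
    if breakdown(V, gap_cm=1.0, p_torr=760.0):
        print(f"Criterion first satisfied near {V} V for a 1 cm gap at 760 torr")
        break
```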
Wireless communication networks are highly resource-constrained; thus many security protocols which work in other settings may not be efficient enough for use in wireless environments. This report considers a variety of cryptographic techniques which enable secure, authenticated communication when resources such as processor speed, battery power, memory, and bandwidth are tightly limited.
The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and more rigorous comparisons are required for model validation.
We present here the details of the implementation of the parallel tempering Monte Carlo technique in LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows large regions of energy space to be sampled very quickly and allows minimum-energy configurations to emerge in very complex systems, such as large biomolecular systems. By incorporating this algorithm into an existing code, we immediately gain all of the previous work that has been put into LAMMPS and make this technique quickly available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
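For readers unfamiliar with the method, the sketch below shows the standard Metropolis-style replica-exchange acceptance rule on which parallel tempering rests. The temperature ladder, energies, and driver loop are illustrative placeholders and are not taken from the LAMMPS implementation described above.

```python
import math
import random

def attempt_swap(E_i, T_i, E_j, T_j, kB=1.0):
    """Metropolis acceptance test for exchanging configurations between two
    replicas at temperatures T_i and T_j with instantaneous energies E_i, E_j."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Example: neighboring replicas on a temperature ladder periodically attempt swaps.
temperatures = [300.0, 360.0, 432.0, 518.0]      # a geometric ladder (illustrative)
energies = [-105.2, -98.7, -91.3, -84.0]         # instantaneous energies (illustrative)
for i in range(len(temperatures) - 1):
    if attempt_swap(energies[i], temperatures[i], energies[i + 1], temperatures[i + 1]):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
        print(f"Swapped replicas {i} and {i + 1}")
```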
This report originates in a workshop held at Sandia National Laboratories, bringing together a variety of external experts with Sandia personnel to discuss 'The Implications of Global Climate Change for International Security.' Whatever the future of the current global warming trend, paleoclimatic history shows that climate change happens, sometimes abruptly. These changes can severely impact human water supplies, agriculture, migration patterns, infrastructure, financial flows, disease prevalence, and economic activity. Those impacts, in turn, can lead to national or international security problems stemming from aggravation of internal conflicts, increased poverty and inequality, exacerbation of existing international conflicts, diversion of national and international resources from international security programs (military or non-military), contribution to global economic decline or collapse, or international realignments based on climate change mitigation policies. After reviewing these potential problems, the report concludes with a brief listing of some research, technology, and policy measures that might mitigate them.
This report describes a passive optical component, the resonant subwavelength grating (RSG), which can be employed as one element in an RSG array. An RSG functions as an extremely narrow wavelength- and angular-band reflector, or mode selector. Theoretical studies predict that an infinite, laterally extended RSG can reflect 100% of the resonant light while transmitting the balance of the other wavelengths. Experimental realization of these remarkable predictions has been hampered primarily by fabrication challenges. Even so, we present large-area (1.0 mm) RSG reflectivity as high as 100.2%, normalized to deposited gold. Broad use of the RSG will only truly occur in an accessible micro-optical system. This program at Sandia pursues a normal-incidence array configuration of RSGs in which each array element resonates at a distinct wavelength, acting as a dense array of wavelength- and mode-selective reflectors. Because of the array configuration, RSGs can be matched to an array of pixels, detectors, or chemical/biological cells for integrated optical sensing. Micro-optical system considerations impact the ideal, large-area RSG performance by requiring finite-extent devices and robust materials for the appropriate wavelength. Theoretical predictions and experimental measurements are presented that demonstrate the component response as a function of decreasing RSG aperture dimension and off-normal input angular incidence.
We conducted a study of the time and resources that would be required for Sandia National Laboratories to once again perform nuclear weapons effects experiments of the sort that it did in the past. The study is predicated on the assumptions that if underground nuclear weapons effects testing (UG/NWET) is ever resumed, (1) a brief series of tests (i.e., 2-3) would be done, and (2) all required resources other than those specific to SNL experiments would be provided by others. The questions that we sought to answer were: (1) What experiments would SNL want to do and why? (2) How much would they cost? (3) How long would they take to field? To answer these questions, we convened panels of subject matter experts first to identify five experiments representative of those that SNL has done in the past, and then to determine the costs and timelines to design, fabricate and field each of them. We found that it would cost $76M to $84M to do all five experiments, including 164 to 174 FTEs to conduct all five experiments in a single test. Planning and expenditures for some of the experiments needed to start as early as 5.5 years prior to zero-day, and some work would continue up to 2 years beyond the event. Using experienced personnel as mentors, SNL could probably field such experiments within the next five years. However, beyond that time frame, loss of personnel would place us in the position of essentially starting over.
This report summarizes the results of a five-month LDRD late start project which explored the potential of enabling technology to improve the performance of small groups. The purpose was to investigate and develop new methods to assist groups working in high consequence, high stress, ambiguous and time critical situations, especially those for which it is impractical to adequately train or prepare. A testbed was constructed for exploratory analysis of a small group engaged in tasks with high cognitive and communication performance requirements. The system consisted of five computer stations, four with special devices equipped to collect physiologic, somatic, audio and video data. Test subjects were recruited and engaged in a cooperative video game. Each team member was provided with a sensor array for physiologic and somatic data collection while playing the video game. We explored the potential for real-time signal analysis to provide information that enables emergent and desirable group behavior and improved task performance. The data collected in this study included audio, video, game scores, physiological, somatic, keystroke, and mouse movement data. The use of self-organizing maps (SOMs) was explored to search for emergent trends in the physiological data as it correlated with the video, audio and game scores. This exploration resulted in the development of two approaches for analysis, to be used concurrently, an individual SOM and a group SOM. The individual SOM was trained using the unique data of each person, and was used to monitor the effectiveness and stress level of each member of the group. The group SOM was trained using the data of the entire group, and was used to monitor the group effectiveness and dynamics. Results suggested that both types of SOMs were required to adequately track evolutions and shifts in group effectiveness. Four subjects were used in the data collection and development of these tools. This report documents a proof of concept study, and its observations are preliminary. Its main purpose is to demonstrate the potential for the tools developed here to improve the effectiveness of groups, and to suggest possible hypotheses for future exploration.
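To make the SOM-based analysis described above concrete, the sketch below trains a small two-dimensional self-organizing map on synthetic feature vectors. The grid size, learning-rate schedule, and stand-in 'physiological' data are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small 2-D self-organizing map on feature vectors in `data`
    (shape: n_samples x n_features). Returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    weights = rng.random((n_rows, n_cols, data.shape[1]))
    # Grid coordinates of each node, used for neighborhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1.0 - step / n_steps)                 # decaying learning rate
            sigma = max(sigma0 * (1.0 - step / n_steps), 0.5) # shrinking neighborhood
            # Best-matching unit: node whose weight vector is closest to the sample.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU on the map grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

# Illustrative use: organize synthetic "physiological" feature vectors.
fake_features = np.random.default_rng(1).normal(size=(200, 4))  # e.g. heart rate, GSR, ...
som = train_som(fake_features)
print("Trained SOM weight grid shape:", som.shape)
```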
Public-key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume significant bandwidth. Accordingly, there are many environments (e.g., wireless, ad-hoc, and remote sensing networks) where the costs of public-key methods are prohibitive and they cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
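As a toy illustration of why elliptic-curve arithmetic yields compact public-key operations, the sketch below implements textbook point addition and double-and-add scalar multiplication over a tiny prime field. The curve parameters are deliberately insecure teaching values and bear no relation to the algorithms or the patented improvement mentioned above.

```python
# Toy elliptic-curve arithmetic over a small prime field. Security of real EC
# systems comes from the hardness of the discrete log on a large curve; this
# tiny curve only illustrates the group operations.
# Curve: y^2 = x^3 + A*x + B over GF(P_MOD). Parameters are illustrative only.
P_MOD, A, B = 97, 2, 3
INF = None  # point at infinity (group identity)

def ec_add(P, Q):
    """Add two points on the curve (handles doubling and the identity)."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                     # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    result, addend = INF, P
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

G = (3, 6)  # a point on y^2 = x^3 + 2x + 3 mod 97 (since 27 + 6 + 3 = 36 = 6^2)
print("5*G =", ec_mul(5, G))
```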
A study was performed on the atomic configurations corresponding to local-energy minima for the neutral MgH complex in wurtzite GaN. Density-functional theory with the generalized-gradient approximation for exchange and correlation was used to identify these configurations. The results showed that the dominant configuration consists of H at an antibonding site of a N neighbor of the substitutional Mg, with the Mg-N and N-H bonds nearly aligned.
A convergence theory is presented for a substructuring preconditioner based on constrained energy minimization concepts. The substructure spaces consist of local functions with zero values of the constraints, while the coarse space consists of minimal-energy functions with the constraint values continuous across substructure interfaces. In applications, the constraints include values at corners and, optionally, averages on edges and faces. The preconditioner is reformulated as an additive Schwarz method and analyzed by building on existing results for balancing domain decomposition. The main result is a bound on the condition number based on inequalities involving the matrices of the preconditioner. Estimates of the form C(1 + log{sup 2}(H/h)) are obtained under the standard assumptions of substructuring theory. Computational results demonstrating the performance of the method are included. Published in 2003 by John Wiley & Sons, Ltd.
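Written out explicitly, with notation assumed here (M the preconditioner, S the preconditioned interface operator, H the substructure size, h the mesh size, and C independent of both), the estimate quoted above takes the standard polylogarithmic form:

```latex
% Standard substructuring condition-number estimate (notation assumed):
\kappa\!\left(M^{-1} S\right) \;\le\; C\left(1 + \log\frac{H}{h}\right)^{2}.
```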
A great deal of money and effort has been spent on environmental restoration during the past several decades. Significant progress has been made on improving air quality, cleaning up and preventing leaching from dumps and landfills, and improving surface water quality. However, significant challenges still exist in all of these areas. Among the more difficult and expensive environmental problems, and often the primary factor limiting closure of contaminated sites following surface restoration, is contamination of ground water. The most common technology used for remediating ground water is surface treatment, in which the water is pumped to the surface, treated, and pumped back into the ground or released to a nearby river or lake. Although still useful for certain remediation scenarios, the limitations of pump-and-treat technologies have recently been recognized, along with the need for innovative solutions to ground-water contamination. Even with the current challenges we face, there is a strong need to create geological repository systems for disposal of radioactive wastes containing long-lived radionuclides. The potential contamination of groundwater is a major factor in selection of a radioactive waste disposal site, design of the facility, future scenarios such as human intrusion into the repository and the possible need for retrieving the radioactive material, and the use of backfills designed to keep the radionuclides immobile. One of the most promising technologies for remediation of contaminated sites and design of radioactive waste repositories is the use of permeable reactive barriers (PRBs). PRBs are constructed of reactive material(s) to intercept and remove the radionuclides from the water and decontaminate the plumes in situ. The concept of PRBs is relatively simple. The reactive material(s) is placed in the subsurface between the waste or contaminated area and the groundwater. Reactive materials used thus far in practice and research include zero-valent iron, hydroxyapatite, magnesium oxide, and others. As the contaminant moves through the reactive material, the contaminant is either sorbed by the reactive material or chemically reacts with the material to form a less harmful substance. Because of the high risk associated with failure of a geological repository for nuclear waste, most nations favor a near-field multibarrier engineered system using backfill materials to prevent release of radionuclides into the surrounding groundwater.
Genetic programming is a powerful methodology for automatically producing solutions to problems in a variety of domains. It has been used successfully to develop behaviors for RoboCup soccer players and simple combat agents. We will attempt to use genetic programming to solve a problem in the domain of strategic combat, keeping in mind the end goal of developing sophisticated behaviors for compound defense and infiltration. The simplified problem at hand is that of two armed agents in a small room, containing obstacles, fighting against each other for survival. The base case and three changes are considered: a memory of positions using stacks, context-dependent genetic programming, and strongly typed genetic programming. Our work demonstrates slight improvements from the first two techniques, and no significant improvement from the last.
Incorporating results from a previously developed finite element model, an uncertainty and parameter sensitivity analysis was conducted using preliminary site-specific data from Horonobe, Japan (data available from five boreholes as of 2003). Latin Hypercube Sampling was used to draw random parameter values from the site-specific measured, or approximated, physicochemical uncertainty distributions. Using pathlengths and groundwater velocities extracted from the three-dimensional, finite element flow and particle tracking model, breakthrough curves for multiple realizations were calculated with the semi-analytical, one-dimensional, multirate transport code, STAMMT-L. A stepwise linear regression analysis using the 5, 50, and 95% breakthrough times as the dependent variables and LHS sampled site physicochemical parameters as the independent variables was used to perform a sensitivity analysis. Results indicate that the distribution coefficients and hydraulic conductivities are the parameters responsible for most of the variation among simulated breakthrough times. This suggests that researchers and data collectors at the Horonobe site should focus on accurately assessing these parameters and quantifying their uncertainty. Because the Horonobe Underground Research Laboratory is in an early phase of its development, this work should be considered as a first step toward an integration of uncertainty and sensitivity analyses with decision analysis.
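A minimal sketch of the sampling-and-regression workflow summarized above is given below. The parameter names and ranges, the toy 'breakthrough time' model standing in for the flow and STAMMT-L transport calculations, and the plain (non-stepwise) regression are all illustrative assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sample: one value per equal-probability stratum,
    independently permuted for each parameter. `bounds` is {name: (lo, hi)}."""
    rng = np.random.default_rng(seed)
    names = list(bounds)
    samples = np.empty((n_samples, len(names)))
    for j, name in enumerate(names):
        lo, hi = bounds[name]
        strata = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
        samples[:, j] = lo + strata * (hi - lo)
    return names, samples

# Illustrative parameter ranges (placeholders, not Horonobe site data).
bounds = {"log10_K": (-9.0, -5.0),      # hydraulic conductivity
          "Kd": (0.0, 2.0),             # distribution coefficient
          "porosity": (0.1, 0.4)}
names, X = latin_hypercube(200, bounds)

# Toy stand-in for the transport model: a "breakthrough time" that depends
# strongly on Kd and conductivity and weakly on porosity.
rng = np.random.default_rng(1)
t50 = 10.0 - 2.0 * X[:, 0] + 5.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.2, len(X))

# Linear regression on standardized inputs; the magnitude of each coefficient
# ranks that parameter's contribution to the variability of the breakthrough time.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(Xs)), Xs]), t50, rcond=None)
for name, c in sorted(zip(names, coef[1:]), key=lambda pair: -abs(pair[1])):
    print(f"{name:10s} standardized coefficient = {c:+.3f}")
```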
We address the electromagnetic induction problem for fully 3D geologic media and present a solution to the governing Maxwell equations based on a power series expansion. The coefficients in the series are computed using the adjoint method assuming an underlying homogeneous reference model. These solutions are available analytically for point dipole source terms and lead to rapid calculation of the expansion coefficients. First order solutions are presented for a model study in petroleum geophysics composed of a multi-component induction sonde proximal to a fault within a compartmentalized hydrocarbon reservoir.
The US Department of Energy requires a periodic 'self assessment' of Sandia's Microsystems Program. An external panel review of this program is held approximately every 18 months, and the report from the external review panel serves as the basis for the DOE 'self assessment.' The review for this fiscal year was held on September 30-October 1, 2002 at Sandia National Laboratories, Albuquerque, NM. The panel was comprised of experts in the fields of microelectronics, photonics and microsystems from universities, industry and other Government agencies. A complete list of the panel members is shown as Appendix A to the attached report. The review assesses four areas: relevance to national needs and agency mission; quality of science technology and engineering; performance in the operation of a major facility; and program performance management and planning. Relevance to national needs and agency mission was rated as 'outstanding.' The quality of science, technology, and engineering was rated as 'outstanding.' Operation of a major facility was noted as 'outstanding,' while the category of program performance, management, and planning was rated as 'outstanding.' Sandia's Microsystems Program received an overall rating of 'outstanding' [the highest possible rating]. The attached report was prepared by the panel in a format requested by Sandia to conform with the performance criteria for the DOE self assessment.
A comprehensive evaluation and experimental optimization of the FireFly{trademark} 600 off-grid photovoltaic system manufactured by Energia Total, Ltd. was conducted at Sandia National Laboratories in May and June of 2001. This evaluation was conducted at the request of the manufacturer and addressed performance of individual system components, overall system functionality and performance, safety concerns, and compliance with applicable codes and standards. A primary goal of the effort was to identify areas for improvement in performance, reliability, and safety. New system test procedures were developed during the effort.
The SNL/NM CY2002 SWEIS Annual Review discusses changes in facilities and facility operations that have occurred in selected and notable facilities since source data were collected for the SNL/NM SWEIS (DOE/EIS-0281). The following information is presented: (1) an updated overview of SNL/NM selected and notable facilities and infrastructure capabilities; (2) an overview of SNL/NM environment, safety, and health programs, including summaries of the purpose, operations, activities, hazards, and hazard controls at relevant facilities and risk management methods for SNL/NM; (3) updated base-year activities data, together with related inventories, material consumption, emissions, waste, and resource consumption; and (4) appendices summarizing activities and related hazards at SNL/NM individual special, general, and highbay laboratories, and chemical purchases.
This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed-memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development, and production operation of similar machines at Sandia. Red Storm will have an unusually high-bandwidth, low-latency interconnect, specially designed hardware and software reliability features, a lightweight-kernel compute-node operating system, and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.
We present our research results on membrane pores. The study was divided into two primary sections. The first involved the formation of protein pores in free-standing lipid bilayer membranes. The second involved the fabrication, via surface micromachining techniques, and subsequent testing of solid-state nanopores using the same characterization apparatus and procedures as those used for the protein pores. We were successful in our ability to form leak-free lipid bilayers, to detect the formation of single protein pores, and to monitor the translocation dynamics of individual homogeneous 100-base strands of DNA. Differences in translocation dynamics were observed when the base was switched from adenine to cytosine. The solid-state pores (estimated at 2-5 nm) were fabricated in thin silicon nitride membranes. Testing of the solid-state pores indicated currents comparable to those of a similarly sized protein pore, with excellent noise and sensitivity. However, there were no conditions under which DNA translocation was observed. After considerable effort, we reached the unproven conclusion that multiple (<1 nm) pores were formed in the nitride membrane, thus explaining both the current sensitivity and the lack of DNA translocation blockages.
The Sandia Petaflops Planner is a tool for projecting the design and performance of parallel supercomputers into the future. The mathematical basis of these projections is the International Technology Roadmap for Semiconductors (ITRS, or a detailed version of Moore's Law) and DOE balance factors for supercomputer procurements. The planner is capable of various forms of scenario analysis, cost estimation, and technology analysis. The tool is described along with technology conclusions regarding PFLOPS-level supercomputers in the upcoming decade.
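The sketch below shows the kind of doubling-law extrapolation such a planner performs; the base-year performance and the doubling period are illustrative inputs, not ITRS figures or values used by the Sandia Petaflops Planner.

```python
def project_peak_flops(base_year, base_peak_tflops, target_year, doubling_period_years=1.5):
    """Project peak machine performance forward under a Moore's-Law-style
    doubling assumption (the doubling period is an illustrative input)."""
    doublings = (target_year - base_year) / doubling_period_years
    return base_peak_tflops * 2.0 ** doublings

# Example: scale a 40 TFLOPS-class machine forward a decade.
for year in (2006, 2010, 2014):
    peak = project_peak_flops(2004, 40.0, year)
    print(f"{year}: ~{peak / 1000:.2f} PFLOPS projected peak")
```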
This document presents a high-level description of the Xyce {trademark} Parallel Electronic Simulator Release and Distribution Management Process. The purpose of this process is to standardize the manner in which all Xyce software products progress toward release and how releases are made available to customers. Rigorous Release Management will assure that Xyce releases are created in such a way that the elements comprising the release are traceable and the release itself is reproducible. Distribution Management describes what is to be done with a Xyce release that is eligible for distribution.
Wireless networking is becoming a common element of industrial, corporate, and home networks. Commercial wireless network systems have become reliable, and their cost has become more affordable than that of equivalent wired network solutions. The security risks of wireless systems, however, are higher than those of wired systems and have not been studied in depth. This report starts to bring together information on wireless architectures and their connection to wired networks. We detail the information contained in the many different views of a wireless network system. The method of using multiple views of a system to assist in the determination of vulnerabilities comes from the Information Design Assurance Red Team (IDART{trademark}) Methodology of system analysis developed at Sandia National Laboratories.
Using molecular dynamics simulations we examine the effective interactions between two like-charged rods as a function of angle and separation. In particular, we determine how the competing electrostatic repulsions and multivalent-ion-induced attractions depend upon concentrations of simple and multivalent salts. We find that with increasing multivalent salt, the stable configuration of two rods evolves from isolated rods to aggregated perpendicular rods to aggregated parallel rods; at sufficiently high concentration, additional multivalent salt reduces the attraction. Monovalent salt enhances the attraction near the onset of aggregation and reduces it at a higher concentration of multivalent salt.
Sandia is currently developing a lead-zirconate-titanate ceramic 95/5-2Nb (or PNZT) from chemically prepared ('chem-prep') precursor powders. Previous PNZT ceramic was fabricated from the powders prepared using a 'mixed-oxide' process. The specimens of unpoled PNZT ceramic from batch HF803 were tested under hydrostatic, uniaxial, and constant stress difference loading conditions within the temperature range of -55 to 75 C and pressures to 500 MPa. The objective of this experimental study was to obtain mechanical properties and phase relationships so that the grain-scale modeling effort can develop and test its models and codes using realistic parameters. The stress-strain behavior of 'chem-prep' PNZT under different loading paths was found to be similar to that of 'mixed-oxide' PNZT. The phase transformation from ferroelectric to antiferroelectric occurs in unpoled ceramic with abrupt increase in volumetric strain of about 0.7 % when the maximum compressive stress, regardless of loading paths, equals the hydrostatic pressure at which the transformation otherwise takes place. The stress-volumetric strain relationship of the ceramic undergoing a phase transformation was analyzed quantitatively using a linear regression analysis. The pressure (P{sub T1}{sup H}) required for the onset of phase transformation with respect to temperature is represented by the best-fit line, P{sub T1}{sup H} (MPa) = 227 + 0.76 T (C). We also confirmed that increasing shear stress lowers the mean stress and the volumetric strain required to trigger phase transformation. At the lower bound (-55 C) of the tested temperature range, the phase transformation is permanent and irreversible. However, at the upper bound (75 C), the phase transformation is completely reversible as the stress causing phase transformation is removed.
Of special promise for providing dynamic mesoscale response data is the line-imaging VISAR, an instrument for providing spatially resolved velocity histories in dynamic experiments. We have prepared two line-imaging VISAR systems capable of spatial resolution in the 10-20 micron range, at the Z and STAR facilities. We have applied this instrument to selected experiments on a compressed gas gun, chosen to provide initial data for several problems of interest, including: (1) pore-collapse in copper (two variations: 70 micron diameter hole in single-crystal copper) and (2) response of a welded joint in dissimilar materials (Ta, Nb) to ramp loading relative to that of a compression joint. The instrument is capable of resolving details such as the volume and collapse history of a collapsing isolated pore.
The Computational Plant or Cplant is a commodity-based distributed-memory supercomputer under development at Sandia National Laboratories. Distributed-memory supercomputers run many parallel programs simultaneously. Users submit their programs to a job queue. When a job is scheduled to run, it is assigned to a set of available processors. Job runtime depends not only on the number of processors but also on the particular set of processors assigned to it. Jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This report introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free-list strategy previously used on Cplant. These new allocation strategies are implemented in Release 2.0 of the Cplant System Software that was phased into the Cplant systems at Sandia by May 2002. Experimental results then demonstrated that the average number of communication hops between the processors allocated to a job strongly correlates with the job's completion time. This report also gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures. The associated clustering problem is as follows: Given n points in {Re}{sup d}, find k points that minimize their average pairwise L{sub 1} distance. Exact and approximate algorithms are given for these optimization problems. One of these algorithms has been implemented on Cplant and will be included in Cplant System Software, Version 2.1, to be released. In more preliminary work, we suggest improvements to the scheduler separate from the allocator.
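The clustering objective stated above can be written down directly; the exhaustive search below is only an illustration of that objective for a handful of free processors and is not one of the exact or approximate grid algorithms developed in the report.

```python
from itertools import combinations

def avg_pairwise_l1(points):
    """Average pairwise L1 (Manhattan) distance among a set of points."""
    pts = list(points)
    pairs = list(combinations(pts, 2))
    total = sum(sum(abs(a - b) for a, b in zip(p, q)) for p, q in pairs)
    return total / len(pairs)

def best_allocation(free_processors, k):
    """Exhaustively pick the k free processors with minimum average pairwise
    L1 distance. Exponential in n, so this only illustrates the objective."""
    return min(combinations(free_processors, k), key=avg_pairwise_l1)

# Free processors on a 2-D mesh, identified by their (row, column) coordinates.
free = [(0, 0), (0, 3), (1, 1), (2, 2), (3, 0), (3, 3)]
print(best_allocation(free, 3))
```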
Our overall goal is to understand and develop a novel light-driven approach to the controlled growth of unique metal and semiconductor nanostructures and nanomaterials. In this photochemical process, bio-inspired porphyrin-based photocatalysts reduce metal salts in aqueous solutions at ambient temperatures to provide metal nucleation and growth centers. Photocatalyst molecules are pre-positioned at the nanoscale to control the location and morphology of the metal nanostructures grown. Self-assembly, chemical confinement, and molecular templating are some of the methods used for nanoscale positioning of the photocatalyst molecules. When exposed to light, the photocatalyst molecule repeatedly reduces metal ions from solution, leading to deposition and the synthesis of the new nanostructures and nanostructured materials. Studies of the photocatalytic growth process and the resulting nanostructures address a number of fundamental biological, chemical, and environmental issues and draw on the combined nanoscience characterization and multi-scale simulation capabilities of the new DOE Center for Integrated Nanotechnologies, the University of New Mexico, and Sandia National Laboratories. Our main goals are to elucidate the processes involved in the photocatalytic growth of metal nanomaterials and provide the scientific basis for controlled synthesis. The nanomaterials resulting from these studies have applications in nanoelectronics, photonics, sensors, catalysis, and micromechanical systems. The proposed nanoscience concentrates on three thematic research areas: (1) the creation of nanoscale structures for realizing novel phenomena and quantum control, (2) understanding nanoscale processes in the environment, and (3) the development and use of multi-scale, multi-phenomena theory and simulation. Our goals for FY03 have been to understand the role of photocatalysis in the synthesis of dendritic platinum nanostructures grown from aqueous surfactant solutions under ambient conditions. The research is expected to lead to highly nanoengineered materials for catalysis mediated by platinum, palladium, and potentially other catalytically important metals. The nanostructures made also have potential applications in nanoelectronics, nanophotonics, and nanomagnetic systems. We also expect to develop a fundamental understanding of the uses and limitations of biomimetic photocatalysis as a means of producing metal and semiconductor nanostructures and nanomaterials. The work has already led to a relationship with InfraSUR LLC, a small business that is developing our photocatalytic metal reduction processes for environmental remediation. This work also contributes to science education at a predominantly Hispanic and Native American university.
The velocities of short-duration, high-amplitude shock waves and of high-speed motion created by sources such as explosives, high-energy plasmas, and other rapid-acceleration devices are difficult to measure because of the fast reaction times involved. One measurement tool frequently used is VISAR (Velocity Interferometer System for Any Reflector). VISAR is an optical system that uses Doppler interferometry techniques to measure the complete time history of the motion of a surface. This technique is gaining worldwide acceptance as the tool of choice for measurement of shock phenomena. However, one limitation of the single-point VISAR is that it measures only one point on a surface. The new Multi Point VISAR remedies this limitation by using multiple fiber optics and sensors to send and receive information. Upcoming programs that need analysis of large-diameter flyers prompted the concept and design of a single-cavity, multiple-fiber-optic Multi Point VISAR (MPV). Preliminary designs and the testing of a single-cavity prototype in 1996 supported the feasibility of developing compact fiber-optic bundle systems into the Multi Point VISAR. The new MPV was used to evaluate the performance of two components: a piezo-driven plane-wave-generating isolator and a slim-loop ferroelectric (SFE)-type fireset.
Millions of sealed radioactive sources (SRSs) are being used for a wide variety of beneficial purposes throughout the world. Security experts are now concerned that these beneficial SRSs could be used in a radiological dispersion device to terrorize and disrupt society. The greatest safety and security threat comes from highly radioactive Category 1 and 2 SRSs. Without adequate controls, it may be relatively easy to legally purchase a Category 1 or 2 SRS on the international market under false pretenses. Additionally, during transfer, SRSs are particularly susceptible to theft because the sources are in a shielded and mobile configuration, transportation routes are predictable, and shipments may not be adequately guarded. To determine whether government controls on SRSs are adequate, this study was commissioned to review the current SRS import and export controls of six countries. Canada, the Russian Federation, and South Africa were selected as the exporting countries, and Egypt, the Philippines, and the United States were selected as importing countries. A detailed review of the controls in each country is presented. The authors found that Canada and Russia are major exporters and are exporting highly radioactive SRSs without first determining whether the recipient is authorized by the receiving country to own and use the SRSs. Available evidence was used to estimate that, on average, there are tens to possibly hundreds of intercountry transfers of highly radioactive SRSs each day. Based on these and other findings, this report recommends stronger controls on the export and import of highly radioactive SRSs.
The PANDA code is used to construct tabular equations of state (EOS) for four metals: beryllium, nickel, tungsten, and gold. Each EOS includes melting, vaporization, and thermal electronic excitation. Separate EOS tables are constructed for the solid and fluid phases, and the PANDA phase transition model is used to construct a multiphase EOS table for each metal. These new EOS tables are available for use with the CTH code and other hydrocodes that access the CTH database.
The PANDA code is used to build tabular equations of state (EOS) for titanium and the alloy Ti6Al4V. Each EOS includes solid-solid phase transitions, melting, vaporization, and thermal electronic excitation. Separate EOS tables are constructed for the solid and fluid phases, and the PANDA phase transition model is used to construct a single multiphase table. The model explains a number of interesting features seen in the Hugoniot data, including an anomalous increase in shock velocity recently observed near 200 GPa in Ti6Al4V. These new EOS tables are available for use with the CTH code and other hydrocodes that access the CTH database.
This report summarizes the results of a three-year LDRD project on prognostics and health management (PHM). 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and that require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and an architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine contains algorithms to analyze the consequences of various maintenance actions. It takes as input a maintenance and use schedule, spares information, and time-to-failure data on components; generates maintenance and failure events; and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.
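As an illustration of the Consequence Engine concept, the following minimal sketch simulates failure and preventive-maintenance events for a single component and tallies availability and observed time to failure. The Weibull parameters, maintenance interval, and repair durations are placeholder assumptions for illustration only; they are not values or algorithms taken from the toolkit itself.

# Minimal sketch of a consequence-engine style simulation (illustrative only).
# Component lives are drawn from an assumed Weibull distribution; a fixed
# preventive-maintenance interval is imposed, and availability and observed
# time to failure are tallied over many simulated operating periods.
import random

def simulate(pm_interval_h=2000.0, weibull_shape=2.0, weibull_scale=5000.0,
             repair_h=24.0, pm_h=4.0, horizon_h=100_000.0, runs=1000):
    availabilities, ttfs = [], []
    for _ in range(runs):
        t, downtime = 0.0, 0.0
        while t < horizon_h:
            life = random.weibullvariate(weibull_scale, weibull_shape)
            if life < pm_interval_h:            # unplanned failure occurs first
                ttfs.append(life)
                t += life + repair_h
                downtime += repair_h
            else:                               # preventive maintenance occurs first
                t += pm_interval_h + pm_h
                downtime += pm_h
        availabilities.append(1.0 - downtime / t)
    return sum(availabilities) / runs, sum(ttfs) / max(len(ttfs), 1)

if __name__ == "__main__":
    avail, mean_ttf = simulate()
    print(f"availability ~ {avail:.4f}, mean time to failure ~ {mean_ttf:.0f} h")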
An experimental program is being conducted to study a proposed approach for oil reintroduction in the Strategic Petroleum Reserve (SPR). The goal is to assess whether useful oil is rendered unusable through formation of a stable oil-brine emulsion during reintroduction of degassed oil into the brine layer in storage caverns. This report documents the first stage of the program, in which simulant liquids are used to characterize the buoyant plume that is produced when a jet of crude oil is injected downward from a tube into brine. The experiment consists of a large transparent vessel that is a scale model of the proposed oil injection process at the SPR. An oil layer is floated on top of a brine layer. Silicone oil (Dow Corning 200{reg_sign} Fluid, 5 cSt) is used as the simulant for crude oil to allow visualization of the flow and to avoid flammability and related concerns. Sodium nitrate solution is used as the simulant for brine because it is not corrosive and it can match the density ratio between brine and crude oil. The oil is injected downward through a tube into the brine at a prescribed depth below the oil-brine interface. Flow rates are determined by scaling to match the ratio of buoyancy to momentum between the experiment and the SPR. Initially, the momentum of the flow produces a downward jet of oil below the tube end. Subsequently, the oil breaks up into droplets due to shear forces, buoyancy dominates the flow, and a plume of oil droplets rises to the interface. The interface is deflected upward by the impinging oil-brine plume. Two injection tubes of different diameter (1/2-inch and 1-inch OD) were used to vary the scaling. Use of the 1-inch injection tube also assured that turbulent pipe flow was achieved, which was questionable for lower flow rates in the 1/2-inch tube. In addition, a 1/2-inch J-tube was used to direct the buoyant jet upward rather than downward to determine whether flow redirection could substantially reduce the oil-plume size and the oil-droplet residence time in the brine. Reductions of these quantities would inhibit emulsion formation by limiting the contact between the oil and the brine. Videos of this flow were recorded for scaled flow rates that bracket the equivalent pumping rates in an SPR cavern. Image-processing analyses were performed to quantify the penetration depth of the oil jet, the width of the jet, and the deflection of the interface. The measured penetration depths are shallow, as predicted by penetration-depth models, in agreement with the assumption that the flow is buoyancy-dominated rather than momentum-dominated. The turbulent penetration-depth model provided a good estimate of the measured values for the 1-inch injection tube but overpredicted the penetration depth for the 1/2-inch injection tube. Adding a virtual-origin term would improve the prediction for the 1/2-inch tube at low to nominal injection flow rates but could not capture the rollover seen at high injection flow rates. As expected, the J-tube yielded a much narrower plume because the flow was directed upward, unlike the downward-oriented straight-tube cases where the plume had to reverse direction, leading to a much wider effective plume area. Larger surface deflections were caused by the narrower plume emitted from the J-tube. Although velocity was not measured in these experiments, the video data showed that the J-tube plume was clearly faster than those emitted from the downward-oriented tubes.
These results indicate that oil injection tube modifications could inhibit emulsion formation by reducing the amount of contact (both time and area) between the oil and the brine. Future studies will employ crude oil, saturated brine, and interfacial solids (sludge) from actual SPR caverns.
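As an illustration of the buoyancy-to-momentum scaling used to set the experimental flow rates, the following sketch computes a densimetric Froude number for a downward jet and a simple turbulent penetration-depth estimate of the form L_p ~ C*Fr*D. The fluid properties, tube diameter, flow rate, and constant C are assumed values for illustration only; they are not measurements or design values from the experiment or the SPR.

# Illustrative densimetric-Froude scaling for a downward oil jet into brine.
# All property values, the tube diameter, and the constant C are assumptions
# made for this sketch, not data from the experiment.
import math

def penetration_depth(flow_rate_m3s, tube_d_m, rho_oil, rho_brine, C=2.0):
    area = math.pi * tube_d_m**2 / 4.0
    u = flow_rate_m3s / area                            # jet exit velocity
    g_prime = 9.81 * (rho_brine - rho_oil) / rho_brine  # reduced gravity
    fr = u / math.sqrt(g_prime * tube_d_m)              # densimetric Froude number
    return fr, C * fr * tube_d_m                        # simple L_p ~ C*Fr*D estimate

if __name__ == "__main__":
    fr, depth = penetration_depth(flow_rate_m3s=2.0e-4, tube_d_m=0.0127,
                                  rho_oil=915.0, rho_brine=1200.0)
    print(f"Fr ~ {fr:.1f}, estimated penetration depth ~ {depth:.2f} m")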
The hydrostatically induced ferroelectric(FE)-to-antiferroelectric(AFE) phase transformation for chemically prepared niobium modified PZT 95/5 ceramics was studied as a function of density and pore former type (Lucite or Avicel). Special attention was placed on the effect of different pore formers on the charge release behavior associated with the FE-to-AFE phase transformation. Within the same density range (7.26 g/cm{sup 3} to 7.44 g/cm{sup 3}), results showed that ceramics prepared with Lucite pore former exhibit a higher bulk modulus and a sharper polarization release behavior than those prepared with Avicel. In addition, the average transformation pressure was 10.7% greater and the amount of polarization released was 2.1% higher for ceramics with Lucite pore former. The increased transformation pressure was attributed to the increase of bulk modulus associated with Lucite pore former. Data indicated that a minimum volumetric transformational strain of -0.42% was required to trigger the hydrostatically induced FE-to-AFE phase transformation. This work has important implications for increasing the high temperature charge output for neutron generator power supply units.
This document describes the design, fabrication, and testing of the SNL Data Encryption Standard (DES) ASIC. This device was fabricated in Sandia's Microelectronics Development Laboratory using 0.6 {micro}m CMOS technology. The SNL DES ASIC was modeled in VHDL, simulated, and synthesized using Synopsys, Inc. software; IC layout was performed using Compass Design Automation's CAE tools. IC testing was performed by Sandia's Microelectronic Validation Department using an HP 82000 computer-aided test system. The device is a single integrated circuit, pipelined realization of DES encryption and decryption capable of throughputs greater than 6.5 Gb/s. Several enhancements accommodate ATM or IP network operation and performance scaling. This design is the latest step in the evolution of DES modules.
Athermal, tethered chains are modeled with Density Functional Theory (DFT) for both the explicit-solvent and continuum-solvent cases. The structure of DFT is shown to reduce to Self-Consistent-Field (SCF) theory in the incompressible limit, where there is symmetry between solvent and monomer, and to Single-Chain-Mean-Field (SCMF) theory in the continuum-solvent limit. We show that by careful selection of the reference and ideal systems in DFT, self-consistent numerical solutions can be obtained, thereby avoiding the single-chain Monte Carlo simulation required in SCMF theory. On long length scales, excellent agreement is seen between the simplified DFT and Molecular Dynamics simulations of both continuum solvents and explicit-molecule solvents. To describe the structure of the polymer and solvent near the surface, it is necessary to include compressibility effects and the nonlocality of the field.
A fundamental challenge for all communication systems, engineered or living, is the problem of achieving efficient, secure, and error-free communication over noisy channels. Information-theoretic principles have been used to develop effective coding theory algorithms that successfully transmit information in engineering systems. Living systems also successfully transmit biological information through genetic processes such as replication, transcription, and translation, where the genome of an organism is the contents of the transmission. Decoding of received bit streams is fairly straightforward when the channel encoding algorithms are efficient and known. If the encoding scheme is unknown, or part of the data is missing or intercepted, how would one design a viable decoder for the received transmission? For such systems, blind reconstruction of the encoding/decoding system would be a vital step in recovering the original message. Communication engineers may not frequently encounter this situation, but for computational biologists and biotechnologists it is an immediate challenge. The goal of this work is to develop methods for detecting and reconstructing the encoder/decoder system for engineered and biological data. Building on Sandia's strengths in discrete mathematics, algorithms, and communication theory, we use linear programming, and will use evolutionary computing techniques, to construct efficient algorithms for modeling the coding system for minimally errored engineered data streams and genomic regulatory DNA and RNA sequences. The objective for the initial phase of this project is to establish solid parallels between the biological literature and fundamental elements of communication theory. In this light, the milestones for FY2003 focused on defining genetic channel characteristics and providing an initial approximation for key parameters, including coding rate, memory length, and minimum distance values. A secondary objective addressed the question of determining similar parameters for a received, noisy, error-control encoded data set. In addition to these goals, we initiated exploration of algorithmic approaches to determine whether a data set could be approximated with an error-control code and performed initial investigations into optimization-based methodologies for extracting the encoding algorithm given the coding rate of an encoded noise-free or noisy data stream.
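As a small illustration of the error-control-code parameters discussed above (coding rate and minimum distance), the following sketch enumerates the codewords of a binary linear block code from its generator matrix. The (7,4) Hamming generator used here is a textbook example, not a code extracted from engineered or genomic data.

# Sketch: code rate and minimum distance of a binary linear block code,
# found by enumerating all nonzero codewords of a generator matrix.
# The (7,4) Hamming generator below is only an illustrative example.
from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg, G):
    n = len(G[0])
    return tuple(sum(m * row[i] for m, row in zip(msg, G)) % 2 for i in range(n))

k, n = len(G), len(G[0])
# For a linear code the minimum distance equals the minimum nonzero codeword weight.
d_min = min(sum(encode(m, G)) for m in product((0, 1), repeat=k) if any(m))
print(f"rate = {k}/{n} = {k/n:.3f}, minimum distance = {d_min}")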
Ceramic packaging is crucial to the development of MEMS-based microsystems. It is an enabling technology, giving the ability to build complex packages that combine MEMS, electronics, optics, and sensors in a compact volume. In addition, ceramic hermetic packaging has a long history of providing protection to the enclosed devices, even under harsh conditions. These capabilities are being used at Sandia to package complex, MEMS-based microsystems. Looking ahead, ceramic packaging is developing new capabilities important to microsystems, such as the addition of fluidic channels. These developments will make ceramic packaging a viable option for a wide variety of compact, highly integrated microsystems. However, MEMS, particularly surface micromachines, have new reliability concerns that ceramic packaging needs to address. One example is stiction, where small amounts of water can generate surface forces large enough to cause parts to stick together. This demonstrates the need to measure and control the internal environment with greater precision than has been required in the past. Despite these challenges, it is clear that ceramic packaging will be a key technology for complex microsystems in the future.
Development of next-generation electronics for pulse discharge systems requires miniaturization and integration of high-voltage, high-value resistors (greater than 100 megohms) with novel substrate materials. These material advances are needed for improved reliability, robustness, and performance. In this study, high-sheet-resistance inks of 1 megohm per square were evaluated to reduce overall electrical system volume. We investigated a deposition process that permits co-sintering of high-sheet-resistance inks with a variety of different substrate materials. Our approach combines the direct-write process of aerosol jetting with laser sintering and conventional thermal sintering processes. One advantage of aerosol jetting is that high-quality, fine-line depositions can be achieved on a wide variety of substrates. When combined with laser sintering, the aerosol jetting approach has the capability to deposit resistors at any location on a substrate and to additively trim the resistors to specific values. Using these process techniques, we demonstrated a 400-fold reduction in overall resistor volume compared to commercial chip resistors. Resistors that exhibited this volumetric efficiency were fabricated by 850°C thermal processing on alumina substrates and by 0.1 W laser sintering on Kapton substrates.
IEEE Antennas and Propagation Society, AP-S International Symposium (Digest)
Rohwer, Judd A.; Abdallah, Chaouki T.; Christodoulou, Christos G.
A multiclass LS-SVM architecture for DOA estimation, as applied to a CDMA cellular system, is presented. Simulation results showed a high degree of accuracy with respect to the DOA classes and demonstrated that the LS-SVM DDAG system has a wide range of performance capabilities. The broad range of research in machine-learning-based DOA estimation includes multilabel and multiclass classification, classification accuracy, error control and validation, kernel selection, estimation of signal subspace dimension, and overall performance characterization of the LS-SVM DDAG DOA estimation algorithm.
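As an illustration of the least-squares SVM building block used in such a DDAG architecture, the following sketch trains a binary LS-SVM by solving the standard linear system with an RBF kernel and classifies two synthetic clusters standing in for DOA classes. The data, kernel width, and regularization value are placeholder assumptions, and the full multiclass DDAG combination of pairwise classifiers is not shown.

# Sketch of a binary least-squares SVM (LS-SVM) trained by solving a linear
# system, with an RBF kernel.  A DDAG multiclass scheme would combine many
# such pairwise classifiers; only the binary building block is shown, and
# the training data here are synthetic placeholders.
import numpy as np

def rbf(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_lssvm(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    omega = (y[:, None] * y[None, :]) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    # Decision function: sign( sum_i alpha_i y_i K(x, x_i) + b )
    return lambda Xq: np.sign(rbf(Xq, X, sigma) @ (alpha * y) + b)

# Two synthetic clusters as a stand-in for two DOA classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
predict = train_lssvm(X, y)
print(predict(np.array([[-1.0, -1.0], [1.0, 1.0]])))   # expect [-1.  1.]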
This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, ''Principles of Faithful Execution in the Implementation of Trusted Objects'' (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.
We begin with the following definitions: Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy. Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions) and sequence integrity (detection of changes in the locations of instructions within a program). Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.) FE is a necessary quality for computing. Without it we cannot trust computations. In the early days of computing, FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a ''closed shop'': all of the software that was used there was developed there. When an organization bought a large computer from a vendor, the organization would run its own operating system on that computer, use only its own editors, only its own compilers, only its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
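As a simple illustration of the instruction-integrity and sequence-integrity definitions above (and not a description of the JavaFE mechanism itself), the following sketch tags each instruction block with a keyed hash that covers both the block's bytes and its position, so that bit changes and reordering are both detectable once the program re-enters a trusted volume. The key and the placeholder instruction bytes are assumptions made for the sketch.

# Illustrative sketch of instruction and sequence integrity: each instruction
# block carries an HMAC over both its bytes and its position in the program,
# so bit changes and block reordering are both detectable.
import hmac, hashlib

KEY = b"shared-secret-known-inside-trusted-volume"   # assumption for the sketch

def protect(blocks):
    return [(blk, hmac.new(KEY, bytes([i]) + blk, hashlib.sha256).digest())
            for i, blk in enumerate(blocks)]

def verify(protected):
    for i, (blk, tag) in enumerate(protected):
        expect = hmac.new(KEY, bytes([i]) + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError(f"integrity failure at block {i}")

program = [b"\x10\x2a", b"\x60", b"\xac"]     # placeholder instruction blocks
p = protect(program)
verify(p)                                      # unmodified program passes
p[0], p[1] = p[1], p[0]                        # swap two blocks (sequence change)
try:
    verify(p)
except ValueError as e:
    print(e)                                   # the reordering is detected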
Sandia National Laboratories, New Mexico (SNL/NM) is a government-owned, contractor-operated facility overseen by the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) through the Sandia Site Office (SSO), Albuquerque, New Mexico. Sandia Corporation, a wholly-owned subsidiary of Lockheed Martin Corporation, operates SNL/NM. This annual report summarizes data and the compliance status of Sandia Corporation's environmental protection and monitoring programs through December 31, 2002. Major environmental programs include air quality, water quality, groundwater protection, terrestrial surveillance, waste management, pollution prevention (P2), environmental restoration (ER), oil and chemical spill prevention, and the National Environmental Policy Act (NEPA). Environmental monitoring and surveillance programs are required by DOE Order 5400.1, General Environmental Protection Program (DOE 1990) and DOE Order 231.1, Environment, Safety, and Health Reporting (DOE 1996).
Tonopah Test Range (TTR) in Nevada and Kauai Test Facility (KTF) in Hawaii are government-owned, contractor-operated facilities operated by Sandia Corporation, a subsidiary of Lockheed Martin Corporation. The U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA), through the Sandia Site Office (SSO) in Albuquerque, NM, oversees TTR and KTF operations. Sandia Corporation conducts operations at TTR in support of DOE/NNSA's Weapons Ordnance Program and has operated the site since 1957. Westinghouse Government Services subcontracts to Sandia Corporation in administering most of the environmental programs at TTR. Sandia Corporation operates KTF as a rocket preparation, launching, and tracking facility. This Annual Site Environmental Report (ASER) summarizes data and the compliance status of the environmental protection and monitoring programs at TTR and KTF through Calendar Year (CY) 2002. The applicable environmental regulations at these sites include state and federal regulations governing air emissions, wastewater effluent, waste management, terrestrial surveillance, and Environmental Restoration (ER) cleanup activities. Sandia Corporation is responsible only for those environmental program activities related to its operations. The DOE/NNSA Nevada Site Office (NSO) retains responsibility for the cleanup and management of ER sites at TTR. Currently, there are no ER sites at KTF. Environmental monitoring and surveillance programs are required by DOE Order 5400.1, General Environmental Protection Program (DOE 1990), and DOE Order 231.1, Environment, Safety, and Health Reporting (DOE 1996).
Evidence for the existence of discrete sub-movements underlying continuous human movement has motivated many attempts to "extract" them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.
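As an illustration of why extraction can be ambiguous, the following sketch sums two overlapping minimum-jerk submovements into a single smooth velocity profile using the standard minimum-jerk speed formula. The amplitudes, durations, and overlap are arbitrary assumed values, and the branch-and-bound extraction algorithm itself is not reproduced here.

# Sketch: two overlapping minimum-jerk submovements summed into one velocity
# profile.  Visually smooth profiles of this kind can admit more than one
# decomposition, which is why naive extraction can return spurious results.
import numpy as np

def min_jerk_speed(t, t0, duration, amplitude):
    # Speed profile of a minimum-jerk movement of given amplitude and duration,
    # zero outside [t0, t0 + duration].
    tau = np.clip((t - t0) / duration, 0.0, 1.0)
    return amplitude / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

t = np.linspace(0.0, 1.0, 501)
v = min_jerk_speed(t, 0.00, 0.55, 0.12) + min_jerk_speed(t, 0.25, 0.60, 0.10)
print(f"peak speed {v.max():.3f} at t = {t[v.argmax()]:.2f} s")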
Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.
The Z Refurbishment (ZR) Project is a program to upgrade the Z machine at SNL with modern durable pulsed power technology, providing additional shot capacity and improved reliability as well as advanced capabilities for both pulsed x-ray production and high pressure generation. The development of enhanced diagnostic capabilities is an essential requirement for ZR to meet critical mission needs. This report presents a comprehensive plan for diagnostic instrument and infrastructure development for the first few years of ZR operation. The focus of the plan is on: (1) developing diagnostic instruments with high spatial and temporal resolution, capable of low noise operation and survival in the severe EMP, bremsstrahlung, and blast environments of ZR; and (2) providing diagnostic infrastructure improvements, including reduced diagnostic trigger signal jitter, more and flexible diagnostic line-of-sight access, and the capability for efficient exchange of diagnostics with other laboratories. This diagnostic plan is the first step in an extended process to provide enhanced diagnostic capabilities for ZR to meet the diverse programmatic needs of a broad range of defense, energy, and general science programs of an international user community into the next decade.
If we are to build a supercomputer with a speed of 10{sup 15} floating-point operations per second (1 PetaFLOPS), interconnect technology will need to be improved considerably over what it is today. In this report, we explore one possible interconnect design for such a network. The guiding principle in this design is the optimization of all components for the finiteness of the speed of light. Achieving a linear speedup in time over well-tested supercomputers of today's designs will require scaling up processor power and bandwidth and scaling down latency. Latency scaling is the most challenging: it requires a 100 ns user-to-user latency for messages traveling the full diameter of the machine. Meeting this constraint requires simultaneously minimizing wire length through 3D packaging, new low-latency electrical signaling mechanisms, extremely fast routers, and new network interfaces. In this report, we outline approaches and implementations that will meet the requirements when implemented as a system. No technology breakthroughs are required.
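As a back-of-envelope illustration of why the speed of light dominates the 100 ns latency target, the following sketch computes the time-of-flight across assumed machine diameters at an assumed signal propagation speed of 0.7c and reports the budget remaining for routers and interfaces. The diameters and propagation fraction are illustrative assumptions, not design values from the report.

# Back-of-envelope latency budget: how much of a 100 ns user-to-user budget
# is consumed by wire propagation alone across a machine of a given physical
# diameter, assuming signals travel at ~0.7c?  Numbers are illustrative only.
C_M_PER_NS = 0.2998            # speed of light in vacuum, meters per nanosecond
SIGNAL_FRACTION = 0.7          # assumed propagation speed relative to c
BUDGET_NS = 100.0              # target user-to-user latency

for diameter_m in (5.0, 10.0, 20.0):
    wire_ns = diameter_m / (C_M_PER_NS * SIGNAL_FRACTION)
    print(f"{diameter_m:5.1f} m diameter: {wire_ns:5.1f} ns in flight, "
          f"{BUDGET_NS - wire_ns:5.1f} ns left for routers and interfaces")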
This reference document provides summary information on the animal, plant, zoonotic, and human pathogens and toxins regulated and categorized by 9 CFR 331 and 7 CFR 121, 'Agricultural Bioterrorism Protection Act of 2002; Possession, Use and Transfer of Biological Agents and Toxins,' and 42 CFR 73, 'Possession, Use, and Transfer of Select Agents and Toxins.' Summary information includes, at a minimum, a description of the agent and its associated symptoms; often additional information is provided on the diagnosis, treatment, geographic distribution, transmission, control and eradication, and impacts on public health.
The essential oils of Juniperus scopulorum, Artemisia tridentata, and Salvia apiana obtained by steam extraction were analyzed by GC-MS and GC-FID. For J. scopulorum, twenty-five compounds were identified, which account for 92.43% of the oil. The primary constituents were sabinene (49.91%), {alpha}-terpinene (9.95%), and 4-terpineol (6.79%). For A. tridentata, twenty compounds were identified, which account for 84.32% of the oil. The primary constituents were camphor (28.63%), camphene (16.88%), and 1,8-cineole (13.23%). For S. apiana, fourteen compounds were identified, which account for 96.76% of the oil. The primary component was 1,8-cineole (60.65%).
This paper analyzes the additional costs that would be incurred in supporting dual-mode, i.e., both classified and unclassified, use of the Institutional Computing (IC) hardware. Five options are considered: (1) periods processing, in which a fraction of the system alternates in time between classified and unclassified modes; (2) static split, in which the system is constructed as a set of smaller clusters that remain in one mode or the other; (3) re-configurable split, in which the system is constructed in a split fashion but a mechanism is provided to reconfigure it very infrequently; (4) red/black switching, in which a mechanism is provided to switch sections of the system between modes frequently; and (5) complementary operation, in which parts of the system are operated entirely in one mode at one geographical site and entirely in the other mode at the other site, with other systems repartitioned to balance workload. These options are evaluated against eleven criteria, such as disk storage costs, distance computing costs, and reductions in capability and capacity resulting from various factors. The evaluation is both qualitative and quantitative and is captured in various summary tables.