The blast parameters for the 6-foot-diameter by 200-foot-long, explosively driven shock tube are presented in this report. The purpose, main characteristics, and blast simulation capabilities of this PETN Primacord, explosively driven facility are included. Experimental data are presented for air and sulfur hexafluoride (SF6) test gases with initial pressures between 0.5 and 12.1 psia (ambient). The measured quantities include shock wave time of arrival at various test stations, flow duration, static or side-on overpressure, and stagnation or head-on overpressure. The blast parameters calculated from these measured quantities and presented in this report include shock wave velocity, shock strength, shock Mach number, flow Mach number, reflected pressure, dynamic pressure, particle velocity, density, and temperature. Graphical data for the above parameters are included, along with the algorithms and least-squares fit equations.
The assembly and packaging of MEMS (Microelectromechanical Systems) devices raise a number of issues over and above those normally associated with the assembly of standard microelectronic circuits. MEMS components include a variety of sensors, microengines, optical components, and other devices. They often have exposed mechanical structures which during assembly require particulate control, free space in the package, non-contact handling procedures, low-stress die attach, precision die placement, unique process schedules, hermetic sealing in controlled environments (including vacuum), and other special constraints. These constraints force changes in the techniques used to separate die on a wafer, in the types of packages which can be used, in the assembly processes and materials, and in the sealing environment and process. This paper discusses a number of these issues and provides information on approaches being taken or proposed to address them.
The Geothermal Research Dept. at Sandia Natl. Laboratories, in collaboration with Drill Cool Systems Inc., has worked to develop and test insulated drillpipe (IDP). IDP will allow much cooler drilling fluid to reach the bottom of the hole, making possible the use of downhole motors, electronics, and steering tools that are now useless in high-temperature formations. Other advantages of cooler fluid include reduced degradation of drilling fluid, longer bit life, and reduced corrosion rates. This article describes the theoretical background, laboratory testing, and field testing of IDP, including structural and thermal laboratory testing procedures and results. We also give results for a field test in a geothermal well in which circulating temperatures in IDP are compared with those in conventional drillpipe (CDP) at different flow rates. A brief description of the software used to model wellbore temperature and to calculate sensitivity in IDP design differences is included, along with a comparison of calculated and measured wellbore temperatures in the field test. There is also analysis of mixed (IDP and CDP) drillstrings and discussion of where IDP should be placed in a mixed string.
A study was performed on the Sandia Heat Flux Gauge (HFG), developed as a rugged, cost-effective technique for performing steady-state heat flux measurements in the pool fire environment. The technique involved reducing the time-temperature history of a thin metal plate to an incident heat flux via a dynamic thermal model, even though the gauge was intended for use at steady state. A validation experiment was presented in which the gauge was exposed to a step input of radiant heat flux.
We consider the steady-state transport of normally incident pencil beams of radiation in slabs of material. A method has been developed for determining the exact radial moments of three-dimensional (3-D) beams of radiation as a function of depth into the slab, by solving systems of one-dimensional (1-D) transport equations. We implement these radial-moment equations in the ONEBFP discrete ordinates code and simulate energy-dependent, coupled electron-photon beams using CEPXS-generated cross sections. Modified PN synthetic acceleration is employed to speed up the iterative convergence of the 1-D charged-particle calculations. For high-energy photon beams, a hybrid Monte Carlo/discrete ordinates method is examined. We demonstrate the efficiency of the calculations and make comparisons with 3-D Monte Carlo calculations. Thus, by solving 1-D transport equations, we obtain realistic multidimensional information concerning the broadening of electron-photon beams. This information is relevant to fields such as industrial radiography, medical imaging, radiation oncology, particle accelerators, and lasers.
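For orientation, the radial moments referred to here are of the standard form (a generic definition for a normally incident pencil beam, not a transcription of the report's equations):

\[
\langle r^{2n} \rangle(z) \;=\; \frac{\displaystyle\int_0^\infty r^{2n}\,\phi(r,z)\,2\pi r\,dr}{\displaystyle\int_0^\infty \phi(r,z)\,2\pi r\,dr},
\]

where \(\phi(r,z)\) is the azimuthally symmetric scalar flux at radius r and depth z; the lowest nontrivial moment, \(\langle r^{2}\rangle(z)\), characterizes the broadening of the beam with depth.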
A family of transients with the property that the initial and final acceleration, velocity, and displacement are all zero is derived. The transients are based on a relatively arbitrary function multiplied by a window of the form cos^m(x). Several special cases are discussed which result in odd acceleration and displacement functions. This is desirable for shaker reproduction because the required positive and negative peak accelerations and displacements will be balanced. Another special case is discussed which permits the development of transients with the first five (0-4) temporal moments specified. The transients are defined with three or four parameters that allow sums of components to be found which match a variety of shock response spectra.
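A minimal numerical sketch of the windowing idea is given below (illustrative only, with assumed values for the duration T, window exponent m, and base frequency f0; it is not the report's derivation). It builds an odd oscillatory acceleration multiplied by a cos^m window and integrates it to check the endpoint conditions on velocity and displacement:

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    T, m, f0 = 1.0, 4, 6.0                        # duration (s), window exponent, base frequency (Hz): assumed values
    t = np.linspace(-T / 2, T / 2, 4001)
    window = np.cos(np.pi * t / T) ** m           # vanishes (with several derivatives) at both ends
    accel = np.sin(2 * np.pi * f0 * t) * window   # odd base function -> balanced +/- acceleration peaks

    vel = cumulative_trapezoid(accel, t, initial=0.0)
    disp = cumulative_trapezoid(vel, t, initial=0.0)

    # The odd acceleration forces the final velocity to zero; the report's family
    # additionally constrains the parameters so that the final displacement is zero as well.
    print("final velocity:     %.3e" % vel[-1])
    print("final displacement: %.3e" % disp[-1])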
Two major issues associated with model validation are addressed here. First, we present a maximum likelihood approach to define and evaluate a model validation metric. The advantages of this approach are that it is more easily applied to nonlinear problems than the methods presented earlier by Hills and Trucano (1999, 2001), that it is based on optimization, for which software packages are readily available, and that it can more easily be extended to handle measurement uncertainty and prediction uncertainty with different probability structures. Several examples are presented utilizing this metric. We show conditions under which this approach reduces to the approach developed previously by Hills and Trucano (2001). Second, we expand our earlier discussions (Hills and Trucano, 1999, 2001) on the impact of multivariate correlation and its effect on model validation metrics. We show that ignoring correlation in multivariate data can lead to misleading results, such as rejecting a good model when sufficient evidence to do so is not available.
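A bare-bones sketch of a metric of this general flavor under Gaussian assumptions is shown below (illustrative only; it is not the published Hills-Trucano formulation). It also makes the role of multivariate correlation explicit through the covariance matrices:

    import numpy as np
    from scipy.stats import chi2

    def validation_metric(y_meas, y_pred, cov_meas, cov_pred, alpha=0.05):
        """Combine measurement and prediction covariances and compare the
        weighted residual against a chi-square threshold (Gaussian sketch)."""
        d = np.asarray(y_meas, float) - np.asarray(y_pred, float)
        cov = np.asarray(cov_meas, float) + np.asarray(cov_pred, float)  # correlations enter here
        r = float(d @ np.linalg.solve(cov, d))                           # weighted residual
        threshold = chi2.ppf(1.0 - alpha, df=d.size)
        return r, threshold, r <= threshold                              # True -> no evidence to reject

    # Illustrative numbers only: correlated measurement errors, uncorrelated prediction errors
    cov_m = np.array([[0.04, 0.03], [0.03, 0.04]])
    print(validation_metric([1.02, 0.97], [1.00, 1.00], cov_m, np.diag([0.01, 0.01])))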
Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects.
Hyperspectral Fourier transform infrared images have been obtained from a neoprene sample aged in air at elevated temperatures. The massive number of spectra available from this heterogeneous sample provides the opportunity to perform quantitative analysis of the spectral data without the need for calibration standards. Multivariate curve resolution (MCR) methods with non-negativity constraints applied to the iterative alternating least squares analysis of the spectral data have been shown to achieve the goal of quantitative image analysis without the use of standards. However, the pure-component spectra and the relative concentration maps were heavily contaminated by the presence of system artifacts in the spectral data. We have demonstrated that the detrimental effects of these artifacts can be minimized by adding an estimate of the error covariance structure of the spectral image data to the MCR algorithm. The estimate is added by augmenting the concentration and pure-component spectra matrices with scores and eigenvectors obtained from the mean-centered repeat image differences of the sample. The augmentation is implemented by employing efficient equality constraints on the MCR analysis. Augmentation with the scores from the repeat images is found to primarily improve the pure-component spectral estimates, while augmentation with the corresponding eigenvectors primarily improves the concentration maps. Augmentation with both scores and eigenvectors yielded the best result by generating less noisy pure-component spectral estimates and relative concentration maps that were largely free from a striping artifact that is present due to system errors in the FT-IR images. The MCR methods presented are general and can also be applied productively to non-image spectral data.
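For readers unfamiliar with the method, the skeleton of an MCR analysis by alternating least squares with non-negativity constraints is sketched below (generic form only; the augmented, error-covariance-weighted variant described above is not reproduced):

    import numpy as np
    from scipy.optimize import nnls

    def mcr_als(D, n_components, n_iter=50, seed=0):
        """Factor D (pixels x wavelengths) as C @ S.T with C >= 0 and S >= 0."""
        rng = np.random.default_rng(seed)
        n_pix, n_chan = D.shape
        S = rng.random((n_chan, n_components))       # initial pure-component spectra
        C = np.zeros((n_pix, n_components))
        for _ in range(n_iter):
            for i in range(n_pix):                   # concentration step, non-negative
                C[i], _ = nnls(S, D[i])
            for j in range(n_chan):                  # spectral step, non-negative
                S[j], _ = nnls(C, D[:, j])
        return C, S

    # Tiny synthetic check with two overlapping components (illustrative data)
    true_S = np.array([[1, 2, 3, 2, 1], [0, 1, 2, 3, 4]], float).T
    true_C = np.random.default_rng(1).random((30, 2))
    C_est, S_est = mcr_als(true_C @ true_S.T, 2)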
One of the major needs of the law enforcement field is a product that quickly, accurately, and inexpensively identifies whether a person has recently fired a gun--even if the suspect has attempted to wash the traces of gunpowder off. The Field Test Kit for Gunshot Residue Identification, based on Sandia National Laboratories technology, works with a wide variety of handguns and other weaponry using gunpowder. Several organic chemicals present in small arms propellants, such as nitrocellulose, nitroglycerine, dinitrotoluene, and nitrites, are left behind after the firing of a gun as a result of the incomplete combustion of the gunpowder. Sandia has developed a colorimetric shooter identification kit for in situ detection of gunshot residue (GSR) from a suspect. The test kit is the first of its kind; it is small, inexpensive, and easily transported by individual law enforcement personnel, and it requires minimal training for effective use. It will provide immediate information identifying gunshot residue.
The Nonactinide Isotopes and Sealed Sources (NISS) Web Application is a web-based database query and data management tool designed to facilitate the identification and reapplication of radioactive sources throughout the Department of Energy (DOE) complex. It provides search capability to the general Internet community and detailed data management functions to contributing site administrators.
The present document summarizes the experimental efforts of a three-year study funded under the Laboratory Directed Research and Development program of Sandia National Laboratories. The Innovative Diagnostics LDRD project was designed to develop new measurement capabilities to examine the interaction of a propulsive spin jet in a transonic freestream for a model in a wind tunnel. The project motivation was the type of jet/fin interactions commonly occurring during deployment of weapon systems. In particular, the two phenomena of interest were the interaction of the propulsive spin jet with the freestream in the vicinity of the nozzle and the impact of the spin rocket plume and its vortices on the downstream fins. The main thrust of the technical developments was to incorporate small-size, Lagrangian sensors for pressure and roll rate on a scale model and to include data acquisition, transmission, and power circuitry onboard. FY01 was the final year of the three-year LDRD project, and the team accomplished most of the project goals, including the use of micron-scale pressure sensors, an onboard telemetry system for data acquisition and transfer, onboard jet exhaust, and roll-rate measurements. A new wind tunnel model was designed, fabricated, and tested for the program which incorporated the ability to house multiple MEMS-based pressure sensors, interchangeable vehicle fins with pressure instrumentation, an onboard multiple-channel telemetry data package, and a high-pressure jet exhaust simulating a spin rocket motor plume. Experiments were conducted for a variety of MEMS-based pressure sensors to determine performance and sensitivity in order to select pressure transducers for use. The most successful data acquisition and analysis path used multiple 16-channel data processors with telemetry capability to a receiver outside the wind tunnel. The development of the various instrumentation paths led to the fabrication and installation of a new wind tunnel model for baseline non-rotating experiments to validate the durability of the technologies and techniques. The program successfully investigated a wide variety of instrumentation and experimental techniques and ended with basic jet-on experiments, with the onboard jets operating, under both rotating and non-rotating model conditions.
This report summarizes a multiyear effort to establish a new capability for determining dynamic material properties. By utilizing a significant reduction in experimental length and time scales, this new capability addresses both the high per-experiment costs of current methods and the inability of these methods to characterize materials having very small dimensions. Possible applications include bulk-processed materials with minimal dimensions, very scarce or hazardous materials, and materials that can only be made with microscale dimensions. Based on earlier work to develop laser-based techniques for detonating explosives, the current study examined the laser acceleration, or photonic driving, of small metal discs ("flyers") that can generate controlled, planar shockwaves in test materials upon impact. Sub-nanosecond interferometric diagnostics were developed previously to examine the motion and impact of laser-driven flyers. To address a broad range of materials and stress states, photonic driving levels must be scaled up considerably from the levels used in earlier studies. Higher driving levels, however, increase concerns over laser-induced damage in optics and excessive heating of laser-accelerated materials. Sufficiently high levels require custom beam-shaping optics to ensure planar acceleration of flyers. The present study involved the development and evaluation of photonic driving systems at two driving levels, numerical simulations of flyer acceleration and impact using the CTH hydrodynamics code, design and fabrication of launch assemblies, improvements in diagnostic instrumentation, and validation experiments on both bulk and thin-film materials having well-established shock properties. The primary conclusion is that photonic driving techniques are viable additions to the methods currently used to obtain dynamic material properties. Improvements in launch conditions and diagnostics can certainly be made, but the main challenge to future applications will be the successful design and fabrication of test assemblies for materials of interest.
SERAPHIM technology appears capable of efficiently driving a tip-driven fan. If the motor is powered using an inverter and resonant circuit, the size and weight could be considerably below those of a comparable rotary electric motor.
Recyclable transmission lines (RTL) are studied as a means of repetitively driving z pinches. The lowest reprocessing costs should be obtained by minimizing the mass of the RTL. Low mass transmission lines (LMTL) could also help reduce the cost of a single-shot facility such as the proposed X-1 accelerator and make z-pinch driven space propulsion feasible. We present calculations to determine the minimum LMTL electrode mass that provides sufficient inertia against the magnetic pressure produced by the large currents needed to drive the z pinches. The results indicate an electrode thickness that is much smaller than the resistive skin depth. We have performed experiments to determine whether such thin electrodes can efficiently carry the required current. The tests were performed with various thicknesses of materials. The results indicate that LMTLs should efficiently carry the large z-pinch currents needed for inertial fusion. We also use our results to estimate the performance of pulsed power driven pulsed nuclear rockets.
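The inertia argument can be summarized with a back-of-the-envelope estimate (orientation only; the report's calculations are more detailed). The magnetic pressure on a coaxial electrode carrying current I at radius r, and the minimum areal mass needed to keep electrode motion below an allowable value during a pulse of duration \(\tau\), are approximately

\[
P_B \;=\; \frac{B^2}{2\mu_0} \;=\; \frac{\mu_0 I^2}{8\pi^2 r^2},
\qquad
\delta \;\approx\; \frac{1}{2}\,\frac{P_B}{\sigma}\,\tau^2
\;\;\Longrightarrow\;\;
\sigma_{\min} \;\approx\; \frac{P_B\,\tau^2}{2\,\delta_{\max}},
\]

where \(\sigma\) is the electrode areal mass (density times thickness) and \(\delta_{\max}\) is the allowable electrode displacement; this simple scaling is consistent with the finding above that the required thickness can be much smaller than the resistive skin depth.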
High-brightness flash x-ray sources are needed for penetrating dynamic radiography in a variety of applications. Various bremsstrahlung source experiments have been conducted on the TriMeV accelerator (3 MV, 60 Ω, 20 ns) to determine the best diode and focusing configuration in the 2-3 MV range. Three classes of candidate diodes were examined: gas cell focusing, magnetically immersed, and rod pinch. The best result for the gas cell diode was 6 rad at 1 meter from the source with a 5 mm diameter x-ray spot. Using a 0.5 mm diameter cathode immersed in a 17 T solenoidal magnetic field, the best shot produced 4.1 rad with a 2.9 mm spot. The rod pinch diode demonstrated very reproducible radiographic spots between 0.75 and 0.8 mm in diameter, producing 1.2 rad. This represents a factor of eight improvement in the TriMeV flash radiographic capability over the original gas cell diode, to a figure of merit (dose/spot diameter) > 1.8 rad/mm. These results clearly show the rod pinch diode to be the x-ray source of choice for flash radiography at 2-3 MV endpoint.
This report describes the development of bulk hydrous titanium oxide (HTO)- and silica-doped hydrous titanium oxide (HTO:Si)-supported Pt catalysts for lean-burn NOx catalyst applications. The effects of various preparation methods, including both anion and cation exchange, and specifically the effect of Na content on the performance of Pt/HTO:Si catalysts, were evaluated. Pt/HTO:Si catalysts with low Na content (< 0.5 wt.%) were found to be very active for NOx reduction in simulated lean-burn exhaust environments utilizing propylene as the major reductant species. The activity and performance of these low-Na Pt/HTO:Si catalysts were comparable to supported Pt catalysts prepared using conventional oxide or zeolite supports. Under ramp-down temperature profile test conditions, Pt/HTO:Si catalysts with Na contents in the range of 3-5 wt.% showed a wider temperature window of appreciable NOx conversion than low-Na Pt/HTO:Si catalysts. Full reactant species analysis using both ramp-up and isothermal test conditions with the high-Na Pt/HTO:Si catalysts, as well as diffuse reflectance FTIR studies, showed that this phenomenon was related to transient NOx storage effects associated with NaNO2/NaNO3 formation. These nitrite/nitrate species were found to decompose and release NOx at temperatures above 300 °C in the reaction environment (ramp-up profile). A separate NOx uptake experiment at 275 °C in NO/N2/O2 showed that the Na phase was inefficiently utilized for NOx storage. Steady-state tests showed that the effect of increased Na content was to delay NOx light-off and to decrease the maximum NOx conversion. Similar results were observed for high-K Pt/HTO:Si catalysts, and the effects of high alkali content were found to be independent of the sample preparation technique. Catalyst characterization (BET surface area, H2 chemisorption, and transmission electron microscopy) was performed to elucidate differences between the HTO- and HTO:Si-supported Pt catalysts and conventional oxide- or zeolite-supported Pt catalysts.
Sandia National Laboratories has previously developed a unidirectional High Shear Stress Sediment Erosion flume for the US Army Corps of Engineers, Coastal Hydraulics Laboratory. The flow regime for this flume has limited applicability to wave-dominated environments. A significant design modification to the existing flume allows oscillatory flow to be superimposed upon a unidirectional current. The new flume simulates high shear stress erosion processes experienced in coastal waters where wave forcing dominates the system. Flow velocity measurements and erosion experiments with known sediment samples were performed with the new flume. In addition, preliminary computational flow models closely simulate the experimental results and allow for a detailed assessment of the induced shear stresses at the sediment surface.
The Department of Energy (DOE) is moving towards Long-Term Stewardship (LTS) of many environmental restoration sites that cannot be released for unrestricted use. One aspect of information management for LTS is geospatial data archiving. This report discusses the challenges facing the DOE LTS program concerning the data management and archiving of geospatial data. It discusses challenges in using electronic media for archiving, overcoming technological obsolescence, data refreshing, data migration, and emulation. It gives an overview of existing guidance and policy and discusses what the United States Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), and the Federal Emergency Management Agency (FEMA) are doing to archive the geospatial data for which their agencies are responsible. In the conclusion, this report raises issues for further discussion concerning long-term geospatial data archiving.
Solar Two was a collaborative, cost-shared project between 11 U.S. industry and utility partners and the U.S. Department of Energy to validate molten-salt power tower technology. The Solar Two plant, located east of Barstow, CA, comprised 1926 heliostats, a receiver, a thermal storage system, a steam generation system, and a steam-turbine power block. Molten nitrate salt was used as the heat transfer fluid and storage medium. The steam generator powered a 10-MWe (megawatt electric), conventional Rankine-cycle turbine. Solar Two operated from June 1996 to April 1999. The major objective of the test and evaluation phase of the project was to validate the technical characteristics of a molten-salt power tower. This report describes the significant results from the test and evaluation activities, the operating experience of each major system, and overall plant performance. Tests were conducted to measure the power output (MW) of each major system; the efficiencies of the heliostat, receiver, thermal storage, and electric power generation systems; and the daily energy collected, daily thermal-to-electric conversion, and daily parasitic energy consumption. Also included are detailed test and evaluation reports.
This document provides a guide to the deployment of the software verification activities, software engineering practices, and project management principles that guide the development of Accelerated Strategic Computing Initiative (ASCI) applications software at Sandia National Laboratories (Sandia). The goal of this document is to identify practices and activities that will foster the development of reliable and trusted products produced by the ASCI Applications program. Document contents include an explanation of the structure and purpose of the ASCI Quality Management Council, an overview of the software development lifecycle, an outline of the practices and activities that should be followed, and an assessment tool. These sections map practices and activities at Sandia to the ASCI Software Quality Engineering: Goals, Principles, and Guidelines, a Department of Energy document.
Water shortages affect 88 developing countries that are home to half of the world's population. In these places, 80-90% of all diseases and 30% of all deaths result from poor water quality. Furthermore, over the next 25 years, the number of people affected by severe water shortages is expected to increase fourfold. Low-cost methods of purifying freshwater and desalting seawater are required to contend with this destabilizing trend. Membrane distillation (MD) is an emerging technology for separations that are traditionally accomplished via conventional distillation or reverse osmosis. As applied to desalination, MD involves the transport of water vapor from a saline solution through the pores of a hydrophobic membrane. In sweeping gas MD, a flowing gas stream is used to flush the water vapor from the permeate side of the membrane, thereby maintaining the vapor pressure gradient necessary for mass transfer. Since liquid does not penetrate the hydrophobic membrane, dissolved ions are completely rejected by the membrane. MD has a number of potential advantages over conventional desalination, including low temperature and pressure operation, reduced membrane strength requirements, compact size, and 100% rejection of non-volatiles. The present work evaluated the suitability of commercially available technology for sweeping gas membrane desalination. Evaluations were conducted with Celgard Liqui-Cel® Extra-Flow 2.5X8 membrane contactors with X-30 and X-40 hydrophobic hollow fiber membranes. Our results show that sweeping gas membrane desalination systems are capable of producing low total dissolved solids (TDS) water, typically 10 ppm or less, from seawater, using low-grade heat. However, there are several barriers that currently prevent sweeping gas MD from being a viable desalination technology. The primary problem is that large air flows are required to achieve significant water yields, and the costs associated with transporting this air are prohibitive. To overcome this barrier, at least two improvements are required. First, new and different contactor geometries are necessary to achieve efficient contact with an extremely low pressure drop. Second, the temperature limits of the membranes must be increased. In the absence of these improvements, sweeping gas MD will not be economically competitive. However, the membranes may still find use in hybrid desalination systems.
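The air-flow barrier can be illustrated with a simple saturation-capacity estimate (a rough sketch under the assumption that the sweep gas leaves saturated at its exit temperature; not the report's analysis):

    def p_sat_water_mmHg(T_C):
        """Saturation vapor pressure of water (mmHg); Antoine constants valid ~1-100 C."""
        A, B, C = 8.07131, 1730.63, 233.426
        return 10 ** (A - B / (C + T_C))

    def air_per_water(T_exit_C, P_total_mmHg=760.0):
        """Rough lower bound on kg of sweep air required per kg of water removed,
        assuming the sweep gas exits saturated at T_exit_C."""
        y = p_sat_water_mmHg(T_exit_C) / P_total_mmHg        # water mole fraction at exit
        return (1.0 - y) / y * 28.97 / 18.02                 # mole ratio -> mass ratio

    for T in (25, 40, 60):
        print("%d C: %.1f kg air per kg water" % (T, air_per_water(T)))

At low sweep-gas temperatures this estimate is on the order of tens of kilograms of air per kilogram of water produced, consistent with the conclusion above that moving the sweep air dominates the cost.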
The use of oxidized metal powders in mechanical shock or crush safety enhancers in nuclear weapons has been investigated. The functioning of these devices is based on the remarkable electrical behavior of compacts of certain oxidized metal powders when subjected to compressive stress. For example, the low-voltage resistivity of a compact of oxidized tantalum powder was found to decrease by over six orders of magnitude as the compaction stress increased from 1 MPa, where the thin, insulating oxide coatings on the particles are intact, to 10 MPa, where the oxide coatings have broken down along a chain of particles spanning the electrodes. In this work, the behavior of tantalum and aluminum powders was investigated. The low-voltage resistivity during compaction of powders oxidized under various conditions was measured and compared. In addition, the resistivity at higher voltages and the dielectric breakdown strength during compaction were also measured. A key finding was that significant changes in the electrical properties persist after the removal of the stress, so that a mechanical shock enhancer is feasible. This was verified by preliminary shock experiments. Finally, conceptual designs for both types of enhancers are presented.
Preliminary thermal decomposition experiments with Ablefoam and EF-AR20 foam (Ablefoam replacement) were done to determine the important chemical and associated physical phenomena that should be investigated to develop the foam decomposition chemistry sub-models that are required in numerical simulations of the fire-induced response of foam-filled engineered systems for nuclear safety applications. Although the two epoxy foams are physically and chemically similar, the thermal decomposition of each foam involves different chemical mechanisms, and the associated physical behavior of the foams, particularly "foaming" and "liquefaction," has significant implications for modeling. A simplified decomposition chemistry sub-model is suggested that, subject to certain caveats, may be appropriate for "scoping-type" calculations.
This report presents modeling and simulation work for analyzing three designs of Micro Electro Mechanical (MEM) Compound Pivot Mirrors (CPM). These CPMs were made at Sandia National Laboratories using the SUMMiT™ process. At 75 volts and above, initial experimental analysis of fabricated mirrors showed tilt angles of up to 7.5 degrees for one design and 5 degrees for the other two. However, geometric design models predicted higher tilt angles. Therefore, a detailed study was conducted to explain why lower tilt angles occurred and to determine whether design modifications could produce higher tilt angles at lower voltages. This study showed that the spring stiffnesses of the CPMs were too great to allow the desired levels of rotation at lower voltages. To produce these lower stiffnesses, a redesign is needed.
The semiconductor bridge (SCB) is an electroexplosive device used to initiate detonators. A C cable is commonly used to connect the SCB to a firing set. A series of tests were performed to identify smaller, lighter cables for firing single and multiple SCBs. This report provides a description of these tests and their results. It was demonstrated that lower threshold voltages and faster firing times can be achieved by increasing the wire size, which reduces ohmic losses. The RF 100 appears to be a reasonable substitute for C cable when firing single SCBs. This would reduce the cable volume by 68% and the weight by 67% while increasing the threshold voltage by only 22%. In general, RG 58 outperforms twisted pair when firing multiple SCBs in parallel. The RG 58's superior performance is attributed to its larger conductor size.
Verification and validation (V&V) in computational fluid dynamics was presented, along with methods and procedures for assessing V&V. Issues such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction were discussed. Methods for determining the accuracy of numerical solutions were presented, and the importance of software testing during verification activities was emphasized.
The Computational Plant, or Cplant, is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs in Cplant and other supercomputers. Users of Cplant and other Sandia supercomputers submit parallel jobs to a job queue. When a job is scheduled to run, it is assigned to a set of processors. To obtain maximum throughput, jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This paper introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free list strategy previously used on Cplant. These new allocation strategies are implemented in the new release of the Cplant System Software, Version 2.0, phased into the Cplant systems at Sandia by May 2002.
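The flavor of these strategies can be conveyed with a small sketch (illustrative only: it uses a Z-order (Morton) curve and a first-fit pick from the curve-ordered free list, whereas the paper evaluates several curves and one-dimensional strategies):

    def morton_key(x, y, bits=16):
        """Interleave the bits of (x, y) to obtain a Z-order (Morton) index."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
        return key

    def allocate(free_nodes, n_requested):
        """Return n_requested free processors chosen contiguously in curve order,
        so that the job tends to occupy a localized cluster of the mesh."""
        ordered = sorted(free_nodes, key=lambda p: morton_key(*p))
        if len(ordered) < n_requested:
            return None                      # not enough free processors
        return ordered[:n_requested]

    # Example: 4x4 mesh with two busy nodes, job requesting 5 processors
    free = {(x, y) for x in range(4) for y in range(4)} - {(0, 0), (2, 2)}
    print(allocate(free, 5))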
We present low-temperature (T = 4 K) photoluminescence studies of the effect of adding nitrogen to 6-nm-wide single strained GaAsSb quantum wells on GaAs. The samples were grown by both MBE and MOCVD techniques. The nominal Sb concentration is about 30%. Adding about 1 to 2% N drastically reduced the band gap energies from 1 to 0.75 eV, or 1.20 to 1.64 μm. Upon performing ex situ rapid thermal anneals (825 °C for 10 s), the band gap energies as well as the photoluminescence intensities increased. The intensities increased by an order of magnitude for the annealed samples, and the band gap energies increased by about 50-100 meV, depending on growth temperatures. The photoluminescence linewidths tended to decrease upon annealing. Preliminary results of a first-principles band structure calculation for the GaAsSbN system are also presented.
The ABO3 perovskite oxides constitute a technologically important family of ferroelectrics whose relatively simple chemical and crystallographic structures have contributed significantly to our understanding of ferroelectricity. They readily undergo structural phase transitions involving both polar and non-polar distortions from the ideal cubic lattice. This paper focuses on the mixed perovskite system KTa1-xNbxO3, or KTN, which has turned out to be a model system. While the end members KTaO3 and KNbO3 might be expected to be similar, in reality they exhibit very different properties. Their mixed crystals, which can be grown over the whole composition range, exhibit a rich set of phenomena whose study has added greatly to our current understanding of the phase transitions and dielectric properties of these materials. Included among these phenomena are soft mode response, ferroelectric (FE)-to-relaxor (R) crossover, quantum mechanical suppression of the transition, the appearance of a quantum paraelectric state, and relaxational effects associated with dipolar impurities. Each of these phenomena is discussed briefly and illustrated. Some emphasis is placed on the unique role of pressure in elucidating the physics involved.
One of the concerns surrounding composite doubler technology pertains to long-term survivability, especially in the presence of non-optimum installations. This test program demonstrated the damage-tolerance capabilities of bonded composite doublers. The fatigue and strength tests quantified the structural response and crack-abatement capabilities of boron-epoxy doublers in the presence of worst-case flaw scenarios. The engineered flaws included cracks in the parent material, disbonds in the adhesive layer, and impact damage to the composite laminate. Environmental conditions representing temperature and humidity exposure were also included in the coupon tests. Large strains immediately adjacent to the doubler flaws emphasize the fact that relatively large disbond or delamination flaws (up to 1.00 in. in diameter) in the composite doubler have only localized effects on strain and minimal effect on the overall doubler performance (i.e., undesirable strain relief over the disbond but favorable load transfer immediately next to the disbond). This statement is made relative to the inspection requirements that result in the detection of disbonds/delaminations of 0.5 in. diameter or greater. The point at which disbonds become detrimental depends upon the size and location of the disbond and the strain field around the doubler. This study did not attempt to determine a "flaw size vs. effect" relation. Rather, it used flaws that were twice as large as the detectable limit to demonstrate the ability of composite doublers to tolerate potential damage.
Chemometric analysis of nuclear magnetic resonance (NMR) spectroscopic data has increased dramatically in recent years. Various chemometric techniques have been applied to a wide range of problems in food, agricultural, medical, process, and industrial systems. This article gives a brief review of chemometric analysis of NMR spectral data, including a summary of the types of mixtures and experiments analyzed with chemometric techniques. Common experimental problems encountered during the chemometric analysis of NMR data are also discussed.
Stiction and friction in micromachines are commonly inhibited through the use of silane coupling agents such as 1H,1H,2H,2H-perfluorodecyltrichlorosilane (FDTS). FDTS coatings have allowed micromachine parts processed in water to be released without debilitating capillary adhesion. These coatings are frequently regarded as densely packed monolayers that are well bonded to the substrate. In this paper, it is demonstrated that FDTS coatings can exhibit complex nanoscale structures, which control whether micromachine parts release or not. Surface images obtained via atomic force microscopy reveal that FDTS coating solutions can generate micellar aggregates that deposit on substrate surfaces. Interferometric imaging of model beam structures shows that stiction is high when the droplets are present and low when only monolayers are deposited. As the aggregate thickness (tens of nanometers) is insufficient to bridge the 2 μm gap under the beams, the aggregates appear to promote beam-substrate adhesion by changing the wetting characteristics of the coated surfaces. Contact angle measurements and condensation figure experiments have been performed on surfaces and under coated beams to quantify the changes in interfacial properties that accompany different coating structures. These results may explain the irreproducibility that is often observed with these films.
A DOE/Sandia project termed the Blade Manufacturing Program was established at Sandia to develop means of advancing manufacturing processes in ways that lower costs and improve the reliability of turbine blades. Through industry contracts, manufacturers are improving processes such as resin infusion, resin transfer molding, and thermoplastic casting. Testing and modeling research at universities and national labs is adding to the knowledge of how composite materials perform in substructures and sub-scale blades as a function of their fabrication process.
Optimal estimation theory has been applied to the problem of estimating process variables during vacuum arc remelting (VAR), a process widely used in the specialty metals industry to cast large ingots of segregation-sensitive and/or reactive metal alloys. Four state variables were used to develop a simple state-space model of the VAR process: electrode gap (G), electrode mass (M), electrode position (X), and electrode melting rate (R). The optimal estimator consists of a Kalman filter that incorporates the model and uses electrode feed rate and measurement-based estimates of G, M, and X to produce optimal estimates of all four state variables. Simulations show that the filter provides estimates with error variances between one and three orders of magnitude less than estimates based solely on measurements. Examples are presented that verify this for electrode gap, an extremely important control parameter for the process.
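A compact sketch of such a filter is given below. The discrete-time model here is a hypothetical linearization written for illustration (the feed advances the ram and closes the gap, melting consumes electrode mass and opens the gap through an assumed burn-off factor, and the melt rate is treated as a random walk); it is not the report's state-space model, and all noise levels are assumed.

    import numpy as np

    dt   = 1.0      # time step (s)
    rhoA = 350.0    # assumed melt density x electrode area (kg/m): converts kg/s melted to m/s of burn-off

    # State x = [G (gap, m), M (mass, kg), X (position, m), R (melt rate, kg/s)]; input u = feed velocity (m/s)
    F = np.array([[1, 0, 0,  dt / rhoA],    # gap opens as the electrode burns off
                  [0, 1, 0, -dt       ],    # mass decreases with the melt rate
                  [0, 0, 1,  0        ],    # position is driven by the feed alone
                  [0, 0, 0,  1        ]])   # melt rate modeled as a random walk
    B = np.array([-dt, 0.0, dt, 0.0])       # feed closes the gap and advances the ram
    H = np.array([[1, 0, 0, 0],             # G, M and X are measured; R is not
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]])
    Q  = np.diag([1e-8, 1e-4, 1e-8, 1e-6])  # process noise (assumed)
    Rm = np.diag([1e-6, 1e-2, 1e-8])        # measurement noise (assumed)

    def kf_step(x, P, u, z):
        """One predict/update cycle of the standard linear Kalman filter."""
        x = F @ x + B * u                   # predict with the feed-rate input
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + Rm                # update with measurements z = [G, M, X]
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.array([0.02, 1500.0, 0.0, 0.05]), np.eye(4) * 0.1   # initial guesses
    x, P = kf_step(x, P, u=2.0e-4, z=np.array([0.021, 1499.9, 0.0002]))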
Direct Simulation Monte Carlo (DSMC) and Navier-Stokes calculations are performed for a Mach 11, 25 deg.-55 deg. spherically blunted biconic. The conditions are such that the flow is laminar, with separation occurring at the cone-cone juncture. The simulations account for thermochemical nonequilibrium based on standard Arrhenius chemical rates for nitrogen dissociation and Millikan and White vibrational relaxation. The simulation error for the Navier-Stokes (NS) code is estimated to be 2% for the surface pressure and 10% for the surface heat flux. The grid spacing for the DSMC simulations was adjusted to be less than the local mean free path (mfp) and the time step less than the cell transit time of a computational particle. There was overall good agreement between the two simulations; however, the recirculation zone was computed to be larger for the NS simulation. A sensitivity study is performed to examine the effects of experimental uncertainty in the freestream properties on the surface pressure and heat flux distributions. The surface quantities are found to be extremely sensitive to the vibrational excitation state of the gas at the test section, with differences of 25% found in the surface pressure and 25%-35% in the surface heat flux. These calculations are part of a blind validation comparison, and thus the experimental data have not yet been released.
Simulations of a turbulent methanol pool fire are conducted using both Reynolds-Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) modeling methodologies. Two simple conserved scalar, flamelet-based combustion models with assumed PDFs are developed and implemented. The first model assumes statistical independence between mixture fraction and its variance and results in poor predictions of time-averaged temperature and velocity. The second combustion model makes use of the PDF transport equation for mixture fraction and does not employ the statistical independence assumption. Results using this model show good agreement with experimental data for both the 2D and 3D LES, indicating that the assumption of statistical independence between mixture fraction and its dissipation is not valid for pool fire simulations. Lastly, "finger-like" flow structures near the base of the plume, generated from stream-wise vorticity, are shown to be important mixing mechanisms for accurate prediction of time-averaged temperature and velocity.
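The assumed-PDF step that such models share can be sketched as follows (a generic presumed beta-PDF average of a flamelet quantity over mixture fraction; the flamelet profile and the numbers are illustrative, not those used in the simulations above):

    import numpy as np
    from scipy.stats import beta
    from scipy.integrate import trapezoid

    def presumed_beta_mean(phi_of_Z, Z_mean, Z_var, n=400):
        """Mean of a flamelet quantity phi(Z) under a beta PDF of mixture fraction
        with the given mean and variance."""
        a = Z_mean * (Z_mean * (1.0 - Z_mean) / Z_var - 1.0)
        b = a * (1.0 - Z_mean) / Z_mean
        Z = np.linspace(1e-6, 1.0 - 1e-6, n)
        pdf = beta.pdf(Z, a, b)
        return trapezoid(phi_of_Z(Z) * pdf, Z) / trapezoid(pdf, Z)

    # Illustrative flamelet temperature peaking near a stoichiometric mixture fraction of 0.13
    T_flamelet = lambda Z: 300.0 + 1700.0 * np.minimum(Z / 0.13, (1.0 - Z) / (1.0 - 0.13))
    print(presumed_beta_mean(T_flamelet, Z_mean=0.2, Z_var=0.01))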
The concept of genetic divisors can be given a quantitative measure with a non-Archimedean p-adic metric that is both computationally convenient and physically motivated. For two particles possessing distinct mass parameters x and y, the metric distance D(x, y) is expressed on the field of rational numbers Q as the inverse of the greatest common divisor [gcd(x, y)]. As a measure of genetic similarity, this metric can be applied to (1) the mass numbers of particle states and (2) the corresponding subgroup orders of these systems. The use of the Bezout identity in the form of a congruence for the expression of the gcd(x, y) corresponding to the ν_e and ν_μ neutrinos (a) connects the genetic divisor concept to the cosmic seesaw congruence, (b) provides support for the δ-conjecture concerning the subgroup structure of particle states, and (c) quantitatively strengthens the interlocking relationships joining the values of the prospectively derived (i) electron neutrino (ν_e) mass (0.808 meV), (ii) muon neutrino (ν_μ) mass (27.68 meV), and (iii) unified strong-electroweak coupling constant (α*⁻¹ = 34.26).
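As stated above, the distance is simply the reciprocal of the greatest common divisor, with the Bezout identity expressing that divisor as an integer combination of the two arguments; a purely numerical illustration (arbitrary integers, not the particle mass parameters of the paper) is

\[
D(x,y) \;=\; \frac{1}{\gcd(x,y)}, \qquad \gcd(x,y) \;=\; a\,x + b\,y \ \ \text{for some integers } a, b,
\]

so that, for example, \(D(12,18) = 1/6\) while \(D(12,35) = 1\): the pair sharing the larger common divisor is "genetically" closer under this measure.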
Alkylation reactions of benzene with propylene over zeolites were studied for cumene production. The current process for the production of cumene involves heating corrosive acid catalysts, cooling, transporting, and distillation. This study focused on the reaction in a static one-pot vessel using non-corrosive zeolite catalysts, working toward a more efficient one-step process with a potentially large energy savings. A series of experiments was conducted to find the reaction conditions yielding the highest production of cumene. The experiments examined cumene formation in two reaction vessels with different physical traits. Different zeolites, temperatures, mixing speeds, and amounts of reactants were also investigated to find their effects on the amount of cumene produced. Quantitative analysis of the product mixture was performed by gas chromatography. Mass spectrometry was also used to observe the gas-phase components during the alkylation process.
The ultimate goal of many environmental measurements is to determine the risk posed to humans or ecosystems by various contaminants. Conventional environmental monitoring typically requires extensive sampling grids covering several media, including air, water, soil, and vegetation. A far more efficient, innovative, and inexpensive tactic has been found using honeybees as sampling mechanisms. Members of a single bee colony forage over large areas (approximately 2 x 10^6 m^2), making tens of thousands of trips per day, and return to a fixed location where sampling can be conveniently conducted. The bees are in direct contact with the air, water, soil, and vegetation, where they encounter and collect any contaminants that are present in gaseous, liquid, and particulate form. The monitoring of honeybees when they return to the hive provides a rapid method to assess chemical distributions and impacts (1). The primary goal of this technology is to evaluate the efficiency of the transport mechanism (honeybees) to the hive using preconcentrators to collect samples. Once the extent and nature of the contaminant exposure has been characterized, resources can be distributed and environmental monitoring designs efficiently directed to the most appropriate locations. Methyl salicylate, a chemical agent surrogate, was used as the target compound in this study.
Using intense magnetic pressure, a method was developed to launch flyer plates to velocities in excess of 20 km/s. This technique was used to perform plate-impact, shock wave experiments on cryogenic liquid deuterium (LD2) to examine its high-pressure equation of state (EOS). Using an impedance matching method, Hugoniot measurements were obtained in the pressure range of 30-70 GPa. The results of these experiments disagree with previously reported Hugoniot measurements of LD2 in the pressure range above ~40 GPa, but are in good agreement with first-principles, ab initio models for hydrogen and its isotopes.
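For orientation, impedance matching rests on the standard Rankine-Hugoniot jump conditions for a steady shock driven into material initially at rest,

\[
\rho_0\,u_s \;=\; \rho\,(u_s - u_p), \qquad
P - P_0 \;=\; \rho_0\,u_s\,u_p, \qquad
E - E_0 \;=\; \tfrac{1}{2}\,(P + P_0)\!\left(\frac{1}{\rho_0} - \frac{1}{\rho}\right);
\]

with the flyer velocity and the shock velocity in the sample measured, the sample state \((P, u_p)\) is located at the intersection of the sample Rayleigh line \(P = \rho_0 u_s u_p\) with the known flyer Hugoniot reflected about the measured flyer velocity. (This is the generic construction; the report's specific analysis and corrections are not reproduced here.)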
Sandstones that overlie or that are interbedded with evaporitic or other ductile strata commonly contain numerous localized domains of fractures, each covering an area of a few square miles. Fractures within the Entrada Sandstone at the Salt Valley Anticline are associated with salt mobility within the underlying Paradox Formation. The fracture relationships observed at Salt Valley (along with examples from Paleozoic strata at the southern edge of the Holbrook basin in northeastern Arizona, and sandstones of the Frontier Formation along the western edge of the Green River basin in southwestern Wyoming), show that although each fracture domain may contain consistently oriented fractures, the orientations and patterns of the fractures vary considerably from domain to domain. Most of the fracture patterns in the brittle sandstones are related to local stresses created by subtle, irregular flexures resulting from mobility of the associated, interbedded ductile strata (halite or shale). Sequential episodes of evaporite dissolution and/or mobility in different directions can result in multiple, superimposed fracture sets in the associated sandstones. Multiple sets of superimposed fractures create reservoir-quality fracture interconnectivity within restricted localities of a formation. However, it is difficult to predict the orientations and characteristics of this type of fracturing in the subsurface. This is primarily because the orientations and characteristics of these fractures typically have little relationship to the regional tectonic stresses that might be used to predict fracture characteristics prior to drilling. Nevertheless, the high probability of numerous, intersecting fractures in such settings attests to the importance of determining fracture orientations in these types of fractured reservoirs.
Carbon is an important support for heterogeneous catalysts, such as platinum supported on activated carbon (AC). An important property of these catalysts is that they decompose upon heating in air. Consequently, Pt/AC catalysts can be used in applications requiring rapid decomposition of a material, leaving little residue. This report describes the catalytic effects of platinum on carbon decomposition in an attempt to maximize decomposition rates. Catalysts were prepared by impregnating the AC with two different Pt precursors, Pt(NH3)4(NO3)2 and H2PtCl6. Some catalysts were treated in flowing N2 or H2 at elevated temperatures to decompose the Pt precursor. The catalysts were analyzed for weight loss in air at temperatures ranging from 375 to 450 °C, using thermogravimetric analysis (TGA). The following results were obtained: (1) Pt/AC decomposes much faster than pure carbon; (2) treatment of the as-prepared 1% Pt/AC samples in N2 or H2 enhances decomposition; (3) autocatalytic behavior is observed for 1% Pt/AC samples at temperatures ≥ 425 °C; (4) oxygen is needed for decomposition to occur. Overall, the Pt/AC catalyst with the highest activity was impregnated with H2PtCl6 dissolved in acetone and then treated in H2. However, further research and development should produce a more active Pt/AC material.
The Microsystems Subgrid Physics project is intended to address gaps between developing high-performance modeling and simulation capabilities and the physics specific to the microdomain. The initial effort has focused on incorporating electrostatic excitations, adhesive surface interactions, and scale-dependent material and thermal properties into existing modeling capabilities. Developments related to each of these efforts are summarized, and sample applications are presented. While detailed models of the relevant physics are still being developed, a general modeling framework is emerging that can be extended to incorporate evolving material and surface interaction modules.
Recently an innovative technique known as the Isentropic Compression Experiment (ICE) was developed that allows the dynamic compressibility curve of a material to be measured in a single experiment. Hence, ICE significantly reduces the cost and time required for generating and validating theoretical models of dynamic material response. ICE has been successfully demonstrated on several materials using the 20 MA Z accelerator, resulting in a large demand for its use. The present project has demonstrated its use on another accelerator, Saturn. In the course of this study, Saturn was tailored to produce a satisfactory drive time structure and instrumented to produce velocity data. Pressure limits are observed to be approximately 10-15 GPa ("LP" configuration) or 40-50 GPa ("HP" configuration), depending on sample material. Drive reproducibility (panel to panel within a shot and between shots) is adequate for useful experimentation, but alignment fixturing problems make it difficult to achieve the same precision as is possible at Z. Other highlights included the useful comparison of slightly different PZT and ALOX compositions (neutron generator materials), temperature measurement using optical pyrometry, and the development of a new technique for preheating samples. To date, 28 ICE tests have been conducted at Saturn, including the experiments described herein.
Sandia is investigating the shock response of single-crystal diamond up to several Mbar pressure in a collaborative effort with the Institute for Shock Physics (ISP) at Washington State University (WSU). This project is intended to determine (i) the usefulness of diamond as a window material for high-pressure velocity interferometry measurements, (ii) the maximum stress level at which diamond remains transparent in the visible region, (iii) whether a two-wave structure can be detected and analyzed, and if so, (iv) the Hugoniot elastic limit (HEL) for the [110] orientation of diamond. To this end, experiments have been designed and performed scoping the shock response of diamond in the 2-3 Mbar pressure range using conventional velocity interferometry techniques (conventional VISAR diagnostic). In order to perform more detailed and highly resolved measurements, an improved line-imaging VISAR has been developed, and experiments using this technique have been designed. Prior to performing these more detailed experiments, additional scoping experiments are being performed using conventional techniques at WSU to refine the experimental design.
Explosive charges placed on the fuze end of a drained chemical munition are expected to be used as a means to destroy the fuze and burster charges of the munition. Analyses are presented to evaluate the effect of these additional initiation charges on the fragmentation characteristics for the M121A1 155mm chemical munition, modeled with a T244 fuze attached, and to assess the consequences of these fragment impacts on the walls of a containment chamber--the Burster Detonation Vessel. A numerical shock physics code (CTH) is used to characterize the mass and velocity of munition fragments. Both two- and three-dimensional simulations of the munition have been completed in this study. Based on threshold fragment velocity/mass results drawn from both previous and current analyses, it is determined that under all fragment impact conditions from the munition configurations considered in this study, no perforation of the inner chamber wall will occur, and the integrity of the Burster Detonation Vessel is retained. However, the munition case fragments have sufficient mass and velocity to locally damage the surface of the inner wall of the containment vessel.
This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.
The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.
This report provides a summary of the work completed in the Source Code Assurance Tool project. This work was done as part of the Laboratory Directed Research and Development program.
This report provides a preliminary functional description of a novel software application, the Source Code Assurance Tool, which would assist a system analyst in the software assessment process. An overview is given of the tool's functionality and design and of how the analyst would use it to assess a body of source code. This work was done as part of a Laboratory Directed Research and Development project.
In this paper we describe a new language, Visual Structure Language (VSL), designed to describe the structure of a program and explain its pieces. This new language is built on top of a general-purpose language, such as C. The language consists of three extensions: explanations, nesting, and arcs. Explanations are comments explicitly associated with code segments. These explanations can be nested. And arcs can be inserted between explanations to show data- or control-flow. The value of VSL is that it enables a developer to better control a code. The developer can represent the structure via nested explanations, using arcs to indicate the flow of data and control. The explanations provide a "second opinion" about the code so that at any level, the developer can confirm that the code operates as it is intended to do. We believe that VSL enables a programmer to use in a computer language the same model--a hierarchy of components--that they use in their heads when they conceptualize systems.
We present the tool we built as part of a Laboratory Directed Research and Development (LDRD) project. This tool consists of a commercially available graphical editor front end combined with a back-end "slicer." The significance of the tool is that it shows how to slice across system components. This is an advance over slicing across program components.
This report details experimental data useful in validating radiative transfer codes involving participating media, particularly for cases involving combustion. Special emphasis is on data for pool fires. Features sought in the references are: Flame geometry and fuel that approximate conditions for a pool fire or a well-defined flame geometry and characteristics that can be completely modeled; detailed information that could be used as code input data, including species concentration and temperature profiles and associated absorption coefficients, soot morphology and concentration profiles, associated scattering coefficients and phase functions, specification of system geometry, and system boundary conditions; detailed information that could be compared against code output predictions, including measured boundary radiative energy flux distributions (preferably spectral) and/or boundary temperature distributions; and a careful experimental error analysis so that code predictions could be rationally compared with experimental measurements. Reference data were gathered from more than 35 persons known to be active in the field of radiative transfer and combustion, particularly in experimental work. A literature search was carried out using key words. Additionally, the reference lists in papers/reports were pursued for additional leads. The report presents extended abstracts of the cited references, with comments on available and missing data for code validation, and comments on reported error. A graphic for quick reference is added to each abstract that indicates the completeness of data and how well the data mimics a large-scale pool fire. The references are organized into Lab-Scale Pool Fires, Large-Scale Pool Fires, Momentum-Driven Diffusion Flames, and Enclosure Fires. As an additional aid to report users, the Tables in Appendix A show the types of data included in each reference. The organization of the tables follows that used for the abstracts.
Sandia National Laboratories is developing innovative alternative technology to replace open burn/open detonation (OB/OD) operations for the destruction and disposal of obsolete, excess, and off-spec energetic materials. Alternatives to OB/OD are necessary to comply with increasingly stringent regulations. This program is developing an alternative technology that destroys energetic materials using organic amines with minimal discharge of toxic chemicals to the environment, and it is defining how the by-products can be applied in the manufacture of structural materials.
Wire explosion experiments have been carried out at the University of Nevada, Reno. These experiments investigated the explosion phase of wires with properties and current-driving conditions comparable to those used in the initial stage of wire array z-pinch implosions on the Z machine at Sandia National Laboratories. Specifically, current pulses similar to and faster than the pre-pulse current on Z (the current prior to the fast rise in the current pulse) were applied to single wire loads to study wire heating and the early development of plasmas in the wire initiation process. Understanding such issues is important to larger pulsed power machines that implode cylindrical wire array loads comprised of many wires. It is thought that the topology of an array prior to its acceleration influences the implosion and final stagnation properties, and therefore may depend on the initiation phase of the wires. Single wires ranging from 4 to 40 {micro}m in diameter and comprised of materials ranging from Al to W were investigated. Several diagnostics were employed to determine wire current, voltage, and total emitted-light energy and power, along with the wire expansion velocity throughout the explosion. In a number of cases, the explosion process was also observed with x-ray backlighting using x-pinches. The experimental data indicate that the characteristics of a wire explosion depend dramatically on the rate of rise of the current, on the diameter of the wire, and on the heat of vaporization of the wire material. In this report, these characteristics are described in detail. Of particular interest is the result that a faster current rise produces a higher energy deposition into the wire prior to explosion. This result introduces a different means of increasing the efficiency of wire heating. In this case, the energy deposition along the wire and the subsequent expansion are more uniform than for a ''slow'' current rise (170 A/ns compared with 22 A/ns rise into a short circuit), and the expansion velocity is larger. The energy deposition and wire expansion are further modified by the wire diameter and material. Investigations of wire diameter indicate that the diameter primarily affects the expansion velocity and energy deposition; thicker wires explode with greater velocities but absorb less energy per atom. The heat of vaporization also characterizes the wire explosion; wires with a low heat of vaporization expand faster and emit less radiation than their high heat of vaporization counterparts.
An important capability in conducting underground nuclear tests is the ability to determine the nuclear test yield accurately within hours after a test. Because of the nuclear test moratorium, the seismic method that has been used in the past has not been exercised since a non-proliferation high explosive test in 1993. Since that time, the seismic recording system and the computing environment have been replaced with modern equipment. This report describes the actions that have been taken to preserve the capability for determining seismic yield, in the event that nuclear testing should resume. Specifically, this report describes actions taken to preserve seismic data, actions taken to modernize software, and actions taken to document procedures. It concludes with a summary of the current state of the data system and makes recommendations for maintaining this system in the future.
This report describes testing of prototype InfiniBand{trademark} host channel adapters from Intel Corporation, using the Linux(reg sign) operating system. Three generations of prototype hardware were obtained, and Linux device drivers were written that exercised the data movement capabilities of the cards. Latency and throughput results were similar to those of other SAN technologies, but not significantly better.
This project set out to scientifically tailor ''smart'' interfacial films and 3-D composite nanostructures to exhibit photochromic responses to specific, highly localized chemical and/or mechanical stimuli, and to integrate them into optical microsystems. The project involved the design of functionalized chromophoric self-assembled materials that possessed intense and environmentally sensitive optical properties (absorbance, fluorescence), enabling their use as detectors of specific stimuli and transducers when interfaced with optical probes. The conjugated polymer polydiacetylene (PDA) proved to be the most promising material in many respects, although it had some drawbacks concerning reversibility. Throughout this work we used multi-task scanning probes (AFM, NSOM), offering simultaneous optical and interfacial force capabilities, to actuate and characterize the PDA with localized and specific interactions for detailed characterization of physical mechanisms and parameters. In addition to forming high quality mono-, bi-, and tri-layers of PDA via Langmuir-Blodgett deposition, we were successful in using the diacetylene monomer precursor as a surfactant that directed the self-assembly of an ordered, mesostructured inorganic host matrix. Remarkably, the diacetylene was polymerized in the matrix, thus providing a PDA-silica composite. The inorganic matrix serves as a perm-selective barrier to chemical and biological agents and provides structural support for improved material durability in microsystems. Our original goal was to use the composite films as a direct interface with microscale devices as optical elements (e.g., intracavity mirrors, diffraction gratings), taking advantage of the very high sensitivity of device performance to real-time dielectric changes in the films. However, our optical physics colleagues (M. Crawford and S. Kemme) were unsuccessful in these efforts, mainly due to the poor optical quality of the composite films.
The intention of this project was to collaborate with Harvard University in the general area of nanoscale structures, biomolecular materials and their application in support of Sandia's MEMS technology. The expertise at Harvard was crucial in fostering these fundamentally interdisciplinary developments. Areas that were of interest included: (1) nanofabrication that exploits traditional methods (from Si technology) and developing new methods; (2) self-assembly of organic and inorganic systems; (3) assembly and dynamics of membranes and microfluidics; (4) study of the hierarchy of scales in assembly; (5) innovative imaging methods; and (6) hard (engineering)/soft (biological) interfaces. Specifically, we decided to work with Harvard to design and construct an experimental test station to measure molecular transport through single nanopores. The pore may be of natural origin, such as a self-assembled bacterial protein in a lipid bilayer, or an artificial structure in silicon or silicon nitride.
This report documents work supporting the Sandia National Laboratories initiative in Distributed Energy Resources (DERs) and Supervisory Control and Data Acquisition (SCADA) systems. One approach to real-time feedback control of power generation assets, quantitative feedback theory (QFT), has recently been applied at Sandia to voltage, frequency, and phase control of power systems. QFT provided a simple yet powerful philosophy for designing the control systems--allowing the designer to optimize the system by making design tradeoffs without getting lost in complex mathematics. The feedback systems were effective in reducing sensitivity to large and sudden changes in the power grid system. Voltage, frequency, and phase were accurately controlled, even with large disturbances to the power grid system.
This report is divided into two parts: a study of the glass transition in confined geometries, and a study of the formation mechanisms of block copolymer mesophases produced by solvent evaporation-induced self-assembly. The effect of geometrical confinement on the glass transition of polymers is a very important consideration for applications of polymers in nanotechnology. We hypothesize that the shift of the glass transition temperature of polymers in confined geometries can be attributed to the inhomogeneous density profile of the liquid. Accordingly, we assume that the glass temperature in the inhomogeneous state can be approximated by the Tg of a corresponding homogeneous, bulk polymer, but at a density equal to the average density of the inhomogeneous system. Simple models based on this hypothesis give results that are in remarkable agreement with experimental measurements of the glass transition of confined liquids. Evaporation-induced self-assembly (EISA) of block copolymers is a versatile process for producing novel, nanostructured materials and is the focus of much of the experimental work at Sandia in the Brinker group. In the EISA process, as the solvent preferentially evaporates from a cast film, two possible scenarios can occur: microphase separation or micellization of the block copolymers in solution. In the present investigation, we established the conditions that dictate which scenario takes place. Our approach uses scaling arguments to determine whether the overlap concentration c* is reached before or after the critical micelle concentration (CMC). These theoretical arguments are used to interpret recent experimental results of Yu and collaborators on EISA experiments on silica/PS-PEO systems.
In exploring the question of how humans reason in ambiguous situations or in the absence of complete information, we stumbled onto a body of knowledge that addresses issues beyond the original scope of our effort. We have begun to understand the importance that philosophy, in particular the work of C. S. Peirce, plays in developing models of human cognition and of information theory in general. We now have a foundation that can serve as a basis for further studies in cognition and decision making. Peircean philosophy provides a foundation for understanding human reasoning and for capturing behavioral characteristics of decision makers due to cultural, physiological, and psychological effects. The present paper describes this philosophical approach to understanding the underpinnings of human reasoning. We present the work of C. S. Peirce and define sets of fundamental reasoning behaviors that could be captured in the mathematical constructs of these newer technologies and could interact in an agent-type framework. Further, we propose the adoption of a hybrid reasoning model based on Peirce's work for future computational representations or emulations of human cognition.
This article summarizes information related to the automated course of action (COA) development effort. The information contained in this document puts the COA effort into an operational perspective that addresses command and control theory, as well as touching on the military planning concept known as effects-based operations. The sections relating to the COA effort detail the rationale behind the functional models developed and identify technologies that could support the process functions. The functional models include a section related to adversarial modeling, which adds a dynamic to the COA process that is missing in current combat simulations. The information contained in this article lays the foundation for building a unique analytic capability.
This report is an update to previous ''smart gun'' work and the corresponding report that were completed in 1996. It incorporates some new terminology and expanded definitions. This effort is the product of an open source look at what has happened to the ''smart gun'' technology landscape since the 1996 report was published.
The Comprehensive Test Ban Treaty of 1996 banned any future nuclear explosions or testing of nuclear weapons and created the CTBTO in Vienna to implement the treaty. The U.S. response to this was the cessation of all above- and below-ground nuclear testing. As such, all stockpile reliability assessments are now based on periodic testing of subsystems being stored in a wide variety of environments. These data provide a wealth of information and feed a growing web of deterministic, physics-based computer models for assessment of stockpile reliability. Unfortunately, until 1996 it was difficult to relate the deterministic materials aging test data to component reliability. Since that time we have made great strides in mathematical techniques and computer tools that permit explicit relationships between materials degradation (e.g., corrosion, thermo-mechanical fatigue) and reliability. The resulting suite of tools is known as CRAX, and the mathematical library supporting these tools is Cassandra. However, these techniques ignore the historical data that are also available on similar systems in the nuclear stockpile, the DoD weapons complex, and even in commercial applications. Traditional statistical techniques commonly used in classical reliability assessment do not permit data from these sources to be easily included in the overall assessment of system reliability. An older, alternative approach based on Bayesian probability theory permits the inclusion of data from all applicable sources. Data from a variety of sources are brought together in a logical fashion through the repeated application of inductive mathematics. This research brings together existing mathematical methods, and modifies and expands those techniques as required, permitting data from a wide variety of sources to be combined in a logical fashion to increase the confidence in the reliability assessment of the nuclear weapons stockpile. The application of this research is limited to those systems composed of discrete components, e.g., those that can be characterized as operating or not operating. However, there is nothing unique about the underlying principles, and the extension to continuous subsystems/systems is straightforward. The framework is also laid for the consideration of systems with multiple correlated failure modes. While these are an important consideration, time and resources limited the specific demonstration of these methods.
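As a minimal illustration of the Bayesian idea described above (not the CRAX/Cassandra implementation), the sketch below combines pass/fail data from several hypothetical sources through conjugate updating of a Beta prior on component reliability; the prior parameters and all counts are invented for illustration.

    # Illustrative sketch only (not the CRAX/Cassandra implementation):
    # pass/fail data from several sources update a Beta(a, b) prior on
    # component reliability; each source simply adds its successes and
    # failures to the posterior parameters.
    def update_reliability(prior_a, prior_b, data_sources):
        """data_sources: list of (successes, failures) pairs, e.g. from
        stockpile surveillance, DoD records, and commercial experience."""
        a, b = prior_a, prior_b
        for successes, failures in data_sources:
            a += successes
            b += failures
        return a, b

    # Weakly informative prior, then three hypothetical data sources.
    a, b = update_reliability(1.0, 1.0, [(48, 2), (195, 5), (29, 1)])
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))
    print("posterior mean reliability: %.4f" % mean)
    print("posterior standard deviation: %.4f" % (var ** 0.5))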
This report summarizes the development of sensor particles for remote detection of trace chemical analytes over broad areas, e.g., residual trinitrotoluene from buried landmines or other unexploded ordnance (UXO). We also describe the potential of the sensor particle approach for the detection of chemical warfare (CW) agents. The primary goal of this work has been the development of sensor particles that incorporate sample preconcentration, analyte molecular recognition, chemical signal amplification, and fluorescence signal transduction within a ''grain of sand''. Two approaches for particle-based chemical-to-fluorescence signal transduction are described: (1) enzyme-amplified immunoassays using biocompatible inorganic encapsulants, and (2) oxidative quenching of a unique fluorescent polymer by TNT.
This demonstration project is a collaboration among DOE, Sandia National Laboratories, the University of Texas at El Paso (UTEP), the International Boundary and Water Commission (IBWC), and the US Army Corps of Engineers (USACE). Sandia deployed and demonstrated a field measurement technology that enables the determination of erosion and transport potential of sediments in the Rio Grande. The technology deployed was the Mobile High Shear Stress Flume. This unique device was developed by Sandia's Carlsbad Programs for the USACE and has been used extensively in collaborative efforts on near-shore and river systems throughout the United States. Because surface water quantity and quality, along with human health, are an important part of the National Border Technology Program, technologies that aid in characterizing, managing, and protecting this valuable resource from possible contamination sources are imperative.
Photonic crystals are periodically engineered ''materials'' which are the photonic analogues of electronic crystals. Much like electronic crystals, photonic crystal materials can have a variety of crystal symmetries, such as simple-cubic, close-packed, wurtzite, and diamond-like structures. These structures were first proposed in the late 1980s. However, due mainly to fabrication difficulties, working photonic crystals at near-infrared and visible wavelengths are only just emerging. In this article, we review the construction of two- and three-dimensional photonic crystals of different symmetries at infrared and optical wavelengths using advanced semiconductor processing. We further demonstrate that this process lends itself to the creation of line defects (linear waveguides) and point defects (micro-cavities), which are the most basic building blocks for optical signal processing, filtering, and routing.
Monitoring the condition of critical structures is vital not only for assuring occupant safety and security during naturally occurring and malevolent events, but also for determining the fatigue rate under normal aging conditions and allowing for efficient upgrades. This project evaluated the feasibility of applying integrated, remotely monitored micro-sensor systems to assess the structural performance of critical infrastructure. These measurement systems will provide forensic data on structural integrity, health, response, and overall structural performance in load environments such as aging, earthquake, severe wind, and blast attacks. We have investigated the development of ''self-powered'' sensor tags that can be used to monitor the state-of-health of a structure and can be embedded in that structure without compromising its integrity. A sensor system powered by converting structural stresses into electrical power via piezoelectric transducers has been demonstrated, including work toward integrating that sensor with a novel radio frequency (RF) tagging technology as a means of remotely reading its data.
Perhaps the most basic barrier to the widespread deployment of remote manipulators is that they are very difficult to use. Remote manual operations are fatiguing and tedious, while fully autonomous systems are seldom able to function in changing and unstructured environments. An alternative approach to these extremes is to exploit computer control while leaving the operator in the loop to take advantage of the operator's perceptual and decision-making capabilities. This report describes research that is enabling gradual introduction of computer control and decision making into operator-supervised robotic manipulation systems, and its integration on a commercially available, manually controlled mobile manipulator.
This document highlights the activities of the DISCOM2 Distance Computing and Communication team at the SC 2000 supercomputing conference in Dallas, Texas. This conference is sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade; for the last five years Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory have come together at the conference under the DOE's Accelerated Strategic Computing Initiative (ASCI) to demonstrate ASCI's emerging capabilities in computational science and the program's combined expertise in high performance computer science and communication networking. DISCOM2 uses this forum to demonstrate and focus communication and networking developments within the program. At SC 2000, DISCOM demonstrated an infrastructure that included a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second IP network data transfer application, and VPN technology that enabled a remote demonstration of Distributed Resource Management tools. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speeds and distances similar to those of the then-proposed DISCOM WAN. SCinet at SC 2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology while on the booth. We also supported the production networking needs of the convention exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high performance computing and networking.
This report utilizes the results of the Solar Two project, as well as continuing technology development, to update the technical and economic status of molten-salt power towers. The report starts with an overview of power tower technology, including the progression from Solar One to the Solar Two project. This discussion is followed by a review of the Solar Two project--what was planned, what actually occurred, what was learned, and what was accomplished. The third section presents preliminary information regarding the likely configuration of the next molten-salt power tower plant. This section draws on Solar Two experience as well as results of continuing power tower development efforts conducted jointly by industry and Sandia National Laboratories. The fourth section details the expected performance and cost goals for the first commercial molten-salt power tower plant and includes a comparison of the commercial performance goals to the actual performance at Solar One and Solar Two. The final section summarizes the successes of Solar Two and the current technology development activities. The data collected from the Solar Two project suggest that the electricity cost goals established for power towers are reasonable and can be achieved with some simple design improvements.
This report describes the current state-of-the-art in Autonomous Land Vehicle (ALV) systems and technology. Five functional technology areas are identified and addressed. For each, a brief, subjective preface is first provided that envisions the technology necessary for the deployment of an operational ALV system. Subsequently, a detailed literature review is provided to support and elaborate these views. It is further established how these five technology areas fit together as a functioning whole. The essential conclusion of this report is that the necessary sensors, algorithms, and methods to develop and demonstrate an operationally viable all-terrain ALV already exist and could be readily deployed. A second conclusion is that the successful development of an operational ALV system will rely on an effective approach to systems engineering. In particular, a precise description of mission requirements and a clear definition of component functionality are essential.
The {mu}ChemLab{trademark} Laboratory Directed Research and Development (LDRD) Grand Challenge project began in October 1996 and ended in September 2000. The technical managers of the {mu}ChemLab{trademark} project and the LDRD office, with the support of a consultant, conducted a competitive technical intelligence and market demand intelligence (CTI/MDI) analysis of the {mu}ChemLab{trademark}. The managers used this knowledge to make project decisions and course adjustments. CTI/MDI positively impacted the project's technology development, uncovered potential technology partnerships, and supported eventual industry partner contacts. CTI/MDI analysis is now seen as due diligence, and the {mu}ChemLab{trademark} project is now the model for other Sandia LDRD Grand Challenge undertakings. This document describes the CTI/MDI analysis and captures the more important ''lessons learned'' of this Grand Challenge project, as reported by the project's management team.
We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
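To give a flavor of the kind of transport/reaction system involved, the sketch below integrates a one-dimensional reaction-diffusion model with threshold-triggered calcium release sites; it is an illustrative toy only, with invented parameter values, and is not the three-dimensional model or the Sandia codes referred to above.

    # Toy 1-D reaction-diffusion sketch of a calcium wave with threshold-
    # triggered release sites (all parameters invented; the actual study used
    # a full 3-D model on massively parallel Sandia codes).
    import numpy as np

    nx, L = 400, 400.0            # grid points, domain length (micrometers)
    dx = L / nx
    D = 300.0                     # hypothetical Ca diffusion coefficient (um^2/s)
    dt = 0.4 * dx**2 / (2.0 * D)  # stable explicit time step
    k_rel, c_thresh = 50.0, 0.3   # hypothetical release rate and firing threshold
    c = np.zeros(nx)
    c[:5] = 1.0                   # initiate a wave at the left boundary

    for step in range(20000):
        lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
        lap[0] = lap[-1] = 0.0                    # crude no-flux ends
        release = k_rel * (c > c_thresh)          # threshold-triggered release sites
        c += dt * (D * lap + release - 5.0 * c)   # diffusion + release - resequestration

    print("fraction of domain activated: %.2f" % float((c > c_thresh).mean()))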
Novel technologies often are born prior to identifying application arenas that can provide the financial support for their development and maturation. After creating new technologies, innovators rush to identify some previously difficult-to-meet product or process challenge. In this regard, microsystems technology is following a path that many other electronic technologies have previously faced. From this perspective, the development of a robust technology follows a three-stage approach. First there is the ''That idea will never work'' stage, which is hurdled only by proving the concept. Next is the ''Why use such a novel (unproven) technology instead of a conventional one?'' stage. This stage is overcome when a particular important device cannot be made economically--or at all--through the existing technological base. This initial incorporation forces at least limited use of the new technology, which in turn provides the revenues and the user base to mature and sustain the technology. Finally there is the ''Sure that technology (e.g., microsystems) is good for that product (e.g., accelerometers and pressure sensors), but the problems are too severe for any other application'' stage, which is overcome only with the across-the-board application of the new technology. With an established user base, champions for the technology become willing to apply the new technology as a potential solution to other problems. This results in the widespread diffusion of the previously shunned technology, making the formerly disruptive technology the new standard. Like many technologies in the microelectronics industry, the microsystems community is now traversing this well-worn path. This paper examines the evolution of microsystems technology from the perspective of Sandia National Laboratories' development of a sacrificial surface micromachining technology and the associated infrastructure.
Electromagnetic induction (EMI) by a magnetic dipole located above a dipping interface is of relevance to the petroleum well-logging industry. The problem is fully three-dimensional (3-D) when formulated as above, but it reduces to an analytically tractable one-dimensional (1-D) problem when cast as a small tilted coil above a horizontal interface. The two problems are related by a simple coordinate rotation. An examination of the induced eddy currents and the electric charge accumulation at the interface helps to explain the inductive and polarization effects commonly observed in induction logs from dipping geological formations. The equivalence between the 1-D and 3-D formulations of the problem enables the validation of a previously published finite element solver for 3-D controlled-source EMI.
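The coordinate rotation mentioned above can be sketched as follows; the dip angle, axis convention, and dipole orientation used here are illustrative assumptions and are not taken from the paper.

    # Sketch of the rotation relating the two formulations: a borehole-axis
    # magnetic dipole above an interface dipping at angle alpha is equivalent,
    # after rotating the frame by alpha about the strike (x) axis, to a dipole
    # tilted by alpha above a horizontal interface. Illustrative only.
    import numpy as np

    def rotate_about_x(v, alpha):
        """Rotate vector v by angle alpha (radians) about the x (strike) axis."""
        c, s = np.cos(alpha), np.sin(alpha)
        R = np.array([[1.0, 0.0, 0.0],
                      [0.0,   c,  -s],
                      [0.0,   s,   c]])
        return R @ v

    alpha = np.radians(30.0)                  # assumed dip angle
    m_borehole = np.array([0.0, 0.0, 1.0])    # dipole moment along the borehole axis
    print("moment in interface-aligned frame:", rotate_about_x(m_borehole, alpha))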
This report presents a discussion of directory-enabled, policy-based networking with an emphasis on its role as the foundation for securely scalable enterprise networks. A directory service provides the object-oriented logical environment for interactive cyber-policy implementation. Cyber-policy implementation includes security, network management, operational process, and quality of service policies. The leading network-technology vendors have invested in these technologies for secure universal connectivity that traverses Internet, extranet, and intranet boundaries. Industry standards are established that provide the fundamental guidelines for directory deployment scalable to global networks. The integration of policy-based networking with directory-service technologies provides for intelligent management of the enterprise network environment as an end-to-end system of related clients, services, and resources. This architecture allows logical policies to protect data, manage security, and provision critical network services, permitting a proactive defense-in-depth cyber-security posture. Enterprise networking imposes the consideration of supporting multiple computing platforms, sites, and business-operation models. An industry-standards-based approach, combined with principled systems engineering in the deployment of these technologies, allows these issues to be successfully addressed. This discussion is focused on a directory-based policy architecture for the heterogeneous enterprise network-computing environment and does not propose specific vendor solutions. This document is written to present practical design methodology and provide an understanding of the risks, complexities, and, most importantly, the benefits of directory-enabled policy-based networking.
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flows from free-molecular to continuum in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled by either a local charge neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
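For orientation, the collision-pair selection step of Bird's standard no-time-counter (NTC) scheme, on which DSMC codes of this kind are commonly built, can be written as below; this is the textbook form and is not asserted to be the exact Icarus implementation.

    N_{\mathrm{pairs}} \;=\; \tfrac{1}{2}\, N\, \bar{N}\, F_N\, (\sigma_T c_r)_{\max}\, \Delta t \,/\, V_{\mathrm{cell}}

Each candidate pair is then accepted for collision with probability (\sigma_T c_r)/(\sigma_T c_r)_{\max}, where N and \bar{N} are the instantaneous and time-averaged particle counts in the cell, F_N is the number of real molecules represented per computational particle, \sigma_T is the total collision cross-section, c_r the relative speed, \Delta t the time step, and V_{\mathrm{cell}} the cell volume.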
Encapsulation is a common process used in manufacturing most non-nuclear components, including firing sets, neutron generators, trajectory sensing signal generators (TSSGs), arming, fuzing and firing devices (AF&Fs), radars, programmers, connectors, and batteries. Encapsulation is used to contain high voltage, to mitigate stress and vibration, and to protect against moisture. The purpose of the ASCI Encapsulation project is to develop a simulation capability that will aid in the encapsulation design process, especially for neutron generators. The introduction of an encapsulant poses many problems because of the need to balance ease of processing against the properties necessary to achieve the design benefits, such as tailored encapsulant properties, an optimized cure schedule, and reduced failure rates. Encapsulants can fail through fracture or delamination as a result of cure shrinkage, thermally induced residual stresses, voids or incomplete component embedding, and particle gradients. Manufacturing design requirements include (1) maintaining uniform composition of particles in order to maintain the desired coefficient of thermal expansion (CTE) and density, (2) mitigating void formation during mold fill, (3) mitigating cure and thermally induced stresses during cure and cool down, and (4) eliminating delamination and fracture due to cure shrinkage/thermal strains. The first two require modeling of the fluid phase, and it is proposed to use the finite element code GOMA to accomplish this. The latter two require modeling of the solid state; however, ideally the effects of particle distribution would be included in the calculations, and thus initial conditions would be set from GOMA predictions. These models, once they are verified and validated, will be transitioned into the SIERRA framework and the ARIA code. This will facilitate exchange of data with the solid mechanics calculations in SIERRA/ADAGIO.
The Segmented Rail Phased Induction Motor (SERAPHIM) has been proposed as a propulsion method for urban maglev transit, advanced monorail, and other forms of high speed ground transportation. In this report we describe the technology, consider different designs, and examine its strengths and weaknesses.
A probabilistic, risk-based performance-assessment methodology is being developed to assist designers, regulators, and involved stakeholders in the selection, design, and monitoring of long-term covers for contaminated subsurface sites. This report presents an example of the risk-based performance-assessment method using a repository site in Monticello, Utah. At the Monticello site, a long-term cover system is being used to isolate long-lived uranium mill tailings from the biosphere. Computer models were developed to simulate relevant features, events, and processes that include water flux through the cover, source-term release, vadose-zone transport, saturated-zone transport, gas transport, and exposure pathways. The component models were then integrated into a total-system performance-assessment model, and uncertainty distributions of important input parameters were constructed and sampled in a stochastic Monte Carlo analysis. Multiple realizations were simulated using the integrated model to produce cumulative distribution functions of the performance metrics, which were used to assess cover performance for both present- and long-term future conditions. Performance metrics for this study included the water percolation reaching the uranium mill tailings, radon flux at the surface, groundwater concentrations, and dose. Results of this study can be used to identify engineering and environmental parameters (e.g., liner properties, long-term precipitation, distribution coefficients) that require additional data to reduce uncertainty in the calculations and improve confidence in the model predictions. These results can also be used to evaluate alternative engineering designs and to identify parameters most important to long-term performance.
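The stochastic step of such an analysis can be sketched as follows; the sampled parameters, their distributions, and the deliberately trivial surrogate model below are invented for illustration and are not the Monticello total-system model.

    # Illustrative Monte Carlo sketch: sample uncertain inputs, evaluate a toy
    # surrogate performance model, and assemble a cumulative distribution
    # function (CDF) of a performance metric such as percolation flux.
    # Distributions, parameters, and the surrogate are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    precip = rng.lognormal(mean=np.log(300.0), sigma=0.3, size=n)     # mm/yr
    cover_ksat = rng.lognormal(mean=np.log(1e-7), sigma=1.0, size=n)  # m/s

    # Toy surrogate: percolation grows with precipitation and cover conductivity.
    percolation = 0.02 * precip * (cover_ksat / 1e-7) ** 0.3          # mm/yr

    metric = np.sort(percolation)
    cdf = np.arange(1, n + 1) / n          # empirical CDF paired with `metric`
    print("median percolation: %.2f mm/yr" % np.median(metric))
    print("95th percentile:    %.2f mm/yr" % np.percentile(metric, 95))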
In this paper, we discuss several specific threats directed at the routing data of an ad hoc network. We address security issues that arise from wrapping authentication mechanisms around ad hoc routing data. We show that this bolt-on approach to security may make certain attacks more difficult, but still leaves the network routing data vulnerable. We also show that under a certain adversarial model, most existing routing protocols cannot be secured with the aid of digital signatures.
The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will result in degradation of the overall network. These access and distribution networks currently lack formal mechanisms to select Quality of Service (QoS) attributes for data transport. Network services with a requirement for expediency or consistent amounts of bandwidth cannot function properly in a communication environment without the implementation of a QoS structure. This report describes and implements such a structure that results in the ability to identify, prioritize, and police critical application flows.
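One common mechanism behind the ''police'' step is a token-bucket rate limiter, sketched below; the rates, burst size, and packet trace are invented, and this is not taken from the report's implementation.

    # Illustrative token-bucket policer for holding a critical flow to a
    # committed rate; non-conforming packets would be dropped or remarked to
    # a lower priority. Parameters and the packet trace are invented.
    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # refill rate in bytes per second
            self.burst = burst_bytes        # bucket depth in bytes
            self.tokens = burst_bytes
            self.last = 0.0

        def conforms(self, now, packet_bytes):
            """True if the packet conforms to the profile at time `now` (s)."""
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False

    policer = TokenBucket(rate_bps=1000000, burst_bytes=15000)
    for t, size in [(0.000, 1500), (0.001, 1500), (0.002, 9000), (0.003, 9000)]:
        print("%.3f s  %5d bytes  %s" %
              (t, size, "forward" if policer.conforms(t, size) else "police"))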
We investigate the rate of convergence of stochastic basis elements to the solution of a stochastic operator equation. As in deterministic finite elements, the solution may be approximately represented as a linear combination of basis elements. In the stochastic case, however, the solution belongs to a Hilbert space of functions defined on a cross-product domain endowed with the product of a deterministic and a probabilistic measure. We show that if the dimension of the stochastic space is n and the desired accuracy is of order {var_epsilon}, the number of stochastic elements required in the Galerkin method to achieve this level of precision is on the order of |ln {var_epsilon}|{sup n}.
The GEO-SEQ Project is investigating methods for geological sequestration of CO{sub 2}. This project, which is directed by LBNL and includes a number of other industrial, university, and national laboratory partners, is evaluating computer simulation methods, including TOUGH2, for this problem. The TOUGH2 code, which is widely used for flow and transport in porous and fractured media, includes simplified methods for gas diffusion based on a direct application of Fick's law. As shown by Webb (1998) and others, the Dusty Gas Model (DGM) is better than Fick's law for modeling gas-phase diffusion in porous media. In order to improve gas-phase diffusion modeling for the GEO-SEQ Project, the EOS7R module in the TOUGH2 code has been modified to include the Dusty Gas Model, as documented in this report. In addition, the liquid diffusion model has been changed from a mass-based formulation to a mole-based formulation. Modifications for separate and coupled diffusion in the gas and liquid phases have also been completed. The results from the DGM are compared to the Fick's law behavior for TCE and PCE diffusion across a capillary fringe. The differences are small due to the relatively high permeability (k = 10{sup -11} m{sup 2}) of the problem and the small mole fractions of the gases. Additional comparisons for lower permeabilities and higher mole fractions may be useful.
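For orientation, a Fickian gas-phase diffusive flux of the kind referred to above is commonly written as in the first line below, while the Dusty Gas Model additionally couples ordinary (molecular) and Knudsen diffusion together with pressure-driven flow; in the dilute trace-gas limit the DGM reduces approximately to the Bosanquet interpolation in the second line. The notation follows common usage and is not quoted from the report or the TOUGH2 manual.

    F_\kappa \;=\; -\,\phi\, S_g\, \tau\, \rho_g\, D_\kappa\, \nabla X_\kappa
    \frac{1}{D_{\mathrm{eff}}} \;\approx\; \frac{1}{D_{\mathrm{mol}}} \;+\; \frac{1}{D_K}

Here \phi is porosity, S_g gas saturation, \tau tortuosity, \rho_g gas density, X_\kappa the mass fraction of component \kappa, D_{\mathrm{mol}} the ordinary binary diffusivity, and D_K the Knudsen diffusivity, which decreases with pore size; at the relatively high permeability quoted above, Knudsen effects are weak, which is consistent with the small differences reported.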
Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior which mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions which would be unstable under conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems, including geometric change detection, pattern recognition, and correlated feature extraction, can be addressed in an inherently parallel fashion without information bottlenecking or external supervision. Using this approach, we believe that autonomous control systems embodied with basic adaptive decision-theoretic capabilities can be developed for imaging and surveillance sensors to improve discrimination in stressing operational environments.
Conventional systems surety analysis is basically restricted to measurable or physical-model-derived data. However, most analyses, including high-consequence system surety analysis, must also utilize subjective information. In order to address this need, there has been considerable effort on analytically incorporating engineering judgment. For example, Dempster-Shafer theory establishes a framework in which frequentist probability and Bayesian incorporation of new data are subsets. Although Bayesian and Dempster-Shafer methodology both allow judgment, neither derives results that can indicate the relative amounts of subjective judgment and measurable data in the results. The methodology described in this report addresses these problems through a hybrid-mathematics-based process that allows tracking of the degree of subjective information in the output, thereby providing more informative (as well as more appropriate) results. In addition, most high-consequence systems offer difficult-to-analyze situations. For example, in the Sandia National Laboratories nuclear weapons program, the probability that a weapon responds safely when exposed to an abnormal environment (e.g., lightning, crush, metal-melting temperatures) must be assured to meet a specific requirement. There are also non-probabilistic DOE and DoD requirements (e.g., for determining the adequacy of positive measures). The type of processing required for these and similar situations transcends conventional probabilistic and human factors methodology. The results described herein address these situations by efficiently utilizing subjective and objective information in a hybrid mathematical structure that applies directly to the surety assessment of high-consequence systems. The results can also improve the quality of the information currently provided to decision-makers. To this end, objective inputs are processed in a conventional manner, while subjective inputs are derived from the combined engineering judgment of experts in the appropriate disciplines. In addition to providing output constituents (including portrayal of uncertainty) corresponding to the combination of these input types, their individual contributions to the resultant uncertainty are determined and provided as part of the output information. Finally, the safety assessment is complemented by a latent effects analysis, facilitated by soft-aggregation accumulation of observed operational constituents.
An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.
We study the ballistic and diffusive magnetoquantum transport using a typical quantum point contact geometry for single and tunnel-coupled double wires that are wide (less than or similar to 1 {micro}m) in one perpendicular direction, with densely populated sublevels, and extremely confined in the other perpendicular (i.e., growth) direction. A general analytic solution to the Boltzmann equation is presented for multisublevel elastic scattering at low temperatures. The solution is employed to study interesting magnetic-field dependent behavior of the conductance, such as a large enhancement and quantum oscillations of the conductance for various structures and field orientations. These phenomena originate from the following field-induced properties: magnetic confinement, displacement of the initial- and final-state wave functions for scattering, variation of the Fermi velocities, mass enhancement, depopulation of the sublevels, and anticrossing (in double quantum wires). The magnetoconductance is strikingly different in long diffusive (or rough, dirty) wires from the quantized conductance in short ballistic (or clean) wires. Numerical results obtained for the rectangular confinement potentials in the growth direction are satisfactorily interpreted in terms of the analytic solutions based on harmonic confinement potentials. Some of the predicted features of the field-dependent diffusive and quantized conductances are consistent with recent data from GaAs/Al{sub x}Ga{sub 1-x}As double quantum wires.
Reduced prestressing and degradation of prestressing tendons in concrete containment vessels were investigated using finite element analysis of a typical prestressed containment vessel. The containment was analyzed during a loss of coolant accident (LOCA) with varying levels of prestress loss and with reduced tendon area. It was found that when selected hoop prestressing tendons were completely removed (as if broken) or when the area of selected hoop tendons was reduced, there was a significant impact on the ultimate capacity of the containment vessel. However, when selected hoop prestressing tendons remained, but with complete loss of prestressing, the predicted ultimate capacity was not significantly affected for this specific loss of coolant accident. Concrete cracking occurred at much lower levels for all cases. For cases where selected vertical tendons were analyzed with reduced prestressing or degradation of the tendons, there also was not a significant impact on the ultimate load carrying capacity for the specific accident analyzed. For other loading scenarios (such as seismic loading) the loss of hoop prestressing with the tendons remaining could be more significant on the ultimate capacity of the containment vessel than found for the accident analyzed. A combination of loss of prestressing and degradation of the vertical tendons could also be more critical during other loading scenarios.
The Sandia coilgun [1,2,3,4,5] is an inductive electromagnetic launcher. It consists of a sequence of powered, multi-turn coils surrounding a flyway of circular cross-section through which a conducting armature passes. When the armature is properly positioned with respect to a coil, a charged capacitor is switched into the coil circuit. The rising coil currents induce a current in the armature, producing a repulsive accelerating force. The basic numerical tool for modeling the coilgun is the SLINGSHOT code, an expanded, user-friendly successor to WARP-10 [6]. SLINGSHOT computes the currents in the coils and armature, finds the forces produced by those currents, and moves the armature through the array of coils. In this approach, the cylindrically symmetric coils and armature are subdivided into concentric hoops with rectangular cross-section, in each of which the current is assumed to be uniform. The ensemble of hoops is treated as a set of coupled circuits. The specific heats and resistivities of the hoops are found as functions of temperature and used to determine the resistive heating. The code calculates the resistances and inductances for all hoops, and the mutual inductances for all hoop pairs. Using these, it computes the hoop currents from their circuit equations, finds the forces from the products of these currents and the mutual inductance gradient, and moves the armature. Treating the problem as a set of coupled circuits is a fast and accurate approach compared to solving the field equations. Its use, however, is restricted to problems in which the symmetry dictates the current paths. This paper is divided into three parts. The first presents a demonstration of the code. The second describes the input and output. The third describes the physical models and numerical methods used in the code. It is assumed that the reader is familiar with coilguns.
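The structure of the coupled-circuit calculation can be illustrated with a deliberately minimal two-hoop version (one capacitor-driven coil hoop and one armature hoop); the circuit parameters and the Gaussian M(z) coupling model below are invented, and this sketch is not the SLINGSHOT code.

    # Minimal two-hoop coupled-circuit sketch: a capacitor-driven coil hoop and
    # a single armature hoop with position-dependent mutual inductance M(z).
    # The axial force is I1*I2*dM/dz. All parameters and the M(z) model are
    # hypothetical; SLINGSHOT subdivides the coils and armature into many hoops.
    import numpy as np

    L1, R1 = 40e-6, 10e-3          # coil inductance (H) and resistance (ohm)
    L2, R2 = 0.2e-6, 0.2e-3        # armature hoop inductance and resistance
    C, V0 = 2e-3, 5e3              # capacitor bank (F) and charge voltage (V)
    mass = 0.5                     # armature mass (kg)
    M0, a = 2e-6, 0.03             # peak mutual inductance (H), coupling length (m)

    def mutual(z):                 # smooth M(z) model and its gradient dM/dz
        M = M0 * np.exp(-(z / a) ** 2)
        return M, -2.0 * z / a ** 2 * M

    I1 = I2 = 0.0
    Vc, z, v = V0, 0.01, 0.0       # armature starts 1 cm ahead of the coil plane
    dt = 1e-7
    for step in range(int(2e-3 / dt)):             # integrate for 2 ms
        M, dMdz = mutual(z)
        A = np.array([[L1, M], [M, L2]])           # inductance matrix
        b = np.array([Vc - R1 * I1 - I2 * dMdz * v,
                      -R2 * I2 - I1 * dMdz * v])
        dI1, dI2 = np.linalg.solve(A, b)           # circuit equations
        force = I1 * I2 * dMdz                     # axial force on the armature
        I1 += dt * dI1
        I2 += dt * dI2
        Vc += dt * (-I1 / C)
        v += dt * (force / mass)
        z += dt * v

    print("armature velocity after one stage: %.1f m/s" % v)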
The methodology in this report addresses the safety effects of organizational and operational factors that can be measured through ''inspection.'' The investigation grew out of a preponderance of evidence that the safety ''culture'' (the attitude of employees and management toward safety) was frequently one of the major root causes behind accidents or safety-relevant failures. The approach is called ''Markov latent effects'' analysis. Since safety also depends on a multitude of factors that are best measured through well-known risk analysis methods (e.g., fault trees, event trees, FMECA, physical response modeling, etc.), the Markov latent effects approach supplements conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems, for determining the most appropriate items to be measured, and for expressing the measurements as imprecise subjective metrics through possibilistic or fuzzy numbers. A mathematical model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, while also portraying the inherent uncertainty. A major goal of the modeling is to help convey the top-down system perspective. Metrics are weighted according to the significance of the attribute with respect to subsystems and are aggregated nonlinearly. Since the accumulating effect responds less and less to additional contributions, it is termed ''soft'' mathematical aggregation, which is analogous to how humans frequently make decisions. Dependence among the contributing factors is accounted for by incorporating subjective metrics on commonality and by reducing the overall contribution of these combinations to the overall aggregation. Decisions derived from the results are facilitated in several ways. First, information is provided on input ''Importance'' and ''Sensitivity'' (both Primary and Secondary) in order to know where to place emphasis in investigating root causes and in considering new controls that may be necessary. Second, trends in inputs and outputs are tracked in order to obtain significant information, including cyclic information, for the decision process. Third, Early Alerts are provided in order to facilitate pre-emptive action. Fourth, the outputs are compared to soft thresholds provided by sigmoid functions. The methodology has been implemented in a software tool.
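The idea of ''soft'' (saturating) aggregation of weighted metrics can be sketched as below; the specific functional form, weights, and scores are invented for illustration and differ from the report's actual Markov latent effects model.

    # Illustrative "soft" aggregation of weighted metrics on [0, 1]: each
    # additional weighted contribution moves the aggregate less and less, so
    # the result saturates rather than growing linearly. The functional form,
    # weights, and scores are invented; this is not the report's model.
    def soft_aggregate(scores, weights):
        """scores, weights: sequences on [0, 1]; returns an aggregate on [0, 1]."""
        shortfall = 1.0
        for s, w in zip(scores, weights):
            shortfall *= (1.0 - w * s)    # diminishing effect of each contribution
        return 1.0 - shortfall

    # Hypothetical inspection scores for safety-culture attributes.
    scores = [0.8, 0.6, 0.9, 0.4]         # e.g., training, procedures, reporting, staffing
    weights = [0.9, 0.7, 0.8, 0.5]        # relative significance of each attribute
    print("aggregated metric: %.3f" % soft_aggregate(scores, weights))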
The unique properties of carbon have made it both a fascinating and an important subject of experimental and theoretical studies for many years [1]-[4]. The contrast between its best-known elemental forms, graphite and diamond, is particularly striking. Graphite is black, has a rather low density and high compressibility (close to that of magnesium), and is greasy enough to be useful as a lubricant and in pencil leads. Diamond is brilliantly translucent, 60% more dense than graphite, less compressible than either tungsten or corundum, and its hardness makes it useful for polishing and cutting. This variability in properties, as well as that observed among the many classes of carbon compounds, arises because of profound differences in electronic structure of the carbon bonds [5]. A number of other solid forms of carbon are known. Pyrolytic graphite [6] is a polycrystalline material in which the individual crystallites have a structure quite similar to that of natural graphite. Fullerite (solid C{sub 60}), discovered only ten years ago [7], consists of giant molecules in which the atoms are arranged into pentagons and hexagons on the surface of a spherical cage. Amorphous carbon [8][9], including carbon black and ordinary soot, is a disordered form of graphite in which the hexagonally bonded layers are randomly oriented. Glassy carbons [9][10], on the other hand, have more random structures. Many other structures have been discussed [1][9].
This report describes least squares solution methods and linearized estimates of solution errors caused by data errors. These methods are applied to event locating systems which use time-of-arrival (TOA) data. Analyses are presented for algorithms that use the TOA data in a ''direct'' manner and for algorithms utilizing time-of-arrival squared (TSQ) methods. Location and error estimation results were applied to a ''typical'' satellite TOA detection system. Using Monte Carlo methods, it was found that the linearized location error estimates were valid for random data errors with relatively large variances and relatively poor event/sensor geometries. In addition to least squares methods, which use an L{sub 2} norm, methods are described for L{sub 1} and L{sub {infinity}} norms. In general, these latter norms offered little improvement over least squares methods. Reduction of the location error variances can be effected by using information in addition to the TOA data themselves, i.e., by adding judiciously chosen ''conditioning'' equation(s) to the least squares system. However, the added information can adversely affect the mean errors. Also, conditioned systems may offer location solutions where nonconditioned scenarios may not be solvable. Solution methods and linearized error estimates are given for ''conditioned'' systems. It was found that for significant data errors, the linearized estimates were also close to the Monte Carlo results.
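A minimal sketch of the ''direct'' TOA least-squares step is given below: arrival-time residuals are linearized about a trial location and iterated (Gauss-Newton), and the same linearization yields an error covariance; the 2-D geometry, noise level, and signal speed are invented, and a Monte Carlo loop over perturbed arrival times could be wrapped around it to check the linearized estimates as in the report.

    # Sketch of "direct" TOA least-squares event location by Gauss-Newton
    # iteration, with a linearized error estimate. Geometry, noise, and signal
    # speed are hypothetical; 2-D for brevity.
    import numpy as np

    c = 3.0e5                        # signal speed (km/s)
    sensors = np.array([[0.0, 0.0], [800.0, 100.0], [200.0, 900.0], [950.0, 850.0]])
    true_xy, true_t0 = np.array([420.0, 310.0]), 1.0

    rng = np.random.default_rng(0)
    toa = true_t0 + np.linalg.norm(sensors - true_xy, axis=1) / c
    toa = toa + rng.normal(0.0, 1e-4, size=toa.shape)     # timing errors (s)

    x = np.array([500.0, 500.0, 0.9])                     # initial guess: x, y, t0
    for _ in range(10):                                   # Gauss-Newton iterations
        d = np.linalg.norm(sensors - x[:2], axis=1)
        resid = toa - (x[2] + d / c)                      # observed minus predicted
        J = np.column_stack([(x[0] - sensors[:, 0]) / (c * d),
                             (x[1] - sensors[:, 1]) / (c * d),
                             np.ones(len(sensors))])      # Jacobian of predictions
        x = x + np.linalg.lstsq(J, resid, rcond=None)[0]

    cov = (1e-4 ** 2) * np.linalg.inv(J.T @ J)            # linearized covariance
    print("estimated location (km):", np.round(x[:2], 1), " t0 = %.5f s" % x[2])
    print("1-sigma position errors (km):", np.round(np.sqrt(np.diag(cov)[:2]), 2))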
Aluminum oxide (ALOX) filled epoxy is the dielectric encapsulant in shock-driven high-voltage power supplies. ALOX encapsulants display a high dielectric strength under purely electrical stress, but minimal information is available on the combined effects of high voltage and mechanical shock. We report breakdown results from applying electrical stress in the form of a unipolar high-voltage pulse on the order of 10 {micro}s in duration, and our findings may establish a basis for understanding the results from proposed combined-stress experiments. A test specimen geometry giving approximately uniform fields is used to compare three ALOX encapsulant formulations, which include the new-baseline 459 epoxy resin encapsulant and a variant in which the Alcoa T-64 alumina filler is replaced with Sumitomo AA-10 alumina. None of these encapsulants shows a sensitivity to ionizing radiation. We also report results from specimens with sharp-edged electrodes that cause strong, localized field enhancement, as might be present near electrically-discharged mechanical fractures in an encapsulant. Under these conditions the 459-epoxy ALOX encapsulant displays approximately 40% lower dielectric strength than the older Z-cured Epon 828 formulation. An investigation of several processing variables did not reveal an explanation for this reduced performance. The 459-epoxy encapsulant appears to suffer electrical breakdown if the peak field anywhere reaches a critical level. The stress-strain characteristics of the Z-cured ALOX encapsulant were measured under high triaxial pressure, and we find that this stress causes permanent deformation and a network of microscopic fractures. Recommendations are made for future experimental work.
Hafnium diboride-silicon carbide (HS) and zirconium diboride-silicon carbide (ZS) composites are potential materials for high temperature, thermal shock applications such as components on re-entry vehicles. In order to establish the material constants necessary for evaluation of in situ fracture, bars fractured in four-point flexure were examined using fractographic principles. The fracture toughness was determined from measurements of the critical crack sizes and the strength values, and the crack branching constants were established for use in forensic fractography of future in-flight tests. The fracture toughnesses range from about 13 MPa m{sup 1/2} at room temperature to about 6 MPa m{sup 1/2} at 1400 C for ZrB{sub 2}-SiC composites, and from about 13 MPa m{sup 1/2} at room temperature to about 4 MPa m{sup 1/2} at 1400 C for HfB{sub 2}-SiC composites. Thus, the toughnesses of both the HS and ZS composites show potential for use in thermal shock applications. Processing and manufacturing defects limited the strength of the test bars. However, examination of the microstructure on the fracture surfaces shows that the processing of these composites can be improved. There is potential for high toughness composites with high strength to be used in thermal shock conditions if the processing and handling are controlled.
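The fractographic relations typically used in such an analysis are, schematically, the toughness computed from the fracture strength and the critical flaw size, and the crack-branching constant computed from the strength and the branching radius; the symbols below follow common fractography usage, the geometric factor Y depends on the flaw shape, and these expressions are not quoted from the report.

    K_c \;=\; Y\,\sigma_f\,\sqrt{c_{\mathrm{cr}}}, \qquad A_b \;=\; \sigma_f\,\sqrt{r_b}

Here \sigma_f is the fracture strength, c_{\mathrm{cr}} the measured critical crack size, r_b the crack-branching radius, and A_b the branching constant.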
One of the tasks performed routinely by the Electromagnetics and Plasma Physics Analysis Department at Sandia National Laboratories is analyzing the effects of direct-strike lightning on Faraday cages that protect sensitive items. The Faraday cages analyzed thus far have many features in common. This report is an attempt to collect equations and other information that have been routinely used in the past in order to facilitate future analysis.
The design of experiments (DOEx) approach was used to characterize the Precision Laser Beam Welding Process with respect to four processing factors: Angle of Attack, Volts, Pulse Length, and Focus. The experiment was performed with Lap Joints, Nickel-Wire Joints, and Kovar-Wire Joints. The laser welding process and these types of welds are used in the manufacture of MC4368A Neutron Generators. For each weld type an individual optimal condition and operating window was identified. The widths of the operating windows that were identified by experimentation indicate that the laser weld process is very robust. This is highly desirable because it means that the quality of the resulting welds is not sensitive to the exact values of the processing factors within the operating windows. Statistical process control techniques can be used to ensure that the processing factors stay well within the operating window.
An optical sensor system has been developed for the autonomous monitoring of NO2 evolution in energetic material aging studies. The system is minimally invasive, requiring only the presence of a small sensor film within the aging chamber. The sensor material is a perylene/PMMA film that is excited by a blue LED light source, with the fluorescence detected by a CCD spectrometer. Detection of NO2 gas is done remotely through the glass window of the aging chamber. Irreversible reaction of NO2 with perylene, producing non-fluorescent nitroperylene, provides the optical sensing scheme. The rate of fluorescence intensity loss over time can be modeled using a numerical solution to the coupled diffusion and nonlinear chemical reaction problem to evaluate NO2 concentration levels. The light source, spectrometer, spectral acquisition, and data processing were controlled through a LabVIEW program run on a laptop PC. Due to the long times involved in materials aging studies, the system was designed to turn on, warm up, acquire data, power itself off, then recycle at a specific time interval. This allowed the monitoring of aging HE material over a period of several weeks with minimal power consumption and stable LED light output. Despite inherent problems with gas leakage from the aging chamber, the sensor system was tested in the field under an accelerated aging study of rocket propellant. The propellant was found to evolve NO2 at a rate that yielded a concentration of between 10 and 100 ppm. The sensor system further revealed that the propellant, over an aging period of 25 days, evolves NO2 with cyclic behavior between active and dormant periods.
Dynamic thermography is a promising technology for inspecting metallic and composite structures used in high-consequence industries. However, the reliability and inspection sensitivity of this technology have historically been limited by the need for extensive operator experience and the use of human judgment and visual acuity to detect flaws in the large volume of infrared image data collected. To overcome these limitations, new automated data analysis algorithms and software are needed. The primary objectives of this research effort were to develop a data processing methodology that is tied to the underlying physics, that reduces or removes the data interpretation requirements, and that eliminates the need to examine large numbers of data frames to determine whether a flaw is present. Considering the strengths and weaknesses of previous research efforts, this effort elected to couple the temporal and spatial attributes of the surface temperature. Of the algorithms investigated, the best performing was a radiance-weighted root mean square Laplacian metric that included a multiplicative surface effect correction factor and a novel spatio-temporal parametric model for data smoothing. This metric demonstrated the potential for detecting flaws smaller than 0.075 inch in inspection areas on the order of one square foot. Included in this report are the development of a thermal imaging model, a weighted least squares thermal data smoothing algorithm, simulation and experimental flaw detection results, and an overview of the ATAC (Automated Thermal Analysis Code) software that was developed to analyze thermal inspection data.
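To make the shape of such a metric concrete, the following is a minimal sketch, not the ATAC implementation; the report's surface-effect correction and spatio-temporal parametric smoothing are replaced here by simple placeholders:

```python
# Sketch only (not the ATAC code): a radiance-weighted, root-mean-square
# Laplacian flaw metric computed over a stack of thermal images.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def rms_laplacian_metric(frames):
    """frames: array (n_frames, ny, nx) of surface temperature/radiance images."""
    metric = np.zeros(frames.shape[1:])
    for f in frames:
        f_s = gaussian_filter(f, sigma=1.0)        # stand-in for the report's smoothing
        lap = laplace(f_s)                         # spatial Laplacian of the frame
        metric += (f_s * lap) ** 2                 # radiance-weighted contribution
    return np.sqrt(metric / len(frames))           # RMS over the time stack
```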
This report documents how active structural control was used to significantly enhance the metal removal rate of a milling machine. An active structural control system integrates actuators, sensors, a control law, and a processor into a structure for the purpose of improving the dynamic characteristics of the structure. Sensors measure motion, and the control law, implemented in the processor, relates this motion to actuator forces. Closed-loop dynamics can be enhanced by proper control law design. Actuators and sensors were embedded within a milling machine for the purpose of modifying its dynamics in such a way that mechanical energy produced during cutting was absorbed. This limited the onset of instabilities and allowed for greater depths of cut. Up to an order of magnitude improvement in metal removal rate was achieved using this system. Although the demonstrations were very successful, the development of an industrial prototype awaits improvements in the technology. In particular, simpler system designs that assure controllability and observability, and control algorithms that allow for adaptability, need to be developed.
Many problems in aeronautics can be described in terms of nonlinear systems of equations. Carleman developed a technique to linearize such equations that could lead to analytical solutions of nonlinear problems. Nonlinear problems are difficult to solve in closed form, so the construction of such solutions is usually nontrivial. This research applies the Carleman linearization technique to three model problems: a two-degree-of-freedom (2DOF) ballistic trajectory, Blasius' boundary layer, and Van der Pol's equation, and evaluates how well the technique approximates the solutions of these ordinary differential equations.
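The essence of the technique can be shown on a simpler scalar example than the report's three problems. In this hedged sketch (an assumed model problem, not from the report), the quadratic ODE x' = a·x + b·x² is embedded in a truncated linear system y' = A·y through the variables y_k = x^k and compared against the exact solution:

```python
# Hedged sketch of Carleman linearization for x' = a*x + b*x**2.  The embedding
# y_k = x**k gives a truncated linear system whose matrix exponential yields an
# approximate closed-form solution.
import numpy as np
from scipy.linalg import expm

def carleman_matrix(a, b, N):
    """Truncated Carleman matrix for states y_k = x**k, k = 1..N."""
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = a * k          # d/dt x^k contains a*k*x^k ...
        if k < N:
            A[k - 1, k] = b * k          # ... and b*k*x^(k+1), truncated at k = N
    return A

a, b, x0, t, N = -1.0, 0.5, 0.2, 2.0, 8
y0 = np.array([x0 ** k for k in range(1, N + 1)])
x_carleman = (expm(carleman_matrix(a, b, N) * t) @ y0)[0]
x_exact = a * x0 / ((a + b * x0) * np.exp(-a * t) - b * x0)   # exact (Bernoulli) solution
print(x_carleman, x_exact)
```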
This research explores four experiments with adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their use of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which is unable to detect novel exploits, and anomaly detection, which flags too many events, including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.
A new scheme to simulate elastic collisions in particle simulation codes is presented. The new scheme aims at simulating the collisions in the highly collisional regime, in which particle simulation techniques typically become computationally expensive. The new scheme is based on the concept of a grid-based collision field. According to this scheme, the particles perform a single collision with the background grid during a time step. The properties of the background field are calculated from the moments of the distribution function accumulated on the grid. The collision operator is based on the Langevin equation. Based on comparisons with other methods, it is found that the Langevin method overestimates the collision frequency for dilute gases.
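As an illustration of the collision step (a minimal sketch of a Langevin-type update under assumed parameters, not the paper's exact operator), each particle is relaxed once per time step toward the background drift and temperature accumulated on the grid:

```python
# Minimal sketch (assumed form): Langevin velocity update in which each particle
# performs a single "collision" per time step with a grid-based background field
# characterized by a drift velocity u and temperature T (energy units).
import numpy as np

def langevin_collide(v, u, T, m, nu, dt, rng=np.random.default_rng()):
    """v: (N, 3) particle velocities; u, T: background drift and temperature."""
    decay = np.exp(-nu * dt)                           # drag toward the drift velocity
    sigma = np.sqrt((T / m) * (1.0 - decay ** 2))      # thermal fluctuation amplitude
    return u + (v - u) * decay + sigma * rng.standard_normal(v.shape)
```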
A pointing control system is developed and tested for a flying gimbaled telescope. The two-axis pointing system is capable of sub-microradian pointing stability and high accuracy in the presence of large host vehicle jitter. The telescope also has high agility: it is capable of a 50° retarget (in both axes simultaneously) in less than 2 s. To achieve the design specifications, high-accuracy, high-resolution, two-speed resolvers were used, resulting in gimbal-angle measurements stable to 1.5 μrad. In addition, on-axis inertial angle displacement sensors were mounted on the telescope to provide host-vehicle jitter cancellation. The inertial angle sensors are accurate to about 100 nrad, but do not measure low-frequency displacements below 2 Hz. The gimbal command signal includes host-vehicle attitude information, which is band-limited. This provides jitter data below 20 Hz, but includes a variable latency between 15 and 25 ms. One of the most challenging aspects of this design was to combine the inertial-angle-sensor data with the less perfect information in the command signal to achieve maximum jitter reduction. The optimal blending of these two signals, along with the feedback compensation, was designed using Quantitative Feedback Theory.
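The frequency-splitting idea behind that blending can be sketched with a first-order complementary filter. This is an illustration only, with an assumed crossover frequency; the report's actual design used Quantitative Feedback Theory:

```python
# Simplified sketch: blend the inertial angle sensor (good above ~2 Hz) with the
# band-limited, delayed attitude data in the command signal (good at low
# frequency) using a complementary filter with an assumed crossover frequency.
import numpy as np

def blend_jitter(theta_inertial, theta_command, dt, f_cross=2.0):
    """Low-pass the command-derived angle, high-pass the inertial angle, sum them."""
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * f_cross))   # first-order low-pass gain
    lp_cmd, lp_in = theta_command[0], theta_inertial[0]
    out = np.empty(len(theta_inertial))
    for i in range(len(out)):
        lp_cmd += alpha * (theta_command[i] - lp_cmd)   # keep low-frequency attitude
        lp_in += alpha * (theta_inertial[i] - lp_in)
        out[i] = lp_cmd + (theta_inertial[i] - lp_in)   # add high-frequency jitter
    return out
```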
This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.
This report documents an exploratory FY 00 LDRD project that sought to demonstrate the first steps toward a realistic computational representation of the variability encountered in individual human behavior. Realism, as conceptualized in this project, required that the human representation address the underlying psychological, cultural, physiological, and environmental stressors. The present report outlines the researchers' approach to representing cognitive, cultural, and physiological variability of an individual in an ambiguous situation while faced with a high-consequence decision that would greatly impact subsequent events. The present project was framed around a sensor-shooter scenario as a soldier interacts with an unexpected target (two young Iraqi girls). A software model of the ''Sensor Shooter'' scenario from Desert Storm was developed in which the framework consisted of a computational instantiation of Recognition Primed Decision Making in the context of a Naturalistic Decision Making model [1]. Recognition Primed Decision Making was augmented with an underlying foundation based on our current understanding of human neurophysiology and its relationship to human cognitive processes. While the Gulf War scenario that constitutes the framework for the Sensor Shooter prototype is highly specific, the human decision architecture and the subsequent simulation are applicable to other problems similar in concept, intensity, and degree of uncertainty. The goal was to provide initial steps toward a computational representation of human variability in cultural, cognitive, and physiological state in order to attain a better understanding of the full depth of human decision-making processes in the context of ambiguity, novelty, and heightened arousal.
Virtual Private Networking is a new communications technology that promises lower-cost, more secure wide-area communications by leveraging public networks such as the Internet. Sandia National Laboratories has embraced the technology for interconnecting remote sites to Sandia's corporate network and for enabling remote access for both dial-up and broadband users.
Public concern regarding the effects of noise generated by the detonation of excess and obsolete explosive munitions at U.S. Army demolition ranges is a continuing issue for the Army's demilitarization and disposal groups. Recent concerns of citizens living near the McAlester Army Ammunition Plant (MCAAP) in Oklahoma have led the U.S. Army Defense Ammunition Center (DAC) to conduct a demonstration and evaluation of noise abatement techniques that could be applied to the MCAAP demolition range. With the support of the DAC, MCAAP, and Sandia National Laboratories (SNL), three types of noise abatement techniques were applied: aqueous foams, overburden (using combinations of sand beds and dirt coverings), and rubber or steel blast mats. Eight test configurations were studied and twenty-four experiments were conducted on the MCAAP demolition range in July of 2000. Instrumentation and data acquisition systems were fielded for the collection of near-field blast pressures, far-field acoustic pressures, plant boundary seismic signals, and demolition range meteorological conditions. The resulting data have been analyzed and reported, and a ranking of each technique's effects has been provided to the DAC.
A simplified and bounding methodology for analyzing the pressure buildup and hydrogen concentration within an unvented 2R container was developed (the 2R is a sealed container within a 6M package). The specific case studied was the gas buildup due to alpha radiolysis of water moisture sorbed on small quantities (less than 20 Ci per package) of plutonium oxide. Analytical solutions for gas pressure buildup and hydrogen concentration within the unvented 2R container were developed. Key results indicated that internal pressure buildup would not be significant for a wide range of conditions. Hydrogen concentrations should also be minimal but are difficult to quantify due to a large variation/uncertainty in model parameters. Additional assurance of non-flammability can be obtained by the use of an inert backfill gas in the 2R container.
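To indicate the flavor of such a bounding estimate (a hedged sketch with hypothetical parameter values, not the report's analytical solution), radiolytic hydrogen generation can be scaled from the decay energy with a G value and converted to pressure with the ideal gas law; the fraction of decay energy actually absorbed by the sorbed moisture is the dominant, and here assumed, uncertainty:

```python
# Bounding-style sketch (hypothetical values, not the report's model): H2 from
# alpha radiolysis of sorbed water in a sealed container of fixed free volume,
# converted to pressure with the ideal gas law.
AVOGADRO, R = 6.022e23, 8.314                               # 1/mol, J/(mol*K)

def h2_pressure_rise(activity_ci, e_alpha_mev, g_h2, frac_to_water,
                     free_volume_m3, temp_k, time_s):
    """Added H2 partial pressure (Pa) after time_s of alpha radiolysis."""
    ev_per_s = activity_ci * 3.7e10 * e_alpha_mev * 1.0e6        # decay energy rate (eV/s)
    molecules_per_s = g_h2 * frac_to_water * ev_per_s / 100.0    # G value: molecules/100 eV
    moles = molecules_per_s * time_s / AVOGADRO
    return moles * R * temp_k / free_volume_m3                   # ideal gas law

# 20 Ci of PuO2, G(H2) ~ 0.1, 1% of decay energy reaching sorbed water (assumed),
# 1-liter free volume, 10 years:
print(h2_pressure_rise(20.0, 5.2, 0.1, 0.01, 1.0e-3, 300.0, 10 * 3.156e7), "Pa")
```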
The efficiency of the design-to-analysis process for translating solid-model-based design data to computational analysis model data plays a central role in the application of computational analysis to engineering design and certification. A review of the literature from within Sandia as well as from industry shows that the design-to-analysis process involves a number of complex organizational and technological issues. This study focuses on the design-to-analysis process from a business process standpoint and is intended to generate discussion regarding this important issue. Observations obtained from Sandia staff member and management interviews suggest that the current Sandia design-to-analysis process is not mature and that this cross-organizational issue requires committed high-level ownership. A key recommendation of the study is that additional resources should be provided to the computer aided design organizations to support design-to-analysis. A robust community of practice is also needed to continuously improve the design-to-analysis process and to provide a corporate perspective.
A Synthetic Aperture Radar (SAR) image is a two-dimensional projection of the radar reflectivity from a 3-dimensional object or scene. Stereoscopic SAR employs two SAR images from distinct flight paths that can be processed together to extract information of the third collapsed dimension (typically height) with some degree of accuracy. However, more than two SAR images of the same scene can similarly be processed to further improve height accuracy, and hence 3-dimensional position accuracy. This report shows how.
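The gist of using more than two images can be sketched with a deliberately simplified layover model (an illustration under assumed geometry, not the report's formulation): in each image an elevated scatterer is displaced in apparent ground range by roughly h·tan(grazing angle), so several images over-determine the ground position and the height, which can then be recovered by least squares:

```python
# Simplified multi-image height estimate (assumed layover model, illustrative only).
import numpy as np

def estimate_height(apparent_x, graze_rad):
    """Least squares solve for true ground position x and scatterer height h."""
    A = np.column_stack([np.ones_like(apparent_x),   # sensitivity to ground position
                         np.tan(graze_rad)])         # layover sensitivity to height
    (x, h), *_ = np.linalg.lstsq(A, apparent_x, rcond=None)
    return x, h

# Three assumed collection geometries observing a 12 m tall scatterer at x = 100 m
graze = np.radians([30.0, 40.0, 55.0])
apparent = 100.0 + 12.0 * np.tan(graze) + np.random.default_rng(0).normal(0.0, 0.3, 3)
print(estimate_height(apparent, graze))
```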
This work points out that the costates are actually discontinuous functions of time for optimal control problems with Coulomb friction. In particular these discontinuities occur at the time points where the velocity of the system changes sign. To our knowledge, this has not been noted before. This phenomenon is demonstrated on a minimum-time problem with Coulomb friction and the consistency of discontinuous costates and switching functions with respect to the input switches is shown.
To achieve higher rendering performance, the use of a parallel sort-last architecture on a PC cluster is presented. The sort-last library (libpglc) can be linked to an existing parallel application to achieve high rendering rates. The efficient use of 64 commodity graphics cards achieves pace-setting rendering performance of 300 million triangles per second on extremely large data sets.
This report is a summary of the work completed in FY00 for science-based characterization of the processes used to fabricate cermet vias in source feedthrus. In particular, studies were completed to characterize the CND50 cermet slurry, characterize solvent imbibition, and identify critical via-filling variables. These three areas of interest are important to several processes pertaining to the production of neutron generator tubes. Rheological characterizations of CND50 slurries prepared with 94ND2 and Sandi94 primary powders were also compared. The 94ND2 powder was formerly produced at the GE Pinellas Plant and the Sandi94 is the new replacement powder produced at CeramTec. Processing variables that may affect the via-filling process were also studied and include: the effect of solids loading in the CND50 slurry; the effect of milling time; and the effect of Nuosperse (a slurry "conditioner"). Imbibition characterization included a combination of experimental, theoretical, and computational strategies to determine solvent migration through complex shapes, specifically vias in the source feedthru component. Critical factors were determined using a controlled set of experiments designed to identify those variables that influence the occurrence of defects within the cermet-filled via. These efforts were pursued to increase part production reliability, understand selected fundamental issues that impact the production of slurry-filled parts, and validate the ability of the computational fluid dynamics code, GOMA, to simulate these processes. Suggestions are made for improving the slurry filling of source feedthru vias.
As part of the full scale fuel fire experimental program, a series of JP-8 pool fire experiments with a large cylindrical calorimeter (3.66 m diameter), representing a C-141 aircraft fuselage, at the lee end of the fuel pool were performed at Naval Air Warfare Center, Weapons Division (NAWCWPNS). The series was designed to support Weapon System Safety Assessment (WSSA) needs by addressing the case of a transport aircraft subjected to a large fuel fire. The data collected from this mock series will allow for characterization of the fire environment via a survivable test fixture. This characterization will provide important background information for a future test series utilizing the same fuel pool with an actual C-141 aircraft in place of the cylindrical calorimeter.
This report provides an independent assessment of information on mixed waste streams, chemical compatibility information on polymers, and standard test methods for polymer properties. It includes a technology review of mixed low-level waste (LLW) streams and material compatibilities, validation for the plan to test the compatibility of simulated mixed wastes with potential seal and liner materials, and the test plan itself. Potential packaging materials were reviewed and evaluated for compatibility with expected hazardous wastes. The chemical and physical property measurements required for testing container materials were determined. Test methodologies for evaluating compatibility were collected and reviewed for applicability. A test plan to meet US Department of Energy and Environmental Protection Agency requirements was developed. The expected wastes were compared with the chemical resistances of polymers, the top-ranking polymers were selected for testing, and the most applicable test methods for candidate seal and liner materials were determined. Five recommended solutions to simulate mixed LLW streams are described. The test plan includes descriptions of test materials, test procedures, data collection protocols, safety and environmental considerations, and quality assurance procedures. The recommended order of testing to be conducted is specified.
In this study, the erosion properties of four sediments related to the Canaveral Ocean Dredged Material Disposal Site have been determined as a function of density, consolidation, and shear stress by means of a high shear stress sediment erosion flume at Sandia National Laboratories. Additional analysis was completed for each sediment to determine mineralogy, particle size, and organic content. This was done to support numerical modeling efforts, aid in effective management, and minimize environmental impact. The motivation for this work is based on concerns of dredged material transporting beyond the designated site and estimates of site capacity.
This paper provides an overview of John Holland's Echo model, describes an implementation of the model, documents results from preliminary experiments using the model, and proposes further research in using Echo to study complex adaptive systems. Echo simulates the behavior of complex adaptive systems and can provide an experimental testbed for exploring theories of these systems and for developing tools useful for analyzing them. Preliminary results indicate that the dynamic behavior of Echo can be used to generate interesting time-series data that will be useful for evaluating the applicability of, and for developing, tools, techniques, and possibly general theories for the analysis of specific complex adaptive systems.
This report contains the design basis for a generic molten-salt solar power tower. A solar power tower uses a field of tracking mirrors (heliostats) that redirect sunlight onto a centrally located receiver mounted on top of a tower, which absorbs the concentrated sunlight. Molten nitrate salt, pumped from a tank at ground level, absorbs the concentrated sunlight in the receiver and is heated to 565 C. The heated salt flows back to ground level into another tank where it is stored, then is pumped through a steam generator to produce steam and make electricity. This report establishes a set of criteria upon which the next generation of solar power towers will be designed. The report contains detailed criteria for each of the major systems: Collector System, Receiver System, Thermal Storage System, Steam Generator System, Master Control System, and Electric Heat Tracing System. The Electric Power Generation System and Balance of Plant discussions are limited to interface requirements. This design basis builds on the extensive experience gained from the Solar Two project and includes potential design innovations that will improve reliability and lower technical risk. This design basis document is a living document and contains several areas that require trade studies and design analysis to fully complete the design basis. Project- and site-specific conditions and requirements will also resolve open To Be Determined issues.
This report documents an investigation of irreversible electrical breakdown in ZnO varistors due to short pulses of high electric field and current density. For those varistors that suffer breakdown, there is a monotonic, pulse-by-pulse degradation in the switching electric field. The electrical and structural characteristics of varistors during and after breakdown are described qualitatively and quantitatively. Once breakdown is nucleated, the degradation typically follows a well-defined relationship between the number of post-initiation pulses and the degraded switching voltage. In some cases the degraded varistor has a remnant 20-μm-diameter hollow track showing strong evidence of once-molten ZnO. A model is developed for both electrical and thermal effects during high energy pulsing. The breakdown is assumed to start at one electrode and advance towards the other electrode as a thin filament of conductive material that grows incrementally with each successive pulse. The model is partially validated by experiments in which the varistor rod is cut at several different lengths from the electrode. Invariably one section of the cut varistor has a switching field that is not degraded while the other section(s) are heavily degraded. Based on the experiments and models of behavior during breakdown, some speculations about the nature of the nucleating mechanism are offered in the last section.
This report describes the 19-foot diameter blast tunnel at Sandia National Laboratories. The blast tunnel configuration consists of a 6 foot diameter by 200 foot long shock tube, a 6 foot diameter to 19 foot diameter conical expansion section that is 40 feet long, and a 19 foot diameter test section that is 65 feet long. Therefore, the total blast tunnel length is 305 feet. The development of this 19-foot diameter blast tunnel is presented. The small-scale research test results using 4 inch by 8 inch diameter and 2 foot by 6 foot diameter shock tube facilities are included. Analytically predicted parameters are compared to experimentally measured blast tunnel parameters in this report. The blast tunnel parameters include distance, time, static overpressure, stagnation pressure, dynamic pressure, reflected pressure, shock Mach number, flow Mach number, shock velocity, flow velocity, impulse, flow duration, etc. Shadowgraphs of the shock wave are included for the three different size blast tunnels.
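Several of the listed parameters follow from the shock Mach number through the standard perfect-gas normal-shock relations; the sketch below shows those textbook relations, not the report's fitted least-squares equations:

```python
# Textbook normal-shock (Rankine-Hugoniot) relations for a perfect gas, of the
# kind used to turn a measured shock Mach number into pressure, density, and
# temperature ratios across the shock.
def normal_shock(M, gamma=1.4):
    """Return pressure, density, and temperature ratios across a normal shock."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M * M - 1.0)
    rho_ratio = (gamma + 1.0) * M * M / ((gamma - 1.0) * M * M + 2.0)
    T_ratio = p_ratio / rho_ratio
    return p_ratio, rho_ratio, T_ratio

# e.g., a Mach 2 shock in air gives about 4.5x pressure and 2.67x density
print(normal_shock(2.0))
```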
Strain-induced self-assembly during semiconductor heteroepitaxy offers a promising approach to produce quantum nanostructures for nanologic and optoelectronics applications. Our current research direction aims to move beyond self-assembly of the basic quantum dot towards the fabrication of more complex, potentially functional structures such as quantum dot molecules and quantum wires. This report summarizes the steps taken to improve the growth quality of our GeSi molecular beam epitaxy process, and then highlights the outcomes of this effort.
Methods to determine unsaturated hydraulic properties can exhibit random and nonunique behavior. We assess the causes for these behaviors by visualizing microscale phase displacement processes that occur during equilibrium retention and transient outflow experiments. For both types of experiments we observe the drainage process to be composed of a mixture of fast air fingering and slower air back-filling. The influence of each of these microscale processes is controlled by a combination of the size and the speed of the applied boundary step, the initial saturation and its structure, and small-scale heterogeneities. Because the mixture of these microscale processes yields macroscale effective behavior, measured unsaturated flow properties are also a function of these controls. Such results suggest limitations on the current definitions and uniqueness of unsaturated hydraulic properties.
As a participating national lab in the inter-institutional effort to resolve performance issues of the non-elutable ion exchange technology for Cs extraction, we have carried out a series of characterization studies of UOP IONSIV® IE-911 and its component parts. IE-911 is a bound form (zirconium hydroxide binder) of crystalline silicotitanate (CST) ion exchanger. The crystalline silicotitanate removes Cs from solutions by selective ion exchange. The performance issues of primary concern are: (1) excessive Nb leaching and subsequent precipitation of column-plugging Nb-oxide material, and (2) precipitation of aluminosilicate on IE-911 pellet surfaces, which may be initiated by dissolution of Si from the IE-911, thus creating a supersaturated solution with respect to silica. In this work, we have identified and characterized Si- and Nb-oxide based impurity phases in IE-911, which are the most likely sources of leachable Si and Nb, respectively. Furthermore, we have determined the criteria and mechanism for removal from IE-911 of the Nb-based impurity phase that is responsible for the Nb-oxide column plugging incidents.
This report summarizes progress from the Laboratory Directed Research and Development (LDRD) program during fiscal year 2000. In addition to a programmatic and financial overview, the report includes progress reports from 244 individual R and D projects in 13 categories.
Umbra is a new Sandia-developed modeling and simulation framework. The Umbra framework allows users to quickly build models and simulations for intelligent system development, analysis, experimentation, and control, and supports tradeoff analyses of complex robotic system, device, and component concepts. Umbra links together heterogeneous collections of modeling tools. The models in Umbra include 3D geometry and physics models of robots, devices, and their environments. Model components can be built with varying levels of fidelity and readily switched, allowing models built with low fidelity for conceptual analysis to be gradually converted to high-fidelity models for later-phase detailed analysis. Within control environments, the models can be readily replaced with actual control elements. This paper describes Umbra at a functional level and describes issues that Sandia uses Umbra to address.
The construction of inverse states in a finite field F_{P_α} enables the organization of the mass scale by associating particle states with residue class designations. With the assumption of perfect flatness (Ω_total = 1.0), this approach leads to the derivation of a cosmic seesaw congruence which unifies the concepts of space and mass. The law of quadratic reciprocity profoundly constrains the subgroup structure of the multiplicative group of units F_{P_α}* defined by the field. Four specific outcomes of this organization are (1) a reduction in the computational complexity of the mass state distribution by a factor of approximately 10^30, (2) the extension of the genetic divisor concept to the classification of subgroup orders, (3) the derivation of a simple numerical test for any prospective mass number based on the order of the integer, and (4) the identification of direct biological analogies to taxonomy and regulatory networks characteristic of cellular metabolism, tumor suppression, immunology, and evolution. It is generally concluded that the organizing principle legislated by the alliance of quadratic reciprocity with the cosmic seesaw creates a universal optimized structure that functions in the regulation of a broad range of complex phenomena.
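For readers unfamiliar with the underlying machinery, the following small, generic illustration (not tied to the report's specific field F_{P_α}) shows the two computations the abstract leans on: multiplicative inverses in a prime field and the orders of elements of the group of units, which necessarily divide p - 1:

```python
# Generic illustration: inverses and element orders in the group of units of a
# prime field F_p.  Element orders divide p - 1, which is what organizes the
# subgroup structure discussed in the report.
def inverse_mod(a, p):
    """Multiplicative inverse of a in F_p (p prime), via Fermat's little theorem."""
    return pow(a, p - 2, p)

def element_order(a, p):
    """Order of a in the multiplicative group F_p* (naive search over divisors of p-1)."""
    n = p - 1
    for d in sorted(k for k in range(1, n + 1) if n % k == 0):
        if pow(a, d, p) == 1:
            return d

p = 61
print(inverse_mod(7, p), element_order(7, p))   # 7 * 35 = 245 = 4*61 + 1
```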
This report describes the use of PorSalsa, a parallel-processing, finite-element-based, unstructured-grid code for the simulation of subsurface nonisothermal two-phase, two-component flow through heterogeneous porous materials. PorSalsa can also model the advective-dispersive transport of any number of species. General source term and transport coefficient implementation greatly expands possible applications. Spatially heterogeneous flow and transport data are accommodated via a flexible interface. Discretization methods include both Galerkin and control volume finite element methods, with various options for weighting of nonlinear coefficients. Time integration includes both first- and second-order predictor/corrector methods with automatic time step selection. Parallel processing is accomplished by domain decomposition and message passing, using MPI, enabling seamless execution on single computers, networked clusters, and massively parallel computers.
Arithmetic conditions relating particle masses can be defined on the basis of (1) the supersymmetric conservation of congruence and (2) the observed characteristics of particle reactions and stabilities. Stated in the form of common divisors, these relations can be interpreted as expressions of genetic elements that represent specific particle characteristics. In order to illustrate this concept, it is shown that the pion triplet (π±, π⁰) can be associated with the existence of a greatest common divisor d_{0±} in a way that can account for both the highly similar physical properties of these particles and the observed π±/π⁰ mass splitting. These results support the conclusion that a corresponding statement holds generally for all particle multiplets. Classification of the respective physical states is achieved by assignment of the common divisors to residue classes in a finite field F_{P_α}, and the existence of the multiplicative group of units F_{P_α}* enables the corresponding mass parameters to be associated with a rich subgroup structure. The existence of inverse states in F_{P_α} allows relationships connecting particle mass values to be conveniently expressed in a form in which the genetic divisor structure is prominent. An example is given in which the masses of two neutral mesons (K⁰ → π⁰) are related to the properties of the electron (e), a charged lepton. Physically, since this relationship reflects the cascade decay K⁰ → π⁰ + π⁰, π⁰ → e⁺ + e⁻, in which a neutral kaon is converted into four charged leptons, it enables the genetic divisor concept, through the intrinsic algebraic structure of the field, to provide a theoretical basis for the conservation of both electric charge and lepton number. It is further shown that the fundamental source of supersymmetry can be expressed in terms of hierarchical relationships between odd and even order subgroups of F_{P_α}, an outcome that automatically reflects itself in the phenomenon of fermion/boson pairing of individual particle systems. Accordingly, supersymmetry is best represented as a group rather than a particle property. The status of the Higgs subgroup of order 4 is singular; it is isolated from the hierarchical pattern and communicates globally to the mass scale through the seesaw congruence by (1) fusing the concepts of mass and space and (2) specifying the generators of the physical masses.
The stress of scandium dideuteride, ScD2, thin films is investigated during each stage of vacuum processing, including metal deposition via evaporation, reaction, and cooldown. ScD2 films with thin Cr underlayers are fabricated on three different substrate materials: molybdenum-alumina cermet, single crystal sapphire, and quartz. In all experiments, the evaporated Cr and Sc metal is relatively stress-free. However, reaction of scandium metal with deuterium at elevated temperature to form a stoichiometric dideuteride phase leads to a large compressive in-plane film stress. Compression during hydriding results from an increased atomic density compared with the as-deposited metal film. After reaction with deuterium, samples are cooled to ambient temperature, and a tensile stress develops due to mismatched coefficients of thermal expansion (CTE) of the substrate-film couple. The residual film stress and the propensity for films to crack during cooldown depend principally on the substrate material when identical process parameters are used. Films deposited onto quartz substrates show evidence of stress relief during cooldown due to a large CTE misfit; this is correlated with crack nucleation and propagation within the films. All ScD2 layers remain in a state of tension when cooled to 30 C. An in-situ, laser-based wafer curvature sensor is designed and implemented for studies of ScD2 film stress during processing. This instrument uses a two-dimensional array of laser beams to noninvasively monitor stress during sample rotation and with samples stationary. Film stress is monitored by scattering light off the backside of the substrates, i.e., the side opposite the deposition flux.
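Curvature measurements of this kind are usually reduced to an average film stress with the Stoney relation. The sketch below is only illustrative; the report's exact data reduction is not given here, and the property values are hypothetical:

```python
# Hedged sketch: the standard Stoney relation commonly used to convert measured
# substrate curvature into an average in-plane film stress.  Values below are
# purely illustrative, not the report's data.
def stoney_stress(E_s, nu_s, t_s, t_f, radius_m):
    """Film stress (Pa) from substrate curvature 1/radius_m."""
    return E_s * t_s ** 2 / (6.0 * (1.0 - nu_s) * t_f * radius_m)

# e.g., a sapphire-like substrate (E = 400 GPa, nu = 0.28), 500-um substrate,
# 1-um film, 20 m radius of curvature -> roughly 1.2 GPa
print(stoney_stress(400e9, 0.28, 500e-6, 1.0e-6, 20.0) / 1e9, "GPa")
```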
An experiment to measure surface pressure data on a series of three stainless steel simulated parachute ribbons was conducted. During the first phase of the test, unsteady pressure measurements were made on the windward and leeward sides of the ribbons to determine the statistical properties of the surface pressures. Particle Image Velocimetry (PIV) measurements were simultaneously made to establish the velocity field in the wake of the ribbons and its correlation with the pressure measurements. In the second phase of the test, steady-state pressure measurements were made to establish the pressure distributions. In the third phase, the stainless steel ribbons were replaced with nylon ribbons and PIV measurements were made in the wake. A detailed error analysis indicates that the accuracy of the pressure measurements was very good. However, an anomaly in the flow field caused the wake behind the stainless steel ribbons to establish itself in a stable manner on one side of the model. This same stability was not present for the nylon ribbon model although an average of the wake velocity data indicated an apparent 2° upwash in the wind tunnel flow field. Since flow angularity upstream of the model was not measured, the use of the data for code validation is not recommended without a second experiment to establish that upstream boundary condition.
In support of two major SNL programs, the Long-term Inflow and Structural Test (LIST) program and the Blade Manufacturing Initiative (BMI), three Micon 65/13M wind turbines have been erected at the USDA Agriculture Research Service (ARS) center in Bushland, Texas. The inflow and structural response of these turbines are being monitored with an array of 60 instruments: 34 to characterize the inflow, 19 to characterize structural response and 7 to characterize the time-varying state of the turbine. The primary characterization of the inflow into the LIST turbine relies upon an array of five sonic anemometers. Primary characterization of the structural response of the turbine uses several sets of strain gauges to measure bending loads on the blades and the tower and two accelerometers to measure the motion of the nacelle. Data are sampled at a rate of 30 Hz using a newly developed data acquisition system. The system features a time-synchronized continuous data stream and telemetered data from the turbine rotor. This paper documents the instruments and infrastructure that have been developed to monitor these turbines and their inflow.
Numerical methods may require derivatives of functions whose values are known only on irregularly spaced calculation points. This document presents and quantifies the performance of Moving Least-Squares (MLS), a method of derivative evaluation on irregularly spaced points that has a number of inherent advantages. The user selects both the spatial dimension of the problem and order of the highest conserved moment. The accuracy of calculations is maintained on highly irregularly spaced points. Not required are creation of additional calculation points or interpolation of the calculation points onto a regular grid. Implementation of the method requires the use of only a relatively small number of calculation points. The method is fast, robust and provides smooth results even as the order of the derivative increases.
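A one-dimensional sketch of the idea (the report's method is general in spatial dimension and conserved-moment order, so this is only an illustration) fits a weighted low-order polynomial around each evaluation point and reads the derivative off the fitted coefficients:

```python
# One-dimensional Moving Least-Squares sketch: weighted polynomial fit on
# irregularly spaced points, with the derivative taken from the local fit.
import numpy as np

def mls_derivative(x_pts, f_pts, x0, order=2, h=None):
    """Estimate f'(x0) from irregularly spaced samples (x_pts, f_pts)."""
    x_pts, f_pts = np.asarray(x_pts, float), np.asarray(f_pts, float)
    if h is None:
        h = 3.0 * np.mean(np.abs(np.diff(np.sort(x_pts))))   # support radius
    w = np.exp(-((x_pts - x0) / h) ** 2)                      # Gaussian distance weights
    sw = np.sqrt(w)                                           # for the weighted LS solve
    V = np.vander(x_pts - x0, order + 1, increasing=True)     # columns: 1, dx, dx^2, ...
    coeffs, *_ = np.linalg.lstsq(V * sw[:, None], sw * f_pts, rcond=None)
    return coeffs[1]                                          # local slope = f'(x0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, np.pi, 40))                      # irregular sample points
print(mls_derivative(x, np.sin(x), 1.0), np.cos(1.0))         # MLS estimate vs exact
```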
Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts having micron-size features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femtosecond laser ablation, and micro electro discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale mechanical components and actuators which require meso-scale parts fabricated in a variety of materials. Subtractive meso-scale manufacturing processes expand the functionality of meso-scale components and complement silicon-based MEMS and LIGA technologies.
This technical report presents the initial proposal and renewal proposals for an LDRD project whose intended goal was to enable applications to take full advantage of the hardware available on Sandia's current and future massively parallel supercomputers by analyzing various ways of combining distributed-memory and shared-memory programming models. Despite Sandia's enormous success with distributed-memory parallel machines and the message-passing programming model, clusters of shared-memory processors appeared to be the massively parallel architecture of the future at the time this project was proposed. The researchers had hoped to analyze various hybrid programming models for their effectiveness and characterize the types of application to which each model was well suited. The report presents the initial research proposal and subsequent continuation proposals that highlight the proposed work and summarize the accomplishments.
This paper studies the implementation of polar format synthetic aperture radar image formation in modern Field Programmable Gate Arrays (FPGAs). The polar format algorithm is described in rough terms, and each of the processing steps is mapped to FPGA logic. The FPGA logic is analyzed with respect to throughput and circuit size for compatibility with airborne image formation.
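For orientation, the processing flow being mapped to hardware can be sketched in a few lines. This is a simplified illustration of the polar format algorithm, not the paper's FPGA implementation; interpolation and windowing details are omitted:

```python
# Simplified polar format sketch: resample phase-history samples from a polar
# k-space raster onto a rectangular grid, then form the image with a 2-D FFT.
import numpy as np
from scipy.interpolate import griddata

def polar_format_image(phase_history, freqs, angles, n_grid=512):
    """phase_history: complex (n_freq, n_angle) samples on a polar k-space raster."""
    F, A = np.meshgrid(freqs, angles, indexing="ij")
    kx, ky = F * np.cos(A), F * np.sin(A)                    # polar sample locations
    gx = np.linspace(kx.min(), kx.max(), n_grid)
    gy = np.linspace(ky.min(), ky.max(), n_grid)
    GX, GY = np.meshgrid(gx, gy, indexing="ij")
    pts = (kx.ravel(), ky.ravel())
    # Polar-to-rectangular resampling (real and imaginary parts separately)
    rect = (griddata(pts, phase_history.real.ravel(), (GX, GY),
                     method="linear", fill_value=0.0)
            + 1j * griddata(pts, phase_history.imag.ravel(), (GX, GY),
                            method="linear", fill_value=0.0))
    return np.fft.fftshift(np.fft.fft2(rect))                # 2-D FFT forms the image
```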