Sandia National Laboratories, New Mexico, conducts the Energy Storage Systems Program, which is sponsored by the US Department of Energy's Office of Utility Technologies. The goal of this program is to collaborate with industry in developing cost-effective electric energy storage systems for many high-value stationary applications. Sandia National Laboratories is responsible for the engineering analyses, contracted development, and testing of energy storage components and systems. This report details the technical achievements realized during fiscal year 1997. 46 figs., 20 tabs.
This is the third report in a series of studies to examine how US attitudes about nuclear security are evolving in the post-Cold War era and to identify trends in public perceptions and preferences relevant to the evolution of US nuclear security policy. It presents findings from three surveys: a nationwide telephone survey of randomly selected members of the US general public; a written survey of randomly selected members of American Men and Women of Science; and a written survey of randomly selected state legislators from all fifty US states. Key areas of investigation included nuclear security, cooperation between US and Russian scientists about nuclear issues, vulnerabilities of critical US infrastructures and responsibilities for their protection, and broad areas of US national science policy. While international and US national security were seen to be slowly improving, the primary nuclear threat to the US was perceived to have shifted from Russia to China. Support was found for nuclear arms control measures, including mutual reductions in stockpiles. However, respondents were pessimistic about eliminating nuclear armaments, and nuclear deterrence continued to be highly valued. Participants favored decreasing funding for developing and testing new nuclear weapons, but supported increased investments in nuclear weapons infrastructure. Strong concerns were expressed about nuclear proliferation and the potential for nuclear terrorism. Support was evident for US scientific cooperation with Russia to strengthen security of Russian nuclear assets. Elite and general public perceptions of external and domestic nuclear weapons risks and benefits were statistically significantly related to nuclear weapons policy options and investment preferences. Demographic variables and individual belief systems were systematically related both to risk and benefit perceptions and to policy and spending preferences.
Classified designs usually include lesser classified (including unclassified) components. An engineer working on such a design needs access to the various sub-designs at lower classification levels. For simplicity, the problem is presented with only two levels: high and low. If the low-classification component designs are stored in the high network, they become inaccessible to persons working on a low network. In order to keep the networks separate, the component designs may be duplicated in all networks, resulting in a synchronization problem. Alternatively, they may be stored in the low network and brought into the high network when needed. The latter solution results in the use of sneaker-net (copying the files from the low system to a tape and carrying the tape to a high system) or a file transfer guard. This paper shows how an FTP Guard was constructed and implemented without degrading the security of the underlying B3 platform. The paper then shows how the guard can be extended to an FTP proxy server or an HTTP proxy server. The extension is accomplished by allowing the high-side user to select among items that already exist on the low-side. No high-side data can be directly compromised by the extension, but a mechanism must be developed to handle the low-bandwidth covert channel that would be introduced by the application.
In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. Finally, they demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
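The flavor of such an adaptive approach can be sketched in a few lines. The following is an illustrative example only, not the authors' exact filter: it simulates a short AR(1) process, recursively updates estimates of the process mean and lag-1 autocorrelation as each observation arrives, and tracks the one-step-ahead mean squared prediction error. All parameter values are hypothetical.

```python
# Illustrative sketch (not the paper's exact method): recursive estimation
# of AR(1) parameters from a short autocorrelated series, with MSE tracking.
import random

def simulate_ar1(n, mu=10.0, phi=0.6, sigma=1.0, seed=42):
    """Generate n values from x_t - mu = phi*(x_{t-1} - mu) + noise."""
    rng = random.Random(seed)
    x, out = mu, []
    for _ in range(n):
        x = mu + phi * (x - mu) + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def adaptive_ar1_monitor(data):
    """Update parameter estimates after each point; return (mean, phi, MSE)."""
    mean = data[0]
    num = den = 0.0                  # running sums for lag-1 autocorrelation
    sse, n_pred = 0.0, 0
    phi_hat = 0.0
    for t in range(1, len(data)):
        pred = mean + phi_hat * (data[t - 1] - mean)   # one-step forecast
        sse += (data[t] - pred) ** 2
        n_pred += 1
        # fold the new observation into the parameter estimates
        mean += (data[t] - mean) / (t + 1)
        num += (data[t] - mean) * (data[t - 1] - mean)
        den += (data[t - 1] - mean) ** 2
        phi_hat = num / den if den > 0 else 0.0
    return mean, phi_hat, sse / n_pred

series = simulate_ar1(100)
mean_hat, phi_hat, mse = adaptive_ar1_monitor(series)
```

Even with only 100 observations, estimates of this kind typically stabilize near the true process parameters, which is the point the paper makes about data requirements.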
Risk assessment methodologies are ready to enter their third generation. In this next generation, assessment will be based on a whole-system understanding of the system being assessed. To realize this vision of risk management, the authors have begun development of an extensible software tool kit. This tool kit breaks with the traditional approach to assessment by having the analyst spend the majority of the assessment time building an explicit model that documents in a single framework the various facets of the system, such as the system's behavior, structure, and history. Given this explicit model of the system, a computer is able to automatically produce standard assessment products, such as fault trees and event trees. This brings with it a number of advantages relative to current risk management tools, among them a greater sense of completeness and correctness in assessment results and the ability to preserve and later employ lessons learned.
The sol-gel chemistry of a variety of trialkoxysilanes with different organic substituents, with methoxide or ethoxide substituents on silicon was examined at varying monomer concentrations ranging up to neat monomer and with different catalysts. Gels were prepared from tetramethoxysilane and tetraethoxysilane at identical concentrations for purposes of comparison. The polymerization reactions were monitored for the formation of gels, insoluble precipitates, soluble polymers, or polyhedral oligosilsesquioxanes.
A comprehensive critical infrastructure analysis of the People's Republic of China was performed to address questions about China's ability to meet its long-term grain requirements and energy needs and to estimate greenhouse gas emissions in China likely to result from increased agricultural production and energy use. Four dynamic computer simulation models of China's infrastructures--water, agriculture, energy and greenhouse gas--were developed to simulate, respectively, the hydrologic budgetary processes, grain production and consumption, energy demand, and greenhouse gas emissions in China through 2025. The four models were integrated into a state-of-the-art comprehensive critical infrastructure model for all of China. This integrated model simulates diverse flows of commodities, such as water and greenhouse gas, between the separate models to capture the overall dynamics of the integrated system. The model was used to generate projections of China's available water resources and expected water use for 10 river drainage regions representing 100% of China's mean annual runoff and comprising 37 major river basins. These projections were used to develop estimates of the water surpluses and/or deficits in the three end-use sectors--urban, industrial, and agricultural--through the year 2025. Projections of the all-China demand for the three major grains (corn, wheat, and rice), meat, and other (other grains and fruits and vegetables) were also generated. Each geographic region's share of the all-China grain demand (allocated on the basis of each region's share of historic grain production) was calculated in order to assess the land and water resources in each region required to meet that demand. Growth in energy use in six historically significant sectors and growth in greenhouse gas loading were projected for all of China.
This paper explores the use of discrete-event simulation for the design and control of physical protection systems for fixed-site facilities housing items of significant value. It begins by discussing several modeling and simulation activities currently performed in designing and analyzing these protection systems and then discusses capabilities that design/analysis tools should have. The remainder of the article then discusses in detail how some of these new capabilities have been implemented in software to achieve a prototype design and analysis tool. The simulation software technology provides a communications mechanism between a running simulation and one or more external programs. In the prototype security analysis tool, these capabilities are used to facilitate human-in-the-loop interaction and to support a real-time connection to a virtual reality (VR) model of the facility being analyzed. This simulation tool can be used for both training (in real-time mode) and facility analysis and design (in fast mode).
Mixed arsenide/antimonide materials have unique properties which make them potentially valuable for use in VCSELs operating at wavelengths longer than 1 µm. The authors present their progress in applying these materials to VCSEL designs for 1--1.55 µm.
This report describes the results of an analysis to determine the economic and operational value of battery storage to wind and photovoltaic (PV) generation technologies to the Sacramento Municipal Utility District (SMUD) system. The analysis approach consisted of performing a benefit-cost economic assessment using established SMUD financial parameters, system expansion plans, and current system operating procedures. This report presents the results of the analysis. Section 2 describes expected wind and PV plant performance. Section 3 describes expected benefits to SMUD associated with employing battery storage. Section 4 presents preliminary benefit-cost results for battery storage added at the Solano wind plant and the Hedge PV plant. Section 5 presents conclusions and recommendations resulting from this analysis. The results of this analysis should be reviewed subject to the following caveat. The assumptions and data used in developing these results were based on reports available from and interaction with appropriate SMUD operating, planning, and design personnel in 1994 and early 1995 and are compatible with financial assumptions and system expansion plans as of that time. Assumptions and SMUD expansion plans have changed since then. In particular, SMUD did not install the additional 45 MW of wind that was planned for 1996. Current SMUD expansion plans and assumptions should be obtained from appropriate SMUD personnel.
Single event gate rupture (SEGR) is a catastrophic failure mode that occurs in dielectric materials that are struck by energetic heavy ions while biased under a high electric field condition. SEGR can reduce the critical electric field to breakdown to less than half the value observed in normal voltage ramp reliability tests. As electric fields in gate oxides increase to greater than 5 MV/cm in advanced MOS technologies, the impact of SEGR on the reliability of space based electronics must be assessed. In this summary, the authors explore the nature of SEGR in oxides with thickness from 7 nm to less than 5 nm, where soft breakdown is often observed during traditional reliability tests. They discuss the possible connection between the present understanding of SEGR and voltage stress breakdown models.
The Waste Isolation Pilot Plant (WIPP) is under development by the US Department of Energy (DOE) for the geologic disposal of transuranic waste. The primary regulatory requirements (i.e., 40 CFR 191 and 40 CFR 194) placed on the WIPP by the US Environmental Protection Agency (EPA) involve a complementary cumulative distribution function (CCDF) for normalized radionuclide releases to the accessible environment. The interpretation and use of this CCDF from a decision analysis perspective is discussed and illustrated with results from the 1996 performance assessment for the WIPP, which was carried out to support a compliance certification application by the DOE to the EPA for the WIPP.
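The central regulatory object here, the CCDF, is simple to construct from performance-assessment samples. The sketch below uses hypothetical release values, not WIPP results: for each release level R, the CCDF gives the probability that the normalized release exceeds R.

```python
# Minimal sketch of building an empirical CCDF from sampled normalized
# releases (the values below are hypothetical, not WIPP data).
def empirical_ccdf(samples):
    """Return (value, P[X > value]) pairs from a list of sampled releases."""
    n = len(samples)
    ordered = sorted(set(samples))
    return [(v, sum(1 for s in samples if s > v) / n) for v in ordered]

releases = [0.001, 0.01, 0.01, 0.05, 0.2]   # hypothetical normalized releases
curve = empirical_ccdf(releases)
# each point (R, p) reads: "releases exceed R with probability p"
```

Compliance is then judged by whether this curve stays below the regulatory limit line at the specified probability levels.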
Non-shrinking polymers are desirable as encapsulants for strain-free packaging for electronics. Ring-opening polymerizations of cyclic monomers such as lactams, cyclic ethers, and cyclic oligosiloxanes have proven an effective strategy for reducing shrinkage. In this report the authors examined the loss of volume during the ring-opening polymerization of neat 2,2,5,5-tetramethyl-1-oxa-2,5-disilacyclopentane to give poly(1,2-ethylene-bis(dimethylsiloxane)). Monomer 1 is under sufficient strain (8--12 kcal/mole) to permit its facile base-catalyzed polymerization to afford high molecular weight polymer. Monomer 1 was prepared by hydrolyzing and condensing either 1,2-bis(chlorodimethylsilyl)ethane or 1,2-bis(dimethylethoxysilyl)ethane to give a low molecular weight oligomer. Pyrolysis of this oligomer with potassium hydroxide at 280 °C afforded the cyclic monomer in good yield (60--70%). The ease with which the oligomer can be converted to monomer also led the authors to investigate the potential for recycling the high molecular weight polymer.
A backfill system has been designed for the Waste Isolation Pilot Plant (WIPP) that will hold the chemical environment of the post-closure repository within the domain where actinide solubility is lowest. The actinide solubility is highly dependent on the chemical species which constitute the fluid, the resulting pH of the fluid, and the oxidation state of the actinide which is stable under the specific conditions. The use of magnesium oxide (MgO) as the backfill material not only controls the pH of the expected fluids, but also effectively removes carbonate from the system, which has a significant impact on actinide solubility. The backfill selection process, emplacement system design, and confirmatory experimental results are presented.
In this paper the authors discuss the leveling process by which a business process ontology is formed in a distributed, multi-lingual, multi-stakeholder environment, with attention to realizing elicitation mechanisms that maintain registration of users' terms with the common ontology. Business processes are recognized from use-case analysis, specified in terms of the common ontology, and realized as operations on the components of a transaction: a temporally extended, complex, distributed object. A primary advantage of this approach is that users see private terminologies while the transaction object is specified in terms of the common ontology, and registration between the two is automatic and continuous. Ready realization of multiple interfaces to stakeholders, independently constructed validation and verification mechanisms, distributed data, and a standard elicitation mechanism and process are other advantages. The language formation process was used successfully during the development of the Border Trade Facilitation System, an agent-oriented mechanism that conducts international border-crossing transactions. In this implementation, agents operating within a federated architecture construct, populate, verify, certify, and manipulate a distributed composite transaction object to effect transport of goods over the US/Mexico border.
In order to promote international confidence that the US and Russia are disarming per their commitments under Article 6 of the Non-Proliferation Treaty, an international verification regime may be applied to US and Russian excess fissile materials. Initially, it is envisioned that this verification regime would be applied at storage facilities; however, it should be anticipated that the verification regime would continue throughout any material disposition activities, should such activities be pursued. Once the materials are accepted into the verification regime, it is assumed that long term monitoring will be used to maintain continuity of knowledge. The requirements for long term storage monitoring include unattended operation for extended periods of time, minimal intrusiveness on the host nation's safety and security activities, data collection incorporating data authentication, and monitoring redundancy to allow resolution of anomalies and to continue coverage in the event of equipment failures. Additional requirements include effective data review and analysis processes, operation during storage facility loading, procedures for removal of inventory items for safety-related surveillance, and low cost, reliable equipment. A monitoring system might include both continuous monitoring of storage containers and continuous area monitoring. These would be complemented with periodic on-site inspections. A fissile material storage facility is not a static operation. Initial studies have shown there are a number of valid reasons why a host nation may need to remove material from the storage facility. A practical monitoring system must be able to accommodate necessary material movements.
Deep-reactive ion etching (DRIE) of silicon, also known as high-aspect-ratio silicon etching (HARSE), is distinguished by fast etch rates (~3 µm/min), crystal orientation independence, anisotropy, vertical sidewall profiles and CMOS compatibility. By using through-wafer HARSE and stopping on a dielectric film placed on the opposite side of the wafer, freestanding dielectric membranes were produced. Dielectric membrane-based sensors and actuators fabricated in this way include microhotplates, flow sensors, valves and magnetically-actuated flexural plate wave (FPW) devices. Unfortunately, low-stress silicon nitride, a common membrane material, has an appreciable DRIE etch rate. To overcome this problem HARSE can be followed by a brief wet chemical etch. This approach has been demonstrated using KOH or HF/nitric/acetic etchants, both of which have significantly smaller etch rates on silicon nitride than does DRIE. Composite membranes consisting of silicon dioxide and silicon nitride layers are also under evaluation due to the higher DRIE selectivity to silicon dioxide.
For more than two decades, risk analysts have relied on powerful logic-based models to perform their analyses. However, the applicability of these models has been limited because they can be complex and expensive to develop. Analysts must frequently start from scratch when analyzing a new (but similar) system because the understanding of how the system works exists only in the mind of the analyst and is only incompletely instantiated in the actual logic model. This paper introduces the notion of using explicit object-oriented system models, such as those embodied in computer-aided software engineering (CASE) tools, to document the analyst`s understanding of the system and appropriately capture how the system works. It also shows that from these models, standard assessment products, such as fault trees and event trees, can be automatically derived.
The authors describe the design, modeling, fabrication and initial testing of a new test structure for friction measurement in MEMS. The device consists of a cantilevered forked beam and a friction pad attached via a hinge. Compared to previous test structures, the proposed structure can measure friction over much larger pressure ranges, yet occupies one hundred times less area. The placement of the hinge is crucial to obtaining a well-known and constant pressure distribution in the device. Static deflections of the device were measured and modeled numerically. Preliminary results indicate that friction pad slip is sensitive to friction pad normal force.
In this paper a hybrid, finite element/boundary element method which can be used to solve for particle diffusion in semi-infinite domains containing geometric obstructions and a variable advective field is presented. In previous work either boundary element or finite element/difference methods were used to solve for particle concentrations in an advective domain. These methods of solution had a number of limitations. Due to limitations in computing spatially dependent Green's functions, the boundary element method of solution was limited to domains containing only constant advective fields, and due to its inherent formulation, finite element/difference methods were limited to domains of finite spatial extent. Thus each method of solution was applicable precisely where the other was limited. In this paper it is proposed to split the total domain into two sub-domains, in each of which one method of solution is applicable. For each of these sub-domains, the appropriate solution method is used, thereby producing a general method of solution for the total semi-infinite domain.
Recent developments in pulsed power technology demonstrate use of intense radiation sources (Z pinches) for driving planar shock waves in samples with spatial dimensions larger than possible with other radiation sources. Initial indications are that Z pinch sources can be used to produce planar shock waves in samples with diameters of a few millimeters and thicknesses approaching one half millimeter. These dimensions allow increased accuracy of both shock velocity and particle velocity measurements. The Z pinch radiation source uses imploding metal plasma induced by self-magnetic fields applied to wire arrays to produce high temperature x-ray environments in vacuum hohlraum enclosures. Previous experiments have demonstrated that planar shock waves can be produced with this approach. A photograph of a wire array located inside the vacuum hohlraum is shown here. Typically, a few hundred individual wires are used to produce the Z pinch source. For the shock wave experiments being designed, arrays of 120 to 240 tungsten wires with an array diameter of 40 mm and individual wire diameters of about 10 µm are used. Preliminary experiments have been performed on the Z pulsed radiation source to demonstrate the ability to obtain VISAR measurements in the Z accelerator environment. Analysis of these results indicates that another effect, not initially anticipated, is an apparent change in refractive index that occurs in the various optical components used in the system. This effect results in an apparent shift in the frequency of reflected laser light, and causes an error in the measured particle velocity. Experiments are in progress to understand and minimize this effect.
Chemical sensor arrays are an alternative to the tedious development of highly specific single-analyte detectors. Recent efforts have focused on the chemical and physical diversity of interface materials for SAW sensor arrays. However, the issues of wide dynamic range and high sensitivity must also be addressed for sensor arrays to compete in applications requiring low detection limits. Because SAW devices respond in proportion to change in mass per nominal unit area of the device surface, sensitivity is enhanced by surface modification with high-area, thin-film coating materials: a greater mass of analyte is adsorbed at a given ambient concentration. The authors are exploring several classes of electrochemically prepared high-area films, materials whose formulations and processing are well documented for applications other than chemical sensors. They present results from films formed by anodization, chemical conversion, and electroplating, yielding surface area enhancements as high as 170x.
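The scaling argument behind area-enhanced coatings is direct, and can be illustrated with hypothetical numbers (the function and parameter values below are for illustration only, not measured values from this work): adsorbed mass per nominal device area scales with the true surface area, so a 170x area enhancement adsorbs roughly 170x the analyte mass at the same ambient concentration.

```python
# Back-of-the-envelope sketch of why surface-area enhancement raises SAW
# sensitivity; all numbers are illustrative, not from the paper.
def adsorbed_mass_per_area(ambient_conc, partition_coeff, enhancement):
    """Adsorbed mass per nominal unit area for an area-enhanced film."""
    return ambient_conc * partition_coeff * enhancement

flat = adsorbed_mass_per_area(1e-9, 100.0, 1.0)     # smooth reference film
rough = adsorbed_mass_per_area(1e-9, 100.0, 170.0)  # 170x area enhancement
```

Since a SAW device responds in proportion to this mass loading, the response, and hence the detection limit, improves by the same factor to first order.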
The development and application of energy technologies for all aspects from generation to storage have improved dramatically with the advent of advanced computational tools, particularly modeling and simulation. Modeling and simulation are not new to energy technology development, and have been used extensively ever since the first commercial computers were available. However, recent advances in computing power and access have broadened their extent of use, and the increased fidelity (i.e., accuracy) of models made possible by greatly enhanced computing power has shifted the balance point between modeling and experimentation. The complex nature of energy technologies has motivated researchers to use these tools to better understand performance, reliability, and cost issues related to energy. The tools originated in sciences such as the strength of materials (nuclear reactor containment vessels); physics, heat transfer and fluid flow (oil production); chemistry, physics, and electronics (photovoltaics); and geosciences and fluid flow (oil exploration and reservoir storage). Other tools include mathematics, such as statistics, for assessing project risks. This paper describes a few advancements made possible by these tools and explores the benefits and costs of their use, particularly as they relate to the acceleration of energy technology development. The computational complexity ranges from basic spreadsheets to complex numerical simulations using hardware ranging from personal computers (PCs) to Cray computers. In all cases, the benefits of using modeling and simulation relate to lower risks, accelerated technology development, or lower cost projects.
The author presents a novel method for calculating the penetration of soft targets by hard projectiles by using a combination of ALE and contact surface techniques. This method allows the bifurcation in the softer material (at the point of the projectile) to be represented without sacrificing the Lagrangian representation of either the harder material or the contact interface. A series of calculations using this method show good agreement with the experimental data of Forrestal et al. This method may prove useful for a range of semi-fluid/structure interactions with friction, including simulations of manufacturing processes.
The Integrated Safety Management System (ISMS) was established to define a framework for the essential functions of managing work safely. There are five Safety Management Functions in the model of the ISMS process: (1) work planning, (2) hazards analysis, (3) hazards control, (4) work performance, and (5) feedback and improvement. Recent activities at the Radioactive and Mixed Waste Management Facility (RMWMF) underscored the importance and effectiveness of integrating the ISMS process to safely manage high-hazard work with a minimum of personnel in a timely and efficient manner. This report describes how project personnel followed the framework of the ISMS process to successfully repackage tritium-contaminated oils. The main objective was to open the boxes without allowing the gaseous tritium oxide, which had built up inside the boxes, to be released into the sorting room. The boxes would be vented out the building stack until tritium concentration levels were acceptable. The carboys would be repackaged into 30-gallon drums and caulked shut. Sealing the drums would decrease the tritium off-gassing into the RMWMF.
Sandia has used flyback transformers for many years, primarily to charge capacitors for capacitive discharge units. Important characteristics of the transformer design are meeting inductance, turns ratio, and high voltage breakdown requirements as well as not magnetically saturating during each energy transfer cycle. Sandia has taken over production responsibility for magnetic components from a previous GE/LM (General Electric/Lockheed Martin) facility in Florida that produced ~50,000 units per year. Vanguard Electronics is working with Sandia to transfer many of these designs to Vanguard's small manufacturing facility in Gardena, CA. The challenge is to achieve the required high reliability and meet all the other electrical requirements with such small quantities of parts, ~100 per year. DOE requirements include high reliability (≤ 3 failures per 10,000 components per 20 years) while meeting numerous other environmental requirements. The basic design and prove-in required four lots of preproduction parts, extensive environmental testing, and numerous design changes. The manufacturing problems that affected performance of the transformer, including encapsulation voids and core alignment, will be presented. In addition, extended life test data predicting the long-term reliability of newly produced transformers will be compared with that of older designs.
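For a sense of scale, the stated requirement of at most 3 failures per 10,000 components over 20 years can be translated into an approximate constant failure rate (this arithmetic assumes failures spread uniformly over the period; it is an illustration, not a figure from the report):

```python
# Hedged arithmetic sketch: convert "≤ 3 failures per 10,000 components
# per 20 years" into an approximate constant failure rate.
HOURS_PER_YEAR = 8766          # average calendar year, including leap years

failures = 3
components = 10_000
years = 20

component_hours = components * years * HOURS_PER_YEAR
lambda_per_hour = failures / component_hours   # failures per component-hour
fit = lambda_per_hour * 1e9                    # FIT: failures per 1e9 hours
```

The result is on the order of a couple of FIT, which illustrates why demonstrating such reliability from lots of ~100 parts per year requires extended life testing rather than direct failure counting.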
This paper presents a Generalized Logistic (gLG) distribution as a unified model for log-domain synthetic aperture radar (SAR) data. This model stems from a special case of the G-distribution known as the G⁰-distribution. The G-distribution arises from a multiplicative SAR model and has the classical K-distribution as another special case. The G⁰-distribution, however, can model extremely heterogeneous clutter regions that the K-distribution cannot. This flexibility is preserved in the unified gLG model, which is capable of modeling non-polarimetric SAR returns from clutter as well as man-made objects. Histograms of these two types of SAR returns have opposite skewness. The flexibility of the gLG model lies in its shape and shift parameters. The shape parameter describes the differing skewness between target and clutter data, while the shift parameter compensates for movements in the mean as the shape parameter changes. A Maximum Likelihood (ML) estimate of the shape parameter gives an optimal measure of the skewness of the SAR data. This measure provides a basis for an optimal target detection algorithm.
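The shape-parameter ML estimate at the heart of this approach can be sketched for a simplified version of the model (this omits the paper's shift parameter and fixes location and scale, so it is an illustration of the principle rather than the full gLG estimator). For the generalized logistic CDF F(x) = (1 + exp(-x))^(-c), the ML estimate of the shape parameter c has a closed form:

```python
# Illustrative sketch, simplified relative to the paper's gLG model:
# sample from a generalized logistic with CDF F(x) = (1 + exp(-x))^(-c)
# and recover the shape parameter c by maximum likelihood. With location
# and scale fixed, c_hat = n / sum(log(1 + exp(-x_i))) in closed form.
import math
import random

def sample_genlogistic(n, c, seed=7):
    """Inverse-CDF sampling: x = -log(u**(-1/c) - 1) for u ~ Uniform(0,1)."""
    rng = random.Random(seed)
    return [-math.log(rng.random() ** (-1.0 / c) - 1.0) for _ in range(n)]

def ml_shape(samples):
    """Closed-form ML estimate of the shape parameter c."""
    return len(samples) / sum(math.log1p(math.exp(-x)) for x in samples)

data = sample_genlogistic(5000, c=2.0)
c_hat = ml_shape(data)
```

Since the skewness of this family is controlled by c, estimating c amounts to measuring the skewness of the data, which is the statistic the detection algorithm exploits to separate target returns from clutter.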
In August 1995, the Nuclear Regulatory Commission (NRC) issued a policy statement proposing improved regulatory decisionmaking by increasing the use of PRA [probabilistic risk assessment] in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. A key aspect in using PRA in risk-informed regulatory activities is establishing the appropriate scope and attributes of the PRA. In this regard, ASME decided to develop a consensus PRA Standard. The objective is to develop a PRA Standard such that the technical quality of nuclear plant PRAs will be sufficient to support risk-informed regulatory applications. This paper presents example recommendations for the systems analysis element of a PRA for incorporation into the ASME PRA Standard.
Deep reactive ion etching (DRIE) of silicon was utilized to fabricate dielectric membrane-based devices such as microhotplates, valves and flexural plate wave (FPW) devices. Through-wafer DRIE is characterized by fast etch rates (~3 µm/min), crystal orientation independence, vertical sidewall profiles and CMOS compatibility. Low-stress silicon nitride, a popular membrane material, has an appreciable DRIE etch rate. To overcome this limitation DRIE can be accompanied by a brief wet chemical etch. This approach has been demonstrated using KOH or HF/nitric/acetic etchants, both of which have significantly lower etch rates on silicon nitride than does DRIE. The DRIE etch properties of composite membranes consisting of silicon dioxide and silicon nitride layers are also under evaluation due to the higher DRIE selectivity to silicon dioxide.
Molecular recognition is an important topic when searching for new, selective coating materials for chemical sensing. Recently, the general idea of molecular recognition in the gas phase was challenged by Grate et al. However, in earlier thickness-shear mode resonator (TSMR) investigations, convincing evidence was presented for specific recognition of particular analyte target molecules. In this study, the authors systematically investigated coatings previously shown to be highly selective, such as the bucket-like cyclodextrins for chiral recognition, Ni-camphorates for the specific detection of the bases pyridine and DMMP (dimethylmethylphosphonate), and phthalocyanines to specifically detect benzene, toluene, and xylene (BTX).
Chemical sensor arrays eliminate the need to develop a high-selectivity material for every analyte. The application of pattern recognition to the simultaneous responses of different microsensors enables the identification and quantification of multiple analytes with a small array. Maximum materials diversity is the surest means to create an effective array for many analytes, but using a single material family simplifies coating development. Here the authors report the successful combination of an array of six dendrimer films with mass-sensitive SAW (surface acoustic wave) sensors to correctly identify 18 organic analytes over wide concentration ranges, with 99.5% accuracy. The set of materials for the array is selected and the results evaluated using Sandia`s Visual-Empirical Region of Influence (VERI) pattern recognition (PR) technique. The authors evaluated eight dendrimer films and one self-assembled monolayer (SAM) as potential SAW array coatings. The 18 organic analytes they examined were: cyclohexane, n-hexane, i-octane, kerosene, benzene, toluene, chlorobenzene, carbon tetrachloride, trichloroethylene, methanol, n-propanol, pinacolyl alcohol, acetone, methyl isobutyl ketone, dimethylmethylphosphonate, diisopropylmethylphosphonate, tributylphosphate, and water.
Chemical-Mechanical-Polishing (CMP), first used as a planarization technology in the manufacture of multi-level metal interconnects for high-density Integrated Circuits (IC), is readily adapted as an enabling technology in MicroElectroMechanical Systems (MEMS) fabrication, particularly polysilicon surface micromachining. The authors have demonstrated that CMP enhances the design and manufacturability of MEMS devices by eliminating several photolithographic definition and film etch issues generated by severe topography. In addition, CMP planarization readily allows multi-level polysilicon structures comprising four or more levels of polysilicon, eliminates design compromise generated by non-planar topography, and provides an avenue for integrating different process technologies. A recent investigation has also shown that CMP is a valuable tool for assuring acceptable optical flatness of micro-optical components such as micromirrors. Examples of these enhancements include: an extension of polysilicon surface-micromachining fabrication to a 5-level technology, a method of monolithic integration of electronics and MEMS, and optically flat micromirrors.
The declining state of the Russian military and precarious Russian economic condition will give the US considerable advantages at the START III bargaining table. Taking the US-RF asymmetries into account, this paper discusses a menu of START III measures the US could ask for, and measures it could offer in return, in attempting to negotiate an equitable treaty. Measures the US might seek in a START III treaty include: further reductions in deployed strategic nuclear warheads; irreversibility of reductions through warhead dismantlement; beginning to bring theater nuclear weapons under mutual control; and increased transparency into the Russian nuclear weapons complex. The US may, however, wish to apply its bargaining advantages to attempting to achieve the first steps toward two long-range goals that would enhance US security: bringing theater nuclear weapons into the US-RF arms control arena, and increasing transparency into the Russian nuclear weapons complex. In exchange for measures relating to these objectives, the US might consider offering to Russia: further strategic weapons reductions approaching levels at which the Russians believe they could maintain a degree of parity with the US; measures to decrease the large disparities in potential delivery-system uploading capabilities that appear likely under current START II/START III scenarios; and financial assistance in achieving START II/START III reductions as rapidly as is technically possible.
This paper presents an overview of the issues associated with applying a domain-decomposition message-passing paradigm to the parallel implementation of both explicit and semi-implicit projection algorithms. The use of an element-based domain decomposition with an efficient solution strategy for the pressure field is shown to yield a scalable, parallel solution method capable of treating complex flow problems where high-resolution grids are required. In addition, the use of an SSOR or Jacobi preconditioned conjugate gradient solver with an A-conjugate projection reduces the computational time for the solution of the pressure field, and yields parallel efficiencies above 80% for computations with O(250) elements per processor. The parallel projection solver is verified using a series of 2-D and 3-D benchmarks designed to evaluate time-accurate flow solution methods. Finally, the extension of the projection algorithm to reacting flows is demonstrated for a time-dependent vortex-shedding problem.
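The Jacobi-preconditioned conjugate gradient approach behind the pressure solve can be illustrated compactly; the following is a minimal serial sketch on a 1-D Poisson matrix standing in for the pressure system, not the authors' parallel, element-based implementation.

```python
import numpy as np

def pcg(A, b, minv, tol=1e-10, maxiter=500):
    """Jacobi-preconditioned conjugate gradient (minv = 1 / diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = minv * r              # apply the diagonal (Jacobi) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson (tridiagonal) matrix as a stand-in for the pressure system
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))
```

In the parallel setting described by the abstract, the matrix-vector products and dot products above are the operations that must be distributed across the element-based domain decomposition.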
The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.
Despite evidence of significant management contributions to the causes of major accidents, recent events at Millstone Nuclear Power Station in the US and Ontario Hydro in Canada might lead one to conclude that the significance of safety culture, and the role of management in developing and maintaining an appropriate safety culture, are either not being understood or not being taken seriously as integral to the safe operation of some complex, high-reliability operations. It is the purpose of this paper to address four aspects of management that are particularly important to safety culture, and to illustrate how development of an appropriate safety culture is more a matter of common sense than rocket science.
Emergent technologies often suffer from a lack of an installed manufacturing base and an obvious dominant manufacturing technique. Firms that base their search for competitive advantage on emergent disruptive technologies must make hard production choices and endure major manufacturing discontinuities. The authors, as well as many other firms, are now facing these challenges with the embrace of microsystems technologies. They add to the literature by providing a set of criteria for firms investing in emergent disruptive technologies. Sandia has long been recognized as a pioneer in the development of new manufacturing techniques. Microsystems is just the latest in a long line of manufacturing technologies that have been considered for mission-critical system applications. The authors, as well as others, have had to make the hard choice of investing in specific microsystems manufacturing techniques. Important considerations in the technique choice include: the existing internal manufacturing base, commonality with existing commercial manufacturing infrastructure, current and projected critical performance characteristics, learning curves, the promise to add new but un-thought-of functionality to existing systems, and the anticipated ability to qualify devices built from the technique for mission-critical applications.
For many modern day portable electronic applications, low power high speed devices have become very desirable. Very high values of f{sub T} and f{sub MAX} have been reported with InGaAs/InP heterojunction bipolar transistors (HBTs), but only under high bias and high current level operating conditions. An InGaAs/InP ultra-low-power HBT with f{sub MAX} greater than 10 GHz operating at less than 20 {micro}A has been reported for the first time in this work. The results are obtained on a 2.5 x 5 {micro}m{sup 2} device, corresponding to less than 150 A/cm{sup 2} of current density. These are the lowest current levels at which f{sub MAX} {ge} 10 GHz has been reported.
In the past, most defense microelectronics components were packaged in ceramic, hermetic enclosures. Plastic-encapsulated microcircuits (PEMs), by contrast, are not hermetic because the plastic molding compounds are permeable to moisture. This lack of hermeticity creates an unknown liability, especially with respect to corrosion of the metallization features. This potential liability must be addressed to ensure long-term reliability of these systems is maintained under conditions of long-term dormant storage. However, the corrosion process is difficult to monitor because it occurs under the encapsulating plastic and is therefore not visible. The authors have developed techniques that allow them to study corrosion of Al bondpads and traces under relevant atmospheric corrosion conditions. The cornerstone of this capability is the ATC 2.6, a microelectronic test device designed at Sandia National Laboratories. Corrosion tests were performed by exposing test chips to aggressive environments. The electrical response of the ATC indicated an increase in bondpad resistance with exposure time. Note that the change in resistance is not uniform from one bondpad to another. This illustrates the stochastic nature of the corrosion process. The change in resistance correlated with visual observation of corrosion of the bondpads on the unencapsulated test chips.
Multispectral image analysis of magnetic resonance imaging (MRI) data has been performed using an empirically-derived clustering algorithm. This algorithm groups image pixels into distinct classes which exhibit similar response in the T{sub 2} 1st and 2nd-echo, and T{sub 1} (with and without gadolinium) MRI images. The grouping is performed in an n-dimensional mathematical space; the n-dimensional volumes bounding each class define each specific tissue type. The classification results are rendered again in real-space by color-coding each grouped class of pixels (associated with differing tissue types). This classification method is especially well suited for class volumes with complex boundary shapes, and is also expected to robustly detect abnormal tissue classes. The classification process is demonstrated using a three-dimensional data set of MRI scans of a human brain tumor.
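The report's empirically-derived clustering algorithm is not spelled out, so the general idea of grouping pixels by their multi-band response can be illustrated with ordinary k-means as a stand-in; the synthetic four-band data and class means below are assumptions for demonstration only.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
h, w = 32, 32
# Four synthetic "bands" per pixel, standing in for T2 1st/2nd echo and
# T1 pre/post-gadolinium responses; two well-separated tissue classes
tissue_a = rng.normal([1.0, 2.0, 3.0, 4.0], 0.1, size=(h * w // 2, 4))
tissue_b = rng.normal([4.0, 3.0, 2.0, 1.0], 0.1, size=(h * w // 2, 4))
pixels = np.vstack([tissue_a, tissue_b])

# Group pixels in the 4-dimensional feature space; each label corresponds
# to a tissue class that would be color-coded back into image space
centroids, labels = kmeans2(pixels, 2, minit="++", seed=2)
print(np.bincount(labels))
```

Rendering the result back in real space amounts to reshaping `labels` to the image dimensions and mapping each class to a color.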
Researchers at the National Wind Technology Center have identified a need to acquire data on the rotor of an operating wind turbine at precisely the same time as other data is acquired on the ground or on a non-rotating part of the wind turbine. The researchers will analyze that combined data with statistical and correlation techniques to clearly establish phase information and loading paths and insights into the structural loading of wind turbines. A data acquisition unit has been developed to acquire the data from the rotating system at precise universal times specified by the user. The unit utilizes commercial data acquisition hardware, spread-spectrum radio modems, and a Global Positioning System (GPS) receiver as well as a custom-built programmable logic device. A prototype of the system is now operational, and initial field deployment is anticipated this summer.
A specialized hyperspectral imager has been developed that preprocesses the spectra from an image before the light reaches the detectors. This "optical computer" does not allow the flexibility of digital post-processing. However, the processing is done in real time and the system can examine {approximately} 2 x 10{sup 6} scene pixels/sec. Therefore, outdoors it could search for pollutants, vegetation types, minerals, or man-made objects. On a high-speed production line it could identify defects in sheet products like plastic wrap or film, or on painted or plastic parts. ISIS is a line-scan imager. A spectrally dispersed slit image is projected on a Spatial Light Modulator (SLM). The SLM is programmed to take the inner product of the spectral intensity vector and a spectral basis vector. The SLM directs the positive and negative parts of the inner product to different linear detector arrays so the signal difference equals the inner product. We envision a system with one telescope and {approximately} 4 SLMs.
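The positive/negative split performed by the SLM can be sketched numerically; the spectrum and basis vector below are made-up values used only to show that differencing the two detector signals recovers the inner product.

```python
import numpy as np

spectrum = np.array([0.2, 0.5, 1.0, 0.7, 0.3])   # spectral intensity vector
basis = np.array([-0.5, 0.1, 0.8, 0.1, -0.5])    # spectral basis vector

pos = np.clip(basis, 0, None)    # positive part, routed to detector array 1
neg = np.clip(-basis, 0, None)   # negative part, routed to detector array 2

# The difference of the two detector signals equals the full inner product
signal = spectrum @ pos - spectrum @ neg
print(signal)
```

Because detectors can only measure non-negative intensities, splitting the basis vector into its positive and negative parts is what lets an optical system compute a signed inner product.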
Engineered Surety Using the Risk Equation (EnSURE) is a new approach being developed by Sandia National Laboratories for determining and mitigating risk. The EnSURE approach is based on the risk equation, R = (Pa)(1-Pe)(C), where R is risk, Pa is the likelihood of attack, Pe is the system effectiveness, and C is the consequence. EnSURE considers each of the components of risk to help in assessing surety (e.g., security, safety, environmental) and providing for the most cost-effective ways to reduce risk. EnSURE is intended to help in evaluating and reducing the risk from either man-caused or natural events. It will help the decision-makers identify possible targets, evaluate the consequences of an event, assess the risk based on the threat and the existing conditions and then help in the application of mitigating measures. EnSURE is in the development stages. It builds on existing and ongoing development activities at Sandia, as well as the considerable work done in the fields of consequence analysis, risk analysis and intelligence. The components of EnSURE include consequences, constraints, threat, target/goal identification, facility/process characterization, evaluation and analysis, system improvement, and decision making. This paper provides a brief description of EnSURE.
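A minimal sketch of the risk equation as stated; the numeric values are illustrative assumptions, not EnSURE-derived data.

```python
def risk(p_attack, p_effectiveness, consequence):
    """EnSURE risk equation: R = Pa * (1 - Pe) * C."""
    return p_attack * (1.0 - p_effectiveness) * consequence

# Example: a likely attack (Pa = 0.8) against a fairly effective system
# (Pe = 0.9) with a consequence scored at 100 units
baseline = risk(0.8, 0.9, 100.0)
# Raising system effectiveness to 0.99 cuts the residual risk tenfold
upgraded = risk(0.8, 0.99, 100.0)
print(baseline, upgraded)
```

The example shows the lever the equation exposes: since risk scales with (1 - Pe), improvements in system effectiveness reduce risk multiplicatively.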
This contribution describes three interoperability scenarios for the ATM Security Message Exchange (SME) protocol. These scenarios include network-wide signaling support for the Security Services Information Element (SSIE), partial signaling support where the SSIE is only supported in private or workgroup ATM networks, and the case where the SSIE is not supported by any network elements (except those that implement security services). Explanatory text is proposed for inclusion in Section 2.3 of the ATM Security Specification, Version 1.0.
Department of Energy and Defense Programs systems are becoming increasingly reliant on the use of optical technologies that must perform under a range of ionizing radiation environments. In particular, the radiation response of materials under consideration for applications in direct optical initiation (D.O.I.) schemes must be well characterized. In this report, transient radiation effects observed in a KD*P crystal are characterized. Under gamma exposure with 2 MeV photons in a 20--30 nsec pulse, the authors observe induced absorption at 1.06 {micro}m that causes a peak decrease in overall sample transmittance of only 10%. This induced loss is seen to recover fully within the first 30 {micro}sec.
Department of Energy and Defense Programs systems are becoming increasingly reliant on the use of optical technologies that must perform under a range of ionizing radiation environments. In particular, the radiation response of materials under consideration for applications in direct optical initiation (D.O.I.) schemes must be well characterized. In this report, transient radiation effects observed in Schott filter glass S-7010 are characterized. Under gamma exposure with 2 MeV photons in a 20--30 nsec pulse, the authors observe strong initial induced fluorescence in the red region of the spectrum followed by significant induced absorption over the same spectral region. Peak induced absorption coefficients of 0.113 cm{sup {minus}1} and 0.088 cm{sup {minus}1} were calculated at 800 nm and 660 nm respectively.
From July 29 to 31, 1997, the Surety Assessment Center at Sandia National Laboratories hosted the second international symposium on High Consequence Operations Safety, HCOSSII. The two and one-half day symposium allowed participants to share strategies, methodologies, and experiences in high consequence engineering and system design. The symposium addressed organizational influences on high consequence safety, assessment and analysis processes, lessons-learned from high consequence events, human factors in safety, and software safety. A special session at the end of the symposium featured a presentation by Federal Nuclear Center--All Russian Research Institute of Experimental Physics and Sandia National Laboratories personnel on their joint efforts to establish the International Surety Center for Energy Intensive and High Consequence Systems and Infrastructures.
Often the drilling of an oil well is followed by a logging process to characterize the region immediately surrounding the well bore. The electromagnetic (EM) induction tool, which provides the formation resistivity, is among the most frequently run logs. A preliminary study has been conducted to analyze the feasibility of three-dimensional (3D) electromagnetic imaging from a single borehole. The logging tool consists of a vertical magnetic dipole source and multiple 3 component magnetic field receivers offset at different distances from the source. Synthetic data calculated with a 3D finite difference code demonstrate that the phase of the horizontal magnetic fields provides the critical information on the three dimensionality of the medium. A 3D inversion algorithm is then employed to demonstrate the plausibility of 3D inversion using 3 component magnetic field data. Finally, problems associated with introducing biased noise into the horizontal components of the field through misalignment of the logging tool are discussed.
The United States conducted over 100 atmospheric nuclear tests at the Nevada Test Site from 1951 through 1962. Some of the earliest tests caused unexpected damage, primarily broken glass and cracked plaster, in Las Vegas and other surrounding communities. To address this problem, Sandia initiated a program to monitor and predict the pressure waves around NTS. Infrasound recording systems were developed, then fielded for all tests beginning with Operation Buster in October 1951. Investigators soon discovered that near-surface temperature inversions and wind profiles caused the damaging pressures in Las Vegas. A typical test was recorded at about a dozen stations from the Control Point on NTS to as far away as Pasadena, CA. In addition, some tests in the South Pacific were monitored, as well as numerous chemical explosions. Strip charts recorded signals in the frequency band from 0.05 to 30 Hz, and the paper tapes were archived at Sandia in the early 1970s. The NTS events ranged in yield from below 1 ton to 74 kilotons; source altitudes varied from near ground level (including some cratering experiments) to as high as 11 km. The resulting data contain a wealth of information on the source function, yield scaling and regional propagation of infrasound signals from atmospheric explosions. The renewed interest in infrasonic monitoring for CTBT verification has prompted the authors to exhume some of the archived records. The authors plan to digitize the signals from several tests and evaluate their applicability to CTBT issues. In addition, they will collect any existing parametric measurements for these records (arrival times, amplitudes, etc.). All data will be converted to CSS database format and made available to the research community. If appropriate, the resulting information could also be included in the Knowledge Base under development for CTBT monitoring.
The US Department of Energy is funding the development of the Multi-spectral Thermal Imager (MTI), a satellite-based multi-spectral (MS) thermal imaging sensor scheduled for launch in October 1999. MTI is a research and development (R and D) platform to test the applicability of multispectral and thermal imaging technology for detecting and monitoring signs of proliferation of weapons of mass destruction. During its three-year mission, MTI will periodically record images of participating government, industrial and natural sites in fifteen visible and infrared spectral bands to provide a variety of image data associated with weapons production activities. The MTI satellite will have spatial resolution in the visible bands that is five times better than LANDSAT TM in each dimension and will have five thermal bands. In this work, the authors quantify the separability between specific materials and the natural background by applying Receiver Operating Characteristic (ROC) analysis to the residual errors from a linear unmixing. The authors apply the ROC analysis to quantify performance of the MTI. They describe the MTI imager and simulate its data by filtering HYDICE hyperspectral imagery both spatially and spectrally and by introducing atmospheric effects corresponding to the MTI satellite altitude. They compare and contrast the individual effects on performance of spectral resolution, spatial resolution, atmospheric corrections, and varying atmospheric conditions.
A new failure analysis technique has been developed for backside and frontside localization of open and shorted interconnections on ICs. This scanning optical microscopy technique takes advantage of the interactions between IC defects and localized heating using a focused infrared laser ({lambda} = 1,340 nm). Images are produced by monitoring the voltage changes across a constant current supply used to power the IC as the laser beam is scanned across the sample. The method utilizes the Seebeck Effect to localize open interconnections and Thermally-Induced Voltage Alteration (TIVA) to detect shorts. The interaction physics describing the signal generation process and several examples demonstrating the localization of opens and shorts are described. Operational guidelines and limitations are also discussed.
The near eutectic 60Sn-40Pb alloy is the most commonly used solder for electrical interconnections in electronic packages. This alloy has a number of processing advantages (suitable melting point of 183 C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes the prediction of solder joint lifetime complex. A viscoplastic constitutive model for solder with an internal state variable that tracks microstructural evolution is currently under development. This constitutive model was implemented into several finite element codes. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of a ball grid array (BGA) solder interconnect. BGAs with both homogeneous and heterogeneous initial microstructures were evaluated. In this paper, the constitutive model used to describe the solder will first be briefly discussed. The results of computational studies to determine the thermomechanical response of BGA solder interconnects will then be presented.
Radiation detectors have been included in several remote monitoring field trial systems to date. The present study considers detectors at Embalse, Argentina, and Oarai, Japan. At Embalse four gamma detectors have been operating in the instrumentation tubes of spent fuel storage silos for up to three years. Except for minor fluctuations, three of the detectors have operated normally. One of the detectors appears never to have operated correctly. At Oarai two gamma detectors have been monitoring a spent-fuel transfer hatch for over 18 months. These detectors have operated normally throughout the period, although one shows occasional noise spikes.
Materials Protection, Control and Accounting (MPC and A) equipment upgrades are complete at the Institute of Theoretical and Experimental Physics (ITEP), a site that has significant quantities of weapons-potential nuclear materials. Cooperative work was initiated at this Moscow facility as a part of the US-Russian program to upgrade MPC and A systems. An initial site visit and assessment were conducted in September 1996 to establish communication between ITEP, the US Department of Energy (DOE), and participating US National Laboratories. Subsequently, an agreement was reached to develop two master plans for MPC and A upgrades. Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL) assisted in developing a plan for Material Control and Accounting (MC and A) upgrades, and Sandia National Laboratories (SNL) assisted in developing a plan for Physical Protection System (PPS) upgrades. The MC and A plan included MC and A training, a mass measurement program, nondestructive assay instrumentation, item identification (bar coding), physical inventory taking, portal and hand-held nuclear material monitors, and a nuclear materials accounting system. The PPS plan included basic PPS design training, Central Alarm Station (CAS) relocation and equipment upgrades, a site and critical-building access control system, intrusion detection, alarm assessment, and guard force communications.
The US Department of Energy (DOE) and the Russian Special Scientific and Production State Enterprise Eleron have teamed to lead a project to enhance the overall security of Russian Ministry of Atomic Energy (MINATOM) transportation of Special Nuclear Material (SNM) shipments. The effort is called the Railcar Transportation Security Project and is part of the overall DOE Material Protection, Control, and Accounting (MPC and A) program addressing the enhancement of nuclear material control, accounting, and physical protection for Russian SNM. The goal of this MPC and A project is to significantly increase the security of Russian MINATOM highly enriched SNM rail shipments. To accomplish this, the MPC and A Railcar Transportation Security program will provide an enhanced, yet cost effective, railcar transportation security system. The system incorporates a balance between the traditional detection, communications, delay, and response security elements to significantly improve the security of MINATOM SNM shipments. The strategy of this program is to use rapid upgrades to implement mature security technologies as quickly as possible. The rapid upgrades emphasize rapidly deployable delay elements, enhanced radio communications, and intrusion detection and surveillance. Upgraded railcars have begun operation during FY98. Subsequent upgrades will build upon the rapid upgrades and eventually be integrated into a final deployed system configuration. This paper provides an overview of the program, with a summary of performance of the deployed railcars.
A combined engineering and geochemistry approach is recommended for the stabilization of waste in decommissioned tanks and contaminated soils at the AX Tank Farm, Hanford, WA. A two-part strategy of desiccation and gettering is proposed for treatment of the in-tank residual wastes. Dry portland cement and/or fly ash are suggested as an effective and low-cost desiccant for wicking excess moisture from the upper waste layer. Getters work by either ion exchange or phase precipitation to reduce radionuclide concentrations in solution. The authors recommend the use of specific natural and man-made compounds, appropriately proportioned to the unique inventory of each tank. A filler design consisting of multilayered cementitious grout with interlayered sealant horizons should serve to maintain tank integrity and minimize fluid transport to the residual waste form. External tank soil contamination is best mitigated by placement of grouted skirts under and around each tank, together with installation of a cone-shaped permeable reactive barrier beneath the entire tank farm. Actinide release rates are calculated from four tank closure scenarios ranging from no action to a comprehensive stabilization treatment plan (desiccant/getters/grouting/RCRA cap). Although preliminary, these calculations indicate significant reductions in the potential for actinide transport as compared to the no-treatment option.
A database has been created for use with the Jacobs-Cowperthwaite-Zwisler-3 equation-of-state (JCZ3-EOS) to determine thermochemical equilibrium for detonation and expansion states of energetic materials. The JCZ3-EOS uses the exponential 6 intermolecular potential function to describe interactions between molecules. All product species are characterized by r*, the radius of the minimum pair potential energy, and {var_epsilon}/k, the well depth energy normalized by Boltzmann`s constant. These parameters constitute the JCZS (S for Sandia) EOS database describing 750 gases (including all the gases in the JANNAF tables), and have been obtained by using Lennard-Jones potential parameters, a corresponding states theory, pure liquid shock Hugoniot data, and fit values using an empirical EOS. This database can be used with the CHEETAH 1.40 or CHEETAH 2.0 interface to the TIGER computer program that predicts the equilibrium state of gas- and condensed-phase product species. The large JCZS-EOS database permits intermolecular potential based equilibrium calculations of energetic materials with complex elemental composition.
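The exponential-6 pair potential that underlies the database can be written down directly; the stiffness parameter alpha and the numeric values below are illustrative assumptions, not JCZS database entries.

```python
import math

def exp6(r, r_star, eps, alpha=13.0):
    """Exponential-6 pair potential.

    r_star is the radius of the minimum, eps the well depth, and alpha
    an assumed repulsive-stiffness parameter; the minimum value is -eps
    at r = r_star.
    """
    return eps * (6.0 / (alpha - 6.0) * math.exp(alpha * (1.0 - r / r_star))
                  - alpha / (alpha - 6.0) * (r_star / r) ** 6)

# At the minimum radius the potential equals the (negative) well depth;
# eps here plays the role of {var_epsilon}/k in kelvin
print(exp6(3.8, 3.8, 100.0))   # ~ -100.0
```

The two parameters r* and {var_epsilon}/k reported per species in the database are exactly the r_star and eps arguments of this functional form.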
The ability to locate an individual atom on a surface, remove it in a controlled fashion, and determine its chemical identity makes the atom-probe field-ion microscope an extremely powerful tool for the analysis of solid surfaces. By itself, the field ion microscope has contributed significantly to our understanding of surface atomic structure, single-atom surface diffusion, and the detailed interactions that occur between atoms and defects on surfaces. When used in combination with the atom-probe mass spectrometer there have been several additional areas within the traditional definition of "surface science" where the chemical identification capability of the atom probe has led to new insights. In this paper these applications are reviewed, focusing on two specific areas: surface segregation in intermetallic alloys and chemical reactions on metal surfaces. The equilibrium distribution of component species in the near-surface region of a solid-solution alloy may be different from the distribution in the bulk.
The time-averaged, daylight fractional statistical cloud coverages as a function of cloud optical thickness and selected values of cloud transmission were determined for various geographic areas using D1 data from the International Satellite Cloud Climatology Project (ISCCP). The regions of interest chosen for this report are: global earth, global sea, global land, global coast, and the six 30{degree}-latitude bands over sea, over land, and over coast with longitude 0{degree}--360{degree}. The statistical information is deduced from ISCCP satellite measurements of terrestrial, atmospheric, and cloud properties; in particular, the results are based on the ISCCP D1 database.
The Waste Isolation Pilot Plant (WIPP) is under development by the US Department of Energy (DOE) for the geologic disposal of transuranic (TRU) waste that has been generated at government defense installations in the United States. The WIPP is located in an area of low population density in southeastern New Mexico. Waste disposal will take place in excavated chambers in a bedded salt formation approximately 655 m below the land surface. This presentation describes a performance assessment (PA) carried out at Sandia National Laboratories (SNL) to support the Compliance Certification Application (CCA) made by the DOE to the US Environmental Protection Agency (EPA) in October, 1996, for the certification of the WIPP for the disposal of TRU waste. Based on the CCA supported by the PA described in this presentation, the EPA has issued a preliminary decision to certify the WIPP for the disposal of TRU waste. At present (April 1998), it appears likely that the WIPP will be in operation by the end of 1998.
The appropriate treatment of uncertainty is now widely recognized as an essential component of performance assessments (PAs) for complex systems. Viewed at a high level, the uncertainty in such analyses can typically be partitioned into two types: (1) stochastic uncertainty, which arises because the system can behave in many different ways and is thus a property of the system itself, and (2) subjective uncertainty, which arises from a lack of knowledge about quantities that are believed to have (or, at least, are assumed to have) fixed values and is thus a property of the analysts carrying out the study. The 1996 PA for the Waste Isolation Pilot Plant (WIPP) carried out at Sandia National Laboratories (SNL) is used to illustrate the treatment of these two types of uncertainty in a real analysis. In particular, this PA supported a compliance certification application (CCA) by the US Department of Energy (DOE) to the US Environmental Protection Agency (EPA) for the certification of the WIPP for the geologic disposal of transuranic waste. Insights on the conceptual and computational structure of PAs for complex systems gained from these and other analyses are being incorporated into a new software system under development at SNL to facilitate analyses that maintain a separation between stochastic and subjective uncertainty.
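The separation between the two types of uncertainty is commonly implemented as a nested sampling loop: an outer loop over sampled values of imperfectly known fixed quantities (subjective uncertainty) and an inner loop over the possible futures of the system (stochastic uncertainty). The following toy sketch shows only the structure; the consequence function and distributions are invented for illustration and are not taken from the WIPP PA:

```python
import random
import statistics

random.seed(1)

def consequence(fixed_param, future):
    # Hypothetical stand-in for the real consequence model.
    return fixed_param * future

conditional_means = []
# Outer loop: subjective (epistemic) uncertainty -- alternative values of a
# quantity believed to be fixed but imperfectly known to the analysts.
for _ in range(50):
    fixed_param = random.uniform(0.5, 1.5)
    # Inner loop: stochastic (aleatory) uncertainty -- the many different
    # futures the system itself could follow.
    futures = [random.expovariate(1.0) for _ in range(200)]
    conditional_means.append(
        statistics.mean(consequence(fixed_param, f) for f in futures))

# Each entry is a conditional expectation over stochastic uncertainty;
# the spread across entries displays the effect of subjective uncertainty.
spread = max(conditional_means) - min(conditional_means)
```

Keeping the two loops distinct is what allows the analysis to report a distribution of outcome curves rather than a single blended number.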
Cost or performance targets for new bit technologies can be established with the aid of a drilling cost model. In this paper the authors make simplifying assumptions in a detailed drilling cost model that reduce the comparison of two technologies to a linear function of relative cost and performance parameters. This simple model, or analysis tool, is not intended to provide absolute well cost; rather, it compares the relative costs of different methods or technologies for accomplishing the same drilling task. Comparing the simplified model to the detailed well cost model shows that the simple linear cost model is a very efficient tool for screening new drilling methods, techniques, and technologies based on economic value. The tool divides the space defined by the parameters bit cost, bit life, rate of penetration, and operational cost into two areas separated by a linear boundary. Any set of operating points in one area results in an economic advantage in drilling the well with the new technology, while any set of operating points in the other area indicates that an economic advantage is either questionable or nonexistent. In addition, examining the model results can yield insights into the economics associated with bit performance, life, and cost. This paper includes development of the model, examples of employing the model to develop should-cost or should-perform goals for new bit technologies, a discussion of the economic insights in terms of bit cost and performance, and an illustration of the consequences when the basic assumptions are violated.
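The kind of screening comparison described above can be sketched with the standard drilling cost-per-foot relation, in which bit cost, rig time, and trip time are amortized over the footage a bit drills. The numbers and parameter names below are illustrative assumptions, not values from the paper:

```python
def cost_per_foot(bit_cost, bit_life_ft, rop_ft_per_hr,
                  rig_rate_per_hr, trip_time_hr):
    # Classic cost-per-foot formula: amortize the bit cost plus rig time
    # (drilling plus tripping) over the footage drilled by one bit.
    drilling_hr = bit_life_ft / rop_ft_per_hr
    return (bit_cost + rig_rate_per_hr * (drilling_hr + trip_time_hr)) / bit_life_ft

# Baseline bit vs. a hypothetical new technology (all values invented):
base = cost_per_foot(bit_cost=5000, bit_life_ft=1000, rop_ft_per_hr=20,
                     rig_rate_per_hr=800, trip_time_hr=8)
new = cost_per_foot(bit_cost=20000, bit_life_ft=3000, rop_ft_per_hr=25,
                    rig_rate_per_hr=800, trip_time_hr=8)

advantage = base - new   # positive -> the new technology is economic
```

Sweeping the new technology's bit cost, life, and rate of penetration while holding operational cost fixed traces out exactly the linear break-even boundary the paper describes.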
This report documents the history of the major buildings in Sandia National Laboratories' Technical Area II. It was prepared in support of the Department of Energy's compliance with Section 106 of the National Historic Preservation Act. Technical Area II was designed and constructed in 1948 specifically for the final assembly of the non-nuclear components of nuclear weapons, and was the primary site conducting such assembly until 1952. Both the architecture and location of the oldest buildings in the area reflect their original purpose. Assembly activities continued in Area II from 1952 to 1957, but the major responsibility for this work shifted to other sites in the Atomic Energy Commission's integrated contractor complex. Gradually, additional buildings were constructed and the original buildings were modified. After 1960, the Area's primary purpose was the research and testing of high-explosive components for nuclear weapons. In 1994, Sandia constructed new facilities for work on high-explosive components outside of the original Area II diamond-shaped parcel. Most of the buildings in the area are vacant and Sandia has no plans to use them. They are proposed for decontamination and demolition as funding becomes available.
This work develops some practical approximations needed to simulate a high plasma density volume bounded by walls made of dielectrics or metals which may be either biased or floating in potential. Solving Poisson's equation in both the high-density bulk and the sheath region poses a difficult computational problem due to the large electron plasma frequency. A common approximation is to assume the electric field is computed in the ambipolar approximation in the bulk and to couple this to a sheath model at the boundaries. Unfortunately, this treatment is not appropriate when some surfaces are biased with respect to others and a net current is present within the plasma. This report develops some ideas on the application of quasi-static external electric fields to plasmas and the self-consistent treatment of boundary conditions at the surfaces. These constitute a generalization of Ohm's law for a plasma body that entails solving for the internal fields within the plasma and the potential drop and currents through the sheaths surrounding the plasma.
The Photovoltaic Manufacturing Technology (PVMaT) project is a partnership between the US government (through the US Department of Energy [DOE]) and the PV industry. Part of its purpose is to conduct manufacturing technology research and development to address the issues and opportunities identified by industry to advance photovoltaic (PV) systems and components. The project was initiated in 1990 and has been conducted in several phases to support the evolution of PV industrial manufacturing technology. Early phases of the project stressed PV module manufacturing. Starting with Phase 4A and continuing in Phase 5A, the goals were broadened to include improvement of component efficiency, energy storage and manufacturing, and system or component integration to bring together all elements of a PV product. This paper summarizes PV manufacturers' accomplishments in components, system integration, and alternative manufacturing methods. Their approaches have resulted in improved hardware and PV system performance, better system compatibility, and new system capabilities. Results include new products such as Underwriters Laboratories (UL)-listed AC PV modules, modular inverters, and advanced inverter designs that use readily available and standard components. Work planned in Phase 5A1 includes integrated residential and commercial roof-top systems, PV systems with energy storage, and 300-Wac to 4-kWac inverters.
The Photovoltaic Systems Assistance Center (PVSAC) of Sandia National Laboratories (SNL) has been supporting the development and implementation of off-grid PV hybrid power systems for many years. Technical support has included refining hardware; understanding system design techniques; obtaining operation and maintenance data; and studying use of the energy produced. As part of the program, the PVSAC has provided technical expertise on hybrid systems to many federal agencies including the National Park Service, the Forest Service, the Bureau of Land Management, and the Department of Defense. The goal of these partnerships has been to ensure that reliable and safe PV hybrid systems are specified and procured. At present, a critical review of the performance and costs of several representative PV hybrid systems is underway. This paper presents a summary of the performance and economic analyses conducted on three PV hybrid systems.
In the Phase 2 project, Abacus Controls Inc. conducted research and development on hybrid systems that combine energy from photovoltaics, batteries, and diesel-generators, and demonstrated that they are economically feasible for small power plants in many parts of the world. The Trimode Power Processor reduces the fuel consumption of the diesel-generator to its minimum by presenting itself as the perfect electrical load to the generator. A 30-kW three-phase unit was tested at Sandia National Laboratories to prove its worthiness under actual field conditions. The use of photovoltaics at remote locations, where reliability of supply requires a diesel-generator, lowers operating costs by reducing the run time of the diesel-generator. The numerous benefits include longer times between maintenance for the diesel engine and better power quality from the generator. 32 figs.
This report summarizes the results of a study to develop and evaluate low temperature glass sealing technologies for photovoltaic applications. This work was done as part of Cooperative Research and Development Agreement (CRADA) No. SC95/01408. The sealing technologies evaluated included low melting temperature glass frits and solders. Because glass frit joining required a material with a melting temperature that exceeded the allowable temperature for the active elements on the photovoltaic panels, a localized heating scheme was required for sealing the perimeter of the glass panels. Thermal and stress modeling were conducted to assess the feasibility of this approach and to test strategies designed to minimize heating of the glass panel away from its perimeter. Hardware to locally heat the glass panels during glass frit joining was designed, fabricated, and successfully tested. The same hardware could be used to seal the glass panels using the low temperature solders. Solder adhesion to the glass required metal coating of the glass, and the adhesion strength of the solder was dependent on the surface finish of the glass. Strategies for improving the polyisobutylene (PIB) adhesive currently being used to seal the panels and the use of Parylene coatings as a protective sealant deposited on the photovoltaic elements were also investigated. Starting points for further work are included.
This study was initiated when a new type of breakdown occurred in a high voltage experimental test assembly. An anomalous current pulse was observed, which indicated partial discharges, some leading to total breakdowns. High voltage insulator defects are shown along with their effect on the electrostatic fields in the breakdown region. OPERA electromagnetic field modeling software is used to calculate the fields and present a cause for the discharge. Several design modifications are investigated and one of the simplest resulted in a 25% decrease in the field at the discharge surface.
This paper proposes a definition for a Non-Islanding Inverter. It also presents methods that can be used to implement such an inverter, along with references to prior work on the subject. Justification for the definition is provided both on a theoretical basis and with results from tests conducted at Sandia National Laboratories and Ascension Technology, Inc.
A suite of Probabilistic Risk Assessment Compatible Fire Models (RACFMs) has been developed to represent the hazard posed by a pool fire to weapon systems transported on the B52-H aircraft. These models represent both stand-off scenarios (i.e., the weapon system is outside of the flame zone but exposed to the radiant heat load from the fire) and fully-engulfing scenarios (i.e., the object is fully covered by flames). The approach taken in developing the RACFMs for both scenarios was to consolidate, reconcile, and apply data and knowledge from all available resources, including: data and correlations from the literature, data from an extensive full-scale fire test program at the Naval Air Warfare Center (NAWC) at China Lake, and results from a fire field model (VULCAN). In the past, a single effective temperature, T_f, was used to represent the fire, and the heat flux to an object exposed to a fire was estimated using the relationship for black-body radiation, σT_f^4. Significant improvements have been made by the present approach, which accounts for temperature distributions in fully-engulfing fires and uses the best available correlations to estimate heat fluxes in stand-off scenarios.
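The older single-temperature treatment reduces to a one-line calculation via the Stefan-Boltzmann law. The sketch below illustrates it; the effective fire temperature is an invented illustrative value, not one from the test program:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_flux(t_fire_kelvin):
    # Single effective fire temperature model: q = sigma * T_f**4.
    return SIGMA * t_fire_kelvin ** 4

# A large pool fire is often idealized near ~1000 C (~1273 K); the
# resulting flux is on the order of 150 kW/m^2.
q = blackbody_flux(1273.0)   # W/m^2
```

The limitation is visible immediately: one number stands in for an entire fire, which is why the RACFMs replace it with temperature distributions for engulfing scenarios and correlation-based fluxes for stand-off geometries.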
The Regulations and Training Projects are part of the US-Russian Federation Materials Protection, Control, and Accounting (MPC&A) cooperative program to protect Russian Navy fuels. This paper describes the general status of the projects, progress achieved to date, and long-term plans for further work in producing regulatory documents and training to support this effort. The regulatory development will result in a document set that will include general requirements and rules for the Russian Navy MPC&A as well as specific instructions for operation and maintenance of each facility. The goals of the training program are to instill in managers a culture of sustainable commitment to MPC&A through the understanding of its principles and philosophies. In addition, the training program will help ensure that upgrades are effectively utilized and maintained by training operators and maintenance personnel in MPC&A principles as well as in the detailed operations of the systems.
This paper describes the Active-Bridge Oscillator (ABO), a new concept in high-stability oscillator design. The ABO is a bridge-type oscillator that is easy to design and overcomes many of the operational and design difficulties associated with standard bridge oscillator designs. The ABO will oscillate with a very stable output amplitude over a wide range of operating conditions without the use of automatic level control (ALC). A standard bridge oscillator design requires an ALC to maintain the desired amplitude of oscillation. For this and other reasons, bridge oscillators are not used in mainstream designs and are generally relegated to relatively low-volume, high-performance applications. The Colpitts and Pierce designs are the most popular oscillators but are typically less stable than a bridge-type oscillator.
Smoke from plastics can cause immediate problems in electrical equipment in the form of shorting and increased leakage currents, as well as long-term corrosion (metal loss). The short-term problems can be especially serious for critical control instrumentation such as that found in nuclear reactors or telecommunications systems. The US Nuclear Regulatory Commission and Sandia National Laboratories are sponsoring a program to determine the modes and probabilities of digital equipment failure during exposure to smoke and up to 24 hours after the exposure. Early tests on computer systems have shown that the most common immediate problems are temporary and are likely to be caused by increased leakage currents. High-voltage circuits are especially vulnerable since the charged particles in smoke are drawn to those surfaces. To study failure probabilities, smoke exposure tests with real-time measurements will be carried out to determine how the electrical properties of the environment are affected by smoke concentration and content. Digital communication cable will be included in the tests because temporary shorts that cannot be detected through dc measurements may cause interruptions in communications between computers. The reaction of the equipment to changed electrical properties of the environment will be modeled. Equipment that can be used for testing and modeling is being solicited.
The sequential manner in which materials and processes for a manufactured product are selected is inherently less than optimal. Designers' tendency to choose processes and materials with which they are familiar exacerbates this problem. A method for concurrent selection of materials and a joining process based on product requirements, using a knowledge-based constraint-satisfaction approach, is presented.
A new approach to modeling solar thermal electric plants using the TRNSYS simulation environment is discussed. The TRNSYS environment offers many advantages over currently used tools, including the option to more easily study the hybrid solar/fossil plant configurations that have been proposed to facilitate market penetration of solar thermal technologies. A component library developed for Rankine cycle, Brayton cycle, and solar system modeling is presented. A comparison between KPRO and TRNSYS results for a simple Rankine cycle shows excellent agreement.
At the request of US sponsors Spencer Management Associates (SMA) and Sun◆Lab, China's Center for Renewable Energy Development and former Ministry of Electric Power conducted an initial appraisal of the issues involved with developing China's first solar thermal electric power plant in the sunbelt regions of Tibet or Xinjiang provinces. The appraisal concerns development of a large-scale, grid-connected solar trough or tower project capable of producing 30 or more megawatts of electricity. Several of the findings suggest that Tibet could be a niche market for solar thermal power because a solar plant may be the low-cost option relative to other methods of generating electricity. China has studied the concept of a solar thermal power plant for quite some time. In 1992, it completed a pre-feasibility study for a SEGS-type parabolic trough plant with the aid of Israel's United Development Limited. Because the findings were positive, both parties agreed to conduct a full-scale feasibility study. However, due to funding constraints, the study was postponed. Most recently, Sun◆Lab and SMA asked China to broaden the analysis to include tower as well as trough concepts. The findings of this most recent investigation, completed in November 1997, are the subject of this paper. The main conclusions of all studies conducted to date suggest that a region in the proximity of Lhasa, Tibet, offers the best near-term opportunity within China. The opportunities for solar thermal power plants in other regions of China were also investigated.
Density functional theories (DFT) for inhomogeneous fluids have been used profitably to study the structure of fluids near surfaces and to predict solvation forces, adsorption isotherms, and a variety of surface-induced phase transitions. However, in nearly all cases, only geometries with two symmetry planes (e.g., a fluid near a uniform planar interface or a fluid in a uniform cylindrical pore) have been considered. In this paper the authors discuss the generalization of the DFT to cases with either one or no symmetry planes. They present their computational approach, as well as results for charged cylindrical polyelectrolytes and planar surfaces with inhomogeneous chemistry.
The dual control volume grand canonical molecular dynamics (DCV-GCMD) method, designed to enable the dynamic simulation of a system with a steady-state chemical potential gradient, is first briefly reviewed. A novel implementation of the method, which enables the establishment of a steady-state chemical potential gradient in a multicomponent system without having to insert or delete one of the components, is then presented and discussed.
A quick look at the geothermal industry shows a small industry producing about $1 billion in electric sales annually. The industry is aging and in need of innovative solutions to instrumentation problems. Problem areas are briefly surveyed along with basic instrumentation requirements. The focus of the instrumentation discussion is on high-temperature electronics.
The desire to reduce the time and cost of design engineering on new components or to validate existing designs in new applications is stimulating the development of modeling and simulation tools. The authors are applying a model-based design approach to low and moderate rate versions of the Li/SOCl2 D-size cell with success. Three types of models are being constructed and integrated to achieve maximum capability and flexibility in the final simulation tool. A phenomenology based electrochemical model links performance and the cell design, chemical processes, and material properties. An artificial neural network model improves computational efficiency and fills gaps in the simulation capability when fundamental cell parameters are too difficult to measure or the forms of the physical relationships are not understood. Finally, a PSpice-based model provides a simple way to test the cell under realistic electrical circuit conditions. Integration of these three parts allows a complete link to be made between fundamental battery design characteristics and the performance of the rest of the electrical subsystem.
At Sandia National Laboratories the authors are evaluating the energy and power characteristics of commercially available Li-ion cells. Cells of several different sizes (20 Ah, 1.1 Ah, 0.750 Ah, and ~0.5 Ah) and geometries (cylindrical and prismatic) from several manufacturers were studied. The cells were pulse discharged at increasing currents (50 mA to 1,000 mA) over a range of temperatures (+35 °C to −40 °C) and at different states of charge (4.1 V open-circuit voltage (OCV), fully charged; 3.6 V OCV, partially discharged; and 3.1 V OCV, nearly discharged), and the voltage drop was recorded. The voltage drop was small at and near ambient temperature, indicating that the total cell internal impedance was small under these conditions. However, at −40 °C the voltage drop was significant due to an increase in the cell internal impedance. At a given temperature, the voltage drop increases with decreasing state of charge (SOC) or OCV. The cell impedance and other electrochemical properties as a function of temperature and SOC were also measured. The Ragone data indicate that the specific power and specific energy of Li-ion cells of different sizes are comparable; therefore, scaling up to ~20 Ah does not affect either the energy or the power.
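A pulse-discharge voltage drop translates into a lumped internal-resistance estimate through Ohm's law. The sketch below shows that reduction; the voltages and current are invented illustrative values, not measurements from the paper:

```python
def internal_resistance(ocv_volts, loaded_volts, pulse_amps):
    # Lumped DC internal resistance estimated from the voltage drop
    # between open circuit and the pulse-loaded terminal voltage.
    return (ocv_volts - loaded_volts) / pulse_amps

# Illustrative values only: a small drop near ambient temperature and a
# much larger drop at low temperature for the same 1 A pulse.
r_ambient = internal_resistance(4.1, 4.05, 1.0)
r_cold = internal_resistance(4.1, 3.40, 1.0)
```

Repeating this estimate across temperature and state of charge reproduces the trend reported above: resistance rises sharply at low temperature and at low SOC.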
Romero, Daniel J.; Parma, Edward J.; Busch, Robert D.
Sandia National Laboratories has been chosen by the US Department of Energy as the primary domestic source for the production of molybdenum-99, utilizing the Annular Core Research Reactor. The method to be used to produce 99Mo through the fission of 235U in 93% enriched UO2 is based on the process formerly used by Cintichem, Inc. of Tuxedo, New York. The UO2 is electroplated in a thin coating to the inside of stainless steel Cintichem targets which will be irradiated in the central region of the reactor core. The proposed on-site storage plan for the unirradiated targets is to store them in a dry, secure compartment similar to a file cabinet. Each cabinet drawer will be initially filled with targets and emptied as targets are removed for irradiation. The main objective of this analysis was to postulate and model a set of incredible accident scenarios beyond the proposed storage plan which could possibly induce criticality with the targets in the safe, and to determine the k-effective and its associated standard deviation for these conditions. A parametric analysis was performed using Los Alamos National Laboratory's MCNP (Monte Carlo N-Particle) code, Version 4A.
This report provides an updated set of users' instructions for PRONTO3D. PRONTO3D is a three-dimensional, transient, solid dynamics code for analyzing large deformations of highly nonlinear materials subjected to extremely high strain rates. This Lagrangian finite element program uses an explicit time integration operator to integrate the equations of motion. Eight-node, uniform strain, hexahedral elements and four-node, quadrilateral, uniform strain shells are used in the finite element formulation. An adaptive time step control algorithm is used to improve stability and performance in plasticity problems. Hourglass distortions can be eliminated without disturbing the finite element solution using either the Flanagan-Belytschko hourglass control scheme or an assumed strain hourglass control scheme. All constitutive models in PRONTO3D are cast in an unrotated configuration defined using the rotation determined from the polar decomposition of the deformation gradient. A robust contact algorithm allows for the impact and interaction of deforming contact surfaces of quite general geometry. The Smooth Particle Hydrodynamics method has been embedded into PRONTO3D using the contact algorithm to couple it with the finite element method.
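The explicit time integration operator mentioned above can be illustrated on a single degree of freedom with a central-difference (velocity-Verlet form) update. This is a generic sketch of the technique, not PRONTO3D's actual implementation, and the spring-mass problem is an invented test case:

```python
import math

def explicit_step(u, v, a, dt, mass, f_int, f_ext):
    # Central-difference update: half-step velocity, displacement update,
    # new acceleration from the internal force, then complete the velocity.
    v_half = v + 0.5 * dt * a
    u_new = u + dt * v_half
    a_new = (f_ext - f_int(u_new)) / mass
    v_new = v_half + 0.5 * dt * a_new
    return u_new, v_new, a_new

# Undamped spring-mass: exact solution u(t) = cos(omega*t), omega = sqrt(k/m).
k, m, dt = 100.0, 1.0, 0.001        # dt well below the stability limit 2/omega
f_int = lambda u: k * u             # linear internal (restoring) force
u, v = 1.0, 0.0
a = -f_int(u) / m
for _ in range(1000):                # integrate to t = 1.0 s
    u, v, a = explicit_step(u, v, a, dt, m, f_int, 0.0)
```

The conditional stability of this scheme (the time step must stay below a limit set by the highest element frequency) is the reason the code needs the adaptive time step control described above.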
Sandia National Laboratories/New Mexico (SNL/NM) is a research and development facility that generates many highly diverse, low-volume mixed waste streams. Under the Federal Facility Compliance Act, SNL/NM must treat its mixed waste in storage to meet the Land Disposal Restrictions treatment standards. Since 1989, approximately 70 cubic meters (2,500 cubic feet) of heterogeneous, poorly characterized and inventoried mixed waste that could not be treated as specified in the SNL/NM Site Treatment Plan had been placed in storage. A process was created to sort this legacy waste into sixteen well-defined, properly characterized, and precisely inventoried mixed waste streams (Treatability Groups) and two low-level waste streams ready for treatment or disposal. From June 1995 through September 1996, the entire volume of this stored mixed waste was sorted and inventoried through this process. The process was planned to meet the technical requirements of the sorting operation and to identify and address the hazards the operation presented. The operations were routinely adapted to safely and efficiently handle a variety of waste matrices, hazards, and radiological conditions. This flexibility was accomplished through administrative and physical controls integrated into the sorting operations. Many Department of Energy facilities are currently facing the prospect of sorting, characterizing, and treating a large inventory of mixed waste. The process described in this paper is a proven method for preparing a diverse, heterogeneous mixed waste volume into segregated, characterized, inventoried, and documented waste streams ready for treatment or disposal.
In 1993 Sandia was directed to design containers for the long-term storage and transport of nuclear weapons origin fissile material. This program was undertaken at the direction of the US Department of Energy and in cooperation with Lawrence Livermore National Laboratory and Los Alamos National Laboratory. Lawrence Livermore National Laboratory and Los Alamos National Laboratory were tasked with developing the internal fixturing for the contents. The hardware is being supplied by AlliedSignal Federal Manufacturing and Technologies, and the packaging process has been developed at Mason and Hanger Corporation's Pantex Plant. The unique challenge was to design a container that could be sealed with the fissile material contents; and, anytime during the next 50 years, the container could be transported with only the need for the pre-shipment leak test. This required a rigorous design capable of meeting the long-term storage and transportation requirements. This report addresses the final testing that was undertaken to demonstrate compliance with US radioactive materials transport regulations.
A prototype high frequency tuning fork oscillator has been fabricated and tested in an integrated surface micromachining technology. The amplifier circuitry uses a capacitive current detection method, which offers superior noise performance over previous resistive methods. The prototype device has an output frequency of 1.022 MHz and exhibits a noise floor of −88 dBc/Hz at an offset of 500 Hz from the carrier. The dominant source of frequency instability is the nonlinearity introduced by the use of parallel plate actuation.
The authors attempt to extend their previous efforts towards a reliable control scheme that guarantees a specified degree of reliability for civil engineering structures. Herein, a two-degree-of-freedom system is examined. Covariance control techniques are explored to design a compensator that will provide optimal closed loop performance, while satisfying a constraint on system reliability. It was found for the system under examination that a stable control does not exist that also meets the target reliability level. Alternate formulations continue to be investigated.
This paper summarizes the results of cutting tests performed using an actively damped boring bar to minimize chatter in metal cutting. A commercially available 2-inch-diameter boring bar was modified to incorporate PZT stack actuators for controlling tool bending vibrations encountered during metal removal. The extensional motion of the actuators induces bending moments in the host structure through a two-point preloaded mounting scheme. Cutting tests performed at various speeds and depths of cut on a hardened steel workpiece illustrate the bar's effectiveness in eliminating chatter vibrations and improving workpiece surface finish.
The application of physico-chemical phenomena to either increase the machinability of hard materials, improve the wear resistance of cutting surfaces, or enhance sintering of particle compacts can have a large economic impact on technologies ranging from materials forming processes to oil well drilling. Unfortunately, the broad application of these physico-chemical principles is limited by the authors' ability to predict the optimum conditions for a wide variety of materials surfaces. Predictive models must be built upon an understanding of the elementary events involved in surface damage and mobility. The authors have developed a new approach to examine the fundamental mechanisms controlling physico-chemical surface stability that combines: (1) atomic-scale control of surface contact forces and displacements under well controlled adsorbate conditions using the Interfacial Force Microscope, (2) atomic-level imaging of surface and near-surface structure and defects using Field Ion Microscopy, and (3) first-principles modeling of the effect of surface stress on adsorbate bonding interactions and the subsequent generation of surface damage. This unique combination of approaches has provided new insights into observed physico-chemical phenomena and provided the basis for developing the true predictive models that are needed for wide application of these important new approaches to modifying the surface-sensitive properties of materials.
The Single Heater Test (SHT) is a sixteen-month-long heating and cooling experiment, begun in August 1996, located underground within the unsaturated zone near the potential geologic repository at Yucca Mountain, Nevada. During the nine-month heating phase of the test, roughly 15 m³ of rock were raised to temperatures exceeding 100 °C. In this paper, temperatures measured in sealed boreholes surrounding the heater are compared to temperatures predicted by 3D thermal-hydrologic calculations performed with a finite difference code. Three separate model runs using different values of bulk rock permeability (4 microdarcy to 5.2 darcy) yielded significantly different predicted temperatures and temperature distributions. All the models differ from the data, suggesting that to accurately model the thermal-hydrologic behavior of the SHT, the Equivalent Continuum Model (ECM), the conceptual basis for handling the fractured porous medium in the numerical predictions, should be discarded in favor of more sophisticated approaches.
The occurrence of gas in salt mines and caverns has presented serious problems to facility operators. Salt mines have long experienced sudden, usually unexpected expulsions of gas and salt from a production face, commonly known as outbursts. Outbursts can release over one million cubic feet of methane and fractured salt, and have claimed the lives of numerous miners and caused explosions. Equipment, production time, and even entire mines have been lost to outbursts. An outburst creates a cornucopia-shaped hole that can reach heights of several hundred feet. The potential occurrence of outbursts must be factored into mine design and mining methods. In caverns, the occurrence of outbursts and the steady infiltration of gas into stored product can affect the quality of the product, particularly over the long term, and in some cases renders the product unusable as is or difficult to transport. Gas has also been known to collect in the roof traps of caverns, resulting in safety and operational concerns. The intent of this paper is to summarize the existing knowledge on gas releases from salt. The compiled information can provide a better understanding of the phenomena and insight into the causative mechanisms that, once established, can help mitigate the variety of problems associated with gas releases from salt. Outbursts, as documented in mines, are discussed first. This is followed by a discussion of the relatively slow gas infiltration into stored crude oil, as observed and modeled in the caverns of the US Strategic Petroleum Reserve. A model that predicts outburst pressure kicks in caverns is also discussed.
A cooperative national laboratory/industry research program was initiated in 1994 that improved understanding of the geomechanical processes causing well casing damage during oil production from weak, compactible formations. The program focused on the shallow diatomaceous oil reservoirs located in California's San Joaquin Valley, and combined analyses of historical field data, experimental determination of rock mechanical behavior, and geomechanical simulation of the reservoir and overburden response to production and injection. Sandia National Laboratories' quasi-static, large-deformation structural mechanics finite element code JAS3D was used to perform the three-dimensional geomechanical simulations. One of the material models implemented in JAS3D to simulate the time-independent inelastic (non-linear) deformation of geomaterials is a generalized version of the Sandler and Rubin cap plasticity model (Sandler and Rubin, 1979). This report documents the experimental rock mechanics data and material cap plasticity models that were derived to describe the Belridge Diatomite reservoir rock at the South Belridge Diatomite Field, Section 33.
The Waste Isolation Pilot Plant (WIPP) Compliance Certification Application (CCA) Performance Assessment (PA) Parameter Database and its ties to supporting information evolved over the course of two years. When the CCA was submitted to the Environmental Protection Agency (EPA) in October 1996, information such as identification of parameter value or distribution source was documented using processes established by Sandia National Laboratories WIPP Quality Assurance Procedures. Reviewers later requested additional supporting documentation, links to supporting information, and/or clarification for many parameters. This guidebook is designed to document a pathway through the complex parameter process and help delineate flow paths to supporting information for all WIPP CCA parameters. In addition, this report is an aid for understanding how model parameters used in the WIPP CCA were developed and qualified. To trace the source information for a particular parameter, a dual-route system was established. The first route uses information from the Parameter Records Package as it existed when the CCA calculations were run. The second route leads from the EPA Parameter Database to additional supporting information.
The authors performed a series of experiments on the Particle Beam Fusion Accelerator II (PBFA II) in May, 1994, and obtained a brightness temperature of 61 {+-} 2 eV for an ion-beam heated hohlraum. The hohlraum was a 4-mm-diameter, right-circular cylinder with a 1.5-mm-thick gold wall, a low-density CH foam fill, and a 1.5- or 3-mm-diameter diagnostic aperture in the top. The nominal parameters of the radially-incident PBFA II Li ion beam were 9 MeV peak energy ({approximately}10 MeV at the gas cell) at the target at a peak power of 2.5 {+-} 0.3 TW/cm{sup 2} and a 15 ns pulse width. Azimuthal variations in intensity of a factor of 3, with respect to the mean, were observed. Nonuniformities in thermal x-ray emission across the area of the diagnostic hole were also observed. Time-dependent hole-closure velocities were measured: the time-averaged velocity of {approximately}2 cm/{micro}s is in good agreement with sound speed estimates. Unfolded x-ray spectra and brightness temperatures as a function of time are reported and compared to simulations. Hole closure corrections are discussed with comparisons between XRD and bolometer measurements. Temperature scaling with power on target is also presented.
This report summarizes a two-year Laboratory-Directed Research and Development (LDRD) program to gain understanding and control of the important parameters that govern the optical performance of rare-earth (RE) doped ceramics. This LDRD developed the capability to determine stable atomic arrangements in RE-doped alumina using local density functional theory, and to model the luminescence from RE-doped alumina using molecular dynamics simulations combined with crystal-field calculations. Local structural features for different phases of alumina were examined experimentally by comparing their photoluminescence spectra, and the atomic arrangement of the amorphous phase was determined to be similar to that of the gamma phase. The luminescence lifetimes were correlated to these differences in the local structure. The design of both high- and low-phonon-energy host materials was demonstrated through the growth of Er-doped aluminum oxide and lanthanum oxide. Multicomponent structures of rare-earth doped tellurite glass in an alumina and silica matrix were also prepared. Finally, the optical performance of Er-doped alumina was determined as a function of hydrogen content in the host matrix. This LDRD lays the groundwork for future experimentation to understand the effects of ionizing radiation on the optical properties of RE-doped ceramic materials used in space and other radiation environments.
Recommendations for improving the process for expert panel reviews of technical and programmatic aspects of science and technology programs are provided, based on an evaluation study of pilot reviews for two programs at Sandia National Laboratories. These reviews were part of a larger Technical Review Pilot for the US Department of Energy (DOE). Both the Sandia Pulse Power program and Solar Thermal Electric program (a virtual lab with NREL) reviews used the recommended four DOE review criteria, but the motivation for the review and the review process differed. These differences provide insight into ways to improve the review of DOE's multifaceted technical programs. The recommendations are: (1) Review when the program has a specific need for information or validation; there is no one-size-fits-all correct time or reason to review technical programs. (2) Tailor the four DOE criteria to the program and its need for information, and explain them to the Review Panel. (3) Pay attention to the review process: spend more time in preparation and pre-review and on briefings on the review outcomes. (4) Evaluate reviews to determine how to do them better. The survey instrument is provided for those who wish to modify it for their own use.
That there are significant definitional differences between languages is a statement of the obvious. It logically follows that definitional ambiguity occurs when translating a term from one language to another. The far-reaching implications of this fact, however, are not as widely recognized. One word that has been and will continue to be significant is "warhead." This analysis (1) examines the different translations and definitions of the word "warhead" in English and Russian; (2) discusses the usage of "warhead" in the context of arms control; and (3) explores the implications these definitional differences have for future negotiations. It specifically utilizes treaty texts, as well as the Helsinki agreement text, to construct a contextual use of "warhead." It is concluded that if US policymakers are committed to including nuclear explosive devices in START III force reductions, negotiators must identify and use a more specific term than "warhead" or "boyegolovka." Also included as an appendix are copies of the signed Helsinki agreement in both English and Russian.
As part of the design of the Process and Environmental Technology Laboratory (PETL) in FY97, an energy conservation report (ECR) was completed. The original energy baseline for the building, established in Title 1 design, was 595,000 BTU/sq. ft./yr, site energy use. Following the input of several reviewers and the incorporation of the various recommendations into the Title 2 design, the projected energy consumption was reduced to 341,000 BTU/sq. ft./yr. Of this reduction, it is estimated that about 150,000 BTU/sq. ft./yr resulted from the inclusion of more energy-efficient options in the design; the remaining reductions resulted from better accounting of energy consumption between the Title 1 ECR and the final ECR. The energy-efficient features selected as an outcome of the ECR were: (1) an energy recovery system, with evaporative cooling assist, for the exhaust/make-up air system; (2) a chilled-water thermal storage system; (3) premium-efficiency motors for large, year-round applications; (4) variable-frequency drives for all air-handling fan motors; (5) a premium-efficiency multiple-boiler system; and (6) a lighting control system. The annual energy cost savings due to these measures will be about $165,000. The estimated annual energy savings are two million kWh of electricity and 168,000 therms of natural gas, the total of which is equivalent to 23,000 million BTU per year. Put into the perspective of a typical office/light lab at SNL/NM, the annual energy savings equals the consumption of a 125,000-square-foot building. The reduced air emissions are approximately 2,500 tons annually.
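The reported total can be cross-checked with standard site-energy conversion factors (3,412 BTU per kWh and 100,000 BTU per therm; these factors are standard values, not taken from the report itself):

```python
# Cross-check of the reported annual savings using standard conversion factors.
KWH_TO_BTU = 3412          # BTU per kWh of site electricity (standard value)
THERM_TO_BTU = 100_000     # BTU per therm of natural gas (standard value)

electric_btu = 2_000_000 * KWH_TO_BTU    # two million kWh
gas_btu = 168_000 * THERM_TO_BTU         # 168,000 therms
total_mmbtu = (electric_btu + gas_btu) / 1e6

print(round(total_mmbtu))  # -> 23624, consistent with the ~23,000 MMBTU reported
```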
Thermite (metal oxide) mixtures, intermetallic reactants, and metal fuels have long been used in pyrotechnic applications. Advantages of these systems typically include high energy density, impact insensitivity, high combustion temperature, and a wide range of gas production. They generally exhibit high temperature stability, and possess insensitive ignition properties. In this paper, the authors review the applications, benefits, and characteristics of thermite mixtures, intermetallic reactants, and metal fuels. Calculated values for reactant density, heat of reaction (per unit mass and per unit volume), and reaction temperature (without and with consideration of phase changes and the variation of specific heat values) are tabulated. These data are ranked in several ways, according to density, heat of reaction, reaction temperature, and gas production.
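As an illustration of how such tabulated quantities are derived, the heat of reaction for the classic Al/Fe{sub 2}O{sub 3} thermite can be estimated from handbook enthalpies of formation (the numbers below are approximate textbook values, not the report's tabulated data):

```python
# Fe2O3 + 2 Al -> Al2O3 + 2 Fe  (approximate handbook thermochemical values)
dHf_Fe2O3 = -824.2           # kJ/mol, standard enthalpy of formation
dHf_Al2O3 = -1675.7          # kJ/mol (pure elements are 0 by definition)
dH_rxn = dHf_Al2O3 - dHf_Fe2O3        # about -851.5 kJ per mole of reaction

m_react = 159.69 + 2 * 26.98          # g of reactants (Fe2O3 + 2 Al)
q_mass = -dH_rxn / m_react            # ~3.99 kJ/g, heat of reaction per unit mass

# Volumetric heat of reaction at theoretical maximum density of the mixture
vol = 159.69 / 5.24 + 2 * 26.98 / 2.70   # cm^3 (densities in g/cm^3)
q_vol = -dH_rxn / vol                    # ~16.9 kJ/cm^3
```

The high volumetric value relative to organic explosives reflects the high reactant density noted in the abstract.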
Surface acoustic wave (SAW) measurements were combined with direct, in-situ molecular spectroscopy to understand the interactions of surface-confined sensing films with gas-phase analytes. This was accomplished by collecting Fourier-transform infrared external-reflectance spectra (FTIR-ERS) on operating SAW devices during dosing of their specifically coated surfaces with key analytes.
The reduction of NpO{sub 2}{sup 2+} and PuO{sub 2}{sup 2+} by oxalate, citrate, and ethylenediaminetetraacetic acid (EDTA) was investigated in low ionic strength media and brines. This was done to help establish how the stability of the An(VI) oxidation state depends on the pH and on the relative strength of the various oxidation-state-specific complexes. At low ionic strength and pH 6, NpO{sub 2}{sup 2+} was rapidly reduced to form NpO{sub 2}{sup +} organic complexes. At longer times, Np(IV) organic complexes were observed in the presence of citrate. PuO{sub 2}{sup 2+} was predominantly reduced to Pu{sup 4+}, resulting in the formation of organic complexes or polymeric/hydrolytic precipitates. The relative rates of reduction to the An(V) complex were EDTA > citrate > oxalate. Subsequent reduction to An(IV) complexes, however, occurred in the order citrate > EDTA > oxalate because of the stability of the An(VI)-EDTA complex. The presence of organic complexants led to the rapid reduction of NpO{sub 2}{sup 2+} and PuO{sub 2}{sup 2+} in G-Seep brine at pH 5 and 7. At pH 8 and 10 in ERDA-6 brine, carbonate and hydrolytic complexes predominated and slowed or prevented the reduction of An(VI) by the organics present.
Pollution Prevention (P2) programs and projects within the DOE Environmental Restoration (ER) and Decontamination and Decommissioning (D and D) Programs have been independently developed and implemented at various sites. As a result, unique, innovative solutions used at one site may not be known to other sites, and sites may continue to duplicate efforts to develop and implement similar solutions. Several DOE program offices have funded the development of tools to assist ER/D and D P2 projects; to realize the full value of these tools, they need to be evaluated and publicized to field sites. To address these needs and concerns, Sandia National Laboratories (SNL/NM), Los Alamos National Laboratory (LANL), and the Oak Ridge Field Office (DOE-OR) have teamed to pilot test DOE training and tracking tools, transfer common P2 analyses between sites, and evaluate and expand P2 tools and methodologies. The project is supported by FY 98 DOE Pollution Prevention Complex-Wide Project funds. This paper presents the preliminary results for each of the following project modules: Training; Waste Tracking Pilot; Information Exchange; Evaluation of P2 Tools for ER/D and D; Field Test of P2 Tools; and DOE Information Exchange.
The Sandia Bicycle Commuters Group (SBCG) formed three years ago to address issues that affect the bicycle commuting option. The meeting that launched the SBCG was scheduled in conjunction with National Bike-to-Work Day in May 1995. Results from a survey handed out at the meeting solidly confirmed these issues and the need for an advocacy group. The purpose statement for the group headlines its web site and brochure: "Existing to assist and educate the SNL workforce bicyclist on issues regarding Kirtland Air Force Base (KAFB) access, safety and bicycle-supporting facilities, in order to promote bicycling as an effective and enjoyable means of commuting." The SNL Pollution Prevention (P2) Team's challenge to the SNL workforce is to "prevent pollution, conserve natural resources, and save money." In the first winter of its existence, the SBCG sponsored a winter commute contest in conjunction with the City's Clean Air Campaign (CAC). The intent of the CAC is to promote alternative (to the single-occupant vehicle) commuting during the Winter Pollution Advisory Period (October 1--February 28), when the City runs the greatest risk of exceeding federal pollution limits.
At Sandia National Laboratories, the authors are developing the ability to accurately predict motions for arbitrary numbers of bodies of arbitrary shapes experiencing multiple applied forces and intermittent contacts. In particular, they are concerned with the simulation of systems such as part feeders or mobile robots operating in realistic environments. A preliminary investigation of commercial dynamics software packages led to the conclusion that a commercial code could provide everything needed except the contact model, and that ADAMS best fit the simulation needs. To simulate intermittent contacts, collision detection software is needed that can efficiently compute the distances between non-convex objects and return the associated witness features. Also required is a computationally efficient contact model for rapid simulation of impact, sustained contact under load, and transitions to and from contact conditions. This paper provides a technical review of a custom hierarchical distance computation engine developed at Sandia, called the C-Space Toolkit (CSTk). In addition, the authors describe an efficient contact model using a non-linear damping term developed at Ohio State. Both the CSTk and the non-linear damper have been incorporated in a simplified two-body testbed code, which is used to investigate how to correctly model contact using these two utilities. The authors have incorporated this model into the ADAMS SOLVER using the callable function interface. An example that illustrates the capabilities of the 9.02 release of ADAMS with these extensions is provided.
In the past three years, tremendous strides have been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA II) is the largest available in the laboratory. These z-pinch x-ray sources have great potential to drive high-yield inertial confinement fusion (ICF) reactions at affordable cost if several challenging technical problems can be overcome. Technical challenges in three key areas are discussed in this paper: (1) the design of a target for high yield, (2) the development of a suitable pulsed power driver, and (3) the design of a target chamber capable of containing the high fusion yield.
The Reproducing Kernel Particle Method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems. RKPM may be implemented in a mesh-full or a mesh-free manner and provides the ability to tune the method, via the selection of a dilation parameter and window function, in order to achieve the requisite numerical performance. RKPM also provides a framework for performing hierarchical computations, making it an ideal candidate for simulating multi-scale problems. Although RKPM has many appealing attributes, the method is quite new and its numerical performance is still being quantified with respect to more traditional discretization methods. In order to assess the numerical performance of RKPM, detailed studies of RKPM on a series of model partial differential equations have been undertaken. The results of von Neumann analyses for RKPM semi-discretizations of one- and two-dimensional, first- and second-order wave equations are presented in the form of phase and group errors. Excellent dispersion characteristics are found for the consistent mass matrix with the proper choice of dilation parameter. In contrast, row-sum lumping the mass matrix is shown to introduce severe lagging phase errors. A higher-order mass matrix improves the dispersion characteristics relative to the lumped mass matrix but still exhibits lagging phase errors relative to the fully integrated, consistent mass matrix.
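The flavor of such a von Neumann dispersion analysis can be seen in a much simpler setting. For a standard second-order central-difference semi-discretization of the 1D advection equation (a textbook scheme, not the RKPM discretization itself), the phase and group errors follow directly from the discrete dispersion relation:

```python
import numpy as np

# Von Neumann analysis of u_t + c*u_x = 0 with second-order central
# differences in space: substituting u_j = exp(i*(k*j*h - w*t)) into the
# semi-discretization gives the discrete dispersion relation w = c*sin(k*h)/h.
c, h = 1.0, 1.0
kh = np.linspace(1e-6, np.pi, 200)        # nondimensional wavenumber k*h

phase_ratio = np.sin(kh) / kh             # numerical/exact phase speed
group_ratio = np.cos(kh)                  # numerical/exact group speed

# Long waves (kh -> 0) propagate accurately (both ratios -> 1); the shortest
# resolved waves lag severely, with the group speed reversing sign at kh = pi.
```

Plotting `phase_ratio` and `group_ratio` against `kh` produces the kind of phase- and group-error curves in which the report presents its RKPM results.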
In the past thirty-six months, great progress has been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA II) is the largest available in the laboratory. These z-pinch x-ray sources have the potential to drive high-yield ICF reactions at affordable cost if several challenging technical problems can be overcome. In this paper, the recent technical progress with z-pinches will be described, and a technical strategy for achieving high-yield ICF with z-pinches will be presented.
A parachute system was designed and prototypes built to deploy a telemetry package behind an earth-penetrating weapon just before impact. The parachute was designed to slow the 10 lb. telemetry package and wire connecting it to the penetrator to 50 fps before impact occurred. The parachute system was designed to utilize a 1.3-ft-dia cross pilot parachute and a 10.8-ft-dia main parachute. A computer code normally used to model the deployment of suspension lines from a packed parachute system was modified to model the deployment of wire from the weapon forebody. Results of the design calculations are presented. Two flight tests of the WBS were conducted, but initiation of parachute deployment did not occur in either of the tests due to difficulties with other components. Thus, the trajectory calculations could not be verified with data. Draft drawings of the major components of the parachute system are presented.
A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
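A minimal, hypothetical sketch of the underlying idea (illustrative only; not the EIGEN/VR implementation): store voxel cells in a spatial binary tree that records each subtree's scalar min/max, so an isosurface query can prune subtrees that cannot intersect the threshold and can stop at a caller-chosen depth to emit coarser "virtual voxels":

```python
class Node:
    def __init__(self, cells):
        self.vmin = min(v for _, v in cells)   # scalar range of this subtree
        self.vmax = max(v for _, v in cells)
        self.cells = cells
        self.left = self.right = None

def build(cells, axis=0, leaf_size=2):
    node = Node(cells)
    if len(cells) > leaf_size:                 # split along alternating axes
        cells = sorted(cells, key=lambda cv: cv[0][axis])
        mid = len(cells) // 2
        node.left = build(cells[:mid], (axis + 1) % 3, leaf_size)
        node.right = build(cells[mid:], (axis + 1) % 3, leaf_size)
    return node

def isosurface_cells(node, iso, depth, max_depth, out):
    if node is None or not (node.vmin <= iso <= node.vmax):
        return                                 # subtree cannot contain the isosurface
    if node.left is None or depth == max_depth:
        out.append(node)                       # emit a (possibly coarse) virtual voxel
        return
    isosurface_cells(node.left, iso, depth + 1, max_depth, out)
    isosurface_cells(node.right, iso, depth + 1, max_depth, out)

# Toy 4x4x4 scalar field: value = x + y + z at each voxel
cells = [((x, y, z), float(x + y + z))
         for x in range(4) for y in range(4) for z in range(4)]
root = build(cells)

coarse, fine = [], []
isosurface_cells(root, 6.0, 0, 2, coarse)    # low level of detail
isosurface_cells(root, 6.0, 0, 12, fine)     # full resolution
```

The analyst-facing knobs in the abstract map onto `iso` (the isosurface threshold) and `max_depth` (the level of detail), both chosen at query time rather than at preprocessing time.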
Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response is best simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Because of the different methodologies of the two computational techniques and their code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes coupled explosive/structure interaction simulations difficult or next to impossible. Sandia National Laboratories has developed two techniques for solving this problem. The first is Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian techniques, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems and is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
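The gridless character of SPH comes from a smoothing kernel that turns scattered particles into continuous field estimates. A minimal 1D sketch using the standard cubic-spline kernel (a generic textbook illustration, not the PRONTO/SPH implementation):

```python
# Standard 1D cubic-spline SPH smoothing kernel and summation-density estimate.
def w_cubic(r, h):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)          # 1D normalization constant
    if q < 1.0:
        return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q)**3
    return 0.0                       # compact support: no interaction beyond 2h

def density(x_i, particles, m, h):
    # rho_i = sum_j m_j * W(x_i - x_j, h): density needs no grid, only neighbors
    return sum(m * w_cubic(x_i - x_j, h) for x_j in particles)

# Uniformly spaced unit-spacing check: density ~ m / spacing
xs = [0.1 * j for j in range(-40, 41)]   # particles at 0.1 m spacing
rho = density(0.0, xs, m=0.1, h=0.1)     # ~1.0 for mass 0.1 per particle
```

Because every field quantity is built from such kernel sums, SPH particles can flow arbitrarily (as explosive products do) without the mesh tangling that limits Lagrangian grids.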
The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high-end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development, resulting in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification, and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.
CPA -- Cost and Performance Analysis -- is a methodology that joins Activity-Based Cost (ABC) estimation with performance-based analysis of physical protection systems. CPA offers system managers an approach that supports both tactical decision making and strategic planning. Current exploratory applications of the CPA methodology are addressing the analysis of alternative conceptual designs. To support these activities, the original architecture for CPA is being expanded to incorporate results from a suite of performance and consequence analysis tools such as JTS (Joint Tactical Simulation), ERAD (Explosive Release Atmospheric Dispersion), and blast effect models. The process flow for applying CPA to the development and analysis of conceptual designs is illustrated graphically.
The detection and removal of buried unexploded ordnance (UXO) and landmines is one of the most important problems facing the world today. Numerous detection strategies are being developed, including infrared, electrical conductivity, ground-penetrating radar, and chemical sensors. Chemical sensors rely on the detection of TNT molecules, which are transported from buried UXO/landmines by advection and diffusion in the soil. As part of this effort, numerical models are being developed to predict TNT transport in soils including the effect of precipitation and evaporation. Modifications will be made to TOUGH2 for application to the TNT chemical sensing problem. Understanding the fate and transport of TNT in the soil will affect the design, performance and operation of chemical sensors by indicating preferred sensing strategies.
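As a rough illustration of the kind of transport estimate involved (the parameter values below are assumed for illustration; the report's TOUGH2-based models additionally treat advection, precipitation, and evaporation), pure diffusion from a buried point source has a closed-form solution:

```python
import math

# Relative TNT concentration reaching the surface above a buried source,
# from the 1D diffusion Green's function c ~ exp(-z^2/(4*D*t))/sqrt(4*pi*D*t).
# D, z, and t are assumed illustrative values, not data from the report.
D = 1e-9             # m^2/s, assumed effective diffusivity of TNT in soil
z = 0.1              # m, assumed burial depth
t = 30 * 86400.0     # s, thirty days after emplacement

c_rel = math.exp(-z**2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)
# The surface signature grows with time as the diffusion front arrives,
# which is one reason sensing strategy depends on transport modeling.
```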
The US Department of Energy (DOE) is investigating Yucca Mountain, Nevada as a potential site for the disposal of high-level nuclear waste. The site is located near the southwest corner of the Nevada Test Site (NTS) in southern Nye County, Nevada. The underground Exploratory Studies Facility (ESF) tunnel traverses part of the proposed repository block. Alcove 5, located within the ESF, is being used to field two in situ ESF thermal tests: the Single Heater Test (SHT) and the Drift Scale Test (DST). Laboratory test specimens were collected from three sites within Alcove 5, including each in situ field test location and one additional site. The aim of the laboratory tests was to determine site-specific thermal and mechanical rock properties, including thermal expansion, thermal conductivity, unconfined compressive strength, and elastic moduli. In this paper, the results obtained for the SHT and DST area characterization are compared with data obtained from other locations at the proposed repository site. Results show that the thermal expansion and mechanical properties of Alcove 5 laboratory specimens are slightly different from the average values obtained on specimens from surface drillholes.
The International Thermonuclear Experimental Reactor (ITER) is envisioned to be the next major step in the world's fusion program beyond the present generation of tokamaks and is designed to study fusion plasmas with a reactor-relevant range of plasma parameters. During normal operation, it is expected that a fraction of the unburned tritium that is used to routinely fuel the discharge will be retained, together with deuterium, on the surfaces and in the bulk of the plasma-facing materials (PFMs) surrounding the core and divertor plasma. Understanding the basic retention mechanisms (physical and chemical) involved and their dependence upon plasma parameters and other relevant operating conditions is necessary for the accurate prediction of the amount of tritium retained at any given time in the ITER torus. Accurate estimates are essential to assess the radiological hazards associated with routine operation and with potential accident scenarios which may lead to mobilization of tritium that is not tenaciously held. Estimates are needed to establish the detritiation requirements for coolant water, to determine the plasma fueling and tritium supply requirements, and to establish the needed frequency and the procedures for tritium recovery and clean-up. The organization of this paper is as follows. Section 2 provides an overview of the design and operating conditions of the main components which define the plasma boundary of ITER. Section 3 reviews the erosion database and the results of recent relevant experiments conducted both in laboratory facilities and in tokamaks. These data provide the experimental basis and serve as an important benchmark for both model development (discussed in Section 4) and calculations (discussed in Section 5) that are required to predict tritium inventory build-up in ITER.
Section 6 emphasizes the need to develop and test methods to remove the tritium from the codeposited C-based films and reviews the status and the prospects of the most attractive techniques. Section 7 identifies the unresolved issues and provides some recommendations on potential R and D avenues for their resolution. Finally, a summary is provided in Section 8.
This report describes the research accomplishments achieved under the LDRD project "Double Electron Layer Tunneling Transistor." The main goal of this project was to investigate whether the recently discovered phenomenon of 2D-2D tunneling in GaAs/AlGaAs double quantum wells (DQWs), investigated in a previous LDRD, could be harnessed and implemented as the operating principle for a new type of tunneling device the authors proposed, the double electron layer tunneling transistor (DELTT). In parallel with this main thrust of the project, they also continued a modest basic research effort on DQW physics issues, with significant theoretical support. The project was a considerable success, with the main goal of demonstrating a working prototype of the DELTT having been achieved. Additional DELTT advances included demonstrating good electrical characteristics at 77 K, demonstrating both NMOS- and CMOS-like bistable memories at 77 K using the DELTT, demonstrating digital logic gates at 77 K, and demonstrating voltage-controlled oscillators at 77 K. In order to successfully fabricate the DELTT, the authors had to develop a novel flip-chip processing scheme, the epoxy-bond-and-stop-etch (EBASE) technique. This technique was later improved so as to be amenable to electron-beam lithography, allowing the fabrication of DELTTs with sub-micron features, which are expected to be extremely high speed. In the basic physics area they also made several advances, including a measurement of the effective mass of electrons in the hour-glass orbit of a DQW subject to in-plane magnetic fields, and both measurements and theoretical calculations of the full Landau level spectra of DQWs in both perpendicular and in-plane magnetic fields. This last result included the unambiguous demonstration of magnetic breakdown of the Fermi surface. Finally, they also investigated the concept of a far-infrared photodetector based on photon-assisted tunneling in a DQW.
Absorption calculations showed a narrowband absorption which persisted to temperatures much higher than the photon energy being detected. Preliminary data on prototype detectors indicated that the absorption is not only narrowband, but can be tuned in energy through the application of a gate voltage.
The Yucca Mountain Project is currently evaluating the coupled thermal-mechanical-hydrological-chemical (TMHC) response of the potential repository host rock through an in situ thermal testing program. A drift scale test (DST) was constructed during 1997, and heaters were turned on in December 1997. The DST includes nine canister-sized containers, each with thirty operating heaters, located within the heated drift (HD), and fifty wing heaters located in boreholes in both ribs, with a total power output of nominally 210 kW. A total of 147 boreholes (combined length of 3.3 km) house most of the over 3700 TMHC sensors, connected with 201 km of cabling to a central data acquisition system. The DST is located in the Exploratory Studies Facility in a 5-m-diameter drift approximately 50 m in length. Heating will last up to four years and cooling will last another four years. The rock mass surrounding the DST will experience a harsh thermal environment, with rock surface temperatures expected to reach a maximum of about 200 C. This paper describes the process of designing the DST. The first 38 m of the 50-m-long HD is dedicated to the collection of data that will lead to a better understanding of the complex coupled TMHC processes in the host rock of the proposed repository. The final 12 m is dedicated to evaluating the interactions between the heated rock mass and cast-in-place (CIP) concrete ground support systems at elevated temperatures. In addition to a description of the DST design, data from site characterization and a general description of the analyses and analysis approach used to design the test and make pretest predictions are presented. Test-scoping and pretest numerical predictions of one-way thermal-hydrologic, thermal-mechanical, and thermal-chemical behaviors have been completed (TRW, 1997a). These analyses suggest that a dry-out zone will be created around the DST and that a 10,000 m{sup 3} volume of rock will experience temperatures above 100 C.
The HD will experience large stress increases, particularly in the crown of the drift. Thermoelastic displacements of up to about 16 mm are predicted for some thermomechanical gages. Additional analyses using more complex models will be performed during the conduct of the DST, and the results will be compared with measured data.
Here, the authors report on the lubricating effects of self-assembled monolayers (SAMs) on MEMS by measuring static and dynamic friction with two polysilicon surface-micromachined devices. The first test structure is used to study friction between laterally sliding surfaces; with the second, friction between vertical sidewalls can be investigated. Both devices are SAM-coated following the sacrificial oxide etch, and the microstructures emerge released and dry from the final water rinse. The coefficient of static friction, μs, was found to decrease from 2.1 ± 0.8 for the SiO2 coating to 0.11 ± 0.01 and 0.10 ± 0.01 for films derived from octadecyltrichlorosilane (OTS) and 1H,1H,2H,2H-perfluorodecyltrichlorosilane (FDTS), respectively. Both OTS and FDTS SAM-coated structures exhibit dynamic coefficients of friction, μd, of 0.08 ± 0.01. These values were found to be independent of the apparent contact area and remain unchanged after 1 million impacts at 5.6 µN (17 kPa), indicating that these SAMs continue to act as boundary lubricants despite repeated impacts. Measurements during sliding friction from the sidewall friction test structure give comparable initial μd values of 0.02 at a contact pressure of 84 MPa. After 15 million wear cycles, μd was found to rise to 0.27. Wear of the contacting surfaces was examined by SEM. The small standard deviations in the μ data for the SAM treatments indicate uniform coating coverage.
Development of well-controlled hypervelocity launch capabilities is the first step toward understanding material behavior at extreme pressures and temperatures not attainable using conventional gun technology. In this paper, techniques used to extend light-gas-gun launch capabilities to 10 km/s, and their use to determine material properties at pressure and temperature states higher than any previously obtained in the laboratory, are summarized. Time-resolved interferometric techniques have been used to determine the shock loading and release characteristics of materials impacted by titanium and aluminum fliers launched at 10 km/s. In particular, the Sandia three-stage light-gas gun, also referred to as the hypervelocity launcher (HVL), which is capable of launching 0.5-mm- to 1.0-mm-thick by 6-mm- to 19-mm-diameter plates to velocities approaching 16 km/s, has been used to obtain the necessary impact velocities. VISAR, an interferometric particle-velocity technique, has been used to determine shock loading and release profiles in aluminum and titanium at impact velocities of 10 km/s.
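The pressure states reached in such plate-impact experiments follow from the Rankine-Hugoniot jump conditions. As a rough illustration (not the authors' analysis), a symmetric planar impact with an assumed linear Us-up Hugoniot for aluminum gives:

```python
# For a symmetric planar impact (flier and target the same material),
# the particle velocity is half the impact velocity, and the shock
# pressure follows from the linear Us-up Hugoniot, Us = c0 + s*up:
#   P = rho0 * Us * up
# Nominal aluminum Hugoniot parameters are assumed for illustration.
rho0 = 2703.0     # initial density, kg/m^3
c0   = 5350.0     # bulk sound speed, m/s
s    = 1.34       # Hugoniot slope (dimensionless)

def shock_pressure(impact_velocity):
    up = impact_velocity / 2.0         # symmetric impact
    us = c0 + s * up                   # shock velocity
    return rho0 * us * up              # momentum jump condition

for v in (2.0e3, 6.0e3, 10.0e3):       # impact velocity, m/s
    print(f"{v/1e3:4.0f} km/s impact -> {shock_pressure(v)/1e9:6.1f} GPa")
```

At a 10 km/s impact, this simple estimate exceeds 160 GPa, suggesting why such launchers access states unreachable with conventional two-stage guns.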
Economic and political demands are driving computational investigation of systems and processes like never before. It is foreseen that questions of safety, optimality, risk, robustness, likelihood, credibility, etc. will increasingly be posed to computational modelers. This will require the development and routine use of computing infrastructure that incorporates computational physics models within the framework of larger meta-analyses involving aspects of optimization, nondeterministic analysis, and probabilistic risk assessment. This paper describes elements of an ongoing case study involving the computational solution of several meta-problems in optimization, nondeterministic analysis, and optimization under uncertainty pertaining to the surety of a generic weapon safing device. The goal of the analyses is to determine the worst-case heating configuration in a fire that most severely threatens the integrity of the device. A large, 3-D, nonlinear, finite element thermal model is used to determine the transient thermal response of the device in this coupled conduction/radiation problem. Implications of some of the numerical aspects of the thermal model on the selection of suitable and efficient optimization and nondeterministic analysis algorithms are discussed.
The Russia-US joint program on the safe management of nuclear materials was initiated to address common technical issues confronting the US and Russia in the management of excess weapons-grade nuclear materials. It was initiated after the 1993 Tomsk-7 accident. This paper provides an update on program activities since 1996. The Fourth US-Russia Nuclear Materials Safety Management Workshop was conducted in March 1997. In addition, a number of contracts with Russian institutes have been placed by Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL). These contracts support research related to the safe disposition of excess plutonium (Pu) and highly enriched uranium (HEU). Topics investigated by Russian scientists under contracts with SNL and LLNL include accident consequence studies, the safety of anion exchange processes, underground isolation of nuclear materials, and the development of materials for the immobilization of excess weapons Pu.
This report provides an introduction to the various probabilistic methods developed roughly between 1956 and 1985 for performing reliability or probabilistic uncertainty analysis on complex systems. This exposition does not cover the traditional reliability methods (e.g., parallel-series systems) that might be found in the many reliability texts and reference materials. Rather, the report centers on analytical techniques that are relatively new and certainly less well known across the engineering community. Discussion of the analytical methods has been broken into two reports. This particular report is limited to those methods developed between 1956 and 1985. While a bit dated, the methods described in the later portions of this report still dominate the literature and provide a necessary technical foundation for more current research. A second report (Analytical Techniques 2) addresses methods developed since 1985. The flow of this report roughly follows the historical development of the various methods, so each new technique builds on the discussion of the strengths and weaknesses of previous techniques. To facilitate understanding of the various methods discussed, a simple two-dimensional problem is used throughout the report. The problem is used for discussion purposes only; conclusions regarding the applicability and efficiency of particular methods are based on secondary analyses and a number of years of experience by the author. This should be considered a living document in the sense that, as new methods or variations of existing methods are developed, the document and references will be updated to reflect the current state of the literature as much as possible. For those scientists and engineers already familiar with these methods, the discussion will at times seem rather obvious.
However, the goal of this effort is to provide a common basis for future discussions and, as such, it will hopefully be useful even to those well versed in probabilistic analysis and design techniques. There are clearly alternative methods of dealing with uncertainty (e.g., fuzzy set theory, possibility theory), but this discussion is limited to methods based on probability theory.
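As a concrete illustration of the kind of methods the report surveys (the report's own two-dimensional demonstration problem is not reproduced here), the sketch below compares a first-order reliability (FORM-style) estimate with crude Monte Carlo sampling on an invented linear limit state in standard normal space:

```python
import math, random

def g(x1, x2):
    # Illustrative linear limit state: failure when g < 0. By
    # construction the reliability index (distance from the origin to
    # the limit-state surface in standard normal space) is beta = 3.
    return 3.0 * math.sqrt(2.0) - x1 - x2

# First-order estimate: Pf ~ Phi(-beta) for a linear limit state.
beta = 3.0
pf_form = 0.5 * math.erfc(beta / math.sqrt(2.0))

# Crude Monte Carlo estimate with independent standard normal inputs.
random.seed(1)
n, failures = 200_000, 0
for _ in range(n):
    if g(random.gauss(0, 1), random.gauss(0, 1)) < 0:
        failures += 1
pf_mc = failures / n

print(f"FORM estimate: {pf_form:.2e}, Monte Carlo estimate: {pf_mc:.2e}")
```

For this linear case FORM is exact, so the two estimates agree to within Monte Carlo sampling error; for curved limit states the methods surveyed in the report diverge, which is precisely what motivates their comparison.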
When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
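The shortest-path idea in the final sentence can be sketched as follows. The graph, node names, and probabilities below are invented for illustration; using edge weights of -log(p), a shortest path in this metric is the attack path with the highest product of success probabilities:

```python
import heapq, math

# Toy attack graph: nodes are attacker states, edges are atomic attack
# steps annotated with an assumed probability of success. All node
# names and probabilities here are hypothetical.
attack_steps = {
    "outside":   [("user@web", 0.8), ("user@mail", 0.5)],
    "user@web":  [("root@web", 0.2), ("user@db", 0.6)],
    "user@mail": [("user@db", 0.4)],
    "user@db":   [("root@db", 0.5)],
    "root@web":  [("root@db", 0.9)],
    "root@db":   [],
}

def most_likely_path(graph, start, goal):
    """Dijkstra on edge weights -log(p): minimizing the summed weights
    maximizes the product of per-step success probabilities."""
    pq = [(0.0, start, [start])]
    settled = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return math.exp(-cost), path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, p in graph.get(node, []):
            heapq.heappush(pq, (cost - math.log(p), nxt, path + [nxt]))
    return 0.0, []

prob, path = most_likely_path(attack_steps, "outside", "root@db")
print(path, f"success probability {prob:.3f}")
```

Replacing -log(p) with a level-of-effort cost on each arc turns the same algorithm into a lowest-effort attack-path finder, matching the abstract's alternative weighting.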
Sandia National Laboratories performed vibration and shock testing on a Savannah River Hydride Transport Vessel (HTV) which is used for bulk shipments of tritium. This testing is required to qualify the HTV for transport in the H1616 shipping container. The main requirement for shipment in the H1616 is that the contents (in this case the HTV) have a tritium leak rate of less than 1x10^-7 cc/sec after being subjected to shock and vibration normally incident to transport. Helium leak tests performed before and after the vibration and shock testing showed that the HTV remained leaktight under the specified conditions. This report documents the tests performed and the test results.
Experiments were performed at SATURN, a high current z-pinch, to explore the feasibility of creating a hohlraum by imploding a tungsten wire array onto a low-density foam. Emission measurements in the 200--280 eV energy band were consistent with a 110--135 eV Planckian before the target shock heated, or stagnated, on-axis. Peak pinch radiation temperatures of nominally 160 eV were obtained. Measured early time x-ray emission histories and temperature estimates agree well with modeled performance in the 200--280 eV band using a 2D radiation magneto-hydrodynamics code. However, significant differences are observed in comparisons of the x-ray images and 2D simulations.
Field Jr., R.V.; Grigoriadis, K.M.; Bergman, L.A.; Skelton, R.E.
Random variations, whether in the input signal or the system parameters, are present in nearly all engineering systems of interest. As a result, nondeterministic modeling techniques must somehow account for these variations to ensure validity of the solution. As might be expected, this is a difficult proposition and the focus of many current research efforts. Controlling seismically excited structures is one pertinent application of nondeterministic analysis and is the subject of the work presented herein. This overview paper is organized into two sections. First, techniques to assess system reliability, in a context familiar to civil engineers, are discussed. Second, and as a consequence of the first, active control methods that ensure good performance in this random environment are presented. It is the hope of the authors that these discussions will ignite further interest in the area of reliability assessment and design of controlled civil engineering structures.
Interpretation of compression stress-relaxation (CSR) experiments for elastomers in air is complicated by (1) the presence of both physical and chemical relaxation and (2) anomalous diffusion-limited oxidation (DLO) effects. For a butyl material, the authors first use shear relaxation data to indicate that physical relaxation effects are negligible during typical high temperature CSR experiments. They then show that experiments on standard CSR samples (~15 mm diameter when compressed) lead to complex non-Arrhenius behavior. By combining reaction kinetics based on the historic basic autoxidation scheme with a diffusion equation appropriate to disk-shaped samples, they derive a theoretical DLO model appropriate to CSR experiments. Using oxygen consumption and permeation rate measurements, the theory shows that important DLO effects are responsible for the observed non-Arrhenius behavior. To minimize DLO effects, they introduce a new CSR methodology based on the use of numerous small disk samples strained in parallel. Results from these parallel, minidisk experiments lead to Arrhenius behavior with an activation energy consistent with values commonly observed for elastomers, allowing more confident extrapolated predictions. In addition, excellent correlation is noted between the CSR force decay and the oxygen consumption rate, consistent with the expectation that oxidative scission processes dominate the CSR results.
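The Arrhenius extrapolation underlying such predictions can be sketched as follows. The temperatures and rate data below are invented, not the paper's butyl results; fitting ln(rate) against 1/T yields an activation energy and allows extrapolation to a service temperature:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Hypothetical force-decay rates from accelerated CSR aging at three
# elevated temperatures (all values invented for illustration).
T_C  = [125.0, 110.0, 95.0]         # aging temperature, Celsius
rate = [1.0e-2, 3.2e-3, 9.0e-4]     # relative force-decay rate, 1/h

# Arrhenius model: ln(rate) = ln(A) - Ea/(R*T). Fit by least squares
# on x = 1/T (K^-1), y = ln(rate).
x = [1.0 / (t + 273.15) for t in T_C]
y = [math.log(r) for r in rate]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) /
         sum((xi - xbar) ** 2 for xi in x))
Ea = -slope * R                      # activation energy, J/mol
lnA = ybar - slope * xbar

# Extrapolate the degradation rate to a 25 C service temperature.
rate_25 = math.exp(lnA + slope / (25.0 + 273.15))
print(f"Ea = {Ea/1000:.0f} kJ/mol, rate at 25 C = {rate_25:.2e} 1/h")
```

The paper's point is that this extrapolation is only trustworthy once DLO artifacts are removed, since DLO-distorted data are non-Arrhenius and do not fall on a single line in this plot.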
The Optical Assembly (OA) for the Multispectral Thermal Imager (MTI) program has been fabricated, assembled, and successfully performance tested. It represents a major milestone toward completion of this earth-observing electro-optical imaging sensor, which is to be operated in low earth orbit. Along with its wide field of view (WFOV), 1.82° along-track by 1.38° cross-track, and a comprehensive on-board calibration system, the pushbroom imaging sensor employs a single mechanically cooled focal plane with 15 spectral bands covering a wavelength range from 0.45 to 10.7 µm. The OA has an off-axis three-mirror anastigmatic (TMA) telescope with a 36-cm unobscured clear aperture. The two key performance criteria, 80% enpixeled energy in the visible and radiometric stability of 1% 1σ in the visible/near-infrared (VNIR) and short-wavelength infrared (SWIR), of 1.45% 1σ in the medium-wavelength infrared (MWIR), and of 0.53% 1σ in the long-wavelength infrared (LWIR), as well as its low weight (less than 49 kg) and volume constraint (89 cm x 44 cm x 127 cm), drive the overall design configuration of the OA and its fabrication requirements.
With the increased use of public key cryptography, faster modular multiplication has become an important cryptographic issue. Almost all public key cryptography, including most elliptic curve systems, uses modular multiplication, and modular multiplication, particularly for the large public key moduli, is very slow. Increasing the speed of modular multiplication is almost synonymous with increasing the speed of public key cryptography. There are two parts to modular multiplication: multiplication and modular reduction. Though there are fast methods for multiplying and fast methods for modular reduction, they do not mix well. Most fast techniques require integers to be in a special form. These special forms are not related, and converting from one form to another is more costly than using the standard techniques. To date it has been better to use the fast modular reduction technique coupled with standard multiplication. Standard modular reduction is much more costly than standard multiplication; fast modular reduction (Montgomery's method) reduces the reduction cost to approximately that of a standard multiply. Of the fast multiplication techniques, the residue number system (RNS) technique is one of the most popular. It is simple, converting a large convolution (multiply) into many smaller independent ones. Not only do residue number systems increase speed, but the independent parts allow for parallelization. RNS form implies working modulo another constant. Depending on the relationship between these two constants, reduction or division may be possible, but not both. This paper describes a new technique using ideas from both Montgomery's method and RNS. It avoids the form-conversion problem and allows fast reduction and multiplication. Since RNS form is used throughout, it also allows the entire process to be parallelized.
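Montgomery's method, one of the two ingredients the paper combines, can be sketched as a minimal textbook REDC (this is not the paper's new combined technique, and the small modulus below is purely illustrative):

```python
def montgomery_setup(N, k):
    # Choose R = 2^k with R > N and N odd, so gcd(R, N) = 1.
    R = 1 << k
    N_prime = (-pow(N, -1, R)) % R   # N' with N*N' = -1 (mod R)
    return R, N_prime

def redc(T, N, R, N_prime, k):
    """Montgomery reduction: returns T * R^-1 mod N for T < R*N,
    using only multiplies, adds, masks, and shifts (no division by N)."""
    m = ((T & (R - 1)) * N_prime) & (R - 1)   # m = T * N' mod R
    t = (T + m * N) >> k                      # exact division by R
    return t - N if t >= N else t

def mont_mul(a_bar, b_bar, N, R, N_prime, k):
    """Multiply two values held in Montgomery form (x*R mod N)."""
    return redc(a_bar * b_bar, N, R, N_prime, k)

# Example: compute 7 * 11 mod 97 entirely in Montgomery form.
N, k = 97, 8
R, N_prime = montgomery_setup(N, k)
a_bar = (7 * R) % N                 # convert into Montgomery form
b_bar = (11 * R) % N
c_bar = mont_mul(a_bar, b_bar, N, R, N_prime, k)
c = redc(c_bar, N, R, N_prime, k)   # convert back out
print(c)  # 77
```

The one-time conversions into and out of Montgomery form are exactly the "special form" overhead the abstract describes; they amortize well only when many multiplications are chained, which is why mixing this form with RNS form is the hard part the paper addresses.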
The authors conducted perforation experiments with 4340 Rc 38 and T-250 maraging steel long-rod projectiles and HY-100 steel target plates at striking velocities between 80 and 370 m/s. Flat-ended rod projectiles with lengths of 89 and 282 mm were machined to nominally 30 mm diameter so they could be launched from a 30-mm powder gun without sabots. The target plates were rigidly clamped at a 305-mm diameter and had nominal thicknesses of 5.3 and 10.5 mm. Four sets of experiments were conducted to show the effects of rod length and plate thickness on the measured ballistic limit and residual velocities. In addition to measuring striking and residual projectile velocities, the authors obtained framing-camera data on the back surfaces of several plates that clearly showed the plate deformation and plug ejection process. They also present a beam model that exhibits qualitatively the experimentally observed mechanisms.
This paper introduces a new configuration of parallel manipulator called the Rotopod, which is constructed entirely from revolute joints. The Rotopod consists of two platforms connected by six legs and exhibits six Cartesian degrees of freedom. The Rotopod is first compared with other all-revolute-joint parallel manipulators to show its similarities and differences. The inverse kinematics for this mechanism are developed and used to analyze the accessible workspace of the mechanism. Optimization is performed to determine the Rotopod design configurations that maximize the accessible workspace subject to desirable functional constraints.
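The general inverse-kinematics pattern for parallel manipulators, mapping a desired platform pose to per-leg joint coordinates, can be sketched for the familiar leg-length (Stewart-Gough) case. The Rotopod's all-revolute formulation differs in the final joint-variable computation, and all geometry below is invented:

```python
import math

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
            [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
            [-sp,   cp*sr,            cp*cr]]

def inverse_kinematics(pose, base_pts, plat_pts):
    """For each leg i, form the leg vector l_i = p + R*b_i - a_i from
    the platform pose (p, R), base attachment a_i, and platform
    attachment b_i; here the joint variable is the leg length |l_i|."""
    x, y, z, yaw, pitch, roll = pose
    R = rot_zyx(yaw, pitch, roll)
    lengths = []
    for a, b in zip(base_pts, plat_pts):
        Rb = [sum(R[i][j] * b[j] for j in range(3)) for i in range(3)]
        v = (x + Rb[0] - a[0], y + Rb[1] - a[1], z + Rb[2] - a[2])
        lengths.append(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2))
    return lengths

# Hexagonal attachment layout; radii and pose values are arbitrary.
base = [(math.cos(t), math.sin(t), 0.0)
        for t in (math.radians(60 * i) for i in range(6))]
plat = [(0.5 * math.cos(t), 0.5 * math.sin(t), 0.0)
        for t in (math.radians(60 * i + 30) for i in range(6))]
print(inverse_kinematics((0, 0, 1.0, 0, 0, 0), base, plat))
```

Sweeping the pose over a grid and rejecting poses whose joint variables violate functional limits is the standard way such leg-by-leg inverse kinematics are used to map the accessible workspace, as the abstract describes.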
This paper investigates a new aspect of fine-motion planning for the micro domain. As parts approach 1--10 µm or less in outside dimensions, interactive forces such as van der Waals and electrostatic forces become major factors that greatly change the assembly sequence and path plans. It has been experimentally shown that assembly plans in the micro domain are not reversible: the motions required to pick up a part are not the reverse of the motions required to release it. This paper develops the mathematics required to determine the goal regions for pickup, holding, and release of a micro-sphere being handled by a rectangular tool.
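The scale effect described above can be illustrated with a back-of-the-envelope comparison of sphere-on-flat van der Waals adhesion to particle weight. The Hamaker constant and contact separation below are assumed typical values, not taken from the paper:

```python
import math

# Illustrative comparison: sphere-on-flat van der Waals adhesion,
# F = A*r / (6*d^2), versus the sphere's weight, for an assumed
# Hamaker constant A and contact separation d.
A   = 1.0e-19      # Hamaker constant, J (typical order of magnitude)
d   = 4.0e-10      # contact separation, m (~0.4 nm, assumed)
rho = 2330.0       # density of silicon, kg/m^3
g   = 9.81         # m/s^2

for r in (1e-6, 5e-6, 1e-5):                  # sphere radius, m
    f_vdw  = A * r / (6.0 * d * d)            # adhesion, scales as r
    weight = (4.0 / 3.0) * math.pi * r**3 * rho * g   # scales as r^3
    print(f"r = {r*1e6:4.1f} um: F_vdW/weight = {f_vdw/weight:.1e}")
```

Because adhesion scales as r while weight scales as r^3, the ratio grows rapidly as parts shrink; for micrometer-scale spheres adhesion dominates by several orders of magnitude, which is why releasing a part requires motions that are not simply the reverse of picking it up.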
Sandia National Laboratories has developed a distributed, high-fidelity simulation system for training and planning small-team operations. The system provides an immersive environment populated by virtual objects and humans capable of displaying complex behaviors. The work has focused on developing the behaviors required to carry out complex tasks and decision making under stress. Central to this work are techniques for creating behaviors for virtual humans and for dynamically assigning behaviors to computer-generated forces (CGF) to allow scenarios without fixed outcomes. Two prototype systems have been developed that illustrate these capabilities: MediSim, a trainer for battlefield medics, and VRaptor, a system for planning, rehearsing, and training assault operations.
This paper presents an analysis of the thermal effects on radioactive material (RAM) transportation packages of a fire in an adjacent shipboard compartment. The analysis assumes that the fire in the adjacent hold is an engine-room fire. Computational fluid dynamics (CFD) analysis tools were used in order to include convective heat transfer effects. The analysis results were compared to experimental data gathered in a series of tests on the US Coast Guard ship Mayo Lykes located at Mobile, Alabama.
The Agile Manufacturing Prototyping System (AMPS) is being integrated at Sandia National Laboratories. AMPS consists of state-of-the-industry flexible manufacturing hardware and software enhanced with Sandia advancements in sensor- and model-based control; automated programming, assembly, and task planning; flexible fixturing; and automated reconfiguration technology. AMPS is focused on the agile production of complex electromechanical parts. It currently includes 7 robots (4 Adept One, 2 Adept 505, 1 Staubli RX90), conveyance equipment, and a collection of process equipment forming a flexible production line capable of assembling a wide range of electromechanical products. This system became operational in September 1995. Additional smart manufacturing processes will be integrated in the future. An automated spray cleaning workcell capable of handling alcohol and similar solvents was added in 1996, as well as parts cleaning and encapsulation equipment, automated deburring, and automated vision inspection stations. Plans for 1997 and the out years include adding manufacturing processes for the rapid prototyping of electronic components, such as soldering, paste dispensing, and pick-and-place hardware.
The direct connection of information, captured in forms such as CAD databases, to the factory floor is enabling a revolution in manufacturing. Rapid response to very dynamic market conditions is becoming the norm rather than the exception. In order to provide economical rapid fabrication of small numbers of variable products, one must design with manufacturing constraints in mind. In addition, flexible manufacturing systems must be programmed automatically to reduce the time for product change over in the factory and eliminate human errors. Sensor based machine control is needed to adapt idealized, model based machine programs to uncontrolled variables such as the condition of raw materials and fabrication tolerances.
The Internet and the applications it supports are revolutionizing the way people work together. This paper presents four case studies in engineering collaboration that new Internet technologies have made possible. These cases include assembly design and analysis, simulation, intelligent machine system control, and systems integration. From these cases, general themes emerge that can guide the way people will work together in the coming decade.
Data authentication as provided by digital signatures is a well-known technique for verifying data sent via untrusted network links. Recent work has extended digital signatures to allow jointly generated signatures using threshold techniques. In addition, new proactive mechanisms have been developed to protect the joint private key over long periods of time and to allow each of the parties involved to verify the actions of the others. In this paper, the authors describe an application in which proactive digital signature techniques are a particularly valuable tool. They describe the proactive DSA protocol and discuss the underlying software tools that they found valuable in developing an implementation. Finally, they note difficulties they experienced, and continue to experience, in implementing this complex cryptographic protocol.
This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.
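The flavor of a least-squares "best estimate" can be shown with a one-variable inverse-variance combination of a calculated and a measured fluence. The actual methodology adjusts a multigroup spectrum with full covariance matrices per the ASTM consensus standards; all numbers below are invented:

```python
# One-variable caricature of least-squares adjustment: combine a
# transport-calculated fluence with a dosimetry-based estimate,
# weighting each by the inverse of its variance. Values are invented.
calc, sig_calc = 4.2e19, 0.15 * 4.2e19   # calculated fluence, n/cm^2
meas, sig_meas = 3.6e19, 0.08 * 3.6e19   # measured estimate, n/cm^2

w_c, w_m = 1.0 / sig_calc**2, 1.0 / sig_meas**2
best = (w_c * calc + w_m * meas) / (w_c + w_m)
sig_best = (w_c + w_m) ** -0.5           # adjusted uncertainty

print(f"best estimate {best:.2e} +/- {sig_best:.1e} n/cm^2")
```

The adjusted value lies between the two inputs, pulled toward the better-known one, and its uncertainty is smaller than either input's, which is the basic appeal of the adjustment approach the report reviews.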
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix-based workstations, a replacement was needed. This package uses the IDL software, available from Research Systems Incorporated in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of platforms, including IBM, Hewlett-Packard, and Sun workstations, Microsoft Windows computers, Macintosh computers, and Digital Equipment Corporation VMS and Alpha systems; thus, xdamp is portable across many platforms. The author has verified operation, albeit with some minor IDL bugs, on personal computers running Windows 95 and Windows NT, IBM Unix platforms, DEC Alpha and VMS systems, HP 9000/700-series workstations, and Macintosh computers, both regular and PowerPC versions. Version 3 adds the capability to manipulate images to the original xdamp capabilities.
In this work the authors report results for narrowband amplifiers designed for milliwatt and submilliwatt power consumption using JFET and pseudomorphic high electron mobility transistor (PHEMT) GaAs-based technologies. Enhancement-mode JFETs were used to design both a hybrid amplifier with off-chip matching and a monolithic microwave integrated circuit (MMIC) with on-chip matching. The hybrid amplifier achieved 8--10 dB of gain at 2.4 GHz and 1 mW; the MMIC achieved 10 dB of gain at 2.4 GHz and 2 mW. Submilliwatt circuits were also explored using 0.25-µm PHEMTs: power levels of 25 µW were achieved with 5 dB of gain for a 215 MHz hybrid amplifier. These results demonstrate significantly lower power consumption, with either the JFETs or the PHEMTs, than prior MESFET, heterostructure field-effect transistor (HFET), or Si bipolar results from other laboratories.
The performance of vertical cavity surface emitting lasers (VCSELs) has improved greatly in recent years. Much of this improvement can be attributed to the use of native oxide layers within the laser structure, providing both electrical and optical transverse confinement. Understanding this optical confinement will be vital for the future realization of yet smaller lasers with ultralow threshold currents. Here the authors report the spectral and modal properties of small (0.5 µm to 5 µm current aperture) VCSELs and identify Joule heating as a dominant effect in the resonator properties of the smallest lasers.
The authors review the use of in-situ normal incidence reflectance, combined with a virtual interface model, to monitor and control the growth of complex compound semiconductor devices. The technique is being used routinely on both commercial and research metal-organic chemical vapor deposition (MOCVD) reactors and in molecular beam epitaxy (MBE) to measure growth rates and high temperature optical constants of compound semiconductor alloys. The virtual interface approach allows one to extract the calibration information in an automated way without having to estimate the thickness or optical constants of the alloy, and without having to model underlying thin film layers. The method has been used in a variety of data analysis applications collectively referred to as ADVISOR (Analysis of Deposition using Virtual Interfaces and Spectroscopic Optical Reflectance). This very simple and robust monitor, combined with the ADVISOR method, provides the equivalent of a real-time reflection high-energy electron diffraction (RHEED) tool for both MBE and MOCVD applications.
The phase-out of ozone-depleting solvents has forced industry to look to solvents such as alcohols, terpenes, and other flammable solvents to perform critical cleaning processes. These solvents are not as efficient as the ozone-depleting solvents in terms of soil loading, cleaning time, and drying when used in standard cleaning processes such as manual sprays or ultrasonic baths. They also require special equipment designs to meet part cleaning specifications and operator safety requirements. This paper describes a cleaning system that incorporates the automated spraying of flammable solvents to effectively perform precision cleaning processes. Key to the project's success was the development of software that controls the robotic system and automatically generates robotic cleaning paths from three-dimensional CAD models of the items to be cleaned.
Deep, high-aspect-ratio Si etching (HARSE) has shown potential for passive self-alignment of dissimilar materials and devices on Si carriers or waferboards. The Si can be etched to specific depths and lateral dimensions to accurately place or locate discrete components (e.g., lasers, photodetectors, and fiber optics) on a Si carrier. It is critical to develop processes that maintain the dimensions of the mask, yield highly anisotropic profiles for deep features, and maintain the anisotropy at the base of the etched feature. In this paper the authors report process conditions for HARSE that yield etch rates exceeding 3 µm/min and well-controlled, highly anisotropic etch profiles. Examples of potential application to advanced packaging technologies are also shown.
Moving commercial cargo across the US-Mexico border is currently a complex, paper-based, error-prone process that incurs expensive inspections and delays at several ports of entry in the Southwestern US. Improved information handling will dramatically reduce border dwell time, variation in delivery time, and inventories, and will give better control of the shipment process. The Border Trade Facilitation System (BTFS) is an agent-based collaborative work environment that assists geographically distributed commercial and government users with transshipment of goods across the US-Mexico border. Software agents mediate the creation, validation, and secure sharing of shipment information and regulatory documentation over the Internet, using the World Wide Web to interface with human actors. Agents are organized into agencies, each representing a commercial firm or government agency. Agents perform four specific functions on behalf of their user organizations: (1) agents with domain knowledge elicit commercial and regulatory information from human specialists through forms presented via web browsers; (2) agents mediate information from forms with diverse ontologies, copying invariant data from one form to another and thereby eliminating the need for duplicate data entry; (3) cohorts of distributed agents coordinate the work flow among the various information providers, monitoring overall progress of the documentation and the location of the shipment to ensure that all regulatory requirements are met prior to arrival at the border; (4) agents provide status information to human actors and attempt to influence them when problems are predicted.
On March 20, 1998, Sandia National Laboratories performed a double-blind test of the DKL LifeGuard human presence detector and tracker. The test was designed to allow the device to search for individuals well within the product's published operational parameters. The Test Operator of the DKL LifeGuard was provided by the manufacturer and was a high-ranking member of DKL management. The test was developed and implemented to verify the performance of the device as specified by the manufacturer. The device failed to meet its published specifications and it performed no better than random chance.
Despite the best preventative measures, ruptured hoses, spills, and leaks occur with the use of all hydraulic equipment. Although these releases do not usually produce a RCRA-regulated waste, they are often a reportable occurrence. Clean-up and the subsequent administrative procedure involve additional costs, labor, and work delays. Concerns over these releases, especially those related to Sandia National Laboratories (SNL) vehicles hauling waste on public roads, prompted Fleet Services (FS) to seek an alternative to standard petroleum-based hydraulic fluid. Since 1996 SNL has participated in a pilot program with the University of Northern Iowa (UNI) and selected vehicle manufacturers, notably John Deere, to field-test hydraulic fluid produced from soybean oil in twenty of its vehicles. The vehicles included loaders, graders, sweepers, forklifts, and garbage trucks. Research was conducted for several years at UNI to modify and market soybean oils for industrial uses. Soybean oil ranks first in worldwide production of vegetable oils (29%) and represents a tremendous renewable resource. Initial tests with soybean oil showed excellent lubrication and wear-protection properties. Lack of oxidative stability and polymerization of the oil were concerns. These concerns are being addressed through genetic alteration, chemical modification, and the use of various additives, and the improved lubricant is in the field-testing stage.
The aim of this laboratory-directed research and development project was to study amorphous carbon (a-C) thin films for eventual cold-cathode electron emitter applications. The development of robust cold-cathode emitters is likely to have significant implications for modern technology and possibly launch a new industry: vacuum microelectronics (VME). The potential impact of VME on Sandia's national security missions, such as defense against military threats and economic challenges, is profound. VME enables new microsensors and intrinsically radiation-hard electronics compatible with MOSFET and IMEM technologies. Furthermore, VME is expected to result in a breakthrough technology for the development of high-visibility, low-power flat-panel displays. This work covers four important research areas. First, the authors studied the nature of the C-C bonding structures within these a-C thin films. Second, they determined the changes in film structure resulting from thermal annealing, to simulate the effects of device processing on a-C properties. Third, they performed detailed electrical transport measurements as a function of annealing temperature, to correlate changes in transport properties with structural changes and to propose a model for transport in these a-C materials, with implications for the nature of electron emission. Finally, they used scanning atom probes to determine important aspects of the nature of emission in a-C.
Net erosion rates of carbon target plates have been measured in situ for the DIII-D lower divertor. The principal method of obtaining these data is the DiMES sample probe. Recent experiments have focused on erosion at the outer strike-point (OSP) of two divertor plasma conditions: (1) attached (Te > 40 eV) ELMing plasmas and (2) detached (Te < 2 eV) ELMing plasmas. The erosion rates for the attached cases are > 10 cm/year, even with incident heat flux < 1 MW/m{sup 2}. In this case, measurements and modeling agree for both gross and net carbon erosion, showing that the near-surface transport and redeposition of the carbon are well understood and that effective sputtering yields are > 10%. In ELM-free discharges, this erosion rate can account for the rate of carbon accumulation in the core plasma. Divertor plasma detachment eliminates physical sputtering, while spectroscopically measured chemical erosion yields are also found to be low (Y(C/D{sup +}) {le} 2.0 {times} 10{sup {minus}3}). This leads to suppression of net erosion at the outer strike-point, which becomes a region of net redeposition ({approximately} 4 cm/year). The private flux wall is measured to be a region of net redeposition with dense, high-neutral-pressure, attached divertor plasmas. Leading edges intercepting parallel heat flux ({approximately} 50 MW/m{sup 2}) have very high net erosion rates ({approximately} 10 {micro}m/s) at the OSP of an attached plasma. Leading-edge erosion, and subsequent carbon redeposition, caused by tile gaps can account for half of the deuterium codeposition in the DIII-D divertor.
This paper describes the design of silicon surface-micromachined devices and the associated design issues. Some of the tools described are adaptations of macro analysis tools. Design issues in the microdomain differ greatly from those encountered in the macrodomain. Microdomain forces caused by electrostatic attraction, surface tension, van der Waals forces, and others can be more significant than inertia, friction, or gravity. Design and analysis tools developed for macrodomain devices are inadequate in most cases for microdomain devices. Microdomain-specific design and analysis tools are being developed, but are still immature and lack adequate functionality. The fundamental design process for surface-micromachined devices is significantly different from the design process employed for macro-sized devices. In this paper, MEMS design is discussed, as well as the tools used to develop the designs and the issues relating fabrication processes to design. Design and analysis of MEMS devices are directly coupled to the silicon micromachining processes used to fabricate the devices. These processes introduce significant design limitations and must be well understood before designs can be successfully developed. In addition, some silicon micromachining fabrication processes facilitate the integration of silicon micromachines with microelectronics on-chip. For devices requiring on-chip electronics, the fabrication processes introduce additional design constraints that must be taken into account during design and analysis.
Many manufacturing companies today expend more effort on upgrade and disposal projects than on clean-slate design, and this trend is expected to become more prevalent in coming years. However, commercial CAD tools are better suited to initial product design than to the product's full life cycle. Computer-aided analysis, optimization, and visualization of life-cycle assembly processes based on the product CAD data can help ensure accuracy and reduce the effort expended in planning these processes for existing products, as well as provide design-for-lifecycle analysis for new designs. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their companies and products as well as to the life cycles of their products. Designing products for easy assembly and disassembly during their entire life cycles, for purposes including service, field repair, upgrade, and disposal, is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and considering its intended life cycle. Different goals and constraints (compared to initial assembly) require one to revisit the fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited to either academic studies of issues in assembly planning or applied studies of life-cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life-cycle assembly processes.
An agile microsystem manufacturing technology has been developed that provides the designer with an unprecedented five levels of independent polysilicon surface-micromachine films. Typical surface-micromachining processes offer a maximum of three levels, making this the most complex surface-micromachining process technology developed to date. Leveraging the extensive infrastructure of the microelectronics industry, polysilicon surface-micromachining offers microelectromechanical systems (MEMS) the same advantages of high volume, high reliability, and batch fabrication that have been achieved with integrated circuits (ICs). These systems, comprised of microscopic-sized mechanical elements, are laying the foundation for a rapidly expanding, multi-billion-dollar industry which impacts the automotive, consumer product, and medical industries, to name only a few.
Composite doublers, or repair patches, provide an innovative repair technique which can enhance the way aircraft are maintained. Instead of riveting multiple steel or aluminum plates to facilitate an aircraft repair, it is possible to bond a single boron-epoxy composite doubler to the damaged structure. In order for composite doublers to achieve widespread use in the civil aviation industry, it is imperative that methods be developed which can quickly and reliably assess the integrity of the doubler. In this study, a specific composite application was chosen on an L-1011 aircraft in order to focus the tasks on application and operation issues. Primary among the inspection requirements for these doublers is the identification of disbonds between the composite laminate and the aluminum parent material, and of delaminations within the composite laminate. Surveillance of cracks or corrosion in the parent aluminum material beneath the doubler is also a concern. No single nondestructive inspection (NDI) method can inspect for every flaw type, so it is important to be aware of available NDI techniques and to properly address their capabilities and limitations. A series of NDI tests was conducted on laboratory test structures and on full-scale aircraft fuselage sections. Specific challenges, unique to bonded composite doubler applications, were highlighted. An array of conventional and advanced NDI techniques was evaluated. Flaw-detection sensitivity studies were conducted on applicable eddy current, ultrasonic, X-ray, and thermography-based devices. The application of these NDI techniques to composite doublers, and the results from test specimens which were loaded to provide a changing flaw profile, are presented in this report. It was found that a team of these techniques can identify flaws in composite doubler installations well before they reach critical size.
This report describes ship accident event trees, ship collision and ship fire frequencies, representative ships and shipping practices, a model of ship penetration depths during ship collisions, a ship fire spread model, cask to environment release fractions during ship collisions and fires, and illustrative consequence calculations. This report contains the following appendices: Appendix 1 -- Representative Ships and Shipping Practices; Appendix 2 -- Input Data for Minorsky Calculations; Appendix 3 -- Port Ship Speed Distribution; and Appendix 4 -- Cask-to-Environment Release Fractions.
The engineering of advanced semiconductor heterostructure materials and devices requires a detailed understanding of, and control over, the structure and properties of semiconductor materials and devices at the atomic to nanometer scale. Cross-sectional scanning tunneling microscopy has emerged as a unique and powerful method to characterize structural morphology and electronic properties in semiconductor epitaxial layers and device structures at these length scales. The basic experimental techniques in cross-sectional scanning tunneling microscopy are described, and some representative applications to semiconductor heterostructure characterization drawn from recent investigations in the authors' laboratory are discussed. Specifically, they describe some recent studies of InP/InAsP and InAsP/InAsSb heterostructures in which nanoscale compositional clustering has been observed and analyzed.
On July 1--2, 1997, Sandia National Laboratories hosted the External Committee to Evaluate Sandia's Risk Expertise. Under the auspices of SIISRS (Sandia's International Institute for Systematic Risk Studies), Sandia assembled a blue-ribbon panel of experts in the field of risk management to assess Sandia's risk programs labs-wide. Panelists were chosen not only for their own expertise, but also for their ability to add balance to the panel as a whole. Presentations were made to the committee on the risk activities at Sandia. In addition, a tour of Sandia's research and development programs in support of the US Nuclear Regulatory Commission was arranged. The panel attended a poster session featuring eight presentations and demonstrations of selected projects. Overviews and viewgraphs from the presentations are included in Volume 1 of this report. The presentations relate to weapons, nuclear power plants, transportation systems, architectural surety, environmental programs, and information systems.
An exploratory study was conducted under the Architectural Surety Program to examine the possibility of modifying fracture of glass in the shock-wave environment associated with terrorist bombings. The intent was to explore strategies to reduce the number and severity of injuries resulting from those attacks. The study consisted of a series of three experiments at the Energetic Materials Research and Testing Center (EMRTC) of the New Mexico Institute of Mining and Technology at Socorro, NM, in which annealed and tempered glass sheets were exposed to blast waves at several different levels of overpressure and specific impulse. A preliminary assessment of the response of tempered glass to the blast environment suggested that inducing early failure would result in lowering fragment velocity as well as reducing the loading from the window to the structure. To test that possibility, two different and novel procedures (indentation flaws and spot annealing) were used to reduce the failure strength of the tempered glass while maintaining its ability to fracture into small cube-shaped fragments. Each experiment involved a comparison of the performance of four sheets of glass with different treatments.
SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. Previously, the SPH algorithm has been subjected to detailed testing and analysis to determine the feasibility of using the coupled finite-element/SPH code PRONTO/SPH for the analysis of various types of underwater explosion problems involving fluid-structure and shock-structure interactions. Here, SPH and Eulerian simulations are used to study the details of underwater bubble collapse, particularly the formation of re-entrant jets during collapse, and the loads generated on nearby structures by the jet and the complete collapse of the bubble. Jet formation is shown to be due simply to the asymmetry caused by nearby structures which disrupt the symmetry of the collapse. However, the load generated by the jet is a minor precursor to the major loads which occur at the time of complete collapse of the bubble.
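The gridless Lagrangian character of SPH comes from its density estimate, in which each particle's density is a kernel-weighted sum over its neighbors. A minimal one-dimensional sketch of that estimate, using the standard cubic-spline kernel, is given below; this is an illustration of the general SPH technique, not PRONTO/SPH code:

```python
def cubic_spline_kernel(r, h):
    """Standard SPH cubic-spline smoothing kernel in one dimension,
    with smoothing length h and compact support of radius 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density(positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    return [
        sum(m_j * cubic_spline_kernel(x_i - x_j, h)
            for x_j, m_j in zip(positions, masses))
        for x_i in positions
    ]
```

For uniformly spaced particles carrying mass rho0 * dx each, the estimate recovers rho0 in the interior to within about one percent; the characteristic deficiency near free boundaries is also visible at the ends of the particle array.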
This paper discusses thermal analysis in support of probabilistic risk assessment (PRA) to predict the heating of cargoes shipped in vehicles like the Safe Secure Trailer. Fire environments contribute very significantly to the risk associated with ground transport of special nuclear materials. The tradeoff between thermal model complexity and the affordable number of scenarios used to represent the hazard space is discussed as it impacts PRA. The relevant heat transfer mechanisms are discussed, along with the applicability of methods from the literature for analysis of these mechanisms. Many of the subject's real problems remain too complex for affordable and rigorous analysis, and available models are generally restricted to idealizations that are quickly obviated by real effects. Approximate treatment methods, striving to produce conservative yet realistic estimates, are also discussed.
The suitability of modified thermal-battery technology for use as a potential power source for geothermal borehole applications is under investigation. As a first step, the discharge processes that take place in LiSi/LiBr-KBr-LiF/FeS{sub 2} thermal cells were studied at temperatures of 350 C and 400 C using pelletized cells with immobilized electrolyte. Incorporation of a reference electrode allowed the relative contribution of each electrode to the overall cell polarization to be determined. The results of single-cell tests are presented, along with preliminary data for cells based on a lower-melting CsBr-LiBr-KBr eutectic salt.
An array of chemically-sensitive field-effect transistors (CHEMFETs) that measure both work function and bulk resistance changes in thin films was used to detect volatile organic compounds. Carbon black/organic polymer composite films were deposited onto the CHEMFETs using an automated microdispensing method.
Reeves, M.; Fryar, D.G.; Statham, W.H.; Knowles, M.K.
The Waste Isolation Pilot Plant (WIPP) is a planned geologic repository for permanent disposal of transuranic waste generated by US government defense programs. Located near Carlsbad in southeastern New Mexico, the facility's disposal regions are mined from the bedded salt of the Salado Formation at a depth of approximately 652 m. Four shafts service the operational needs of the facility for air intake, exhaust, waste handling, and salt handling. These shafts range in diameter from 3.5 to 6.1 m and extend from the ground surface to the repository. During repository closure, following an operational life of approximately 50 years, these shafts will be sealed in accordance with an acceptable design. Under contract to the US Department of Energy (DOE), the Repository Isolation Systems Department (RISD) of Sandia National Laboratories has developed a design for the WIPP shaft sealing system. This design has been reviewed by the US Environmental Protection Agency (EPA) as part of the 1996 WIPP Compliance Certification Application (CCA). An effective shaft sealing system for the WIPP will limit liquid and gas flows and permanently prevent the migration of radiological or other hazardous constituents through the sealed shafts from the repository to the accessible environment. Because of these performance objectives, a significant effort has been directed toward evaluation of the seal design. Whereas RISD (1996) provides a comprehensive discussion, this paper focuses on only one aspect of the evaluation effort, namely a full-shaft fluid-flow model.
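The kind of performance measure such a fluid-flow model evaluates can be illustrated with Darcy's law for flow through a low-permeability seal column. The one-line sketch below, with entirely hypothetical parameter values, is illustrative only and is not drawn from the WIPP shaft model:

```python
def darcy_flow(k, A, dp, mu, L):
    """Darcy's law for volumetric flow through a seal column:
    Q = k * A * dp / (mu * L), with permeability k (m^2), cross-section
    A (m^2), pressure difference dp (Pa), viscosity mu (Pa*s), and
    column length L (m)."""
    return k * A * dp / (mu * L)

# Hypothetical values (for illustration only): a 650-m sealed shaft of
# 10 m^2 cross-section, brine viscosity ~1 mPa*s, 1 MPa pressure
# difference, and seal permeability 1e-18 m^2.
Q = darcy_flow(k=1e-18, A=10.0, dp=1e6, mu=1e-3, L=650.0)  # m^3/s
```

Even these rough numbers convey why a well-sealed shaft passes only a tiny volumetric flow over its design life.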
The "public outcomes" from research universities are educated students and research that extends the frontiers of knowledge. Measures of these "public outcomes" are inadequate to permit either research or education consumers to select research universities based on quantitative performance data. Research universities annually spend over $20 billion on research; 60% of these funds are provided by Federal sources. Federal funding for university research has recently grown at an annual rate near 6%, during a time period when other performers of Federal research have experienced real funding cuts. Ten universities receive about 25% of the Federal funds spent on university research. Numerous studies of US research universities are reporting storm clouds. Concerns include balancing research and teaching, the narrow focus of engineering education, college costs, continuing education, and public funding of foreign-student education. The absence of research on the "public outcomes" from university research results in opinion, politics, and mythology forming the basis of too many decisions. Therefore, the authors recommend studies of other nations' research universities, studies of various economic models of university research, analysis of the peer review process and how well it identifies the most capable research practitioners and at what cost, and studies of research-university ownership of intellectual property that can lead to increased "public outcomes" from publicly funded research performed by research universities. They advocate two practices that could increase the "public outcomes" from university research: the development of science roadmaps that link science research to "public outcomes," and "public outcome" metrics. Changes in the university research culture and expanded use of the Internet could also lead to increased "public outcomes."
They recommend the use of tax incentives to encourage companies to develop research partnerships with research universities.
Vapor diffusion in a porous medium in the presence of its own liquid may be enhanced by pore-scale processes, such as condensation and evaporation across isolated liquid islands. Webb and Ho (1997) developed a mechanistic pore-scale model of these processes under steady-state conditions, in which condensation onto and evaporation from a liquid island are equal. The vapor diffusion rate was significantly enhanced by these liquid-island processes, by up to an order of magnitude compared to a dry porous medium. However, vapor transport by diffusion is often complicated by transient effects, such as in drying applications, in which net evaporation of liquid may further augment the vapor flux from diffusion. The influence of transient effects on the enhancement factors for vapor diffusion is evaluated in this paper. In addition, the effect of vapor-pressure lowering on the enhancement factor and on pore-scale vapor fluxes is shown.
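The enhancement described above is commonly expressed as a factor multiplying the ordinary Fickian flux through the porous medium. The following minimal sketch assumes a simple porosity/tortuosity form for the dry-medium diffusivity; the actual Webb and Ho model is mechanistic and pore-scale, so this is only a macroscopic caricature:

```python
def vapor_flux(beta, porosity, tortuosity, D_free, dc_dx):
    """Fickian vapor flux through a porous medium with an enhancement
    factor beta: J = -beta * (porosity / tortuosity) * D_free * dc/dx.
    beta = 1 recovers the dry-medium case; the liquid-island processes
    described above can raise beta by up to an order of magnitude."""
    return -beta * (porosity / tortuosity) * D_free * dc_dx

# Hypothetical values: water vapor in air, free-gas diffusivity ~2.5e-5 m^2/s,
# and a concentration gradient of -10 (mol/m^3)/m.
dry_flux = vapor_flux(beta=1.0, porosity=0.3, tortuosity=2.0,
                      D_free=2.5e-5, dc_dx=-10.0)
wet_flux = vapor_flux(beta=8.0, porosity=0.3, tortuosity=2.0,
                      D_free=2.5e-5, dc_dx=-10.0)
```

The ratio of the two fluxes is simply beta, which is what the paper's transient analysis re-examines.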
The magnetically controlled plasma opening switch (MCPOS) is an advanced plasma opening switch that utilizes magnetic fields to improve operation. Magnetic fields always dominate terawatt, pulsed-power plasma opening switches. For that reason, the MCPOS uses controlled applied magnetic fields with magnitude comparable to the self-magnetic field of the storage inductor. One applied field holds the plasma in place while energy accumulates in the storage inductor; another applied field then pushes the plasma away from the cathode to allow energy to flow downstream. Over a ten-month period, an MCPOS was designed, built, and tested on DECADE Module 2 at Physics International. The peak drive current was 1.8 MA in 250 ns. The output was up to 1 MA into an electron-beam load. The radiation temporal pulse width averaged 60 nanoseconds full-width at half-maximum. The peak load voltage ranged from one to two megavolts. The experiments demonstrated efficient power flow through a long, low-impedance magnetically insulated transmission line between the magnetically controlled plasma opening switch and the load.
Economic and other factors, accompanying developments in physics, mathematics, and particularly computer technology, are shifting a substantial portion of the experimental resources associated with large-scale engineering projects from physical testing to modeling and simulation. In the process, the priorities for selecting meaningful and informative tests and simulations are also changing. This paper describes issues related to experimental design and how the goals and priorities of experimental design for these problems are changing to accommodate this shift in experimentation. Issues, priorities, and new methods of approach are discussed.
The exposure of an ATG graphite sample to the DIII-D divertor plasma was provided by the DiMES (Divertor Material Evaluation System) mechanism. The graphite sample, arranged to receive the parallel heat flux on a small region of the surface, was exposed to 600 ms of outer strike-point plasma. The sample was constructed to collect the eroded material, directed downward into a trapping zone onto a Si disk collector. The average heat flux onto the graphite sample during the exposure was about 200 W/cm{sup 2}, and the parallel heat flux was about 10 kW/cm{sup 2}. After the exposure, the graphite sample and Si collector disk were analyzed using SEM, NRA, RBS, Auger spectroscopy, and IR and Raman spectroscopy. Thermal desorption was also studied. The coating deposited on the graphite sample is an amorphous carbon layer. Just upstream of the high-heat-flux zone, the redeposition layer has a globular structure. The deposition layer on the Si disk is also composed of carbon, but has a diamond-like structure. The areal densities of C and D in the layer deposited on the Si disk varied in the poloidal and toroidal directions. The maximum D/C areal density ratio is about 0.23, the maximum carbon areal density is about 3.8 {times} 10{sup 18} cm{sup {minus}2}, and the maximum D areal density is about 3 {times} 10{sup 17} cm{sup {minus}2}. The thermal desorption spectrum had a peak at 1,250 K.
Recent observations have indicated that lithium pellet injection wall conditioning plays an important role in achieving the enhanced supershot regime in TFTR. However, little is understood about the behavior of lithium-coated limiter walls interacting with edge plasmas. In the final campaign of TFTR, a cylindrical carbon-fiber-composite probe was inserted into the boundary plasma region and exposed to ohmically heated deuterium discharges with lithium pellet injection. The ion-drift-side probe surface exhibits signs of codeposition of lithium, carbon, oxygen, and deuterium, whereas the electron side essentially indicates high-temperature erosion. It is found that lithium is incorporated in these codeposits in the form of oxide at a concentration of a few percent. On the electron side, lithium has been found to penetrate deeply into the probe material, presumably via rapid diffusion through the interplanar spaces of the graphite crystallites. Though not conclusive, materials mixing in the carbon-lithium system appears to be a key process in successful lithium wall conditioning.
The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At SC '97, Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and Lawrence Livermore National Laboratory (LLNL) combined their SC '97 activities within a single research booth under the Accelerated Strategic Computing Initiative (ASCI) banner. For the second year in a row, Sandia provided the network design and coordinated the networking activities within the booth. At SC '97, Sandia elected to demonstrate the capability of the Computational Plant, the visualization of scientific data, scalable ATM encryption, and ATM video and telephony capabilities. LLNL demonstrated an application, called RIPTIDE, that also required significant networking resources; the RIPTIDE application had computational visualization and steering capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.
The SAMPLL (Simplified Analytical Model of Penetration with Lateral Loading) computer code was originally developed in 1984 to realistically yet economically predict penetrator/target interactions. Since the code's inception, its use has spread throughout the conventional and nuclear penetrating weapons community. During the penetrator/target interaction, the resistance of the material being penetrated imparts both lateral and axial loads on the penetrator. These loads cause changes to the penetrator's motion (kinematics). SAMPLL uses empirically based algorithms, formulated from an extensive experimental data base, to replicate the loads the penetrator experiences during penetration. The lateral loads resulting from angle of attack and trajectory angle of the penetrator are explicitly treated in SAMPLL. The loads are summed and the kinematics calculated at each time step. SAMPLL has been continually improved, and the current version, Version 6.0, can handle cratering and spall effects, multiple target layers, penetrator damage/failure, and complex penetrator shapes. Version 6 uses the latest empirical penetration equations, and also automatically adjusts the penetrability index for certain target layers to account for layer thickness and confinement. This report describes the SAMPLL code, including assumptions and limitations, and includes a user's guide.
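The general flavor of summing loads and updating kinematics at each time step can be sketched with a classical Poncelet-style axial resistance law. This is an illustrative stand-in, not SAMPLL's empirical load algorithms, and every parameter value below is hypothetical:

```python
def penetration_depth(m, A, v0, a, b, dt=1e-5):
    """Explicit time-stepping of axial penetrator kinematics with a
    Poncelet-style target resistance, m*dv/dt = -A*(a + b*v**2):
    integrate velocity and position until the penetrator stops.
    Parameters (all hypothetical): mass m (kg), presented area A (m^2),
    impact velocity v0 (m/s), and target constants a (Pa), b (kg/m^3)."""
    v, x = v0, 0.0
    while v > 0.0:
        decel = A * (a + b * v * v) / m   # resistance load divided by mass
        v -= decel * dt                   # velocity update
        x += max(v, 0.0) * dt             # depth update
    return x
```

As expected from the physics, the predicted depth grows with impact velocity; a code like SAMPLL layers empirical fits, lateral loading, and layered-target logic on top of this basic integration loop.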
Sandia National Laboratories designed the H1616 container for transport of Type B quantities of radioactive materials. During the most recent recertification cycle, questions were raised concerning the ability of drum type containers with locking rings to survive the hypothetical accident sequence when the puncture test was oriented to specifically attack the locking ring. A series of tests has been performed that conclusively demonstrates that the specially designed locking ring on the H1616 performs adequately in this environment.
Recent K-shell scaling experiments on the 20 MA Z accelerator at Sandia National Laboratories have shown that large diameter (40 and 55 mm) arrays can be imploded with 80 to 210 wires of titanium or stainless steel. These implosions have produced up to 150 kJ of > 4.5 keV x-rays and 65 kJ of > 6.0 keV x-rays in 7 to 18 ns FWHM pulses. This is a major advance in plasma radiation source (PRS) capability since there is presently limited test capability above 3 keV. In fact, Z produces more > 4.5 keV x-rays than previous aboveground simulators produced at 1.5 keV. Z also produces some 200 kJ of x-rays between 1 and 3 keV in a continuous spectrum for these loads. The measured spectra and yields are consistent with 1-dimensional MHD calculations performed by NRL. Thermoelastic calorimeters, PVDF gauges, and optical impulse gauges have been successfully fielded with these sources.
Sandia National Laboratories has been investigating the use of rigid polyurethane foam (RPF) for military applications, particularly mine protection, for the past two years. Results of explosive experiments and mine/foam interaction experiments are presented. The RPF has proved effective in absorbing direct shock from explosives, and quantitative data are presented. As reported elsewhere, it has also proved effective in reducing the signature of vehicles passing over anti-tank (AT) mines, preventing the mines from firing. This paper presents the results of experiments done to understand the interaction of RPF with anti-craft (AC) mines during foam formation in shallow water in a scaled surf environment.
We describe use of AlAsSb/AlGaAsSb lattice matched to InP for distributed Bragg reflectors. These structures are integral to several surface normal devices, in particular vertical cavity surface emitting lasers. The high refractive index ratio of these materials allows formation of a highly reflective mirror with relatively few mirror pairs. As a result, we have been able to show for the first time the 77K CW operation of an optically pumped, monolithic, all-epitaxial vertical cavity laser, emitting at 1.56 {mu}m.
Sandia National Laboratories performs systems analysis of high risk, high consequence systems. In particular, Sandia is responsible for the engineering of nuclear weapons, exclusive of the explosive physics package. In meeting this responsibility, Sandia has developed fundamental approaches to safety and a process for evaluating safety based on modeling and simulation. These approaches provide confidence in the safety of our nuclear weapons. Similar concepts may be applied to improve the safety of other high consequence systems.
Further improvements to the Waveform Correlation Event Detection System (WCEDS) developed by Sandia Laboratory have made it possible to test the system on the accepted Comprehensive Test Ban Treaty (CTBT) seismic monitoring network. For our test interval we selected a 24-hour period from December 1996, and chose to use the Reviewed Event Bulletin (REB) produced by the Prototype International Data Center (PIDC) as ground truth for evaluating the results. The network is heterogeneous, consisting of array and three-component sites, and as a result requires more flexible waveform processing algorithms than were available in the first version of the system. For simplicity and superior performance, we opted to use the spatial coherency algorithm of Wagner and Owens (1996) for both types of sites. Preliminary tests indicated that the existing version of WCEDS, which ignored directional information, could not achieve satisfactory detection or location performance for many of the smaller events in the REB, particularly those in the south Pacific where the network coverage is unusually sparse. To achieve an acceptable level of performance, we made modifications to include directional consistency checks for the correlations, making the regions of high correlation much less ambiguous. These checks require the production of continuous azimuth and slowness streams for each station, which is accomplished by means of FK processing for the arrays and power polarization processing for the three-component sites. In addition, we added the capability to use multiple frequency-banded data streams for each site to increase sensitivity to phases whose frequency content changes as a function of distance.
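The directional consistency check described above can be sketched as follows: per-station correlation streams contribute to the network stack for a trial grid point only where the observed azimuth (from FK or polarization processing) agrees with the azimuth predicted for that point. This is a minimal illustration in Python with hypothetical array shapes, not the actual WCEDS implementation:

```python
import numpy as np

def masked_network_stack(corr, obs_az, pred_az, tol_deg=20.0):
    """Stack per-station correlation streams, zeroing samples whose
    observed back-azimuth disagrees with the azimuth predicted for the
    trial grid point (a directional consistency check).

    corr    : (nsta, nt) correlation streams
    obs_az  : (nsta, nt) observed azimuths in degrees
    pred_az : (nsta,)    predicted azimuths to the trial point, degrees
    """
    # smallest angular difference, handling wrap-around at 360 degrees
    diff = np.abs((obs_az - pred_az[:, None] + 180.0) % 360.0 - 180.0)
    consistent = diff <= tol_deg
    return (corr * consistent).sum(axis=0)
```

The masking makes regions of high correlation less ambiguous: a station whose measured azimuth points away from the trial location no longer inflates that location's stack.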
The on-site inspection provisions in many current and proposed arms control agreements require extensive preparation and training on the part of both the Inspection Teams (inspectors) and Inspected Parties (hosts). Traditional training techniques include lectures, table-top inspections, and practice inspections. The Augmented Computer Exercise for Inspection Training (ACE-IT), an interactive computer training tool, increases the utility of table-top inspections. ACE-IT is used for training both inspectors and hosts to conduct a hypothetical challenge inspection under the Chemical Weapons Convention (CWC). The training covers the entire sequence of events in the challenge inspection regime, from initial notification of an inspection through post-inspection activities. The primary emphasis of the training tool is on conducting the inspection itself, and in particular, implementing the concept of managed access. (Managed access is a technique used to assure the inspectors that the facility is in compliance with the CWC, while at the same time protecting sensitive information unrelated to the CWC.) Information for all of the activities is located in the electronic "Exercise Manual." In addition, interactive menus are used to negotiate access to each room and to alternative information during the simulated inspection. ACE-IT also demonstrates how various inspection provisions impact compliance determination and the protection of sensitive information.
In this user's guide, details for running BREAKUP are discussed. BREAKUP allows the widely used overset grid method to be run in a parallel computer environment to achieve faster run times for computational field simulations over complex geometries. The overset grid method permits complex geometries to be divided into separate components. Each component is then gridded independently. The grids are computationally rejoined in a solver via interpolation coefficients used for grid-to-grid communications of boundary data. Overset grids have been in widespread use for many years on serial computers, and several well-known Navier-Stokes flow solvers have been extensively developed and validated to support their use. One drawback of serial overset grid methods has been the extensive compute time required to update flow solutions one grid at a time. Parallelizing the overset grid method overcomes this limitation by updating each grid or subgrid simultaneously. BREAKUP prepares overset grids for parallel processing by subdividing each overset grid into statically load-balanced subgrids. Two-dimensional examples with sample solutions and three-dimensional examples are presented.
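The idea of static load balancing can be sketched with a greedy largest-first assignment of subgrids to processors. This is a hedged illustration in Python; BREAKUP's actual splitting heuristics and data formats are not reproduced here:

```python
import heapq

def assign_subgrids(cell_counts, nprocs):
    """Greedy static load balancing: assign each subgrid (by cell count,
    largest first) to the currently least-loaded processor.
    Returns a list of (load, proc, subgrid_indices) tuples."""
    heap = [(0, p, []) for p in range(nprocs)]
    heapq.heapify(heap)
    # placing the largest subgrids first keeps the final loads close
    for idx, cells in sorted(enumerate(cell_counts), key=lambda x: -x[1]):
        load, p, grids = heapq.heappop(heap)
        grids.append(idx)
        heapq.heappush(heap, (load + cells, p, grids))
    return sorted(heap, key=lambda t: t[1])
```

Because each processor then updates its subgrids simultaneously with the others, the wall-clock time per solver iteration is set by the most heavily loaded processor, which is exactly what the greedy assignment minimizes approximately.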
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
Design metaphors play an important role in the development of many software projects. However, the influence of metaphors on project functionality, design methodology and the interactions among members of the development team is not well understood. This paper seeks insights into these issues by examining the experiences of a design team in building a system under the influence of a particularly strong design metaphor.
One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal: to converge on the location of the source phenomenon, confirm its intensity, and remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous decentralized coordination strategy. This paper presents the alpha beta coordination strategy, a family of collective search algorithms that are based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are motivated to improve but are conservative, and tend to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots expend energy exploring new territory, and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
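The dynamic role partitioning can be sketched as a simple threshold rule on each robot's status relative to the team's best. This is a minimal, illustrative Python sketch; the threshold rule and the `margin` parameter are assumptions, not the paper's actual decision logic:

```python
def assign_roles(status, margin=0.1):
    """Partition a robot team into alpha (explorer) and beta (conservative)
    roles from a sensor-based status measure. Robots whose status trails
    the team's best by more than `margin` take the alpha role and explore;
    the rest hold position as beta.

    status : dict mapping robot id -> scalar status measure
    """
    best = max(status.values())
    return {rid: ('alpha' if best - s > margin else 'beta')
            for rid, s in status.items()}
```

Each robot can evaluate this rule locally from broadcast status values, which is what makes the strategy decentralized: no robot needs a global controller, only the current best status in the collective.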
A summary of recent advances in cryogenic-aerosol-based wafer-processing technology for semiconductor wafer cleaning is presented. An argon/nitrogen cryogenic-aerosol-based tool has been developed and optimized for removal of particulate contaminants. The development of the tool involved a combination of theoretical (modeling) and experimental efforts aimed at understanding the mechanisms of aerosol formation and the relation between aerosol characteristics and particle-removal ability. It is observed that the highest cleaning efficiencies are achieved, in general, when the cryogenic aerosol is generated by the explosive atomization of an initially liquid jet of the cryogenic mixture.
Proton implantation into the buried oxide of Si/SiO{sub 2}/Si structures does not introduce mobile protons. The cross section for capture of radiation-induced electrons by mobile protons is two orders of magnitude smaller than for electron capture by trapped holes. The data provide new insights into the atomic mechanisms governing the generation and radiation tolerance of mobile protons in SiO{sub 2}. This can lead to improved techniques for production and radiation hardening of radiation tolerant memory devices.
A sensor system simulation has been developed which aids in the evaluation of a proposed fast framing staring sensor as it will perform in its operational environment. Beginning with a high resolution input image, a sequence of frames at the target sensor resolution is produced using the assumed platform motion and the contribution of various noise sources as input data. The resulting frame sequence can then be used to help define system requirements, to aid algorithm development, and to predict system performance. In order to assess the performance of a sensor system, the radiance measured by the system is modeled using a variety of scenarios. For performance prediction, the modeling effort is directed toward providing the ability to determine the minimum Noise Equivalent Target (NET) intensities for each band of the sensor system. The NET is calculated at the entrance pupil of the instrument in such a way that the results can be applied to a variety of point source targets and collection conditions. The intent is to facilitate further study within the user community as new mission areas and/or targets of interest develop that are not addressed explicitly during sensor conceptual design.
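The frame-generation chain described above (platform motion, downsampling to the detector pitch, noise injection) can be sketched in a few lines. This is a simplified illustration under assumed models (integer-pixel shifts, block averaging, Poisson shot noise plus Gaussian read noise); the actual tool's resampling and noise sources are more detailed:

```python
import numpy as np

def simulate_frames(scene, factor, shifts, read_noise, rng):
    """Produce sensor-resolution frames from a high-resolution scene:
    shift per the assumed platform motion, block-average down to the
    detector pitch, then add shot noise and Gaussian read noise."""
    frames = []
    for dy, dx in shifts:
        # integer-pixel platform motion (a simplifying assumption)
        moved = np.roll(np.roll(scene, dy, axis=0), dx, axis=1)
        h, w = moved.shape[0] // factor, moved.shape[1] // factor
        # block-average down to the target sensor resolution
        binned = moved[:h * factor, :w * factor] \
            .reshape(h, factor, w, factor).mean(axis=(1, 3))
        noisy = (rng.poisson(np.clip(binned, 0, None))
                 + rng.normal(0.0, read_noise, binned.shape))
        frames.append(noisy)
    return np.stack(frames)
```

A sequence generated this way can feed detection-algorithm development directly, since the frames carry both the assumed motion and realistic per-frame noise.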
The 91.84Sn-3.33Ag-4.83Bi and 96.5Sn-3.5Ag Pb-free solders were evaluated for surface mount circuit board interconnects. The 63Sn-37Pb solder provided the baseline data. All three solders exhibited suitable manufacturability per a defect analysis of circuit board test vehicles. Thermal cycling had no significant effect on the 91.84Sn-3.33Ag-4.83Bi solder joints. Some degradation in the form of grain boundary sliding was observed in 96.5Sn-3.5Ag and 63Sn-37Pb solder joints. The solder joint microstructures showed a slight degree of degradation under thermal shock exposure for all of the solders tested. Trends in the solder joint shear strengths could be traced to the presence of Pd in the solder, the source of which was the Pd/Ni finish on the circuit board conductor features. The higher intrinsic strengths of the Pb-free solders encouraged the failure path to locate near the solder/substrate interface, where Pd combined with Sn to form brittle PdSn{sub 4} particles, resulting in reduced shear strengths.
The telecommunications sector plays a pivotal role in the system of increasingly connected and interdependent networks that make up national infrastructure. An assessment of the probable structure and function of the bit-moving industry in the twenty-first century must include issues associated with the surety of telecommunications. The term surety, as used here, means confidence in the acceptable behavior of a system in both intended and unintended circumstances. This paper outlines various engineering approaches to surety in systems generally, and in the telecommunications infrastructure specifically. It uses the experience and expectations of the US telecommunications system as an example of the global challenges. The paper examines the principal factors underlying the change to more distributed systems in this sector, assesses surety issues associated with these changes, and suggests several possible strategies for mitigation. It also studies the ramifications of what could happen if this sector became a target for those seeking to compromise a nation's security and economic well-being. Experts in this area generally agree that the US telecommunications sector will eventually respond in a way that meets market demands for surety. Questions remain open, however, about confidence in the telecommunications sector and the nation's infrastructure during unintended circumstances, such as those posed by information warfare or by cascading software failures. Resolution of these questions is complicated by the lack of clear accountability of the private and public sectors for the surety of telecommunications.
The research focuses on the measurement of the nanomechanical properties associated with the interphase region of a polymer matrix fiber composite, with nanometer resolution, in chemically characterized model composites. The Interfacial Force Microscope (IFM) is employed to measure, with nanometer resolution, the mechanical properties of the interphase region of epoxy/glass fiber composites. The chemistry of the interphase is altered by adsorbing onto the fiber surface a coupling agent, 3-aminopropyltrimethoxysilane ({gamma}-APS), which is known to covalently bond to both the glass fiber surface and the epoxy resin. Recent work utilizing FT-IR fiber optic evanescent wave spectroscopy provides a method for the characterization of the interphase chemistry. This technique has been used to investigate the interphase chemistry of epoxy/amine curing agent/amine-terminated organosilane coupling agent/silica optical fiber model composites. This body of work has shown that a substantial fraction of the amine of the organosilane coupling agent does not participate in a reaction with the epoxy resin. This evidence suggests an interphase with mechanical properties significantly different from the bulk epoxy/amine matrix. Previous research has shown that drastic changes occur in the coupling agent chemistry, interphase chemistry, and composite mechanical properties as the amount of adsorbed coupling agent is varied over the industrially relevant range used in this work. A commercially available epoxy resin, EPON 828, and an aliphatic amine curing agent, EPI-CURE 3283, make up the polymer matrix in this study. The reinforcement is silica optical or E-glass fibers.
This report describes a 5 year, $10 million Sandia/industry project to develop an advanced borehole seismic source for use in oil and gas exploration and production. The development team included Sandia, Chevron, Amoco, Conoco, Exxon, Raytheon, Pelton, and GRI. The seismic source that was developed is a vertically oriented, axial point force, swept frequency, clamped, reaction-mass vibrator design. It was based on an early Chevron prototype, but the new tool incorporates a number of improvements which make it far superior to the original prototype. The system consists of surface control electronics, a special heavy-duty fiber optic wireline and draw works, a cablehead, hydraulic motor/pump module, electronics module, clamp, and axial vibrator module. The tool has a peak output of 7,000 lbs force and a useful frequency range of 5 to 800 Hz. It can operate in fluid filled wells with 5.5-inch or larger casing to depths of 20,000 ft and operating temperatures of 170 C. The tool includes fiber optic telemetry, force and phase control, provisions to add seismic receiver arrays below the source for single well imaging, and provisions for adding other vibrator modules to the tool in the future. The project yielded four important deliverables: a complete advanced borehole seismic source system with all associated field equipment; field demonstration surveys funded by industry showing the utility of the system; industrial sources for all of the hardware; and a new service company set up by an industrial partner to provide commercial surveys.
The use of a fracture mechanics based design for radioactive material transport (RAM) packagings has been the subject of extensive research for more than a decade. Sandia National Laboratories (SNL) has played an important role in the research and development of the application of this technology. Ductile iron has been internationally accepted as an exemplary material for the demonstration of a fracture mechanics based method of RAM packaging design and therefore is the subject of a large portion of the research discussed in this report. SNL's extensive research and development program, funded primarily by the US Department of Energy's Office of Transportation, Energy Management and Analytical Services (EM-76) and, in an auxiliary capacity, the Office of Civilian Radioactive Waste Management, is summarized in this document along with a summary of the research conducted at other institutions throughout the world. In addition to the research and development work, code and standards development and regulatory positions are also discussed.
Construction of a prestressed concrete containment vessel (PCCV) model is underway as part of a cooperative containment research program at Sandia National Laboratories. The work is co-sponsored by the Nuclear Power Engineering Corporation (NUPEC) of Japan and the US Nuclear Regulatory Commission (NRC). Preliminary analyses of the Sandia 1:4 Scale PCCV Model have determined axisymmetric global behavior and have estimated the potential for failure in several areas, including the wall-base juncture and near penetrations. Though the liner tearing failure mode has been emphasized, that assumption is largely based on experience with reinforced concrete containments. For the PCCV, the potential for shear failure at or near the liner tearing pressure may be considerable and requires detailed investigation. This paper examines the behavior of the PCCV in the region most susceptible to a radial shear failure, the wall-basemat juncture region. Prediction of shear failure in concrete structures is a difficult goal, both experimentally and analytically. As a structure begins to deform under an applied system of forces that produce shear, other deformation modes such as bending and tension/compression begin to influence the response. Analytically, difficulties lie in characterizing the decrease in shear stiffness and shear stress and in predicting the associated transfer of stress to reinforcement as cracks become wider and more extensive. This paper examines existing methods for representing concrete shear response and existing criteria for predicting shear failure, and it discusses application of these methods and criteria to the study of the 1:4 scale PCCV.
Federal laboratories have successfully filled many roles for the public; however, as the 21st century nears, it is time to rethink and reevaluate how Federal laboratories can better support the public and to identify new roles for this class of publicly owned institutions. The productivity of the Federal laboratory system can be increased by making use of public outcome metrics, by benchmarking laboratories, by deploying innovative new governance models, by partnerships of Federal laboratories with universities and companies, and by accelerating the transition of Federal laboratories and the agencies that own them into learning organizations. It is also necessary to learn how government-owned laboratories in other countries serve their publics. Taiwan's government laboratory, the Industrial Technology Research Institute, has been particularly successful in promoting economic growth. It is time to stop operating Federal laboratories as monopoly institutions; therefore, competition between Federal laboratories must be promoted. Additionally, Federal laboratories capable of addressing emerging 21st century public problems must be identified and given the challenge of serving the public in innovative new ways. Increased investment in case studies of particular programs at Federal laboratories and research on the public utility of a system of Federal laboratories could lead to increased productivity of laboratories. Elimination of risk-averse Federal laboratory and agency bureaucracies would also have a dramatic impact on the productivity of the Federal laboratory system. Appropriately used, the US Federal laboratory system offers the US an innovative advantage over other nations.
Radionuclide transport experiments were carried out using intact cores obtained from the Culebra member of the Rustler Formation inside the Waste Isolation Pilot Plant, Air Intake Shaft. Twenty-seven separate tests are reported here and include experiments with {sup 3}H, {sup 22}Na, {sup 241}Am, {sup 239}Np, {sup 228}Th, {sup 232}U and {sup 241}Pu, and two brine types, AIS and ERDA 6. The {sup 3}H was bound as water and provides a measure of advection, dispersion, and water self-diffusion. The other tracers were injected as dissolved ions at concentrations below solubility limits, except for americium. The objective of the intact rock column flow experiments is to demonstrate and quantify transport retardation coefficients (R) for the actinides Pu, Am, U, Th and Np in intact core samples of the Culebra Dolomite. The measured R values are used to estimate partition coefficients (kd) for the solute species. Those kd values may be compared to values obtained from empirical and mechanistic adsorption batch experiments, to provide predictions of actinide retardation in the Culebra. Three parameters that may influence actinide R values were varied in the experiments: core, brine, and flow rate. Testing five separate core samples from four different core borings provided an indication of sample variability. While most testing was performed with Culebra brine, limited tests were carried out with a Salado brine to evaluate the effect of intrusion of those lower waters. Varying flow rate provided an indication of rate dependent solute interactions such as sorption kinetics.
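The conversion from a measured retardation coefficient R to a partition coefficient kd can be illustrated with the standard porous-medium relation R = 1 + (rho_b / phi) * kd. This textbook relation is offered only as an illustration; the report's actual interpretation of the intact-core data may use a different transport model, and the density and porosity values below are placeholders, not Culebra measurements:

```python
def kd_from_retardation(R, bulk_density, porosity):
    """Back out a partition coefficient kd (mL/g) from a measured
    retardation coefficient R via R = 1 + (rho_b / phi) * kd.

    bulk_density : dry bulk density, g/mL
    porosity     : effective porosity, dimensionless
    """
    return (R - 1.0) * porosity / bulk_density
```

For example, R = 11 with an assumed bulk density of 2.5 g/mL and porosity of 0.25 gives kd = 1.0 mL/g; a non-sorbing tracer such as tritiated water has R = 1 and hence kd = 0.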
Wind-energy researchers at the National Wind Technology Center (NWTC), representing Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL), are developing a new, light-weight, modular data acquisition unit capable of acquiring long-term, continuous time-series data from small and/or dynamic wind-turbine rotors. The unit utilizes commercial data acquisition hardware, spread-spectrum radio modems, Global Positioning System receivers, and a custom-built programmable logic device. A prototype of the system is now operational, and initial field deployment is expected this summer. This paper describes the major subsystems comprising the unit, summarizes the current status of the system, and presents the current plans for near-term development of hardware and software.
Electronic sensing circuitry and microelectromechanical sense elements can be integrated to produce inertial instruments for applications unheard of a few years ago. This paper will describe the Sandia M3EMS fabrication process, inertial instruments that have been fabricated, and the results of initial characterization tests of micro-machined accelerometers.
Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and microelectronics circuitry of Micro-Electro-Mechanical Systems (MEMS) on the same chip. This integrated surface micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface micromachining process was selected for fabrication of the sensor due to the extreme measurement sensitivity potential associated with integrated microelectronics. Measurement electronics capable of measuring attofarad (10{sup {minus}18} farad) changes in capacitance were required due to the very small accelerometer proof mass (< 200 {times} 10{sup {minus}9} gram) used in this surface micromachining process. The small proof mass corresponded to small sensor deflections, which in turn required very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended plate mass configuration, was developed and the details of the design, modeling, and validation of the device will be presented in this paper. The device was analyzed using both conventional lumped parameter modeling techniques and finite element analysis tools. The device was tested and performed well over its design range.
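The attofarad sensing requirement can be made concrete with a parallel-plate estimate, C = eps0 * A / d. The plate area, gap, and deflection below are assumed round numbers for illustration, not the actual device geometry:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def delta_capacitance(area_m2, gap_m, deflection_m):
    """Capacitance change of an ideal parallel-plate sense element
    when the gap closes by a small deflection (C = eps0 * A / d)."""
    c0 = EPS0 * area_m2 / gap_m
    c1 = EPS0 * area_m2 / (gap_m - deflection_m)
    return c1 - c0
```

With an assumed 100 um x 100 um plate (1e-8 m^2), a 2 um gap, and a 1 nm deflection, the change is on the order of tens of attofarads, which is why electronics resolving 10{sup {minus}18} farad were needed to cover the full 1 to 50 k-G range.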