In the past three years, tremendous strides have been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA II) are the largest available in the laboratory. These z-pinch x-ray sources have great potential to drive high-yield inertial confinement fusion (ICF) reactions at affordable cost if several challenging technical problems can be overcome. Technical challenges in three key areas are discussed in this paper: (1) the design of a target for high yield, (2) the development of a suitable pulsed power driver, and (3) the design of a target chamber capable of containing the high fusion yield.
The Reproducing Kernel Particle Method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems. RKPM may be implemented in a mesh-full or a mesh-free manner and provides the ability to tune the method, via the selection of a dilation parameter and window function, in order to achieve the requisite numerical performance. RKPM also provides a framework for performing hierarchical computations, making it an ideal candidate for simulating multi-scale problems. Although RKPM has many appealing attributes, the method is quite new and its numerical performance is still being quantified with respect to more traditional discretization methods. In order to assess the numerical performance of RKPM, detailed studies of RKPM on a series of model partial differential equations have been undertaken. The results of von Neumann analyses for RKPM semi-discretizations of one- and two-dimensional, first- and second-order wave equations are presented in the form of phase and group errors. Excellent dispersion characteristics are found for the consistent mass matrix with the proper choice of dilation parameter. In contrast, row-sum lumping the mass matrix is shown to introduce severe lagging phase errors. A higher-order mass matrix improves the dispersion characteristics relative to the lumped mass matrix but still exhibits lagging phase errors relative to the fully integrated, consistent mass matrix.
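For reference, the error measures reported in such analyses follow standard definitions; the generic form below is ours, not an excerpt from the study. For a semi-discretization of the one-dimensional advection equation with exact dispersion relation ω = ck and numerical relation ω^h(k), the phase and group errors are

```latex
% Standard phase/group error definitions (illustrative; conventions vary).
\frac{c^{h}_{\mathrm{phase}}}{c} = \frac{\omega^{h}(k)}{c\,k},
\qquad
\frac{c^{h}_{\mathrm{group}}}{c} = \frac{1}{c}\,\frac{d\omega^{h}}{dk}.
```

Ratios below unity correspond to the lagging phase errors described above.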
In the past thirty-six months, great progress has been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA-II) are the largest available in the laboratory. These z-pinch x-ray sources have the potential to drive high-yield ICF reactions at affordable cost if several challenging technical problems can be overcome. In this paper, the recent technical progress with z-pinches will be described, and a technical strategy for achieving high-yield ICF with z-pinches will be presented.
A parachute system was designed and prototypes built to deploy a telemetry package behind an earth-penetrating weapon just before impact. The parachute was designed to slow the 10-lb telemetry package, and the wire connecting it to the penetrator, to 50 fps before impact occurred. The parachute system was designed to utilize a 1.3-ft-dia cross pilot parachute and a 10.8-ft-dia main parachute. A computer code normally used to model the deployment of suspension lines from a packed parachute system was modified to model the deployment of wire from the weapon forebody. Results of the design calculations are presented. Two flight tests of the WBS were conducted, but initiation of parachute deployment did not occur in either of the tests due to difficulties with other components. Thus, the trajectory calculations could not be verified with data. Draft drawings of the major components of the parachute system are presented.
A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
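The following sketch (hypothetical names and structure, not the EIGEN/VR code) illustrates the core idea: a kd-tree whose nodes are variously sized virtual voxels, traversed so that subtrees that cannot contain the isosurface are culled and descent stops at the analyst's requested level of detail.

```python
# Hypothetical sketch of level-of-detail isosurface culling in voxel space:
# each kd-tree node caches the min/max scalar value beneath it, so whole
# subtrees that cannot intersect the isosurface are skipped.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoxelNode:
    vmin: float                    # minimum scalar value in this subtree
    vmax: float                    # maximum scalar value in this subtree
    size: float                    # voxel edge length -- the LOD measure
    left: Optional["VoxelNode"] = None
    right: Optional["VoxelNode"] = None

def collect_voxels(node, iso, max_size, out):
    """Gather virtual voxels that straddle isosurface value `iso` and are
    no coarser than `max_size` (the analyst's requested level of detail)."""
    if node is None or not (node.vmin <= iso <= node.vmax):
        return                     # subtree cannot contain the isosurface
    if node.size <= max_size or (node.left is None and node.right is None):
        out.append(node)           # fine enough (or a leaf): emit this voxel
        return
    collect_voxels(node.left, iso, max_size, out)
    collect_voxels(node.right, iso, max_size, out)
```

Because both the threshold `iso` and the resolution `max_size` are arguments to the traversal rather than baked into precomputed geometry, the analyst can change either interactively without re-decimating surfaces.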
Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulating the effects of explosives on structures is a challenge because the explosive response is best simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Because of the differing methodologies and code architecture requirements of the two computational techniques, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes coupled explosive/structure interaction simulations difficult or next to impossible. Sandia National Laboratories has developed two techniques for solving this problem. The first is Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian techniques, that is especially suited to treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems, but it is especially suited to modeling problems involving the interaction of decoupled explosives with structures.
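For readers unfamiliar with SPH, the sketch below (ours, not PRONTO code) shows the kernel interpolation at the heart of the gridless method, using the common cubic spline smoothing kernel in one dimension.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline smoothing kernel W(r, h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)          # 1-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x_i, positions, masses, h):
    """Density at particle i as a kernel-weighted sum over its neighbors --
    no mesh is required, which is why SPH tolerates the extreme distortions
    of detonation products."""
    return sum(m_j * cubic_spline_kernel(x_i - x_j, h)
               for x_j, m_j in zip(positions, masses))
```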
The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high-end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V&V). These V&V activities will ensure that 100-TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development, resulting in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification, and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V&V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V&V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms employed by ALEGRA make it a subject for V&V that should typify what is required for many ASCI simulations.
CPA -- Cost and Performance Analysis -- is a methodology that joins Activity-Based Cost (ABC) estimation with performance-based analysis of physical protection systems. CPA offers system managers an approach that supports both tactical decision making and strategic planning. Current exploratory applications of the CPA methodology are addressing the analysis of alternative conceptual designs. To support these activities, the original architecture for CPA is being expanded to incorporate results from a suite of performance and consequence analysis tools such as JTS (Joint Tactical Simulation), ERAD (Explosive Release Atmospheric Dispersion), and blast effect models. The process flow for applying CPA to the development and analysis of conceptual designs is illustrated graphically.
The detection and removal of buried unexploded ordnance (UXO) and landmines is one of the most important problems facing the world today. Numerous detection strategies are being developed, including infrared, electrical conductivity, ground-penetrating radar, and chemical sensors. Chemical sensors rely on the detection of TNT molecules, which are transported from buried UXO/landmines by advection and diffusion in the soil. As part of this effort, numerical models are being developed to predict TNT transport in soils including the effect of precipitation and evaporation. Modifications will be made to TOUGH2 for application to the TNT chemical sensing problem. Understanding the fate and transport of TNT in the soil will affect the design, performance and operation of chemical sensors by indicating preferred sensing strategies.
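A generic statement (ours, not the report's formulation) of the kind of transport equation such models solve for the dissolved TNT concentration C is

```latex
% Illustrative advection-diffusion balance for TNT concentration C in soil
% moisture; the actual TOUGH2 formulation is more general (multiphase,
% nonisothermal).
\frac{\partial (\theta C)}{\partial t}
  = \nabla \cdot \left( \theta D \, \nabla C \right)
  - \nabla \cdot \left( \mathbf{q} \, C \right)
  - \lambda \, \theta C,
```

where θ is the moisture content, D the dispersion/diffusion tensor, q the Darcy flux driven by precipitation and evaporation, and λ a first-order loss rate.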
The US Department of Energy (DOE) is investigating Yucca Mountain, Nevada as a potential site for the disposal of high-level nuclear waste. The site is located near the southwest corner of the Nevada Test Site (NTS) in southern Nye County, Nevada. The underground Exploratory Studies Facility (ESF) tunnel traverses part of the proposed repository block. Alcove 5, located within the ESF, is being used to field two in situ ESF thermal tests: the Single Heater Test (SHT) and the Drift Scale Test (DST). Laboratory test specimens were collected from three sites within Alcove 5, including each in situ field test location and one additional site. The aim of the laboratory tests was to determine site-specific thermal and mechanical rock properties, including thermal expansion, thermal conductivity, unconfined compressive strength, and elastic moduli. In this paper, the results obtained for the SHT and DST area characterization are compared with data obtained from other locations at the proposed repository site. Results show that the thermal expansion and mechanical properties of Alcove 5 laboratory specimens differ slightly from the average values obtained on specimens from surface drillholes.
The Southwest Surety Institute was formed in 1996 to create unique, science-based educational programs in security engineering. The programs will integrate business, technology, and criminal justice elements to educate a new generation of security professionals. Graduates of the programs will better understand basic security system design and evaluation and will contribute to strengthening the body of knowledge in the area of security. A systematic approach incorporating people, procedures, and equipment will be taught, one that emphasizes basic security principles and establishes the science of security engineering. The use of performance measures in the analysis of designed systems will enable effective decisions by an enterprise and provide the rationale for investment in security systems. Along with educational programs, Institute members will conduct original research and development built on existing relationships with sponsors from government and industry in areas such as counterterrorism, microelectronics, banking, aviation, and sensor development. Additional information and updates on the Southwest Surety Institute are available via the Institute home page at www.emrtc.nmt.edu/ssi.
The International Thermonuclear Experimental Reactor (ITER) is envisioned to be the next major step in the world's fusion program beyond the present generation of tokamaks and is designed to study fusion plasmas with a reactor-relevant range of plasma parameters. During normal operation, it is expected that a fraction of the unburned tritium that is used to routinely fuel the discharge will be retained, together with deuterium, on the surfaces and in the bulk of the plasma facing materials (PFMs) surrounding the core and divertor plasma. An understanding of the basic retention mechanisms (physical and chemical) involved and their dependence upon plasma parameters and other relevant operating conditions is necessary for the accurate prediction of the amount of tritium retained at any given time in the ITER torus. Accurate estimates are essential to assess the radiological hazards associated with routine operation and with potential accident scenarios which may lead to mobilization of tritium that is not tenaciously held. Estimates are needed to establish the detritiation requirements for coolant water, to determine the plasma fueling and tritium supply requirements, and to establish the needed frequency and the procedures for tritium recovery and clean-up. The organization of this paper is as follows. Section 2 provides an overview of the design and operating conditions of the main components which define the plasma boundary of ITER. Section 3 reviews the erosion database and the results of recent relevant experiments conducted both in laboratory facilities and in tokamaks. These data provide the experimental basis and serve as an important benchmark for both model development (discussed in Section 4) and calculations (discussed in Section 5) that are required to predict tritium inventory build-up in ITER. Section 6 emphasizes the need to develop and test methods to remove the tritium from the codeposited C-based films and reviews the status and the prospects of the most attractive techniques. Section 7 identifies the unresolved issues and provides some recommendations on potential R&D avenues for their resolution. Finally, a summary is provided in Section 8.
This report describes the research accomplishments achieved under the LDRD Project "Double Electron Layer Tunneling Transistor." The main goal of this project was to investigate whether the recently discovered phenomenon of 2D-2D tunneling in GaAs/AlGaAs double quantum wells (DQWs), investigated in a previous LDRD, could be harnessed and implemented as the operating principle for a new type of tunneling device the authors proposed, the double electron layer tunneling transistor (DELTT). In parallel with this main thrust of the project, they also continued a modest basic research effort on DQW physics issues, with significant theoretical support. The project was a considerable success, with the main goal of demonstrating a working prototype of the DELTT having been achieved. Additional DELTT advances included demonstrating good electrical characteristics at 77 K, demonstrating both NMOS- and CMOS-like bistable memories at 77 K using the DELTT, demonstrating digital logic gates at 77 K, and demonstrating voltage-controlled oscillators at 77 K. In order to successfully fabricate the DELTT, the authors had to develop a novel flip-chip processing scheme, the epoxy-bond-and-stop-etch (EBASE) technique. This technique was later improved so as to be amenable to electron-beam lithography, allowing the fabrication of DELTTs with sub-micron features, which are expected to be extremely high speed. In the basic physics area they also made several advances, including a measurement of the effective mass of electrons in the hour-glass orbit of a DQW subject to in-plane magnetic fields, and both measurements and theoretical calculations of the full Landau level spectra of DQWs in both perpendicular and in-plane magnetic fields. This last result included the unambiguous demonstration of magnetic breakdown of the Fermi surface. Finally, they also investigated the concept of a far-infrared photodetector based on photon-assisted tunneling in a DQW. Absorption calculations showed a narrowband absorption which persisted to temperatures much higher than the photon energy being detected. Preliminary data on prototype detectors indicated that the absorption is not only narrowband, but can be tuned in energy through the application of a gate voltage.
The Yucca Mountain Project is currently evaluating the coupled thermal-mechanical-hydrological-chemical (TMHC) response of the potential repository host rock through an in situ thermal testing program. A drift scale test (DST) was constructed during 1997 and heaters were turned on in December 1997. The DST includes nine canister-sized containers with thirty operating heaters each located within the heated drift (HD) and fifty wing heaters located in boreholes in both ribs, with a total power output of nominally 210 kW. A total of 147 boreholes (combined length of 3.3 km) house most of the over 3700 TMHC sensors connected with 201 km of cabling to a central data acquisition system. The DST is located in the Exploratory Studies Facility in a 5-m diameter drift approximately 50 m in length. Heating will last up to four years and cooling will last another four years. The rock mass surrounding the DST will experience a harsh thermal environment, with rock surface temperatures expected to reach a maximum of about 200 °C. This paper describes the process of designing the DST. The first 38 m of the 50-m long Heated Drift (HD) is dedicated to collection of data that will lead to a better understanding of the complex coupled TMHC processes in the host rock of the proposed repository. The final 12 m is dedicated to evaluating the interactions between the heated rock mass and cast-in-place (CIP) concrete ground support systems at elevated temperatures. In addition to a description of the DST design, data from site characterization and a general description of the analyses and analysis approach used to design the test and make pretest predictions are presented. Test-scoping and pretest numerical predictions of one-way thermal-hydrologic, thermal-mechanical, and thermal-chemical behaviors have been completed (TRW, 1997a). These analyses suggest that a dry-out zone will be created around the DST and a 10,000 m³ volume of rock will experience temperatures above 100 °C. The HD will experience large stress increases, particularly in the crown of the drift. Thermoelastic displacements of up to about 16 mm are predicted for some thermomechanical gages. Additional analyses using more complex models will be performed during the conduct of the DST, and the results will be compared with measured data.
Here, the authors report on the lubricating effects of self-assembled monolayers (SAMs) on MEMS by measuring static and dynamic friction with two polysilicon surface-micromachined devices. The first test structure is used to study friction between laterally sliding surfaces; with the second, friction between vertical sidewalls can be investigated. Both devices are SAM-coated following the sacrificial oxide etch, and the microstructures emerge released and dry from the final water rinse. The coefficient of static friction, μ_s, was found to decrease from 2.1 ± 0.8 for the SiO₂ coating to 0.11 ± 0.01 and 0.10 ± 0.01 for films derived from octadecyltrichlorosilane (OTS) and 1H,1H,2H,2H-perfluorodecyltrichlorosilane (FDTS), respectively. Both OTS and FDTS SAM-coated structures exhibit dynamic coefficients of friction, μ_d, of 0.08 ± 0.01. These values were found to be independent of the apparent contact area, and remain unchanged after 1 million impacts at 5.6 µN (17 kPa), indicating that these SAMs continue to act as boundary lubricants despite repeated impacts. Measurements during sliding friction from the sidewall friction test structure give comparable initial μ_d values of 0.02 at a contact pressure of 84 MPa. After 15 million wear cycles, μ_d was found to rise to 0.27. Wear of the contacting surfaces was examined by SEM. Standard deviations in the μ data for SAM treatments indicate uniform coating coverage.
Development of well-controlled hypervelocity launch capabilities is the first step toward understanding material behavior at extreme pressures and temperatures not attainable using conventional gun technology. In this paper, the techniques used to extend the launch capabilities of a two-stage light-gas gun to 10 km/s, and their use to determine material properties at pressure and temperature states higher than any previously obtained in the laboratory, are summarized. Time-resolved interferometric techniques have been used to determine the shock loading and release characteristics of materials impacted at 10 km/s by titanium and aluminum fliers launched by the only three-stage light-gas gun developed to date. In particular, the Sandia three-stage light-gas gun, also referred to as the hypervelocity launcher (HVL), which is capable of launching 0.5-mm to 1.0-mm thick by 6-mm to 19-mm diameter plates to velocities approaching 16 km/s, has been used to obtain the necessary impact velocities. The VISAR interferometric particle-velocity technique has been used to determine shock loading and release profiles in aluminum and titanium at impact velocities of 10 km/s.
Economic and political demands are driving computational investigation of systems and processes as never before. It is foreseen that questions of safety, optimality, risk, robustness, likelihood, credibility, etc., will increasingly be posed to computational modelers. This will require the development and routine use of computing infrastructure that incorporates computational physics models within the framework of larger meta-analyses involving aspects of optimization, nondeterministic analysis, and probabilistic risk assessment. This paper describes elements of an ongoing case study involving the computational solution of several meta-problems in optimization, nondeterministic analysis, and optimization under uncertainty pertaining to the surety of a generic weapon safing device. The goal of the analyses is to determine the worst-case heating configuration in a fire, that is, the configuration that most severely threatens the integrity of the device. A large, 3-D, nonlinear, finite element thermal model is used to determine the transient thermal response of the device in this coupled conduction/radiation problem. Implications of some of the numerical aspects of the thermal model for the selection of suitable and efficient optimization and nondeterministic analysis algorithms are discussed.
The Russia-US joint program on the safe management of nuclear materials was initiated to address common technical issues confronting the US and Russia in the management of excess weapons-grade nuclear materials. The program was initiated after the 1993 Tomsk-7 accident. This paper provides an update on program activities since 1996. The Fourth US-Russia Nuclear Materials Safety Management Workshop was conducted in March 1997. In addition, a number of contracts with Russian institutes have been placed by Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL). These contracts support research related to the safe disposition of excess plutonium (Pu) and highly enriched uranium (HEU). Topics investigated by Russian scientists under contracts with SNL and LLNL include accident consequence studies, the safety of anion exchange processes, underground isolation of nuclear materials, and the development of materials for the immobilization of excess weapons Pu.
This report provides an introduction to the various probabilistic methods developed roughly between 1956 and 1985 for performing reliability or probabilistic uncertainty analysis on complex systems. This exposition does not cover the traditional reliability methods (e.g., parallel-series systems) that can be found in the many reliability texts and reference materials. Rather, the report centers on the relatively new, and certainly less well known across the engineering community, analytical techniques. Discussion of the analytical methods has been broken into two reports. This particular report is limited to those methods developed between 1956 and 1985. While a bit dated, the methods described in the later portions of this report still dominate the literature and provide a necessary technical foundation for more current research. A second report (Analytical Techniques 2) addresses methods developed since 1985. The flow of this report roughly follows the historical development of the various methods, so each new technique builds on the discussion of the strengths and weaknesses of previous techniques. To facilitate understanding of the various methods discussed, a simple two-dimensional problem is used throughout the report. The problem is used for discussion purposes only; conclusions regarding the applicability and efficiency of particular methods are based on secondary analyses and a number of years of experience by the author. This document should be considered a living document, in the sense that as new methods or variations of existing methods are developed, the document and references will be updated to reflect the current state of the literature as much as possible. For those scientists and engineers already familiar with these methods, the discussion will at times seem rather obvious. However, the goal of this effort is to provide a common basis for future discussions and, as such, it will hopefully be useful to those more intimate with probabilistic analysis and design techniques. There are clearly alternative methods of dealing with uncertainty (e.g., fuzzy set theory, possibility theory), but this discussion is limited to those methods based on probability theory.
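To give the flavor of such a demonstration problem (this stand-in is ours; the report's actual two-dimensional problem is not reproduced here), a direct Monte Carlo estimate of a failure probability for a simple limit state:

```python
import numpy as np

# Stand-in 2-D reliability problem: failure occurs when g(x1, x2) <= 0,
# with x1, x2 independent standard-normal random variables.
def g(x1, x2):
    return 3.0 - x1 - x2           # hypothetical linear limit-state function

rng = np.random.default_rng(0)
n = 1_000_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
pf = np.mean(g(x1, x2) <= 0.0)     # fraction of samples in the failure region
print(f"estimated P_f = {pf:.4e}") # exact: Phi(-3/sqrt(2)) ~ 1.69e-2
```

Brute-force sampling like this is the benchmark against which the analytical techniques surveyed in the report (which trade generality for far fewer model evaluations) are typically judged.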
When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
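A toy illustration (hypothetical states and events, ours rather than the authors' formalism) of such an explicit state-space model, with events mapping states to states and designated failure states:

```python
# Hypothetical explicit state-space model: events move the system between
# states; a graph search exposes event sequences reaching a failure state.
TRANSITIONS = {
    ("safe",     "arm_command"):     "armed",                 # intended
    ("armed",    "fire_command"):    "operated",              # intended
    ("safe",     "thermal_event"):   "degraded",              # abnormal env.
    ("degraded", "component_short"): "unintended_operation",  # failure
}
FAILURE_STATES = {"unintended_operation"}

def failure_paths(start):
    """Search for event sequences that reach a failure state."""
    found, frontier, seen = [], [(start, [])], {start}
    while frontier:
        state, trace = frontier.pop()
        for (s, event), nxt in TRANSITIONS.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                path = trace + [event]
                if nxt in FAILURE_STATES:
                    found.append(path)
                else:
                    frontier.append((nxt, path))
    return found

print(failure_paths("safe"))   # [['thermal_event', 'component_short']]
```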
This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, and so on. The analysis system requires as input a database of common attacks broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and the attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example, the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs, or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
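As a concrete sketch of that final step (hypothetical graph and costs; not the paper's tool), Dijkstra's algorithm over an attack graph whose arc weights represent attacker effort:

```python
import heapq

# Hypothetical attack graph: nodes are attack stages (machine class plus
# privilege level); arc weights are attacker effort costs. The cheapest
# path is the most attractive attack.
GRAPH = {
    "outside":         [("user@webserver", 4.0), ("user@mailhost", 6.0)],
    "user@webserver":  [("root@webserver", 3.0)],
    "user@mailhost":   [("root@fileserver", 2.0)],
    "root@webserver":  [("root@fileserver", 1.0)],
    "root@fileserver": [],
}

def cheapest_attack(source, target):
    """Dijkstra's shortest path: minimum total attacker effort."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, cost in GRAPH.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path, n = [], target
    while n in prev:
        path.append(n)
        n = prev[n]
    return [source] + path[::-1], dist.get(target)

# Rerunning after a proposed configuration change (deleting or repricing
# arcs) shows whether the change actually raises the attacker's cost.
print(cheapest_attack("outside", "root@fileserver"))
```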
Sandia National Laboratories performed vibration and shock testing on a Savannah River Hydride Transport Vessel (HTV) which is used for bulk shipments of tritium. This testing is required to qualify the HTV for transport in the H1616 shipping container. The main requirement for shipment in the H1616 is that the contents (in this case the HTV) have a tritium leak rate of less than 1×10⁻⁷ cc/sec after being subjected to shock and vibration normally incident to transport. Helium leak tests performed before and after the vibration and shock testing showed that the HTV remained leaktight under the specified conditions. This report documents the tests performed and the test results.
Experiments were performed at SATURN, a high current z-pinch, to explore the feasibility of creating a hohlraum by imploding a tungsten wire array onto a low-density foam. Emission measurements in the 200--280 eV energy band were consistent with a 110--135 eV Planckian before the target shock heated, or stagnated, on-axis. Peak pinch radiation temperatures of nominally 160 eV were obtained. Measured early time x-ray emission histories and temperature estimates agree well with modeled performance in the 200--280 eV band using a 2D radiation magneto-hydrodynamics code. However, significant differences are observed in comparisons of the x-ray images and 2D simulations.
Random variations, whether they occur in the input signal or the system parameters, are phenomena that occur in nearly all engineering systems of interest. As a result, nondeterministic modeling techniques must somehow account for these variations to ensure validity of the solution. As might be expected, this is a difficult proposition and the focus of many current research efforts. Controlling seismically excited structures is one pertinent application of nondeterministic analysis and is the subject of the work presented herein. This overview paper is organized into two sections. First, techniques to assess system reliability, in a context familiar to civil engineers, are discussed. Second, and as a consequence of the first, active control methods that ensure good performance in this random environment are presented. It is the hope of the authors that these discussions will ignite further interest in the area of reliability assessment and design of controlled civil engineering structures.
Interpretation of compression stress-relaxation (CSR) experiments for elastomers in air is complicated by (1) the presence of both physical and chemical relaxation and (2) anomalous diffusion-limited oxidation (DLO) effects. For a butyl material, the authors first use shear relaxation data to show that physical relaxation effects are negligible during typical high-temperature CSR experiments. They then show that experiments on standard CSR samples (~15 mm diameter when compressed) lead to complex non-Arrhenius behavior. By combining reaction kinetics based on the historic basic autoxidation scheme with a diffusion equation appropriate to disk-shaped samples, they derive a theoretical DLO model appropriate to CSR experiments. Using oxygen consumption and permeation rate measurements, the theory shows that important DLO effects are responsible for the observed non-Arrhenius behavior. To minimize DLO effects, they introduce a new CSR methodology based on the use of numerous small disk samples strained in parallel. Results from these parallel, minidisk experiments lead to Arrhenius behavior with an activation energy consistent with values commonly observed for elastomers, allowing more confident extrapolated predictions. In addition, excellent correlation is noted between the CSR force decay and the oxygen consumption rate, consistent with the expectation that oxidative scission processes dominate the CSR results.
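For orientation, a schematic of the reaction-diffusion balance such a DLO model couples (a generic form of ours; the authors' disk-geometry model and kinetics are developed in the paper):

```latex
% Illustrative steady-state DLO balance through the sample thickness x:
% oxygen diffusing in at the surfaces is consumed by oxidation kinetics.
D \,\frac{\partial^{2} C}{\partial x^{2}} = R(C),
\qquad
R(C) = \frac{k_{1} C}{1 + k_{2} C},
```

where C is the dissolved-oxygen concentration, D the diffusivity inferred from permeation measurements, and R(C) the hyperbolic consumption rate that follows from basic autoxidation scheme kinetics. When R outpaces D over the sample thickness, the interior is starved of oxygen, which is the anomaly the small parallel minidisks are designed to avoid.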
The Optical Assembly (OA) for the Multispectral Thermal Imager (MTI) program has been fabricated, assembled, and successfully performance tested. It represents a major milestone toward completion of this earth-observing E-O imaging sensor, which is to be operated in low earth orbit. Along with its wide field of view (WFOV), 1.82° along-track and 1.38° cross-track, and comprehensive on-board calibration system, the pushbroom imaging sensor employs a single mechanically cooled focal plane with 15 spectral bands covering a wavelength range from 0.45 to 10.7 µm. The OA has an off-axis three-mirror anastigmatic (TMA) telescope with a 36-cm unobscured clear aperture. The two key performance criteria -- 80% enpixeled energy in the visible, and radiometric stability of 1% (1σ) in the visible/near-infrared (VNIR) and short wavelength infrared (SWIR), 1.45% (1σ) in the medium wavelength infrared (MWIR), and 0.53% (1σ) in the long wavelength infrared (LWIR) -- as well as its low weight (less than 49 kg) and volume constraint (89 cm x 44 cm x 127 cm), drive the overall design configuration of the OA and its fabrication requirements.
With the increased use of public key cryptography, faster modular multiplication has become an important cryptographic issue. Almost all public key cryptography, including most elliptic curve systems, uses modular multiplication. Modular multiplication, particularly for the large public key moduli, is very slow. Increasing the speed of modular multiplication is almost synonymous with increasing the speed of public key cryptography. There are two parts to modular multiplication: multiplication and modular reduction. Though there are fast methods for multiplying and fast methods for doing modular reduction, they do not mix well. Most fast techniques require integers to be in a special form. These special forms are not related, and converting from one form to another is more costly than using the standard techniques. To date it has been better to use the fast modular reduction technique coupled with standard multiplication. Standard modular reduction is much more costly than standard multiplication. Fast modular reduction (Montgomery's method) reduces the reduction cost to approximately that of a standard multiply. Of the fast multiplication techniques, the residue number system (RNS) technique is one of the most popular. It is simple, converting a large convolution (multiply) into many smaller independent ones. Not only do residue number systems increase speed, but the independent parts allow for parallelization. RNS form implies working modulo another constant. Depending on the relationship between these two constants, reduction or division may be possible, but not both. This paper describes a new technique using ideas from both Montgomery's method and RNS. It avoids the special-form conversion problem and allows fast reduction and multiplication. Since RNS form is used throughout, it also allows the entire process to be parallelized.
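For context, a minimal sketch of the classical Montgomery reduction referred to above (textbook form; the paper's new RNS-based variant is not reproduced here):

```python
# Textbook Montgomery reduction: computes t * R^-1 mod n without dividing
# by n, for R = 2^k > n and n odd. The division by R is a bit shift.
def montgomery_setup(n, k):
    R = 1 << k
    return -pow(n, -1, R) % R              # n_prime: n * n_prime = -1 (mod R)

def montgomery_reduce(t, n, k, n_prime):
    mask = (1 << k) - 1
    m = ((t & mask) * n_prime) & mask      # m = t * n_prime mod R
    u = (t + m * n) >> k                   # exact: t + m*n = 0 (mod R)
    return u - n if u >= n else u

# Usage: a*b mod n via Montgomery form (demo-sized numbers).
n, k = 101, 8                              # modulus and R = 2^8 = 256
n_prime = montgomery_setup(n, k)
a, b = 42, 77
aR, bR = (a << k) % n, (b << k) % n        # convert into Montgomery form
abR = montgomery_reduce(aR * bR, n, k, n_prime)
ab = montgomery_reduce(abR, n, k, n_prime) # convert back out
print(ab == (a * b) % n)                   # True
```

The catch the abstract alludes to: this R is a power of two, while RNS arithmetic works modulo a product of small coprime moduli, and the two special forms do not normally compose.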
The authors conducted perforation experiments with 4340 Rc 38 and T-250 maraging steel long-rod projectiles and HY-100 steel target plates at striking velocities between 80 and 370 m/s. Flat-ended rod projectiles with lengths of 89 and 282 mm were machined to a nominal 30-mm diameter so they could be launched from a 30-mm powder gun without sabots. The target plates were rigidly clamped on a 305-mm diameter and had nominal thicknesses of 5.3 and 10.5 mm. Four sets of experiments were conducted to show the effects of rod length and plate thickness on the measured ballistic limit and residual velocities. In addition to measuring striking and residual projectile velocities, the authors obtained framing camera data on the back surfaces of several plates that clearly showed the plate deformation and plug ejection process. They also present a beam model that qualitatively exhibits the experimentally observed mechanisms.
This paper introduces a new configuration of parallel manipulator called the Rotopod, which is constructed entirely from revolute joints. The Rotopod consists of two platforms connected by six legs and exhibits six Cartesian degrees of freedom. The Rotopod is first compared with other all-revolute-joint parallel manipulators to show its similarities and differences. The inverse kinematics for the mechanism are then developed and used to analyze its accessible workspace. Finally, optimization is performed to determine the Rotopod design configurations that maximize the accessible workspace subject to desirable functional constraints.
This paper investigates a new aspect of fine motion planning for the micro domain. As parts approach 1--10 µm or less in outside dimensions, interactive forces such as van der Waals and electrostatic forces become major factors that greatly change assembly sequence and path plans. It has been experimentally shown that assembly plans in the micro domain are not reversible: the motions required to pick up a part are not the reverse of the motions required to release it. This paper develops the mathematics required to determine the goal regions for pick-up, holding, and release of a micro-sphere being handled by a rectangular tool.
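For a sense of the scales involved (our illustration; the paper's goal-region mathematics is not reproduced here), the standard non-retarded sphere-plane van der Waals attraction is

```latex
% Standard Hamaker sphere-plane result (illustrative), for a sphere of
% radius R at separation d from a flat tool surface:
F_{\mathrm{vdW}} = \frac{A\,R}{6\,d^{2}},
```

with Hamaker constant A on the order of 10⁻¹⁹ J. For a 5 µm radius sphere at d = 1 nm this gives roughly 10⁻⁷ N, several orders of magnitude above the sphere's weight (~10⁻¹¹ N), which is why releasing a micro-part is not simply the reverse of picking it up.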
Sandia National Laboratories has developed a distributed, high-fidelity simulation system for training and planning small-team operations. The system provides an immersive environment populated by virtual objects and humans capable of displaying complex behaviors. The work has focused on developing the behaviors required to carry out complex tasks and decision making under stress. Central to this work are techniques for creating behaviors for virtual humans and for dynamically assigning behaviors to computer-generated forces (CGF), allowing scenarios without fixed outcomes. Two prototype systems have been developed that illustrate these capabilities: MediSim, a trainer for battlefield medics, and VRaptor, a system for planning, rehearsing, and training assault operations.
This paper presents an analysis of the thermal effects on radioactive material (RAM) transportation packages of a fire in an adjacent compartment. An assumption for this analysis is that the adjacent-hold fire is some sort of engine room fire. Computational fluid dynamics (CFD) analysis tools were used to perform the analysis in order to include convective heat transfer effects. The analysis results were compared to experimental data gathered in a series of tests on the US Coast Guard ship Mayo Lykes, located at Mobile, Alabama.
The Agile Manufacturing Prototyping System (AMPS) is being integrated at Sandia National Laboratories. AMPS consists of state-of-the-industry flexible manufacturing hardware and software enhanced with Sandia advancements in sensor- and model-based control; automated programming, assembly, and task planning; flexible fixturing; and automated reconfiguration technology. AMPS is focused on the agile production of complex electromechanical parts. It currently includes 7 robots (4 Adept One, 2 Adept 505, 1 Staubli RX90), conveyance equipment, and a collection of process equipment to form a flexible production line capable of assembling a wide range of electromechanical products. This system became operational in September 1995. Additional smart manufacturing processes will be integrated in the future. An automated spray cleaning workcell capable of handling alcohol and similar solvents was added in 1996, as well as parts cleaning and encapsulation equipment, automated deburring, and automated vision inspection stations. Plans for 1997 and the out years include adding manufacturing processes for the rapid prototyping of electronic components, such as soldering, paste dispensing, and pick-and-place hardware.
The direct connection of information, captured in forms such as CAD databases, to the factory floor is enabling a revolution in manufacturing. Rapid response to very dynamic market conditions is becoming the norm rather than the exception. In order to provide economical rapid fabrication of small numbers of variable products, one must design with manufacturing constraints in mind. In addition, flexible manufacturing systems must be programmed automatically to reduce the time for product change over in the factory and eliminate human errors. Sensor based machine control is needed to adapt idealized, model based machine programs to uncontrolled variables such as the condition of raw materials and fabrication tolerances.
The Internet and the applications it supports are revolutionizing the way people work together. This paper presents four case studies in engineering collaboration that new Internet technologies have made possible. These cases include assembly design and analysis, simulation, intelligent machine system control, and systems integration. From these cases, general themes emerge that can guide the way people will work together in the coming decade.
Data authentication as provided by digital signatures is a well known technique for verifying data sent via untrusted network links. Recent work has extended digital signatures to allow jointly generated signatures using threshold techniques. In addition, new proactive mechanisms have been developed to protect the joint private key over long periods of time and to allow each of the parties involved to verify the actions of the other parties. In this paper, the authors describe an application in which proactive digital signature techniques are a particularly valuable tool. They describe the proactive DSA protocol and discuss the underlying software tools that they found valuable in developing an implementation. Finally, the authors briefly describe the protocol and note difficulties they experienced and continue to experience in implementing this complex cryptographic protocol.
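As background on the threshold techniques mentioned (our illustration; the proactive DSA protocol itself is far more involved), the Shamir secret-sharing primitive on which threshold and proactive signature schemes are built:

```python
import random

# Shamir (t, n) secret sharing over a prime field: any t shares recover
# the secret; fewer reveal nothing. Threshold signature schemes build on
# this so that t parties can jointly sign without ever reconstructing the
# private key in one place.
P = 2**127 - 1                            # a convenient Mersenne prime field

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123456789, t=3, n=5)
print(recover(shares[:3]) == 123456789)   # any 3 of 5 shares suffice: True
```

Proactive schemes periodically refresh the shares (without changing the underlying secret) so that an adversary must compromise a threshold of parties within a single refresh period.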
This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of platforms, such as IBM® workstations, Hewlett-Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers, and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. The author has verified operation, albeit with some minor IDL bugs, on personal computers using Windows 95 and Windows NT; IBM Unix platforms; DEC Alpha and VMS systems; HP 9000/700 series workstations; and Macintosh computers, both regular and PowerPC™ versions. Version 3 adds the capability to manipulate images to the original xdamp capabilities.
In this work the authors report results for narrowband amplifiers designed for milliwatt and submilliwatt power consumption using JFET and pseudomorphic high-electron-mobility transistor (PHEMT) GaAs-based technologies. Enhancement-mode JFETs were used to design both a hybrid amplifier with off-chip matching and a monolithic microwave integrated circuit (MMIC) with on-chip matching. The hybrid amplifier achieved 8--10 dB of gain at 2.4 GHz and 1 mW. The MMIC achieved 10 dB of gain at 2.4 GHz and 2 mW. Submilliwatt circuits were also explored using 0.25-µm PHEMTs: power levels of 25 µW were achieved with 5 dB of gain for a 215-MHz hybrid amplifier. These results significantly reduce the power consumption levels achievable relative to prior MESFET, heterostructure field-effect transistor (HFET), or Si bipolar results from other laboratories.
The performance of vertical cavity surface emitting lasers (VCSELs) has improved greatly in recent years. Much of this improvement can be attributed to the use of native oxide layers within the laser structure, providing both electrical and optical transverse confinement. Understanding this optical confinement will be vital for the future realization of yet smaller lasers with ultralow threshold currents. Here the authors report the spectral and modal properties of small (0.5 µm to 5 µm current aperture) VCSELs and identify Joule heating as a dominant effect in the resonator properties of the smallest lasers.
The authors review the use of in situ normal-incidence reflectance, combined with a virtual interface model, to monitor and control the growth of complex compound semiconductor devices. The technique is being used routinely on both commercial and research metal-organic chemical vapor deposition (MOCVD) reactors and in molecular beam epitaxy (MBE) to measure growth rates and high-temperature optical constants of compound semiconductor alloys. The virtual interface approach allows one to extract the calibration information in an automated way without having to estimate the thickness or optical constants of the alloy, and without having to model underlying thin film layers. The method has been used in a variety of data analysis applications collectively referred to as ADVISOR (Analysis of Deposition using Virtual Interfaces and Spectroscopic Optical Reflectance). This simple, robust monitor, together with the ADVISOR method, provides the equivalent of a real-time reflection high-energy electron diffraction (RHEED) tool for both MBE and MOCVD applications.
The phase-out of ozone-depleting solvents has forced industry to look to solvents such as alcohols, terpenes, and other flammable solvents to perform critical cleaning processes. These solvents are not as efficient as the ozone-depleting solvents in terms of soil loading, cleaning time, and drying when used in standard cleaning processes such as manual sprays or ultrasonic baths. They also require special equipment designs to meet part-cleaning specifications and operator safety requirements. This paper describes a cleaning system that incorporates the automated spraying of flammable solvents to effectively perform precision cleaning processes. Key to the project's success was the development of software that controls the robotic system and automatically generates robotic cleaning paths from three-dimensional CAD models of the items to be cleaned.
Deep high-aspect-ratio Si etching (HARSE) has shown potential application for passive self-alignment of dissimilar materials and devices on Si carriers or waferboards. The Si can be etched to specific depths and lateral dimensions to accurately place or locate discrete components (i.e., lasers, photodetectors, and fiber optics) on a Si carrier. It is critical to develop processes which maintain the dimensions of the mask, yield highly anisotropic profiles for deep features, and maintain the anisotropy at the base of the etched feature. In this paper the authors report process conditions for HARSE which yield etch rates exceeding 3 µm/min and well-controlled, highly anisotropic etch profiles. Examples of potential application to advanced packaging technologies are also shown.
Moving commercial cargo across the US-Mexico border is currently a complex, paper-based, error-prone process that incurs expensive inspections and delays at several ports of entry in the Southwestern US. Improved information handling will dramatically reduce border dwell time, variation in delivery time, and inventories, and will give better control of the shipment process. The Border Trade Facilitation System (BTFS) is an agent-based collaborative work environment that assists geographically distributed commercial and government users with the transshipment of goods across the US-Mexico border. Software agents mediate the creation, validation, and secure sharing of shipment information and regulatory documentation over the Internet, using the World Wide Web to interface with human actors. Agents are organized into agencies, each representing a commercial or government organization. Agents perform four specific functions on behalf of their user organizations: (1) agents with domain knowledge elicit commercial and regulatory information from human specialists through forms presented via web browsers; (2) agents mediate information from forms with diverse ontologies, copying invariant data from one form to another and thereby eliminating the need for duplicate data entry; (3) cohorts of distributed agents coordinate the work flow among the various information providers and monitor overall progress of the documentation and the location of the shipment to ensure that all regulatory requirements are met prior to arrival at the border; (4) agents provide status information to human actors and attempt to influence them when problems are predicted.
On March 20, 1998, Sandia National Laboratories performed a double-blind test of the DKL LifeGuard human presence detector and tracker. The test was designed to allow the device to search for individuals well within the product's published operational parameters. The Test Operator of the DKL LifeGuard was provided by the manufacturer and was a high-ranking member of DKL management. The test was developed and implemented to verify the performance of the device as specified by the manufacturer. The device failed to meet its published specifications, and it performed no better than random chance.
Despite the best preventative measures, ruptured hoses, spills, and leaks occur with the use of all hydraulic equipment. Although these releases do not usually produce a RCRA-regulated waste, they are often a reportable occurrence. Clean-up and the subsequent administrative procedures involve additional costs, labor, and work delays. Concerns over these releases, especially those related to Sandia National Laboratories (SNL) vehicles hauling waste on public roads, prompted Fleet Services (FS) to seek an alternative to standard petroleum-based hydraulic fluid. Since 1996 SNL has participated in a pilot program with the University of Northern Iowa (UNI) and selected vehicle manufacturers, notably John Deere, to field test hydraulic fluid produced from soybean oil in twenty of its vehicles. The vehicles included loaders, graders, sweepers, forklifts, and garbage trucks. Research was conducted for several years at UNI to modify and market soybean oils for industrial uses. Soybean oil ranks first in worldwide production of vegetable oils (29%) and represents a tremendous renewable resource. Initial tests with soybean oil showed excellent lubrication and wear protection properties. Lack of oxidative stability and polymerization of the oil were concerns. These concerns are being addressed through genetic alteration, chemical modification, and the use of various additives, and the improved lubricant is in the field-testing stage.
The aim of this laboratory-directed research and development project was to study amorphous carbon (a-C) thin films for eventual cold-cathode electron emitter applications. The development of robust cold-cathode emitters is likely to have significant implications for modern technology and could launch a new industry: vacuum microelectronics (VME). The potential impact of VME on Sandia's national security missions, such as defense against military threats and economic challenges, is profound. VME enables new microsensors and intrinsically radiation-hard electronics compatible with MOSFET and IMEM technologies. Furthermore, VME is expected to result in a breakthrough technology for the development of high-visibility, low-power flat-panel displays. This work covers four important research areas. First, the authors studied the nature of the C-C bonding structures within these a-C thin films. Second, they determined the changes in the film structures resulting from thermal annealing, to simulate the effects of device processing on a-C properties. Third, they performed detailed electrical transport measurements as a function of annealing temperature to correlate changes in transport properties with structural changes and to propose a model for transport in these a-C materials, with implications for the nature of electron emission. Finally, they used scanning atom probes to determine important aspects of the nature of emission in a-C.
Net erosion rates of carbon target plates have been measured in situ for the DIII-D lower divertor. The principal method of obtaining these data is the DiMES sample probe. Recent experiments have focused on erosion at the outer strike-point (OSP) of two divertor plasma conditions: (1) attached (Te > 40 eV) ELMing plasmas and (2) detached (Te < 2 eV) ELMing plasmas. The erosion rates for the attached cases are > 10 cm/year, even with incident heat flux < 1 MW/m². In this case, measurements and modeling agree for both gross and net carbon erosion, showing that the near-surface transport and redeposition of the carbon are well understood and that effective sputtering yields are > 10%. In ELM-free discharges, this erosion rate can account for the rate of carbon accumulation in the core plasma. Divertor plasma detachment eliminates physical sputtering, while spectroscopically measured chemical erosion yields are also found to be low (Y(C/D⁺) ≤ 2.0 × 10⁻³). This leads to suppression of net erosion at the outer strike-point, which becomes a region of net redeposition (~4 cm/year). The private flux wall is measured to be a region of net redeposition with dense, high-neutral-pressure, attached divertor plasmas. Leading edges intercepting parallel heat flux (~50 MW/m²) have very high net erosion rates (~10 µm/s) at the OSP of an attached plasma. Leading edge erosion, and subsequent carbon redeposition, caused by tile gaps can account for half of the deuterium codeposition in the DIII-D divertor.
This paper describes the design of silicon surface-micromachined devices and the design issues associated with them. Some of the tools described are adaptations of macro analysis tools. Design issues in the microdomain differ greatly from design issues encountered in the macrodomain. Microdomain forces caused by electrostatic attraction, surface tension, van der Waals forces, and others can be more significant than inertia, friction, or gravity. Design and analysis tools developed for macrodomain devices are inadequate in most cases for microdomain devices. Microdomain-specific design and analysis tools are being developed, but are still immature and lack adequate functionality. The fundamental design process for surface-micromachined devices is significantly different from the design process employed in the design of macro-sized devices. In this paper, MEMS design is discussed, as well as the tools used to develop the designs and the issues relating fabrication processes to design. Design and analysis of MEMS devices is directly coupled to the silicon micromachining processes used to fabricate the devices. These processes introduce significant design limitations and must be well understood before designs can be successfully developed. In addition, some silicon micromachining fabrication processes facilitate the integration of silicon micromachines with on-chip microelectronics. For devices requiring on-chip electronics, the fabrication processes introduce additional design constraints that must be taken into account during design and analysis.
Many manufacturing companies today expend more effort on upgrade and disposal projects than on clean-slate design, and this trend is expected to become more prevalent in coming years. However, commercial CAD tools are better suited to initial product design than to the product's full life cycle. Computer-aided analysis, optimization, and visualization of life-cycle assembly processes based on product CAD data can help ensure accuracy and reduce the effort expended in planning these processes for existing products, as well as provide design-for-life-cycle analysis for new designs. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their companies and products as well as to the life cycles of their products. Designing products for easy assembly and disassembly throughout their entire life cycle, for purposes including service, field repair, upgrade, and disposal, is a process that involves many disciplines. In addition, finding the best solution often requires considering the design as a whole, together with its intended life cycle. Different goals and constraints (compared to initial assembly) require revisiting the fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited either to academic studies of issues in assembly planning or to applied studies of life-cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will yield a much greater ability to design for, optimize, and analyze life-cycle assembly processes.
An agile microsystem manufacturing technology has been developed that provides an unprecedented five levels of independent polysilicon surface-micromachine films for the designer. Typical surface-micromachining processes offer a maximum of three levels, making this the most complex surface-micromachining process technology developed to date. Leveraging the extensive infrastructure of the microelectronics industry, polysilicon surface-micromachining brings to microelectromechanical systems (MEMS) the same advantages of high volume, high reliability, and batch fabrication that have been achieved with integrated circuits (ICs). These systems, comprised of microscopic mechanical elements, are laying the foundation for a rapidly expanding, multi-billion-dollar industry that impacts the automotive, consumer product, and medical industries, to name only a few.
Composite doublers, or repair patches, provide an innovative repair technique that can enhance the way aircraft are maintained. Instead of riveting multiple steel or aluminum plates to facilitate an aircraft repair, it is possible to bond a single boron-epoxy composite doubler to the damaged structure. For composite doublers to achieve widespread use in the civil aviation industry, it is imperative that methods be developed to quickly and reliably assess the integrity of the doubler. In this study, a specific composite application was chosen on an L-1011 aircraft in order to focus the tasks on application and operation issues. Primary among the inspection requirements for these doublers is the identification of disbonds between the composite laminate and the aluminum parent material, and of delaminations within the composite laminate. Surveillance of cracks or corrosion in the parent aluminum material beneath the doubler is also a concern. No single nondestructive inspection (NDI) method can inspect for every flaw type; therefore, it is important to be aware of available NDI techniques and to properly address their capabilities and limitations. A series of NDI tests was conducted on laboratory test structures and on full-scale aircraft fuselage sections. Specific challenges, unique to bonded composite doubler applications, were highlighted. An array of conventional and advanced NDI techniques was evaluated. Flaw detection sensitivity studies were conducted on applicable eddy current, ultrasonic, X-ray, and thermography-based devices. The application of these NDI techniques to composite doublers, and the results from test specimens that were loaded to provide a changing flaw profile, are presented in this report. It was found that a team of these techniques can identify flaws in composite doubler installations well before they reach critical size.
This report describes ship accident event trees, ship collision and ship fire frequencies, representative ships and shipping practices, a model of ship penetration depths during ship collisions, a ship fire spread model, cask to environment release fractions during ship collisions and fires, and illustrative consequence calculations. This report contains the following appendices: Appendix 1 -- Representative Ships and Shipping Practices; Appendix 2 -- Input Data for Minorsky Calculations; Appendix 3 -- Port Ship Speed Distribution; and Appendix 4 -- Cask-to-Environment Release Fractions.
The engineering of advanced semiconductor heterostructure materials and devices requires a detailed understanding of, and control over, the structure and properties of semiconductor materials and devices at the atomic to nanometer scale. Cross-sectional scanning tunneling microscopy has emerged as a unique and powerful method for characterizing structural morphology and electronic properties in semiconductor epitaxial layers and device structures at these length scales. The basic experimental techniques of cross-sectional scanning tunneling microscopy are described, and some representative applications to semiconductor heterostructure characterization, drawn from recent investigations in the authors' laboratory, are discussed. Specifically, recent studies of InP/InAsP and InAsP/InAsSb heterostructures are described in which nanoscale compositional clustering has been observed and analyzed.
On July 1--2, 1997, Sandia National Laboratories hosted the External Committee to Evaluate Sandia's Risk Expertise. Under the auspices of SIISRS (Sandia's International Institute for Systematic Risk Studies), Sandia assembled a blue-ribbon panel of experts in the field of risk management to assess its risk programs labs-wide. Panelists were chosen not only for their own expertise, but also for their ability to add balance to the panel as a whole. Presentations were made to the committee on the risk activities at Sandia. In addition, a tour of Sandia's research and development programs in support of the US Nuclear Regulatory Commission was arranged. The panel attended a poster session featuring eight presentations and demonstrations of selected projects. Overviews and viewgraphs from the presentations are included in Volume 1 of this report. The presentations relate to weapons, nuclear power plants, transportation systems, architectural surety, environmental programs, and information systems.
Recent K-shell scaling experiments on the 20 MA Z accelerator at Sandia National Laboratories have shown that large diameter (40 and 55 mm) arrays can be imploded with 80 to 210 wires of titanium or stainless steel. These implosions have produced up to 150 kJ of > 4.5 keV x-rays and 65 kJ of > 6.0 keV x-rays in 7 to 18 ns FWHM pulses. This is a major advance in plasma radiation source (PRS) capability since there is presently limited test capability above 3 keV. In fact, Z produces more > 4.5 keV x-rays than previous aboveground simulators produced at 1.5 keV. Z also produces some 200 kJ of x-rays between 1 and 3 keV in a continuous spectrum for these loads. The measured spectra and yields are consistent with 1-dimensional MHD calculations performed by NRL. Thermoelastic calorimeters, PVDF gauges, and optical impulse gauges have been successfully fielded with these sources.
For the past two years, Sandia National Laboratories has been investigating the use of rigid polyurethane foam (RPF) for military applications, particularly mine protection. Results of explosive experiments and mine/foam interaction experiments are presented. The RPF has proved effective in absorbing direct shock from explosives; quantitative data are presented. As reported elsewhere, it has also proved effective in reducing the signature of vehicles passing over anti-tank (AT) mines, preventing the mines from firing. This paper presents the results of experiments conducted to understand the interaction of RPF with anti-craft (AC) mines during foam formation in shallow water in a scaled surf environment.
We describe the use of AlAsSb/AlGaAsSb lattice-matched to InP for distributed Bragg reflectors. These structures are integral to several surface-normal devices, in particular vertical-cavity surface-emitting lasers. The high refractive-index ratio of these materials allows formation of a highly reflective mirror with relatively few mirror pairs. As a result, we have been able to show, for the first time, 77 K CW operation of an optically pumped, monolithic, all-epitaxial vertical-cavity laser emitting at 1.56 µm.
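The advantage of a high index ratio can be made concrete with one standard quarter-wave-stack reflectance formula. The sketch below is illustrative only; the index values are assumed round numbers, not measured AlAsSb/AlGaAsSb data from this work.

```python
# Peak reflectance of a quarter-wave distributed Bragg reflector (DBR),
# in one standard textbook form. n0 = incident medium, ns = substrate,
# nL/nH = low/high index layers, N = number of mirror pairs. All index
# values below are illustrative assumptions.

def dbr_reflectance(n0: float, ns: float, nL: float, nH: float, N: int) -> float:
    q = (ns / n0) * (nL / nH) ** (2 * N)
    return ((1.0 - q) / (1.0 + q)) ** 2

for N in (5, 10, 15, 20):
    # A large nH/nL ratio drives q toward zero quickly, so relatively
    # few pairs are needed to approach unity reflectance.
    print(N, round(dbr_reflectance(1.0, 3.17, 3.1, 3.8, N), 4))
```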
Sandia National Laboratories performs systems analysis of high risk, high consequence systems. In particular, Sandia is responsible for the engineering of nuclear weapons, exclusive of the explosive physics package. In meeting this responsibility, Sandia has developed fundamental approaches to safety and a process for evaluating safety based on modeling and simulation. These approaches provide confidence in the safety of our nuclear weapons. Similar concepts may be applied to improve the safety of other high consequence systems.
Further improvements to the Waveform Correlation Event Detection System (WCEDS) developed by Sandia National Laboratories have made it possible to test the system on the accepted Comprehensive Test Ban Treaty (CTBT) seismic monitoring network. For our test interval we selected a 24-hour period from December 1996, and chose to use the Reviewed Event Bulletin (REB) produced by the Prototype International Data Center (PIDC) as ground truth for evaluating the results. The network is heterogeneous, consisting of array and three-component sites, and as a result requires more flexible waveform processing algorithms than were available in the first version of the system. For simplicity and superior performance, we opted to use the spatial coherency algorithm of Wagner and Owens (1996) for both types of sites. Preliminary tests indicated that the existing version of WCEDS, which ignored directional information, could not achieve satisfactory detection or location performance for many of the smaller events in the REB, particularly those in the South Pacific, where the network coverage is unusually sparse. To achieve an acceptable level of performance, we made modifications to include directional consistency checks for the correlations, making the regions of high correlation much less ambiguous. These checks require the production of continuous azimuth and slowness streams for each station, which is accomplished by means of FK processing for the arrays and power polarization processing for the three-component sites. In addition, we added the capability to use multiple frequency-banded data streams for each site to increase sensitivity to phases whose frequency content changes as a function of distance.
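The directional consistency check can be pictured with a small sketch: a station's contribution to a grid point's stacked correlation is kept only when its measured back-azimuth agrees with the azimuth predicted for that grid point. All names and the 20-degree tolerance below are illustrative assumptions, not the WCEDS implementation.

```python
# Sketch of a directional consistency check of the kind described
# above. Each station supplies a detection (coherency) stream and a
# measured azimuth stream (from FK or polarization processing).
import numpy as np

def angular_diff(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Smallest absolute difference between two azimuths, in degrees."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def masked_stack(coherency: np.ndarray,      # (n_sta, n_lag) streams
                 measured_az: np.ndarray,    # (n_sta, n_lag) measured azimuths
                 predicted_az: np.ndarray,   # (n_sta,) grid point -> station
                 tol_deg: float = 20.0) -> np.ndarray:
    """Stack per-station streams, zeroing lags that fail the check."""
    ok = angular_diff(measured_az, predicted_az[:, None]) <= tol_deg
    return (coherency * ok).sum(axis=0)

# Toy usage: 3 stations, 100 lags of synthetic data.
rng = np.random.default_rng(0)
coh = rng.random((3, 100))
az = rng.uniform(0.0, 360.0, (3, 100))
print(masked_stack(coh, az, np.array([45.0, 180.0, 300.0])).shape)  # (100,)
```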
The on-site inspection provisions in many current and proposed arms control agreements require extensive preparation and training on the part of both the Inspection Teams (inspectors) and the Inspected Parties (hosts). Traditional training techniques include lectures, table-top inspections, and practice inspections. The Augmented Computer Exercise for Inspection Training (ACE-IT), an interactive computer training tool, increases the utility of table-top inspections. ACE-IT is used to train both inspectors and hosts to conduct a hypothetical challenge inspection under the Chemical Weapons Convention (CWC). The training covers the entire sequence of events in the challenge inspection regime, from initial notification of an inspection through post-inspection activities. The primary emphasis of the training tool is on conducting the inspection itself and, in particular, on implementing the concept of managed access. (Managed access is a technique used to assure the inspectors that the facility is in compliance with the CWC while protecting sensitive information unrelated to the CWC.) Information for all of the activities is located in the electronic "Exercise Manual." In addition, interactive menus are used to negotiate access to each room and to alternative information during the simulated inspection. ACE-IT also demonstrates how various inspection provisions affect compliance determination and the protection of sensitive information.
In this user's guide, details for running BREAKUP are discussed. BREAKUP allows the widely used overset grid method to be run in a parallel computing environment to achieve faster run times for computational field simulations over complex geometries. The overset grid method permits complex geometries to be divided into separate components, each of which is then gridded independently. The grids are computationally rejoined in a solver via interpolation coefficients used for grid-to-grid communication of boundary data. Overset grids have been in widespread use for many years on serial computers, and several well-known Navier-Stokes flow solvers have been extensively developed and validated to support their use. One drawback of serial overset grid methods has been the extensive compute time required to update flow solutions one grid at a time. Parallelizing the overset grid method overcomes this limitation by updating each grid or subgrid simultaneously. BREAKUP prepares overset grids for parallel processing by subdividing each overset grid into statically load-balanced subgrids. Two-dimensional examples with sample solutions, as well as three-dimensional examples, are presented.
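The flavor of static load balancing can be conveyed with a short sketch that recursively splits a structured grid along its longest axis until the requested number of subgrids is reached. This illustrates the balancing idea only; BREAKUP's actual decomposition must also construct the interpolation stencils for grid-to-grid communication.

```python
# Minimal sketch of static load balancing for a structured overset
# grid: recursive bisection along the longest axis, keeping cell
# counts nearly equal. Not BREAKUP's actual algorithm.

def split(shape, nparts):
    """Return a list of subgrid shapes whose cell counts are balanced."""
    if nparts == 1:
        return [tuple(shape)]
    axis = max(range(len(shape)), key=lambda i: shape[i])  # longest axis
    left_parts = nparts // 2
    cut = shape[axis] * left_parts // nparts               # proportional cut
    a, b = list(shape), list(shape)
    a[axis], b[axis] = cut, shape[axis] - cut
    return split(a, left_parts) + split(b, nparts - left_parts)

subgrids = split((161, 81, 41), 8)
cells = [s[0] * s[1] * s[2] for s in subgrids]
print(subgrids)
print("imbalance:", max(cells) / min(cells))  # near 1.0 is well balanced
```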
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
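The style of bookkeeping such a code automates can be suggested with a single-volume sketch: airborne activity depleted by decay plus engineered removal, with dose accumulated from the resulting concentration. Every coefficient below is an illustrative assumption, not RADTRAD data or an NRC source term.

```python
# Hedged single-compartment sketch: first-order depletion of airborne
# activity (decay + sprays) and inhalation dose accumulation.
import math

lam_decay = math.log(2) / (8.02 * 24 * 3600)  # I-131 half-life, 1/s
lam_spray = 5.0 / 3600.0                      # assumed spray removal, 1/s
A = 1.0e12                                    # initial airborne activity, Bq
volume = 5.0e4                                # free volume, m^3 (assumed)
breathing = 3.5e-4                            # breathing rate, m^3/s
dcf = 1.0e-8                                  # Sv/Bq inhaled (illustrative)

dose, dt = 0.0, 1.0
for _ in range(2 * 3600):                     # two-hour exposure window
    A *= math.exp(-(lam_decay + lam_spray) * dt)   # first-order depletion
    dose += (A / volume) * breathing * dcf * dt    # inhalation pathway only
print(f"committed dose ~ {dose:.3e} Sv")
```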
One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal: to converge on the location of the source phenomenon, confirm its intensity, and remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous, decentralized coordination strategy. This paper presents the alpha-beta coordination strategy, a family of collective search algorithms based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor-based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are also motivated to improve their status but are conservative, tending to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots expend energy exploring new territory and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
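A minimal sketch of the role partition may help fix ideas: each robot compares its sensor-derived status to the team's current best, takes the exploratory alpha role when below par, and holds as a beta otherwise. The threshold rule and random-walk exploration below are illustrative assumptions, not the published algorithm in full.

```python
# Toy alpha/beta role assignment driven by a sensor-based status
# measure; the 0.9 threshold and random-walk exploration are assumed.
import random

class Robot:
    def __init__(self, x, y):
        self.x, self.y, self.status = x, y, 0.0

    def sense(self, field):
        self.status = field(self.x, self.y)   # e.g. signal concentration

    def step(self, best):
        if self.status < 0.9 * best:          # alpha: risk-taking explorer
            self.x += random.uniform(-1, 1)
            self.y += random.uniform(-1, 1)
        # beta: conservative, stays put near confirmed high status

field = lambda x, y: 100.0 - ((x - 5) ** 2 + (y - 5) ** 2)  # source at (5, 5)
team = [Robot(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(8)]
for _ in range(200):
    for r in team:
        r.sense(field)
    best = max(r.status for r in team)
    for r in team:
        r.step(best)
print([(round(r.x, 1), round(r.y, 1)) for r in team])
```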
A summary of recent advances in cryogenic-aerosol-based wafer-processing technology for semiconductor wafer cleaning is presented. An argon/nitrogen cryogenic-aerosol-based tool has been developed and optimized for removal of particulate contaminants. The development of the tool involved a combination of theoretical (modeling) and experimental efforts aimed at understanding the mechanisms of aerosol formation and the relation between aerosol characteristics and particle-removal ability. It is observed that the highest cleaning efficiencies are achieved, in general, when the cryogenic aerosol is generated by the explosive atomization of an initially liquid jet of the cryogenic mixture.
Proton implantation into the buried oxide of Si/SiO₂/Si structures does not introduce mobile protons. The cross section for capture of radiation-induced electrons by mobile protons is two orders of magnitude smaller than that for electron capture by trapped holes. The data provide new insights into the atomic mechanisms governing the generation and radiation tolerance of mobile protons in SiO₂, which can lead to improved techniques for producing and hardening radiation-tolerant memory devices.
A sensor system simulation has been developed that aids in evaluating how a proposed fast-framing staring sensor will perform in its operational environment. Beginning with a high-resolution input image, a sequence of frames at the target sensor resolution is produced using the assumed platform motion and the contributions of various noise sources as input data. The resulting frame sequence can then be used to help define system requirements, to aid algorithm development, and to predict system performance. In order to assess the performance of a sensor system, the radiance measured by the system is modeled using a variety of scenarios. For performance prediction, the modeling effort is directed toward providing the ability to determine the minimum noise-equivalent target (NET) intensities for each band of the sensor system. The NET is calculated at the entrance pupil of the instrument in such a way that the results can be applied to a variety of point-source targets and collection conditions. The intent is to facilitate further study within the user community as new mission areas and/or targets of interest develop that are not addressed explicitly during sensor conceptual design.
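The NET calculation reduces to simple point-source radiometry, sketched below with assumed round numbers; the NEI, range, and transmission are placeholders, not parameters of the simulated sensor.

```python
# Illustrative noise-equivalent-target (NET) estimate: the point-source
# radiant intensity whose in-band irradiance at the entrance pupil
# equals the sensor noise floor. All values are assumptions.
nei = 1.0e-14        # noise-equivalent irradiance at the pupil, W/cm^2
slant_range_km = 500.0
r_cm = slant_range_km * 1.0e5
transmission = 0.7   # assumed path transmission

# Point-source irradiance: E = tau * I / R^2, so the minimum
# detectable intensity is I_NET = NEI * R^2 / tau  [W/sr].
i_net = nei * r_cm ** 2 / transmission
print(f"NET intensity ~ {i_net:.3e} W/sr")
```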
The 91.84Sn-3.33Ag-4.83Bi and 96.5Sn-3.5Ag Pb-free solders were evaluated for surface-mount circuit board interconnects; the 63Sn-37Pb solder provided the baseline data. All three solders exhibited suitable manufacturability per defect analyses of circuit board test vehicles. Thermal cycling had no significant effect on the 91.84Sn-3.33Ag-4.83Bi solder joints, while some degradation in the form of grain boundary sliding was observed in 96.5Sn-3.5Ag and 63Sn-37Pb solder joints. The solder joint microstructures showed a slight degree of degradation under thermal shock exposure for all of the solders tested. Trends in the solder joint shear strengths could be traced to the presence of Pd in the solder, the source of which was the Pd/Ni finish on the circuit board conductor features. The higher intrinsic strengths of the Pb-free solders encouraged the failure path to be located near the solder/substrate interface, where Pd combined with Sn to form brittle PdSn₄ particles, resulting in reduced shear strengths.
The telecommunications sector plays a pivotal role in the system of increasingly connected and interdependent networks that make up national infrastructure. An assessment of the probable structure and function of the bit-moving industry in the twenty-first century must include issues associated with the surety of telecommunications. The term surety, as used here, means confidence in the acceptable behavior of a system in both intended and unintended circumstances. This paper outlines various engineering approaches to surety in systems generally, and in the telecommunications infrastructure specifically. It uses the experience and expectations of the US telecommunications system as an example of the global challenges. The paper examines the principal factors underlying the change to more distributed systems in this sector, assesses surety issues associated with these changes, and suggests several possible strategies for mitigation. It also studies the ramifications of what could happen if this sector became a target for those seeking to compromise a nation's security and economic well-being. Experts in this area generally agree that the US telecommunications sector will eventually respond in a way that meets market demands for surety. Questions remain open, however, about confidence in the telecommunications sector and the nation's infrastructure during unintended circumstances, such as those posed by information warfare or by cascading software failures. Resolution of these questions is complicated by the lack of clear accountability of the private and public sectors for the surety of telecommunications.
This research focuses on measuring, with nanometer resolution, the nanomechanical properties associated with the interphase region of a polymer-matrix fiber composite in chemically characterized model composites. The Interfacial Force Microscope (IFM) is employed to measure the mechanical properties of the interphase region of epoxy/glass fiber composites. The chemistry of the interphase is altered by adsorbing onto the fiber surface a coupling agent, 3-aminopropyltrimethoxysilane (γ-APS), which is known to bond covalently to both the glass fiber surface and the epoxy resin. Recent work utilizing FT-IR fiber-optic evanescent wave spectroscopy provides a method for characterizing the interphase chemistry. This technique has been used to investigate the interphase chemistry of epoxy/amine curing agent/amine-terminated organosilane coupling agent/silica optical fiber model composites. That body of work has shown that a substantial fraction of the amine of the organosilane coupling agent does not participate in a reaction with the epoxy resin. This evidence suggests an interphase whose mechanical properties differ significantly from those of the bulk epoxy/amine matrix. Previous research has shown that drastic changes occur in the coupling agent chemistry, interphase chemistry, and composite mechanical properties as the amount of adsorbed coupling agent is varied over the industrially relevant range used in this work. A commercially available epoxy resin, EPON 828, and an aliphatic amine curing agent, EPI-CURE 3283, make up the polymer matrix in this study. The reinforcement is silica optical or E-glass fiber.
This report describes a five-year, $10 million Sandia/industry project to develop an advanced borehole seismic source for use in oil and gas exploration and production. The development team included Sandia, Chevron, Amoco, Conoco, Exxon, Raytheon, Pelton, and GRI. The seismic source that was developed is a vertically oriented, axial-point-force, swept-frequency, clamped, reaction-mass vibrator. It was based on an early Chevron prototype, but the new tool incorporates a number of improvements that make it far superior to the original. The system consists of surface control electronics, a special heavy-duty fiber-optic wireline and draw works, a cablehead, a hydraulic motor/pump module, an electronics module, a clamp, and an axial vibrator module. The tool has a peak output of 7,000 lbs force and a useful frequency range of 5 to 800 Hz. It can operate in fluid-filled wells with 5.5-inch or larger casing, to depths of 20,000 ft, and at operating temperatures up to 170 °C. The tool includes fiber-optic telemetry, force and phase control, provisions to add seismic receiver arrays below the source for single-well imaging, and provisions for adding other vibrator modules to the tool in the future. The project yielded four important deliverables: a complete advanced borehole seismic source system with all associated field equipment; field demonstration surveys, funded by industry, showing the utility of the system; industrial sources for all of the hardware; and a new service company, set up by the industrial partner, to provide commercial surveys.
The use of fracture-mechanics-based design for radioactive material transport (RAM) packagings has been the subject of extensive research for more than a decade. Sandia National Laboratories (SNL) has played an important role in the research and development supporting the application of this technology. Ductile iron has been internationally accepted as an exemplary material for demonstrating a fracture-mechanics-based method of RAM packaging design and is therefore the subject of a large portion of the research discussed in this report. SNL's extensive research and development program, funded primarily by the U.S. Department of Energy's Office of Transportation, Energy Management and Analytical Services (EM-76) and, in an auxiliary capacity, by the Office of Civilian Radioactive Waste Management, is summarized in this document, along with a summary of the research conducted at other institutions throughout the world. In addition to the research and development work, codes and standards development and regulatory positions are also discussed.
Construction of a prestressed concrete containment vessel (PCCV) model is underway as part of a cooperative containment research program at Sandia National Laboratories. The work is co-sponsored by the Nuclear Power Engineering Corporation (NUPEC) of Japan and the US Nuclear Regulatory Commission (NRC). Preliminary analyses of the Sandia 1:4 scale PCCV model have determined axisymmetric global behavior and have estimated the potential for failure in several areas, including the wall-base juncture and near penetrations. Although liner tearing has been emphasized as the failure mode, that assumption is largely based on experience with reinforced concrete containments. For the PCCV, the potential for shear failure at or near the liner tearing pressure may be considerable and requires detailed investigation. This paper examines the behavior of the PCCV in the region most susceptible to a radial shear failure, the wall-basemat juncture region. Predicting shear failure in concrete structures is difficult, both experimentally and analytically. As a structure begins to deform under an applied system of forces that produce shear, other deformation modes such as bending and tension/compression begin to influence the response. Analytically, the difficulties lie in characterizing the decrease in shear stiffness and shear stress, and in predicting the associated transfer of stress to reinforcement as cracks become wider and more extensive. This paper examines existing methods for representing concrete shear response and existing criteria for predicting shear failure, and it discusses the application of these methods and criteria to the study of the 1:4 scale PCCV.
Federal laboratories have successfully filled many roles for the public; however, as the 21st century nears, it is time to rethink and reevaluate how Federal laboratories can better support the public, and to identify new roles for this class of publicly owned institutions. The productivity of the Federal laboratory system can be increased by making use of public outcome metrics, by benchmarking laboratories, by deploying innovative new governance models, by partnering Federal laboratories with universities and companies, and by accelerating the transition of Federal laboratories, and the agencies that own them, into learning organizations. We must also learn how government-owned laboratories in other countries serve their publics; Taiwan's government laboratory, the Industrial Technology Research Institute, has been particularly successful in promoting economic growth. It is time to stop operating Federal laboratories as monopoly institutions; competition between Federal laboratories must be promoted. Additionally, Federal laboratories capable of addressing emerging 21st-century public problems must be identified and given the challenge of serving the public in innovative new ways. Increased investment in case studies of particular programs at Federal laboratories, and in research on the public utility of a system of Federal laboratories, could further increase laboratory productivity. Eliminating risk-averse Federal laboratory and agency bureaucracies would also have a dramatic impact on the productivity of the Federal laboratory system. Appropriately used, the US Federal laboratory system offers the US an innovative advantage over other nations.
Radionuclide transport experiments were carried out using intact cores obtained from the Culebra member of the Rustler Formation inside the Waste Isolation Pilot Plant Air Intake Shaft. Twenty-seven separate tests are reported here, including experiments with ³H, ²²Na, ²⁴¹Am, ²³⁹Np, ²²⁸Th, ²³²U, and ²⁴¹Pu, and two brine types, AIS and ERDA 6. The ³H was bound as water and provides a measure of advection, dispersion, and water self-diffusion. The other tracers were injected as dissolved ions at concentrations below solubility limits, except for americium. The objective of the intact-rock column flow experiments is to demonstrate and quantify transport retardation coefficients (R) for the actinides Pu, Am, U, Th, and Np in intact core samples of the Culebra dolomite. The measured R values are used to estimate partition coefficients (Kd) for the solute species. These Kd values may be compared to values obtained from empirical and mechanistic adsorption batch experiments to provide predictions of actinide retardation in the Culebra. Three parameters that may influence actinide R values were varied in the experiments: core, brine, and flow rate. Testing five separate core samples from four different core borings provided an indication of sample variability. While most testing was performed with Culebra brine, limited tests were carried out with a Salado brine to evaluate the effect of intrusion of waters from that lower formation. Varying the flow rate provided an indication of rate-dependent solute interactions such as sorption kinetics.
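For a saturated porous medium, R and Kd are related by the standard expression R = 1 + (ρ_b/φ)Kd, where ρ_b is the bulk density and φ the porosity. The sketch below inverts this relation; the density and porosity values are illustrative assumptions, not measured Culebra properties.

```python
# Standard retardation/partition-coefficient relation for a saturated
# porous medium: R = 1 + (rho_b / phi) * Kd. Values are illustrative.
rho_b = 2.5   # bulk density, g/cm^3 (assumed)
phi = 0.15    # effective porosity (assumed)

def kd_from_r(R: float) -> float:
    """Partition coefficient (mL/g) implied by a retardation factor R."""
    return (R - 1.0) * phi / rho_b

for R in (1.0, 10.0, 100.0):
    print(f"R = {R:6.1f}  ->  Kd ~ {kd_from_r(R):.3f} mL/g")
```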
Wind-energy researchers at the National Wind Technology Center (NWTC), representing Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL), are developing a new, lightweight, modular data acquisition unit capable of acquiring long-term, continuous time-series data from small and/or dynamic wind-turbine rotors. The unit utilizes commercial data acquisition hardware, spread-spectrum radio modems, Global Positioning System receivers, and a custom-built programmable logic device. A prototype of the system is now operational, and initial field deployment is expected this summer. This paper describes the major subsystems comprising the unit, summarizes the current status of the system, and presents the plans for near-term development of hardware and software.
Electronic sensing circuitry and microelectromechanical sense elements can be integrated to produce inertial instruments for applications unheard of a few years ago. This paper describes the Sandia M3EMS fabrication process, the inertial instruments that have been fabricated, and the results of initial characterization tests of micromachined accelerometers.
Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface-micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and the microelectronics circuitry of micro-electro-mechanical systems (MEMS) on the same chip. This integrated surface-micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity, or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface-micromachining process was selected for fabrication of the sensor because of the extreme measurement sensitivity made possible by integrated microelectronics. Measurement electronics capable of resolving attofarad (10⁻¹⁸ F) changes in capacitance were required because of the very small accelerometer proof mass (< 200 × 10⁻⁹ g) used in this surface-micromachining process. The small proof mass corresponds to small sensor deflections, which in turn require very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended-plate mass configuration, was developed, and the details of the design, modeling, and validation of the device are presented in this paper. The device was analyzed using both conventional lumped-parameter modeling techniques and finite element analysis tools, and it was tested and performed well over its design range.
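A back-of-the-envelope parallel-plate estimate shows why attofarad resolution is required; the plate geometry below is an assumed round-number example, not the actual sensor layout.

```python
# Parallel-plate sense capacitor: a small gap change d -> d - x gives
# dC ~ eps0 * A * x / d^2. Geometry values are illustrative assumptions.
EPS0 = 8.854e-12            # F/m

area = (200e-6) ** 2        # 200 um x 200 um plate, m^2 (assumed)
gap = 2.0e-6                # 2 um nominal gap, m (assumed)
deflection = 1.0e-9         # 1 nm plate deflection, m (assumed)

c0 = EPS0 * area / gap
dc = EPS0 * area * deflection / gap ** 2
print(f"C0 ~ {c0 * 1e15:.0f} fF, dC ~ {dc * 1e18:.0f} aF per nm of deflection")
```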
In developing secure applications and systems, designers often must incorporate secure user identification in the design specification. In this paper, the authors study secure offline authenticated user identification schemes based on a biometric system that can measure a user's biometric accurately (up to some Hamming distance). The schemes presented here enhance identification and authorization in secure applications by binding a biometric template with authorization information on a token such as a magnetic stripe. Also developed here are schemes specifically designed to minimize the compromise of a user's private biometric data, encapsulated in the authorization information, without requiring secure hardware tokens. The authors furthermore study the feasibility of biometrics serving as an enabling technology for secure system and application design, and investigate a new technology that allows a user's biometrics to facilitate cryptographic mechanisms.
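As a conceptual sketch only (this is not the authors' scheme, and the exhaustive neighborhood search below does not scale to realistic template lengths), token-based verification with a Hamming-distance tolerance can be illustrated as follows: enrollment stores a salted hash binding the template to authorization data, and verification accepts any fresh reading within distance t of the enrolled template.

```python
# Toy token-based biometric verification with Hamming tolerance.
# Illustration only -- NOT the authors' scheme; brute-force neighbor
# checking is infeasible for real template lengths.
import hashlib, itertools, os

def commit(template: int, auth: bytes, salt: bytes) -> str:
    return hashlib.sha256(salt + template.to_bytes(4, "big") + auth).hexdigest()

def verify(reading: int, auth: bytes, salt: bytes, stored: str,
           nbits: int = 32, t: int = 2) -> bool:
    # Accept if any template within Hamming distance t matches the hash.
    for k in range(t + 1):
        for flips in itertools.combinations(range(nbits), k):
            cand = reading
            for b in flips:
                cand ^= 1 << b
            if commit(cand, auth, salt) == stored:
                return True
    return False

salt, auth = os.urandom(8), b"role=operator"
enrolled = 0b1011_0011_1100_0101_1010_0110_0001_1110
token = commit(enrolled, auth, salt)
noisy = enrolled ^ (1 << 8)                   # one flipped bit in the reading
print(verify(noisy, auth, salt, token))       # True
```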
This report summarizes work on the development of ultra-low-power microwave CHFET integrated circuits. Power consumption of microwave circuits has been reduced by factors of 50--1,000 relative to commercially available circuits. Positive-threshold field effect transistors (nJFETs and PHEMTs) have been used to design and fabricate microwave circuits with power levels of 1 milliwatt or less. 0.7 µm gate nJFETs are suitable both for digital CHFET integrated circuits and for low-power microwave circuits. Both hybrid amplifiers and MMICs were demonstrated at the 1 mW level at 2.4 GHz. Advanced devices were also developed and characterized for even lower power levels. Amplifiers with 0.3 µm JFETs were simulated with 8--10 dB gain down to power levels of 250 µW. The 0.25 µm PHEMTs proved superior to the JFETs, with amplifier gain of 8 dB at 217 MHz and 50 µW power levels, but they are not integrable with the digital CHFET technology.
A compact, short-pulse, repetitive accelerator has many useful military and commercial applications in biological counterproliferation, materials processing, radiography, and sterilization (of medical instruments, waste, and food). The goal of this project was to develop and demonstrate a small, 700 kV accelerator that can produce 7 kA particle beams with pulse lengths of 10--30 ns at rates up to 50 Hz. At reduced power levels, longer pulses or higher repetition rates (up to 10 kHz) could be achieved. Two switching technologies were tested: (1) spark gaps, which have been used to build low-repetition-rate accelerators for many years; and (2) high-gain photoconductive semiconductor switches (PCSS), a new solid-state switching technology. This plan was economical because it used existing hardware for the accelerator, and the PCSS material and fabrication for one module were relatively inexpensive. It was research oriented because it provided a test bed to examine the utility of other emerging switching technologies, such as magnetic switches. At full power, the accelerator will produce 700 kV and 7 kA with either the spark gap or the PCSS pulser.
Sandia National Laboratories has a substantial effort in the development of microelectromechanical system (MEMS) technologies. This miniaturization capability can lead to low-cost, small, high-performance systems-on-a-chip with applications ranging from advanced military systems to large-volume commercial markets such as automobiles, rf and land-based communications networks and equipment, and commercial electronics. One of the key challenges in realizing such microsystems is the integration of several technologies, including digital electronics, analog and rf electronics, optoelectronics, sensors and actuators, and advanced packaging. In this work, the authors describe efforts to integrate MEMS with optoelectronic or photonic functions, and the fabrication constraints on both system components. The MEMS technology used in this work is silicon surface micromachining using the SUMMiT (Sandia Ultraplanar Multilevel MEMS Technology) process developed at Sandia. This process includes chemical-mechanical polishing as an intermediate planarization step to allow the use of four or five levels of polysilicon.
An automated system for calibrating vacuum gauges over the pressure range of 10⁻⁶ to 0.1 Pa was designed and constructed at the National Institute of Standards and Technology (NIST) for the Department of Energy (DOE) Primary Standards Laboratory at Sandia National Laboratories (SNL). Calculable pressures are generated by passing a known flow of gas through an orifice of known conductance. The orifice conductance is derived from dimensional measurements, and accurate flows are generated using metal capillary leaks. The expanded uncertainty (k = 2) in the generated pressure is estimated to be between 1% and 4% over the calibration range. The design, calibration results, and component uncertainties are discussed.
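The generated pressure follows directly from the throughput/conductance relation Q = C·Δp; with the downstream side pumped to a negligible pressure, p = Q/C. The numbers below are illustrative, not NIST/SNL calibration data.

```python
# Orifice-flow pressure generation, p = Q / C, plus a simple quadrature
# uncertainty combination. All values are illustrative assumptions.
flow = 1.0e-6      # known throughput from a capillary leak, Pa*m^3/s
conductance = 0.1  # orifice conductance from dimensional measurement, m^3/s

p = flow / conductance
print(f"generated pressure ~ {p:.2e} Pa")   # 1e-5 Pa, inside 1e-6..0.1 Pa

# Assuming uncorrelated relative standard uncertainties in flow and
# conductance, the expanded (k = 2) relative uncertainty is:
u_flow, u_cond = 0.005, 0.004               # assumed relative uncertainties
u_p = (u_flow ** 2 + u_cond ** 2) ** 0.5
print(f"expanded uncertainty (k=2) ~ {2 * 100 * u_p:.2f} %")
```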
Over the past several years, the US Nuclear Regulatory Commission (NRC) has sponsored the development of a new method for performing human reliability analyses (HRAs). A major impetus for the program was the recognized need for a method that would address not only errors of omission (EOOs) but also errors of commission (EOCs). Although several documents have been issued describing the basis and development of the new method, referred to as "A Technique for Human Event Analysis" (ATHEANA), two documents were drafted to provide the initial documentation for applying the method: the frame-of-reference (FOR) manual, which served as the technical basis document for the method, and the implementation guideline (IG), which provided step-by-step guidance for applying it. Upon completion of the draft FOR manual and draft IG in April 1997, along with several step-throughs of the process by the development team, the method was ready for a third-party test. The method was demonstrated at Seabrook Station in July 1997. The main goals of the demonstration were to (1) test the ATHEANA process as described in the FOR manual and the IG, (2) test a training package developed for the method, (3) test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and (4) identify ways to improve the method and its documentation. The results of the Seabrook demonstration are evaluated against the success criteria, and important findings and recommendations regarding ATHEANA obtained from the demonstration are presented here.
Analysis of cost and performance of physical security systems can be a complex, multi-dimensional problem. There are a number of point tools that address various aspects of cost and performance analysis. Increased interest in cost tradeoffs of physical security alternatives has motivated development of an architecture called Cost and Performance Analysis (CPA), which takes a top-down approach to aligning cost and performance metrics. CPA incorporates results generated by existing physical security system performance analysis tools, and utilizes an existing cost analysis tool. The objective of this architecture is to offer comprehensive visualization of complex data to security analysts and decision-makers.
In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses.
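A toy version of the surrogate idea is sketched below: a small network is trained on a modest set of (input, response) pairs from an "expensive" code, here replaced by a cheap analytic stand-in, and is then queried in place of the code. This illustrates the approach generically; it is not the authors' network architecture or training procedure.

```python
# Train a tiny one-hidden-layer network as a surrogate for an
# expensive response calculation (replaced here by an analytic
# stand-in), using plain full-batch gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))              # e.g. stiffness, emissivity, ...
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]   # stand-in for the detailed code

W1, b1 = rng.normal(0, 0.5, (3, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, 16), 0.0
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    gW2 = H.T @ err / len(X); gb2 = err.mean()
    gH = np.outer(err, W2) * (1 - H ** 2)     # backprop through tanh
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2)):
        p -= 0.1 * g
    b2 -= 0.1 * gb2
print("rms error:", np.sqrt(np.mean(err ** 2)))  # small => usable surrogate
```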
GaN etching can be affected by a wide variety of parameters, including plasma chemistry and plasma density. Chlorine-based plasmas have been the most widely used chemistries for etching GaN, due to the high volatility of the GaCl₃ and NCl etch products. The source of Cl and the addition of secondary gases can dramatically influence the etch characteristics, primarily through their effect on the concentration of reactive Cl generated in the plasma. In addition, high-density plasma etch systems have yielded high-quality etching of GaN, with plasma densities 2 to 4 orders of magnitude higher than those of reactive ion etch (RIE) systems. The high plasma densities enhance the bond-breaking efficiency in the GaN, the formation of volatile etch products, and the sputter desorption of the etch products from the surface. In this study, the authors report GaN etch results for a high-density inductively coupled plasma (ICP) as a function of BCl₃:Cl₂ flow ratio, dc bias, chamber pressure, and ICP source power. GaN etch rates ranging from ~100 Å/min to > 8,000 Å/min were obtained, with smooth etch morphology and anisotropic profiles.
Enhanced tribological properties have been observed after treatment with pulsed high-power ion beams, which produce rapid melting and resolidification of the surface. The authors have treated and tested 440C martensitic stainless steel (Fe-17 Cr-1 C); Ti and Al samples were sputter coated and ion-beam treated to produce surface alloying. The samples were treated at the RHEPP-I facility at Sandia National Laboratories (0.5 MV, 0.5--1 µs at the sample location, < 10 J/cm², 1--5 µm ion range). The authors have observed a reduction in the size of second-phase particles and other microstructural changes in 440C steel. The hardness of treated 440C increases with ion beam fluence, and a maximum hardness increase of a factor of 5 is obtained. Low wear rates are observed in wear tests of treated 440C steel. Surface-alloyed Ti-Pt layers show hardness improvements of up to a factor of 3 over untreated Ti, and Al-Si surface alloys show a hardness increase of a factor of 2 over untreated Al; both surface alloys show increased durability in wear testing. Rutherford backscattering (RBS) measurements show overlayer mixing to the depth of the melted layer. X-ray diffraction (XRD) and TEM confirm the existence of metastable states within the treated layer. Treated-layer depths of 1--10 µm have been measured.
The mission of the national laboratories has changed from weapon design and production to stockpile maintenance. Design engineers are becoming few in number, and years' worth of experience is about to be lost. What will happen when new weapons are designed or retrofits need to be made? Who will know the lessons learned in the past? What process will be followed? When, and what, software codes should be used? Intelligent design is the answer to the questions posed above for weapon design, and indeed for any design. An interactive design development environment will allow the designers of the future access to the knowledge of yesterday, today, and tomorrow. Design guides, rules of thumb, lessons learned, production capabilities, production data, process flow, and analysis codes will all be included in intelligent design. An intelligent design environment is being developed as a heuristic, knowledge-based system and as a diagnostic design tool. The system provides the framework for incorporating rules of thumb from experienced design engineers, available manufacturing processes (including the newest ones), and manufacturing databases with current data to help reduce design margins. The system also has the capability to access analysis and legacy codes appropriately. A modular framework allows portions to be added or deleted based on the application. This paper presents the driving forces for developing an intelligent design environment and an overview of the system, including the system architecture and how it relates to the capture and utilization of design and manufacturing knowledge. The paper concludes with a discussion of realized and expected benefits.
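A toy sketch of the rule-of-thumb capture described above: design-state facts are checked against encoded heuristics, and matching guidance is returned to the designer. The two rules below are invented placeholders, not actual design rules from the system.

```python
# Minimal rule-based design advisor. Both rules are hypothetical
# placeholders used only to illustrate the knowledge-capture idea.
RULES = [
    (lambda d: d["wall_thickness_mm"] < 2.0,
     "Thin wall: consult casting producibility guide before release."),
    (lambda d: d["max_temp_C"] > 150 and d["material"] == "polymer",
     "Polymer near thermal limit: re-check derating lessons learned."),
]

def advise(design: dict) -> list[str]:
    """Return the guidance messages whose rule conditions fire."""
    return [msg for check, msg in RULES if check(design)]

print(advise({"wall_thickness_mm": 1.5, "max_temp_C": 160,
              "material": "polymer"}))
```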
The detection and refocus of moving targets in SAR imagery is of interest in a number of applications. In this paper the authors address the problem of refocusing a blurred signature that has by some means been identified as a moving target. They assume that the target vehicle velocity is constant, i.e., the motion is in a straight line with constant speed. The refocus is accomplished by application of a two-dimensional phase function to the phase history data obtained via Fourier transformation of an image chip that contains the blurred moving target data. By considering separately the phase effects of the range and cross-range components of the target velocity vector, they show how the appropriate phase correction term can be derived as a two-parameter function. They then show a procedure for estimating the two parameters, so that the blurred signature can be automatically refocused. The algorithm utilizes optimization of an image domain contrast metric. They present results of refocusing moving targets in real SAR imagery by this method.
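The mechanics of the search can be sketched as follows: apply a parameterized phase correction over the Fourier (phase-history) domain of the image chip and keep the parameter pair that maximizes a contrast metric. The quadratic/cross-term form of the correction and the normalized-variance metric below are assumptions for illustration; the paper derives the exact correction from the range and cross-range velocity components.

```python
# Grid search over a two-parameter phase correction applied in the
# phase-history domain, scored by an image-contrast metric.
import numpy as np

def refocus_metric(chip: np.ndarray, a: float, b: float) -> float:
    ny, nx = chip.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    hist = np.fft.fft2(chip)                       # to phase-history domain
    corr = np.exp(-1j * 2 * np.pi * (a * kx ** 2 + b * kx * ky))
    img = np.fft.ifft2(hist * corr)                # back to image domain
    inten = np.abs(img) ** 2
    return inten.std() / inten.mean()              # simple contrast metric

rng = np.random.default_rng(2)
chip = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
grid = [(a, b) for a in np.linspace(-50, 50, 11)
        for b in np.linspace(-50, 50, 11)]
best = max(grid, key=lambda ab: refocus_metric(chip, *ab))
print("best (a, b):", best)
```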
Sandia National Laboratories has conducted research in chemical sensing and analysis of explosives for many years. Recently, that experience has been directed toward detecting mines and unexploded ordnance (UXO) by sensing the low-level explosive signatures associated with these objects. The authors' focus has been on the classification of UXO in shallow water and of anti-personnel/anti-tank mines on land. The objective of this work is to develop a field-portable chemical sensing system that can be used to examine mine-like objects (MLOs) to determine whether explosive molecules are associated with the MLO. Two sampling subsystems have been designed, one for water collection and one for soil/vapor sampling. The water sampler utilizes a flow-through chemical adsorbent canister to extract and concentrate the explosive molecules. Explosive molecules are thermally desorbed from the concentrator and trapped in a focusing stage for rapid desorption into an ion-mobility spectrometer (IMS). The authors describe a prototype system consisting of a sampler, a concentrator-focuser, and a detector. The soil sampler employs a lightweight probe for extracting and concentrating explosive vapor from the soil in the vicinity of an MLO. The chemical sensing system is capable of sub-part-per-billion detection of TNT and related explosive munition compounds. The authors present the results of field and laboratory tests on buried landmines, which demonstrate the ability to detect the explosive signatures associated with these objects.
The disposition of the large backlog of plutonium residues at the Rocky Flats Environmental Technology Site (Rocky Flats) will require interim storage and subsequent shipment to a waste repository. Current plans call for disposal at the Waste Isolation Pilot Plant (WIPP) and transportation to WIPP in the TRUPACT-II. The transportation phase will require the residues to be packaged in a container that is more robust than a standard 55-gallon waste drum; Rocky Flats has designed the Pipe Overpack Container to meet this need. The potential for damage to this container during onsite storage in unhardened structures has been addressed, for several hypothetical accident scenarios, using finite element calculations. This report describes the initial conditions and assumptions for these analyses and the predicted response of the container.
This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters is implemented to enhance and detect signals that are buried in the noise. The algorithm also automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time domain and the frequency domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
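One piece of the approach, the time-domain adaptive threshold, can be sketched compactly: track the long-term noise mean and variance with exponential averages and flag samples that exceed the estimate by k standard deviations. The constants and single-band structure are illustrative assumptions; the full algorithm also runs matched filters and frequency-domain tests.

```python
# Adaptive k-sigma detector with exponentially weighted noise
# statistics. Detected samples are excluded from the running estimate
# so transients do not inflate the noise model.
import numpy as np

def detect(x: np.ndarray, alpha: float = 0.995, k: float = 5.0) -> list[int]:
    mean, var, hits = 0.0, 1.0, []
    for i, s in enumerate(x):
        level = abs(s)
        if level > mean + k * np.sqrt(var):
            hits.append(i)                   # detection: skip stats update
        else:
            mean = alpha * mean + (1 - alpha) * level
            var = alpha * var + (1 - alpha) * (level - mean) ** 2
    return hits

rng = np.random.default_rng(3)
noise = rng.standard_normal(5000)
noise[3000:3020] += 12.0                     # buried transient
print(detect(noise)[:5])
```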
It has been recognized that nondestructive inspection (NDI) techniques and instruments that have proven themselves in the laboratory do not always perform as well under field conditions. In this paper the authors explore combinations of formal laboratory and field experimentation to characterize NDI processes as they may be implemented in field conditions. They also discuss appropriate modeling for probability of detection (POD) curves as applied to data gathered under field conditions. A case is made for expanding the more traditional two-parameter models to models using either three or four parameters. They use NDI data gathered from various airframe inspection programs to illustrate the points.
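The model expansion under discussion can be illustrated with the classical two-parameter log-logistic POD curve and a three-parameter variant whose ceiling α < 1 reflects a detection plateau under field conditions; all parameter values below are illustrative assumptions, not fitted airframe-inspection data.

```python
# Two-parameter log-logistic POD curve and a three-parameter variant
# with a ceiling alpha < 1. Parameter values are illustrative.
import math

def pod2(a: float, mu: float, sigma: float) -> float:
    """Two-parameter log-logistic probability of detection."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

def pod3(a: float, mu: float, sigma: float, alpha: float) -> float:
    """Three-parameter version: ceiling alpha replaces the implicit 1.0."""
    return alpha * pod2(a, mu, sigma)

for a in (0.5, 1.0, 2.0, 5.0, 10.0):         # flaw size, arbitrary units
    print(f"a={a:5.1f}  2-par={pod2(a, 0.7, 0.4):.3f}"
          f"  3-par={pod3(a, 0.7, 0.4, 0.92):.3f}")
```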
Si⁺ implant activation efficiencies above 90%, even at doses of 5 × 10¹⁵ cm⁻², have been achieved in GaN by rapid thermal processing (RTP) at 1,400--1,500 °C for 10 s. The annealing system uses MoSi₂ heating elements capable of operation up to 1,900 °C, producing high heating and cooling rates (up to 100 °C/s). Unencapsulated GaN shows severe surface pitting at 1,300 °C and complete loss of the film by evaporation at 1,400 °C. Dissociation of nitrogen from the surface is found to occur with an approximate activation energy of 3.8 eV for GaN (compared to 4.4 eV for AlN and 3.4 eV for InN). Encapsulation with either rf-magnetron reactively sputtered or MOMBE-grown AlN thin films provides protection against GaN surface degradation up to 1,400 °C, where peak electron concentrations of ~5 × 10²⁰ cm⁻³ can be achieved in Si-implanted GaN. SIMS profiling showed little measurable redistribution of Si, suggesting D(Si) ≤ 10⁻¹³ cm²/s at 1,400 °C. The implant activation efficiency decreases at higher temperatures, which may result from Si(Ga)-to-Si(N) site switching and the resultant self-compensation.
In this paper the authors give a construction of wavelets which are (a) semi-orthogonal with respect to an arbitrary elliptic bilinear form a(·,·) on the Sobolev space H₀¹((0, L)) and (b) continuous and piecewise linear on an arbitrary partition of [0, L]. They illustrate this construction using a model problem. They also construct alpha-orthogonal Battle-Lemarié-type wavelets which fully diagonalize the Galerkin-discretized matrix for the model problem with domain ℝ. Finally, they describe a hybrid basis consisting of a combination of elements from the semi-orthogonal wavelet basis and the hierarchical Schauder basis. Numerical experiments indicate that this basis leads to robust, scalable Galerkin discretizations of the model problem which remain well-conditioned independent of ε, L, and the refinement level K.
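To fix notation, the following display is a sketch of the defining semi-orthogonality condition, written for an assumed generic singularly perturbed model form (not necessarily the authors' exact model problem):

```latex
% Semi-orthogonality with respect to the elliptic form a(.,.):
% wavelets on different levels are a-orthogonal, while functions
% within a level need not be. The model form below is an assumed
% generic example.
\[
  a(u,v) \;=\; \int_0^L \bigl(\epsilon\, u'(x)\, v'(x) + u(x)\, v(x)\bigr)\, dx ,
  \qquad u, v \in H_0^1\bigl((0,L)\bigr),
\]
\[
  a\bigl(\psi_{j,k},\, \psi_{j',k'}\bigr) = 0
  \quad \text{whenever } j \neq j' .
\]
```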
Investment casting is an important method for fabricating a variety of high-quality components for mechanical systems. Cast components, unfortunately, carry a large design and gate/runner build time as part of their fabrication, and casting engineers often require many years of hands-on experience to consistently pour high-quality castings. Since 1989, Sandia National Laboratories has been investigating casting technology and software to reduce the time overhead involved in producing quality castings. Several companies in the casting industry have teamed with Sandia to form the FASTCAST Consortium. One result of this research and of the consortium's formation is the WinMod software, an expert casting advisor that supports the decision-making process of the casting engineer through visualization and advice, helping to eliminate possible casting defects.
The effect of temperature on the reversible and irreversible capacities of disordered carbons derived from polymethacrylonitrile (PMAN) and divinylbenzene (DVB) copolymers was studied in 1 M LiPF₆/ethylene carbonate (EC)-dimethyl carbonate (DMC) (1:1 v/v) solution by galvanostatic cycling. The kinetics of passive film formation were examined by complex-impedance spectroscopy. Temperatures of 5, 21, and 35 °C were used in the study.