Publications
Reduction of Np(VI) and Pu(VI) by organic chelating agents
The reduction of NpO{sub 2}{sup 2+} and PuO{sub 2}{sup 2+} by oxalate, citrate, and ethylenediaminetetraacetic acid (EDTA) was investigated in low ionic strength media and brines. This was done to help establish how the stability of the An(VI) oxidation state depends on the pH and the relative strength of the various oxidation-state-specific complexes. At low ionic strength and pH 6, NpO{sub 2}{sup 2+} was rapidly reduced to form NpO{sub 2}{sup +} organic complexes. At longer times, Np(IV) organic complexes were observed in the presence of citrate. PuO{sub 2}{sup 2+} was predominantly reduced to Pu{sup 4+}, resulting in the formation of organic complexes or polymeric/hydrolytic precipitates. The relative rates of reduction to the An(V) complex were EDTA > citrate > oxalate. Subsequent reduction to An(IV) complexes, however, occurred in the order citrate > EDTA > oxalate because of the stability of the An(VI)-EDTA complex. The presence of organic complexants led to the rapid reduction of NpO{sub 2}{sup 2+} and PuO{sub 2}{sup 2+} in G-Seep brine at pH 5 and 7. At pH 8 and 10 in ERDA-6 brine, carbonate and hydrolytic complexes predominated and slowed or prevented the reduction of An(VI) by the organics present.
Testing, expanding and implementing pollution prevention tools for environmental restoration and decontamination and decommissioning
Pollution Prevention (P2) programs and projects within the DOE Environmental Restoration (ER) and Decontamination and Decommissioning (D and D) Programs have been independently developed and implemented at various sites. As a result, unique, innovative solutions used at one site may not be known to other sites, and other sites may continue to duplicate efforts to develop and implement similar solutions. Several DOE Program offices have funded the development of tools to assist ER/D and D P2 projects. To realize the full value of these tools, they need to be evaluated and publicized to field sites. To address these needs and concerns, Sandia National Laboratories (SNL/NM), Los Alamos National Laboratory (LANL), and the Oak Ridge Field Office (DOE-OR) have teamed to pilot test DOE training and tracking tools, transfer common P2 analyses between sites, and evaluate and expand P2 tools and methodologies. The project is supported by FY 98 DOE Pollution Prevention Complex-Wide Project Funds. This paper presents the preliminary results for each of the following project modules: Training; Waste Tracking Pilot; Information Exchange; Evaluate P2 Tools for ER/D and D; Field Test of P2 Tools; and DOE Information Exchange.
Sandia bicycle commuters group -- pollution prevention at Sandia National Laboratories, New Mexico
The Sandia Bicycle Commuters Group (SBCG) formed three years ago for the purpose of addressing issues that impact the bicycle commuting option. The meeting that launched the SBCG was scheduled in conjunction with National Bike-to-Work Day in May 1995. Results from a survey handed out at the meeting solidly confirmed the issues and the need for an advocacy group. The purpose statement for the Group headlines its web site and brochure: ``Existing to assist and educate the SNL workforce bicyclist on issues regarding Kirtland Air Force Base (KAFB) access, safety and bicycle-supporting facilities, in order to promote bicycling as an effective and enjoyable means of commuting.`` The SNL Pollution Prevention (P2) Team`s challenge to the SNL workforce is to ``prevent pollution, conserve natural resources, and save money``. In the first winter of its existence, the SBCG sponsored a winter commute contest in conjunction with the City`s Clean Air Campaign (CAC). The intent of the CAC is to promote alternative (to the single-occupant vehicle) commuting during the Winter Pollution Advisory Period (October 1--February 28), when the City runs the greatest risk of exceeding federal pollution limits.
Contact force modeling between non-convex objects using a nonlinear damping model
At Sandia National Laboratories, the authors are developing the ability to accurately predict motions for arbitrary numbers of bodies of arbitrary shapes experiencing multiple applied forces and intermittent contacts. In particular, they are concerned with the simulation of systems such as part feeders or mobile robots operating in realistic environments. Preliminary investigation of commercial dynamics software packages led them to conclude that a commercial code could provide everything they needed except the contact model, and they found that ADAMS best fit their simulation needs. To simulate intermittent contacts, they need collision detection software that can efficiently compute the distances between non-convex objects and return the associated witness features. They also require a computationally efficient contact model for rapid simulation of impact, sustained contact under load, and transitions to and from contact conditions. This paper provides a technical review of a custom hierarchical distance computation engine developed at Sandia, called the C-Space Toolkit (CSTk). In addition, the authors describe an efficient contact model using a non-linear damping term developed at Ohio State. Both the CSTk and the non-linear damper have been incorporated in a simplified two-body testbed code, which is used to investigate how to correctly model the contact using these two utilities. They have incorporated this model into the ADAMS SOLVER using the callable function interface. An example that illustrates the capabilities of the 9.02 release of ADAMS with these extensions is provided.
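The abstract does not give the form of the contact law itself; the sketch below is only an illustration of a Hunt-Crossley-type nonlinear-damping contact force of the kind such models use (all parameter values are made up for illustration, not taken from the paper or from ADAMS).

```python
def contact_force(penetration, penetration_rate, k=1.0e5, n=1.5, alpha=0.08):
    """Illustrative Hunt-Crossley-style contact force with nonlinear damping.

    The damping term is scaled by the elastic term, so the force vanishes at
    first touch and at separation, avoiding the force discontinuities of a
    simple linear spring-damper contact model.
    """
    if penetration <= 0.0:                      # bodies are not in contact
        return 0.0
    elastic = k * penetration**n                # Hertz-like stiffness term
    damped = elastic * (1.0 + 1.5 * alpha * penetration_rate)
    return max(damped, 0.0)                     # never pull the bodies together

# Example: 1 mm of penetration, closing at 0.2 m/s.
print(contact_force(1.0e-3, 0.2))
```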
X-1: The challenge of high fusion yield
In the past three years, tremendous strides have been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA II) is the largest available in the laboratory. These z-pinch x-ray sources have great potential to drive high-yield inertial confinement fusion (ICF) reactions at affordable cost if several challenging technical problems can be overcome. Technical challenges in three key areas are discussed in this paper: (1) the design of a target for high yield, (2) the development of a suitable pulsed power driver, and (3) the design of a target chamber capable of containing the high fusion yield.
Results of von Neumann analyses for reproducing kernel semi-discretizations
The Reproducing Kernel Particle Method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems. RKPM may be implemented in a mesh-full or a mesh-free manner and provides the ability to tune the method, via the selection of a dilation parameter and window function, in order to achieve the requisite numerical performance. RKPM also provides a framework for performing hierarchical computations, making it an ideal candidate for simulating multi-scale problems. Although RKPM has many appealing attributes, the method is quite new and its numerical performance is still being quantified with respect to more traditional discretization methods. In order to assess the numerical performance of RKPM, detailed studies of RKPM on a series of model partial differential equations have been undertaken. The results of von Neumann analyses for RKPM semi-discretizations of one- and two-dimensional, first- and second-order wave equations are presented in the form of phase and group errors. Excellent dispersion characteristics are found for the consistent mass matrix with the proper choice of dilation parameter. In contrast, row-sum lumping the mass matrix is shown to introduce severe lagging phase errors. A higher-order mass matrix improves the dispersion characteristics relative to the lumped mass matrix but still exhibits severe lagging phase errors relative to the fully integrated, consistent mass matrix.
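The RKPM dispersion relations themselves are not reproduced here; as an illustration of what a von Neumann (dispersion) analysis of a semi-discretization produces, the sketch below computes phase- and group-velocity errors for the standard second-order central-difference semi-discretization of the 1-D first-order wave equation (a stand-in scheme, not RKPM).

```python
import numpy as np

# Von Neumann analysis of a semi-discretization of u_t + c u_x = 0, using the
# standard second-order central difference as a stand-in for RKPM: substituting
# u_j = exp(i k x_j) gives a numerical frequency omega_h(k), and phase/group
# errors follow from comparing it with the exact relation omega = c k.
c, h = 1.0, 0.1                        # wave speed and grid spacing (illustrative)
kh = np.linspace(1e-3, np.pi, 200)     # nondimensional wavenumber k*h

omega_exact = c * kh / h               # exact dispersion relation
omega_num = c * np.sin(kh) / h         # central difference: omega_h = c sin(kh)/h

phase_error = omega_num / omega_exact  # numerical-to-exact phase-speed ratio
group_error = np.cos(kh)               # d(omega_h)/dk divided by c
print(f"phase-speed ratio at kh = pi/2: {np.interp(np.pi / 2, kh, phase_error):.3f}")
```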
Fusion with Z-pinches
In the past thirty-six months, great progress has been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA-II) is the largest available in the laboratory. These z-pinch x-ray sources have the potential to drive high-yield ICF reactions at affordable cost if several challenging technical problems can be overcome. In this paper, the recent technical progress with Z-pinches will be described, and a technical strategy for achieving high-yield ICF with z-pinches will be presented.
Development of the Weapon Borne Sensor parachute system
A parachute system was designed and prototypes built to deploy a telemetry package behind an earth-penetrating weapon just before impact. The parachute was designed to slow the 10 lb. telemetry package and wire connecting it to the penetrator to 50 fps before impact occurred. The parachute system was designed to utilize a 1.3-ft-dia cross pilot parachute and a 10.8-ft-dia main parachute. A computer code normally used to model the deployment of suspension lines from a packed parachute system was modified to model the deployment of wire from the weapon forebody. Results of the design calculations are presented. Two flight tests of the WBS were conducted, but initiation of parachute deployment did not occur in either of the tests due to difficulties with other components. Thus, the trajectory calculations could not be verified with data. Draft drawings of the major components of the parachute system are presented.
Dynamic isosurface extraction and level-of-detail in voxel space
A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)
Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response is best simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Because of the different methodologies of the two computational techniques and their code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes coupled explosive/structure interaction simulations difficult or next to impossible. Sandia National Laboratories has developed two techniques for solving this problem. The first is Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian techniques, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems and is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
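As a sketch of the SPH idea referred to above (a generic textbook cubic-spline kernel and summation density in one dimension, not the PRONTO-3D/SPH implementation):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline SPH kernel W(r, h) (generic textbook form)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                       # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Summation density: each particle's density is a kernel-weighted sum of
# neighboring particle masses, with no grid required.
x = np.linspace(0.0, 1.0, 51)                     # particle positions
m = np.full_like(x, 1.0 / 50.0)                   # particle masses
h = 1.5 * (x[1] - x[0])                           # smoothing length

rho = np.array([np.sum(m * cubic_spline_kernel(xi - x, h)) for xi in x])
print(f"interior density ~ {rho[25]:.3f} (expect ~1 for unit total mass on [0, 1])")
```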
Confidence in ASCI scientific simulations
The US Department of Energy`s (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program`s examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development that results in the notional framework known as Sargent`s Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.
Cost and performance analysis of conceptual designs of physical protection systems
Hicks, M.J.; Snell, M.S.; Sandoval, J.S.; Potter, C.S.
CPA -- Cost and Performance Analysis -- is a methodology that joins Activity Based Cost (ABC) estimation with performance-based analysis of physical protection systems. CPA offers system managers an approach that supports both tactical decision making and strategic planning. Current exploratory applications of the CPA methodology address the analysis of alternative conceptual designs. To support these activities, the original architecture for CPA is being expanded to incorporate results from a suite of performance and consequence analysis tools such as JTS (Joint Tactical Simulation), ERAD (Explosive Release Atmospheric Dispersion), and blast effect models. The process flow for applying CPA to the development and analysis of conceptual designs is illustrated graphically.
Prediction of the TNT signature from buried UXO/landmines
The detection and removal of buried unexploded ordnance (UXO) and landmines is one of the most important problems facing the world today. Numerous detection strategies are being developed, including infrared, electrical conductivity, ground-penetrating radar, and chemical sensors. Chemical sensors rely on the detection of TNT molecules, which are transported from buried UXO/landmines by advection and diffusion in the soil. As part of this effort, numerical models are being developed to predict TNT transport in soils including the effect of precipitation and evaporation. Modifications will be made to TOUGH2 for application to the TNT chemical sensing problem. Understanding the fate and transport of TNT in the soil will affect the design, performance and operation of chemical sensors by indicating preferred sensing strategies.
Site-specific thermal and mechanical property characterizations of in situ thermal test areas at Yucca Mountain, Nevada
The US Department of Energy (DOE) is investigating Yucca Mountain, Nevada as a potential site for the disposal of high-level nuclear waste. The site is located near the southwest corner of the Nevada Test Site (NTS) in southern Nye County, Nevada. The underground Exploratory Studies Facility (ESF) tunnel traverses part of the proposed repository block. Alcove 5, located within the ESF, is being used to field two in situ ESF thermal tests: the Single Heater Test (SHT) and the Drift Scale Test (DST). Laboratory test specimens were collected from three sites within Alcove 5, including each in situ field test location and one additional site. The aim of the laboratory tests was to determine site-specific thermal and mechanical rock properties including thermal expansion, thermal conductivity, unconfined compressive strength, and elastic moduli. In this paper, the results obtained for the SHT and DST area characterization are compared with data obtained from other locations at the proposed repository site. Results show that the thermal expansion and mechanical properties of Alcove 5 laboratory specimens are slightly different from the average values obtained on specimens from surface drillholes.
In-vessel tritium retention and removal in ITER
The International Thermonuclear Experimental Reactor (ITER) is envisioned to be the next major step in the world`s fusion program from the present generation of tokamaks and is designed to study fusion plasmas with a reactor relevant range of plasma parameters. During normal operation, it is expected that a fraction of the unburned tritium, that is used to routinely fuel the discharge, will be retained together with deuterium on the surfaces and in the bulk of the plasma facing materials (PFMs) surrounding the core and divertor plasma. The understanding of the basic retention mechanisms (physical and chemical) involved and their dependence upon plasma parameters and other relevant operation conditions is necessary for the accurate prediction of the amount of tritium retained at any given time in the ITER torus. Accurate estimates are essential to assess the radiological hazards associated with routine operation and with potential accident scenarios which may lead to mobilization of tritium that is not tenaciously held. Estimates are needed to establish the detritiation requirements for coolant water, to determine the plasma fueling and tritium supply requirements, and to establish the needed frequency and the procedures for tritium recovery and clean-up. The organization of this paper is as follows. Section 2 provides an overview of the design and operating conditions of the main components which define the plasma boundary of ITER. Section 3 reviews the erosion database and the results of recent relevant experiments conducted both in laboratory facilities and in tokamaks. These data provide the experimental basis and serve as an important benchmark for both model development (discussed in Section 4) and calculations (discussed in Section 5) that are required to predict tritium inventory build-up in ITER. Section 6 emphasizes the need to develop and test methods to remove the tritium from the codeposited C-based films and reviews the status and the prospects of the most attractive techniques. Section 7 identifies the unresolved issues and provides some recommendations on potential R and D avenues for their resolution. Finally, a summary is provided in Section 8.
Final report on LDRD Project: The double electron layer tunneling transistor (DELTT)
This report describes the research accomplishments achieved under the LDRD Project ``Double Electron Layer Tunneling Transistor.`` The main goal of this project was to investigate whether the recently discovered phenomenon of 2D-2D tunneling in GaAs/AlGaAs double quantum wells (DQWs), investigated in a previous LDRD, could be harnessed and implemented as the operating principle for a new type of tunneling device proposed by the authors, the double electron layer tunneling transistor (DELTT). In parallel with this main thrust of the project, they also continued a modest basic research effort on DQW physics issues, with significant theoretical support. The project was a considerable success, with the main goal of demonstrating a working prototype of the DELTT having been achieved. Additional DELTT advances included demonstrating good electrical characteristics at 77 K, demonstrating both NMOS and CMOS-like bi-stable memories at 77 K using the DELTT, demonstrating digital logic gates at 77 K, and demonstrating voltage-controlled oscillators at 77 K. In order to successfully fabricate the DELTT, the authors had to develop a novel flip-chip processing scheme, the epoxy-bond-and-stop-etch (EBASE) technique. This technique was later improved so as to be amenable to electron-beam lithography, allowing the fabrication of DELTTs with sub-micron features, which are expected to be extremely high speed. In the basic physics area they also made several advances, including a measurement of the effective mass of electrons in the hour-glass orbit of a DQW subject to in-plane magnetic fields, and both measurements and theoretical calculations of the full Landau level spectra of DQWs in both perpendicular and in-plane magnetic fields. This last result included the unambiguous demonstration of magnetic breakdown of the Fermi surface. Finally, they also investigated the concept of a far-infrared photodetector based on photon-assisted tunneling in a DQW. Absorption calculations showed a narrowband absorption which persisted to temperatures much higher than the photon energy being detected. Preliminary data on prototype detectors indicated that the absorption is not only narrowband, but can be tuned in energy through the application of a gate voltage.
The Yucca Mountain Project drift scale test
The Yucca Mountain Project is currently evaluating the coupled thermal-mechanical-hydrological-chemical (TMHC) response of the potential repository host rock through an in situ thermal testing program. A drift scale test (DST) was constructed during 1997 and heaters were turned on in December 1997. The DST includes nine canister-sized containers with thirty operating heaters each located within the heated drift (HD) and fifty wing heaters located in boreholes in both ribs, with a total power output of nominally 210 kW. A total of 147 boreholes (combined length of 3.3 km) house most of the over 3700 TMHC sensors connected with 201 km of cabling to a central data acquisition system. The DST is located in the Exploratory Studies Facility in a 5-m diameter drift approximately 50 m in length. Heating will last up to four years and cooling will last another four years. The rock mass surrounding the DST will experience a harsh thermal environment, with rock surface temperatures expected to reach a maximum of about 200 C. This paper describes the process of designing the DST. The first 38 m of the 50-m long Heated Drift (HD) is dedicated to collection of data that will lead to a better understanding of the complex coupled TMHC processes in the host rock of the proposed repository. The final 12 m is dedicated to evaluating the interactions between the heated rock mass and cast-in-place (CIP) concrete ground support systems at elevated temperatures. In addition to a description of the DST design, data from site characterization and a general description of the analyses and analysis approach used to design the test and make pretest predictions are presented. Test-scoping and pretest numerical predictions of one-way thermal-hydrologic, thermal-mechanical, and thermal-chemical behaviors have been completed (TRW, 1997a). These analyses suggest that a dry-out zone will be created around the DST and that a 10,000 m{sup 3} volume of rock will experience temperatures above 100 C. The HD will experience large stress increases, particularly in the crown of the drift. Thermoelastic displacements of up to about 16 mm are predicted for some thermomechanical gages. Additional analyses using more complex models will be performed during the conduct of the DST and the results compared with measured data.
Lubrication of polysilicon micromechanisms with self-assembled monolayers
Srinivasan, U.; Foster, J.D.; Habib, U.; Howe, R.T.; Maboudian, R.; Senft, D.C.; Dugger, M.T.
Here, the authors report on the lubricating effects of self-assembled monolayers (SAMs) on MEMS by measuring static and dynamic friction with two polysilicon surface-micromachined devices. The first test structure is used to study friction between laterally sliding surfaces; with the second, friction between vertical sidewalls can be investigated. Both devices are SAM-coated following the sacrificial oxide etch, and the microstructures emerge released and dry from the final water rinse. The coefficient of static friction, {mu}{sub s}, was found to decrease from 2.1 {+-} 0.8 for the SiO{sub 2} coating to 0.11 {+-} 0.01 and 0.10 {+-} 0.01 for films derived from octadecyltrichlorosilane (OTS) and 1H,1H,2H,2H-perfluorodecyltrichlorosilane (FDTS), respectively. Both OTS and FDTS SAM-coated structures exhibit dynamic coefficients of friction, {mu}{sub d}, of 0.08 {+-} 0.01. These values were found to be independent of the apparent contact area and remain unchanged after 1 million impacts at 5.6 {micro}N (17 kPa), indicating that these SAMs continue to act as boundary lubricants despite repeated impacts. Measurements during sliding friction with the sidewall friction test structure give comparable initial {mu}{sub d} values of 0.02 at a contact pressure of 84 MPa. After 15 million wear cycles, {mu}{sub d} was found to rise to 0.27. Wear of the contacting surfaces was examined by SEM. The standard deviations in the {mu} data for the SAM treatments indicate uniform coating coverage.
Time-resolved wave-profile measurements at impact velocities of 10 km/s
Development of well-controlled hypervelocity launch capabilities is the first step toward understanding material behavior at extreme pressures and temperatures not attainable using conventional gun technology. In this paper, the techniques used to extend the launch capabilities of a two-stage light-gas gun to 10 km/s, and their use to determine material properties at pressure and temperature states higher than those previously obtained in the laboratory, are summarized. Time-resolved interferometric techniques have been used to determine the shock loading and release characteristics of materials impacted at 10 km/s by titanium and aluminum fliers launched by the only three-stage light-gas gun yet developed. In particular, the Sandia three-stage light-gas gun, also referred to as the hypervelocity launcher (HVL), which is capable of launching 0.5-mm to 1.0-mm-thick by 6-mm to 19-mm-diameter plates to velocities approaching 16 km/s, has been used to obtain the necessary impact velocities. The VISAR interferometric particle-velocity technique has been used to determine shock loading and release profiles in aluminum and titanium at impact velocities of 10 km/s.
Optimization and nondeterministic analysis with large simulation models: Issues and directions
Economic and political demands are driving computational investigation of systems and processes like never before. It is foreseen that questions of safety, optimality, risk, robustness, likelihood, credibility, etc. will increasingly be posed to computational modelers. This will require the development and routine use of computing infrastructure that incorporates computational physics models within the framework of larger meta-analyses involving aspects of optimization, nondeterministic analysis, and probabilistic risk assessment. This paper describes elements of an ongoing case study involving the computational solution of several meta-problems in optimization, nondeterministic analysis, and optimization under uncertainty pertaining to the surety of a generic weapon safing device. The goal of the analyses is to determine the worst-case heating configuration in a fire that most severely threatens the integrity of the device. A large, 3-D, nonlinear, finite element thermal model is used to determine the transient thermal response of the device in this coupled conduction/radiation problem. Implications of some of the numerical aspects of the thermal model on the selection of suitable and efficient optimization and nondeterministic analysis algorithms are discussed.
Russia-U.S. joint program on the safe management of nuclear materials
The Russia-US joint program on the safe management of nuclear materials was initiated to address common technical issues confronting the US and Russia in the management of excess weapons grade nuclear materials. The program was initiated after the 1993 Tomsk-7 accident. This paper provides an update on program activities since 1996. The Fourth US Russia Nuclear Materials Safety Management Workshop was conducted in March 1997. In addition, a number of contracts with Russian Institutes have been placed by Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL). These contracts support research related to the safe disposition of excess plutonium (Pu) and highly enriched uranium (HEU). Topics investigated by Russian scientists under contracts with SNL and LLNL include accident consequence studies, the safety of anion exchange processes, underground isolation of nuclear materials, and the development of materials for the immobilization of excess weapons Pu.
A survey of probabilistic methods used in reliability, risk and uncertainty analysis: Analytical techniques 1
This report provides an introduction to the various probabilistic methods developed roughly between 1956--1985 for performing reliability or probabilistic uncertainty analysis on complex systems. This exposition does not include the traditional reliability methods (e.g., parallel-series systems) that can be found in the many reliability texts and reference materials. Rather, the report centers on the relatively new, and certainly less well known across the engineering community, analytical techniques. Discussion of the analytical methods has been broken into two reports. This particular report is limited to those methods developed between 1956--1985. While a bit dated, the methods described in the later portions of this report still dominate the literature and provide a necessary technical foundation for more current research. A second report (Analytical Techniques 2) addresses methods developed since 1985. The flow of this report roughly follows the historical development of the various methods, so each new technique builds on the discussion of the strengths and weaknesses of previous techniques. To facilitate understanding of the various methods discussed, a simple 2-dimensional problem is used throughout the report. The problem is used for discussion purposes only; conclusions regarding the applicability and efficiency of particular methods are based on secondary analyses and a number of years of experience by the author. This document should be considered a living document in the sense that as new methods or variations of existing methods are developed, the document and references will be updated to reflect the current state of the literature as much as possible. For those scientists and engineers already familiar with these methods, the discussion will at times seem rather obvious. However, the goal of this effort is to provide a common basis for future discussions and, as such, it will hopefully be useful to those more intimate with probabilistic analysis and design techniques. There are clearly alternative methods of dealing with uncertainty (e.g., fuzzy set theory, possibility theory), but this discussion is limited to those methods based on probability theory.
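The report's own 2-dimensional example is not reproduced here; purely as an illustration of the kind of probabilistic calculation these analytical methods approximate, the sketch below estimates the failure probability of a hypothetical two-variable limit state by crude Monte Carlo sampling (the limit-state function and input distributions are made up).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D limit-state function (not the report's example):
# failure occurs when g(x1, x2) <= 0.
def g(x1, x2):
    return 3.0 - x1 - 0.5 * x2**2

# Both inputs taken as independent standard normals, purely for illustration.
n = 200_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

pf = np.mean(g(x1, x2) <= 0.0)            # crude Monte Carlo estimate
se = np.sqrt(pf * (1.0 - pf) / n)         # its standard error
print(f"estimated failure probability: {pf:.4f} +/- {se:.4f}")
```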
Risk-based system refinement
When designing a high consequence system, considerable care should be taken to ensure that the system can not easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
A graph-based system for network-vulnerability analysis
This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
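As a sketch of the final step described above, a most-probable attack path can be found by converting edge success probabilities to additive -log(p) weights and running an ordinary shortest-path algorithm; the graph, node names, and probabilities below are hypothetical, not the tool's actual data model.

```python
import heapq, math

# Hypothetical attack graph: nodes are attacker states, edge weights are
# success probabilities; -log(p) makes the shortest path the most probable one.
edges = {
    "outside":      [("web_server", 0.6), ("vpn_gateway", 0.2)],
    "web_server":   [("db_server", 0.5)],
    "vpn_gateway":  [("db_server", 0.9)],
    "db_server":    [("domain_admin", 0.3)],
    "domain_admin": [],
}

def most_probable_path(start, goal):
    """Dijkstra over -log(probability) edge weights."""
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return math.exp(-cost), path          # overall success probability
        if cost > best.get(node, float("inf")):
            continue                              # stale heap entry
        for nxt, p in edges[node]:
            c = cost - math.log(p)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(queue, (c, nxt, path + [nxt]))
    return 0.0, []

print(most_probable_path("outside", "domain_admin"))
```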
Hydride transport vessel vibration and shock test report
Sandia National Laboratories performed vibration and shock testing on a Savannah River Hydride Transport Vessel (HTV) which is used for bulk shipments of tritium. This testing is required to qualify the HTV for transport in the H1616 shipping container. The main requirement for shipment in the H1616 is that the contents (in this case the HTV) have a tritium leak rate of less than 1x10{sup {minus}7} cc/sec after being subjected to shock and vibration normally incident to transport. Helium leak tests performed before and after the vibration and shock testing showed that the HTV remained leaktight under the specified conditions. This report documents the tests performed and the test results.
Experimental results and modeling of a dynamic hohlraum on SATURN
Experiments were performed at SATURN, a high current z-pinch, to explore the feasibility of creating a hohlraum by imploding a tungsten wire array onto a low-density foam. Emission measurements in the 200--280 eV energy band were consistent with a 110--135 eV Planckian before the target shock heated, or stagnated, on-axis. Peak pinch radiation temperatures of nominally 160 eV were obtained. Measured early time x-ray emission histories and temperature estimates agree well with modeled performance in the 200--280 eV band using a 2D radiation magneto-hydrodynamics code. However, significant differences are observed in comparisons of the x-ray images and 2D simulations.
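As a rough illustration of how a band-limited emission measurement relates to a Planckian temperature (the grid, normalization, and temperatures below are illustrative only, not the unfold used in the paper), the fraction of blackbody emission falling in the 200--280 eV band can be evaluated from the Planck spectral shape:

```python
import numpy as np

def planck_band_fraction(T_eV, e_lo=200.0, e_hi=280.0):
    """Fraction of total blackbody emission between e_lo and e_hi (photon energies in eV).

    Uses the Planck spectral shape B(E) ~ E**3 / (exp(E/T) - 1) on a uniform
    energy grid; the overall constant and the grid spacing cancel in the ratio.
    """
    e = np.linspace(0.01, 20.0 * T_eV, 20000)      # uniform grid out to ~20 kT
    b = e**3 / np.expm1(e / T_eV)
    in_band = (e >= e_lo) & (e <= e_hi)
    return b[in_band].sum() / b.sum()

for T in (110.0, 135.0, 160.0):
    print(f"T = {T:5.1f} eV -> fraction emitted in 200--280 eV band: {planck_band_fraction(T):.3f}")
```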
An overview of reliability assessment and control for design of civil engineering structures
Field Jr., R.V.; Grigoriadis, K.M.; Bergman, L.A.; Skelton, R.E.
Random variations, whether they occur in the input signal or the system parameters, are phenomena that occur in nearly all engineering systems of interest. As a result, nondeterministic modeling techniques must somehow account for these variations to ensure validity of the solution. As might be expected, this is a difficult proposition and the focus of many current research efforts. Controlling seismically excited structures is one pertinent application of nondeterministic analysis and is the subject of the work presented herein. This overview paper is organized into two sections. First, techniques to assess system reliability, in a context familiar to civil engineers, are discussed. Second, and as a consequence of the first, active control methods that ensure good performance in this random environment are presented. It is the hope of the authors that these discussions will ignite further interest in the area of reliability assessment and design of controlled civil engineering structures.
New method for predicting lifetime of seals from compression-stress relaxation experiments
Interpretation of compression stress-relaxation (CSR) experiments for elastomers in air is complicated by (1) the presence of both physical and chemical relaxation and (2) anomalous diffusion-limited oxidation (DLO) effects. For a butyl material, the authors first use shear relaxation data to indicate that physical relaxation effects are negligible during typical high temperature CSR experiments. They then show that experiments on standard CSR samples ({approximately}15 mm diameter when compressed) lead to complex non-Arrhenius behavior. By combining reaction kinetics based on the historic basic autoxidation scheme with a diffusion equation appropriate to disk-shaped samples, they derive a theoretical DLO model appropriate to CSR experiments. Using oxygen consumption and permeation rate measurements, the theory shows that important DLO effects are responsible for the observed non-Arrhenius behavior. To minimize DLO effects, they introduce a new CSR methodology based on the use of numerous small disk samples strained in parallel. Results from these parallel, minidisk experiments lead to Arrhenius behavior with an activation energy consistent with values commonly observed for elastomers, allowing more confident extrapolated predictions. In addition, excellent correlation is noted between the CSR force decay and the oxygen consumption rate, consistent with the expectation that oxidative scission processes dominate the CSR results.
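The extrapolation step relies on Arrhenius behavior; as a sketch (the activation energy and temperatures below are illustrative, not the butyl data from the paper), the time-acceleration factor between an aging temperature and a service temperature can be computed as follows.

```python
import math

def arrhenius_acceleration(Ea_kJ_per_mol, T_aging_C, T_service_C):
    """Time-acceleration factor between an accelerated-aging temperature and a
    service temperature, assuming simple Arrhenius behavior (illustrative only).
    """
    R = 8.314e-3                       # gas constant, kJ/(mol K)
    Ta = T_aging_C + 273.15
    Ts = T_service_C + 273.15
    return math.exp((Ea_kJ_per_mol / R) * (1.0 / Ts - 1.0 / Ta))

# e.g. a 90 kJ/mol process aged at 125 C and extrapolated to 25 C service:
print(f"acceleration factor: {arrhenius_acceleration(90.0, 125.0, 25.0):.0f}")
```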
Optical assembly of a visible through thermal infrared multispectral imaging system
The Optical Assembly (OA) for the Multispectral Thermal Imager (MTI) program has been fabricated, assembled, and successfully performance tested. It represents a major milestone toward completion of this earth-observing E-O imaging sensor, which is to be operated in low earth orbit. Along with its wide field of view (WFOV), 1.82{degree} along-track and 1.38{degree} cross-track, and comprehensive on-board calibration system, the pushbroom imaging sensor employs a single mechanically cooled focal plane with 15 spectral bands covering a wavelength range from 0.45 to 10.7 {micro}m. The OA has an off-axis three-mirror anastigmatic (TMA) telescope with a 36-cm unobscured clear aperture. The two key performance criteria, 80% enpixeled energy in the visible and radiometric stability of 1% 1{sigma} in the visible/near-infrared (VNIR) and short wavelength infrared (SWIR), of 1.45% 1{sigma} in the medium wavelength infrared (MWIR), and of 0.53% 1{sigma} in the long wavelength infrared (LWIR), as well as its low weight (less than 49 kg) and volume constraint (89 cm x 44 cm x 127 cm), drive the overall design configuration of the OA and the fabrication requirements.
Modular redundant number systems
With the increased use of public key cryptography, faster modular multiplication has become an important cryptographic issue. Almost all public key cryptography, including most elliptic curve systems, uses modular multiplication. Modular multiplication, particularly for the large public key moduli, is very slow. Increasing the speed of modular multiplication is almost synonymous with increasing the speed of public key cryptography. There are two parts to modular multiplication: multiplication and modular reduction. Though there are fast methods for multiplying and fast methods for doing modular reduction, they do not mix well. Most fast techniques require integers to be in a special form. These special forms are not related, and converting from one form to another is more costly than using the standard techniques. To date it has been better to use the fast modular reduction technique coupled with standard multiplication. Standard modular reduction is much more costly than standard multiplication. Fast modular reduction (Montgomery`s method) reduces the reduction cost to approximately that of a standard multiply. Of the fast multiplication techniques, the redundant number system technique (RNS) is one of the most popular. It is simple, converting a large convolution (multiply) into many smaller independent ones. Not only do redundant number systems increase speed, but the independent parts allow for parallelization. RNS form implies working modulo another constant. Depending on the relationship between these two constants, reduction or division may be possible, but not both. This paper describes a new technique using ideas from both Montgomery`s method and RNS. It avoids the form-conversion problem and allows fast reduction and multiplication. Since RNS form is used throughout, it also allows the entire process to be parallelized.
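For reference, the sketch below shows textbook Montgomery reduction, the fast modular reduction the paper builds on; it is not the paper's new combined Montgomery/RNS technique, and the small modulus is purely illustrative.

```python
def montgomery_setup(n, r_bits):
    """Precompute constants for Montgomery reduction modulo odd n, with R = 2**r_bits."""
    r = 1 << r_bits
    n_prime = -pow(n, -1, r) % r        # n * n_prime = -1 (mod R)
    return r, n_prime

def montgomery_reduce(t, n, r_bits, n_prime):
    """Return t * R^{-1} mod n for 0 <= t < n*R, using only shifts and masks for R."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask
    u = (t + m * n) >> r_bits           # exact division by R
    return u - n if u >= n else u

# Example: compute (a * b) mod n via the Montgomery domain.
n = 101                                  # small odd modulus for illustration
r_bits = 8                               # R = 256 > n
r, n_prime = montgomery_setup(n, r_bits)
a, b = 57, 88
a_bar = (a * r) % n                      # map operands into Montgomery form
b_bar = (b * r) % n
prod_bar = montgomery_reduce(a_bar * b_bar, n, r_bits, n_prime)
result = montgomery_reduce(prod_bar, n, r_bits, n_prime)   # map back out
assert result == (a * b) % n
print(result)
```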
Dirac II series in 800 T fields: Reflectivity measurements on low-dimensional, low electron density materials
Physica B: Condensed Matter
We report reflectivity measurements at 810 nm wavelength on GaAs/GaAlAs multiple quantum wells and NbSe2 layers at 75 K up to magnetic fields of 800 T. In the GaAs system, we observed in two separate measurements new, reproducible oscillatory phenomena in the reflectivity between 30 and 800 T, and in a third measurement on 2H-NbSe2 we observed a decrease in reflectivity of about 50% above 200 T, with some additional evidence for oscillatory behavior. We discuss these measurements based on the expected behavior in terms of their known physical properties, and consider future prospects for the application of optical methods to study condensed matter physics under these extremes. © 1998 Elsevier Science B.V. All rights reserved.
Perforation of HY-100 steel plates with 4340 R{sub c} 38 and T-250 maraging steel rod projectiles
The authors conducted perforation experiments with 4340 Rc 38 and T-250 maraging steel long rod projectiles and HY-100 steel target plates at striking velocities between 80 and 370 m/s. Flat-end rod projectiles with lengths of 89 and 282 mm were machined to nominally 30-mm diameter so they could be launched from a 30-mm powder gun without sabots. The target plates were rigidly clamped at a 305-mm diameter and had nominal thicknesses of 5.3 and 10.5 mm. Four sets of experiments were conducted to show the effects of rod length and plate thickness on the measured ballistic limit and residual velocities. In addition to measuring striking and residual projectile velocities, they obtained framing camera data on the back surfaces of several plates that showed clearly the plate deformation and plug ejection process. They also present a beam model that exhibits qualitatively the experimentally observed mechanisms.
Analysis of the Rotopod: An all revolute parallel manipulator
This paper introduces a new configuration of parallel manipulator called the Rotopod, which is constructed entirely from revolute joints. The Rotopod consists of two platforms connected by six legs and exhibits six Cartesian degrees of freedom. The Rotopod is first compared with other all-revolute-joint parallel manipulators to show its similarities and differences. The inverse kinematics for this mechanism are developed and used to analyze the accessible workspace of the mechanism. Optimization is performed to determine the Rotopod design configurations that maximize the accessible workspace subject to desirable functional constraints.
Assembly planning at the micro scale
This paper investigates a new aspect of fine motion planning for the micro domain. As parts approach 1--10 {micro}m or less in outside dimensions, interactive forces such as van der Waals and electrostatic forces become major factors that greatly change the assembly sequence and path plans. It has been experimentally shown that assembly plans in the micro domain are not reversible: the motions required to pick up a part are not the reverse of the motions required to release a part. This paper develops the mathematics required to determine the goal regions for pick up, holding, and release of a micro-sphere being handled by a rectangular tool.
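For scale, the standard Hamaker sphere-plane approximation (a textbook result, not the paper's own derivation) gives the van der Waals attraction between a sphere of radius R and a flat tool or substrate at separation D as

```latex
F_{\mathrm{vdW}} \approx \frac{A\,R}{6\,D^{2}}
```

where A is the Hamaker constant, typically on the order of 10{sup {minus}19} J for common material pairs. For a sphere a few micrometers in radius at nanometer-scale separations, this attraction can exceed the part's weight by orders of magnitude, which is why release motions cannot simply be the reverse of pick-up motions.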
Creating virtual humans for simulation-based training and planning
Stansfield, S.
Sandia National Laboratories has developed a distributed, high fidelity simulation system for training and planning small-team operations. The system provides an immersive environment populated by virtual objects and humans capable of displaying complex behaviors. The work has focused on developing the behaviors required to carry out complex tasks and decision making under stress. Central to this work are techniques for creating behaviors for virtual humans and for dynamically assigning behaviors to computer generated forces (CGF) to allow scenarios without fixed outcomes. Two prototype systems have been developed that illustrate these capabilities: MediSim, a trainer for battlefield medics, and VRaptor, a system for planning, rehearsing and training assault operations.
Modeling fires in adjacent ship compartments with computational fluid dynamics
This paper presents an analysis of the thermal effects on radioactive material (RAM) transportation packages of a fire in an adjacent ship compartment. The analysis assumes that the adjacent hold fire is some sort of engine room fire. Computational fluid dynamics (CFD) analysis tools were used to perform the analysis in order to include convective heat transfer effects. The analysis results were compared to experimental data gathered in a series of tests on the US Coast Guard ship Mayo Lykes located at Mobile, Alabama.
Agile manufacturing prototyping system (AMPS)
Garcia, P.
The Agile Manufacturing Prototyping System (AMPS) is being integrated at Sandia National Laboratories. AMPS consists of state of the industry flexible manufacturing hardware and software enhanced with Sandia advancements in sensor and model based control; automated programming, assembly and task planning; flexible fixturing; and automated reconfiguration technology. AMPS is focused on the agile production of complex electromechanical parts. It currently includes 7 robots (4 Adept One, 2 Adept 505, 1 Staubli RX90), conveyance equipment, and a collection of process equipment to form a flexible production line capable of assembling a wide range of electromechanical products. This system became operational in September 1995. Additional smart manufacturing processes will be integrated in the future. An automated spray cleaning workcell capable of handling alcohol and similar solvents was added in 1996 as well as parts cleaning and encapsulation equipment, automated deburring, and automated vision inspection stations. Plans for 1997 and out years include adding manufacturing processes for the rapid prototyping of electronic components such as soldering, paste dispensing and pick-and-place hardware.
Rapid small lot manufacturing
The direct connection of information, captured in forms such as CAD databases, to the factory floor is enabling a revolution in manufacturing. Rapid response to very dynamic market conditions is becoming the norm rather than the exception. In order to provide economical rapid fabrication of small numbers of variable products, one must design with manufacturing constraints in mind. In addition, flexible manufacturing systems must be programmed automatically to reduce the time for product change over in the factory and eliminate human errors. Sensor based machine control is needed to adapt idealized, model based machine programs to uncontrolled variables such as the condition of raw materials and fabrication tolerances.
Manufacturing in the world of Internet collaboration
The Internet and the applications it supports are revolutionizing the way people work together. This paper presents four case studies in engineering collaboration that new Internet technologies have made possible. These cases include assembly design and analysis, simulation, intelligent machine system control, and systems integration. From these cases, general themes emerge that can guide the way people will work together in the coming decade.
Proactive DSA application and implementation
Data authentication as provided by digital signatures is a well known technique for verifying data sent via untrusted network links. Recent work has extended digital signatures to allow jointly generated signatures using threshold techniques. In addition, new proactive mechanisms have been developed to protect the joint private key over long periods of time and to allow each of the parties involved to verify the actions of the other parties. In this paper, the authors describe an application in which proactive digital signature techniques are a particularly valuable tool. They describe the proactive DSA protocol and discuss the underlying software tools that they found valuable in developing an implementation. Finally, the authors briefly describe the protocol and note difficulties they experienced and continue to experience in implementing this complex cryptographic protocol.
A graph-based network-vulnerability analysis system
This paper presents a graph based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level of effort for the attacker, various graph algorithms such as shortest path algorithms can identify the attack paths with the highest probability of success.
Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed
This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a {open_quotes}best estimate{close_quotes} of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.
xdamp Version 3: An IDL{reg_sign}-based data and image manipulation program
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA{trademark} (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix{reg_sign}-based workstations, a replacement was needed. This package uses the IDL{reg_sign} software, available from Research Systems Incorporated in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of platforms, including IBM{reg_sign} workstations, Hewlett Packard workstations, SUN{reg_sign} workstations, Microsoft{reg_sign} Windows{trademark} computers, Macintosh{reg_sign} computers, and Digital Equipment Corporation VMS{reg_sign} and Alpha{reg_sign} systems. Thus, xdamp is portable across many platforms. The author has verified operation, albeit with some minor IDL bugs, on personal computers using Windows 95 and Windows NT; IBM Unix platforms; DEC Alpha and VMS systems; HP 9000/700 series workstations; and Macintosh computers, both regular and PowerPC{trademark} versions. Version 3 adds the capability to manipulate images to the original xdamp capabilities.
GaAs-based JFET and PHEMT technologies for ultra-low-power microwave circuits operating at frequencies up to 2.4 GHz
In this work the authors report results for narrowband amplifiers designed for milliwatt and submilliwatt power consumption using GaAs-based JFET and pseudomorphic high electron mobility transistor (PHEMT) technologies. Enhancement-mode JFETs were used to design both a hybrid amplifier with off-chip matching and a monolithic microwave integrated circuit (MMIC) with on-chip matching. The hybrid amplifier achieved 8--10 dB of gain at 2.4 GHz and 1 mW. The MMIC achieved 10 dB of gain at 2.4 GHz and 2 mW. Submilliwatt circuits were also explored using 0.25 {micro}m PHEMTs: power levels of 25 {micro}W were achieved with 5 dB of gain for a 215 MHz hybrid amplifier. These results significantly reduce the achievable power consumption levels relative to prior MESFET, heterostructure field effect transistor (HFET), or Si bipolar results from other laboratories.
Size dependence of selectively oxidized VCSEL transverse mode structure
The performance of vertical cavity surface emitting lasers (VCSELs) has improved greatly in recent years. Much of this improvement can be attributed to the use of native oxide layers within the laser structure, providing both electrical and optical transverse confinement. Understanding this optical confinement will be vital for the future realization of yet smaller lasers with ultralow threshold currents. Here the authors report the spectral and modal properties of small (0.5 {micro}m to 5 {micro}m current aperture) VCSELs and identify Joule heating as a dominant effect in the resonator properties of the smallest lasers.
In situ reflectance and virtual interface analysis for compound semiconductor process control
The authors review the use of in-situ normal incidence reflectance, combined with a virtual interface model, to monitor and control the growth of complex compound semiconductor devices. The technique is being used routinely on both commercial and research metal-organic chemical vapor deposition (MOCVD) reactors and in molecular beam epitaxy (MBE) to measure growth rates and high temperature optical constants of compound semiconductor alloys. The virtual interface approach allows one to extract the calibration information in an automated way without having to estimate the thickness or optical constants of the alloy, and without having to model underlying thin film layers. The method has been used in a variety of data analysis applications collectively referred to as ADVISOR (Analysis of Deposition using Virtual Interfaces and Spectroscopic Optical Reflectance). This very simple and robust monitor, together with the ADVISOR method, provides the equivalent of a real-time reflection high-energy electron diffraction (RHEED) tool for both MBE and MOCVD applications.
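For context, the familiar interference relation that underlies reflectance-based rate extraction (the virtual interface algorithm described in the paper is more general, extracting rates without requiring full oscillations or knowledge of underlying layers) connects the period {tau} of the normal-incidence reflectance oscillations to the growth rate g through the film refractive index n at the monitoring wavelength {lambda} and growth temperature T:

```latex
g = \frac{\lambda}{2\, n(\lambda, T)\, \tau}
```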
Automated spray cleaning using flammable solvents in a glovebox
Garcia, P.; Meirans, L.
The phase-out of the ozone-depleting solvents has forced industry to look to solvents such as alcohol, terpenes and other flammable solvents to perform the critical cleaning processes. These solvents are not as efficient as the ozone-depleting solvents in terms of soil loading, cleaning time and drying when used in standard cleaning processes such as manual sprays or ultrasonic baths. They also require special equipment designs to meet part cleaning specifications and operator safety requirements. This paper describes a cleaning system that incorporates the automated spraying of flammable solvents to effectively perform precision cleaning processes. Key to the project`s success was the development of software that controls the robotic system and automatically generates robotic cleaning paths from three dimensional CAD models of the items to be cleaned.
Deep high-aspect ratio Si etching for advanced packaging technologies
Deep high-aspect-ratio Si etching (HARSE) has shown potential application for passive self-alignment of dissimilar materials and devices on Si carriers or waferboards. The Si can be etched to specific depths and lateral dimensions to accurately place or locate discrete components (e.g., lasers, photodetectors, and fiber optics) on a Si carrier. It is critical to develop processes which maintain the dimensions of the mask, yield highly anisotropic profiles for deep features, and maintain the anisotropy at the base of the etched feature. In this paper the authors report process conditions for HARSE which yield etch rates exceeding 3 {micro}m/min and well-controlled, highly anisotropic etch profiles. Examples of potential application to advanced packaging technologies are also shown.
A multi-agent system for coordinating international shipping
Moving commercial cargo across the US-Mexico border is currently a complex, paper-based, error-prone process that incurs expensive inspections and delays at several ports of entry in the Southwestern US. Improved information handling will dramatically reduce border dwell time, variation in delivery time, and inventories, and will give better control of the shipment process. The Border Trade Facilitation System (BTFS) is an agent-based collaborative work environment that assists geographically distributed commercial and government users with transshipment of goods across the US-Mexico border. Software agents mediate the creation, validation, and secure sharing of shipment information and regulatory documentation over the Internet, using the World Wide Web to interface with human actors. Agents are organized into agencies, each of which represents a commercial or government organization. Agents perform four specific functions on behalf of their user organizations: (1) agents with domain knowledge elicit commercial and regulatory information from human specialists through forms presented via web browsers; (2) agents mediate information from forms with diverse ontologies, copying invariant data from one form to another and thereby eliminating the need for duplicate data entry; (3) cohorts of distributed agents coordinate the work flow among the various information providers and monitor overall progress of the documentation and the location of the shipment to ensure that all regulatory requirements are met prior to arrival at the border; (4) agents provide status information to human actors and attempt to influence them when problems are predicted.