Z, ZX, and X-1: A Realistic Path to High Fusion Yield
Z-pinches now constitute the most energetic and powerful sources of x-rays available, by a large margin. The Z accelerator at Sandia National Laboratories has produced 1.8 MJ of x-ray energy, 280 TW of power, and hohlraum temperatures of 200 eV. These advances are being applied to inertial confinement fusion (ICF) experiments on Z. The requirements for high fusion yield are exemplified in the target to be driven by the X-1 accelerator. X-1 will drive two z-pinches, each producing 7 MJ of x-ray energy and about 1000 TW of x-ray power. Together, these radiation sources will heat a hohlraum containing the 4-mm-diameter ICF capsule to a temperature exceeding 225 eV for about 10 ns, with the pulse shape required to drive the capsule to high fusion yield, in the range of 200--1000 MJ. Since X-1 consists of two identical accelerators, the technical risk of high yield can be mitigated by first constructing a single accelerator. This accelerator, ZX, will bridge the gap from Z to X-1 by driving an integrated target experiment with a very efficient energy source. ZX will also provide experimental confirmation that the full specifications of the X-1 accelerator for high yield are achievable and that a realistic path to high fusion yield exists.
Recycling of Advanced Batteries for Electric Vehicles
The pace of development and fielding of electric vehicles (EVs) is briefly described, and the principal advanced battery chemistries expected to be used in the EV application are identified as Ni/MH in the near term and Li-ion/Li-polymer in the intermediate to long term. The status of recycling process development is reviewed for each of the two chemistries, and future research needs are discussed.
Observations of Non-Close-Packed Arrangements in Multilayers of Passivated Gold Clusters
European Journal of Physics
The stacking of second and third layers of supercrystals of self-assembled passivated gold nanoparticles has been investigated using transmission electron microscopy. We report for the first time nanoparticles occupying the twofold saddle site in the third layer.
Brittle-Ductile Relaxation Kinetics of Strained AlGaN/GaN
Applied Physics Letters
Hearne, Sean J.; Han, J.; Lee, Stephen R.; Floro, Jerrold A.; Follstaedt, David M.
The authors have directly measured the stress evolution during metal organic chemical vapor deposition of AlGaN/GaN heterostructures on sapphire. In situ stress measurements were correlated with ex situ microstructural analysis to directly determine a critical thickness for cracking and the subsequent relaxation kinetics of tensile-strained AlxGa1-xN on GaN. Cracks appear to initiate the formation of misfit dislocations at the AlGaN/GaN interface, which account for the majority of the strain relaxation.
Laser Assisted Plasma Arc Welding
Experiments have been performed using a coaxial end-effector to combine a focused laser beam and a plasma arc. The device employs a hollow tungsten electrode, a focusing lens, and conventional plasma arc torch nozzles to co-locate the focused beam and arc on the workpiece. Plasma arc nozzles were selected to protect the electrode from laser-generated metal vapor. The project goal is to develop an improved fusion welding process that exhibits both absorption robustness and deep penetration for small-scale (< 1.5 mm thickness) applications. On aluminum alloys 6061 and 6111, the hybrid process has been shown to eliminate hot cracking in the fusion zone. Fusion zone dimensions for both stainless steel and aluminum were found to be wider than characteristic laser welds and deeper than characteristic plasma arc welds.
The Chemical Exhaust Hazards of Dichlorosilane Deposits Determined with FT-ICR Mass Spectrometry
IEEE Transactions on Semiconductor Manufacturing
Jarek, Russell L.; Thornberg, Steve M.
Flammable deposits have been analyzed from the exhaust systems of tools employing dichlorosilane (DCS) as a processing gas. Exact mass determinations with a high-resolution Fourier-transform ion-cyclotron resonance (FT-ICR) mass spectrometer allowed the identification of various polysiloxane species present in such an exhaust flow. Ion-molecule reactions indicate the preferred reaction pathway of siloxane formation is through HCl loss, leading to the highly reactive polysiloxane that was detected in the flammable deposits.
Utility Test Results of a 2-Megawatt, 10-Second Reserve-Power System
This report documents the 1996 evaluation by Pacific Gas and Electric Company of an advanced reserve-power system capable of supporting 2 MW of load for 10 seconds. The system, developed under a DOE Cooperative Agreement with AC Battery Corporation of East Troy, Wisconsin, contains battery storage that enables industrial facilities to "ride through" momentary outages. The evaluation consisted of tests of system performance using a wide variety of load types and operating conditions. The tests, which included simulated utility outages and voltage sags, demonstrated that the system could provide continuous power during utility outages and other disturbances and that it was compatible with a variety of load types found at industrial customer sites.
Slimhole Handbook: Procedures and Recommendations for Slimhole Drilling and Testing in Geothermal Exploration
Finger, John T.; Jacobson, Ronald D.; Hickox Jr., Charles E.
Abstract not provided.
Concepts and Strategies for Transparency Monitoring of Nuclear Materials at the Back End of the Fuel/Weapons Cycle
Representatives of the Department of Energy, the national laboratories, the Waste Isolation Pilot Plant (WIPP), and others gathered to initiate the development of broad-based concepts and strategies for transparency monitoring of nuclear materials at the back end of the fuel/weapons cycle, including both geologic disposal and monitored retrievable storage. The workshop focused on two key questions: "Why should we monitor?" and "What should we monitor?" These questions were addressed by identifying the range of potential stakeholders, the concerns that stakeholders may have, and the information needed to address those concerns. The group constructed a strategic framework for repository transparency implementation, organized around the issues of safety (both operational and environmental), diversion (assuring legitimate use and security), and viability (both political and economic). Potential concerns of the international community were recognized as the possibility of material diversion, the multinational impacts of potential radionuclide releases, and public and political perceptions of unsafe repositories. The workshop participants also identified potential roles that the WIPP may play as a monitoring technology development and demonstration test-bed facility. Concepts for WIPP's potential test-bed role include serving as (1) an international monitoring technology development and testing facility, (2) an international demonstration facility, and (3) an education and technology exchange center on repository transparency technologies.
Expanding the Security Dimension of Surety
A small effort was conducted at Sandia National Laboratories to explore the use of a number of modern analytic technologies in the assessment of terrorist actions and to predict trends. This work focuses on Bayesian networks as a means of capturing correlations between groups, tactics, and targets. The data used to test the methodology were obtained with a special parsing algorithm, written in Java, that creates database records from information articles captured electronically. As a vulnerability assessment technique, the approach proved very useful. The technology also proved to be a valuable development medium because blocks of information can be integrated into an already deployed network, rather than deferring deployment until all relevant information has been assembled.
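As a hedged illustration of the kind of discrete Bayesian network described above (not the deployed Sandia network; the groups, tactics, targets, and probabilities below are hypothetical), the following minimal sketch builds a three-node chain group -> tactic -> target and queries the distribution over targets given an observed group.

    import numpy as np

    # Hypothetical discrete Bayesian network: group -> tactic -> target.
    # All states and conditional probability tables (CPTs) are illustrative only.
    groups  = ["G1", "G2"]
    tactics = ["bombing", "cyber"]
    targets = ["infrastructure", "personnel"]

    p_group = np.array([0.6, 0.4])                 # P(group)
    p_tactic_given_group = np.array([[0.7, 0.3],   # P(tactic | G1)
                                     [0.2, 0.8]])  # P(tactic | G2)
    p_target_given_tactic = np.array([[0.8, 0.2],  # P(target | bombing)
                                      [0.4, 0.6]]) # P(target | cyber)

    def posterior_target_given_group(g):
        """P(target | group=g), obtained by summing the joint over tactics."""
        gi = groups.index(g)
        # P(target | group) = sum_t P(target | tactic=t) * P(tactic=t | group)
        return p_tactic_given_group[gi] @ p_target_given_tactic

    for g in groups:
        post = posterior_target_given_group(g)
        print(g, dict(zip(targets, np.round(post, 3))))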
Functional Requirements for SIERRA Version 1.0 Beta
Taylor, Lee M.; Edwards, Harold C.; Stewart, James
The objective of the SIERRA framework is to provide a common software infrastructure for massively parallel computational mechanics applications. The SIERRA framework consolidates the mechanics-independent computational services required by a diverse set of mechanics applications into a shared framework. Consolidation of these computational services eliminates their redundant development and maintenance efforts and streamlines the coupling of independently developed computational mechanics capabilities into integrated multi-mechanics applications.
Reliability Impact of Stockpile Aging: Stress Voiding
The objective of this research is to statistically characterize the aging of integrated circuit interconnects. This report supersedes the stress-void aging characterization presented in SAND99-0975, "Reliability Degradation Due to Stockpile Aging," by the same author. The physics of stress voiding before and after wafer processing has recently been characterized by F. G. Yost in SAND99-0601, "Stress Voiding during Wafer Processing." The current effort extends this research to account for uncertainties in grain size, storage temperature, void spacing, and initial residual stress, and their impact on interconnect failure after wafer processing. The sensitivity of the life estimates to these uncertainties is also investigated. Various methods for characterizing the probability of failure of a conductor line were investigated, including Latin hypercube sampling (LHS) and quasi-Monte Carlo (qMC) sampling, as well as analytical methods such as the advanced mean value (AMV) method. The comparison was aided by the use of the Cassandra uncertainty analysis library. It was found that the only viable uncertainty analysis methods were those based on either LHS or quasi-Monte Carlo sampling. Analytical methods such as AMV could not be applied due to the nature of the stress voiding problem. The qMC method was chosen since it provided a smaller estimation error for a given number of samples. The preliminary results indicate that the reliability of integrated circuits with respect to stress voiding is very sensitive to the underlying uncertainties associated with grain size and void spacing. In particular, accurate characterization of IC reliability depends heavily not only on the first and second moments of the uncertainty distribution, but more specifically on the form of the underlying distribution.
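The sampling comparison described above can be sketched in a few lines. The limit-state function and distributions below are hypothetical stand-ins, not the stress-voiding model from the report; the sketch only shows how Latin hypercube and quasi-Monte Carlo (Sobol) samples can both be used to estimate a probability of failure.

    import numpy as np
    from scipy.stats import qmc, norm

    def limit_state(u):
        """Hypothetical limit state in standard-normal space; g < 0 means failure."""
        # Stand-in for a stress-voiding failure criterion (grain size, void
        # spacing, temperature, and residual stress would map to these variables).
        return 3.0 - u[:, 0] - 0.5 * u[:, 1] + 0.25 * u[:, 2] * u[:, 3]

    def failure_probability(unit_samples):
        u = norm.ppf(unit_samples)        # map [0,1)^d samples to N(0,1) space
        return np.mean(limit_state(u) < 0.0)

    d, n = 4, 4096                        # n is a power of 2 for the Sobol sequence
    lhs   = qmc.LatinHypercube(d=d, seed=1).random(n)
    sobol = qmc.Sobol(d=d, scramble=True, seed=1).random(n)

    print("LHS estimate  :", failure_probability(lhs))
    print("Sobol estimate:", failure_probability(sobol))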
A Statistical Description of the Types and Severities of Accidents Involving Tractor Semi-Trailers, Updated Results for 1992-1996
This report provides a statistical description of the types and severities of tractor semi-trailer accidents involving at least one fatality. The data were developed for use in risk assessments of hazardous materials transportation. A previous study (SAND93-2580) reviewed the availability of accident data, identified the TIFA (Trucks Involved in Fatal Accidents) database as the best source of accident data for accidents involving heavy trucks, and provided statistics on accident data collected between 1980 and 1990. The current study is an extension of the previous work and describes data collected for heavy truck accidents occurring between 1992 and 1996. The TIFA database, created at the University of Michigan Transportation Research Institute, was used extensively. Supplementary data on collision and fire severity, which were not available in the TIFA database, were obtained by reviewing police reports and interviewing responders and witnesses for selected TIFA accidents. The results are described in terms of frequencies of different accident types and cumulative distribution functions for the peak contact velocity, rollover skid distance, effective fire temperature, fire size, fire separation, and fire duration.
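The cumulative distribution functions mentioned above are empirical CDFs built from the accident records. As a minimal sketch with made-up values (not TIFA data), one such curve can be constructed as follows.

    import numpy as np

    # Hypothetical peak-contact-velocity observations (km/h); not TIFA data.
    velocities = np.array([42.0, 55.0, 61.0, 48.0, 73.0, 88.0, 50.0, 67.0])

    x = np.sort(velocities)
    cdf = np.arange(1, len(x) + 1) / len(x)   # empirical CDF: P(V <= x)

    for xi, ci in zip(x, cdf):
        print(f"P(V <= {xi:5.1f} km/h) = {ci:.3f}")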
3-D Target Location from Stereoscopic SAR Images
SAR range-Doppler images are inherently two-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracy, thereby rivaling the best IFSAR capabilities in fine-resolution SAR images. All that is required are two distinct passes with suitably different geometries from a conventional SAR.
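A hedged sketch of the stereoscopic idea: under a simple flat-earth layover model (an assumption made here for illustration, not the paper's full geometry), a target at height h appears displaced in the ground plane toward the radar by roughly h times the tangent of the depression angle. With two passes of different geometry, the true ground position and height can be recovered by least squares.

    import numpy as np

    def layover_direction(depression_deg, bearing_deg):
        """Ground-plane layover displacement per unit height.
        Simplified flat-earth model used for illustration only."""
        d = np.radians(depression_deg)
        b = np.radians(bearing_deg)
        return np.tan(d) * np.array([np.cos(b), np.sin(b)])

    def solve_position_and_height(apparent_positions, geometries):
        """Least-squares solve p_i = x + h * u_i for ground position x and height h."""
        rows, rhs = [], []
        for p, (dep, bear) in zip(apparent_positions, geometries):
            u = layover_direction(dep, bear)
            # Two equations per pass: [I2 | u] * [x; h] = p
            rows.append(np.array([[1.0, 0.0, u[0]],
                                  [0.0, 1.0, u[1]]]))
            rhs.append(p)
        A = np.vstack(rows)
        b = np.hstack(rhs)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[:2], sol[2]           # ground position (m), height (m)

    # Hypothetical example: true target at (100, 50) m, 20 m tall.
    truth_xy, truth_h = np.array([100.0, 50.0]), 20.0
    geoms = [(30.0, 0.0), (45.0, 90.0)]          # (depression, bearing) per pass
    apparent = [truth_xy + truth_h * layover_direction(*g) for g in geoms]
    xy, h = solve_position_and_height(apparent, geoms)
    print("recovered ground position:", np.round(xy, 2), " height:", round(h, 2))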
Fast Solutions of Maxwell's Equation for High Resolution Electromagnetic Imaging of Transport Pathways
Newman, Gregory A.; Day, David M.
A fast preconditioning technique has been developed that accelerates the finite-difference solution of the 3-D Maxwell's equations for geophysical modeling. The technique splits the electric field into its curl-free and divergence-free projections and allows for the construction of an inverse operator. Test examples show an order-of-magnitude speedup compared with a simple Jacobi preconditioner. Using this preconditioner, a low-frequency Neumann series expansion is developed and used to compute responses at multiple frequencies very efficiently. Simulations requiring responses at multiple frequencies show that the Neumann series is faster than the preconditioned solution, which must compute solutions at each discrete frequency. A Neumann series expansion has also been developed in the high-frequency limit, along with spectral Lanczos methods in both the high- and low-frequency cases, for simulating multiple-frequency responses with maximum efficiency. The research described in this report was to have been carried out over a two-year period. Because of communication difficulties, the project was funded for the first year only. Thus, the contents of this report are incomplete with respect to the original project objectives.
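As a rough illustration of the Neumann-series idea only (a generic sketch, not the report's operator splitting or finite-difference Maxwell operator), a preconditioned system of the form (I - G) x = b with a contractive G can be expanded as x = b + G b + G^2 b + ... and truncated.

    import numpy as np

    def neumann_solve(G, b, terms=50):
        """Approximate the solution of (I - G) x = b by the truncated Neumann
        series x = b + G b + G^2 b + ...; valid when the spectral radius of G < 1."""
        x = np.zeros_like(b)
        term = b.copy()
        for _ in range(terms):
            x += term
            term = G @ term
        return x

    # Hypothetical small test operator with spectral radius < 1 (illustration only).
    rng = np.random.default_rng(0)
    G = 0.4 * rng.standard_normal((50, 50)) / np.sqrt(50)
    b = rng.standard_normal(50)

    x_series = neumann_solve(G, b, terms=100)
    x_direct = np.linalg.solve(np.eye(50) - G, b)
    print("max error vs direct solve:", np.abs(x_series - x_direct).max())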
Space-Variant Post-Filtering for Wavefront Curvature Correction in Polar-Formatted Spotlight-Mode SAR Imagery
Wavefront curvature defocus effects occur in spotlight-mode SAR imagery when reconstructed via the well-known polar-formatting algorithm (PFA) under certain imaging scenarios. These include imaging at close range, using a very low radar center frequency, utilizing high resolution, and/or imaging very large scenes. Wavefront curvature effects arise from the unrealistic assumption of strictly planar wavefronts illuminating the imaged scene. This dissertation presents a method for the correction of wavefront curvature defocus effects under these scenarios, concentrating on the generalized squint-mode imaging scenario and its computational aspects. This correction is accomplished through an efficient one-dimensional, image-domain filter applied as a post-processing step to PFA. This post-filter, referred to as the space-variant post-filter (SVPF), is precalculated from a theoretical derivation of the wavefront curvature effect and varies as a function of scene location. Prior to SVPF, severe restrictions were placed on the imaged scene size in order to avoid defocus effects under these scenarios when using PFA. The SVPF algorithm eliminates the need for scene size restrictions when wavefront curvature effects are present, correcting for wavefront curvature in broadside as well as squinted collection modes while imposing little additional computational penalty for squinted images. This dissertation covers the theoretical development, implementation, and analysis of the generalized, squint-mode SVPF algorithm (of which broadside mode is a special case) and provides examples of its capabilities and limitations, as well as offering guidelines for maximizing its computational efficiency. Tradeoffs between the PFA/SVPF combination and other spotlight-mode SAR image formation techniques are discussed with regard to computational burden, image quality, and imaging geometry constraints. It is demonstrated that other methods fail to exhibit a clear computational advantage over polar formatting in conjunction with SVPF. This research concludes that PFA in conjunction with SVPF provides a computationally efficient spotlight-mode image formation solution that solves the wavefront curvature problem for most standoff distances and patch sizes, regardless of squint, resolution, or radar center frequency. Additional advantages are that SVPF is not iterative and has no dependence on the visual contents of the scene, resulting in a deterministic computational complexity that typically adds only thirty percent to the overall image formation time.
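The sketch below shows only the structural shape of a space-variant, one-dimensional, image-domain post-filter: each line of a complex image is filtered with a kernel that depends on its scene location. The quadratic phase used here is a hypothetical placeholder, not the SVPF kernel derived in the dissertation.

    import numpy as np

    def space_variant_post_filter(image, phase_coeff=1e-4):
        """Apply a different 1-D frequency-domain filter to each image line.
        The location-dependent quadratic phase below is a placeholder kernel."""
        n_az, n_rg = image.shape
        out = np.empty_like(image)
        f = np.fft.fftfreq(n_rg)                   # normalized range frequency
        for i in range(n_az):
            offset = i - n_az / 2.0                # distance from scene center
            kernel = np.exp(1j * phase_coeff * offset * (f * n_rg) ** 2)
            out[i] = np.fft.ifft(np.fft.fft(image[i]) * kernel)
        return out

    # Hypothetical complex image patch.
    rng = np.random.default_rng(1)
    img = rng.standard_normal((128, 256)) + 1j * rng.standard_normal((128, 256))
    corrected = space_variant_post_filter(img)
    print(corrected.shape, corrected.dtype)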
Report on the Test and Evaluation of the Kinemetrics/Quanterra Q730B Borehole Digitizers
Kromer, Richard P.; Mcdonald, Timothy S.
Sandia National Laboratories has tested and evaluated the Kinemetrics/Quanterra Q730B-bb (broadband) and Q730B-sp (short period) borehole installation remote digitizers. The test results included in this report cover response to static and dynamic input signals, seismic application performance, data time-tag accuracy, and reference signal generator (calibrator) performance. Most test methodologies used were based on IEEE Standards 1057 for Digitizing Waveform Recorders and P1241 (Preliminary Draft) for Analog to Digital Converters; others were designed by Sandia specifically for seismic application evaluation and for supplementary criteria not addressed in the IEEE standards. When appropriate, test instrumentation calibration is traceable to the National Institute of Standards and Technology (NIST).
A Regularized Galerkin Boundary Element Method (RGBEM) for Simulating Potential Flow About Zero Thickness Bodies
The prediction of potential flow about zero-thickness membranes by the boundary element method constitutes an integral component of the Lagrangian vortex-boundary element simulation of flow about parachutes. To this end, the vortex loop (or panel) method has been used for some time in the aerospace industry with relative success [1, 2]. Vortex loops (with constant circulation) are equivalent to boundary elements with piecewise constant variation of the potential jump. In this case, extending the analysis in [3], the near-field potential velocity evaluations can be shown to be O(1). The accurate evaluation of the potential velocity field very near the parachute surface is particularly critical to the overall accuracy and stability of the vortex-boundary element simulations. As we demonstrate in Section 3, the boundary integral singularities, which arise due to the application of low-order boundary elements, may lead to severely spiked potential velocities at vortex element centers that are near the boundary. The spikes in turn cause erratic motion of the vortex elements, eventual loss of smoothness of the vorticity field, and possible numerical blow-up. In light of these arguments, the application of boundary elements with (at least) a linear variation of the potential jump--or, equivalently, piecewise constant vortex sheets--would appear to be more appropriate for vortex-boundary element simulations. For this case, two strategies are possible for obtaining the potential flow field. The first option is to solve the integral equations for the (unknown) strengths of the surface vortex sheets. As we discuss in Section 2.1, the challenge in this case is to devise a consistent system of equations that imposes the solenoidality of the locally 2-D vortex sheets. The second approach is to solve for the unknown potential jump distribution. In this case, for commonly used C^0 shape functions, the boundary integral is singular at the collocation points. Unfortunately, the development of elements with C^1 continuity for the potential jumps is quite complicated in 3-D. To this end, the application of Galerkin "smoothing" to the boundary integral equations removes the singularity at the collocation points, thus allowing the use of C^0 elements and potential jump distributions [4]. Successful implementations of the Galerkin boundary element method for 2-D conduction [4] and elastostatic [5] problems have been reported in the literature. Thus far, the singularity removal algorithms have been based on a posteriori, mathematically complex reasoning that requires Taylor series expansions and limit processes. The application of these strategies to 3-D is expected to be significantly more complicated. In this report, we develop the formulation for a Regularized Galerkin Boundary Element Method (RGBEM). The regularization procedure involves simple manipulations using vector calculus to reduce the singularity of the hypersingular boundary integral equation by two orders for C^0 elements. For the case of linear potential jump distributions over plane triangles, the regularized integral simplifies considerably to a double surface integral of the Green function. This is the case implemented and tested in this report.
Using the example problem of flow normal to a square flat plate, the linear RGBEM predictions are demonstrated here to be more accurate, to converge faster, and to be significantly less spiked than the solutions obtained by the vortex loop method.
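The reduced form mentioned above, a double surface integral of the Green function over pairs of plane triangles, can be approximated numerically. The sketch below uses a simple three-point Gauss rule on each triangle and the free-space Green function 1/(4*pi*r); it is an illustration for well-separated triangles only and does not implement the regularization needed for coincident or adjacent elements.

    import numpy as np

    # Three-point Gauss rule on a triangle: barycentric points and equal weights.
    BARY = np.array([[2/3, 1/6, 1/6],
                     [1/6, 2/3, 1/6],
                     [1/6, 1/6, 2/3]])
    WGT = np.array([1/3, 1/3, 1/3])

    def tri_area(v):
        return 0.5 * np.linalg.norm(np.cross(v[1] - v[0], v[2] - v[0]))

    def green(x, y):
        return 1.0 / (4.0 * np.pi * np.linalg.norm(x - y))

    def double_integral_green(tri_a, tri_b):
        """Approximate the double surface integral of G(x, y) over two plane triangles."""
        area_a, area_b = tri_area(tri_a), tri_area(tri_b)
        pts_a = BARY @ tri_a            # quadrature points on triangle A
        pts_b = BARY @ tri_b            # quadrature points on triangle B
        total = 0.0
        for wa, xa in zip(WGT, pts_a):
            for wb, xb in zip(WGT, pts_b):
                total += wa * wb * green(xa, xb)
        return total * area_a * area_b

    # Hypothetical pair of well-separated unit triangles.
    tri_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    tri_b = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0]])
    print("approximate double integral:", double_integral_green(tri_a, tri_b))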
Brine and Gas Flow Patterns Between Excavated Areas and Disturbed Rock Zone in the 1996 Performance Assessment for the Waste Isolation Pilot Plant for a Single Drilling Intrusion that Penetrates Repository and Castile Brine Reservoir
Helton, Jon C.; Vaughn, Palmer
The Waste Isolation Pilot Plant (WIPP), which is located in southeastern New Mexico, is being developed for the geologic disposal of transuranic (TRU) waste by the U.S. Department of Energy (DOE). Waste disposal will take place in panels excavated in a bedded salt formation approximately 2000 ft (610 m) below the land surface. The BRAGFLO computer program, which solves a system of nonlinear partial differential equations for two-phase flow, was used to investigate brine and gas flow patterns in the vicinity of the repository for the 1996 WIPP performance assessment (PA). The present study examines the implications of modeling assumptions used in conjunction with BRAGFLO in the 1996 WIPP PA that affect brine and gas flow patterns involving two waste regions in the repository (i.e., a single waste panel and the remaining nine waste panels), a disturbed rock zone (DRZ) that lies just above and below these two regions, and a borehole that penetrates the single waste panel and a brine pocket below this panel. The two waste regions are separated by a panel closure. The following insights were obtained from this study. First, the impediment to flow between the two waste regions provided by the panel closure model is reduced by the permeable and areally extensive nature of the DRZ adopted in the 1996 WIPP PA, which results in the DRZ becoming an effective pathway for gas and brine movement around the panel closures and thus between the two waste regions. Brine and gas flow between the two waste regions via the DRZ causes pressures in the two to equilibrate rapidly, with the result that processes in the intruded waste panel are not isolated from the rest of the repository. Second, the connection between intruded and unintruded waste panels provided by the DRZ increases the time required for repository pressures to equilibrate with the overlying and/or underlying units subsequent to a drilling intrusion. Third, the large and areally extensive DRZ void volume is a significant source of brine to the repository, which is consumed in the corrosion of iron and thus contributes to increased repository pressures. Fourth, the DRZ itself lowers repository pressures by providing storage for gas and access to additional gas storage in other areas of the repository. Fifth, given the pathway that the DRZ provides for gas and brine to flow around the panel closures, isolation of the waste panels by the panel closures was not essential to compliance with the U.S. Environmental Protection Agency's regulations in the 1996 WIPP PA.
Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access
Hipp, James R.; Young, Christopher J.; Moore, Susan G.; Shepherd, Ellen
The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access the interpolatable information needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach that combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated that fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking-triangle search for the containing triangle, and finally the NNI interpolation.
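As a hedged sketch of the kriging step only (plain ordinary kriging with an assumed exponential covariance, not the Modified Bayesian Kriging developed for this work), the following estimates a value and an error variance at a query point from scattered data.

    import numpy as np

    def exp_cov(h, sill=1.0, rng_len=2.0):
        """Assumed exponential covariance model (illustration only)."""
        return sill * np.exp(-h / rng_len)

    def ordinary_krige(points, values, query, sill=1.0, rng_len=2.0):
        """Return the ordinary-kriging estimate and kriging variance at `query`."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        # Ordinary kriging system with a Lagrange multiplier for unbiasedness.
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = exp_cov(d, sill, rng_len)
        A[:n, n] = A[n, :n] = 1.0
        b = np.zeros(n + 1)
        b[:n] = exp_cov(np.linalg.norm(points - query, axis=1), sill, rng_len)
        b[n] = 1.0
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[n]
        estimate = w @ values
        variance = sill - w @ b[:n] - mu   # kriging (error) variance
        return estimate, variance

    # Hypothetical scattered observations on a 2-D parametric grid.
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
    vals = np.array([1.0, 1.2, 0.9, 1.1, 1.4])
    est, var = ordinary_krige(pts, vals, query=np.array([0.5, 0.5]))
    print("kriged value:", round(est, 3), " kriging variance:", round(var, 4))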
Development of Zinc/Bromine Batteries for Load-Leveling Applications: Phase 2 Final Report
This report documents Phase 2 of a project to design, develop, and test a zinc/bromine battery technology for use in utility energy storage applications. The project was co-funded by the U.S. Department of Energy Office of Power Technologies through Sandia National Laboratories. The viability of the zinc/bromine technology was demonstrated in Phase 1. In Phase 2, the technology developed during Phase 1 was scaled up to a size appropriate for the application. Batteries were increased in size from 8-cell, 1170-cm{sup 2} cell stacks (Phase 1) to 8- and then 60-cell, 2500-cm{sup 2} cell stacks in this phase. The 2500-cm{sup 2} series battery stacks were developed as the building block for large utility battery systems. Core technology research on electrolyte and separator materials and on manufacturing techniques, which began in Phase 1, continued to be investigated during Phase 2. Finally, the end product of this project was a 100-kWh prototype battery system to be installed and tested at an electric utility.
Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses
Applied Spectroscopy
Haaland, David M.; Chambers, William B.; Keenan, Michael R.; Melgaard, David K.
The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element from all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements were present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than two orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly superior to partial least squares (PLS) in this case of limited numbers of calibration samples.
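A hedged, minimal sketch of the classical least squares step and the pooling idea (synthetic data and a simple inverse-variance pooling rule, not the paper's exact statistical procedure): calibrate pure-component responses in each spectral window, predict concentrations per window, then combine the per-window estimates.

    import numpy as np

    rng = np.random.default_rng(0)

    def cls_calibrate(C, A):
        """Classical least squares: fit pure-component responses K in A ~ C @ K."""
        K, *_ = np.linalg.lstsq(C, A, rcond=None)
        return K

    def cls_predict(K, a):
        """Estimate concentrations c from a measured spectrum a ~ c @ K."""
        c, *_ = np.linalg.lstsq(K.T, a, rcond=None)
        return c

    # Synthetic example: 3 elements, 4 spectral windows of 20 channels each.
    n_samples, n_elem, n_win, n_chan = 10, 3, 4, 20
    C_cal = rng.uniform(0.1, 1.0, size=(n_samples, n_elem))
    K_true = [rng.uniform(0.0, 1.0, size=(n_elem, n_chan)) for _ in range(n_win)]
    windows_cal = [C_cal @ K + 0.01 * rng.standard_normal((n_samples, n_chan))
                   for K in K_true]

    c_true = np.array([0.3, 0.6, 0.9])             # "unknown" sample
    windows_unk = [c_true @ K + 0.01 * rng.standard_normal(n_chan) for K in K_true]

    estimates, weights = [], []
    for A_cal, a_unk in zip(windows_cal, windows_unk):
        K = cls_calibrate(C_cal, A_cal)
        resid = A_cal - C_cal @ K                  # calibration residuals
        estimates.append(cls_predict(K, a_unk))
        weights.append(1.0 / max(resid.var(), 1e-12))   # crude inverse-variance weight

    estimates, weights = np.array(estimates), np.array(weights)
    pooled = (weights[:, None] * estimates).sum(axis=0) / weights.sum()
    print("pooled concentration estimates:", np.round(pooled, 3))
    print("true concentrations          :", c_true)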
Corrosion issues in solder joint design and service
Welding Journal (Miami, Fla)
Corrosion is an important consideration in the design of a solder joint. In the case of a conduit, corrosion from both the outside service environment and the medium being transported within the pipe or tube must be addressed. Solder joints are susceptible to atmospheric corrosion, galvanic corrosion, voltage-assisted corrosion, stress corrosion cracking, and corrosion fatigue cracking. Galvanic corrosion is of particular concern, given that solder joints are composed of different metals or alloys in contact with one another.
Rigid Polyurethane Foam (RPF) Technology for Countermines (Sea) Program Phase II
Woodfin, Ronald L.; Faucett, David L.; Hance, Bradley G.
Abstract not provided.