Publications

Results 92801–92850 of 99,299

Ontological leveling and elicitation for complex industrial transactions

Phillips, Laurence R.

The authors present an agent-oriented mechanism that uses a central ontology as a means to conduct complex distributed transactions. This is done by instantiating a template object motivated solely by the ontology, then automatically and explicitly linking each template element to an independently constructed interface component. Validation information is attached directly to the links so that the agent need not know a priori the semantics of data validity, merely how to execute a general validation process to satisfy the conditions given in the link. Ontological leveling is critical: all terms presented to informants must be semantically coherent within the central ontology. To illustrate this approach in an industrial setting, they discuss an existing implementation that conducted international commercial transactions on the World-Wide Web. Agents operating within a federated architecture construct, populate by Web-based elicitation, and manipulate a distributed composite transaction object to effect transport of goods over the US/Mexico border.

Experiences developing ALEGRA: A C++ coupled physics framework

Budge, Kent G.

ALEGRA is a coupled physics framework originally written to simulate inertial confinement fusion (ICF) experiments being conducted at the PBFA-II facility at Sandia National Laboratories. It has since grown into a large software development project supporting a number of computational programs at Sandia. As the project has grown, so has the development team, from the original two authors to a group of over fifteen programmers crossing several departments. In addition, ALEGRA now runs on a wide variety of platforms, from large PCs to the ASCI Teraflops massively parallel supercomputer. The authors discuss the reasons for ALEGRA's success, which include the intelligent use of object-oriented techniques and the choice of C++ as the programming language. They argue that the intelligent use of development tools, such as build tools (e.g. make), compiler, debugging environment (e.g. dbx), version control system (e.g. cvs), and bug management software (e.g. ClearDDTS), is nearly as important as the choice of language and paradigm.

Plasma etching, texturing, and passivation of silicon solar cells

Ruby, Douglas S.

The authors improved a self-aligned emitter etchback technique that requires only a single emitter diffusion and no alignments to form self-aligned, patterned-emitter profiles. Standard commercial screen-printed gridlines mask a plasma-etchback of the emitter. A subsequent PECVD-nitride deposition provides good surface and bulk passivation and an antireflection coating. The authors used full-size multicrystalline silicon (mc-Si) cells processed in a commercial production line and performed a statistically designed multiparameter experiment to optimize the use of a hydrogenation treatment to increase performance. They obtained an improvement of almost a full percentage point in cell efficiency when the self-aligned emitter etchback was combined with an optimized 3-step PECVD-nitride surface passivation and hydrogenation treatment. They also investigated the inclusion of a plasma-etching process that results in a low-reflectance, textured surface on multicrystalline silicon cells. Preliminary results indicate reflectance can be significantly reduced without etching away the emitter diffusion.

DSMC Simulation of thermal transpiration and accommodation pumps

Hudson, M.L.

The Direct Simulation Monte Carlo (DSMC) technique is employed to evaluate several configurations of thermal transpiration and accommodation pumps. There is renewed interest in these rarefied flow pumping concepts for Micro-Electro-Mechanical Systems (MEMS) due to advances in micro-fabrication. The simulation results are compared with existing data to understand gas-surface interaction uncertainties in the experiments. Parametric studies are performed to determine the effects of Knudsen number and surface temperature and roughness on the maximum pump pressure ratio.
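
For orientation, the rarefaction regime of such micropumps is commonly characterized by the Knudsen number. The sketch below is a back-of-envelope estimate using a hard-sphere mean free path; the gas, pressure, and channel dimension are illustrative assumptions, not values from the paper:

    import numpy as np
    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                   # gas temperature, K (assumed)
    p = 1.0e3                   # pressure, Pa (assumed)
    d = 3.7e-10                 # approximate molecular diameter of N2, m
    L = 1.0e-6                  # characteristic channel dimension, m (assumed MEMS scale)
    mfp = k_B * T / (np.sqrt(2.0) * np.pi * d**2 * p)   # hard-sphere mean free path
    Kn = mfp / L
    print(f"mean free path = {mfp:.3e} m, Knudsen number = {Kn:.2f}")

A Knudsen number well above unity, as here, places the flow in the regime where continuum solvers fail and particle methods such as DSMC are appropriate.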

Swarms of UAVs and fighter aircraft

Wagner, John S.

This paper describes a method of modeling swarms of UAVs and/or fighter aircraft using particle simulation concepts. Recent investigations into the use of genetic algorithms to design neural networks for the control of autonomous vehicles (i.e., robots) led to the examination of methods of simulating large collections of robots. This paper describes the successful implementation of a model of swarm dynamics using particle simulation concepts. Several examples of the complex behaviors achieved in a target/interceptor scenario are presented.
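
As a loose illustration of the particle-simulation idea (not the paper's genetic-algorithm or neural-network controllers), a swarm can be advanced by summing simple pairwise attraction/repulsion terms and a pursuit acceleration toward a target; every coefficient below is an assumption chosen only for the sketch:

    import numpy as np
    rng = np.random.default_rng(1)
    n, dt = 20, 0.05
    pos = rng.uniform(-1, 1, (n, 2))          # swarm particle positions
    vel = np.zeros((n, 2))
    target = np.array([5.0, 0.0])             # stand-in for the target in a target/interceptor scenario

    for step in range(400):
        acc = 0.5 * (target - pos)            # pursuit term toward the target
        for i in range(n):
            r = pos - pos[i]                  # vectors to the other particles
            dist = np.linalg.norm(r, axis=1) + 1e-9
            cohesion = (r / dist[:, None]).sum(axis=0) / n          # weak attraction keeps the swarm together
            repulsion = -(r / dist[:, None]**3).sum(axis=0) * 0.01  # short-range repulsion avoids collisions
            acc[i] += cohesion + repulsion
        vel += dt * acc
        vel *= 0.98                           # mild damping bounds the speeds
        pos += dt * vel

Complex collective behaviors of the kind described in the paper emerge from tuning exactly these kinds of pairwise interaction terms.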

A Monte Carlo model of Zener pinning which shows f^-1 dependence

Miodownik, M.

A novel Monte Carlo (MC) model of Zener pinning has been developed. It differs from previous MC models in that it does not simulate polycrystalline grain growth. Instead a single boundary moving through an array of particles is simulated. The boundary curvature defines the driving force acting on the boundary; this is constant throughout the simulation. By incrementally increasing the volume fraction of particles, the pinning force is gradually increased. The boundary is eventually pinned when the driving force equals the pinning force. This defines the Zener criterion and enables the volume fraction dependence of the model to be determined. The value of this approach is that there is no limit imposed on either the volume fraction of particles or their size. Simulations have been carried out over a range of volume fractions, 0 < f < 0.25, for particles with volumes of 27 sites. The pinning force exerted by particles on a boundary is related to the characteristic shape during bypass, the so-called dimple. When the simulation temperature is T' = 0, dimples are not formed, the boundaries experience an artificially strong pinning force, and the model exhibits an f^-1/2 dependence. When T' is greater than a critical value, dimples are formed and the model shows an f^-1 volume fraction dependence. The implications of this result for previous MC models of Zener pinning are discussed.
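
For context, the classical mean-field estimate behind the f^-1 scaling can be stated in one line; this is the textbook Zener argument, sketched here only for reference, not part of the simulations. Balancing the curvature-driven pressure on a boundary of energy γ and radius of curvature R against the pinning pressure of randomly intersecting particles of radius r and volume fraction f gives

    P_d = \frac{2\gamma}{R}, \qquad P_z = \frac{3 f \gamma}{2 r}, \qquad P_d = P_z \;\Rightarrow\; R_{\mathrm{lim}} = \frac{4r}{3f} \propto f^{-1}

so the limiting pinned size scales as f^-1, the dependence the model recovers once dimples form.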

A robust line search for learning control

Driessen, B.J.; Kwok, K.S.; Sadegh, N.

In this paper a new line search for a Newton-Raphson learning control algorithm is presented. Theorems and rigorous proofs of its increased robustness over existing line searches are provided, and numerical examples are used to further validate the theorems. The previously posed open question of whether robust optimal trajectory learning is possible is also addressed. It is shown that the answer is generally no, at least for gradient-based learning control algorithms.
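
The robust line search of the paper is not reproduced here; the sketch below only illustrates the role a line search plays in such a gradient/Newton-type update, using a standard backtracking (Armijo) rule on a placeholder merit function f:

    import numpy as np

    def backtracking_line_search(f, x, d, grad, alpha0=1.0, rho=0.5, c=1e-4, max_iter=30):
        """Shrink the step until the Armijo sufficient-decrease condition holds."""
        alpha, fx = alpha0, f(x)
        for _ in range(max_iter):
            if f(x + alpha * d) <= fx + c * alpha * grad.dot(d):
                return alpha
            alpha *= rho
        return alpha

    # toy usage: one damped step on f(x) = ||x||^2
    f = lambda x: float(x.dot(x))
    x = np.array([3.0, -2.0])
    grad = 2 * x
    d = -grad                      # descent direction for this quadratic
    step = backtracking_line_search(f, x, d, grad)
    print("accepted step length:", step)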

Configuration space representation for micro-mechanism function

Allen, James J.

This paper describes the configuration space representation of mechanical function and shows how it supports the design of micro-mechanisms. The domain characteristics of curved geometry, joint play, and custom joints render traditional design tools inappropriate, but configuration spaces can model these characteristics. They represent the quantitative and the qualitative aspects of kinematic function in a concise geometric format that helps designers visualize system function under a range of operating conditions, find and correct design flaws, study joint play, and optimize performance. The approach is demonstrated on a surface micromachined counter meshing gear discrimination device developed at Sandia National Laboratories.

3D finite-difference seismic migration with parallel computers

Ober, Curtis C.

The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is essential for reducing the risk associated with oil exploration. Imaging these structures, however, is computationally expensive as datasets can be terabytes in size. Traditional ray-tracing migration methods cannot handle complex velocity variations commonly found near such salt structures. Instead the authors use the full 3D acoustic wave equation, discretized via a finite difference algorithm. They reduce the cost of solving the paraxial wave equation by a number of numerical techniques including the method of fractional steps and pipelining the tridiagonal solves. The imaging code, Salvo, uses both frequency parallelism (generally 90% efficient) and spatial parallelism (65% efficient). Salvo has been tested on synthetic and real data and produces clear images of the subsurface even beneath complicated salt structures.
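
One building block mentioned above, the tridiagonal solve arising from the implicit fractional steps, can be written serially as the Thomas algorithm; Salvo pipelines this operation across processors, which this minimal sketch does not attempt:

    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal, d = right-hand side."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # quick check against a dense solve on a small system
    n = 5
    a = np.array([0.0, 1, 1, 1, 1]); b = np.full(n, 4.0); c = np.array([1.0, 1, 1, 1, 0])
    d = np.arange(1.0, n + 1)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))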

Electrochemical Evaluation of Pyrite Films Prepared by Plasma Spraying

Guidotti, Ronald A.

Thermally activated batteries use electrodes that are typically fabricated by cold pressing of powder. In the LiSi/FeS2 system, natural (mineral) pyrite is used for the cathode. In an effort to increase the energy density and specific energy of these batteries, flame and plasma spraying to form thin films of pyrite cathodes were evaluated. The films were deposited on a 304 stainless steel substrate (current collector) and were characterized by scanning electron microscopy and x-ray diffraction. The films were electrochemically tested in single cells at 500°C and the performance was compared to that of standard cells made with cold-pressed powders. The best results were obtained with material deposited by dc-arc plasma spraying with a proprietary additive to suppress thermal decomposition of the pyrite.

Neural Network Modeling of the Lithium/Thionyl Chloride Battery System

O'Gorman, Chris

Battery systems have traditionally relied on extensive build and test procedures for product realization. Analytical models have been developed to diminish this reliance, but have only been partially successful in consistently predicting the performance of battery systems. The complex set of interacting physical and chemical processes within battery systems has made the development of analytical models a significant challenge. Advanced simulation tools are needed to more accurately model battery systems, which will reduce the time and cost required for product realization. Sandia has initiated an advanced model-based design strategy for battery systems, beginning with the performance of lithium/thionyl chloride cells. As an alternative approach, we have begun development of cell performance modeling using non-phenomenological models for battery systems based on artificial neural networks (ANNs). ANNs are inductive models for simulating input/output mappings with certain advantages over phenomenological models, particularly for complex systems. Among these advantages is the ability to avoid making measurements of hard-to-determine physical parameters or having to understand cell processes sufficiently to write mathematical functions describing their behavior. ANN models are also being studied for simulating complex physical processes within the Li/SOCl2 cell, such as the time and temperature dependence of the anode interfacial resistance. ANNs have been shown to provide a very robust and computationally efficient simulation tool for predicting voltage and capacity output for Li/SOCl2 cells under a variety of operating conditions. The ANN modeling approach should be applicable to a wide variety of battery chemistries, including rechargeable systems.
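
A minimal illustration of the ANN idea, fitting an input/output map without a phenomenological cell model, is sketched below; the inputs (discharge current and temperature), the synthetic data, and the network size are illustrative assumptions, not the cells or models described in the paper:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # synthetic "cell" data: inputs = (discharge current, temperature), output = delivered capacity
    X = rng.uniform([0.1, -20.0], [2.0, 40.0], size=(500, 2))
    y = 10.0 - 2.0 * X[:, 0] + 0.05 * X[:, 1] + 0.1 * rng.standard_normal(500)

    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X, y)                                    # learn the input/output mapping inductively
    print("predicted capacity at 1 A, 25 C:", model.predict([[1.0, 25.0]])[0])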

Analysis of Hydrogen Depletion Using a Scaled Passive Autocatalytic Recombiner

Nuclear Engineering and Design Journal (NED)

Blanchat, Thomas K.

Hydrogen depletion tests of a scaled passive autocatalytic recombiner (PAR) were performed in the Surtsey test vessel at Sandia National Laboratories (SNL). The experiments were used to determine the hydrogen depletion rate of a PAR in the presence of steam and also to evaluate the effect of scale (number of cartridges) on the PAR performance at both low and high hydrogen concentrations.

The Tunnel Sealing Experiment: An In Situ Demonstration of Technologies for Vault Sealing

Tillerson, J.

Two bulkheads, one composed of high performance concrete and the other of highly compacted sand-bentonite material, have been constructed in a tunnel in unfractured granite rock at the Underground Research Laboratory. The Tunnel Sealing Experiment will characterize the performance of the two bulkheads under applied hydraulic pressures. The chamber between the two bulkheads will be pressurized to approximately 4 MPa, a value representative of the ambient pore pressures in the rock at a depth of 420 m. Instrumentation in the experiment monitors the seepage through and around each bulkhead as well as the changes to the pore water pressure, and hence changes to the flow directions, in the intact rock. Stresses and displacements in each bulkhead are also monitored. The objective of the experiment is to demonstrate technologies for construction of bentonite and concrete bulkheads and to quantify the performance of each bulkhead.

PDS/PIO: Lightweight Libraries for Collective Parallel I/O

Chen, P.; Christon, M.; Heermann, P.D.; Sturtevant, J.

PDS/PIO is a lightweight, parallel interface designed to support efficient transfers of massive, grid-based, simulation data among memory, disk, and tape subsystems. The higher-level PDS (Parallel Data Set) interface manages data with tensor and unstructured grid abstractions, while the lower-level PIO (Parallel Input/Output) interface accesses data arrays with arbitrary permutation, and provides communication and collective I/O operations. Higher-level data abstraction for finite element applications is provided by PXI (Parallel Exodus Interface), which supports, in parallel, the functionality of Exodus II, a finite element data model developed at Sandia National Laboratories. The entire interface is implemented in C with Fortran-callable PDS and PXI wrappers.

A Fission-Powered Interstellar Precursor Mission

Lipinski, Ronald

An 'interstellar precursor mission' lays the groundwork for eventual interstellar exploration by studying the interstellar medium and by stretching technologies that have potential application for eventual interstellar exploration. The numerous scientific goals for such a mission include generating a 3-D stellar map of our galaxy, studying Kuiper-belt and Oort cloud objects, and observing distant objects using the sun's gravitational lens as the primary of an enormous telescope. System equations are developed for a space tug which propels a 2500-kg scientific payload to 550 astronomical units in about 20 years. The tug to transport this payload uses electric propulsion with an Isp of 15,000 seconds and a fission reactor with a closed Brayton cycle to generate the electricity. The optimal configuration may be to thrust for only about 6 years and then coast for the remaining 14 years. This spacecraft does not require any physics breakthroughs or major advances in technology. The fission power system can be engineered and built by drawing upon known technologies developed for related systems over the past 40 years. The tug system would eventually reach 1000 AU in 33 years, and would have adequate power to relay large amounts of data throughout its journey.
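
A rough consistency check of the quoted figures (a sketch using only standard constants; the mission numbers are those stated above) shows that the exhaust velocity implied by an Isp of 15,000 seconds is of the same order as the mean speed needed to cover 550 astronomical units in 20 years:

    AU = 1.495978707e11          # meters
    YEAR = 365.25 * 86400.0      # seconds
    g0 = 9.80665                 # standard gravity, m/s^2
    v_exhaust = 15000.0 * g0                 # ~147 km/s exhaust velocity for Isp = 15,000 s
    v_mean = 550 * AU / (20 * YEAR)          # ~130 km/s average speed over the 20-year mission
    print(f"exhaust velocity ~ {v_exhaust/1e3:.0f} km/s, required mean speed ~ {v_mean/1e3:.0f} km/s")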

A Three-Dimensional Photonic Crystal with Stop Band Between 1.35 and 1.95 Microns

Optics Letters

Lin, Shawn-Yu

A combination of advanced silicon processing techniques was used to create three-dimensional (3D) photonic crystals with a 180-nanometer minimum dimension. The resulting 3D crystal displays a strong stop band at optical wavelengths, λ = 1.35–1.95 µm. This is the smallest 3D crystal ever achieved with a complete 3D photonic band gap.

Comment on "Indication from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration"

Physical Review Letters

Humphreys, D.R.

In a recent Letter, Anderson et al. report some very intriguing radio observations from various interplanetary spaceprobes over the past 18 years. They interpret these data as an anomalous deceleration of the spaceprobes. Here I offer a different interpretation: that the anomaly is related to the cosmological red shift.

Multi-Level Micromachined Systems-on-a-Chip: Technology and Applications

Allen, J.J.; Krygowski, T.W.; Miller, S.L.; Montague, S.; Rodgers, M.S.; Smith, J.H.; Sniegowski, J.J.

Researchers at Sandia have recently designed and built several research prototypes which demonstrate that truly complex mechanical systems can now be realized in a surface micromachined technology. These MicroElectroMechanical Systems (MEMS) include advanced actuators, torque multiplying gear trains, rack and pinion assemblies, positionable mirrors, and mechanical discriminators. All of the mechanical components are batch fabricated on a single chip of silicon using the infrastructure originally developed to support today's highly reliable and robust microelectronics industry. Sandia is also developing the technology to integrate microelectronic circuits onto the same piece of silicon that is used to fabricate the MEMS devices. This significantly increases sensitivity and reliability, while further reducing package size and fabrication costs. A review of the MEMS technology and capabilities available at Sandia National Laboratories is presented.

Assessing the Security Vulnerabilities of Correctional Facilities

Spencer, D.S.

The National Institute of Justice has tasked their Satellite Facility at Sandia National Laboratories and their Southeast Regional Technology Center in Charleston, South Carolina to devise new procedures and tools for helping correctional facilities to assess their security vulnerabilities. Thus, a team is visiting selected correctional facilities and performing vulnerability assessments. A vulnerability assessment helps to identify the easiest paths for inmate escape, for introduction of contraband such as drugs or weapons, for unexpected intrusion from outside of the facility, and for the perpetration of violent acts on other inmates and correctional employees. In addition, the vulnerability assessment helps to quantify the security risks for the facility. From these initial assessments will come better procedures for performing vulnerability assessments in general at other correctional facilities, as well as the development of tools to assist with the performance of such vulnerability assessments.

On Practical Modifications to the Barnes-Hut Multipole Method for Electromagnetic Scattering

Driessen, B.J.; Kotulski, J.D.

This paper presents a simple methodology for quickly predicting and optimizing computer run time for the Barnes-Hut multipole method for boundary element electromagnetic scattering problems. The methodology is easily extended to other multipole methods (e.g., Greengard-Rokhlin) and to other physics. The idea is to simply count the number of element-cell interactions, the number of direct element-element interactions, and the number of cell multipole expansion creations (each expansion weighted by the number of elements in the cell), and then finally combine these three results with the associated unit costs to obtain the total computer run-time to perform a single matrix-vector multiply. By counting operations instead of actually performing them, the time to predict the computer run time is orders of magnitude smaller than the time to actually perform the associated calculations. This allows for very quick optimization of parameters, such as the maximum number of elements in a final generation cell of the tree. Numerical examples are presented herein in which the rate of return (time saved over time spent finding optimal parameter values) is significantly more than two orders of magnitude.
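
The prediction step itself is just a weighted sum of the three counted quantities; a trivial sketch of that bookkeeping, with hypothetical counts and unit costs, is:

    def predicted_matvec_time(n_elem_cell, n_elem_elem, n_expansions_weighted,
                              cost_elem_cell, cost_elem_elem, cost_expansion):
        """Estimate the time of one matrix-vector multiply from interaction counts and unit costs."""
        return (n_elem_cell * cost_elem_cell
                + n_elem_elem * cost_elem_elem
                + n_expansions_weighted * cost_expansion)

    # hypothetical counts returned by a tree traversal that only counts interactions, never computes them
    print(predicted_matvec_time(2.1e6, 4.5e5, 8.0e4, 3e-7, 5e-8, 2e-6))

Sweeping a tree parameter (such as the maximum elements per leaf cell), recounting, and re-evaluating this sum is what makes the optimization described above so cheap.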

Ion Microbeam Studies of Cadmium Zinc Telluride Radiation Detectors by IBICC

Vizkelethy, Gyorgy

Ion Beam Induced Charge Collection (IBICC) and Time Resolved IBICC (TRIBICC) techniques were used for imaging electronic properties of Cadmium Zinc Telluride (CZT) room temperature radiation detectors. The detectors were bombarded with a scanned 5.4 MeV He microbeam and the detector response was analyzed at each point. The electron mobility (μ) and lifetime (τ), and charge collection efficiency maps were calculated from the data. In order to determine the radiation damage to the detectors, the signal deterioration was measured as a function of dose.

The Design Process of Physical Security as Applied to a U.S. Border Port of Entry

Wagner, George G.

This paper describes the design process of physical security as applied to a U.S. Border Port of Entry (PoE). Included in this paper are descriptions of the elements that compose U.S. border security. The physical security design will describe the various elements that make up the process as well as the considerations that must be taken into account when dealing with system integration of those elements. The distinctions between preventing unlawful entry and exit of illegal contraband will be emphasized.

Sandia Multispectral Airborne Lidar for UAV Deployment

Daniels, J.W.; Henson, T.D.; Jordan, J.D.; Lang, A.R.; Schmitt, R.L.

Sandia National Laboratories has initiated the development of an airborne system for UV laser remote sensing measurements. System applications include the detection of effluents associated with the proliferation of weapons of mass destruction and the detection of biological weapon aerosols. This paper discusses the status of the conceptual design development and plans for both the airborne payload (pointing and tracking, laser transmitter, and telescope receiver) and the Altus unmanned aerospace vehicle platform. Hardware design constraints necessary to maintain system weight, power, and volume limitations of the flight platform are identified.

Generating High-Brightness Light Ion Beams for Inertial Fusion Energy

Cuneo, Michael E.

Light ion beams may be the best option for an Inertial Fusion Energy (IFE) driver from the standpoint of efficiency, standoff, rep-rate operation, and cost. This approach uses high-energy-density pulsed power to efficiently accelerate ions in one or two stages at fields of 0.5 to 1.0 GV/m to produce a medium energy (30 MeV), high-current (1 MA) beam of light ions, such as lithium. Ion beams provide the ability for medium distance transport (4 m) of the ions to the target, and standoff of the driver from high-yield implosions. Rep-rate operation of high-current ion sources has also been demonstrated for industrial applications and could be applied to IFE. Although these factors make light ions the best long-term pulsed-power approach to IFE, light-ion research is being suspended this year in favor of a Z-pinch-driven approach, which has the best opportunity to most rapidly achieve the U.S. Department of Energy sponsor's goal of high-yield fusion. This paper will summarize the status and most recent results of the light-ion beam program at Sandia National Laboratories (SNL), and document the prospects of light ions for future IFE driver development.

Numerical Models of Broad-Bandwidth Nanosecond Optical Parametric Oscillators

Journal of the Optical Society of America B

Smith, Arlee V.

We present three new methods for modeling broad-bandwidth, nanosecond optical parametric oscillators in the plane-wave approximation. Each accounts for the group-velocity differences that determine the operating linewidth of unseeded optical parametric oscillators, and each allows the signal and idler waves to develop from quantum noise. The first two methods are based on split-step integration methods in which nonlinear mixing and propagation are calculated separately on alternate steps. One method relies on Fourier transforming the fields between t and ω to handle propagation, with mixing integrated over a Δz step; the other transforms between z and k_z in the propagation step, with mixing integrated over Δt. The third method is based on expansion of the three optical fields in terms of their respective longitudinal empty cavity modes, taking into account the cavity boundary conditions. Equations describing the time development of the mode amplitudes are solved to yield the time dependence of the three output fields. These plane-wave models exclude diffractive effects, but can be readily extended to include them.
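
A generic split-step loop, reduced here to a single field with dispersion and a Kerr-like nonlinear phase purely to illustrate the alternating propagation/mixing structure (not the three-wave OPO equations of the paper), looks like:

    import numpy as np
    n = 1024
    t = np.linspace(-10, 10, n, endpoint=False)    # time window (arbitrary units)
    dt = t[1] - t[0]
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)        # angular-frequency grid conjugate to t
    field = np.exp(-t**2)                          # initial pulse envelope (assumed)
    beta2 = 0.05                                   # assumed dispersion coefficient
    gamma = 1.0                                    # assumed nonlinear coefficient
    dz = 0.01
    for _ in range(200):
        # propagation step handled in the frequency domain
        field = np.fft.ifft(np.fft.fft(field) * np.exp(-0.5j * beta2 * w**2 * dz))
        # nonlinear "mixing" step integrated over dz in the time domain
        field = field * np.exp(1j * gamma * np.abs(field)**2 * dz)

The paper's methods apply the same alternation to the coupled signal, idler, and pump fields, with quantum-noise seeding and cavity round trips added.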

Modal Parameter Extraction Using Natural Excitation Response Data

Barney, Patrick S.

The use of natural excitation response data for the extraction of modal parameters has been an alluring idea for many years. The primary reason is that it offers the real-world inputs (both spatial and temporal) and the associated responses of the system without the cost of a complex excitation system. The use of NExT (the Natural Excitation Technique) allows for a linear representation of the system at operating levels, which is ideal for predictive linear simulation. The NExT parameter estimation methods have relied on using standard modal parameter extraction routines that do not exploit the special model form of NExT data. A parameter estimation method is developed here that is consistent with that form, thereby providing a more robust estimator in the presence of noise. This paper presents the basic methods used in NExT as well as some of the critical issues when using NExT.

The SMAC Modal Parameter Extraction Package

Mayes, Randall L.

After the basic theory for SMAC is presented below, two applications of the algorithm with real hardware will be presented. The first moderate-damping application is a system with approximately 50 modes in the bandwidth with up to 5 percent damping. The second application has at least 70 modes in the bandwidth, but damping is always below 3 percent. In addition to the improved implementation of the SMAC root finder, coding has been written to extract the mode shapes based on quadrature fit, and this is described herein.

Laboratory Simulation of Response to a Distributed Pressure Load

Simmermacher, Todd W.

Responses to a distributed pressure load are typically predicted through the use of a finite-element model. This procedure depends on the model to represent the actual structure accurately. Another technique, developed in this work, is to predict the response based upon an experimentally derived model. This model consists of frequency response functions. The pressure distribution is assumed to be known. In this work, the pressure load will be a blast load. The focus of this work will be to simulate a harsh, shock-like environment. Data from a reverse Hopkinson bar (RHB) test is used to generate the response to a symmetric, distributed load. The reverse Hopkinson bar generates a high-amplitude, high-frequency-content pulse that excites components at near-blast levels. The frequency response functions generated from the RHB are used to generate an experimental model of the structure, which is then used in conjunction with the known pressure distribution to estimate the component response to a blast. This result can then be used with a model correlation technique to adjust a finite element model such that data from a true blast test can be used to only fine tune the model. This work details the estimation of the response due to the blast.

Simulations of the Penetration of 6061-T6511 Aluminum Targets by Spherical-Nosed VAR 4340 Steel Projectiles

International Journal of Solids and Structures

Warren, Thomas L.

In certain penetration events it is proposed that the primary mode of deformation of the target can be approximated by known analytical expressions. In the context of an analysis code, this approximation eliminates the need for discretizing the target as well as the need for a contact algorithm. Thus, this method substantially reduces the computer time and memory requirements. In this paper a forcing function which is derived from a spherical-cavity expansion (SCE) analysis has been implemented in a transient dynamic finite element code. This implementation is capable of computing the structural and component responses of a projectile due to a three-dimensional penetration event. Simulations are presented for 7.11-mm-diameter, 74.7-mm-long, spherical-nose, vacuum-arc-remelted (VAR) 4340 steel projectiles that penetrate 6061-T6511 aluminum targets. Final projectile configurations obtained from the simulations are compared with post-test radiographs obtained from the corresponding experiments. It is shown that the simulations accurately predict the permanent projectile deformation for three-dimensional loadings due to incident pitch and yaw over a wide range of striking velocities.
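
In cavity-expansion approaches of this general type, the discretized target is replaced by a normal stress applied to each element of the projectile nose as a function of its local normal velocity; a commonly quoted generic form (the specific coefficients used in the paper are not reproduced here) is

    \sigma_n(v_n) \approx \sigma_s + B \, \rho_t \, v_n^2

where σ_s is a target-strength term, ρ_t the target density, v_n the local normal velocity of the nose surface element, and B a dimensionless constant obtained from the cavity-expansion analysis.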

Using DFX for Algorithm Evaluation

Beiriger, Judy I.

Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a generic algorithm interface encompassing expected DFX functionality and algorithm input and output, and the aggregation of some DFX functionality to simplify the interface.

The DOE Model for Improving Seismic Event Locations Using Travel Time Corrections: Description and Demonstration

Hipp, James R.

The U.S. National Laboratories, under the auspices of the Department of Energy, have been tasked with improving the capability of the United States National Data Center (USNDC) to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the most important services which the USNDC must provide is to locate suspicious events, preferably as accurately as possible to help identify their origin and to ensure the success of on-site inspections if they are deemed necessary. The seismic location algorithm used by the USNDC has the capability to generate accurate locations by applying geographically dependent travel time corrections, but to date, none of the means proposed for generating and representing these corrections has proven to be entirely satisfactory. In this presentation, we detail the complete DOE model for how regional calibration travel time information gathered by the National Labs will be used to improve event locations and provide more realistic location error estimates. We begin with residual data and error estimates from ground truth events. Our model consists of three parts: data processing, data storage, and data retrieval. The former two are effectively one-time processes, executed in advance before the system is made operational. The last step is required every time an accurate event location is needed. Data processing involves applying non-stationary Bayesian kriging to the residual data to densify them, and iterating to find the optimal tessellation representation for fast interpolation in the data retrieval task. Both the kriging and the iterative re-tessellation are slow, computationally expensive processes, but this is acceptable because they are performed off-line, before any events are to be located. In the data storage task, the densified data set is stored in a database and spatially indexed. Spatial indexing improves the access efficiency of the geographically oriented data requests associated with event location. Finally, in the data retrieval phase, when an accurate location is needed, the densified data is retrieved and a quick interpolation is performed using natural neighbor interpolation with a gradient slope modification to guarantee continuous derivatives. To test our model, we use the residuals from a large set of synthetic events (441) that were created to have travel times consistent with the IASP91 radial base model plus perturbations of up to 2 seconds taken from spherical harmonic surfaces with randomly generated coefficients. Relocating these events using 3 stations with poor azimuthal coverage and IASP91 travel times alone yields dislocations of up to 278 km with a mean value of 58 km. Using our model to apply travel time corrections, we reduce the largest dislocation to 151 km and the mean value to 13 km. Further, the error ellipses generated now accurately reflect the uncertainty associated with the composite model (base model + corrections), and as a result are small for events occurring near ground truth event points and large for events occurring where no calibration data is available.

The DOE Knowledge Base Methodology for the Creation of an Optimal Spatial Tessellation

Hipp, James R.

The DOE Knowledge Base is a library of detailed information whose purpose is to improve the capability of the United States National Data Center (USNDC) to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). Much of the data contained by the Knowledge Base is spatial in nature, and some of it is used to improve the accuracy with which seismic locations are determined while maintaining or improving current calculational performance. In this presentation, we define and describe the methodology used to create spatial tessellations of seismic data which are utilized with a gradient-modified natural-neighbor interpolation method to evaluate travel-time corrections. The goal is to interpolate a specified correction surface, or a group of them, with prescribed accuracy and surface smoothness requirements, while minimizing the number of data points necessary to represent the surface. Maintaining accuracy is crucial toward improving the precision of seismic origin location. Minimizing the number of nodes in the tessellation improves calculational and data access efficiency and performance. The process requires two initialization steps and an iterated 7-step algorithm for inserting new tessellation nodes. First, the residual data from ground truth events are included in the tessellation. These data remain fixed throughout the creation of the triangular tessellation. Next, a coarse grid of nodes is laid over the region to be tessellated. The coarse grid is necessary to define the boundary of the region to be tessellated. Then the 7-step iterated algorithm is performed to add new nodes to the tessellation to ensure that accuracy and smoothness requirements are met. These steps include: 1) all data points in the tessellation are linked together to form a triangular tessellation using a standard Delaunay tessellation technique; 2) all of the data points, excluding the original data and boundary nodes, are smoothed using a length-weighted Laplacian smoother to remove poorly formed triangles; 3) all new data points are assigned corrections by performing a non-stationary Bayesian kriging calculation for each new triangle node; 4) all nodes that exceed surface roughness requirements are split by inserting a new node at the mid-points of the edges that share the rough node; 5) all remaining triangle edge midpoints and centers are interpolated using gradient-modified natural-neighbor interpolation and kriged using the Bayesian kriging algorithm; 6) new nodes are inserted into the tessellation at all edge and triangle mid-points that exceed the specified relative error tolerance between the interpolated and kriged values; and 7) all new insertion nodes are added to the tessellation's node list. Steps 1 through 7 are repeated until all relative error and surface smoothness requirements are satisfied. Results indicate that node densities in the tessellation are largest in regions of high surface curvature, as expected. Generally, gradient-modified natural-neighbor interpolation methods do a better job than linear natural-neighbor methods at meeting accuracy requirements, which translates to fewer nodes necessary to represent the surface.
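
A much-simplified version of the refinement loop, substituting direct surface evaluation for kriging and linear interpolation for the gradient-modified natural-neighbor scheme, and omitting the smoothing step, conveys the insert-where-the-error-is-large idea:

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    def surface(xy):
        """Stand-in for the kriged correction surface (assumption: any smooth field)."""
        x, y = xy[:, 0], xy[:, 1]
        return np.sin(3 * x) * np.cos(2 * y)

    g = np.linspace(0.0, 1.0, 5)                     # coarse grid defining the region boundary
    pts = np.array([(x, y) for x in g for y in g])
    tol = 0.02                                       # assumed error tolerance

    for _ in range(10):                              # iterate until no edge midpoint violates the tolerance
        tri = Delaunay(pts)
        interp = LinearNDInterpolator(tri, surface(pts))
        edges = set()                                # gather unique edges of the current tessellation
        for simplex in tri.simplices:
            for i in range(3):
                a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
                edges.add((a, b))
        mids = np.array([(pts[a] + pts[b]) / 2.0 for a, b in edges])
        err = np.abs(interp(mids) - surface(mids))
        bad = mids[err > tol]
        if len(bad) == 0:
            break
        pts = np.vstack([pts, bad])                  # insert nodes where the interpolation error is too large

    print(len(pts), "nodes in the refined tessellation")

As in the full methodology, nodes accumulate where the surface curvature is high and stay sparse where a few nodes already represent the surface within tolerance.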

Using the DOE Knowledge Base for Special Event Analysis

Armstrong, H.M.; Harris, J.M.; Young, C.J.

The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g. mines, volcanoes), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. Relevant historic events can be identified either by spatial proximity searches or through waveform correlation processing. The locations and waveforms of these events can then be made available for side-by-side comparison and processing. If synthetic modeling is thought to be warranted, a wide variety of relevant contextual information (e.g. crustal thickness and layering, seismic velocities, attenuation factors) can be retrieved and sent to the appropriate applications. Once formed, the synthetics can then be brought in for side-by-side comparison and further processing. Based on our study, we make two general recommendations. First, proper inter-process communication between sensor data analysis software and contextual data analysis software should be developed. Second, some of the Knowledge Base data sets should be prioritized or winnowed to streamline comparison with observed quantities.

Development to Release of CTBT Knowledge Base Datasets

Keyser, Ralph G.

For the CTBT Knowledge Base to be useful as a tool for improving U.S. monitoring capabilities, the contents of the Knowledge Base must be subjected to a well-defined set of procedures to ensure integrity and relevance of the constituent datasets. This paper proposes a possible set of procedures for datasets that are delivered to Sandia National Laboratories (SNL) for inclusion in the Knowledge Base. The proposed procedures include defining preliminary acceptance criteria, performing verification and validation activities, and subjecting the datasets to approval by domain experts. Preliminary acceptance criteria include receipt of the data, its metadata, and a proposal for its usability for U.S. National Data Center operations. Verification activities establish the correctness and completeness of the data, while validation activities establish the relevance of the data to its proposed use. Results from these activities are presented to domain experts, such as analysts and peers, for final approval of the datasets for release to the Knowledge Base. Formats and functionality will vary across datasets, so the procedures proposed herein define an overall plan for establishing integrity and relevance of the dataset. Specific procedures for verification, validation, and approval will be defined for each dataset, or for each type of dataset, as appropriate. Potential dataset sources, including Los Alamos National Laboratory and Lawrence Livermore National Laboratory, have contributed significantly to the development of this process.

FAA Fluorescent Penetrant Activities - An Update

Moore, David G.

The Federal Aviation Administration's Airworthiness Assurance NDI Validation Center (AANC) is currently characterizing low cycle fatigue specimens that will support the needs of penetrant manufacturers, the commercial airline industry, and the Federal Aviation Administration. The main focus of this characterization is to maintain and enhance the evaluation of penetrant inspection materials and apply resources to support the aircraft community needs. This paper discusses efforts to date to document the Wright Laboratory penetrant evaluation process and characterize penetrant brightness readings in the initial set of sample calibration panels using Type 1 penetrant.

Remediation of a Classified Waste Landfill at Sandia National Laboratories, NM

Ward, Dann C.

The Sandia National Laboratories/New Mexico (SNL/NM) Environmental Restoration Project is currently excavating the Classified Waste Landfill in Technical Area II (TA-II), which consists of disposal pits and trenches with discrete disposal cells. TA-II is a secure, controlled-access research facility managed by SNL/NM for the US Department of Energy (DOE). The 45-acre facility was established in 1948 for the assembly and maintenance of nuclear weapons. The assembly of weapons was discontinued in 1954. Since that time, TA-II has been used primarily for explosive research and testing. Beginning in 1984, the DOE Environmental Restoration Program conducted several environmental investigations across TA-II and SNL/NM. These investigations identified sites requiring further study and possible corrective action. The majority of these sites were grouped into operable units (OUs). The TA-II OU included 13 sites, one of which is identified as the Classified Waste Landfill (CWLF). The CWLF covers about 2.5 acres and was operated from approximately 1947 through 1987. It was the site for disposal of classified weapon components, some of which are potentially explosive, hazardous, and/or radioactively contaminated. Until about 1958, no records were maintained for material disposed of in the CWLF. Information on the CWLF has been assembled from interview notes, delivery-to-reclamation records, and other sources. Items disposed of included security containers, hoppers, skids, missiles, wooden boxes, deactivated heat sources, tritium boosters, scintillation cocktails, weapons cases, shells, lasers, radar equipment, and accountable materials. Potential contaminants include tritium, thorium, cesium-137, strontium-90, uranium, plutonium, beryllium, cadmium, lithium, chloroform, toluene, benzene, and other solvents.

Effects of Spatially Heterogeneous Porosity on Matrix Diffusion as Investigated by X-ray Absorption Imaging

Journal of Contaminant Hydrology

Tidwell, Vincent C.

Laboratory experiments were performed to investigate the effects of spatial variation in porosity on matrix-diffusion processes. Four centimeter-scale slabs of Culebra dolomite taken from the Waste Isolation Pilot Plant site were used in the tests. Experiments involved the simple diffusion of iodine into a single edge of each rock slab while X-ray absorption imaging was used to measure the resulting two-dimensional solute concentration field as a function of time. X-ray imaging was also used to quantify the two-dimensional porosity field of each rock slab. Image analysis provided a unique opportunity to both visualize and quantify the effects of the spatially variable porosity on matrix diffusion. Four key results were obtained. First, significant variation in rates of diffusion was realized over the relatively small length (centimeter) and time scales (months) investigated. Second, clear evidence of diffusion preferentially following zones of relatively higher porosity was noted. Third, the rate of diffusion was found to vary as the tracer diffused into the rock slabs, encountering changing porosity conditions. Fourth, strong correlation between porosity and the calculated diffusion coefficients was found. In fact, the nature of the correlation can be related to the geometry, position, and orientation of the heterogeneous porosity features populating each rock slab.
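
A minimal numerical analogue of the experiment, one-dimensional diffusion into a slab with a spatially variable, porosity-dependent diffusion coefficient, is sketched below; the porosity field, the porosity-diffusivity relation, and all numerical values are illustrative assumptions:

    import numpy as np
    nx, dx, dt, nsteps = 200, 1e-4, 5.0, 20000       # 2 cm domain, grid spacing in m, time step in s (assumed)
    rng = np.random.default_rng(2)
    phi = 0.12 + 0.05 * rng.random(nx)               # heterogeneous porosity field (assumed)
    D = 1.0e-10 * phi / 0.15                         # assumed linear porosity-diffusivity relation, m^2/s
    C = np.zeros(nx)
    C[0] = 1.0                                       # constant-concentration boundary (tracer reservoir)
    for _ in range(nsteps):
        flux = -0.5 * (D[:-1] + D[1:]) * (C[1:] - C[:-1]) / dx   # face-centered Fickian fluxes
        C[1:-1] += dt * (flux[:-1] - flux[1:]) / dx
        C[0], C[-1] = 1.0, C[-2]                     # fixed inlet, zero-gradient far boundary
    print("penetration profile (every 20th node):", np.round(C[::20], 3))

In such a sketch, as in the imaged slabs, the tracer front advances faster through the higher-porosity (higher-diffusivity) zones.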

Self-Organized Growth of Alloy Superlattices

Nature

Floro, Jerrold A.

We predict theoretically and demonstrate experimentally the spontaneous formation of a superlattice during crystal growth. When a strained alloy grows by "step flow", the steps at the surface form periodic bunches. The resulting modulated strain biases the incorporation of the respective alloy components at different steps in the bunch, leading to the formation of a superlattice. X-ray diffraction and electron microscopy for SiGe grown on Si give clear evidence for such spontaneous superlattice formation.

Conformational Diversity in (Octaethylporphinato) (trichloroacetato)iron(III) Derivatives

Inorganic Chemistry Acta

Shelnutt, John A.

Treatment of [Fe(OEP)]2O with trichloroacetic acid results in the formation of (octaethylporphinato)(trichloroacetato)iron(III). Various crystalline solvates can be isolated, depending on the crystallization solvent. Initial crystallization with CHCl3/hexanes resulted in the isolation of an unsolvated form, [Fe(OEP)(O2C2Cl3)]. This form contains distinct porphyrin core conformations at the same site: one is domed and the other is ruffled. Crystal data for [Fe(OEP)(O2C2Cl3)]: a = 14.734(4) Å, b = 13.674(1) Å, c = 17.541(5) Å, β = 90.67(1)°, V = 35?5.8(14) Å³, monoclinic, space group P21/n, Z = 4. Subsequent crystallization with CHCl3/hexanes resulted in a new crystalline form, [Fe(OEP)(O2C2Cl3)]·CHCl3; the porphyrin core is slightly ruffled. Crystal data for [Fe(OEP)(O2C2Cl3)]·CHCl3: a = 12.323(1) Å, b = 13.062(3) Å, c = 14.327(2) Å, α = 89.32(1)°, β = 113.36(2)°, γ = 105.26(1)°, V = 2031.3(6) Å³, triclinic, space group P1̄, Z = 2. Crystallization with CH2Cl2/hexanes resulted in the isolation of yet another form, [Fe(OEP)(O2C2Cl3)]·HO2C2Cl3, which contains two independent molecules in the unit cell: molecule A is slightly saddled and molecule B is modestly ruffled. Crystal data for [Fe(OEP)(O2C2Cl3)]·HO2C2Cl3: a = 13.148(3) Å, b = 13.455(3) Å, c = 23.761(5) Å, α = 90.72(3)°, β = 91.?4(3)°, γ = 92.36(3)°, V = 4198.5(15) Å³, triclinic, space group P1̄, Z = 4. All conformations form dimers in the solid state. Temperature-dependent magnetic susceptibility measurements showed that [Fe(OEP)(O2C2Cl3)]·CHCl3 contains a high-spin iron(III) center; the data for [Fe(OEP)(O2C2Cl3)]·HO2C2Cl3 are understood in terms of an admixed intermediate-spin state (S = 3/2, 5/2) and are readily fit to a Maltempo model with a ground-state multiplet containing about 78% S = 5/2 character and 22% S = 3/2 character. The structural data for [Fe(OEP)(O2C2Cl3)]·CHCl3 are consistent with the observed high-spin state, while data for [Fe(OEP)(O2C2Cl3)]·HO2C2Cl3 are consistent with the admixed-spin iron(III) character. The observed core conformations have been described by a normal-coordinate structural decomposition method.

Unipolar Complementary Circuits Using Double Electron Layer Tunneling Transistors

Applied Physics Letters

Simmons, Jerry A.

We demonstrate unipolar complementary circuits consisting of a pair of resonant tunneling transistors based on the gate control of 2D-2D interlayer tunneling, where a single transistor, in addition to exhibiting a well-defined negative differential resistance, can be operated with either positive or negative transconductance. Details of the device operation are analyzed in terms of the quantum capacitance effect and band-bending in a double quantum well structure, and show good agreement with experiment. Application of resonant tunneling complementary logic is discussed by demonstrating a complementary static random access memory using two devices connected in series.

Evolution of 2D Potts Model Grain Microstructures from an Initial Hillert Size Distribution

Battaile, Corbett C.

Grain growth experiments and simulations exhibit self-similar grain size distributions quite different from that derived via a mean field approach by Hillert [1]. To test whether this discrepancy is due to insufficient anneal times, two different two-dimensional grain structures with realistic topologies and Hillert grain size distributions are generated and subjected to grain growth via the Monte Carlo Potts Model (MCPM). In both cases, the observed self-similar grain size distributions deviate from the initial Hillert form and conform instead to that observed in MCPM grain growth simulations that start from a random microstructure. This suggests that the Hillert grain size distribution is not an attractor.
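
For reference, the Metropolis kernel of a Potts grain growth simulation is quite compact; the sketch below starts from a random microstructure (the paper instead constructs initial structures with Hillert size distributions) and uses an assumed lattice size, state count, and temperature:

    import numpy as np
    rng = np.random.default_rng(0)
    L, Q, kT = 64, 32, 0.5                    # lattice size, number of spin states, simulation temperature (assumed)
    spins = rng.integers(0, Q, size=(L, L))

    def site_energy(s, i, j, q):
        """Number of unlike nearest neighbors for state q at site (i, j), periodic boundaries."""
        nbrs = [s[(i+1) % L, j], s[(i-1) % L, j], s[i, (j+1) % L], s[i, (j-1) % L]]
        return sum(1 for n in nbrs if n != q)

    for _ in range(50 * L * L):               # Metropolis sweeps
        i, j = rng.integers(0, L, size=2)
        old = spins[i, j]
        # propose switching to the state of a randomly chosen neighbor (standard grain-growth move)
        ni, nj = [((i+1) % L, j), ((i-1) % L, j), (i, (j+1) % L), (i, (j-1) % L)][rng.integers(0, 4)]
        new = spins[ni, nj]
        dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, old)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            spins[i, j] = new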

An Overview of HATS: A Language Independent High Assurance Transformation System

Winter, V.L.

Transformations that are based on syntax-directed rewriting systems can have a significant impact on the construction of high assurance systems. However, in order for a transformational approach to be useful to a particular problem domain, a (general) transformation system must be adapted to the notation of that particular domain. A transformation system that can be easily adapted to various domain notations has the potential of having a wide range of applicability. In this paper we discuss why transformation is attractive from a high assurance perspective, as well as some issues surrounding automated transformation within specific problem domains. We then give an overview of a language independent High Assurance Transformation System (HATS) that is being developed at Sandia National Laboratories.

Validation of Electrical-Impedance Tomography for Measurements of Material Distribution in Two-Phase Flows

International Journal of Multiphase Flow

Torczynski, John R.

A series of studies is presented in which an electrical-impedance tomography (EIT) system is validated for two-phase flow measurements. The EIT system, developed at Sandia National Laboratories, is described along with the computer algorithm used for reconstructing phase volume fraction profiles. The algorithm is first tested using numerical data and experimental phantom measurements, with good results. The EIT system is then applied to solid-liquid and gas-liquid flows, and results are compared to an established gamma-densitometry tomography (GDT) system. In the solid-liquid flows, the average solid volume fractions measured by EIT are in good agreement with nominal values; in the gas-liquid flows, average gas volume fractions and radial gas volume fraction profiles from GDT and EIT are also in good agreement.

Use of Dissolved and Colloidal Actinide Parameters within the 1996 Waste Isolation Pilot Plant Compliance Certification Application

Stockman, Christine T.

Many of the papers in this volume present detailed descriptions of the chemical analyses and methodologies that have been used to evaluate the maximum dissolved and colloid concentrations of actinides within the WIPP repository as part of the performance assessment. This paper describes the program for collecting experimental data and provides an overview of how the PA modeled the release of radionuclides to the accessible environment, and how solubility and colloid parameters were used by the PA models.

Experimental Demonstration of Guiding and Bending of Electromagnetic Waves in a Photonic Crystal

Science

Lin, Shawn-Yu

The routing and interconnection of optical signals through narrow channels and around sharp corners is important for large-scale all-optical circuit applications. A recent computational result suggests that photonic crystals may offer a novel way of achieving this goal by providing a mechanism for guiding light that is fundamentally different from traditional index guiding. Waveguiding in a photonic crystal, and near 100% transmission of electromagnetic waves around sharp 90° corners, were observed experimentally. Bending radii were made smaller than one wavelength.

An Overview of Surface Finishes and Their Role in Printed Circuit Board Solderability and Solder Joint Performance

Circuit World

Vianco, Paul T.

An overview has been presented on the topic of alternative surface finishes for package I/Os and circuit board features. Aspects of processability and solder joint reliability were described for the following coatings: baseline hot-dipped, plated, and plated-and-fused 100Sn and Sn-Pb coatings; Ni/Au; Pd, Ni/Pd, and Ni/Pd/Au finishes; and the recently marketed immersion Ag coatings. The Ni/Au coatings appear to provide the all-around best option in terms of solderability protection and wire bondability. Nickel/Pd finishes offer a slightly reduced level of performance in these areas that is most likely due to variable Pd surface conditions. It is necessary to minimize dissolved Au or Pd contents in the solder material to prevent solder joint embrittlement. Ancillary aspects were also discussed, including thickness measurement techniques, the importance of finish compatibility with conformal coatings and conductive adhesives, and the need for alternative finishes for the processing of non-Pb-bearing solders.

Low-¹⁸O Silicic Magmas: Why Are They So Rare?

Earth and Planetary Science Letters

Balsley, Steven D.

Low-¹⁸O silicic magmas are reported from only a small number of localities (e.g., Yellowstone and Iceland), yet petrologic evidence points to upper crustal assimilation coupled with fractional crystallization (AFC) during magma genesis for nearly all silicic magmas. The rarity of low-¹⁸O magmas in intracontinental caldera settings is remarkable given the evidence of intense low-¹⁸O meteoric hydrothermal alteration in the subvolcanic remnants of larger caldera systems. In the Platoro caldera complex, regional ignimbrites (150-1000 km³) have plagioclase δ¹⁸O values of 6.8 ± 0.1‰, whereas the Middle Tuff, a small-volume (est. 50-100 km³) post-caldera collapse pyroclastic sequence, has plagioclase δ¹⁸O values between 5.5 and 6.8‰. On average, the plagioclase phenocrysts from the Middle Tuff are depleted by only 0.3‰ relative to those in the regional tuffs. At Yellowstone, small-volume post-caldera collapse intracaldera rhyolites are up to 5.5‰ depleted relative to the regional ignimbrites. Two important differences between the Middle Tuff and the Yellowstone low-¹⁸O rhyolites elucidate the problem. Middle Tuff magmas reached water saturation and erupted explosively, whereas most of the low-¹⁸O Yellowstone rhyolites erupted effusively as domes or flows, and are nearly devoid of hydrous phenocrysts. Comparing the two eruptive types indicates that assimilation of low-¹⁸O material, combined with fractional crystallization, drives silicic melts to water oversaturation. Water-saturated magmas either erupt explosively or quench as subsurface porphyries before the magmatic ¹⁸O can be dramatically lowered. Partial melting of low-¹⁸O subvolcanic rocks by near-anhydrous magmas at Yellowstone produced small-volume, low-¹⁸O magmas directly, thereby circumventing the water saturation barrier encountered through normal AFC processes.

Multispectral UV Fluorescence Detection of a Dilute Constituent in an Optically Dense Matrix

Applied Optics

Chan, O.H.; Rubenstein, R.; Tisone, G.C.; Wagner, J.S.

Multispectral UV fluorescence measurements were made of an optically dense medium (fetal bovine serum, FBS) spiked with sodium salicylate at concentrations from 0.2 to 500 µg/ml. Analysis of the spectra shows that, depending on experimental conditions, reasonably good estimates of concentration can be obtained across the entire range of concentrations. Experimental conditions required for recovering these estimates are demonstrated.
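
One simple way to estimate an analyte contribution from multispectral data is classical least-squares unmixing against reference spectra; the sketch below uses synthetic Gaussian band shapes as stand-ins for the serum and salicylate spectra and is not the analysis used in the paper:

    import numpy as np
    rng = np.random.default_rng(3)
    wavelengths = np.linspace(300, 500, 200)
    serum = np.exp(-((wavelengths - 340) / 40.0) ** 2)        # assumed matrix (serum) band shape
    salicylate = np.exp(-((wavelengths - 410) / 30.0) ** 2)   # assumed analyte band shape
    A = np.column_stack([serum, salicylate])                  # columns = reference spectra
    true_weight = 0.05
    measured = 1.0 * serum + true_weight * salicylate + 0.01 * rng.standard_normal(len(wavelengths))
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)     # solve for component weights
    print("estimated analyte weight:", coeffs[1])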

Predictable Safety in the Control of High Consequence Systems

Covan, John M.

Many industries transmit large amounts of energy under the control of safety-critical systems; inadvertent release of energy by such systems can result in high negative consequences. This paper describes a principle-based strategy for preventing inadvertent release due to normal operational stresses or abnormal (e.g., accident) stresses. The safety principles, developed by Sandia National Laboratories for embedding detonation safety in nuclear weapons, include isolation, inoperability, and incompatibility. These principles are defined in the paper. They are illustrated and contrasted to conventional practice via the application to a gas furnace control system.

Use of Z-Pinch Sources for High-Pressure Equation-of-State Studies

Asay, J.R.

In this paper, we describe a new technique for using a pulsed power source (Z pinch) to produce planar shock waves for high-pressure equation of state (EOS) studies. Initial EOS experiments conducted with these techniques indicate that these sources are effective for shock wave studies in samples with diameters of a few millimeters and thicknesses of a fraction of one millimeter, and thus provide the possibility of achieving accuracy in shock and particle velocity measurements of a few percent. We have used the Z pinch source to produce the first in-situ, time-resolved particle velocity profiles obtained with pulsed radiation sources in the Mbar regime. Particle velocity profiles obtained with a VISAR interferometer are compared with 1-D numerical simulations performed with a radiation-hydrodynamics code, ALEGRA. Good agreement with experimental results was achieved in the simulations and suggests that the Z pinch source should be a valuable tool for high-pressure EOS studies in thermodynamic regimes important to hypervelocity impact.
