Publications

Results 90201–90250 of 99,299

Distributed Sensor Particles for Remote Fluorescence Detection of Trace Analytes: UXO/CW

Singh, Anup K.; Schmitt, Randal L.; Johnson, Mark S.; Hargis Jr., Philip J.; Simonson, Robert J.; Schoeniger, Joseph S.; Ashley, Carol S.; Brinker, C.J.; Hance, Bradley G.

This report summarizes the development of sensor particles for remote detection of trace chemical analytes over broad areas, e.g., residual trinitrotoluene from buried landmines or other unexploded ordnance (UXO). We also describe the potential of the sensor particle approach for the detection of chemical warfare (CW) agents. The primary goal of this work has been the development of sensor particles that incorporate sample preconcentration, analyte molecular recognition, chemical signal amplification, and fluorescence signal transduction within a "grain of sand". Two approaches for particle-based chemical-to-fluorescence signal transduction are described: (1) enzyme-amplified immunoassays using biocompatible inorganic encapsulants, and (2) oxidative quenching of a unique fluorescent polymer by TNT.

Rio Grande Erosion Potential Demonstration - Report for the National Border Technology Program

Jepsen, Richard A.; Roberts, Jesse D.

This demonstration project is a collaboration among DOE, Sandia National Laboratories, the University of Texas at El Paso (UTEP), the International Boundary and Water Commission (IBWC), and the US Army Corps of Engineers (USACE). Sandia deployed and demonstrated a field measurement technology that enables the determination of erosion and transport potential of sediments in the Rio Grande. The technology deployed was the Mobile High Shear Stress Flume. This unique device was developed by Sandia's Carlsbad Programs for the USACE and has been used extensively in collaborative efforts on near-shore and river systems throughout the United States. Since surface water quantity and quality, along with human health, are important concerns of the National Border Technology Program, technologies that aid in characterizing, managing, and protecting this valuable resource from possible contamination sources are imperative.

Silicon Three-Dimensional Photonic Crystal and its Applications

Lin, Shawn-Yu; Fleming, J.G.; Lyo, Sungkwun K.

Photonic crystals are periodically engineered "materials" that are the photonic analogues of electronic crystals. Much like electronic crystals, photonic crystal materials can have a variety of crystal symmetries, such as simple-cubic, close-packed, wurtzite, and diamond-like crystals. These structures were first proposed in the late 1980s. However, due mainly to fabrication difficulties, working photonic crystals at near-infrared and visible wavelengths are only just emerging. In this article, we review the construction of two- and three-dimensional photonic crystals of different symmetries at infrared and optical wavelengths using advanced semiconductor processing. We further demonstrate that this process lends itself to the creation of line defects (linear waveguides) and point defects (micro-cavities), which are the most basic building blocks for optical signal processing, filtering, and routing.

Embedded Self-Powered MicroSensors for Monitoring the Surety of Critical Buildings and Infrastructures

Pfeifer, Kent B.; Rumpf, Arthur N.; Leming, Sarah L.

Monitoring the condition of critical structures is vital not only for assuring occupant safety and security during naturally occurring and malevolent events, but also for determining the fatigue rate under normal aging conditions and allowing for efficient upgrades. This project evaluated the feasibility of applying integrated, remotely monitored micro-sensor systems to assess the structural performance of critical infrastructure. These measurement systems will provide forensic data on structural integrity, health, response, and overall structural performance in load environments such as aging, earthquake, severe wind, and blast attacks. We have investigated the development of "self-powered" sensor tags that can be used to monitor the state-of-health of a structure and can be embedded in that structure without compromising its integrity. A sensor system powered by converting structural stresses into electrical power via piezoelectric transducers has been demonstrated, including work toward integrating that sensor with a novel radio frequency (RF) tagging technology as a means of remotely reading the data from the sensor.

Dexterous Manipulation: Making Remote Manipulators Easy to Use

Harrigan, Raymond W.; Bennett, Phil C.

Perhaps the most basic barrier to the widespread deployment of remote manipulators is that they are very difficult to use. Remote manual operations are fatiguing and tedious, while fully autonomous systems are seldom able to function in changing and unstructured environments. An alternative approach to these extremes is to exploit computer control while leaving the operator in the loop to take advantage of the operator's perceptual and decision-making capabilities. This report describes research that is enabling gradual introduction of computer control and decision making into operator-supervised robotic manipulation systems, and its integration on a commercially available, manually controlled mobile manipulator.

The ASCI Network for SC 2000: Gigabyte Per Second Networking

Pratt, Thomas J.; Naegle, John H.; Martinez, Luis G.; Hu, Tan C.; Miller, Marc M.; Barnaby, Marty L.; Adams, Roger L.; Klaus, Edward J.

This document highlights the DISCOM Distance Computing and Communication team's activities at the 2000 Supercomputing conference (SC 2000) in Dallas, Texas, sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade. For the last five years, Sandia National Laboratories, Los Alamos National Lab, and Lawrence Livermore National Lab have come together at the conference under the rubric of the DOE's Accelerated Strategic Computing Initiative (ASCI) Program to demonstrate ASCI's emerging capabilities in computational science and our combined expertise in high performance computer science and communication networking developments within the program. DISCOM2 uses this forum to demonstrate and focus those developments. At SC 2000, DISCOM demonstrated an infrastructure that included a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second IP network data transfer application, and VPN technology that enabled a remote Distributed Resource Management tools demonstration. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speeds and distances similar to the then-proposed DISCOM WAN. SCinet at SC 2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology while on the booth. We also supported the production networking needs of the convention exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high performance computing networking.

An Evaluation of Molten-Salt Power Towers Including Results of the Solar Two Project

Reilly, Hugh E.; Kolb, Gregory J.

This report utilizes the results of the Solar Two project, as well as continuing technology development, to update the technical and economic status of molten-salt power towers. The report starts with an overview of power tower technology, including the progression from Solar One to the Solar Two project. This discussion is followed by a review of the Solar Two project--what was planned, what actually occurred, what was learned, and what was accomplished. The third section presents preliminary information regarding the likely configuration of the next molten-salt power tower plant. This section draws on Solar Two experience as well as results of continuing power tower development efforts conducted jointly by industry and Sandia National Laboratories. The fourth section details the expected performance and cost goals for the first commercial molten-salt power tower plant and includes a comparison of the commercial performance goals to the actual performance at Solar One and Solar Two. The final section summarizes the successes of Solar Two and the current technology development activities. The data collected from the Solar Two project suggest that the electricity cost goals established for power towers are reasonable and can be achieved with some simple design improvements.

A Critical Review of the State-of-the-Art in Autonomous Land Vehicle Systems and Technology

Eicker, Patrick J.

This report describes the current state of the art in Autonomous Land Vehicle (ALV) systems and technology. Five functional technology areas are identified and addressed. For each, a brief, subjective preface is first provided that envisions the necessary technology for the deployment of an operational ALV system. Subsequently, a detailed literature review is provided to support and elaborate these views. It is further established how these five technology areas fit together as a functioning whole. The essential conclusion of this report is that the sensors, algorithms, and methods necessary to develop and demonstrate an operationally viable all-terrain ALV already exist and could be readily deployed. A second conclusion is that the successful development of an operational ALV system will rely on an effective approach to systems engineering. In particular, a precise description of mission requirements and a clear definition of component functionality are essential.

A Case Study in Competitive Technical and Market Intelligence Support and Lessons Learned for the µChemLab LDRD Grand Challenge Project

Southwell, Edwin T.; Garcia, Marie L.; Meyers, Charles E.

The µChemLab™ Laboratory Directed Research and Development (LDRD) Grand Challenge project began in October 1996 and ended in September 2000. The technical managers of the µChemLab™ project and the LDRD office, with the support of a consultant, conducted a competitive technical intelligence and market demand intelligence (CTI/MDI) analysis of the µChemLab™. The managers used this knowledge to make project decisions and course adjustments. CTI/MDI positively impacted the project's technology development, uncovered potential technology partnerships, and supported eventual industry partner contacts. CTI/MDI analysis is now seen as due diligence, and the µChemLab™ project is now the model for other Sandia LDRD Grand Challenge undertakings. This document describes the CTI/MDI analysis and captures the more important "lessons learned" of this Grand Challenge project, as reported by the project's management team.

Applications of Transport/Reaction Codes to Problems in Cell Modeling

Means, Shawn A.; Rintoul, Mark D.; Shadid, John N.

We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
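
The models above are instances of reaction-diffusion (transport/reaction) PDEs. As a schematic of the equation class only (the 1-D geometry, coefficients, and bistable reaction term below are illustrative stand-ins, not the egg or nerve-terminal models), an explicit finite-difference solver can be sketched as:

```python
# Generic 1-D transport/reaction solver: u_t = D u_xx + f(u).
# All values are illustrative; this is not a Sandia code.
import numpy as np

D, L, nx = 20.0, 100.0, 200            # diffusivity [um^2/s], domain length [um]
dx = L / (nx - 1)
dt = 0.4 * dx**2 / (2 * D)             # safely under the explicit stability limit
u = np.zeros(nx)
u[:10] = 1.0                           # initiate a wave at the left edge

def f(u, a=0.1):                       # bistable (excitable) reaction term
    return 5.0 * u * (u - a) * (1.0 - u)

for _ in range(4000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]  # crude no-flux boundaries
    u = u + dt * (D * lap + f(u))

front = np.argmax(u < 0.5) * dx        # first point ahead of the wave front
print(f"front position after {4000*dt:.2f} s: {front:.1f} um")
```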

The Visionary's Dilemma

Myers, David R.; Sumpter, Carol W.; Jakubczak II, Jerome F.

Novel technologies are often born before application arenas have been identified that can provide the financial support for their development and maturation. After creating new technologies, innovators rush to identify some previously difficult-to-meet product or process challenge. In this regard, microsystems technology is following a path that many other electronic technologies have faced before it. From this perspective, the development of a robust technology follows three stages. First there is the "That idea will never work" stage, which is hurdled only by proving the concept. Next is the "Why use such a novel (unproven) technology instead of a conventional one?" stage. This stage is overcome when a particularly important device cannot be made economically--or at all--through the existing technological base. This initial incorporation forces at least limited use of the new technology, which in turn provides the revenues and the user base to mature and sustain the technology. Finally there is the "Sure, that technology (e.g., microsystems) is good for that product (e.g., accelerometers and pressure sensors), but the problems are too severe for any other application" stage, which is overcome only with the across-the-board application of the new technology. With an established user base, champions for the technology become willing to apply the new technology as a potential solution to other problems. This results in the widespread diffusion of the previously shunned technology, making the formerly disruptive technology the new standard. Like many technologies in the microelectronics industry before it, microsystems technology is now traversing this well-worn path. This paper examines the evolution of microsystems technology from the perspective of Sandia National Laboratories' development of a sacrificial surface micromachining technology and the associated infrastructure.

3-D finite element analysis of induction logging in a dipping formation

IEEE Transactions on Geoscience and Remote Sensing

Weiss, Chester J.

Electromagnetic induction (EMI) by a magnetic dipole located above a dipping interface is of relevance to the petroleum well-logging industry. The problem is fully three-dimensional (3-D) when formulated as above, but it reduces to an analytically tractable one-dimensional (1-D) problem when cast as a small tilted coil above a horizontal interface. The two problems are related by a simple coordinate rotation. An examination of the induced eddy currents and the electric charge accumulation at the interface helps to explain the inductive and polarization effects commonly observed in induction logs from dipping geological formations. The equivalence between the 1-D and 3-D formulations of the problem enables the validation of a previously published finite element solver for 3-D controlled-source EMI.
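
The "simple coordinate rotation" can be made explicit. As a sketch, with the dip angle α and the strike taken along y (conventions mine, not necessarily the paper's):

```latex
% Rotation by the dip angle \alpha about the strike axis (y) maps the 3-D
% problem (vertical dipole over a dipping interface) onto the 1-D problem
% (tilted dipole over a horizontal interface):
\mathbf{R}(\alpha) =
\begin{pmatrix}
\cos\alpha & 0 & \sin\alpha \\
0 & 1 & 0 \\
-\sin\alpha & 0 & \cos\alpha
\end{pmatrix},
\qquad
\mathbf{m}' = \mathbf{R}(\alpha)\,\mathbf{m},
\qquad
\mathbf{r}' = \mathbf{R}(\alpha)\,\mathbf{r}.
```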

Directory Enabled Policy Based Networking

Keliiaa, Curtis M.

This report presents a discussion of directory-enabled policy-based networking, with an emphasis on its role as the foundation for securely scalable enterprise networks. A directory service provides the object-oriented logical environment for interactive cyber-policy implementation. Cyber-policy implementation includes security, network management, operational process, and quality-of-service policies. The leading network-technology vendors have invested in these technologies for secure universal connectivity that traverses Internet, extranet, and intranet boundaries. Industry standards are established that provide the fundamental guidelines for directory deployment scalable to global networks. The integration of policy-based networking with directory-service technologies provides for intelligent management of the enterprise network environment as an end-to-end system of related clients, services, and resources. This architecture allows logical policies to protect data, manage security, and provision critical network services, permitting a proactive defense-in-depth cyber-security posture. Enterprise networking imposes the consideration of supporting multiple computing platforms, sites, and business-operation models. An industry-standards-based approach, combined with principled systems engineering in the deployment of these technologies, allows these issues to be successfully addressed. This discussion is focused on a directory-based policy architecture for the heterogeneous enterprise network-computing environment and does not propose specific vendor solutions. This document is written to present practical design methodology and to provide an understanding of the risks, complexities, and, most importantly, the benefits of directory-enabled policy-based networking.

Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

Bartel, Timothy J.; Plimpton, Steven J.; Gallis, Michael A.

Icarus is a 2-D Direct Simulation Monte Carlo (DSMC) code that has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flows from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they collide with other particles or with surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled either through a local charge-neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
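
For readers unfamiliar with the DSMC method of Bird, the core of a collision step can be sketched as follows. This is a generic single-cell illustration of Bird's no-time-counter (NTC) scheme, not Icarus source code (Icarus is Fortran), and all parameter values are arbitrary:

```python
# Minimal single-cell DSMC collision step using Bird's NTC scheme,
# hard-sphere molecules only. Generic illustration, not Icarus code.
import numpy as np

kB = 1.380649e-23                                 # Boltzmann constant [J/K]

def ntc_collisions(v, Fn, cell_vol, dt, d_ref, rng):
    """One NTC collision step on particle velocities v (N,3).
    Fn: real molecules per simulation particle; d_ref: molecular diameter."""
    N = len(v)
    sigma = np.pi * d_ref**2                      # hard-sphere cross-section
    # Crude upper bound on the maximum relative speed; production codes
    # track and adapt this quantity per cell.
    cr_max = 2.0 * np.linalg.norm(v - v.mean(0), axis=1).max() + 1e-30
    n_cand = int(0.5 * N * (N - 1) * Fn * sigma * cr_max * dt / cell_vol)
    for _ in range(n_cand):
        i, j = rng.choice(N, 2, replace=False)
        cr = np.linalg.norm(v[i] - v[j])
        if rng.random() < cr / cr_max:            # accept with probability cr/cr_max
            # Isotropic post-collision relative velocity (hard spheres)
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            cr_new = cr * np.array([cos_t, sin_t*np.cos(phi), sin_t*np.sin(phi)])
            vcm = 0.5 * (v[i] + v[j])
            v[i], v[j] = vcm + 0.5 * cr_new, vcm - 0.5 * cr_new
    return v

rng = np.random.default_rng(0)
T, m = 300.0, 6.63e-26                            # argon-like gas at 300 K
v = rng.normal(0.0, np.sqrt(kB*T/m), (500, 3))    # Maxwellian velocity sample
v = ntc_collisions(v, Fn=1e10, cell_vol=1e-9, dt=1e-6, d_ref=4.17e-10, rng=rng)
```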

Verification and Validation of Encapsulation Flow Models in GOMA, Version 1.1

Mondy, Lisa A.; Rao, Rekha R.; Schunk, Peter R.; Sackinger, Philip A.; Adolf, Douglas B.

Encapsulation is a common process used in manufacturing most non-nuclear components, including firing sets, neutron generators, trajectory sensing signal generators (TSSGs), arming, fusing and firing devices (AF&Fs), radars, programmers, connectors, and batteries. Encapsulation is used to contain high voltage, to mitigate stress and vibration, and to protect against moisture. The purpose of the ASCI Encapsulation project is to develop a simulation capability that will allow us to aid in the encapsulation design process, especially for neutron generators. The introduction of an encapsulant poses many problems because of the need to balance ease of processing against the properties necessary to achieve the design benefits, such as tailored encapsulant properties, an optimized cure schedule, and reduced failure rates. Encapsulants can fail through fracture or delamination as a result of cure shrinkage, thermally induced residual stresses, voids or incomplete component embedding, and particle gradients. Manufacturing design requirements include (1) maintaining a uniform composition of particles in order to maintain the desired coefficient of thermal expansion (CTE) and density, (2) mitigating void formation during mold fill, (3) mitigating cure and thermally induced stresses during cure and cool down, and (4) eliminating delamination and fracture due to cure shrinkage/thermal strains. The first two require modeling of the fluid phase, and it is proposed to use the finite element code GOMA to accomplish this. The latter two require modeling of the solid state; ideally, however, the effects of particle distribution would be included in the calculations, and thus initial conditions would be set from GOMA predictions. These models, once verified and validated, will be transitioned into the SIERRA framework and the ARIA code. This will facilitate exchange of data with the solid mechanics calculations in SIERRA/ADAGIO.

The Physics of SERAPHIM

Marder, Barry M.

The Segmented Rail Phased Induction Motor (SERAPHIM) has been proposed as a propulsion method for urban maglev transit, advanced monorail, and other forms of high speed ground transportation. In this report we describe the technology, consider different designs, and examine its strengths and weaknesses.

Development of a Risk-Based Performance Assessment Method for Long-Term Cover Systems--Application to the Monticello Mill Tailings Repository

Ho, Clifford K.; Arnold, Bill W.; Cochran, John R.; Webb, Stephen W.

A probabilistic, risk-based performance-assessment methodology is being developed to assist designers, regulators, and involved stakeholders in the selection, design, and monitoring of long-term covers for contaminated subsurface sites. This report presents an example of the risk-based performance-assessment method using a repository site in Monticello, Utah. At the Monticello site, a long-term cover system is being used to isolate long-lived uranium mill tailings from the biosphere. Computer models were developed to simulate relevant features, events, and processes that include water flux through the cover, source-term release, vadose-zone transport, saturated-zone transport, gas transport, and exposure pathways. The component models were then integrated into a total-system performance-assessment model, and uncertainty distributions of important input parameters were constructed and sampled in a stochastic Monte Carlo analysis. Multiple realizations were simulated using the integrated model to produce cumulative distribution functions of the performance metrics, which were used to assess cover performance for both present- and long-term future conditions. Performance metrics for this study included the water percolation reaching the uranium mill tailings, radon flux at the surface, groundwater concentrations, and dose. Results of this study can be used to identify engineering and environmental parameters (e.g., liner properties, long-term precipitation, distribution coefficients) that require additional data to reduce uncertainty in the calculations and improve confidence in the model predictions. These results can also be used to evaluate alternative engineering designs and to identify parameters most important to long-term performance.
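
The stochastic loop described above has the following generic shape. Everything in this sketch (the distributions, parameter names, and the one-line "model") is a hypothetical placeholder standing in for the integrated Monticello model:

```python
# Illustrative Monte Carlo performance-assessment loop: sample uncertain
# inputs, run a (here, toy) performance model, and build a CDF of the
# metric. All inputs and the model are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_real = 1000                                     # number of realizations

# Sampled input distributions (hypothetical)
precip = rng.lognormal(mean=np.log(300.0), sigma=0.3, size=n_real)  # mm/yr
k_liner = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=n_real)  # m/s
kd = rng.uniform(0.1, 10.0, size=n_real)          # sorption coefficient, mL/g

# Toy performance metric: annual percolation through the cover (mm/yr)
percolation = precip * (k_liner / (k_liner + 1e-8)) / (1.0 + 0.05 * kd)

# Empirical CDF of the performance metric
x = np.sort(percolation)
cdf = np.arange(1, n_real + 1) / n_real
p95 = x[int(0.95 * n_real)]
print(f"95th-percentile percolation: {p95:.2f} mm/yr")
```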

Routing Data Authentication in Wireless Networks

Torgerson, Mark D.; Van Leeuwen, Brian P.

In this paper, we discuss several specific threats directed at the routing data of an ad hoc network. We address security issues that arise from wrapping authentication mechanisms around ad hoc routing data. We show that this bolt-on approach to security may make certain attacks more difficult, but still leaves the network routing data vulnerable. We also show that under a certain adversarial model, most existing routing protocols cannot be secured with the aid of digital signatures.
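
As context for what "wrapping authentication mechanisms around routing data" means, a bolt-on wrapper looks roughly like the sketch below; the message fields and key handling are hypothetical. Note the limitation the paper identifies: a valid authenticator proves who sent the routing data, not that the data are truthful.

```python
# Schematic of wrapping an authentication code around a routing update.
# The update fields and shared key are hypothetical. A verified MAC only
# proves the sender held the key, not that the advertised route is honest.
import hmac, hashlib, json

KEY = b"shared-network-key"            # hypothetical pre-shared key

def sign_update(update: dict) -> dict:
    msg = json.dumps(update, sort_keys=True).encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return {"update": update, "mac": tag}

def verify_update(wrapped: dict) -> bool:
    msg = json.dumps(wrapped["update"], sort_keys=True).encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, wrapped["mac"])

route = {"dest": "10.0.3.0/24", "next_hop": "node17", "hops": 4}
wrapped = sign_update(route)
print(verify_update(wrapped))          # True, yet "hops: 4" may still be a lie
```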

Final Report for the Quality of Service for Networks Laboratory Directed Research and Development Project

Eldridge, John M.; Tarman, Thomas D.; Brenkosh, Joseph P.; Dillinger, John D.; Michalski, John T.

The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will result in a degradation of overall network performance. These access and distribution networks currently lack formal mechanisms to select Quality of Service (QoS) attributes for data transport. Network services that require expediency or consistent amounts of bandwidth cannot function properly in a communication environment without the implementation of a QoS structure. This report describes and implements such a structure, which results in the ability to identify, prioritize, and police critical application flows.
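
Policing of flows, the last capability mentioned, is commonly implemented with a token bucket. The sketch below illustrates that generic mechanism under assumed rate and burst parameters; it is not the project's implementation:

```python
# A token-bucket policer, one common mechanism for flow policing;
# a sketch under assumed parameters, not the project's code.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0         # tokens (bytes) added per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Return True if the packet conforms; False means drop or re-mark."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

policer = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)  # 1 Mb/s, 15 kB burst
print(policer.allow(1500))   # a first 1500-byte packet conforms
```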

On the Convergence of Stochastic Finite Elements

Delaurentis, John M.

We investigate the rate of convergence of stochastic basis elements to the solution of a stochastic operator equation. As in deterministic finite elements, the solution may be approximately represented as a linear combination of basis elements. In the stochastic case, however, the solution belongs to a Hilbert space of functions defined on a cross-product domain endowed with the product of a deterministic and a probabilistic measure. We show that if the dimension of the stochastic space is n, and the desired accuracy is of order ε, the number of stochastic elements required to achieve this level of precision, in the Galerkin method, is on the order of |ln ε|^n.
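
Stated compactly, with N(ε) the required number of stochastic elements:

```latex
% n is the stochastic dimension, \varepsilon the target Galerkin accuracy:
N(\varepsilon) \;=\; O\!\left(\,\lvert \ln \varepsilon \rvert^{\,n}\,\right)
\qquad \text{as } \varepsilon \to 0 .
```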

Modification of TOUGH2 to Include the Dusty Gas Model for Gas Diffusion

Webb, Stephen W.

The GEO-SEQ Project is investigating methods for geological sequestration of CO₂. This project, which is directed by LBNL and includes a number of other industrial, university, and national laboratory partners, is evaluating computer simulation methods, including TOUGH2, for this problem. The TOUGH2 code, which is widely used for flow and transport in porous and fractured media, includes simplified methods for gas diffusion based on a direct application of Fick's law. As shown by Webb (1998) and others, the Dusty Gas Model (DGM) is better than Fick's law for modeling gas-phase diffusion in porous media. In order to improve gas-phase diffusion modeling for the GEO-SEQ Project, the EOS7R module in the TOUGH2 code has been modified to include the Dusty Gas Model, as documented in this report. In addition, the liquid diffusion model has been changed from a mass-based formulation to a mole-based formulation. Modifications for separate and coupled diffusion in the gas and liquid phases have also been completed. The results from the DGM are compared to Fick's law behavior for TCE and PCE diffusion across a capillary fringe. The differences are small due to the relatively high permeability (k = 10⁻¹¹ m²) of the problem and the small mole fractions of the gases. Additional comparisons for lower permeabilities and higher mole fractions may be useful.
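
For reference, the two flux laws being compared can be written out. The DGM expression below is the standard molar-flux form from the literature; the notation is the generic one and may differ from the report's:

```latex
% Fick's law, as in a direct application: diffusive flux of component i
\mathbf{J}_i = -\,\phi \tau \rho \, D_i \nabla X_i
% Dusty Gas Model: Knudsen diffusion, binary diffusion, and pressure-driven
% (advective) transport coupled for the molar fluxes N_i; k is permeability,
% \mu viscosity, D_{iK}^{e} and D_{ij}^{e} effective Knudsen and binary
% diffusivities:
\sum_{j \neq i} \frac{x_j \mathbf{N}_i - x_i \mathbf{N}_j}{D_{ij}^{e}}
\;+\; \frac{\mathbf{N}_i}{D_{iK}^{e}}
\;=\; -\,\frac{P\,\nabla x_i}{RT}
\;-\; \frac{x_i \nabla P}{RT}\left(1 + \frac{k\,P}{\mu\,D_{iK}^{e}}\right)
```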

Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors

Cameron, Stewart M.

Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the mismatch between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior that mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics and corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions that would be unstable under conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems, including geometric change detection, pattern recognition, and correlated feature extraction, can be addressed in an inherently parallel fashion without information bottlenecking or external supervision. Using this approach, we believe that autonomous control systems embodied with basic adaptive decision-theoretic capabilities can be developed for imaging and surveillance sensors to improve discrimination in stressing operational environments.

Hybrid Processing of Measurable and Subjective Data

Cooper, James A.

Conventional systems surety analysis is basically restricted to measurable or physical-model-derived data. However, most analyses, including high-consequence system surety analysis, must also utilize subjective information. In order to address this need, there has been considerable effort on analytically incorporating engineering judgment. For example, Dempster-Shafer theory establishes a framework of which frequentist probability and Bayesian incorporation of new data are subsets. Although Bayesian and Dempster-Shafer methodology both allow judgment, neither derives results that can indicate the relative amounts of subjective judgment and measurable data in the results. The methodology described in this report addresses these problems through a hybrid-mathematics-based process that allows tracking of the degree of subjective information in the output, thereby providing more informative (as well as more appropriate) results. In addition, most high-consequence systems present difficult-to-analyze situations. For example, in the Sandia National Laboratories nuclear weapons program, the probability that a weapon responds safely when exposed to an abnormal environment (e.g., lightning, crush, metal-melting temperatures) must be assured to meet a specific requirement. There are also non-probabilistic DOE and DoD requirements (e.g., for determining the adequacy of positive measures). The type of processing required for these and similar situations transcends conventional probabilistic and human-factors methodology. The results described herein address these situations by efficiently utilizing subjective and objective information in a hybrid mathematical structure that applies directly to the surety assessment of high-consequence systems. The results can also improve the quality of the information currently provided to decision-makers. To this end, objective inputs are processed in a conventional manner, while subjective inputs are derived from the combined engineering judgment of experts in the appropriate disciplines. In addition to providing output constituents (including portrayal of uncertainty) corresponding to the combination of these input types, their individual contributions to the resultant uncertainty are determined and provided as part of the output information. Finally, the safety assessment is complemented by a latent effects analysis, facilitated by soft-aggregation accumulation of observed operational constituents.
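
As a concrete instance of the Dempster-Shafer machinery mentioned above, Dempster's rule of combination for two mass functions can be sketched as follows; the frame of discernment and the mass values are hypothetical:

```python
# Dempster's rule of combination for two basic probability assignments
# over a small frame of discernment; standard Dempster-Shafer machinery
# applied to made-up masses.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions; keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

S, U = frozenset({"safe"}), frozenset({"unsafe"})
theta = S | U                                    # full frame (ignorance)
m_meas = {S: 0.7, theta: 0.3}                    # from measurable data
m_judg = {S: 0.5, U: 0.2, theta: 0.3}            # from expert judgment
print(combine(m_meas, m_judg))
```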

ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0

Brown, Kevin H.; Summers, Randall M.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin; Jones, Reese E.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

BAC-G2 Predictions of Thermochemistry for Gas-Phase Aluminum Compounds

Allendorf, Mark

A self-consistent set of thermochemical data for 55 molecules in the Al-H-C-O-F-Cl system is obtained from ab initio quantum-chemistry calculations using the BAC-G2 method. Calculations were performed for both stable and radical species. Good agreement is found between the calculations and experimental heats of formation in most cases where data are available for comparison. Electronic energies, molecular geometries, moments of inertia, and vibrational frequencies are provided in the Supporting Information, as are polynomial fits of the thermodynamic data (heat of formation, entropy, and heat capacity) over the 300--3000 K temperature range.
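
Thermodynamic fits over such a temperature range are commonly stored as NASA-style seven-coefficient polynomials. Assuming that familiar layout (the paper's Supporting Information may format its fits differently), the fitted quantities have the structure:

```latex
% NASA seven-coefficient polynomial form commonly used for such fits:
C_p^{\circ}/R = a_1 + a_2 T + a_3 T^2 + a_4 T^3 + a_5 T^4
H^{\circ}/(RT) = a_1 + \tfrac{a_2}{2} T + \tfrac{a_3}{3} T^2
  + \tfrac{a_4}{4} T^3 + \tfrac{a_5}{5} T^4 + \tfrac{a_6}{T}
S^{\circ}/R = a_1 \ln T + a_2 T + \tfrac{a_3}{2} T^2
  + \tfrac{a_4}{3} T^3 + \tfrac{a_5}{4} T^4 + a_7
```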

Multisublevel Magnetoquantum Conductance in Single and Coupled Double Quantum Wires

Physical Review B

Lyo, Sungkwun K.

We study the ballistic and diffusive magnetoquantum transport using a typical quantum point contact geometry for single and tunnel-coupled double wires that are wide (≲ 1 µm) in one perpendicular direction, with densely populated sublevels, and extremely confined in the other perpendicular (i.e., growth) direction. A general analytic solution to the Boltzmann equation is presented for multisublevel elastic scattering at low temperatures. The solution is employed to study interesting magnetic-field-dependent behavior of the conductance, such as a large enhancement and quantum oscillations of the conductance for various structures and field orientations. These phenomena originate from the following field-induced properties: magnetic confinement, displacement of the initial- and final-state wave functions for scattering, variation of the Fermi velocities, mass enhancement, depopulation of the sublevels, and anticrossing (in double quantum wires). The magnetoconductance is strikingly different in long diffusive (or rough, dirty) wires from the quantized conductance in short ballistic (or clean) wires. Numerical results obtained for rectangular confinement potentials in the growth direction are satisfactorily interpreted in terms of the analytic solutions based on harmonic confinement potentials. Some of the predicted features of the field-dependent diffusive and quantized conductances are consistent with recent data from GaAs/AlₓGa₁₋ₓAs double quantum wires.
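
For context, the quantized ballistic conductance contrasted here is the standard Landauer staircase: with N spin-degenerate sublevels occupied,

```latex
G \;=\; \frac{2e^{2}}{h}\,N ,
```

so the field-induced depopulation of sublevels described above appears as downward steps of 2e²/h in G.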

Capacity of Prestressed Concrete Containment Vessels with Prestressing Loss

Smith, Jeffrey A.

Reduced prestressing and degradation of prestressing tendons in concrete containment vessels were investigated using finite element analysis of a typical prestressed containment vessel. The containment was analyzed during a loss of coolant accident (LOCA) with varying levels of prestress loss and with reduced tendon area. It was found that when selected hoop prestressing tendons were completely removed (as if broken) or when the area of selected hoop tendons was reduced, there was a significant impact on the ultimate capacity of the containment vessel. However, when selected hoop prestressing tendons remained, but with complete loss of prestressing, the predicted ultimate capacity was not significantly affected for this specific loss of coolant accident. Concrete cracking occurred at much lower levels for all cases. For cases where selected vertical tendons were analyzed with reduced prestressing or degradation of the tendons, there also was not a significant impact on the ultimate load carrying capacity for the specific accident analyzed. For other loading scenarios (such as seismic loading) the loss of hoop prestressing with the tendons remaining could be more significant on the ultimate capacity of the containment vessel than found for the accident analyzed. A combination of loss of prestressing and degradation of the vertical tendons could also be more critical during other loading scenarios.

SLINGSHOT - a Coilgun Design Code

Marder, Barry M.

The Sandia coilgun [1,2,3,4,5] is an inductive electromagnetic launcher. It consists of a sequence of powered, multi-turn coils surrounding a flyway of circular cross-section through which a conducting armature passes. When the armature is properly positioned with respect to a coil, a charged capacitor is switched into the coil circuit. The rising coil currents induce a current in the armature, producing a repulsive accelerating force. The basic numerical tool for modeling the coilgun is the SLINGSHOT code, an expanded, user-friendly successor to WARP-10 [6]. SLINGSHOT computes the currents in the coils and armature, finds the forces produced by those currents, and moves the armature through the array of coils. In this approach, the cylindrically symmetric coils and armature are subdivided into concentric hoops of rectangular cross-section, in each of which the current is assumed to be uniform. The ensemble of hoops is treated as a set of coupled circuits. The specific heats and resistivities of the hoops are found as functions of temperature and used to determine the resistive heating. The code calculates the resistances and inductances of all hoops and the mutual inductances of all hoop pairs. Using these, it computes the hoop currents from their circuit equations, finds the forces from the products of these currents and the mutual inductance gradient, and moves the armature. Treating the problem as a set of coupled circuits is a fast and accurate approach compared to solving the field equations. Its use, however, is restricted to problems in which the symmetry dictates the current paths. This paper is divided into three parts. The first presents a demonstration of the code. The second describes the input and output. The third describes the physical models and numerical methods used in the code. It is assumed that the reader is familiar with coilguns.
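
The coupled-circuit approach can be illustrated on the smallest possible system: one driven coil and one armature hoop. All parameter values and the Gaussian form of M(z) below are illustrative assumptions, not SLINGSHOT's models or inputs:

```python
# Minimal coupled-circuit coilgun stage: a capacitor-driven coil and a
# single armature hoop coupled by a mutual inductance M(z); the force is
# I_coil * I_arm * dM/dz. Illustrative parameters, not SLINGSHOT data.
import numpy as np

Lc, La = 1e-4, 5e-8          # coil, armature self-inductance [H]
Rc, Ra = 5e-3, 1e-4          # resistances [ohm]
C, V0 = 1e-3, 500.0          # capacitor [F], initial voltage [V]
m_arm = 0.1                  # armature mass [kg]

def M(z):                    # assumed mutual inductance vs axial offset [H]
    return 2e-6 * np.exp(-(z / 0.05)**2)

def dMdz(z, h=1e-6):         # numerical derivative of M
    return (M(z + h) - M(z - h)) / (2 * h)

# State: [q_cap, I_coil, I_arm, z, v]; z is the armature's axial offset.
def deriv(s):
    q, Ic, Ia, z, v = s
    Mz, dM = M(z), dMdz(z)
    # Circuit equations (2x2 solve for the current derivatives):
    #   Lc Ic' + Mz Ia' = q/C - Rc Ic - v dM Ia
    #   Mz Ic' + La Ia' =      - Ra Ia - v dM Ic
    A = np.array([[Lc, Mz], [Mz, La]])
    b = np.array([q/C - Rc*Ic - v*dM*Ia, -Ra*Ia - v*dM*Ic])
    dIc, dIa = np.linalg.solve(A, b)
    F = Ic * Ia * dM                       # axial force on the armature
    return np.array([-Ic, dIc, dIa, v, F / m_arm])

s = np.array([C*V0, 0.0, 0.0, 0.01, 0.0])  # start just past peak coupling
dt = 1e-7
for _ in range(20000):                     # midpoint (RK2) time stepping
    k1 = deriv(s)
    k2 = deriv(s + 0.5*dt*k1)
    s = s + dt*k2
print(f"armature velocity after 2 ms: {s[4]:.1f} m/s")
```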

The Markov Latent Effects Approach to Safety and Decision-Making

Cooper, James A.

The methodology in this report addresses the safety effects of organizational and operational factors that can be measured through "inspection." The investigation grew out of a preponderance of evidence that the safety "culture" (the attitude of employees and management toward safety) was frequently one of the major root causes behind accidents or safety-relevant failures. The approach is called "Markov latent effects" analysis. Since safety also depends on a multitude of factors that are best measured through well-known risk analysis methods (e.g., fault trees, event trees, FMECA, physical response modeling, etc.), the Markov latent effects approach supplements conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems, for determining the most appropriate items to be measured, and for expressing the measurements as imprecise subjective metrics through possibilistic or fuzzy numbers. A mathematical model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, while also portraying the inherent uncertainty. A major goal of the modeling is to help convey the top-down system perspective. Metrics are weighted according to the significance of the attribute with respect to subsystems and are aggregated nonlinearly. Since the accumulating effect responds less and less to additional contributions, the method is termed "soft" mathematical aggregation, which is analogous to how humans frequently make decisions. Dependence among the contributing factors is accounted for by incorporating subjective metrics on commonality and by reducing the overall contribution of these combinations to the overall aggregation. Decisions derived from the results are facilitated in several ways. First, information is provided on input "Importance" and "Sensitivity" (both Primary and Secondary) in order to know where to place emphasis in investigating root causes and in considering new controls that may be necessary. Second, trends in inputs and outputs are tracked in order to obtain significant information, including cyclic information, for the decision process. Third, Early Alerts are provided in order to facilitate pre-emptive action. Fourth, the outputs are compared to soft thresholds provided by sigmoid functions. The methodology has been implemented in a software tool.
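
A minimal sketch of "soft" aggregation with a sigmoid threshold, assuming a complement-product combining rule (the report's calibrated functions and weights are not reproduced here):

```python
# Illustrative saturating ("soft") aggregation of weighted subjective
# metrics, plus a sigmoid soft threshold; the combining rule, weights,
# and threshold parameters are hypothetical stand-ins.
import math

def soft_aggregate(metrics, weights):
    """Combine [0,1] metrics so each additional contribution adds less:
    complement-product form, 1 - prod(1 - w*x)."""
    out = 1.0
    for x, w in zip(metrics, weights):
        out *= (1.0 - w * x)
    return 1.0 - out

def sigmoid_threshold(score, center=0.6, steepness=12.0):
    """Soft threshold: maps an aggregate score to an alert level in (0,1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (score - center)))

concerns = [0.4, 0.7, 0.2]        # inspection-derived metrics (subjective)
weights  = [0.9, 0.6, 0.3]        # significance of each attribute
score = soft_aggregate(concerns, weights)
print(f"aggregate concern {score:.3f}, alert level {sigmoid_threshold(score):.3f}")
```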

Multicomponent-Multiphase Equation of State for Carbon

Kerley, Gerald I.; Chhabildas, L.C.

The unique properties of carbon have made it both a fascinating and an important subject of experimental and theoretical studies for many years [1]-[4]. The contrast between its best-known elemental forms, graphite and diamond, is particularly striking. Graphite is black, has a rather low density and high compressibility (close to that of magnesium), and is greasy enough to be useful as a lubricant and in pencil leads. Diamond is brilliantly translucent, 60% more dense than graphite, less compressible than either tungsten or corundum, and its hardness makes it useful for polishing and cutting. This variability in properties, as well as that observed among the many classes of carbon compounds, arises because of profound differences in the electronic structure of the carbon bonds [5]. A number of other solid forms of carbon are known. Pyrolytic graphite [6] is a polycrystalline material in which the individual crystallites have a structure quite similar to that of natural graphite. Fullerite (solid C₆₀), discovered only ten years ago [7], consists of giant molecules in which the atoms are arranged into pentagons and hexagons on the surface of a spherical cage. Amorphous carbon [8][9], including carbon black and ordinary soot, is a disordered form of graphite in which the hexagonally bonded layers are randomly oriented. Glassy carbons [9][10], on the other hand, have more random structures. Many other structures have been discussed [1][9].

Location Algorithms and Errors in Time-of-Arrival Systems

Hogg, Christopher J.

This report describes least squares solution methods and linearized estimates of solution errors caused by data errors. These methods are applied to event-locating systems that use time-of-arrival (TOA) data. Analyses are presented for algorithms that use the TOA data in a "direct" manner and for algorithms utilizing time-of-arrival-squared (TSQ) methods. Location and error estimation results were applied to a "typical" satellite TOA detecting system. Using Monte Carlo methods, it was found that the linearized location error estimates were valid for random data errors with relatively large variances and relatively poor event/sensor geometries. In addition to least squares methods, which use an L₂ norm, methods were described for L₁ and L∞ norms. In general, these latter norms offered little improvement over least squares methods. Reduction of the location error variances can be effected by using information in addition to the TOA data themselves, by adding judiciously chosen "conditioning" equation(s) to the least squares system. However, the added information can adversely affect the mean errors. Also, conditioned systems may offer location solutions where nonconditioned scenarios may not be solvable. Solution methods and linearized error estimates are given for "conditioned" systems. It was found that for significant data errors, the linearized estimates were also close to the Monte Carlo results.
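
The "direct" TOA least squares approach amounts to a Gauss-Newton iteration on the event position and emission time. The sketch below illustrates that generic approach; the sensor geometry, noise level, and signal speed are invented for the example:

```python
# Linearized least-squares TOA location: Gauss-Newton iteration on the
# event position and time. Sensor positions and arrival times are made up.
import numpy as np

c = 3.0e8                                   # signal speed [m/s]

def locate(sensors, toas, x0, iters=10):
    """Solve for event position (3,) and emission time via Gauss-Newton."""
    x = np.array([*x0, 0.0])                # state: [px, py, pz, t0]
    for _ in range(iters):
        d = np.linalg.norm(sensors - x[:3], axis=1)
        pred = x[3] + d / c                 # predicted arrival times
        r = toas - pred                     # residuals [s]
        # Jacobian of predicted TOA with respect to the state
        J = np.hstack([-(sensors - x[:3]) / (c * d[:, None]),
                       np.ones((len(toas), 1))])
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x

rng = np.random.default_rng(1)
sensors = rng.uniform(-5e5, 5e5, (6, 3)) + [0, 0, 7e6]   # satellites [m]
truth = np.array([1e4, -2e4, 0.0])          # ground-level event
toas = np.linalg.norm(sensors - truth, axis=1) / c
toas += rng.normal(0.0, 1e-7, 6)            # 100 ns timing errors
est = locate(sensors, toas, x0=[0.0, 0.0, 0.0])
print("estimated position [m]:", est[:3].round(1))
```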

Pulsed Dielectric Breakdown of Aluminum Oxide (ALOX) Filled Epoxy Encapsulants: Effects of Formulation and Electric Stress Concentration

Anderson, Robert A.; Lagasse, Robert R.; Schroeder, John L.; Zeuch, David H.; Montgomery, Stephen

Aluminum oxide (ALOX) filled epoxy is the dielectric encapsulant in shock-driven high-voltage power supplies. ALOX encapsulants display a high dielectric strength under purely electrical stress, but minimal information is available on the combined effects of high voltage and mechanical shock. We report breakdown results from applying electrical stress in the form of a unipolar high-voltage pulse on the order of 10 µs in duration, and our findings may establish a basis for understanding the results from proposed combined-stress experiments. A test specimen geometry giving approximately uniform fields is used to compare three ALOX encapsulant formulations, which include the new-baseline 459 epoxy resin encapsulant and a variant in which the Alcoa T-64 alumina filler is replaced with Sumitomo AA-10 alumina. None of these encapsulants shows a sensitivity to ionizing radiation. We also report results from specimens with sharp-edged electrodes that cause strong, localized field enhancement, as might be present near electrically discharged mechanical fractures in an encapsulant. Under these conditions the 459-epoxy ALOX encapsulant displays approximately 40% lower dielectric strength than the older Z-cured Epon 828 formulation. An investigation of several processing variables did not reveal an explanation for this reduced performance. The 459-epoxy encapsulant appears to suffer electrical breakdown if the peak field anywhere reaches a critical level. The stress-strain characteristics of the Z-cured ALOX encapsulant were measured under high triaxial pressure, and we find that this stress causes permanent deformation and a network of microscopic fractures. Recommendations are made for future experimental work.

Report on the Fracture Analysis of HfB₂-SiC and ZrB₂-SiC Composites

Loehman, Ronald E.

Hafnium diboride-silicon carbide (HS) and zirconium diboride-silicon carbide (ZS) composites are potential materials for high-temperature, thermal-shock applications such as components on re-entry vehicles. In order to establish the material constants necessary for evaluation of in situ fracture, bars fractured in four-point flexure were examined using fractographic principles. The fracture toughness was determined from measurements of the critical crack sizes and the strength values, and the crack branching constants were established for use in forensic fractography in future in-flight tests. The fracture toughnesses range from about 13 MPa·m^(1/2) at room temperature to about 6 MPa·m^(1/2) at 1400 °C for ZrB₂-SiC composites, and from about 13 MPa·m^(1/2) at room temperature to about 4 MPa·m^(1/2) at 1400 °C for HfB₂-SiC composites. Thus, both the HS and ZS composites have toughnesses with potential for thermal-shock applications. Processing and manufacturing defects limited the strength of the test bars. However, examination of the microstructure on the fracture surfaces shows that the processing of these composites can be improved. There is potential for high-toughness composites with high strength to be used in thermal-shock conditions if the processing and handling are controlled.
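
The toughness determination described above rests on the standard fractographic relation between strength and critical crack size (generic notation, with Y a geometry factor):

```latex
% Toughness from the measured strength \sigma_f and critical crack size c:
K_{Ic} \;=\; Y\,\sigma_f \sqrt{\pi\,c}
% The crack branching constant is obtained analogously from the strength
% and the crack-branching radius r_b:
A_b \;=\; \sigma_f \sqrt{r_b}
```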
