The polarization reversal process in a rhombohedral ferroelectric ceramic was investigated using field-induced strain measurements and texture development. Special attention was paid to the difference in field-induced strain between the first quarter cycle and subsequent cycles. Results show that the initial field-induced strain is about twelve times greater than the subsequent strain, suggesting that different mechanisms operate during these stages of the polarization reversal process. The difference in the magnitude of the field-induced strain is discussed in terms of 180-degree and non-180-degree domain reorientation processes.
Conventional high-temperature compression stress-relaxation (CSR) experiments (e.g., using a Shawbury-Wallace relaxometer) measure the force periodically at room temperature. In this paper, we first describe modifications that allow the force measurements to be made isothermally and show that such measurements lead to more accurate estimates of sealing force decay. We then use conventional Arrhenius analysis and linear extrapolation of the high-temperature (80–110 °C) CSR results for two commercial butyl o-ring materials (Butyl-A and Butyl-B) to show that Butyl-B is predicted to have approximately three times longer lifetime at room temperature (23 °C). To test the linear extrapolation assumed by the Arrhenius approach, we conducted ultrasensitive oxygen consumption measurements from 110 °C to room temperature for the two butyl materials. The results indicated that linear extrapolation of the high-temperature CSR results for Butyl-A was reasonable, whereas a significant curvature to a lower activation energy was observed for Butyl-B below 80 °C. Using the oxygen consumption results to extrapolate the CSR results from 80 °C to 23 °C led to the conclusion that Butyl-B would actually degrade much faster than Butyl-A at 23 °C, the opposite of the earlier conclusion based solely on extrapolation of the high-temperature CSR results. Since samples of both materials that had aged in the field for approximately 20 years at 23 °C were available, it was possible to check the predictions using compression set measurements made on the field materials. The comparisons were in accord with the extrapolated predictions made using the ultrasensitive oxygen consumption measurements, underscoring the power of this extrapolation approach.
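The two-step extrapolation described above can be sketched numerically. The snippet below is a minimal illustration with made-up activation energies (not the measured values for Butyl-A or Butyl-B); it shows how a drop to a lower activation energy below a breakpoint temperature shortens the lifetime extrapolated to 23 °C.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def accel_factor(ea_kj_mol, t_celsius, t_ref_celsius=23.0):
    """Arrhenius acceleration factor between an aging temperature and a
    reference (service) temperature: lifetime at T_ref = test time * factor."""
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    return math.exp(-ea_kj_mol * 1000.0 / R * (1.0 / t - 1.0 / t_ref))

# Hypothetical activation energies (illustrative only, not measured values):
ea_high = 100.0  # kJ/mol, e.g., from 80-110 C CSR data
ea_low = 60.0    # kJ/mol, lower value implied by O2 consumption below 80 C

# Predicted lifetime at 23 C from a 1000-hour test at 110 C:
life_linear = 1000.0 * accel_factor(ea_high, 110.0)            # single-Ea extrapolation
life_curved = (1000.0 * accel_factor(ea_high, 110.0, 80.0)     # 110 C -> 80 C, high Ea
               * accel_factor(ea_low, 80.0))                   # 80 C -> 23 C, low Ea

# A lower activation energy below 80 C shortens the extrapolated lifetime.
assert life_curved < life_linear
```

This is the quantitative reason the two materials' ranking can reverse: the material whose oxidation kinetics flatten at low temperature loses most of its apparent lifetime advantage.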
We present a model for optimizing the placement of sensors in municipal water networks to detect maliciously injected contaminants. An optimal sensor configuration minimizes the expected fraction of the population at risk. We formulate this problem as an integer program, which can be solved with generally available IP solvers. We find optimal sensor placements for three real networks with synthetic risk and population data. Our experiments illustrate that this formulation can be solved relatively quickly, and that the predicted sensor configuration is relatively insensitive to uncertainties in the data used for prediction.
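The minimum-expected-risk objective can be illustrated at toy scale. The sketch below uses hypothetical scenario probabilities and impact values (none taken from the paper's networks) and exhaustively searches sensor pairs; the integer-programming formulation in the paper optimizes the same objective at full network scale.

```python
import itertools

# Toy instance (hypothetical data): each attack scenario has a probability
# and, for every candidate sensor node, the fraction of the population
# exposed before that sensor would first detect the contaminant.
scenarios = [0.5, 0.3, 0.2]   # scenario probabilities
impact = [                    # impact[s][v]: rows are scenarios, cols are nodes 0..3
    [0.1, 0.4, 0.9, 1.0],
    [0.8, 0.2, 0.3, 1.0],
    [1.0, 0.9, 0.1, 0.2],
]

def expected_risk(placement):
    """Expected fraction of the population at risk for a set of sensor nodes:
    in each scenario, the earliest-detecting sensor determines the exposure."""
    return sum(p * min(impact[s][v] for v in placement)
               for s, p in enumerate(scenarios))

def best_placement(k):
    """Exhaustively find the k-sensor placement minimizing expected risk."""
    nodes = range(len(impact[0]))
    return min(itertools.combinations(nodes, k), key=expected_risk)

best = best_placement(2)  # the optimal 2-sensor design for this toy network
```

Brute force is exponential in the number of sensors, which is why the paper's IP formulation matters for real networks.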
We describe COLIN, a Common Optimization Library INterface for C++. COLIN provides C++ template classes that define a generic interface for both optimization problems and optimization solvers. COLIN is specifically designed to facilitate the development of hybrid optimizers, for which one optimizer calls another to solve an optimization subproblem. We illustrate the capabilities of COLIN with an example of a memetic genetic programming solver.
Development of next-generation electronics for pulse discharge systems requires miniaturization and integration of high-voltage, high-value resistors (greater than 100 megohms) with novel substrate materials. These material advances are needed for improved reliability, robustness, and performance. In this study, high-sheet-resistance inks of 1 megohm per square were evaluated to reduce overall electrical system volume. We investigated a deposition process that permits co-sintering of high-sheet-resistance inks with a variety of different substrate materials. Our approach combines the direct-write process of aerosol jetting with laser sintering and conventional thermal sintering processes. One advantage of aerosol jetting is that high-quality, fine-line depositions can be achieved on a wide variety of substrates. When combined with laser sintering, the aerosol jetting approach has the capability to deposit resistors at any location on a substrate and to additively trim the resistors to specific values. We have demonstrated a 400-fold reduction in overall resistor volume compared to commercial chip resistors using the above process techniques. Resistors that exhibited this volumetric efficiency were fabricated by 850 °C thermal processing on alumina substrates and by 0.1 W laser sintering on Kapton substrates.
Sandia National Laboratories has developed a mesoscale hopping mobility platform (Hopper) to overcome the longstanding problems of mobility and power in small-scale unmanned vehicles. The system provides mobility in situations, such as negotiating tall obstacles and rough terrain, that are prohibitive for other small ground-based vehicles. The Defense Advanced Research Projects Agency (DARPA) provided the funding for the Hopper project.
A mine dog evaluation project initiated by the Geneva International Centre for Humanitarian Demining is evaluating the capability and reliability of mine detection dogs. The performance of field-operational mine detection dogs will be measured in test minefields in Afghanistan and Bosnia containing actual, but unfused, landmines. Repeated performance testing over two years through various seasonal weather conditions will provide data simulating near real-world conditions. Soil samples will be obtained adjacent to the buried targets repeatedly over the course of the test. Chemical analysis results from these soil samples will be used to evaluate correlations between mine dog detection performance and seasonal weather conditions. This report documents the analytical chemical methods and results for the third batch of soils received. This batch contained samples from Kharga, Afghanistan collected in October 2002.
We report on the use of a supercomputer simulation to study performance sensitivity to systematic changes in the job parameters of run time, number of CPUs, and interarrival time. We also examine the effect of changes in share allocation and service ratio for job prioritization under a Fair Share queuing algorithm on facility figures of merit. We used log data from the ASCI supercomputer Blue Mountain and the ASCI simulator BIRMinator to perform this study. The key finding is that supercomputer performance is sensitive to all the job parameters: with respect to utilization and rapid turnaround, the job interarrival rate is the most sensitive parameter (particularly at the highest rates) and increasing run time is the least sensitive. We also find that this facility is running near its maximum practical utilization. Finally, we show the importance of simulation in understanding the performance sensitivity of a supercomputer.
We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed.
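For readers unfamiliar with the two figures of merit used above, a minimal sketch with made-up jobs and group allocations (not ASCI Blue Mountain data):

```python
# Expansion factor (slowdown) per job: (wait time + run time) / run time.
jobs = [(0.0, 10.0), (5.0, 5.0), (30.0, 10.0)]  # (wait_hours, run_hours)
mean_xf = sum((w + r) / r for w, r in jobs) / len(jobs)

# Service ratio under Fair Share: delivered usage fraction divided by
# allocated share (> 1 means a group received more than its allocation).
usage = {"groupA": 0.50, "groupB": 0.30, "groupC": 0.20}  # delivered fractions
share = {"groupA": 0.40, "groupB": 0.40, "groupC": 0.20}  # allocated shares
service_ratio = {g: usage[g] / share[g] for g in usage}
```

The paper's observation is that tuning shares moves the service ratio but leaves job-level metrics such as the expansion factor largely unchanged on a cluster.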
Standard weak solutions to the Poisson problem on a bounded domain have square-integrable derivatives, which limits the admissible regularity of inhomogeneous data. The concept of solution may be further weakened in order to define solutions when the data are rough, such as inhomogeneous Dirichlet data that are only square-integrable over the boundary. Such very weak solutions satisfy a nonstandard variational form (u, v) = G(v). A Galerkin approximation combined with an approximation of the right-hand side G defines a finite-element approximation of the very weak solution. Applying conforming linear elements leads to a discrete solution equivalent to the textbook finite-element solution to the Poisson problem in which the boundary data are approximated by L₂-projections. The L₂ convergence rate of the discrete solution is O(h^s) for some s ∈ (0, 1/2) that depends on the shape of the domain, assuming a polygonal (two-dimensional) or polyhedral (three-dimensional) domain without slits and (only) square-integrable boundary data.
A three-dimensional tungsten photonic crystal is experimentally realized with a complete photonic band gap at wavelengths λ ≥ 3 µm. At an effective temperature of ⟨T⟩ ≈ 1535 K, the photonic crystal exhibits sharp emission at ≈1.5 µm and is promising for thermophotovoltaic (TPV) power generation. Based on the spectral radiance, a proper length scaling, and a planar TPV model calculation, an optical-to-electric conversion efficiency of ≈34% and an electrical power of ≈14 W/cm² are theoretically possible.
Hadamard Transform Spectrometer (HTS) approaches share the multiplexing advantages found in Fourier transform spectrometers. Interest in Hadamard systems has been limited by data storage/computational limitations and the inability to perform accurate high-order masking in a reasonable amount of time. Advances in digital micro-mirror array (DMA) technology have opened the door to implementing an HTS for a variety of applications, including fluorescence microscope imaging and Raman imaging. A Hadamard transform spectral imager (HTSI) for remote sensing offers a variety of unique capabilities in one package, such as variable spectral and temporal resolution, no moving parts (other than the micro-mirrors), and vibration tolerance. Two approaches for 2D HTS systems were investigated in this LDRD. The first approach disperses the incident light, encodes the dispersed light, and then recombines the light; this method is referred to as spectral encoding. The other method encodes the incident light and then disperses the encoded light; this technique is called spatial encoding. After optical designs were created for both methods, spatial encoding was selected for implementation because its optical design was less costly to implement.
The design, fabrication, and performance of a planar microbattery made from a silicon wafer with a bonded lid are presented. The battery is designed with two compartments separated by four columns of micro-posts 3 or 5 micrometers in diameter. The posts permit transport of liquid electrolyte but stop particles of battery material in each compartment from mixing. The anode and cathode battery compartments, the posts, fill holes, and conductive vias are all made using high-aspect-ratio reactive ion (Bosch) etching. After the silicon wafer is completed, it is anodically or adhesively bonded to a Pyrex® wafer lid. The battery materials are made from micro-disperse particles 3-5 micrometers in diameter. The lithium-ion chemistry is microcarbon mesobeads and lithium cobalt oxide. The battery capacity is 1.83 microamp-hours/cm² at a discharge rate of 25 microamps.
This report describes the development of a miniature mobile microrobot and several microsystems needed to create a miniature microsensor delivery platform. This work was funded under LDRD No. 10785, entitled "Integrated Microsensors for Autonomous Microrobots". The approach adopted in this project was to develop a mobile platform to which wireless RF remote control, data acquisition, and various microsensors could be attached. A modular approach was used to produce a versatile microrobot platform and to reduce power consumption and physical size.
Impedance-based, planar chemical microsensors are the easiest sensors to integrate with electronics. The goal of this work is a several-order-of-magnitude increase in the sensitivity of this sensor type. The basic idea is to mimic biological chemical sensors that rely on changes in ion transport across very thin organic membranes (supported bilayer membranes, sBLMs) for sensing. To improve the durability of bilayers, we show how they can be supported on planar metal electrodes. The large increase in sensitivity over polyelectrolytes will come from molecular recognition elements, such as antibodies, that bind the analyte molecule. The molecular recognition sites can be tied to the lipid bilayer capacitor membrane, and a number of mechanisms can be used to modulate the impedance of the lipid bilayers. These include coupled ion channels, pore modification, and double-layer capacitance modification by the analyte molecule. The planar geometry of our electrodes allows us to create arrays of sensors on the same chip, which we are calling the "Lipid Chip".
Communication networks, both data and telephone, are subject to attack by malicious individuals, and network owners must defend against such attacks. For several years, firewalls have helped protect data networks, but until recently there was no comparable system for telephone networks. Several vendors have recognized this need and developed products to improve telephone network security. Sandia evaluated the capabilities of commercial telephone firewall products and, based on that evaluation, installed the SecureLogix TeleWall system at both the New Mexico and California sites. Sandia is currently in the early stages of applying the capabilities of the SecureLogix telephone network firewall system to the environment at Sandia. Any site that has invested in computer network security protection through firewall and intrusion detection systems should also implement appropriate telephone network protection through a telephone firewall system.
Sandia National Laboratories has developed a portfolio of programs to address the critical skills needs of the DP labs, as identified by the 1999 Chiles Commission Report. The goals are to attract and retain the best and the brightest students and transition them into Sandia--and DP Complex--employees. The US Department of Energy/Defense Programs University Partnerships funded seven laboratory critical skills development programs in FY02. This report provides a qualitative and quantitative evaluation of these programs and their status.
An analytical capability is being developed to predict the effect of corrosion on the performance of electrical circuits and systems. The availability of this "toolset" will dramatically improve our ability to influence device and circuit design, address and remediate field occurrences, and determine real limits for circuit service life. In pursuit of this objective, we have defined and adopted an iterative, statistically based, top-down approach that will permit very formidable and real obstacles related to both the development and use of the toolset to be resolved as effectively as possible. An important component of this approach is the direct incorporation of expert opinion. Some of the complicating factors to be addressed involve code/model complexity, the existence of a large number of possible degradation processes, and an incompatibility between the length scales associated with device dimensions and the corrosion processes. Two key aspects of the desired predictive toolset are (1) a direct linkage of an electrical-system performance model with mechanism-based, deterministic corrosion models, and (2) the explicit incorporation of a computational framework to quantify the effects of non-deterministic parameters (uncertainty). The selected approach and key elements of the toolset are first described in this paper, followed by examples of how the toolset development process is being implemented.
This report describes the second phase of a project entitled "Innovative Business Cases for Energy Storage in a Restructured Electricity Marketplace". During part one of the effort, nine "Stretch Scenarios" were identified, representing innovative and potentially significant uses of electric energy storage. Based on their potential to significantly impact the overall energy marketplace, the five most compelling scenarios were selected, and from these, five specific "Storage Market Opportunities" (SMOs) were chosen for in-depth evaluation in this phase. The authors conclude that some combination of the Power Cost Volatility and T&D Benefits SMOs would be the most compelling for further investigation. Specifically, the combination of benefits (energy, capacity, power quality, and reliability enhancement) achievable using energy storage systems for high-value T&D applications, in regions with high power cost volatility, makes storage very competitive for about 24 GW and 120 GWh between 2001 and 2010.
Terrorism is a scourge common to the international community and its threat to world peace and stability is severe and imminent. This paper evaluates the campaign against terrorism and the possible modalities of constructive cooperation between China and the United States in this fight. Technical cooperation can enhance Sino-U.S. security capabilities for dealing with the terrorist threat. This paper identifies specific bilateral cooperative activities that may benefit common interests. Focusing on protecting people, facilities, and infrastructure, Sino-U.S. cooperation may introduce protective technologies and training, including means of boosting port and border security, and detecting explosives or nuclear materials. Cooperation will not only enhance the global counterterrorism campaign, but also form a sound foundation for constructive and cooperative relations between the two countries.
In the LIGA process for manufacturing microcomponents, a polymer film is exposed to an x-ray beam passed through a gold pattern. This is followed by the development stage, in which a selective solvent is used to remove the exposed polymer, reproducing the gold pattern in the polymer film. Development is essentially polymer dissolution, a physical process which is not well understood. We have used coarse-grained molecular dynamics simulation to study the early stage of polymer dissolution. In each simulation a film of non-glassy polymer was brought into contact with a layer of solvent. The mutual penetration of the two phases was tracked as a function of time. Several film thicknesses and two different chain lengths were simulated. In all cases, the penetration process conformed to ideal Fickian diffusion. We did not see the formation of a gel layer or other non-ideal effects. Variations in the Fickian diffusivities appeared to be caused primarily by differences in the bulk polymer film density.
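The Fickian signature reported above, with interpenetration depth growing as the square root of time, can be checked with a short fit. This sketch uses synthetic data with an assumed diffusivity, not the simulation results from the study.

```python
import math
import random

# For ideal Fickian behavior, the penetration depth grows as
# d(t) = sqrt(2 * D * t). Generate noisy synthetic depths with an assumed
# diffusivity (D_true is a made-up value), then recover D by least squares.
random.seed(0)
D_true = 0.25  # diffusivity in arbitrary simulation units (illustrative)
times = [1.0 * k for k in range(1, 21)]
depths = [math.sqrt(2 * D_true * t) * (1 + random.gauss(0, 0.01)) for t in times]

# Least-squares slope of d^2 = (2D) * t through the origin:
#   slope = sum(t * d^2) / sum(t^2)
slope = sum(t * d * d for t, d in zip(times, depths)) / sum(t * t for t in times)
D_fit = slope / 2.0
```

A departure from this straight-line behavior on a d² versus t plot (e.g., a gel layer forming) would signal non-Fickian dissolution, which the simulations did not observe.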
Two lots of manufactured Type 3a zeolite samples were compared by TGA/IR analysis. The first lot, obtained from Davidson Chemical, a commercial vendor, was characterized during the previous study cycle for its water and water-plus-CO₂ uptake in order to determine whether CO₂ uptake prevented water adsorption by the zeolite. It was determined that CO₂ did not hamper water adsorption on the Davidson zeolite. CO₂ was found on the zeolite surface at dewpoints below −40 °C; however, it was reversibly adsorbed. During the course of the previous studies, chemical analyses revealed that the Davidson 3a zeolite contained calcium in significant quantities, along with the traditional counterions potassium and sodium. Chemical analysis of a Type 3a zeolite sample retrieved from Kansas City (hereafter referred to as the "Stores 3a" sample) indicated that the Stores sample was a more traditional Type 3a zeolite, containing no calcium. TGA/IR studies this year focused on obtaining CO₂ and water adsorption data from the Stores 3a zeolite. Within the Stores 3a sample, CO₂ was reversibly adsorbed, but only at and below −60 °C with 5% CO₂ loading. The amount of CO₂ observed eluting from the Stores zeolite under this condition was similar to that observed from the Davidson zeolite sample, but with a greater uncertainty in the measured value. The results of the Stores 3a studies are summarized in this report.
Multifidelity modeling, in which one component of a system is modeled at a significantly different level of fidelity than another, has several potential advantages. For example, a higher-fidelity component model can be evaluated in the context of a lower-fidelity full system model that provides more realistic boundary conditions and yet can be executed quickly enough for rapid design changes or design optimization. Developing such multifidelity models presents challenges in several areas, including coupling models with differing spatial dimensionalities. In this report we describe a multifidelity algorithm for thermal radiation problems in which a three-dimensional, finite-element model of a system component is embedded in a system of zero-dimensional (lumped-parameter) components. We tested the algorithm on a prototype system with three problems: heating to a constant temperature, cooling to a constant temperature, and a simulated fire environment. The prototype system consisted of an aeroshell enclosing three components, one of which was represented by a three-dimensional finite-element model. We tested two versions of the algorithm; one used the surface-average temperature of the three dimensional component to couple it to the system model, and the other used the volume-average temperature. Using the surface-average temperature provided somewhat better temperature predictions than using the volume-average temperature. Our results illustrate the difficulty in specifying consistency for multifidelity models. In particular, we show that two models may be consistent for one application but not for another. 
While the temperatures predicted by the multifidelity model were not as accurate as those predicted by a full three-dimensional model, our results show that a multifidelity system model can execute much faster than a full three-dimensional finite-element model for thermal radiation problems, with sufficient accuracy for some applications, while still predicting internal temperatures for the higher-fidelity component. These results indicate that optimization studies with mixed-fidelity models may be feasible where full three-dimensional system models are not, provided the concomitant loss in accuracy is within acceptable bounds.
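The flavor of the lumped-parameter coupling can be shown with a two-node radiative-exchange sketch. All values below are illustrative; in the actual algorithm, one node's temperature would instead be the surface-average (or volume-average) temperature of the embedded three-dimensional finite-element component.

```python
# Two isothermal (lumped-parameter) nodes exchanging heat radiatively as
# q = sigma * A * F * (T1^4 - T2^4), integrated with explicit Euler steps.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def step(t1, t2, c1, c2, af, dt):
    """Advance both node temperatures by one time step.
    c1, c2: heat capacities (J/K); af: area * view factor (m^2)."""
    q = SIGMA * af * (t1 ** 4 - t2 ** 4)  # W, net flow from node 1 to node 2
    return t1 - q * dt / c1, t2 + q * dt / c2

t1, t2 = 1000.0, 300.0  # K, illustrative hot-enclosure / cold-component temps
for _ in range(20000):  # 2000 s of simulated time
    t1, t2 = step(t1, t2, c1=5e4, c2=5e4, af=0.1, dt=0.1)

# The two nodes relax toward a common equilibrium temperature.
assert 300.0 < t2 < t1 < 1000.0
```

Replacing one node with a finite-element component means its T⁴ exchange term is evaluated from an averaged surface temperature, which is exactly the consistency question the report examines.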
Current algorithms for the inverse calibration of hydraulic conductivity (K) fields to observed head data update the K values to achieve calibration but consider the parameters defining the spatial correlation of the K values to be fixed. Here we examine the ability of a genetic algorithm (GA) to update indicator variogram parameters defining the spatial correlation of the K field subject to minimizing differences between modeled and observed head values and also to minimizing the advective travel time across the model. The technique is presented on a test problem consisting of 83 K values randomly selected from 8649 gas-permeameter measurements made on a block of heterogeneous sandstone. Indicator variograms at the 10th, 40th, 60th and 90th percentiles of the cumulative log10 K distribution are used to describe the spatial variability of the log10 hydraulic conductivity data. For each threshold percentile, the variogram models are parameterized by the nugget, sill, anisotropic range values and the direction of principal correlation. The 83 conditioning data and the variogram models are used as input to a geostatistical indicator simulation algorithm.
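A minimal genetic-algorithm loop of the kind described can be sketched as follows, with a stand-in quadratic misfit in place of the head-misfit and travel-time objective that the flow model would evaluate; all parameter values are hypothetical.

```python
import random

random.seed(1)

# Evolve variogram parameters (nugget, sill, range) to minimize a misfit.
# The quadratic objective is a stand-in for the head-misfit plus advective
# travel-time objective computed by the groundwater flow model in the paper.
TARGET = (0.1, 1.0, 50.0)                       # hypothetical "true" parameters
BOUNDS = [(0.0, 0.5), (0.5, 2.0), (10.0, 100.0)]  # search ranges per parameter

def misfit(p):
    return sum(((a - b) / (hi - lo)) ** 2
               for a, b, (lo, hi) in zip(p, TARGET, BOUNDS))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(p):
    """Gaussian mutation, clamped to the parameter bounds."""
    return [min(hi, max(lo, x + random.gauss(0, 0.05 * (hi - lo))))
            for x, (lo, hi) in zip(p, BOUNDS)]

pop = [random_individual() for _ in range(30)]
for _ in range(100):                  # generations
    pop.sort(key=misfit)
    parents = pop[:10]                # truncation selection (elitist)
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = min(pop, key=misfit)
```

In the paper's workflow, each candidate parameter set would drive an indicator simulation and a flow solve, so the GA's modest population sizes matter for cost.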
Ceramic interconnect technology has been adapted to new structures. In particular, the ability to customize processing order and material choices in Low Temperature Cofired Ceramic (LTCC) has enabled new features to be constructed, which address needs in MEMS packaging as well as other novel structures. Unique shapes in LTCC permit the simplification of complete systems, as in the case of a miniature ion mobility spectrometer (IMS). In this case, a rolled tube has been employed to provide hermetic external contacts to electrodes and structures internal to the tube. Integral windows in LTCC have been fabricated for use in both lids and circuits where either a short-term need for observation or a long-term need for functionality exists. These windows are fabricated without adhesive, are fully compatible with LTCC processing, and remain optically clear. Both vented and encapsulated functional volumes have been fabricated using a sacrificial material technique. These hold promise for self-assembly of systems, as well as complex internal structures in cavities, microfluidic and optical channels, and multilevel integration techniques. Separation of the burnout and firing cycles has permitted custom internal environments to be established. Existing commercial High Temperature Cofired Ceramic (HTCC) and LTCC systems can also be rendered with improved properties. A rapid prototyping technique for patterned HTCC packages has permitted prototypes to be realized in a few days, and has further applications to microfluidics, heat pipes, and MEMS, among others. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.
We have designed and fabricated a polysilicon sidewall-contact motion monitor that fits between the teeth of a MEMS gear. The monitor has a grounded center member that is moved into contact with a pad held at a voltage. When observing motion, however, the monitor fails after only a few actuations. A thorough investigation of the contacting interfaces revealed that for voltages > 5 V with a current limit of 100 pA, the main conduction process is Fowler-Nordheim tunneling. After a few switch cycles, the polysilicon interfaces became insulating. This is shown to be a permanent change, and the suspected mechanism is field-induced oxidation of the asperity contacts. To reduce the effects of field-induced oxidation, tests were performed at 0.5 V, and no permanent insulation was observed. However, the position of the two contacting surfaces produced three types of conduction behavior: Fowler-Nordheim tunneling, ohmic, and insulating, observed in a random order during switch cycling. The alignment of contact asperities produced this positional effect.
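The Fowler-Nordheim identification rests on the characteristic linearity of ln(I/V²) versus 1/V. A short sketch with illustrative constants (not fitted device values):

```python
import math

# Fowler-Nordheim conduction gives I = a * V^2 * exp(-b / V), so a plot of
# ln(I/V^2) against 1/V is a straight line of slope -b. The constants a, b
# below are illustrative, not values fitted to the device in the paper.
a, b = 1e-12, 40.0

def fn_current(v):
    return a * v * v * math.exp(-b / v)

voltages = [5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
x = [1.0 / v for v in voltages]                            # 1/V axis
y = [math.log(fn_current(v) / v ** 2) for v in voltages]   # ln(I/V^2) axis

# The slope between any two points equals -b if conduction is FN-like.
slope_lo = (y[1] - y[0]) / (x[1] - x[0])
slope_hi = (y[5] - y[4]) / (x[5] - x[4])
```

Deviations from this straight line during cycling are one way a contact's transition from tunneling to ohmic or insulating behavior shows up in the data.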
The lead zirconate titanate composition PZT 95/5-2Nb was identified many years ago as a promising ferroelectric ceramic for use in shock-driven pulsed power supplies. The bulk density and the corresponding porous microstructure of this material can be varied by adding different types and quantities of organic pore formers prior to bisque firing and sintering. Early studies showed that the porous microstructure could have a significant effect on power supply performance, with only a relatively narrow range of densities providing acceptable shock wave response. However, relatively few studies were performed over the years to characterize the shock response of this material, yielding few insights on how microstructural features actually influence the constitutive mechanical, electrical, and phase-transition properties. The goal of the current work was to address these issues through comparative shock wave experiments on PZT 95/5-2Nb materials having different porous microstructures. A gas-gun facility was used to generate uniaxial-strain shock waves in test materials under carefully controlled impact conditions. Reverse-impact experiments were conducted to obtain basic Hugoniot data, and transmitted-wave experiments were conducted to examine both constitutive mechanical properties and shock-driven electrical currents. The present work benefited from a recent study in which a baseline material with a particular microstructure had been examined in detail. This study identified a complex mechanical behavior governed by anomalous compressibility and incomplete phase transformation at low shock amplitudes, and by a relatively slow yielding process at high shock amplitudes. Depoling currents are reduced at low shock stresses due to the incomplete transformation, and are reduced further in the presence of a strong electrical field. At high shock stresses, depoling currents are driven by a wave structure governed by the threshold for dynamic yielding.
This wave structure is insensitive to the final wave amplitude, resulting in depoling currents that do not increase with shock amplitude for stresses above the yield threshold. In the present study, experiments were conducted under matched experimental conditions to directly compare with the behavior of the baseline material. Only subtle differences were observed in the mechanical and electrical shock responses of common-density materials having different porous microstructures, but large effects were observed when initial density was varied.
There is currently a great need for solid-state lasers that emit in the infrared. Whether conjugated polymers that emit in the IR can be synthesized is an interesting theoretical challenge. We show that emission in the IR can be achieved in designer polymers in which the effective Coulomb correlation is smaller than that in existing systems. We also show that the structural requirement for having small effective Coulomb correlations is the existence of transverse π-conjugation over a few bonds in addition to longitudinal conjugation with large conjugation lengths.
We present a self-consistent modeling of a 3.4-THz intersubband laser device. An ensemble Monte Carlo simulation, including both carrier-carrier and carrier-phonon scattering, is used to predict current density, population inversion, gain, and electron temperature. However, these two scattering mechanisms alone appear to be insufficient to explain the observed current density. In addition, the insufficient scattering yields a gain that is slightly higher than inferred from experiments. This suggests the presence of a non-negligible scattering mechanism which is unaccounted for in the present calculations.
The capabilities of a fully microscopic approach for calculating the optical material properties of semiconductor lasers are reviewed. Several comparisons between the results of these calculations and measured data demonstrate that the approach yields excellent quantitative agreement with experiment. It is outlined how this approach allows one to predict the optical properties of devices under high-power operating conditions based only on low-intensity photoluminescence (PL) spectra. Examples of the gain, absorption, PL, and linewidth-enhancement-factor spectra in single and multiple quantum-well structures, superlattices, Type II quantum wells, and quantum dots, and for various material systems, are discussed.
Using intense magnetic pressure, a method was developed to launch flyer plates to velocities in excess of 20 km/s. This technique was used to perform plate-impact, shock wave experiments on cryogenic liquid deuterium (LD₂) to examine its high-pressure equation of state (EOS). Using an impedance matching method, Hugoniot measurements were obtained in the pressure range of 22–100 GPa. The results of these experiments disagree with the previously reported Hugoniot measurements of LD₂ in the pressure range above ≈40 GPa, but are in good agreement with first-principles, ab initio models for hydrogen and its isotopes.
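Impedance matching reduces measured wave speeds to a pressure-density state through the Rankine-Hugoniot jump conditions. A minimal sketch with illustrative velocities (the initial density is approximately that of liquid deuterium, but the velocities below are not data from these experiments):

```python
# Rankine-Hugoniot jump conditions give the shocked pressure and density
# from the shock velocity (us) and particle velocity (up):
#   P   = rho0 * us * up
#   rho = rho0 * us / (us - up)
# With rho0 in g/cm^3 and velocities in km/s, P comes out directly in GPa.
rho0 = 0.167  # g/cm^3, approximate initial density of liquid deuterium

def hugoniot_state(us_km_s, up_km_s):
    p_gpa = rho0 * us_km_s * up_km_s
    rho = rho0 * us_km_s / (us_km_s - up_km_s)
    return p_gpa, rho

# Illustrative point (not measured data): us = 20 km/s, up = 15 km/s.
p, rho = hugoniot_state(20.0, 15.0)  # ~50 GPa, within the 22-100 GPa range
```

The "impedance matching" step itself determines up in the sample from the measured flyer velocity and the known Hugoniot of the standard; the relations above then fix the state.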
Two-dimensional proportional detectors, with their faster data collection, larger dynamic range, and richer information content compared to point or linear proportional detectors, are ideal for microdiffraction analysis. The unique capabilities of these detectors, coupled with a rotating anode source, capillary optics, and a variety of accessories, allow for a wide range of applications.
This project is being conducted by Sandia National Laboratories in support of the DARPA Augmented Cognition program. Work commenced in April of 2002. The objective for the DARPA program is to 'extend, by an order of magnitude or more, the information management capacity of the human-computer warfighter.' Initially, emphasis has been placed on detection of an operator's cognitive state so that systems may adapt accordingly (e.g., adjust information throughput to the operator in response to workload). Work conducted by Sandia focuses on development of technologies to infer an operator's ongoing cognitive processes, with specific emphasis on detecting discrepancies between machine state and an operator's ongoing interpretation of events.
The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.
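The binary LS-SVM training step underlying such a multiclass scheme reduces to one linear solve. The sketch below uses the standard Suykens classifier formulation; the RBF kernel, hyperparameters, and the toy two-class data are assumptions for illustration, and the paper's multiclass, multilabel wrapper (e.g., one-vs-rest over DOA sectors) would call this solver repeatedly.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Binary LS-SVM classifier: solve the KKT linear system
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    with Omega_ij = y_i y_j K(x_i, x_j)."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, support values alpha

def lssvm_predict(X, y, b, alpha, Xnew, sigma=1.0):
    K = rbf_kernel(Xnew, X, sigma)
    return np.sign(K @ (alpha * y) + b)

# Toy two-class example standing in for DOA feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, y, b, alpha, X)
print((pred == y).mean())  # fraction correctly classified on training data
```

Unlike a standard SVM, every training point receives a nonzero alpha here, which is what makes the training a single dense linear solve.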
Proposed for publication in the Special Issue of Concurrency and Computation: Practice and Experience - The High Performance Architectural Challenge: Mass Market Versus Proprietary Components.
Low temperature co-fire ceramic (LTCC) materials technology offers a cost-effective and versatile approach to design and manufacture high performance and reliable advanced microelectronic packages (e.g., for wireless communications). A critical issue in manufacturing LTCC microelectronics is the need to precisely and reproducibly control shrinkage on sintering. Master Sintering Curve (MSC) theory has been evaluated and successfully applied as a tool to predict and control LTCC sintering. Dilatometer sintering experiments were designed and completed to characterize the anisotropic sintering behavior of green LTCC materials formed by tape casting. The resultant master sintering curve generated from these data provides a means to predict density as a function of sintering time and temperature. The application of MSC theory to DuPont 951 Green Tape™ will be demonstrated.
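A minimal sketch of how an MSC maps a time-temperature schedule to predicted density: the "work of sintering" integral Θ = ∫ (1/T) exp(-Q/RT) dt collapses different schedules onto one curve, and density is read off a fitted sigmoid in log Θ. The activation energy and sigmoid parameters below are placeholder assumptions; in practice they come from the dilatometer data.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def msc_work(times, temps, Q):
    """Master Sintering Curve integral Theta = ∫ (1/T) exp(-Q/(R T)) dt,
    evaluated by the trapezoidal rule over a planned/measured schedule."""
    integrand = np.exp(-Q / (R * temps)) / temps
    return np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(times))))

def density_from_theta(theta, rho0=0.55, rho_f=0.95, theta50=1e-14, w=1.0):
    """Hypothetical sigmoid mapping log(Theta) -> relative density;
    rho0, rho_f, theta50, and w would be fitted to dilatometer data."""
    x = (np.log(theta + 1e-300) - np.log(theta50)) / w
    return rho0 + (rho_f - rho0) / (1.0 + np.exp(-x))

# Example schedule: ramp at 10 K/min from 300 K to 1123 K, then hold.
t = np.linspace(0.0, 7200.0, 800)
T = np.minimum(300.0 + (10.0 / 60.0) * t, 1123.0)
theta = msc_work(t, T, Q=3.0e5)       # assumed apparent activation energy
rho = density_from_theta(theta)
print(theta[-1], rho[-1])
```

Because Θ is monotone in time, any two schedules reaching the same Θ are predicted to reach the same density, which is what makes the MSC useful for shrinkage control.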
In the past decade, advanced composite repair technology has made great strides in commercial aviation. Extensive testing and analysis, through joint programs between the Sandia Labs FAA Airworthiness Assurance Center and the aviation industry, have proven that composite materials can be used to repair damaged aluminum structure. Successful pilot programs have produced flight performance history to establish the viability and durability of bonded composite patches as a permanent repair on commercial aircraft structures. With this foundation in place, efforts are underway to adapt bonded composite repair technology to civil structures. This paper presents a study in the application of composite patches on large trucks and hydraulic shovels typically used in mining operations. Extreme fatigue, temperature, erosive, and corrosive environments induce an array of equipment damage. The current weld repair techniques for these structures provide a fatigue life that is inferior to that of the original plate, so subsequent cracking must be addressed on a regular basis. It is believed that the use of composite doublers, which do not have brittle fracture problems such as those inherent in welds, will help extend the structure's fatigue life and reduce equipment downtime. Two of the main issues in adapting aircraft composite repairs to civil applications are developing an installation technique for carbon steel structure and accommodating large repairs on extremely thick structures. This paper will focus on the first phase of this study, which evaluated the performance of different mechanical and chemical surface preparation techniques. The factors influencing the durability of composite patches in severe field environments will be discussed along with related laminate design and installation issues.
It has recently been shown that local values of the conventional exchange energy per particle cannot be described by an analytic expansion in the density variation. Yet, it is known that the total exchange-correlation (XC) energy per particle does not show any corresponding nonanalyticity. Indeed, the nonanalyticity is here shown to be an effect of the separation into conventional exchange and correlation. We construct an alternative separation in which the exchange part is made well behaved by screening its long-ranged contributions, and the correlation part is adjusted accordingly. This alternative separation is as valid as the conventional one, and introduces no new approximations to the total XC energy. We demonstrate functional development based on this approach by creating and deploying a local-density-approximation-type XC functional. Hence, this work includes both the theory and the practical calculations needed to provide a starting point for an alternative approach towards improved approximations of the total XC energy.
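One standard way to realize such a screened separation is an error-function split of the Coulomb interaction, shown here purely as an illustration; the specific screening form used in the work may differ.

```latex
\frac{1}{r} \;=\;
\underbrace{\frac{\operatorname{erfc}(\mu r)}{r}}_{\text{short range, kept in exchange}}
\;+\;
\underbrace{\frac{\operatorname{erf}(\mu r)}{r}}_{\text{long range, folded into correlation}},
\qquad
E_{\mathrm{xc}} \;=\; E_{\mathrm{x}}^{\mathrm{SR}} \;+\; \bigl(E_{\mathrm{x}}^{\mathrm{LR}} + E_{\mathrm{c}}\bigr).
```

The screening parameter μ sets the range of the retained exchange; the total E_xc is unchanged because the long-range exchange contribution is simply reassigned to the correlation term, exactly as the alternative separation above requires.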
A 1:4-scale model of a prestressed concrete containment vessel (PCCV), representative of a pressurized water reactor (PWR) plant in Japan, was constructed by NUPEC at Sandia National Laboratories from January 1997 through June 2000. Concurrently, Sandia instrumented the model with nearly 1500 transducers to measure strain, displacement, and forces in the model from prestressing through the pressure testing. The limit state test of the PCCV model, culminating in functional failure (i.e., leakage by cracking and liner tearing), was conducted in September 2000 at Sandia National Laboratories. Inspection of the model and the data after the limit state test made it clear that, other than liner tearing and leakage, structural damage was limited to concrete cracking, and the overall structural response (displacements, rebar and tendon strains, etc.) was only slightly beyond yield. (Global hoop strains at the mid-height of the cylinder only reached 0.4%, approximately twice the yield strain in steel.) In order to provide additional structural response data for comparison with inelastic response conditions, the PCCV model was filled nearly full with water and pressurized to 3.6 times the design pressure, at which point a catastrophic rupture occurred, preceded only briefly by the successive tensile failure of several hoop tendons. This paper summarizes the results of these tests.
The U.S. Department of Energy recently announced the first five grants for the Genomes to Life (GTL) Program. The goal of this program is to ''achieve the most far-reaching of all biological goals: a fundamental, comprehensive, and systematic understanding of life.'' While more information about the program can be found at the GTL website (www.doegenomestolife.org), this paper provides an overview of one of the five GTL projects funded, ''Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling.'' This project is a combined experimental and computational effort emphasizing the development, prototyping, and application of new computational tools and methods to elucidate the biochemical mechanisms of carbon sequestration in Synechococcus Sp., an abundant marine cyanobacterium known to play an important role in the global carbon cycle. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO₂ are important terms in the global environmental response to anthropogenic atmospheric inputs of CO₂ and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. The project includes five subprojects: an experimental investigation, three computational biology efforts, and a fifth that addresses computational infrastructure challenges of relevance to this project and the Genomes to Life program as a whole.
Our experimental effort is designed to provide biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Our computational efforts include coupling molecular simulation methods with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes, and developing a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. Furthermore, given that the ultimate goal of this effort is to develop a systems-level understanding of how the Synechococcus genome affects carbon fixation at the global scale, we will develop and apply a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution. Finally, because the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex and more heterogeneous, and that require coupling to ever-increasing amounts of experimentally obtained data in varying formats, we have also established a companion computational infrastructure to support this effort as well as the Genomes to Life program as a whole.
The Big Hill salt dome, located in southeastern Texas, is home to one of four underground oil-storage facilities managed by the U.S. Department of Energy Strategic Petroleum Reserve (SPR) Program. Sandia National Laboratories, as the geotechnical advisor to the SPR, conducts site-characterization investigations and other longer-term geotechnical and engineering studies in support of the program. This report describes the conversion of two-dimensional geologic interpretations of the Big Hill site into three-dimensional geologic models. The new models include the geometry of the salt dome, the surrounding sedimentary units, mapped faults, and the 14 oil storage caverns at the site. This work provides a realistic and internally consistent geologic model of the Big Hill site that can be used in support of future work.
Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.
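As one concrete example of a verification activity, the observed order of accuracy of a discretization can be checked from solutions on three systematically refined grids, a standard Richardson-style calculation. The manufactured data below are illustrative, not drawn from any particular code.

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy p from three solutions on grids
    refined by a constant factor r:
    p = log(|f_c - f_m| / |f_m - f_f|) / log(r)."""
    return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

# Manufactured example: a quantity converging as f(h) = f_exact + C*h^2,
# i.e., a second-order scheme; the observed order should recover 2.
f = lambda h: 1.0 + 0.5 * h ** 2
p = observed_order(f(0.4), f(0.2), f(0.1))
print(p)  # close to 2
```

If the observed order disagrees with the formal order of the scheme, that signals a coding or convergence problem before any comparison with experimental data (validation) is attempted.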
The Cretaceous strata that fill the San Juan Basin of northwestern New Mexico and southwestern Colorado were shortened in a generally north-south to north northeast-south southwest direction during the Laramide orogeny. This shortening was the result of compression of the strata between southward indentation of the San Juan uplift at the north edge of the basin and northward to northeastward indentation of the Zuni uplift from the south. Right-lateral strike-slip motion was concentrated at the eastern and western margins of the basin to form the Hogback monocline and the Nacimiento uplift at the same time. Small amounts of shear may have occurred along pre-existing basement faults within the basin as well. Vertical extension fractures, striking north-south to north northeast-south southwest (parallel to the Laramide maximum horizontal compressive stress) with local variations, formed in both Mesaverde and Dakota sandstones under this system, and are found in outcrops and in the subsurface. The less-mature Mesaverde sandstones typically contain relatively long and irregular vertical extension fractures, whereas the underlying quartzitic Dakota sandstones contain more numerous, shorter, sub-parallel, closely spaced extension fractures. Conjugate shear fractures in several orientations are also present locally in Dakota strata.
Laboratory measurements provide benchmark data for wavelength-dependent plasma opacities to assist inertial confinement fusion, astrophysics, and atomic physics research. There are several potential benefits to using z-pinch radiation for opacity measurements, including relatively large cm-scale lateral sample sizes and relatively long 3-5 ns experiment durations. These features enhance sample uniformity. The spectrally resolved transmission through a CH-tamped NaBr foil was measured. The z-pinch produced the X-rays for both the heating source and the backlight source. The (50±4) eV foil electron temperature and (3±1) × 10²¹ cm⁻³ foil electron density were determined by analysis of the Na absorption features. LTE and NLTE opacity model calculations of the n=2 to 3, 4 transitions in bromine ionized into the M-shell are in reasonably good agreement with the data.
The properties of solid foams depend on their structure, which usually evolves in the fluid state as gas bubbles expand to form polyhedral cells. The characteristic feature of foam structure, randomly packed cells of different sizes and shapes, is examined in this article by considering soap froth. This material can be modeled as a network of minimal surfaces that divide space into polyhedral cells. The cell-level geometry of random soap froth is calculated with Brakke's Surface Evolver software. The distribution of cell volumes ranges from monodisperse to highly polydisperse. Topological and geometric properties, such as surface area and edge length, of the entire foam and individual cells, are discussed. The shape of struts in solid foams is related to Plateau borders in liquid foams and calculated for different volume fractions of material. The models of soap froth are used as templates to produce finite element models of open-cell foams. Three-dimensional images of open-cell foams obtained with x-ray microtomography allow virtual reconstruction of skeletal structures that compare well with the Surface Evolver simulations of soap-froth geometry.
Because the entire flowfield is generally illuminated in microscopic particle image velocimetry (microPIV), determining the depth over which particles will contribute to the measured velocity is more difficult than in traditional, light-sheet PIV. This paper experimentally and computationally measures the influence that volume illumination, optical parameters, and particle size have on the depth of correlation for typical microPIV systems. First, it is demonstrated mathematically that the relative contribution to the measured velocity at a given distance from the object plane is proportional to the curvature of the local cross-correlation function at that distance. The depth of correlation is then determined in both the physical experiments and in computational simulations by directly measuring the relative contribution to the correlation function of particles located at a known separation from the object plane. These results are then compared with a previously derived analytical model that predicts the depth of correlation from the basic properties of the imaging system and seed particles used for the microPIV measurements. Excellent agreement was obtained between the analytical model and both computational and physical experiments, verifying the accuracy of the previously derived analytical model.
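For context, an analytical depth-of-correlation estimate of the kind referred to above can be evaluated from the imaging parameters alone. The functional form and constants below follow the commonly cited Olsen-Adrian-style model as we recall it and should be treated as assumptions to verify against the original reference; the numerical inputs are typical microPIV values, not the paper's.

```python
import numpy as np

def depth_of_correlation(f_num, M, d_p, wavelength, eps=0.01):
    """Approximate total depth of correlation (both sides of the object
    plane) from lens f-number, magnification M, particle diameter d_p,
    and illumination wavelength; eps is the relative correlation
    contribution cutoff. All lengths in the same unit (e.g., microns).
    Constants quoted from memory as an illustrative assumption."""
    term = (f_num ** 2 * d_p ** 2
            + 5.95 * (M + 1) ** 2 * wavelength ** 2 * f_num ** 4 / M ** 2)
    return 2.0 * np.sqrt((1 - np.sqrt(eps)) / np.sqrt(eps) * term)

# Typical microPIV values: 0.5 um particles, 532 nm light, 20x objective.
z = depth_of_correlation(f_num=1.0, M=20.0, d_p=0.5, wavelength=0.532)
print(z)  # total depth of correlation in microns
```

The key qualitative point survives any uncertainty in the constants: the depth of correlation grows with particle size and f-number, so it can easily exceed the focal depth, which is why volume illumination must be accounted for.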
A probabilistic, transient, three-phase model of chemical transport through human skin has been developed to assess the relative importance of uncertain parameters and processes during chemical exposure assessments and transdermal drug delivery. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes for a hypothetical scenario of chemical transport through the skin. At early times (60 seconds), the sweat ducts provided a significant amount of simulated mass flux into the bloodstream. At longer times (1 hour), diffusion through the stratum corneum became important because of its relatively large surface area. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times.
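A minimal Monte Carlo sketch of the parallel-route flux idea: sample uncertain transport parameters for each penetration route and accumulate the distribution of steady diffusive fluxes. All distributions and numbers below are placeholder assumptions for illustration, not the model's calibrated values, and the real model is transient and multiphase rather than the steady single-phase simplification used here.

```python
import numpy as np

def mc_flux(n=10000, seed=1):
    """Monte Carlo sketch of steady diffusive flux J = f * D * C / L
    through three parallel skin routes, with lognormal uncertainty on
    the diffusivity D. Parameters are purely illustrative."""
    rng = np.random.default_rng(seed)
    routes = {  # name: (log10 mean D [cm^2/s], path length L [cm], area fraction f)
        "stratum_corneum": (-9.0, 1.5e-3, 0.999),
        "sweat_ducts":     (-6.0, 3.0e-2, 5e-4),
        "hair_follicles":  (-7.0, 5.0e-2, 5e-4),
    }
    C = 1.0  # surface concentration, arbitrary units
    results = {}
    for name, (logD, L, frac) in routes.items():
        D = 10.0 ** rng.normal(logD, 0.5, n)   # lognormal diffusivity samples
        J = frac * D * C / L
        results[name] = (J.mean(), np.percentile(J, [5, 95]))
    return results

for route, (mean, ci) in mc_flux().items():
    print(route, mean, ci)
```

Even this crude sketch reproduces the qualitative competition described above: low-area shunt routes can dominate when the bulk route's diffusivity is small, and sensitivity follows from which sampled parameter most moves the flux percentiles.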
Thermally-induced natural convection heat transfer in the annulus between horizontal concentric cylinders has been studied using the commercial code Fluent. The boundary layers are meshed all the way to the wall because forced convection wall functions are not appropriate. Various one- and two-equation turbulence models have been considered. Overall and local heat transfer rates are compared with existing experimental data.
A series of experiments has been performed in the Sandia National Laboratories FLAME facility with a 2-meter diameter JP-8 fuel pool fire. Sandia heat flux gages were employed to measure the incident flux at 8 locations outside the flame. Experiments were repeated to generate sufficient data for accurate confidence interval analysis. Additional sources of error are quantified and presented together with the data. The goal of this paper is to present these results in a way that is useful for validation of computer models that are capable of predicting heat flux from large fires. We anticipate using these data for comparison to validate models within the Advanced Simulation and Computing (ASC, formerly ASCI) codes FUEGO and SYRINX that predict fire dynamics and radiative transport through participating media. We present preliminary comparisons between existing models and experimental results.
An improved model for the gas damping of out-of-plane motion of a microbeam is developed based on the Reynolds equation (RE). A boundary condition for the RE is developed that relates the pressure at the beam perimeter to the beam motion. The two coefficients in this boundary condition are determined from Navier-Stokes (NS) simulations with the slip boundary condition for small slip lengths (relative to the gap height) and from Direct Simulation Monte Carlo (DSMC) molecular gas dynamics simulations for larger slip lengths. This boundary condition significantly improves the accuracy of the RE for cases where the beam width is only slightly greater than the gap height.
RMPP (reliable message passing protocol) is a lightweight transport protocol designed for clusters that provides end-to-end flow control and fault tolerance. This article compares RMPP to TCP, UDP, and "Utopia" on four benchmarks: bandwidth, latency, all-to-all, and communication-computation overlap. The results show that message-based protocols like RMPP have several advantages over TCP, including ease of implementation, support for computation/communication overlap, and low CPU overhead.
The general problem considered is an optimization problem involving product design where some initial data are available and computer simulation is to be used to obtain more information. Resources and system complexity together restrict the number of simulations that can be performed in search of optimal settings for the product parameters. Consequently, the levels of these parameters used in the simulations (the experimental design) must be selected in an efficient way. We describe an algorithmic 'response-modeling' approach for performing this selection. The algorithm is illustrated using a rolamite design application. We provide (as examples) optimal one-, two-, and three-point experimental designs for the rolamite computational analyses.
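The report's response-modeling algorithm is not reproduced here, but a simple greedy maximin criterion illustrates the kind of efficient design-point selection involved: each new simulation setting is placed as far as possible from the settings already chosen. The two-parameter grid and the three-point design size are arbitrary stand-ins for the rolamite parameters.

```python
import numpy as np

def maximin_design(candidates, n_points, seed=0):
    """Greedy maximin space-filling selection of simulation settings:
    start from a random candidate, then repeatedly add the candidate
    whose distance to the nearest already-chosen point is largest."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(candidates)))]
    while len(chosen) < n_points:
        # Distance from every candidate to its nearest chosen point.
        d = np.min(np.linalg.norm(
            candidates[:, None, :] - candidates[chosen][None, :, :],
            axis=-1), axis=1)
        d[chosen] = -1.0  # never re-pick a chosen point
        chosen.append(int(np.argmax(d)))
    return candidates[chosen]

# Two design parameters scaled to [0, 1]^2; pick a 3-point design.
grid = np.array([[x, y] for x in np.linspace(0, 1, 11)
                 for y in np.linspace(0, 1, 11)])
print(maximin_design(grid, 3))
```

A response-modeling approach would go further, weighting candidate points by their expected improvement to a fitted surrogate rather than by geometry alone; the greedy geometric rule above is only the space-filling baseline.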
In many micro-scale fluid dynamics problems, molecular-level processes can control the interfacial energy and viscoelastic properties at a liquid-solid interface. This leads to flow behavior that is very different from that of similar fluid dynamics problems at the macro-scale. Presently, continuum modeling fails to capture this behavior. Molecular dynamics simulations have been applied to investigate these complex fluid-wall interactions at the nano-scale. Results show that the influence of the wall crystal lattice orientation on the fluid-wall interactions can be very important. To address problems involving interactions of multiple length scales, a coupled atomistic-continuum model has been developed and applied to analyze flow in channels with atomically smooth walls. The present coupling strategy uses the molecular dynamics technique to probe the non-equilibrium flow near the channel walls and applies constraints to the fluid particle motion, which is coupled to the continuum flow modeling in the interior region. We have applied this new methodology to investigate Couette flow in micro-channels.
Improvements have been made at TRIUMF to permit higher proton intensities of up to 10¹⁰ cm⁻² s⁻¹ over the energy range 20-500 MeV. This improved capability enables the study of displacement damage effects that require higher-fluence irradiations. In addition, a high-energy neutron irradiation capability has been developed for terrestrial cosmic ray soft error rate (SER) characterization of integrated circuits. The neutron beam characteristics of this facility are similar to those currently available at the Los Alamos National Laboratory WNR test facility. SER data measured on several SRAMs using the TRIUMF neutron beam are in good agreement with the results obtained on the same devices using the WNR facility. The TRIUMF neutron beam also contains thermal neutrons that can be easily removed by a sheet of cadmium. The ability to choose whether thermal neutrons are present is a useful capability not available at the WNR.
This document provides the Technical Safety Requirements (TSR) for the Sandia National Laboratories Gamma Irradiation Facility (GIF). The TSR is a compilation of requirements that define the conditions, the safe boundaries, and the administrative controls necessary to ensure the safe operation of a nuclear facility and to reduce the potential risk to the public and facility workers from uncontrolled releases of radioactive or other hazardous materials. These requirements constitute an agreement between DOE and Sandia National Laboratories management regarding the safe operation of the Gamma Irradiation Facility.
Our research plan is two-fold: first, we have extended our biological model of bottom-up visual attention with several recently characterized cortical interactions that are known to be responsible for human performance in certain visual tasks, and second, we have used an eyetracking system for collecting human eye movement data, from which we can calibrate the new additions to the model. We acquired an infrared video eyetracking system, which we are using to record observers' eye position with high temporal (120 Hz) and spatial (±0.25 deg visual angle) accuracy. We collected eye movement scan paths from observers as they viewed computer-generated fractals, rural and urban outdoor scenes, and overhead satellite imagery. We found that, with very high statistical significance (10 to 12 z-scores), the saliency model accurately predicts locations that human observers will find interesting. We adapted our model of short-range interactions among overlapping spatial orientation channels to better predict bottom-up stimulus-driven attention in humans. This enhanced model is even more accurate in its predictions of human observers' eye movements. We are currently incorporating biologically plausible long-range interactions among orientation channels, which will aid in the detection of elongated contours such as rivers, roads, airstrips, and other man-made structures.
Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a ''stopping rule'' to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent.
The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of ''Where to sample?'' and ''When to stop?'' and are capable of running in near real time to support iterative site characterization campaigns.
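A minimal sketch of the MPEV calculation and stopping rule: under an assumed spatial covariance model, the prediction-error covariance on a reference grid can be computed for any candidate sample layout before any data are taken, and its mean eigenvalue (the trace divided by the grid size) is the MPEV. The exponential covariance, correlation length, and transect layout below are illustrative assumptions, not the report's site models.

```python
import numpy as np

def prediction_error_cov(grid, samples, corr_len=2.0, nugget=1e-6):
    """Simple-kriging prediction-error covariance on a reference grid,
    assuming a unit-variance exponential covariance model."""
    def cov(A, B):
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return np.exp(-d / corr_len)
    Kgg = cov(grid, grid)
    Kgs = cov(grid, samples)
    Kss = cov(samples, samples) + nugget * np.eye(len(samples))
    return Kgg - Kgs @ np.linalg.solve(Kss, Kgs.T)

def mpev(err_cov):
    """Mean prediction error variance = mean eigenvalue = trace / n."""
    return np.trace(err_cov) / err_cov.shape[0]

grid = np.array([[x, y] for x in range(10) for y in range(10)], float)
transect = np.array([[x, 4.5] for x in np.linspace(0, 9, 10)])  # straight transect

before = mpev(prediction_error_cov(grid, grid[:1]))  # nearly unsampled baseline
after = mpev(prediction_error_cov(grid, transect))
print(1 - after / before)  # fractional MPEV reduction -> compare to stopping threshold
```

Sampling would stop once the fractional reduction falls below a chosen threshold; minimizing the product of the eigenvalues instead of their sum only changes the `mpev` summary, not the error-covariance computation.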
Solidification is an important aspect of welding, brazing, soldering, LENS fabrication, and casting. The current trend toward utilizing large-scale process simulations and materials response models for simulation-based engineering is driving the development of new modeling techniques. However, the effective utilization of these models is, in many cases, limited by a lack of fundamental understanding of the physical processes and interactions involved. In addition, experimental validation of model predictions is required. We have developed new and expanded experimental techniques, particularly those needed for in-situ measurement of the morphological and kinetic features of the solidification process. The new high-speed, high-resolution video techniques and data extraction methods developed in this work have been used to identify several unexpected features of the solidification process, including the observation that the solidification front is often far more dynamic than previously thought. In order to demonstrate the utility of the video techniques, correlations have been made between the in-situ observations and the final solidification microstructure. Experimental methods for determination of the solidification velocity in highly dynamic pulsed laser welds have been developed, implemented, and used to validate and refine laser welding models. Using post-solidification metallographic techniques, we have discovered a previously unreported orientation relationship between ferrite and austenite in the Fe-Cr-Ni alloy system, and have characterized the conditions under which this new relationship develops. Taken together, the work has expanded both our understanding of, and our ability to characterize, solidification phenomena in complex alloy systems and processes.
This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin, probability boxes, from empirical information or theoretical knowledge. The report includes a review of aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.
The microcombustor described in this report was developed primarily for thermal management in microsystems and as a platform for micro-scale flame ionization detectors (microFID). The microcombustor consists of a thin-film heater/thermal sensor patterned on a thin insulating membrane that is suspended from its edges over a silicon frame. This micromachined design has very low heat capacity and thermal conductivity and is an ideal platform for heating catalytic materials placed on its surface. Catalysts play an important role in this design since they provide a convenient surface-based method for flame ignition and stabilization. The free-standing platform used in the microcombustor mitigates the large heat losses arising from the large surface-to-volume ratios typical of the microdomain and, together with the insulating platform, permits combustion on the microscale. Surface oxidation, flame ignition, and flame stabilization have been demonstrated with this design for hydrogen and hydrocarbon fuels premixed with air. Unoptimized heat densities of 38 mW/mm² have been achieved for the purpose of heating microsystems. Importantly, the microcombustor design expands the limits of flammability (LoF) as compared with conventional diffusion flames; an unoptimized LoF of 1-32% for natural gas in air was demonstrated with the microcombustor, whereas 4-16% is conventionally observed. The LoFs for hydrogen, methane, propane, and ethane are likewise expanded. This feature will permit the use of this technology in many portable applications where reduced temperatures, lean fuel/air mixes, or low gas flows are required. By coupling miniature electrodes and an electrometer circuit with the microcombustor, the first-ever demonstration of a microFID utilizing premixed fuel and a catalytically-stabilized flame has been performed; the detection of ~1-3% of ethane in hydrogen/air is shown.
This report describes work done to develop the microcombustor for microsystem heating and flame ionization detection and includes a description of modeling and simulation performed to understand the basic operation of this device. Ancillary research on the use of the microcombustor in calorimetric gas sensing is also described where appropriate.
The purpose of microstructural control is to optimize materials properties. To that end, we have developed sophisticated and successful computational models of both microstructural evolution and mechanical response. However, coupling these models to quantitatively predict the properties of a given microstructure poses a challenge. This problem arises because most continuum response models, such as finite element, finite volume, or material point methods, do not incorporate a real length scale. Thus, two self-similar polycrystals have identical mechanical properties regardless of grain size, in conflict with theory and observations. In this project, we took a tiered-risk approach to incorporating microstructure and its resultant length scales in mechanical response simulations. Techniques considered include low-risk, low-benefit methods as well as higher-risk, higher-payoff methods. Methods studied include a constitutive response model with a local length-scale parameter, a power-law hardening-rate gradient near grain boundaries, a local Voce hardening law, and strain-gradient polycrystal plasticity. These techniques were validated on a variety of systems for which theoretical analyses and/or experimental data exist. The results may be used to generate improved constitutive models that explicitly depend upon microstructure and to provide insight into microstructural deformation and failure processes. Furthermore, because mechanical state drives microstructural evolution, a strain-enhanced grain growth model was coupled with the mechanical response simulations. The coupled model predicts both properties as a function of microstructure and microstructural development as a function of processing conditions.
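As one concrete example of the hardening laws mentioned above, a Voce saturating law has a simple closed form. The sketch below is illustrative only; the parameter values are invented, not taken from the project:

```python
import math

def voce_stress(eps, s0=100.0, s_sat=250.0, theta0=1000.0):
    """Voce saturating hardening law (illustrative MPa-scale parameters):
        sigma(eps) = s_sat - (s_sat - s0) * exp(-theta0 * eps / (s_sat - s0))
    so sigma(0) = s0, the initial hardening rate is theta0, and the flow
    stress saturates at s_sat as plastic strain accumulates."""
    return s_sat - (s_sat - s0) * math.exp(-theta0 * eps / (s_sat - s0))

# Flow stress at a few plastic strains: monotone rise toward saturation.
flow = [voce_stress(e) for e in (0.0, 0.05, 0.2, 1.0)]
```

A "local" Voce law of the kind studied would make parameters such as s0 or theta0 functions of position relative to microstructural features (e.g., distance to a grain boundary), which is what introduces a real length scale.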
A laser safety and hazard analysis was performed for the ARES laser system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for ''Safe Use of Lasers'', and the 2000 version of the ANSI Standard Z136.6, for ''Safe Use of Lasers Outdoors''. The ARES laser system is a van/truck-based mobile platform, which is used to perform laser interaction experiments and tests at various national test sites.
A laser safety and hazard analysis was performed for the AURA laser system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for ''Safe Use of Lasers'', and the 2000 version of the ANSI Standard Z136.6, for ''Safe Use of Lasers Outdoors''. The trailer-based AURA laser system is a mobile platform, which is used to perform laser interaction experiments and tests at various national test sites. The trailer (B70) based AURA laser system is generally operated on the United States Air Force Starfire Optical Range (SOR) at Kirtland Air Force Base (KAFB), New Mexico. The laser is used to perform laser interaction testing inside the laser trailer as well as outside the trailer at target sites located at various distances from the exit telescope. In order to protect personnel who work inside the Nominal Hazard Zone (NHZ) from hazardous laser emission exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength (or wavelength band), to calculate the appropriate minimum Optical Density (OD{sub min}) of the laser safety eyewear used by authorized personnel, and to calculate the Nominal Ocular Hazard Distance (NOHD) to protect unauthorized personnel who may violate the boundaries of the control area and enter the laser's NHZ.
Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.
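The two-level Newton idea can be sketched on a toy problem. Everything here is invented for illustration: a scalar nonlinear equation stands in for the PDE device, and a single source-resistor branch stands in for the circuit. The structure, however, is the one named above: an outer Newton iteration on the circuit unknown, with the device equations re-solved to convergence at every outer iterate.

```python
import math

def solve_device(V, tol=1e-12):
    """Inner Newton loop: resolve the toy 'device' equation
    g(u, V) = u - tanh(V - u) = 0 for the internal state u
    at a fixed terminal voltage V."""
    u = 0.0
    for _ in range(100):
        g = u - math.tanh(V - u)
        dg = 1.0 + (1.0 - math.tanh(V - u) ** 2)   # dg/du
        step = g / dg
        u -= step
        if abs(step) < tol:
            break
    return u

def solve_circuit(Vs=1.0, R=2.0, tol=1e-10):
    """Outer Newton loop on the circuit node equation
    f(V) = (Vs - V)/R - I_dev(V) = 0, where the device current
    (here simply u) comes from a fully converged inner solve."""
    def f(V):
        return (Vs - V) / R - solve_device(V)
    V, h = 0.0, 1e-7
    for _ in range(100):
        fv = f(V)
        df = (f(V + h) - fv) / h       # finite-difference outer Jacobian
        step = fv / df
        V -= step
        if abs(step) < tol:
            break
    return V

V = solve_circuit()
residual = (1.0 - V) / 2.0 - solve_device(V)
```

The decoupling lets each level use numerics tuned to its own problem (the inner loop could be a full PDE solve with its own tolerances), which is the attraction of the two-level structure over a monolithic Newton on all unknowns at once.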
The effects of photocurrents in nuclear weapons induced by proximal nuclear detonations are well known and remain a serious hostile environment threat for the US stockpile. This report describes the final results of an LDRD study of the physical phenomena underlying prompt photocurrents in microelectronic devices and circuits. The goals of this project were to obtain an improved understanding of these phenomena, and to incorporate improved models of photocurrent effects into simulation codes to assist designers in meeting hostile radiation requirements with minimum build and test cycles. We have also developed a new capability on the ion microbeam accelerator in Sandia's Ion Beam Materials Research Laboratory (the Transient Radiation Microscope, or TRM) to supply ionizing radiation in selected micro-regions of a device. The dose rates achieved in this new facility approach those possible with conventional large-scale dose-rate sources at Sandia such as HERMES III and Saturn. It is now possible to test the physics and models in device physics simulators such as DaVinci in ways not previously possible. We found that the physical models in DaVinci are well suited to calculating prompt photocurrents in microelectronic devices, and that the TRM can reproduce results from conventional large-scale dose-rate sources in devices where the charge-collection depth is less than the range of the ions used in the TRM.
We are currently exploring and developing a new statistical-mechanics approach to designing self-organizing and self-assembling systems that is unique to SNL. The primary application target for this ongoing research is the development of new kinds of nanoscale components and hardware systems. However, a surprising out-of-the-box connection to software development is emerging from this effort. With some amount of modification, the collective-behavior physics ideas for enabling simple hardware components to self-organize may also provide design methods for a new class of software modules. Large numbers of these relatively small software components, if designed correctly, would be able to self-assemble into a variety of much larger and more complex software systems. This self-organization process would be steered to yield desired sets of system properties. If successful, this would provide a radical (disruptive technology) path to developing complex, high-reliability software unlike any known today. The special work needed to evaluate this high-risk, high-payoff opportunity does not fit well into existing SNL funding categories, as it is well outside of the mainstreams of both conventional software development practices and the nanoscience research area that spawned it. We proposed a small LDRD effort aimed at appropriately generalizing these collective-behavior physics concepts and testing their feasibility for achieving the self-organization of large software systems. Our favorable results motivate an expanded effort to fully develop self-organizing software as a new technology.
This document was prepared to support the Department of Energy's compliance with Sections 106 and 110 of the National Historic Preservation Act. It provides an overview of the historic context in which Sandia National Laboratories/California was created and developed. Establishing such a context allows for a reasonable and reasoned historical assessment of Sandia National Laboratories/California properties. The Cold War arms race provides the primary historical context for the SNL/CA built environment.
We present a technique for determining non-linear resistances, capacitances, and inductances from ring-down data in a non-linear RLC circuit. Although the governing differential equations are non-linear, we are able to solve this problem using linear least squares, without doing any sort of non-linear iteration.
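The key observation is that even when the circuit elements are nonlinear, the unknown coefficients enter the ring-down equation linearly, so an ordinary least-squares fit recovers them with no iteration. The sketch below illustrates this on an invented model with a cubic damping term (all coefficient values, the model form, and the sampling parameters are made up for illustration; this is not the paper's circuit):

```python
import numpy as np

def simulate_ringdown(a1, a2, a3, q0, dt, n):
    """RK4 integration of q'' = -a1*q' - a2*q'**3 - a3*q, a toy series-RLC
    ring-down with a cubic (velocity-dependent) damping term."""
    def f(y):
        q, v = y
        return np.array([v, -a1 * v - a2 * v**3 - a3 * q])
    y, out = np.array([q0, 0.0]), np.empty((n, 2))
    for k in range(n):
        out[k] = y
        k1 = f(y); k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return out

# Synthesize "measured" ring-down data from known coefficients (invented).
dt, n, true = 1e-3, 5000, (0.8, 0.05, 400.0)
traj = simulate_ringdown(*true, q0=0.5, dt=dt, n=n)
q, v = traj[:, 0], traj[:, 1]
acc = np.gradient(v, dt)                 # numerical q'' from the samples

# The model is nonlinear in (q, v) but LINEAR in the unknowns (a1, a2, a3),
# so ordinary linear least squares recovers them in one shot.
A = np.column_stack([-v, -(v**3), -q])
(a1, a2, a3), *_ = np.linalg.lstsq(A, acc, rcond=None)
```

The fitted coefficients land very close to the true values, limited only by the accuracy of the numerical derivative.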
Military test and training ranges operate with live fire engagements to provide realism important to the maintenance of key tactical skills. Ordnance detonations during these operations typically produce minute residues of parent explosive chemical compounds. Occasional low order detonations also disperse solid phase energetic material onto the surface soil. These detonation remnants are implicated in chemical contamination impacts to groundwater on a limited set of ranges where environmental characterization projects have occurred. Key questions arise regarding how these residues and the environmental conditions (e.g. weather and geostratigraphy) contribute to groundwater pollution impacts. This report documents interim results of experimental work evaluating mass transfer processes from solid phase energetics to soil pore water. The experimental work is used as a basis to formulate a mass transfer numerical model, which has been incorporated into the porous media simulation code T2TNT. Experimental work to date with Composition B explosive has shown that column tests typically produce effluents near the temperature dependent solubility limits for RDX and TNT. The influence of water flow rate, temperature, porous media saturation and mass loading is documented. The mass transfer model formulation uses a mass transfer coefficient and surface area function and shows good agreement with the experimental data. Continued experimental work is necessary to evaluate solid phase particle size and 2-dimensional effects, and actual low order detonation debris. Simulation model improvements will continue leading to a capability to complete screening assessments of the impacts of military range operations on groundwater quality.
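A minimal sketch of a mass-transfer formulation of the kind described above: first-order dissolution governed by a mass-transfer coefficient and surface area, feeding a well-mixed flowing pore-water cell. All parameter names and values below are invented for illustration; they are not Composition B data or the report's actual model.

```python
import math

def effluent_conc(k, A, Cs, Q, V, t):
    """Well-mixed pore-water cell with a dissolving solid:
        V * dC/dt = k*A*(Cs - C) - Q*C
    where k is the mass-transfer coefficient, A the solid surface area,
    Cs the solubility limit, Q the water flow rate, and V the water volume.
    Returns the analytic solution with C(0) = 0."""
    lam = (k * A + Q) / V
    C_inf = k * A * Cs / (k * A + Q)     # steady effluent concentration
    return C_inf * (1.0 - math.exp(-lam * t))

# Invented parameters chosen so dissolution outpaces advection (k*A >> Q),
# reproducing the qualitative observation of near-solubility effluents.
k, A, Cs, Q, V = 1e-4, 50.0, 60.0, 1e-4, 1e-2
C_steady = effluent_conc(k, A, Cs, Q, V, t=1e6)
```

When k*A dominates Q the steady effluent sits just below the solubility limit Cs, consistent with the column-test observation of effluents near the temperature-dependent solubility limits; larger flow rates or smaller surface areas pull it below saturation.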
The GEO-SEQ Project is investigating methods for geological sequestration of CO{sub 2}. This project, which is directed by LBNL and includes a number of other industrial, university, and National Laboratory partners, is evaluating computer simulation models including TOUGH2. One of the problems to be considered is Enhanced Coal Bed Methane (ECBM) recovery. In this scenario, CO{sub 2} is pumped into methane-rich coal beds. Due to adsorption processes, the CO{sub 2} is sorbed onto the coal, which displaces the previously sorbed methane (CH{sub 4}). The released methane can then be recovered, at least partially offsetting the cost of CO{sub 2} sequestration. Modifications have been made to the EOS7R equation of state in TOUGH2 to include the extended Langmuir isotherm for sorbing gases, including the change in porosity associated with the sorbed gas mass. Comparison to hand calculations for pure gas and binary mixtures shows very good agreement. Application to a CO{sub 2} well injection problem given by Law et al. (2002) shows good agreement considering the differences in the equations of state.
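The extended Langmuir isotherm referred to above has a simple closed form, sketched below. The CO{sub 2}/CH{sub 4} parameter values are invented for illustration, not coal-specific data from the project:

```python
import numpy as np

def extended_langmuir(p, q_max, b):
    """Extended Langmuir isotherm for a sorbing gas mixture:
        q_i = q_max_i * b_i * p_i / (1 + sum_j b_j * p_j)
    where p_i are partial pressures, q_max_i the Langmuir volumes, and
    b_i the Langmuir pressure constants.  Each component competes for the
    same sorption sites through the shared denominator."""
    p, q_max, b = map(np.asarray, (p, q_max, b))
    return q_max * b * p / (1.0 + np.sum(b * p))

# Invented CO2/CH4 parameters: CO2 both sorbs more strongly (larger b) and
# has the larger capacity, so at equal partial pressure it out-competes
# methane for sorption sites, which is the ECBM displacement mechanism.
q_max = np.array([30.0, 15.0])   # [CO2, CH4] Langmuir volumes
b = np.array([0.5, 0.2])         # [CO2, CH4] pressure constants, 1/MPa
q = extended_langmuir([2.0, 2.0], q_max, b)   # equal 2 MPa partial pressures
```

With a single component present the expression reduces to the ordinary Langmuir isotherm, which is a convenient check against hand calculations like those mentioned above.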
Rapid detection and identification of bacteria and other pathogens is important for many civilian and military applications. The taxonomic significance, or the ability to differentiate one microorganism from another, using fatty acid content and distribution is well known. For analysis fatty acids are usually converted to fatty acid methyl esters (FAMEs). Bench-top methods are commercially available and recent publications have demonstrated that FAMEs can be obtained from whole bacterial cells in an in situ single-step pyrolysis/methylation analysis. This report documents the progress made during a three year Laboratory Directed Research and Development (LDRD) program funded to investigate the use of microfabricated components (developed for other sensing applications) for the rapid identification of bioorganisms based upon pyrolysis and FAME analysis. Components investigated include a micropyrolyzer, a microGC, and a surface acoustic wave (SAW) array detector. Results demonstrate that the micropyrolyzer can pyrolyze whole cell bacteria samples using only milliwatts of power to produce FAMEs from bacterial samples. The microGC is shown to separate FAMEs of biological interest, and the SAW array is shown to detect volatile FAMEs. Results for each component and their capabilities and limitations are presented and discussed. This project has produced the first published work showing successful pyrolysis/methylation of fatty acids and related analytes using a microfabricated pyrolysis device.
The Defense Advanced Research Projects Agency (DARPA) has recognized that biological and chemical toxins are a real and growing threat to troops, civilians, and the ecosystem. The Explosives Components Facility at Sandia National Laboratories (SNL) has been working with the University of Montana, the Southwest Research Institute, and other agencies to evaluate the feasibility of directing honeybees to specific targets, and for environmental sampling of biological and chemical ''agents of harm''. Recent work has focused on finding and locating buried landmines and unexploded ordnance (UXO). Tests have demonstrated that honeybees can be trained to efficiently and accurately locate explosive signatures in the environment. However, it is difficult to visually track the bees and determine precisely where the targets are located. Video equipment is not practical due to its limited resolution and range. In addition, it is often unsafe to install such equipment in a field. A technology is needed to provide investigators with the standoff capability to track bees and accurately map the location of the suspected targets. This report documents Light Detection and Ranging (LIDAR) tests that were performed by SNL. These tests have shown that a LIDAR system can be used to track honeybees. The LIDAR system can provide both the range and coordinates of the target so that the location of buried munitions can be accurately mapped for subsequent removal.
This paper presents a description of the SIERRA Framework Version 3 parallel transfer operators. The high-level design including object interrelationships, as well as requirements for their use, is discussed. Transfer operators are used for moving field data from one computational mesh to another. The need for this service spans many different applications. The most common application is to enable loose coupling of multiple physics modules, such as for the coupling of a quasi-statics analysis with a thermal analysis. The SIERRA transfer operators support the transfer of nodal and element fields between meshes of different, arbitrary parallel decompositions. Also supplied are ''copy'' transfer operators for efficient transfer of fields between identical meshes. A ''copy'' transfer operator is also implemented for constraint objects. Each of these transfer operators is described. Also, two different parallel algorithms are presented for handling the geometric misalignment between different parallel-distributed meshes.
In a Synthetic Aperture Radar (SAR) system, the purpose of the receiver is to process incoming radar signals in order to obtain target information and ultimately construct an image of the target area. Incoming raw signals are usually in the microwave frequency range and are typically processed with analog circuitry, requiring hardware designed specifically for the desired signal processing operations. A more flexible approach is to process the signals in the digital domain. Recent advances in analog-to-digital converter (ADC) and Field Programmable Gate Array (FPGA) technology allow direct digital processing of wideband intermediate frequency (IF) signals. Modern ADCs can achieve sampling rates in excess of 1 GS/s, and modern FPGAs can contain millions of logic gates operating at frequencies over 100 MHz. The combination of these technologies is necessary to implement a digital radar receiver capable of performing high speed, sophisticated and scalable DSP designs that are not possible with analog systems. Additionally, FPGA technology allows designs to be modified as the design parameters change without the need for redesigning circuit boards, potentially saving both time and money. For typical radar receivers, there is a need for operation at multiple ranges, which requires filters with multiple decimation rates, i.e., multiple bandwidths. In previous radar receivers, variable decimation was implemented by switching between SAW filters to achieve an acceptable filter configuration. While this method works, it is rather ''brute force'' because it duplicates a large amount of hardware and requires a new filter to be added for each IF bandwidth. By implementing the filter digitally in FPGAs, a larger number of decimation values (and consequently a larger number of bandwidths) can be implemented with no need for extra components. High performance, wide bandwidth radar systems also place high demands on the DSP throughput of a given digital receiver.
In such applications, the maximum clock frequency of a given FPGA is not adequate to support the required data throughput. This problem can be overcome by employing a parallel implementation of the pane filter. The parallel pane filter uses a polyphase parallelization technique to achieve an aggregate data rate which is twice that of the FPGA clock frequency. This is achieved at the expense of roughly doubling the FPGA resource usage.
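The polyphase parallelization idea can be illustrated at small scale: an FIR decimator's tap set is split into M branch sub-filters, each of which processes a stride-M slice of the input and therefore runs at the low output rate; summing the branch outputs reproduces direct filter-then-downsample exactly. The sketch below (a software model, not the report's FPGA implementation, with arbitrary invented taps and data) verifies that equivalence:

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate-by-M FIR filter in polyphase form: the taps h are split into
    M sub-filters h[k::M], each convolved with a stride-M slice of the input,
    so every branch operates at the low (decimated) rate."""
    h = np.concatenate([h, np.zeros((-len(h)) % M)])   # pad taps to multiple of M
    branches = []
    for k in range(M):
        hk = h[k::M]                                   # k-th polyphase component
        # Branch input u_k[s] = x[s*M - k], with an implicit leading zero for k > 0.
        uk = x[::M] if k == 0 else np.concatenate([[0.0], x[M - k::M]])
        branches.append(np.convolve(uk, hk))
    n = min(len(bk) for bk in branches)
    return sum(bk[:n] for bk in branches)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # stand-in for sampled IF data
h = rng.standard_normal(31)           # arbitrary FIR taps (illustrative)
y_poly = polyphase_decimate(x, h, M=2)
y_ref = np.convolve(x, h)[::2]        # direct filter-then-downsample
n = min(len(y_poly), len(y_ref))
err = float(np.max(np.abs(y_poly[:n] - y_ref[:n])))
```

In hardware the same decomposition is run the other way: the M branches execute concurrently, so the aggregate sample rate is M times the clock rate of any one branch, at the cost of roughly M times the filter resources, matching the factor-of-two trade-off described above.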
The Nuclear Regulatory Commission (NRC) approves new package designs for shipping fissile quantities of UF{sub 6}. Currently there are three packages approved by the NRC for domestic shipments of fissile quantities of UF{sub 6}: NCI-21PF-1; UX-30; and ESP30X. For approval by the NRC, packages must be subjected to a sequence of physical tests to simulate transportation accident conditions as described in 10 CFR Part 71. The primary objective of this project was to relate the conditions experienced by these packages in the tests described in 10 CFR Part 71 to conditions potentially encountered in actual accidents and to estimate the probabilities of such accidents. Comparison of the effects of actual accident conditions to 10 CFR Part 71 tests was achieved by means of computer modeling of structural effects on the packages due to impacts with actual surfaces, and thermal effects resulting from test and other fire scenarios. In addition, the likelihood of encountering bodies of water or sufficient rainfall to cause complete or partial immersion during transport over representative truck routes was assessed. Modeled effects, and their associated probabilities, were combined with existing event-tree data, plus accident rates and other characteristics gathered from representative routes, to derive generalized probabilities of encountering accident conditions comparable to the 10 CFR Part 71 conditions. This analysis suggests that the regulatory conditions are unlikely to be exceeded in real accidents, i.e., the likelihood of UF{sub 6} being dispersed as a result of accident impact or fire is small. Moreover, given that an accident has occurred, exposure to water by fire-fighting, heavy rain or submersion in a body of water is even less probable, by factors ranging from 0.5 to 8E-6.