High-resolution finite volume methods for solving systems of conservation laws have been widely embraced in research areas ranging from astrophysics to geophysics and aero-thermodynamics. These methods are typically at least second-order accurate in space and time, deliver non-oscillatory solutions in the presence of discontinuities such as shocks, and introduce minimal dispersive and diffusive effects. High-resolution methods promise to provide greatly enhanced solution methods for Sandia's mainstream shock hydrodynamics and compressible flow applications, and they admit the possibility of a generalized framework for treating multi-physics problems such as the coupled hydrodynamics, electro-magnetics and radiative transport found in Z pinch physics. In this work, we describe initial efforts to develop a generalized 'black-box' conservation law framework based on modern high-resolution methods and implemented in an object-oriented software framework. The framework is based on the solution of systems of general non-linear hyperbolic conservation laws using Godunov-type central schemes. In our initial efforts, we have focused on central or central-upwind schemes that can be implemented with only knowledge of the physical flux function and the minimal/maximal eigenvalues of the Jacobian of the flux functions, i.e., they do not rely on extensive Riemann decompositions. Initial experimentation with high-resolution central schemes suggests that contact discontinuities, with their concomitant linearly degenerate eigenvalues of the flux Jacobian, do not pose algorithmic difficulties. However, central schemes can produce significant smearing of contact discontinuities and excessive dissipation for rotational flows. Comparisons between 'black-box' central schemes and the piecewise parabolic method (PPM), which relies heavily on a Riemann decomposition, show that roughly equivalent accuracy can be achieved for the same computational cost with both methods. However, PPM clearly outperforms the central schemes in terms of accuracy at a given grid resolution, at the cost of additional complexity in the numerical flux functions. Overall, we have observed that the finite volume schemes, implemented within a well-designed framework, are extremely efficient with (potentially) very low memory storage. Finally, we have found by computational experiment that second- and third-order strong-stability-preserving (SSP) time integration methods with the number of stages greater than the order provide a usefully enhanced stability region. However, we observe that non-SSP and non-optimal SSP schemes with SSP factors less than one can still be very useful if used with time steps below the standard CFL limit. The 'well-designed' integration schemes that we have examined appear to perform well in all instances where the time step is maintained below the standard physical CFL limit.
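To make the 'black-box' property concrete, the sketch below implements a first-order central-upwind (Rusanov/local Lax-Friedrichs) update driven by a two-stage SSP integrator, applied to the scalar Burgers equation on a periodic domain. It is a minimal Python illustration of the class of schemes discussed here, not the framework's actual implementation; note that the only problem-specific inputs are the physical flux and a bound on the maximal wavespeed.

```python
import numpy as np

def rusanov_step(u, flux, max_wavespeed, dx, dt):
    """One forward-Euler step with the Rusanov (local Lax-Friedrichs)
    central-upwind flux: needs only the physical flux and a bound on
    the largest eigenvalue of the flux Jacobian, no Riemann solver."""
    ul, ur = u, np.roll(u, -1)                 # states at each interface (periodic)
    a = np.maximum(max_wavespeed(ul), max_wavespeed(ur))
    # Average of physical fluxes plus dissipation scaled by the wavespeed.
    f = 0.5 * (flux(ul) + flux(ur)) - 0.5 * a * (ur - ul)
    return u - dt / dx * (f - np.roll(f, 1))   # conservative finite volume update

def ssp_rk2(u, euler_step):
    """Optimal two-stage, second-order SSP integrator: a convex
    combination of forward Euler steps."""
    u1 = euler_step(u)
    return 0.5 * u + 0.5 * euler_step(u1)

# Example: inviscid Burgers' equation, u_t + (u^2/2)_x = 0.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2.0 * np.pi * x)
dx = x[1] - x[0]
for _ in range(100):
    dt = 0.4 * dx / max(1e-12, np.abs(u).max())   # stay below the CFL limit
    u = ssp_rk2(u, lambda v: rusanov_step(v, lambda w: 0.5 * w**2, np.abs, dx, dt))
```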
Experimental data are compiled and reviewed for aerosol particle releases due to combustion of plutonium (Pu) in air. The aerosol release fraction (ARF), which is the mass of Pu aerosolized divided by the mass of Pu oxidized, is dependent on whether the oxidizing Pu sample is static (i.e., stationary) or dynamic (i.e., falling in air). ARF data are compiled for sample masses ranging from 30 mg to 1770 g, oxidizing temperatures varying from 113 °C to ≈1000 °C, and air flow rates varying from 0.05 m/s to 5.25 m/s. The measured ARFs range over five orders of magnitude. The maximum observed static ARF is 2.4 × 10⁻³, and this is the recommended ARF for safety studies of static Pu combustion.
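In equation form, the defining ratio and the bounding value recommended above are:

```latex
\mathrm{ARF} \;=\; \frac{m_{\mathrm{Pu,\ aerosolized}}}{m_{\mathrm{Pu,\ oxidized}}},
\qquad
\mathrm{ARF}_{\mathrm{static}}^{\max} \;=\; 2.4\times10^{-3}.
```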
On May 19 and 20, 2003, thirty-some members of Sandia staff and management met to discuss the long-term connections between energy and national security. Three broad security topics were explored: I. Global and U.S. economic dependence on oil (and gas); II. Potential security implications of global climate change; and III. Vulnerabilities of the U.S. domestic energy infrastructure. This report, rather than being a transcript of the workshop, represents a synthesis of background information used in the workshop, ideas that emerged in the discussions, and ex post facto analysis of the discussions. Each of the three subjects discussed at this workshop has significant U.S. national security implications. Each has substantial technology components. Each appears to be a legitimate area of concern for a national security laboratory with relevant technology capabilities. For the laboratory to play a meaningful role in contributing to solutions to national problems such as these, it needs to understand the political, economic, and social environments in which it expects its work to be accepted and used. In addition, it should be noted that the problems of oil dependency and climate change are not amenable to solution by the policies of any one nation, even the one that is currently the largest single energy consumer. Therefore, views, concerns, policies, and plans of other countries will do much to determine which solutions might work and which might not.
Biomanufacturing has the potential to be one of the defining technologies in the upcoming century. Research, development, and applications in the fields of biotechnology, bioengineering, biodetection, biomaterials, biocomputation and bioenergy will have a dramatic impact on both the products we are able to create and the ways in which we create them. In this report, we examine current research trends in biotechnology, identify key areas where biomanufacturing will likely be a major contributing field, and report on recent developments and barriers to progress in key areas.
This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Investigation of Potential Applications of Self-Assembled Nanostructured Materials in Nuclear Waste Management'. The objectives of this project are to (1) provide a mechanistic understanding of the control of nanometer-scale structures on the ion sorption capability of materials and (2) develop appropriate engineering approaches to improving material properties based on such an understanding.
An understanding of the dynamics of z-pinch wire array explosion and collapse is of critical interest to the development and future of pulsed power inertial confinement fusion experiments. Experimental results clearly show the extreme three-dimensional nature of the wire explosion and collapse process. The physics of this process can be approximated by the resistive magnetohydrodynamic (MHD) equations augmented by thermal and radiative transport modeling. Z-pinch MHD physics is dominated by material regions whose conductivity properties vary drastically as material passes from solid through melt into plasma regimes. At the same time void regions between the wires are modeled as regions of very low conductivity. This challenging physical situation requires a sophisticated three-dimensional modeling approach matched by sufficient computational resources to make progress in predictive modeling and improved physical understanding.
Two sorbents, zirconium-coated zeolite and magnesium hydroxide, were tested for their effectiveness in removing arsenic from Albuquerque municipal water. Results for the zirconium-coated zeolite indicate that phosphate present in the water interfered with the sorption of arsenic. Additionally, there was a large quantity of iron and copper present in the water, corrosion products from the piping system, which may have interfered with the uptake of arsenic by the sorbent. Magnesium hydroxide has also been proven to be a strong sorbent for arsenic as well as other metals. Carbonate, present in water, has been shown to interfere with the sorption of arsenic by reacting with the magnesium hydroxide to form magnesium carbonate. The reaction mechanism was investigated by FT-IR, which shows that hydrogen bonding between an oxygen on the arsenic species and a hydrogen on the Mg(OH)₂ is most likely the mechanism of sorption. This was also confirmed by Raman spectroscopy and XRD. Technetium exists in multiple oxidation states (IV and VII) and is easily oxidized from the relatively insoluble Tc(IV) form to the highly water-soluble and mobile Tc(VII) form. The two oxidation states exhibit different sorption characteristics. Tc(VII) does not sorb to most materials, whereas Tc(IV) will strongly sorb to many materials. Therefore, it was determined that it is necessary to first reduce the Tc (using SnCl₂) before sorption to stabilize Tc in the environment. Additionally, the effect of carbonate and phosphate on the sorption of technetium by hydroxyapatite was studied; results indicated that both have a significant effect on reducing Tc sorption.
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
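A toy Python sketch of the probabilistic-branching bookkeeping is given below: each object branches independently into states with assigned probabilities, so every joint outcome (scenario) automatically carries a likelihood. The object names, states, and probabilities are hypothetical, and a real OBEST model also captures the dynamic ordering of events, which this fragment omits.

```python
import itertools

# Hypothetical objects for a runway-incursion-style scenario; each
# branches into (state, probability) alternatives.
objects = {
    "tower":  [("clears aircraft", 0.99), ("misses conflict", 0.01)],
    "pilot":  [("holds short", 0.995), ("crosses hold line", 0.005)],
    "runway": [("clear", 0.9), ("occupied", 0.1)],
}

scenarios = []
for combo in itertools.product(*objects.values()):
    # One scenario: a joint choice of state for every object.
    outcome = {name: state for name, (state, _) in zip(objects, combo)}
    likelihood = 1.0
    for _, p in combo:
        likelihood *= p           # branching probabilities multiply
    scenarios.append((outcome, likelihood))

# Rank scenarios by likelihood, most probable first.
for outcome, p in sorted(scenarios, key=lambda s: -s[1])[:3]:
    print(f"p = {p:.4e}  {outcome}")
```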
An efficient polymer mass loss and foam response model has been developed to predict the behavior of unconfined polyurethane foam exposed to fire-like heat fluxes. The mass loss model is based on a simple two-step mechanism using distributed reaction rates. The mass loss model was implemented into a multidimensional finite element heat conduction code that supports chemical kinetics and dynamic enclosure radiation. A discretization bias correction model was parameterized using elements with characteristic lengths ranging from 0.1 cm to 1 cm. Bias corrected solutions with these large elements gave essentially the same results as grid-independent solutions using 0.01-cm elements. Predictions were compared to measured decomposition front locations determined from real-time X-rays of 9-cm diameter, 15-cm tall cylinders of foam that were heated with lamps. The calculated and measured locations of the decomposition fronts were well within 1 cm of each other and in some cases the fronts coincided.
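In generic form, a two-step mechanism with distributed reaction rates can be written as a pair of Arrhenius steps whose activation energies are drawn from distributions (a standard distributed-activation-energy sketch; the A_i, E̅_i, and σ_i here are placeholders, not the report's fitted parameters):

```latex
\frac{dm_i}{dt} = -k_i(T)\,m_i, \qquad
k_i(T) = A_i \exp\!\left(-\frac{E}{RT}\right), \qquad
E \sim \mathcal{N}\!\big(\bar{E}_i,\,\sigma_i^2\big), \qquad i = 1,2,
```

with the total mass loss rate obtained by summing the two steps and averaging each over its activation energy distribution.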
Combining broad-beam circuit-level single-event upset (SEU) response with heavy ion microprobe charge collection measurements on single silicon-germanium heterojunction bipolar transistors improves understanding of the charge collection mechanisms responsible for the SEU response of digital SiGe HBT technology. This new understanding of the SEU mechanisms shows that the right rectangular parallelepiped model for the sensitive volume is not applicable to this technology. A new first-order physical model is proposed and calibrated with moderate success.
For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.
Vulnerability analysis studies show that one of the worst threats against a facility is that of an active insider during an emergency evacuation. When a criticality or other emergency alarm occurs, employees immediately proceed along evacuation routes to designated areas. Procedures are then implemented to account for all material, classified parts, etc. The 3-Dimensional Video Motion Detection (3DVMD) technology could be used to detect and track possible insider activities during alarm situations, as just described, as well as during normal operating conditions. The 3DVMD technology uses multiple cameras to create 3-dimensional detection volumes or zones. Movement throughout detection zones is tracked, and high-level information, such as the number of people and their direction of motion, is extracted. In the described alarm scenario, deviations from evacuation procedures by an individual could be immediately detected and relayed to a central alarm station. The insider could be tracked, and any protected items removed from the area could be flagged. The 3DVMD technology could also be used to monitor such items as machines that are used to build classified parts. During an alarm, detections could be made if items were removed from the machine. Overall, the use of 3DVMD technology during emergency evacuations would help to prevent the loss of classified items and would speed recovery from emergency situations. Further security could also be added by analyzing tracked behavior (motion) as it corresponds to predicted behavior, e.g., behavior corresponding with the execution of required procedures. This information would be valuable for detecting a possible insider not only during emergency situations, but also during times of normal operation.
The state-of-the-art of inertial micro-sensors (gyroscopes and accelerometers) has advanced to the point where they are displacing the more traditional sensors in many size, power, and/or cost-sensitive applications. A factor limiting the range of application of inertial micro-sensors has been their relatively poor bias stability. The incorporation of an integral sensitive axis rotation capability would enable bias mitigation through proven techniques such as indexing, and foster the use of inertial micro-sensors in more accuracy-sensitive applications. Fabricating the integral rotation mechanism in MEMS technology would minimize the penalties associated with incorporation of this capability, and preserve the inherent advantages of inertial micro-sensors.
This paper presents the development of a two-stage pulse tube cooler for space applications. The staged cooler incorporates an integral High Efficiency Cryocooler (HEC) pulse tube cooler with a linear cold head and a remote, split coaxial second cold head. The two-stage cold head was designed to provide simultaneous large cooling power at 95 K at the linear cold head and 180 K at the split coaxial cold head. The innovative staging design allows up to 50 cm of separation between the cold heads. The cooler is compatible with the existing HEC flight electronics.
How can information required for the proper functioning of a cell, an organism, or a species be transmitted in an error-introducing environment? Clearly, similar to engineering communication systems, biological systems must incorporate error control in their information transmission processes. If genetic information in the DNA sequence is encoded in a manner similar to error control encoding, the received sequence, the messenger RNA (mRNA), can be analyzed using coding theory principles. This work explores potential parallels between engineering communication systems and the central dogma of genetics and presents a coding theory approach to modeling the process of protein translation initiation. The messenger RNA is viewed as a noisy encoded sequence and the ribosome as an error control decoder. Decoding models based on chemical and biological characteristics of the ribosome and the ribosome binding site of the mRNA are developed, and results of applying the models to Escherichia coli K-12 are presented.
This report provides a summary of the three-year LDRD (Laboratory Directed Research and Development) project aimed at developing microchemical sensors for continuous, in-situ monitoring of volatile organic compounds. A chemiresistor sensor array was integrated with a unique, waterproof housing that allows the sensors to be operated in a variety of media including air, soil, and water. Numerous tests were performed to evaluate and improve the sensitivity, stability, and discriminatory capabilities of the chemiresistors. Field tests were conducted in California, Nevada, and New Mexico to further test and develop the sensors in actual environments within integrated monitoring systems. The field tests addressed issues regarding data acquisition, telemetry, power requirements, data processing, and other engineering requirements. Significant advances were made in the areas of polymer optimization, packaging, data analysis, discrimination, design, and information dissemination (e.g., real-time web posting of data; see www.sandia.gov/sensor). This project has stimulated significant interest among commercial and academic institutions. A CRADA (Cooperative Research and Development Agreement) was initiated in FY03 to investigate manufacturing methods, and a Work for Others contract was established between Sandia and Edwards Air Force Base for FY02-FY04. Funding was also obtained from DOE as part of their Advanced Monitoring Systems Initiative program from FY01 to FY03, and a DOE EMSP contract was awarded jointly to Sandia and INEEL for FY04-FY06. Contracts were also established for collaborative research with Brigham Young University to further evaluate, understand, and improve the performance of the chemiresistor sensors.
Battery life is an important, yet technically challenging, issue for battery development and application. Adequately estimating battery life requires a significant amount of testing and modeling effort to validate the results. Integrated battery testing and modeling is quite feasible today for simulating battery performance, and therefore for predicting battery life. A relatively simple equivalent-circuit model (ECM) is used in this work to show that such an integrated approach can actually lead to a high-fidelity simulation of a lithium-ion cell's performance and life. The methodology to model the cell's capacity fade during thermal aging is described to illustrate its applicability to battery calendar life prediction.
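As a minimal sketch of what an ECM of this kind computes, the Python fragment below simulates terminal voltage for a first-order RC equivalent circuit with coulomb counting. All parameter values and the linear open-circuit-voltage curve are illustrative assumptions, not the cell parameters used in this work; capacity fade can be represented in the same structure by shrinking the capacity Q and growing the series resistance R0 with aging time.

```python
import numpy as np

# Illustrative first-order RC equivalent-circuit parameters.
R0, R1, C1 = 0.05, 0.02, 1000.0     # series resistance (ohm), RC pair (ohm, F)
Q = 2.0 * 3600.0                    # capacity in coulombs (a 2 Ah cell, assumed)
ocv = lambda soc: 3.0 + 1.2 * soc   # crude linear open-circuit voltage (assumed)

def simulate(current, dt=1.0, soc=1.0):
    """Terminal voltage trace under a constant discharge current (A)."""
    v1, out = 0.0, []
    while soc > 0.0:
        v1 += dt * (-v1 / (R1 * C1) + current / C1)  # RC polarization dynamics
        soc -= dt * current / Q                      # coulomb counting
        out.append(ocv(soc) - current * R0 - v1)     # terminal voltage
    return np.array(out)

v = simulate(current=2.0)   # 1C constant-current discharge
```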
This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.
A new capability for modeling thin-shell structures within the coupled Euler-Lagrange code, Zapotec, is under development. The new algorithm creates an artificial material interface for the Eulerian portion of the problem by expanding a Lagrangian shell element such that it has an effective thickness that spans one or more Eulerian cells. The algorithm implementation is discussed along with several examples involving blast loading on plates.
The potential of a new cable diagnostic known as the Pulse-Arrested Spark Discharge (PASD) technique is being studied. Previous reports have documented the capability of the technique to locate cable failures using a short high-voltage pulse. This report investigates the impact of PASD on the sample under test. In this report, two different energy deposition experiments are discussed. These experiments include the PASD pulse (≈6 mJ) and a high-energy discharge (≈600 mJ) produced from a charged capacitor source. The high-energy experiment is used to inflict detectable damage upon the insulators and to make comparisons with the effects of the low-energy PASD pulse. Insulator breakdown voltage strength before and after application of the PASD pulse and high-energy discharges are compared. Results indicate that the PASD technique does not appear to degrade the breakdown strength of the insulator or to produce visible damage. However, testing of additional materials, including connector insulators, may be warranted to verify PASD's non-destructive nature across the full spectrum of insulators used in commercial aircraft wiring systems.
Electrostatic actuators exhibit fast response times and are easily integrated into microsystems because they can be fabricated with standard IC micromachining processes and materials. Although electrostatic actuators have been used extensively in 'dry' MEMS, they have received less attention in microfluidic systems probably because of challenges such as electrolysis, anodization, and electrode polarization. Here we demonstrate that ac drive signals can be used to prevent electrode polarization, and thus enable electrostatic actuation in many liquids, at potentials low enough to avoid electrochemistry. We measure the frequency response of an interdigitated silicon comb-drive actuator in liquids spanning a decade of dielectric permittivities and four decades of conductivity, and present a simple theory that predicts the characteristic actuation frequency. The analysis demonstrates the importance of the native oxide on silicon actuator response, and suggests that the actuation frequency can be shifted by controlling the thickness of the oxide. For native silicon devices, actuation is predicted at frequencies less than 10 MHz, in electrolytes of ionic strength up to 100 mmol/L, and thus electrostatic actuation may be feasible in many bioMEMS and other microfluidic applications.
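A generic estimate of the characteristic actuation frequency of the kind such a theory predicts is the RC relaxation rate of the solution resistance in series with the interfacial capacitance. For electrodes of area A separated by a gap d in a liquid of conductivity σ, permittivity ε, and Debye length λ_D (with R_sol ≈ d/(σA) and C_dl ≈ εA/λ_D), this gives (our generic sketch, not necessarily the paper's exact expression):

```latex
f_c \;\sim\; \frac{1}{2\pi R_{\mathrm{sol}} C_{\mathrm{dl}}}
\;\approx\; \frac{\sigma}{2\pi\varepsilon}\cdot\frac{\lambda_D}{d}.
```

Above f_c the double layer cannot fully charge, so the applied field penetrates the bulk liquid and actuation becomes possible; a native oxide adds a series capacitance that shifts f_c, consistent with the oxide-thickness dependence noted above.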
Characterizing the geology, geotechnical aspects, and rock properties of deep underground facility sites can enhance targeting strategies for both nuclear and conventional weapons. This report describes the results of a study to investigate the utility of remote spectral sensing for augmenting the geological and geotechnical information provided by traditional methods. The project primarily considered novel exploitation methods for space-based sensors, which allow clandestine collection of data from denied sites. The investigation focused on developing and applying novel data analysis methods to estimate geologic and geotechnical characteristics in the vicinity of deep underground facilities. Two such methods, one for measuring thermal rock properties and one for classifying rock types, were explored in detail. Several other data exploitation techniques, developed under other projects, were also examined for their potential utility in geologic characterization.
As rapid Internet growth continues, global communications become more dependent on Internet availability for information transfer. Recently, the Internet Engineering Task Force (IETF) introduced a new protocol, Multiprotocol Label Switching (MPLS), to provide high-performance data flows within the Internet. MPLS emulates two major aspects of the Asynchronous Transfer Mode (ATM) technology. First, each initial IP packet is 'routed' to its destination based on previously known delay and congestion avoidance mechanisms. This allows for effective distribution of network resources and reduces the probability of congestion. Second, after route selection each subsequent packet is assigned a label at each hop, which determines the output port for the packet to reach its final destination. These labels guide the forwarding of each packet at routing nodes more efficiently and with more control than traditional IP forwarding (based on complete address information in each packet) for high-performance data flows. Label assignment is critical in the prompt and accurate delivery of user data. However, the protocols for label distribution were not adequately secured. Thus, if an adversary compromises a node by intercepting and modifying labels, or more simply by injecting false labels into the packet-forwarding engine, the propagation of improperly labeled data flows could create instability in the entire network. In addition, some Virtual Private Network (VPN) solutions take advantage of this 'virtual channel' configuration to eliminate the need for user data encryption to provide privacy. VPNs relying on MPLS require accurate label assignment to maintain user data protection. This research developed a working distributive trust model that demonstrated how to deploy confidentiality, authentication, and non-repudiation in the global network label switching control plane. Simulation models and laboratory testbed implementations that demonstrated this concept were developed, and results from this research were transferred to industry via standards in the Optical Internetworking Forum (OIF).
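To illustrate the per-hop label swapping described above, and why a single forged table entry is enough to silently redirect a flow, here is a toy Python sketch; the router names and label-forwarding tables are hypothetical, not a real router's LFIB.

```python
# in-label -> (next hop, out-label); None means pop the label and deliver as IP.
lfib = {
    "router_a": {17: ("router_b", 22)},
    "router_b": {22: ("router_c", 35)},
    "router_c": {35: ("egress", None)},
}

def forward(node, label, payload):
    """Follow the label-switched path hop by hop, swapping labels."""
    while label is not None:
        next_hop, out_label = lfib[node][label]
        print(f"{node}: swap {label} -> {out_label}, send to {next_hop}")
        node, label = next_hop, out_label
    print(f"{node}: delivered {payload!r}")

forward("router_a", 17, "IP packet")
# An adversary who injects lfib["router_b"][22] = ("rogue", 99) would
# divert the entire flow without touching the packets' IP headers.
```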
Coilguns have demonstrated their capability to launch projectiles to 1 km/s, and there is interest in their application for long-range precision strike weapons. However, the incorporation of cooling systems for repetitive operation will impact the mechanical design and response of the future coils. To assess the impact of such changes, an evaluation of the ruggedness and reliability of the existing 50 mm bore coil designed in 1993 was made by repeatedly testing it at stress levels associated with operation in a coilgun. A two-coil testbed has been built with a static projectile, where each coil is energized by its own capacitor bank. Simulation models of the applied forces generated in this testbed have been created with the SLINGSHOT circuit code to obtain loads equivalent to the worst case anticipated in a 50 mm coilgun that could launch a 236 g projectile to 2 km/s. Bench measurements of the seven remaining coils built in 1993 have been used to evaluate which coils were viable for testing, and only one was found defective. Measurements of the gradient of the effective coil inductance in the presence of the projectile were compared to values from SLINGSHOT, and the agreement is excellent. Repeated testing of the HFC5 coil built in 1993 has demonstrated no failures after 205 shots, which is an order of magnitude greater than any number achieved in previous testing. Although this testing has only been done on two coils, the results are encouraging as they demonstrate that there are no fundamental weak links in the design that would cause a very early failure. Several recommendations for future coil designs are suggested based on observations of this study.
Spectral Dynamics announced the shipment of a 316-channel data acquisition system. The system was custom designed for the Light Initiated High Explosive (LIHE) facility at Sandia Labs in Albuquerque, New Mexico by the Spectral Dynamics Advanced Research Products Group. This Spectral Dynamics data acquisition system was tailored to meet the unique LIHE environmental and testing requirements utilizing Spectral Dynamics commercial off-the-shelf (COTS) Jaguar and VIDAS products supplemented by SD Alliance partners' (COTS) products. 'This system is just the beginning of our cutting-edge merged technology solutions,' stated Mark Remelman, Manager for the Spectral Dynamics Advanced Research Products Group. 'This hybrid system has 316 channels of data acquisition capability, comprised of 102.4 kHz direct-to-disk acquisition and 2.5 MHz, 200 MHz, and 500 MHz RAM-based capabilities. In addition, it incorporates the advanced bridge conditioning and dynamic configuration capabilities offered by Spectral Dynamics' new Smart Interface Panel System (SIPS™).' After acceptance testing, Tony King, the Instrumentation Engineer facilitating the project for the Sandia LIHE group, commented: 'The LIHE staff was very impressed with the design, construction, attention to detail and overall performance of the instrumentation system.' This system combines VIDAS, a leading-edge fourth-generation SD-VXI hardware and field-proven software system from SD's Advanced Research Products Group, with SD's Jaguar, a multiple Acquisition Control Peripheral (ACP) system that allows expansion to hundreds of channels without sacrificing signal processing performance. Jaguar incorporates dedicated throughput disks for each ACP, providing time streaming to disk at up to the maximum sample rate. Spectral Dynamics, Inc. is a leading worldwide supplier of systems and software for advanced computer-automated data acquisition, vibration testing, structural dynamics, explosive shock, high-speed transient capture, acoustic analysis, monitoring, measurement, control and backup. Spectral Dynamics products are used for research, design verification, product testing and process improvement by manufacturers of all types of electrical, electronic and mechanical products, as well as by universities and government-funded agencies. The Advanced Research Products Group is the newest addition to the Spectral Dynamics family. Their newest VXI data acquisition hardware pushes the envelope on capabilities and embodies the same rock-solid design methodologies that have always differentiated Spectral Dynamics from its competition.
The production of metal vapor as a consequence of high-intensity laser irradiation is a serious concern in laser welding. Despite the widespread use of lasers in manufacturing, little fundamental understanding of laser/material interaction in the weld pool exists. Laser welding experiments on 304 stainless steel have been completed that have advanced our fundamental understanding of the magnitude and the parameter dependence of metal vaporization in laser spot welding. Calculations using a three-dimensional, transient, numerical model were compared with the experimental results. Convection played a very important role in the heat transfer, especially towards the end of the laser pulse. The peak temperatures and velocities increased significantly with the laser power density. The liquid flow is mainly driven by the surface tension and, to a much lesser extent, by the buoyancy force. Heat transfer by conduction is important when the liquid velocity is small, at the beginning of the pulse and during weld pool solidification. The effective temperature determined from the vapor composition was found to be close to the numerically computed peak temperature at the weld pool surface. At very high power densities, the computed temperatures at the weld pool surface were found to be higher than the boiling point of 304 stainless steel. As a result, vaporization of alloying elements resulted from both total pressure and concentration gradients. The calculations showed that the vaporization was concentrated in a small region under the laser beam where the temperature was very high.
The National Nanotechnology Initiative (NNI), first announced in 1999, has grown into a major U.S. investment involving twenty federal agencies. As a lead federal agency, the Department of Energy (DOE) is developing a network of Nanoscale Science and Research Centers (NSRC). NSRCs will be highly collaborative national user facilities associated with DOE National Laboratories where university, laboratory, and industrial researchers can work together to advance nanoscience and technology. The Center for Integrated Nanotechnologies (CINT), which is operated jointly by Sandia National Laboratories and Los Alamos National Laboratory, has a unique technical vision focused on integrating scientific disciplines and expertise across multiple length scales, going all the way from the nano world to the world around us. It is often said that nanotechnology has the potential to change almost everything we do. However, this prophecy will only come to pass when we learn to couple nanoscale functions into the macroscale world. Obviously, coupling the nano- and micro-length scales is an important piece of this challenge, and one can cite many examples where the performance of existing microdevices has been improved by adding nanotechnology. Examples include low friction coatings for MEMS and compact light sources for ChemLab spectrometers. While this approach has produced significant benefit, we believe that the true potential will be realized only when device architectures are designed 'from the nanoscale up', allowing nanoscale function to drive microscale performance.
Over the past ten years, Sandia has developed RF radar responsive tag systems and supporting technologies for various government agencies and industry partners. RF tags can function as RF transmitters or radar transponders that enable tagging, tracking, and location determination functions. Expertise in tag architecture, microwave and radar design, signal analysis and processing techniques, digital design, modeling and simulation, and testing has been directly applicable to these tag programs. In general, the radar responsive tag designs have emphasized low power, small package size, and the ability to be detected by the radar at long ranges. Recently, there has been interest in using radar responsive tags for Blue Force tracking and Combat ID (CID). The main reason for this interest is to allow airborne surveillance radars to easily distinguish U.S. assets from those of opposing forces. A Blue Force tracking capability would add materially to situational awareness. Combat ID is also an issue, as evidenced by the fact that approximately one-quarter of all U.S. casualties in the Gulf War took the form of ground troops killed by friendly fire. Because the evolution of warfare in the intervening decade has made asymmetric warfare the norm rather than the exception, swarming engagements in which U.S. forces are freely intermixed with opposing forces must be anticipated. Increasing utilization of precision munitions can be expected to drive fires progressively closer to engaged allied troops at times when visual de-confliction is not an option. In view of these trends, it becomes increasingly important that U.S. ground forces have a widely proliferated all-weather radar responsive tag that communicates with all-weather surveillance radars. The purpose of this paper is to provide an overview of the recent, current, and future radar responsive tag research and development activities at Sandia National Laboratories that support both the Blue Force tracking and Combat ID applications.
Alumina/poly(methyl methacrylate) (PMMA) nanocomposites were synthesized using 38 and 17 nm alumina nanoparticles. At an optimum weight fraction, the resulting nanocomposites display a room-temperature brittle-to-ductile transition in uniaxial tension, with the strain-to-failure increasing to an average of 40% and a well-defined yield point appearing. Concurrently, the glass transition temperature (Tg) of the nanocomposites drops by more than 20 °C. The brittle-to-ductile transition is found to depend on poor interfacial adhesion between polymer and nanoparticle. This allows the nucleation of voids, typically by larger particles (≈100 nm), which subsequently expand during loading. This void formation suppresses craze formation and promotes delocalized shear yielding. In addition, the reduction in Tg shrinks the shear yield envelope, further promoting this type of yield behavior. The brittle-to-ductile phenomenon is found to require both larger particles for void growth and smaller particles that induce the lowering of yield stress.
Nanoparticles have received much attention and have been the subject of many reviews. Nanoparticles have also been used to form supramolecular structures for molecular electronics and sensor applications. However, many limitations exist when using nanoparticles, including the ability to manipulate the particles post-synthesis. Current methods to prepare nanoparticles employ functionalities like thiols, amines, phosphines, isocyanides, or a citrate as the metal capping agent. While these capping agents prevent agglomeration or precipitation of the particles, most are difficult to displace or impede packing in nanoparticle films due to coulombic repulsion. It is in this vein that we undertook the synthesis of nanoparticles that have a weakly bound capping agent that is strong enough to prevent agglomeration and, in the case of the platinum particles, allow for purification, yet is easily displaced by other strongly binding ligands. The nanoparticles were synthesized according to the Brust method, except that stearonitrile was used instead of an aliphatic thiol. Both platinum and gold were examined in this manner. A representative procedure for the synthesis of platinum nanoparticles involved the phase transfer of chloroplatinic acid (0.37 g, 0.90 mmol) dissolved in water (30 mL) to a solution of tetraoctylammonium bromide (2.2 g, 4.0 mmol) in toluene (80 mL). After the chloroplatinic acid was transferred into the organic phase, the aqueous phase was removed. Stearonitrile (0.23 g, 0.87 mmol) was added, followed by sodium borohydride (0.38 g, 49 mmol) in water (25 mL). The solution turned black almost immediately, and after 15 min the organic phase was separated and passed through a 0.45 µm Teflon filter. The resulting solution was concentrated and twice precipitated into ethanol (≈200 mL) to yield 0.11 g of black platinum nanoparticles. TGA experiments showed that the Pt particles contained 35% by mass stearonitrile. TEM images showed an average particle size of 1.3 ± 0.3 nm. A representative procedure for the synthesis of gold nanoparticles involved the transfer of hydrogen tetrachloroaurate (0.18 g, 0.53 mmol) dissolved in water (15 mL) to a solution of tetraoctylammonium bromide (1.1 g, 2.0 mmol) in toluene (40 mL). After the gold salt was transferred into the organic phase, the aqueous phase was removed. Stearonitrile (0.23 g, 0.87 mmol) was added, followed by sodium borohydride (0.19 g, 5.0 mmol) in water (13 mL). The solution turned dark red almost immediately, and after 15 min the organic phase was separated and passed through a 0.45 µm Teflon filter. The resulting solution was used without purification via precipitation because attempts at precipitation with ethanol resulted in agglomeration. TEM images showed an average particle size of 5.3 ± 1.3 nm. The nanoparticles synthesized were also characterized using atomic force microscopy in tapping mode. The AFM images agree with the TEM images and show a relatively monodispersed collection of nanoparticles. Platinum nanoparticles were synthesized without stearonitrile to show that the particles were in fact capped with the stearonitrile and not the tetraoctylammonium bromide. In the absence of stearonitrile, the nanoparticles would not redissolve in hexane or toluene after precipitation.
While it is possible that the tetraoctylammonium bromide helps prevent agglomeration by solvation into the capping stearonitrile ligand layer on the particles, recovery of a quantitative amount of the starting tetraoctylammonium bromide was difficult, and we cannot rule out that some small amount of tetraoctylammonium bromide serves in a synergistic capacity to help solubilize the isolated platinum particles. Several exchange reactions were carried out using the isolated Pt nanoparticles. The stearonitrile cap was exchanged for hexadecylmercaptan, octanethiol, and benzeneethylthiol. In a typical exchange reaction, Pt nanoparticles (10 mg) were suspended in hexane (10 mL) and the exchange ligand was added (50 µL). The solutions were allowed to stir overnight and were precipitated twice using ethanol. TGA experiments confirmed ligand exchange. We have also shown that these particles may be assembled in a layer-by-layer (LBL) fashion to build up three-dimensional assemblies. As an example of this LBL assembly, a substrate consisting of gold electrodes separated by 8 µm on a quartz wafer was first functionalized by immersing it in a solution of 1,8-octanedithiol (50 µL) in hexane (10 mL) for 15 min, rinsing with hexane (10 mL) and ethanol (10 mL), and drying under a stream of nitrogen. The scaffold was then placed in a toluene solution containing Au nanoparticles capped with stearonitrile (10 mg/mL) for 15 minutes. The scaffold was then rinsed with hexane (10 mL) and ethanol (10 mL), and dried under a stream of nitrogen. The substrate was then immersed alternately in the 1,8-octanedithiol solution and the Au nanoparticle solution four more times.
Implicit time integration coupled with SUPG discretization in space leads to additional terms that provide consistency and improve the phase accuracy for convection dominated flows. Recently, it has been suggested that for small Courant numbers these terms may dominate the streamline diffusion term, ostensibly causing destabilization of the SUPG method. While consistent with a straightforward finite element stability analysis, this contention is not supported by computational experiments and contradicts earlier von Neumann stability analyses of the semidiscrete SUPG equations. This prompts us to re-examine the finite element stability of the fully discrete SUPG equations. A careful analysis of the additional terms reveals that, regardless of the time step size, they are always dominated by the consistent mass matrix. Consequently, SUPG cannot be destabilized for small Courant numbers. Numerical results that illustrate our conclusions are reported.
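For a model advection problem u_t + a·∇u = 0, the standard SUPG weak form makes the term at issue explicit (a generic statement of the method, not reproduced from the paper):

```latex
\int_\Omega \left(w_h + \tau\,\mathbf{a}\cdot\nabla w_h\right)
\left(\partial_t u_h + \mathbf{a}\cdot\nabla u_h\right)\,d\Omega \;=\; 0 .
```

The cross term \(\int_\Omega \tau\,(\mathbf{a}\cdot\nabla w_h)\,\partial_t u_h\,d\Omega\) is the 'additional term' introduced by time dependence; the analysis summarized above shows it is always dominated by the consistent mass-matrix term \(\int_\Omega w_h\,\partial_t u_h\,d\Omega\), regardless of the time step.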
To observe the effects of polarization fields and screening, we have performed contacted electroreflectance (CER) measurements on In0.07Ga0.93N/GaN single quantum well light emitting diodes for different reverse bias voltages. Room-temperature CER spectra exhibited three features which are at lower energy than the GaN band gap and are associated with the quantum well. The position of the lowest-energy experimental peak, attributed to the ground-state quantum well transition, exhibited a limited Stark shift except at large reverse bias, when a redshift in the peak energy was observed. Realistic band models of the quantum well samples were constructed using self-consistent Schrödinger-Poisson solutions, taking polarization and screening effects in the quantum well fully into account. The model predicts an initial blueshift in transition energy as reverse bias voltage is increased, due to the cancellation of the polarization electric field by the depletion region field and the associated shift due to the quantum-confined Stark effect. A redshift is predicted to occur as the applied field is further increased past the flatband voltage. While the data and the model are in reasonable agreement for voltages past the flatband voltage, they disagree for smaller values of reverse bias, when charge is stored in the quantum well, and no blueshift is observed experimentally. To eliminate the blueshift and screen the electric field, we speculate that electrons in the quantum well are trapped in localized states.
The high-pressure response of cryogenic liquid deuterium (LD₂) has been studied to pressures of ≈400 GPa and densities of ≈1.5 g/cm³. Using intense magnetic pressure produced by the Sandia National Laboratories Z accelerator, macroscopic aluminum or titanium flyer plates, several mm in lateral dimensions and a few hundred microns in thickness, have been launched to velocities in excess of 22 km/s, producing constant pressure drive times of approximately 30 ns in plate impact, shock wave experiments. This flyer plate technique was used to perform shock wave experiments on LD₂ to examine its high-pressure equation of state. Using an impedance matching method, Hugoniot measurements of LD₂ were obtained in the pressure range of ≈22-100 GPa. Results of these experiments indicate a peak compression ratio of approximately 4.3 on the Hugoniot. In contrast, previously reported Hugoniot states inferred from laser-driven experiments indicate a peak compression ratio of approximately 5.5-6 in this same pressure range. The stiff Hugoniot response observed in the present impedance matching experiments was confirmed in simultaneous, independent measurements of the relative transit times of shock waves reverberating within the sample cell, between the front aluminum drive plate and the rear sapphire window. The relative timing was found to be sensitive to the density compression along the principal Hugoniot. Finally, mechanical reshock measurements of LD₂ using sapphire, aluminum, and α-quartz anvils were made. These results also indicate a stiff response, in agreement with the Hugoniot and reverberating wave measurements. Using simple model-independent arguments based on wave propagation, the principal Hugoniot, reverberating wave, and sapphire anvil reshock measurements are shown to be internally self-consistent, making a strong case for a Hugoniot response with a maximum compression ratio of ≈4.3-4.5. The trends observed in the present data are in very good agreement with several ab initio models and a recent chemical picture model for LD₂, but in disagreement with previously reported laser-driven shock results. Due to this disagreement, significant emphasis is placed on the discussion of uncertainties, and the potential systematic errors associated with each measurement.
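The impedance-matching analysis rests on the standard Rankine-Hugoniot jump conditions (mass and momentum conservation across a steady shock), stated here in generic form:

```latex
\rho_0\,u_s = \rho\,(u_s - u_p), \qquad
P - P_0 = \rho_0\,u_s\,u_p
\quad\Longrightarrow\quad
\frac{\rho}{\rho_0} = \frac{u_s}{u_s - u_p},
```

where u_s is the shock velocity and u_p the particle velocity. Measuring u_s in the deuterium and inferring u_p from the known flyer and standard responses fixes the compression; the quoted ratio of ≈4.3 corresponds to u_p ≈ 0.77 u_s.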
The US Enabling Technology Program in fusion is investigating the use of free flowing liquid surfaces facing the plasma. We have been studying the issues in integrating a liquid surface divertor into a configuration based upon an advanced tokamak, specifically the ARIES-RS configuration. The simplest form of such a divertor is to extend the flow of the liquid first wall into the divertor and thereby avoid introducing additional fluid streams. In this case, one can modify the flow above the divertor to enhance thermal mixing. For divertors with flowing liquid metals (or other electrically conductive fluids) MHD (magneto-hydrodynamics) effects are a major concern and can produce forces that redirect flow and suppress turbulence. An evaluation of Flibe (a molten salt) as a working fluid was done to assess a case in which the MHD forces could be largely neglected. Initial studies indicate that, for a tokamak with high power density, an integrated Flibe first wall and divertor does not seem workable. We have continued work with molten salts and replaced Flibe with Flinabe, a mixture of lithium, sodium and beryllium fluorides, that has some potential because of its lower melting temperature. Sn and Sn-Li have also been considered, and the initial evaluations on heat removal with minimal plasma contamination show promise, although the complicated 3D MHD flows cannot yet be fully modeled. Particle pumping in these design concepts is accomplished by conventional means (ports and pumps). However, trapping of hydrogen in these flowing liquids seems plausible and novel concepts for entrapping helium are also being studied.
Low-temperature, Sn-based, Pb-free solders were developed by making alloy additions to the starting material, 96.5Sn-3.5Ag (mass%). The melting behavior was determined using differential scanning calorimetry (DSC). The solder microstructure was evaluated by optical microscopy and electron probe microanalysis (EPMA). Shear strength measurements, hardness tests, intermetallic compound (IMC) layer growth measurements, and solderability tests were performed on selected alloys. Three promising ternary alloy compositions and their respective solidus temperatures were: 91.84Sn-3.33Ag-4.83Bi, 212 °C; 87.5Sn-7.5Au-5.0Bi, 200 °C; and 86.4Sn-5.1Ag-8.5Au, 205 °C. A quaternary alloy had the composition 86.8Sn-3.2Ag-5.0Bi-5.0Au and a solidus temperature of 194 °C. The shear strength of this quaternary alloy was nearly twice that of the eutectic Sn-Pb solder. The 66Sn-5.0Ag-10Bi-5.0Au-10In-4.0Cu alloy had a solidus temperature of 178 °C and good solderability on Cu. The lowest solidus temperature of 159 °C was realized with the alloy 62Sn-5.0Ag-10Bi-4.0Au-10In-4.0Cu-5.0Ga. The contributing factor towards the melting point depression was the composition of the solid solution, Sn-based matrix phase of each solder.
Within the magnetic fusion energy program in the US, a program called APEX is investigating the use of free flowing liquid surfaces to form the inner surface of the chamber around the plasma. As part of this work, the APEX Team has investigated several possible design implementations and developed a specific engineering concept for a fusion reactor with liquid walls. Our approach has been to utilize an already established design for a future fusion reactor, the ARIES-RS, for the basic chamber geometry and magnetic configuration, and to replace the chamber technology in this design with liquid wall technology for a first wall and divertor and a blanket with adequate tritium breeding. This paper gives an overview of one design with a molten salt (a mixture of lithium, beryllium and sodium fluorides) forming the liquid surfaces and a ferritic steel for the structural material of the blanket. The design point is a reactor with 3840 MW of fusion power of which 767 MW is in the form of energetic particles (alpha power) and 3073 MW is in the form of neutrons. The alpha plus auxiliary power total 909 MW of which 430 MW is radiated from the core mostly onto the first wall and the balance flows into the edge plasma and is distributed between the first wall and the divertor. In pursuing the application of liquid surfaces in APEX, the team has developed analytical tools that are significant achievements themselves and also pursued experiments on flowing liquids. This work is covered elsewhere, but the paper will also note several such areas to indicate the supporting science behind the design presented. Significant new work in modeling the plasma edge to understand the interaction of the plasma with the liquid walls is one example. Another is the incorporation of magneto-hydrodynamic (MHD) effects in fluid modeling and heat transfer.
A very general and robust approach to solving optimization problems involving probabilistic uncertainty is through the use of Probabilistic Ordinal Optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the probabilistic merits of local design alternatives, rather than on crisp quantification of the alternatives. Thus, we simply ask the question: 'Is that alternative better or worse than this one?' to some level of statistical confidence we require, not: 'HOW MUCH better or worse is that alternative than this one?' In this paper we illustrate an elementary application of probabilistic ordinal concepts in a 2-D optimization problem. Two uncertain variables contribute to uncertainty in the response function. We use a simple Coordinate Pattern Search non-gradient-based optimizer to step toward the statistical optimum in the design space. We also discuss more sophisticated implementations, and some of the advantages and disadvantages versus non-ordinal approaches for optimization under uncertainty.
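A minimal Python sketch of the two ingredients, ordinal comparison under uncertainty and a coordinate pattern search, is given below. The objective, the noise model, and the simple win-fraction decision rule are illustrative assumptions, not the paper's test problem or its statistical-confidence machinery.

```python
import random

def better(x, y, f, n=50):
    """Ordinal comparison: does design x beat design y in a majority of
    paired random trials?  (A sketch; a real implementation would attach
    a formal statistical confidence level to the ranking.)"""
    wins = sum(f(x) < f(y) for _ in range(n))
    return wins > n // 2

def f(x):
    # Hypothetical noisy response: two uncertain variables perturb a quadratic.
    u1, u2 = random.gauss(0, 0.1), random.gauss(0, 0.1)
    return (x[0] - 1 + u1) ** 2 + (x[1] + 2 + u2) ** 2

def coordinate_pattern_search(x, step=0.5, n_iter=60):
    """Step along coordinate directions, keeping only moves that rank
    better ordinally; halve the step when no move wins."""
    for _ in range(n_iter):
        moved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                if better(y, x, f):
                    x, moved = y, True
                    break
        if not moved:
            step *= 0.5
    return x

print(coordinate_pattern_search([0.0, 0.0]))   # steps toward (1, -2)
```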
A recently developed Centroidal Voronoi Tessellation (CVT) unstructured sampling method is investigated here to assess its suitability for use in statistical sampling and function integration. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. It has recently been shown on several 2-D test problems to provide superior point distributions for generating locally conforming response surfaces. In this paper, its performance as a statistical sampling and function integration method is compared to that of Latin Hypercube Sampling (LHS) and Simple Random Sampling (SRS) Monte Carlo methods, and Halton and Hammersley quasi-Monte Carlo sequence methods. Specifically, sampling efficiencies are compared for function integration and for resolving various statistics of response in a 2-D test problem. It is found that, on balance, CVT performs best of all these sampling methods on our test problems.
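For intuition, the Python sketch below generates a CVT-style point set on the unit hypercube by a probabilistic Lloyd/MacQueen iteration: generators migrate toward the centroids of their Voronoi cells, estimated from random probe batches. This is a simple illustrative construction, not the production algorithm evaluated in the paper.

```python
import numpy as np

def cvt_sample(n_pts, dim=2, n_iter=200, batch=1000, seed=0):
    """Probabilistic CVT sampling: move each generator toward the
    running-average centroid of its Voronoi cell."""
    rng = np.random.default_rng(seed)
    gens = rng.random((n_pts, dim))          # initial random generators
    counts = np.ones(n_pts)                  # running weights for MacQueen update
    for _ in range(n_iter):
        pts = rng.random((batch, dim))       # Monte Carlo probe points
        # Assign each probe to its nearest generator (its Voronoi cell).
        d = ((pts[:, None, :] - gens[None, :, :]) ** 2).sum(-1)
        owner = d.argmin(axis=1)
        for k in range(n_pts):
            cell = pts[owner == k]
            if len(cell):
                w = counts[k]
                gens[k] = (w * gens[k] + len(cell) * cell.mean(0)) / (w + len(cell))
                counts[k] += len(cell)
    return gens

samples = cvt_sample(16)   # 16 near-uniformly spread sample points in 2-D
```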
GaAsSbN was grown by organometallic vapor phase epitaxy (OMVPE) as an alternative material to InGaAsN for long wavelength emission on GaAs substrates. OMVPE of GaAsSbN using trimethylgallium, 100% arsine, trimethylantimony, and 1,1-dimethylhydrazine was found to be kinetically limited at growth temperatures ranging from 520 °C to 600 °C, with an activation energy of 10.4 kcal/mol. The growth rate was linearly dependent on the group III flow and has a complex dependence on the group V constituents. A room temperature photoluminescence wavelength of >1.3 µm was observed for unannealed GaAs0.69Sb0.3N0.01. Low temperature (4 K) photoluminescence of GaAs0.69Sb0.3N0.01 shows an increase in FWHM of 2.4-3.4 times the FWHM of GaAs0.7Sb0.3, a red shift of 55-77 meV, and a decrease in intensity of one to two orders of magnitude. Hall measurements indicate a behavior similar to that of InGaAsN, a 300 K hole mobility of 350 cm²/V-s with a 1.0 × 10¹⁷/cm³ background hole concentration, and a 77 K mobility of 1220 cm²/V-s with a background hole concentration of 4.8 × 10¹⁶/cm³. The hole mass of GaAs0.7Sb0.3/GaAs heterostructures was estimated at 0.37-0.40 m₀, and we estimate an electron mass of 0.2-0.3 m₀ for the GaAs0.69Sb0.3N0.01/GaAs system. The reduced exciton mass for GaAsSbN was estimated at about twice that found for GaAsSb by a comparison of diamagnetic shift vs. magnetic field.
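Kinetically limited growth implies an Arrhenius temperature dependence of the growth rate; with the activation energy reported above, the generic form is:

```latex
R_{\mathrm{growth}} \;=\; A\,\exp\!\left(-\frac{E_a}{RT}\right),
\qquad E_a \approx 10.4\ \mathrm{kcal/mol} \approx 43.5\ \mathrm{kJ/mol},
```

where A is an empirical prefactor (not reported here) set in this regime by the group III flow.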
Trends in radiation production from dynamic hohlraums driven by single and nested wire arrays were studied. The axial radiation developed from the interior of an imploding dynamic hohlraum target was compared with that generated using a standard nested array on Z. Measurements over a range of single-array masses showed a decrease in radiation power for masses above 3.5 mg.
The energetics and thermal motion of the self-assembled domain structures of lead on copper were discussed. It was found that the self-assembled patterns arose from a temperature-independent surface stress difference of approximately 1.2 N/m. The domain patterns evolved in a manner consistent with models, when the lead coverage was increased.
Distributed, on-demand, data-intensive, and collaborative simulation analysis tools are being developed by an international team to solve real problems in areas such as bioinformatics. The project consists of three distinct focuses: compute, visualize, and collaborate. Each component utilizes software and hardware that performs across the International Grid. Computers in North America, Asia, and Europe work on a common simulation program. The results are visualized in a multi-way 3D visualization collaboration session where additional compute requests can be submitted in real time. Navigation controls and data replication issues are addressed and solved with a scalable solution.
The requirement to accurately measure subsurface groundwater flow at contaminated sites, as part of a time- and cost-effective remediation program, has spawned a variety of flow evaluation technologies. Validation of the accuracy and knowledge regarding the limitations of these technologies are critical for data quality and application confidence. Leading the way in the effort to validate and better understand these methodologies, the US Army Environmental Center has funded a multi-year program to compare and evaluate all viable horizontal flow measurement technologies. This multi-year program has included a field comparison phase, an application of selected methods as part of an integrated site characterization program phase, and most recently, a laboratory and numerical simulator phase. As part of this most recent phase, numerical modeling predictions and laboratory measurements were made in a simulated fracture borehole setup within a controlled flow simulator. The scanning colloidal borescope flowmeter (SCBFM) and advanced hydrophysical logging (NxHpL™) tool were used to measure velocities and flow rate in a simulated fractured borehole in the flow simulator. Particle tracking and mass flux measurements were observed and recorded under a range of flow conditions in the simulator. Numerical models were developed to aid in the design of the flow simulator and predict the flow conditions inside the borehole. Results demonstrated that the flow simulator allowed for predictable, easily controlled, and stable flow rates both inside and outside the well. The measurement tools agreed well with each other over a wide range of flow conditions. The model results demonstrate that the scanning colloidal borescope did not interfere with the flow in the borehole in any of the tests. The model is capable of predicting flow conditions and agreed well with the measurements and observations in the flow simulator and borehole. Both laboratory and model results showed a lower limit of fracture velocity below which inflow occurs but horizontal flow does not establish itself in the center of the borehole. In addition, both laboratory and model results showed circulation cells in the borehole above and below the fracture horizon. The length of the interval over which the circulating cells occurred was much larger than the interval of actual horizontal flow. These results suggest that for the simple fracture geometry simulated in this study, horizontal flow can be predictable and measurable, and that this flow is representative of the larger, near-field flow system. Additional numerical refinements and laboratory simulations of more robust, life-like fracture geometries should be considered. The preliminary conclusions of this work suggest the following: (1) horizontal flow in the fractured medium which is representative of the near-field flow conditions can be established in a wellbore; (2) this horizontal flow can be accurately measured and numerically predicted; (3) the establishment of directionally quantifiable horizontal flow is dependent on four parameters: borehole diameter, structure, permeability, and the hydraulic gradient of the flowing feature; and (4) by measuring three of these four parameters, the fourth parameter can be numerically derived through computer simulations.
The influences of temperature and processing conditions (unpoled or poled-depoled) on the strength, fracture toughness, and stress-strain behavior of tin-modified lead zirconate titanate (PSZT) were evaluated in four-point bending. PSZT exhibits temperature-dependent, non-linear, and non-symmetric stress-strain behavior. A consequence of the temperature-dependent non-linearity is an apparent reduction in the flexural strength of PSZT as temperature increases. At room temperature the average failure stress in the outer fiber of the bend bars was 84 MPa, whereas for specimens tested at 120 C the average failure stress was only 64 MPa. The load-carrying capacity, however, did not change with temperature, while the degree of deformation tolerated by PSZT prior to failure increased with temperature.
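For readers wishing to reproduce the stress calculation, the outer-fiber stress in a four-point bend bar follows from simple beam theory. The sketch below assumes a linear-elastic response and hypothetical ASTM-style bar dimensions, spans, and failure load, since the abstract gives only the resulting stresses; as noted above, PSZT's non-linearity means such a linear formula yields only an apparent outer-fiber stress.

```python
# Peak outer-fiber stress for a rectangular bar in four-point bending
# (linear-elastic beam theory). The dimensions, spans, and load below
# are hypothetical illustrations; only the resulting stress levels
# (84 and 64 MPa) come from the study.

def four_point_flexural_stress(force_n, support_span_m, load_span_m,
                               width_m, thickness_m):
    """Outer-fiber stress (Pa) between the inner loading pins."""
    return (3.0 * force_n * (support_span_m - load_span_m)
            / (2.0 * width_m * thickness_m**2))

# Assumed 4 mm x 3 mm bar on 40 mm / 20 mm spans at a 100 N failure load:
stress_pa = four_point_flexural_stress(100.0, 0.040, 0.020, 0.004, 0.003)
print(f"apparent outer-fiber stress = {stress_pa / 1e6:.0f} MPa")
```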
AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.
Broadcasting messages through the earth is a daunting task. Indeed, broadcasting a normal telephone conversation through the earth by wireless means is impossible with today's technology. Most of us don't care, but some do. Industries that drill into the earth need wireless communication to broadcast navigation parameters. This allows them to steer their drill bits. They also need information about the natural formation that they are drilling. Measurements of parameters such as pressure, temperature, and gamma radiation levels can tell them if they have found a valuable resource such as a geothermal reservoir or a stratum bearing natural gas. Wireless communication methods are available to the drilling industry. Information is broadcast via either pressure waves in the drilling fluid or electromagnetic waves in the earth and well tubing. Data transmission can travel only one way, at rates of around a few baud. Given that normal Internet telephone modems operate near 20,000 baud, these data rates are truly very slow. Moreover, communication is often interrupted or permanently blocked by drilling conditions or natural formation properties. Here we describe a tool that communicates with stress waves traveling through the steel drill pipe and production tubing in the well. It's based on an old idea called acoustic telemetry. But what we present here is more than an idea. This tool exists, it has drilled several wells, and it works. Currently, it's the first and only acoustic telemetry tool that can withstand the drilling environment. It broadcasts one way over a limited range at much faster rates than existing methods, but we also know how to build a system that can communicate both up and down wells of indefinite length.
A conceptual design for a plutonium air transport package capable of surviving a 'worst case' airplane crash has been developed by Sandia National Laboratories (SNL) for the Japan Nuclear Cycle Development Institute (JNC). A full-scale prototype, designated as the Perforated Metal Air Transport Package (PMATP) was thermally tested in the SNL Radiant Heat Test Facility. This testing, conducted on an undamaged package, simulated a regulation one-hour aviation fuel pool fire test. Finite element thermal predictions compared well with the test results. The package performed as designed, with peak containment package temperatures less than 80 C after exposure to a one-hour test in a 1000 C environment.
Sandia National Laboratories (SNL) has designed a crash-resistant container, the Perforated Metal Air Transportable Package (PMATP), for the air transport of plutonium; the package is capable of surviving a worst-case plane crash, including both the impact and a subsequent fire. This report presents thermal analyses of the full-scale PMATP in its undamaged (pre-test) condition and in bounding post-accident states. The goal of these thermal simulations was to evaluate the performance of the package in a worst-case post-crash fire. The full-scale package is approximately 1.6 m long by 0.8 m in diameter. The thermal analyses were performed with the FLEX finite element code. The analysis clearly predicts that the PMATP provides acceptable thermal response characteristics, both for a post-accident fire of one-hour duration and for the after-fire heat-soak condition. All predicted temperatures for the primary containment vessel are well within design limits for safety.
This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.
This report extends an earlier characterization of long-duration and short-duration energy storage technologies to include life-cycle cost analysis. Energy storage technologies were examined for three application categories--bulk energy storage, distributed generation, and power quality--with significant variations in discharge time and storage capacity. More than 20 different technologies were considered and figures of merit were investigated including capital cost, operation and maintenance, efficiency, parasitic losses, and replacement costs. Results are presented in terms of levelized annual cost, $/kW-yr. The cost of delivered energy, cents/kWh, is also presented for some cases. The major study variable was the duration of storage available for discharge.
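As an illustration of the levelized annual cost metric used above, the sketch below annualizes a present capital cost with a capital recovery factor and adds fixed operation-and-maintenance costs. All dollar figures, the discount rate, and the system life are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a levelized annual cost ($/kW-yr) calculation for
# an energy storage system. All inputs are hypothetical placeholders.

def capital_recovery_factor(rate, years):
    """Annualizes a present capital cost over a system life."""
    return rate * (1 + rate)**years / ((1 + rate)**years - 1)

capital_cost = 300.0    # $/kW, hypothetical installed cost
annual_om = 15.0        # $/kW-yr, hypothetical fixed O&M
discount_rate = 0.08    # hypothetical
life_years = 10         # hypothetical

levelized = (capital_cost * capital_recovery_factor(discount_rate, life_years)
             + annual_om)
print(f"levelized annual cost = {levelized:.0f} $/kW-yr")
```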
A mine dog evaluation project initiated by the Geneva International Center for Humanitarian Demining is evaluating the capability and reliability of mine detection dogs. The performance of field-operational mine detection dogs will be measured in test minefields in Afghanistan and Bosnia containing actual, but unfused landmines. Repeated performance testing over two years through various seasonal weather conditions will provide data simulating near real world conditions. Soil samples will be obtained adjacent to the buried targets repeatedly over the course of the test. Chemical analysis results from these soil samples will be used to evaluate correlations between mine dog detection performance and seasonal weather conditions. This report documents the analytical chemical methods and results from the fourth batch of soils received. This batch contained samples from Kharga, Afghanistan collected in April 2003 and Sarajevo, Bosnia collected in May 2003.
This SAND report provides the technical progress for the first quarter (through February 2003) of the Sandia-led project, 'Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling,' funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus sp., an abundant marine cyanobacterium known to be important in environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts, and will include significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight into the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analyses and models that are more computationally complex and more heterogeneous, and that require coupling to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high-performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org.
This report is the latest in a continuing series that highlights the recent technical accomplishments associated with the work being performed within the Materials and Process Sciences Center. Our research and development activities primarily address the materials-engineering needs of Sandia's Nuclear-Weapons (NW) program. In addition, we have significant efforts that support programs managed by the other laboratory business units. Our wide range of activities occurs within six thematic areas: Materials Aging and Reliability, Scientifically Engineered Materials, Materials Processing, Materials Characterization, Materials for Microsystems and Materials Modeling and Computational Simulation. We believe these highlights collectively demonstrate the importance that a strong materials-science base has on the ultimate success of the NW program and the overall DOE technology portfolio.
We seek to understand which supercomputer architectures will be best at the petaflops scale and beyond. Our approach is to predict the cost and performance of several leading architectures at various years in the future. The basis for predicting the future is an expanded version of Moore's Law called the International Technology Roadmap for Semiconductors (ITRS). We abstract leading supercomputer architectures into chips connected by wires, where the chips and wires have electrical parameters predicted by the ITRS. We then compute the cost of a supercomputer system and the run time on a key problem of interest to the DOE (radiation transport). These calculations are parameterized by the time into the future and the technology expected to be available at that point. We find that the new advanced architectures have substantial performance advantages, but conventional designs are likely to be less expensive (due to economies of scale). We do not find a universal 'winner'; instead, the right architectural choice is likely to involve non-technical factors such as the availability of capital and how long people are willing to wait for results.
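The flavor of such parameterized cost/run-time calculations can be conveyed with a deliberately simplified sketch. The per-chip performance, price, improvement rate, and problem size below are hypothetical stand-ins for the ITRS-derived parameters, and the ideal scaling ignores the wire and network penalties that the actual study models.

```python
# Toy sketch of a parameterized cost/run-time trade-off of the kind
# examined in this study. All parameters are hypothetical placeholders;
# ideal scaling ignores wire/network penalties a real model must include.

def system_estimate(year, n_chips, base_year=2003,
                    flops_per_chip0=5e9, price_per_chip=2000.0,
                    doubling_period_yrs=1.5, work_flops=1e21):
    """Return (system cost in $, run time in s) for a chosen year."""
    growth = 2.0 ** ((year - base_year) / doubling_period_yrs)
    flops_per_chip = flops_per_chip0 * growth   # Moore's-Law-like gain
    cost = n_chips * price_per_chip             # price per chip ~ flat
    runtime_s = work_flops / (n_chips * flops_per_chip)
    return cost, runtime_s

cost, t = system_estimate(year=2010, n_chips=100_000)
print(f"cost ~ ${cost / 1e6:.0f}M, run time ~ {t / 3600:.1f} h")
```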
Yucca Mountain has been designated as the nation's high-level radioactive waste repository, and the U.S. Department of Energy has been approved to apply to the U.S. Nuclear Regulatory Commission for a license to construct a repository. The temperature and humidity inside the emplacement drift will affect the degradation rate of the waste packages and waste forms as well as the quantity of water available to transport dissolved radionuclides out of the waste canister. Thermal radiation and turbulent natural convection are the main modes of heat transfer inside the drift. This paper presents the result of three-dimensional computational fluid dynamics simulations of a segment of emplacement drift. The model contained the three main types of waste packages and was run at the time that the peak waste package temperatures are expected. Results show that thermal radiation is the dominant mode of heat transfer inside the drift. Natural convection affects the variation in surface temperature on the hot waste packages and can account for a large fraction of the heat transfer for the colder waste packages. The paper also presents the sensitivity of model results to uncertainties in several input parameters. The sensitivity study shows that the uncertainty in peak waste package temperatures due to in-drift parameters is <3 C.
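A rough surface-flux comparison illustrates why radiation dominates in-drift heat transfer. The sketch below compares Stefan-Boltzmann radiative exchange with a natural-convection film estimate for a hot package surface; the temperatures, emissivity, and convective coefficient are hypothetical, and view factors are neglected.

```python
# Rough comparison of radiative and convective heat fluxes from a waste
# package surface to the drift wall. Temperatures, emissivity, and the
# convective coefficient are hypothetical, not values from the model.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

t_package = 150.0 + 273.15   # K, hypothetical hot package surface
t_wall = 100.0 + 273.15      # K, hypothetical drift wall
emissivity = 0.8             # hypothetical effective emissivity
h_conv = 2.0                 # W/(m^2 K), hypothetical natural convection

q_rad = emissivity * SIGMA * (t_package**4 - t_wall**4)
q_conv = h_conv * (t_package - t_wall)
print(f"radiation:  {q_rad:6.0f} W/m^2")   # several times larger
print(f"convection: {q_conv:6.0f} W/m^2")
```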
A laser safety and hazard analysis was performed for the temperature-stabilized Big Sky Laser Technology (BSLT) laser central to the ARES system, based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, for Safe Use of Lasers Outdoors. Because temperature stabilization changed the operating parameters of the BSLT laser, a new hazard analysis based on the new operating conditions was required. The ARES laser system is a van/truck-based mobile platform that is used to perform laser interaction experiments and tests at various national test sites.
Cantilever epitaxy (CE) has been developed to produce GaN on sapphire with low dislocation densities as needed for improved devices. The basic mechanism of seeding growth on sapphire mesas and lateral growth of cantilevers until they coalesce has been modified with an initial growth step at 950 C. This step produces a gable with (11{bar 2}2) facets over the mesas, which turns threading dislocations from vertical to horizontal in order to reduce the local density above mesas. This technique has produced material with densities as low as 2-3x10{sup 7}/cm{sup 2} averaged across extended areas of GaN on sapphire, as determined with AFM, TEM and cathodoluminescence (CL). This density is about two orders of magnitude below that of conventional planar growths; these improvements suggest that locating wide-area devices across both cantilever and mesa regions is possible. However, the first implementation of this technique also produced a new defect: cracks at cantilever coalescences with associated arrays of lateral dislocations. These defects have been labeled 'dark-block defects' because they are non-radiative and appear as dark rectangles in CL images. Material has been grown that does not have dark-block defects. Examination of the evolution of the cantilever films for many growths, both partial and complete, indicates that producing a film without these defects requires careful control of growth conditions and crystal morphology at multiple steps. Their elimination enhances optical emission and uniformity over large (mm) size areas.
This report provides a survey of remediation and treatment technologies for contaminants of concern at environmental restoration (ER) sites at Sandia National Laboratories, New Mexico. The sites that were evaluated include the Tijeras Arroyo Groundwater, Technical Area V, and Canyons sites. The primary contaminants of concern at these sites include trichloroethylene (TCE), tetrachloroethylene (PCE), and nitrate in groundwater. Due to the low contaminant concentrations (close to regulatory limits) and significant depths to groundwater ({approx}500 feet) at these sites, few in-situ remediation technologies are applicable. The most applicable treatment technologies include monitored natural attenuation and enhanced bioremediation/denitrification to reduce the concentrations of TCE, PCE, and nitrate in the groundwater. Stripping technologies to remove chlorinated solvents and other volatile organic compounds from the vadose zone can also be implemented, if needed.
Tensions on the Korean Peninsula remain high despite a long-term strategy by South Korea to increase inter-Korean exchanges in economics, culture, sports, and other topics. This is because the process of reconciliation has rarely extended to military and security topics and those initiatives that were negotiated have been ineffective. Bilateral interactions must include actions to reduce threats and improve confidence associated with conventional military forces (land, sea, and air) as well as nuclear, chemical, and biological activities that are applicable to developing and producing weapons of mass destruction (WMD). The purpose of this project is to develop concepts for inter-Korean confidence building measures (CBMs) for military and WMD topics that South Korea could propose to the North when conditions are right. This report describes the historical and policy context for developing security-related CBMs and presents an array of bilateral options for conventional military and WMD topics within a consistent framework. The conceptual CBMs address two scenarios: (1) improved relations where construction of a peace regime becomes a full agenda item in inter-Korean dialogue, and (2) continued tense inter-Korean relations. Some measures could be proposed in the short term under current conditions, others might be implemented in a series of steps, while some require a higher level of cooperation than currently exists. To support decision making by political leaders, this research focuses on strategies and policy options and does not include technical details.
A particle image velocimetry instrument has been constructed for a transonic wind tunnel and applied to study the interaction created by a supersonic axisymmetric jet exhausting from a flat plate into a subsonic compressible crossflow. Data have been acquired in two configurations; one is a two-dimensional measurement on the streamwise plane along the wind tunnel centerline, and the other is a stereoscopic measurement in the crossplane of the interaction. The presence of the induced counter-rotating vortex pair is clearly visible in both data sets. The streamwise-plane data determined the strength and location of the vortices using the vertical velocity component while the crossplane data directly provided a measurement of the vortical motion. A comparison of the vertical velocity component measured using each configuration showed reasonable agreement.
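As an illustration of how vortex strength can be extracted from crossplane data of this kind, the sketch below differentiates an in-plane velocity field to obtain the streamwise vorticity and integrates it to get the circulation. A synthetic Lamb-Oseen-type vortex stands in for real PIV vectors.

```python
# Circulation from a crossplane velocity field: compute the streamwise
# vorticity by finite differences, then integrate over the plane. The
# velocity field below is synthetic, standing in for PIV vectors.

import numpy as np

# Hypothetical crossplane grid (y, z) and in-plane velocities (v, w).
y = np.linspace(-0.05, 0.05, 101)
z = np.linspace(-0.05, 0.05, 101)
Y, Z = np.meshgrid(y, z, indexing="ij")
gamma0, r_core = 0.1, 0.01                  # synthetic vortex parameters
r2 = Y**2 + Z**2 + 1e-12
swirl = (1.0 - np.exp(-r2 / r_core**2)) / r2
v = -gamma0 / (2 * np.pi) * Z * swirl       # Lamb-Oseen-type vortex
w = gamma0 / (2 * np.pi) * Y * swirl

# Streamwise vorticity omega_x = dw/dy - dv/dz, then integrate.
dv_dz = np.gradient(v, z, axis=1)
dw_dy = np.gradient(w, y, axis=0)
omega_x = dw_dy - dv_dz
dy, dz = y[1] - y[0], z[1] - z[0]
circulation = omega_x.sum() * dy * dz
print(f"circulation = {circulation:.3f} m^2/s (target {gamma0})")
```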
This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
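A minimal sketch of the data-analysis step, assuming the simple first-order kinetics noted above with an Arrhenius temperature dependence, is given below. The model form follows the protocol's description, but the rate parameters, temperatures, and synthetic data are hypothetical rather than Gen-2 results.

```python
# Sketch of the aging-model fit assuming first-order kinetics with
# Arrhenius acceleration: fade(t, T) = fade_max * (1 - exp(-k(T)*t)),
# where k(T) = k0 * exp(-Ea / (R*T)). All values are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol K)

def power_fade(X, fade_max, k0, ea_j_per_mol):
    t_weeks, temp_k = X
    k = k0 * np.exp(-ea_j_per_mol / (R * temp_k))
    return fade_max * (1.0 - np.exp(-k * t_weeks))

# Synthetic accelerated-aging data at 45 C and 55 C.
t = np.array([0.0, 4, 8, 12, 16, 20, 24])
times = np.concatenate([t, t])
temps = np.concatenate([np.full(t.size, 318.15), np.full(t.size, 328.15)])
rng = np.random.default_rng(0)
true_params = (0.30, 2.0e7, 5.0e4)
y = (power_fade((times, temps), *true_params)
     + rng.normal(0.0, 0.002, times.size))

popt, _ = curve_fit(power_fade, (times, temps), y, p0=(0.25, 1e7, 4.5e4))
print("fitted fade_max, k0, Ea:", popt)
```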
An aircraft wire systems laboratory has been developed to support technical maturation of diagnostic technologies being used in the aviation community for detection of faulty attributes of wiring systems. The design and development rationale of the laboratory is based in part on documented findings published by the aviation community. The main resource at the laboratory is a test bed enclosure that is populated with aged and newly assembled wire harnesses that have known defects. This report provides the test bed design and harness selection rationale, harness assembly and defect fabrication procedures, and descriptions of the laboratory for usage by the aviation community.
The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries. In particular, our goal is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific applications. Our emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all abstract interfaces. Trilinos uses a two-level software structure designed around collections of packages. A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. Packages exist underneath the Trilinos top level, which provides a common look-and-feel, including configuration, documentation, licensing, and bug-tracking. Trilinos packages are primarily written in C++, but provide some C and Fortran user interface support. We provide an open architecture that allows easy integration with other solver packages and we deliver our software to the outside community via the Gnu Lesser General Public License (LGPL). This report provides an overview of Trilinos, discussing the objectives, history, current development and future plans of the project.
This paper describes the liquid metal integrated test system (LIMITS) at Sandia National Laboratories. This system was designed to study the flow of molten metals and salts in a vacuum as a preliminary study for flowing liquid surfaces inside of magnetic fusion reactors. The system consists of a heated furnace with attached centrifugal pump, a vacuum chamber, and a transfer chamber for storage and addition of fresh material. Diagnostics include an electromagnetic flow meter, a high temperature pressure transducer, and an electronic level meter. Many ports in the vacuum chamber allow testing the thermal behavior of the flowing liquids heated with an electron beam or study of the effect of a magnetic field on motion of the liquid. Some preliminary tests have been performed to determine the effect of a static magnetic field on stream flow from a nozzle.
The intense magnetic field generated in the 20 MA Z-machine is used to accelerate metallic flyer plates to high velocity (peak velocity {approx}20-30 km/s) for the purpose of generating strong shocks (peak pressure {approx}5-10 Mb) in equation of state experiments. We have used the Sandia-developed 2D magneto-hydrodynamic (MHD) simulation code ALEGRA to investigate the physics of accelerating flyer plates using multi-megabar magnetic drive pressures. Through detailed analysis of experimental data using ALEGRA, we developed a 2D, predictive MHD model for simulating material science experiments on Z. The ALEGRA MHD model accurately reproduces measured time-dependent flyer velocities. Details of the ALEGRA model are presented. Simulation and experimental results are compared and contrasted for shots using standard and shaped current pulses whose peak drive pressure is {approx}2 Mb. Isentropic compression of Al to 1.7 Mb is achieved by shaping the current pulse.
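The scale of the magnetic drive pressure follows from the magnetic-pressure relation P = B^2/(2*mu0), with B estimated from the drive current across an assumed effective gap width. In the sketch below, the 3.5 cm width is a hypothetical placeholder tuned to reproduce the {approx}2 Mb figure quoted above; it is not a Z geometry parameter.

```python
# Back-of-the-envelope magnetic drive pressure behind a flyer plate:
# P = B^2 / (2*mu0), with B ~ mu0 * I / w for current I flowing across
# an effective strip width w. The width below is a hypothetical
# placeholder, not an actual Z load dimension.

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T m/A

def drive_pressure_mbar(current_a, width_m):
    """Magnetic pressure (Mbar) for a planar current sheet."""
    b_field = MU0 * current_a / width_m        # T
    return b_field**2 / (2.0 * MU0) / 1.0e11   # Pa -> Mbar

# 20 MA across a hypothetical 3.5 cm effective width gives ~2 Mbar:
print(f"~{drive_pressure_mbar(20e6, 0.035):.1f} Mbar")
```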
The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries. A new software capability is introduced into Trilinos as a package. A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. The Trilinos Users Guide is a resource for new and existing Trilinos users. Topics covered include how to configure and build Trilinos, what is required to integrate an existing package into Trilinos and examples of how those requirements can be met, as well as what tools and services are available to Trilinos packages. Also discussed are some common practices that are followed by many Trilinos package developers. Finally, a snapshot of current Trilinos packages and their interoperability status is provided, along with a list of supported computer platforms.
We give the results of a study using Monte Carlo ion interaction codes to simulate and optimize elastic recoil detection analysis for {sup 3}He buildup in tritide films. Two different codes were used. The primary tool was MCERD, written especially for simulating ion beam analysis, with optimizations and enhancements that greatly increase the probabilities for the creation and detection of recoil atoms. MPTRIM, an implementation of the TRIMRC code for a massively parallel computer, was also used for comparison and for determination of absolute yield. This study was undertaken because of a need for high-resolution depth profiling of {sup 3}He and near-surface light impurities (e.g. oxygen) in metal hydride films containing tritium.
Multiple scattering effects in ERD measurements are studied by comparing two Monte Carlo simulation codes, representing different approaches to obtain acceptable statistics, to experimental spectra measured from a HfO{sub 2} sample with a time-of-flight-ERD setup. The results show that both codes can reproduce the absolute detection yields and the energy distributions in an adequate way. The effect of the choice of the interatomic potential in multiple scattering effects is also studied. Finally the capabilities of the MC simulations in the design of new measurement setups are demonstrated by simulating the recoil energy spectra from a WC{sub x}N{sub y} sample with a low energy heavy ion beam.
This paper describes the application of a filtered-Rayleigh-scattering (FRS) instrument for nonintrusive temperature imaging in a vortex-driven diffusion flame. The FRS technique provides quantitative, spatially correlated temperature data without the flow intrusion or time lag associated with physical probes. Use of a molecular iodine filter relaxes the requirement for clean, particulate-free flowfields and offers the potential for imaging near walls, test section windows and in sooty flames, all of which are precluded in conventional Rayleigh imaging, where background interference from these sources typically overwhelms the weak molecular scattering signal. For combustion applications, FRS allows for full-field temperature imaging without chemical seeding of the flowfield, which makes FRS an attractive alternative to other laser-based imaging methods such as planar laser-induced fluorescence (PLIF). In this work, the details of our FRS imaging system are presented and temperature measurements from an acoustically forced diffusion flame are provided. The local Rayleigh cross-section is corrected using Raman imaging measurements of the methane fuel molecule, which are then correlated to other major species using a laminar flamelet approach. To our knowledge, this is the first report of joint Raman/FRS imaging for nonpremixed combustion. Measurements are presented from flames driven at 7.5 Hz, where a single vortex stretches the flame, and at 90 Hz, where two consecutive vortices interact to cause a repeatable strain-induced flame-quenching event.
The Sandia Secure Processor (SSP) is a new native Java processor that has been specifically designed for embedded applications. The SSP is a system composed of a core Java processor that directly executes Java bytecodes, on-chip intelligent IO modules, and a suite of software tools for simulation and for compiling executable binary files. The SSP is unique in that it provides a way to control real-time IO modules for embedded applications. The system software for the SSP is a 'class loader' that takes Java .class files (created with your favorite Java compiler), links them together, and compiles a binary. The complete SSP system provides very powerful functionality with very light hardware requirements, with the potential to be used in a wide variety of small-system embedded applications. This paper gives a detailed description of the Sandia Secure Processor and its unique features.
The magnitude and structure of the ion wakefield potential below a single negatively charged dust particle levitated in the plasma sheath region were measured using a test particle. Attractive and repulsive components of the interaction force were extracted from a trajectory analysis of low-energy collisions between different mass particles in a well-defined electrostatic potential that constrained the dynamics of the collisions to one dimension. As the vertical spacing between the particles increased, the peak attractive force decreased and the width of the potential increased. For the largest vertical separations measured in this study, the lower particle does not form a vertical pair with the upper particle but rather has an equilibrium position offset from the bottom of the parabolic potential confining well.
The objective of this study was to determine if a distribution of pit induction times (from potentiostatic experiments) could be used to predict a distribution of pitting potentials (from potentiodynamic experiments) for high-purity aluminum. Pit induction times were measured for 99.99 Al in 50 mM NaCl at potentials of -0.35, -0.3, -0.25, and -0.2 V vs. saturated calomel electrode. Analysis of the data showed that the pit germination rate generally was an exponential function of the applied potential; however, a subset of the germination rate data appeared to be mostly potential insensitive. The germination rate behavior was used as an input into a mathematical relationship that provided a prediction of pitting potential distribution. Good general agreement was found between the predicted distribution and an experimentally determined pitting potential distribution, suggesting that the relationships presented here provide a suitable means for quantitatively describing pit germination rate.
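The connection used above between potentiostatic induction times and potentiodynamic pitting potentials can be sketched as follows: treating pit generation as a Poisson process with an exponential, potential-dependent rate, the survival probability during a scan yields the pitting-potential distribution. The rate parameters and scan rate below are hypothetical, not the fitted values for 99.99 Al.

```python
# Sketch of the induction-time -> pitting-potential link. For a Poisson
# pit-generation process with rate lambda(E), a scan at rate v gives
#   P(no pit by E) = exp( -(1/v) * integral of lambda(E') dE' ),
# so the pitting-potential CDF is one minus this. Parameters below are
# hypothetical placeholders, not fitted values from the study.

import numpy as np

lam0, b = 1.0, 25.0               # lambda(E) = lam0*exp(b*E), 1/s; hypothetical
v = 0.001                         # scan rate, V/s; hypothetical
E = np.linspace(-0.4, 0.0, 401)   # potential vs. SCE, V

lam = lam0 * np.exp(b * E)
dE = E[1] - E[0]
cumulative = np.cumsum(lam) * dE / v
cdf = 1.0 - np.exp(-cumulative)   # predicted pitting-potential distribution

median_E = E[np.searchsorted(cdf, 0.5)]
print(f"median predicted pitting potential ~ {median_E:.3f} V vs. SCE")
```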
Polyoxoniobate chemistry, both in the solid state and in solution, is dominated by [Nb{sub 6}O{sub 19}]{sup 8-}, the Lindqvist ion. Recently, we have expanded this chemistry through use of hydrothermal synthesis. The current publication illustrates how use of heteroatoms is another means of diversifying polyoxoniobate chemistry. Here we report the synthesis of Na{sub 8}[Nb{sub 8}Ti{sub 2}O{sub 28}] {center_dot} 34H{sub 2}O [{bar 1}] and its structural characterization from single-crystal X-ray data. This salt crystallizes in the P-1 space group (a = 11.829(4) {angstrom}, b = 12.205(4) {angstrom}, c = 12.532(4) {angstrom}, {alpha} = 97.666(5){sup o}, {beta} = 113.840(4){sup o}, {gamma} = 110.809(4){sup o}), and the decameric anionic cluster [Nb{sub 8}Ti{sub 2}O{sub 28}]{sup 8-} has the same cluster geometry as the previously reported [Nb{sub 10}O{sub 28}]{sup 6-} and [V{sub 10}O{sub 28}]{sup 6-}. Molecular modeling studies of [Nb{sub 10}O{sub 28}]{sup 6-} and all possible isomers of [Nb{sub 8}Ti{sub 2}O{sub 28}]{sup 8-} suggest that this cluster geometry is stabilized by incorporating the Ti{sup 4+} into cluster positions in which edge-sharing is maximized. In this manner, the overall repulsion between edge-sharing octahedra within the cluster is minimized, as Ti{sup 4+} is both slightly smaller and of lower charge than Nb{sup 5+}. Synthetic studies also show that while the [Nb{sub 10}O{sub 28}]{sup 6-} cluster is difficult to obtain, the [Nb{sub 8}Ti{sub 2}O{sub 28}]{sup 8-} cluster can be synthesized reproducibly and is stable in neutral to basic solutions.
Microstructural evolution during simple solid-state sintering of two-dimensional compacts of elongated particles packed in different arrangements was simulated using a kinetic Monte Carlo model. The model simulates curvature-driven grain growth, pore migration by surface diffusion, vacancy formation, diffusion along grain boundaries, and annihilation. Only the shape of the particles was anisotropic; all other extensive thermodynamic and kinetic properties, such as surface energies and diffusivities, were isotropic. We verified our model by simulating sintering in the analytically tractable cases of simple-packed and close-packed elongated particles and comparing the shrinkage-rate anisotropies with those predicted analytically. Once our model was verified, we used it to simulate sintering in a powder compact of aligned, elongated particles of arbitrary size and shape to gain an understanding of differential shrinkage. Anisotropic shrinkage occurred in all compacts with aligned, elongated particles. However, the direction of higher shrinkage was in some cases along the direction of elongation and in other cases in the perpendicular direction, depending on the details of the powder compact. In compacts of simple-packed, mono-sized, elongated particles, shrinkage was higher in the direction of elongation. In compacts of close-packed, mono-sized, elongated particles and of elongated particles with a size and shape distribution, the shrinkage was lower in the direction of elongation. The results of these simulations are analyzed, and the implications of these results are discussed.
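For orientation, the sketch below shows the skeleton of a kinetic Monte Carlo (Potts-type) grain-growth simulation of the general kind described above: lattice sites carry grain labels, and trial label flips are accepted when they do not increase the boundary energy. The full sintering model additionally includes pore migration, vacancy formation and annihilation, and grain-boundary diffusion, which this toy omits.

```python
# Minimal Potts-type kinetic Monte Carlo grain-growth sketch: sites on
# a 2D periodic lattice carry grain labels; flips that do not raise the
# grain-boundary energy are accepted (zero-temperature Metropolis).
# Lattice size, label count, and step count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
N, Q, steps = 64, 32, 200_000
spins = rng.integers(0, Q, size=(N, N))

def site_energy(s, i, j, q):
    """Number of unlike nearest neighbors (periodic boundaries)."""
    nbrs = (s[(i - 1) % N, j], s[(i + 1) % N, j],
            s[i, (j - 1) % N], s[i, (j + 1) % N])
    return sum(1 for n in nbrs if n != q)

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    old, new = spins[i, j], rng.integers(0, Q)
    dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, old)
    if dE <= 0:                     # accept energy-nonincreasing flips
        spins[i, j] = new

print("grains remaining:", np.unique(spins).size)
```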
This paper describes the development of a surface-acoustic-wave (SAW) sensor that is designed to be operated continuously and in situ to detect volatile organic compounds. A ruggedized stainless-steel package that encases the SAW device and integrated circuit board allows the sensor to be deployed in a variety of media including air, soil, and even water. Polymers were optimized and chosen based on their response to chlorinated aliphatic hydrocarbons (e.g., trichloroethylene), which are common groundwater contaminants. Initial testing indicates that a running-average data-logging algorithm can reduce the noise and increase the sensitivity of the in-situ sensor.
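The running-average step can be illustrated with a few lines of code: a boxcar average over the logged frequency response suppresses random noise at the cost of response time. The window length and synthetic sensor signal below are illustrative only.

```python
# Running-average smoothing of a logged sensor response. The window
# length and the synthetic step response are illustrative placeholders.

import numpy as np

def running_average(x, window=32):
    """Boxcar moving average; trades response time for lower noise."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.linspace(0, 600, 2000)                      # s
response = 50.0 / (1 + np.exp(-(t - 300) / 30))    # synthetic uptake step
noisy = response + rng.normal(0, 5.0, t.size)      # added sensor noise
smoothed = running_average(noisy)

print(f"noise std before: {np.std(noisy - response):.2f}, "
      f"after: {np.std(smoothed - response):.2f}")
```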
Solid-state lighting using light-emitting diodes (LEDs) has the potential to reduce energy consumption for lighting by 50% while revolutionizing the way we illuminate our homes, work places, and public spaces. Nevertheless, substantial technical challenges remain before solid-state lighting can significantly displace the well-developed conventional lighting technologies. We review the potential of LED solid-state lighting to meet long-term cost goals.
We have adopted a binary superlattice structure for long-wavelength broadband detection. In this superlattice, the basis contains two unequal wells, with which more energy states are created for broadband absorption. At the same time, responsivity is more uniform within the detection band because of mixing of wave functions from the two wells. This uniform line shape is particularly suitable for spectroscopy applications. The detector is designed to cover the entire 8-14 {micro}m long-wavelength atmospheric window. The observed spectral widths are 5.2 and 5.6 {micro}m for two nominally identical wafers. The photoresponse spectra from both wafers are nearly unchanged over a wide range of operating bias and temperature. The background-limited temperature is 50 K at 2 V bias for F/1.2 optics.
A quiet revolution is underway. Over the next 5-10 years inorganic-semiconductor-based solid-state lighting technology is expected to outperform first incandescent, and then fluorescent and high-intensity-discharge, lighting. Along the way, many decision points and technical challenges will be faced. To help understand these challenges, the U.S. Department of Energy, the Optoelectronics Industry Development Association and the National Electrical Manufacturers Association recently updated the U.S. Solid-State Lighting Roadmap. In the first half of this paper, we present an overview of the high-level targets of the inorganic-semiconductor part of that update. In the second half of this paper, we discuss some implications of those high-level targets on the GaN-based semiconductor chips that will be the 'engine' for solid-state lighting.
We have investigated the liquid-phase self-assembly of 1-alkanethiols (HS(CH{sub 2}){sub n-1}CH{sub 3}, n = 8, 16, and 18) on hydrogenated Ge(111), using attenuated total reflection Fourier transform infrared spectroscopy as well as water contact angle measurements. The infrared absorbance of C-H stretching modes of alkanethiolates on Ge, in conjunction with water contact angle measurements, demonstrates that the final packing density is a function of alkanethiol concentration in 2-propanol and its chain length. High concentration and long alkyl chain increase the steady-state surface coverage of alkanethiolates. A critical chain length exists between n = 8 and 16, above which the adsorption kinetics is comparable for all long alkyl chain 1-alkanethiols. The steady-state coverage of hexadecanethiolates, representing long-chain alkanethiolates, reaches a maximum at approximately 5.9 x 10{sup 14} hexadecanethiolates/cm{sup 2} in 1 M solution. The characteristic time constant to reach a steady state also decreases with increasing chain length. This chain length dependence is attributed to the attractive chain-to-chain interaction in long-alkyl-chain self-assembled monolayers, which reduces the desorption-to-adsorption rate ratio (k{sub d}/k{sub a}). We also report the adsorption and desorption rate constants (k{sub a} and k{sub d}) of 1-hexadecanethiol on hydrogenated Ge(111) at room temperature. The alkanethiol adsorption is a two-step process following a first-order Langmuir isotherm: (1) fast adsorption with k{sub a} = 2.4 {+-} 0.2 cm{sup 3}/(mol s) and k{sub d} = (8.2 {+-} 0.5) x 10{sup -6} s{sup -1}; (2) slow adsorption with k{sub a} = 0.8 {+-} 0.5 cm{sup 3}/(mol s) and k{sub d} = (3 {+-} 2) x 10{sup -6} s{sup -1}.
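The reported two-step kinetics can be visualized directly from the first-order Langmuir rate equation d(theta)/dt = ka*C*(1 - theta) - kd*theta, which has a closed-form solution. The sketch below uses the rate constants reported above for 1-hexadecanethiol; the equal weighting of the fast and slow steps is an assumed illustration.

```python
# Two-step first-order Langmuir adsorption kinetics, using the rate
# constants reported in the study for 1-hexadecanethiol in 1 M solution.
# The 50/50 weighting of the two steps is an assumed illustration.

import numpy as np

ka_fast, kd_fast = 2.4, 8.2e-6    # cm^3/(mol s), 1/s (reported)
ka_slow, kd_slow = 0.8, 3.0e-6    # cm^3/(mol s), 1/s (reported)
C = 1.0e-3                        # mol/cm^3 (1 M solution)

def coverage(t, ka, kd, c):
    """Analytic first-order Langmuir coverage versus time."""
    r = ka * c + kd
    theta_eq = ka * c / r
    return theta_eq * (1.0 - np.exp(-r * t))

t = np.linspace(0, 3600, 7)       # s
theta = (0.5 * coverage(t, ka_fast, kd_fast, C)
         + 0.5 * coverage(t, ka_slow, kd_slow, C))
for ti, th in zip(t, theta):
    print(f"t = {ti:6.0f} s  coverage = {th:.2f}")
```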
The present study is a numerical investigation of the propagation of electromagnetic transients in dispersive media. It considers propagation in water using Debye and composite Rocard-Powles-Lorentz models for the complex permittivity. The study addresses this question: For practical transmitted spectra, does precursor propagation provide any features that can be used to advantage over conventional signal propagation in models of dispersive media of interest? A companion experimental study is currently in progress that will attempt to measure the effects studied here.
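For reference, the single-pole Debye permittivity used in such studies has the form eps(omega) = eps_inf + (eps_s - eps_inf)/(1 + i*omega*tau), with an exp(+i*omega*t) time convention so the loss is -Im(eps). The sketch below evaluates it with common room-temperature textbook values for water, which are not necessarily the parameters of the study.

```python
# Single-pole Debye model for the complex permittivity of water.
# Parameter values are common textbook numbers for water near room
# temperature, not necessarily those used in this study.

import numpy as np

eps_s, eps_inf, tau = 78.3, 5.0, 8.3e-12  # static, high-freq, relax. time (s)

def debye_permittivity(freq_hz):
    omega = 2.0 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau)

for f in (1e8, 1e9, 1e10, 1e11):
    eps = debye_permittivity(f)
    print(f"f = {f:8.1e} Hz   eps' = {eps.real:6.2f}   eps'' = {-eps.imag:6.2f}")
```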
Time-of-flight secondary ion mass spectrometry (TOF-SIMS), by its parallel nature, generates complex and very large datasets quickly and easily. An example of such a large dataset is a spectral image where a complete spectrum is collected for each pixel. Unfortunately, the large size of the data matrix involved makes it difficult to extract the chemical information from the data using traditional techniques. Because time constraints prevent an analysis of every peak, prior knowledge is used to select the most probable and significant peaks for evaluation. However, this approach may lead to a misinterpretation of the system under analysis. Ideally, the complete spectral image would be used to provide a comprehensive, unbiased materials characterization based on full spectral signatures. Automated eXpert spectral image analysis (AXSIA) software developed at Sandia National Laboratories implements a multivariate curve resolution technique that was originally developed for energy dispersive X-ray spectroscopy (EDS) [Microsci. Microanal. 9 (2003) 1]. This paper will demonstrate the application of the method to TOF-SIMS. AXSIA distills complex and very large spectral image datasets into a limited number of physically realizable and easily interpretable chemical components, including both spectra and concentrations. The number of components derived during the analysis represents the minimum number of components needed to completely describe the chemical information in the original dataset. Since full spectral signatures are used to determine each component, an enhanced signal-to-noise ratio is realized. The efficient statistical aggregation of chemical information enables small and unexpected features to be found automatically, without user intervention.
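The spirit of the approach can be conveyed with a generic nonnegative matrix factorization, a technique related to, but not identical with, AXSIA's multivariate curve resolution algorithm: the spectral image is unfolded into a pixels-by-channels matrix and factored into a small number of nonnegative component spectra and concentration maps. The data below are synthetic.

```python
# Generic NMF sketch of spectral-image unmixing. This is not AXSIA's
# actual algorithm, only an illustration of the same idea; the spectral
# image below is synthetic.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
nx, ny, nchan, ncomp = 32, 32, 200, 3

# Synthetic "pure component" spectra and random concentration maps.
spectra = rng.gamma(2.0, 1.0, size=(ncomp, nchan))
maps = rng.random((ncomp, nx, ny))
cube = np.einsum("kxy,kc->xyc", maps, spectra)
cube = np.clip(cube + rng.normal(0, 0.01, cube.shape), 0, None)

# Unfold to (pixels x channels) and factor into k components.
D = cube.reshape(nx * ny, nchan)
model = NMF(n_components=ncomp, init="nndsvda", max_iter=500)
conc = model.fit_transform(D)       # concentrations, (pixels x k)
comp_spectra = model.components_    # component spectra, (k x channels)
print("reconstruction error:", model.reconstruction_err_)
```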
The spreading of polymer droplets is studied using molecular dynamics simulations. To study the dynamics of both the precursor foot and the bulk droplet, large hemispherical drops of 200 000 monomers are simulated using a bead-spring model for polymers of chain length 10, 20, and 40 monomers per chain. We compare spreading on flat and atomistic surfaces, chain length effects, and different applications of the Langevin and dissipative particle dynamics thermostats. We find diffusive behavior for the precursor foot and good agreement with the molecular kinetic model of droplet spreading using both flat and atomistic surfaces. Despite the large system size and long simulation time relative to previous simulations, we find that even larger systems are required to observe hydrodynamic behavior in the hemispherical spreading droplet.
The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.
Protein microtubules (MTs) 25 nm in diameter and tens of micrometers long have been used as templates for the biomimetic mineralization of FeOOH. Exposure of MTs to anaerobic aqueous solutions of Fe{sup 2+} buffered to neutral pH followed by aerial oxidation leads to the formation of iron oxide coated MTs. The iron oxide layer was found to grow via a two-step process: initially formed 10-30 nm thick coatings were found to be amorphous in structure and comprised of several iron-containing species. Further growth resulted in MTs coated with highly crystalline layers of lepidocrocite with a controllable thickness of up to 125 nm. On the micrometer size scale, these coated MTs were observed to form large, irregular bundles containing hundreds of individually coated MTs. Iron oxide grew selectively on the MT surface, a result of the highly charged MT surface that provided an interface favorable for iron oxide nucleation. This result illustrates that MTs can be used as scaffolds for the in-situ production of high-aspect-ratio inorganic nanowires.
The paper presents a theoretical study of synchronization between two coupled lasers. A theory valid for arbitrary coupling between lasers is used. Its key feature is that the laser field is decomposed in terms of the composite-cavity modes, which reflect the spatial field dependence over the entire coupled-laser system. The ensuing multimode equations are reduced to class-B, and further to class-A, equations that resemble competing-species equations. Bifurcation analysis, supported by insight provided by analytical solutions, is used to investigate the influences of pump, carrier decay rate, polarization decay rate, and coupling-mirror losses on synchronization between lasers. Population pulsation is found to be an essential mode-competition mechanism responsible for bistability in the synchronized solutions. Finally, we find that the mechanism leading to laser synchronization changes from strong composite-cavity mode competition in the class-A regime to frequency locking of composite-cavity modes in the class-B regime.
Dynamic compressive properties of an epoxy syntactic foam at various strain rates under lateral confinement have been investigated with a pulse-shaped split Hopkinson pressure bar (SHPB). The quasi-static responses were obtained with an MTS 810 materials test system. The quasi-static and dynamic stress-strain behavior of the foam under confinement exhibited an elastic-plastic-like response whereas an elastic-brittle behavior was observed under uniaxial stress loading conditions. The modulus of elasticity and yield strength, which had higher values than those in uniaxial stress case, were both sensitive to strain rates. However, the strain-hardening behavior under confinement was not strain-rate sensitive. A phenomenological elastic-plastic type of material model was employed to describe the strain-rate-dependent compressive properties of the syntactic foam under confinement, which agreed well with experimental results.
This research addresses the effects of temperature, including both the adiabatic temperature rise in the specimen during dynamic compression and the environmental temperature, on the dynamic compressive properties of an epoxy syntactic foam. The adiabatic temperature rise in the specimen during dynamic compression is found to be so small that its effects may be neglected. However, environmental temperature has significant effects on dynamic compressive behavior. With decreasing temperature, the foam initially hardens but then softens below a transitional temperature; these behaviors are dominated by thermal-softening and damage-softening mechanisms, respectively. A phenomenological material model accounting for both temperature and strain-rate effects has been developed that describes well the compressive and failure behaviors at various strain rates and environmental temperatures.
We report for the first time a one-step, templateless method to directly prepare large arrays of oriented TiO{sub 2}-based nanotubes and continuous films. These titania nanostructures can also be easily prepared as conformal coatings on a substrate. The nanostructured films were formed on a Ti substrate seeded with TiO{sub 2} nanoparticles. SEM and TEM results suggested that a folding mechanism of sheet-like structures was involved in the formation of the nanotubes. The oriented arrays of TiO{sub 2} nanotubes, continuous films, and coatings are expected to have potential for applications in catalysis, filtration, sensing, photovoltaic cells, and high-surface-area electrodes.
Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca{sub 10}(PO{sub 4}){sub 6}(OH){sub 2}, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal material for use as a backfill, including low water solubility (K{sub sp} < 10{sup -40}), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 C, 700 C, 900 C and 1100 C. The analysis indicated the synthetic hydroxyapatite was similar in morphology to the 500 C cattle-bone hydroxyapatite. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material at sorbing uranium. Sorption of U was strong regardless of apatite type, indicating that all of the apatite materials evaluated are effective sorbents. Sixty-day desorption experiments indicated that desorption of uranium from each hydroxyapatite was negligible.
High-power 18650 Li-ion cells have been developed for hybrid electric vehicle applications as part of the DOE Advanced Technology Development (ATD) program. The thermal abuse response of two advanced chemistries (Gen1 and Gen2) was measured and compared with that of commercial Sony 18650 cells. Gen1 cells consisted of an MCMB graphite-based anode and a LiNi{sub 0.85}Co{sub 0.15}O{sub 2} cathode material, while the Gen2 cells consisted of a MAG10 graphite anode and a LiNi{sub 0.80}Co{sub 0.15}Al{sub 0.05}O{sub 2} cathode. Accelerating rate calorimetry (ARC) and differential scanning calorimetry (DSC) were used to measure the thermal response and properties of the cells and cell materials up to 400 C. The MCMB graphite was found to result in increased thermal stability of the cells due to more effective solid electrolyte interface (SEI) formation. The Al-stabilized cathodes were seen to have higher peak reaction temperatures, which also gave improved cell thermal response. The effects of accelerated aging on cell properties were also determined. Aging resulted in improved cell thermal stability, with the anodes showing a rapid reduction in exothermic reactions while the cathodes showed reduced reactions only after more extended aging.
{sup 90}Sr contamination is a major problem at several U.S. sites. At some sites, {sup 90}Sr has migrated deep underground, making site remediation difficult. In this paper, we describe a novel method for precipitation of hydroxyapatite, a strong sorbent for {sup 90}Sr, in soil. The method is based on mixing a solution of calcium citrate and sodium phosphate into soil. As the indigenous soil microorganisms mineralize the citrate, the calcium is released and forms hydroxyapatite. Soil taken from the Albuquerque desert was treated with a sodium phosphate solution or a sodium phosphate/calcium citrate solution. TEM and EDS were used to identify hydroxyapatite with CO{sub 3}{sup 2-} substitutions, with a formula of (Ca{sub 4.8}Na{sub 0.2})[(PO{sub 4}){sub 2.8}(CO{sub 3}){sub 0.2}](OH), in the soil treated with the sodium phosphate/calcium citrate solution. Untreated and treated soils were used in batch sorption experiments for Sr uptake. Average Sr uptake was 19.5, 77.0, and 94.7% for the untreated soil, the soil treated with sodium phosphate, and the soil with apatite, respectively. In desorption experiments, the untreated soil, the phosphate-treated soil, and the apatite-treated soil released an average of 34.2, 28.8, and 4.8% of the sorbed Sr, respectively. The results indicate the potential of forming apatite in soil using soluble reagents for retardation of radionuclide migration.
A fundamental challenge for engineering communication systems is the problem of transmitting information from the source to the receiver over a noisy channel. This same problem exists in a biological system. How can information required for the proper functioning of a cell, an organism, or a species be transmitted in an error introducing environment? Source codes (compression codes) and channel codes (error-correcting codes) address this problem in engineering communication systems. The ability to extend these information theory concepts to study information transmission in biological systems can contribute to the general understanding of biological communication mechanisms and extend the field of coding theory into the biological domain. In this work, we review and compare existing coding theoretic methods for modeling genetic systems. We introduce a new error-correcting code framework for understanding translation initiation, at the cellular level and present research results for Escherichia coli K-12. By studying translation initiation, we hope to gain insight into potential error-correcting aspects of genomic sequences and systems.
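As a concrete reminder of what an error-correcting code does, the sketch below encodes four message bits with the textbook (7,4) Hamming code and corrects a single flipped bit from its syndrome. This generic code is offered only as an analogy; it is not the translation-initiation framework introduced in the paper.

```python
# Textbook (7,4) Hamming code: encode, corrupt one bit, and correct it
# from the syndrome. A generic illustration of channel coding, not the
# biological framework introduced in the paper.

import numpy as np

# Systematic generator matrix G = [I | P] and parity-check H = [P^T | I].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

message = np.array([1, 0, 1, 1])
codeword = message @ G % 2

received = codeword.copy()
received[2] ^= 1                   # channel noise: flip one bit
syndrome = H @ received % 2        # nonzero -> an error is detected

# The syndrome equals the column of H at the error position.
err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
corrected = received.copy()
corrected[err_pos] ^= 1
print("decoded ok:", np.array_equal(corrected, codeword))
```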
Visualization of scientific frontiers is a relatively new field, yet it has a long history and many predecessors. The application of science to science itself has been undertaken for decades with notable early contributions by Derek Price, Thomas Kuhn, Diana Crane, Eugene Garfield, and many others. What is new is the field of information visualization and application of its techniques to help us understand the process of science in the making. In his new book, Chaomei Chen takes us on a journey through this history, touching on predecessors, and then leading us firmly into the new world of Mapping Scientific Frontiers. Building on the foundation of his earlier book, Information Visualization and Virtual Environments, Chen's new offering is much less a tutorial in how to do information visualization, and much more a conceptual exploration of why and how the visualization of science can change the way we do science, amplified by real examples. Chen's stated intents for the book are: (1) to focus on principles of visual thinking that enable the identification of scientific frontiers; (2) to introduce a way to systematize the identification of scientific frontiers (or paradigms) through visualization techniques; and (3) to stimulate interdisciplinary research between information visualization and information science researchers. On all these counts, he succeeds. Chen's book can be broken into two parts which focus on the first two purposes stated above. The first, consisting of the initial four chapters, covers history and predecessors. Kuhn's theory of normal science punctuated by periods of revolution, now commonly known as paradigm shifts, motivates the work. Relevant predecessors outside the traditional field of information science such as cartography (both terrestrial and celestial), mapping the mind, and principles of visual association and communication, are given ample coverage. Chen also describes enabling techniques known to information scientists, such as multi-dimensional scaling, advanced dimensional reduction, social network analysis, Pathfinder network scaling, and landscape visualizations. No algorithms are given here; rather, these techniques are described from the point of view of enabling 'visual thinking'. The Generalized Similarity Analysis (GSA) technique used by Chen in his recent published papers is also introduced here. Information and computer science professionals would be wise not to skip through these early chapters. Although principles of gestalt psychology, cartography, thematic maps, and association techniques may be outside their technology comfort zone, or interest, these predecessors lay a groundwork for the 'visual thinking' that is required to create effective visualizations. Indeed, the great challenge in information visualization is to transform the abstract and intangible into something visible, concrete, and meaningful to the user. The second part of the book, covering the final three chapters, extends the mapping metaphor into the realm of scientific discovery through the structuring of literatures in a way that enables us to see scientific frontiers or paradigms. Case studies are used extensively to show the logical progression that has been made in recent years to get us to this point. 
Homage is paid to giants of the last 20 years including Michel Callon for co-word mapping, Henry Small for document co-citation analysis and specialty narratives (charting a path linking the different sciences), and Kate McCain for author co-citation analysis, whose work has led to the current state of the art. The last two chapters finally answer the question: 'What does a scientific paradigm look like?' The visual answer given is specific to the GSA technique used by Chen, but does satisfy the intent of the book - to introduce a way to visually identify scientific frontiers. A variety of case studies, mostly from Chen's previously published work - supermassive black holes, cross-domain applications of Pathfinder networks, mass extinction debates, the impact of Don Swanson's work, and mad cow disease and vCJD in humans - succeed in explaining how visualization can be used to show the development of, competition between, and eventual acceptance (or replacement) of scientific paradigms. Although not addressed specifically, Chen's work nonetheless makes the persuasive argument that visual maps alone are not sufficient to explain 'the making of science' to a non-expert in a particular field. Rather, expert knowledge is still required to interpret these maps and to explain the paradigms. This combination of visual maps and expert knowledge, used jointly to good effect in the book, becomes a potent means for explaining progress in science to the expert and non-expert alike. Efforts to extend the GSA technique to explore latent domain knowledge (important work that falls below the citation thresholds typically used in GSA) are also described here.
The essential oil of white sage, Salvia apiana, was obtained by steam distillation and analysed by GC-MS. A total of 13 components were identified, accounting for >99.9% of the oil. The primary component was 1,8-cineole, accounting for 71.6% of the oil.
An IVA (inductive voltage adder) research programme at AWE began with the construction of a small-scale IVA test bed named LINX and progressed to building PIM (Prototype IVA Module). The work on PIM is geared towards furnishing AWE with a range of machines operating at 1 to 4 MV that may eventually supersede, with an upgrade in performance, existing machines operating in that voltage range. PIM has a 10-ohm water-dielectric Blumlein charged by a Marx generator. This has been used to drive either one or two 1.5 MV inductive cavities, and fitting a third cavity may be attempted in the future. The latest two-cavity configuration, shown here, requires a split oil coax to connect the two cavities in parallel. It also has a laser triggering system for initiating the Blumlein and a prepulse reduction system fitted to the output of the Blumlein. A short MITL (magnetically insulated transmission line) connects the cavities, via a vacuum pumping section, to a chamber containing an e-beam diode test load.
Surfactant-templated silica thin films are potentially important materials for applications such as chemical sensing. However, a serious limitation for their use in aqueous environments is their poor hydrolytic stability. One convenient method of increasing the resistance of mesoporous silica to water degradation is addition of alumina, either doped into the pore walls during material synthesis or grafted onto the pore surface of preformed mesophases. Here, we compare these two routes to Al-modified mesoporous silica with respect to their effectiveness in decreasing the solubility of thin mesoporous silicate films. Direct synthesis of templated silica films prepared with Al/Si = 1:50 was found to limit film degradation, as measured by changes in film thickness, to less than 15% at near-neutral pH over a 1 week period. In addition to suppressing film dissolution, addition of Al can also cause structural changes in silica films templated with the nonionic surfactant Brij 56 (C{sub 16}H{sub 33}(OCH{sub 2}CH{sub 2}){sub n{approx}10}OH), including mesophase transformation, a decrease in accessible porosity, and an increase in structural disorder. The solubility behavior of the films is also sensitive to their particular mesophase, with 3D phases (cubic, disordered) possessing less internal stability but greater thickness stability than 2D phases (hexagonal), as determined by ellipsometric measurements. Finally, grafting of Al species onto the surface of surfactant-templated silica films also significantly increases aqueous stability, although to a lesser extent than the direct synthesis route.
We demonstrate a voltage tunable two-color quantum-well infrared photodetector (QWIP) that consists of multiple periods of two distinct AlGaAs/GaAs superlattices separated by AlGaAs blocking barriers on one side and heavily doped GaAs layers on the other side. The detection peak switches from 9.5 {micro}m under large positive bias to 6 {micro}m under negative bias. The background-limited temperature is 55 K for 9.5 {micro}m detection and 80 K for 6 {micro}m detection. We also demonstrate that the corrugated-QWIP geometry is suitable for coupling normally incident light into the detector.
We demonstrate the presence of a resonant interaction between a pair of coupled quantum wires, which are formed in the ultrahigh mobility two-dimensional electron gas of a GaAs/AlGaAs quantum well. The coupled-wire system is realized by an extension of the split-gate technique, in which bias voltages are applied to Schottky gates on the semiconductor surface, to vary the width of the two quantum wires, as well as the strength of the coupling between them. The key observation of interest here is one in which the gate voltages used to define one of the wires are first fixed, after which the conductance of this wire is measured as the gate voltage used to form the other wire is swept. Over the range of gate voltage where the swept wire pinches off, we observe a resonant peak in the conductance of the fixed wire that is correlated precisely to this pinchoff condition. In this paper, we present new results on the current- and temperature-dependence of this conductance resonance, which we suggest is related to the formation of a local moment in the swept wire as its conductance is reduced below 2e{sup 2}/h.
Analytical instrumentation such as time-of-flight secondary ion mass spectrometry (ToF-SIMS) provides a tremendous quantity of data, since an entire mass spectrum is saved at each pixel in an ion image. The analyst often selects only a few species for detailed analysis; the majority of the data are not utilized. Researchers at Sandia National Laboratories (SNL) have developed a powerful multivariate statistical analysis (MVSA) toolkit named AXSIA (Automated eXpert Spectrum Image Analysis) that looks for trends in complete datasets (e.g., it analyzes the entire mass spectrum at each pixel). A unique feature of the AXSIA toolkit is the generation of intuitive results (e.g., negative peaks are not allowed in the spectral response). The robust statistical process is able to unambiguously identify all of the spectral features uniquely associated with each distinct component throughout the dataset. General Electric and Sandia used AXSIA to analyze raw data files generated on an Ion Tof IV ToF-SIMS instrument. Here, we show that the MVSA toolkit identified metallic contaminants within a defect in a polymer sample. These metallic contaminants were not identifiable using standard data analysis protocols.
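AXSIA's internals are not reproduced here, but the non-negativity idea it embodies can be illustrated with generic non-negative matrix factorization. The sketch below (an illustration only, not the AXSIA algorithm; it assumes numpy and scikit-learn, with fabricated data) factors a synthetic spectrum image so that both the extracted component spectra and their abundance maps are constrained non-negative, which is what rules out negative peaks in the spectral response.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic "spectrum image": 64x64 pixels, 200 mass channels, mixed
# from 3 non-negative pure-component spectra plus a little noise.
rng = np.random.default_rng(0)
pure = rng.exponential(1.0, size=(3, 200))           # component spectra
abund = rng.random((64 * 64, 3))                     # per-pixel abundances
data = abund @ pure + 0.01 * rng.random((64 * 64, 200))

# Factor data ~ W @ H with W, H >= 0, so extracted spectra (rows of H)
# cannot contain negative peaks -- the intuitive-results constraint.
model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(data)     # component abundances, one row per pixel
H = model.components_             # component spectra
maps = W.reshape(64, 64, 3)       # abundances reshaped back to image form
```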
The maximum contact map overlap (MAX-CMO) between a pair of protein structures can be used as a measure of protein similarity. It is a purely topological measure and does not depend on the sequences of the pair involved in the comparison. More importantly, MAX-CMO presents a very favorable mathematical structure, which allows the formulation of integer, linear, and Lagrangian models that can be used to obtain guarantees of optimality. It is not the intention of this paper to discuss the mathematical properties of MAX-CMO in detail, as these have been dealt with elsewhere. In this paper we compare three algorithms that can be used to obtain maximum contact map overlaps between protein structures, and we point to the weaknesses and strengths of each one. It is our hope that this paper will encourage researchers to develop new and improved methods for protein comparison based on MAX-CMO.
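For readers unfamiliar with the mathematical structure alluded to above, one common integer-programming formulation of MAX-CMO is sketched here (a representative variant, not necessarily the exact model used by the three algorithms compared): binary variables y align residues across the two proteins, and x variables count pairs of contacts preserved by an order-preserving, one-to-one alignment.

```latex
% E_A, E_B: contact-map edge sets of proteins A and B.
% y_{ij} = 1 iff residue i of A is aligned to residue j of B;
% x_{ikjl} = 1 iff contacts (i,k) in A and (j,l) in B are both realized.
\begin{align*}
\max \quad & \sum_{(i,k)\in E_A}\ \sum_{(j,l)\in E_B} x_{ikjl} \\
\text{s.t.} \quad & x_{ikjl} \le y_{ij}, \qquad x_{ikjl} \le y_{kl} \\
& y_{ij} + y_{kl} \le 1 \quad \text{if } i \le k,\ j \ge l,\ (i,j)\neq(k,l)
  \quad \text{(order-preserving, one-to-one)} \\
& y_{ij},\, x_{ikjl} \in \{0,1\}
\end{align*}
```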
We consider the convergence properties of a non-elitist self-adaptive evolutionary strategy (ES) on multi-dimensional problems. In particular, we apply our recent convergence theory for a discretized (1,{lambda})-ES to design a related (1,{lambda})-ES that converges on a class of separable, unimodal multi-dimensional problems. The distinguishing feature of self-adaptive evolutionary algorithms (EAs) is that the control parameters (like mutation step lengths) are evolved by the algorithm itself. Thus the control parameters are adapted in an implicit manner that relies on the evolutionary dynamics to ensure that more effective control parameters are propagated during the search. Self-adaptation is a central feature of EAs like evolutionary strategies (ES) and evolutionary programming (EP), which are applied to continuous design spaces. Rudolph summarizes theoretical results concerning self-adaptive EAs and notes that the theoretical underpinnings for these methods are essentially unexplored. In particular, convergence theories that ensure convergence to a limit point on continuous spaces have only been developed by Rudolph; Hart, DeLaurentis and Ferguson; and Auger et al. In this paper, we illustrate how our analysis of a (1,{lambda})-ES for one-dimensional unimodal functions can be used to ensure convergence of a related ES on multi-dimensional functions. This (1,{lambda})-ES randomly selects a search dimension in each iteration, along which points are generated. For a general class of separable functions, our analysis shows that the ES searches along each dimension independently, and thus this ES converges to the (global) minimum.
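As a concrete illustration of the algorithm class analyzed (the constants and update rules below are ours, not the paper's), here is a minimal non-elitist, self-adaptive (1,{lambda})-ES that mutates one randomly selected coordinate per iteration and lets the step length evolve alongside the solution:

```python
import numpy as np

def one_comma_lambda_es(f, x0, sigma0=1.0, lam=10, iters=2000, tau=0.5):
    """Sketch of a self-adaptive (1,lambda)-ES with random dimension choice."""
    rng = np.random.default_rng(1)
    x, sigma = np.array(x0, float), sigma0
    for _ in range(iters):
        d = rng.integers(len(x))                   # random search dimension
        best_val, best_x, best_sigma = np.inf, None, None
        for _ in range(lam):                       # lambda offspring
            s = sigma * np.exp(tau * rng.normal()) # self-adapt step length
            child = x.copy()
            child[d] += s * rng.normal()           # mutate one coordinate
            v = f(child)
            if v < best_val:
                best_val, best_x, best_sigma = v, child, s
        x, sigma = best_x, best_sigma              # non-elitist: parent dies
    return x

# Separable, unimodal test function (sphere).
xmin = one_comma_lambda_es(lambda z: np.sum(z * z), [5.0, -3.0, 2.0])
```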
We have investigated InAs quantum dots (QDs) formed on GaAs(1 0 0) using metal-organic chemical vapor deposition. Through a combination of room-temperature photoluminescence and atomic force microscopy we have characterized the quantum dots. We have determined the effect of growth rate, deposited thickness, hydride partial pressure, and temperature on QD energy levels. The window of thickness for QD formation is very small, about 3 {angstrom} of InAs. By decreasing the growth rate used to deposit the InAs, the ground-state transition of the QDs is shifted to lower energies. The formation of optically active InAs QDs is very sensitive to temperature: growth temperatures above 500 C do not produce optically active QDs. The thickness window for QD formation increases slightly at 480 C. This is attributed to the thermal dependence of the diffusion length. The AsH{sub 3} partial pressure has a non-linear effect on the QD ground-state energy.
This paper analyzes the collected charge in heavy-ion-irradiated MOS structures. The charge generated in the substrate induces a displacement effect which strongly depends on the capacitor structure. Networks of capacitors are particularly sensitive to charge-sharing effects. This has important implications for the reliability of SOI devices and DRAMs, which use isolation oxides as a key elementary structure. The buried oxide of present-day and future SOI technologies is thick enough to avoid significant collection from displacement effects. On the other hand, the retention capacitors of trench DRAMs are particularly sensitive to charge release in the substrate. Charge collection on retention capacitors contributes to the MBU sensitivity of DRAMs.
We report operation of a terahertz quantum-cascade laser at 3.8 THz ({lambda} {approx} 79 {micro}m) up to a heat-sink temperature of 137 K. A resonant phonon depopulation design was used with a low-loss metal-metal waveguide, which provided a confinement factor of nearly unity. A threshold current density of 625 A/cm{sup 2} was obtained in pulsed mode at 5 K. Devices fabricated using a conventional semi-insulating surface-plasmon waveguide lased up to 92 K with a threshold current density of 670 A/cm{sup 2} at 5 K.
This paper presents the first 3-D simulation of heavy-ion induced charge collection in a SiGe HBT, together with microbeam testing data. The charge collected by the terminals is a strong function of the ion striking position. The sensitive area of charge collection for each terminal is identified based on analysis of the device structure and simulation results. For a normal strike between the deep trench edges, most of the electrons and holes are collected by the collector and substrate terminals, respectively. For an ion strike between the shallow trench edges surrounding the emitter, the base collects an appreciable amount of charge, while the emitter collects a negligible amount. Good agreement is achieved between the experimental and simulated data. Problems encountered with mesh generation and charge collection simulation are also discussed.
Seismic event location is made challenging by the difficulty of describing event location uncertainty in multidimensions, by the non-linearity of the Earth models used as input to the location algorithm, and by the presence of local minima that can prevent a location code from finding the global minimum. Techniques to deal with these issues will be described. Since some of these techniques are computationally expensive or require more analysis by human analysts, users need a flexible location code that allows them to select from a variety of solutions that span a range of computational efficiency and simplicity of interpretation. A new location code, LocOO, has been developed to deal with these issues. A seismic event location comprises a point in 4-dimensional (4D) space-time, surrounded by a 4D uncertainty boundary. The point location is useless without the uncertainty that accompanies it. While it is mathematically straightforward to reduce the dimensionality of the 4D uncertainty limits, the number of dimensions that should be retained depends on the dimensionality of the location to which the calculated event location is to be compared. In nuclear explosion monitoring, when an event is to be compared to a known or suspected test site location, the three spatial components of the test site and event location are to be compared and 3-dimensional uncertainty boundaries should be considered. With LocOO, users can specify a location to which the calculated seismic event location is to be compared, and the dimensionality of the uncertainty is tailored to that of the location specified by the user. The code also calculates the probability that the two locations in fact coincide. The non-linear travel time curves that constrain calculated event locations present two basic difficulties. The first is that the non-linearity can cause least squares inversion techniques to fail to converge. LocOO implements a nonlinear Levenberg-Marquardt least squares inversion technique that is guaranteed to converge in a finite number of iterations for tractable problems. The second difficulty is that a high degree of non-linearity causes the uncertainty boundaries around the event location to deviate significantly from elliptical shapes. LocOO can optionally calculate and display non-elliptical uncertainty boundaries at the cost of a minimal increase in computation time and complexity of interpretation. All location codes are plagued by the possibility of having local minima obscuring the single global minimum. No code can guarantee that it will find the global minimum in a finite number of computations. Grid search algorithms have been developed to deal with this problem, but have a high computational cost. In order to improve the likelihood of finding the global minimum in a timely manner, LocOO implements a hybrid least squares-grid search algorithm. Essentially, many least squares solutions are computed starting from a user-specified number of initial locations, and the solution with the smallest sum of squared weighted residuals is assumed to be the optimal location. For events of particular interest, analysts can display contour plots of gridded residuals in a selected region around the best-fit location, improving the probability that the global minimum will not be missed and also providing much greater insight into the character and quality of the calculated solution.
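A toy version of the core least-squares machinery may help fix ideas. The sketch below (fabricated station geometry and a uniform-velocity Earth, nothing like LocOO's actual travel-time models; it assumes numpy and scipy) locates an event in 2D plus origin time with Levenberg-Marquardt, restarted from several initial locations in the spirit of the hybrid least-squares/grid-search approach:

```python
import numpy as np
from scipy.optimize import least_squares

v = 6.0                                          # km/s, uniform velocity
stations = np.array([[0, 0], [100, 0], [0, 100], [80, 90]], float)
true = np.array([40.0, 30.0, 5.0])               # x, y, origin time t0
arrivals = true[2] + np.linalg.norm(stations - true[:2], axis=1) / v

def residuals(m):
    """Predicted minus observed arrival times for model m = (x, y, t0)."""
    pred = m[2] + np.linalg.norm(stations - m[:2], axis=1) / v
    return pred - arrivals

# Levenberg-Marquardt from several starting points, keeping the best fit,
# mirrors the hybrid least-squares/grid-search idea described above.
starts = [np.array([x0, y0, 0.0]) for x0 in (0, 50) for y0 in (0, 50)]
fits = [least_squares(residuals, s, method="lm") for s in starts]
best = min(fits, key=lambda r: r.cost)           # smallest sum of squares
```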
To improve the nuclear event monitoring capability of the U.S., the NNSA Ground-based Nuclear Explosion Monitoring Research & Engineering (GNEM R&E) program has been developing a collection of products known as the Knowledge Base (KB). Though much of the focus for the KB has been on the development of calibration data, we have also developed numerous software tools for various purposes. The Matlab-based MatSeis package and the associated suite of regional seismic analysis tools were developed to aid in the testing and evaluation of some Knowledge Base products for which existing applications were either not available or ill-suited. This presentation will provide brief overviews of MatSeis and each of the tools, emphasizing features added in the last year. MatSeis was begun in 1996 and is now a fairly mature product. It is a highly flexible seismic analysis package that provides interfaces to read data from either flatfiles or an Oracle database. All of the standard seismic analysis tasks are supported (e.g., filtering, 3-component rotation, phase picking, event location, magnitude calculation), as well as a variety of array processing algorithms (beaming, FK, coherency analysis, vespagrams). The simplicity of Matlab coding and the tremendous number of available functions make MatSeis/Matlab an ideal environment for developing new monitoring research tools (see the regional seismic analysis tools below). New MatSeis features include: addition of evid information to events in MatSeis, options to screen picks by author, input and output of origerr information, improved performance in reading flatfiles, improved speed in FK calculations, and significant improvements to Measure Tool (filtering, multiple phase display), Free Plot (filtering, phase display and alignment), Mag Tool (maximum likelihood options), and Infra Tool (improved calculation speed, display of an F-statistic stream). Work on the regional seismic analysis tools (CodaMag, EventID, PhaseMatch, and Dendro) began in 1999 and the tools vary in their level of maturity. All rely on MatSeis to provide necessary data (waveforms, arrivals, origins, and travel time curves). CodaMag Tool implements magnitude calculation by scaling to fit the envelope shape of the coda for a selected phase type (Mayeda, 1993; Mayeda and Walter, 1996). New tool features include: calculation of a yield estimate based on the source spectrum, display of a filtered version of the seismogram based on the selected band, and the output of codamag data records for processed events. EventID Tool implements event discrimination using phase ratios of regional arrivals (Hartse et al., 1997; Walter et al., 1999). New features include: bandpass filtering of displayed waveforms, screening of reference events based on SNR, multivariate discriminants, use of libcgi to access correction surfaces, and the output of discrim{_}data records for processed events. PhaseMatch Tool implements match filtering to isolate surface waves (Herrin and Goforth, 1977). New features include: display of the signal's observed dispersion and an option to use a station-based dispersion surface. Dendro Tool implements agglomerative hierarchical clustering using dendrograms to identify similar events based on waveform correlation (Everitt, 1993). New features include: modifications to include arrival information within the tool, and the capability to automatically add/re-pick arrivals based on the picked arrivals for similar events.
Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state of the art in the analysis of meta-heuristics by providing answers to this research question. The authors focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search. The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take as the well-known disjunctive graph distance [MBK99].
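The two-phase structure described above can be captured in a few lines. The skeleton below is a generic ILS template with a toy bit-string problem plugged in; the JSP-specific machinery (the N1 operator, I-JAR's random walks through less-fit neighbors) is abstracted behind local_search() and perturb(), which here are simple stand-ins of our own devising.

```python
import random

def iterated_local_search(init, local_search, perturb, cost, iters=200):
    """Generic ILS: alternate large-step perturbation and greedy descent."""
    current = local_search(init())      # descend to an initial local optimum
    best = current
    for _ in range(iters):
        candidate = local_search(perturb(current))  # large step, then descent
        if cost(candidate) <= cost(current):        # simple acceptance rule
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best

n = 32
def cost(s):                            # rugged toy objective over bit strings
    return sum(s[i] != (i % 2) for i in range(n)) + 3 * s[0] * s[-1]
def init():
    return [random.randint(0, 1) for _ in range(n)]
def local_search(s):                    # greedy single-bit-flip descent
    s = s[:]
    improved = True
    while improved:
        improved = False
        for i in range(n):
            base = cost(s)
            s[i] ^= 1
            if cost(s) < base:
                improved = True
            else:
                s[i] ^= 1               # revert a non-improving flip
    return s
def perturb(s):                         # large-step operator: flip 4 bits
    s = s[:]
    for i in random.sample(range(n), 4):
        s[i] ^= 1
    return s

best = iterated_local_search(init, local_search, perturb, cost)
```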
Sintering is one of the oldest processes used by man to manufacture materials, dating as far back as 12,000 BC. While it is an ancient process, it is also necessary for many modern technologies such as multilayered ceramic packages, wireless communication devices, and many others. The process consists of thermally treating a powder or compact at a temperature below the melting point of the main constituent, for the purpose of increasing its strength by bonding the particles together. During sintering, the individual particles bond, the pore space between particles is eliminated, the resulting component can shrink by as much as 30 to 50% by volume, and its shape can distort tremendously. Being able to control and predict the shrinkage and shape distortions during sintering has been the goal of much research in materials science, and it has been achieved to varying degrees. The objective of this project was to develop models that could simulate sintering at the mesoscale and at the macroscale to more accurately predict the overall shrinkage and shape distortions in engineering components. The mesoscale model simulates microstructural evolution during sintering by modeling grain growth, pore migration and coarsening, and vacancy formation, diffusion, and annihilation. In addition to studying microstructure, these simulations can be used to generate the constitutive equations describing shrinkage and deformation during sintering. These constitutive equations are used by continuum finite element simulations to predict the overall shrinkage and shape distortions of a sintering crystalline powder compact. Both models will be presented. Application of these models to study sintering will be demonstrated and discussed. Finally, the limitations of these models will be reviewed.
We describe stochastic agent-based simulations of protein-emulating agents that perform computation via dynamic self-assembly. The binding and actuation properties of the types of agents required to construct a RAM machine (equivalent to a Turing machine) are described. We present an example computation and describe the molecular biology, non-equilibrium statistical mechanics, and information science properties of this system.
Acid-base titration and metal sorption experiments were performed on both mesoporous alumina and alumina particles under various ionic strengths. It has been demonstrated that surface chemistry and ion sorption within nanopores can be significantly modified by nano-scale space confinement. As the pore size is reduced to a few nanometers, the difference between the surface acidity constants (ΔpK = pK2 - pK1) decreases, giving rise to a higher surface charge density on a nanopore surface than on an unconfined solid-solution interface. The change in surface acidity constants results in a shift of the ion sorption edges and enhances ion sorption on the nanopore surfaces.
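For reference, the acidity constants discussed here are those of the standard 2-pK surface-protonation model (notation ours, written for an alumina surface site); a smaller ΔpK places more of the charged surface species at any pH away from the point of zero charge, consistent with the higher surface charge density reported for the nanopores.

```latex
% Standard 2-pK surface-protonation model:
%   >AlOH2+  <=>  >AlOH + H+   (K1),      >AlOH  <=>  >AlO- + H+   (K2)
\begin{align*}
K_1 &= \frac{[\mathrm{{>}AlOH}]\, a_{\mathrm{H}^+}}{[\mathrm{{>}AlOH_2^+}]},
\qquad
K_2 = \frac{[\mathrm{{>}AlO^-}]\, a_{\mathrm{H}^+}}{[\mathrm{{>}AlOH}]},\\[4pt]
\Delta \mathrm{p}K &= \mathrm{p}K_2 - \mathrm{p}K_1,
\qquad
\mathrm{pH_{PZC}} = \tfrac{1}{2}\left(\mathrm{p}K_1 + \mathrm{p}K_2\right)
\end{align*}
```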
A three-dimensional photonic-crystal emitter for thermal photovoltaic power generation was studied. The photonic crystal, at 1535 K, exhibited a sharp emission at λ∼1.5 μm and is promising for thermal photovoltaic (TPV) generation. It was shown that an optical-to-electric conversion efficiency of ∼34% and an electrical power density of ∼14 W/cm2 are possible.
A Simple PolyUrethane Foam (SPUF) mass loss and response model has been developed to predict the behavior of unconfined, rigid, closed-cell, polyurethane foam-filled systems exposed to fire-like heat fluxes. The model, developed for the B61 and W80-0/1 fireset foam, is based on a simple two-step mass loss mechanism using distributed reaction rates. The initial reaction step assumes that the foam degrades into a primary gas and a reactive solid. The reactive solid subsequently degrades into a secondary gas. The SPUF decomposition model was implemented in the finite element (FE) heat conduction codes COYOTE [1] and CALORE [2], which support chemical kinetics and dynamic enclosure radiation using 'element death.' A discretization bias correction model was parameterized using elements with characteristic lengths ranging from 1 mm to 1 cm. Bias-corrected solutions using the SPUF response model with large elements gave essentially the same results as grid-independent solutions using 100-{micro}m elements. The SPUF discretization bias correction model can be used with 2D regular quadrilateral elements, 2D paved quadrilateral elements, 2D triangular elements, 3D regular hexahedral elements, 3D paved hexahedral elements, and 3D tetrahedron elements. Several factors affecting the efficient recalculation of view factors were studied -- the element aspect ratio, the element death criterion, and a 'zombie' criterion. Most of the solutions using irregular, large elements were in agreement with the 100-{micro}m grid-independent solutions. The discretization bias correction model did not perform as well when the element aspect ratio exceeded 5:1 and the heated surface was on the shorter side of the element. For validation, SPUF predictions using various sizes and types of elements were compared to component-scale experiments of foam cylinders that were heated with lamps. The SPUF predictions of the decomposition front locations were compared to the front locations determined from real-time X-rays. SPUF predictions of the 19 radiant heat experiments were also compared to predictions made with a more complex chemistry model (CPUF) using 1-mm elements. The SPUF predictions of the front locations were closer to the measured front locations than the CPUF predictions, reflecting the more accurate SPUF prediction of mass loss. Furthermore, the computational time for the SPUF predictions was an order of magnitude less than for the CPUF predictions.
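A stripped-down version of the two-step scheme clarifies its structure. The sketch below integrates simple first-order Arrhenius steps for foam -> primary gas + reactive solid -> secondary gas; note that SPUF actually uses distributed reaction rates, and the rate constants and split fraction here are invented for illustration (numpy and scipy assumed).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Arrhenius constants (NOT SPUF parameters).
A1, E1 = 1e13, 180e3      # 1/s, J/mol : foam -> phi*gas1 + (1-phi)*solid
A2, E2 = 1e12, 200e3      # 1/s, J/mol : solid -> gas2
phi = 0.6                 # assumed mass fraction released as primary gas
R = 8.314                 # J/(mol K)

def rates(t, y, T):
    foam, solid = y
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return [-k1 * foam,                           # foam consumption
            (1 - phi) * k1 * foam - k2 * solid]   # solid made, then gasified

T = 700.0                                         # K, held constant here
sol = solve_ivp(rates, (0, 60), [1.0, 0.0], args=(T,), max_step=0.1)
condensed_mass = sol.y.sum(axis=0)                # remaining condensed phase
```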
Presented within this report are the results of a brief examination of optical tagging technologies funded by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. The work was performed during the summer months of 2002 with total funding of $65k. The intent of the project was to briefly examine a broad range of approaches to optical tagging, concentrating on the wavelength range between the ultraviolet (UV) and the short-wavelength infrared (SWIR, {lambda} < 2{micro}m). Tagging approaches considered include simple combinations of reflective and absorptive materials closely spaced in wavelength to give a high contrast over a short range of wavelengths, rare-earth oxides in transparent binders to produce a narrow absorption line hyperspectral tag, and fluorescing materials such as phosphors, dyes, and chemically precipitated particles. One technical approach examined in slightly greater detail was the use of fluorescing nanoparticles of metals and semiconductor materials. The idea was to embed such nanoparticles in an oily film or transparent paint binder. When pumped with a SWIR laser such as that produced by laser diodes at {lambda}=1.54{micro}m, the particles would fluoresce at slightly longer wavelengths, thereby giving a unique signal. While it is believed that optical tags are important for military, intelligence, and even law enforcement applications, as a business area tags do not appear to represent a high return on investment. Other government agencies frequently shop for existing or mature tag technologies but rarely are interested enough to pay for development of an untried technical approach. It was hoped that, through a relatively small investment of laboratory R&D funds, enough technologies could be identified that a potential customer's requirements could be met with a minimum of additional development work. Only time will tell if this proves to be correct.
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e. vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented in the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite elements decreased below a set criterion. Element removal, referred to as 'element death,' creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the experiments where the decomposition gases were vented sufficiently. The CPUF model results were not as good for the partially confined radiant heat experiments where the vent area was regulated to maintain pressure. Liquefaction and flow effects, which are not considered in the CPUF model, become important when the decomposition gases are confined.
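The percolation idea can be illustrated independently of the full CPUF chemistry. In the toy sketch below (a 2D lattice standing in for the polymer network, an arbitrary bond-breaking probability, and an arbitrary small-fragment cutoff; numpy and networkx assumed), random bond scission yields a fragment-size distribution from which a hypothetical gas-phase fraction can be tallied.

```python
import numpy as np
import networkx as nx

# Toy bond-percolation picture of polymer degradation: break bonds at
# random with probability p_break, then tally fragment sizes; the small
# fragments would be the candidates for the gas phase.
rng = np.random.default_rng(2)
G = nx.grid_2d_graph(50, 50)                 # stand-in polymer network
p_break = 0.6                                # arbitrary scission probability
G.remove_edges_from([e for e in G.edges if rng.random() < p_break])
sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
gas_fraction = sum(s for s in sizes if s <= 3) / (50 * 50)   # cutoff = 3
```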
Sandia National Laboratories has been encapsulating magnetic components for over 40 years. The reliability of magnetic component assemblies that must withstand a variety of environments and then function correctly depends on the use of appropriate encapsulating formulations. Specially developed formulations are critical and enable us to provide high-reliability magnetic components. This paper discusses epoxy, urethane, and silicone formulations for several of our magnetic components.
Niobium doped PZT 95/5 (lead zirconate-lead titanate) is the material used in voltage bars for all ferroelectric neutron generator power supplies. In June of 1999, the transfer and scale-up of the Sandia Process from Department 1846 to Department 14192 was initiated. The laboratory-scale process of 1.6 kg has been successfully scaled to a production batch quantity of 10 kg. This report documents efforts to characterize and optimize the production-scale process utilizing Design of Experiments methodology. Of the 34 factors identified in the powder preparation sub-process, 11 were initially selected for the screening design. Additional experiments and safety analysis subsequently reduced the screening design to six factors. Three of the six factors (Milling Time, Media Size, and Pyrolysis Air Flow) were identified as statistically significant for one or more responses and were further investigated through a full factorial interaction design. Analysis of the interaction design resulted in developing models for Powder Bulk Density, Powder Tap Density, and +20 Mesh Fraction. Subsequent batches validated the models. The initial baseline powder preparation conditions were modified, resulting in improved powder yield by significantly reducing the +20 mesh waste fraction. Response variation analysis indicated additional investigation of the powder preparation sub-process steps was necessary to identify and reduce the sources of variation to further optimize the process.
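To indicate the kind of analysis involved, the sketch below builds the implied 2{sup 3} full-factorial interaction design in coded units and fits main effects plus two-factor interactions by least squares; the factor labels follow the report, but the response values are placeholders, not the report's data (numpy assumed).

```python
import itertools
import numpy as np

# 2^3 full factorial in coded units for the three significant factors:
# A = Milling Time, B = Media Size, C = Pyrolysis Air Flow.
runs = np.array(list(itertools.product([-1, 1], repeat=3)), float)
y = np.array([2.1, 2.4, 2.0, 2.6, 2.2, 2.9, 2.1, 3.3])  # placeholder response

# Model matrix: intercept, main effects, and two-factor interactions.
A, B, C = runs.T
X = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
coefs = dict(zip(["mean", "A", "B", "C", "AB", "AC", "BC"], coef))
```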
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the 'philosophy' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a 'model supplement term' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response.
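Element (4), the model supplement term, can be sketched simply. In the toy example below (fabricated numbers, and a linear-in-temperature discrepancy chosen only for illustration; numpy assumed), the supplement is fit to validation residuals and added to subsequent model predictions along with a crude uncertainty band.

```python
import numpy as np

# Fabricated validation data: the model "exaggerates" the temperature trend.
x_val = np.array([250.0, 300.0, 350.0, 400.0])   # validation temperatures, C
y_exp = np.array([0.12, 0.31, 0.58, 0.74])       # measured responses
y_mod = np.array([0.10, 0.35, 0.66, 0.88])       # model output

def model(x):                                    # stand-in for the simulator
    return np.interp(x, x_val, y_mod)

disc = y_exp - y_mod                             # experiment minus model
coef = np.polyfit(x_val, disc, 1)                # linear supplement term
s = (disc - np.polyval(coef, x_val)).std(ddof=2) # residual scatter

def predict(x_new):
    """Bias-corrected prediction with a crude +/- 2s uncertainty band."""
    y = model(x_new) + np.polyval(coef, x_new)
    return y, 2.0 * s

y325, band = predict(325.0)
```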
In the validation analysis it is indicated that the model tends to 'exaggerate' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning what was needed for this aspect of the analysis. The resulting predictions and corresponding uncertainty assessment demonstrate the flexibility of this approach.
This User Guide for the RADTRAN 5 computer code for transportation risk analysis describes basic risk concepts and provides the user with step-by-step directions for creating input files by means of either the RADDOG input file generator software or a text editor. It also contains information on how to interpret RADTRAN 5 output, how to obtain and use several types of important input data, and how to select appropriate analysis methods. Appendices include a glossary of terms, a listing of error messages, data-plotting information, images of RADDOG screens, and a table of all data in the internal radionuclide library.
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to 'demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies.' This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real-time. The system is being flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps taken of the San Diego area.
Fast and quantitative analysis of cellular activity, signaling and responses to external stimuli is a crucial capability and it has been the goal of several projects focusing on patch clamp measurements. To provide the maximum functionality and measurement options, we have developed a patch clamp array device that incorporates on-chip electronics, mechanical, optical and microfluidic coupling as well as cell localization through fluid flow. The preliminary design, which integrated microfluidics, electrodes and optical access, was fabricated and tested. In addition, new designs which further combine mechanical actuation, on-chip electronics and various electrode materials with the previous designs are currently being fabricated.
Silane adhesion promoters are commonly used to improve the adhesion, durability, and corrosion resistance of polymer-oxide interfaces. The current study investigates a model interface consisting of the natural oxide of (100) Si and an epoxy cured from diglycidyl ether of bisphenol A (DGEBA) and triethylenetetraamine (TETA). The thickness of (3-glycidoxypropyl)trimethoxysilane (GPS) films placed between the two materials provided the structural variable. Five surface treatments were investigated: a bare interface, a rough monolayer film, a smooth monolayer film, a 5 nm thick film, and a 10 nm thick film. Previous neutron reflection experiments revealed large extension ratios (>2) when the 5 and 10 nm thick GPS films were exposed to deuterated nitrobenzene vapor. Despite the larger extension ratio for the 5 nm thick film, the epoxy/Si fracture energy (G{sub c}) was equal to that of the 10 nm thick film under ambient conditions. Even the smooth monolayer exhibited the same G{sub c}. Only when the monolayer included a significant number of agglomerates did the G{sub c} drop to levels closer to that of the bare interface. When immersed in water at room temperature for 1 week, the threshold energy release rate (G{sub th}) was nearly equal to G{sub c} for the smooth monolayer, 5 nm thick film, and 10 nm thick film. While the G{sub th} for all three films decreased with increasing water temperature, the G{sub th} of the smooth monolayer decreased more rapidly. The bare interface was similarly sensitive to temperature; however, the G{sub th} of the rough monolayer did not change significantly as the temperature was raised. Despite the influence of pH on hydrolysis, the G{sub th} was insensitive to the pH of the water for all surface treatments.
Boron carbide displays a rich response to dynamic compression that is not well understood. To address poorly understood aspects of behavior, including dynamic strength and the possibility of phase transformations, a series of plate impact experiments was performed that also included reshock and release configurations. Hugoniot data were obtained from the elastic limit (15-18 GPa) to 70 GPa and were found to agree reasonably well with the somewhat limited data in the literature. Using the Hugoniot data, as well as the reshock and release data, the possibility of the existence of one or more phase transitions was examined. There is tantalizing evidence, but at this time no phase transition can be conclusively demonstrated. However, the experimental data are consistent with a phase transition at a shock stress of about 40 GPa, though the volume change associated with it would have to be small. The reshock and release experiments also provide estimates of the shear stress and strength in the shocked state as well as a dynamic mean stress curve for the material. The material supports only a small shear stress in the shocked (Hugoniot) state, but it can support a much larger shear stress when loaded or unloaded from the shocked state. This strength in the shocked state is initially lower than the strength at the elastic limit but increases with pressure to about the same level. Also, the dynamic mean-stress curve estimated from reshock and release differs significantly from the hydrostate constructed from low-pressure data. Finally, a spatially resolved interferometer was used to directly measure spatial variations in particle velocity during the shock event. These spatially resolved measurements are consistent with previous work and suggest a nonuniform failure mode occurring in the material.
This paper describes an integrated experimental and computational framework for developing 3-D structural models for humic acids (HAs). This approach combines experimental characterization, computer assisted structure elucidation (CASE), and atomistic simulations to generate all 3-D structural models or a representative sample of these models consistent with the analytical data and bulk thermodynamic/structural properties of HAs. To illustrate this methodology, structural data derived from elemental analysis, diffuse reflectance FT-IR spectroscopy, 1-D/2-D {sup 1}H and {sup 13}C solution NMR spectroscopy, and electrospray ionization quadrupole time-of-flight mass spectrometry (ESI QqTOF MS) are employed as input to the CASE program SIGNATURE to generate all 3-D structural models for Chelsea soil humic acid (HA). These models are subsequently used as starting 3-D structures to carry out constant temperature-constant pressure molecular dynamics simulations to estimate their bulk densities and Hildebrand solubility parameters. Surprisingly, only a few model isomers are found to exhibit molecular compositions and bulk thermodynamic properties consistent with the experimental data. The simulated {sup 13}C NMR spectrum of an equimolar mixture of these model isomers compares favorably with the measured spectrum of Chelsea soil HA.
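For reference, the Hildebrand solubility parameter estimated from such constant-temperature, constant-pressure molecular dynamics runs is conventionally computed from the cohesive energy density (notation ours; the paper's exact estimator may differ):

```latex
% delta from NPT molecular dynamics: cohesive energy density obtained by
% comparing gas-phase (isolated-molecule) and bulk intermolecular energies.
\delta \;=\; \sqrt{\frac{E_{\mathrm{coh}}}{\langle V \rangle}}
       \;=\; \sqrt{\frac{\langle E_{\mathrm{gas}}\rangle
                         - \langle E_{\mathrm{bulk}}\rangle}{\langle V \rangle}}
```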
Inertial confinement fusion capsule implosions absorbing up to 35 kJ of x-rays from a {approx}220 eV dynamic hohlraum on the Z accelerator at Sandia National Laboratories have produced thermonuclear D-D neutron yields of (2.6 {+-} 1.3) x 10{sup 10}. Argon spectra confirm a hot fuel with Te {approx} 1 keV and n{sub e} {approx} (1-2) x 10{sup 23} cm{sup -3}. Higher performance implosions will require radiation symmetry control improvements. Capsule implosions in a {approx}70 eV double-Z-pinch-driven secondary hohlraum have been radiographed by 6.7 keV x-rays produced by the Z-beamlet laser (ZBL), demonstrating a drive symmetry of about 3% and control of P{sub 2} radiation asymmetries to {+-}2%. Hemispherical capsule implosions have also been radiographed in Z in preparation for future experiments in fast ignition physics. Z-pinch-driven inertial fusion energy concepts are being developed. The refurbished Z machine (ZR) will begin providing scaling information on capsule and Z-pinch in 2006. The addition of a short pulse capability to ZBL will enable research into fast ignition physics in the combination of ZR and ZBL-petawatt. ZR could provide a test bed to study NIF-relevant double-shell ignition concepts using dynamic hohlraums and advanced symmetry control techniques in the double-pinch hohlraum backlit by ZBL.
Two-dimensional processes of nickel electrodeposition in LIGA microfabrication were modeled using the finite-element method and a fully coupled implicit solution scheme via Newton's method. Species concentrations, electrolyte potential, flow field, and positions of the moving deposition surfaces were computed by solving the species-mass, charge, and momentum conservation equations, as well as pseudo-solid mesh-motion equations that employ an arbitrary Lagrangian-Eulerian (ALE) formulation. Coupling this ALE approach with repeated re-meshing and re-mapping makes it possible to track the entire transient deposition process from the start of deposition until the trenches are filled, thus enabling the computation of the local current densities that influence the microstructure and functional/mechanical properties of the deposit.
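A representative form of the coupled conservation laws being solved is the standard dilute-electrolyte system below (Nernst-Planck species transport, charge conservation, and incompressible Navier-Stokes momentum; the paper's exact closures and boundary kinetics may differ):

```latex
% c_i: concentration, D_i: diffusivity, z_i: charge, u_i: mobility,
% Phi: electrolyte potential, u: velocity, i: current density, F: Faraday.
\begin{align*}
\frac{\partial c_i}{\partial t} + \mathbf{u}\cdot\nabla c_i
 &= \nabla\cdot\bigl( D_i \nabla c_i + z_i u_i F c_i \nabla\Phi \bigr)
 && \text{(species mass)}\\
\nabla\cdot\mathbf{i}
 &= 0, \qquad
 \mathbf{i} = -\kappa\nabla\Phi - F\sum_i z_i D_i \nabla c_i
 && \text{(charge)}\\
\rho\Bigl(\frac{\partial\mathbf{u}}{\partial t}
          + \mathbf{u}\cdot\nabla\mathbf{u}\Bigr)
 &= -\nabla p + \mu\nabla^{2}\mathbf{u}
 && \text{(momentum)}
\end{align*}
```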
Using shock wave reverberation experiments, water samples were quasi-isentropically compressed between silica and sapphire plates to peak pressures of 1-5 GPa on nanosecond time scales. Real time optical transmission measurements were used to examine changes in the compressed samples. Although the ice VII phase is thermodynamically favored above 2 GPa, the liquid state was initially preserved and subsequent freezing occurred over hundreds of nanoseconds only for the silica cells. Images detailing the formation and growth of the solid phase were obtained. These results provide unambiguous evidence of bulk water freezing on such short time scales.
Combined XRD/neutron Rietveld refinements were performed on PbZr{sub 0.30}Ti{sub 0.70}O{sub 3} powder samples doped with nominally 4% Ln (where Ln = Ce, Nd, Tb, Y, or Yb). The resulting refined structural parameters indicated that the lattice parameter and volume changes in the tetragonal perovskite unit cell were consistent with A- and/or B-site doping of the structure. Ce doping appears inconsistent given its rather large atomic radius, but is understood in terms of its oxidation to the Ce{sup +4} oxidation state in the structure. Results for the B-site displacement values of the Ti/Zr site indicate that amphoteric doping of Ln cations in the structure results in superior properties for PLnZT materials.
Blastwalls are often assumed to be the answer for facility protection from malevolent explosive assault, particularly from large vehicle bombs (LVBs). The assumption is made that the blastwall, if it is built strongly enough to survive, will provide substantial protection to facilities and people on the side opposite the LVB. This paper demonstrates, through computer simulations and experimental data, the behavior of explosively induced air blasts during their interaction with blastwalls. It is shown that air blasts can effectively wrap around and over blastwalls. Significant pressure reduction can be expected on the downstream side of the blastwall, but substantial pressure will continue to propagate. The effectiveness of the blastwall in reducing blast overpressure depends on the geometry of the blastwall and the location of the explosive relative to the blastwall.