This paper describes the development of a surface-acoustic-wave (SAW) sensor designed to operate continuously and in situ to detect volatile organic compounds. A ruggedized stainless-steel package encasing the SAW device and integrated circuit board allows the sensor to be deployed in a variety of media including air, soil, and even water. Sensing polymers were selected and optimized based on their response to chlorinated aliphatic hydrocarbons (e.g., trichloroethylene), which are common groundwater contaminants. Initial testing indicates that a running-average data-logging algorithm can reduce the noise and increase the sensitivity of the in situ sensor.
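As a rough illustration of the data-logging idea, the sketch below applies a boxcar running average to a noisy synthetic signal; the window length, signal shape, and noise level are hypothetical and not taken from the sensor firmware.

```python
import numpy as np

def running_average(signal, window=64):
    """Smooth a 1-D signal with a boxcar running average.

    A longer window suppresses more noise at the cost of a slower
    response to true concentration changes.
    """
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

# Hypothetical example: a small SAW frequency shift buried in noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
clean = 5.0 * (t > 1000)                   # 5 Hz shift when analyte arrives
noisy = clean + rng.normal(0.0, 2.0, t.size)
smoothed = running_average(noisy, window=64)
```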
Solid-state lighting using light-emitting diodes (LEDs) has the potential to reduce energy consumption for lighting by 50% while revolutionizing the way we illuminate our homes, work places, and public spaces. Nevertheless, substantial technical challenges remain before solid-state lighting can significantly displace the well-developed conventional lighting technologies. We review the potential of LED solid-state lighting to meet long-term cost goals.
We have adopted a binary superlattice structure for long-wavelength broadband detection. In this superlattice, the basis contains two unequal wells, with which more energy states are created for broadband absorption. At the same time, responsivity is more uniform within the detection band because of mixing of wave functions from the two wells. This uniform line shape is particularly suitable for spectroscopy applications. The detector is designed to cover the entire 8-14 μm long-wavelength atmospheric window. The observed spectral widths are 5.2 and 5.6 μm for two nominally identical wafers. The photoresponse spectra from both wafers are nearly unchanged over a wide range of operating bias and temperature. The background-limited temperature is 50 K at 2 V bias for F/1.2 optics.
A quiet revolution is underway. Over the next 5-10 years, inorganic-semiconductor-based solid-state lighting technology is expected to outperform first incandescent, and then fluorescent and high-intensity-discharge, lighting. Along the way, many decision points and technical challenges will be faced. To help understand these challenges, the U.S. Department of Energy, the Optoelectronics Industry Development Association, and the National Electrical Manufacturers Association recently updated the U.S. Solid-State Lighting Roadmap. In the first half of this paper, we present an overview of the high-level targets of the inorganic-semiconductor part of that update. In the second half, we discuss some implications of those high-level targets for the GaN-based semiconductor chips that will be the 'engine' for solid-state lighting.
We have investigated the liquid-phase self-assembly of 1-alkanethiols (HS(CH₂)ₙ₋₁CH₃, n = 8, 16, and 18) on hydrogenated Ge(111), using attenuated total reflection Fourier transform infrared spectroscopy as well as water contact angle measurements. The infrared absorbance of C-H stretching modes of alkanethiolates on Ge, in conjunction with water contact angle measurements, demonstrates that the final packing density is a function of alkanethiol concentration in 2-propanol and its chain length. High concentration and long alkyl chain increase the steady-state surface coverage of alkanethiolates. A critical chain length exists between n = 8 and 16, above which the adsorption kinetics is comparable for all long alkyl chain 1-alkanethiols. The steady-state coverage of hexadecanethiolates, representing long-chain alkanethiolates, reaches a maximum at approximately 5.9 × 10¹⁴ hexadecanethiolates/cm² in 1 M solution. The characteristic time constant to reach a steady state also decreases with increasing chain length. This chain length dependence is attributed to the attractive chain-to-chain interaction in long-alkyl-chain self-assembled monolayers, which reduces the desorption-to-adsorption rate ratio (k_d/k_a). We also report the adsorption and desorption rate constants (k_a and k_d) of 1-hexadecanethiol on hydrogenated Ge(111) at room temperature. The alkanethiol adsorption is a two-step process following a first-order Langmuir isotherm: (1) fast adsorption with k_a = 2.4 ± 0.2 cm³/(mol s) and k_d = (8.2 ± 0.5) × 10⁻⁶ s⁻¹; (2) slow adsorption with k_a = 0.8 ± 0.5 cm³/(mol s) and k_d = (3 ± 2) × 10⁻⁶ s⁻¹.
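To make the reported kinetics concrete, the sketch below integrates the first-order Langmuir rate equation, dθ/dt = k_a·C·(1 − θ) − k_d·θ, using the rate constants quoted above. The 1 M concentration matches the abstract, but the 80/20 partition between the fast and slow channels is an illustrative assumption, not a fitted value.

```python
import numpy as np

# Reported rate constants for 1-hexadecanethiol on hydrogenated Ge(111).
K_A_FAST, K_D_FAST = 2.4, 8.2e-6     # cm^3/(mol s), 1/s
K_A_SLOW, K_D_SLOW = 0.8, 3.0e-6

def langmuir_coverage(t, conc, k_a, k_d, theta0=0.0):
    """Fractional coverage under first-order Langmuir kinetics.

    d(theta)/dt = k_a*C*(1 - theta) - k_d*theta integrates to an
    exponential approach toward theta_eq at rate k_a*C + k_d.
    """
    rate = k_a * conc + k_d
    theta_eq = k_a * conc / rate
    return theta_eq + (theta0 - theta_eq) * np.exp(-rate * t)

# 1 M solution expressed in mol/cm^3; the 0.8/0.2 weighting of the fast
# and slow adsorption steps is an assumption for illustration only.
t = np.linspace(0.0, 7200.0, 400)
conc = 1.0e-3
theta = (0.8 * langmuir_coverage(t, conc, K_A_FAST, K_D_FAST)
         + 0.2 * langmuir_coverage(t, conc, K_A_SLOW, K_D_SLOW))
```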
The present study is a numerical investigation of the propagation of electromagnetic transients in dispersive media. It considers propagation in water using Debye and composite Rocard-Powles-Lorentz models for the complex permittivity. The study addresses this question: For practical transmitted spectra, does precursor propagation provide any features that can be used to advantage over conventional signal propagation in models of dispersive media of interest? A companion experimental study is currently in progress that will attempt to measure the effects studied here.
Time-of-flight secondary ion mass spectrometry (TOF-SIMS), by its parallel nature, generates complex and very large datasets quickly and easily. An example of such a large dataset is a spectral image where a complete spectrum is collected for each pixel. Unfortunately, the large size of the data matrix involved makes it difficult to extract the chemical information from the data using traditional techniques. Because time constraints prevent an analysis of every peak, prior knowledge is used to select the most probable and significant peaks for evaluation. However, this approach may lead to a misinterpretation of the system under analysis. Ideally, the complete spectral image would be used to provide a comprehensive, unbiased materials characterization based on full spectral signatures. Automated eXpert spectral image analysis (AXSIA) software developed at Sandia National Laboratories implements a multivariate curve resolution technique that was originally developed for energy dispersive X-ray spectroscopy (EDS) [Microsci. Microanal. 9 (2003) 1]. This paper will demonstrate the application of the method to TOF-SIMS. AXSIA distills complex and very large spectral image datasets into a limited number of physically realizable and easily interpretable chemical components, including both spectra and concentrations. The number of components derived during the analysis represents the minimum number of components needed to completely describe the chemical information in the original dataset. Since full spectral signatures are used to determine each component, an enhanced signal-to-noise ratio is realized. The efficient statistical aggregation of chemical information enables small and unexpected features to be automatically found without user intervention.
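As a minimal sketch of the kind of multivariate curve resolution AXSIA performs (the actual AXSIA algorithm is more sophisticated and highly optimized), the following alternating-least-squares loop factors a spectral image into non-negative concentration and spectral components; the function name, component count, and iteration budget are illustrative.

```python
import numpy as np

def mcr_als(data, n_components, n_iter=200, seed=0):
    """Toy multivariate curve resolution by alternating least squares.

    data: (n_pixels, n_channels) array, one full spectrum per pixel.
    Returns concentrations C (n_pixels, k) and spectra S (k, n_channels)
    with data ~= C @ S, both clipped to be non-negative so that the
    components stay physically realizable (no negative peaks).
    """
    rng = np.random.default_rng(seed)
    C = rng.random((data.shape[0], n_components))
    for _ in range(n_iter):
        # Least-squares update for spectra, then for concentrations,
        # each clipped to enforce non-negativity.
        S = np.clip(np.linalg.lstsq(C, data, rcond=None)[0], 0.0, None)
        C = np.clip(np.linalg.lstsq(S.T, data.T, rcond=None)[0].T, 0.0, None)
    return C, S
```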
The spreading of polymer droplets is studied using molecular dynamics simulations. To study the dynamics of both the precursor foot and the bulk droplet, large hemispherical drops of 200 000 monomers are simulated using a bead-spring model for polymers of chain length 10, 20, and 40 monomers per chain. We compare spreading on flat and atomistic surfaces, chain length effects, and different applications of the Langevin and dissipative particle dynamics thermostats. We find diffusive behavior for the precursor foot and good agreement with the molecular kinetic model of droplet spreading using both flat and atomistic surfaces. Despite the large system size and long simulation time relative to previous simulations, we find that even larger systems are required to observe hydrodynamic behavior in the hemispherical spreading droplet.
The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.
Protein microtubules (MTs) 25 nm in diameter and tens of micrometers long have been used as templates for the biomimetic mineralization of FeOOH. Exposure of MTs to anaerobic aqueous solutions of Fe²⁺ buffered to neutral pH followed by aerial oxidation leads to the formation of iron oxide coated MTs. The iron oxide layer was found to grow via a two-step process: initially formed 10-30 nm thick coatings were amorphous in structure and composed of several iron-containing species. Further growth resulted in MTs coated with highly crystalline layers of lepidocrocite with a controllable thickness of up to 125 nm. On the micrometer size scale, these coated MTs were observed to form large, irregular bundles containing hundreds of individually coated MTs. Iron oxide grew selectively on the MT surface, a result of the highly charged MT surface that provided an interface favorable for iron oxide nucleation. This result illustrates that MTs can be used as scaffolds for the in-situ production of high-aspect-ratio inorganic nanowires.
The paper presents a theoretical study of synchronization between two coupled lasers. A theory valid for arbitrary coupling between lasers is used. Its key feature is that the laser field is decomposed in terms of the composite-cavity modes reflecting the spatial field dependence over the entire coupled-laser system. The ensuing multimode equations are reduced to class-B, and further to class-A, equations which resemble competing-species equations. Bifurcation analysis, supported by insight provided by analytical solutions, is used to investigate the influences of pump, carrier decay rate, polarization decay rate, and coupling mirror losses on synchronization between lasers. Population pulsation is found to be an essential mode competition mechanism responsible for bistability in the synchronized solutions. Finally, we find that the mechanism leading to laser synchronization changes from strong composite-cavity mode competition in the class-A regime to frequency locking of composite-cavity modes in the class-B regime.
Dynamic compressive properties of an epoxy syntactic foam at various strain rates under lateral confinement have been investigated with a pulse-shaped split Hopkinson pressure bar (SHPB). The quasi-static responses were obtained with an MTS 810 materials test system. The quasi-static and dynamic stress-strain behavior of the foam under confinement exhibited an elastic-plastic-like response, whereas an elastic-brittle behavior was observed under uniaxial stress loading conditions. The modulus of elasticity and yield strength, which had higher values than those in the uniaxial stress case, were both sensitive to strain rate. However, the strain-hardening behavior under confinement was not strain-rate sensitive. A phenomenological elastic-plastic type of material model was employed to describe the strain-rate-dependent compressive properties of the syntactic foam under confinement; the model agreed well with experimental results.
This research addresses the effects of temperature, including the adiabatic temperature rise in the specimen during dynamic compression and the environmental temperature, on the dynamic compressive properties of an epoxy syntactic foam. The adiabatic temperature rise in the specimen during dynamic compression is found to be so small that its effects may be neglected. However, environmental temperature has significant effects on dynamic compressive behavior. With decreasing temperature, the foam initially hardens but then softens below a transitional temperature; these behaviors are dominated by thermal-softening and damage-softening mechanisms, respectively. A phenomenological material model accounting for both temperature and strain-rate effects has been developed, which describes well the compressive and failure behaviors at various strain rates and environmental temperatures.
We report for the first time a one-step, templateless method to directly prepare large arrays of oriented TiO₂-based nanotubes and continuous films. These titania nanostructures can also be easily prepared as conformal coatings on a substrate. The nanostructured films were formed on a Ti substrate seeded with TiO₂ nanoparticles. SEM and TEM results suggested that a folding mechanism of sheetlike structures was involved in the formation of the nanotubes. The oriented arrays of TiO₂ nanotubes, continuous films, and coatings are expected to have potential for applications in catalysis, filtration, sensing, photovoltaic cells, and high-surface-area electrodes.
Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca₁₀(PO₄)₆(OH)₂, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal backfill material, including low water solubility (K_sp < 10⁻⁴⁰), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 C, 700 C, 900 C and 1100 C. The analysis indicated the synthetic hydroxyapatite was similar in morphology to the cattle hydroxyapatite prepared at 500 C. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material in sorbing uranium. Sorption of U was strong regardless of apatite type, indicating that all of the apatite materials evaluated would be effective sorbents. Sixty-day desorption experiments indicated desorption of uranium from each hydroxyapatite was negligible.
High-power 18650 Li-ion cells have been developed for hybrid electric vehicle applications as part of the DOE Advanced Technology Development (ATD) program. The thermal abuse response of two advanced chemistries (Gen1 and Gen2) was measured and compared with that of commercial Sony 18650 cells. Gen1 cells consisted of an MCMB graphite-based anode and a LiNi0.85Co0.15O2 cathode material, while the Gen2 cells consisted of a MAG10 graphite anode and a LiNi0.80Co0.15Al0.05O2 cathode. Accelerating rate calorimetry (ARC) and differential scanning calorimetry (DSC) were used to measure the thermal response and properties of the cells and cell materials up to 400 C. The MCMB graphite was found to increase the thermal stability of the cells due to more effective solid electrolyte interface (SEI) formation. The Al-stabilized cathodes were seen to have higher peak reaction temperatures, which also gave improved cell thermal response. The effects of accelerated aging on cell properties were also determined. Aging resulted in improved cell thermal stability, with the anodes showing a rapid reduction in exothermic reactions while the cathodes showed reduced reactions only after more extended aging.
⁹⁰Sr contamination is a major problem at several U.S. sites. At some sites, ⁹⁰Sr has migrated deep underground, making site remediation difficult. In this paper, we describe a novel method for precipitation of hydroxyapatite, a strong sorbent for ⁹⁰Sr, in soil. The method is based on mixing a solution of calcium citrate and sodium phosphate into soil. As the indigenous soil microorganisms mineralize the citrate, the calcium is released and forms hydroxyapatite. Soil taken from the Albuquerque desert was treated with a sodium phosphate solution or a sodium phosphate/calcium citrate solution. TEM and EDS were used to identify hydroxyapatite with CO₃²⁻ substitutions, with a formula of (Ca4.8Na0.2)[(PO4)2.8(CO3)0.2](OH), in the soil treated with the sodium phosphate/calcium citrate solution. Untreated and treated soils were used in batch sorption experiments for Sr uptake. Average Sr uptake was 19.5, 77.0, and 94.7% for the untreated soil, the soil treated with sodium phosphate, and the soil with apatite, respectively. In desorption experiments, the untreated soil, phosphate-treated soil, and apatite-treated soil released an average of 34.2, 28.8, and 4.8%, respectively. The results indicate the potential of forming apatite in soil using soluble reagents for retardation of radionuclide migration.
A fundamental challenge for engineering communication systems is the problem of transmitting information from the source to the receiver over a noisy channel. This same problem exists in a biological system. How can information required for the proper functioning of a cell, an organism, or a species be transmitted in an error-introducing environment? Source codes (compression codes) and channel codes (error-correcting codes) address this problem in engineering communication systems. The ability to extend these information theory concepts to study information transmission in biological systems can contribute to the general understanding of biological communication mechanisms and extend the field of coding theory into the biological domain. In this work, we review and compare existing coding theoretic methods for modeling genetic systems. We introduce a new error-correcting code framework for understanding translation initiation at the cellular level and present research results for Escherichia coli K-12. By studying translation initiation, we hope to gain insight into potential error-correcting aspects of genomic sequences and systems.
Visualization of scientific frontiers is a relatively new field, yet it has a long history and many predecessors. The application of science to science itself has been undertaken for decades with notable early contributions by Derek Price, Thomas Kuhn, Diana Crane, Eugene Garfield, and many others. What is new is the field of information visualization and application of its techniques to help us understand the process of science in the making. In his new book, Chaomei Chen takes us on a journey through this history, touching on predecessors, and then leading us firmly into the new world of Mapping Scientific Frontiers. Building on the foundation of his earlier book, Information Visualization and Virtual Environments, Chen's new offering is much less a tutorial in how to do information visualization, and much more a conceptual exploration of why and how the visualization of science can change the way we do science, amplified by real examples. Chen's stated intents for the book are: (1) to focus on principles of visual thinking that enable the identification of scientific frontiers; (2) to introduce a way to systematize the identification of scientific frontiers (or paradigms) through visualization techniques; and (3) to stimulate interdisciplinary research between information visualization and information science researchers. On all these counts, he succeeds. Chen's book can be broken into two parts which focus on the first two purposes stated above. The first, consisting of the initial four chapters, covers history and predecessors. Kuhn's theory of normal science punctuated by periods of revolution, now commonly known as paradigm shifts, motivates the work. Relevant predecessors outside the traditional field of information science such as cartography (both terrestrial and celestial), mapping the mind, and principles of visual association and communication, are given ample coverage. Chen also describes enabling techniques known to information scientists, such as multi-dimensional scaling, advanced dimensional reduction, social network analysis, Pathfinder network scaling, and landscape visualizations. No algorithms are given here; rather, these techniques are described from the point of view of enabling 'visual thinking'. The Generalized Similarity Analysis (GSA) technique used by Chen in his recent published papers is also introduced here. Information and computer science professionals would be wise not to skip through these early chapters. Although principles of gestalt psychology, cartography, thematic maps, and association techniques may be outside their technology comfort zone, or interest, these predecessors lay a groundwork for the 'visual thinking' that is required to create effective visualizations. Indeed, the great challenge in information visualization is to transform the abstract and intangible into something visible, concrete, and meaningful to the user. The second part of the book, covering the final three chapters, extends the mapping metaphor into the realm of scientific discovery through the structuring of literatures in a way that enables us to see scientific frontiers or paradigms. Case studies are used extensively to show the logical progression that has been made in recent years to get us to this point. 
Homage is paid to giants of the last 20 years, including Michel Callon for co-word mapping, Henry Small for document co-citation analysis and specialty narratives (charting a path linking the different sciences), and Kate McCain for author co-citation analysis, whose work has led to the current state of the art. The last two chapters finally answer the question: 'What does a scientific paradigm look like?' The visual answer given is specific to the GSA technique used by Chen, but does satisfy the intent of the book: to introduce a way to visually identify scientific frontiers. A variety of case studies, mostly from Chen's previously published work (supermassive black holes, cross-domain applications of Pathfinder networks, mass extinction debates, the impact of Don Swanson's work, and mad cow disease and vCJD in humans), succeed in explaining how visualization can be used to show the development of, competition between, and eventual acceptance (or replacement) of scientific paradigms. Although not addressed specifically, Chen's work nonetheless makes the persuasive argument that visual maps alone are not sufficient to explain 'the making of science' to a non-expert in a particular field. Rather, expert knowledge is still required to interpret these maps and to explain the paradigms. This combination of visual maps and expert knowledge, used jointly to good effect in the book, becomes a potent means for explaining progress in science to the expert and non-expert alike. Efforts to extend the GSA technique to explore latent domain knowledge (important work that falls below the citation thresholds typically used in GSA) are also covered here.
The essential oil of white sage, Salvia apiana, was obtained by steam distillation and analysed by GC-MS. A total of 13 components were identified, accounting for >99.9% of the oil. The primary component was 1,8-cineole, accounting for 71.6% of the oil.
An IVA (inductive voltage adder) research programme at AWE began with the construction of a small-scale IVA test bed named LINX and progressed to building PIM (Prototype IVA Module). The work on PIM is geared towards furnishing AWE with a range of machines operating at 1 to 4 MV that may eventually supersede, with an upgrade in performance, existing machines operating in that voltage range. PIM has a 10-ohm water-dielectric Blumlein charged by a Marx generator. This has been used to drive either one or two 1.5 MV inductive cavities, and fitting a third cavity may be attempted in the future. The latest two-cavity configuration, shown here, requires a split oil coax to connect the two cavities in parallel. It also has a laser triggering system for initiating the Blumlein and a prepulse reduction system fitted to the output of the Blumlein. A short MITL (magnetically insulated transmission line) connects the cavities, via a vacuum pumping section, to a chamber containing an e-beam diode test load.
Surfactant-templated silica thin films are potentially important materials for applications such as chemical sensing. However, a serious limitation for their use in aqueous environments is their poor hydrolytic stability. One convenient method of increasing the resistance of mesoporous silica to water degradation is the addition of alumina, either doped into the pore walls during material synthesis or grafted onto the pore surface of preformed mesophases. Here, we compare these two routes to Al-modified mesoporous silica with respect to their effectiveness in decreasing the solubility of thin mesoporous silicate films. Direct synthesis of templated silica films prepared with Al/Si = 1:50 was found to limit film degradation, as measured by changes in film thickness, to less than 15% at near-neutral pH over a 1-week period. In addition to suppressing film dissolution, addition of Al can also cause structural changes in silica films templated with the nonionic surfactant Brij 56 (C₁₆H₃₃(OCH₂CH₂)ₙOH, n ≈ 10), including mesophase transformation, a decrease in accessible porosity, and an increase in structural disorder. The solubility behavior of the films is also sensitive to their particular mesophase, with 3D phases (cubic, disordered) possessing less internal stability but more thickness stability than 2D phases (hexagonal), as determined with ellipsometric measurements. Finally, grafting of Al species onto the surface of surfactant-templated silica films also significantly increases aqueous stability, although to a lesser extent than the direct synthesis route.
We demonstrate a voltage tunable two-color quantum-well infrared photodetector (QWIP) that consists of multiple periods of two distinct AlGaAs/GaAs superlattices separated by AlGaAs blocking barriers on one side and heavily doped GaAs layers on the other side. The detection peak switches from 9.5 μm under large positive bias to 6 μm under negative bias. The background-limited temperature is 55 K for 9.5 μm detection and 80 K for 6 μm detection. We also demonstrate that the corrugated-QWIP geometry is suitable for coupling normally incident light into the detector.
We demonstrate the presence of a resonant interaction between a pair of coupled quantum wires, which are formed in the ultrahigh mobility two-dimensional electron gas of a GaAs/AlGaAs quantum well. The coupled-wire system is realized by an extension of the split-gate technique, in which bias voltages are applied to Schottky gates on the semiconductor surface, to vary the width of the two quantum wires, as well as the strength of the coupling between them. The key observation of interest here is one in which the gate voltages used to define one of the wires are first fixed, after which the conductance of this wire is measured as the gate voltage used to form the other wire is swept. Over the range of gate voltage where the swept wire pinches off, we observe a resonant peak in the conductance of the fixed wire that is correlated precisely to this pinchoff condition. In this paper, we present new results on the current- and temperature-dependence of this conductance resonance, which we suggest is related to the formation of a local moment in the swept wire as its conductance is reduced below 2e²/h.
Analytical instrumentation such as time-of-flight secondary ion mass spectrometry (ToF-SIMS) provides a tremendous quantity of data, since an entire mass spectrum is saved at each pixel in an ion image. The analyst often selects only a few species for detailed analysis; the majority of the data are not utilized. Researchers at Sandia National Laboratories (SNL) have developed a powerful multivariate statistical analysis (MVSA) toolkit named AXSIA (Automated eXpert Spectrum Image Analysis) that looks for trends in complete datasets (e.g., analyzes the entire mass spectrum at each pixel). A unique feature of the AXSIA toolkit is the generation of intuitive results (e.g., negative peaks are not allowed in the spectral response). The robust statistical process is able to unambiguously identify all of the spectral features uniquely associated with each distinct component throughout the dataset. General Electric and Sandia used AXSIA to analyze raw data files generated on an Ion Tof IV ToF-SIMS instrument. Here, we will show that the MVSA toolkit identified metallic contaminants within a defect in a polymer sample. These metallic contaminants were not identifiable using standard data analysis protocols.
The maximum contact map overlap (MAX-CMO) between a pair of protein structures can be used as a measure of protein similarity. It is a purely topological measure and does not depend on the sequences of the pair involved in the comparison. More importantly, MAX-CMO presents a very favorable mathematical structure which allows the formulation of integer, linear, and Lagrangian models that can be used to obtain guarantees of optimality. It is not the intention of this paper to discuss the mathematical properties of MAX-CMO in detail, as these have been dealt with elsewhere. In this paper we compare three algorithms that can be used to obtain maximum contact map overlaps between protein structures. We will point to the weaknesses and strengths of each one. It is our hope that this paper will encourage researchers to develop new and improved methods for protein comparison based on MAX-CMO.
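For readers unfamiliar with the objective, the hedged sketch below computes a contact map and counts the overlap achieved by a given residue alignment; the 8 Å contact cutoff is a common convention assumed here, and finding the alignment that maximizes this count is the hard optimization problem addressed by the integer, linear, and Lagrangian models mentioned above.

```python
def contact_map(coords, cutoff=8.0):
    """Set of residue pairs whose C-alpha atoms lie within the cutoff."""
    contacts = set()
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):   # skip trivial neighbors
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if d2 <= cutoff ** 2:
                contacts.add((i, j))
    return contacts

def overlap(contacts_a, contacts_b, alignment):
    """Count contacts of protein A preserved in B under an alignment.

    alignment maps residue indices of protein A to protein B; MAX-CMO
    seeks the order-preserving alignment that maximizes this count.
    """
    count = 0
    for (i, j) in contacts_a:
        if i in alignment and j in alignment:
            pair = tuple(sorted((alignment[i], alignment[j])))
            if pair in contacts_b:
                count += 1
    return count
```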
We consider the convergence properties of a non-elitist self-adaptive evolution strategy (ES) on multi-dimensional problems. In particular, we apply our recent convergence theory for a discretized (1,λ)-ES to design a related (1,λ)-ES that converges on a class of separable, unimodal multi-dimensional problems. The distinguishing feature of self-adaptive evolutionary algorithms (EAs) is that the control parameters (like mutation step lengths) are evolved by the evolutionary algorithm; thus the control parameters are adapted in an implicit manner that relies on the evolutionary dynamics to ensure that more effective control parameters are propagated during the search. Self-adaptation is a central feature of EAs like evolution strategies (ES) and evolutionary programming (EP), which are applied to continuous design spaces. Rudolph summarizes theoretical results concerning self-adaptive EAs and notes that the theoretical underpinnings for these methods are essentially unexplored. In particular, convergence theories that ensure convergence to a limit point on continuous spaces have only been developed by Rudolph; Hart, DeLaurentis and Ferguson; and Auger et al. In this paper, we illustrate how our analysis of a (1,λ)-ES for one-dimensional unimodal functions can be used to ensure convergence of a related ES on multidimensional functions. This (1,λ)-ES randomly selects a search dimension in each iteration, along which new points are generated. For a general class of separable functions, our analysis shows that the ES searches along each dimension independently, and thus this ES converges to the (global) minimum.
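The sketch below shows one way such an ES could look in code: a non-elitist (1,λ) scheme that picks a single search dimension per iteration and lets each offspring mutate its own step length. The log-normal step-size mutation and the specific constants are common ES conventions assumed for illustration, not the discretized algorithm analyzed in the paper.

```python
import numpy as np

def one_lambda_es(f, x0, sigma0=1.0, lam=10, n_iter=2000, seed=0):
    """Sketch of a non-elitist self-adaptive (1,lambda)-ES."""
    rng = np.random.default_rng(seed)
    x, sigma = np.array(x0, dtype=float), sigma0
    for _ in range(n_iter):
        d = rng.integers(x.size)                    # random search dimension
        best_val, best_x, best_sigma = np.inf, x, sigma
        for _ in range(lam):
            s = sigma * np.exp(0.3 * rng.normal())  # self-adapted step size
            y = x.copy()
            y[d] += s * rng.normal()
            v = f(y)
            if v < best_val:
                best_val, best_x, best_sigma = v, y, s
        x, sigma = best_x, best_sigma               # comma (non-elitist) selection
    return x

# Separable, unimodal test function of the kind treated in the paper.
x_min = one_lambda_es(lambda z: np.sum(np.abs(z)), x0=[5.0, -3.0, 2.0])
```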
We have investigated InAs quantum dots (QDs) formed on GaAs(100) using metal-organic chemical vapor deposition. Through a combination of room-temperature photoluminescence and atomic force microscopy we have characterized the quantum dots. We have determined the effect of growth rate, deposited thickness, hydride partial pressure, and temperature on QD energy levels. The window of thickness for QD formation is very small, about 3 Å of InAs. By decreasing the growth rate used to deposit InAs, the ground-state transition of the QDs is shifted to lower energies. The formation of optically active InAs QDs is very sensitive to temperature: growth temperatures above 500 C do not produce optically active QDs. The thickness window for QD formation increases slightly at 480 C. This is attributed to the thermal dependence of diffusion length. The AsH₃ partial pressure has a non-linear effect on the QD ground-state energy.
This paper analyzes the collected charge in heavy-ion-irradiated MOS structures. The charge generated in the substrate induces a displacement effect which strongly depends on the capacitor structure. Networks of capacitors are particularly sensitive to charge-sharing effects. This has important implications for the reliability of SOI devices and DRAMs, which use isolation oxides as a key elementary structure. The buried oxide of present-day and future SOI technologies is thick enough to avoid significant collection from displacement effects. On the other hand, the retention capacitors of trench DRAMs are particularly sensitive to charge release in the substrate. Charge collection on retention capacitors contributes to the multiple-bit upset (MBU) sensitivity of DRAMs.
We report operation of a terahertz quantum-cascade laser at 3.8 THz (λ ≈ 79 μm) up to a heat-sink temperature of 137 K. A resonant phonon depopulation design was used with a low-loss metal-metal waveguide, which provided a confinement factor of nearly unity. A threshold current density of 625 A/cm² was obtained in pulsed mode at 5 K. Devices fabricated using a conventional semi-insulating surface-plasmon waveguide lased up to 92 K with a threshold current density of 670 A/cm² at 5 K.
This paper presents the first 3-D simulation of heavy-ion-induced charge collection in a SiGe HBT, together with microbeam testing data. The charge collected by the terminals is a strong function of the ion strike position. The sensitive area of charge collection for each terminal is identified based on analysis of the device structure and simulation results. For a normal strike between the deep trench edges, most of the electrons and holes are collected by the collector and substrate terminals, respectively. For an ion strike between the shallow trench edges surrounding the emitter, the base collects an appreciable amount of charge, while the emitter collects a negligible amount. Good agreement is achieved between the experimental and simulated data. Problems encountered with mesh generation and charge collection simulation are also discussed.
Seismic event location is made challenging by the difficulty of describing event location uncertainty in multiple dimensions, by the non-linearity of the Earth models used as input to the location algorithm, and by the presence of local minima that can prevent a location code from finding the global minimum. Techniques to deal with these issues are described here. Since some of these techniques are computationally expensive or require more analysis by human analysts, users need a flexible location code that allows them to select from a variety of solutions spanning a range of computational efficiency and simplicity of interpretation. A new location code, LocOO, has been developed to deal with these issues. A seismic event location comprises a point in 4-dimensional (4D) space-time surrounded by a 4D uncertainty boundary; the point location is useless without the uncertainty that accompanies it. While it is mathematically straightforward to reduce the dimensionality of the 4D uncertainty limits, the number of dimensions that should be retained depends on the dimensionality of the location to which the calculated event location is to be compared. In nuclear explosion monitoring, when an event is to be compared to a known or suspected test site location, the three spatial components of the test site and event location are compared and 3-dimensional uncertainty boundaries should be considered. With LocOO, users can specify a location to which the calculated seismic event location is to be compared, and the dimensionality of the uncertainty is tailored to that of the location specified by the user. The code also calculates the probability that the two locations in fact coincide. The non-linear travel time curves that constrain calculated event locations present two basic difficulties. The first is that the non-linearity can cause least squares inversion techniques to fail to converge. LocOO implements a nonlinear Levenberg-Marquardt least squares inversion technique that is guaranteed to converge in a finite number of iterations for tractable problems. The second difficulty is that a high degree of non-linearity causes the uncertainty boundaries around the event location to deviate significantly from elliptical shapes. LocOO can optionally calculate and display non-elliptical uncertainty boundaries at the cost of a minimal increase in computation time and complexity of interpretation. All location codes are plagued by the possibility of local minima obscuring the single global minimum, and no code can guarantee that it will find the global minimum in a finite number of computations. Grid search algorithms have been developed to deal with this problem, but have a high computational cost. In order to improve the likelihood of finding the global minimum in a timely manner, LocOO implements a hybrid least squares-grid search algorithm: many least squares solutions are computed starting from a user-specified number of initial locations, and the solution with the smallest sum of squared weighted residuals is taken as the optimal location. For events of particular interest, analysts can display contour plots of gridded residuals in a selected region around the best-fit location, improving the probability that the global minimum will not be missed and also providing much greater insight into the character and quality of the calculated solution.
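A toy stand-in for the hybrid least-squares/grid-search idea is sketched below: several least-squares inversions are started from different trial locations and the solution with the smallest misfit is kept. A constant crustal velocity replaces real travel-time curves, and the 2D epicenter-plus-origin-time parameterization is an illustrative simplification of LocOO's 4D problem.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_event(stations, arrival_times, starts, velocity=6.0):
    """Multi-start least-squares location of (x, y, origin time).

    stations: (n, 2) array of station coordinates in km.
    arrival_times: observed first-arrival times in s.
    starts: iterable of (x, y) trial epicenters.
    """
    def residuals(m):
        x, y, t0 = m
        dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        return t0 + dist / velocity - arrival_times

    best = None
    for x0, y0 in starts:
        # Levenberg-Marquardt inversion from this starting point.
        sol = least_squares(residuals, [x0, y0, 0.0], method="lm")
        if best is None or sol.cost < best.cost:
            best = sol     # keep the smallest sum of squared residuals
    return best.x
```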
To improve the nuclear event monitoring capability of the U.S., the NNSA Ground-based Nuclear Explosion Monitoring Research & Engineering (GNEM R&E) program has been developing a collection of products known as the Knowledge Base (KB). Though much of the focus for the KB has been on the development of calibration data, we have also developed numerous software tools for various purposes. The Matlab-based MatSeis package and the associated suite of regional seismic analysis tools were developed to aid in the testing and evaluation of some Knowledge Base products for which existing applications were either not available or ill-suited. This presentation will provide brief overviews of MatSeis and each of the tools, emphasizing features added in the last year. MatSeis was begun in 1996 and is now a fairly mature product. It is a highly flexible seismic analysis package that provides interfaces to read data from either flatfiles or an Oracle database. All of the standard seismic analysis tasks are supported (e.g., filtering, 3-component rotation, phase picking, event location, magnitude calculation), as well as a variety of array processing algorithms (beaming, FK, coherency analysis, vespagrams). The simplicity of Matlab coding and the tremendous number of available functions make MatSeis/Matlab an ideal environment for developing new monitoring research tools (see the regional seismic analysis tools below). New MatSeis features include: addition of evid information to events in MatSeis, options to screen picks by author, input and output of origerr information, improved performance in reading flatfiles, improved speed in FK calculations, and significant improvements to Measure Tool (filtering, multiple phase display), Free Plot (filtering, phase display and alignment), Mag Tool (maximum likelihood options), and Infra Tool (improved calculation speed, display of an F-statistic stream). Work on the regional seismic analysis tools (CodaMag, EventID, PhaseMatch, and Dendro) began in 1999, and the tools vary in their level of maturity. All rely on MatSeis to provide necessary data (waveforms, arrivals, origins, and travel time curves). CodaMag Tool implements magnitude calculation by scaling to fit the envelope shape of the coda for a selected phase type (Mayeda, 1993; Mayeda and Walter, 1996). New tool features include: calculation of a yield estimate based on the source spectrum, display of a filtered version of the seismogram based on the selected band, and the output of codamag data records for processed events. EventID Tool implements event discrimination using phase ratios of regional arrivals (Hartse et al., 1997; Walter et al., 1999). New features include: bandpass filtering of displayed waveforms, screening of reference events based on SNR, multivariate discriminants, use of libcgi to access correction surfaces, and the output of discrim_data records for processed events. PhaseMatch Tool implements match filtering to isolate surface waves (Herrin and Goforth, 1977). New features include: display of the signal's observed dispersion and an option to use a station-based dispersion surface. Dendro Tool implements agglomerative hierarchical clustering using dendrograms to identify similar events based on waveform correlation (Everitt, 1993). New features include: modifications to include arrival information within the tool, and the capability to automatically add/re-pick arrivals based on the picked arrivals for similar events.
Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state of the art in the analysis of meta-heuristics by providing answers to this research question. The authors focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. The authors begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. The authors conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search. The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which is taken to be the well-known disjunctive graph distance [MBK99].
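The two-phase structure described above reduces to a short generic loop; the sketch below is that skeleton, with the problem-specific pieces (the N1-based local search and the random perturbation) left as caller-supplied functions, and the equal-or-better acceptance rule an assumed simple choice rather than the one used for I-JAR.

```python
def iterated_local_search(initial, local_search, perturb, cost, n_iter=100):
    """Generic ILS skeleton: descend, perturb, descend again.

    local_search applies the small-step operator until a local optimum
    is reached; perturb applies the large-step operator to escape the
    attractor basin of the current local optimum.
    """
    best = current = local_search(initial)
    for _ in range(n_iter):
        candidate = local_search(perturb(current))
        if cost(candidate) <= cost(current):
            current = candidate          # accept equal-or-better moves
        if cost(current) < cost(best):
            best = current               # track the incumbent optimum
    return best
```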
Sintering is one of the oldest processes used by man to manufacture materials, dating as far back as 12,000 BC. While it is an ancient process, it is also necessary for many modern technologies such as multilayered ceramic packages, wireless communication devices, and many others. The process consists of thermally treating a powder or compact at a temperature below the melting point of the main constituent, for the purpose of increasing its strength by bonding the particles together. During sintering, the individual particles bond, the pore space between particles is eliminated, and the resulting component can shrink by as much as 30 to 50% by volume and distort its shape tremendously. Being able to control and predict the shrinkage and shape distortions during sintering has been the goal of much research in materials science, and it has been achieved to varying degrees by many. The objective of this project was to develop models that could simulate sintering at the mesoscale and at the macroscale to more accurately predict the overall shrinkage and shape distortions in engineering components. The mesoscale model simulates microstructural evolution during sintering by modeling grain growth, pore migration and coarsening, and vacancy formation, diffusion, and annihilation. In addition to studying microstructure, these simulations can be used to generate the constitutive equations describing shrinkage and deformation during sintering. These constitutive equations are used by continuum finite element simulations to predict the overall shrinkage and shape distortions of a sintering crystalline powder compact. Both models will be presented. Application of these models to the study of sintering will be demonstrated and discussed. Finally, the limitations of these models will be reviewed.
We describe stochastic agent-based simulations of protein-emulating agents that perform computation via dynamic self-assembly. The binding and actuation properties of the types of agents required to construct a RAM machine (equivalent to a Turing machine) are described. We present an example computation and describe the molecular biology, non-equilibrium statistical mechanics, and information science properties of this system.
Acid-base titration and metal sorption experiments were performed on both mesoporous alumina and alumina particles under various ionic strengths. It has been demonstrated that surface chemistry and ion sorption within nanopores can be significantly modified by nano-scale space confinement. As the pore size is reduced to a few nanometers, the difference between surface acidity constants (ΔpK = pK₂ − pK₁) decreases, giving rise to a higher surface charge density on a nanopore surface than on an unconfined solid-solution interface. The change in surface acidity constants results in a shift of ion sorption edges and enhances ion sorption on nanopore surfaces.
A three-dimensional photonic-crystal emitter for thermal photovoltaic power generation was studied. The photonic crystal, at 1535 K, exhibited a sharp emission at λ ∼ 1.5 μm and is promising for thermal photovoltaic (TPV) generation. It was shown that an optical-to-electric conversion efficiency of ∼34% and an electrical power density of ∼14 W/cm² are possible.
A Simple PolyUrethane Foam (SPUF) mass loss and response model has been developed to predict the behavior of unconfined, rigid, closed-cell, polyurethane foam-filled systems exposed to fire-like heat fluxes. The model, developed for the B61 and W80-0/1 fireset foam, is based on a simple two-step mass loss mechanism using distributed reaction rates. The initial reaction step assumes that the foam degrades into a primary gas and a reactive solid. The reactive solid subsequently degrades into a secondary gas. The SPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE [1] and CALORE [2], which support chemical kinetics and dynamic enclosure radiation using 'element death.' A discretization bias correction model was parameterized using elements with characteristic lengths ranging from 1 mm to 1 cm. Bias-corrected solutions using the SPUF response model with large elements gave essentially the same results as grid-independent solutions using 100-μm elements. The SPUF discretization bias correction model can be used with 2D regular quadrilateral elements, 2D paved quadrilateral elements, 2D triangular elements, 3D regular hexahedral elements, 3D paved hexahedral elements, and 3D tetrahedral elements. Several factors affecting the efficient recalculation of view factors were studied: the element aspect ratio, the element death criterion, and a 'zombie' criterion. Most of the solutions using irregular, large elements were in agreement with the 100-μm grid-independent solutions. The discretization bias correction model did not perform as well when the element aspect ratio exceeded 5:1 and the heated surface was on the shorter side of the element. For validation, SPUF predictions using various sizes and types of elements were compared to component-scale experiments of foam cylinders heated with lamps. The SPUF predictions of the decomposition front locations were compared to the front locations determined from real-time X-rays. SPUF predictions of the 19 radiant heat experiments were also compared to predictions from a more complex chemistry model (CPUF) made with 1-mm elements. The SPUF predictions of the front locations were closer to the measured front locations than the CPUF predictions, reflecting the more accurate SPUF prediction of mass loss. Furthermore, the computational time for the SPUF predictions was an order of magnitude less than for the CPUF predictions.
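In the spirit of the two-step mechanism (foam → primary gas + reactive solid → secondary gas), the sketch below integrates a single-rate-pair version with explicit Euler steps; the caller supplies Arrhenius-type rate functions, and the even split of degraded foam between gas and reactive solid is an assumption for illustration. The actual SPUF model uses distributed reaction rates fit to experimental data, which are not reproduced here.

```python
import numpy as np

def two_step_mass_loss(T_of_t, times, k1, k2, gas_fraction=0.5):
    """Condensed-phase mass history for a toy two-step mechanism.

    foam -> primary gas + reactive solid   (rate k1(T));
    reactive solid -> secondary gas        (rate k2(T)).
    gas_fraction is the assumed share of degraded foam leaving as gas.
    """
    foam, solid = 1.0, 0.0
    history = [foam + solid]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        T = T_of_t(times[i])
        d_foam = k1(T) * foam * dt               # foam consumed this step
        foam -= d_foam
        solid += (1.0 - gas_fraction) * d_foam   # remainder becomes solid
        solid -= k2(T) * solid * dt              # solid degrades to gas
        history.append(foam + solid)             # remaining condensed mass
    return np.array(history)
```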
Presented within this report are the results of a brief examination of optical tagging technologies funded by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. The work was performed during the summer months of 2002 with total funding of $65k. The intent of the project was to briefly examine a broad range of approaches to optical tagging, concentrating on the wavelength range between ultraviolet (UV) and the short-wavelength infrared (SWIR, λ < 2 μm). Tagging approaches considered include such things as simple combinations of reflective and absorptive materials closely spaced in wavelength to give a high contrast over a short range of wavelengths, rare-earth oxides in transparent binders to produce a narrow absorption line hyperspectral tag, and fluorescing materials such as phosphors, dyes, and chemically precipitated particles. One technical approach examined in slightly greater detail was the use of fluorescing nanoparticles of metals and semiconductor materials. The idea was to embed such nanoparticles in an oily film or transparent paint binder. When pumped with a SWIR laser such as that produced by laser diodes at λ = 1.54 μm, the particles would fluoresce at slightly longer wavelengths, thereby giving a unique signal. While it is believed that optical tags are important for military, intelligence and even law enforcement applications, as a business area, tags do not appear to represent a high return on investment. Other government agencies frequently shop for existing or mature tag technologies but rarely are interested enough to pay for development of an untried technical approach. It was hoped that through a relatively small investment of laboratory R&D funds, enough technologies could be identified that a potential customer's requirements could be met with a minimum of additional development work. Only time will tell if this proves to be correct.
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B61 and W80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO₂. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e., the vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fraction within an individual finite element decreased below a set criterion. Element removal, referred to as 'element death,' creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the experiments where the decomposition gases were vented sufficiently. The CPUF model results were not as good for the partially confined radiant heat experiments where the vent area was regulated to maintain pressure. Liquefaction and flow effects, which are not considered in the CPUF model, become important when the decomposition gases are confined.
Sandia National Laboratories has been encapsulating magnetic components for over 40 years. The reliability of magnetic component assemblies that must withstand a variety of environments and then function correctly depends on the use of appropriate encapsulating formulations. Specially developed formulations are critical and enable us to provide high-reliability magnetic components. This paper discusses epoxy, urethane, and silicone formulations for several of our magnetic components.
Niobium-doped PZT 95/5 (lead zirconate-lead titanate) is the material used in voltage bars for all ferroelectric neutron generator power supplies. In June of 1999, the transfer and scale-up of the Sandia Process from Department 1846 to Department 14192 was initiated. The laboratory-scale process of 1.6 kg has been successfully scaled to a production batch quantity of 10 kg. This report documents efforts to characterize and optimize the production-scale process utilizing Design of Experiments methodology. Of the 34 factors identified in the powder preparation sub-process, 11 were initially selected for the screening design. Additional experiments and safety analysis subsequently reduced the screening design to six factors. Three of the six factors (Milling Time, Media Size, and Pyrolysis Air Flow) were identified as statistically significant for one or more responses and were further investigated through a full factorial interaction design. Analysis of the interaction design resulted in models for Powder Bulk Density, Powder Tap Density, and +20 Mesh Fraction. Subsequent batches validated the models. The initial baseline powder preparation conditions were modified, resulting in improved powder yield by significantly reducing the +20 mesh waste fraction. Response variation analysis indicated that additional investigation of the powder preparation sub-process steps was necessary to identify and reduce the sources of variation and further optimize the process.
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the 'philosophy' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a 'model supplement term' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response.
The validation analysis indicates that the model tends to ''exaggerate'' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to support confident statements about modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning what was needed for this aspect of the analysis. The resulting predictions and corresponding uncertainty assessments demonstrate the flexibility of this approach.
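The ''model supplement term'' can be sketched concretely. The Python fragment below fits an additive, linear-in-temperature bias correction to experiment-minus-model residuals and attaches its residual standard error to a corrected prediction. All data are hypothetical placeholders; this illustrates the idea only and is not the response-modeling machinery of the report.

    # Estimate a linear ''model supplement term'' delta(T) = b0 + b1*T from
    # validation residuals; all numbers are hypothetical placeholders.
    import numpy as np

    T     = np.array([300.0, 350.0, 400.0, 450.0])  # temperature (C)
    y_exp = np.array([0.62, 0.49, 0.35, 0.24])      # experimental response
    y_mod = np.array([0.66, 0.51, 0.33, 0.19])      # model exaggerates the T effect

    resid = y_exp - y_mod
    X = np.column_stack([np.ones_like(T), T])
    b, ss, *_ = np.linalg.lstsq(X, resid, rcond=None)
    s = np.sqrt(ss[0] / (len(T) - 2))  # residual standard error, 2 parameters

    # Bias-corrected prediction at a new temperature with a crude 1-sigma band.
    T_new = 425.0
    y_new = np.interp(T_new, T, y_mod) + b[0] + b[1] * T_new
    print(f"corrected prediction at {T_new} C: {y_new:.3f} +/- {s:.3f}")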
This User Guide for the RADTRAN 5 computer code for transportation risk analysis describes basic risk concepts and provides the user with step-by-step directions for creating input files by means of either the RADDOG input file generator software or a text editor. It also contains information on how to interpret RADTRAN 5 output, how to obtain and use several types of important input data, and how to select appropriate analysis methods. Appendices include a glossary of terms, a listing of error messages, data-plotting information, images of RADDOG screens, and a table of all data in the internal radionuclide library.
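Among the basic risk concepts covered is the standard expected-consequence definition: risk is probability multiplied by consequence, summed over scenarios. The sketch below illustrates that arithmetic with hypothetical scenario probabilities and doses; it does not use RADTRAN 5 input formats or its internal models.

    # Expected-consequence risk: sum of probability x consequence over scenarios.
    # Scenario data are hypothetical, not RADTRAN 5 inputs or outputs.
    scenarios = [
        ("incident-free, rural segment", 1.0,  0.002),
        ("incident-free, urban segment", 1.0,  0.010),
        ("minor accident",               1e-4, 0.5),
        ("severe accident with release", 1e-7, 200.0),
    ]  # (name, probability per shipment, consequence in person-rem)

    for name, p, c in scenarios:
        print(f"{name:30s} risk = {p * c:.3e} person-rem")
    total = sum(p * c for _, p, c in scenarios)
    print(f"{'total':30s} risk = {total:.3e} person-rem")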
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to 'demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies.' The sensor is currently operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside the United States. The sensor achieves better than DTED Level IV position accuracy in near real time. The system is flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps of the San Diego area.
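The relation at the core of interferometric DEM generation, unwrapped phase to terrain height, can be sketched to first order as follows. All parameter values are illustrative assumptions, not the Rapid Terrain Visualization system parameters, and the factor p depends on the transmit/receive configuration of the interferometer.

    # First-order height-from-phase relation for a cross-track interferometer.
    # All parameters are illustrative assumptions, not the RTV system values.
    import numpy as np

    lam    = 0.03              # wavelength (m); assumed
    R      = 10_000.0          # slant range (m); assumed
    theta  = np.deg2rad(45.0)  # look angle; assumed
    B_perp = 1.0               # perpendicular baseline (m); assumed
    p      = 1                 # 1 for single-transmitter single-pass;
                               # 2 for ping-pong or repeat-pass geometries

    # Height of ambiguity: height change per 2*pi of interferometric phase.
    h_amb = lam * R * np.sin(theta) / (p * B_perp)

    phi = 1.2  # example unwrapped phase (rad) after flat-earth removal
    h = h_amb * phi / (2.0 * np.pi)
    print(f"height of ambiguity {h_amb:.1f} m; estimated height {h:.1f} m")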
Fast, quantitative analysis of cellular activity, signaling, and responses to external stimuli is a crucial capability and has been the goal of several projects focusing on patch clamp measurements. To provide the maximum functionality and measurement options, we have developed a patch clamp array device that incorporates on-chip electronics; mechanical, optical, and microfluidic coupling; and cell localization through fluid flow. The preliminary design, which integrated microfluidics, electrodes, and optical access, was fabricated and tested. New designs that further combine mechanical actuation, on-chip electronics, and various electrode materials with the previous designs are currently being fabricated.
Silane adhesion promoters are commonly used to improve the adhesion, durability, and corrosion resistance of polymer-oxide interfaces. The current study investigates a model interface consisting of the native oxide of (100) Si and an epoxy cured from diglycidyl ether of bisphenol A (DGEBA) and triethylenetetraamine (TETA). The thickness of the (3-glycidoxypropyl)trimethoxysilane (GPS) films placed between the two materials provided the structural variable. Five surface treatments were investigated: a bare interface, a rough monolayer film, a smooth monolayer film, a 5 nm thick film, and a 10 nm thick film. Previous neutron reflection experiments revealed large extension ratios (>2) when the 5 and 10 nm thick GPS films were exposed to deuterated nitrobenzene vapor. Despite the larger extension ratio of the 5 nm thick film, its epoxy/Si fracture energy (G{sub c}) was equal to that of the 10 nm thick film under ambient conditions. Even the smooth monolayer exhibited the same G{sub c}. Only when the monolayer included a significant number of agglomerates did the G{sub c} drop to levels closer to that of the bare interface. When the specimens were immersed in water at room temperature for 1 week, the threshold energy release rate (G{sub th}) was nearly equal to G{sub c} for the smooth monolayer, the 5 nm thick film, and the 10 nm thick film. While the G{sub th} for all three films decreased with increasing water temperature, the G{sub th} of the smooth monolayer decreased more rapidly. The bare interface was similarly sensitive to temperature; however, the G{sub th} of the rough monolayer did not change significantly as the temperature was raised. Despite the influence of pH on hydrolysis, the G{sub th} was insensitive to the pH of the water for all surface treatments.
Boron carbide displays a rich response to dynamic compression that is not well understood. To address open questions, including dynamic strength and the possibility of phase transformations, a series of plate impact experiments, including reshock and release configurations, was performed. Hugoniot data were obtained from the elastic limit (15-18 GPa) to 70 GPa and were found to agree reasonably well with the somewhat limited data in the literature. Using the Hugoniot data, as well as the reshock and release data, the possibility of one or more phase transitions was examined. There is tantalizing evidence, but at this time no phase transition can be conclusively demonstrated. The experimental data are, however, consistent with a phase transition at a shock stress of about 40 GPa, though the associated volume change would have to be small. The reshock and release experiments also provide estimates of the shear stress and strength in the shocked state as well as a dynamic mean-stress curve for the material. The material supports only a small shear stress in the shocked (Hugoniot) state, but it can support a much larger shear stress when loaded or unloaded from the shocked state. This strength in the shocked state is initially lower than the strength at the elastic limit but increases with pressure to about the same level. In addition, the dynamic mean-stress curve estimated from reshock and release differs significantly from the hydrostat constructed from low-pressure data. Finally, a spatially resolved interferometer was used to directly measure spatial variations in particle velocity during the shock event. These spatially resolved measurements are consistent with previous work and suggest a nonuniform failure mode in the material.
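Reducing measured shock and particle velocities to shocked-state stress and density follows the Rankine-Hugoniot jump conditions. The sketch below uses illustrative velocities, not the boron carbide data of this work, and a nominal handbook initial density.

    # Rankine-Hugoniot jump conditions: shocked-state stress, density, energy.
    # Velocities are illustrative; rho0 is a nominal value for boron carbide.
    rho0 = 2510.0   # initial density (kg/m^3)
    Us   = 12.0e3   # shock velocity (m/s); illustrative
    up   = 2.0e3    # particle velocity (m/s); illustrative

    sigma = rho0 * Us * up                          # longitudinal stress (Pa)
    rho   = rho0 * Us / (Us - up)                   # density behind the shock
    dE    = 0.5 * sigma * (1.0 / rho0 - 1.0 / rho)  # specific energy jump (J/kg)

    print(f"stress = {sigma / 1e9:.1f} GPa, "
          f"density = {rho:.0f} kg/m^3, dE = {dE / 1e3:.0f} kJ/kg")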
This paper describes an integrated experimental and computational framework for developing 3-D structural models of humic acids (HAs). The approach combines experimental characterization, computer-assisted structure elucidation (CASE), and atomistic simulations to generate all 3-D structural models, or a representative sample of such models, consistent with the analytical data and bulk thermodynamic/structural properties of HAs. To illustrate the methodology, structural data derived from elemental analysis, diffuse reflectance FT-IR spectroscopy, 1-D/2-D {sup 1}H and {sup 13}C solution NMR spectroscopy, and electrospray ionization quadrupole time-of-flight mass spectrometry (ESI QqTOF MS) are employed as input to the CASE program SIGNATURE to generate all 3-D structural models for Chelsea soil humic acid (HA). These models are subsequently used as starting 3-D structures for constant-temperature, constant-pressure (NPT) molecular dynamics simulations to estimate their bulk densities and Hildebrand solubility parameters. Surprisingly, only a few model isomers are found to exhibit molecular compositions and bulk thermodynamic properties consistent with the experimental data. The simulated {sup 13}C NMR spectrum of an equimolar mixture of these model isomers compares favorably with the measured spectrum of Chelsea soil HA.
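The bulk-property screening step can be sketched briefly: the Hildebrand solubility parameter follows from the cohesive energy density, {delta} = [(E{sub vac} - E{sub bulk})/V]{sup 1/2}. The energies and cell volume below are placeholders, not the simulated Chelsea soil HA values.

    # Hildebrand solubility parameter from cohesive energy density (NPT MD).
    # Energies and volume are placeholders, not the simulated HA values.
    import numpy as np

    N_A    = 6.02214076e23  # Avogadro's number (1/mol)
    E_vac  = -150.0e3       # potential energy, isolated molecule (J/mol); placeholder
    E_bulk = -650.0e3       # potential energy per molecule, bulk cell (J/mol); placeholder
    V_mol  = 1.0e-27        # mean NPT cell volume per molecule (m^3); placeholder

    ced   = (E_vac - E_bulk) / (N_A * V_mol)  # cohesive energy density (Pa)
    delta = np.sqrt(ced)                      # Hildebrand parameter (Pa^0.5)
    print(f"CED = {ced / 1e6:.0f} MPa, delta = {delta / 1e3:.1f} MPa^0.5")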
Inertial confinement fusion capsule implosions absorbing up to 35 kJ of x-rays from a {approx}220 eV dynamic hohlraum on the Z accelerator at Sandia National Laboratories have produced thermonuclear D-D neutron yields of (2.6 {+-} 1.3) x 10{sup 10}. Argon spectra confirm a hot fuel with T{sub e} {approx} 1 keV and n{sub e} {approx} (1-2) x 10{sup 23} cm{sup -3}. Higher-performance implosions will require improvements in radiation symmetry control. Capsule implosions in a {approx}70 eV double-Z-pinch-driven secondary hohlraum have been radiographed with 6.7 keV x-rays produced by the Z-beamlet laser (ZBL), demonstrating a drive symmetry of about 3% and control of P{sub 2} radiation asymmetries to {+-}2%. Hemispherical capsule implosions have also been radiographed on Z in preparation for future experiments in fast ignition physics. Z-pinch-driven inertial fusion energy concepts are being developed. The refurbished Z machine (ZR) will begin providing scaling information on capsules and Z-pinches in 2006. The addition of a short-pulse capability to ZBL will enable research into fast ignition physics with the combination of ZR and ZBL-petawatt. ZR could provide a test bed for studying NIF-relevant double-shell ignition concepts using dynamic hohlraums and advanced symmetry control techniques in the double-pinch hohlraum backlit by ZBL.
Two-dimensional nickel electrodeposition processes in LIGA microfabrication were modeled using the finite-element method and a fully coupled implicit solution scheme via Newton's method. Species concentrations, electrolyte potential, the flow field, and the positions of the moving deposition surfaces were computed by solving the species-mass, charge, and momentum conservation equations as well as pseudo-solid mesh-motion equations that employ an arbitrary Lagrangian-Eulerian (ALE) formulation. Coupling this ALE approach with repeated re-meshing and re-mapping makes it possible to track the entire transient deposition process, from the start of deposition until the trenches are filled, thus enabling computation of the local current densities that influence the microstructure and the functional/mechanical properties of the deposit.
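The quantity the moving-mesh computation tracks, the normal velocity of the deposition front, follows from Faraday's law, v = iM/(nF{rho}). A minimal sketch with nominal nickel properties and an illustrative current density:

    # Deposition-front velocity from Faraday's law: v = i*M / (n*F*rho).
    # Nominal nickel properties; current density and geometry are illustrative.
    M   = 58.69e-3  # molar mass of Ni (kg/mol)
    n   = 2         # electrons per Ni(2+) reduced
    F   = 96485.0   # Faraday constant (C/mol)
    rho = 8908.0    # density of Ni (kg/m^3)

    def front_velocity(i_local):
        """Front normal velocity (m/s) for local current density i (A/m^2)."""
        return i_local * M / (n * F * rho)

    v = front_velocity(200.0)    # 20 mA/cm^2; illustrative
    hours = 100e-6 / v / 3600.0  # time to fill a 100 um trench at uniform i
    print(f"v = {v * 1e9:.1f} nm/s; ~{hours:.1f} h to fill a 100 um trench")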
In shock wave reverberation experiments, water samples were quasi-isentropically compressed between silica and sapphire plates to peak pressures of 1-5 GPa on nanosecond time scales. Real-time optical transmission measurements were used to examine changes in the compressed samples. Although the ice VII phase is thermodynamically favored above 2 GPa, the liquid state was initially preserved, and subsequent freezing occurred over hundreds of nanoseconds, and only in the silica cells. Images detailing the formation and growth of the solid phase were obtained. These results provide unambiguous evidence of bulk water freezing on such short time scales.
Combined XRD/neutron Rietveld refinements were performed on PbZr{sub 0.30}Ti{sub 0.70}O{sub 3} powder samples doped with nominally 4% Ln (where Ln = Ce, Nd, Tb, Y, or Yb). The refined structural parameters indicated that the changes in lattice parameters and volume of the tetragonal perovskite unit cell were consistent with A- and/or B-site doping of the structure. Ce doping appears inconsistent given its rather large atomic radius, but is understood in terms of oxidation to the Ce{sup +4} state in the structure. The B-site displacement values for the Ti/Zr site indicate that amphoteric doping of the Ln cations in the structure gives rise to the superior properties of PLnZT materials.
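The cell metrics in question, tetragonality and unit-cell volume, follow directly from the refined lattice parameters (V = a{sup 2}c for the tetragonal cell). The lattice parameters below are merely illustrative of tetragonal PZT, not the refined values of this study:

    # Tetragonality and unit-cell volume from refined lattice parameters.
    # Illustrative values for tetragonal PZT, not this study's refinements.
    a, c = 3.99, 4.14  # lattice parameters (Angstrom); illustrative
    print(f"c/a = {c / a:.4f}, V = {a * a * c:.2f} Angstrom^3")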
Blastwalls are often assumed to be the answer for facility protection from malevolent explosive assault, particularly from large vehicle bombs (LVBs). The assumption is that a blastwall, if built strong enough to survive, will provide substantial protection to facilities and people on the side opposite the LVB. This paper demonstrates, through computer simulations and experimental data, the behavior of explosively induced air blasts as they interact with blastwalls. It is shown that air blasts can effectively wrap around and over blastwalls. Significant pressure reduction can be expected on the downstream side of the blastwall, but substantial pressure will continue to propagate. The effectiveness of a blastwall in reducing blast overpressure depends on the geometry of the blastwall and the location of the explosive relative to it.
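Two standard air-blast tools give scale to such an assessment: the Hopkinson-Cranz scaled distance, Z = R/W{sup 1/3}, and the Friedlander overpressure waveform. The sketch below uses a hypothetical LVB scenario with placeholder peak pressure and positive-phase duration; it does not reproduce the simulations or test data of this paper.

    # Hopkinson-Cranz scaled distance and a Friedlander overpressure waveform.
    # Charge mass, standoff, peak pressure, and duration are placeholders.
    import numpy as np

    def scaled_distance(R_m, W_kg):
        """Scaled distance Z = R / W^(1/3) in m/kg^(1/3)."""
        return R_m / W_kg ** (1.0 / 3.0)

    def friedlander(t, P_s, t_d, b=1.0):
        """Overpressure (Pa) at time t (s) after shock arrival."""
        return P_s * (1.0 - t / t_d) * np.exp(-b * t / t_d)

    Z = scaled_distance(30.0, 1000.0)  # 1000 kg TNT-equivalent at 30 m standoff
    print(f"Z = {Z:.1f} m/kg^(1/3)")

    t = np.linspace(0.0, 0.02, 5)               # times within the positive phase
    print(friedlander(t, P_s=150e3, t_d=0.02))  # placeholder waveform values (Pa)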