Wear is a critical factor in determining the durability of microelectromechanical systems (MEMS). While the reliability of polysilicon MEMS has received extensive attention, the mechanisms responsible for this failure mode at the microscale have yet to be conclusively determined. We have used on-chip polycrystalline silicon side-wall friction MEMS specimens to study active mechanisms during sliding wear in ambient air. Worn parts were examined by analytical scanning and transmission electron microscopy, while local temperature changes were monitored using advanced infrared microscopy. Observations show that small amorphous debris particles (≈50-100 nm) are removed by fracture through the silicon grains (≈500 nm) and are oxidized during this process. Agglomeration of such debris particles into larger clusters also occurs. Some of these debris particles/clusters create plowing tracks on the beam surface. A nano-crystalline surface layer (≈20-200 nm), with higher oxygen content, forms during wear at and below regions of the worn surface; its formation is likely aided by high local stresses. No evidence of dislocation plasticity or of extreme local temperature increases was found, ruling out the possibility of high-temperature-assisted wear mechanisms.
Microanalysis is typically performed to analyze the near surface of materials. There are many instances where chemical information about the third spatial dimension is essential to the solution of materials analysis problems. The majority of 3D analyses, however, focus on limited spectral acquisition and/or analysis. For truly comprehensive 3D chemical characterization, 4D spectral images (a complete spectrum from each volume element of a region of a specimen) are needed. Furthermore, a robust statistical method is needed to extract the maximum amount of chemical information from that extremely large amount of data. In this paper, an example of the acquisition and multivariate statistical analysis of 4D (3-spatial and 1-spectral dimension) x-ray spectral images is described. The use of a single- or dual-beam FIB (without or with an SEM column) to obtain 3D chemistry has been described by others with respect to secondary-ion mass spectrometry. The basic methodology described in those works has been modified for comprehensive x-ray microanalysis in a dual-beam FIB/SEM (FEI Co. DB-235). In brief, the FIB is used to serially section a site-specific region of a sample, and then the electron beam is rastered over the exposed surface with an x-ray spectral image acquired at each section. All this is performed without rotating or tilting the specimen between FIB cutting and SEM imaging/x-ray spectral image acquisition. The resultant 4D spectral image (TSI) is then unfolded (number of volume elements by number of channels) and subjected to the same multivariate curve resolution (MCR) approach that has proven successful for the analysis of lower-dimension x-ray spectral images. The TSI data sets can be in excess of 4 Gbytes. This problem has been overcome (for now), and images up to 6 Gbytes have been analyzed in this work. The method for analyzing such large spectral images will be described in this presentation. A comprehensive 3D chemical analysis was performed on several corrosion specimens of Cu electroplated with various metals. Figure 1A shows the top view of the localized corrosion region prepared for FIB sectioning. The TSI region has been coated with Pt, and a trench has been milled along the bottom edge of the region, exposing it to the electron beam as seen in Figure 1B. The TSI consisted of 25 sections and was approximately 6 Gbytes. Figure 1C shows several of the components rendered in 3D: green is Cu; blue is Pb; cyan represents one of the corrosion products, which contains Cu, Zn, O, S, and C; and orange represents the other corrosion product, with Zn, O, S, and C. Figure 1D shows all of the component spectral shapes from the analysis. There is severe pathological overlap of the spectra from Ni, Cu, and Zn as well as Pb and S; in spite of this, clean spectral shapes have been extracted from the TSI. This powerful TSI technique could be applied to other sectioning methods as well.
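For readers who want the mechanics of the unfolding step, the sketch below shows in Python how a 4D spectral image can be reshaped into the (volume elements) by (channels) matrix and factored. It uses synthetic data and scikit-learn's NMF as a generic nonnegative-factorization stand-in for the MCR algorithm described above; the array sizes and component count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for a 4D spectral image: nz serial sections,
# ny-by-nx pixels per section, nch x-ray channels per spectrum.
nz, ny, nx, nch = 10, 64, 64, 512
rng = np.random.default_rng(0)
tsi = rng.poisson(0.1, size=(nz, ny, nx, nch)).astype(float)

# Unfold to (number of volume elements) x (number of channels).
d = tsi.reshape(nz * ny * nx, nch)

# Nonnegative factorization D ~ C S^T, a common stand-in for MCR.
mcr = NMF(n_components=4, init="nndsvda", max_iter=500)
c = mcr.fit_transform(d)   # component abundance per voxel
s = mcr.components_        # component spectral shapes

# Refold abundances into volumes for 3D rendering of each component.
volumes = c.reshape(nz, ny, nx, -1)
```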
Analyzing the performance of a complex System of Systems (SoS) requires a systems engineering approach. Many such SoS exist in the military domain. Examples include the Army's next-generation Future Combat Systems 'Unit of Action' or the Navy's Aircraft Carrier Battle Group. In the case of a Unit of Action, a system of combat vehicles, support vehicles, and equipment is organized in an efficient configuration that minimizes logistics footprint while still maintaining the required performance characteristics (e.g., operational availability). In this context, systems engineering means developing a global model of the entire SoS and all component systems and interrelationships. This global model supports analyses that result in an understanding of the interdependencies and emergent behaviors of the SoS. Sandia National Laboratories will present a robust toolset that includes methodologies for developing a SoS model, defining state models, and simulating a system of state models over time. This toolset is currently used to perform logistics supportability and performance assessments of the set of Future Combat Systems (FCS) for the U.S. Army's Program Manager Unit of Action.
We design a density-functional-theory (DFT) exchange-correlation functional that enables an accurate treatment of systems with electronic surfaces. Surface-specific approximations for both exchange and correlation energies are developed. A subsystem functional approach is then used: an interpolation index combines the surface functional with a functional for interior regions. When the local density approximation is used in the interior, the result is a straightforward functional for use in self-consistent DFT. The functional is validated for two metals (Al, Pt) and one semiconductor (Si) by calculations of (i) established bulk properties (lattice constants and bulk moduli) and (ii) a property where surface effects exist (the vacancy formation energy). Good and coherent results indicate that this functional may serve well as a universal first choice for solid-state systems and that yet improved functionals can be constructed by this approach.
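As a schematic of the subsystem-functional construction described above (the notation and the specific form of the interpolation index w(r) are illustrative assumptions, not the paper's exact expressions):

```latex
% Schematic subsystem-functional form: an interpolation index
% w(r) in [0,1] blends a surface-specific exchange-correlation
% energy density with an interior (here LDA) energy density.
\[
E_{xc}[n] = \int n(\mathbf{r}) \Bigl[ w(\mathbf{r})\,
   \epsilon_{xc}^{\mathrm{surf}}(\mathbf{r};[n])
   + \bigl(1 - w(\mathbf{r})\bigr)\,
   \epsilon_{xc}^{\mathrm{LDA}}\bigl(n(\mathbf{r})\bigr) \Bigr]\,
   d\mathbf{r}
\]
```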
In this paper, we explore the stability properties of time-domain numerical methods for multitime partial differential equations (MPDEs) in detail. We demonstrate that simple techniques for numerical discretization can lead easily to instability. By investigating the underlying eigenstructure of several discretization techniques along different artificial time scales, we show that not all combinations of techniques are stable. We identify choices of discretization method and step size, along fast and slow time scales, that lead to robust, stable time-domain integration methods for the MPDE. One of our results is that applying overstable methods along one time-scale can compensate for unstable discretization along others. Our novel integration schemes bring robustness to time-domain MPDE solution methods, as we demonstrate with examples.
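For context, the bivariate MPDE has the following standard form (a sketch of the common textbook formulation; the paper's particular discretization combinations are not reproduced here):

```latex
% For a circuit-style equation dx/dt = f(x) + b(t) whose input has two
% widely separated rates, the multitime function \hat{x}(t_1, t_2) obeys
\[
\frac{\partial \hat{x}}{\partial t_1}
  + \frac{\partial \hat{x}}{\partial t_2}
  = f(\hat{x}) + \hat{b}(t_1, t_2),
\]
% and the univariate solution is recovered on the diagonal:
\[
x(t) = \hat{x}(t, t).
\]
% Stability then depends on which method (e.g., backward Euler,
% trapezoidal) and which step size are applied along each artificial
% time axis, which is the combination space analyzed in the paper.
```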
Solid-state ¹H magic angle spinning (MAS) NMR was used to investigate sulfonated Diels-Alder poly(phenylene) polymer membranes. Under high-spinning-speed ¹H MAS conditions, the proton environments of the sulfonic acid and phenylene polymer backbone are resolved. A double-quantum (DQ) filter using the rotor-synchronized back-to-back (BABA) NMR multiple-pulse sequence allowed the selective suppression of the sulfonic proton environment in the ¹H MAS NMR spectra. This DQ filter in conjunction with a spin diffusion NMR experiment was then used to measure the domain size of the sulfonic acid component within the membrane. In addition, the temperature dependence of the sulfonic acid spin-spin relaxation time (T₂) was determined, providing an estimate of the activation energy for the proton dynamics of the dehydrated membrane.
The fundamental chemical behavior of the AlCl₃/SO₂Cl₂ catholyte system was investigated using ²⁷Al NMR spectroscopy, Raman spectroscopy, and single-crystal X-ray diffraction. Three major Al-containing species were found to be present in this catholyte system, where the ratio of each was dependent upon aging time, concentration, and/or storage temperature. The first species was identified as [Cl₂Al(µ-Cl)]₂ in equilibrium with AlCl₃. The second species results from the decomposition of SO₂Cl₂, which forms Cl₂(g) and SO₂(g). The SO₂(g) is readily consumed in the presence of AlCl₃ to form the crystallographically characterized species [Cl₂Al(µ-O₂SCl)]₂ (1). For 1, each Al is tetrahedrally (Td) bound by two terminal Cl and two µ-O ligands, whereas the S is three-coordinate, with two µ-O ligands and one terminal Cl. The third molecular species also has Td-coordinated Al metal centers but with increased oxygen coordination. Over time it was noted that a precipitate formed from the catholyte solutions. Raman spectroscopic studies show that this gel or precipitate has a component consistent with thionyl chloride. We have proposed a polymerization scheme that accounts for the precipitate formation. Further NMR studies indicate that the precipitate is in equilibrium with the solution.
We demonstrate direct diode-bar side pumping of a Yb-doped fiber laser using embedded-mirror side pumping (EMSP). In this method, the pump beam is launched by reflection from a micro-mirror embedded in a channel polished into the inner cladding of a double-clad fiber (DCF). The amplifier employed an unformatted, non-lensed, ten-emitter diode bar (20 W) and glass-clad, polarization-maintaining, large-mode-area fiber. Measurements with passive fiber showed that the coupling efficiency of the raw diode-bar output into the DCF (ten launch sites) was ≈84%; for comparison, the net coupling efficiency using a conventional, formatted, fiber-coupled diode bar is typically 50-70%, i.e., EMSP results in a factor of 2-3 less wasted pump power. The slope efficiency of the side-pumped fiber laser was ≈80% with respect to launched pump power and 24% with respect to electrical power consumption of the diode bar; at a fiber-laser output power of 7.5 W, the EMSP diode bar consumed 41 W of electrical power (18% electrical-to-optical efficiency). When end pumped using a formatted diode bar, the fiber laser consumed 96 W at 7.5 W output power, a factor of 2.3 more electrical power, and the electrical-to-optical slope efficiency was lower by a factor of 2.0. Passive-fiber measurements showed that the EMSP alignment sensitivity is nearly identical for a single emitter and for the ten-emitter bar. EMSP is the only method capable of launching the unformatted output of a diode bar directly into DCF (including glass-clad DCF), enabling fabrication of low-cost, simple, and compact diode-bar-pumped fiber lasers and amplifiers.
Spreading of bacteria in a highly advective, disordered environment is examined. Predictions of super-diffusive spreading for a simplified reaction-diffusion equation are tested. Concentration profiles display anomalous growth and super-diffusive spreading. A perturbation analysis yields a crossover time between diffusive and super-diffusive behavior. The dependence of this crossover time on the convection velocity and disorder is tested. Like the simplified equation, the full linear reaction-diffusion equation displays super-diffusive spreading perpendicular to the convection. However, for mean positive growth rates the full nonlinear reaction-diffusion equation produces symmetric spreading with a Fisher wavefront, whereas net negative growth rates cause an asymmetry, with a slower wavefront velocity perpendicular to the convection.
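A minimal sketch of the kind of model equation being described (the notation is illustrative; the paper's exact coefficients and disorder statistics are not reproduced here):

```latex
% Fisher-KPP reaction-diffusion equation with advection speed u along x
% and a spatially random (disordered) growth rate a(x, y):
\[
\frac{\partial c}{\partial t} + u\, \frac{\partial c}{\partial x}
  = D \nabla^2 c + a(x, y)\, c\, (1 - c)
\]
% Dropping the saturation term (-a c^2) gives the linear equation whose
% spreading perpendicular to the convection is super-diffusive.
```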
Chemical crosslinking is an important tool for probing protein structure and protein-protein interactions. The approach usually involves crosslinking of specific amino acids within a folded protein or protein complex, enzymatic digestion of the crosslinked protein(s), and identification of the resulting crosslinked peptides by liquid chromatography/mass spectrometry (LC/MS). In this manner, distance constraints are obtained for residues that must be in close proximity to one another in the native structure or complex. As the complexity of the system under study increases, for example, a large multi-protein complex, simply measuring the mass of a crosslinked species will not always be sufficient to determine the identity of the crosslinked peptides. In such a case, tandem mass spectrometry (MS/MS) could provide the required information if the data can be properly interpreted. In MS/MS, a species of interest is isolated in the gas phase and allowed to undergo collision-induced dissociation (CID). Because the gas-phase dissociation pathways of peptides have been well studied, methods are established for determining peptide sequence by MS/MS. However, although crosslinked peptides dissociate through some of the same pathways as isolated peptides, the additional dissociation pathways available to the former have not been studied in detail. Software such as MS2Assign has been written to assist in the interpretation of MS/MS spectra from crosslinked peptide species, but it would be greatly enhanced by a more thorough understanding of how these species dissociate. We are thus systematically investigating the dissociation pathways open to crosslinked peptide species. A series of polyalanine and polyglycine model peptides has been synthesized containing one or two lysine residues to generate defined inter- and intra-molecular crosslinked species, respectively. Each peptide contains 11 total residues, and one arginine residue is present at the carboxy terminus to mimic species generated by tryptic digestion. The peptides have been allowed to react with a series of commonly used crosslinkers such as DSS, DSG, and DST. The tandem mass spectra acquired for these crosslinked species are being examined as a function of crosslinker identity, site(s) of crosslinking, and precursor charge state. Results from these model studies and observations from actual experimental systems are being incorporated into the MS2Assign software to enhance our ability to effectively use chemical crosslinking in protein complex determination.
The Surface Evolver was used to compute the equilibrium microstructure of random soap foams with bidisperse cell-size distributions and to evaluate topological and geometric properties of the foams and individual cells. The simulations agree with the experimental data of Matzke and Nestler for the probability ρ(F) of finding cells with F faces and its dependence on the fraction of large cells. The simulations also agree with the theory for isotropic Plateau polyhedra (IPP), which describes the F-dependence of cell geometric properties, such as surface area, edge length, and mean curvature (diffusive growth rate); this is consistent with results for polydisperse foams. Cell surface areas are about 10% greater than spheres of equal volume, which leads to a simple but accurate relation for the surface free energy density of foams. The Aboav-Weaire law is not valid for bidisperse foams.
Bulk migration of particles towards regions of lower shear occurs in suspensions of neutrally buoyant spheres in Newtonian fluids undergoing creeping flow in the annular region between two rotating, coaxial cylinders (a wide-gap Couette). For a monomodal suspension of spheres in a viscous fluid, dimensional analysis indicates that the rate of migration at a given concentration should scale with the square of the sphere radius. However, a previous experimental study showed that the rate of migration of spherical particles at 50% volume concentration actually scaled with the sphere radius to approximately the 2.9 power.
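The dimensional argument can be made explicit with the standard shear-induced-migration scaling (a sketch; the a-squared scaling is the textbook result the abstract invokes, and the dimensionless concentration function is an assumed placeholder):

```latex
% Shear-induced migration is commonly modeled with an effective
% diffusivity proportional to the local shear rate and a^2:
\[
D_{\mathrm{eff}} \sim \dot{\gamma}\, a^2\, \hat{D}(\phi)
\]
% so the time to migrate across a gap of width H scales as
\[
t_{\mathrm{mig}} \sim \frac{H^2}{D_{\mathrm{eff}}}
  \sim \frac{1}{\dot{\gamma}} \left( \frac{H}{a} \right)^2,
\]
% i.e., the migration rate at fixed concentration should scale as a^2;
% the observed a^{2.9} dependence is the departure from this prediction.
```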
Three nested molybdenum wire arrays with initial outer diameters of 45, 50, and 55 mm were imploded by the ~20 MA, 90 ns rise-time current pulse of Sandia's Z accelerator. The implosions generated Mo plasmas with ≈10% of the array's initial mass reaching Ne-like and nearby ionization stages. These ions emitted 2-4 keV L-shell x rays with radiative powers approaching 10 TW. Mo L-shell spectra with axial and temporal resolution were captured and have been analyzed using a collisional-radiative model. The measured spectra indicate significant axial variation in the electron density, which increases from a few times 10²⁰ cm⁻³ at the cathode up to ~3 × 10²¹ cm⁻³ near the middle of the 20 mm plasma column (8 mm from the anode). Time-resolved spectra indicate that the peak electron density is reached before the peak of the L-shell emission and decreases with time, while the electron temperature remains within 10% of 1.7 keV over the 20-30 ns L-shell radiation pulse. Finally, while the total yield, peak total power, and peak L-shell power all tended to decrease with increasing initial wire array diameters, the L-shell yield and the average plasma conditions varied little with the initial wire array diameter.
Proposed for publication in Association for Computing Machinery Transactions on Mathematical Software.
ODRPACK (TOMS Algorithm 676) has provided a complete package for weighted orthogonal distance regression for many years. The code is complete with user-selectable reporting facilities, numerical and analytic derivatives, derivative checking, and many more features. The foundation for the algorithm is a stable and efficient trust-region Levenberg-Marquardt minimizer that exploits the structure of the orthogonal distance regression problem. ODRPACK95 is a modification of the original ODRPACK code that adds support for bound constraints, uses the newer Fortran 95 language, and simplifies the interface to the user-called subroutine.
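Although ODRPACK95 itself is Fortran 95, the flavor of an orthogonal distance regression fit is easy to illustrate through scipy.odr, which wraps the original ODRPACK (and therefore lacks ODRPACK95's bound constraints); the data, model, and error magnitudes below are invented for illustration:

```python
import numpy as np
from scipy import odr

# Hypothetical data with errors in both x and y.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

def linear(beta, x):
    """Model function in ODRPACK's f(beta, x) convention."""
    return beta[0] * x + beta[1]

model = odr.Model(linear)
data = odr.RealData(x, y, sx=0.1, sy=0.5)   # weights from x and y errors
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
print(fit.beta, fit.sd_beta)                 # estimates and std. errors
```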
We consider the accuracy of predictions made by integer programming (IP) models of sensor placement for water security applications. We have recently shown that IP models can be used to find optimal sensor placements for a variety of different performance criteria (e.g., minimizing health impacts or minimizing time to detection). However, these models make a variety of simplifying assumptions that might bias the final solution. We show that our IP modeling assumptions are similar to models developed for other sensor placement methodologies, and thus IP models should give similar predictions. However, this discussion highlights that there are significant differences in how temporal effects are modeled for sensor placement. We describe how these modeling assumptions can impact sensor placements.
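A representative IP of the kind referred to here is a p-median-style placement model (a sketch with illustrative notation; the paper's exact formulation and impact measures may differ):

```latex
% Events a occur with probability p_a; d_{ai} is the impact of event a
% if first detected at location i; x_{ai} = 1 assigns event a to
% location i; s_i = 1 places a sensor at location i.
\[
\begin{aligned}
\min \;& \sum_{a} p_a \sum_{i} d_{ai}\, x_{ai} \\
\text{s.t.}\;& \sum_{i} x_{ai} = 1 \quad \forall a, \qquad
  x_{ai} \le s_i \quad \forall a, i, \\
& \sum_{i} s_i \le p, \qquad x_{ai},\, s_i \in \{0, 1\}
\end{aligned}
\]
```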
A focused ion beam (FIB) is used to accurately sculpt predetermined micron-scale, curved shapes in a number of solids. Using a digitally scanned ion beam system, various features are sputtered, including hemispheres and sine waves with dimensions of 1-50 µm. Ion sculpting is accomplished by changing pixel dwell time within individual boustrophedonic scans. The pixel dwell times used to sculpt a given shape are determined prior to milling and account for the material-specific, angle-dependent sputter yield, Y(θ), as well as the amount of beam overlap in adjacent pixels. A number of target materials, including C, Au and Si, are accurately sculpted using this method. For several target materials, the curved feature shape closely matches the intended shape, with milled feature depths within 5% of intended values.
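A minimal Python sketch of the dwell-time pre-computation for a hemispherical depression follows. The removal-rate model, the angular yield curve, and all numerical values are illustrative assumptions (and the beam-overlap deconvolution mentioned above is omitted for brevity); this is not the authors' algorithm.

```python
import numpy as np

R = 10.0     # hemisphere radius, micrometers (assumed)
px = 0.2     # pixel pitch, micrometers (assumed)
rate0 = 0.5  # normal-incidence removal rate, um^3/ms (assumed)

def sputter_yield(theta):
    """Illustrative Yamamura-like angular dependence, Y(0) = 1."""
    sec = 1.0 / np.cos(theta)
    return sec * np.exp(-(sec - 1.0))

x = np.arange(-R, R + px, px)
X, Y2 = np.meshgrid(x, x)
r2 = X**2 + Y2**2
depth = np.where(r2 < R**2, np.sqrt(np.maximum(R**2 - r2, 0.0)), 0.0)

# Local slope of the target surface sets the incidence angle theta.
gy, gx = np.gradient(depth, px)
theta = np.arctan(np.hypot(gx, gy))

# Dwell per pixel: volume to remove divided by local removal rate
# (beam-overlap correction between adjacent pixels omitted here).
dwell = depth * px**2 / (rate0 * sputter_yield(theta))
```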
Ccaffeine is a Common Component Architecture (CCA) framework devoted to high-performance computing. In this note we give an overview of the system features of Ccaffeine and CCA that support component-based HPC application development. Object-oriented, single-threaded and lightweight, Ccaffeine is designed to get completely out of the way of the running application after it has been composed from components. Ccaffeine is one of the few frameworks, CCA or otherwise, that can compose and run applications on a parallel machine interactively and then automatically generate a static, possibly self-tuning, executable for production runs. Users can experiment with and debug applications interactively, improving their productivity. When the application is ready, a script is automatically generated, parsed and turned into a static executable for production runs. Within this static executable, dynamic replacement of components can be performed by self-tuning applications.
In recent years, several integer programming models have been proposed to place sensors in municipal water networks in order to detect intentional or accidental contamination. Although these initial models assumed that it is equally costly to place a sensor at any place in the network, there clearly are practical cost constraints that would impact a sensor placement decision. Such constraints include not only labor costs but also the general accessibility of a sensor placement location. In this paper, we extend our integer program to explicitly model the cost of sensor placement. We partition network locations into groups of varying placement cost, and we consider the public health impacts of contamination events under varying budget constraints. Thus our models permit cost/benefit analyses for differing sensor placement designs. As a control for our optimization experiments, we compare the set of sensor locations selected by the optimization models to a set of manually-selected sensor locations.
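Relative to the basic placement model (see the formulation sketched after the earlier sensor-placement abstract), the cost extension described here amounts to replacing the sensor-count limit with a knapsack-style budget; this is a schematic reading of the abstract, not the paper's verbatim model:

```latex
% With placement cost c_i at location i and total budget B, the
% sensor-count constraint is replaced by
\[
\sum_{i} c_i\, s_i \le B
\]
% Re-solving while sweeping B traces out the cost/benefit curve of
% public health impact versus placement budget.
```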
Electrical operation of III-Nitride light emitting diodes (LEDs) with photonic crystal structures is demonstrated. Employing photonic crystal structures in III-Nitride LEDs is a method to increase light extraction efficiency and directionality. The photonic crystal is a triangular lattice formed by dry etching into the III-Nitride LED. A range of lattice constants is considered (a ≈ 270-340 nm). The III-Nitride LED layers include a tunnel junction providing good lateral current spreading without a semi-absorbing metal current spreader as is typically used in conventional III-Nitride LEDs. These photonic crystal III-Nitride LED structures are unique because they allow for carrier recombination and light generation proximal to the photonic crystal (light extraction area) yet displaced from the absorbing metal contact. The photonic crystal Bragg scatters what would otherwise have been guided modes out of the LED, increasing the extraction efficiency. The far-field light radiation patterns are heavily modified compared to the typical III-Nitride LED's Lambertian output. The photonic crystal affects the light propagation out of the LED surface, and the radiation pattern changes with lattice constant. LEDs with photonic crystals are compared to similar III-Nitride LEDs without the photonic crystal in terms of extraction, directionality, and emission spectra.
Experimental evidence suggests that the energy balance between processes in play during wire array implosions is not well understood. In fact, the radiative yields can exceed the implosion kinetic energy by several times. A possible explanation is that the coupling from magnetic energy to kinetic energy as magnetohydrodynamic plasma instabilities develop provides additional energy. It is thus important to model the instabilities produced in the post-implosion stage of the wire array in order to determine how the stored magnetic energy can be connected with the radiative yields. To this aim, three-dimensional hybrid simulations have been performed. They are initialized with plasma radial density profiles, deduced from recent experiments [C. Deeney et al., Phys. Plasmas 6, 3576 (1999)] that exhibited large x-ray yields, together with the corresponding magnetic field profiles. Unlike previous work, these profiles do not satisfy pressure balance and differ substantially from those of a Bennett equilibrium. They result in faster growth, with an associated transfer of magnetic energy to plasma motion and hence kinetic energy.
Pulsed power driven metallic wire-array Z pinches are the most powerful and efficient laboratory x-ray sources. Furthermore, under certain conditions the soft x-ray energy radiated in a 5 ns pulse at stagnation can exceed the estimated kinetic energy of the radial implosion phase by a factor of 3 to 4. A theoretical model is developed here to explain this, allowing the rapid conversion of magnetic energy to a very high ion temperature plasma through the generation of fine-scale, fast-growing m=0 interchange MHD instabilities at stagnation. These saturate nonlinearly and provide associated ion viscous heating. Next the ion energy is transferred by equipartition to the electrons and thus to soft x-ray radiation. Recent time-resolved iron spectra at Sandia confirm an ion temperature Tᵢ of over 200 keV (2 × 10⁹ degrees), as predicted by theory. These are believed to be record temperatures for a magnetically confined plasma.
Bayesian medical monitoring is a concept based on using real-time performance-related data to make statistical predictions about a patient's future health. The following paper discusses the fundamentals behind the medical monitoring concept and the application to monitoring the health of nuclear reactors. Necessary assumptions are discussed regarding distributions and failure-rate calculations. A simple example is performed to illustrate the effectiveness of the methods. The methods perform very well for the thirteen subjects in the example, with a clear failure sequence identified for eleven of the subjects.
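As a concrete illustration of the flavor of calculation involved, the Python sketch below performs a conjugate Gamma-Poisson update of a failure rate from monitoring data; the prior, the data, and the conjugate-family choice are assumptions for illustration, not the paper's actual distributions:

```python
from scipy import stats

# Prior belief about the failure rate lambda (events per hour):
# Gamma(shape, rate) prior with mean alpha0/beta0 = 0.02 per hour.
alpha0, beta0 = 2.0, 100.0

# Monitoring data: n observed degradation events over exposure time t.
n_events, t_hours = 3, 500.0

# Posterior is Gamma(alpha0 + n, beta0 + t) by conjugacy.
alpha_n, beta_n = alpha0 + n_events, beta0 + t_hours

post = stats.gamma(a=alpha_n, scale=1.0 / beta_n)
print("posterior mean rate:", post.mean())
print("95% credible interval:", post.interval(0.95))
```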
A novel dual stage chemiluminescence detection system incorporating individually controlled hot stages has been developed and applied to probe for material interaction effects during polymer degradation. Utilization of this system has resulted in experimental confirmation for the first time that in an oxidizing environment a degrading polymer A (in this case polypropylene, PP) is capable of infecting a different polymer B (in this case polybutadiene, HTPB) over a relatively large distance. In the presence of the infectious degrading polymer A, the thermal degradation of polymer B is observed over a significantly shorter time period. Consistent with infectious volatiles from material A initiating the degradation process in material B it was demonstrated that traces (micrograms) of a thermally sensitive peroxide in the vicinity of PP could induce degradation remotely. This observation documents cross-infectious phenomena between different polymers and has major consequences for polymer interactions, understanding fundamental degradation processes and long-term aging effects under combined material exposures.
Photocatalytic porphyrins are used to reduce metal complexes from aqueous solution and, further, to control the deposition of metals onto porphyrin nanotubes and surfactant assembly templates to produce metal composite nanostructures and nanodevices. For example, surfactant templates lead to spherical platinum dendrites and foam-like nanomaterials composed of dendritic platinum nanosheets. Porphyrin nanotubes are reported for the first time, and photocatalytic porphyrin nanotubes are shown to reduce metal complexes and deposit the metal selectively onto the inner or outer surface of the tubes, leading to nanotube-metal composite structures that are capable of hydrogen evolution and other nanodevices.
Proposed for publication in Journal of Applied Physics.
Computation of space-charge current-limiting effects across a vacuum cavity between parallel electrodes has previously been carried out only for thermionic emission spectra. In some applications, where the current arises from an injected electron beam or photo-Compton emission from electrode walls, the electron energy spectra may deviate significantly from Maxwellian. Considering the space charge as a collisionless plasma, we derive an implicit equation for the peak cavity potential assuming steady-state currents. For the examples of graphite, nickel, and gold electrodes exposed to x rays, we find that cavity photoemission currents are typically more severely space-charge limited than they would be with the assumption of a purely Maxwellian energy distribution.
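A useful reference point is the classical cold-beam limit, which the spectrum-dependent analysis generalizes (the formula below is the standard Child-Langmuir law; the paper's implicit equation for the peak cavity potential is not reproduced here):

```latex
% Space-charge-limited current density between parallel plates at
% spacing d and potential difference V for a cold, monoenergetic beam:
\[
J_{\mathrm{CL}} = \frac{4 \epsilon_0}{9}
  \sqrt{\frac{2 e}{m_e}}\; \frac{V^{3/2}}{d^2}
\]
% The analysis above generalizes the emitted-electron energy spectrum
% beyond the Maxwellian (thermionic) case, which changes how strongly
% the cavity potential limits the net current.
```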
Network administrators and security analysts often do not know what network services are being run in every corner of their networks. Even when they have a general grasp of the services running on their networks, they often do not know what specific versions of those services are running. Actively scanning for services and versions does not always yield complete results, and patch and service management suffer as a result. We present NetState, a system for monitoring, storing, and reporting application and operating system version information for a network. NetState gives security and network administrators the ability to know what is running on their networks while allowing for user-managed machines and complex host configurations. Our architecture uses distributed modules to collect network information and a centralized server that stores and issues reports on that collected version information. We discuss some of the challenges to building and operating NetState as well as the legal issues surrounding the promiscuous capture of network data. We conclude that this tool can solve some key problems in network management and has a wide range of possibilities for future uses.
The stainless steel alloy 17-4PH contains a martensitic microstructure and second-phase delta (δ) ferrite. Strengthening of 17-4PH is attributed to Cu-rich precipitates produced during age hardening treatments at 900-1150 °F (H900-H1150). For wrought 17-4PH, the effects of heat treatment and microstructure on mechanical properties are well documented [for example, Ref. 1]. Fewer studies are available on cast 17-4PH, although it has been a popular casting alloy for high strength applications where moderate corrosion resistance is needed. Microstructural features and defects particular to castings may have adverse effects on properties, especially when the alloy is heat treated to high strength. The objective of this work was to outline the effects of microstructural features specific to castings, such as shrinkage/solidification porosity, on the mechanical behavior of investment-cast 17-4PH. Besides heat treatment effects, the results of metallography and SEM studies showed that the largest effect on mechanical properties is from shrinkage/solidification porosity. Figure 1a shows stress-strain curves obtained from samples machined from castings in the H925 condition. The strength levels were fairly similar but the ductility varied significantly. Figure 1b shows an example of porosity on a fracture surface from a room-temperature, quasi-static tensile test. The rounded features represent the surfaces of dendrites which did not fuse or only partially fused together during solidification. Some evidence of local areas of fracture is found on some dendrite surfaces. The shrinkage pores are due to inadequate backfilling of liquid metal and simultaneous solidification shrinkage during casting. A summary of percent elongation results is displayed in Figure 2a. It was found that higher amounts of porosity generally result in lower ductility. Note that the porosity content was measured on the fracture surfaces. The results are qualitatively similar to those found by Gokhale et al. and Surappa et al. in cast A356 Al and by Gokhale et al. for cast Mg alloys. The quantitative fractography and metallography work by Gokhale et al. illustrated the strong preference for fracture in regions of porosity in cast material. That is, the fracture process is not correlated to the average microstructure in the material but is related to the extremes in microstructure (local regions of high void content). In the present study, image analysis on random cross-sections of several heats indicated an overall porosity content of 0.03%. In contrast, the area % porosity was as high as 16% when measured on fracture surfaces of tensile specimens using stereology techniques. The results confirm that the fracture properties of cast 17-4PH cannot be predicted from the overall 'average' porosity content in the castings.
Policy-based network management (PBNM) uses policy-driven automation to manage complex enterprise and service provider networks. Such management is strongly supported by industry standards, state of the art technologies and vendor product offerings. We present a case for the use of PBNM and related technologies for end-to-end service delivery. We provide a definition of PBNM terms, a discussion of how such management should function and the current state of the industry. We include recommendations for continued work that would allow for PBNM to be put in place over the next five years in the unclassified environment.
ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation in inviscid fluids and solids. This document describes user options for modeling resistive magnetohydrodynamics, thermal conduction, radiation transport effects, and two-temperature material physics.
To establish mechanical properties and failure criteria of silicon carbide (SiC-N) ceramics, a series of quasi-static compression tests has been completed using a high-pressure vessel and a unique sample alignment jig. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Results from the uniaxial and triaxial compression tests established the failure threshold for the SiC-N ceramics in terms of stress invariants (I₁ and J₂) over the range 1246 < I₁ < 2405. In this range, results are fitted to the following limit function (Fossum and Brannon, 2004): √J₂ (MPa) = a₁ − a₃ e^(−a₂ I₁/3) + a₄ I₁/3, where a₁ = 10181 MPa, a₂ = 4.2 × 10⁻⁴, a₃ = 11372 MPa, and a₄ = 1.046. Combining these quasistatic triaxial compression strength measurements with existing data at higher pressures naturally results in different values for the least-squares fit to this function, appropriate over a broader pressure range. These triaxial compression tests are significant because they constitute the first successful measurements of SiC-N compressive strength under quasistatic conditions. Having an unconfined compressive strength of ≈3800 MPa, SiC-N had heretofore been tested only under dynamic conditions to achieve a sufficiently large load to induce failure. Obtaining reliable quasi-static strength measurements has required design of a special alignment jig and load-spreader assembly, as well as redundant gages to ensure alignment. When considered in combination with existing dynamic strength measurements, these data significantly advance the characterization of the pressure dependence of strength, which is important for penetration simulations where failed regions are often at lower pressures than intact regions.
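The fitted limit function is straightforward to evaluate directly; the short Python sketch below plugs in the constants quoted above (the sample I₁ values are arbitrary points in the fitted range):

```python
import numpy as np

# Constants from the reported fit (Fossum and Brannon, 2004).
a1, a2, a3, a4 = 10181.0, 4.2e-4, 11372.0, 1.046

def sqrt_j2_limit(i1_mpa):
    """Failure threshold sqrt(J2) in MPa for first invariant I1 in MPa."""
    return a1 - a3 * np.exp(-a2 * i1_mpa / 3.0) + a4 * i1_mpa / 3.0

for i1 in (1246.0, 1800.0, 2405.0):  # ends and middle of the fitted range
    print(f"I1 = {i1:7.1f} MPa -> sqrt(J2) = {sqrt_j2_limit(i1):7.1f} MPa")
```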
A laser hazard analysis and safety assessment was performed for the 3rdTech model DeltaSphere-3000® Laser 3D Scene Digitizer, an infrared laser scanner, based on the 2000 version of the American National Standards Institute's standard Z136.1 for the Safe Use of Lasers. The portable scanner system is used in the Robotic Manufacturing Science and Engineering Laboratory (RMSEL). This scanning system had been proposed as a demonstrator for a new application. The manufacturer lists the Nominal Ocular Hazard Distance (NOHD) as less than 2 meters. It was necessary that SNL validate this NOHD prior to its use as a demonstrator involving the general public. A formal laser hazard analysis is presented for the typical mode of operation for the current configuration as well as a possible modified mode and alternative configuration.
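For orientation, a commonly used ANSI Z136.1-style NOHD estimate for a diverging beam has the form below (a sketch of the standard formula; the actual analysis uses the scanner's specific emission parameters and the applicable MPE):

```latex
% \Phi = laser output power, \phi = beam divergence, a = exit-beam
% diameter, MPE = maximum permissible exposure (irradiance):
\[
r_{\mathrm{NOHD}} = \frac{1}{\phi}
  \left( \sqrt{\frac{4\,\Phi}{\pi \cdot \mathrm{MPE}}} - a \right)
\]
% Validating the manufacturer's claim amounts to confirming this
% distance is under 2 m for the scanner's operating parameters.
```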
Several SIERRA applications make use of third-party libraries to solve systems of linear and nonlinear equations, and to solve eigenproblems. The classes and interfaces in the SIERRA framework that provide linear system assembly services and access to solver libraries are collectively referred to as solver services. This paper provides an overview of SIERRA's solver services including the design goals that drove the development, and relationships and interactions among the various classes. The process of assembling and manipulating linear systems will be described, as well as access to solution methods and other operations.
The Finite Element Interface to Linear Solvers (FEI) is a linear system assembly library. Sparse systems of linear equations arise in many computational engineering applications, and the solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver package capable of solving all of the linear systems that arise. This motivates the need to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by various solver libraries for data assembly and problem solution differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer that puts a 'common face' on various solver libraries. The FEI has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory. The original FEI offered several advantages over using linear algebra libraries directly, but also imposed significant limitations and disadvantages. A new set of interfaces has been added with the goal of removing the limitations of the original FEI while maintaining and extending its strengths.
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project "Massively-Parallel Linear Programming". We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
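The "computational core" mentioned above is, in the standard presentation of Mehrotra's method, a normal-equations solve (sketched below in conventional notation; details of parPCx's exact formulation may differ):

```latex
% With X = diag(x), S = diag(s), and D^2 = X S^{-1}, each interior-point
% iteration solves the normal equations
\[
\bigl(A D^2 A^{\mathsf{T}}\bigr)\, \Delta y = r
\]
% for a residual-dependent right-hand side r. Assembling and solving
% this sparse system in parallel is where the constraint-matrix
% distribution produced by the hypergraph partitioner matters most.
```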
The manipulation of physical interactions between structural moieties on the molecular scale is a fundamental hurdle in the realization and operation of nanostructured materials and high surface area microsystem architectures. These include such nano-interaction-based phenomena as self-assembly, fluid flow, and interfacial tribology. The proposed research utilizes photosensitive molecular structures to tune such interactions reversibly. This new material strategy provides optical actuation of nano-interactions impacting behavior on both the nano- and macroscales and with potential to impact directed nanostructure formation, microfluidic rheology, and tribological control.
As part of the U.S. Department of Energy (DOE) Office of Industrial Technologies (OIT) Industries of the Future (IOF) Forest Products research program, a collaborative investigation was conducted on the sources, characteristics, and deposition of particles intermediate in size between submicron fume and carryover in recovery boilers. Laboratory experiments on suspended-drop combustion of black liquor and on black liquor char bed combustion demonstrated that both processes generate intermediate size particles (ISP), amounting to 0.5-2% of the black liquor dry solids mass (BLS). Measurements in two U.S. recovery boilers show variable loadings of ISP in the upper furnace, typically 0.6-3 g/Nm³, or 0.3-1.5% of BLS. The measurements show that the ISP mass size distribution increases with size from 5-100 µm, implying that a substantial amount of ISP inertially deposits on steam tubes. ISP particles are depleted in potassium, chlorine, and sulfur relative to the fuel composition. Comprehensive boiler modeling demonstrates that ISP concentrations are substantially overpredicted when using a previously developed algorithm for ISP generation. Equilibrium calculations suggest that alkali carbonate decomposition occurs at intermediate heights in the furnace and may lead to partial destruction of ISP particles formed lower in the furnace. ISP deposition is predicted to occur in the superheater sections, at temperatures greater than 750 °C, when the particles are at least partially molten.
The thermal hazard posed by large hydrocarbon fires is dominated by the radiative emission from high temperature soot. Since the optical properties of soot, especially in the infrared region of the electromagnetic spectrum, as well as its morphological properties, are not well known, efforts are underway to characterize these properties. Measurements of these soot properties in large fires are important for heat transfer calculations, for interpretation of laser-based diagnostics, and for developing soot property models for fire field models. This research uses extractive measurement diagnostics to characterize soot optical properties, morphology, and composition in 2 m pool fires. For measurement of the extinction coefficient, soot extracted from the flame zone is transported to a transmission cell where measurements are made using both visible and infrared lasers. Soot morphological properties are obtained by analysis via transmission electron microscopy of soot samples obtained thermophoretically within the flame zone, in the overfire region, and in the transmission cell. Soot composition, including carbon-to-hydrogen ratio and polycyclic aromatic hydrocarbon concentration, is obtained by analysis of soot collected on filters. Average dimensionless extinction coefficients of 8.4 ± 1.2 at 635 nm and 8.7 ± 1.1 at 1310 nm agree well with recent measurements in the overfire region of JP-8 and other fuels in lab-scale burners and fires. Average soot primary particle diameters, radii of gyration, and fractal dimensions agree with these recent studies. Rayleigh-Debye-Gans scattering theory applied to the measured fractal parameters shows qualitative agreement with the trends in measured dimensionless extinction coefficients. Results of the density and chemistry measurements are detailed in the report.
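For reference, the dimensionless extinction coefficient quoted above is conventionally defined through the transmission measurement (standard soot-optics usage; symbols are defined in the comment):

```latex
% K_e relates laser transmission through a path length L containing
% soot volume fraction f_v at wavelength \lambda via
\[
\frac{I}{I_0} = \exp\!\left( -\,\frac{K_e\, f_v\, L}{\lambda} \right)
\]
% so the measured K_e converts transmission data directly into
% volume-fraction-path products for radiative transfer calculations.
```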
The measurement of heat flux in hydrocarbon fuel fires (e.g., diesel or JP-8) is difficult due to high temperatures and the sooty environment. Uncooled commercially available heat flux gages do not survive in long duration fires, and cooled gages often become covered with soot, thus changing the gage calibration. An alternate method that is rugged and relatively inexpensive is based on inverse heat conduction methods. Inverse heat-conduction methods estimate absorbed heat flux at specific material interfaces using temperature/time histories, boundary conditions, material properties, and usually an assumption of one-dimensional (1-D) heat flow. This method is commonly used at Sandia's fire test facilities. In this report, an uncertainty analysis was performed for a specific example to quantify the effect of input parameter variations on the estimated heat flux when using the inverse heat conduction method. The approach used was to compare results from a number of cases using modified inputs to a base case. The response of a 304 stainless-steel cylinder [about 30.5 cm (12 in.) in diameter and 0.32 cm (1/8 in.) thick] filled with 2.5-cm-thick (1-in.) ceramic fiber insulation was examined. The input parameters that were varied were steel-wall thickness, thermal conductivity, and volumetric heat capacity; insulation thickness, thermal conductivity, and volumetric heat capacity; temperature uncertainty; boundary conditions; temperature sampling period; and numerical inputs. One-dimensional heat transfer was assumed in all cases. Results of the analysis show that, at the maximum heat flux, the most important parameters were temperature uncertainty, steel thickness, and steel volumetric heat capacity. The use of constant thermal properties rather than temperature-dependent values also made a significant difference in the resultant heat flux; therefore, temperature-dependent values should be used. As an example, several parameters were varied to estimate the uncertainty in heat flux. The result was 15-19% uncertainty at 95% confidence at the highest flux, neglecting multidimensional effects.
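A schematic statement of the inverse method helps fix ideas (illustrative notation; the report's specific code and estimation algorithm are not reproduced here):

```latex
% 1-D forward model through the steel wall and insulation:
\[
\rho c_p(T)\, \frac{\partial T}{\partial t}
  = \frac{\partial}{\partial x}\!\left( k(T)\,
    \frac{\partial T}{\partial x} \right)
\]
% with the absorbed surface heat flux history q(t) estimated by
% minimizing the mismatch with measured interior temperatures:
\[
\hat{q} = \arg\min_{q} \sum_{j}
  \bigl[ T_{\mathrm{meas}}(t_j) - T_{\mathrm{model}}(t_j; q) \bigr]^2
\]
```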
This study demonstrates that containment of municipal and hazardous waste in arid and semiarid environments can be accomplished effectively without traditional, synthetic materials and complex, multi-layer systems. This research demonstrates that closure covers combining layers of natural soil, native plant species, and climatic conditions to form a sustainable, functioning ecosystem will meet the technical equivalency criteria prescribed by the U.S. Environmental Protection Agency. In this study, percolation through a natural analogue and an engineered cover is simulated using the one-dimensional numerical code UNSAT-H. UNSAT-H is a Richards'-equation-based model that simulates soil water infiltration, unsaturated flow, redistribution, evaporation, plant transpiration, and deep percolation. This study incorporates conservative, site-specific soil hydraulic and vegetation parameters. Historical meteorological data are used to simulate percolation through the natural analogue and an engineered cover, with and without vegetation. This study indicates that a 3-foot (ft) cover in arid and semiarid environments is the minimum design thickness necessary to meet the U.S. Environmental Protection Agency-prescribed technical equivalency criteria of 31.5 millimeters/year and 1 × 10⁻⁷ centimeters/second for net annual percolation and average flux, respectively. Increasing cover thickness to 4 or 5 ft results in limited additional improvement in cover performance.
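For reference, UNSAT-H is built around the one-dimensional Richards equation, a standard mixed form of which is written below (symbols defined in the comment; the code's implementation details are not reproduced here):

```latex
% z positive upward; h = matric head, \theta = volumetric water
% content, K = unsaturated hydraulic conductivity, S = root-uptake
% sink representing plant transpiration.
\[
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left(
      \frac{\partial h}{\partial z} + 1 \right) \right] - S(z, t)
\]
```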
Effective, high-performance networked file systems and storage are needed to relieve I/O bottlenecks between large compute platforms. Frequently, parallel techniques such as PFTP are employed to overcome the adverse effect of TCP's congestion avoidance algorithm and achieve reasonable aggregate throughput. These techniques can suffer from end-system bottlenecks due to the protocol processing overhead and memory copies involved in moving large amounts of data during I/O. Moreover, transferring data using PFTP requires manual operation, lacking the transparency to allow for interactive visualization and computational steering of large-scale simulations from distributed locations. This paper evaluates the emerging Internet SCSI (iSCSI) protocol [2] as the file/data transport so that remote clients can transparently access data through a distributed global file system available to local clients. We started our work by characterizing the performance behavior of iSCSI in Local Area Networks (LANs). We then proceeded to study the effect of propagation delay on throughput using remote iSCSI storage and explored optimization techniques to mitigate the adverse effects of long delay in high-bandwidth Wide Area Networks (WANs). Lastly, we evaluated iSCSI in a Storage Area Network (SAN) for a global parallel filesystem. We conducted our benchmarks based on the typical usage model of large-scale scientific applications at Sandia. We demonstrated the benefit of high-performance parallel I/O to scientific applications at the IEEE 2004 Supercomputing Conference, using experiences and knowledge gained from this study.
Techniques for mitigating the adsorption of ¹³⁷Cs and ⁶⁰Co on metal surfaces (e.g., RAM packages) exposed to contaminated water (e.g., spent-fuel pools) have been developed and experimentally verified. The techniques are also effective in removing some of the ⁶⁰Co and ¹³⁷Cs that may have been adsorbed on the surfaces after removal from the contaminated water. The principle for the ¹³⁷Cs mitigation technique is based upon ion-exchange processes. In contrast, ⁶⁰Co contamination primarily resides in minute particles of crud that become lodged on cask surfaces. Crud is an insoluble Fe-Ni-Cr oxide that forms colloidal-sized particles as reactor cooling systems corrode. Because of the similarity between Ni²⁺ and Co²⁺, crud is able to scavenge and retain traces of cobalt as it forms. A number of organic compounds have a great specificity for combining with nickel and cobalt. Ongoing research is investigating the effectiveness of the chemical complexing agent EDTA with regard to its ability to dissolve the host phase (crud), thereby liberating the entrained ⁶⁰Co into a solution where it can be rinsed away.
Using a magnetic pressure drive, an absolute measurement of stress and density along the principal compression isentrope is obtained for solid aluminum to 240 GPa. Reduction of the free-surface velocity data relies on a backward integration technique, with approximate accounting for unknown systematic errors in experimental timing. Maximum experimental uncertainties are ±4.7% in stress and ±1.4% in density, small enough to distinguish between different equation-of-state (EOS) models. The result agrees well with a tabular EOS that uses an empirical universal zero-temperature isotherm.
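The data reduction rests on standard simple-wave relations of the kind sketched below (conventional notation; the paper's backward-integration implementation and timing-error treatment are not reproduced here):

```latex
% With C_L(u) the Lagrangian wave speed at in-material particle
% velocity u, the isentrope follows from
\[
\sigma = \rho_0 \int_0^{u} C_L(u')\, du', \qquad
\frac{\rho_0}{\rho} = 1 - \int_0^{u} \frac{du'}{C_L(u')}
\]
% The backward-integration step maps the measured free-surface
% velocity back to the in-material particle velocity before these
% relations are applied.
```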
Spectral imaging, where a complete spectrum is collected from each of a series of spatial locations (1D lines, 2D images or 3D volumes), is now available on a wide range of analytical tools, from electron and x-ray to ion beam instruments. With this capability to collect extremely large spectral images comes the need for automated data analysis tools that can rapidly and without bias reduce a large number of raw spectra to a compact, chemically relevant, and easily interpreted representation. It is clear that manual interrogation of individual spectra is impractical even for very small spectral images (< 5000 spectra). More typical spectral images can contain tens of thousands to millions of spectra, which, given acquisition-time constraints, may contain only 5 to 300 counts per 1000-channel spectrum. Conventional manual approaches to spectral image analysis, such as summing spectra from regions or constructing x-ray maps, are prone to bias and possibly error. One automated way to comprehensively analyze spectral image data is to utilize an unsupervised self-modeling multivariate statistical analysis method such as multivariate curve resolution (MCR). This approach has proven capable of solving a wide range of analytical problems based upon the counting of x-rays (SEM/STEM-EDX, XRF, PIXE), electrons (EELS, XPS) and ions (TOF-SIMS). As an example of the MCR approach, a STEM x-ray spectral image from a ZrB₂-SiC composite was acquired and analyzed. The data were generated in a FEI Tecnai F30-ST TEM/STEM operated at 300 kV, equipped with an EDAX SUTW x-ray detector. The spectral image was acquired with the TIA software on the STEM at 128 by 128 pixels (12 nm/pixel) with 100 ms dwell per pixel (total acquisition time was 30 minutes) and a probe of approximately the same size as each pixel. Each spectrum in the image had, on average, 500 counts. The calculation took 5 seconds on a PC workstation with dual 2.4 GHz Pentium IV Xeon processors and 2 Gbytes of RAM and resulted in four chemically relevant components, which are shown in Figure 1. The analysis region was at a triple junction of three ZrB₂ grains that contained zirconium oxide, aluminum oxide, and a glass phase. The power of unbiased statistical methods, such as MCR as applied here, is that no a priori knowledge of the material's chemistry is required. The algorithms, in this case, effectively reduced over 16,000 2000-channel spectra (64 Mbytes) to four images and four spectral shapes (72 kbytes) that represent chemical phases. This three-order-of-magnitude compression is achieved rapidly with no loss of chemical information. There is also the potential to correlate multiple analytical techniques, for example EELS and EDS in the STEM, adding the light-element sensitivity and bonding information of EELS to the more comprehensive spectral coverage of EDS.
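The MCR approach referred to here rests on a simple bilinear model of the unfolded data (standard notation; the component count p = 4 corresponds to the example above):

```latex
% The unfolded data matrix D (n_pixels x n_channels) factors into
% non-negative abundances C (n_pixels x p) and spectral shapes S
% (n_channels x p) plus noise E:
\[
D = C\, S^{\mathsf{T}} + E
\]
% For the example above, 16,000+ spectra of 2000 channels each reduce
% to p = 4 component images and spectra, the three-order-of-magnitude
% compression quoted in the text.
```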
Reducing agricultural water use in arid regions while maintaining or improving economic productivity of the agriculture sector is a major challenge. Controlled environment agriculture (CEA, or, greenhouse agriculture) affords advantages in direct resource use (less land and water required) and productivity (i.e., much higher product yield and quality per unit of resources used) relative to conventional open-field practices. These advantages come at the price of higher operating complexity and costs per acre. The challenge is to implement and apply CEA such that the productivity and resource use advantages will sufficiently outweigh the higher operating costs to provide for overall benefit and viability. This project undertook an investigation of CEA for livestock forage production as a water-saving alternative to open-field forage production in arid regions. Forage production is a large consumer of fresh water in many arid regions of the world, including the southwestern U.S. and northern Mexico. With increasing competition among uses (agriculture, municipalities, industry, recreation, ecosystems, etc.) for limited fresh water supplies, agricultural practice alternatives that can potentially maintain or enhance productivity while reducing water use warrant consideration. The project established a pilot forage production greenhouse facility in southern New Mexico based on a relatively modest and passive (no active heating or cooling) system design pioneered in Chihuahua, Mexico. Experimental operations were initiated in August 2004 and carried over into early-FY05 to collect data and make initial assessments of operational and technical system performance, assess forage nutrition content and suitability for livestock, identify areas needing improvement, and make initial assessment of overall feasibility. The effort was supported through the joint leveraging of late-start FY04 LDRD funds and bundled CY2004 project funding from the New Mexico Small Business Technical Assistance program at Sandia. Despite lack of optimization with the project system, initial results show the dramatic water savings potential of hydroponic forage production compared with traditional irrigated open field practice. This project produced forage using only about 4.5% of the water required for equivalent open field production. Improved operation could bring water use to 2% or less. The hydroponic forage production system and process used in this project are labor intensive and not optimized for minimum water usage. Freshly harvested hydroponic forage has high moisture content that dilutes its nutritional value by requiring that livestock consume more of it to get the same nutritional content as conventional forage. In most other aspects the nutritional content compares well on a dry weight equivalent basis with other conventional forage. More work is needed to further explore and quantify the opportunities, limitations, and viability of this technique for broader use. Collection of greenhouse environmental data in this project was uniquely facilitated through the implementation and use of a self-organizing, wirelessly networked, multi-modal sensor system array with remote cell phone data link capability. Applications of wirelessly networked sensing with improved modeling/simulation and other Sandia technologies (e.g., advanced sensing and control, embedded reasoning, modeling and simulation, materials, robotics, etc.) can potentially contribute to significant improvement across a broad range of CEA applications.
Video and image data are knowledge-rich sources of information, but their utility for current and future systems is limited without autonomous methods for understanding and characterizing their content. Semantic-based video understanding may benefit systems dedicated to the detection of insiders, alarm patterns, unauthorized activities in material monitoring applications, etc. A direct benefit of this technology is not only intelligent alarm analysis, but also the ability to browse and perform query-based searches for useful and interesting information after video data have been acquired and stored. These searches can provide a tremendous benefit for use in intelligence agency, government, military, and DOE site investigations. This report provides an initial investigation into the algorithms and methods needed to characterize and understand video content. Such algorithms include background modeling, detection of dynamic image regions, grouping of dynamic pixels into coherent objects, and robust tracking strategies. With solid approaches to these problems in place, analysis can proceed to recognize distinctive objects and their motions, leading to semantic-based video searches.
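The first two stages of such a pipeline, background modeling and grouping of dynamic pixels into objects, can be illustrated with a common baseline; the sketch below uses OpenCV's MOG2 subtractor and connected components as stand-ins for the report's (unspecified) algorithms, and the video file name is hypothetical.

```python
# Background modeling and dynamic-region grouping sketch.
import cv2

cap = cv2.VideoCapture("surveillance.avi")     # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                     # per-pixel dynamic/static decision
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    # group dynamic pixels into coherent objects via connected components
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    boxes = [tuple(stats[i][:4]) for i in range(1, n)
             if stats[i][cv2.CC_STAT_AREA] > 100]   # (x, y, w, h) per object
cap.release()
```

Robust tracking would then associate these boxes across frames, e.g., by appearance and motion models, before any semantic labeling is attempted.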
This SAND report provides the technical progress through October 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important in environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex, more heterogeneous, and coupled to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org. Acknowledgment: We gratefully acknowledge the contributions of the GTL Project Team as follows: Grant S. Heffelfinger1*, Anthony Martino2, Andrey Gorin3, Ying Xu10,3, Mark D. Rintoul1, Al Geist3, Matthew Ennis1, Hashimi Al-Hashimi8, Nikita Arnold3, Andrei Borziak3, Bianca Brahamsha6, Andrea Belgrano12, Praveen Chandramohan3, Xin Chen9, Pan Chongle3, Paul Crozier1, PguongAn Dam10, George S. Davidson1, Robert Day3, Jean Loup Faulon2, Damian Gessler12, Arlene Gonzalez2, David Haaland1, William Hart1, Victor Havin3, Tao Jiang9, Howland Jones1, David Jung3, Ramya Krishnamurthy3, Yooli Light2, Shawn Martin1, Rajesh Munavalli3, Vijaya Natarajan3, Victor Olman10, Frank Olken4, Brian Palenik6, Byung Park3, Steven Plimpton1, Diana Roe2, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Chris Stork1, Charlie Strauss5, Zhengchang Su10, Edward Thomas1, Jerilyn A. Timlin1, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Gong-Xin Yu3, Grover Yip8, Zhaoduo Zhang2, Erik Zuiderweg8 *Author to whom correspondence should be addressed (gsheffe@sandia.gov) 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Local topological modification is widely used to improve mesh quality after automatic generation of tetrahedral and quadrilateral meshes. These same techniques are also used to support adaptive refinement of these meshes. In contrast, few methods are known for locally modifying the topology of hexahedral meshes. Most efforts to do this have been based on fixed transition templates or global refinement. An exception is a dual-based 'pillowing' method, which, while local, is still quite restricted in its application and is typically applied in a template-based fashion. In this presentation, I will describe the generalization of a dual-based approach to the local topological modification of hex meshes and its application to cleaning up hexahedral meshes. A set of three operations for locally modifying hex mesh topology has been shown to reproduce the so-called 'flipping' operations described by Bern et al. as well as other commonly used refinement templates. I will describe the implementation of these operators and their application to real meshes. Challenging aspects of this work have included visualization of a hex mesh and its dual (especially for poor-quality meshes); the incremental modification of both the primal (i.e., the mesh) and the dual simultaneously; and the interactive steering of these operations with the goal of improving hex meshes which would otherwise have unacceptable quality. These aspects will be discussed in the context of improving hex meshes generated by curve contraction-based whisker weaving. Application of these techniques to improving other hexahedral mesh types, for example those resulting from tetrahedral subdivision, will also be discussed.
Many highly oscillatory circuits have a wide separation of time scales between the underlying oscillation and the behavior of interest. This is particularly true of communication circuits. Multiple-time Partial Differential Equation (MPDE) methods offer substantial speed-up for these circuits by introducing a periodic artificial time variable that represents the highly oscillatory behavior. This leaves just the slowly changing behavior of interest, which can be integrated with much larger steps. One problem of particular interest is the larger initial condition, now a full periodic waveform rather than a single value, that must be specified along this artificial time variable. One possible solution is to formulate an optimization problem in the hope of increasing the step sizes taken in the slow time direction. This talk will discuss one possible unconstrained optimization problem for determining this initial condition. Numerical results and comparisons to several other initial condition strategies will be presented, in addition to MPDE background and implementation issues.
Experimental evidence and corresponding theoretical analyses have led to the conclusion that the system composed of Xe hollow atom states, which produce a characteristic Xe(L) spontaneous emission spectrum at λ ≈ 2.9 Å and arise from the excitation of Xe clusters with an intense pulse of 248 nm radiation propagating in a self-trapped plasma channel, closely represents the ideal situation sought for amplification in the multikilovolt region. The key innovation that is central to all aspects of the proposed work is the controlled compression of power to the level (~10^20 W/cm^3) corresponding to the maximum achieved by thermonuclear events. Furthermore, since the x-ray power that is produced appears in a coherent form, an entirely new domain of physical interaction is encountered that involves states of matter that are both highly excited and highly ordered. Moreover, these findings lead to the concept of 'photon staging', an idea which offers the possibility of advancing the power compression by an additional factor of ~10^9 to ~10^29 W/cm^3. In this completely unexplored regime, γ-ray production (ħω_γ ≈ 1 MeV) is expected to be a leading process. A new technology for the production of very highly penetrating radiation would then be available. The Xe(L) source at ħω_x ≈ 4.5 keV can be applied immediately to the experimental study of many aspects of the coupling of intense femtosecond x-ray pulses to materials. In a joint collaboration, the UIC group and Sandia plan to explore the following areas: (1) anomalous electromagnetic coupling to solid state materials, (2) 3D nanoimaging of solid matter and hydrated biological materials (e.g., interchromosomal linkers and actin filaments in muscle), and (3) EMP generation with attosecond x-rays.
Because many solid objects, both stationary and mobile, will be present in an indoor environment, the design of an indoor aerosol-cloud-finding lidar (light detection and ranging) instrument presents a number of challenges. The cloud finder must be able to discriminate between these solid objects and aerosol clouds as small as 1 meter in depth in order to probe suspect clouds. While a near-IR (~1.5-µm) laser is desirable for eye safety, aerosol scattering cross sections are significantly lower in the near-IR than at visible or UV wavelengths. The receiver must deal with a large dynamic range, since the backscatter from solid objects will be orders of magnitude larger than that from aerosol clouds. Fast electronics with significant noise contributions will be required to obtain the necessary temporal resolution. We have developed a laboratory instrument to detect aerosol clouds in the presence of solid objects. In parallel, we have developed a lidar performance model for performing trade studies. Careful attention was paid to component details so that results obtained in this study could be applied toward the development of a practical instrument. The amplitude and temporal shape of the signal return are analyzed for discrimination of aerosol clouds in an indoor environment. We have assessed the feasibility and performance of candidate approaches for a fieldable instrument. With the near-IR PMT and a 1.5-µm laser source providing 20-µJ pulses, we estimate a bio-aerosol detection limit of 3000 particles/l.
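Trade studies of this kind typically rest on the single-scatter lidar equation; the sketch below evaluates it for an indoor geometry. All parameter values (aperture, efficiency, extinction, backscatter coefficient) are illustrative assumptions, not the instrument's specifications.

```python
# Single-scatter lidar return: P(R) = E * (c/2) * beta * (A/R^2) * T(R)^2 * eta
import numpy as np

E = 20e-6                # pulse energy, J (from the abstract)
c = 3.0e8                # speed of light, m/s
A = np.pi * 0.05**2      # receiver aperture area, m^2 (10-cm optic, assumed)
eta = 0.3                # net optical/detector efficiency (assumed)
alpha = 1e-4             # atmospheric extinction, 1/m (assumed)
beta_aer = 1e-6          # aerosol backscatter, 1/(m sr) (assumed)

R = np.linspace(5.0, 50.0, 200)          # indoor ranges, m
T2 = np.exp(-2 * alpha * R)              # two-way transmission
P = E * (c / 2) * beta_aer * (A / R**2) * T2 * eta

# A solid target returns orders of magnitude more power at the same range,
# which is the dynamic-range problem noted above.
print(f"aerosol return at 20 m: {np.interp(20.0, R, P):.2e} W")
```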
Solution-based synthesis is a powerful approach for creating nano-structured materials. Although there have been significant recent successes in its application to fabricating nanomaterials, the general principles that control solution synthesis are not well understood. The purpose of this LDRD project was to develop the scientific principles required to design and build unique nanostructures in crystalline oxides and II/VI semiconductors using solution-based molecular self-assembly techniques. The ability to synthesize these materials in a range of different nano-architectures (from controlled-morphology nanocrystals to surface-templated 3-D structures) has provided the foundation for new opportunities in such areas as interactive interfaces for optics, electronics, and sensors. The homogeneous precipitation of ZnO in aqueous solution served as the primary model system for the project. We developed a low-temperature, aqueous solution synthesis route for preparation of large arrays of oriented ZnO nanostructures. Through control of heterogeneous nucleation and growth, methods to predictively alter the ZnO microstructures by tailoring the surface chemistry of the crystals were established. Molecular mechanics simulations, involving single-point energy calculations and full geometry optimizations, were developed to assist in selecting appropriate chemical systems and understanding physical adsorption and ultimately growth mechanisms in the design of oxide nanoarrays. The versatility of peptide chemistry in controlling the formation of cadmium sulfide nanoparticles and zinc oxide/cadmium sulfide heterostructures was also demonstrated.
There is a general lack of compact electromagnetic radiation sources between 1 and 10 terahertz (THz). This is a challenging spectral region, lying between optical devices at high frequencies and electronic devices at low frequencies. Although technologically very underdeveloped, the THz region promises to be of significant technological importance, yet demonstrating its relevance has proven difficult owing to the immaturity of the area. While the last decade has seen much experimental work on ultra-short pulsed terahertz sources, many applications will require continuous wave (cw) sources, which are just beginning to demonstrate adequate performance for application use. In this project, we proposed examination of two potential THz sources based on intersubband semiconductor transitions, which were as yet unproven. In particular, we wished to explore quantum cascade laser based sources and electronic harmonic generators. Shortly after the beginning of the project, we shifted our emphasis to the quantum cascade lasers due to two events: the publication of the first THz quantum cascade laser by another group, thereby proving feasibility, and the temporary shutdown of the UC Santa Barbara free-electron lasers, which were to be used as the pump source for the harmonic generation. The development efforts focused on two separate cascade laser thrusts. The ultimate goal of the first thrust was for a quantum cascade laser to simultaneously emit two mid-infrared frequencies differing by a few THz and to use these to pump a non-linear optical material to generate THz radiation via parametric interactions in a specifically engineered intersubband transition. While the final goal was not realized by the end of the project, many of the completed steps leading to the goal will be described in the report. The second thrust was to develop QC lasers emitting directly at terahertz frequencies. This is simpler than a mixing approach, and has now been demonstrated by a few groups with wavelengths spanning 65-150 microns. We developed and refined the MBE growth of THz structures for both internally and externally designed QC lasers. Processing-related issues continued to plague many of our demonstration efforts and will also be addressed in this report.
A numerical screening study of the interaction between a penetrator and a geological target with a preformed hole has been carried out to identify the main parameters affecting the penetration event. The planning of the numerical experiment was based on the orthogonal array OA(18,7,3,2), which allows 18 simulation runs with 7 parameters at 3 levels each. The array's strength of 2 also allows for two-factor interaction studies. The seven parameters chosen for this study are: penetrator offset, hole diameter, hole taper, vertical and horizontal velocity of the penetrator, angle of attack of the penetrator, and target material. The analysis of the simulation results has been based on main-effects plots and analysis of variance (ANOVA), and it has been performed using three metrics: the maximum values of the penetration depth, penetrator deceleration, and plastic strain in the penetrator case. This screening study shows that target material has a major influence on penetration depth and penetrator deceleration, while penetrator offset has the strongest effect on the maximum plastic strain.
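The main-effects and ANOVA analysis reduces to grouping the response by each factor's level and testing whether the level means differ. A minimal sketch follows; the design matrix and "depth" response are synthetic placeholders, not the actual OA(18,7,3,2) runs.

```python
# Main-effects means and one-way ANOVA per factor for a screening design.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
factors = ["offset", "hole_dia", "taper", "v_vert", "v_horiz", "aoa", "material"]
design = pd.DataFrame(rng.integers(0, 3, size=(18, 7)), columns=factors)
design["depth"] = rng.normal(1.0, 0.1, 18) + 0.3 * design["material"]  # fake metric

for fac in factors:
    means = design.groupby(fac)["depth"].mean()          # main-effects plot data
    groups = [g["depth"].to_numpy() for _, g in design.groupby(fac)]
    F, p = f_oneway(*groups)                             # does this factor matter?
    print(f"{fac:9s} level means {means.round(3).tolist()}  F={F:.2f} p={p:.3f}")
```

With a real OA(18,7,3,2) matrix in place of the random levels, the level means are balanced estimates of the main effects.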
The proposed Yucca Mountain repository is anticipated to be the first facility for long-term disposal of commercial spent nuclear fuel and high-level radioactive waste in the United States. The facility, located in the southern Nevada desert, is currently in the planning stages with initial exploratory excavations completed. It is an underground facility mined into the tuffaceous volcanic rocks that sit above the local water table. The focus of the work described in this paper is the development of radionuclide absorbers or 'getter' materials for neptunium (Np), iodine (I), and technetium (Tc) for potential deployment in the repository. 'Getter' materials retard the migration of radionuclides through sorption, reduction, or other chemical and physical processes, thereby slowing or preventing the release and transport of radionuclides. An overview of the objectives and approaches utilized in this work with respect to materials selection and modeling of ion 'getters' is presented. The benefits of the 'getter' development program to the United States Department of Energy (US DOE) are outlined.
We have investigated the potential for intense particle beam surface modification to improve the mechanical properties of materials commonly used in the human body for contact surfaces in, for example, hip and knee implants. The materials studied include Ultra-High Molecular Weight Polyethylene (UHMWPE), Ti-6Al-4V (titanium alloy), and Co-Cr-Mo alloy. Samples in flat form were exposed to both ion and electron beams (UHMWPE), and to ion beam treatment (metals). Post-exposure analysis indicated degradation of the bulk properties of the UHMWPE, except at the lightest ion fluence tested. A surface-alloyed Hf/Ti layer on the Ti-6Al-4V was found to improve surface wear durability and to have favorable biocompatibility. A promising nanolaminate ceramic coating was applied to the Co-Cr-Mo to improve surface hardness.
Large-scale three dimensional Discrete Element simulations of granular flow in a modified split-bottom Couette cell for packs of up to 180,000 mono-disperse spheres are presented and compared with experiments. We find that the velocity profiles collapse onto a universal curve not only at the surface but also in the bulk of the pack until slip between layers becomes significant. In agreement with experiment, we find similar relations between the cell geometry and parameters involved in rescaling the velocities at the surface and in the bulk. Likewise, a change in the shape of the shear zone is observed as predicted for tall packs once the center of the shear zone is correctly defined; although the transition does not appear to be first order. Finally, the effect of cohesion is considered as a means to test the theoretical predictions.
This report summarizes the results of a five-year effort to understand the mechanisms and develop models that predict the corrosion of refractories in oxygen-fuel glass-melting furnaces. Thermodynamic data for the Si-O-(Na or K) and Al-O-(Na or K) systems are reported, allowing equilibrium calculations to be performed to evaluate corrosion of silica- and alumina-based refractories under typical furnace operating conditions. A detailed analysis of processes contributing to corrosion is also presented. Using this analysis, a model of the corrosion process was developed and used to predict corrosion rates in an actual industrial glass furnace. The rate-limiting process is most likely the transport of NaOH(gas) through the mass-transport boundary layer from the furnace atmosphere to the crown surface. Corrosion rates predicted on this basis are in better agreement with observation than those produced by any other mechanism, although the absolute values are highly sensitive to the crown temperature and the NaOH(gas) concentration at equilibrium and at the edge of the boundary layer. Finally, the project explored the development of excimer laser induced fragmentation (ELIF) fluorescence spectroscopy for the detection of gas-phase alkali hydroxides (e.g., NaOH) that are predicted to be the key species causing accelerated corrosion in these furnaces. The development of ELIF and the construction of field-portable instrumentation for glass furnace applications are reported and the method is shown to be effective in industrial settings.
The large number of government and industry activities supporting the Unit of Action (UA), with attendant documents, reports, and briefings, can overwhelm decision-makers with an overabundance of information that hampers the ability to make quick decisions, often resulting in a form of gridlock. In particular, the large and rapidly increasing amounts of data and data formats stored on UA Advanced Collaborative Environment (ACE) servers have led to the realization that it has become impractical and even impossible to perform manual analysis leading to timely decisions. UA Program Management (PM UA) has recognized the need to implement a Decision Support System (DSS) on UA ACE. The objective of this document is to research the commercial Knowledge Discovery and Data Mining (KDDM) market and publish the results in a survey. Furthermore, a ranking mechanism based on UA ACE-specific criteria has been developed and applied to a representative set of commercially available KDDM solutions. In addition, an overview of four R&D areas identified as critical to the implementation of DSS on ACE is provided. Finally, a comprehensive database containing detailed information on surveyed KDDM tools has been developed and is available upon customer request.
This paper describes the development of a set of software tools useful for analyzing ultra-wideband (UWB) antennas and structures. These tools are used to perform finite difference time domain (FDTD) simulation of a conical antenna with continuous wave (CW) and UWB pulsed excitations. The antenna is analyzed using spherical coordinate-based FDTD equations that are derived from first principles. The simulation results for CW excitation are compared to simulation and measured results from published sources; the results for UWB excitation are new.
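As a much simplified stand-in for the spherical-coordinate formulation derived in the paper, the sketch below shows the structure of a leapfrog FDTD update in one Cartesian dimension with a pulsed, UWB-like source; grid size, source shape, and normalized units are illustrative assumptions.

```python
# Minimal 1-D Yee-style FDTD update with a pulsed (UWB-like) soft source.
import numpy as np

nz, nt = 400, 1000
ez = np.zeros(nz)            # electric field samples
hy = np.zeros(nz - 1)        # magnetic field, staggered half a cell
S = 1.0                      # Courant number c*dt/dz (exact in 1-D at S = 1)

for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])                  # update H from the curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])            # update E from the curl of H
    t = (n - 30) / 10.0
    ez[nz // 2] += (1 - 2 * t**2) * np.exp(-t**2)  # Ricker-like wideband pulse
```

In the spherical-coordinate version, the update coefficients acquire 1/r and 1/(r sin(theta)) metric factors, but the leapfrog structure is the same.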
This document contains a description of the verification and validation process used for the RADTRAN 5.5 code. The verification and validation process ensured that the proper calculational models and mathematical and numerical methods were used in the RADTRAN 5.5 code for the determination of risk and consequence assessments. The differences between RADTRAN 5 and RADTRAN 5.5 are the addition of tables, an expanded isotope library, and an additional user-defined meteorological option for accident dispersion.
As interest in 3D face recognition increases, so does the importance of the initial alignment problem. In this paper, we present a method that utilizes the registered 2D color and range images of a face to automatically identify the eyes, nose, and mouth. These features are important for initially aligning faces in both standard 2D and 3D face recognition algorithms. To run as fast as possible, our algorithm focuses on the 2D color information. This allows the algorithm to run in approximately 4 seconds on a 640×480 image with registered range data. On a database of 1,500 images, the algorithm achieved a facial-feature detection rate of 99.6%, with 0.4% of the images skipped due to hair obstructing the face.
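For readers wanting a concrete starting point, the sketch below uses OpenCV's stock Haar cascades to locate a face and then the eyes inside it; this is a common 2D baseline standing in for the paper's own detector, and the image file name is hypothetical. The range-data registration step is not reproduced.

```python
# 2D color facial-feature detection baseline with stock Haar cascades.
import cv2

img = cv2.imread("face_640x480.png")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades
                                + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.1, 5):
    roi = gray[y:y + h, x:x + w]                     # restrict eye search to face
    eyes = eye_cc.detectMultiScale(roi, 1.1, 5)
    print("face:", (x, y, w, h), "eyes:", [tuple(e) for e in eyes])
```

Detected 2D feature locations can then index into the registered range image to obtain the 3D landmark positions used for alignment.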
A continuum-scale, evolutionary model of bubble nucleation, growth and He release for aging metal tritides is described which accounts for major features of the tritide database. Bubble nucleation, modeled as self-trapping of interstitially diffusing He atoms, occurs during the first few days following tritium introduction into the metal. Bubble growth by dislocation loop punching yields good agreement between He atomic volumes and bubble pressures determined from bulk swelling and 3He NMR data. The bubble spacing distribution determined from NMR is shown to remain fixed with age, justifying the separation of nucleation and growth phases and providing a sensitive test of the growth formulation. Late in life, bubble interactions are proposed to produce cooperative stress effects, which lower the bubble pressure. Helium generated near surfaces and surface-connected porosity accounts for the low-level early helium release. Use of an average ligament stress criterion predicts an onset of inter-bubble fracture in good agreement with the He/Metal ratio observed for rapid He release. From the model, it is concluded that He retention can be controlled through control of bubble nucleation.
The diminished response of thermoluminescent phosphors over time is a well-documented challenge to thermoluminescent dosimetry. Wide ranges in fading rates for various phosphor types have been reported, making it necessary for many external dosimetry programs to perform individual studies on thermoluminescent fade. Sandia National Laboratories currently uses the Thermo 8802 LiF:Mg,Ti thermoluminescent dosimeter (TLD) in its personnel external dosimetry program. Doses received in the field are calculated by applying a fade algorithm published by the manufacturer to TLD readings. Since the algorithm was established by characterizing the diminished response of a TLD similar to the 8802, Sandia chose to model its fade study after the analysis done by Thermo. As a result, the parameters of each experiment were comparable, and data from the two studies were compared to determine whether or not the current algorithm should be modified specifically for use at Sandia. Cards were irradiated using an internal 90Sr/90Y source, and pre- and post-irradiation fading rates were monitored over a period of 18 wk. While significant fading was demonstrated, results closely matched those found in the original Thermo study.
The molten salt Flibe, a combination of lithium and beryllium fluorides studied for molten salt fission reactors, has been proposed as a breeder and coolant for fusion applications. The melting points of 2LiF-BeF2 and LiF-BeF2 are 460°C and 363°C, but LiF-BeF2 is rather viscous and has less lithium for breeding. In the Advanced Power Extraction (APEX) Program, concepts with a free flowing liquid for the first wall and blanket were investigated. Flinabe (a mixture of LiF, BeF2 and NaF) was selected for a molten salt design because a melting temperature below 350°C appeared possible and this provided an attractive operating temperature window for a reactor. To confirm that a ternary salt with a low melting temperature existed, several combinations of the fluoride salts, LiF, NaF and BeF2, were melted in a stainless steel crucible under vacuum. One had an apparent melting temperature of 305°C. The test system, preparation of the mixtures, melting procedures and temperature curves for the melting and cooling are presented along with the apparent melting points. Thermal modeling of the salt pool and crucible is reported in an accompanying paper.
PDC drill bit performance has been greatly improved over the past three decades by innovations in bit design and how these designs are applied. The next leap forward is most likely to come from using high-speed, real-time downhole data to optimize the performance of these sophisticated bits on an application-by-application basis. By effectively managing conditions of lateral, axial, and torsional acceleration, damage to cutting structures can be minimized for improved penetration rates. Avoiding these damaging vibrations is essential to increasing bit durability and improving overall drilling economics. This paper describes one set of independent drilling optimization results obtained as part of a series of controlled demonstrations of PDC bits. Sandia National Laboratories managed this work on behalf of the U.S. Department of Energy (DOE). The effort was organized as a Cooperative Research and Development Agreement (CRADA) established between Sandia and four bit manufacturers, with DOE funding and in-kind contributions by the industry partners. The goal of this CRADA was to demonstrate drag bit performance in formations with degrees of hardness typical of those encountered while drilling geothermal wells. The test results indicate that the surface weight-on-bit (WOB), revolutions per minute (RPM), and torque readings traditionally used to guide adjustments in the drilling parameters do not always provide the true picture of what is really taking place at the bit. Instead, a holistic approach combining traditional methods of optimization together with high-speed, real-time data enables far better decisions for improving bit performance and avoiding damaging situations that may have otherwise gone unnoticed.
The Z-Pinch Power Plant uses the results from Sandia National Laboratories' Z accelerator in a power plant application to generate energy pulses using inertial confinement fusion. A collaborative project has been initiated by Sandia to investigate the scientific principles of a power generation system using this technology. Research is under way to develop an integrated concept that describes the operational issues of a 1000 MW electrical power plant. Issues under consideration include: 1-20 gigajoule fusion pulse containment, repetitive mechanical connection of heavy hardware, generation of terawatt pulses every 10 seconds, recycling of ten thousand tons of steel, and manufacturing of millions of hohlraums and capsules per year. Additionally, waste generation and disposal issues are being examined. This paper describes the current concept for the plant and also the objectives for future research.
An experimental investigation is made into the fluid mechanics and heat transfer of a circular cylinder immersed in a wall-bounded turbulent mixed-convection flow of water. The cylinder is oriented spanwise to the forced channel flow and within the thermal boundary layer of the heated lower wall. The flow channel is capped with a cold, near-adiabatic upper wall producing a fully turbulent gap Rayleigh number of 10^8. A low-speed crossflow is applied to advect the turbulent thermal plumes over the cylinder surface. We present spatially resolved cylinder-surface heat-flux data alongside 2-D PIV imaging of the streamwise and wall-normal velocity components for two flow conditions in the mixed-convection heat-transfer regime. The measured cylinder-wake flowfield reflects the complex coupling between the separated wake flow, the highly turbulent freestream, and the buoyant wall and cylinder boundary layers. A method for measurement of spatially resolved surface heat fluxes based on the measured cylinder-surface temperature distribution and a well-posed two-dimensional solution to the conduction problem in the cylinder wall is presented. The resulting spatially resolved flux measurements show enhanced surface heat transfer, which results from the intense buoyancy-generated free-stream turbulence and mixing in the cylinder wake. This work extends the literature on thermal convection with crossflow well into the turbulent regime and is, to our knowledge, the first investigation of surface heat transfer to an object of engineering importance placed in this type of turbulent mixed-convection flowfield. The data are currently being utilized for validation of mixed-convection turbulence models at Sandia, and comparisons between the computational and experimental results are presented.
Forward-to-reverse bias step-recovery measurements were performed on In0.07Ga0.93N/GaN and Al0.36Ga0.64N/Al0.46Ga0.54N quantum-well (QW) light-emitting diodes grown on sapphire. With the QW sampling the minority-carrier hole density at a single position, distinctive two-phase optical decay curves were observed. Using diffusion equation solutions to self-consistently model both the electrical and optical responses, hole transport parameters τp = 758 ± 44 ns, Lp = 588 ± 45 nm, and µp = 0.18 ± 0.02 cm²/V·s were obtained for GaN. The mobility was thermally activated with an activation energy of 52 meV, suggesting trap-modulated transport. Optical measurements of sub-bandgap peaks exhibited slow responses approaching the bulk lifetime. For Al0.46Ga0.54N, a longer lifetime of τp = 3.0 µs was observed, and the diffusion length was shorter, Lp = 280 nm. Mobility was an order of magnitude smaller than in GaN, µp ≈ 10⁻² cm²/V·s, and was insensitive to temperature, suggesting hole transport through a network of defects.
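As a consistency check (ours, not part of the original abstract), the quoted GaN parameters satisfy the Einstein relation, with kT/q ≈ 0.0259 V at room temperature:

```latex
D_p = \frac{kT}{q}\,\mu_p \approx 0.0259 \times 0.18
    \approx 4.7\times10^{-3}\ \mathrm{cm^2/s},
\qquad
L_p = \sqrt{D_p\,\tau_p}
    = \sqrt{4.7\times10^{-3}\times 758\times10^{-9}}\ \mathrm{cm}
    \approx 594\ \mathrm{nm},
```

in agreement with the measured 588 ± 45 nm. The same check with µp ≈ 10⁻² cm²/V·s and τp = 3.0 µs reproduces the quoted Lp ≈ 280 nm for the AlGaN alloy.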
We report micro-Raman studies of self-heating in an AlGaN/GaN heterostructure field-effect transistor using below (visible, 488.0 nm) and near (UV, 363.8 nm) GaN band-gap excitation. The shallow penetration depth of the UV light allows us to measure the temperature rise (ΔT) in the two-dimensional electron gas (2DEG) region of the device between drain and source. Visible light gives the average ΔT in the GaN layer, and that of the SiC substrate, at the same lateral position. Combining the two measurements, we depth-profile the self-heating. The measured ΔT in the 2DEG is consistently more than twice the average GaN-layer value. Electrical and thermal transport properties are simulated. We identify a hotspot, located at the gate edge in the 2DEG, as the prevailing factor in the self-heating.
Order-of-accuracy verification is necessary to ensure that software correctly solves a given set of equations. One method to verify the order of accuracy of a code is the method of manufactured solutions. In this study, a manufactured solution has been derived and implemented that allows verification of not only the Euler, Navier-Stokes, and Reynolds-Averaged Navier-Stokes (RANS) equation sets, but also some of their associated boundary conditions (BC's): slip, no-slip (adiabatic and isothermal), and outflow (subsonic, supersonic, and mixed). Order-of-accuracy verification has been performed for the Euler and Navier-Stokes equations and these BC's in a compressible computational fluid dynamics code. All of the results shown are on skewed, non-uniform meshes. RANS results will be presented in a future paper. The observed order of accuracy was lower than the expected order of accuracy in two cases. One of these cases resulted in the identification and correction of a coding mistake in the CHAD gradient correction that was reducing the observed order of accuracy. This mistake would have been undetectable on a Cartesian mesh. During the search for the CHAD gradient correction problem, an unrelated coding mistake was found and corrected. The other case in which the observed order of accuracy was less than expected was a test of the slip BC, although no specific coding or formulation mistakes have yet been identified. After the correction of the identified coding mistakes, all of the aforementioned equation sets and BC's demonstrated the expected (or at least acceptable) order of accuracy except the slip condition.
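The observed order of accuracy in such studies comes from comparing discretization error norms on systematically refined meshes; a minimal sketch of the computation (error values below are illustrative, not from this study):

```python
# Observed order of accuracy from two mesh levels with refinement ratio r.
import math

def observed_order(e_coarse, e_fine, r):
    """p = ln(E_coarse / E_fine) / ln(r)."""
    return math.log(e_coarse / e_fine) / math.log(r)

e_h, e_h2 = 3.2e-4, 8.1e-5        # global error norms vs. manufactured solution
print(f"observed order: {observed_order(e_h, e_h2, 2.0):.2f}")   # ~1.98
```

A second-order scheme should drive this value toward 2 under refinement; a persistent shortfall, as in the slip-BC case above, signals a coding or formulation mistake.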
Rate constants for the thermal dissociation of Si2H6 are predicted with a novel transition state model. The saddle points for dissociation on the Si2H6 potential energy surface are lower in energy than the corresponding separated products, as confirmed by high-level ab initio quantum mechanical calculations. Thus, the dissociations of Si2H6 to produce SiH2 + SiH4 (R1) and H3SiSiH + H2 (R2) both proceed through tight inner transition states followed by loose outer transition states. The present 'dual' transition state model couples variational phase space theory treatments of the outer transition states with ab initio based fixed harmonic vibrator treatments of the inner transition states to obtain effective numbers of states for the two transition states acting in series. It is found that, at least near room temperature, such a dual transition state model is generally required for the proper description of each of the dissociations. Only at quite high temperatures, i.e., above 2000 K for (R1) and 600 K for (R2), does a single fixed inner transition state provide an adequate description. Similarly, only at quite low temperatures (below 100 and 10 K for (R1) and (R2), respectively) does a single outer transition state provide an adequate description. Pressure-dependent rate constants are obtained from solutions to the multichannel master equation. These calculations confirm that dissociation channel (R2) is negligible under conditions relevant to thermal chemical vapor deposition (CVD) processes. Rate constants for the chemical activation reactions, SiH2 + SiH4 → Si2H6 (R-1) and SiH2 + SiH4 → H3SiSiH + H2 (R3), are also evaluated within the dual transition state model. It is found that reaction R3 is the dominant channel at low pressures and high temperatures, i.e., below 100 Torr for temperatures above 1100 K.
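"Two transition states acting in series" conventionally refers to the series combination of flux bottlenecks; a sketch of that standard form (assumed here, the paper's exact expressions may differ):

```latex
\frac{1}{N^{\ddagger}_{\mathrm{eff}}(E,J)}
  = \frac{1}{N^{\ddagger}_{\mathrm{inner}}(E,J)}
  + \frac{1}{N^{\ddagger}_{\mathrm{outer}}(E,J)},
\qquad
k(E,J) = \frac{N^{\ddagger}_{\mathrm{eff}}(E,J)}{h\,\rho(E,J)}.
```

The smaller number of states dominates the sum, which is consistent with a single inner or outer bottleneck sufficing in the high- and low-temperature limits quoted above.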
In this article, we discuss in detail the addition of hydrogen atoms to diacetylene and the reverse dissociation reactions, H + C4H2 ↔ i-C4H3 (R1) and H + C4H2 ↔ n-C4H3 (R2). The theory utilizes high-level electronic structure methodology to characterize the potential energy surface, Rice-Ramsperger-Kassel-Marcus (RRKM) theory to calculate microcanonical/J-resolved rate coefficients, and a two-dimensional master-equation approach to extract phenomenological (thermal) rate coefficients. Comparison is made with experimental results where they are available. The rate coefficients k1(T, p) and k2(T, p) are cast in forms that can be used in chemical kinetic modeling. In addition, we predict values of the heats of formation of i-C4H3 and n-C4H3 and discuss their importance in flame chemistry. Our basis-set-extrapolated, quadratic-configuration-interaction with single and double excitations (and triple excitations added perturbatively), QCISD(T), predictions of these heats of formation at 298 K are 130.8 kcal/mol for n-C4H3 and 119.3 kcal/mol for the i-isomer; multireference CI calculations with a nine-electron, nine-orbital, complete-active-space (CAS) reference wavefunction give just slightly larger values for these parameters. Our results are in good agreement with the recent focal-point analysis of Wheeler et al. (J. Chem. Phys. 2004, 121, 8800-8813), but differ substantially from the earlier diffusion Monte Carlo predictions of Krokidis et al. for ΔH°f,298(n-C4H3).
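The "forms that can be used in chemical kinetic modeling" are typically modified Arrhenius fits, k(T) = A T^n exp(-Ea/RT); the sketch below fits that form in log space to synthetic data standing in for the computed rate coefficients (parameter values are illustrative assumptions).

```python
# Fit a modified Arrhenius form, k(T) = A * T**n * exp(-Ea / (R * T)).
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol K)

def log_k(T, logA, n, Ea):
    return logA + n * np.log(T) - Ea / (R * T)

T = np.array([500., 800., 1100., 1400., 1700., 2000.])
k = 1e8 * T**1.5 * np.exp(-15000.0 / (R * T))          # synthetic k(T) data

(logA, n, Ea), _ = curve_fit(log_k, T, np.log(k), p0=(np.log(1e8), 1.0, 1e4))
print(f"A = {np.exp(logA):.3g}, n = {n:.2f}, Ea = {Ea:.0f} J/mol")
```

Fitting in log space weights all temperatures comparably, which matters when k(T) spans many orders of magnitude.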
Many accelerators at Sandia National Laboratories utilize the Rimfire gas switch for high-voltage, high-power switching. Future accelerators will have increased performance requirements for switching elements. When designing improved versions of the Rimfire switch, there is a need for quick and accurate simulation of the electrical effects of geometry changes. This paper presents an advanced circuit model of the Rimfire switch that can be used for these simulations. The development of the model is shown along with comparisons to past models and experimental results.
The Sandia Lightning Simulator at Sandia National Laboratories can provide up to 200 kA for a simulated single lightning stroke, 100 kA for a subsequent stroke, and hundreds of amperes of continuing current. It has recently been recommissioned after a decade of inactivity, and its single-stroke capability has been demonstrated. The simulator capabilities, basic design components, upgrades, and diagnostic capabilities are discussed in this paper.
The WLF equation is typically used to describe the dependence of polymer mobility on temperature at atmospheric pressure. Tests at different pressures would, at the least, require a different WLF parameterization. Completely different tests, for example probing the temperature dependence of mobility at constant density, would require even greater modifications. By performing molecular dynamics simulations on simple chain molecules equilibrated at different thermodynamic states, we have shown that the mobility depends in a more general sense on the potential energy density of the system. That is, mobilities for any equilibrated state collapse onto one master curve when plotted against the potential energy density. Moreover, this relationship can be fit by either a 'generalized' WLF equation or by a power-law relationship observed in critical phenomena. When this mobility relationship is used within a rheologically simple, thermodynamically consistent, viscoelastic framework, quantitative agreement is seen between experimental data and theoretical predictions on a range of tests spanning enthalpy relaxation, mechanical yield, and physical aging.
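For reference, the conventional WLF shift factor has the form below; the 'generalized' variant discussed here replaces the temperature dependence with a dependence on potential energy density (symbols are the standard ones, not values fitted in this work):

```latex
\log_{10} a_T = \frac{-C_1\,(T - T_{\mathrm{ref}})}{C_2 + (T - T_{\mathrm{ref}})}.
```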
In this study, we describe the extension of the 2-d preliminary-design bluff body drag estimation tool developed by De Chant [1] to apply to 3-d flows. As with the 2-d method, the 3-d extension uses a combined approximate Green's function/Gram-Charlier series approach to retain the body geometry information. Whereas the 2-d methodology relied solely upon small disturbance theory for the inviscid flow field associated with the body of interest to estimate the near-field initial conditions, e.g., the velocity defect, the 3-d methodology uses both analytical (where available) and numerical inviscid solutions. The defect solution is then used as an initial condition in an approximate 3-d Green's function solution. Finally, the Green's function solution is matched to the 3-d analog of the classical 2-d Gram-Charlier series and then integrated to yield the net form drag on the bluff body. Preliminary results indicate that the drag estimates are of accuracy equivalent to the 2-d method for flows with large separation, i.e., less than 20% relative error. Like the lower-dimensional method, the 3-d concept is intended to be a supplement to turbulent Navier-Stokes and experimental solutions for estimating drag coefficients over blunt bodies.
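The final integration step amounts to a momentum-deficit integral over the wake. A minimal 2-d sketch follows, with an assumed Gaussian velocity defect (the leading term of a Gram-Charlier series) standing in for the matched solution; amplitude and width are illustrative assumptions.

```python
# Drag per unit span from the far-wake momentum deficit:
# C_d = 2 / (U^2 * L) * integral of u * (U - u) dy.
import numpy as np

U, L = 1.0, 1.0                              # freestream speed, reference length
y = np.linspace(-10, 10, 2001)
defect = 0.3 * np.exp(-0.5 * (y / 1.2)**2)   # w(y) = U - u(y), assumed Gaussian
u = U - defect

Cd = 2.0 / (U**2 * L) * np.trapz(u * (U - u), y)
print(f"C_d ~ {Cd:.3f}")
```

In the 3-d method the integral runs over the wake cross-section, and higher Gram-Charlier terms capture departures from the Gaussian shape.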
The Method of Nearby Problems is employed to generate exact solutions to equations 'nearby' the steady and unsteady Burgers equation. Burgers equation is chosen because of the existence of exact solutions, and these exact solutions are discussed. Legendre polynomials are used to derive the exact solutions to the nearby problems, and the application of Legendre polynomials for both 1D and 2D problems is also discussed. Results are presented for the steady-state Burgers equation corresponding to a viscous shock wave for Reynolds numbers of 8, 16, and 512. The low Reynolds number cases are well approximated by 10th order Legendre polynomial fits, while the high Reynolds number case is not. The unsteady Burgers equation corresponding to coalescence of two viscous shock waves at a Reynolds number of 8 is also examined. Preliminary results indicate that further investigation is required to accurately capture this 2D solution.
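The fit behavior noted above is easy to reproduce; the sketch below fits 10th-order Legendre polynomials to the standard steady Burgers viscous-shock profile, u = -tanh(Re·x/2) on [-1, 1] (the profile form is the textbook solution, assumed comparable to the study's).

```python
# 10th-order Legendre fits to Burgers viscous-shock profiles at two Re.
import numpy as np
from numpy.polynomial import Legendre

x = np.linspace(-1.0, 1.0, 2001)
for Re in (8, 512):
    u = -np.tanh(Re * x / 2.0)                      # steady viscous shock
    fit = Legendre.fit(x, u, deg=10, domain=[-1, 1])
    err = np.max(np.abs(fit(x) - u))
    print(f"Re = {Re:4d}: max |fit - u| = {err:.3e}")
```

The Re = 8 profile is smooth enough for a low-order global polynomial; the Re = 512 shock is nearly a step, which a 10th-order fit cannot resolve, matching the observation above.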
We performed calculations to investigate the classical theories of chain branching and thermal runaway that lead to the rapid oxidation of fuels. Mathematically, both theories imply the existence of eigenvalues with positive real parts, i.e., explosive modes. We found in studies of homogeneous hydrogen-air and methane-air mixtures that when ignition is initiated by a sufficiently high initial temperature, the transient response of the system exhibits two stages. The first stage is characterized by the existence of explosive modes. The ensuing second stage consists of fast exponentially decaying modes that bring the system to its equilibrium point. We demonstrated with two examples that the existence of explosive modes is not a necessary condition for the existence of a premixed flame. Homogeneous ignition calculations for mixtures with an initial concentration of radical species suggest that the diffusive transport of radical species is probably responsible for the lack of explosive modes in premixed flames.
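The explosive-mode test itself is simply an eigenvalue check on the chemical Jacobian; a toy illustration follows, with a two-variable caricature of branched-chain kinetics standing in for the hydrogen-air or methane-air mechanisms.

```python
# Explosive-mode check: eigenvalues of the source-term Jacobian.
import numpy as np

def jacobian(f, y, eps=1e-7):
    """Finite-difference Jacobian of the source term f at state y."""
    n = len(y)
    J = np.zeros((n, n))
    f0 = f(y)
    for j in range(n):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (f(yp) - f0) / eps
    return J

# Caricature: radical growth followed by depletion (not a real mechanism).
f = lambda y: np.array([y[0] * (1.0 - y[1]),
                        0.5 * y[0] * y[1] - 0.1 * y[1]])

for state in (np.array([1e-3, 0.1]), np.array([1e-3, 1.5])):
    lam = np.linalg.eigvals(jacobian(f, state))
    tag = "explosive" if (lam.real > 0).any() else "decaying"
    print(state, tag, lam.round(3))
```

The first state has an eigenvalue with positive real part (the explosive first stage); the second has only decaying modes, mirroring the two-stage behavior described above.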
Modal analysis of three-dimensional structures frequently involves finite element discretizations with millions of unknowns and requires computing hundreds or thousands of eigenpairs. In this presentation we review methods based on domain decomposition for such eigenspace computations in structural dynamics. We distinguish approaches that solve the eigenproblem algebraically (with minimal connections to the underlying partial differential equation) from approaches that tightly couple the eigensolver with the partial differential equation.
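At the algebraic end of that spectrum, the workhorse is a shift-invert Lanczos solve of the generalized eigenproblem K x = λ M x for the lowest modes; a small sketch follows, with a 1-D finite-difference stiffness/mass pair standing in for a structural finite element model.

```python
# Lowest eigenpairs of K x = lambda M x via shift-invert Lanczos.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1)
M = sp.identity(n, format="csc") / (n + 1)

# sigma=0.0 targets the eigenvalues nearest zero, i.e., the lowest modes
vals, vecs = eigsh(K, k=8, M=M, sigma=0.0, which="LM")
print(np.sqrt(vals[:4]))          # natural frequencies, ~ k*pi for this stand-in
```

Domain decomposition enters when the shifted factorization and the projected eigenproblem are distributed across subdomains, which is the subject of the methods reviewed here.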
Low-temperature combustion concepts that utilize cooled EGR, early/retarded injection, high swirl ratios, and modest compression ratios have recently received considerable attention. To understand the combustion and, in particular, the soot formation process under these operating conditions, a modeling study was carried out using the KIVA-3V code with an improved phenomenological soot model. This multi-step soot model includes particle inception, surface growth, surface oxidation, and particle coagulation. Additional models include a piston-ring crevice model, the KH/RT spray breakup model, a droplet wall impingement model, a wall heat transfer model, and the RNG k-ε turbulence model. The Shell model was used to simulate the ignition process, and a laminar-and-turbulent characteristic-time combustion model was used for the post-ignition combustion process. A low-load (IMEP = 3 bar) operating condition was considered, and the predicted in-cylinder pressures and heat release rates were compared with measurements. Predicted soot mass, soot particle size, soot number density distributions, and other relevant quantities are presented and discussed. The effects of variable EGR rate (0-68%), injection pressure (600-1200 bar), and injection timing were studied. The predictions demonstrate that both EGR and retarded injection are beneficial for reducing NOx emissions, although the former has a more pronounced effect. Additionally, higher soot emissions are typically predicted for the higher EGR rates. However, when the EGR rate exceeds a critical value (over 65% in this study), the soot emissions decrease. Reduced soot emissions are also predicted when higher injection pressures or retarded injection timings are employed. The reduction in soot with retarded injection is less than what is observed experimentally, however.
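The structure of such a multi-step phenomenological soot model can be illustrated with a generic two-equation system for soot mass and number density; the rate constants and source-term forms below are placeholders, not the calibrated KIVA-3V model.

```python
# Generic two-equation soot model: inception, growth, oxidation, coagulation.
import numpy as np
from scipy.integrate import solve_ivp

k_inc, k_grow, k_ox, k_coag = 1.0, 5.0, 2.0, 1e3   # assumed rate constants

def rhs(t, y, Y_fuel=1e-2, Y_O2=5e-3):
    Ys, N = y                                      # soot mass, number density
    surf = np.sqrt(max(Ys * N, 0.0))               # crude surface-area proxy
    dYs = k_inc * Y_fuel + k_grow * Y_fuel * surf - k_ox * Y_O2 * surf
    dN = k_inc * Y_fuel - k_coag * N**2            # inception vs. coagulation
    return [dYs, dN]

sol = solve_ivp(rhs, (0.0, 2.0), [1e-8, 1e-6], rtol=1e-8)
print(f"final soot mass {sol.y[0, -1]:.3e}, number density {sol.y[1, -1]:.3e}")
```

In the engine code, each cell carries such transport equations with temperature- and composition-dependent rates, which is how the EGR and injection-timing trends above emerge.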