Wear is a critical factor in determining the durability of microelectromechanical systems (MEMS). While the reliability of polysilicon MEMS has received extensive attention, the mechanisms responsible for wear-induced failure at the microscale have yet to be conclusively determined. We have used on-chip polycrystalline silicon side-wall friction MEMS specimens to study active mechanisms during sliding wear in ambient air. Worn parts were examined by analytical scanning and transmission electron microscopy, while local temperature changes were monitored using advanced infrared microscopy. Observations show that small amorphous debris particles ({approx}50-100 nm) are removed by fracture through the silicon grains ({approx}500 nm) and are oxidized during this process. Agglomeration of such debris particles into larger clusters also occurs. Some of these debris particles/clusters create plowing tracks on the beam surface. A nano-crystalline surface layer ({approx}20-200 nm), with higher oxygen content, forms during wear at and below regions of the worn surface; its formation is likely aided by high local stresses. No evidence of dislocation plasticity or of extreme local temperature increases was found, ruling out the possibility of high-temperature-assisted wear mechanisms.
Microanalysis is typically performed to analyze the near surface of materials. There are many instances, however, where chemical information about the third spatial dimension is essential to the solution of a materials problem. Most 3D analyses to date have focused on limited spectral acquisition and/or analysis. For truly comprehensive 3D chemical characterization, 4D spectral images (a complete spectrum from each volume element of a region of a specimen) are needed. Furthermore, a robust statistical method is needed to extract the maximum amount of chemical information from that extremely large amount of data. In this paper, an example of the acquisition and multivariate statistical analysis of 4D (3-spatial and 1-spectral dimension) x-ray spectral images is described. The method of utilizing a single- or dual-beam FIB (without or with SEM) to access 3D chemistry has been described by others with respect to secondary-ion mass spectrometry. The basic methodology described in those works has been modified for comprehensive x-ray microanalysis in a dual-beam FIB/SEM (FEI Co. DB-235). In brief, the FIB is used to serially section a site-specific region of a sample, and the electron beam is then rastered over the exposed surface, with an x-ray spectral image being acquired at each section. All this is performed without rotating or tilting the specimen between FIB cutting and SEM imaging/x-ray spectral image acquisition. The resultant 4D spectral image is then unfolded (number of volume elements by number of channels) and subjected to the same multivariate curve resolution (MCR) approach that has proven successful for the analysis of lower-dimension x-ray spectral images. These 4D spectral-image (TSI) data sets can be in excess of 4 Gbytes. This data-handling problem has been overcome (for now), and images up to 6 Gbytes have been analyzed in this work. The method for analyzing such large spectral images will be described in this presentation.
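To make the unfold-and-factor step concrete, the toy sketch below (generic Python/NumPy, not the authors' code; all function names are hypothetical) reshapes a synthetic 4D spectral image into the (number of volume elements) x (number of channels) matrix and applies a simple nonnegativity-constrained alternating-least-squares factorization as a stand-in for MCR.

```python
import numpy as np

def unfold(spectral_image_4d):
    """Unfold a 4D spectral image (nx, ny, nz, n_channels) into a 2D
    matrix of shape (n_voxels, n_channels) for multivariate analysis."""
    nx, ny, nz, nch = spectral_image_4d.shape
    return spectral_image_4d.reshape(nx * ny * nz, nch)

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Toy MCR by alternating least squares with nonnegativity clipping:
    D (n_voxels x n_channels) is approximated as C @ S.T, where C holds
    component abundances per voxel and S holds component spectra."""
    rng = np.random.default_rng(seed)
    n_ch = D.shape[1]
    S = rng.random((n_ch, n_components))
    for _ in range(n_iter):
        # Least-squares solve for each factor in turn, then clip to >= 0.
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
    return C, S
```

The abundance matrix C can then be refolded to (nx, ny, nz, n_components) for 3D rendering of each resolved chemical component.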
A comprehensive 3D chemical analysis was performed on several corrosion specimens of Cu electroplated with various metals. Figure 1A shows the top view of the localized corrosion region prepared for FIB sectioning. The TSI region has been coated with Pt, and a trench has been milled along the bottom edge of the region, exposing it to the electron beam as seen in Figure 1B. The TSI consisted of 25 sections and was approximately 6 Gbytes. Figure 1C shows several of the components rendered in 3D: green is Cu; blue is Pb; cyan represents one of the corrosion products, which contains Cu, Zn, O, S, and C; and orange represents the other corrosion product, with Zn, O, S, and C. Figure 1D shows all of the component spectral shapes from the analysis. There is severe pathological overlap of the spectra from Ni, Cu, and Zn as well as Pb and S; in spite of this, clean spectral shapes have been extracted from the TSI. This powerful TSI technique could be applied to other sectioning methods as well.
Analyzing the performance of a complex System of Systems (SoS) requires a systems engineering approach. Many such SoS exist in the military domain. Examples include the Army's next-generation Future Combat Systems 'Unit of Action' or the Navy's Aircraft Carrier Battle Group. In the case of a Unit of Action, a system of combat vehicles, support vehicles, and equipment is organized in an efficient configuration that minimizes logistics footprint while still maintaining the required performance characteristics (e.g., operational availability). In this context, systems engineering means developing a global model of the entire SoS and all component systems and interrelationships. This global model supports analyses that result in an understanding of the interdependencies and emergent behaviors of the SoS. Sandia National Laboratories will present a robust toolset that includes methodologies for developing a SoS model, defining state models, and simulating a system of state models over time. This toolset is currently used to perform logistics supportability and performance assessments of the set of Future Combat Systems (FCS) for the U.S. Army's Program Manager Unit of Action.
We design a density-functional-theory (DFT) exchange-correlation functional that enables an accurate treatment of systems with electronic surfaces. Surface-specific approximations for both exchange and correlation energies are developed. A subsystem functional approach is then used: an interpolation index combines the surface functional with a functional for interior regions. When the local density approximation is used in the interior, the result is a straightforward functional for use in self-consistent DFT. The functional is validated for two metals (Al, Pt) and one semiconductor (Si) by calculations of (i) established bulk properties (lattice constants and bulk moduli) and (ii) a property where surface effects exist (the vacancy formation energy). Good and coherent results indicate that this functional may serve well as a universal first choice for solid-state systems and that yet improved functionals can be constructed by this approach.
In this paper, we explore in detail the stability properties of time-domain numerical methods for multitime partial differential equations (MPDEs). We demonstrate that simple numerical discretization techniques can easily lead to instability. By investigating the underlying eigenstructure of several discretization techniques along different artificial time scales, we show that not all combinations of techniques are stable. We identify choices of discretization method and step size, along the fast and slow time scales, that lead to robust, stable time-domain integration methods for the MPDE. One of our results is that applying overstable methods along one time scale can compensate for unstable discretization along others. Our novel integration schemes bring robustness to time-domain MPDE solution methods, as we demonstrate with examples.
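The compensation effect can be illustrated with the scalar test equation y' = λy, for which the product of the per-scale amplification factors governs overall stability. The sketch below is a standard textbook-style analysis, not the paper's schemes: it computes growth factors for explicit and implicit Euler steps and shows a strongly damping (overstable) implicit step offsetting an unstable explicit one.

```python
def forward_euler_growth(lam, h):
    # Explicit Euler on y' = lam*y: y_{n+1} = (1 + h*lam) * y_n
    return abs(1 + h * lam)

def backward_euler_growth(lam, h):
    # Implicit Euler on y' = lam*y: y_{n+1} = y_n / (1 - h*lam)
    return abs(1.0 / (1 - h * lam))

# For lam = -10 and step h = 0.3, explicit Euler is unstable (factor 2),
# but a strongly damping implicit step along the other scale (factor 0.25)
# makes the combined two-scale update contractive: 2 * 0.25 = 0.5 < 1.
g_fast = forward_euler_growth(-10.0, 0.3)
g_slow = backward_euler_growth(-10.0, 0.3)
combined = g_fast * g_slow
```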
Solid-state {sup 1}H magic angle spinning (MAS) NMR was used to investigate sulfonated Diels-Alder poly(phenylene) polymer membranes. Under high spinning speed {sup 1}H MAS conditions, the proton environments of the sulfonic acid and phenylene polymer backbone are resolved. A double-quantum (DQ) filter using the rotor-synchronized back-to-back (BABA) NMR multiple-pulse sequence allowed the selective suppression of the sulfonic proton environment in the {sup 1}H MAS NMR spectra. This DQ filter in conjunction with a spin diffusion NMR experiment was then used to measure the domain size of the sulfonic acid component within the membrane. In addition, the temperature dependence of the sulfonic acid spin-spin relaxation time (T{sub 2}) was determined, providing an estimate of the activation energy for the proton dynamics of the dehydrated membrane.
The fundamental chemical behavior of the AlCl{sub 3}/SO{sub 2}Cl{sub 2} catholyte system was investigated using {sup 27}Al NMR spectroscopy, Raman spectroscopy, and single-crystal X-ray diffraction. Three major Al-containing species were found to be present in this catholyte system, where the ratio of each was dependent upon aging time, concentration, and/or storage temperature. The first species was identified as [Cl{sub 2}Al({mu}-Cl)]{sub 2} in equilibrium with AlCl{sub 3}. The second species results from the decomposition of SO{sub 2}Cl{sub 2}, which forms Cl{sub 2}(g) and SO{sub 2}(g). The SO{sub 2}(g) is readily consumed in the presence of AlCl{sub 3} to form the crystallographically characterized species [Cl{sub 2}Al({mu}-O{sub 2}SCl)]{sub 2} (1). For 1, each Al is tetrahedrally (T{sub d}) bound by two terminal Cl and two {mu}-O ligands, whereas the S is three-coordinated by two {mu}-O ligands and one terminal Cl. The third molecular species also has T{sub d}-coordinated Al metal centers but with increased oxygen coordination. Over time, it was noted that a precipitate formed from the catholyte solutions. Raman spectroscopic studies show that this gel or precipitate has a component consistent with thionyl chloride. We have proposed a polymerization scheme that accounts for the precipitate formation. Further NMR studies indicate that the precipitate is in equilibrium with the solution.
We demonstrate direct diode-bar side pumping of a Yb-doped fiber laser using embedded-mirror side pumping (EMSP). In this method, the pump beam is launched by reflection from a micro-mirror embedded in a channel polished into the inner cladding of a double-clad fiber (DCF). The amplifier employed an unformatted, non-lensed, ten-emitter diode bar (20 W) and glass-clad, polarization-maintaining, large-mode-area fiber. Measurements with passive fiber showed that the coupling efficiency of the raw diode-bar output into the DCF (ten launch sites) was {approx}84%; for comparison, the net coupling efficiency using a conventional, formatted, fiber-coupled diode bar is typically 50-70%, i.e., EMSP results in a factor of 2-3 less wasted pump power. The slope efficiency of the side-pumped fiber laser was {approx}80% with respect to launched pump power and 24% with respect to electrical power consumption of the diode bar; at a fiber-laser output power of 7.5 W, the EMSP diode bar consumed 41 W of electrical power (18% electrical-to-optical efficiency). When end pumped using a formatted diode bar, the fiber laser consumed 96 W at 7.5 W output power, a factor of 2.3 less efficient, and the electrical-to-optical slope efficiency was lower by a factor of 2.0. Passive-fiber measurements showed that the EMSP alignment sensitivity is nearly identical for a single emitter and for the ten-emitter bar. EMSP is the only method capable of launching the unformatted output of a diode bar directly into DCF (including glass-clad DCF), enabling fabrication of low-cost, simple, and compact, diode-bar-pumped fiber lasers and amplifiers.
Spreading of bacteria in a highly advective, disordered environment is examined. Predictions of super-diffusive spreading for a simplified reaction-diffusion equation are tested. Concentration profiles display anomalous growth and super-diffusive spreading. A perturbation analysis yields a crossover time between diffusive and super-diffusive behavior, and the dependence of this time on the convection velocity and the disorder is tested. Like the simplified equation, the full linear reaction-diffusion equation displays super-diffusive spreading perpendicular to the convection. However, for mean positive growth rates the full nonlinear reaction-diffusion equation produces symmetric spreading with a Fisher wavefront, whereas net negative growth rates cause an asymmetry, with a slower wavefront velocity perpendicular to the convection.
Chemical crosslinking is an important tool for probing protein structure and protein-protein interactions. The approach usually involves crosslinking of specific amino acids within a folded protein or protein complex, enzymatic digestion of the crosslinked protein(s), and identification of the resulting crosslinked peptides by liquid chromatography/mass spectrometry (LC/MS). In this manner, distance constraints are obtained for residues that must be in close proximity to one another in the native structure or complex. As the complexity of the system under study increases, for example, a large multi-protein complex, simply measuring the mass of a crosslinked species will not always be sufficient to determine the identity of the crosslinked peptides. In such a case, tandem mass spectrometry (MS/MS) could provide the required information if the data can be properly interpreted. In MS/MS, a species of interest is isolated in the gas phase and allowed to undergo collision-induced dissociation (CID). Because the gas-phase dissociation pathways of peptides have been well studied, methods are established for determining peptide sequence by MS/MS. However, although crosslinked peptides dissociate through some of the same pathways as isolated peptides, the additional dissociation pathways available to the former have not been studied in detail. Software such as MS2Assign has been written to assist in the interpretation of MS/MS spectra from crosslinked peptide species, but it would be greatly enhanced by a more thorough understanding of how these species dissociate. We are thus systematically investigating the dissociation pathways open to crosslinked peptide species. A series of polyalanine and polyglycine model peptides has been synthesized containing one or two lysine residues to generate defined inter- and intra-molecular crosslinked species, respectively.
Each peptide contains 11 total residues, and one arginine residue is present at the carboxy terminus to mimic species generated by tryptic digestion. The peptides have been allowed to react with a series of commonly used crosslinkers such as DSS, DSG, and DST. The tandem mass spectra acquired for these crosslinked species are being examined as a function of crosslinker identity, site(s) of crosslinking, and precursor charge state. Results from these model studies and observations from actual experimental systems are being incorporated into the MS2Assign software to enhance our ability to effectively use chemical crosslinking in protein complex determination.
The Surface Evolver was used to compute the equilibrium microstructure of random soap foams with bidisperse cell-size distributions and to evaluate topological and geometric properties of the foams and individual cells. The simulations agree with the experimental data of Matzke and Nestler for the probability {rho}(F) of finding cells with F faces and its dependence on the fraction of large cells. The simulations also agree with the theory for isotropic Plateau polyhedra (IPP), which describes the F-dependence of cell geometric properties, such as surface area, edge length, and mean curvature (diffusive growth rate); this is consistent with results for polydisperse foams. Cell surface areas are about 10% greater than spheres of equal volume, which leads to a simple but accurate relation for the surface free energy density of foams. The Aboav-Weaire law is not valid for bidisperse foams.
Bulk migration of particles towards regions of lower shear occurs in suspensions of neutrally buoyant spheres in Newtonian fluids undergoing creeping flow in the annular region between two rotating, coaxial cylinders (a wide-gap Couette). For a monomodal suspension of spheres in a viscous fluid, dimensional analysis indicates that the rate of migration at a given concentration should scale with the square of the sphere radius. However, a previous experimental study showed that the rate of migration of spherical particles at 50% volume concentration actually scaled with the sphere radius to approximately the 2.9 power.
Three nested molybdenum wire arrays with initial outer diameters of 45, 50, and 55 mm were imploded by the {approx}20 MA, 90 ns rise-time current pulse of Sandia's Z accelerator. The implosions generated Mo plasmas with {approx}10% of the array's initial mass reaching Ne-like and nearby ionization stages. These ions emitted 2-4 keV L-shell x rays with radiative powers approaching 10 TW. Mo L-shell spectra with axial and temporal resolution were captured and have been analyzed using a collisional-radiative model. The measured spectra indicate significant axial variation in the electron density, which increases from a few times 10{sup 20} cm{sup -3} at the cathode up to {approx}3 x 10{sup 21} cm{sup -3} near the middle of the 20 mm plasma column (8 mm from the anode). Time-resolved spectra indicate that the peak electron density is reached before the peak of the L-shell emission and decreases with time, while the electron temperature remains within 10% of 1.7 keV over the 20-30 ns L-shell radiation pulse. Finally, while the total yield, peak total power, and peak L-shell power all tended to decrease with increasing initial wire array diameter, the L-shell yield and the average plasma conditions varied little with the initial wire array diameter.
Proposed for publication in Association for Computing Machinery Transactions on Mathematical Software.
ODRPACK (TOMS Algorithm 676) has provided a complete package for weighted orthogonal distance regression for many years. The code is complete with user-selectable reporting facilities, numerical and analytic derivatives, derivative checking, and many more features. The foundation of the algorithm is a stable and efficient trust-region Levenberg-Marquardt minimizer that exploits the structure of the orthogonal distance regression problem. ODRPACK95 is a modification of the original ODRPACK code that adds support for bound constraints, uses the newer Fortran 95 language, and simplifies the interface of the user-supplied subroutine.
We consider the accuracy of predictions made by integer programming (IP) models of sensor placement for water security applications. We have recently shown that IP models can be used to find optimal sensor placements for a variety of different performance criteria (e.g., minimizing health impacts or minimizing time to detection). However, these models make a variety of simplifying assumptions that might bias the final solution. We show that our IP modeling assumptions are similar to those of models developed for other sensor placement methodologies, and thus IP models should give similar predictions. However, this discussion highlights that there are significant differences in how temporal effects are modeled for sensor placement. We describe how these modeling assumptions can impact sensor placements.
A focused ion beam (FIB) is used to accurately sculpt predetermined micron-scale, curved shapes in a number of solids. Using a digitally scanned ion beam system, various features are sputtered including hemispheres and sine waves having dimensions from 1-50 {micro}m. Ion sculpting is accomplished by changing pixel dwell time within individual boustrophedonic scans. The pixel dwell times used to sculpt a given shape are determined prior to milling and account for the material-specific, angle-dependent sputter yield, Y({theta}), as well as the amount of beam overlap in adjacent pixels. A number of target materials, including C, Au and Si, are accurately sculpted using this method. For several target materials, the curved feature shape closely matches the intended shape with milled feature depths within 5% of intended values.
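As a first-order schematic of the dwell-time calculation (all names hypothetical; the material-specific, angle-dependent yield Y({theta}) and beam-overlap corrections used in the actual method are deliberately omitted), the sketch below builds a target depth map for a hemispherical pit and converts it to a per-pixel dwell-time map assuming a constant sputter rate.

```python
import numpy as np

def hemisphere_depth(nx, ny, radius_px):
    """Target depth map (arbitrary units) for a hemispherical pit of the
    given radius (in pixels), centered in an nx-by-ny pixel scan field."""
    y, x = np.mgrid[0:ny, 0:nx]
    r2 = (x - nx / 2.0) ** 2 + (y - ny / 2.0) ** 2
    # Depth is zero outside the pit footprint, spherical inside it.
    return np.sqrt(np.clip(radius_px**2 - r2, 0.0, None))

def dwell_times(target_depth, sputter_rate):
    """Per-pixel dwell time = target depth / sputter rate (constant-rate
    approximation, valid only at normal incidence)."""
    return target_depth / sputter_rate
```

The resulting dwell-time map would then be applied within each boustrophedonic scan, with the full method further correcting for Y({theta}) and adjacent-pixel beam overlap.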
Ccaffeine is a Common Component Architecture (CCA) framework devoted to high-performance computing. In this note we give an overview of the system features of Ccaffeine and CCA that support component-based HPC application development. Object-oriented, single-threaded and lightweight, Ccaffeine is designed to get completely out of the way of the running application after it has been composed from components. Ccaffeine is one of the few frameworks, CCA or otherwise, that can compose and run applications on a parallel machine interactively and then automatically generate a static, possibly self-tuning, executable for production runs. Users can experiment with and debug applications interactively, improving their productivity. When the application is ready, a script is automatically generated, parsed and turned into a static executable for production runs. Within this static executable, dynamic replacement of components can be performed by self-tuning applications.
In recent years, several integer programming models have been proposed to place sensors in municipal water networks in order to detect intentional or accidental contamination. Although these initial models assumed that it is equally costly to place a sensor at any place in the network, there clearly are practical cost constraints that would impact a sensor placement decision. Such constraints include not only labor costs but also the general accessibility of a sensor placement location. In this paper, we extend our integer program to explicitly model the cost of sensor placement. We partition network locations into groups of varying placement cost, and we consider the public health impacts of contamination events under varying budget constraints. Thus our models permit cost/benefit analyses for differing sensor placement designs. As a control for our optimization experiments, we compare the set of sensor locations selected by the optimization models to a set of manually-selected sensor locations.
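A minimal illustration of the budget-constrained placement problem, solved here by brute-force enumeration rather than by the integer programs used in this work (all names and data are hypothetical):

```python
from itertools import combinations

def best_placement(impact, costs, budget):
    """Choose the sensor set minimizing mean contamination impact subject
    to a placement budget. impact[s][i] is the damage if scenario s is
    first detected by a sensor at location i; a scenario's impact for a
    chosen set is the minimum over the chosen locations."""
    n = len(costs)
    best_set, best_val = None, float("inf")
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            if sum(costs[i] for i in combo) > budget:
                continue  # placement exceeds the budget constraint
            val = sum(min(row[i] for i in combo) for row in impact) / len(impact)
            if val < best_val:
                best_set, best_val = set(combo), val
    return best_set, best_val
```

Sweeping the budget and recording the best achievable impact traces out the kind of cost/benefit curve described above; an IP formulation replaces the enumeration when networks are realistically large.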
Electrical operation of III-Nitride light emitting diodes (LEDs) with photonic crystal structures is demonstrated. Employing photonic crystal structures in III-Nitride LEDs is a method to increase light extraction efficiency and directionality. The photonic crystal is a triangular lattice formed by dry etching into the III-Nitride LED. A range of lattice constants is considered (a {approx} 270-340nm). The III-Nitride LED layers include a tunnel junction providing good lateral current spreading without a semi-absorbing metal current spreader as is typically done in conventional III-Nitride LEDs. These photonic crystal III-Nitride LED structures are unique because they allow for carrier recombination and light generation proximal to the photonic crystal (light extraction area) yet displaced from the absorbing metal contact. The photonic crystal Bragg scatters what would have otherwise been guided modes out of the LED, increasing the extraction efficiency. The far-field light radiation patterns are heavily modified compared to the typical III-Nitride LED's Lambertian output. The photonic crystal affects the light propagation out of the LED surface, and the radiation pattern changes with lattice size. LEDs with photonic crystals are compared to similar III-Nitride LEDs without the photonic crystal in terms of extraction, directionality, and emission spectra.
Experimental evidence suggests that the energy balance among the processes at play during wire-array implosions is not well understood. In fact, the radiative yields can exceed the implosion kinetic energy by several times. A possible explanation is that the coupling from magnetic energy to kinetic energy as magnetohydrodynamic plasma instabilities develop provides additional energy. It is thus important to model the instabilities produced in the post-implosion stage of the wire array in order to determine how the stored magnetic energy can be connected with the radiative yields. To this aim, three-dimensional hybrid simulations have been performed. They are initialized with plasma radial density profiles, deduced from recent experiments [C. Deeney et al., Phys. Plasmas 6, 3576 (1999)] that exhibited large x-ray yields, together with the corresponding magnetic field profiles. Unlike previous work, these profiles do not satisfy pressure balance and differ substantially from those of a Bennett equilibrium. They result in faster instability growth, with an associated transfer of magnetic energy to plasma motion and hence kinetic energy.
Pulsed power driven metallic wire-array Z pinches are the most powerful and efficient laboratory x-ray sources. Furthermore, under certain conditions the soft x-ray energy radiated in a 5 ns pulse at stagnation can exceed the estimated kinetic energy of the radial implosion phase by a factor of 3 to 4. A theoretical model is developed here to explain this, allowing the rapid conversion of magnetic energy to a very high ion temperature plasma through the generation of fine scale, fast-growing m=0 interchange MHD instabilities at stagnation. These saturate nonlinearly and provide associated ion viscous heating. Next the ion energy is transferred by equipartition to the electrons and thus to soft x-ray radiation. Recent time-resolved iron spectra at Sandia confirm an ion temperature T{sub i} of over 200 keV (2 x 10{sup 9} degrees), as predicted by theory. These are believed to be record temperatures for a magnetically confined plasma.
Bayesian medical monitoring is a concept based on using real-time performance-related data to make statistical predictions about a patient's future health. This paper discusses the fundamentals behind the medical monitoring concept and its application to monitoring the health of nuclear reactors. Necessary assumptions regarding distributions and failure-rate calculations are discussed. A simple example is presented to illustrate the effectiveness of the methods. The methods perform very well for the thirteen subjects in the example, with a clear failure sequence identified for eleven of the subjects.
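As a minimal sketch of the underlying Bayesian mechanics (a conjugate Beta-Binomial update with assumed independent demand cycles; this is a generic illustration, not the paper's reactor model, and all names are ours):

```python
def posterior_failure_rate(alpha, beta, failures, trials):
    """Conjugate Beta-Binomial update: posterior mean of the per-cycle
    failure probability after observing `failures` in `trials` cycles,
    starting from a Beta(alpha, beta) prior."""
    a = alpha + failures
    b = beta + trials - failures
    return a / (a + b)

def survival_probability(p_fail, n_cycles):
    """Predicted probability of surviving the next n_cycles, assuming
    independent cycles with constant failure probability p_fail."""
    return (1.0 - p_fail) ** n_cycles
```

As new performance data arrive, the posterior is re-updated and the survival prediction revised, which is the essence of the monitoring loop.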
A novel dual-stage chemiluminescence detection system incorporating individually controlled hot stages has been developed and applied to probe for material interaction effects during polymer degradation. Utilization of this system has provided experimental confirmation, for the first time, that in an oxidizing environment a degrading polymer A (in this case polypropylene, PP) is capable of infecting a different polymer B (in this case polybutadiene, HTPB) over a relatively large distance. In the presence of the infectious degrading polymer A, the thermal degradation of polymer B is observed over a significantly shorter time period. Consistent with infectious volatiles from material A initiating the degradation process in material B, it was demonstrated that traces (micrograms) of a thermally sensitive peroxide in the vicinity of PP could induce degradation remotely. This observation documents cross-infectious phenomena between different polymers and has major consequences for polymer interactions, for understanding fundamental degradation processes, and for long-term aging effects under combined material exposures.
Photocatalytic porphyrins are used to reduce metal complexes from aqueous solution and, further, to control the deposition of metals onto porphyrin nanotubes and surfactant assembly templates to produce metal composite nanostructures and nanodevices. For example, surfactant templates lead to spherical platinum dendrites and foam-like nanomaterials composed of dendritic platinum nanosheets. Porphyrin nanotubes are reported for the first time, and photocatalytic porphyrin nanotubes are shown to reduce metal complexes and deposit the metal selectively onto the inner or outer surface of the tubes, leading to nanotube-metal composite structures that are capable of hydrogen evolution and other nanodevices.
Proposed for publication in Journal of Applied Physics.
Computation of space-charge current-limiting effects across a vacuum cavity between parallel electrodes has previously been carried out only for thermionic emission spectra. In some applications, where the current arises from an injected electron beam or photo-Compton emission from electrode walls, the electron energy spectra may deviate significantly from Maxwellian. Considering the space charge as a collisionless plasma, we derive an implicit equation for the peak cavity potential assuming steady-state currents. For the examples of graphite, nickel, and gold electrodes exposed to x rays, we find that cavity photoemission currents are typically more severely space-charge limited than they would be with the assumption of a purely Maxwellian energy distribution.
Network administrators and security analysts often do not know what network services are being run in every corner of their networks. Even when they have a rough grasp of the services running on their networks, they often do not know which specific versions of those services are running. Actively scanning for services and versions does not always yield complete results, and patch and service management therefore suffer. We present NetState, a system for monitoring, storing, and reporting application and operating system version information for a network. NetState gives security and network administrators the ability to know what is running on their networks while allowing for user-managed machines and complex host configurations. Our architecture uses distributed modules to collect network information and a centralized server that stores and issues reports on the collected version information. We discuss some of the challenges to building and operating NetState as well as the legal issues surrounding the promiscuous capture of network data. We conclude that this tool can solve some key problems in network management and has a wide range of possibilities for future uses.
The stainless steel alloy 17-4PH contains a martensitic microstructure and second phase delta ({delta}) ferrite. Strengthening of 17-4PH is attributed to Cu-rich precipitates produced during age hardening treatments at 900-1150 F (H900-H1150). For wrought 17-4PH, the effects of heat treatment and microstructure on mechanical properties are well-documented [for example, Ref. 1]. Fewer studies are available on cast 17-4PH, although it has been a popular casting alloy for high strength applications where moderate corrosion resistance is needed. Microstructural features and defects particular to castings may have adverse effects on properties, especially when the alloy is heat treated to high strength. The objective of this work was to outline the effects of microstructural features specific to castings, such as shrinkage/solidification porosity, on the mechanical behavior of investment cast 17-4PH. Besides heat treatment effects, the results of metallography and SEM studies showed that the largest effect on mechanical properties is from shrinkage/solidification porosity. Figure 1a shows stress-strain curves obtained from samples machined from castings in the H925 condition. The strength levels were fairly similar but the ductility varied significantly. Figure 1b shows an example of porosity on a fracture surface from a room-temperature, quasi-static tensile test. The rounded features represent the surfaces of dendrites which did not fuse or only partially fused together during solidification. Some evidence of local areas of fracture is found on some dendrite surfaces. The shrinkage pores are due to inadequate backfilling of liquid metal and simultaneous solidification shrinkage during casting. A summary of percent elongation results is displayed in Figure 2a. It was found that higher amounts of porosity generally result in lower ductility. Note that the porosity content was measured on the fracture surfaces. 
The results are qualitatively similar to those found by Gokhale et al. and Surappa et al. in cast A356 Al and by Gokhale et al. for a cast Mg alloy. The quantitative fractography and metallography work by Gokhale et al. illustrated the strong preference for fracture in regions of porosity in cast material. That is, the fracture process is not correlated to the average microstructure in the material but is related to the extremes in microstructure (local regions of high void content). In the present study, image analysis on random cross-sections of several heats indicated an overall porosity content of 0.03%. In contrast, the area % porosity was as high as 16% when measured on fracture surfaces of tensile specimens using stereology techniques. The results confirm that the fracture properties of cast 17-4PH cannot be predicted based on the overall 'average' porosity content in the castings.
Policy-based network management (PBNM) uses policy-driven automation to manage complex enterprise and service provider networks. Such management is strongly supported by industry standards, state-of-the-art technologies, and vendor product offerings. We present a case for the use of PBNM and related technologies for end-to-end service delivery. We provide a definition of PBNM terms, a discussion of how such management should function, and the current state of the industry. We include recommendations for continued work that would allow PBNM to be put in place over the next five years in the unclassified environment.
ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation in inviscid fluids and solids. This document describes user options for modeling resistive magnetohydrodynamics, thermal conduction, radiation transport effects, and two-temperature material physics.
To establish mechanical properties and failure criteria of silicon carbide (SiC-N) ceramics, a series of quasi-static compression tests has been completed using a high-pressure vessel and a unique sample alignment jig. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Results from the uniaxial and triaxial compression tests established the failure threshold for the SiC-N ceramics in terms of stress invariants (I{sub 1} and J{sub 2}) over the range 1246 < I{sub 1} < 2405. In this range, results are fitted to the following limit function (Fossum and Brannon, 2004): {radical}J{sub 2} (MPa) = a{sub 1} - a{sub 3}e{sup -a{sub 2}(I{sub 1}/3)} + a{sub 4}(I{sub 1}/3), where a{sub 1} = 10181 MPa, a{sub 2} = 4.2 x 10{sup -4}, a{sub 3} = 11372 MPa, and a{sub 4} = 1.046. Combining these quasi-static triaxial compression strength measurements with existing data at higher pressures naturally results in different values for the least-squares fit to this function, appropriate over a broader pressure range. These triaxial compression tests are significant because they constitute the first successful measurements of SiC-N compressive strength under quasi-static conditions. Because SiC-N has an unconfined compressive strength of {approx}3800 MPa, it had heretofore been tested only under dynamic conditions, which can achieve a sufficiently large load to induce failure. Obtaining reliable quasi-static strength measurements has required design of a special alignment jig and load-spreader assembly, as well as redundant gages to ensure alignment. When considered in combination with existing dynamic strength measurements, these data significantly advance the characterization of pressure dependence of strength, which is important for penetration simulations, where failed regions are often at lower pressures than intact regions.
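The reported limit function can be evaluated directly from the constants in the abstract. The sketch below codes {radical}J{sub 2} = a{sub 1} - a{sub 3}e{sup -a{sub 2}(I{sub 1}/3)} + a{sub 4}(I{sub 1}/3) with those fitted values; the sample I{sub 1} points are illustrative, not data from the report.

```python
import math

# Sketch of the reported SiC-N limit function (Fossum and Brannon, 2004):
#   sqrt(J2) = a1 - a3 * exp(-a2 * I1/3) + a4 * (I1/3), stresses in MPa.
# Constants taken directly from the abstract.
A1, A2, A3, A4 = 10181.0, 4.2e-4, 11372.0, 1.046

def sqrt_J2_limit(I1_mpa):
    """Shear limit sqrt(J2) in MPa as a function of the first stress invariant I1."""
    x = I1_mpa / 3.0
    return A1 - A3 * math.exp(-A2 * x) + A4 * x

# Evaluate inside the fitted range 1246 < I1 < 2405 (illustrative points):
for I1 in (1246.0, 2000.0, 2405.0):
    print(f"I1 = {I1:7.1f} MPa  ->  sqrt(J2) = {sqrt_J2_limit(I1):7.1f} MPa")
```

Note that both exponential and linear terms make the limit grow with pressure, consistent with the observed pressure dependence of strength.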
A laser hazard analysis and safety assessment was performed for the 3rd Tech model DeltaSphere-3000{reg_sign} Laser 3D Scene Digitizer, an infrared laser scanner, based on the 2000 version of the American National Standards Institute's standard Z136.1, Safe Use of Lasers. The portable scanner system is used in the Robotic Manufacturing Science and Engineering Laboratory (RMSEL). This scanning system had been proposed as a demonstrator for a new application. The manufacturer lists the Nominal Ocular Hazard Distance (NOHD) as less than 2 meters. It was necessary that SNL validate this NOHD prior to the scanner's use as a demonstrator involving the general public. A formal laser hazard analysis is presented for the typical mode of operation in the current configuration, as well as for a possible modified mode and alternative configuration.
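For context, an NOHD validation of this kind typically applies the standard ANSI Z136.1 relation for a diverging beam. The sketch below uses that textbook formula with placeholder numbers; the inputs are NOT the DeltaSphere-3000 specifications, and a real Z136.1 analysis must also select the correct MPE for the wavelength and exposure duration.

```python
import math

# Hedged sketch of the standard ANSI Z136.1 NOHD relation for a diverging beam:
#   NOHD = ( sqrt(4 * Phi / (pi * MPE)) - a ) / phi
# where Phi = beam power (W), MPE = maximum permissible exposure (W/cm^2),
# a = emergent beam diameter (cm), phi = beam divergence (rad).
# All numeric inputs below are illustrative placeholders.

def nohd_cm(power_w, mpe_w_per_cm2, beam_diam_cm, divergence_rad):
    d = math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_cm2)) - beam_diam_cm
    return max(d / divergence_rad, 0.0)  # beam never hazardous if d < 0

# Illustrative placeholder numbers only (not the scanner's specs):
print(round(nohd_cm(0.005, 1.0e-3, 0.2, 0.01) / 100.0, 2), "m")  # 2.32 m
```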
Several SIERRA applications make use of third-party libraries to solve systems of linear and nonlinear equations, and to solve eigenproblems. The classes and interfaces in the SIERRA framework that provide linear system assembly services and access to solver libraries are collectively referred to as solver services. This paper provides an overview of SIERRA's solver services including the design goals that drove the development, and relationships and interactions among the various classes. The process of assembling and manipulating linear systems will be described, as well as access to solution methods and other operations.
The Finite Element Interface to Linear Solvers (FEI) is a linear system assembly library. Sparse systems of linear equations arise in many computational engineering applications, and the solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver package capable of solving all of the linear systems that arise. This motivates the need to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by various solver libraries for data assembly and problem solution differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer that puts a 'common face' on various solver libraries. The FEI has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory. The original FEI offered several advantages over using linear algebra libraries directly, but also imposed significant limitations and disadvantages. A new set of interfaces has been added with the goal of removing the limitations of the original FEI while maintaining and extending its strengths.
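The "common face" idea above can be illustrated with a minimal sketch. This is not the actual FEI API (which is C++ and far richer); it is a toy abstraction layer, assuming a hypothetical `LinearSolver` interface and one toy dense backend, showing how application code stays backend-agnostic.

```python
# Illustrative sketch (NOT the FEI API) of a solver abstraction layer:
# the application codes against one interface, and concrete solver
# libraries plug in behind it.

class LinearSolver:
    """Hypothetical common interface an application codes against."""
    def assemble(self, matrix, rhs): raise NotImplementedError
    def solve(self): raise NotImplementedError

class DenseGaussSolver(LinearSolver):
    """Toy backend: Gaussian elimination with partial pivoting."""
    def assemble(self, matrix, rhs):
        self.A = [row[:] for row in matrix]
        self.b = rhs[:]
    def solve(self):
        A, b = self.A, self.b
        n = len(b)
        for k in range(n):                      # forward elimination
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            for i in range(k + 1, n):
                m = A[i][k] / A[k][k]
                for j in range(k, n):
                    A[i][j] -= m * A[k][j]
                b[i] -= m * b[k]
        x = [0.0] * n                           # back substitution
        for i in range(n - 1, -1, -1):
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

# The application never names the backend directly, so swapping
# libraries means changing only this one construction line:
solver: LinearSolver = DenseGaussSolver()
solver.assemble([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(solver.solve())
```

Swapping in a different solver library then touches only the construction of the concrete backend, which is precisely the library-specific code the FEI aims to minimize.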
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver, called parPCx, and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer).
We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
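The "computational core" mentioned above, the assembly of a sparse linear system at each interior-point iteration, can be sketched in miniature. This is a dense toy example, not parPCx: at each step one forms the normal-equations matrix M = A D A{sup T}, where D is a diagonal scaling built from the current primal/dual iterate, and solves M dy = r. The example data below are made up.

```python
# Minimal dense sketch (NOT parPCx) of the normal-equations assembly at
# the core of an interior-point iteration: given constraint matrix A and
# diagonal scaling D = diag(d) from the current iterate, each step solves
# (A D A^T) dy = r.  Hypothetical 2x3 example data below.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def normal_equations_matrix(A, d):
    """Form M = A * diag(d) * A^T (dense, for illustration only)."""
    AD = [[A[i][j] * d[j] for j in range(len(d))] for i in range(len(A))]
    At = [list(col) for col in zip(*A)]
    return matmul(AD, At)

A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
d = [0.5, 2.0, 1.0]          # scaling from the current iterate (made up)
M = normal_equations_matrix(A, d)
print(M)  # [[2.5, 2.0], [2.0, 3.0]] -- symmetric positive definite
```

In the parallel setting, distributing the rows and columns of A well (the role of the Zoltan hypergraph partitioner) is what keeps this assembly and solve scalable.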
The manipulation of physical interactions between structural moieties on the molecular scale is a fundamental hurdle in the realization and operation of nanostructured materials and high surface area microsystem architectures. These include such nano-interaction-based phenomena as self-assembly, fluid flow, and interfacial tribology. The proposed research utilizes photosensitive molecular structures to tune such interactions reversibly. This new material strategy provides optical actuation of nano-interactions impacting behavior on both the nano- and macroscales and with potential to impact directed nanostructure formation, microfluidic rheology, and tribological control.
As part of the U.S. Department of Energy (DOE) Office of Industrial Technologies (OIT) Industries of the Future (IOF) Forest Products research program, a collaborative investigation was conducted on the sources, characteristics, and deposition of particles intermediate in size between submicron fume and carryover in recovery boilers. Laboratory experiments on suspended-drop combustion of black liquor and on black liquor char bed combustion demonstrated that both processes generate intermediate size particles (ISP), amounting to 0.5-2% of the black liquor dry solids mass (BLS). Measurements in two U.S. recovery boilers show variable loadings of ISP in the upper furnace, typically between 0.6 and 3 g/Nm{sup 3}, or 0.3-1.5% of BLS. The measurements show that the ISP mass size distribution increases with size over the range 5-100 {micro}m, implying that a substantial amount of ISP inertially deposits on steam tubes. ISP particles are depleted in potassium, chlorine, and sulfur relative to the fuel composition. Comprehensive boiler modeling demonstrates that ISP concentrations are substantially overpredicted when using a previously developed algorithm for ISP generation. Equilibrium calculations suggest that alkali carbonate decomposition occurs at intermediate heights in the furnace and may lead to partial destruction of ISP particles formed lower in the furnace. ISP deposition is predicted to occur in the superheater sections, at temperatures greater than 750 C, when the particles are at least partially molten.
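The inference that larger ISP deposit inertially can be made concrete with a back-of-the-envelope Stokes-number estimate: St = {rho}{sub p}d{sub p}{sup 2}U/(18{mu}D) grows with the square of particle diameter, so 100 {micro}m particles are far more likely than 5 {micro}m particles to deviate from gas streamlines and strike a tube. Every property value in the sketch below is a rough, illustrative assumption, not a measurement from the boilers studied.

```python
# Back-of-the-envelope sketch of why the larger ISP (5-100 um) deposit
# inertially on steam tubes.  Particle Stokes number around a tube:
#   St = rho_p * d_p**2 * U / (18 * mu * D_tube)
# St << 1: particle follows the gas; St ~ 1 or larger: inertial impaction.
# ALL property values below are rough illustrative assumptions.

def stokes_number(rho_p, d_p, U, mu, D_tube):
    return rho_p * d_p**2 * U / (18.0 * mu * D_tube)

rho_p = 2000.0   # particle density, kg/m^3 (assumed)
U = 5.0          # upper-furnace gas velocity, m/s (assumed)
mu = 4.0e-5      # flue-gas viscosity at furnace temperature, Pa*s (assumed)
D_tube = 0.05    # steam-tube diameter, m (assumed)

for d_um in (5.0, 50.0, 100.0):
    St = stokes_number(rho_p, d_um * 1e-6, U, mu, D_tube)
    print(f"{d_um:5.0f} um  St = {St:8.4f}")
```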
The thermal hazard posed by large hydrocarbon fires is dominated by the radiative emission from high temperature soot. Since the optical properties of soot, especially in the infrared region of the electromagnetic spectrum, as well as its morphological properties, are not well known, efforts are underway to characterize these properties. Measurements of these soot properties in large fires are important for heat transfer calculations, for interpretation of laser-based diagnostics, and for developing soot property models for fire field models. This research uses extractive measurement diagnostics to characterize soot optical properties, morphology, and composition in 2 m pool fires. For measurement of the extinction coefficient, soot extracted from the flame zone is transported to a transmission cell where measurements are made using both visible and infrared lasers. Soot morphological properties are obtained by transmission electron microscopy analysis of soot samples obtained thermophoretically within the flame zone, in the overfire region, and in the transmission cell. Soot composition, including carbon-to-hydrogen ratio and polycyclic aromatic hydrocarbon concentration, is obtained by analysis of soot collected on filters. Average dimensionless extinction coefficients of 8.4 {+-} 1.2 at 635 nm and 8.7 {+-} 1.1 at 1310 nm agree well with recent measurements in the overfire region of JP-8 and other fuels in lab-scale burners and fires. Average soot primary particle diameters, radii of gyration, and fractal dimensions agree with these recent studies. Rayleigh-Debye-Gans scattering theory applied to the measured fractal parameters shows qualitative agreement with the trends in measured dimensionless extinction coefficients. Soot density and chemistry results are detailed in the report.
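The dimensionless extinction coefficient reported above is conventionally backed out of a transmission-cell measurement via Beer-Lambert attenuation. The sketch below shows that inversion; the soot volume fraction and path length are illustrative assumptions, not values from this study.

```python
import math

# Hedged sketch of extracting a dimensionless extinction coefficient Ke
# from a transmission-cell measurement.  For path length L through soot
# of volume fraction fv, Beer-Lambert gives
#   I/I0 = exp(-Ke * fv * L / lambda)
# so  Ke = -lambda * ln(I/I0) / (fv * L).
# fv and L below are illustrative assumptions, not study values.

def dimensionless_extinction(transmittance, wavelength_m, fv, path_m):
    return -wavelength_m * math.log(transmittance) / (fv * path_m)

wavelength = 635e-9   # m, the visible laser wavelength from the abstract
fv = 1.0e-7           # soot volume fraction in the cell (assumed)
L = 1.0               # transmission-cell path length, m (assumed)

# A transmittance consistent with Ke = 8.4 at these assumed conditions:
tau = math.exp(-8.4 * fv * L / wavelength)
print(round(dimensionless_extinction(tau, wavelength, fv, L), 1))  # 8.4
```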
The measurement of heat flux in hydrocarbon fuel fires (e.g., diesel or JP-8) is difficult due to high temperatures and the sooty environment. Uncooled commercially available heat flux gages do not survive in long duration fires, and cooled gages often become covered with soot, thus changing the gage calibration. An alternate method that is rugged and relatively inexpensive is based on inverse heat conduction methods. Inverse heat-conduction methods estimate absorbed heat flux at specific material interfaces using temperature/time histories, boundary conditions, material properties, and usually an assumption of one-dimensional (1-D) heat flow. This method is commonly used at Sandia's fire test facilities. In this report, an uncertainty analysis was performed for a specific example to quantify the effect of input parameter variations on the estimated heat flux when using the inverse heat conduction method. The approach used was to compare results from a number of cases using modified inputs to a base case. The response of a 304 stainless-steel cylinder [about 30.5 cm (12 in.) in diameter and 0.32 cm (1/8 in.) thick] filled with 2.5-cm-thick (1-in.) ceramic fiber insulation was examined. The input parameters of the inverse heat conduction program that were varied included: steel-wall thickness, thermal conductivity, and volumetric heat capacity; insulation thickness, thermal conductivity, and volumetric heat capacity; temperature uncertainty; boundary conditions; temperature sampling period; and numerical inputs. One-dimensional heat transfer was assumed in all cases. Results of the analysis show that, at the maximum heat flux, the most important parameters were temperature uncertainty, steel thickness, and steel volumetric heat capacity. The use of constant thermal properties rather than temperature-dependent values also made a significant difference in the resultant heat flux; therefore, temperature-dependent values should be used.
As an example, several parameters were varied to estimate the uncertainty in heat flux. The result was 15-19% uncertainty at 95% confidence at the highest flux, neglecting multidimensional effects.
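The dominance of steel thickness and volumetric heat capacity in the sensitivity results has a simple physical basis that can be sketched with a lumped thin-wall energy balance (this toy model is not the Sandia inverse code, and the heating rate below is an illustrative assumption): the estimated flux scales linearly with both parameters, so a given percentage error in either propagates one-for-one into the flux.

```python
# Minimal sketch (NOT the Sandia inverse heat conduction code) of why wall
# thickness and volumetric heat capacity dominate the uncertainty: for a
# thin wall with a well-insulated back face, a lumped energy balance gives
#   q ~ (rho*c) * L * dT/dt,
# so the estimated flux is linear in both (rho*c) and L.
# The heating rate below is illustrative.

def lumped_flux(rho_c, thickness_m, dT_dt):
    """Absorbed heat flux (W/m^2) from a lumped thin-wall energy balance."""
    return rho_c * thickness_m * dT_dt

rho_c = 3.9e6        # volumetric heat capacity of 304 SS, J/(m^3*K) (approx.)
L = 0.0032           # wall thickness, m (1/8 in., from the abstract)
dT_dt = 10.0         # measured heating rate, K/s (illustrative)

q_base = lumped_flux(rho_c, L, dT_dt)
q_thick = lumped_flux(rho_c, 1.05 * L, dT_dt)   # +5% thickness error
print(round(q_base / 1000.0, 1), "kW/m^2")
print(round(100.0 * (q_thick - q_base) / q_base, 1), "% change in flux")
```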