We have investigated a model athermal system consisting of polystyrene (PS) nanoparticles (NPs) in PS melts. Neutron scattering shows that the chain dimensions expand in the presence of the NPs. We investigate this result theoretically using self-consistent PRISM theory, and also find an expansion in chain dimensions as a function of NP volume fraction. Recently it has been shown that nanoparticles can suppress dewetting in thin polymer films, a counterintuitive result since particles usually induce dewetting. Neutron reflectivity measurements have shown that the NPs phase separate to the surface, so one proposed mechanism for the inhibition of dewetting is that this segregation changes the surface energies. We calculate the density profiles for dilute NPs in polymer melts near a substrate using classical density functional theory, which shows that the NPs do indeed segregate to the surface.
Computer simulation tools used to predict the energy production of photovoltaic systems are needed in order to make informed economic decisions. These tools require input parameters that characterize module performance under various operational and environmental conditions. Depending upon the complexity of the simulation model, the required input parameters can vary from the limited information found on labels affixed to photovoltaic modules to an extensive set of parameters. The required input parameters are normally obtained indoors using a solar simulator or flash tester, or measured outdoors under natural sunlight. This paper compares measured performance parameters for three photovoltaic modules tested outdoors at the National Institute of Standards and Technology (NIST) and Sandia National Laboratories (SNL). Two of the three modules were custom fabricated using monocrystalline and silicon-film cells. The third, a commercially available module, utilized triple-junction amorphous silicon cells. The resulting data allow a comparison to be made between performance parameters measured at two laboratories with differing geographical locations and apparatus. This paper describes the apparatus used to collect the experimental data, the test procedures utilized, and the resulting performance parameters for each of the three modules. Using a computer simulation model, the impact that differences in measured parameters have on predicted energy production is quantified. Data presented for each module include power output at standard rating conditions and the influence of incident angle, air mass, and module temperature on each module's electrical performance. Measurements from the two laboratories are in excellent agreement. The power at standard rating conditions is within 1% for all three modules. Although the magnitude of the individual temperature coefficients varied by as much as 17% between the two laboratories, the impact on predicted performance at various temperature levels was minimal, less than 2%. The influence of air mass on the performance of the three modules measured at the two laboratories was in excellent agreement. The largest difference in measured results between the two laboratories was noted in the response of the modules to incident angles exceeding 75 deg.
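As a rough illustration of how one such parameter feeds a simulation model, a minimal sketch (Python) of a linear temperature-coefficient power correction; this is a standard simplification, and all numbers are invented rather than NIST/SNL data:

```python
# Hypothetical illustration of a linear temperature-coefficient power
# correction; parameter values are invented, not measured data.
def power_at_temperature(p_stc_w, gamma_per_c, cell_temp_c, ref_temp_c=25.0):
    """Scale power at standard rating conditions by a linear power
    temperature coefficient gamma (typically negative, in 1/degC)."""
    return p_stc_w * (1.0 + gamma_per_c * (cell_temp_c - ref_temp_c))

# Two labs reporting coefficients differing by 17% in magnitude still
# predict powers within about 2% of each other at a 20 degC excursion:
p_a = power_at_temperature(100.0, -0.0050, 45.0)
p_b = power_at_temperature(100.0, -0.0050 * 1.17, 45.0)
print(p_a, p_b, 100.0 * abs(p_a - p_b) / p_a)  # ~1.9% difference
```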
Vapor pressures and heats of vaporization are computed for the Industrial Fluid Properties Simulation Challenge (IFPSC) data set using the Towhee Monte Carlo molecular simulation program. Results are presented for the CHARMM27 and OPLS-aa force fields. Once again, the average result from the multiple force fields is a better predictor of the experimental value than either individual force field.
We present an exchange-correlation functional that enables an accurate treatment of systems with electronic surfaces. The functional is developed within the subsystem functional paradigm [1], combining the local density approximation for interior regions with a new functional designed for surface regions. It is validated for a variety of materials by calculations of (i) properties where surface effects exist, and (ii) established bulk properties. Good and coherent results are obtained, indicating that this functional may serve well as a universal first choice for solid-state systems. The good performance of this first subsystem functional also suggests that yet improved functionals can be constructed by this approach.
We develop a specialized treatment of electronic surface regions which, via the subsystem functional approach [1], can be used in functionals for self-consistent density-functional theory (DFT). Approximations for both exchange and correlation energies are derived for an electronic surface. An interpolation index is used to combine this surface-specific functional with a functional for interior regions. When the local density approximation (LDA) is used for the interior region, the end result is a straightforward density-gradient-dependent functional that shows promising results. Further improvement of the treatment of the interior region by the use of a local gradient expansion approximation is also discussed.
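Schematically, such an interpolation can be written as follows (our own notation, a sketch rather than the paper's exact definition), with an index X(r) that approaches 0 in interior regions, recovering the LDA there, and 1 near surfaces:

```latex
E_{xc}[n] \approx \int n(\mathbf{r})\,\Big[\bigl(1 - X(\mathbf{r})\bigr)\,
\epsilon_{xc}^{\mathrm{interior}}\bigl(n(\mathbf{r})\bigr)
+ X(\mathbf{r})\,\epsilon_{xc}^{\mathrm{surface}}\bigl(n(\mathbf{r}),\nabla n(\mathbf{r})\bigr)\Big]\,
\mathrm{d}\mathbf{r}
```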
Economists, systems analysts, engineers, regulatory specialists, and other experts were assembled from academia, the national laboratories, and the energy industry to discuss present restoration practices (many have already been defined to the level of operational protocols) in the sectors of the energy infrastructure as well as other infrastructures; to identify whether economics, a discipline concerned with the allocation of scarce resources, is explicitly or implicitly a part of restoration strategies; and to determine whether there are novel economic techniques and solution methods that could be used to help restore energy services more quickly than present practices allow, or to restore service more efficiently from an economic perspective.

Acknowledgements: Development of this work into a coherent product with a useful message has occurred thanks to the thoughtful support of several individuals. Kenneth Friedman, Department of Energy, Office of Energy Assurance, provided the impetus for the work, as well as several suggestions and reminders of direction along the way; funding from DOE/OEA was critical to the completion of this effort. Arnold Baker, Chief Economist, Sandia National Laboratories, and James Peerenboom, Director, Infrastructure Assurance Center, Argonne National Laboratory, provided valuable contacts that helped to populate the authoring team with the proper mix of economists, engineers, and systems and regulatory specialists to meet the objectives of the work. Several individuals reviewed the document at various stages of completion and provided suggestions valuable to the editing process: Jeffrey Roark, Economist, Tennessee Valley Authority; James R. Dalrymple, Manager of Transmission System Services and Transmission/Power Supply, Tennessee Valley Authority; William Mampre, Vice President, EN Engineering; Kevin Degenstein, EN Engineering; and Patrick Wilgang, Department of Energy, Office of Energy Assurance. With many authors, creating a document with a single voice is a difficult task. Louise Maffitt, Senior Research Associate, Institute for Engineering Research and Applications at New Mexico Institute of Mining & Technology (on contract to Sandia National Laboratories), served a vital role in the development of this document by taking the unedited, structured material and refining the basic language to bring the flow of the document as close to a single voice as one could hope for; her work made the job of reducing the content to a readable length an easier process. Additional editorial suggestions from the authors themselves, particularly from Sam Flaim, Steve Folga, and Doug Gotham, expedited this process.
Visualization in scientific computing refers to the process of transforming data produced by a simulation into graphical representations that help scientific users interpret the results. While the back-end rendering phase of this work can be performed efficiently in graphics card hardware, the front-end 'post processing' portion of visualization is currently performed entirely in software. Field-Programmable Gate Arrays (FPGAs) are an attractive option for accelerating post-processing operations because they enable users to offload computations into reconfigurable hardware. A key challenge in utilizing FPGAs for this work is developing an infrastructure that allows FPGAs to be integrated into a distributed visualization system. We propose a networked approach, where each post-processing FPGA is equipped with specialized network interface (NI) hardware that is capable of transporting graphics commands across the network to existing rendering resources. In this paper we discuss a NI for FPGAs that is comprised of a Chromium OpenGL interface, a TCP Offload Engine, and a Gigabit Ethernet module. A prototype system has been tested for a distributed isosurfacing application.
SOAR has two primary uses. (1) Predictive: (i) science-based process models enable optimized, automated weld procedures; (ii) virtual manufacturing enables the user to ask 'what if' and quickly find the answer; (iii) with SOAR, multiple welds need not be made in order to determine weld effects and required parameters. (2) Investigative: (i) welding problem mysteries can be solved by gathering evidence, identifying problem suspects, and testing with SOAR; (ii) most SOAR models are universal and can be applied to many different weld processes; (iii) SOAR helps the user understand the welding process.
The appearance of phosphatidylserine on the membrane surface of apoptotic cells (Jurkat, CHO, HeLa) is monitored by using a family of bis(Zn²⁺-2,2′-dipicolylamine) coordination compounds with appended fluorescein or biotin groups as reporter elements. The phosphatidylserine affinity group is also conjugated directly to a CdSe/CdS quantum dot to produce a probe suitable for prolonged observation without photobleaching. Apoptosis can be detected under a wide variety of conditions, including variations in temperature, incubation time, and binding media. Binding of each probe appears to be restricted to the cell membrane exterior, because no staining of organelles or internal membranes is observed.
Optimization-ready reduced-order models should target a particular output functional, span an applicable range of dynamic and parametric inputs, and respect the underlying governing equations of the system. To achieve this goal, we present an approach for determining a projection basis that uses a goal-oriented, model-based optimization framework. The mathematical framework permits consideration of general dynamical systems with general parametric variations. The methodology is applicable to both linear and nonlinear systems and to systems with many input parameters. This paper focuses on an initial presentation and demonstration of the methodology on a simple linear model problem of the two-dimensional, time-dependent heat equation with a small number of inputs. For this example, the reduced models determined by the new approach provide considerable improvement over those derived using the proper orthogonal decomposition.
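For contrast, a minimal sketch (Python/NumPy) of the baseline proper orthogonal decomposition against which the goal-oriented bases are compared; the snapshot data and sizes here are invented for illustration:

```python
# Baseline POD: extract a reduced basis from solution snapshots via SVD.
import numpy as np

def pod_basis(snapshots, r):
    """Return the leading-r POD modes of an (n_dof x n_snapshots) matrix."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r]

# Toy snapshot set: 200 DOFs, 50 time snapshots of a low-rank trajectory.
rng = np.random.default_rng(0)
snaps = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))
basis = pod_basis(snaps, r=4)
print(basis.shape)  # (200, 4); the reduced model evolves coordinates in this basis
```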
The Sandia Z accelerator has become a unique platform to study matter at extreme conditions. The large currents (20 MA, 200-300 ns rise time) and magnetic fields (several MG) produced by Z generate magnetic compression in the multi-Mbar regime, enabling quasi-isentropic compression experiments (ICE) to several Mbar stresses. Thus, the Z platform is useful in determining high-stress material isentropes, performing phase transition studies (including rapid solidification), obtaining constitutive property information, and estimating material strength at high stress. Furthermore, the magnetic pressure can also accelerate macroscopic flyer plates to velocities in excess of 30 km/s, so impact experiments can be performed to ultra-high pressures. In addition, the adiabatic release response of materials can be investigated through shock and release experiments, allowing hot, dense liquid states to be probed. The Z platform allows a large expanse of the equation-of-state surface to be explored, enabling new and exciting material dynamics experiments. Specific examples from each of the areas mentioned above will be discussed.
Ohmic contacts on p-type GaN utilizing Pd/Ir/Au metallization were fabricated and characterized. Metallized samples that were rapid thermally annealed at 400 °C for 1 min exhibited linear current-voltage characteristics. Specific ohmic contact resistivities as low as 2 × 10⁻⁵ Ω·cm² were achieved. Auger electron spectroscopy and x-ray photoelectron spectroscopy depth profiles of the annealed Pd/Ir/Au contacts revealed the formation of Pd- and Ir-related alloys at the metal-semiconductor junction, with the creation of Ga vacancies below the contact. The excellent contact resistance obtained is attributed to the formation of these Ga vacancies, which reduced the depletion region width at the junction.
The performance and reliability of microelectromechanical systems (MEMS) can be highly dependent on the control of the surface energetics in these structures. Examples of this sensitivity include the use of surface-modifying chemistries to control stiction, to minimize friction and wear, and to preserve favorable electrical characteristics in surface-micromachined structures. Silane modification of surfaces is one classic approach to controlling stiction in Si-based devices. The time-dependent efficacy of this modifying treatment has traditionally been evaluated by studying the impact of accelerated aging on device performance and conducting subsequent failure analysis. Our interest has been in identifying aging-related chemical signatures that represent the early stages of processes like silane displacement or chemical modification that eventually lead to device performance changes. We employ a series of classic surface characterization techniques along with multivariate statistical methods to study subtle changes in the silanized silicon surface and relate these to degradation mechanisms. Examples include the use of spatially resolved time-of-flight secondary ion mass spectrometry, photoelectron spectroscopy, photoluminescence imaging, and scanning probe microscopy to explore the penetration of water through a silane monolayer, the incorporation of contaminant species into a silane monolayer, and local displacement of silane molecules from the Si surface. We have applied this analytical methodology from the Si coupon level up to full MEMS devices. This approach can be generalized to other chemical systems to address issues of new materials integration into micro- and nano-scale systems.
Concerns about acts of terrorism against critical infrastructures have been on the rise for several years. Critical infrastructures are those physical structures and information systems (including cyber) essential to the minimum operations of the economy and government. The President's Commission on Critical Infrastructure Protection (PCCIP) probed the security of the nation's critical infrastructures. The PCCIP determined that the water infrastructure is highly vulnerable to a range of potential attacks. In October 1997, the PCCIP proposed a public/private partnership between the federal government and private industry to improve the protection of the nation's critical infrastructures. In early 2000, the EPA partnered with the Awwa Research Foundation (AwwaRF) and Sandia National Laboratories to create the Risk Assessment Methodology for Water Utilities (RAM-W™). Soon thereafter, they initiated an effort to create a template and minimum requirements for water utility Emergency Response Plans (ERPs). All public water utilities in the US serving populations greater than 3,300 are required to undertake both a vulnerability assessment and the development of an emergency response plan. This paper explains the initial steps of RAM-W™ and then demonstrates how the security risk assessment is fundamental to the ERP. During the development of RAM-W™, Sandia performed several security risk assessments at large metropolitan water utilities. As part of the scope of that effort, ERPs at each utility were reviewed to determine how well they addressed significant vulnerabilities uncovered during the risk assessment. The ERP will contain responses to other events as well (e.g., natural disasters) but should address all major findings of the security risk assessment.
Poly(N-isopropyl acrylamide) (PNIPAM) is perhaps the best-known member of the class of responsive polymers. Free PNIPAM chains have a lower critical solution temperature in water at ≈31 °C. This very sharp transition (≈5 °C) is attributed to alterations in the hydrogen-bonding interactions of the amide group. Grafted chains of PNIPAM have shown promise for creating responsive surfaces. Examples include controlling the adsorption of proteins or bacteria, regulating the flow of liquids in narrow filaments or mesoporous materials, controlling enzymatic activity, and releasing the contents of liposomes. Conformational changes of the polymer are likely to play a role in some of these applications, in addition to changes in local interactions. In this work we investigated the temperature-dependent conformational changes of grafted PNIPAM chains in D₂O using neutron reflection and AFM. The molecular weight (M) and surface density of the PNIPAM brushes were controlled using atom-transfer radical polymerization. We discovered a strong effect of surface density. At lower surface densities, in the range typically achieved with grafting-to methods, we observed very little conformational change. At higher surface densities, significant changes with temperature were observed. The results will be compared with numerical SCF calculations employing an effective (concentration-dependent) Flory-Huggins chi parameter extracted from the solution phase diagram. For the case of high M and high surface density, a non-monotonic change in profile shape with temperature was observed. This will be discussed in the context of the vertical phase separation predicted for brushes of water-soluble polymers within two-state models.
Our study (1) reported on the deformation response of nanocrystalline Ni during in situ dark-field transmission electron microscopy (DFTEM) straining experiments and showed what we view as direct and compelling evidence of grain boundary-mediated plasticity. Based on their analysis of the limited experimental data we presented, however, Chen and Yan (2) propose that the reported contrast changes more likely resulted from grain growth caused by electron irradiation and applied stress rather than from plastic deformation. Here, we give specific reasons why their assertions are incorrect and discuss how the measurement approaches they have used are inappropriate. Additionally, we present further evidence that supports our original conclusions. The method Chen and Yan employed to measure displacement merely probes the in-plane (two-dimensional) components of incremental strain occurring during the very short time interval shown [figure 3 in (1)] instead of the accumulated strain. As we noted explicitly in the supporting online material in (1), the loading was applied by pulsing the displacement manually. After each small displacement pulse, the monitored area always moved significantly within or even out of the field of view. Clear images could be obtained only when the sample position stabilized within the field of view, and at that time severe deformation was nearly complete. Thus, little incremental strain occurs during this short image sequence [figure 3 in (1)], as one might expect. We believe that the images shown in figure 3 of (1) are particularly valuable in understanding deformation in nanocrystalline materials. In general, the formation process of grain agglomerates simply occurred too fast to be recorded clearly. Moreover, instead of remaining constant after formation, the sizes of the grain agglomerates changed in a rather irregular manner in response to the deformation and fracture process (see, for example, Fig. 1, B to D). This indicates that strong grain boundary-related activity occurred inside the grain agglomerates. Figure 3 in (1), a short (0.5 s) extract from more than 6 hours of videotaped experimentation (imaged ahead of cracks), not only reveals the formation process of a grain agglomerate, but also shows conclusive evidence for grain rotation and excludes the effect of overall sample rotation. It should be noted that other small grains still exhibit some minor contrast changes in figure 3 in (1). Hence, using them as reference points yields measurements that may not be accurate to ±1 nm [as Chen and Yan (2) claim in their analysis] and limits the accuracy of their conclusions. Chen and Yan also claim that no deformation has occurred, yet simultaneously state that the analysis has a deformation measurement error of 0.5%. This is simply not consistent; even small strains of this order may cause plastic deformation. In contrast with previous in situ TEM experiments (3-5), the special sample design adopted in our investigation (1) ensured that all deformation was primarily concentrated in a bandlike area ahead of the propagating crack. We found that these grain agglomerates were observed only in this bandlike thinning area as a response to the applied loads (Fig. 1B). No similar phenomena were detected under the electron beam alone or in stressed areas apart from the main deformation area, and these phenomena have not been reported during in situ observations of this same material made by other researchers (5).
Subsequent cracks were always observed to follow this deformation area upon further displacement pulses (Fig. 1, C and D). This clearly indicates that the enlarged agglomerates do not result simply from electron irradiation plus stress, but rather from stress-induced deformation. In their comment, Chen and Yan claimed a linear relation between 'grain' area and time based on their measurements made from figure 3 in (1) and asserted that these measurements are exactly consistent with the classical grain growth equation. However, as we noted (1), the growth in size of this agglomerate is not isotropic and occurs in an irregular manner. For example, after bright contrast emerged from a grain about 6 nm in diameter, it remained well defined in size as a single, approximately equiaxed grain until t = 0.1 s (fig. S1). We have reproduced the 'grain growth' plot of Chen and Yan (Fig. 2) using our entire video image sequence (fig. S1). Clearly, the growth in area of the agglomerate is not consistent with linear grain growth. (Unfortunately, only a portion of these data could be included in the original paper for reasons of space.) Notably, Chen and Yan did not apply a similar 'grain growth' analysis to nearby grains; this would have yielded no information in support of their argument, as those grains show essentially no growth.
This report summarizes progress from the Laboratory Directed Research and Development (LDRD) program during fiscal year 2004. In addition to a programmatic and financial overview, the report includes progress reports from 352 individual R&D projects in 15 categories. The 15 categories are: (1) Advanced Concepts; (2) Advanced Manufacturing; (3) Biotechnology; (4) Chemical and Earth Sciences; (5) Computational and Information Sciences; (6) Differentiating Technologies; (7) Electronics and Photonics; (8) Emerging Threats; (9) Energy and Critical Infrastructures; (10) Engineering Sciences; (11) Grand Challenges; (12) Materials Science and Technology; (13) Nonproliferation and Materials Control; (14) Pulsed Power and High Energy Density Sciences; and (15) Corporate Objectives.
This paper surveys the needs associated with environmental monitoring and long-term environmental stewardship. Emerging sensor technologies are reviewed to identify compatible technologies for various environmental monitoring applications. The contaminants that are considered in this report are grouped into the following categories: (1) metals, (2) radioisotopes, (3) volatile organic compounds, and (4) biological contaminants. United States regulatory drivers are evaluated for different applications (e.g., drinking water, storm water, pretreatment, and air emissions), and sensor requirements are derived from these regulatory metrics. Sensor capabilities are then summarized according to contaminant type, and the applicability of the different sensors to various environmental monitoring applications is discussed.
Time domain reflectometry (TDR) operates by propagating a radar-frequency electromagnetic pulse down a transmission line while monitoring the reflected signal. As the electromagnetic pulse propagates along the transmission line, it is subject to impedance changes governed by the dielectric properties of the media along the transmission line (e.g., air, water, and sediment), to reflection at dielectric discontinuities (e.g., the air-water or water-sediment interface), and to attenuation by electrically conductive materials (e.g., salts and clays). Taken together, these characteristics provide a basis for integrated stream monitoring, specifically, concurrent measurement of stream stage, channel profile, and aqueous conductivity. Requisite for such application is a means of extracting the desired stream parameters from measured TDR traces. Analysis is complicated by the fact that interface location and aqueous conductivity vary concurrently and multiple interfaces may be present at any time. For this reason a physically based multisection model employing the S11 scatter function and Debye parameters for dielectric dispersion and loss is used to analyze acquired TDR traces. Here we explore the capability of this multisection modeling approach for interpreting TDR data acquired from complex environments, such as those encountered in stream monitoring. A series of laboratory tank experiments was performed in which the depth of water, depth of sediment, and conductivity were varied systematically. Comparisons between modeled and independently measured data indicate that TDR measurements can be made with an accuracy of ±3.4 × 10⁻³ m for sensing the location of an air-water or water-sediment interface and ±7.4% of the actual value for the aqueous conductivity.
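A minimal sketch (Python) of why TDR senses such interfaces: the reflection coefficient at a boundary between media of different relative permittivity, in a lossless, non-dispersive approximation (the multisection model above additionally includes Debye dispersion and loss); the sediment permittivity below is an assumed value:

```python
# Reflection coefficient at a dielectric interface on a transmission line;
# the line impedance scales as 1/sqrt(relative permittivity).
import math

def reflection_coefficient(eps1, eps2):
    """Gamma at an interface between media with permittivities eps1 -> eps2."""
    z1, z2 = 1.0 / math.sqrt(eps1), 1.0 / math.sqrt(eps2)
    return (z2 - z1) / (z2 + z1)

print(reflection_coefficient(1.0, 80.0))   # air -> water: strong negative reflection
print(reflection_coefficient(80.0, 25.0))  # water -> saturated sediment (assumed eps)
```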
How might the quality of a city's delivered water be compromised through natural or malevolent causes? What are the consequences of a contamination event? Which water utility assets are at greatest risk of compromise? Utility managers have been scrambling to find answers to these questions since the events of 9/11. However, even before that date utility managers were concerned with the potential for system contamination through natural or accidental causes. Unfortunately, an integrated tool for assessing both the threat of attack/failure and the subsequent consequence has been lacking. To help with this problem we combine Markov Latent Effects modeling for performing threat assessment calculations with the widely used pipe hydraulics/transport code, EPANET, for consequence analysis. Together, information from these models defines the risk posed to the public due to natural or malevolent contamination of a water utility system. Here, this risk assessment framework is introduced and demonstrated within the context of vulnerability assessment for water distribution systems.
An S-band 20 MeV electron linear accelerator formerly used for medical applications has been recommissioned to support a wide range of photonuclear activation studies as well as studies of radiation effects on biological and microelectronic systems. Four radiation-effect applications involving the electron/photon beams are described. Photonuclear activation of a stable isotope of oxygen provides an active means of characterizing polymer degradation. Biological irradiations of microorganisms including bacteria were used to study total-dose and dose-rate effects on survivability and the adaptation of these organisms to repeated exposures. Microelectronic devices including bipolar junction transistors (BJTs) and diodes were irradiated to study photocurrent from these devices as a function of peak dose rate, with comparisons to computer modeling results. In addition, the 20 MeV linac may easily be converted to a medium-energy neutron source, which has been used to study neutron damage effects on transistors.
Experimental results [1] for the reflection coefficient of shock-compressed xenon are compared with results from quantum molecular dynamics calculations with density functional theory (DFT). The real part of the optical conductivity is calculated within the Kubo-Greenwood formalism, and the Kramers-Kronig relations are used to generate the reflectivity and other optical properties. Improved agreement over non-ideal plasma theory [2] is found with the DFT calculations, but significant differences with the data remain. Since DFT in the various local density approximations tends to underestimate the band gap and overestimate the free-electron population, we have used the ionizations from [2] to correct the DFT band gaps. This results in much improved agreement with the xenon reflectivity data and demonstrates a new approach to correcting DFT band gaps.
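For reference, the standard relations involved: the Kramers-Kronig transform linking the real and imaginary parts of the conductivity, and the normal-incidence reflectivity obtained from the resulting dielectric function (generic textbook forms, not expressions specific to this work):

```latex
\sigma_2(\omega) = -\frac{2\omega}{\pi}\,
\mathcal{P}\!\int_0^{\infty}\frac{\sigma_1(\nu)}{\nu^{2}-\omega^{2}}\,\mathrm{d}\nu,
\qquad
r(\omega) = \frac{1-\sqrt{\varepsilon(\omega)}}{1+\sqrt{\varepsilon(\omega)}},
\qquad
R(\omega) = \lvert r(\omega)\rvert^{2}
```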
INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.
We report a fully integrated high-Q-factor micro-ring resonator using silicon nitride/dioxide on a silicon wafer. The micro-ring resonator is critically coupled to a low-loss straight waveguide. An intrinsic quality factor of 2.4 × 10⁵ has been measured.
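For context, the standard resonator relation at critical coupling, where the external coupling rate matches the intrinsic loss rate, so the loaded Q is half the intrinsic Q:

```latex
\frac{1}{Q_{\mathrm{loaded}}} = \frac{1}{Q_{\mathrm{intrinsic}}} + \frac{1}{Q_{\mathrm{ext}}},
\qquad
Q_{\mathrm{ext}} = Q_{\mathrm{intrinsic}}
\;\Rightarrow\;
Q_{\mathrm{loaded}} = \tfrac{1}{2}\,Q_{\mathrm{intrinsic}}
```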
A series of amorphous silicate materials with the general formula Na_{x+2y}M³⁺_{x}Si_{1-x}O_{2+y} (M³⁺ = Al, Mn, Fe, Y) was studied. Samples were synthesized by a precipitation reaction at room temperature. The results indicate that the ion-exchange capacity (IEC) decreases in the order Al > Fe > Mn > Y. Additionally, the IEC increases with increasing aluminum concentration. Structural studies show that the relative amount of octahedrally coordinated aluminum increases with increasing Al content, as does the total amount of AlO₄ species. The data suggest that the IEC value of these amorphous aluminosilicates depends on the tetrahedrally coordinated aluminum. Regeneration of the Al-silicate with acetic acid does not decrease the IEC significantly.
Organic light-emitting diodes (OLEDs), with few exceptions, are fabricated in the standard way of sequentially depositing active layers and electrodes onto a substrate. The conventional devices have 'a detrimental layer' at the interface between the organic and the top metal electrode, because evaporation results in metal in-diffusion and chemical disruption at the metal-organic interface. Here, a different approach to constructing OLEDs is introduced: soft contact lamination (SCL), based on physical lamination of thin metal electrodes supported by an elastomeric layer against the electroluminescent organic layer. This method produces spatially homogeneous, intimate contacts via van der Waals interaction between the metal and the organic, resulting in no chemical or physical damage to the organic. Devices fabricated by SCL are shown to have no detrimental layer and fewer luminescence-quenching channels than conventional devices that have evaporated top metal electrodes.
Policy-based network management (PBNM) uses policy-driven automation to manage complex enterprise and service provider networks. Such management is strongly supported by industry standards, state of the art technologies and vendor product offerings. We present a case for the use of PBNM and related technologies for end-to-end service delivery. We provide a definition of PBNM terms, a discussion of how such management should function and the current state of the industry. We include recommendations for continued work that would allow for PBNM to be put in place over the next five years in the unclassified environment.
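As a toy illustration of the condition/action rules that PBNM systems evaluate, a minimal sketch (Python); the rule, field names, and application label are hypothetical, not drawn from any particular standard or product:

```python
# Hypothetical condition/action policy rule of the kind PBNM evaluates.
from dataclasses import dataclass

@dataclass
class Flow:
    app: str
    dscp: int  # DiffServ code point currently marked on the traffic

def apply_policy(flow: Flow) -> Flow:
    # Condition: voice traffic. Action: remark to Expedited Forwarding (DSCP 46).
    if flow.app == "voip" and flow.dscp != 46:
        flow.dscp = 46
    return flow

print(apply_policy(Flow(app="voip", dscp=0)))  # Flow(app='voip', dscp=46)
```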
ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation in inviscid fluids and solids. This document describes user options for modeling resistive magnetohydrodynamics, thermal conduction, radiation transport effects, and two-temperature material physics.
To establish mechanical properties and failure criteria of silicon carbide (SiC-N) ceramics, a series of quasi-static compression tests has been completed using a high-pressure vessel and a unique sample alignment jig. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Results from the uniaxial and triaxial compression tests established the failure threshold for the SiC-N ceramics in terms of the stress invariants (I₁ and J₂) over the range 1246 MPa < I₁ < 2405 MPa. In this range, results are fitted to the following limit function (Fossum and Brannon, 2004):

√J₂ (MPa) = a₁ − a₃ exp(−a₂ I₁/3) + a₄ I₁/3,

where a₁ = 10181 MPa, a₂ = 4.2 × 10⁻⁴, a₃ = 11372 MPa, and a₄ = 1.046. Combining these quasi-static triaxial compression strength measurements with existing data at higher pressures naturally results in different values for the least-squares fit to this function, appropriate over a broader pressure range. These triaxial compression tests are significant because they constitute the first successful measurements of SiC-N compressive strength under quasi-static conditions. Having an unconfined compressive strength of ≈3800 MPa, SiC-N had heretofore been tested only under dynamic conditions to achieve a sufficiently large load to induce failure. Obtaining reliable quasi-static strength measurements required the design of a special alignment jig and load-spreader assembly, as well as redundant gages to ensure alignment. When considered in combination with existing dynamic strength measurements, these data significantly advance the characterization of the pressure dependence of strength, which is important for penetration simulations, where failed regions are often at lower pressures than intact regions.
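A minimal sketch (Python) evaluating the fitted limit function quoted above over the tested I₁ range, using the constants from the text:

```python
# Fitted quasi-static failure threshold for SiC-N (constants from the text).
import math

A1, A2, A3, A4 = 10181.0, 4.2e-4, 11372.0, 1.046  # a1, a3 in MPa

def sqrt_j2_limit(i1_mpa):
    """sqrt(J2) failure threshold (MPa) vs. first stress invariant I1 (MPa)."""
    return A1 - A3 * math.exp(-A2 * (i1_mpa / 3.0)) + A4 * (i1_mpa / 3.0)

for i1 in (1246.0, 1800.0, 2405.0):  # span of the tested range
    print(f"I1 = {i1:.0f} MPa -> sqrt(J2) = {sqrt_j2_limit(i1):.0f} MPa")
```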
A laser hazard analysis and safety assessment was performed for the 3rd Tech DeltaSphere-3000® Laser 3D Scene Digitizer, an infrared laser scanner, based on the 2000 version of the American National Standards Institute standard Z136.1, Safe Use of Lasers. The portable scanner system is used in the Robotic Manufacturing Science and Engineering Laboratory (RMSEL). This scanning system had been proposed as a demonstrator for a new application. The manufacturer lists the Nominal Ocular Hazard Distance (NOHD) as less than 2 meters. It was necessary that SNL validate this NOHD prior to the scanner's use as a demonstrator involving the general public. A formal laser hazard analysis is presented for the typical mode of operation in the current configuration, as well as for a possible modified mode and alternative configuration.
Several SIERRA applications make use of third-party libraries to solve systems of linear and nonlinear equations, and to solve eigenproblems. The classes and interfaces in the SIERRA framework that provide linear system assembly services and access to solver libraries are collectively referred to as solver services. This paper provides an overview of SIERRA's solver services including the design goals that drove the development, and relationships and interactions among the various classes. The process of assembling and manipulating linear systems will be described, as well as access to solution methods and other operations.
The Finite Element Interface to Linear Solvers (FEI) is a linear system assembly library. Sparse systems of linear equations arise in many computational engineering applications, and the solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver package capable of solving all of the linear systems that arise. This motivates the need to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by various solver libraries for data assembly and problem solution differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer that puts a 'common face' on various solver libraries. The FEI has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory. The original FEI offered several advantages over using linear algebra libraries directly, but also imposed significant limitations and disadvantages. A new set of interfaces has been added with the goal of removing the limitations of the original FEI while maintaining and extending its strengths.
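A minimal sketch (Python) of the 'common face' idea: application code programs against one abstract interface, and a thin adapter per solver library translates to that library's native assembly and solve calls. All names here are illustrative, not the actual FEI API:

```python
# Abstraction layer putting a 'common face' on solver libraries (illustrative).
from abc import ABC, abstractmethod

class LinearSystem(ABC):
    @abstractmethod
    def sum_into(self, row: int, col: int, value: float) -> None: ...
    @abstractmethod
    def solve(self, rhs: list[float]) -> list[float]: ...

class ToyDenseAdapter(LinearSystem):
    """Stand-in for a real adapter; one such class would exist per library."""
    def __init__(self, n: int):
        self.a = [[0.0] * n for _ in range(n)]
    def sum_into(self, row, col, value):
        self.a[row][col] += value          # would call the library's assembly API
    def solve(self, rhs):
        # a real adapter would invoke the library's solver; this toy
        # handles only diagonal systems for demonstration
        return [b / self.a[i][i] for i, b in enumerate(rhs)]

system = ToyDenseAdapter(2)
system.sum_into(0, 0, 2.0); system.sum_into(1, 1, 4.0)
print(system.solve([2.0, 8.0]))  # [1.0, 2.0]; swapping libraries = swapping adapters
```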
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project 'Massively-Parallel Linear Programming'. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver, called parPCx, and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
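For reference, the standard-form problem that an interior-point LP solver such as PCx addresses:

```latex
\min_{x \in \mathbb{R}^{n}} \; c^{\mathsf{T}} x
\quad \text{subject to} \quad Ax = b, \;\; x \ge 0
```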
The manipulation of physical interactions between structural moieties on the molecular scale is a fundamental hurdle in the realization and operation of nanostructured materials and high surface area microsystem architectures. These include such nano-interaction-based phenomena as self-assembly, fluid flow, and interfacial tribology. The proposed research utilizes photosensitive molecular structures to tune such interactions reversibly. This new material strategy provides optical actuation of nano-interactions impacting behavior on both the nano- and macroscales and with potential to impact directed nanostructure formation, microfluidic rheology, and tribological control.
As part of the U.S. Department of Energy (DOE) Office of Industrial Technologies (OIT) Industries of the Future (IOF) Forest Products research program, a collaborative investigation was conducted on the sources, characteristics, and deposition of particles intermediate in size between submicron fume and carryover in recovery boilers. Laboratory experiments on suspended-drop combustion of black liquor and on black liquor char bed combustion demonstrated that both processes generate intermediate size particles (ISP), amounting to 0.5-2% of the black liquor dry solids mass (BLS). Measurements in two U.S. recovery boilers show variable loadings of ISP in the upper furnace, typically between 0.6-3 g/Nm³, or 0.3-1.5% of BLS. The measurements show that the ISP mass size distribution increases with size from 5-100 µm, implying that a substantial amount of ISP inertially deposits on steam tubes. ISP particles are depleted in potassium, chlorine, and sulfur relative to the fuel composition. Comprehensive boiler modeling demonstrates that ISP concentrations are substantially overpredicted when using a previously developed algorithm for ISP generation. Equilibrium calculations suggest that alkali carbonate decomposition occurs at intermediate heights in the furnace and may lead to partial destruction of ISP particles formed lower in the furnace. ISP deposition is predicted to occur in the superheater sections, at temperatures greater than 750 °C, when the particles are at least partially molten.
The thermal hazard posed by large hydrocarbon fires is dominated by the radiative emission from high-temperature soot. Since the optical properties of soot, especially in the infrared region of the electromagnetic spectrum, as well as its morphological properties, are not well known, efforts are underway to characterize these properties. Measurements of these soot properties in large fires are important for heat transfer calculations, for interpretation of laser-based diagnostics, and for developing soot property models for fire field models. This research uses extractive measurement diagnostics to characterize soot optical properties, morphology, and composition in 2 m pool fires. For measurement of the extinction coefficient, soot extracted from the flame zone is transported to a transmission cell where measurements are made using both visible and infrared lasers. Soot morphological properties are obtained by transmission electron microscopy analysis of soot samples obtained thermophoretically within the flame zone, in the overfire region, and in the transmission cell. Soot composition, including carbon-to-hydrogen ratio and polycyclic aromatic hydrocarbon concentration, is obtained by analysis of soot collected on filters. Average dimensionless extinction coefficients of 8.4 ± 1.2 at 635 nm and 8.7 ± 1.1 at 1310 nm agree well with recent measurements in the overfire region of JP-8 and other fuels in lab-scale burners and fires. Average soot primary particle diameters, radii of gyration, and fractal dimensions agree with these recent studies. Rayleigh-Debye-Gans scattering theory applied to the measured fractal parameters shows qualitative agreement with the trends in measured dimensionless extinction coefficients. Results of the density and chemistry analyses are detailed in the report.
The measurement of heat flux in hydrocarbon fuel fires (e.g., diesel or JP-8) is difficult due to high temperatures and the sooty environment. Un-cooled commercially available heat flux gages do not survive in long-duration fires, and cooled gages often become covered with soot, thus changing the gage calibration. An alternate method that is rugged and relatively inexpensive is based on inverse heat conduction methods. Inverse heat-conduction methods estimate absorbed heat flux at specific material interfaces using temperature/time histories, boundary conditions, material properties, and usually an assumption of one-dimensional (1-D) heat flow. This method is commonly used at Sandia's fire test facilities. In this report, an uncertainty analysis was performed for a specific example to quantify the effect of input parameter variations on the estimated heat flux when using the inverse heat conduction method. The approach used was to compare results from a number of cases using modified inputs against a base case. The response of a 304 stainless-steel cylinder [about 30.5 cm (12 in.) in diameter and 0.32 cm (1/8 in.) thick] filled with 2.5-cm-thick (1-in.) ceramic fiber insulation was examined. The input parameters of the inverse heat conduction program that were varied were steel-wall thickness, thermal conductivity, and volumetric heat capacity; insulation thickness, thermal conductivity, and volumetric heat capacity; temperature uncertainty; boundary conditions; temperature sampling period; and numerical inputs. One-dimensional heat transfer was assumed in all cases. Results of the analysis show that, at the maximum heat flux, the most important parameters were temperature uncertainty, steel thickness, and steel volumetric heat capacity. The use of constant thermal properties rather than temperature-dependent values also made a significant difference in the resultant heat flux; therefore, temperature-dependent values should be used. As an example, several parameters were varied together to estimate the uncertainty in heat flux. The result was 15-19% uncertainty at 95% confidence at the highest flux, neglecting multidimensional effects.
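A minimal sketch (Python) of the perturb-and-compare approach described above: rerun a model with one input modified at a time and compare against the base case. The model function, parameter values, and the 5% variation are placeholders, not the actual inverse heat-conduction code or its input ranges:

```python
# One-at-a-time input perturbation against a base case (illustrative only).
def model(params):
    # placeholder for the inverse heat-conduction flux estimate
    return params["wall_thickness_m"] * params["heat_capacity_j_m3k"]

base = {"wall_thickness_m": 0.0032, "heat_capacity_j_m3k": 3.9e6}
flux_base = model(base)
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.05  # assumed +5% input variation
    delta = (model(perturbed) - flux_base) / flux_base
    print(f"{name}: {100 * delta:+.1f}% change in estimated flux")
```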
This study demonstrates that containment of municipal and hazardous waste in arid and semiarid environments can be accomplished effectively without traditional synthetic materials and complex multi-layer systems. This research demonstrates that closure covers combining layers of natural soil, native plant species, and climatic conditions to form a sustainable, functioning ecosystem will meet the technical equivalency criteria prescribed by the U.S. Environmental Protection Agency. In this study, percolation through a natural analogue and an engineered cover is simulated using the one-dimensional numerical code UNSAT-H. UNSAT-H is a Richards'-equation-based model that simulates soil water infiltration, unsaturated flow, redistribution, evaporation, plant transpiration, and deep percolation. This study incorporates conservative, site-specific soil hydraulic and vegetation parameters. Historical meteorological data are used to simulate percolation through the natural analogue and an engineered cover, with and without vegetation. This study indicates that a 3-foot (ft) cover in arid and semiarid environments is the minimum design thickness necessary to meet the U.S. Environmental Protection Agency-prescribed technical equivalency criteria of 31.5 millimeters/year and 1 × 10⁻⁷ centimeters/second for net annual percolation and average flux, respectively. Increasing cover thickness to 4 or 5 ft results in limited additional improvement in cover performance.
Metallic phases in extraterrestrial materials are composed of Fe-Ni with minor amounts of Co, P, Si, Cr, etc. Electron microscopy techniques (SEM, TEM, EPMA, AEM) have been used for almost 50 years to study micron and submicron microscopic features in the metal phases (Fig. 1) such as clear taenite, cloudy zone, plessite, etc. [1,2]. However, the lack of instrumentation to prepare TEM thin foils at specific sample locations and to obtain micro-scale crystallographic data has limited these investigations. New techniques such as the focused ion beam (FIB) and electron backscatter diffraction (EBSD) techniques have overcome these limitations. The application of the FIB instrument has allowed us to prepare ≈10 µm long by ≈5 µm deep TEM thin sections of metal phases from specific regions of metal particles, in chondrites, irons, and stony-iron meteorites, identified by optical and SEM observation. Using a FEI dual-beam FIB we were able to study very small metal particles in samples of CH chondrites [3] and zoneless plessite (ZP) in ordinary chondrites. Fig. 2 shows a SEM photomicrograph of a ≈40 µm ZP particle in Kernouve, an H6 chondrite. Fig. 3a,b shows a TEM photograph of a section of the FIB-prepared TEM foil of the ZP particle and a Ni trace through a tetrataenite/kamacite region of the particle. It has been proposed that the Widmanstätten pattern in low-P iron meteorites forms by martensite decomposition, via the reaction γ → α₂ + γ → α + γ, in which α₂, martensite, decomposes to the equilibrium α and γ phases during the cooling process [4]. In order to show whether this mechanism for Widmanstätten pattern formation is correct, crystallographic information is needed from the γ (taenite) phases throughout a given meteorite. The EBSD technique was employed in this study to obtain the orientation of the taenite surrounding the initial martensite phase and of the kamacite, which forms as α₂ or as Widmanstätten plates, in a series of IVB irons. Fig. 4a,b shows EBSD orientation maps of taenite and kamacite from the Tawallah Valley IVB iron. We observe that the orientation of the taenite in the IVB meteorites is the same throughout the sample, consistent with the orientation of the high-temperature single-phase taenite before formation of the Widmanstätten pattern.
Effective, high-performance networked file systems and storage are needed to solve I/O bottlenecks between large compute platforms. Frequently, parallel techniques such as PFTP are employed to overcome the adverse effect of TCP's congestion avoidance algorithm in order to achieve reasonable aggregate throughput. These techniques can suffer from end-system bottlenecks due to the protocol processing overhead and memory copies involved in moving large amounts of data during I/O. Moreover, transferring data using PFTP requires manual operation, lacking the transparency to allow for interactive visualization and computational steering of large-scale simulations from distributed locations. This paper evaluates the emerging Internet SCSI (iSCSI) protocol [2] as the file/data transport so that remote clients can transparently access data through a distributed global file system available to local clients. We started our work by characterizing the performance behavior of iSCSI in Local Area Networks (LANs). We then proceeded to study the effect of propagation delay on throughput using remote iSCSI storage and explored optimization techniques to mitigate the adverse effects of long delay in high-bandwidth Wide Area Networks (WANs). Lastly, we evaluated iSCSI in a Storage Area Network (SAN) for a global parallel filesystem. We conducted our benchmarks based on the typical usage model of large-scale scientific applications at Sandia. We demonstrated the benefit of high-performance parallel I/O to scientific applications at the IEEE 2004 Supercomputing Conference, using experiences and knowledge gained from this study.
The potential energy surface for the reaction between OH and acetylene has been calculated using the RQCISD(T) method and extrapolated to the complete basis-set limit. Rate coefficients were determined for a wide range of temperatures and pressures, based on this surface and the solution of the one-dimensional and two-dimensional master equations. With a small adjustment to the association energy barrier (1.1 kcal/mol), agreement with experiments is good, considering the discrepancies in such data. The rate coefficient for direct hydrogen abstraction is significantly smaller than that commonly used in combustion models. Also in contrast to previous models, ketene + H is found to be the main product at normal combustion conditions. At low temperatures and high pressures, stabilization of the C₂H₂OH adduct is the dominant process. Rate coefficient expressions for use in modeling are provided.
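Such rate-coefficient expressions are conventionally reported in modified Arrhenius form, shown here symbolically (the fitted values of A, n, and E_a are those provided in the paper, not reproduced here):

```latex
k(T) = A\,T^{\,n}\exp\!\left(-\frac{E_a}{RT}\right)
```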
The National Spent Nuclear Fuel Program, located at the Idaho National Laboratory (INL), coordinates and integrates national efforts in management and disposal of US Department of Energy (DOE)-owned spent nuclear fuel. These management functions include development of standardised systems for long-term disposal in the proposed Yucca Mountain repository. Nuclear criticality control measures are needed in these systems to avoid restrictive fissile loading limits because of the enrichment and total quantity of fissile material in some types of the DOE spent nuclear fuel. This need is being addressed by development of corrosion-resistant, neutron-absorbing structural alloys for nuclear criticality control. This paper outlines results of a metallurgical development programme that is investigating the alloying of gadolinium into a nickel-chromium-molybdenum alloy matrix. Gadolinium has been chosen as the neutron absorption alloying element due to its high thermal neutron absorption cross section and low solubility in the expected repository environment. The nickel-chromium-molybdenum alloy family was chosen for its known corrosion performance, mechanical properties, and weldability. The workflow of this programme includes chemical composition definition, primary and secondary melting studies, ingot conversion processes, properties testing, and national consensus codes and standards work. The microstructural investigation of these alloys shows that the gadolinium addition is present in the alloy as a gadolinium-rich second phase. The mechanical strength values are similar to those expected for commercial Ni-Cr-Mo alloys. The alloys have been corrosion tested with acceptable results. The initial results of weldability tests have also been acceptable. Neutronic testing in a moderated critical array has generated favourable results. An American Society for Testing and Materials material specification has been issued for the alloy and a Code Case has been submitted to the American Society of Mechanical Engineers for code qualification.
Techniques for mitigating the adsorption of ¹³⁷Cs and ⁶⁰Co on metal surfaces (e.g., RAM packages) exposed to contaminated water (e.g., spent-fuel pools) have been developed and experimentally verified. The techniques are also effective in removing some of the ⁶⁰Co and ¹³⁷Cs that may have been adsorbed on the surfaces after removal from the contaminated water. The principle of the ¹³⁷Cs mitigation technique is based upon ion-exchange processes. In contrast, ⁶⁰Co contamination primarily resides in minute particles of crud that become lodged on cask surfaces. Crud is an insoluble Fe-Ni-Cr oxide that forms colloidal-sized particles as reactor cooling systems corrode. Because of the similarity between Ni²⁺ and Co²⁺, crud is able to scavenge and retain traces of cobalt as it forms. A number of organic compounds have a great specificity for combining with nickel and cobalt. Ongoing research is investigating the effectiveness of the chemical complexing agent EDTA with regard to its ability to dissolve the host phase (crud), thereby liberating the entrained ⁶⁰Co into solution where it can be rinsed away.
Using a magnetic pressure drive, an absolute measurement of stress and density along the principal compression isentrope is obtained for solid aluminum to 240 GPa. Reduction of the free-surface velocity data relies on a backward integration technique, with approximate accounting for unknown systematic errors in experimental timing. Maximum experimental uncertainties are ±4.7% in stress and ±1.4% in density, small enough to distinguish between different equation-of-state (EOS) models. The result agrees well with a tabular EOS that uses an empirical universal zero-temperature isotherm.
Spectral imaging, where a complete spectrum is collected from each of a series of spatial locations (1D lines, 2D images, or 3D volumes), is now available on a wide range of analytical tools - from electron and x-ray to ion beam instruments. With this capability to collect extremely large spectral images comes the need for automated data analysis tools that can rapidly and without bias reduce a large number of raw spectra to a compact, chemically relevant, and easily interpreted representation. It is clear that manual interrogation of individual spectra is impractical even for very small spectral images (< 5000 spectra). More typical spectral images can contain tens of thousands to millions of spectra, which, given the constraint of acquisition time, may contain between 5 and 300 counts per 1000-channel spectrum. Conventional manual approaches to spectral image analysis, such as summing spectra from regions or constructing x-ray maps, are prone to bias and possibly error. One way to comprehensively analyze spectral image data, which has been automated, is to utilize an unsupervised, self-modeling multivariate statistical analysis method such as multivariate curve resolution (MCR). This approach has proven capable of solving a wide range of analytical problems based upon the counting of x-rays (SEM/STEM-EDX, XRF, PIXE), electrons (EELS, XPS), and ions (TOF-SIMS). As an example of the MCR approach, a STEM x-ray spectral image from a ZrB₂-SiC composite was acquired and analyzed. The data were generated in a FEI Tecnai F30-ST TEM/STEM operated at 300 kV, equipped with an EDAX SUTW x-ray detector. The spectral image was acquired with the TIA software on the STEM at 128 by 128 pixels (12 nm/pixel) with a 100 ms dwell per pixel (total acquisition time was 30 minutes) and a probe of approximately the same size as each pixel. Each spectrum in the image had, on average, 500 counts. The calculation took 5 seconds on a PC workstation with dual 2.4 GHz Pentium IV Xeon processors and 2 Gbytes of RAM and resulted in four chemically relevant components, which are shown in Figure 1. The analysis region was at a triple junction of three ZrB₂ grains that contained zirconium oxide, aluminum oxide, and a glass phase. The power of unbiased statistical methods, such as MCR as applied here, is that no a priori knowledge of the material's chemistry is required. The algorithms, in this case, effectively reduced over 16,000 2000-channel spectra (64 Mbytes) to four images and four spectral shapes (72 kbytes), which in this case represent chemical phases. This three-order-of-magnitude compression is achieved rapidly with no loss of chemical information. There is also the potential to correlate multiple analytical techniques, for example EELS and EDS in the STEM, adding the light-element sensitivity and bonding information of EELS to the more comprehensive spectral coverage of EDS.
Wear is a critical factor in determining the durability of microelectromechanical systems (MEMS). While the reliability of polysilicon MEMS has received extensive attention, the mechanisms responsible for this failure mode at the microscale have yet to be conclusively determined. We have used on-chip polycrystalline silicon side-wall friction MEMS specimens to study active mechanisms during sliding wear in ambient air. Worn parts were examined by analytical scanning and transmission electron microscopy, while local temperature changes were monitored using advanced infrared microscopy. Observations show that small amorphous debris particles ({approx}50-100 nm) are removed by fracture through the silicon grains ({approx}500 nm) and are oxidized during this process. Agglomeration of such debris particles into larger clusters also occurs. Some of these debris particles/clusters create plowing tracks on the beam surface. A nano-crystalline surface layer ({approx}20-200 nm), with higher oxygen content, forms during wear at and below regions of the worn surface; its formation is likely aided by high local stresses. No evidence of dislocation plasticity or of extreme local temperature increases was found, ruling out the possibility of high temperature-assisted wear mechanisms.
Microanalysis is typically performed to analyze the near surface of materials. There are many instances, however, where chemical information about the third spatial dimension is essential to the solution of materials analyses. The majority of 3D analyses focus on limited spectral acquisition and/or analysis. For truly comprehensive 3D chemical characterization, 4D spectral images (a complete spectrum from each volume element of a region of a specimen) are needed. Furthermore, a robust statistical method is needed to extract the maximum amount of chemical information from that extremely large amount of data. In this paper, an example of the acquisition and multivariate statistical analysis of 4D (3-spatial and 1-spectral dimension) x-ray spectral images is described. The method of utilizing a single- or dual-beam FIB (without or with SEM) to access 3D chemistry has been described by others with respect to secondary-ion mass spectrometry. The basic methodology described in those works has been modified for comprehensive x-ray microanalysis in a dual-beam FIB/SEM (FEI Co. DB-235). In brief, the FIB is used to serially section a site-specific region of a sample, and the electron beam is then rastered over the exposed surfaces with x-ray spectral images being acquired at each section. All of this is performed without rotating or tilting the specimen between FIB cutting and SEM imaging/x-ray spectral image acquisition. The resultant 4D spectral image, or tomographic spectral image (TSI), is then unfolded (number of volume elements by number of channels) and subjected to the same multivariate curve resolution (MCR) approach that has proven successful for the analysis of lower-dimension x-ray spectral images. The TSI data sets can be in excess of 4 Gbytes. This problem has been overcome (for now), and images up to 6 Gbytes have been analyzed in this work. The method for analyzing such large spectral images will be described in this presentation. A comprehensive 3D chemical analysis was performed on several corrosion specimens of Cu electroplated with various metals. Figure 1A shows the top view of the localized corrosion region prepared for FIB sectioning. The TSI region has been coated with Pt, and a trench has been milled along the bottom edge of the region, exposing it to the electron beam as seen in Figure 1B. The TSI consisted of 25 sections and was approximately 6 Gbytes. Figure 1C shows several of the components rendered in 3D: green is Cu; blue is Pb; cyan represents one of the corrosion products, containing Cu, Zn, O, S, and C; and orange represents the other corrosion product, with Zn, O, S, and C. Figure 1D shows all of the component spectral shapes from the analysis. There is severe pathological overlap of the spectra from Ni, Cu, and Zn as well as from Pb and S; in spite of this, clean spectral shapes have been extracted from the TSI. This powerful TSI technique could be applied to other sectioning methods as well.
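The unfolding step itself is a simple reshape. A minimal sketch (with toy dimensions; the real TSIs here comprised 25 sections and gigabytes of data) might read:

    import numpy as np

    # Toy 4D spectral image: sections x rows x columns x channels.
    tsi = np.zeros((5, 32, 32, 256), dtype=np.float32)

    # Unfold to (number of volume elements) x (number of channels) for MCR.
    unfolded = tsi.reshape(-1, tsi.shape[-1])  # shape (5*32*32, 256)

    # Fold hypothetical MCR component concentrations back onto the 3D grid.
    k = 4  # assumed component count
    concentrations = np.zeros((unfolded.shape[0], k))  # placeholder MCR output
    volumes = concentrations.reshape(5, 32, 32, k)  # one 3D image per component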
Analyzing the performance of a complex System of Systems (SoS) requires a systems engineering approach. Many such SoS exist in the military domain; examples include the Army's next-generation Future Combat Systems 'Unit of Action' and the Navy's Aircraft Carrier Battle Group. In the case of a Unit of Action, a system of combat vehicles, support vehicles, and equipment is organized in an efficient configuration that minimizes logistics footprint while still maintaining the required performance characteristics (e.g., operational availability). In this context, systems engineering means developing a global model of the entire SoS and all component systems and interrelationships. This global model supports analyses that result in an understanding of the interdependencies and emergent behaviors of the SoS. Sandia National Laboratories will present a robust toolset that includes methodologies for developing a SoS model, defining state models, and simulating a system of state models over time. This toolset is currently used to perform logistics supportability and performance assessments of the set of Future Combat Systems (FCS) for the U.S. Army's Program Manager, Unit of Action.
Reducing agricultural water use in arid regions while maintaining or improving the economic productivity of the agriculture sector is a major challenge. Controlled environment agriculture (CEA, or greenhouse agriculture) affords advantages in direct resource use (less land and water required) and productivity (i.e., much higher product yield and quality per unit of resources used) relative to conventional open-field practices. These advantages come at the price of higher operating complexity and costs per acre. The challenge is to implement and apply CEA such that the productivity and resource use advantages sufficiently outweigh the higher operating costs to provide overall benefit and viability. This project undertook an investigation of CEA for livestock forage production as a water-saving alternative to open-field forage production in arid regions. Forage production is a large consumer of fresh water in many arid regions of the world, including the southwestern U.S. and northern Mexico. With increasing competition among uses (agriculture, municipalities, industry, recreation, ecosystems, etc.) for limited fresh water supplies, agricultural practice alternatives that can potentially maintain or enhance productivity while reducing water use warrant consideration. The project established a pilot forage production greenhouse facility in southern New Mexico based on a relatively modest and passive (no active heating or cooling) system design pioneered in Chihuahua, Mexico. Experimental operations were initiated in August 2004 and carried over into early FY05 to collect data and make initial assessments of operational and technical system performance, assess forage nutrition content and suitability for livestock, identify areas needing improvement, and make an initial assessment of overall feasibility. The effort was supported through the joint leveraging of late-start FY04 LDRD funds and bundled CY2004 project funding from the New Mexico Small Business Technical Assistance program at Sandia. Despite the lack of optimization of the project system, initial results show the dramatic water-savings potential of hydroponic forage production compared with traditional irrigated open-field practice. This project produced forage using only about 4.5% of the water required for equivalent open-field production; improved operation could bring water use to 2% or less. The hydroponic forage production system and process used in this project are labor intensive and not optimized for minimum water usage. Freshly harvested hydroponic forage has a high moisture content that dilutes its nutritional value by requiring that livestock consume more of it to get the same nutritional content as conventional forage. In most other respects the nutritional content compares well, on a dry-weight-equivalent basis, with other conventional forage. More work is needed to further explore and quantify the opportunities, limitations, and viability of this technique for broader use. Collection of greenhouse environmental data in this project was uniquely facilitated through the implementation and use of a self-organizing, wirelessly networked, multi-modal sensor system array with remote cell phone data link capability. Applications of wirelessly networked sensing with improved modeling/simulation and other Sandia technologies (e.g., advanced sensing and control, embedded reasoning, modeling and simulation, materials, robotics, etc.) can potentially contribute to significant improvement across a broad range of CEA applications.
Video and image data are knowledge-rich sources of information, but their utility for current and future systems is limited without autonomous methods for understanding and characterizing their content. Semantic-based video understanding may benefit systems dedicated to the detection of insiders, alarm patterns, unauthorized activities in material monitoring applications, etc. A direct benefit of this technology is not only intelligent alarm analysis, but the ability to browse and perform query-based searches for useful and interesting information after video data has been acquired and stored. These searches can provide a tremendous benefit for use in intelligence agency, government, military, and DOE site investigations. This report provides an initial investigation into the algorithms and methods needed to characterize and understand video content. Such algorithms include background modeling, detecting dynamic image regions, grouping dynamic pixels into coherent objects, and robust tracking strategies. With solid approaches for addressing these problems, analysis can be performed seeking to recognize distinctive objects and their motions leading to semantic-based video searches.
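As one concrete instance of the background-modeling step (a common running-average scheme, sketched here under our own assumptions rather than taken from the report):

    import numpy as np

    def update_background(frame, background, alpha=0.02, threshold=25):
        """One step of a running-average background model: returns the
        updated background and a boolean mask of dynamic pixels."""
        diff = np.abs(frame.astype(np.float32) - background)
        moving = diff > threshold  # candidate dynamic image regions
        background = (1 - alpha) * background + alpha * frame.astype(np.float32)
        return background, moving

The dynamic-pixel mask produced by such a model is the raw input to the grouping and tracking stages; alpha and threshold are tuning parameters, not values from this work.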
This SAND report provides the technical progress through October 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and coupled to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high-performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org.

Acknowledgment: We gratefully acknowledge the contributions of the GTL Project Team: Grant S. Heffelfinger1*, Anthony Martino2, Andrey Gorin3, Ying Xu10,3, Mark D. Rintoul1, Al Geist3, Matthew Ennis1, Hashim Al-Hashimi8, Nikita Arnold3, Andrei Borziak3, Bianca Brahamsha6, Andrea Belgrano12, Praveen Chandramohan3, Xin Chen9, Chongle Pan3, Paul Crozier1, PhuongAn Dam10, George S. Davidson1, Robert Day3, Jean-Loup Faulon2, Damian Gessler12, Arlene Gonzalez2, David Haaland1, William Hart1, Victor Havin3, Tao Jiang9, Howland Jones1, David Jung3, Ramya Krishnamurthy3, Yooli Light2, Shawn Martin1, Rajesh Munavalli3, Vijaya Natarajan3, Victor Olman10, Frank Olken4, Brian Palenik6, Byung Park3, Steven Plimpton1, Diana Roe2, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Chris Stork1, Charlie Strauss5, Zhengchang Su10, Edward Thomas1, Jerilyn A. Timlin1, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Gong-Xin Yu3, Grover Yip8, Zhaoduo Zhang2, Erik Zuiderweg8. *Author to whom correspondence should be addressed (gsheffe@sandia.gov). Affiliations: 1. Sandia National Laboratories, Albuquerque, NM; 2. Sandia National Laboratories, Livermore, CA; 3. Oak Ridge National Laboratory, Oak Ridge, TN; 4. Lawrence Berkeley National Laboratory, Berkeley, CA; 5. Los Alamos National Laboratory, Los Alamos, NM; 6. University of California, San Diego; 7. University of Illinois, Urbana/Champaign; 8. University of Michigan, Ann Arbor; 9. University of California, Riverside; 10. University of Georgia, Athens; 11. University of Missouri, Columbia; 12. National Center for Genome Resources, Santa Fe, NM. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Local topological modification is widely used to improve mesh quality after automatic generation of tetrahedral and quadrilateral meshes. These same techniques are also used to support adaptive refinement of these meshes. In contrast, few methods are known for locally modifying the topology of hexahedral meshes. Most efforts to do this have been based on fixed transition templates or global refinement. A dual-based 'pillowing' method has also been used which, while local, is still quite restricted in its application and is typically applied in a template-based fashion. In this presentation, I will describe the generalization of a dual-based approach to the local topological modification of hex meshes and its application to cleaning up hexahedral meshes. A set of three operations for locally modifying hex mesh topology has been shown to reproduce the so-called 'flipping' operations described by Bern et al. as well as other commonly used refinement templates. I will describe the implementation of these operators and their application to real meshes. Challenging aspects of this work have included visualization of a hex mesh and its dual (especially for poor-quality meshes); the incremental modification of both the primal (i.e., the mesh) and the dual simultaneously; and the interactive steering of these operations with the goal of improving hex meshes which would otherwise have unacceptable quality. These aspects will be discussed in the context of improving hex meshes generated by curve contraction-based whisker weaving. Application of these techniques to improving other hexahedral mesh types, for example those resulting from tetrahedral subdivision, will also be discussed.
We design a density-functional-theory (DFT) exchange-correlation functional that enables an accurate treatment of systems with electronic surfaces. Surface-specific approximations for both exchange and correlation energies are developed. A subsystem functional approach is then used: an interpolation index combines the surface functional with a functional for interior regions. When the local density approximation is used in the interior, the result is a straightforward functional for use in self-consistent DFT. The functional is validated for two metals (Al, Pt) and one semiconductor (Si) by calculations of (i) established bulk properties (lattice constants and bulk moduli) and (ii) a property where surface effects exist (the vacancy formation energy). Good and coherent results indicate that this functional may serve well as a universal first choice for solid-state systems and that yet improved functionals can be constructed by this approach.
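Schematically (in our notation, not necessarily that of the paper), the subsystem functional combines the two approximations through an interpolation index X(\mathbf{r}):

\[
E_{xc}[n] = \int n(\mathbf{r})\,\Big[ X(\mathbf{r})\,\epsilon_{xc}^{\mathrm{surf}}(\mathbf{r}) + \big(1 - X(\mathbf{r})\big)\,\epsilon_{xc}^{\mathrm{int}}(\mathbf{r}) \Big]\, d\mathbf{r},
\]

where \epsilon_{xc}^{\mathrm{surf}} is the surface-specific exchange-correlation energy density, \epsilon_{xc}^{\mathrm{int}} is the interior functional (the local density approximation in the case validated here), and X tends toward 1 near electronic surfaces and toward 0 in interior regions.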
Many highly oscillatory circuits have a wide separation of time scales between the underlying oscillation and the behavior of interest. This is particularly true of communication circuits. Multiple-time Partial Differential Equation (MPDE) methods offer substantial speed-up for these circuits by introducing a periodic artificial time variable that represents the highly oscillatory behavior. This leaves just the slowly changing behavior of interest, which can be integrated with much larger steps. One problem of particular interest is the larger initial condition this formulation requires: an entire waveform must be specified along the periodic artificial time variable. One possible solution is to formulate an optimization problem in the hope of increasing the step sizes taken in the slow time direction. This talk will discuss one possible unconstrained optimization problem for determining this initial condition. Numerical results and comparisons to several other initial-condition strategies will be presented, in addition to MPDE background and implementation issues.
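In the standard MPDE construction (written here in generic form), an ODE \dot{x}(t) = f(x(t)) + b(t) with a fast periodic drive is recast as

\[
\frac{\partial \hat{x}}{\partial t_{1}} + \frac{\partial \hat{x}}{\partial t_{2}} = f\big(\hat{x}(t_{1},t_{2})\big) + \hat{b}(t_{1},t_{2}),
\]

where t_{1} is the slow time, t_{2} is the periodic artificial (fast) time, and the solution of the original problem is recovered along the diagonal, x(t) = \hat{x}(t,t). The initial condition is now the entire waveform \hat{x}(0,t_{2}) over one fast period, which is precisely the quantity the optimization problem above seeks to choose well.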
In this paper, we explore the stability properties of time-domain numerical methods for multitime partial differential equations (MPDEs) in detail. We demonstrate that simple techniques for numerical discretization can lead easily to instability. By investigating the underlying eigenstructure of several discretization techniques along different artificial time scales, we show that not all combinations of techniques are stable. We identify choices of discretization method and step size, along fast and slow time scales, that lead to robust, stable time-domain integration methods for the MPDE. One of our results is that applying overstable methods along one time-scale can compensate for unstable discretization along others. Our novel integration schemes bring robustness to time-domain MPDE solution methods, as we demonstrate with examples.
Experimental evidence and corresponding theoretical analyses have led to the conclusion that the system composed of Xe hollow atom states, which produce a characteristic Xe(L) spontaneous emission spectrum at {approx} 2.9 {angstrom} and arise from the excitation of Xe clusters with an intense pulse of 248 nm radiation propagating in a self-trapped plasma channel, closely represents the ideal situation sought for amplification in the multikilovolt region. The key innovation that is central to all aspects of the proposed work is the controlled compression of power to the level ({approx} 10{sup 20} W/cm{sup 3}) corresponding to the maximum achieved by thermonuclear events. Furthermore, since the x-ray power that is produced appears in a coherent form, an entirely new domain of physical interaction is encountered that involves states of matter that are both highly excited and highly ordered. Moreover, these findings lead to the concept of 'photon staging', an idea which offers the possibility of advancing the power compression by an additional factor of {approx} 10{sup 9} to {approx} 10{sup 29} W/cm{sup 3}. In this completely unexplored regime, {gamma}-ray production ({h_bar}{omega}{sub {gamma}} {approx} 1 MeV) is expected to be a leading process. A new technology for the production of very highly penetrating radiation would then be available. The Xe(L) source at {h_bar}{omega}{sub x} {approx} 4.5 keV can be applied immediately to the experimental study of many aspects of the coupling of intense femtosecond x-ray pulses to materials. In a joint collaboration, the UIC group and Sandia plan to explore the following areas: (1) anomalous electromagnetic coupling to solid-state materials, (2) 3D nanoimaging of solid matter and hydrated biological materials (e.g., interchromosomal linkers and actin filaments in muscle), and (3) EMP generation with attosecond x-rays.
Because many solid objects, both stationary and mobile, will be present in an indoor environment, the design of an indoor aerosol cloud finding lidar (light detection and ranging) instrument presents a number of challenges. The cloud finder must be able to discriminate between these solid objects and aerosol clouds as small as 1 m in depth in order to probe suspect clouds. While a near-IR ({approx}1.5-{micro}m) laser is desirable for eye safety, aerosol scattering cross sections are significantly lower in the near-IR than at visible or UV wavelengths. The receiver must handle a large dynamic range, since the backscatter from solid objects will be orders of magnitude larger than from aerosol clouds. Fast electronics with significant noise contributions will be required to obtain the necessary temporal resolution. We have developed a laboratory instrument to detect aerosol clouds in the presence of solid objects. In parallel, we have developed a lidar performance model for performing trade studies. Careful attention was paid to component details so that results obtained in this study could be applied towards the development of a practical instrument. The amplitude and temporal shape of the signal return are analyzed for discrimination of aerosol clouds in an indoor environment. We have assessed the feasibility and performance of candidate approaches for a fieldable instrument. With a near-IR PMT and a 1.5-{micro}m laser source providing 20-{micro}J pulses, we estimate a bio-aerosol detection limit of 3000 particles/liter.
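Such trade studies rest on the standard single-scattering lidar equation (generic form, our notation):

\[
P(R) = P_{0}\,\frac{c\,\tau}{2}\,\frac{A\,\eta}{R^{2}}\,\beta(R)\,\exp\!\left(-2\int_{0}^{R}\alpha(r)\,dr\right),
\]

where P(R) is the received power from range R, P_{0} the transmitted power, \tau the pulse duration, A the receiver aperture area, \eta the system efficiency, \beta the backscatter coefficient, and \alpha the extinction coefficient; the small aerosol \beta at 1.5 {micro}m, together with the R^{-2} falloff, is what drives the detection limit quoted above.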
Solution-based synthesis is a powerful approach for creating nano-structured materials. Although there have been significant recent successes in its application to fabricating nanomaterials, the general principles that control solution synthesis are not well understood. The purpose of this LDRD project was to develop the scientific principles required to design and build unique nanostructures in crystalline oxides and II/VI semiconductors using solution-based molecular self-assembly techniques. The ability to synthesize these materials in a range of different nano-architectures (from controlled-morphology nanocrystals to surface-templated 3-D structures) has provided the foundation for new opportunities in such areas as interactive interfaces for optics, electronics, and sensors. The homogeneous precipitation of ZnO in aqueous solution was used primarily as the model system for the project. We developed a low-temperature, aqueous solution synthesis route for preparation of large arrays of oriented ZnO nanostructures. Through control of heterogeneous nucleation and growth, methods to predictively alter the ZnO microstructures by tailoring the surface chemistry of the crystals were established. Molecular mechanics simulations, involving single-point energy calculations and full geometry optimizations, were developed to assist in selecting appropriate chemical systems and understanding physical adsorption and ultimately growth mechanisms in the design of oxide nanoarrays. The versatility of peptide chemistry in controlling the formation of cadmium sulfide nanoparticles and zinc oxide/cadmium sulfide heterostructures was also demonstrated.
Solid-state {sup 1}H magic angle spinning (MAS) NMR was used to investigate sulfonated Diels-Alder poly(phenylene) polymer membranes. Under high-spinning-speed {sup 1}H MAS conditions, the proton environments of the sulfonic acid and phenylene polymer backbone are resolved. A double-quantum (DQ) filter using the rotor-synchronized back-to-back (BABA) NMR multiple-pulse sequence allowed the selective suppression of the sulfonic proton environment in the {sup 1}H MAS NMR spectra. This DQ filter, in conjunction with a spin diffusion NMR experiment, was then used to measure the domain size of the sulfonic acid component within the membrane. In addition, the temperature dependence of the sulfonic acid spin-spin relaxation time (T{sub 2}) was determined, providing an estimate of the activation energy for the proton dynamics of the dehydrated membrane.
The fundamental chemical behavior of the AlCl{sub 3}/SO{sub 2}Cl{sub 2} catholyte system was investigated using {sup 27}Al NMR spectroscopy, Raman spectroscopy, and single-crystal X-ray diffraction. Three major Al-containing species were found to be present in this catholyte system, where the ratio of each was dependent upon aging time, concentration, and/or storage temperature. The first species was identified as [Cl{sub 2}Al({mu}-Cl)]{sub 2} in equilibrium with AlCl{sub 3}. The second species results from the decomposition of SO{sub 2}Cl{sub 2}, which forms Cl{sub 2}(g) and SO{sub 2}(g). The SO{sub 2}(g) is readily consumed in the presence of AlCl{sub 3} to form the crystallographically characterized species [Cl{sub 2}Al({mu}-O{sub 2}SCl)]{sub 2} (1). For 1, each Al is tetrahedrally (T{sub d}) bound by two terminal Cl and two {mu}-O ligands, whereas the S is three-coordinated by two {mu}-O ligands and one terminal Cl. The third molecular species also has T{sub d}-coordinated Al metal centers but with increased oxygen coordination. Over time it was noted that a precipitate formed from the catholyte solutions. Raman spectroscopic studies show that this gel or precipitate has a component consistent with thionyl chloride. We have proposed a polymerization scheme that accounts for the precipitate formation. Further NMR studies indicate that the precipitate is in equilibrium with the solution.
There is a general lack of compact electromagnetic radiation sources between 1 and 10 terahertz (THz). This is a challenging spectral region, lying between optical devices at high frequencies and electronic devices at low frequencies. Although technologically very underdeveloped, the THz region promises to be of significant technological importance, yet demonstrating its relevance has proven difficult due to the immaturity of the area. While the last decade has seen much experimental work on ultra-short pulsed terahertz sources, many applications will require continuous wave (cw) sources, which are only beginning to demonstrate adequate performance for application use. In this project, we proposed examination of two potential THz sources based on intersubband semiconductor transitions, which were as yet unproven. In particular, we wished to explore quantum cascade laser based sources and electronic harmonic generators. Shortly after the beginning of the project, we shifted our emphasis to the quantum cascade lasers due to two events: the publication of the first THz quantum cascade laser by another group, thereby proving feasibility, and the temporary shutdown of the UC Santa Barbara free-electron lasers, which were to be used as the pump source for the harmonic generation. The development efforts focused on two separate cascade laser thrusts. The ultimate goal of the first thrust was for a quantum cascade laser to simultaneously emit two mid-infrared frequencies differing by a few THz and to use these to pump a non-linear optical material to generate THz radiation via parametric interactions in a specifically engineered intersubband transition. While the final goal was not realized by the end of the project, many of the completed steps leading to the goal are described in the report. The second thrust was to develop QC lasers operating directly at terahertz frequencies. This is simpler than a mixing approach and has now been demonstrated by a few groups with wavelengths spanning 65-150 microns. We developed and refined the MBE growth for both internally and externally designed THz QC lasers. Processing-related issues continued to plague many of our demonstration efforts and are also addressed in this report.
We demonstrate direct diode-bar side pumping of a Yb-doped fiber laser using embedded-mirror side pumping (EMSP). In this method, the pump beam is launched by reflection from a micro-mirror embedded in a channel polished into the inner cladding of a double-clad fiber (DCF). The amplifier employed an unformatted, non-lensed, ten-emitter diode bar (20 W) and glass-clad, polarization-maintaining, large-mode-area fiber. Measurements with passive fiber showed that the coupling efficiency of the raw diode-bar output into the DCF (ten launch sites) was {approx}84%; for comparison, the net coupling efficiency using a conventional, formatted, fiber-coupled diode bar is typically 50-70%, i.e., EMSP results in a factor of 2-3 less wasted pump power. The slope efficiency of the side-pumped fiber laser was {approx}80% with respect to launched pump power and 24% with respect to electrical power consumption of the diode bar; at a fiber-laser output power of 7.5 W, the EMSP diode bar consumed 41 W of electrical power (18% electrical-to-optical efficiency). When end pumped using a formatted diode bar, the fiber laser consumed 96 W at 7.5 W output power, a factor of 2.3 less efficient, and the electrical-to-optical slope efficiency was lower by a factor of 2.0. Passive-fiber measurements showed that the EMSP alignment sensitivity for a single emitter is nearly identical to that for the ten-emitter bar. EMSP is the only method capable of launching the unformatted output of a diode bar directly into DCF (including glass-clad DCF), enabling fabrication of low-cost, simple, and compact diode-bar-pumped fiber lasers and amplifiers.
Spreading of bacteria in a highly advective, disordered environment is examined. Predictions of super-diffusive spreading for a simplified reaction-diffusion equation are tested. Concentration profiles display anomalous growth and super-diffusive spreading. A perturbation analysis yields a crossover time between diffusive and super-diffusive behavior, and the dependence of this time on the convection velocity and disorder is tested. Like the simplified equation, the full linear reaction-diffusion equation displays super-diffusive spreading perpendicular to the convection. However, for mean positive growth rates the full nonlinear reaction-diffusion equation produces symmetric spreading with a Fisher wavefront, whereas net negative growth rates cause an asymmetry, with a slower wavefront velocity perpendicular to the convection.
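For concreteness, the full nonlinear model referred to here is of the Fisher type with advection and quenched disorder (schematic form, our notation):

\[
\frac{\partial c}{\partial t} + v\,\frac{\partial c}{\partial x} = D\,\nabla^{2} c + r(\mathbf{x})\,c\,(1 - c),
\]

where v is the convection velocity and r(\mathbf{x}) a spatially disordered growth rate; a positive mean r produces the symmetric Fisher wavefront, while a net negative mean produces the asymmetric spreading described above.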
Chemical crosslinking is an important tool for probing protein structure and protein-protein interactions. The approach usually involves crosslinking of specific amino acids within a folded protein or protein complex, enzymatic digestion of the crosslinked protein(s), and identification of the resulting crosslinked peptides by liquid chromatography/mass spectrometry (LC/MS). In this manner, distance constraints are obtained for residues that must be in close proximity to one another in the native structure or complex. As the complexity of the system under study increases, for example, a large multi-protein complex, simply measuring the mass of a crosslinked species will not always be sufficient to determine the identity of the crosslinked peptides. In such a case, tandem mass spectrometry (MS/MS) could provide the required information if the data can be properly interpreted. In MS/MS, a species of interest is isolated in the gas phase and allowed to undergo collision induced dissociation (CID). Because the gas-phase dissociation pathways of peptides have been well studied, methods are established for determining peptide sequence by MS/MS. However, although crosslinked peptides dissociate through some of the same pathways as isolated peptides, the additional dissociation pathways available to the former have not been studied in detail. Software such as MS2Assign has been written to assist in the interpretation of MS/MS from crosslinked peptide species, but it would be greatly enhanced by a more thorough understanding of how these species dissociate. We are thus systematically investigating the dissociation pathways open to crosslinked peptide species. A series of polyalanine and polyglycine model peptides have been synthesized containing one or two lysine residues to generate defined inter- and intra-molecular crosslinked species, respectively. Each peptide contains 11 total residues, and one arginine residue is present at the carboxy terminus to mimic species generated by tryptic digestion. The peptides have been allowed to react with a series of commonly used crosslinkers such as DSS, DSG, and DST. The tandem mass spectra acquired for these crosslinked species are being examined as a function of crosslinker identity, site(s) of crosslinking, and precursor charge state. Results from these model studies and observations from actual experimental systems are being incorporated into the MS2Assign software to enhance our ability to effectively use chemical crosslinking in protein complex determination.
A numerical screening study of the interaction between a penetrator and a geological target with a preformed hole has been carried out to identify the main parameters affecting the penetration event. The planning of the numerical experiment was based on the orthogonal array OA(18,7,3,2), which allows 18 simulation runs with 7 parameters at 3 levels each. The array's strength of 2 also allows for two-factor interaction studies. The seven parameters chosen for this study are: penetrator offset, hole diameter, hole taper, vertical and horizontal velocity of the penetrator, angle of attack of the penetrator, and target material. The analysis of the simulation results has been based on main-effects plots and analysis of variance (ANOVA), and it has been performed using three metrics: the maximum values of the penetration depth, penetrator deceleration, and plastic strain in the penetrator case. This screening study shows that target material has a major influence on penetration depth and penetrator deceleration, while penetrator offset has the strongest effect on the maximum plastic strain.
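The main-effects computation underlying these plots reduces to averaging the response metric over the runs at each level of each factor. A minimal sketch with placeholder data (not the study's actual design matrix or results):

    import numpy as np

    # Balanced placeholder design: 18 runs x 7 factors at 3 levels each.
    # (A stand-in for the actual OA(18,7,3,2), which has additional
    # strength-2 balance properties.)
    base = np.tile([0, 1, 2], 6)
    design = np.column_stack([np.roll(base, f) for f in range(7)])

    # Hypothetical response metric, one value per run (e.g., penetration depth).
    response = np.random.default_rng(0).random(18)

    # Main effect of each factor: mean response at each of its three levels.
    for factor in range(7):
        means = [response[design[:, factor] == lvl].mean() for lvl in range(3)]
        print(f"factor {factor}: level means = {np.round(means, 3)}")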
The Surface Evolver was used to compute the equilibrium microstructure of random soap foams with bidisperse cell-size distributions and to evaluate topological and geometric properties of the foams and individual cells. The simulations agree with the experimental data of Matzke and Nestler for the probability {rho}(F) of finding cells with F faces and its dependence on the fraction of large cells. The simulations also agree with the theory for isotropic Plateau polyhedra (IPP), which describes the F-dependence of cell geometric properties, such as surface area, edge length, and mean curvature (diffusive growth rate); this is consistent with results for polydisperse foams. Cell surface areas are about 10% greater than spheres of equal volume, which leads to a simple but accurate relation for the surface free energy density of foams. The Aboav-Weaire law is not valid for bidisperse foams.
Bulk migration of particles towards regions of lower shear occurs in suspensions of neutrally buoyant spheres in Newtonian fluids undergoing creeping flow in the annular region between two rotating, coaxial cylinders (a wide-gap Couette). For a monomodal suspension of spheres in a viscous fluid, dimensional analysis indicates that the rate of migration at a given concentration should scale with the square of the sphere radius. However, a previous experimental study showed that the rate of migration of spherical particles at 50% volume concentration actually scaled with the sphere radius to approximately the 2.9 power.
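The quadratic scaling expected from dimensional analysis also appears explicitly in diffusive-flux models of shear-induced migration, e.g., a Phillips-type flux (schematic form, our notation):

\[
\mathbf{N} = -K_{c}\,a^{2}\,\phi\,\nabla(\dot{\gamma}\,\phi) \;-\; K_{\eta}\,a^{2}\,\dot{\gamma}\,\phi^{2}\,\frac{1}{\eta}\frac{d\eta}{d\phi}\,\nabla\phi,
\]

where a is the sphere radius, \phi the particle volume fraction, \dot{\gamma} the local shear rate, and \eta(\phi) the suspension viscosity; both terms carry the a^{2} factor, which is the baseline against which the measured 2.9-power dependence stands in contrast.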
Three nested molybdenum wire arrays with initial outer diameters of 45, 50, and 55 mm were imploded by the {approx}20 MA, 90 ns rise-time current pulse of Sandia's Z accelerator. The implosions generated Mo plasmas with {approx}10% of the array's initial mass reaching Ne-like and nearby ionization stages. These ions emitted 2-4 keV L-shell x rays with radiative powers approaching 10 TW. Mo L-shell spectra with axial and temporal resolution were captured and have been analyzed using a collisional-radiative model. The measured spectra indicate significant axial variation in the electron density, which increases from a few times 10{sup 20} cm{sup -3} at the cathode up to {approx}3 x 10{sup 21} cm{sup -3} near the middle of the 20 mm plasma column (8 mm from the anode). Time-resolved spectra indicate that the peak electron density is reached before the peak of the L-shell emission and decreases with time, while the electron temperature remains within 10% of 1.7 keV over the 20-30 ns L-shell radiation pulse. Finally, while the total yield, peak total power, and peak L-shell power all tended to decrease with increasing initial wire array diameter, the L-shell yield and the average plasma conditions varied little with the initial wire array diameter.
The proposed Yucca Mountain repository is anticipated to be the first facility for long-term disposal of commercial spent nuclear fuel and high-level radioactive waste in the United States. The facility, located in the southern Nevada desert, is currently in the planning stages with initial exploratory excavations completed. It is an underground facility mined into the tuffaceous volcanic rocks that sit above the local water table. The focus of the work described in this paper is the development of radionuclide absorbers or 'getter' materials for neptunium (Np), iodine (I), and technetium (Tc) for potential deployment in the repository. 'Getter' materials retard the migration of radionuclides through sorption, reduction, or other chemical and physical processes, thereby slowing or preventing the release and transport of radionuclides. An overview of the objectives and approaches utilized in this work with respect to materials selection and modeling of ion 'getters' is presented. The benefits of the 'getter' development program to the United States Department of Energy (US DOE) are outlined.
We have investigated the potential for intense particle beam surface modification to improve the mechanical properties of materials commonly used in the human body for contact surfaces in, for example, hip and knee implants. The materials studied include ultra-high molecular weight polyethylene (UHMWPE), Ti-6Al-4V (a titanium alloy), and a Co-Cr-Mo alloy. Samples in flat form were exposed to both ion and electron beams (UHMWPE) and to ion beam treatment (metals). Post-analysis indicated a degradation in bulk properties of the UHMWPE, except in the case of the lightest ion fluence tested. A surface-alloyed Hf/Ti layer on the Ti-6Al-4V is found to improve surface wear durability and to have favorable biocompatibility. A promising nanolaminate ceramic coating is applied to the Co-Cr-Mo to improve surface hardness.
Proposed for publication in the Association for Computing Machinery (ACM) Transactions on Mathematical Software.
ODRPACK (TOMS Algorithm 676) has provided a complete package for weighted orthogonal distance regression for many years. The code is complete with user-selectable reporting facilities, numerical and analytic derivatives, derivative checking, and many more features. The foundation of the algorithm is a stable and efficient trust-region Levenberg-Marquardt minimizer that exploits the structure of the orthogonal distance regression problem. ODRPACK95 is a modification of the original ODRPACK code that adds support for bound constraints, uses the newer Fortran 95 language, and simplifies the interface to the user-called subroutine.
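In schematic form (our notation, omitting the package's full weighting generality), the weighted orthogonal distance regression problem solved here is

\[
\min_{\beta,\,\delta}\;\sum_{i=1}^{n}\Big[ w_{i}\,\big(y_{i} - f(x_{i}+\delta_{i};\beta)\big)^{2} + d_{i}\,\|\delta_{i}\|^{2} \Big],
\]

where \beta are the model parameters, \delta_{i} the errors attributed to the explanatory variables, and w_{i}, d_{i} user-supplied weights; ODRPACK95 additionally enforces bound constraints l \le \beta \le u.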
We consider the accuracy of predictions made by integer programming (IP) models of sensor placement for water security applications. We have recently shown that IP models can be used to find optimal sensor placements for a variety of different performance criteria (e.g. minimize health impacts and minimize time to detection). However, these models make a variety of simplifying assumptions that might bias the final solution. We show that our IP modeling assumptions are similar to models developed for other sensor placement methodologies, and thus IP models should give similar predictions. However, this discussion highlights that there are significant differences in how temporal effects are modeled for sensor placement. We describe how these modeling assumptions can impact sensor placements.
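A representative form of such a sensor placement IP (schematic; the actual models include further detail) is

\[
\min \sum_{a\in A} p_{a} \sum_{i\in L_{a}} d_{ai}\,x_{ai}
\quad\text{s.t.}\quad
\sum_{i\in L_{a}} x_{ai} = 1 \;\;\forall a\in A,\qquad
x_{ai} \le s_{i},\qquad
\sum_{i} s_{i} \le p,
\]

where s_{i}\in\{0,1\} places a sensor at location i, x_{ai} indicates that contamination scenario a is first detected at location i (L_{a} being the locations scenario a can reach, augmented by a dummy 'undetected' location), d_{ai} the resulting impact, p_{a} the scenario weight, and p the number of sensors available. The temporal modeling issues discussed above enter through how d_{ai} accounts for the time elapsed before detection.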
A focused ion beam (FIB) is used to accurately sculpt predetermined micron-scale, curved shapes in a number of solids. Using a digitally scanned ion beam system, various features are sputtered, including hemispheres and sine waves with dimensions from 1 to 50 {micro}m. Ion sculpting is accomplished by changing the pixel dwell time within individual boustrophedonic scans. The pixel dwell times used to sculpt a given shape are determined prior to milling and account for the material-specific, angle-dependent sputter yield, Y({theta}), as well as the amount of beam overlap in adjacent pixels. A number of target materials, including C, Au, and Si, are accurately sculpted using this method. For several target materials, the curved feature shape closely matches the intended shape, with milled feature depths within 5% of intended values.
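To first order, and ignoring the angle-dependent yield Y({theta}) and beam overlap that the actual method accounts for, the dwell time at a pixel is simply proportional to the local depth of the target shape. A toy sketch for a hemispherical pit (all names and values hypothetical):

    import numpy as np

    # Toy dwell-time map for a hemispherical pit of radius R (arbitrary units).
    R, n = 10.0, 128  # hypothetical radius and raster size
    y, x = np.mgrid[-R:R:n*1j, -R:R:n*1j]
    r2 = x**2 + y**2
    depth = np.sqrt(np.clip(R**2 - r2, 0.0, None))  # desired depth z(x, y)

    # With an assumed constant removal rate k (depth per unit dwell),
    # the per-pixel dwell time is proportional to the desired depth.
    k = 0.05
    dwell = depth / k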
Ccaffeine is a Common Component Architecture (CCA) framework devoted to high-performance computing. In this note we give an overview of the system features of Ccaffeine and CCA that support component-based HPC application development. Object-oriented, single-threaded and lightweight, Ccaffeine is designed to get completely out of the way of the running application after it has been composed from components. Ccaffeine is one of the few frameworks, CCA or otherwise, that can compose and run applications on a parallel machine interactively and then automatically generate a static, possibly self-tuning, executable for production runs. Users can experiment with and debug applications interactively, improving their productivity. When the application is ready, a script is automatically generated, parsed and turned into a static executable for production runs. Within this static executable, dynamic replacement of components can be performed by self-tuning applications.
Large-scale, three-dimensional discrete element simulations of granular flow in a modified split-bottom Couette cell for packs of up to 180,000 monodisperse spheres are presented and compared with experiments. We find that the velocity profiles collapse onto a universal curve not only at the surface but also in the bulk of the pack, until slip between layers becomes significant. In agreement with experiment, we find similar relations between the cell geometry and the parameters involved in rescaling the velocities at the surface and in the bulk. Likewise, a change in the shape of the shear zone is observed, as predicted for tall packs, once the center of the shear zone is correctly defined, although the transition does not appear to be first order. Finally, the effect of cohesion is considered as a means to test the theoretical predictions.
In recent years, several integer programming models have been proposed to place sensors in municipal water networks in order to detect intentional or accidental contamination. Although these initial models assumed that it is equally costly to place a sensor at any place in the network, there clearly are practical cost constraints that would impact a sensor placement decision. Such constraints include not only labor costs but also the general accessibility of a sensor placement location. In this paper, we extend our integer program to explicitly model the cost of sensor placement. We partition network locations into groups of varying placement cost, and we consider the public health impacts of contamination events under varying budget constraints. Thus our models permit cost/benefit analyses for differing sensor placement designs. As a control for our optimization experiments, we compare the set of sensor locations selected by the optimization models to a set of manually-selected sensor locations.
Electrical operation of III-Nitride light emitting diodes (LEDs) with photonic crystal structures is demonstrated. Employing photonic crystal structures in III-Nitride LEDs is a method to increase light extraction efficiency and directionality. The photonic crystal is a triangular lattice formed by dry etching into the III-Nitride LED. A range of lattice constants is considered (a {approx} 270-340 nm). The III-Nitride LED layers include a tunnel junction providing good lateral current spreading without the semi-absorbing metal current spreader typically used in conventional III-Nitride LEDs. These photonic crystal III-Nitride LED structures are unique because they allow for carrier recombination and light generation proximal to the photonic crystal (the light extraction area) yet displaced from the absorbing metal contact. The photonic crystal Bragg-scatters what would otherwise be guided modes out of the LED, increasing the extraction efficiency. The far-field light radiation patterns are heavily modified compared to the typical III-Nitride LED's Lambertian output. The photonic crystal affects the light propagation out of the LED surface, and the radiation pattern changes with lattice constant. LEDs with photonic crystals are compared to similar III-Nitride LEDs without the photonic crystal in terms of extraction, directionality, and emission spectra.
Experimental evidence suggests that the energy balance between processes in play during wire array implosions is not well understood. In fact, the radiative yields can exceed the implosion kinetic energy by several times. A possible explanation is that the coupling from magnetic energy to kinetic energy as magnetohydrodynamic plasma instabilities develop provides additional energy. It is thus important to model the instabilities produced in the post-implosion stage of the wire array in order to determine how the stored magnetic energy can be connected with the radiative yields. To this aim, three-dimensional hybrid simulations have been performed. They are initialized with plasma radial density profiles, deduced from recent experiments [C. Deeney et al., Phys. Plasmas 6, 3576 (1999)] that exhibited large x-ray yields, together with the corresponding magnetic field profiles. Unlike previous work, these profiles do not satisfy pressure balance and differ substantially from those of a Bennett equilibrium. They result in faster growth, with an associated transfer of magnetic energy to plasma motion and hence kinetic energy.
Pulsed power driven metallic wire-array Z pinches are the most powerful and efficient laboratory x-ray sources. Furthermore, under certain conditions the soft x-ray energy radiated in a 5 ns pulse at stagnation can exceed the estimated kinetic energy of the radial implosion phase by a factor of 3 to 4. A theoretical model is developed here to explain this, allowing the rapid conversion of magnetic energy to a very high ion temperature plasma through the generation of fine scale, fast-growing m=0 interchange MHD instabilities at stagnation. These saturate nonlinearly and provide associated ion viscous heating. Next the ion energy is transferred by equipartition to the electrons and thus to soft x-ray radiation. Recent time-resolved iron spectra at Sandia confirm an ion temperature T{sub i} of over 200 keV (2 x 10{sup 9} degrees), as predicted by theory. These are believed to be record temperatures for a magnetically confined plasma.
Bayesian medical monitoring is a concept based on using real-time performance-related data to make statistical predictions about a patient's future health. This paper discusses the fundamentals behind the medical monitoring concept and its application to monitoring the health of nuclear reactors. Necessary assumptions regarding distributions and failure-rate calculations are discussed. A simple example is presented to illustrate the effectiveness of the methods, which perform very well for the thirteen subjects in the example, with a clear failure sequence identified for eleven of the subjects.
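One standard (though here purely illustrative) choice for the failure-rate calculation is a conjugate Gamma prior on a Poisson event rate, updated as monitoring data arrive; the function below is a sketch under that assumption, not the paper's exact model:

    def update_rate(alpha, beta, failures, exposure_time):
        """Posterior Gamma(alpha, beta) parameters for a Poisson failure
        rate after observing `failures` events over `exposure_time`;
        the posterior mean alpha/beta is the updated rate estimate."""
        alpha_post = alpha + failures
        beta_post = beta + exposure_time
        return alpha_post, beta_post, alpha_post / beta_post

    # Example: weak prior (mean rate 0.1 per year), then 2 anomalies in 5 years.
    print(update_rate(alpha=1.0, beta=10.0, failures=2, exposure_time=5.0))

Each new observation tightens the posterior, and a subject whose estimated rate climbs steadily above its peers would be flagged, illustrating the flavor of prediction such monitoring produces.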