39th ASES National Solar Conference 2010, SOLAR 2010
Gupta, Vipin P.; Boudra, Will; Kuszmaul, Scott S.; Rosenthal, Andrew; Cisneros, Gaby; Merrigan, Tim; Miller, Ryan; Dominick, Jeff
In May 2007, Forest City Military Communities won a US Department of Energy Solar America Showcase Award. As part of this award, executives and staff from Forest City Military Communities worked side-by-side with a DOE technical assistance team to overcome technical obstacles encountered by this large-scale real estate developer and manager. This paper describes the solar technical assistance that was provided and the key solar experiences acquired by Forest City Military Communities over an 18 month period.
The Department of Energy's 2008 Yucca Mountain Performance Assessment represents the culmination of more than two decades of analyses of post-closure repository performance in support of programmatic decision making for the proposed Yucca Mountain repository. The 2008 performance assessment summarizes the estimated long-term risks to the health and safety of the public resulting from disposal of spent nuclear fuel and high-level radioactive waste in the proposed Yucca Mountain repository. The standards at 10 CFR Part 63 require several numerical estimates quantifying performance of the repository over time. This paper summarizes the key quantitative results from the performance assessment and presents uncertainty and sensitivity analyses for these results.
We present an RDFS closure algorithm, specifically designed and implemented on the Cray XMT supercomputer, that obtains inference rates of 13 million inferences per second on the largest system configuration we used. The Cray XMT, with its large global memory (4TB for our experiments), permits the construction of a conceptually straightforward algorithm, fundamentally a series of operations on a shared hash table. Each thread is given a partition of triple data to process, a dedicated copy of the ontology to apply to the data, and a reference to the hash table into which it inserts inferred triples. The global nature of the hash table allows the algorithm to avoid a common obstacle for distributed memory machines: the creation of duplicate triples. On LUBM data sets ranging between 1.3 billion and 5.3 billion triples, we obtain nearly linear speedup except for two portions: file I/O, which can be ameliorated with additional service nodes, and data structure initialization, which requires nearly constant time for runs involving 32 processors or more.
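To make the shared-table pattern concrete, here is a minimal Python sketch: worker threads apply an RDFS rule to their partition of triples and insert inferences into one shared table, so duplicates are suppressed by construction. The toy triples, the two-rule closure, and the lock are illustrative stand-ins; the XMT implementation relies on hardware-threaded, fine-grained synchronization rather than a Python lock.

```python
import threading

# Toy triples: (subject, predicate, object). A shared set stands in for the
# XMT's global hash table; set membership suppresses duplicate inferences.
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

triples = {
    ("Dog", SUBCLASS, "Mammal"),
    ("Mammal", SUBCLASS, "Animal"),
    ("rex", TYPE, "Dog"),
}
lock = threading.Lock()

def close_partition(partition, ontology):
    """Apply RDFS rule rdfs9 (type propagation through subClassOf) to one
    partition, inserting inferred triples into the shared table."""
    for s, p, o in partition:
        if p == TYPE:
            for sub, _, sup in ontology:
                if sub == o:
                    with lock:                       # XMT uses word-level sync,
                        triples.add((s, TYPE, sup))  # not a coarse lock

def rdfs_closure():
    ontology = [t for t in triples if t[1] == SUBCLASS]
    # Transitive closure of subClassOf (rule rdfs11), done serially here.
    changed = True
    while changed:
        changed = False
        for a, _, b in list(ontology):
            for c, _, d in list(ontology):
                if b == c and (a, SUBCLASS, d) not in triples:
                    triples.add((a, SUBCLASS, d))
                    ontology.append((a, SUBCLASS, d))
                    changed = True
    # Each thread gets a partition of the data plus a copy of the ontology
    # and a reference to the shared table, as in the abstract.
    data = [t for t in triples if t[1] == TYPE]
    threads = [threading.Thread(target=close_partition, args=(data[i::2], ontology))
               for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

rdfs_closure()
print(sorted(triples))  # rex is inferred to be a Mammal and an Animal
```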
The U.S. Strategic Petroleum Reserve stores crude oil in 62 solution-mined caverns in salt domes located in Texas and Louisiana. Historically, three-dimensional geomechanical simulations of the behavior of the caverns have been performed using a power law creep model. Using this method, and calibrating the creep coefficient to field data such as cavern closure and surface subsidence, has produced varying degrees of agreement with observed phenomena. However, as new salt dome locations are considered for oil storage facilities, pre-construction geomechanical analyses are required that need site-specific parameters developed from laboratory data obtained from core samples. The multi-mechanism deformation (M-D) model is a rigorous mathematical description of both transient and steady-state creep phenomena. Recent enhancements to the numerical integration algorithm within the model have created a more numerically stable implementation of the M-D model. This report presents computational analyses to compare the results of predictions of the geomechanical behavior at the West Hackberry SPR site using both models. The recently-published results using the power law creep model produced excellent agreement with an extensive set of field data. The M-D model results show similar agreement using parameters developed directly from laboratory data. The M-D model is also used to predict the behavior for the construction and operation of oil storage caverns at a new site, to identify potential problems before a final cavern layout is designed.
Uncultivable microorganisms likely play significant roles in the ecology within the human body, with subtle but important implications for human health. Focusing on the oral microbiome, we are developing a processor for targeted isolation of individual microbial cells, facilitating whole-genome analysis without the need for isolation of pure cultures. The processor consists of three microfluidic modules: identification based on 16S rRNA fluorescence in situ hybridization (FISH), fluorescence-based sorting, and encapsulation of individual selected cells into small droplets for whole-genome amplification. We present here a technique for performing microscale FISH and flow cytometry, as a prelude to single cell sorting.
Here we demonstrate the suitability of robust nucleic acid affinity reagents in an integrated point-of-care diagnostic platform for monitoring proteomic biomarkers indicative of astronaut health in spaceflight applications. A model thioaptamer[1] targeting nuclear factor-kappa B (NF-κB) is evaluated in an on-chip electrophoretic gel-shift assay for human serum. Key steps of i) mixing sample with the aptamer, ii) buffer exchange, and iii) preconcentration of sample were successfully integrated upstream of fluorescence-based detection. Challenges due to i) nonspecific interactions with serum, and ii) preconcentration at a nanoporous membrane are discussed and successfully resolved to yield a robust, rapid, and fully-integrated diagnostic system.
We present a method for counting white blood cells that is uniquely compatible with centrifugation based microfluidics. Blood is deposited on top of one or more layers of density media within a microfluidic disk. Spinning the disk causes the cell populations within whole blood to settle through the media, reaching an equilibrium based on the density of each cell type. Separation and fluorescence measurement of cell types stained with a DNA dye is demonstrated using this technique. The integrated signal from bands of fluorescent microspheres is shown to be proportional to their initial concentration in suspension.
We present a platform that combines patterned photopolymerized polymer monoliths with living radical polymerization (LRP) to develop a low-cost, microfluidic-based immunoassay capable of sensitive (low- to sub-pM) and rapid (<30 minute) detection of protein in a 100 μL sample. The introduction of LRP functionality to the porous monolith allows one-step grafting of functionalized affinity probes from the monolith surface, while the composition of the hydrophilic graft chain reduces non-specific interactions and helps to significantly improve the limit of detection.
We have designed, fabricated, and characterized a digital microfluidic (DMF) platform to function as a central hub for interfacing multiple lab-on-a-chip sample processing modules towards automating the preparation of clinically-derived DNA samples for ultrahigh throughput sequencing (UHTS). The platform enables plug-and-play installation of a two-plate DMF device with consistent spacing, offers flexible connectivity for transferring samples between modules, and uses an intuitive programmable interface to control droplet/electrode actuations. Additionally, the hub platform uses transparent indium-tin oxide (ITO) electrodes to allow complete top and bottom optical access to the droplets on the DMF array, providing additional flexibility for various detection schemes.
We report on advancements of our microscale isoelectric fractionation (μIEFr) methodology for fast on-chip separation and concentration of proteins based on their isoelectric points (pI). We establish that proteins can be fractionated depending on posttranslational modifications into different pH specific bins, from where they can be efficiently transferred to downstream membranes for additional processing and analysis. This technology can enable on-chip multidimensional glycoproteomics analysis, as a new approach to expedite biomarker identification and verification.
This report presents the results of an aging experiment that was established in FY09 and completed in FY10 for the Sandia MEMS Passive Shock Sensor. A total of 37 packages were aged at different temperatures and times, and were then tested after aging to determine functionality. Aging temperatures were selected at 100 °C and 150 °C, with times ranging from as short as 100 hours to as long as 1 year to simulate a predicted aging of up to 20 years. In all of the tests and controls, 100% of the devices continued to function normally.
Mandell, John F.; Ashwill, Thomas D.; Wilson, Timothy J.; Sears, Aaron T.; Agastra, Pancasatya; Laird, Daniel L.; Samborsky, Daniel D.
This report presents an analysis of trends in fatigue results from the Montana State University program on the fatigue of composite materials for wind turbine blades for the period 2005-2009. Test data can be found in the SNL/MSU/DOE Fatigue of Composite Materials Database which is updated annually. This is the fifth report in this series, which summarizes progress of the overall program since its inception in 1989. The primary thrust of this program has been research and testing of a broad range of structural laminate materials of interest to blade structures. The report is focused on current types of infused and prepreg blade materials, either processed in-house or by industry partners. Trends in static and fatigue performance are analyzed for a range of materials, geometries and loading conditions. Materials include: sixteen resins of three general types, five epoxy-based paste adhesives, fifteen reinforcing fabrics including three fiber types, three prepregs, many laminate lay-ups and process variations. Significant differences in static and fatigue performance and delamination resistance are quantified for particular materials and process conditions. When blades do fail, the likely cause is fatigue in the structural detail areas or at major flaws. The program is focused strongly on these issues in addition to standard laminates. Structural detail tests allow evaluation of various blade materials options in the context of more realistic representations of blade structure than do the standard test methods. Types of structural details addressed in this report include ply drops used in thickness tapering, and adhesive joints, each tested over a range of fatigue loading conditions. Ply drop studies were in two areas: (1) a combined experimental and finite element study of basic ply drop delamination parameters for glass and carbon prepreg laminates, and (2) the development of a complex structured resin-infused coupon including ply drops, for comparison studies of various resins, fabrics and ply drop thicknesses. Adhesive joint tests using typical blade adhesives included both generic testing of materials parameters using a notched-lap-shear test geometry developed in this study, and also a series of simulated blade web joint geometries fabricated by an industry partner.
The purpose of this report is to describe the methods commonly used to measure heat flux in fire applications at Sandia National Laboratories in both hydrocarbon (JP-8 jet fuel, diesel fuel, etc.) and propellant fires. Because these environments are very severe, many commercially available heat flux gauges do not survive the test, so alternative methods had to be developed. Specially built sensors include 'calorimeters' that use a temperature measurement to infer heat flux by use of a model (heat balance on the sensing surface) or by using an inverse heat conduction method. These specially built sensors are made rugged so they will survive the environment, and so they are not optimally designed for ease of use or accuracy. Other methods include radiometers, co-axial thermocouples, directional flame thermometers (DFTs), Sandia 'heat flux gauges', transpiration radiometers, and transverse Seebeck coefficient heat flux gauges. Typical applications are described, and the pros and cons of each method are listed.
For the U.S. Nuclear Regulatory Commission (NRC) Extremely Low Probability of Rupture (xLPR) pilot study, Sandia National Laboratories (SNL) was tasked to develop and evaluate a probabilistic framework using a commercial software package for Version 1.0 of the xLPR Code. Version 1.0 of the xLPR code is focused on assessing the probability of rupture due to primary water stress corrosion cracking in dissimilar metal welds in pressurizer surge nozzles. Future versions of this framework will expand the capabilities to other cracking mechanisms and other piping systems for both pressurized water reactors and boiling water reactors. The goal of the pilot study project is to plan the xLPR framework transition from Version 1.0 to Version 2.0; hence the initial Version 1.0 framework and code development will be used to define the requirements for Version 2.0. The software documented in this report has been developed and tested solely for this purpose. This framework and demonstration problem will be used to evaluate the commercial software's capabilities and applicability for use in creating the final version of the xLPR framework. This report details the design, system requirements, and the steps necessary to use the commercial-code-based xLPR framework developed by SNL.
Sandia National Laboratories (SNL) conducts pioneering research and development in Micro-Electro-Mechanical Systems (MEMS) and solar cell research. This dissertation project combines these two areas to create ultra-thin, small-form-factor crystalline silicon (c-Si) solar cells. These miniature solar cells create a new class of photovoltaics with potentially novel applications and benefits such as dramatic reductions in cost, weight, and material usage. At the beginning of the project, unusually low efficiencies were obtained in the research group. The intention of this research was thus to investigate the main causes of the low efficiencies through simulation, design, fabrication, and characterization. Commercial simulation tools were used to find the main causes of low efficiency. Once the causes were identified, the results were used to create improved designs and build new devices. In the simulations, parameters were varied to see the effect on performance. The researched parameters were: resistance, wafer lifetime, contact separation, implant characteristics (size, dosage, energy, ratio between the species), contact size, substrate thickness, surface recombination, and light concentration. Of these parameters, high-quality surface passivation was revealed to be the most important for obtaining higher-performing cells. Therefore, several approaches for enhancing the passivation were tried, characterized, and tested on cells. In addition, a methodology was created to contact and test the performance of all the cells presented in the dissertation under calibrated light. Also, next-generation cells that could incorporate all the optimized layers, including the passivation, were designed, built, and tested. In conclusion, through this investigation, solar cells that incorporate optimized designs and passivation schemes for ultrathin solar cells were created for the first time. Through the application of the methods discussed in this document, the efficiency of the solar cells increased from below 1% to 15% in Microsystems Enabled Photovoltaic (MEPV) devices.
We report on a scalable electrostatic process to transfer epitaxial graphene to arbitrary glass substrates, including Pyrex and Zerodur. This transfer process could enable wafer-level integration of graphene with structured and electronically-active substrates such as MEMS and CMOS. We will describe the electrostatic transfer method and will compare the properties of the transferred graphene with nominally-equivalent 'as-grown' epitaxial graphene on SiC. The electronic properties of the graphene will be measured using magnetoresistive, four-probe, and graphene field effect transistor geometries [1]. To begin, high-quality epitaxial graphene (mobility 14,000 cm²/V·s and domains >100 µm²) is grown on SiC in an argon-mediated environment [2,3]. The electrostatic transfer then takes place through the application of a large electric field between the donor graphene sample (anode) and the heated acceptor glass substrate (cathode). Using this electrostatic technique, both patterned few-layer graphene from SiC(000-1) and chip-scale monolayer graphene from SiC(0001) are transferred to Pyrex and Zerodur substrates. Subsequent examination of the transferred graphene by Raman spectroscopy confirms that the graphene can be transferred without inducing defects. Furthermore, the strain inherent in epitaxial graphene on SiC(0001) is found to be partially relaxed after the transfer to the glass substrates.
We report on a novel room-temperature method of synthesizing advanced nuclear fuels that virtually eliminates any volatility of components. This process uses radiolysis to form stable nanoparticle (NP) nuclear transuranic (TRU) fuel surrogates and in-situ heated-stage TEM to sinter the NPs. The radiolysis is performed at Sandia's Gamma Irradiation Facility (GIF) ⁶⁰Co source (3 × 10⁶ rad/hr). Using this method, sufficient quantities of fuels for research purposes can be produced for accelerated advanced nuclear fuel development. We are focused on both metallic and oxide alloy nanoparticles of varying compositions, in particular d-U, d-U/La alloys and d-UO2 NPs. We present detailed descriptions of the synthesis procedures, the characterization of the NPs, the sintering of the NPs, and their stability with temperature. We have employed UV-vis, HRTEM, HAADF-STEM imaging, single-particle EDX and EFTEM mapping characterization techniques to confirm the composition and alloying of these NPs.
We have developed a high-sensitivity (<5 fT/√Hz), fiber-optically coupled magnetometer to detect magnetic fields produced by the human brain. This is the first demonstration of a noncryogenic sensor that could replace cryogenic superconducting quantum interference device (SQUID) magnetometers in magnetoencephalography (MEG) and is an important advance in realizing cost-effective MEG. Within the sensor, a rubidium vapor is optically pumped with 795 nm laser light while field-induced optical rotations are measured with 780 nm laser light. Both beams share a single optical axis to maximize simplicity and compactness. In collaboration with neuroscientists at The Mind Research Network in Albuquerque, NM, the evoked responses resulting from median nerve and auditory stimulation were recorded with the atomic magnetometer and a commercial SQUID-based MEG system, with the signals comparing favorably. Multi-sensor operation has been demonstrated with two atomic magnetometers placed on opposite sides of the head. Straightforward miniaturization would enable high-density sensor arrays for whole-head magnetoencephalography.
This presentation on wind energy discusses: (1) current industry status; (2) turbine technologies; (3) assessment and siting; and (4) grid integration. There are no fundamental technical barriers to the integration of 20% wind energy into the nation's electrical system, but there needs to be a continuing evolution of transmission planning and system operation policy and market development for this to be most economically achieved.
Sandia National Laboratories (SNL) participated in a Pilot Study to examine the process and requirements to create a software system to assess the extremely low probability of pipe rupture (xLPR) in nuclear power plants. This project was tasked to develop a prototype xLPR model leveraging existing fracture mechanics models and codes coupled with a commercial software framework to determine the framework, model, and architecture requirements appropriate for building a modular-based code. The xLPR pilot study was conducted to demonstrate the feasibility of the proposed developmental process and framework for a probabilistic code to address degradation mechanisms in piping system safety assessments. The pilot study includes a demonstration problem to assess the probability of rupture of dissimilar metal (DM) pressurizer surge nozzle welds degraded by primary water stress-corrosion cracking (PWSCC). The pilot study was designed to define and develop the framework and model, then construct a prototype software system based on the proposed model. The second phase of the project will be a longer-term program and code development effort focusing on the generic, primary piping integrity issues (xLPR code). The results and recommendations presented in this report will be used to help the U.S. Nuclear Regulatory Commission (NRC) define the requirements for the longer-term program.
Magnetized Liner Inertial Fusion (MagLIF) [S. A. Slutz, et al., Phys. Plasmas 17, 056303 (2010)] is a promising new concept for achieving >100 kJ of fusion yield on Z. The greatest threat to this concept is the Magneto-Rayleigh-Taylor (MRT) instability. Thus an experimental campaign has been initiated to study MRT growth in fast-imploding (<100 ns) cylindrical liners. The first sets of experiments studied aluminum liner implosions with prescribed sinusoidal perturbations (see talk by D. Sinars). By contrast, this poster presents results from the latest sets of experiments that used unperturbed beryllium (Be) liners. The purpose for using Be is that we are able to radiograph 'through' the liner using the 6-keV photons produced by the Z-Beamlet backlighting system. This has enabled us to obtain time-resolved measurements of the imploding liner's density as a function of both axial and radial location throughout the field of view. These data allow us to evaluate the integrity of the inside (fuel-confining) surface of the imploding liner as it approaches stagnation.
We present the preliminary design of a Z experiment intended to observe the growth of several hydrodynamic instabilities (Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz) in a high-energy-density plasma. These experiments rely on the Z-machine's unique ability to launch cm-sized slabs of cold material (known as flyer plates) to velocities of several tens of km/s. During the proposed experiment, the flyer plate will impact a cm-sized target with an embedded interface that has a prescribed sinusoidal perturbation. The flyer plate will generate a strong shock that propagates into the target and later initiates unstable growth of the perturbation. The goal of the experiment is to observe the perturbation at various stages of its evolution as it transitions from linear to non-linear growth, and finally to a fully turbulent state.
The risk assessment approach has been applied to support numerous radioactive waste management activities over the last 30 years. A risk assessment methodology provides a solid and readily adaptable framework for evaluating the risks of CO2 sequestration in geologic formations to prioritize research, data collection, and monitoring schemes. This paper reviews the tasks of a risk assessment and provides a few examples related to each task. This paper then describes an application of sensitivity analysis to identify important parameters for reducing the uncertainty in the performance of a geologic repository for radioactive waste, which, because of the importance of the geologic barrier, is similar to CO2 sequestration. The paper ends with a simple stochastic analysis of an idealized CO2 sequestration site with a leaking abandoned well and a set of monitoring wells in an aquifer above the CO2 sequestration unit, in order to evaluate the efficacy of the monitoring wells for detecting adverse leakage.
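As a cartoon of the closing analysis, the Python sketch below estimates the probability that a ring of monitoring wells intercepts a plume from an abandoned well at an uncertain location. The geometry, plume radius, and well layout are invented for illustration and are not the paper's model.

```python
import random, math

# Toy stochastic sketch of the monitoring-well efficacy question: a leaking
# abandoned well sits at an uncertain location, and a ring of monitoring
# wells in the overlying aquifer detects the leak if the plume reaches one
# of them. All geometry and plume parameters are assumed placeholders.
random.seed(0)
monitors = [(500 * math.cos(a), 500 * math.sin(a))   # 4 wells on a 500 m ring
            for a in (0, math.pi / 2, math.pi, 3 * math.pi / 2)]

def detected(plume_radius=300.0, domain=1000.0):
    # Uncertain abandoned-well location, uniform over the site footprint.
    lx = random.uniform(-domain, domain)
    ly = random.uniform(-domain, domain)
    return any(math.hypot(lx - mx, ly - my) < plume_radius
               for mx, my in monitors)

n = 10000
p = sum(detected() for _ in range(n)) / n
print(f"estimated detection probability: {p:.2f}")
```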
Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search can't ever seem to find anything useful or relevant! The challenge is to provide results that are ranked according to the users' definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences, re-ranking results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the users' search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
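The abstract does not disclose the preference model, but the flavor of click-based re-ranking can be sketched as below: historical (query, click) pairs are aggregated into counts and blended with the engine's score. The blending rule and data are hypothetical placeholders, not Sandia's production method.

```python
from collections import defaultdict

# Historical log of (query, clicked_result) pairs, updated daily in the
# abstract's scheme. These entries are illustrative, not real data.
click_log = [("travel policy", "doc7"), ("travel policy", "doc7"),
             ("travel policy", "doc2")]

clicks = defaultdict(lambda: defaultdict(int))
for query, doc in click_log:
    clicks[query][doc] += 1

def rerank(query, engine_results, weight=1.0):
    """Blend the engine's relevance score with historical click counts.

    engine_results: list of (doc_id, engine_score) from the search engine.
    The additive blend below is a stand-in for whatever preference model
    the production system actually uses.
    """
    return sorted(engine_results,
                  key=lambda r: r[1] + weight * clicks[query][r[0]],
                  reverse=True)

print(rerank("travel policy", [("doc2", 3.1), ("doc7", 2.9), ("doc9", 2.5)]))
# doc7 rises above doc2 because users clicked it more often for this query.
```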
Laser-accelerated proton beams can be used in a variety of applications, e.g. ultrafast radiography of dense objects or strong electromagnetic fields. For these applications, proton energies of tens of MeV are required. We report on proton-acceleration experiments with a 150 TW laser system using mm-sized thin foils and mass-reduced targets of various thicknesses. Thin-foil targets yielded maximum energies of 50 MeV. A further reduction of the target dimensions from mm-size to 250 × 250 × 25 µm increased the maximum proton energy to >65 MeV, which is comparable to proton energies measured only at higher-energy, Petawatt-class laser systems. The dependence of the maximum energy on target dimensions was investigated, and differences between mm-sized thin foils and mass-reduced targets will be reported.
The objectives of this presentation are: (1) To determine if healthcare settings serve as intensive transmission environments for influenza epidemics, increasing effects on communities; (2) To determine which mitigation strategies are best for use in healthcare settings and in communities to limit influenza epidemic effects; and (3) To determine which mitigation strategies are best to prevent illness in healthcare workers.
Cigarette smoking presented the most significant public health challenge in the United States in the 20th Century and remains the single most preventable cause of morbidity and mortality in this country. A number of System Dynamics models exist that inform tobacco control policies. We reviewed these models and discuss their contributions. We developed a theory of the societal lifecycle of smoking, using a parsimonious set of feedback loops to capture historical trends and explore future scenarios. Previous work did not explain the long-term historical patterns of smoking behaviors. Much of it used stock-and-flow structures to represent the decline in prevalence in the recent past. With noted exceptions, information feedbacks were not embedded in these models. We present and discuss our feedback-rich conceptual model and illustrate the results of a series of simulations. A formal analysis shows phenomena composed of different phases of behavior with specific dominant feedbacks associated with each phase. We discuss the implications of our society's current phase, and conclude with simulations of what-if scenarios. Because System Dynamics models must contain information feedback to be able to anticipate tipping points and to help identify policies that exploit leverage in a complex system, we expanded this body of work to provide an endogenous representation of the century-long societal lifecycle of smoking.
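A toy stock-and-flow sketch (Python, Euler integration) illustrates how one reinforcing loop (uptake driven by prevalence) and one balancing loop (quitting driven by accumulating risk awareness) can produce a century-long rise and decline. The structure and parameters are illustrative stand-ins, not the authors' calibrated model.

```python
# Toy stock-and-flow model of a smoking lifecycle with two information
# feedbacks: uptake reinforced by prevalence (social norms) and quitting
# driven by accumulating risk perception. Parameters are illustrative.
pop = 100.0          # total population (fixed, arbitrary units)
smokers = 1.0        # stock: current smokers
risk_awareness = 0.0 # stock: societal perception of harm
dt = 0.25            # years per step

for step in range(int(120 / dt)):                     # simulate 120 years
    prevalence = smokers / pop
    uptake = 0.08 * prevalence * (pop - smokers)      # reinforcing loop R1
    quitting = 0.05 * risk_awareness * smokers        # balancing loop B1
    smokers += dt * (uptake - quitting)
    risk_awareness += dt * 0.01 * prevalence          # awareness accumulates
    if step % int(20 / dt) == 0:
        print(f"year {step * dt:5.0f}: prevalence = {prevalence:.2f}")
```

With these values, prevalence grows for most of a century before the accumulated-awareness loop becomes dominant and prevalence declines, echoing the phase-by-phase dominance analysis the abstract describes.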
Negative bias temperature instability (NBTI) is an issue of critical importance as the space electronics industry evolves because it may dominate the reliability lifetime. Understanding its physical origin is therefore essential in determining how best to search for methods of mitigation. It has been suggested that the magnitude of the effect is strongly dependent on circuit operation conditions (static or dynamic modes). In the present work, we examine the time constants related to the charging and recovery of trapped charge induced by NBTI in HfSiON gate dielectric devices. In previous work, we avoided the issue of charge relaxation during acquisition of the I_ds(V_gs) curve by invoking a continuous stressing technique whereby ΔV_th was extracted from a series of single-point I_ds measurements. This method relied heavily on determination of the initial value of the source-drain current (I_ds^0) prior to application of gate-source stress. In the present work we have used a new pulsed measurement system (Keithley SCS 4200-PIV) which not only removes this uncertainty but also permits dynamic measurements in which devices are AC stressed (Fig. 1a) or subjected to cycles of continued DC stress followed by relaxation (Fig. 1b). We can now examine the charging and recovery characteristics of NBTI with higher precision than previously possible. We have performed NBTI stress experiments at room temperature on p-channel MOSFETs made with HfSiON gate dielectrics. In all cases the devices were stressed in the linear regime with V_ds = -0.1 V. We have defined two separate waveforms/pulse trains, as illustrated in Fig. 1, which were applied to the gate of the MOSFET. First, we examined the charging characteristics by applying an AC stress at 2.5 MHz or 10 Hz for different times. For a 50% duty cycle this corresponded to V_gs = -2 V pulses for 200 ns or 500 ms followed by V_gs = 0 V pulses for 200 ns or 500 ms recovery, respectively. In between 'bursts' of AC stress cycles, the I_ds(V_gs) characteristic in the range (-0.6 V, -1.3 V) was measured in 10.2 µs. V_th was extracted directly from this curve, or from a single I_ds point normalized to the initial I_ds^0 using our previous method. The resulting I_ds/I_ds^0 curves are compared in Fig. 2, where the continuous stress results are included. In the second method, we examined the recovery dynamics by holding V_gs = 0 V for a finite amount of time (range 100 ns to 100 ms) following stress at V_gs = -2 V for various times. In Fig. 3 we compare |ΔV_th(t)| results for recovery times of 100 ms, 1 ms, 100 µs, 50 µs, 25 µs, 10 µs, 100 ns, and DC (i.e., no recovery). The data in Fig. 2 show that with a high-frequency stress (2.5 MHz), devices undergo significantly less (but finite) current degradation than devices stressed at 10 Hz. This appears to be limited by charging and not by recovery. Fig. 3 supports this hypothesis since for 100 ns recovery periods, only a small percentage of the trapped charge relaxes. A detailed explanation of these experiments will be presented at the conference.
Loki-Infect 3 is a desktop application intended for use by community-level decision makers. It allows rapid construction of small-scale studies of emerging or hypothetical infectious diseases in their communities and evaluation of the potential effectiveness of various containment strategies. It was designed with an emphasis on modularity, portability, and ease of use. Our goal is to make this program freely available to community workers across the world.
Injection of CO2 into formations containing brine is proposed as a long-term sequestration solution. A significant obstacle to sequestration performance is the presence of existing wells providing a transport pathway out of the sequestration formation. To understand how heterogeneity impacts the leakage rate, we employ two-dimensional models of the CO2 injection process into a sandstone aquifer with shale inclusions to examine the parameters controlling release through an existing well. This scenario is modeled as a constant-rate injection of supercritical CO2 into the existing formation where buoyancy effects, relative permeabilities, and capillary pressures are employed. Three geologic controls are considered: stratigraphic dip angle, shale inclusion size, and shale fraction. In this study, we examine the impact of heterogeneity on the amount and timing of CO2 released through a leaky well. Sensitivity analysis is performed to classify how various geologic controls influence CO2 loss. A 'Design of Experiments' approach is used to identify the most important parameters and combinations of parameters controlling CO2 migration while making efficient use of a limited number of computations. Results are used to construct a low-dimensional description of the transport scenario. The goal of this exploration is to develop a small set of parametric descriptors that can be generalized to similar scenarios. Results of this work will allow for estimation of the amount of CO2 that will be lost for a given scenario prior to commencing injection. Additionally, two-dimensional and three-dimensional simulations are compared to quantify the influence that surrounding geologic media have on the CO2 leakage rate.
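A minimal two-level full-factorial sketch over the three named geologic controls shows the 'Design of Experiments' idea. The response function is a made-up surrogate standing in for the CO2 transport simulator, and the factor levels are hypothetical.

```python
import itertools

# Two-level full factorial over the three geologic controls named in the
# abstract. The response below is a toy surrogate, NOT the simulator.
factors = {"dip_angle": (0.0, 5.0),          # degrees (assumed levels)
           "inclusion_size": (10.0, 100.0),  # meters (assumed levels)
           "shale_fraction": (0.1, 0.4)}     # (assumed levels)

def leaked_co2(dip_angle, inclusion_size, shale_fraction):
    # Illustrative surrogate: more shale and larger inclusions impede
    # migration toward the leaky well; dip steers the plume.
    return 100.0 - 150.0 * shale_fraction - 0.2 * inclusion_size + 3.0 * dip_angle

runs = []
for levels in itertools.product((0, 1), repeat=len(factors)):
    point = {name: factors[name][lv] for name, lv in zip(factors, levels)}
    runs.append((levels, leaked_co2(**point)))

# Main effect of each factor: mean response at its high level minus its low.
for i, name in enumerate(factors):
    hi = [y for lv, y in runs if lv[i] == 1]
    lo = [y for lv, y in runs if lv[i] == 0]
    print(f"{name:15s} main effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f}")
```

Eight runs cover all factor combinations, which is the efficiency argument for a designed experiment when each "run" is an expensive simulation.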
Complex metal hydrides continue to be investigated as solid materials for hydrogen storage. Traditional interstitial metal hydrides offer favorable thermodynamics and kinetics for hydrogen release but do not meet energy density requirements. Anionic metal hydrides and complex metal hydrides like magnesium borohydride have higher energy densities compared to interstitial metal hydrides, but poor kinetics and/or thermodynamically unfavorable side products limit their deployment as hydrogen storage materials in transportation applications. Main-group anionic materials such as the bis(borane)hypophosphite salt [PH2(BH3)2] have been known for decades, but only recently have we begun to explore their ability to release hydrogen. We have developed a new procedure for synthesizing the lithium and sodium hypophosphite salts. Routes for accessing other metal bis(borane)hypophosphite salts will be discussed. A significant advantage of this class of material is the air and water stability of the anion. Compared to metal borohydrides, which react violently with water, these phosphorus-based salts can be dissolved in protic solvents, including water, with little to no decomposition over the course of multiple days. The ability of these salts to release hydrogen upon heating has been assessed. While preliminary results indicate phosphine and boron-containing species are released, hydrogen is also a major component of the volatile species observed during the thermal decomposition. Additives such as NaH or KH mixed with the sodium salt Na[PH2(BH3)2] significantly perturb the decomposition reaction and greatly increase the mass loss as determined by thermal gravimetric analysis (TGA). This symbiotic behavior has the potential to affect the hydrogen storage ability of bis(borane)hypophosphite salts.
Ethanol and ethanol/gasoline blends are being widely considered as alternative fuels for light-duty automotive applications. At the same time, HCCI combustion has the potential to provide high efficiency and ultra-low exhaust emissions. However, the application of HCCI is typically limited to low and moderate loads because of unacceptably high heat-release rates (HRR) at higher fueling rates. This work investigates the potential of lowering the HCCI HRR at high loads by using partial fuel stratification to increase the in-cylinder thermal stratification. This strategy is based on ethanol's high heat of vaporization combined with its true single-stage ignition characteristics. Using partial fuel stratification, the strong fuel-vaporization cooling produces thermal stratification due to variations in the amount of fuel vaporization in different parts of the combustion chamber. The low sensitivity of the autoignition reactions to variations of the local fuel concentration allows the temperature variations to govern the combustion event. This results in a sequential autoignition event from leaner and hotter zones to richer and colder zones, lowering the overall combustion rate compared to operation with a uniform fuel/air mixture. The amount of partial fuel stratification was varied by adjusting the fraction of fuel injected late to produce stratification, and also by changing the timing of the late injection. The experiments show that a combination of 60-70% premixed charge and injection of 30-40% of the fuel at 80° CA before TDC is effective for smoothing the HRR. With CA50 held fixed, this increases the burn duration by 55% and reduces the maximum pressure-rise rate by 40%. Combustion stability remains high, but engine-out NOx has to be monitored carefully. For operation with strong reduction of the peak HRR, ISNOx rises to around 0.20 g/kWh for an IMEPg of 440 kPa. The single-cylinder HCCI research engine was operated naturally aspirated without EGR at 1200 rpm, and had a low residual level using a CR = 14 piston.
A silicon-photonics-based integrated optical phase-locked loop is utilized to synchronize a 10.2 GHz voltage-controlled oscillator with a 509 MHz mode-locked laser, achieving 32 fs integrated jitter over a 300 kHz bandwidth.
A design concept, device layout, and monolithic microfabrication processing sequence have been developed for a dual-metal layer atom chip for next-generation positional control of ultracold ensembles of trapped atoms. Atom chips are intriguing systems for precision metrology and quantum information that use ultracold atoms on microfabricated chips. Using magnetic fields generated by current carrying wires, atoms are confined via the Zeeman effect and controllably positioned near optical resonators. Current state-of-the-art atom chips are single-layer or hybrid-integrated multilayer devices with limited flexibility and repeatability. An attractive feature of multi-level metallization is the ability to construct more complicated conductor patterns and thereby realize the complex magnetic potentials necessary for the more precise spatial and temporal control of atoms that is required. Here, we have designed a true, monolithically integrated, planarized, multi-metal-layer atom chip for demonstrating crossed-wire conductor patterns that trap and controllably transport atoms across the chip surface to targets of interest.
Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi-Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N (larger is better); their RMS distance, d, to the ensemble's center (0.5553 is optimal); their average radius, µ(r) (1 is optimal); and their radius standard deviation, σ(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
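The four comparison metrics are easy to reproduce on the analytic test problem. The sketch below samples points uniformly on the [0,1] octant of the unit 6-D hypersphere (the test problem's exact Pareto set) and computes N, d, µ(r), and σ(r); for this exact set, d converges to the quoted optimum of 0.5553.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the analytic test problem's Pareto set: points on the unit 6-D
# hypersphere restricted to the [0,1] octant (normalized |Gaussian| vectors
# are uniform on that octant).
pts = np.abs(rng.normal(size=(1000, 6)))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# The four comparison metrics from the abstract.
N = len(pts)                                   # number of points (larger is better)
center = pts.mean(axis=0)                      # ensemble center
d = np.sqrt(np.mean(np.sum((pts - center) ** 2, axis=1)))  # RMS distance, 0.5553 optimal
r = np.linalg.norm(pts, axis=1)                # radii from the origin
print(f"N = {N}, d = {d:.4f}, mu(r) = {r.mean():.4f}, sigma(r) = {r.std():.4f}")
```

For a candidate Pareto set produced by MOGA, the same metrics would be computed on the optimizer's output points instead of this exact sample.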
Mudstone mechanical testing is often limited by poor core recovery and sample size, preservation, and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. 2004, is applied here to elasto-plastic deformation of small volumes of mudstone, in the range of cubic microns. This study examines behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precision-manufactured using an ion-milling method. Characterization of samples is carried out using: dual focused ion beam/scanning electron beam imaging of nano-scaled pores and the distribution of matrix clay and quartz, as well as pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability, among other techniques. Compression testing of micropillars under load control is performed using two different nanoindenter techniques. Deformation of cores 0.5 cm in diameter by 1 cm in length is carried out and visualized by a microscope loading stage and laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing is carried out using 2.54 cm plugs. Discussion of results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and influence of the substrate.
Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice state to internal model parameters. A new sea ice model that holds some promise for improving sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of this MPM sea ice code and compare it with the Los Alamos National Laboratory CICE code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters' effects on the solution.
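DAKOTA automates parameter studies of this kind; a hand-rolled one-at-a-time scan conveys the basic idea. The parameter names, perturbation size, and response function below are illustrative stand-ins, not actual CICE/MPM inputs or outputs.

```python
# Minimal one-at-a-time (OAT) sensitivity scan of the kind DAKOTA automates.
# The model below is a stand-in for a full sea ice run; all names and
# values are illustrative.
baseline = {"ice_strength": 2.75e4, "albedo": 0.78, "ocean_drag": 5.5e-3}

def september_ice_volume(p):
    # Toy response standing in for a single-year Arctic basin simulation.
    return 1.2e4 * p["albedo"] + 0.05 * p["ice_strength"] - 2.0e5 * p["ocean_drag"]

base = september_ice_volume(baseline)
for name, value in baseline.items():
    perturbed = dict(baseline, **{name: value * 1.10})   # +10% perturbation
    response = september_ice_volume(perturbed)
    # Normalized sensitivity: % change in output per % change in input.
    print(f"{name:12s} sensitivity = {((response - base) / base) / 0.10:+.3f}")
```

Evaluating parameters "in combination", as the abstract describes, replaces this loop with a designed sample (e.g., a Latin hypercube) over all ten parameters at once.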
Measurements of the electrical and thermal transport properties of one-dimensional nanostructures (e.g., nanotubes and nanowires) typically are obtained without detailed knowledge of the specimen's atomic-scale structure or defects. To address this deficiency we have developed a microfabricated, chip-based characterization platform that enables both transmission electron microscopy (TEM) of atomic structure and defects as well as measurement of the thermal transport properties of individual nanostructures. The platform features a suspended heater line that contacts the center of a suspended nanostructure/nanowire that was placed using in-situ scanning electron microscope nanomanipulators. One key advantage of this platform is that it is possible to measure the thermal conductivity of both halves of the nanostructure (on each side of the central heater), and this feature permits identification of possible changes in thermal conductance along the wire and measurement of the thermal contact resistance. Suspension of the nanostructure across a through-hole enables TEM characterization of the atomic and defect structure (dislocations, stacking faults, etc.) of the test sample. As a model study, we report the use of this platform to measure the thermal conductivity and defect structure of GaN nanowires. The utilization of this platform for the measurements of other nanostructures will also be discussed.
Sandia National Laboratories is collaborating with the National Research Council (NRC) Canada and the National Renewable Energy Laboratory (NREL) to develop a decision-support model that will evaluate the tradeoffs associated with high-latitude algae biofuel production co-located with wastewater, CO2, and waste heat. This project helps Canada meet its goal of diversifying fuel sources with algae-based biofuels. The biofuel production will provide a wide range of benefits including wastewater treatment, CO2 reuse, and reduction of demand for fossil-based fuels. The higher energy density in algae-based fuels gives them an advantage over crop-based biofuels, as the 'production' footprint required is much smaller, resulting in less water consumed and little, if any, conversion of agricultural land from food to fuel production. Besides being a potential source of liquid fuel, algae have the potential to be used to generate electricity through the burning of dried biomass, or to be anaerobically digested to generate methane for electricity production. Co-locating algae production with waste streams may be crucial for making algae an economically valuable fuel source, and will certainly improve its overall ecological sustainability. The modeling process will address these questions, and others that are important to the use of water for energy production: What are the locations where all resources are co-located, and what volumes of algal biomass and oil can be produced there? In locations where co-location does not occur, what resources should be transported, and how far, while maintaining economic viability? This work is being funded through the U.S. Department of Energy (DOE) Biomass Program Office of Energy Efficiency and Renewable Energy, and is part of a larger collaborative effort that includes sampling, strain isolation, strain characterization, and cultivation being performed by the NREL and Canada's NRC. Results from the NREL/NRC collaboration, including specific productivities of selected algal strains, will eventually be incorporated into this model.
Concerns over rising concentrations of greenhouse gases in the atmosphere have resulted in serious consideration of policies aimed at reduction of anthropogenic carbon dioxide (CO2) emissions. If large-scale abatement efforts are undertaken, one critical tool will be geologic sequestration of CO2 captured from large point sources, specifically coal and natural gas fired power plants. Current CO2 capture technologies exact a substantial energy penalty on the source power plant, which must be offset with make-up power. Water demands increase at the source plant due to added cooling loads. In addition, new water demand is created by water requirements associated with generation of the make-up power. At the sequestration site, however, saline water may be extracted to manage CO2 plume migration and pressure build-up in the geologic formation. Thus, while CO2 capture creates new water demands, CO2 sequestration has the potential to create new supplies. Some or all of the added demand may be offset by treatment and use of the saline waters extracted from geologic formations during CO2 sequestration. Sandia National Laboratories, with guidance and support from the National Energy Technology Laboratory, is creating a model to evaluate the potential for a combined approach to saline formations, as a sink for CO2 and a source of saline waters that can be treated and beneficially reused to serve power plant water demands. This presentation will focus on the magnitude of added U.S. power plant water demand under different CO2 emissions reduction scenarios, and the portion of added demand that might be offset by saline waters extracted during the CO2 sequestration process.
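The accounting the model performs can be sketched in a few lines: an energy penalty implies make-up power, make-up power implies added cooling water, and extracted brine offsets part of it. Every number below is an assumed placeholder, not an output of the Sandia/NETL model.

```python
# Illustrative water-balance accounting for CO2 capture plus brine
# extraction. All values are assumed placeholders for a sketch.
plant_mw = 500.0            # net output of the source plant [MWe] (assumed)
energy_penalty = 0.25       # fraction of output lost to capture (assumed)
water_intensity = 2.0       # cooling water use [m^3 per MWh] (assumed)
extraction_ratio = 0.5      # m^3 brine extracted per m^3 CO2 injected (assumed)
co2_rate = 400.0            # injected CO2, reservoir volume [m^3/h] (assumed)
treat_recovery = 0.6        # usable fraction of treated brine (assumed)

# Make-up power needed to restore the plant's pre-capture net output.
makeup_mw = plant_mw * energy_penalty / (1 - energy_penalty)
added_demand = makeup_mw * water_intensity          # m^3/h for make-up power
offset = co2_rate * extraction_ratio * treat_recovery
print(f"make-up power: {makeup_mw:.0f} MW")
print(f"added water demand: {added_demand:.0f} m^3/h, "
      f"brine offset: {offset:.0f} m^3/h ({offset / added_demand:.0%})")
```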
Disposal of high-level radioactive waste, including spent nuclear fuel, in deep (3 to 5 km) boreholes is a potential option for safely isolating these wastes from the surface and near-surface environment. Existing drilling technology permits reliable and cost-effective construction of such deep boreholes. Conditions favorable for deep borehole disposal in crystalline basement rocks, including low permeability, high salinity, and geochemically reducing conditions, exist at depth in many locations, particularly in geologically stable continental regions. Isolation of waste depends, in part, on the effectiveness of borehole seals and potential alteration of permeability in the disturbed host rock surrounding the borehole. Coupled thermal-mechanical-hydrologic processes induced by heat from the radioactive waste may impact the disturbed zone near the borehole and borehole wall stability. Numerical simulations of the coupled thermal-mechanical response in the host rock surrounding the borehole were conducted with three software codes or combinations of software codes. Software codes used in the simulations were FEHM, JAS3D, Aria, and Adagio. Simulations were conducted for disposal of spent nuclear fuel assemblies and for the higher heat output of vitrified waste from the reprocessing of fuel. Simulations were also conducted for both isotropic and anisotropic ambient horizontal stress in the host rock. Physical, thermal, and mechanical properties representative of granite host rock at a depth of 4 km were used in the models. Simulation results indicate peak temperature increases at the borehole wall of about 30 °C and 180 °C for disposal of fuel assemblies and vitrified waste, respectively. Peak temperatures near the borehole occur within about 10 years and decline rapidly within a few hundred years and with distance. The host rock near the borehole is placed under additional compression. Peak mechanical stress is increased by about 15 MPa (above the assumed ambient isotropic stress of 100 MPa) at the borehole wall for the disposal of fuel assemblies and by about 90 MPa for vitrified waste. Simulated peak volumetric strain at the borehole wall is about 420 and 2600 microstrain for the disposal of fuel assemblies and vitrified waste, respectively. Stress and volumetric strain decline rapidly with distance from the borehole and with time. Simulated peak stress at and parallel to the borehole wall for the disposal of vitrified waste with anisotropic ambient horizontal stress is about 440 MPa, which likely exceeds the compressive strength of granite if unconfined by fluid pressure within the borehole. The relatively small simulated displacements and volumetric strain near the borehole suggest that software codes using a nondeforming grid provide an adequate approximation of mechanical deformation in the coupled thermal-mechanical model. Additional modeling is planned to incorporate the effects of hydrologic processes coupled to thermal transport and mechanical deformation in the host rock near the heated borehole.
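For scale, the classic infinite line-source conduction solution, ΔT = q/(4πk) E1(r²/(4αt)), gives a back-of-the-envelope temperature rise at the borehole wall. The heat load and granite properties below are assumed round numbers, not the inputs to the FEHM/JAS3D/Aria/Adagio models; with these choices the estimate lands in the same few-tens-of-degrees range the simulations report for fuel assemblies.

```python
import math
from scipy.special import exp1

# Back-of-the-envelope conductive temperature rise around a heated borehole,
# using the infinite line-source solution dT = q/(4*pi*k) * E1(r^2/(4*alpha*t)).
# All property values are assumed round numbers, not the models' inputs.
k = 3.0          # thermal conductivity of granite [W/m-K] (assumed)
alpha = 1.3e-6   # thermal diffusivity of granite [m^2/s] (assumed)
q = 120.0        # linear heat load along the waste column [W/m] (assumed)
r = 0.25         # borehole radius [m] (assumed)

for years in (1, 10, 100):
    t = years * 3.15e7                                   # seconds
    dT = q / (4 * math.pi * k) * exp1(r ** 2 / (4 * alpha * t))
    print(f"t = {years:3d} yr: dT at borehole wall ~ {dT:4.1f} C")
```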
Because the potential effects of climate change are more severe than had previously been thought, increasing focus on uncertainty quantification is required for risk assessment needed by policy makers. Current scientific efforts focus almost exclusively on establishing best estimates of future climate change. However, the greatest consequences occur in the extreme tail of the probability density functions for climate sensitivity (the 'high-sensitivity tail'). To this end, we are exploring the impacts of newly postulated, highly uncertain, but high-consequence physical mechanisms to better establish the climate change risk. We define consequence in terms of dramatic change in physical conditions and in the resulting socioeconomic impact (hence, risk) on populations. Although we are developing generally applicable risk assessment methods, we have focused our initial efforts on uncertainty and risk analyses for the Arctic region. Instead of focusing on best estimates, requiring many years of model parameterization development and evaluation, we are focusing on robust emergent phenomena (those that are not necessarily intuitive and are insensitive to assumptions, subgrid-parameterizations, and tunings). For many physical systems, under-resolved models fail to generate such phenomena, which only develop when model resolution is sufficiently high. Our ultimate goal is to discover the patterns of emergent climate precursors (those that cannot be predicted with lower-resolution models) that can be used as a 'sensitivity fingerprint' and make recommendations for a climate early warning system that would use satellites and sensor arrays to look for the various predicted high-sensitivity signatures. Our initial simulations are focused on the Arctic region, where underpredicted phenomena such as rapid loss of sea ice are already emerging, and because of major geopolitical implications associated with increasing Arctic accessibility to natural resources, shipping routes, and strategic locations. We anticipate that regional climate will be strongly influenced by feedbacks associated with a seasonally ice-free Arctic, but with unknown emergent phenomena.
The controlled self-assembly of polymer thin-films into ordered domains has attracted significant academic and industrial interest. Most work has focused on controlling domain size and morphology through modification of the polymer block-lengths, n, and the Flory-Huggins interaction parameter, χ. Models, such as Self-Consistent Field Theory (SCFT), have been successful in describing the experimentally observed morphology of phase-separated polymers. We have developed a computational method which uses SCFT calculations as a predictive tool in order to guide our polymer synthesis. Armed with this capability, we have the ability to select χ and then search for an ideal value of n such that a desired morphology is the most thermodynamically favorable. This approach enables us to synthesize new block-polymers with exactly the segment lengths that will undergo self-assembly to the desired morphology. As proof-of-principle we have used our model to predict the gyroidal domain for various block lengths using a fixed χ value. To validate our computational model, we have synthesized a series of block-copolymers in which only the total molecular length changes. All of these materials have a predicted thermodynamically favorable gyroidal morphology based on the results of our SCFT calculations. Thin-films of these polymers are cast and annealed in order to equilibrate the structure. Final characterization of the polymer thin-film morphology has been performed. The accuracy of our calculations compared to experimental results is discussed. Extension of this predictive ability to tri-block polymer systems and the implications for making functionalizable nanoporous membranes will be discussed.
This presentation discusses the following questions: (1) What are the Global Problems that require Systems Engineering; (2) Where is Systems Engineering going; (3) What are the boundaries of Systems Engineering; (4) What is the distinction between Systems Thinking and Systems Engineering; (5) Can we use Systems Engineering on Complex Systems; and (6) Can we use Systems Engineering on Wicked Problems?
Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure.
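A one-dimensional cartoon of the averaged-PC construction: given a (here, assumed known) discontinuity location, fit separate Legendre expansions on each side after mapping each side to the standard domain, which is the role the Rosenblatt transformation plays in higher dimensions. The response function and jump location are synthetic, and the Bayesian detection step is omitted.

```python
import numpy as np
from numpy.polynomial import legendre

# 1-D cartoon of the averaged-PC idea: a response with a jump at x = c is
# represented by separate Legendre (PC) expansions on each side, each side
# first mapped to [-1, 1]. Response and jump location are synthetic.
c = 0.3                                     # "detected" discontinuity location
model = lambda x: np.where(x < c, np.sin(3 * x), 2.0 + 0.5 * x)

def side_pc(lo, hi, order=4, n=50):
    """Fit a Legendre expansion to the model on [lo, hi] mapped to [-1, 1]."""
    x = np.linspace(lo, hi, n)
    xi = 2 * (x - lo) / (hi - lo) - 1       # map to the standard domain
    coef = legendre.legfit(xi, model(x), order)
    return lambda x: legendre.legval(2 * (x - lo) / (hi - lo) - 1, coef)

left, right = side_pc(0.0, c), side_pc(c, 1.0)
surrogate = lambda x: np.where(x < c, left(x), right(x))

x = np.linspace(0, 1, 9)
print(np.max(np.abs(surrogate(x) - model(x))))  # small on both sides of the jump
```

In the paper's setting, the discontinuity location is itself uncertain, so predictions average over an ensemble of such piecewise surrogates, one per plausible realization of the discontinuity curve.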
The area of wind turbine component manufacturing represents a business opportunity in the wind energy industry. Modern wind turbines can provide large amounts of electricity, cleanly and reliably, at prices competitive with any other new electricity source. Over the next twenty years, the US market for wind power is expected to continue to grow, as is the domestic content of installed turbines, driving demand for American-made components. Between 2005 and 2009, components manufactured domestically grew eight-fold to reach 50 percent of the value of new wind turbines installed in the U.S. in 2009. While that growth is impressive, the industry expects domestic content to continue to grow, creating new opportunities for suppliers. In addition, ever-growing wind power markets around the world provide opportunities for new export markets.
The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concern has increased about potential hazards to the public and to property from accidental, and even more importantly intentional, spills. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills, 21 and 81 m in diameter, were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
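For context, one widely used pool fire correlation (Heskestad's, not necessarily the correlation applied in these tests) relates mean flame height to diameter through a nondimensional heat release rate; a minimal sketch with illustrative inputs:

```python
import math

def q_star(q_dot_kw, d_m, rho=1.2, cp=1.0, t_amb=293.0, g=9.81):
    """Nondimensional heat release rate Q* for a pool fire of
    diameter d_m [m] and total heat release q_dot_kw [kW]."""
    return q_dot_kw / (rho * cp * t_amb * math.sqrt(g * d_m) * d_m**2)

def flame_height_ratio(q_dot_kw, d_m):
    """Heskestad's correlation for mean flame height over diameter:
    L/D = 3.7 * Q*^(2/5) - 1.02."""
    return 3.7 * q_star(q_dot_kw, d_m) ** 0.4 - 1.02

# Illustrative numbers only (not the Sandia test conditions): an LNG
# pool burning at ~0.1 kg/m^2/s with heat of combustion ~50 MJ/kg.
for d in (21.0, 81.0):
    area = math.pi * d**2 / 4.0
    q_dot = 0.1 * 50e3 * area          # total heat release [kW]
    print(f"D = {d:5.1f} m  Q* = {q_star(q_dot, d):5.2f}"
          f"  L/D = {flame_height_ratio(q_dot, d):4.2f}")
```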
CACTUS (Code for Axial and Cross-flow TUrbine Simulation) is a turbine performance simulation code, based on a free wake vortex method, under development at Sandia National Laboratories (SNL) as part of a Department of Energy program to study marine hydrokinetic (MHK) devices. The current effort builds upon work previously done at SNL in the area of vertical axis wind turbine simulation, and aims to add models to handle generic device geometry and physical models specific to the marine environment. An overview of the current state of the project and validation effort is provided.
We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect the disparate scales, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportion and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov Chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and the fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold, where the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.
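A minimal sketch of the fine-scale representation described above: sample a 1D Gaussian field from a truncated KL expansion of an exponential covariance and threshold it to obtain a binary (inclusion/matrix) medium. Grid size, correlation length, truncation order, and threshold are illustrative choices, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D grid and exponential covariance C(x, y) = exp(-|x - y| / ell).
n, ell = 200, 0.1
x = np.linspace(0.0, 1.0, n)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Truncated KL expansion: keep the k largest eigenpairs.
k = 20
vals, vecs = np.linalg.eigh(cov)
vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]

# One realization of the Gaussian field from k standard normals.
xi = rng.standard_normal(k)
field = vecs @ (np.sqrt(vals) * xi)

# Excursion set: cells above the threshold become inclusions.
threshold = 0.5
binary_medium = (field > threshold).astype(int)
print("inclusion proportion:", binary_medium.mean())
```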
Lawrence, R.J.; Remo, John L.; Furnish, Michael D.
X-ray momentum coupling coefficients, C{sub M}, were determined by measuring stress waveforms in planetary materials subjected to impulsive radiation loading from the Sandia National Laboratories Z-machine. Results from the velocity interferometry (VISAR) diagnostic provided limited equation-of-state data as well. Targets were iron and stone meteorites, magnesium-rich olivine (dunite) solid and powder ({approx}5--300 {mu}m), and Si, Al, and Fe calibration targets. All samples were {approx}1 mm thick and, except for Si, backed by LiF single-crystal windows. The x-ray spectrum included a combination of thermal radiation (blackbody 170--237 eV) and line emissions from the pinch material (Cu, Ni, Al, or stainless steel). Target fluences of 0.4--1.7 kJ/cm{sup 2} at intensities of 43--260 GW/cm{sup 2} produced front-surface plasma pressures of 2.6--12.4 GPa. Stress waves driven into the samples were attenuating due to the short ({approx}5 ns) duration of the drive pulse. Because the impulse carried by an attenuating wave is conserved, accurate C{sub M} measurements are possible provided the mechanical impedance mismatch between the sample and the window is known. Impedance-corrected C{sub M} determined from rear-surface motion was 1.9--3.1 x 10{sup -5} s/m for stony meteorites, 2.7 and 0.5 x 10{sup -5} s/m for solid and powdered dunite, and 0.8--1.4 x 10{sup -5} s/m.
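As a rough consistency check on the quoted magnitudes (toy arithmetic only, not the impedance-corrected analysis in the experiments), the coupling coefficient is the delivered impulse per unit area divided by the incident fluence:

```python
# Order-of-magnitude estimate of the momentum coupling coefficient
# C_M = (impulse per unit area) / (incident fluence).
# Toy inputs chosen from the ranges quoted in the abstract; the real
# analysis corrects for wave attenuation and impedance mismatch.

pressure = 12.4e9        # front-surface plasma pressure [Pa]
pulse = 5e-9             # drive pulse duration [s]
fluence = 0.4e3 * 1e4    # 0.4 kJ/cm^2 expressed in J/m^2

impulse_per_area = pressure * pulse          # [Pa*s] = [N*s/m^2]
c_m = impulse_per_area / fluence             # [s/m]
print(f"C_M ~ {c_m:.1e} s/m")                # ~1.6e-05 s/m
```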
We are developing a low-emissivity thermal management coating system to minimize radiative heat losses under a high-vacuum environment. Good adhesion, low outgassing, and good thermal stability of the coating material are essential elements for a long-life, reliable thermal management device. The system of electroplated Au coating on the adhesion-enhancing Wood's Ni strike and 304L substrate was selected due to its low emissivity and low surface chemical reactivity. The physical and chemical properties, interface bonding, thermal aging, and compatibility of the above Au/Ni/304L system were examined extensively. The study shows that the as-plated Au and Ni samples contain submicron columnar grains, stringers of nanopores, and/or H{sub 2} gas bubbles, as expected. The grain structures of Au and Ni are thermally stable at 250 C for at least 63 days. The interface bonding is strong, which can be attributed to good mechanical locking among the Au, the 304L, and the porous Ni strike. However, thermal instability of the nanopore structure (i.e., pore coalescence and coarsening due to vacancy and/or entrapped gaseous phase diffusion) and Ni diffusion were observed. In addition, the study found that prebaking 304L in a furnace at {ge} 1 x 10{sup -4} Torr promotes Cr-oxide formation on the 304L surface, which reduces the effectiveness of the intended H-removal. The extent of the pore coalescence and coarsening, and their effect on long-term system integrity and outgassing, are yet to be understood. Mitigating system outgassing and improving Au adhesion require a further understanding of the process-structure-system-performance relationships within the electroplated Au/Ni/304L system.
Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of its cost, and compare it to other proposed methods for fault resilience.
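To see where the "more than half the time" figure comes from, a back-of-the-envelope model is enough: Young's approximation for the optimal checkpoint interval, with hypothetical MTBF and checkpoint-time values (not the paper's simulation parameters):

```python
import math

# Young's approximation: optimal checkpoint interval tau = sqrt(2*d*M),
# where d is the checkpoint write time and M is the system MTBF.
# Fraction of time lost ~ d/tau (writing) + tau/(2*M) (rework).
# Node MTBF and checkpoint time below are hypothetical.

node_mtbf_h = 5.0 * 365 * 24   # assume 5-year MTBF per node [hours]
ckpt_h = 5.0 / 60.0            # assume 5 minutes per checkpoint

for nodes in (1_000, 10_000, 50_000, 100_000):
    m = node_mtbf_h / nodes                # system MTBF shrinks with size
    tau = math.sqrt(2.0 * ckpt_h * m)      # optimal interval [hours]
    overhead = ckpt_h / tau + tau / (2.0 * m)
    print(f"{nodes:>7} nodes: MTBF {m:6.2f} h, "
          f"interval {tau:5.2f} h, time lost ~{overhead:4.0%}")
```

With these assumed inputs the lost-time fraction climbs from a few percent at 1,000 nodes to well over half at 100,000 nodes, which is the trend motivating redundant computing.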
Subsurface containment of CO2 is predicated on effective caprock sealing. Many previous studies have relied on macroscopic measurements of capillary breakthrough pressure and other petrophysical properties without direct examination of solid phases that line pore networks and directly contact fluids. However, pore-lining phases strongly contribute to sealing behavior through interfacial interactions among CO2, brine, and the mineral or non-mineral phases. Our high resolution (i.e., sub-micron) examination of the composition of pore-lining phases of several continental and marine mudstones indicates that sealing efficiency (i.e., breakthrough pressure) is governed by pore shapes and pore-lining phases that are not identifiable except through direct characterization of pores. Bulk X-ray diffraction data does not indicate which phases line the pores and may be especially lacking for mudstones with organic material. Organics can line pores and may represent once-mobile phases that modify the wettability of an originally clay-lined pore network. For shallow formations (i.e., < {approx}800 m depth), interfacial tension and contact angles result in breakthrough pressures that may be as high as those needed to fracture the rock - thus, in the absence of fractures, capillary sealing efficiency is indicated. Deeper seals have poorer capillary sealing if mica-like wetting dominates the wettability. We thank the U.S. Department of Energy's National Energy Technology Laboratory and the Office of Basic Energy Sciences, and the Southeast and Southwest Carbon Sequestration Partnerships for supporting this work.
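The capillary control described here follows from the Young-Laplace relation; a small sketch with representative (not site-specific) values shows why nanometer-scale pore throats can support breakthrough pressures approaching fracture-pressure magnitude:

```python
import math

def breakthrough_pressure(gamma, theta_deg, radius):
    """Young-Laplace capillary entry pressure for a cylindrical pore
    throat: P = 2 * gamma * cos(theta) / r  [Pa]."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / radius

# Representative (not site-specific) values: CO2-brine interfacial
# tension ~30 mN/m, strongly water-wet clay (theta ~ 0), and pore
# throat radii of 5-50 nm.
gamma = 0.030                       # N/m
for r_nm in (5, 10, 50):
    p = breakthrough_pressure(gamma, 0.0, r_nm * 1e-9)
    print(f"r = {r_nm:3d} nm -> P ~ {p / 1e6:5.1f} MPa")
```

A shift toward intermediate or CO2-wet behavior (larger theta, as with organic or mica-like pore linings) directly reduces these pressures, which is the sensitivity the abstract highlights.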
Injection of CO2 into underground rock formations can reduce atmospheric CO2 emissions. Caprocks present above potential storage formations are the main structural trap inhibiting CO2 from leaking into overlying aquifers or back to the Earth's surface. Dissolution and precipitation of caprock minerals resulting from reaction with CO2 may alter the pore network, where many pores are of the micrometer to nanometer scale, thus altering the structural trapping potential of the caprock. However, the distribution, geometry, and volume of pores at these scales are poorly characterized. In order to evaluate the overall risk of leakage of CO2 from storage formations, a first critical step is understanding the distribution and shape of pores in a variety of different caprocks. As caprocks are often comprised of mudstones, we analyzed samples from several mudstone formations with small angle neutron scattering (SANS) and high-resolution transmission electron microscopy (TEM) imaging to compare the pore networks. Mudstones were chosen from current or potential sites for carbon sequestration projects, including the Marine Tuscaloosa Group, the Lower Tuscaloosa Group, the upper and lower shale members of the Kirtland Formation, and the Pennsylvanian Gothic shale. Expandable clay contents ranged from 10% in the Gothic shale to approximately 40% in the Kirtland Formation. During SANS, neutrons effectively scatter from interfaces between materials with differing scattering length density (i.e., minerals and pores). The intensity of scattered neutrons, I(Q), where Q is the scattering vector, gives information about the volume and arrangement of pores in the sample. The slope of the scattering data when plotted as log I(Q) vs. log Q provides information about the fractality or geometry of the pore network. On such plots, slopes from -2 to -3 represent mass fractals while slopes from -3 to -4 represent surface fractals. Scattering data showed surface fractal dimensions for the Kirtland Formation and one sample from the Tuscaloosa Group close to 3, indicating very rough surfaces. In contrast, scattering data for the Gothic shale exhibited mass fractal behavior. In one sample of the Tuscaloosa Group the data are described by a surface fractal at low Q (larger pores) and a mass fractal at high Q (smaller pores), indicating two pore populations contributing to the scattering behavior. These small angle neutron scattering results, combined with high-resolution TEM imaging, provided a means for both qualitative and quantitative analysis of the differences in pore networks between these various mudstones.
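A minimal sketch of the slope analysis described above, fitting a power law to synthetic scattering data and classifying the fractal regime (the thresholds follow the -2/-3/-4 ranges given in the text):

```python
import numpy as np

def fractal_regime(q, intensity):
    """Fit log I(Q) vs log Q and classify the scattering regime.
    Slopes of -2 to -3 indicate mass fractals (dimension = -slope);
    slopes of -3 to -4 indicate surface fractals (dimension = 6 + slope)."""
    slope, _ = np.polyfit(np.log10(q), np.log10(intensity), 1)
    if -3.0 <= slope <= -2.0:
        return slope, f"mass fractal, Dm = {-slope:.2f}"
    if -4.0 <= slope <= -3.0:
        return slope, f"surface fractal, Ds = {6.0 + slope:.2f}"
    return slope, "outside simple fractal interpretation"

# Synthetic surface-fractal-like data: I(Q) ~ Q^-3.2 with 5% noise.
rng = np.random.default_rng(1)
q = np.logspace(-2.5, -0.5, 40)          # scattering vector (arb. units)
intensity = q ** -3.2 * (1.0 + 0.05 * rng.standard_normal(q.size))
slope, label = fractal_regime(q, intensity)
print(f"slope = {slope:.2f}: {label}")
```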
We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem whose structure is extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, or a Lagrangian method; the Lagrangian method is necessary for the solution of large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used, and is using, to design contamination warning systems for US municipal water systems.
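To make the p-median structure concrete, here is a small greedy heuristic (one of many possible heuristics, not TEVA-SPOT's algorithms) over a toy impact matrix whose entry [i][j] is the damage from incident i if the first detection is by a sensor at candidate location j:

```python
import numpy as np

def greedy_sensor_placement(impact, p):
    """Greedily choose p sensor locations minimizing the mean, over
    incidents, of the best (minimum) impact achieved so far."""
    n_incidents, n_locations = impact.shape
    chosen = []
    best = np.full(n_incidents, np.inf)   # best impact seen per incident
    for _ in range(p):
        # Pick the unchosen location with the biggest reduction in
        # mean impact across all incidents.
        scores = [np.minimum(best, impact[:, j]).mean()
                  if j not in chosen else np.inf
                  for j in range(n_locations)]
        j_star = int(np.argmin(scores))
        chosen.append(j_star)
        best = np.minimum(best, impact[:, j_star])
    return chosen, best.mean()

# Toy problem: 6 contamination incidents, 8 candidate locations.
rng = np.random.default_rng(2)
impact = rng.uniform(0.0, 100.0, size=(6, 8))
sensors, mean_impact = greedy_sensor_placement(impact, p=3)
print("sensors at locations:", sensors, " mean impact:", round(mean_impact, 1))
```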
The acoustic field generated during a Direct Field Acoustic Test (DFAT) has been analytically modeled in two space dimensions using a properly phased distribution of propagating plane waves. Both the pure-tone and broadband acoustic field were qualitatively and quantitatively compared to a diffuse acoustic field. The modeling indicates significant non-uniformity of sound pressure level for an empty (no test article) DFAT, specifically a center peak and concentric maxima/minima rings. This spatial variation is due to the equivalent phase among all propagating plane waves at each frequency. The excitation of a simply supported slender beam immersed within the acoustic fields was also analytically modeled. Results indicate that mid-span response is dependent upon location and orientation of the beam relative to the center of the DFAT acoustic field. For a diffuse acoustic field, due to its spatial uniformity, mid-span response sensitivity to location and orientation is nonexistent.
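The center peak and concentric rings fall out of a few lines of the plane-wave construction: summing equal-amplitude, equal-phase plane waves over all arrival directions yields a pressure magnitude that varies like a Bessel function of radius. A minimal 2D sketch (frequency and grid are illustrative):

```python
import numpy as np

# Sum N equal-amplitude, equal-phase plane waves arriving from
# directions spread uniformly around the circle. The resulting
# |p(r)| approaches |J0(k*r)|: a center peak with concentric rings.

freq = 500.0                 # illustrative tone [Hz]
c_sound = 343.0              # speed of sound [m/s]
k = 2.0 * np.pi * freq / c_sound
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

x = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, x)
p = np.zeros_like(X, dtype=complex)
for th in angles:
    p += np.exp(1j * k * (X * np.cos(th) + Y * np.sin(th)))
p /= angles.size

# Pressure magnitude along a radius: peak at the center, ring minima.
center = 100                 # index of x = 0
profile = np.abs(p[center, center:])
print("radial |p| samples:", np.round(profile[::25], 3))
```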
Mercury intrusion porosimetry (MIP) is an often-applied technique for determining pore throat distributions and for seal analysis of fine-grained rocks. Due to closure effects, potential pore collapse, and complex pore network topologies, MIP data interpretation can be ambiguous, and is often biased toward the smaller pores in the distribution. We apply 3D imaging techniques and lattice-Boltzmann modeling in interpreting MIP data for samples of the Cretaceous Selma Group Chalk. In the Mississippi Interior Salt Basin, the Selma Chalk is the apparent seal for oil and gas fields in the underlying Eutaw Fm. and, where unfractured, is one of the regional-scale seals identified by the Southeast Regional Carbon Sequestration Partnership for CO2 injection sites. Dual-beam focused ion/scanning electron microscopy and laser scanning confocal microscopy methods are used for 3D imaging of nanometer-to-micron scale microcrack and pore distributions in the Selma Chalk. A combination of image analysis software is used to obtain geometric pore body and throat distributions and other topological properties, which are compared to MIP results. 3D data sets of pore-microfracture networks are used in lattice-Boltzmann simulations of drainage (wetting fluid displaced by non-wetting fluid via the Shan-Chen algorithm), which in turn are used to model MIP procedures. Results are used in interpreting MIP data, understanding microfracture-matrix interaction during multiphase flow, and seal analysis for underground CO2 storage.
Photovoltaic systems are often priced in $/W{sub p}, where W{sub p} refers to the DC power rating of the modules at Standard Test Conditions (1000 W/m{sup 2}, 25 C cell temperature) and $ refers to the installed cost of the system. However, the true value of the system is in the energy it will produce in kWh, not the power rating. System energy production is a function of the system design and location, the mounting configuration, the power conversion system, and the module technology, as well as the solar resource. Even if all other variables are held constant, the annual energy yield (kWh/kW{sub p}) will vary among module technologies because of differences in response to low light levels and temperature. Understanding energy yield is a key part of understanding system value. System performance models are used during project development to estimate the expected output of PV systems for a given design and location. Performance modeling is normally done by the system designer or system integrator. Often, an independent engineer will also model system output during a due diligence review of a project. A variety of system performance models are available. The most commonly used modeling tool for project development and due diligence in the United States is probably PVsyst, while those seeking a quick answer to expected energy production may use PVWatts. In this paper, we examine the variation in predicted energy output among modeling tools and users, and compare that to measured output.
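A stripped-down illustration of why yield (kWh/kW{sub p}) differs by module technology even at a fixed location: fold a technology-specific power temperature coefficient into a simple performance-ratio calculation. All numbers are illustrative, not values from the models compared in the paper.

```python
# Toy annual energy yield: rated power derated by operating cell
# temperature via a technology-specific power temperature coefficient.
# All inputs are illustrative assumptions, not measured values.

ANNUAL_POA = 2000.0      # plane-of-array insolation [kWh/m^2/yr]
CELL_TEMP = 45.0         # assumed average operating cell temp [C]
SYSTEM_LOSSES = 0.85     # inverter, wiring, soiling, etc. (lumped)

TEMP_COEFF = {           # power temperature coefficients [%/C]
    "crystalline Si": -0.45,
    "CdTe thin film": -0.25,
}

for tech, coeff in TEMP_COEFF.items():
    derate = 1.0 + coeff / 100.0 * (CELL_TEMP - 25.0)
    # Yield in kWh per kWp: insolation (equivalent sun-hours) x derates.
    yield_kwh_per_kwp = ANNUAL_POA * derate * SYSTEM_LOSSES
    print(f"{tech:15s}: {yield_kwh_per_kwp:6.0f} kWh/kWp")
```

With these assumptions the two technologies differ by several percent in annual yield at identical rated power, which is the effect the paper's model comparison must capture.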
A Rayleigh wave propagates laterally without dispersion in the vicinity of the plane stress-free surface of a homogeneous and isotropic elastic halfspace. The phase speed is independent of frequency and depends only on the Poisson ratio of the medium. However, after temporal and spatial discretization, a Rayleigh wave simulated by a 3D staggered-grid finite-difference (FD) seismic wave propagation algorithm suffers from frequency- and direction-dependent numerical dispersion. The magnitude of this dispersion depends critically on FD algorithm implementation details. Nevertheless, proper gridding can control numerical dispersion to within an acceptable level, leading to accurate Rayleigh wave simulations. Many investigators have derived dispersion relations appropriate for body wave propagation by various FD algorithms. However, the situation for surface waves is less well-studied. We have devised a numerical search procedure to estimate Rayleigh phase speed and group speed curves for 3D O(2,2) and O(2,4) staggered-grid FD algorithms. In contrast with the continuous time-space situation (where phase speed is obtained by extracting the appropriate root of the Rayleigh cubic), we cannot develop a closed-form mathematical formula governing the phase speed. Rather, we numerically seek the particular phase speed that leads to a solution of the discrete wave propagation equations, while holding medium properties, frequency, horizontal propagation direction, and gridding intervals fixed. Group speed is then obtained by numerically differentiating the phase speed with respect to frequency. The problem is formulated for an explicit stress-free surface positioned at two different levels within the staggered spatial grid. Additionally, an interesting variant involving zero-valued medium properties above the surface is addressed. We refer to the latter as an implicit free surface. Our preliminary conclusion is that an explicit free surface, implemented with O(4) spatial FD operators and positioned at the level of the compressional stress components, leads to superior numerical dispersion performance. Phase speeds measured from fixed-frequency synthetic seismograms agree very well with the numerical predictions.
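For reference, the continuous-medium benchmark against which the numerical dispersion is measured can be computed directly; a short sketch solving the Rayleigh secular equation for the phase speed at a given Poisson ratio:

```python
import math
from scipy.optimize import brentq

def rayleigh_speed_ratio(nu):
    """Solve the Rayleigh equation for c_R / c_s given Poisson ratio nu.
    With x = (c/c_s)^2 and r = (c_s/c_p)^2 = (1-2nu)/(2-2nu), the root
    of (2-x)^2 - 4*sqrt(1-r*x)*sqrt(1-x) in (0,1) gives the speed."""
    r = (1.0 - 2.0 * nu) / (2.0 - 2.0 * nu)
    f = lambda x: (2.0 - x)**2 - 4.0 * math.sqrt(1.0 - r * x) * math.sqrt(1.0 - x)
    x = brentq(f, 1e-9, 1.0 - 1e-9)   # exclude the trivial root at x = 0
    return math.sqrt(x)

# Phase speed is frequency-independent and depends only on nu:
for nu in (0.0, 0.25, 0.33, 0.45):
    print(f"nu = {nu:4.2f}: c_R/c_s = {rayleigh_speed_ratio(nu):.4f}")
```

For nu = 0.25 this returns the familiar c_R/c_s of about 0.9194; the FD phase speeds measured by the numerical search can be compared against this frequency-independent value.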
Shales and other mudstones are the most abundant rock types in sedimentary basins, yet have received comparatively little attention. Common as hydrocarbon seals, they are increasingly being targeted as unconventional gas reservoirs, caprocks for CO2 sequestration, and storage repositories for waste. The small pore and grain sizes, large specific surface areas, and clay mineral structures lend themselves to rapid reaction rates, high capillary pressures, and semi-permeable membrane behavior accompanying changes in stress, pressure, temperature, and chemical conditions. Under far-from-equilibrium conditions, mudrocks display a variety of spatio-temporal self-organized phenomena arising from nonlinear thermo-mechano-chemo-hydro coupling. Beginning with a detailed examination of nano-scale pore network structures in mudstones, we discuss the dynamics behind such self-organized phenomena as pressure solitons in unconsolidated muds, chemically-induced flow self-focusing and permeability transients, localized compaction, time-dependent well-bore failure, and oscillatory osmotic fluxes as they occur in clay-bearing sediments. Examples are drawn from experiments, numerical simulation, and the field. These phenomena bear on the ability of these rocks to serve as containment barriers.
To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large global shared memory and processors with a memory-latency-tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass the current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available) and the ability to process 20 billion triples completely in-memory.
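Dictionary encoding, the first of the three pieces, is conceptually simple; a minimal single-threaded sketch (the XMT implementation is a parallel shared-memory hash table, which this does not attempt to reproduce):

```python
# Minimal dictionary encoding for RDF triples: map each distinct term
# (IRI or literal) to a compact integer ID so downstream inference and
# query processing operate on fixed-width integers instead of strings.

def make_encoder():
    term_to_id, id_to_term = {}, []

    def encode(term):
        if term not in term_to_id:
            term_to_id[term] = len(id_to_term)
            id_to_term.append(term)
        return term_to_id[term]

    def decode(tid):
        return id_to_term[tid]

    return encode, decode

encode, decode = make_encoder()
triples = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "rdf:type", "ex:Person"),
]
encoded = [tuple(encode(t) for t in triple) for triple in triples]
print("encoded:", encoded)
print("decoded sample:", tuple(decode(i) for i in encoded[2]))
```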
Dissolved CO2 from geological CO2 sequestration may react with dissolved minerals in fractured rocks or confined aquifers and cause mineral precipitation. The overall rate of reaction can be limited by diffusive or dispersive mixing, and mineral precipitation can block pores and further hinder these processes. Mixing-induced calcite precipitation experiments were performed by injecting solutions containing CaCl2 and Na2CO3 through two separate inlets of a micromodel (1 cm x 2 cm x 40 {mu}m); transverse dispersion caused the two solutions to mix along the center of the micromodel, resulting in calcite precipitation. The amount of calcite precipitation initially increased to a maximum and then decreased to a steady-state value. Fluorescent microscopy and imaging techniques were used to visualize calcite precipitation and the corresponding effects on the flow field. Experimental micromodel results were evaluated with pore-scale simulations using a 2-D lattice-Boltzmann code for water flow and a finite volume code for reactive transport. The reactive transport model included the impact of pH upon carbonate speciation and calcite dissolution. We found that proper estimation of the effective diffusion coefficient and the reactive surface area is necessary to adequately simulate precipitation and dissolution rates. The effective diffusion coefficient was decreased in grid cells where calcite precipitated, and tracking reactive surface area over time played a significant role in predicting reaction patterns. Our results may improve understanding of the fundamental physicochemical processes during CO2 sequestration in geologic formations.
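The geometry of the mixing-limited reaction zone can be anticipated from advection-dispersion scaling: the transverse mixing width along the centerline grows roughly as sqrt(4*Dt*x/v). A sketch with micromodel-scale but purely illustrative parameter values:

```python
import math

# Transverse mixing-zone width along the centerline of a co-injection
# micromodel, from advection-dispersion scaling: w(x) ~ sqrt(4*Dt*x/v).
# Parameter values are illustrative, not the experimental conditions.

d_t = 1.0e-9       # transverse dispersion coefficient [m^2/s]
v = 1.0e-4         # mean pore velocity [m/s]

for x_cm in (0.5, 1.0, 2.0):
    x = x_cm / 100.0
    w = math.sqrt(4.0 * d_t * x / v)
    print(f"x = {x_cm:3.1f} cm: mixing width ~ {w * 1e6:5.0f} micrometers")
```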
Heterogeneity plays an important role in groundwater flow and contaminant transport in natural systems. Since it is impossible to directly measure the spatial variability of hydraulic conductivity, predictions of solute transport based on mathematical models are always uncertain. While in most cases groundwater flow and tracer transport problems are investigated in two-dimensional (2D) systems, it is important to study more realistic and well-controlled 3D systems to fully evaluate inverse parameter estimation techniques and the uncertainty in the resulting estimates. We used tracer concentration breakthrough curves (BTCs) obtained from a magnetic resonance imaging (MRI) technique in a small flow cell (14 x 8 x 8 cm) that was packed with a known pattern of five different sands (i.e., zones) having cm-scale variability. In contrast to typical inversion systems with head, conductivity, and concentration measurements at limited points, the MRI data included BTCs measured at a voxel scale ({approx}0.2 cm in each dimension) over 13 x 8 x 8 cm with a well-controlled boundary condition, but did not include direct measurements of head and conductivity. Hydraulic conductivity and porosity were conceptualized as spatial random fields and estimated using pilot points along layers of the 3D medium. The steady-state water flow and solute transport were solved using MODFLOW and MODPATH. The inversion problem was solved with a nonlinear parameter estimation package, PEST. Two approaches to parameterization of the spatial fields are evaluated: (1) the detailed zone information was used as prior information to constrain the spatial impact of the pilot points and reduce the number of parameters; and (2) highly parameterized inversion at the cm scale (e.g., 1664 parameters) used singular value decomposition (SVD) methodology to significantly reduce the run-time demands. Both results will be compared to measured BTCs. With MRI, it is easy to change the averaging scale of the observed concentration from point to cross-section; this comparison allows us to evaluate which method best matches experimental results at different scales. To evaluate the uncertainty in parameter estimation, the null-space Monte Carlo method will be used to reduce the computational burden of developing calibration-constrained Monte Carlo parameter fields. This study will illustrate how accurately a well-calibrated model can predict contaminant transport.
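The SVD-based reduction mentioned above can be illustrated on a generic ill-posed linear inverse problem: truncate the small singular values so that only well-constrained parameter combinations are estimated. This is a schematic sketch of the idea, not PEST's implementation:

```python
import numpy as np

def truncated_svd_solve(jacobian, residuals, energy=0.99):
    """Solve J*dp = r in the least-squares sense, keeping only the
    leading singular values carrying `energy` of the spectrum.
    Discarding near-null-space directions stabilizes the inversion."""
    u, s, vt = np.linalg.svd(jacobian, full_matrices=False)
    keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]
    return vt.T @ (s_inv * (u.T @ residuals)), keep

# Toy ill-posed problem: many parameters, few informative directions.
rng = np.random.default_rng(3)
jac = rng.standard_normal((30, 100)) @ np.diag(np.exp(-0.1 * np.arange(100)))
true_p = rng.standard_normal(100)
obs = jac @ true_p + 0.01 * rng.standard_normal(30)

dp, kept = truncated_svd_solve(jac, obs)
print(f"kept {kept} of {min(jac.shape)} singular values")
print("fit residual norm:", round(float(np.linalg.norm(jac @ dp - obs)), 4))
```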