Ethanol and ethanol/gasoline blends are being widely considered as alternative fuels for light-duty automotive applications. At the same time, HCCI combustion has the potential to provide high efficiency and ultra-low exhaust emissions. However, the application of HCCI is typically limited to low and moderate loads because of unacceptably high heat-release rates (HRR) at higher fueling rates. This work investigates the potential of lowering the HCCI HRR at high loads by using partial fuel stratification to increase the in-cylinder thermal stratification. This strategy is based on ethanol's high heat of vaporization combined with its true single-stage ignition characteristics. With partial fuel stratification, the strong fuel-vaporization cooling produces thermal stratification due to variations in the amount of fuel vaporized in different parts of the combustion chamber. The low sensitivity of the autoignition reactions to variations of the local fuel concentration allows the temperature variations to govern the combustion event. This results in a sequential autoignition from leaner and hotter zones to richer and colder zones, lowering the overall combustion rate compared to operation with a uniform fuel/air mixture. The amount of partial fuel stratification was varied by adjusting the fraction of fuel injected late to produce stratification, and also by changing the timing of the late injection. The experiments show that a combination of 60-70% premixed charge and injection of 30-40% of the fuel at 80°CA before TDC is effective for smoothing the HRR. With CA50 held fixed, this increases the burn duration by 55% and reduces the maximum pressure-rise rate by 40%. Combustion stability remains high, but engine-out NOx has to be monitored carefully. For operation with a strong reduction of the peak HRR, ISNOx rises to around 0.20 g/kWh for an IMEPg of 440 kPa. The single-cylinder HCCI research engine was operated naturally aspirated without EGR at 1200 rpm and had a low residual level using a CR = 14 piston.
A silicon photonics based integrated optical phase locked loop is utilized to synchronize a 10.2 GHz voltage controlled oscillator with a 509 MHz mode locked laser, achieving 32 fs integrated jitter over 300 kHz bandwidth.
Mudstone mechanical testing is often limited by poor core recovery and by sample size, preservation, and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. (2004), is applied here to elasto-plastic deformation of small volumes of mudstone, on the order of cubic microns. This study examines the behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precisely manufactured using an ion-milling method. Characterization of samples is carried out using: dual focused ion - scanning electron beam imaging of nano-scaled pores and the distribution of matrix clay and quartz, as well as pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability, among other techniques. Compression testing of micropillars under load control is performed using two different nanoindenter techniques. Deformation of cores 0.5 cm in diameter by 1 cm in length is carried out and visualized with a microscope loading stage and laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing are carried out using 2.54 cm plugs. Discussion of results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and the influence of the substrate.
Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice state to internal model parameters. A new sea ice model that holds some promise for improving sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of this MPM sea ice code and compare it with the Los Alamos National Laboratory CICE code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters' effects on the solution.
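To make the kind of parameter study described above concrete, the following is a minimal one-at-a-time sensitivity sketch in Python. The model function `ice_metric`, the parameter names, and the perturbation size are illustrative placeholders, not the actual MPM, CICE, or DAKOTA interfaces.

```python
# Minimal sketch of a one-at-a-time parameter sensitivity study, loosely
# analogous to the DAKOTA-driven analysis described above.  The model
# function and parameter names below are hypothetical stand-ins.
import numpy as np

def ice_metric(params):
    # Placeholder for a single-year Arctic simulation returning, e.g.,
    # September ice extent; here just a smooth synthetic response.
    return 6.0 - 2.0 * params["albedo_drop"] + 0.5 * params["ice_strength"]

nominal = {"albedo_drop": 0.1, "ice_strength": 1.0}
base = ice_metric(nominal)

sensitivities = {}
for name, value in nominal.items():
    perturbed = dict(nominal)
    perturbed[name] = value * 1.1          # +10% perturbation
    # Normalized sensitivity: relative output change per relative input change
    sensitivities[name] = ((ice_metric(perturbed) - base) / base) / 0.1

print(sensitivities)
```

In practice DAKOTA automates such studies (and more sophisticated combined designs) by repeatedly driving the simulation code with perturbed input decks.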
Measurements of the electrical and thermal transport properties of one-dimensional nanostructures (e.g., nanotubes and nanowires) typically are obtained without detailed knowledge of the specimen's atomic-scale structure or defects. To address this deficiency we have developed a microfabricated, chip-based characterization platform that enables both transmission electron microscopy (TEM) of atomic structure and defects as well as measurement of the thermal transport properties of individual nanostructures. The platform features a suspended heater line that contacts the center of a suspended nanostructure/nanowire that was placed using in-situ scanning electron microscope nanomanipulators. One key advantage of this platform is that it is possible to measure the thermal conductivity of both halves of the nanostructure (on each side of the central heater), and this feature permits identification of possible changes in thermal conductance along the wire and measurement of the thermal contact resistance. Suspension of the nanostructure across a through-hole enables TEM characterization of the atomic and defect structure (dislocations, stacking faults, etc.) of the test sample. As a model study, we report the use of this platform to measure the thermal conductivity and defect structure of GaN nanowires. The utilization of this platform for the measurements of other nanostructures will also be discussed.
Sandia National Laboratories is collaborating with the National Research Council (NRC) Canada and the National Renewable Energy Laboratory (NREL) to develop a decision-support model that will evaluate the tradeoffs associated with high-latitude algae biofuel production co-located with wastewater, CO2, and waste heat. This project helps Canada meet its goal of diversifying fuel sources with algae-based biofuels. The biofuel production will provide a wide range of benefits including wastewater treatment, CO2 reuse, and reduction of demand for fossil-based fuels. The higher energy density in algae-based fuels gives them an advantage over crop-based biofuels as the 'production' footprint required is much less, resulting in less water consumed and little, if any, conversion of agricultural land from food to fuel production. Besides being a potential source for liquid fuel, algae have the potential to be used to generate electricity through the burning of dried biomass or to be anaerobically digested to generate methane for electricity production. Co-locating algae production with waste streams may be crucial for making algae an economically valuable fuel source, and will certainly improve its overall ecological sustainability. The modeling process will address these questions, and others that are important to the use of water for energy production: What are the locations where all resources are co-located, and what volumes of algal biomass and oil can be produced there? In locations where co-location does not occur, what resources should be transported, and how far, while maintaining economic viability? This work is being funded through the U.S. Department of Energy (DOE) Biomass Program Office of Energy Efficiency and Renewable Energy, and is part of a larger collaborative effort that includes sampling, strain isolation, strain characterization, and cultivation being performed by the NREL and Canada's NRC. Results from the NREL/NRC collaboration, including specific productivities of selected algal strains, will eventually be incorporated into this model.
Concerns over rising concentrations of greenhouse gases in the atmosphere have resulted in serious consideration of policies aimed at reduction of anthropogenic carbon dioxide (CO2) emissions. If large scale abatement efforts are undertaken, one critical tool will be geologic sequestration of CO2 captured from large point sources, specifically coal and natural gas fired power plants. Current CO2 capture technologies exact a substantial energy penalty on the source power plant, which must be offset with make-up power. Water demands increase at the source plant due to added cooling loads. In addition, new water demand is created by water requirements associated with generation of the make-up power. At the sequestration site, however, saline water may be extracted to manage CO2 plume migration and pressure buildup in the geologic formation. Thus, while CO2 capture creates new water demands, CO2 sequestration has the potential to create new supplies. Some or all of the added demand may be offset by treatment and use of the saline waters extracted from geologic formations during CO2 sequestration. Sandia National Laboratories, with guidance and support from the National Energy Technology Laboratory, is creating a model to evaluate the potential for a combined approach to saline formations, as a sink for CO2 and a source for saline waters that can be treated and beneficially reused to serve power plant water demands. This presentation will focus on the magnitude of added U.S. power plant water demand under different CO2 emissions reduction scenarios, and the portion of added demand that might be offset by saline waters extracted during the CO2 sequestration process.
Because the potential effects of climate change are more severe than had previously been thought, increasing focus on uncertainty quantification is required for risk assessment needed by policy makers. Current scientific efforts focus almost exclusively on establishing best estimates of future climate change. However, the greatest consequences occur in the extreme tail of the probability density functions for climate sensitivity (the 'high-sensitivity tail'). To this end, we are exploring the impacts of newly postulated, highly uncertain, but high-consequence physical mechanisms to better establish the climate change risk. We define consequence in terms of dramatic change in physical conditions and in the resulting socioeconomic impact (hence, risk) on populations. Although we are developing generally applicable risk assessment methods, we have focused our initial efforts on uncertainty and risk analyses for the Arctic region. Instead of focusing on best estimates, requiring many years of model parameterization development and evaluation, we are focusing on robust emergent phenomena (those that are not necessarily intuitive and are insensitive to assumptions, subgrid-parameterizations, and tunings). For many physical systems, under-resolved models fail to generate such phenomena, which only develop when model resolution is sufficiently high. Our ultimate goal is to discover the patterns of emergent climate precursors (those that cannot be predicted with lower-resolution models) that can be used as a 'sensitivity fingerprint' and make recommendations for a climate early warning system that would use satellites and sensor arrays to look for the various predicted high-sensitivity signatures. Our initial simulations are focused on the Arctic region, where underpredicted phenomena such as rapid loss of sea ice are already emerging, and because of major geopolitical implications associated with increasing Arctic accessibility to natural resources, shipping routes, and strategic locations. We anticipate that regional climate will be strongly influenced by feedbacks associated with a seasonally ice-free Arctic, but with unknown emergent phenomena.
Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure.
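As a rough one-dimensional illustration of the piecewise spectral representation described above, the sketch below projects a discontinuous response onto Legendre polynomials separately on each side of a known discontinuity location. The response function, the discontinuity location, and the orders are synthetic stand-ins; the actual methodology infers the discontinuity curve in two dimensions and averages over its posterior realizations.

```python
# Minimal 1-D sketch of piecewise Polynomial Chaos (PC) projection around a
# known discontinuity, assuming a uniform input; f and the break location
# are synthetic placeholders for the AMOC-type response.
import numpy as np
from numpy.polynomial import legendre as L

def f(x):                       # discontinuous "model response"
    return np.where(x < 0.3, np.sin(x), 2.0 + 0.5 * x)

def pc_coeffs(a, b, order=6, nquad=32):
    # Orthogonal projection onto Legendre polynomials mapped to [a, b].
    xi, w = L.leggauss(nquad)               # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)  # map quadrature nodes to [a, b]
    coeffs = []
    for k in range(order + 1):
        pk = L.legval(xi, [0] * k + [1])    # k-th Legendre polynomial
        norm = 2.0 / (2 * k + 1)            # Legendre normalization on [-1, 1]
        coeffs.append(np.sum(w * f(x) * pk) / norm)
    return np.array(coeffs)

left, right = pc_coeffs(-1.0, 0.3), pc_coeffs(0.3, 1.0)
print(left[:3], right[:3])      # low-order PC modes on each side
```

Keeping separate smooth expansions on each side avoids the Gibbs oscillations a single global expansion would suffer across the discontinuity.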
CACTUS (Code for Axial and Cross-flow TUrbine Simulation) is a turbine performance simulation code, based on a free wake vortex method, under development at Sandia National Laboratories (SNL) as part of a Department of Energy program to study marine hydrokinetic (MHK) devices. The current effort builds upon work previously done at SNL in the area of vertical axis wind turbine simulation, and aims to add models to handle generic device geometry and physical models specific to the marine environment. An overview of the current state of the project and validation effort is provided.
We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect disparate scales together, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on the inclusion proportion and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov Chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and the fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold and the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.
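The sketch below illustrates the truncated KL representation of a spatially varying inclusion proportion, which is the object of inference above. The covariance model, grid, correlation length, truncation order, and logistic mapping are illustrative assumptions, not the specific choices of the study.

```python
# Minimal sketch of a truncated Karhunen-Loeve (KL) representation of a
# spatially varying inclusion proportion, assuming a squared-exponential
# covariance; all numerical choices here are illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 100)
ell, sigma = 0.2, 1.0
C = sigma**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

vals, vecs = np.linalg.eigh(C)             # eigenpairs of the covariance
idx = np.argsort(vals)[::-1]               # sort by decreasing eigenvalue
vals, vecs = vals[idx], vecs[:, idx]

n_kl = 10                                  # truncation order
xi = np.random.standard_normal(n_kl)       # KL weights (the inference targets)
field = vecs[:, :n_kl] @ (np.sqrt(vals[:n_kl]) * xi)

# Map the Gaussian field to a proportion in (0, 1) via a logistic link
proportion = 1.0 / (1.0 + np.exp(-field))
print(proportion.min(), proportion.max())
```

In the inversion, the weights xi (and the inclusion geometry parameters) are sampled with MCMC rather than drawn at random as above.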
Subsurface containment of CO2 is predicated on effective caprock sealing. Many previous studies have relied on macroscopic measurements of capillary breakthrough pressure and other petrophysical properties without direct examination of solid phases that line pore networks and directly contact fluids. However, pore-lining phases strongly contribute to sealing behavior through interfacial interactions among CO2, brine, and the mineral or non-mineral phases. Our high resolution (i.e., sub-micron) examination of the composition of pore-lining phases of several continental and marine mudstones indicates that sealing efficiency (i.e., breakthrough pressure) is governed by pore shapes and pore-lining phases that are not identifiable except through direct characterization of pores. Bulk X-ray diffraction data does not indicate which phases line the pores and may be especially lacking for mudstones with organic material. Organics can line pores and may represent once-mobile phases that modify the wettability of an originally clay-lined pore network. For shallow formations (i.e., < ~800 m depth), interfacial tension and contact angles result in breakthrough pressures that may be as high as those needed to fracture the rock - thus, in the absence of fractures, capillary sealing efficiency is indicated. Deeper seals have poorer capillary sealing if mica-like wetting dominates the wettability. We thank the U.S. Department of Energy's National Energy Technology Laboratory and the Office of Basic Energy Sciences, and the Southeast and Southwest Carbon Sequestration Partnerships for supporting this work.
Injection of CO2 into underground rock formations can reduce atmospheric CO2 emissions. Caprocks present above potential storage formations are the main structural trap inhibiting CO2 from leaking into overlying aquifers or back to the Earth's surface. Dissolution and precipitation of caprock minerals resulting from reaction with CO2 may alter the pore network where many pores are of the micrometer to nanometer scale, thus altering the structural trapping potential of the caprock. However, the distribution, geometry and volume of pores at these scales are poorly characterized. In order to evaluate the overall risk of leakage of CO2 from storage formations, a first critical step is understanding the distribution and shape of pores in a variety of different caprocks. As the caprock is often comprised of mudstones, we analyzed samples from several mudstone formations with small angle neutron scattering (SANS) and high-resolution transmission electron microscopy (TEM) imaging to compare the pore networks. Mudstones were chosen from current or potential sites for carbon sequestration projects including the Marine Tuscaloosa Group, the Lower Tuscaloosa Group, the upper and lower shale members of the Kirtland Formation, and the Pennsylvanian Gothic shale. Expandable clay contents ranged from 10% to approximately 40% in the Gothic shale and Kirtland Formation, respectively. During SANS, neutrons effectively scatter from interfaces between materials with differing scattering length density (i.e., minerals and pores). The intensity of scattered neutrons, I(Q), where Q is the scattering vector, gives information about the volume and arrangement of pores in the sample. The slope of the scattering data when plotted as log I(Q) vs. log Q provides information about the fractality or geometry of the pore network. On such plots slopes from -2 to -3 represent mass fractals while slopes from -3 to -4 represent surface fractals. Scattering data showed surface fractal dimensions for the Kirtland formation and one sample from the Tuscaloosa formation close to 3, indicating very rough surfaces. In contrast, scattering data for the Gothic shale formation exhibited mass fractal behavior. In one sample of the Tuscaloosa formation the data are described by a surface fractal at low Q (larger pores) and a mass fractal at high Q (smaller pores), indicating two pore populations contributing to the scattering behavior. These small angle neutron scattering results, combined with high-resolution TEM imaging, provided a means for both qualitative and quantitative analysis of the differences in pore networks between these various mudstones.
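The fractal interpretation described above reduces to fitting the slope of log I(Q) versus log Q and reading off the regime. The sketch below shows that step; the synthetic intensity data stand in for measured SANS curves.

```python
# Minimal sketch of the fractal interpretation of SANS data: fit the slope of
# log I(Q) vs. log Q and classify the pore network.  The power-law I(Q) below
# is a synthetic stand-in for measured scattering intensities.
import numpy as np

Q = np.logspace(-3, -1, 50)          # scattering vector (inverse length)
I = 1e-4 * Q**-3.4                   # synthetic power-law scattering

slope, _ = np.polyfit(np.log10(Q), np.log10(I), 1)
if -3.0 < slope <= -2.0:
    regime = "mass fractal"
elif -4.0 <= slope <= -3.0:
    regime = "surface fractal (dimension Ds = 6 + slope)"
else:
    regime = "outside simple fractal regimes"
print(f"slope = {slope:.2f} -> {regime}")
```

A sample such as the Tuscaloosa one described above, with two pore populations, would show different slopes in the low-Q and high-Q portions of the curve, so the fit is applied piecewise over Q ranges.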
We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem that has structure extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, or a Lagrangian method. The Lagrangian method is necessary for solution of large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used and is using to design contamination warning systems for US municipal water systems.
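For illustration, the sketch below shows a simple greedy heuristic for the p-median sensor placement problem. It is not the integer-programming, local-search, or Lagrangian solvers used in TEVA-SPOT, and the impact matrix is synthetic.

```python
# Minimal greedy heuristic for p-median sensor placement.  impact[i][j] is
# the damage of contamination incident i if the first detecting sensor is at
# candidate location j; values here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_incidents, n_locations, p = 200, 30, 5
impact = rng.uniform(0.0, 100.0, size=(n_incidents, n_locations))

chosen = []
best_per_incident = np.full(n_incidents, impact.max())   # "undetected" impact
for _ in range(p):
    # Pick the location that most reduces the mean impact over all incidents
    resulting_means = [np.mean(np.minimum(best_per_incident, impact[:, j]))
                       for j in range(n_locations)]
    j_star = int(np.argmin(resulting_means))
    chosen.append(j_star)
    best_per_incident = np.minimum(best_per_incident, impact[:, j_star])

print("sensor locations:", chosen, "mean impact:", best_per_incident.mean())
```

Greedy placement gives a quick baseline; the exact formulations mentioned above provide optimality guarantees on the same objective.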
The acoustic field generated during a Direct Field Acoustic Test (DFAT) has been analytically modeled in two space dimensions using a properly phased distribution of propagating plane waves. Both the pure-tone and broadband acoustic field were qualitatively and quantitatively compared to a diffuse acoustic field. The modeling indicates significant non-uniformity of sound pressure level for an empty (no test article) DFAT, specifically a center peak and concentric maxima/minima rings. This spatial variation is due to the equivalent phase among all propagating plane waves at each frequency. The excitation of a simply supported slender beam immersed within the acoustic fields was also analytically modeled. Results indicate that mid-span response is dependent upon location and orientation of the beam relative to the center of the DFAT acoustic field. For a diffuse acoustic field, due to its spatial uniformity, mid-span response sensitivity to location and orientation is nonexistent.
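The equal-phase plane-wave superposition underlying the model can be illustrated directly: summing equal-amplitude plane waves with a common phase over all arrival directions produces the central peak and concentric rings noted above. The frequency, grid, and number of waves below are illustrative choices, not the parameters of the actual study.

```python
# Minimal 2-D sketch of the DFAT acoustic-field model: a superposition of
# equal-amplitude, equal-phase propagating plane waves at a single tone.
import numpy as np

c, f = 343.0, 500.0                      # sound speed (m/s), tone (Hz)
k = 2.0 * np.pi * f / c                  # acoustic wavenumber
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

x = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, x)

# Complex pressure: plane waves arriving from all directions with equal phase
P = sum(np.exp(1j * k * (X * np.cos(a) + Y * np.sin(a))) for a in angles)
spl = 20.0 * np.log10(np.abs(P) / np.abs(P).max())   # dB re field maximum

print("center-to-edge level difference (dB):", spl[100, 100] - spl[100, 0])
```

A diffuse field, by contrast, corresponds to random, uncorrelated phases among the incident waves, which washes out this spatial structure.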
Mercury intrusion porosimetry (MIP) is an often-applied technique for determining pore throat distributions and seal analysis of fine-grained rocks. Due to closure effects, potential pore collapse, and complex pore network topologies, MIP data interpretation can be ambiguous, and often biased toward smaller pores in the distribution. We apply 3D imaging techniques and lattice-Boltzmann modeling in interpreting MIP data for samples of the Cretaceous Selma Group Chalk. In the Mississippi Interior Salt Basin, the Selma Chalk is the apparent seal for oil and gas fields in the underlying Eutaw Fm., and, where unfractured, the Selma Chalk is one of the regional-scale seals identified by the Southeast Regional Carbon Sequestration Partnership for CO2 injection sites. Dual focused ion - scanning electron beam and laser scanning confocal microscopy methods are used for 3D imaging of nanometer-to-micron scale microcrack and pore distributions in the Selma Chalk. A combination of image analysis software is used to obtain geometric pore body and throat distributions and other topological properties, which are compared to MIP results. 3D data sets of pore-microfracture networks are used in Lattice Boltzmann simulations of drainage (wetting fluid displaced by non-wetting fluid via the Shan-Chen algorithm), which in turn are used to model MIP procedures. Results are used in interpreting MIP results, understanding microfracture-matrix interaction during multiphase flow, and seal analysis for underground CO2 storage.
Photovoltaic systems are often priced in $/Wp, where Wp refers to the DC power rating of the modules at Standard Test Conditions (1000 W/m², 25 °C cell temperature) and $ refers to the installed cost of the system. However, the true value of the system is in the energy it will produce in kWh, not the power rating. System energy production is a function of the system design and location, the mounting configuration, the power conversion system, and the module technology, as well as the solar resource. Even if all other variables are held constant, the annual energy yield (kWh/kWp) will vary among module technologies because of differences in response to low-light levels and temperature. Understanding energy yield is a key part of understanding system value. System performance models are used during project development to estimate the expected output of PV systems for a given design and location. Performance modeling is normally done by the system designer/system integrator. Often, an independent engineer will also model system output during a due diligence review of a project. A variety of system performance models are available. The most commonly used modeling tool for project development and due diligence in the United States is probably PVsyst, while those seeking a quick answer to expected energy production may use PVWatts. In this paper, we examine the variation in predicted energy output among modeling tools and users and compare that to measured output.
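To make the kWh/kWp notion concrete, the sketch below estimates an annual yield from hourly irradiance and cell temperature with a simple temperature coefficient and fixed derates. All inputs and factors are illustrative placeholders, not PVsyst or PVWatts algorithms.

```python
# Minimal sketch of an annual energy-yield (kWh/kWp) estimate; the hourly
# irradiance/temperature series and derate factors are synthetic.
import numpy as np

p_dc_stc = 1.0                 # kWp rating at Standard Test Conditions
gamma = -0.004                 # power temperature coefficient, 1/degC
hours = 8760
rng = np.random.default_rng(1)
poa = np.clip(rng.normal(400, 300, hours), 0, 1000)   # W/m^2 plane-of-array
t_cell = 25.0 + 0.03 * poa                            # crude cell temperature

dc_power = p_dc_stc * (poa / 1000.0) * (1 + gamma * (t_cell - 25.0))  # kW
ac_energy = np.sum(dc_power * 0.96 * 0.86)            # inverter + system derates

print("annual yield:", round(ac_energy), "kWh/kWp")
```

Differences among module technologies enter through the temperature coefficient and the low-light response, which is why yield varies even at a fixed nameplate rating.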
To-date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
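Of the three stages named above, dictionary encoding is the simplest to illustrate: each RDF term is mapped to a compact integer so that triples can be stored and joined as integer tuples. The sketch below is a serial Python illustration of the idea, not the parallel shared-memory hash table used on the XMT.

```python
# Minimal sketch of dictionary encoding for RDF triples: map each term to an
# integer id and store triples as integer tuples.  The triples are toy data.
triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "rdf:type",   "foaf:Person"),
    ("ex:alice", "rdf:type",   "foaf:Person"),
]

term_to_id, id_to_term, encoded = {}, [], []
for s, p, o in triples:
    ids = []
    for term in (s, p, o):
        if term not in term_to_id:          # assign the next integer id
            term_to_id[term] = len(id_to_term)
            id_to_term.append(term)
        ids.append(term_to_id[term])
    encoded.append(tuple(ids))

print(encoded)          # triples as compact integer tuples
print(id_to_term[0])    # decode an id back to the original term
```

At the scale of tens of billions of triples, the encoding step itself is a large parallel computation, which is one place the XMT's latency-tolerant, global shared-memory design pays off.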
Dissolved CO2 from geological CO2 sequestration may react with dissolved minerals in fractured rocks or confined aquifers and cause mineral precipitation. The overall rate of reaction can be limited by diffusive or dispersive mixing, and mineral precipitation can block pores and further hinder these processes. Mixing-induced calcite precipitation experiments were performed by injecting solutions containing CaCl2 and Na2CO3 through two separate inlets of a micromodel (1-cm x 2-cm x 40-microns); transverse dispersion caused the two solutions to mix along the center of the micromodel, resulting in calcite precipitation. The amount of calcite precipitation initially increased to a maximum and then decreased to a steady state value. Fluorescent microscopy and imaging techniques were used to visualize calcite precipitation, and the corresponding effects on the flow field. Experimental micromodel results were evaluated with pore-scale simulations using a 2-D Lattice-Boltzmann code for water flow and a finite volume code for reactive transport. The reactive transport model included the impact of pH upon carbonate speciation and calcite dissolution. We found that proper estimation of the effective diffusion coefficient and the reaction surface area is necessary to adequately simulate precipitation and dissolution rates. The effective diffusion coefficient was decreased in grid cells where calcite precipitated, and keeping track of reactive surface over time played a significant role in predicting reaction patterns. Our results may improve understanding of the fundamental physicochemical processes during CO2 sequestration in geologic formations.
Multi-scale binary permeability field estimation from static and dynamic data is completed using Markov Chain Monte Carlo (MCMC) sampling. The binary permeability field is defined as high permeability inclusions within a lower permeability matrix. Static data are obtained as measurements of permeability with support consistent with the coarse scale discretization. Dynamic data are advective travel times along streamlines calculated through a fine-scale field and averaged for each observation point at the coarse scale. Parameters estimated at the coarse scale (30 x 20 grid) are the spatially varying proportion of the high permeability phase and the inclusion length and aspect ratio of the high permeability inclusions. From the non-parametric posterior distributions estimated for these parameters, a recently developed sub-grid algorithm is employed to create an ensemble of realizations representing the fine-scale (3000 x 2000), binary permeability field. Each fine-scale ensemble member is instantiated by convolution of an uncorrelated multi-Gaussian random field with a Gaussian kernel defined by the estimated inclusion length and aspect ratio. Since the multi-Gaussian random field is itself a realization of a stochastic process, the procedure for generating fine-scale binary permeability field realizations is also stochastic. Two different methods are hypothesized to perform posterior predictive tests, and different mechanisms for combining multi-Gaussian random fields with kernels defined from the MCMC sampling are examined. Posterior predictive accuracy of the estimated parameters is assessed against a simulated ground truth for predictions at both the coarse scale (effective permeabilities) and at the fine scale (advective travel time distributions). The two techniques for conducting posterior predictive tests are compared by their ability to recover the static and dynamic data. The skill of the inference and the method for generating fine-scale binary permeability fields are evaluated through flow calculations on the resulting fields using fine-scale realizations and comparing them against results obtained with the ground truth fine-scale and coarse-scale permeability fields.
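The realization step described above can be sketched compactly: convolve an uncorrelated Gaussian field with an anisotropic Gaussian kernel and truncate at a level that reproduces the target inclusion proportion. The grid size, kernel lengths, proportion, and permeability values below are illustrative, not the study's estimates.

```python
# Minimal sketch of the sub-grid realization step: smooth white noise with an
# anisotropic Gaussian kernel, then truncate to a binary inclusion field.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
white = rng.standard_normal((300, 200))

# Kernel defined by inclusion length and aspect ratio (in grid cells)
length, aspect = 8.0, 3.0
smooth = gaussian_filter(white, sigma=(length, length / aspect))

proportion = 0.25                                   # high-perm phase fraction
threshold = np.quantile(smooth, 1.0 - proportion)   # truncation level
binary = (smooth >= threshold).astype(int)          # 1 = inclusion, 0 = matrix

k_field = np.where(binary == 1, 1e-12, 1e-15)       # illustrative permeabilities
print("realized proportion:", binary.mean())
```

Because the white-noise seed changes between draws, repeated application yields an ensemble of fine-scale fields consistent with one set of inferred kernel parameters.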
We present advances with a 32-element scalable, segmented dual-mode imager. Scaling up the number of cells results in a factor-of-1.4 increase in efficiency over a system we deployed last year. Variable plane separation has been incorporated, which further improves the efficiency of the detector. By using 20 cm diameter cells we demonstrate that we could increase sensitivity by a factor of 6. We further demonstrate gamma-ray imaging from Compton scattering. This feature allows for powerful dual-mode imaging. Selected results are presented that demonstrate these new capabilities.
The neutron scatter camera was originally developed for a range of SNM detection applications. We are now exploring the feasibility of applications in treaty verification and warhead monitoring using experimentation, maximum likelihood estimation method (MLEM), detector optimization, and MCNP-PoliMi simulations.
A rational approach was used to design polymeric materials for thin-film electronics applications, whereby theoretical modeling was used to determine synthetic targets. Time-dependent density functional theory calculations were used as a tool to predict the electrical properties of conjugated polymer systems. From these results, polymers with desirable energy levels and band-gaps were designed and synthesized. Measurements of optoelectronic properties were performed on the synthesized polymers and the results were compared to those of the theoretical model. From this work, the efficacy of the model was evaluated and new target polymers were identified.
Decisions made to address climate change must start with an understanding of the risk of an uncertain future to human systems, which in turn means understanding both the consequence as well as the probability of a climate induced impact occurring. In other words, addressing climate change is an exercise in risk-informed policy making, which implies that there is no single correct answer or even a way to be certain about a single answer; the uncertainty in future climate conditions will always be present and must be taken as a working-condition for decision making. In order to better understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions, this study estimates the impacts from responses to climate change on U.S. state- and national-level economic activity by employing a risk-assessment methodology for evaluating uncertain future climatic conditions. Using the results from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for climate uncertainty, changes in hydrology over the next 40 years were mapped and then modeled to determine the physical consequences on economic activity and to perform a detailed 70-industry analysis of the economic impacts among the interacting lower-48 states. The analysis determines industry-level effects, employment impacts at the state level, interstate population migration, consequences to personal income, and ramifications for the U.S. trade balance. The conclusions show that the average risk of damage to the U.S. economy from climate change is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs. Further analysis shows that an increase in uncertainty raises this risk. This paper will present the methodology behind the approach, a summary of the underlying models, as well as the path forward for improving the approach.
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
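For reference, the Katz score between a pair of nodes is the damped sum of walk counts, sum over k of alpha^k (A^k)_ij. The sketch below computes this quantity by truncating the series on a toy graph; it is not the Lanczos/quadrature bounding algorithm proposed in the paper, just the measure it bounds.

```python
# Minimal sketch of a pairwise Katz score via a truncated series
# sum_k alpha^k (A^k)_ij on a small toy adjacency matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
alpha = 0.1                      # must be < 1 / largest eigenvalue of A
i, j = 0, 3

v = np.zeros(A.shape[0]); v[i] = 1.0
katz, term = 0.0, v
for _ in range(50):              # truncated series; converges geometrically
    term = alpha * (A @ term)    # term now holds alpha^k * (A^k) e_i
    katz += term[j]

print(f"Katz score between nodes {i} and {j}: {katz:.4f}")
```

The appeal of the pairwise and top-k formulations above is precisely that such quantities can be bounded or approximated while touching only a small part of a large graph, instead of forming all pairwise scores.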
We describe an integrated information management system for an independent spent fuel dry-storage installation (ISFSI) that can provide for (1) secure and authenticated data collection, (2) data analysis, (3) dissemination of information to appropriate stakeholders via a secure network, and (4) increased public confidence and support of the facility licensing and operation through increased transparency. This information management system is part of a collaborative project between Sandia National Laboratories, Taiwan Power Co., and the Fuel Cycle Materials Administration of Taiwan's Atomic Energy Council, which is investigating how to implement this concept.
A novel multiphase shock tube has been constructed to test the interaction of a planar shock wave with a dense gas-solid field of particles. The particle field is generated by a gravity-fed method that results in a spanwise curtain of 100-micron particles producing a volume fraction of about 15%. Interactions with incident shock Mach numbers of 1.67 and 1.95 are reported. High-speed schlieren imaging is used to reveal the complex wave structure associated with the interaction. After the impingement of the incident shock, transmitted and reflected shocks are observed, which lead to differences in flow properties across the streamwise dimension of the curtain. Tens of microseconds after the onset of the interaction, the particle field begins to propagate downstream, and disperse. The spread of the particle field, as a function of its position, is seen to be nearly identical for both Mach numbers. Immediately downstream of the curtain, the peak pressures associated with the Mach 1.67 and 1.95 interactions are about 35% and 45% greater than tests without particles, respectively. For both Mach numbers tested, the energy and momentum fluxes in the induced flow far downstream are reduced by about 30-40% by the presence of the particle field.
The very high temperature reactor (VHTR) concept is being developed by the US Department of Energy (DOE) and other groups around the world for the future generation of electricity at high thermal efficiency (> 48%) and co-generation of hydrogen and process heat. This Generation-IV reactor would operate at elevated exit temperatures of 1,000-1,273 K, and the fueled core would be cooled by forced convection helium gas. For the prismatic-core VHTR, which is the focus of this analysis, the velocity of the hot helium flow exiting the core into the lower plenum (LP) could be 35-70 m/s. The impingement of the resulting gas jets onto the adiabatic plate at the bottom of the LP could develop hot spots, thermal stratification, and inadequate mixing of the gas exiting the vessel to the turbo-machinery for energy conversion. The complex flow field in the LP is further complicated by the presence of large cylindrical graphite posts that support the massive core and inner and outer graphite reflectors. Because there are approximately 276 channels in the VHTR core from which helium exits into the LP and a total of 155 support posts, the flow field in the LP includes cross flow, multiple jet flow interaction, flow stagnation zones, vortex interaction, vortex shedding, entrainment, large variation in Reynolds number (Re), recirculation, and mixing enhancement and suppression regions. For such a complex flow field, experimental results at operating conditions are not currently available. Instead, the objective of this paper is to numerically simulate the flow field in the LP of a prismatic-core VHTR using Sandia National Laboratories' Fuego, a 3D, massively parallel generalized computational fluid dynamics (CFD) code with numerous turbulence and buoyancy models and simulation capabilities for complex gas flow fields, with and without thermal effects. The code predictions for simpler flow fields of single and swirling gas jets, with and without a cross flow, are validated using reported experimental data and theory. The key processes in the LP are identified using a phenomena identification and ranking table (PIRT). It may be argued that a CFD code that accurately simulates simplified, single-effect flow fields with increasing complexity is likely to adequately model the complex flow field in the VHTR LP, subject to a future experimental validation. The PIRT process and the spatial and temporal discretizations implemented in the present analysis using Fuego established confidence in the validation and verification (V&V) calculations and in the conclusions reached based on the simulation results. The performed calculations included the helicoid vortex swirl model, the dynamic Smagorinsky large eddy simulation (LES) turbulence model, participating media radiation (PMR), and 1D conjugate heat transfer (CHT). The full-scale, half-symmetry LP mesh used in the LP simulation included unstructured hexahedral elements and accounted for the graphite posts, the helium jets, the exterior walls, and the bottom plate with an adiabatic outer surface. Results indicated significant enhancements in heat transfer, flow mixing, and entrainment in the VHTR LP when using swirling inserts at the exit of the helium flow channels into the LP. The impact of using various swirl angles on the flow mixing and heat transfer in the LP is quantified, including the formation of the central recirculation zone (CRZ) and the effect of LP height.
Results also showed that in addition to the enhanced mixing, the swirling inserts result in negligible additional pressure losses and are likely to eliminate the formation of hot spots.
The development of turbulent spots in a hypersonic boundary layer was studied on the nozzle wall of the Boeing/AFOSR Mach-6 Quiet Tunnel. Under quiet flow conditions, the nozzle wall boundary layer remains laminar and grows very thick over the long nozzle length. This allows the development of large turbulent spots that can be readily measured with pressure transducers. Measurements of naturally occurring wave packets and developing turbulent spots were made. The peak frequencies of these natural wave packets were in agreement with second-mode computations. For a controlled study, the breakdown of disturbances created by spark and glow perturbations were studied at similar freestream conditions. The spark perturbations were the most effective at creating large wave packets that broke down into turbulent spots. The flow disturbances created by the controlled perturbations were analyzed to obtain amplitude criteria for nonlinearity and breakdown as well as the convection velocities of the turbulent spots. Disturbances first grew into linear instability waves and then quickly became nonlinear. Throughout the nonlinear growth of the wave packets, large harmonics are visible in the power spectra. As breakdown begins, the peak amplitudes of the instability waves and harmonics decrease into the rising broad-band frequencies. Instability waves are still visible on either side of the growing turbulent spots during this breakdown process.
In recent years there has been an unstable supply of the critical diagnostic medical isotope 99mTc. Several concepts and designs have been proposed to produce 99Mo, the parent nuclide of 99mTc, at a commercial scale sufficient to stabilize the world supply. This work lays out a testing and experiment plan for a proposed 2 MW open pool reactor fueled by Low Enriched Uranium (LEU) 99Mo targets. The experiments and tests necessary to support licensing of the reactor design are described, and how these experiments and tests will help establish the safe operating envelope for a medical isotope production reactor is discussed. The experiments and tests will facilitate a focused and efficient licensing process in order to bring on line a needed production reactor dedicated to supplying medical isotopes. The Target Fuel Isotope Reactor (TFIR) design calls for an active core region that is approximately 40 cm in diameter and 40 cm in fuel height. It contains up to 150 cylindrical, 1-cm diameter, LEU oxide fuel pins clad with Zircaloy (zirconium alloy), in an annular hexagonal array on a ~2.0 cm pitch surrounded, radially, by a graphite or a Be reflector. The reactor is similar to U.S. university reactors in power, hardware, and safety/control systems. Fuel/target pin fabrication is based on existing light water reactor fuel fabrication processes. However, as part of the licensing process, experiments must be conducted to confirm analytical predictions of steady-state power and accident conditions. The experiment and test plan will be conducted in phases and will utilize existing facilities at the U.S. Department of Energy's Sandia National Laboratories. The first phase is to validate the predicted reactor core neutronics at delayed critical, zero power, and very low power. This will be accomplished by using the Sandia Critical Experiment (CX) platform. A full-scale TFIR core will be built in the CX and delayed critical measurements will be taken. For low power experiments, fuel pins can be removed after the experiment and, using Sandia's metrology lab, relative power profiles (radially and axially) can be determined. In addition to validating neutronic analyses, experiments confirming the heat transfer properties of the target/fuel pins and core will be conducted. Fuel/target pin power limits can be verified with out-of-pile (electrical heating) thermal-hydraulic experiments. This will yield data on the heat flux across the Zircaloy clad and establish safety margins and operating limits. Using Sandia's Annular Core Research Reactor (ACRR), a 4 MW TRIGA-type research reactor, target/fuel pins can be driven to desired fission power levels for long durations. Post-experiment inspection of the pins can be conducted in the Auxiliary Hot Cell Facility to observe changes in the mechanical properties of the LEU matrix and burn-up effects. Transient tests can also be conducted at the ACRR to observe target/fuel pin performance during accident conditions. Target/fuel pins will be placed in double experiment containment and driven by pulsing the ACRR until target/fuel failure is observed. This will allow for extrapolation of analytical work to confirm safety margins.
Although planar heterostructures dominate current solid-state lighting (SSL) architectures, 1D nanowires have distinct and advantageous properties that may eventually enable higher efficiency, longer wavelength, and cheaper devices. However, in order to fully realize the potential of nanowire-based SSL, several challenges exist in the areas of controlled nanowire synthesis, nanowire device integration, and understanding and controlling the nanowire electrical, optical, and thermal properties. Here, recent results are reported regarding the aligned growth of GaN and III-nitride core-shell nanowires, along with extensive results providing insights into the nanowire properties obtained using cutting-edge structural, electrical, thermal, and optical nanocharacterization techniques. A new top-down fabrication method for fabricating periodic arrays of GaN nanorods and subsequent nanorod LED fabrication is also presented.
Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.
A conceptual structure for performance assessments (PAs) for radioactive waste disposal facilities and other complex engineered facilities based on the following three basic conceptual entities is described: EN1, a probability space that characterizes aleatory uncertainty; EN2, a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and EN3, a probability space that characterizes epistemic uncertainty. The implementation of this structure is illustrated with results from PAs for the Waste Isolation Pilot Plant and the proposed Yucca Mountain repository for high-level radioactive waste.
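One common computational realization of the EN1/EN2/EN3 structure is a nested ("double loop") Monte Carlo calculation: an outer loop over epistemic parameter vectors and an inner loop over aleatory futures, with a deterministic consequence model connecting them. The sketch below is a toy illustration under that reading; the distributions and the consequence function are hypothetical placeholders, not the WIPP or Yucca Mountain models.

```python
# Minimal sketch of the EN1/EN2/EN3 structure as nested Monte Carlo loops:
# outer loop = epistemic uncertainty (EN3), inner loop = aleatory futures (EN1),
# consequence() = the deterministic consequence function (EN2).
import numpy as np

rng = np.random.default_rng(7)

def consequence(aleatory_event, epistemic_params):
    # EN2: deterministic consequence model for one aleatory future
    rate, severity = aleatory_event
    return epistemic_params["transport_factor"] * rate * severity

n_epistemic, n_aleatory = 50, 1000
expected_consequences = []
for _ in range(n_epistemic):                      # EN3 sample
    theta = {"transport_factor": rng.lognormal(mean=0.0, sigma=0.5)}
    events = np.column_stack([rng.poisson(2.0, n_aleatory),      # EN1 sample
                              rng.exponential(1.0, n_aleatory)])
    expected_consequences.append(
        np.mean([consequence(e, theta) for e in events]))

# The spread across outer-loop results expresses epistemic uncertainty about
# the aleatory expected consequence.
print(np.percentile(expected_consequences, [5, 50, 95]))
```

Keeping the two probability spaces in separate loops is what allows a PA to report epistemic uncertainty about quantities that are themselves expectations over aleatory futures.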
In this work, we describe a novel design for an H2SO4 decomposer. The decomposition of H2SO4 to produce SO2 is a common processing operation in the sulfur-based thermochemical cycles for hydrogen production, where acid decomposition takes place at 850°C in the presence of a catalyst. The combination of high temperature and sulfuric acid creates a very corrosive environment that presents significant design challenges. The new decomposer design is based on a bayonet-type heat exchanger tube with the annular space packed with a catalyst. The unit is constructed of silicon carbide and other highly corrosion resistant materials. The new design integrates acid boiling, superheating, decomposition and heat recuperation into a single process and eliminates problems of corrosion and failure of high temperature seals encountered in previous testing using metallic construction materials. The unit was tested by varying the acid feed rate and decomposition temperature and pressure.
This report presents the results of an aging experiment that was established in FY09 and completed in FY10 for the Sandia MEMS Passive Shock Sensor. A total of 37 packages were aged at different temperatures and times, and were then tested after aging to determine functionality. Aging temperatures were selected at 100 °C and 150 °C, with times ranging from as short as 100 hours to as long as 1 year to simulate a predicted aging of up to 20 years. In all of the tests and controls, 100% of the devices continued to function normally.
This report presents an analysis of trends in fatigue results from the Montana State University program on the fatigue of composite materials for wind turbine blades for the period 2005-2009. Test data can be found in the SNL/MSU/DOE Fatigue of Composite Materials Database, which is updated annually. This is the fifth report in this series, which summarizes progress of the overall program since its inception in 1989. The primary thrust of this program has been research and testing of a broad range of structural laminate materials of interest to blade structures. The report is focused on current types of infused and prepreg blade materials, either processed in-house or by industry partners. Trends in static and fatigue performance are analyzed for a range of materials, geometries, and loading conditions. Materials include: sixteen resins of three general types, five epoxy-based paste adhesives, fifteen reinforcing fabrics including three fiber types, three prepregs, and many laminate lay-ups and process variations. Significant differences in static and fatigue performance and delamination resistance are quantified for particular materials and process conditions. When blades do fail, the likely cause is fatigue in the structural detail areas or at major flaws. The program is focused strongly on these issues in addition to standard laminates. Structural detail tests allow evaluation of various blade materials options in the context of more realistic representations of blade structure than do the standard test methods. Types of structural details addressed in this report include ply drops used in thickness tapering, and adhesive joints, each tested over a range of fatigue loading conditions. Ply drop studies were in two areas: (1) a combined experimental and finite element study of basic ply drop delamination parameters for glass and carbon prepreg laminates, and (2) the development of a complex structured resin-infused coupon including ply drops, for comparison studies of various resins, fabrics, and ply drop thicknesses. Adhesive joint tests using typical blade adhesives included both generic testing of materials parameters using a notched-lap-shear test geometry developed in this study, and also a series of simulated blade web joint geometries fabricated by an industry partner.
The purpose of this report is to describe the methods commonly used to measure heat flux in fire applications at Sandia National Laboratories in both hydrocarbon (JP-8 jet fuel, diesel fuel, etc.) and propellant fires. Because these environments are very severe, many commercially available heat flux gauges do not survive the test, so alternative methods had to be developed. Specially built sensors include 'calorimeters' that use a temperature measurement to infer heat flux by use of a model (heat balance on the sensing surface) or by using an inverse heat conduction method. These specialty-built sensors are made rugged so they will survive the environment, and are therefore not optimally designed for ease of use or accuracy. Other methods include radiometers, co-axial thermocouples, directional flame thermometers (DFTs), Sandia 'heat flux gauges', transpiration radiometers, and transverse Seebeck coefficient heat flux gauges. Typical applications are described and pros and cons of each method are listed.
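The simplest version of the heat-balance idea is a lumped ("thin-skin") calorimeter, where the absorbed flux follows directly from the temperature history as q = rho*c*delta*dT/dt. The sketch below illustrates that calculation with illustrative material properties and a synthetic temperature trace; real gauges also account for losses and may require full inverse heat conduction methods.

```python
# Minimal sketch of inferring heat flux from a calorimeter temperature trace
# with a lumped (thin-skin) energy balance, q = rho*c*delta*dT/dt.
# Properties and the temperature record are illustrative, not measured data.
import numpy as np

rho, c, delta = 7900.0, 500.0, 1.6e-3      # density, heat capacity, thickness
t = np.linspace(0.0, 60.0, 601)            # time, s
T = 300.0 + 400.0 * (1 - np.exp(-t / 30))  # synthetic surface temperature, K

dTdt = np.gradient(T, t)
q = rho * c * delta * dTdt                 # absorbed heat flux, W/m^2

print("peak inferred heat flux: %.1f kW/m^2" % (q.max() / 1e3))
```

When conduction into the gauge body is significant, the same temperature data are instead fed to an inverse heat conduction solver, as noted above.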
For the U.S. Nuclear Regulatory Commission (NRC) Extremely Low Probability of Rupture (xLPR) pilot study, Sandia National Laboratories (SNL) was tasked to develop and evaluate a probabilistic framework using a commercial software package for Version 1.0 of the xLPR Code. Version 1.0 of the xLPR code is focused on assessing the probability of rupture due to primary water stress corrosion cracking in dissimilar metal welds in pressurizer surge nozzles. Future versions of this framework will expand the capabilities to other cracking mechanisms and other piping systems for both pressurized water reactors and boiling water reactors. The goal of the pilot study project is to plan the xLPR framework transition from Version 1.0 to Version 2.0; hence the initial Version 1.0 framework and code development will be used to define the requirements for Version 2.0. The software documented in this report has been developed and tested solely for this purpose. This framework and demonstration problem will be used to evaluate the commercial software's capabilities and applicability for use in creating the final version of the xLPR framework. This report details the design, system requirements, and the steps necessary to use the commercial-code based xLPR framework developed by SNL.
Sandia National Laboratories (SNL) conducts pioneering research and development in Micro-Electro-Mechanical Systems (MEMS) and solar cells. This dissertation project combines these two areas to create ultra-thin, small-form-factor crystalline silicon (c-Si) solar cells. These miniature solar cells create a new class of photovoltaics with potentially novel applications and benefits such as dramatic reductions in cost, weight, and material usage. At the beginning of the project, unusually low efficiencies were obtained in the research group. The intention of this research was thus to investigate the main causes of the low efficiencies through simulation, design, fabrication, and characterization. Commercial simulation tools were used to find the main causes of low efficiency. Once the causes were identified, the results were used to create improved designs and build new devices. In the simulations, parameters were varied to see their effect on performance. The researched parameters were: resistance, wafer lifetime, contact separation, implant characteristics (size, dosage, energy, ratio between the species), contact size, substrate thickness, surface recombination, and light concentration. Of these parameters, high-quality surface passivation was revealed to be the most important for obtaining higher-performing cells. Therefore, several approaches for enhancing the passivation were tried, characterized, and tested on cells. In addition, a methodology was created to contact and test the performance of all the cells presented in the dissertation under calibrated light. Also, next-generation cells that could incorporate all the optimized layers, including the passivation, were designed, built, and tested. In conclusion, through this investigation, solar cells that incorporate optimized designs and passivation schemes for ultra-thin cells were created for the first time. Through the application of the methods discussed in this document, the efficiency of the solar cells increased from below 1% to 15% in Microsystems Enabled Photovoltaic (MEPV) devices.
We have developed a high sensitivity (<5 fTesla/{radical}Hz), fiber-optically coupled magnetometer to detect magnetic fields produced by the human brain. This is the first demonstration of a noncryogenic sensor that could replace cryogenic superconducting quantum interference device (SQUID) magnetometers in magnetoencephalography (MEG) and is an important advance in realizing cost-effective MEG. Within the sensor, a rubidium vapor is optically pumped with 795 nm laser light while field-induced optical rotations are measured with 780 nm laser light. Both beams share a single optical axis to maximize simplicity and compactness. In collaboration with neuroscientists at The Mind Research Network in Albuquerque, NM, the evoked responses resulting from median nerve and auditory stimulation were recorded with the atomic magnetometer and with a commercial SQUID-based MEG system, and the signals compare favorably. Multi-sensor operation has been demonstrated with two atomic magnetometers placed on opposite sides of the head. Straightforward miniaturization would enable high-density sensor arrays for whole-head magnetoencephalography.
Sandia National Laboratories (SNL) participated in a Pilot Study to examine the process and requirements to create a software system to assess the extremely low probability of pipe rupture (xLPR) in nuclear power plants. This project was tasked to develop a prototype xLPR model leveraging existing fracture mechanics models and codes coupled with a commercial software framework to determine the framework, model, and architecture requirements appropriate for building a modular-based code. The xLPR pilot study was conducted to demonstrate the feasibility of the proposed developmental process and framework for a probabilistic code to address degradation mechanisms in piping system safety assessments. The pilot study includes a demonstration problem to assess the probability of rupture of dissimilar metal (DM) pressurizer surge nozzle welds degraded by primary water stress-corrosion cracking (PWSCC). The pilot study was designed to define and develop the framework and model, then construct a prototype software system based on the proposed model. The second phase of the project will be a longer term program and code development effort focusing on generic, primary piping integrity issues (the xLPR code). The results and recommendations presented in this report will be used to help the U.S. Nuclear Regulatory Commission (NRC) define the requirements for the longer term program.
The risk assessment approach has been applied to support numerous radioactive waste management activities over the last 30 years. A risk assessment methodology provides a solid and readily adaptable framework for evaluating the risks of CO2 sequestration in geologic formations in order to prioritize research, data collection, and monitoring schemes. This paper reviews the tasks of a risk assessment and provides a few examples related to each task. The paper then describes an application of sensitivity analysis to identify important parameters for reducing the uncertainty in the performance of a geologic repository for radioactive waste, which, because of the importance of the geologic barrier, is similar to CO2 sequestration. The paper ends with a simple stochastic analysis of an idealized CO2 sequestration site with a leaking abandoned well and a set of monitoring wells in an aquifer above the CO2 sequestration unit, in order to evaluate the efficacy of the monitoring wells for detecting adverse leakage.
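The flavor of this kind of monitoring-efficacy question can be illustrated with a toy Monte Carlo calculation; the domain geometry, plume size, and well layout below are hypothetical stand-ins, not the site model used in the paper.

import numpy as np

def detection_probability(n_wells, plume_radius, domain=1000.0, n_trials=10000, seed=0):
    """Toy Monte Carlo of monitoring efficacy: a leaking abandoned well occurs
    at a random location in a square aquifer domain, and the leak is 'detected'
    if its plume (a circle of plume_radius) reaches any of n_wells monitoring
    wells placed on a regular grid. Purely illustrative geometry and parameters."""
    rng = np.random.default_rng(seed)
    side = int(np.ceil(np.sqrt(n_wells)))
    xs = np.linspace(domain / (2 * side), domain - domain / (2 * side), side)
    wells = np.array([(x, y) for x in xs for y in xs])[:n_wells]
    leaks = rng.uniform(0.0, domain, size=(n_trials, 2))
    dists = np.linalg.norm(leaks[:, None, :] - wells[None, :, :], axis=2)
    return np.mean(dists.min(axis=1) <= plume_radius)

for n in (4, 9, 25):
    print(n, detection_probability(n, plume_radius=100.0))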
Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search can't ever seem to find anything useful or relevant! The challenge is to provide results that are ranked according to the users' definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences and re-rank results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the users' search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
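A minimal sketch of the underlying idea, blending an engine relevance score with historical click preferences for the same query, is shown below; the click log, weighting scheme, and document identifiers are illustrative assumptions, not the production model.

from collections import Counter

# Hypothetical historical click log: (query, clicked_document_id) pairs
click_log = [("travel form", "doc7"), ("travel form", "doc7"),
             ("travel form", "doc2"), ("timecard", "doc9")]
clicks = Counter(click_log)

def rerank(query, engine_results, weight=0.5):
    """Blend the engine's relevance score with a click-preference score so
    documents users actually chose for this query move toward the top."""
    max_clicks = max([clicks[(query, d)] for d, _ in engine_results] + [1])
    def blended(item):
        doc_id, engine_score = item
        click_score = clicks[(query, doc_id)] / max_clicks
        return (1 - weight) * engine_score + weight * click_score
    return sorted(engine_results, key=blended, reverse=True)

# Engine returns (doc_id, normalized score); clicks promote doc7 for this query
print(rerank("travel form", [("doc2", 0.9), ("doc7", 0.8), ("doc9", 0.7)]))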
Complex metal hydrides continue to be investigated as solid-state materials for hydrogen storage. Traditional interstitial metal hydrides offer favorable thermodynamics and kinetics for hydrogen release but do not meet energy density requirements. Anionic metal hydrides and complex metal hydrides such as magnesium borohydride have higher energy densities than interstitial metal hydrides, but poor kinetics and/or thermodynamically unfavorable side products limit their deployment as hydrogen storage materials in transportation applications. Main-group anionic materials such as the bis(borane)hypophosphite salt [PH2(BH3)2] have been known for decades, but only recently have we begun to explore their ability to release hydrogen. We have developed a new procedure for synthesizing the lithium and sodium hypophosphite salts. Routes for accessing other metal bis(borane)hypophosphite salts will be discussed. A significant advantage of this class of material is the air and water stability of the anion. Compared to metal borohydrides, which react violently with water, these phosphorus-based salts can be dissolved in protic solvents, including water, with little to no decomposition over the course of multiple days. The ability of these salts to release hydrogen upon heating has been assessed. While preliminary results indicate phosphine and boron-containing species are released, hydrogen is also a major component of the volatile species observed during the thermal decomposition. Additives such as NaH or KH mixed with the sodium salt Na[PH2(BH3)2] significantly perturb the decomposition reaction and greatly increase the mass loss as determined by thermal gravimetric analysis (TGA). This symbiotic behavior has the potential to affect the hydrogen storage ability of bis(borane)hypophosphite salts.
A design concept, device layout, and monolithic microfabrication processing sequence have been developed for a dual-metal layer atom chip for next-generation positional control of ultracold ensembles of trapped atoms. Atom chips are intriguing systems for precision metrology and quantum information that use ultracold atoms on microfabricated chips. Using magnetic fields generated by current carrying wires, atoms are confined via the Zeeman effect and controllably positioned near optical resonators. Current state-of-the-art atom chips are single-layer or hybrid-integrated multilayer devices with limited flexibility and repeatability. An attractive feature of multi-level metallization is the ability to construct more complicated conductor patterns and thereby realize the complex magnetic potentials necessary for the more precise spatial and temporal control of atoms that is required. Here, we have designed a true, monolithically integrated, planarized, multi-metal-layer atom chip for demonstrating crossed-wire conductor patterns that trap and controllably transport atoms across the chip surface to targets of interest.
Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin Hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N (larger is better); their RMS distance to the ensemble's center, d (0.5553 is optimal); their average radius, {mu}(r) (1 is optimal); and their radius standard deviation, {sigma}(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
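A minimal Python sketch of how these four metrics can be computed for a candidate Pareto set follows; the sampled point set is only a stand-in for an actual MOGA ensemble.

import numpy as np

def pareto_set_metrics(points):
    """Metrics used to judge a candidate Pareto set against the analytic
    6-D hypersphere test problem: point count N, RMS distance d of points to
    the ensemble centroid, and the mean and standard deviation of the point
    radii measured from the origin (ideally 1 and 0 on the unit sphere)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1)))
    r = np.linalg.norm(pts, axis=1)
    return {"N": len(pts), "d": d, "mu_r": r.mean(), "sigma_r": r.std()}

# Illustrative check with points sampled on the unit sphere in the [0,1] octant
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=(500, 6)))
x /= np.linalg.norm(x, axis=1, keepdims=True)
print(pareto_set_metrics(x))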
Disposal of high-level radioactive waste, including spent nuclear fuel, in deep (3 to 5 km) boreholes is a potential option for safely isolating these wastes from the surface and near-surface environment. Existing drilling technology permits reliable and cost-effective construction of such deep boreholes. Conditions favorable for deep borehole disposal in crystalline basement rocks, including low permeability, high salinity, and geochemically reducing conditions, exist at depth in many locations, particularly in geologically stable continental regions. Isolation of waste depends, in part, on the effectiveness of borehole seals and potential alteration of permeability in the disturbed host rock surrounding the borehole. Coupled thermal-mechanical-hydrologic processes induced by heat from the radioactive waste may impact the disturbed zone near the borehole and borehole wall stability. Numerical simulations of the coupled thermal-mechanical response in the host rock surrounding the borehole were conducted with three software codes or combinations of software codes. Software codes used in the simulations were FEHM, JAS3D, Aria, and Adagio. Simulations were conducted for disposal of spent nuclear fuel assemblies and for the higher heat output of vitrified waste from the reprocessing of fuel. Simulations were also conducted for both isotropic and anisotropic ambient horizontal stress in the host rock. Physical, thermal, and mechanical properties representative of granite host rock at a depth of 4 km were used in the models. Simulation results indicate peak temperature increases at the borehole wall of about 30 C and 180 C for disposal of fuel assemblies and vitrified waste, respectively. Peak temperatures near the borehole occur within about 10 years and decline rapidly within a few hundred years and with distance. The host rock near the borehole is placed under additional compression. Peak mechanical stress is increased by about 15 MPa (above the assumed ambient isotropic stress of 100 MPa) at the borehole wall for the disposal of fuel assemblies and by about 90 MPa for vitrified waste. Simulated peak volumetric strain at the borehole wall is about 420 and 2600 microstrain for the disposal of fuel assemblies and vitrified waste, respectively. Stress and volumetric strain decline rapidly with distance from the borehole and with time. Simulated peak stress at and parallel to the borehole wall for the disposal of vitrified waste with anisotropic ambient horizontal stress is about 440 MPa, which likely exceeds the compressive strength of granite if unconfined by fluid pressure within the borehole. The relatively small simulated displacements and volumetric strain near the borehole suggest that software codes using a nondeforming grid provide an adequate approximation of mechanical deformation in the coupled thermal-mechanical model. Additional modeling is planned to incorporate the effects of hydrologic processes coupled to thermal transport and mechanical deformation in the host rock near the heated borehole.
The controlled self-assembly of polymer thin-films into ordered domains has attracted significant academic and industrial interest. Most work has focused on controlling domain size and morphology through modification of the polymer block lengths, n, and the Flory-Huggins interaction parameter, {chi}. Models such as Self-Consistent Field Theory (SCFT) have been successful in describing the experimentally observed morphology of phase-separated polymers. We have developed a computational method that uses SCFT calculations as a predictive tool to guide our polymer synthesis. Armed with this capability, we can select {chi} and then search for an ideal value of n such that a desired morphology is the most thermodynamically favorable. This approach enables us to synthesize new block polymers with exactly the segment lengths that will undergo self-assembly to the desired morphology. As a proof of principle, we have used our model to predict the gyroidal domain for various block lengths using a fixed {chi} value. To validate our computational model, we have synthesized a series of block copolymers in which only the total molecular length changes. All of these materials have a predicted thermodynamically favorable gyroidal morphology based on the results of our SCFT calculations. Thin films of these polymers are cast and annealed in order to equilibrate the structure. Final characterization of the polymer thin-film morphology has been performed, and the accuracy of our calculations compared to experimental results is discussed. Extension of this predictive ability to tri-block polymer systems and the implications for making functionalizable nanoporous membranes will be discussed.
This presentation discusses the following questions: (1) What are the Global Problems that require Systems Engineering; (2) Where is Systems Engineering going; (3) What are the boundaries of Systems Engineering; (4) What is the distinction between Systems Thinking and Systems Engineering; (5) Can we use Systems Engineering on Complex Systems; and (6) Can we use Systems Engineering on Wicked Problems?
The area of wind turbine component manufacturing represents a business opportunity in the wind energy industry. Modern wind turbines can provide large amounts of electricity, cleanly and reliably, at prices competitive with any other new electricity source. Over the next twenty years, the US market for wind power is expected to continue to grow, as is the domestic content of installed turbines, driving demand for American-made components. Between 2005 and 2009, components manufactured domestically grew eight-fold to reach 50 percent of the value of new wind turbines installed in the U.S. in 2009. While that growth is impressive, the industry expects domestic content to continue to grow, creating new opportunities for suppliers. In addition, ever-growing wind power markets around the world provide opportunities for new export markets.
The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards, of large LNG spills and fires.
X-ray momentum coupling coefficients, C{sub M}, were determined by measuring stress waveforms in planetary materials subjected to impulsive radiation loading from the Sandia National Laboratories Z-machine. Results from the velocity interferometry (VISAR) diagnostic provided limited equation-of-state data as well. Targets were iron and stone meteorites, magnesium-rich olivine (dunite) solid and powder ({approx}5--300 {mu}m), and Si, Al, and Fe calibration targets. All samples were {approx}1 mm thick and, except for Si, backed by LiF single-crystal windows. The x-ray spectrum included a combination of thermal radiation (blackbody 170--237 eV) and line emissions from the pinch material (Cu, Ni, Al, or stainless steel). Target fluences of 0.4--1.7 kJ/cm{sup 2} at intensities of 43--260 GW/cm{sup 2} produced front-surface plasma pressures of 2.6--12.4 GPa. Stress waves driven into the samples were attenuating due to the short ({approx}5 ns) duration of the drive pulse. The impulse of an attenuating wave is constant, allowing accurate C{sub M} measurements provided the mechanical impedance mismatch between the samples and the window is known. Impedance-corrected C{sub M} determined from rear-surface motion was 1.9--3.1 x 10{sup -5} s/m for stony meteorites, 2.7 and 0.5 x 10{sup -5} s/m for solid and powdered dunite, 0.8--1.4 x 10{sup -5}.
We are developing a low-emissivity thermal management coating system to minimize radiative heat losses under a high-vacuum environment. Good adhesion, low outgassing, and good thermal stability of the coating material are essential elements for a long-life, reliable thermal management device. The system of electroplated Au coating on an adhesion-enhancing Wood's Ni strike and a 304L substrate was selected due to its low emissivity and low surface chemical reactivity. The physical and chemical properties, interface bonding, thermal aging, and compatibility of the above Au/Ni/304L system were examined extensively. The study shows that the as-plated electroplated Au and Ni samples contain submicron columnar grains, stringers of nanopores, and/or H{sub 2} gas bubbles, as expected. The grain structures of Au and Ni are thermally stable up to 250 C for 63 days. The interface bonding is strong, which can be attributed to good mechanical locking among the Au, the 304L, and the porous Ni strike. However, thermal instability of the nanopore structure (i.e., pore coalescence and coarsening due to vacancy and/or entrapped gaseous phase diffusion) and Ni diffusion were observed. In addition, the study found that prebaking 304L in the furnace at {ge} 1 x 10{sup -4} Torr promotes formation of Cr-oxides on the 304L surface, which reduces the effectiveness of the intended H-removal. The extent of the pore coalescence and coarsening and their effect on long-term system integrity and outgassing are yet to be understood. Mitigating system outgassing and improving Au adhesion require a further understanding of the process-structure-performance relationships within the electroplated Au/Ni/304L system.
Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of its cost, and compare it to other proposed methods for fault resilience.
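A back-of-the-envelope model of the checkpoint/restart overhead being compared against can be written with Young's approximation for the optimal checkpoint interval; the node reliability and checkpoint-write times below are illustrative assumptions, not the simulation parameters of the paper.

import math

def checkpoint_overhead(n_nodes, node_mtbf_h=5.0 * 365 * 24, ckpt_write_h=0.25):
    """Rough Young-style estimate of the fraction of wall-clock time lost to
    checkpointing, restart, and rework as the node count grows. Parameter
    values are illustrative only."""
    system_mtti = node_mtbf_h / n_nodes                      # mean time to interrupt
    tau = math.sqrt(2.0 * ckpt_write_h * system_mtti)        # Young's optimal interval
    ckpt_frac = ckpt_write_h / (tau + ckpt_write_h)          # time spent writing checkpoints
    rework_frac = (tau / 2.0 + ckpt_write_h) / system_mtti   # lost work and restart per failure
    return min(1.0, ckpt_frac + rework_frac)

for n in (10_000, 50_000, 200_000):
    print(n, round(checkpoint_overhead(n), 2))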
A Rayleigh wave propagates laterally without dispersion in the vicinity of the plane stress-free surface of a homogeneous and isotropic elastic halfspace. The phase speed is independent of frequency and depends only on the Poisson ratio of the medium. However, after temporal and spatial discretization, a Rayleigh wave simulated by a 3D staggered-grid finite-difference (FD) seismic wave propagation algorithm suffers from frequency- and direction-dependent numerical dispersion. The magnitude of this dispersion depends critically on FD algorithm implementation details. Nevertheless, proper gridding can control numerical dispersion to within an acceptable level, leading to accurate Rayleigh wave simulations. Many investigators have derived dispersion relations appropriate for body wave propagation by various FD algorithms. However, the situation for surface waves is less well-studied. We have devised a numerical search procedure to estimate Rayleigh phase speed and group speed curves for 3D O(2,2) and O(2,4) staggered-grid FD algorithms. In contrast with the continuous time-space situation (where phase speed is obtained by extracting the appropriate root of the Rayleigh cubic), we cannot develop a closed-form mathematical formula governing the phase speed. Rather, we numerically seek the particular phase speed that leads to a solution of the discrete wave propagation equations, while holding medium properties, frequency, horizontal propagation direction, and gridding intervals fixed. Group speed is then obtained by numerically differentiating the phase speed with respect to frequency. The problem is formulated for an explicit stress-free surface positioned at two different levels within the staggered spatial grid. Additionally, an interesting variant involving zero-valued medium properties above the surface is addressed. We refer to the latter as an implicit free surface. Our preliminary conclusion is that an explicit free surface, implemented with O(4) spatial FD operators and positioned at the level of the compressional stress components, leads to superior numerical dispersion performance. Phase speeds measured from fixed-frequency synthetic seismograms agree very well with the numerical predictions.
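For the continuous reference case mentioned above, the Rayleigh phase speed can be recovered by a simple root search on the Rayleigh function; a sketch in Python (with bracket limits chosen to cover typical Poisson ratios) is:

import numpy as np
from scipy.optimize import brentq

def rayleigh_speed(vs, nu):
    """Phase speed of the continuous (non-dispersive) Rayleigh wave, found by
    locating the root of the Rayleigh function between 0 and the shear speed.
    vs: shear wave speed, nu: Poisson ratio of the halfspace."""
    vp = vs * np.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))   # compressional speed
    def rayleigh_fn(c):
        return (2.0 - (c / vs) ** 2) ** 2 \
            - 4.0 * np.sqrt(1.0 - (c / vp) ** 2) * np.sqrt(1.0 - (c / vs) ** 2)
    return brentq(rayleigh_fn, 0.7 * vs, 0.999 * vs)

# For a Poisson solid (nu = 0.25) the classical result is c_R ~ 0.9194 * vs
print(rayleigh_speed(vs=1000.0, nu=0.25))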
Shales and other mudstones are the most abundant rock types in sedimentary basins, yet have received comparatively little attention. Common as hydrocarbon seals, these rocks are increasingly being targeted as unconventional gas reservoirs, caprocks for CO2 sequestration, and storage repositories for waste. The small pore and grain sizes, large specific surface areas, and clay mineral structures lend themselves to rapid reaction rates, high capillary pressures, and semi-permeable membrane behavior accompanying changes in stress, pressure, temperature, and chemical conditions. Under far-from-equilibrium conditions, mudrocks display a variety of spatio-temporal self-organized phenomena arising from nonlinear thermo-mechano-chemo-hydro coupling. Beginning with a detailed examination of nano-scale pore network structures in mudstones, we discuss the dynamics behind such self-organized phenomena as pressure solitons in unconsolidated muds, chemically induced flow self-focusing and permeability transients, localized compaction, time-dependent well-bore failure, and oscillatory osmotic fluxes as they occur in clay-bearing sediments. Examples are drawn from experiments, numerical simulation, and the field. These phenomena bear on the ability of these rocks to serve as containment barriers.
Monge first posed his (L{sup 1}) optimal mass transfer problem, to find a mapping of one distribution into another that minimizes the total distance of transporting mass, in 1781. It remained unsolved in R{sup n} until the late 1990's. This result has since been extended to Riemannian manifolds. In both cases, optimal mass transfer relies upon a key lemma providing a Lipschitz control on the directions of geodesics. We will discuss the Lipschitz control of geodesics in the (subRiemannian) Heisenberg group. This provides an important step towards a potential-theoretic proof of Monge's problem in the Heisenberg group.
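For reference, a compact statement of the L{sup 1} Monge problem in modern notation (a sketch: \mu and \nu denote the source and target mass distributions on a metric space (X, d), and T_{\#}\mu is the push-forward of \mu by the map T):

\min_{T \,:\, T_{\#}\mu = \nu} \; \int_{X} d\bigl(x, T(x)\bigr)\, d\mu(x)

The minimization is over all measurable maps T carrying \mu onto \nu; in the Euclidean and Riemannian settings d is the ambient (geodesic) distance, and in the Heisenberg group it is the Carnot-Caratheodory distance.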
A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: how many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations that independently minimize expected cost and down-side risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area in order to achieve rigorous solutions for decision makers.
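The flavor of such a bounding procedure can be sketched generically: independent sample-average replications give a statistical lower bound on the true optimal cost (for minimization), a fixed candidate solution evaluated on fresh scenarios gives an upper bound, and the gap plus its sampling error indicates whether the scenario count is adequate. The Python sketch below uses a trivial stand-in problem; solve_saa and evaluate are hypothetical placeholders that would be replaced by the actual expansion-planning solver and cost evaluator.

import numpy as np

def scenario_gap_estimate(solve_saa, evaluate, candidate, n_scen, n_batches=20, seed=0):
    """Mak-Morton-Wood style check of whether n_scen scenarios suffice:
    solve_saa(scenarios) -> optimal sample-average cost (lower-bound estimator);
    evaluate(candidate, scenarios) -> mean cost of a fixed candidate solution
    (upper-bound estimator). Returns the estimated optimality gap and its
    approximate 95% sampling error."""
    rng = np.random.default_rng(seed)
    lower = np.array([solve_saa(rng.standard_normal(n_scen)) for _ in range(n_batches)])
    upper = np.array([evaluate(candidate, rng.standard_normal(n_scen)) for _ in range(n_batches)])
    gap = upper.mean() - lower.mean()
    err = 1.96 * np.sqrt(lower.var(ddof=1) / n_batches + upper.var(ddof=1) / n_batches)
    return gap, err

# Toy stand-in problem: minimize E[(x - w)^2] over x, with w ~ N(0, 1)
solve = lambda w: np.mean((w - w.mean()) ** 2)   # SAA optimum occurs at x = mean(w)
eval_ = lambda x, w: np.mean((x - w) ** 2)       # cost of a fixed candidate x
print(scenario_gap_estimate(solve, eval_, candidate=0.0, n_scen=100))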
While advances in manufacturing enable the fabrication of integrated circuits containing tens to hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits pose a significant computational challenge. This is especially true for mixed-signal integrated circuits, where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' are presented in this report. Among them are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds, as well as a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which had not been proven before. Progress on preconditioning power grid systems using multigrid techniques is presented, as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator, both prominent world-class software tools.
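As one concrete example of the general family of techniques involved, a one-sided Krylov (moment-matching) reduction of a linear time-invariant system can be sketched in a few lines; this is a generic textbook projection for illustration, not the specific structured, error-bounded, or time-varying methods developed under the LDRD.

import numpy as np

def arnoldi_moment_match(A, b, c, k):
    """One-sided Krylov (Arnoldi) moment matching about s = 0 for a linear
    time-invariant system x' = A x + b u, y = c x: project onto the span of
    {A^{-1}b, A^{-2}b, ...} so the first k transfer-function moments agree."""
    n = A.shape[0]
    V = np.zeros((n, k))
    w = np.linalg.solve(A, b)
    V[:, 0] = w / np.linalg.norm(w)
    for j in range(1, k):
        w = np.linalg.solve(A, V[:, j - 1])      # next Krylov direction
        w -= V[:, :j] @ (V[:, :j].T @ w)         # Gram-Schmidt orthogonalization
        V[:, j] = w / np.linalg.norm(w)
    Ar, br, cr = V.T @ A @ V, V.T @ b, c @ V     # reduced-order model
    return Ar, br, cr

# Toy use: reduce a 200-state stable system to 10 states
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((200, 200)) - 2.0 * np.eye(200)
b = rng.standard_normal(200); c = rng.standard_normal(200)
Ar, br, cr = arnoldi_moment_match(A, b, c, 10)
# DC gains of the full and reduced models should agree closely
print(-c @ np.linalg.solve(A, b), -cr @ np.linalg.solve(Ar, br))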
This report summarizes a series of three-dimensional simulations for the Bayou Choctaw Strategic Petroleum Reserve. The U.S. Department of Energy plans to leach two new caverns and convert one of the existing caverns within the Bayou Choctaw salt dome to expand its petroleum reserve storage capacity. An existing finite element mesh from previous analyses is modified by changing the locations of two caverns. The structural integrity of the three expansion caverns and the interaction between all the caverns in the dome are investigated. The impacts of the expansion on underground creep closure, surface subsidence, infrastructure, and well integrity are quantified. Two scenarios were used for the duration and timing of workover conditions where wellhead pressures are temporarily reduced to atmospheric pressure. The three expansion caverns are predicted to be structurally stable against tensile failure for both scenarios. Dilatant failure is not expected within the vicinity of the expansion caverns. Damage to surface structures is not predicted and there is not a marked increase in surface strains due to the presence of the three expansion caverns. The wells into the caverns should not undergo yield. The results show that from a structural viewpoint, the locations of the two newly proposed expansion caverns are acceptable, and all three expansion caverns can be safely constructed and operated.
Magnesium batteries are alternatives to lithium-ion and nickel metal hydride secondary batteries due to magnesium's abundance, safety of operation, and lower toxicity of disposal. The divalency of the magnesium ion and its chemistry pose some difficulties for its general and industrial use. This work developed a continuous and fibrous nanoscale network of the cathode material through the use of electrospinning, with the goal of enhancing the performance and reactivity of the battery. The system was characterized and preliminary tests were performed on the constructed battery cells. We were successful in building and testing a series of electrochemical systems that demonstrated good cyclability, maintaining 60-70% of discharge capacity after more than 50 charge-discharge cycles.