Publications

Smoothing HCCI heat release with vaporization-cooling-induced thermal stratification using ethanol

Sjoberg, Carl M.; Dec, John E.

Ethanol and ethanol/gasoline blends are being widely considered as alternative fuels for light-duty automotive applications. At the same time, HCCI combustion has the potential to provide high efficiency and ultra-low exhaust emissions. However, the application of HCCI is typically limited to low and moderate loads because of unacceptably high heat-release rates (HRR) at higher fueling rates. This work investigates the potential of lowering the HCCI HRR at high loads by using partial fuel stratification to increase the in-cylinder thermal stratification. This strategy is based on ethanol's high heat of vaporization combined with its true single-stage ignition characteristics. With partial fuel stratification, the strong fuel-vaporization cooling produces thermal stratification due to variations in the amount of fuel vaporized in different parts of the combustion chamber. The low sensitivity of the autoignition reactions to variations in the local fuel concentration allows the temperature variations to govern the combustion event. This results in a sequential autoignition from leaner, hotter zones to richer, colder zones, lowering the overall combustion rate compared to operation with a uniform fuel/air mixture. The amount of partial fuel stratification was varied by adjusting the fraction of fuel injected late to produce stratification, and also by changing the timing of the late injection. The experiments show that a combination of 60-70% premixed charge and injection of 30-40% of the fuel at 80°CA before TDC is effective for smoothing the HRR. With CA50 held fixed, this increases the burn duration by 55% and reduces the maximum pressure-rise rate by 40%. Combustion stability remains high, but engine-out NOx has to be monitored carefully. For operation with a strong reduction of the peak HRR, ISNOx rises to around 0.20 g/kWh for an IMEPg of 440 kPa. The single-cylinder HCCI research engine was operated naturally aspirated without EGR at 1200 rpm and had a low residual level using a CR = 14 piston.

Micropillar compression technique applied to micron-scale mudstone elasto-plastic deformation

Dewers, Thomas; Boyce, Brad L.; Buchheit, Thomas E.; Heath, Jason E.; Michael, Joseph R.

Mudstone mechanical testing is often limited by poor core recovery and by sample size, preservation, and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. (2004), is applied here to elasto-plastic deformation of small volumes of mudstone, in the range of cubic microns. This study examines the behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precision-manufactured using an ion-milling method. Characterization of samples is carried out using dual focused ion beam - scanning electron beam imaging of nano-scale pores and of the distribution of matrix clay and quartz, as well as pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability measurements, among other techniques. Compression testing of micropillars under load control is performed using two different nanoindenter techniques. Deformation of cores 0.5 cm in diameter by 1 cm in length is carried out and visualized using a microscope loading stage and laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing are carried out using 2.54 cm plugs. Discussion of results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and the influence of the substrate.

Arctic sea ice model sensitivities

Bochev, Pavel B.; Paskaleva, Biliana S.

Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice state to internal model parameters. A new sea ice model that holds some promise for improving sea ice predictions incorporates an anisotropic elastic-decohesive rheology with dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of this MPM sea ice code and compare it with the Los Alamos National Laboratory CICE code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the influence of the parameters on the solution.
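
The parameter study itself follows a standard pattern: perturb one parameter, rerun the model, and normalize the response. The sketch below illustrates only that one-at-a-time idea; DAKOTA automates and generalizes it (including combined-parameter studies), and the function names and relative step size here are illustrative assumptions, not the DAKOTA interface.

    def scaled_sensitivities(run_model, nominal, rel_step=0.01):
        # run_model: callable mapping a dict of parameter values to one scalar
        # output (e.g., September ice volume); nominal: dict of nominal values.
        # Returns the dimensionless sensitivity (dQ/Q)/(dp/p) per parameter,
        # at the cost of one extra model run per parameter.
        base = run_model(nominal)
        sens = {}
        for name, value in nominal.items():
            perturbed = dict(nominal)
            perturbed[name] = value * (1.0 + rel_step)
            sens[name] = ((run_model(perturbed) - base) / base) / rel_step
        return sens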

A nanostructure thermal property measurement platform

Martinez, Julio M.; Shaner, Eric A.; Swartzentruber, Brian; Huang, Jian Y.; Sullivan, John P.

Measurements of the electrical and thermal transport properties of one-dimensional nanostructures (e.g., nanotubes and nanowires) are typically obtained without detailed knowledge of the specimen's atomic-scale structure or defects. To address this deficiency we have developed a microfabricated, chip-based characterization platform that enables both transmission electron microscopy (TEM) of atomic structure and defects and measurement of the thermal transport properties of individual nanostructures. The platform features a suspended heater line that contacts the center of a suspended nanostructure/nanowire placed using in-situ scanning electron microscope nanomanipulators. One key advantage of this platform is that the thermal conductivity of both halves of the nanostructure (on each side of the central heater) can be measured; this feature permits identification of possible changes in thermal conductance along the wire and measurement of the thermal contact resistance. Suspension of the nanostructure across a through-hole enables TEM characterization of the atomic and defect structure (dislocations, stacking faults, etc.) of the test sample. As a model study, we report the use of this platform to measure the thermal conductivity and defect structure of GaN nanowires. The utilization of this platform for measurements of other nanostructures will also be discussed.

Fuel from wastewater: harnessing a potential energy source in Canada through the co-location of algae biofuel production with sources of effluent, heat and CO2

Passell, Howard; Roach, Jesse D.

Sandia National Laboratories is collaborating with the National Research Council (NRC) Canada and the National Renewable Energy Laboratory (NREL) to develop a decision-support model that will evaluate the tradeoffs associated with high-latitude algae biofuel production co-located with wastewater, CO2, and waste heat. This project helps Canada meet its goal of diversifying fuel sources with algae-based biofuels. The biofuel production will provide a wide range of benefits, including wastewater treatment, CO2 reuse, and reduced demand for fossil-based fuels. The higher energy density of algae-based fuels gives them an advantage over crop-based biofuels: the production footprint required is much smaller, resulting in less water consumed and little, if any, conversion of agricultural land from food to fuel production. Besides being a potential source of liquid fuel, algae can be burned as dried biomass to generate electricity or anaerobically digested to produce methane for electricity production. Co-locating algae production with waste streams may be crucial for making algae an economically viable fuel source, and will certainly improve its overall ecological sustainability. The modeling process will address these and other questions that are important to the use of water for energy production: Where are all the resources co-located, and what volumes of algal biomass and oil can be produced there? In locations where co-location does not occur, which resources should be transported, and how far, while maintaining economic viability? This work is funded through the U.S. Department of Energy (DOE) Biomass Program, Office of Energy Efficiency and Renewable Energy, and is part of a larger collaborative effort that includes sampling, strain isolation, strain characterization, and cultivation performed by NREL and Canada's NRC. Results from the NREL/NRC collaboration, including specific productivities of selected algal strains, will eventually be incorporated into this model.

New demands, new supplies: a national look at the water balance of carbon dioxide capture and sequestration

Roach, Jesse D.; Kobos, Peter; Klise, Geoffrey T.; Krumhansl, James L.

Concerns over rising concentrations of greenhouse gases in the atmosphere have resulted in serious consideration of policies aimed at reduction of anthropogenic carbon dioxide (CO2) emissions. If large-scale abatement efforts are undertaken, one critical tool will be geologic sequestration of CO2 captured from large point sources, specifically coal and natural gas fired power plants. Current CO2 capture technologies exact a substantial energy penalty on the source power plant, which must be offset with make-up power. Water demands increase at the source plant due to added cooling loads. In addition, new water demand is created by the water requirements associated with generation of the make-up power. At the sequestration site, however, saline water may be extracted to manage CO2 plume migration and pressure build-up in the geologic formation. Thus, while CO2 capture creates new water demands, CO2 sequestration has the potential to create new supplies. Some or all of the added demand may be offset by treatment and use of the saline waters extracted from geologic formations during CO2 sequestration. Sandia National Laboratories, with guidance and support from the National Energy Technology Laboratory, is creating a model to evaluate the potential for a combined approach to saline formations as a sink for CO2 and a source of saline waters that can be treated and beneficially reused to serve power plant water demands. This presentation will focus on the magnitude of added U.S. power plant water demand under different CO2 emissions reduction scenarios, and the portion of added demand that might be offset by saline waters extracted during the CO2 sequestration process.

Robust emergent climate phenomena associated with the high-sensitivity tail

Boslough, Mark; Levy, Michael N.; Backus, George A.

Because the potential effects of climate change are more severe than had previously been thought, increasing focus on uncertainty quantification is required for risk assessment needed by policy makers. Current scientific efforts focus almost exclusively on establishing best estimates of future climate change. However, the greatest consequences occur in the extreme tail of the probability density functions for climate sensitivity (the 'high-sensitivity tail'). To this end, we are exploring the impacts of newly postulated, highly uncertain, but high-consequence physical mechanisms to better establish the climate change risk. We define consequence in terms of dramatic change in physical conditions and in the resulting socioeconomic impact (hence, risk) on populations. Although we are developing generally applicable risk assessment methods, we have focused our initial efforts on uncertainty and risk analyses for the Arctic region. Instead of focusing on best estimates, requiring many years of model parameterization development and evaluation, we are focusing on robust emergent phenomena (those that are not necessarily intuitive and are insensitive to assumptions, subgrid-parameterizations, and tunings). For many physical systems, under-resolved models fail to generate such phenomena, which only develop when model resolution is sufficiently high. Our ultimate goal is to discover the patterns of emergent climate precursors (those that cannot be predicted with lower-resolution models) that can be used as a 'sensitivity fingerprint' and make recommendations for a climate early warning system that would use satellites and sensor arrays to look for the various predicted high-sensitivity signatures. Our initial simulations are focused on the Arctic region, where underpredicted phenomena such as rapid loss of sea ice are already emerging, and because of major geopolitical implications associated with increasing Arctic accessibility to natural resources, shipping routes, and strategic locations. We anticipate that regional climate will be strongly influenced by feedbacks associated with a seasonally ice-free Arctic, but with unknown emergent phenomena.

Uncertainty quantification given discontinuous climate model response and a limited number of model runs

Sargsyan, Khachik; Safta, Cosmin; Debusschere, Bert; Najm, Habib N.

Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions, due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in the climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation: the predicted maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First, we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve's shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that balances the accuracy of orthogonal projection with the flexibility of Bayesian inference, where the latter allows obtaining reasonable expansions without extra forward model runs. The model output and its associated uncertainty at specific design points are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure.
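
The piecewise-PC idea is easiest to see in one dimension. The sketch below fits separate Legendre expansions on each side of the discontinuity after linearly mapping each side onto [-1, 1], where the basis is orthogonal (a 1-D stand-in for the Rosenblatt transformation). It assumes a single uniform input and a known discontinuity location, whereas the paper infers the discontinuity by Bayesian inference and also considers inferred PC modes.

    import numpy as np
    from numpy.polynomial import legendre as leg

    def piecewise_pc_surrogate(x, y, x0, deg=4):
        # Fit Legendre (PC) coefficients by least squares on each side of x0.
        pieces = []
        for mask, a, b in ((x < x0, x.min(), x0), (x >= x0, x0, x.max())):
            z = 2.0 * (x[mask] - a) / (b - a) - 1.0   # map side onto [-1, 1]
            pieces.append((a, b, leg.legfit(z, y[mask], deg)))

        def evaluate(xq):
            a, b, c = pieces[0] if xq < x0 else pieces[1]
            return leg.legval(2.0 * (xq - a) / (b - a) - 1.0, c)

        return evaluate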

The development of CACTUS: a wind and marine turbine performance simulation code

Murray, Jonathan; Barone, Matthew F.

CACTUS (Code for Axial and Cross-flow TUrbine Simulation) is a turbine performance simulation code, based on a free wake vortex method, under development at Sandia National Laboratories (SNL) as part of a Department of Energy program to study marine hydrokinetic (MHK) devices. The current effort builds upon work previously done at SNL in the area of vertical-axis wind turbine simulation, and aims to add the capability to handle generic device geometries along with physical models specific to the marine environment. An overview of the current state of the project and of the validation effort is provided.

The effect of error models in the multiscale inversion of binary permeability fields

Ray, Jaideep; Van Bloemen Waanders, Bart; Mckenna, Sean A.

We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model that connects the disparate scales, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires as inputs an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes, and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportion and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and the fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold, where the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.
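
The two error representations enter the inversion only through the likelihood. A minimal sketch of the contrast, assuming independent Gaussian errors; how the error standard deviation is derived from the local inclusion proportion and geometry is a modeling choice, so sigma_per_obs below is a user-supplied placeholder.

    import numpy as np

    def loglike_homoscedastic(d_obs, d_pred, sigma):
        # One fixed model-error standard deviation for all observations.
        r = d_obs - d_pred
        return -0.5 * np.sum((r / sigma) ** 2) - r.size * np.log(sigma)

    def loglike_heteroscedastic(d_obs, d_pred, sigma_per_obs):
        # Observation-dependent error, e.g. larger sigma where the local
        # inclusion proportion sits near the percolation threshold.
        r = d_obs - d_pred
        return (-0.5 * np.sum((r / sigma_per_obs) ** 2)
                - np.sum(np.log(sigma_per_obs)))

Both omit the constant -0.5 n log(2 pi), which cancels in MCMC acceptance ratios.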

Pore-lining composition and capillary breakthrough pressure of mudstone caprocks: sealing efficiency at geologic CO2 storage sites

Dewers, Thomas; Kotula, Paul G.; Nemer, Martin

Subsurface containment of CO2 is predicated on effective caprock sealing. Many previous studies have relied on macroscopic measurements of capillary breakthrough pressure and other petrophysical properties without direct examination of the solid phases that line pore networks and directly contact fluids. However, pore-lining phases strongly contribute to sealing behavior through interfacial interactions among CO2, brine, and the mineral or non-mineral phases. Our high-resolution (i.e., sub-micron) examination of the composition of pore-lining phases of several continental and marine mudstones indicates that sealing efficiency (i.e., breakthrough pressure) is governed by pore shapes and pore-lining phases that are not identifiable except through direct characterization of pores. Bulk X-ray diffraction data do not indicate which phases line the pores and may be especially lacking for mudstones with organic material. Organics can line pores and may represent once-mobile phases that modify the wettability of an originally clay-lined pore network. For shallow formations (i.e., < ~800 m depth), interfacial tension and contact angles result in breakthrough pressures that may be as high as those needed to fracture the rock; thus, in the absence of fractures, capillary sealing efficiency is indicated. Deeper seals have poorer capillary sealing if mica-like wetting dominates the wettability. We thank the U.S. Department of Energy's National Energy Technology Laboratory and Office of Basic Energy Sciences, and the Southeast and Southwest Carbon Sequestration Partnerships, for supporting this work.

Comparison of caprock pore networks that potentially will be impacted by carbon sequestration projects

Dewers, Thomas

Injection of CO2 into underground rock formations can reduce atmospheric CO2 emissions. Caprocks present above potential storage formations are the main structural trap inhibiting CO2 from leaking into overlying aquifers or back to the Earth's surface. Dissolution and precipitation of caprock minerals resulting from reaction with CO2 may alter the pore network, where many pores are of micrometer to nanometer scale, thus altering the structural trapping potential of the caprock. However, the distribution, geometry, and volume of pores at these scales are poorly characterized. In order to evaluate the overall risk of leakage of CO2 from storage formations, a first critical step is understanding the distribution and shape of pores in a variety of different caprocks. As caprocks are often composed of mudstones, we analyzed samples from several mudstone formations with small angle neutron scattering (SANS) and high-resolution transmission electron microscopy (TEM) imaging to compare the pore networks. Mudstones were chosen from current or potential sites for carbon sequestration projects, including the Marine Tuscaloosa Group, the Lower Tuscaloosa Group, the upper and lower shale members of the Kirtland Formation, and the Pennsylvanian Gothic shale. Expandable clay contents ranged from 10% in the Gothic shale to approximately 40% in the Kirtland Formation. During SANS, neutrons scatter from interfaces between materials with differing scattering length density (i.e., minerals and pores). The intensity of scattered neutrons, I(Q), where Q is the scattering vector, gives information about the volume and arrangement of pores in the sample. The slope of the scattering data when plotted as log I(Q) vs. log Q provides information about the fractality or geometry of the pore network: on such plots, slopes from -2 to -3 represent mass fractals, while slopes from -3 to -4 represent surface fractals. Scattering data showed surface fractal dimensions close to 3 for the Kirtland Formation and one sample from the Tuscaloosa Group, indicating very rough surfaces. In contrast, scattering data for the Gothic shale exhibited mass fractal behavior. In one sample of the Tuscaloosa Group the data are described by a surface fractal at low Q (larger pores) and a mass fractal at high Q (smaller pores), indicating two pore populations contributing to the scattering behavior. These small angle neutron scattering results, combined with high-resolution TEM imaging, provide a means for both qualitative and quantitative analysis of the differences in pore networks among these various mudstones.
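
The slope classification is straightforward to apply to a measured curve. A minimal sketch, assuming a single power-law regime over the fitted Q range; the dimension formulas Dm = -slope and Ds = 6 + slope are the standard small-angle scattering relations rather than statements from this abstract.

    import numpy as np

    def classify_pore_fractal(q, intensity):
        # Slope of log I(Q) vs. log Q, interpreted with the rules above:
        # slopes in (-3, -2] -> mass fractal; slopes in [-4, -3) -> surface
        # fractal (a surface dimension near 3 means very rough surfaces).
        slope = np.polyfit(np.log10(q), np.log10(intensity), 1)[0]
        if -3.0 < slope <= -2.0:
            return slope, "mass fractal, Dm = %.2f" % -slope
        if -4.0 <= slope < -3.0:
            return slope, "surface fractal, Ds = %.2f" % (6.0 + slope)
        return slope, "outside the simple fractal regimes"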

Sensor placement for municipal water networks

Phillips, Cynthia A.; Boman, Erik G.; Carr, Robert D.; Hart, William E.; Berry, Jonathan; Watson, Jean-Paul; Hart, David; Mckenna, Sean A.; Riesen, Lee A.

We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem whose structure is extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, and a Lagrangian method; the Lagrangian method is necessary for solving large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used, and is using, to design contamination warning systems for US municipal water systems.
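
For orientation, the p-median core of the problem is a compact integer program. The sketch below uses the PuLP modeling library and assumes a precomputed incident-by-location impact matrix with uniform incident weights and a dummy location standing for "never detected"; it illustrates the formulation only, not the TEVA-SPOT implementation.

    import pulp

    def place_sensors(impact, p):
        # impact[a][j]: damage if incident a is first detected at location j.
        A, J = range(len(impact)), range(len(impact[0]))
        prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
        s = pulp.LpVariable.dicts("s", J, cat="Binary")       # sensor at j?
        x = pulp.LpVariable.dicts("x", (A, J), cat="Binary")  # a detected at j?
        prob += pulp.lpSum(impact[a][j] * x[a][j] for a in A for j in J)
        for a in A:
            prob += pulp.lpSum(x[a][j] for j in J) == 1  # each incident assigned
            for j in J:
                prob += x[a][j] <= s[j]                  # only to placed sensors
        prob += pulp.lpSum(s[j] for j in J) <= p         # sensor budget
        prob.solve()
        return [j for j in J if s[j].value() == 1]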

Analytical modeling of the acoustic field during a direct field acoustic test

Mesh, Mikhail; Stasiunas, Eric C.

The acoustic field generated during a Direct Field Acoustic Test (DFAT) has been analytically modeled in two space dimensions using a properly phased distribution of propagating plane waves. Both the pure-tone and broadband acoustic fields were qualitatively and quantitatively compared to a diffuse acoustic field. The modeling indicates significant non-uniformity of sound pressure level for an empty (no test article) DFAT, specifically a center peak and concentric rings of maxima and minima. This spatial variation is due to the equal phase among all propagating plane waves at each frequency. The excitation of a simply supported slender beam immersed within the acoustic fields was also analytically modeled. Results indicate that the mid-span response depends on the location and orientation of the beam relative to the center of the DFAT acoustic field. For a diffuse acoustic field, due to its spatial uniformity, the mid-span response shows no sensitivity to location and orientation.

Dual FIB-SEM 3D imaging and lattice Boltzmann modeling of porosimetry and multiphase flow in chalk

Rinehart, Alex R.; Yoon, Hongkyu; Heath, Jason E.; Dewers, Thomas

Mercury intrusion porosimetry (MIP) is an often-applied technique for determining pore throat distributions and seal analysis of fine-grained rocks. Due to closure effects, potential pore collapse, and complex pore network topologies, MIP data interpretation can be ambiguous, and often biased toward smaller pores in the distribution. We apply 3D imaging techniques and lattice-Boltzmann modeling in interpreting MIP data for samples of the Cretaceous Selma Group Chalk. In the Mississippi Interior Salt Basin, the Selma Chalk is the apparent seal for oil and gas fields in the underlying Eutaw Fm., and, where unfractured, the Selma Chalk is one of the regional-scale seals identified by the Southeast Regional Carbon Sequestration Partnership for CO2 injection sites. Dual focused ion - scanning electron beam and laser scanning confocal microscopy methods are used for 3D imaging of nanometer-to-micron scale microcrack and pore distributions in the Selma Chalk. A combination of image analysis software is used to obtain geometric pore body and throat distributions and other topological properties, which are compared to MIP results. 3D data sets of pore-microfracture networks are used in Lattice Boltzmann simulations of drainage (wetting fluid displaced by non-wetting fluid via the Shan-Chen algorithm), which in turn are used to model MIP procedures. Results are used in interpreting MIP results, understanding microfracture-matrix interaction during multiphase flow, and seal analysis for underground CO2 storage.

Evaluation of PV performance models and their impact on project risk

Stein, Joshua; Hansen, Clifford

Photovoltaic systems are often priced in $/Wp, where Wp refers to the DC power rating of the modules at Standard Test Conditions (1000 W/m^2 irradiance, 25 °C cell temperature) and $ refers to the installed cost of the system. However, the true value of the system is in the energy it will produce in kWh, not the power rating. System energy production is a function of the system design and location, the mounting configuration, the power conversion system, and the module technology, as well as the solar resource. Even if all other variables are held constant, the annual energy yield (kWh/kWp) will vary among module technologies because of differences in response to low light levels and temperature. Understanding energy yield is a key part of understanding system value. System performance models are used during project development to estimate the expected output of PV systems for a given design and location. Performance modeling is normally done by the system designer/system integrator. Often, an independent engineer will also model system output during a due diligence review of a project. A variety of system performance models are available. The most commonly used modeling tool for project development and due diligence in the United States is probably PVsyst, while those seeking a quick answer to expected energy production may use PVWatts. In this paper, we examine the variation in predicted energy output among modeling tools and users and compare that to measured output.
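
The power-versus-energy distinction reduces to simple arithmetic. All numbers below are illustrative assumptions, not values from the paper.

    # Two systems with the same $/Wp price but different annual energy yields.
    dc_rating_kwp = 1000.0                                  # DC nameplate at STC
    installed_cost = 4.0 * dc_rating_kwp * 1000.0           # 4.00 $/Wp -> $4.0M
    for tech, yield_kwh_per_kwp in (("A", 1450.0), ("B", 1650.0)):
        annual_kwh = yield_kwh_per_kwp * dc_rating_kwp
        # Capital cost per annual kWh: B delivers more value at the same $/Wp.
        print(tech, round(installed_cost / annual_kwh, 2))  # A 2.76, B 2.42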

High-performance computing applied to semantic databases

Jimenez, Edward S.; Goodman, Eric

To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed-memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large global shared memory and processors with a memory-latency-tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass the current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available) and the ability to process 20 billion triples completely in memory.
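
Of the three pieces, dictionary encoding is the most self-contained: each distinct string term is mapped to an integer so that inference and query processing operate on fixed-width integers rather than strings. A serial Python sketch of the idea follows; the paper's contribution is performing this step in parallel over the XMT's shared memory, and the example terms are illustrative.

    def encode_triples(triples):
        # Assign each distinct RDF term a dense integer ID; keep the reverse
        # map so query answers can be translated back to strings.
        term_to_id = {}
        encoded = [tuple(term_to_id.setdefault(t, len(term_to_id)) for t in triple)
                   for triple in triples]
        id_to_term = {i: t for t, i in term_to_id.items()}
        return encoded, term_to_id, id_to_term

    encoded, _, _ = encode_triples([
        ("ex:alice", "rdf:type", "ex:Person"),
        ("ex:bob", "rdf:type", "ex:Person"),
    ])
    # encoded == [(0, 1, 2), (3, 1, 2)]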

Mixing-induced calcite precipitation and dissolution kinetics in micromodel experiments

Yoon, Hongkyu; Dewers, Thomas

Dissolved CO2 from geological CO2 sequestration may react with dissolved minerals in fractured rocks or confined aquifers and cause mineral precipitation. The overall rate of reaction can be limited by diffusive or dispersive mixing, and mineral precipitation can block pores and further hinder these processes. Mixing-induced calcite precipitation experiments were performed by injecting solutions containing CaCl2 and Na2CO3 through two separate inlets of a micromodel (1-cm x 2-cm x 40-microns); transverse dispersion caused the two solutions to mix along the center of the micromodel, resulting in calcite precipitation. The amount of calcite precipitation initially increased to a maximum and then decreased to a steady state value. Fluorescent microscopy and imaging techniques were used to visualize calcite precipitation, and the corresponding effects on the flow field. Experimental micromodel results were evaluated with pore-scale simulations using a 2-D Lattice-Boltzmann code for water flow and a finite volume code for reactive transport. The reactive transport model included the impact of pH upon carbonate speciation and calcite dissolution. We found that proper estimation of the effective diffusion coefficient and the reaction surface area is necessary to adequately simulate precipitation and dissolution rates. The effective diffusion coefficient was decreased in grid cells where calcite precipitated, and keeping track of reactive surface over time played a significant role in predicting reaction patterns. Our results may improve understanding of the fundamental physicochemical processes during CO2 sequestration in geologic formations.

Posterior predictive modeling using multi-scale stochastic inverse parameter estimates

Mckenna, Sean A.; Ray, Jaideep; Van Bloemen Waanders, Bart

Multi-scale binary permeability field estimation from static and dynamic data is completed using Markov chain Monte Carlo (MCMC) sampling. The binary permeability field is defined as high-permeability inclusions within a lower-permeability matrix. Static data are obtained as measurements of permeability with support consistent with the coarse-scale discretization. Dynamic data are advective travel times along streamlines calculated through a fine-scale field and averaged for each observation point at the coarse scale. Parameters estimated at the coarse scale (30 x 20 grid) are the spatially varying proportion of the high-permeability phase and the length and aspect ratio of the high-permeability inclusions. From the non-parametric posterior distributions estimated for these parameters, a recently developed sub-grid algorithm is employed to create an ensemble of realizations representing the fine-scale (3000 x 2000) binary permeability field. Each fine-scale ensemble member is instantiated by convolution of an uncorrelated multi-Gaussian random field with a Gaussian kernel defined by the estimated inclusion length and aspect ratio. Since the multi-Gaussian random field is itself a realization of a stochastic process, the procedure for generating fine-scale binary permeability field realizations is also stochastic. Two different methods are hypothesized to perform posterior predictive tests, examining different mechanisms for combining multi-Gaussian random fields with kernels defined from the MCMC sampling. Posterior predictive accuracy of the estimated parameters is assessed against a simulated ground truth for predictions at both the coarse scale (effective permeabilities) and the fine scale (advective travel time distributions). The two techniques for conducting posterior predictive tests are compared by their ability to recover the static and dynamic data. The skill of the inference and of the method for generating fine-scale binary permeability fields is evaluated through flow calculations on the resulting fields, using fine-scale realizations and comparing them against results obtained with the ground-truth fine-scale and coarse-scale permeability fields.
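
A single fine-scale realization of the kind described can be sketched in a few lines: smooth white noise with a Gaussian kernel shaped by the inferred inclusion length and aspect ratio, then truncate at the quantile that produces the target inclusion proportion. The mapping from (length, aspect ratio) to per-axis smoothing widths below is an assumed simplification, not the paper's exact kernel.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import norm

    def binary_realization(shape, proportion, length, aspect, seed=0):
        rng = np.random.default_rng(seed)
        # Convolve an uncorrelated Gaussian field with an anisotropic kernel.
        g = gaussian_filter(rng.standard_normal(shape),
                            sigma=(length, length * aspect))
        g = (g - g.mean()) / g.std()            # restandardize before truncation
        return g > norm.ppf(1.0 - proportion)   # True = high-perm inclusion

    field = binary_realization((3000, 2000), proportion=0.2,
                               length=10.0, aspect=0.5)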

Results with a 32-element dual mode imager

Mascarenhas, Nicholas M.; Cooper, Robert; Marleau, P.; Mrowka, Stanley; Brennan, J.

We present advances with a 32-element scalable, segmented dual mode imager. Scaling up the number of cells results in a factor-of-1.4 increase in efficiency over the system we deployed last year. Variable plane separation has been incorporated, which further improves the efficiency of the detector. By using 20 cm diameter cells we demonstrate that we could increase sensitivity by a factor of 6. We further demonstrate gamma-ray imaging from Compton scattering; this feature allows for powerful dual mode imaging. Selected results are presented that demonstrate these new capabilities.

Applying the neutron scatter camera to treaty verification and warhead monitoring

Mascarenhas, Nicholas M.; Cooper, Robert; Mrowka, Stanley; Brennan, J.; Marleau, P.

The neutron scatter camera was originally developed for a range of SNM detection applications. We are now exploring the feasibility of applications in treaty verification and warhead monitoring through experimentation, the maximum likelihood estimation method (MLEM), detector optimization, and MCNP-PoliMi simulations.

Rational design and synthesis of semi-conducting polymers

Cordaro, Joseph G.; Wong, Bryan M.

A rational approach was used to design polymeric materials for thin-film electronics applications, whereby theoretical modeling was used to determine synthetic targets. Time-dependent density functional theory calculations were used as a tool to predict the electrical properties of conjugated polymer systems. From these results, polymers with desirable energy levels and band-gaps were designed and synthesized. Measurements of optoelectronic properties were performed on the synthesized polymers and the results were compared to those of the theoretical model. From this work, the efficacy of the model was evaluated and new target polymers were identified.

Modeling the near-term risk of climate uncertainty: interdependencies among the U.S. states

Backus, George A.; Warren, Drake E.; Tidwell, Vincent C.

Decisions made to address climate change must start with an understanding of the risk that an uncertain future poses to human systems, which in turn means understanding both the consequence and the probability of a climate-induced impact occurring. In other words, addressing climate change is an exercise in risk-informed policy making, which implies that there is no single correct answer, or even a way to be certain about a single answer; the uncertainty in future climate conditions will always be present and must be taken as a working condition for decision making. In order to better understand the implications of uncertainty for risk, and to provide a near-term rationale for policy interventions, this study estimates the impacts of responses to climate change on U.S. state- and national-level economic activity by employing a risk-assessment methodology for evaluating uncertain future climatic conditions. Using the results from the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) as a proxy for climate uncertainty, changes in hydrology over the next 40 years were mapped and then modeled to determine the physical consequences for economic activity and to perform a detailed 70-industry analysis of the economic impacts among the interacting lower-48 states. The analysis determines industry-level effects, employment impacts at the state level, interstate population migration, consequences to personal income, and ramifications for the U.S. trade balance. The conclusions show that the average risk of damage to the U.S. economy from climate change is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs. Further analysis shows that an increase in uncertainty raises this risk. This paper presents the methodology behind the approach, a summary of the underlying models, and the path forward for improving the approach.

Fast Katz and commuters: efficient estimation of social relatedness

Gleich, David F.

Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing (1) the score for a single pair of nodes and (2) the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek; this algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that accesses only a small portion of the graph and is related to techniques used in personalized PageRank computation. To test the scalability and accuracy of our algorithms, we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
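
For orientation, the Katz score between nodes i and j counts walks of every length with geometric damping: K_ij = sum over k >= 1 of alpha^k (A^k)_ij. The paper bounds this quantity through the Lanczos/quadrature connection without forming the series; the truncated iteration below is just the definition, useful as a reference implementation.

    import numpy as np

    def katz_pair(A, i, j, alpha, kmax=100):
        # Truncated Katz series; converges when alpha < 1 / lambda_max(A).
        v = np.zeros(A.shape[0])
        v[j] = 1.0
        score = 0.0
        for k in range(1, kmax + 1):
            v = A @ v                  # v = A^k e_j, so v[i] = (A^k)_ij
            score += alpha ** k * v[i]
        return score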

An information management system for a spent nuclear fuel interim storage facility

Giles, Todd G.; Finch, Robert

We describe an integrated information management system for an independent spent fuel storage installation (ISFSI) using dry storage that can provide (1) secure and authenticated data collection, (2) data analysis, (3) dissemination of information to appropriate stakeholders via a secure network, and (4) increased public confidence in and support of the facility licensing and operation through increased transparency. This information management system is part of a collaborative project between Sandia National Laboratories, Taiwan Power Co., and the Fuel Cycle Materials Administration of Taiwan's Atomic Energy Council, which is investigating how to implement this concept.

Interaction of a planar shock with a dense field of particles in a multiphase shock tube

Beresh, Steven J.; Kearney, Sean P.; Trott, Wayne M.; Castañeda, Jaime N.; Pruett, Brian; Baer, M.R.

A novel multiphase shock tube has been constructed to test the interaction of a planar shock wave with a dense gas-solid field of particles. The particle field is generated by a gravity-fed method that results in a spanwise curtain of 100-micron particles with a volume fraction of about 15%. Interactions with incident shock Mach numbers of 1.67 and 1.95 are reported. High-speed schlieren imaging is used to reveal the complex wave structure associated with the interaction. After the impingement of the incident shock, transmitted and reflected shocks are observed, which lead to differences in flow properties across the streamwise dimension of the curtain. Tens of microseconds after the onset of the interaction, the particle field begins to propagate downstream and disperse. The spread of the particle field, as a function of its position, is seen to be nearly identical for both Mach numbers. Immediately downstream of the curtain, the peak pressures associated with the Mach 1.67 and 1.95 interactions are about 35% and 45% greater, respectively, than in tests without particles. For both Mach numbers tested, the energy and momentum fluxes in the induced flow far downstream are reduced by about 30-40% by the presence of the particle field.

Coupled computational fluid dynamics and heat transfer analysis of the VHTR lower plenum

The very high temperature reactor (VHTR) concept is being developed by the US Department of Energy (DOE) and other groups around the world for the future generation of electricity at high thermal efficiency (> 48%) and co-generation of hydrogen and process heat. This Generation-IV reactor would operate at elevated exit temperatures of 1,000-1,273 K, and the fueled core would be cooled by forced convection of helium gas. For the prismatic-core VHTR, which is the focus of this analysis, the velocity of the hot helium flow exiting the core into the lower plenum (LP) could be 35-70 m/s. The impingement of the resulting gas jets onto the adiabatic plate at the bottom of the LP could develop hot spots, thermal stratification, and inadequate mixing of the gas exiting the vessel to the turbo-machinery for energy conversion. The complex flow field in the LP is further complicated by the presence of large cylindrical graphite posts that support the massive core and the inner and outer graphite reflectors. Because there are approximately 276 channels in the VHTR core from which helium exits into the LP and a total of 155 support posts, the flow field in the LP includes cross flow, multiple-jet interaction, flow stagnation zones, vortex interaction, vortex shedding, entrainment, large variation in Reynolds number (Re), recirculation, and regions of mixing enhancement and suppression. For such a complex flow field, experimental results at operating conditions are not currently available. Instead, the objective of this paper is to numerically simulate the flow field in the LP of a prismatic-core VHTR using Sandia National Laboratories' Fuego, a 3D, massively parallel, generalized computational fluid dynamics (CFD) code with numerous turbulence and buoyancy models and simulation capabilities for complex gas flow fields, with and without thermal effects. The code predictions for simpler flow fields of single and swirling gas jets, with and without a cross flow, are validated using reported experimental data and theory. The key processes in the LP are identified using a phenomena identification and ranking table (PIRT). It may be argued that a CFD code that accurately simulates simplified, single-effect flow fields of increasing complexity is likely to adequately model the complex flow field in the VHTR LP, subject to future experimental validation. The PIRT process and the spatial and temporal discretizations implemented in the present analysis using Fuego established confidence in the verification and validation (V&V) calculations and in the conclusions reached based on the simulation results. The calculations included the helicoid vortex swirl model, the dynamic Smagorinsky large eddy simulation (LES) turbulence model, participating media radiation (PMR), and 1D conjugate heat transfer (CHT). The full-scale, half-symmetry LP mesh used in the simulation consisted of unstructured hexahedral elements and accounted for the graphite posts, the helium jets, the exterior walls, and the bottom plate with an adiabatic outer surface. Results indicated significant enhancements in heat transfer, flow mixing, and entrainment in the VHTR LP when using swirling inserts at the exit of the helium flow channels into the LP. The impact of various swirl angles on the flow mixing and heat transfer in the LP is quantified, including the formation of the central recirculation zone (CRZ) and the effect of LP height. Results also showed that, in addition to the enhanced mixing, the swirling inserts introduce negligible additional pressure losses and are likely to eliminate the formation of hot spots.

Pressure fluctuations beneath turbulent spots and instability wave packets in a hypersonic boundary layer

Beresh, Steven J.

The development of turbulent spots in a hypersonic boundary layer was studied on the nozzle wall of the Boeing/AFOSR Mach-6 Quiet Tunnel. Under quiet flow conditions, the nozzle wall boundary layer remains laminar and grows very thick over the long nozzle length. This allows the development of large turbulent spots that can be readily measured with pressure transducers. Measurements of naturally occurring wave packets and developing turbulent spots were made; the peak frequencies of these natural wave packets were in agreement with second-mode computations. For a controlled study, the breakdown of disturbances created by spark and glow perturbations was studied at similar freestream conditions. The spark perturbations were the most effective at creating large wave packets that broke down into turbulent spots. The flow disturbances created by the controlled perturbations were analyzed to obtain amplitude criteria for nonlinearity and breakdown as well as the convection velocities of the turbulent spots. Disturbances first grew into linear instability waves and then quickly became nonlinear. Throughout the nonlinear growth of the wave packets, large harmonics are visible in the power spectra. As breakdown begins, the peak amplitudes of the instability waves and harmonics decrease into the rising broadband frequencies. Instability waves are still visible on either side of the growing turbulent spots during this breakdown process.

Fuel and core testing plan for a target fueled isotope production reactor

Dahl, James J.; Coats, Richard L.; Parma, Edward J.

In recent years there has been an unstable supply of the critical diagnostic medical isotope 99mTc. Several concepts and designs have been proposed to produce 99Mo, the parent nuclide of 99mTc, at a commercial scale sufficient to stabilize the world supply. This work lays out a testing and experiment plan for a proposed 2 MW open-pool reactor fueled by Low Enriched Uranium (LEU) 99Mo targets. The experiments and tests necessary to support licensing of the reactor design are described, and how these experiments and tests will help establish the safe operating envelope for a medical isotope production reactor is discussed. The experiments and tests will facilitate a focused and efficient licensing process in order to bring on line a needed production reactor dedicated to supplying medical isotopes. The Target Fuel Isotope Reactor (TFIR) design calls for an active core region that is approximately 40 cm in diameter and 40 cm in fuel height. It contains up to 150 cylindrical, 1-cm diameter LEU oxide fuel pins clad with Zircaloy (zirconium alloy), in an annular hexagonal array on a ~2.0 cm pitch, surrounded radially by a graphite or Be reflector. The reactor is similar to U.S. university reactors in power, hardware, and safety/control systems. Fuel/target pin fabrication is based on existing light water reactor fuel fabrication processes. However, as part of the licensing process, experiments must be conducted to confirm analytical predictions of steady-state power and accident conditions. The experiment and test plan will be conducted in phases and will utilize existing facilities at the U.S. Department of Energy's Sandia National Laboratories. The first phase is to validate the predicted reactor core neutronics at delayed critical, zero power, and very low power. This will be accomplished using the Sandia Critical Experiment (CX) platform: a full-scale TFIR core will be built in the CX and delayed critical measurements will be taken. For low-power experiments, fuel pins can be removed after the experiment, and relative power profiles (radial and axial) can be determined using Sandia's metrology lab. In addition to validating the neutronic analyses, experiments confirming the heat transfer properties of the target/fuel pins and core will be conducted. Fuel/target pin power limits can be verified with out-of-pile (electrical heating) thermal-hydraulic experiments, which will yield data on the heat flux across the Zircaloy clad and establish safety margins and operating limits. Using Sandia's Annular Core Research Reactor (ACRR), a 4 MW TRIGA-type research reactor, target/fuel pins can be driven to desired fission power levels for long durations. Post-experiment inspection of the pins can be conducted in the Auxiliary Hot Cell Facility to observe changes in the mechanical properties of the LEU matrix and burn-up effects. Transient tests can also be conducted at the ACRR to observe target/fuel pin performance during accident conditions: target/fuel pins will be placed in double experiment containment and driven by pulsing the ACRR until target/fuel failure is observed. This will allow for extrapolation of the analytical work to confirm safety margins.

III-nitride nanowires: novel materials for solid-state lighting

Wang, George T.; Li, Qiming L.; Huang, Jian Y.; Armstrong, Andrew A.

Although planar heterostructures dominate current solid-state lighting (SSL) architectures, 1D nanowires have distinct and advantageous properties that may eventually enable higher-efficiency, longer-wavelength, and cheaper devices. However, in order to fully realize the potential of nanowire-based SSL, several challenges remain in the areas of controlled nanowire synthesis, nanowire device integration, and understanding and controlling the nanowire electrical, optical, and thermal properties. Here, recent results are reported on the aligned growth of GaN and III-nitride core-shell nanowires, along with extensive results providing insights into the nanowire properties obtained using cutting-edge structural, electrical, thermal, and optical nanocharacterization techniques. A new top-down method for fabricating periodic arrays of GaN nanorods, and subsequent nanorod LED fabrication, is also presented.

Investigating the impact of the Cielo Cray XE6 architecture on scientific application codes

Vaughan, Courtenay T.; Rajan, Mahesh; Barrett, Richard F.; Doerfler, Douglas W.; Pedretti, Kevin P.

Cielo, a Cray XE6, is the newest capability machine of the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket, oct-core AMD Magny-Cours compute nodes linked by Cray's Gemini interconnect. Its primary mission objective is to enable a suite of ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement on a successful architecture previously available to many of our codes, providing a basis for understanding the capabilities of the new architecture. Using three codes strategically important to the ASC campaign, supplemented with micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

Fast generation of space-filling Latin hypercube sample designs

13th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference 2010

Dalbey, Keith; Karystinos, George N.

Latin Hypercube Sampling (LHS) and Jittered Sampling (JS) both achieve better convergence than standard Monte Carlo Sampling (MCS) by using stratification to obtain a more uniform selection of samples, although LHS and JS use different stratification strategies. The "Koksma-Hlawka-like inequality" bounds the error in a computed mean in terms of the sample design's discrepancy, which is a common metric of uniformity. However, even the "fast" formulas available for certain useful L2-norm discrepancies require O(N^2 M) operations, where M is the number of dimensions and N is the number of points in the design. It is intuitive that "space-filling" designs will have a high degree of uniformity. In this paper we propose a new metric of the space-filling property, called "Binning Optimality," which can be evaluated in O(N log(N)) operations. A design is "Binning Optimal" in base b if, when the hypercube is recursively divided into b^M congruent disjoint sub-cubes, each sub-cube of a particular generation contains the same number of points, until the sub-cubes are small enough that they all contain either 0 or 1 points. The O(N log(N)) cost of determining whether a design is binning optimal comes from quick-sorting the points into Morton order, i.e. sorting the points according to their position on a space-filling Z-curve. We also present an O(N log(N)) fast algorithm to generate Binning Optimal Symmetric Latin Hypercube Sample (BOSLHS) designs. These BOSLHS designs combine the best features of, and are superior in several metrics to, standard LHS and JS designs. Our algorithm takes significantly less than 1 second to generate M = 8 dimensional space-filling LHS designs with N = 2^16 = 65536 points, compared to previous work which requires "minutes" to generate designs with N = 100 points. © 2010 by the American Institute of Aeronautics and Astronautics, Inc.
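
The binning-optimality definition can be checked directly by histogramming bit prefixes of the points' grid coordinates, one generation at a time. The sketch below handles base b = 2 only; the paper's O(N log(N)) version instead quick-sorts Morton (Z-curve) codes, so this direct check is for clarity rather than speed.

    import numpy as np

    def is_binning_optimal(points, bits=10):
        # points: (N, M) array in the unit hypercube. At generation g the cube
        # splits into 2**(g*M) congruent sub-cubes, identified by the leading
        # g bits of each coordinate; all sub-cubes must hold equal counts
        # until every sub-cube holds 0 or 1 points.
        n, m = points.shape
        cells = np.minimum((points * (1 << bits)).astype(np.int64),
                           (1 << bits) - 1)
        for g in range(1, bits + 1):
            prefix = cells >> (bits - g)
            _, counts = np.unique(prefix, axis=0, return_counts=True)
            if counts.max() <= 1:
                return True                      # reached the 0-or-1 regime
            if len(counts) != 2 ** (g * m) or counts.min() != counts.max():
                return False                     # unequal occupancy
        return True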

Conceptual structure of performance assessments for the geologic disposal of radioactive waste

10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010

Helton, Jon C.; Hansen, Clifford; Sallaberry, Cedric J.

A conceptual structure for performance assessments (PAs) for radioactive waste disposal facilities and other complex engineered facilities based on the following three basic conceptual entities is described: EN1, a probability space that characterizes aleatory uncertainty; EN2, a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and EN3, a probability space that characterizes epistemic uncertainty. The implementation of this structure is illustrated with results from PAs for the Waste Isolation Pilot Plant and the proposed Yucca Mountain repository for high-level radioactive waste.
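
The three entities compose as a nested sampling loop: the outer loop draws epistemic parameter vectors from EN3, the inner loop draws aleatory futures from EN1, and EN2 maps each (future, parameter) pair to a consequence. A minimal sketch with placeholder samplers and consequence function; all names are illustrative, not from the cited PAs.

    import numpy as np

    def pa_double_loop(sample_en3, sample_en1, consequence_en2,
                       n_epistemic=100, n_aleatory=1000):
        # Returns one expected consequence per epistemic vector; the spread
        # of the returned array displays the epistemic uncertainty.
        expected = []
        for _ in range(n_epistemic):
            e = sample_en3()
            vals = [consequence_en2(sample_en1(), e) for _ in range(n_aleatory)]
            expected.append(np.mean(vals))
        return np.array(expected)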

Fracture and fatigue of commercial grade API pipeline steels in gaseous hydrogen

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

San Marchi, Chris; Somerday, Brian P.; Nibur, Kevin A.; Stalheim, Douglas G.; Boggess, Todd; Jansto, Steve

Gaseous hydrogen is an alternative to petroleum-based fuels, but it is known to significantly reduce the fatigue and fracture resistance of steels. Steels are commonly used for containment and distribution of gaseous hydrogen, albeit under conservative operating conditions (i.e., large safety factors) to mitigate so-called gaseous hydrogen embrittlement. Economical methods of distributing gaseous hydrogen (such as using existing pipeline infrastructure) are necessary to make hydrogen fuel competitive with alternatives. The effects of gaseous hydrogen on the fracture resistance and fatigue resistance of pipeline steels, however, have not been comprehensively evaluated, and these data are necessary for structural integrity assessment in gaseous hydrogen environments. In addition, existing standardized test methods for environment-assisted cracking under sustained load appear to be inadequate to characterize low-strength steels (such as pipeline steels) exposed to relevant gaseous hydrogen environments. In this study, the principles of fracture mechanics are used to compare the fracture and fatigue performance of two pipeline steels in high-purity gaseous hydrogen at two pressures: 5.5 MPa and 21 MPa. In particular, elastic-plastic fracture toughness and fatigue crack growth rates were measured using the compact tension geometry and a pressure vessel designed for testing materials while exposed to gaseous hydrogen. Copyright © 2010 by ASME.
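
For orientation, the stress-intensity factor that drives both the fracture and fatigue-crack-growth measurements in a compact tension specimen is computed from the standard ASTM geometry polynomial. A sketch with illustrative dimensions (this is the textbook CT formula, not data or code from this paper):

```python
import numpy as np

def ct_stress_intensity(P_N, B_m, W_m, a_m):
    """Stress-intensity factor K for a compact tension (CT) specimen,
    using the standard ASTM geometry polynomial (valid for a/W >= 0.2)."""
    alpha = a_m / W_m
    f = ((2 + alpha) / (1 - alpha) ** 1.5) * (
        0.886 + 4.64 * alpha - 13.32 * alpha**2
        + 14.72 * alpha**3 - 5.6 * alpha**4
    )
    return P_N / (B_m * np.sqrt(W_m)) * f    # Pa*sqrt(m)

# Illustrative numbers only: a 10 kN load range on a B = 12.7 mm,
# W = 50.8 mm specimen with a 25 mm crack.
dK = ct_stress_intensity(P_N=10e3, B_m=0.0127, W_m=0.0508, a_m=0.025)
print(f"Delta-K ~ {dK / 1e6:.1f} MPa*sqrt(m)")
```

In a fatigue test, the applied load range maps to a stress-intensity range Delta-K, against which crack growth rates da/dN are reported.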

More Details

Solid oxide fuel cell electrolytes produced by a combination of suspension plasma spray and very low pressure plasma spray

Materials Science and Technology Conference and Exhibition 2010, MS and T'10

Fleetwood, James D.; Slamovich, E.; Trice, R.; Hall, A.; Mccloskey, James F.

Plasma spray coating techniques allow unique control of electrolyte microstructures and properties as well as facilitating deposition on complex surfaces. This can enable significantly improved solid oxide fuel cells (SOFCs), including non-planar designs. SOFCs are promising because they directly convert the oxidation of fuel into electrical energy. However, electrolytes deposited using conventional plasma spray are porous and >50 microns thick. One solution to form dense, thin electrolytes of ideal composition for SOFCs is to combine suspension plasma spray (SPS) with very low pressure plasma spray (VLPPS). Increased compositional control is achieved due to dissolved dopant compounds in the suspension that are incorporated into the coating during plasma spraying. Thus, it is possible to change the chemistry of the feed stock during deposition. In the work reported, suspensions of sub-micron diameter 8 mol.% Y{sub 2}O{sub 3}-ZrO{sub 2} (YSZ) powders were sprayed on NiO-YSZ anodes at Sandia National Laboratories' (SNL) Thermal Spray Research Laboratory (TSRL). These coatings were compared to the same suspensions doped with scandium nitrate. The pressure in the chamber was 2.4 torr and the plasma was formed from a combination of argon and hydrogen gases. The resultant electrolytes were well adhered to the anode substrates and appeared to be ∼10 microns thick. The microstructure of the resultant electrolytes will be reported. Copyright © 2010 MS&T'10®.

More Details

Sulfuric acid decomposition for the sulfur-based thermochemical cycles

2nd International Topical Meeting on Safety and Technology of Nuclear Hydrogen Production, Control, and Management 2010

Moore, Robert; Vernon, Milton E.; Parma, Edward J.; Pickard, Paul; Rochau, Gary E.

In this work, we describe a novel design for an H{sub 2}SO{sub 4} decomposer. The decomposition of H{sub 2}SO{sub 4} to produce SO{sub 2} is a common processing operation in the sulfur-based thermochemical cycles for hydrogen production, where acid decomposition takes place at 850°C in the presence of a catalyst. The combination of high temperature and sulfuric acid creates a very corrosive environment that presents significant design challenges. The new decomposer design is based on a bayonet-type heat exchanger tube with the annular space packed with a catalyst. The unit is constructed of silicon carbide and other highly corrosion resistant materials. The new design integrates acid boiling, superheating, decomposition, and heat recuperation into a single process and eliminates problems of corrosion and failure of high temperature seals encountered in previous testing using metallic construction materials. The unit was tested by varying the acid feed rate and decomposition temperature and pressure.

More Details

The Sandia MEMS Passive Shock Sensor: dormancy and aging

Tanner, Danelle M.

This report presents the results of an aging experiment that was established in FY09 and completed in FY10 for the Sandia MEMS Passive Shock Sensor. A total of 37 packages were aged at different temperatures and times, and were then tested after aging to determine functionality. Aging temperatures were selected at 100°C and 150°C, with times ranging from as short as 100 hours to as long as 1 year to simulate a predicted aging of up to 20 years. In all of the tests and controls, 100% of the devices continued to function normally.

More Details

Analysis of SNL/MSU/DOE Fatigue Database Trends for Wind Turbine Blade Materials

Mandell, John F.; Samborsky, Daniel D.; Agastra, Pancasatya; Sears, Aaron T.; Wilson, Timothy J.; Ashwill, Thomas D.; Laird, Daniel L.

More Details

Description of heat flux measurement methods used in hydrocarbon and propellant fuel fires at Sandia

Nakos, James T.

The purpose of this report is to describe the methods commonly used to measure heat flux in fire applications at Sandia National Laboratories in both hydrocarbon (JP-8 jet fuel, diesel fuel, etc.) and propellant fires. Because these environments are very severe, many commercially available heat flux gauges do not survive the test, so alternative methods had to be developed. Specially built sensors include 'calorimeters' that use a temperature measurement to infer heat flux, either through a model (a heat balance on the sensing surface) or through an inverse heat conduction method. These specially built sensors are made rugged so they will survive the environment, and are therefore not optimized for ease of use or accuracy. Other methods include radiometers, co-axial thermocouples, directional flame thermometers (DFTs), Sandia 'heat flux gauges', transpiration radiometers, and transverse Seebeck coefficient heat flux gauges. Typical applications are described, and the pros and cons of each method are listed.
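
The simplest of the temperature-based inferences the report mentions, a lumped heat balance on the sensing surface, can be sketched as follows; the material properties and temperature trace below are invented for illustration and are not values from the report:

```python
import numpy as np

def heat_flux_from_temperature(times_s, temps_K, rho, c_p, thickness_m):
    """Lumped heat balance on a thin slug sensing surface:
    q'' ~ rho * c_p * L * dT/dt, neglecting losses (illustrative only)."""
    dT_dt = np.gradient(temps_K, times_s)    # finite-difference heating rate
    return rho * c_p * thickness_m * dT_dt   # W/m^2

# Hypothetical trace: a 3 mm stainless slug heating at ~8 K/s in a fire.
t = np.linspace(0.0, 60.0, 61)
T = 300.0 + 8.0 * t
q = heat_flux_from_temperature(t, T, rho=7900.0, c_p=500.0, thickness_m=0.003)
print(f"inferred flux ~ {q.mean() / 1000:.0f} kW/m^2")
```

Real calorimeter reductions must also account for losses and in-depth conduction, which is why the report distinguishes simple heat-balance models from inverse heat conduction methods.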

More Details

U.S. Nuclear Regulatory Commission Extremely Low Probability of Rupture pilot study : xLPR framework model user's guide

Mattie, Patrick; Sallaberry, Cedric J.; Mcclellan, Yvonne

For the U.S. Nuclear Regulatory Commission (NRC) Extremely Low Probability of Rupture (xLPR) pilot study, Sandia National Laboratories (SNL) was tasked to develop and evaluate a probabilistic framework using a commercial software package for Version 1.0 of the xLPR code. Version 1.0 of the xLPR code is focused on assessing the probability of rupture due to primary water stress corrosion cracking in dissimilar metal welds in pressurizer surge nozzles. Future versions of this framework will expand the capabilities to other cracking mechanisms and other piping systems for both pressurized water reactors and boiling water reactors. The goal of the pilot study project is to plan the xLPR framework transition from Version 1.0 to Version 2.0; hence the initial Version 1.0 framework and code development will be used to define the requirements for Version 2.0. The software documented in this report has been developed and tested solely for this purpose. This framework and demonstration problem will be used to evaluate the commercial software's capabilities and applicability for use in creating the final version of the xLPR framework. This report details the design, system requirements, and the steps necessary to use the commercial-code-based xLPR framework developed by SNL.

More Details

Simulation and optimization of ultra thin photovoltaics

Cruz-Campa, Jose L.

Sandia National Laboratories (SNL) conducts pioneering research and development in Micro-Electro-Mechanical Systems (MEMS) and solar cell research. This dissertation project combines these two areas to create ultra-thin small-form-factor crystalline silicon (c-Si) solar cells. These miniature solar cells create a new class of photovoltaics with potentially novel applications and benefits such as dramatic reductions in cost, weight, and material usage. At the beginning of the project, unusually low efficiencies were obtained in the research group. The intention of this research was thus to investigate the main causes of the low efficiencies through simulation, design, fabrication, and characterization. Commercial simulation tools were used to find the main causes of low efficiency. Once the causes were identified, the results were used to create improved designs and build new devices. In the simulations, parameters were varied to see the effect on performance. The researched parameters were: resistance, wafer lifetime, contact separation, implant characteristics (size, dosage, energy, ratio between the species), contact size, substrate thickness, surface recombination, and light concentration. Of these parameters, high-quality surface passivation proved the most important for obtaining higher-performing cells. Therefore, several approaches for enhancing the passivation were tried, characterized, and tested on cells. In addition, a methodology was created to contact and test the performance of all the cells presented in the dissertation under calibrated light. Next-generation cells that could incorporate all of the optimized layers, including the passivation, were also designed, built, and tested. In conclusion, through this investigation, solar cells that incorporate optimized designs and passivation schemes for ultrathin solar cells were created for the first time. Through the application of the methods discussed in this document, the efficiency of the solar cells increased from below 1% to 15% in Microsystems Enabled Photovoltaic (MEPV) devices.

More Details

Atomic magnetometer for human magnetoencephalography

Johnson, Cort N.; Schwindt, Peter D.

We have developed a high sensitivity (<5 fTesla/{radical}Hz), fiber-optically coupled magnetometer to detect magnetic fields produced by the human brain. This is the first demonstration of a noncryogenic sensor that could replace cryogenic superconducting quantum interference device (SQUID) magnetometers in magnetoencephalography (MEG) and is an important advance in realizing cost-effective MEG. Within the sensor, a rubidium vapor is optically pumped with 795 nm laser light while field-induced optical rotations are measured with 780 nm laser light. Both beams share a single optical axis to maximize simplicity and compactness. In collaboration with neuroscientists at The Mind Research Network in Albuquerque, NM, the evoked responses resulting from median nerve and auditory stimulation were recorded with the atomic magnetometer and a commercial SQUID-based MEG system, with the signals comparing favorably. Multi-sensor operation has been demonstrated with two atomic magnetometers placed on opposite sides of the head. Straightforward miniaturization would enable high-density sensor arrays for whole-head magnetoencephalography.

More Details
Results 70601–70700 of 99,299