Publications

Enhanced training effectiveness using automated student assessment

Forsythe, James C.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides an empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback.

Dual wavelength laser damage testing for high energy lasers

Kimmel, Mark; Rambo, Patrick K.; Schwarz, Jens; Atherton, B.

As high energy laser systems evolve towards higher energies, fundamental material properties such as the laser-induced damage threshold (LIDT) of the optics limit the overall system performance. The Z-Backlighter Laser Facility at Sandia National Laboratories uses a pair of such kilojoule-class Nd:phosphate glass lasers for x-ray radiography of high energy density physics events on the Z-Accelerator. These two systems, the Z-Beamlet system operating at 527 nm/1 ns and the Z-Petawatt system operating at 1054 nm/0.5 ps, can be combined for some experimental applications. In these scenarios, dichroic beam-combining optics and subsequent dual wavelength high reflectors will see a high fluence from combined simultaneous laser exposure and may even see lingering effects when used in pump-probe configurations. Only recently have researchers begun to explore such concerns, looking at individual and simultaneous exposures of optics to 1064 nm and third-harmonic 355 nm light from Nd:YAG [1]. However, to our knowledge, measurements of simultaneous and delayed dual wavelength damage thresholds on such optics have not been performed for exposure to 1054 nm light and its second harmonic, especially when the pulses are of disparate duration. The Z-Backlighter Facility has an instrumented damage tester set up to examine laser-induced damage thresholds in a variety of such situations [2]. Using this damage tester, we have measured the LIDT of dual wavelength high reflectors at 1054 nm/0.5 ps and 532 nm/7 ns, separately and spatially combined, both co-temporal and delayed, with single and multiple exposures. We found that the LIDT of the sample at 1054 nm/0.5 ps can be significantly lowered, from a damage fluence of 1.32 J/cm² with 1054 nm/0.5 ps light only to 1.05 J/cm² with the simultaneous presence of 532 nm/7 ns laser light at a fluence of 8.1 J/cm². This reduction of the 1054 nm/0.5 ps LIDT continues as the fluence of the simultaneously present 532 nm/7 ns light increases. The reduction does not occur when the two pulses are temporally separated. This paper will also present dual wavelength LIDT results for commercial dichroic beam-combining optics simultaneously exposed to laser light at 1054 nm/2.5 ns and 532 nm/7 ns.

Adagio 4.16 users guide

Spencer, Benjamin W.

Adagio is a three-dimensional, implicit solid mechanics code with a versatile element library, nonlinear material models, and capabilities for modeling large deformation and contact. Adagio is a parallel code, and its nonlinear solver and contact capabilities enable scalable solutions of large problems. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. The Adagio 4.16 User's Guide provides information about the functionality in Adagio and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Adagio are similar to those of the code Presto [3]. Presto, like Adagio, is a solid mechanics code built on the SIERRA Framework. The primary difference between the two codes is that Presto uses explicit time integration for transient dynamics analysis, whereas Adagio is an implicit code. Because of the similarities in input and usage between Adagio and Presto, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Presto may be found in the Adagio user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code. For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact in the two guides differs. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4; JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D. Pronto3D is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13]. ACME is a third-party library for contact. One of the key concepts for the command structure in the input file is the concept of scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.

Presto 4.16 users guide

Spencer, Benjamin W.

Presto is a three-dimensional transient dynamics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. Contact capabilities are parallel and scalable. The Presto 4.16 User's Guide provides information about the functionality in Presto and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Presto are similar to those of the code Adagio [3]. Adagio is a three-dimensional quasi-static code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. Adagio, like Presto, is built on the SIERRA Framework [1]. Contact capabilities for Adagio are also parallel and scalable. A significant feature of Adagio is that it offers a multilevel, nonlinear iterative solver. Because of the similarities in input and usage between Presto and Adagio, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Adagio may be found in the Presto user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code. For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact in the two guides differs. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4; JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D. Pronto3D is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13]. ACME is a third-party library for contact. One of the key concepts for the command structure in the input file is the concept of scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.

Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA

Chen, Ken S.

Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow, and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, and temperature. Moreover, when experimental data are available in the form of polarization curves or local distributions of current and reactant/product species (e.g., O₂ and H₂O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed through manual adjustment of parameters, which is also common in parameter studies. We demonstrate a systematic approach based on DAKOTA, a widely available toolkit developed at Sandia that supports many kinds of design studies, such as sensitivity analysis, optimization, and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested, and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
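
The coupling described in this abstract amounts to mapping a vector of model parameters to a set of system responses and letting the toolkit drive the sampling. As a minimal illustration only - not the authors' PEMFC model and not the actual DAKOTA interface - the Python sketch below wires a hypothetical cell-voltage function to a simple one-at-a-time sensitivity study; the parameter names and nominal values are invented for the example.

```python
import numpy as np

# Hypothetical stand-in for a PEMFC polarization response: cell voltage at a
# fixed current density as a function of a few model parameters (illustrative only).
def cell_voltage(membrane_conductivity, exchange_current_density, gdl_porosity,
                 current_density=1.0):
    ocv = 1.1  # open-circuit voltage, V (assumed)
    activation = 0.05 * np.log(current_density / exchange_current_density)
    ohmic = current_density * 0.002 / membrane_conductivity
    concentration = 0.05 * current_density / gdl_porosity
    return ocv - activation - ohmic - concentration

nominal = {"membrane_conductivity": 10.0,      # S/m (assumed)
           "exchange_current_density": 1e-3,   # A/cm^2 (assumed)
           "gdl_porosity": 0.6}

# One-at-a-time sensitivity: perturb each parameter by +/-10% and record the
# change in the response, analogous to a simple toolkit-driven parameter study.
base = cell_voltage(**nominal)
for name, value in nominal.items():
    low = cell_voltage(**dict(nominal, **{name: 0.9 * value}))
    high = cell_voltage(**dict(nominal, **{name: 1.1 * value}))
    print(f"{name:26s} dV over +/-10%: {high - low:+.4f} V (baseline {base:.3f} V)")
```

In the actual workflow, DAKOTA would own the sampling loop and the PEMFC simulator would sit behind an analysis-driver script that performs exactly this parameter-to-response mapping.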

Reliability-based design optimization using efficient global reliability analysis

Eldred, Michael

Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.
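
To make the structure of reliability-based design optimization concrete, the sketch below uses brute-force Monte Carlo in place of the Efficient Global Reliability Analysis described above (the paper's point is precisely that EGRA is far cheaper for expensive implicit performance functions); the limit state, distributions, and reliability target are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Notional limit state: failure when a random load exceeds a capacity that
# grows with the design variable d (e.g., a member thickness). Assumed model.
def limit_state(d, load):
    capacity = 3.0 * d
    return capacity - load          # g < 0 means failure

def failure_probability(d, n_samples=200_000):
    load = rng.normal(loc=10.0, scale=2.0, size=n_samples)  # assumed load distribution
    return np.mean(limit_state(d, load) < 0.0)

# RBDO in its simplest form: minimize weight (proportional to d) subject to
# P_fail <= target. A coarse grid search stands in for the optimizer, and
# Monte Carlo stands in for the reliability analysis that EGRA accelerates.
target = 1e-3
candidates = np.linspace(3.0, 7.0, 81)
feasible = [d for d in candidates if failure_probability(d) <= target]
print(f"lightest feasible design: d = {min(feasible):.2f}")
```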

Defining capabilities of Si and InP photonics

Vawter, Gregory A.

Monolithic photonic integrated circuits (PICs) have a long history reaching back more than 40 years. During that time, and particularly in the past 15 years, the technology has matured and the application space grown to span sophisticated tunable diode lasers, 40 Gb/s electrical-to-optical signal converters with complex data formats, wavelength multiplexors and routers, as well as chemical/biological sensors. Most of this activity has centered in recent years on optical circuits built on either Silicon or InP substrates. This talk will review the three classes of PIC and highlight the unique strengths, and weaknesses, of PICs based on Silicon and InP substrates. Examples will be provided from recent R&D activity.

Data-free inference of the joint distribution of uncertain model parameters

Berry, Robert D.; Najm, Habib N.; Debusschere, Bert; Adalsteinsson, Helgi

It is known that, in general, the correlation structure in the joint distribution of model parameters is critical to the uncertainty analysis of that model. Very often, however, studies in the literature only report nominal values for parameters inferred from data, along with confidence intervals for these parameters, but no details on the correlation or full joint distribution of these parameters. When neither posterior nor data are available, but only summary statistics such as nominal values and confidence intervals, a joint PDF must be chosen. Given the summary statistics, it may be neither reasonable nor necessary to assume the parameters are independent random variables. We demonstrate, using a Bayesian inference procedure, how to construct a posterior density for the parameters exhibiting self-consistent correlations, in the absence of data, given (1) the fit-model, (2) nominal parameter values, (3) bounds on the parameters, and (4) a postulated statistical model, around the fit-model, for the missing data. Our approach ensures external Bayesian updating while marginalizing over possible data realizations. We then address the matching of given parameter bounds through the choice of hyperparameters, which are introduced in postulating the statistical model but are not given nominal values. We discuss some possible approaches, including (1) inferring them in a separate Bayesian inference loop and (2) optimization. We also perform an empirical evaluation of the algorithm, showing that the posterior obtained with this data-free inference compares well with the true posterior obtained from inference against the full data set.

Exact results and field-theoretic bounds for randomly advected propagating fronts, and implications for turbulent combustion

Mayo, Jackson R.; Kerstein, Alan R.

One of the authors previously conjectured that the wrinkling of propagating fronts by weak random advection increases the bulk propagation rate (turbulent burning velocity) in proportion to the 4/3 power of the advection strength. An exact derivation of this scaling is reported. The analysis shows that the coefficient of this scaling is equal to the energy density of a lower-dimensional Burgers fluid with a white-in-time forcing whose spatial structure is expressed in terms of the spatial autocorrelation of the flow that advects the front. The replica method of field theory has been used to derive an upper bound on the coefficient as a function of the spatial autocorrelation. High precision numerics show that the bound is usefully sharp. Implications for strongly advected fronts (e.g., turbulent flames) are noted.
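
In the notation commonly used for this problem (assumed here, since the abstract does not fix symbols), the 4/3-power scaling discussed above can be written as

```latex
% Assumed notation: s_L = laminar (unadvected) front speed, u' = rms advection
% strength, u_T = bulk propagation rate (turbulent burning velocity), and C = the
% coefficient identified in the paper with the energy density of a forced,
% lower-dimensional Burgers fluid.
\frac{u_T - s_L}{s_L} \;=\; C \left( \frac{u'}{s_L} \right)^{4/3}
\qquad \text{in the weak-advection limit } u'/s_L \to 0 .
```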

Comparison of entrainment rates from a tank experiment with results using the one-dimensional turbulence model

Kerstein, Alan R.

Recent work suggests that cloud effects remain one of the largest sources of uncertainty in model-based estimates of climate sensitivity. In particular, the entrainment rate in stratocumulus-topped mixed layers needs better models. More than thirty years ago, a clever laboratory experiment was conducted by McEwan and Paltridge to examine an analog of the entrainment process at the top of stratiform clouds. Sayler and Breidenthal extended this pioneering work and determined the effect of the Richardson number on the dimensionless entrainment rate. The experiments gave hints that the interaction between molecular effects and the one-sided turbulence is crucial for understanding entrainment. From the numerical point of view, large-eddy simulation (LES) does not allow all the fine-scale processes at the entrainment interface to be explicitly resolved. Direct numerical simulation (DNS) is limited by the achievable Reynolds number and is not the tool of choice for parameter studies. Therefore it is useful to investigate new modeling strategies, such as stochastic turbulence models, which allow sufficient resolution at least in one dimension while having acceptable run times. We will present results of the One-Dimensional Turbulence stochastic simulation model applied to the experimental setup of Sayler and Breidenthal. The results on radiatively induced entrainment follow quite well the scaling of the entrainment rate with the Richardson number that was experimentally found for a set of trials. Moreover, we investigate the influence of molecular effects, the fluid's optical properties, and the artifact of parasitic turbulence experimentally observed in the laminar layer. In the simulations, the parameters are varied systematically over even larger ranges than in the experiment. Based on the obtained results, a more complex parameterization of the entrainment rate than currently discussed in the literature seems to be necessary.

Toward developing a computational capability for PEM fuel cell design and optimization

Chen, Ken S.; Carnes, Brian R.

In this paper, we report the progress made in our project, recently funded by the US Department of Energy (DOE), toward developing a computational capability that includes a two-phase, three-dimensional PEM (polymer electrolyte membrane) fuel cell model and its coupling with DAKOTA (a design and optimization toolkit developed and being enhanced by Sandia National Laboratories). We first present a brief literature survey in which the prominent/notable PEM fuel cell models developed by various researchers or groups are reviewed. Next, we describe the two-phase, three-dimensional PEM fuel cell model being developed and tested, and later to be validated against experimental data. Results from case studies are presented to illustrate the utility of our comprehensive, integrated cell model. The coupling between the PEM fuel cell model and DAKOTA is briefly discussed. Our efforts in this DOE-funded project are focused on developing a validated computational capability that can be employed for PEM fuel cell design and optimization.

Extraction and applications of skeletons in finite element mesh generation

Quadros, William

This paper focuses on the extraction of skeletons of CAD models and their applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include the medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid-surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton lies equidistant from at least two points on the boundary; (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet; and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion of related work from other researchers in the areas of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. Skeletons have proved to be a great shape abstraction tool for analyzing the geometric complexity of CAD models, as they are symmetric, simpler (of reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and the stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.
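
As a minimal two-dimensional illustration of the bisection view of the skeleton - not the meshing pipeline described above - the sketch below extracts a medial axis with an off-the-shelf scikit-image routine and uses its distance map as the local thickness information that mesh-size functions typically exploit.

```python
import numpy as np
from skimage.morphology import medial_axis  # scikit-image

# A simple binary "CAD-like" 2-D shape: a rectangular plate with a circular hole.
h, w = 200, 320
yy, xx = np.mgrid[0:h, 0:w]
plate = np.ones((h, w), dtype=bool)
plate[(yy - 100) ** 2 + (xx - 160) ** 2 < 40 ** 2] = False  # punch a hole

# The medial axis is the locus of points equidistant from at least two boundary
# points (the bisection property); the distance map gives the local half-thickness.
skeleton, distance = medial_axis(plate, return_distance=True)
local_thickness = 2.0 * distance * skeleton

print(f"skeleton pixels: {skeleton.sum()}")
print(f"maximum local thickness along the skeleton: {local_thickness.max():.1f} px")
```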

Design studies for the transmission simulator method of experimental dynamic substructuring

Arviso, Michael

In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) provision of substructure Ritz vectors that adequately span the connection motion space; and (3) multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.

Pre-photolithographic GaAs surface treatment for improved photoresist adhesion during wet chemical etching and improved wet etch profiles

Grine, Alejandro J.; Clevenger, Jascinda; Patrizi, Gary; Martinez, Marino; Timon, Robert; Sullivan, Charles T.

Results of several experiments aimed at remedying photoresist adhesion failure during spray wet chemical etching of InGaP/GaAs NPN HBTs are reported. Several factors were identified that could influence adhesion, and a Design of Experiments (DOE) approach was used to study the effects and interactions of selected factors. The most significant adhesion improvement identified is the incorporation of a native oxide etch immediately prior to the photoresist coat. In addition to improving adhesion, this pre-coat treatment also alters the wet etch profile of (100) GaAs so that the reaction-limited etch is more isotropic compared to wafers without surface treatment; the profiles have a positive taper in both the [011] and [01̄1] directions, but the taper angles are not identical. The altered profiles have allowed us to predictably yield fully probe-able HBTs with 5 × 5 µm emitters using 5200 Å of evaporated metal without planarization.

Use of ceragenins to create novel biofouling-resistant water-treatment membranes

Altman, Susan J.; Jones, Howland D.T.; Kirk, Matthew F.

Scoping studies have demonstrated that ceragenins, when linked to water-treatment membranes, have the potential to create biofouling-resistant water-treatment membranes. Ceragenins are synthetically produced molecules that mimic antimicrobial peptides. Evidence includes measurements of CSA-13 inhibiting the growth of, and killing, planktonic Pseudomonas fluorescens. In addition, imaging of biofilms that were in contact with a ceragenin showed more dead cells relative to live cells than in a biofilm that had not been treated with a ceragenin. This work has demonstrated that ceragenins can be attached to polyamide reverse osmosis (RO) membranes, though further work is needed to improve the uniformity of the attachment. Finally, methods have been developed to use hyperspectral imaging with multivariate curve resolution to view ceragenins attached to the RO membrane. Future work will be conducted to better attach the ceragenins to the RO membranes and to more completely test the biocidal effectiveness of the ceragenins on the membranes.

New benefit-cost studies of renewable and energy efficiency programs of the U.S. Department of Energy: methodology and findings

Jordan, Gretchen B.

Objectives of the Office of Energy Efficiency and Renewable Energy (EERE) 2009-2010 Studies (Solar, Wind, Geothermal, & Combustion Engine R&D) are to: (1) Demonstrate to investors that EERE research and technology development (R&D) programs & subprograms are 'Worth It'; (2) Develop an improved Benefit-Cost methodology for determining realized economic and other benefits of EERE R&D programs - (a) Model government additionality more thoroughly and on a case-by-case basis; (b) Move beyond economic benefits; and (c) Have each study calculate returns to a whole EERE program/subprogram; and (3) Develop a consistent, workable Methods Guide for independent contractors who will perform the evaluation studies.

TaN resistor process development and integration

Sullivan, Charles T.; Patrizi, Gary; Wolfley, Steven; Grine, Alejandro J.; Clevenger, Jascinda

This paper describes the development and implementation of an integrated resistor process based on reactively sputtered tantalum nitride. Image reversal lithography was shown to be a superior method for liftoff patterning of these films. The results of a response-surface DOE for the sputter deposition of the films are discussed. Several approaches to stabilization baking were examined, and the advantages of the hot plate method are shown. In support of a new capability to produce special-purpose HBT-based Small-Scale Integrated Circuits (SSICs), we developed our existing TaN resistor process, designed for research prototyping, into one with greater maturity and robustness. Included in this work was the migration of our TaN deposition process from a research-oriented tool to a tool more suitable for production. Also included was the implementation and optimization of a liftoff process for the sputtered TaN to avoid the complicating effects of subtractive etching over potentially sensitive surfaces. Finally, the method and conditions for stabilization baking of the resistors were experimentally determined to complete the full implementation of the resistor module. Much of the work to be described involves the migration between sputter deposition tools - from a Kurt J. Lesker CMS-18 to a Denton Discovery 550. Though they use nominally the same deposition technique (reactive sputtering of Ta with N⁺ in an RF-excited Ar plasma), they differ substantially in their design and produce clearly different results in terms of resistivity, conformity of the film, and the difference between as-deposited and stabilized films. We will describe the design of and results from the design of experiments (DOE)-based method of process optimization on the new tool and compare this to what had been used on the old tool.

Fourier analysis and synthesis tomography

Sinclair, Michael B.

Most far-field optical imaging systems rely on a lens and spatially-resolved detection to probe distinct locations on the object. We describe and demonstrate a novel high-speed wide-field approach to imaging that instead measures the complex spatial Fourier transform of the object by detecting its spatially-integrated response to dynamic acousto-optically synthesized structured illumination. Tomographic filtered backprojection is applied to reconstruct the object in two or three dimensions. This technique decouples depth-of-field and working-distance from resolution, in contrast to conventional imaging, and can be used to image biological and synthetic structures in fluoresced or scattered light employing coherent or broadband illumination. We discuss the electronically programmable transfer function of the optical system and its implications for imaging dynamic processes. Finally, we present for the first time two-dimensional high-resolution image reconstructions demonstrating a three-orders-of-magnitude improvement in depth-of-field over conventional lens-based microscopy.
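
The reconstruction step mentioned above is ordinary tomographic filtered backprojection; only the way the projection data are acquired is novel. As a hedged sketch of that final step (using a synthetic phantom and scikit-image's Radon routines, not the acousto-optic measurement itself):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Synthetic object standing in for the sample; in the paper the projection data
# come from the spatially integrated response to structured illumination.
image = rescale(shepp_logan_phantom(), scale=0.5, mode='reflect')

# Forward model: projections over a set of angles (each angle corresponds to one
# line through Fourier space, by the projection-slice theorem).
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)

# Filtered backprojection reconstructs the object from the projections.
reconstruction = iradon(sinogram, theta=theta, filter_name='ramp')
rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"FBP reconstruction RMS error: {rms_error:.4f}")
```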

Integrating event detection system operation characteristics into sensor placement optimization

Hart, David; Hart, William E.; Mckenna, Sean A.; Phillips, Cynthia A.

We consider the problem of placing sensors in a municipal water network when we can choose both the location of sensors and the sensitivity and specificity of the contamination warning system. Sensor stations in a municipal water distribution network continuously send sensor output information to a centralized computing facility, and event detection systems at the control center determine when to signal an anomaly worthy of response. Although most sensor placement research has assumed perfect anomaly detection, signal analysis software has parameters that control the tradeoff between false alarms and false negatives. We describe a nonlinear sensor placement formulation, which we heuristically optimize with a linear approximation that can be solved as a mixed-integer linear program. We report the results of initial experiments on a real network and discuss tradeoffs between early detection of contamination incidents, and control of false alarms.
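
A toy version of the placement problem helps make the mixed-integer structure concrete. The sketch below is not the authors' formulation (which also tunes detector sensitivity and specificity); it is a standard p-median-style model in PuLP that chooses a fixed number of sensor locations to minimize the mean impact over a set of contamination scenarios, with a random stand-in impact matrix.

```python
import numpy as np
import pulp

rng = np.random.default_rng(1)
n_locations, n_scenarios, n_sensors = 12, 30, 3

# impact[a, i]: stand-in for the damage incurred if scenario a is first detected
# by a sensor at candidate location i (e.g., population exposed by that time).
impact = rng.uniform(10.0, 100.0, size=(n_scenarios, n_locations))

prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
s = pulp.LpVariable.dicts("s", range(n_locations), cat=pulp.LpBinary)
x = pulp.LpVariable.dicts(
    "x", [(a, i) for a in range(n_scenarios) for i in range(n_locations)],
    lowBound=0, upBound=1)

# Objective: total (equivalently, mean) impact, each scenario "served" by one sensor.
prob += pulp.lpSum(impact[a, i] * x[(a, i)]
                   for a in range(n_scenarios) for i in range(n_locations))
for a in range(n_scenarios):
    prob += pulp.lpSum(x[(a, i)] for i in range(n_locations)) == 1
    for i in range(n_locations):
        prob += x[(a, i)] <= s[i]          # assign only to placed sensors
prob += pulp.lpSum(s[i] for i in range(n_locations)) <= n_sensors

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [i for i in range(n_locations) if s[i].value() > 0.5]
print("sensor locations:", chosen, "  objective:", pulp.value(prob.objective))
```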

Project acceleration: making the leap from pilot to commercialization

Borneo, Daniel R.

Since the energy storage technology market is in a relatively emergent phase, narrowing the gap between pilot project status and commercialization is fundamental to accelerating this innovative market space. This session will explore regional market design factors to facilitate the storage enterprise. You will also hear about: quantifying transmission and generation efficiency enhancements; resource planning for storage; and assessing market mechanisms to accelerate storage adoption regionally.

Modeling solar thermochemical splitting of CO2 using metal oxide and a CR5

Chen, Ken S.; Hogan Jr., Roy E.

A two-dimensional, multi-physics computational model based on the finite-element method is developed for simulating the process of solar thermochemical splitting of carbon dioxide (CO₂) using ferrites (Fe₃O₄/FeO) and a counter-rotating-ring receiver/recuperator or CR5, in which carbon monoxide (CO) is produced from gaseous CO₂. The model takes into account heat transfer, gas-phase flow and multiple-species diffusion in open channels and through pores of the porous reactant layer, and redox chemical reactions at the gas/solid interfaces. Results (temperature distribution, velocity field, and species concentration contours) computed using the model in a case study are presented to illustrate model utility. The model is then employed to examine the effects of injection rates of CO₂ and argon neutral gas, respectively, on CO production rate and the extent of the product-species crossover.

Development of a "Solar Patch" calculator to evaluate heliostat-field irradiance as a boundary condition in CFD models

Ho, Clifford K.

A rigorous computational fluid dynamics (CFD) approach to calculating temperature distributions, radiative and convective losses, and flow fields in a cavity receiver irradiated by a heliostat field is typically limited to the receiver domain alone for computational reasons. A CFD simulation cannot realistically yield a precise solution that includes the details within the vast domain of an entire heliostat field in addition to the detailed processes and features within a cavity receiver. Instead, the incoming field irradiance can be represented as a boundary condition on the receiver domain. This paper describes a program, the Solar Patch Calculator, written in Microsoft Excel VBA to characterize multiple beams emanating from a 'solar patch' located at the aperture of a cavity receiver, in order to represent the incoming irradiance from any field of heliostats as a boundary condition on the receiver domain. This program accounts for cosine losses; receiver location; heliostat reflectivity, areas, and locations; field location; and time of day and day of year. This paper also describes the implementation of the boundary conditions calculated by this program into a Discrete Ordinates radiation model using Ansys® FLUENT (www.fluent.com), and compares the results to experimental data and to results generated by the code DELSOL.
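
The cosine loss accounted for by the calculator arises because a heliostat's mirror normal must bisect the directions to the sun and to the receiver. The short function below computes that cosine efficiency from unit vectors; it is a sketch of the underlying geometry only, not the VBA Solar Patch Calculator, and the example coordinates are invented.

```python
import numpy as np

def cosine_efficiency(sun_dir, heliostat_pos, receiver_pos):
    """Cosine efficiency of a flat heliostat.

    The mirror normal bisects the sun direction and the direction to the
    receiver, so the incidence angle is half the angle between those two unit
    vectors, giving cos(incidence) = sqrt((1 + s.t) / 2).
    """
    s = np.asarray(sun_dir, dtype=float)
    s = s / np.linalg.norm(s)
    t = np.asarray(receiver_pos, dtype=float) - np.asarray(heliostat_pos, dtype=float)
    t = t / np.linalg.norm(t)
    return np.sqrt(0.5 * (1.0 + np.dot(s, t)))

# Example: sun 45 degrees above the southern horizon (y = north, z = up),
# receiver atop a 50 m tower, heliostat 80 m north of the tower.
sun = np.array([0.0, -np.cos(np.radians(45.0)), np.sin(np.radians(45.0))])
print(f"cosine efficiency: {cosine_efficiency(sun, [0, 80, 0], [0, 0, 50]):.3f}")
```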

A coarsening method for linear peridynamics

Silling, Stewart

A method is obtained for deriving peridynamic material models for a sequence of increasingly coarsened descriptions of a body. The starting point is a known detailed, small-scale linearized state-based description. Each successively coarsened model excludes some of the material present in the previous model, and the length scale increases accordingly. This excluded material, while not present explicitly in the coarsened model, is nevertheless taken into account implicitly through its effect on the forces in the coarsened material. Numerical examples demonstrate that the method accurately reproduces the effective elastic properties of a composite as well as the effect of a small defect in a homogeneous medium.

Foam structure, rheology and coarsening: the shape, feel and aging of random soap froth

Kraynik, Andrew M.

Simulations are in excellent agreement with experiments for both structure (Matzke) and shear modulus (Princen and Kiss): E = 3.30 σ/R₃₂ = 5.32/(1 + p) σ/(V)^(1/2), and G ≈ 0.155 E = 0.512 σ/R₃₂. IPP theory captures the dependence of cell geometry on V and F. Future challenges: simulating simple shearing flow is very expensive because of frequent topological transitions, and random wet foams require very large simulations.

U.S. national security through global technical engagement presentation to composite group A: homeland and international operations

Abeyta, Henry J.

This talk will discuss Sandia's Global Security Program focused on reducing proliferation and terrorism threats to U.S. national security through global technical engagement. Elements include nuclear and radiological risks, biological and chemical risks, and multi-threat risk reduction. Also, recent work addressing the need to better integrate nonproliferation, arms control, counterterrorism, and nuclear deterrent objectives will be discussed.

Investigation of the neutron response anisotropy in crystalline organic scintillators

Brubaker, E.; Steele, J.

An anisotropy in the response of crystalline organic scintillators such as anthracene to neutron elastic scattering interactions has been known for some time. Both the amplitude and the time structure of the scintillation light pulse vary with the direction of the proton recoil with respect to the crystalline axes. In principle, this effect could be exploited to develop compact, high-efficiency fast neutron detectors that have directional sensitivity via a precise measurement of the pulse shape. We are investigating the feasibility and sensitivity of such a detector, particularly for neutrons in the fission energy spectrum. Here we will report new measurements of the pulse shape dependence on proton recoil angle in anthracene and stilbene single crystals, for proton energies in the few MeV range. Digital pulse acquisition and processing are used to allow an exploration of different pulse shape analysis techniques.

Cooperative global security programs modeling & simulation

Briand, Daniel

The national laboratories' global security programs implement sustainable technical solutions for cooperative nonproliferation, arms control, and physical security systems worldwide. To help in the development and execution of these programs, a wide range of analytical tools is used to model, for example, synthetic tactical environments for assessing infrastructure protection initiatives and tactics, systematic approaches for prioritizing nuclear and biological threat reduction opportunities worldwide, and nuclear fuel cycle enrichment and spent fuel management for nuclear power countries. This presentation will describe how these models are used in analyses to support the Obama Administration's agenda and bilateral/multinational treaties, and ultimately, to reduce weapons of mass destruction and terrorism threats through international technical cooperation.

A novel dual mode neutron-gamma imager

Mascarenhas, Nicholas M.; Brennan, J.; Cooper, Robert; Mrowka, Stanley; Marleau, P.

The Neutron Scatter Camera (NSC) can image fission sources and determine their energy spectra at distances of tens of meters and through significant thicknesses of intervening materials in relatively short times [1]. We recently completed a 32-element scatter camera and will present recent advances made with this instrument. A novel capability for the scatter camera is dual mode imaging. In normal neutron imaging mode, we identify and image neutron events using pulse shape discrimination (PSD) and time of flight in liquid scintillator. Similarly, gamma rays are identified from Compton scatter in the front and rear planes of our segmented detector. Rather than reject these events, we show it is possible to construct a gamma-ray image by running the analysis in a 'Compton mode'. Instead of calculating the scattering angle from the kinematics of elastic scatters, as is appropriate for neutron events, it can be found from the kinematics of Compton scatters. Our scatter camera has not been optimized as a Compton gamma-ray imager but is found to work reasonably well. We studied imaging performance using a Cs-137 source. We find that we are able to image the gamma source with reasonable fidelity. We are able to determine gamma energy after some reasonable assumptions. We will detail the various algorithms we have developed for gamma image reconstruction. We will outline areas for improvement, include additional results, and compare neutron- and gamma-mode imaging.
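
In the 'Compton mode' described above, the cone angle for each two-site gamma event follows from the energies deposited in the two detector planes. A minimal sketch of that kinematic step is given below; it assumes the scattered photon is fully absorbed in the second plane, and the function and variable names are invented for illustration.

```python
import numpy as np

ELECTRON_REST_ENERGY_KEV = 511.0

def compton_cone_angle(e1_kev, e2_kev):
    """Opening angle (radians) of the Compton cone for a two-site event.

    e1_kev: energy deposited in the first (scattering) plane.
    e2_kev: energy deposited in the second plane, assumed equal to the full
            energy of the scattered photon (total absorption).
    """
    cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are kinematically inconsistent with Compton scatter")
    return np.arccos(cos_theta)

# Example: a 662 keV (Cs-137) photon depositing 200 keV in the first plane.
print(f"cone half-angle: {np.degrees(compton_cone_angle(200.0, 462.0)):.1f} deg")
```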

Xyce™ Parallel Electronic Simulator Release Notes (Release 5.1.2)

Keiter, Eric R.; Santarelli, Keith R.; Hoekstra, Robert J.; Russo, Thomas V.; Schiek, Richard; Mei, Ting; Thornquist, Heidi K.; Pawlowski, Roger; Coffey, Todd S.

The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. Specific requirements include, among others, the ability to solve extremely large circuit problems by supporting large-scale parallel computing platforms, improved numerical performance, and object-oriented code design and implementation. The Xyce release notes describe: hardware and software requirements, new features and enhancements, any defects fixed since the last release, and current known defects and defect workarounds. For up-to-date information not available at the time these notes were produced, please visit the Xyce web page at http://www.cs.sandia.gov/xyce.

Nonlinear power flow control applications to conventional generator swing equations subject to variable generation

Robinett, Rush D.; Wilson, David G.

In this paper, the swing equations for renewable generators are formulated as a natural Hamiltonian system with externally applied non-conservative forces. A two-step process referred to as Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) is used to analyze and design feedback controllers for the renewable generator system. This formulation extends previous results on the analytical verification of the Potential Energy Boundary Surface (PEBS) method to nonlinear control analysis and design, and it justifies the decomposition of the system into conservative and non-conservative subsystems to enable a two-step, serial analysis and design procedure. In particular, this approach extends previous work by developing a formulation that applies to a larger set of Hamiltonian systems, one that has nearly Hamiltonian systems as a subset. The results of this research include the determination of the required performance of a proposed Flexible AC Transmission System (FACTS)/storage device to enable the maximum power output of a wind turbine while meeting the power system constraints on frequency and phase. The FACTS/storage device is required to operate as both a generator and a load (energy storage) on the power system in this design. The Second Law of Thermodynamics is applied to the power flow equations to determine the stability boundaries (limit cycles) of the renewable generator system and to enable the design of feedback controllers that meet stability requirements while maximizing the power generation and flow to the load. Necessary and sufficient conditions for stability of renewable generator systems are determined based on the concepts of Hamiltonian systems, power flow, exergy (the maximum work that can be extracted from an energy flow) rate, and entropy rate.
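
For reference, the classical single-machine swing equation that this Hamiltonian formulation builds on can be written as follows (standard textbook form; the symbols are assumed here rather than taken from the paper):

```latex
% delta = rotor angle, M = inertia constant, D = damping coefficient,
% P_m = mechanical input power, P_max sin(delta) = electrical output power.
M\,\ddot{\delta} + D\,\dot{\delta} = P_m - P_{\max}\sin\delta
```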

High fidelity equation of state for xenon: integrating experiments and first principles simulations in developing a wide-range equation of state model for a fifth-row element

Magyar, Rudolph J.; Root, Seth; Carpenter, John H.; Mattsson, Thomas

The noble gas xenon is a particularly interesting element. At standard pressure xenon is an fcc solid which melts at 161 K and then boils at 165 K, thus displaying a rather narrow liquid range on the phase diagram. On the other hand, under pressure the melting point is significantly higher: 3000 K at 30 GPa. Under shock compression, electronic excitations become important at 40 GPa. Finally, xenon forms stable molecules with fluorine (XeF₂) suggesting that the electronic structure is significantly more complex than expected for a noble gas. With these reasons in mind, we studied the xenon Hugoniot using DFT/QMD and validated the simulations with multi-Mbar shock compression experiments. The results show that existing equation of state models lack fidelity and so we developed a wide-range free-energy based equation of state using experimental data and results from first-principles simulations.

Achromatic circular polarization generation for ultra-intense lasers

Rambo, Patrick K.; Kimmel, Mark; Bennett, Guy R.; Schwarz, Jens; Schollmeier, Marius; Atherton, B.

Generating circular polarization for ultra-intense lasers requires solutions beyond traditional transmissive waveplates, which have insufficient bandwidth and pose nonlinear-phase (B-integral) problems. We demonstrate a reflective design employing three metallic mirrors to generate circular polarization.

Peer review presentation on systems performance modeling and solar advisor support

Cameron, Christopher P.

Accurate performance models are critical to project development and technology evaluation, but the accuracy and uncertainty of commonly used models are unknown and the models disagree. A model evaluation process has been developed with industry, and high-quality weather and system performance data sets have been collected: (1) evaluation is underway using residual analysis of hourly and sub-hourly data for clear and diffuse climates to evaluate and improve models; and (2) initial results have been or will soon be presented at key conferences. Evaluation of widely used module, inverter, and irradiance models, including those in SAM, PVWatts, and PVSyst, will be completed this year. Stochastic modeling has been performed to support the reliability task and will add value to parametric analysis. An industry workshop will be held this fall to review results and set priorities. Support and analysis have been provided for TPPs, SETP, and the PV community. Goals for future work include: (1) improving understanding of and validating system derate factors; and (2) developing a dynamic electrical model of arrays with shaded or mismatched modules to support transient analysis of large fields.

Freeze-thaw tests of trough receivers employing a molten salt working fluid

Ho, Clifford K.; Iverson, Brian D.; Moss, Timothy A.; Siegel, Nathan P.

Several studies predict an economic benefit of using nitrate-based salts instead of the current synthetic oil within a solar parabolic trough field. However, the expected economic benefit can only be realized if the reliability and optical performance of the salt trough system are comparable to today's oil trough. Of primary concern is whether a salt-freeze accident and subsequent thaw will lead to damage of the heat collection elements (HCEs). This topic was investigated experimentally and analytically. Results to date suggest that damage will not occur if the HCEs are not completely filled with salt. However, if the HCE is completely filled at the time of the freeze, the subsequent thaw can lead to plastic deformation and significant bending of the absorber tube.

Hybrid optimization schemes for simulation-based problems

Gray, Genetha A.

The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, realistic objective functions can be accommodated given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist, and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which allows the combining of beneficial elements of multiple methods in order to more efficiently search the design space. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one hybridizes pattern search optimization with Gaussian process emulation, and the other hybridizes pattern search with a genetic algorithm. We describe the hybrid methods and give some numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional optimal search methods fail.
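
To show the derivative-free search loop that both hybrids build on, here is a bare-bones compass (pattern) search in Python; it has no surrogate or genetic component and is not the authors' algorithm, and the quadratic test function simply stands in for a simulator-backed objective.

```python
import numpy as np

def objective(x):
    # Stand-in for an expensive simulator-backed objective (e.g., a PDE model).
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.5) ** 2

def compass_search(f, x0, step=1.0, tol=1e-4, max_iter=500):
    """Derivative-free compass search: poll +/- each coordinate direction,
    accept any improving point, and shrink the step when no poll improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= 0.5              # contract the pattern
            if step < tol:
                break
    return x, fx

x_best, f_best = compass_search(objective, x0=[5.0, 5.0])
print(f"best point: {x_best}, objective value: {f_best:.6f}")
```

A surrogate-assisted hybrid would use points like these poll evaluations to train an emulator that proposes additional trial points between polls.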

Steps toward fault-tolerant quantum chemistry

Taube, Andrew G.

Developing quantum chemistry programs on the coming generation of exascale computers will be a difficult task. The programs will need to be fault-tolerant and minimize the use of global operations. This work explores the use of a task-based model that takes a data-centric approach to allocating work to different processes, as it applies to quantum chemistry. After introducing the key problems that appear when trying to parallelize a complicated quantum chemistry method such as coupled-cluster theory, we discuss the implications of that model as it pertains to the computational kernel of a coupled-cluster program - matrix multiplication. We also discuss the extensions that would be required to build a full coupled-cluster program using the task-based model. Current programming models for high-performance computing are fault-intolerant and use global operations. Those properties are unsustainable as computers scale to millions of CPUs; instead one must recognize that these systems will be hierarchical in structure, prone to constant faults, and that global operations will be infeasible. The FAST-OS HARE project is introducing a scale-free computing model to address these issues. This model is hierarchical and fault-tolerant by design, allows for the clean overlap of computation and communication (reducing the network load), does not require checkpointing, and avoids the complexity of many HPC runtimes. Development of an algorithm within this model requires a change in focus from imperative programming to a data-centric approach. Quantum chemistry (QC) algorithms, in particular electronic structure methods, are an ideal test bed for this computing model. These methods describe the distribution of electrons in a molecule, which determines the properties of the molecule. The computational cost of these methods is high, scaling quartically or higher in the size of the molecule, which is why QC applications are major users of HPC resources. The complexity of these algorithms means that MPI alone is insufficient to achieve parallel scaling; QC developers have been forced to use alternative approaches to achieve scalability and would be receptive to radical shifts in the programming paradigm. Initial work in adapting the simplest QC method, Hartree-Fock, to this new programming model indicates that the approach is beneficial for QC applications. However, the advantages of being able to scale to exascale computers are greatest for the computationally most expensive algorithms; within QC these are the high-accuracy coupled-cluster (CC) methods. Parallel coupled-cluster programs are available; however, they are based on the conventional MPI paradigm. Much of the effort is spent handling the complicated data dependencies between the various processors, especially as the size of the problem becomes large. The current paradigm will not survive the move to exascale computers. Here we discuss the initial steps toward designing and implementing a CC method within this model. First, we introduce the general concepts behind a CC method, focusing on the aspects that make these methods difficult to parallelize with conventional techniques. Then we outline the computational core of the CC method - a matrix multiply - within the task-based approach that the FAST-OS project is designed to take advantage of. Finally, we outline the general setup to implement the simplest CC method in this model, linearized CC doubles (LinCC).
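
A minimal data-centric sketch of the kernel discussed above: a blocked matrix multiply in which each output tile is an independent task defined by the data it touches. It is dispatched here with Python's standard concurrent.futures rather than the FAST-OS HARE runtime, purely to illustrate the decomposition.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def multiply_blocked(A, B, block=64, workers=4):
    """Blocked matrix multiply where each output tile C[i:i+b, j:j+b] is an
    independent task that pulls the row block of A and column block of B it
    needs. This mirrors the task decomposition only, not a real HPC runtime."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))

    def tile_task(i, j):
        # The data a task touches defines the task; output tiles are disjoint,
        # so the tasks can run concurrently without synchronization.
        C[i:i + block, j:j + block] = A[i:i + block, :] @ B[:, j:j + block]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(tile_task, i, j)
                   for i in range(0, n, block)
                   for j in range(0, m, block)]
        for f in futures:
            f.result()   # propagate any exception from a task
    return C

A = np.random.rand(256, 192)
B = np.random.rand(192, 320)
assert np.allclose(multiply_blocked(A, B), A @ B)
```

In a coupled-cluster code the "matrices" are blocks of amplitude and integral tensors, and the runtime, rather than the programmer, decides where each tile task executes.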

Microfabricated surface ion traps for quantum computation

Highstrete, Clark; Stick, Daniel L.; Tigges, Chris P.; Blain, Matthew G.; Fortier, Kevin; Haltli, Raymond A.; Kemme, Shanalyn A.; Lindgren, Thomas L.; Moehring, David L.

We will present results of the design, operation, and performance of surface ion micro-traps fabricated at Sandia. Recent progress in the testing of the micro-traps will be highlighted, including successful motional control of ions and the validation of simulations with experiments.

Characterization of the absorbance bleaching in AlInAs/AlGaInAs multiple quantum wells for semiconductor saturable absorbers

Bender, Daniel A.; Wanke, Michael C.; Montano, Ines; Cross, Karen C.

Semiconductor saturable absorbers (SESAs) introduce loss into a solid-state laser cavity until the cavity field bleaches the absorber, producing a high-energy pulse. Multiple quantum wells (MQWs) of AlGaInAs grown lattice-matched to InP have characteristics that make them attractive for SESAs. The band gap can be tuned around the target wavelength, 1064 nm; the large conduction band offset relative to the AlInAs barrier material helps reduce the saturation fluence; and the transparent substrate reduces nonsaturable losses. We have characterized the lifetime of the bleaching process, the modulation depth, the nonsaturable losses, and the saturation fluence associated with SESAs. We compare different growth conditions and structure designs. These parameters give insight into the quality of the epitaxy and the effect structure design has on SESA performance in a laser cavity. AlGaInAs MQWs were grown by MOVPE in a Veeco D125 machine using methyl-substituted metal-organics and hydride sources at a growth temperature of 660 °C and a pressure of 60 Torr. A single period of the basic SESA design consists of approximately 130 to 140 nm of AlInAs barrier followed by two AlGaInAs quantum wells separated by 10 nm of AlInAs. This design places the QWs near the nodes of the 1064-nm laser cavity standing wave. Structures consisting of 10, 20, and 30 periods were grown and evaluated. The SESAs were measured at 1064 nm using an optical pump-probe technique. The absorbance bleaching lifetime varies from 160 to 300 ns. The nonsaturable loss was as much as 50% for structures grown on n-type, sulfur-doped InP substrates, but was reduced to 16% when compensated, Fe-doped InP substrates were used. The modulation depth of the SESAs increased linearly from 9% to 30% with the number of periods. We are currently investigating how detuning the QW transition energy impacts the bleaching characteristics. We will discuss how each of these parameters impacts the laser performance.

Characterization and modeling of thermal diffusion and aggregation in nanofluids

Gharagozloo, Patricia E.

Fluids with higher thermal conductivities are sought for fluidic cooling systems in applications including microprocessors and high-power lasers. By adding nanoscale metal and metal oxide particles of high thermal conductivity to a fluid, the thermal conductivity of the fluid is enhanced. While particle aggregates play a central role in recent models for the thermal conductivity of nanofluids, the effect of particle diffusion in a temperature field on aggregation and transport has yet to be studied in depth. The present work separates the effects of particle aggregation and diffusion using parallel-plate experiments, infrared microscopy, light scattering, Monte Carlo simulations, and rate equations for particle and heat transport in a well dispersed nanofluid. Experimental data show non-uniform temporal increases in thermal conductivity above effective medium theory and can be well described through simulation of the combination of particle aggregation and diffusion. The simulation shows large concentration distributions due to thermal diffusion, causing variations in aggregation, thermal conductivity, and viscosity. Static light scattering shows that aggregates form more quickly at higher concentrations and temperatures, which explains the increased enhancement with temperature reported by other research groups. The permanent aggregates in the nanofluid are found to have a fractal dimension of 2.4, and the aggregate formations that grow over time are found to have a fractal dimension of 1.8, which is consistent with diffusion-limited aggregation. Calculations show that as aggregates grow the viscosity increases at a faster rate than the thermal conductivity, making highly aggregated nanofluids unfavorable, especially at the low fractal dimension of 1.8. An optimum nanoparticle diameter for these particular fluid properties is calculated to be 130 nm, which maximizes fluid stability by reducing settling, thermal diffusion, and aggregation.

More Details

Characteristics of isopentanol as a fuel for HCCI engines

Yang, Yi; Dronniou, Nicolas; Simmons, Blake

Long-chain alcohols possess major advantages over the currently used ethanol as bio-components for gasoline, including higher energy content, better engine compatibility, and lower water solubility. Rapid developments in biofuel technology have made it possible to produce C{sub 4}-C{sub 5} alcohols cost-effectively. These higher alcohols could significantly expand the biofuel content of gasoline and potentially substitute for ethanol in future gasoline mixtures. This study characterizes some fundamental properties of a C{sub 5} alcohol, isopentanol, as a fuel for HCCI engines. Wide ranges of engine speed, intake temperature, intake pressure, and equivalence ratio are investigated. Results are presented in comparison with gasoline or ethanol data previously reported. For a given combustion phasing, isopentanol requires lower intake temperatures than gasoline or ethanol at all tested speeds, indicating a higher HCCI reactivity. Similar to ethanol but unlike gasoline, isopentanol does not show two-stage ignition, even at very low engine speed (350 rpm) or with considerable intake pressure boost (200 kPa abs.). However, isopentanol does show considerable intermediate temperature heat release (ITHR), comparable to that of gasoline. Our previous work has found that ITHR is critical for maintaining combustion stability at the retarded combustion phasings required to achieve high loads without knock. The stronger ITHR makes the combustion phasing of isopentanol less sensitive to intake temperature variations than that of ethanol. With the capability to retard combustion phasing, maximum IMEP{sub g} values of 5.4 and 11.6 bar were achieved with isopentanol at 100 and 200 kPa intake pressure, respectively. These loads are even slightly higher than those achieved with gasoline. The ITHR of isopentanol depends on operating conditions and is enhanced by simultaneously increasing pressure and reducing temperature. However, increasing the temperature seems to have little effect on ITHR at atmospheric pressure, although it does promote hot ignition. Finally, the dependence of ignition timing on equivalence ratio, here called {phi}-sensitivity, is measured at atmospheric intake pressure, showing that the ignition of isopentanol is nearly insensitive to equivalence ratio when thermal effects are removed. This suggests that partial fuel stratification, which has been found effective for controlling the heat release rate with two-stage ignition fuels, may not work well with isopentanol at these conditions. Overall, these results indicate that isopentanol has good potential as an HCCI fuel, either in neat form or blended with gasoline.
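
For context on the load metric quoted above, gross IMEP is the work obtained by integrating the in-cylinder pressure over volume around the closed part of the cycle, normalized by the displaced volume. The sketch below shows that bookkeeping with a synthetic pressure-volume trace standing in for measured data; it illustrates the definition only and does not reconstruct the engine data in this study.

    # Gross IMEP: integral of p dV over the closed cycle divided by displaced volume.
    # The pressure/volume arrays are synthetic placeholders, not engine data from this work.
    import numpy as np

    def imep_gross_bar(pressure_pa, volume_m3):
        # Trapezoidal integration of p dV, then normalize by displacement and convert Pa to bar.
        work_j = float(np.sum(0.5 * (pressure_pa[1:] + pressure_pa[:-1]) * np.diff(volume_m3)))
        v_displaced = volume_m3.max() - volume_m3.min()
        return (work_j / v_displaced) / 1.0e5

    volume = np.linspace(5.0e-5, 5.0e-4, 200)          # m^3, assumed cylinder volumes
    pressure = 1.0e5 * (volume.max() / volume) ** 1.3  # Pa, crude polytropic placeholder
    print(f"IMEP_g = {imep_gross_bar(pressure, volume):.2f} bar")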

More Details

Metal-organic frameworks for radiation detection and particle discrimination

Feng, Patrick L.; Allendorf, Mark

Metal-organic frameworks (MOFs) represent a diverse and rapidly expanding class of materials comprising metal ions bridged by organic linker molecules. These robust crystalline structures have been found to exhibit exceptionally large surface areas, paving the way for diverse applications ranging from gas storage and separations to catalysis, drug delivery, and sensing. Less well understood are the intrinsic luminescence properties of MOFs, which arise from electronic transitions within the hybrid metal-organic structure. Recently, we reported the observation of scintillation in stilbene-based MOFs, representing the discovery of the first completely new class of radiation detection materials since the advent of plastic scintillators in 1950. Photoluminescence and ion-induced luminescence spectroscopy of these materials shows that both the luminescence spectrum and its timing can be varied by altering the local environment of the chromophore, providing critical insight for the rational design of materials for specific radiation detection applications. In this work, we describe the luminescence and scintillation properties of a series of isoreticular MOFs (IRMOFs), emphasizing the structural and electronic effects associated with systematic modification of the chromophore. Among these structures are IRMOFs based on naphthyl, biphenyl, terphenyl, and stilbene dicarboxylate linkers, for which unique structural changes and optical properties are observed. In addition to chemical changes in the structure, framework interpenetration can be controlled synthetically, resulting in pairs of catenated and non-catenated IRMOFs based upon the same organic linker. The distinct interchromophore distances and solvate structures in these pairs lead to unique luminescence spectra that are interpreted in terms of energy transfer interactions. These spectral changes provide insight into the mechanism of radiation-induced luminescence, which for MOFs may differ significantly from that of photoluminescence.
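
A common way to make the energy-transfer interpretation above quantitative is the Förster picture, in which transfer efficiency falls off with the sixth power of the donor-acceptor separation. The sketch below is a generic illustration of that scaling only; the Förster radius used is an arbitrary assumed value, not one reported for these IRMOFs.

    # Foerster-type energy transfer efficiency: E = 1 / (1 + (r / R0)**6).
    # R0 is an assumed illustrative Foerster radius, not a value measured for these MOFs.
    def transfer_efficiency(r_nm, r0_nm=3.0):
        return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

    # A modest change in interchromophore separation (e.g., catenated vs. non-catenated
    # frameworks) produces a large change in transfer efficiency:
    for r in (1.0, 2.0, 3.0, 4.0):
        print(f"r = {r:.1f} nm: E = {transfer_efficiency(r):.2f}")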

More Details

Cerium doped elpasolite halide scintillators

Doty, F.P.; Zhou, Xiaowang; Noda, Frank T.

Low-cost, high-performance gamma-ray spectrometers are urgently needed for proliferation detection and homeland security. The cost and availability of the large scintillators used in such spectrometers generally hinge on their mechanical properties and crystal symmetry. Low-symmetry, intrinsically brittle crystals, such as the emerging lanthanide halide scintillators, are particularly difficult to grow in large sizes because of the large anisotropic thermomechanical stresses that develop during solidification. Isotropic cubic scintillators such as the alkali halides, while affordable and producible in large sizes, are poor spectrometers because of their severely nonproportional response and modest light yield. This work investigates and compares four new elpasolite-based lanthanide halides, Cs2LiLaBr6, Cs2NaLaBr6, Cs2LiLaI6, and Cs2NaLaI6, in terms of their crystal symmetry, photoluminescence characteristics, and optical quantum efficiency. The mechanical properties and thermal expansion behavior of cubic Cs2LiLaBr6 will be reported. The isotropic nature of this material has potential for scaled-up crystal growth, as well as the possibility of low-cost polycrystalline ceramic processing. In addition, the proportionality of response with gamma-ray energy of directionally solidified Cs2LiLaBr6 will be compared with that of workhorse alkali halide scintillators. The processing challenges associated with hot-forged polycrystalline elpasolite-based lanthanide halides will also be discussed.
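
As background for why light yield matters for spectrometer performance, the counting-statistics floor on energy resolution scales as the inverse square root of the number of detected photons; nonproportional response then degrades resolution beyond this floor. The sketch below uses assumed illustrative light-yield and collection numbers, not values measured for these elpasolites.

    # Poisson-statistics contribution to FWHM energy resolution: R ~ 2.355 / sqrt(N_detected).
    # Light yield and overall detection efficiency are assumed illustrative values.
    import math

    def statistical_resolution(light_yield_ph_per_mev, energy_mev, detection_eff):
        n_detected = light_yield_ph_per_mev * energy_mev * detection_eff
        return 2.355 / math.sqrt(n_detected)   # fractional FWHM, statistics only

    # Example at 662 keV with an assumed 40,000 ph/MeV and 20% overall detection efficiency:
    print(f"{statistical_resolution(40000.0, 0.662, 0.20) * 100:.1f}% FWHM statistical limit")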

More Details

Understanding Li-ion battery processes at the atomic to nano-scale

Huang, Jian Y.; Subramanian, Arunkumar; Hudak, Nicholas S.

Reducing battery materials to nano-scale dimensions may improve battery performance while maintaining the use of low-cost materials. However, better characterization tools with atomic- to nano-scale resolution are needed to understand the degradation mechanisms and the structural and mechanical changes that occur in these new materials during battery cycling. To meet this need, we have developed a micro-electromechanical systems (MEMS)-based platform for performing electrochemical measurements with volatile electrolytes inside a transmission electron microscope (TEM). This platform uses flip-chip assembly with special alignment features and multiple buried electrode configurations. In addition, we have developed an unsealed platform that permits in situ TEM electrochemistry using ionic liquid electrolytes. As a test of these platform concepts, we assembled MnO{sub 2} nanowires onto the platform using dielectrophoresis and examined their electrical and structural changes as a function of lithiation. These results reveal a large irreversible drop in electronic conductance and the creation of a high degree of lattice disorder following lithiation of the nanowires. From these initial results, we conclude that full development of in situ TEM characterization tools will enable important mechanistic understanding of Li-ion battery materials.
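
To make the reported drop in electronic conductance concrete, single-nanowire conductance is commonly related to geometry and conductivity through G = sigma * A / L. The sketch below uses placeholder dimensions and conductivities, none of which are measurements from this work; it only illustrates how a conductivity change maps onto a conductance drop.

    # Two-terminal conductance of a contacted nanowire, before and after lithiation.
    # All numbers are illustrative placeholders, not measurements from this study.
    import math

    def wire_conductance(conductivity_s_per_m, diameter_m, length_m):
        area = math.pi * (diameter_m / 2.0) ** 2
        return conductivity_s_per_m * area / length_m   # siemens

    d, L = 100e-9, 5e-6                           # assumed diameter and contact spacing
    g_pristine = wire_conductance(1.0e2, d, L)    # assumed pristine conductivity, S/m
    g_lithiated = wire_conductance(1.0e-1, d, L)  # assumed post-lithiation conductivity, S/m
    print(f"pristine: {g_pristine:.2e} S, lithiated: {g_lithiated:.2e} S "
          f"({g_pristine / g_lithiated:.0f}x drop)")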

More Details