Boron and nitrogen-containing materials for hydrogen storage
Abstract not provided.
Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides an empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback.
As high energy laser systems evolve towards higher energies, fundamental material properties such as the laser-induced damage threshold (LIDT) of the optics limit the overall system performance. The Z-Backlighter Laser Facility at Sandia National Laboratories uses a pair of such kilojoule-class Nd:Phosphate Glass lasers for x-ray radiography of high energy density physics events on the Z-Accelerator. These two systems, the Z-Beamlet system operating at 527 nm/1 ns and the Z-Petawatt system operating at 1054 nm/0.5 ps, can be combined for some experimental applications. In these scenarios, dichroic beam combining optics and subsequent dual wavelength high reflectors will see a high fluence from combined simultaneous laser exposure and may even see lingering effects when used in pump-probe configurations. Only recently have researchers begun to explore such concerns, looking at individual and simultaneous exposures of optics to 1064 nm and third harmonic 355 nm light from Nd:YAG [1]. However, to our knowledge, measurements of simultaneous and delayed dual wavelength damage thresholds on such optics have not been performed for exposure to 1054 nm light and its second harmonic, especially when the pulses are of disparate duration. The Z-Backlighter Facility has an instrumented damage tester set up to examine laser-induced damage thresholds in a variety of such situations [2]. Using this damage tester, we have measured the LIDT of dual wavelength high reflectors at 1054 nm/0.5 ps and 532 nm/7 ns, separately and spatially combined, both co-temporal and delayed, with single and multiple exposures. We found that the LIDT of the sample at 1054 nm/0.5 ps can be significantly lowered, from a damage fluence of 1.32 J/cm² with 1054 nm/0.5 ps light alone to 1.05 J/cm² in the simultaneous presence of 532 nm/7 ns laser light at a fluence of 8.1 J/cm². This reduction of the 1054 nm/0.5 ps LIDT continues as the fluence of the simultaneously present 532 nm/7 ns laser light increases. The reduction does not occur when the two pulses are temporally separated. This paper will also present dual wavelength LIDT results for commercial dichroic beam-combining optics simultaneously exposed to laser light at 1054 nm/2.5 ns and 532 nm/7 ns.
Adagio is a three-dimensional, implicit solid mechanics code with a versatile element library, nonlinear material models, and capabilities for modeling large deformation and contact. Adagio is a parallel code, and its nonlinear solver and contact capabilities enable scalable solutions of large problems. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. The Adagio 4.16 User's Guide provides information about the functionality in Adagio and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Adagio are similar to those of the code Presto [3]. Presto, like Adagio, is a solid mechanics code built on the SIERRA Framework. The primary difference between the two codes is that Presto uses explicit time integration for transient dynamics analysis, whereas Adagio is an implicit code. Because of the similarities in input and usage between Adagio and Presto, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Presto may be found in the Adagio user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code.
For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact in the two guides differs. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4; JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D. Pronto3D is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13]. ACME is a third-party library for contact. One of the key concepts for the command structure in the input file is scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.
Presto is a three-dimensional transient dynamics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework [1, 2]. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. Contact capabilities are parallel and scalable. The Presto 4.16 User's Guide provides information about the functionality in Presto and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. The input and usage of Presto are similar to those of the code Adagio [3]. Adagio is a three-dimensional quasi-static code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. Adagio, like Presto, is built on the SIERRA Framework [1]. Contact capabilities for Adagio are also parallel and scalable. A significant feature of Adagio is that it offers a multilevel, nonlinear iterative solver. Because of the similarities in input and usage between Presto and Adagio, the user's guides for the two codes are structured in the same manner and share common material. (Once you have mastered the input structure for one code, it will be easy to master the syntax structure for the other code.) To maintain the commonality between the two user's guides, we have used a variety of techniques. For example, references to Adagio may be found in the Presto user's guide and vice versa, and the chapter order across the two guides is the same. On the other hand, each of the two user's guides is expressly tailored to the features of the specific code and documents the particular functionality for that code.
For example, though both Presto and Adagio have contact functionality, the content of the chapter on contact in the two guides differs. Important references for both Adagio and Presto are given in the references section at the end of this chapter. Adagio was preceded by the codes JAC and JAS3D; JAC is described in Reference 4; JAS3D is described in Reference 5. Presto was preceded by the code Pronto3D. Pronto3D is described in References 6 and 7. Some of the fundamental nonlinear technology used by both Presto and Adagio is described in References 8, 9, and 10. Currently, both Presto and Adagio use the Exodus II database and the XDMF database; Exodus II is more commonly used than XDMF. (Other options may be added in the future.) The Exodus II database format is described in Reference 11, and the XDMF database format is described in Reference 12. Important information about contact is provided in the reference document for ACME [13]. ACME is a third-party library for contact. One of the key concepts for the command structure in the input file is scope. A detailed explanation of scope is provided in Section 1.2. Most of the command lines in Chapter 2 are related to a certain scope rather than to some particular functionality.
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
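The kind of parameter-to-response mapping described above can be illustrated with a deliberately simple sketch. The toy polarization model, parameter names, and numbers below are illustrative stand-ins, not the PEMFC model or DAKOTA interface from this work; the sketch shows only the structure of a one-at-a-time sensitivity study that such a coupling automates.

```python
# Hypothetical one-at-a-time (OAT) sensitivity study on a toy
# polarization-curve model: perturb each parameter, record the
# change in cell voltage at a fixed current load, rank parameters.
import math

def cell_voltage(i, E0=1.0, R_ohm=0.15, i0=1e-3, alpha=0.5):
    """Toy polarization curve: open-circuit voltage minus
    activation (Tafel) and ohmic losses at current density i."""
    b = 0.0257 / alpha                 # RT/(alpha*F) at ~298 K, volts
    eta_act = b * math.log(i / i0)     # activation overpotential
    return E0 - eta_act - R_ohm * i

nominal = {"E0": 1.0, "R_ohm": 0.15, "i0": 1e-3, "alpha": 0.5}
i_load = 0.8                           # operating current density

# Perturb each parameter by +10% and record the response change.
v_base = cell_voltage(i_load, **nominal)
sensitivity = {}
for name, val in nominal.items():
    perturbed = dict(nominal, **{name: 1.1 * val})
    sensitivity[name] = cell_voltage(i_load, **perturbed) - v_base

# Critical parameters are those with the largest response change.
ranked = sorted(sensitivity, key=lambda k: abs(sensitivity[k]),
                reverse=True)
```

A toolkit such as DAKOTA generalizes this loop to many study types (vector/centered parameter studies, sampling, gradients) without hand-editing the model driver.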
Journal of Computational Physics
Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.
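To make the cost argument concrete, here is a brute-force miniature of reliability-based design optimization on a toy problem; it is not the Efficient Global Reliability Analysis method of this paper, and the bar-sizing problem and all numbers are hypothetical. Note that every candidate design re-runs a full Monte Carlo reliability analysis, which is exactly what becomes intractable when the performance function is an expensive finite element model.

```python
# Hypothetical RBDO toy: minimize a bar's cross-sectional area A
# subject to P(stress > yield) <= p_target, with a random load F.
import random

random.seed(0)
YIELD = 250.0                    # yield strength (MPa), deterministic
P_TARGET = 0.01                  # allowed failure probability
loads = [random.gauss(1000.0, 150.0) for _ in range(20000)]  # load (N)

def failure_prob(area_mm2):
    """Monte Carlo estimate of P(F / A > yield)."""
    fails = sum(1 for f in loads if f / area_mm2 > YIELD)
    return fails / len(loads)

# Outer design loop: smallest area satisfying the reliability
# constraint (each step pays for a full reliability analysis).
best_area = None
area = 3.0
while area <= 8.0:
    if failure_prob(area) <= P_TARGET:
        best_area = area
        break
    area += 0.05
```

Surrogate-based reliability methods aim to replace the inner `failure_prob` sampling with far fewer, adaptively chosen evaluations of the true performance function.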
Energy and Fuels
Monolithic photonic integrated circuits (PICs) have a long history reaching back more than 40 years. During that time, and particularly in the past 15 years, the technology has matured and the application space has grown to span sophisticated tunable diode lasers, 40 Gb/s electrical-to-optical signal converters with complex data formats, wavelength multiplexors and routers, as well as chemical/biological sensors. Most of this activity has centered in recent years on optical circuits built on either silicon or InP substrates. This talk will review the three classes of PIC and highlight the unique strengths, and weaknesses, of PICs based on silicon and InP substrates. Examples will be provided from recent R&D activity.
It is known that, in general, the correlation structure in the joint distribution of model parameters is critical to the uncertainty analysis of that model. Very often, however, studies in the literature only report nominal values for parameters inferred from data, along with confidence intervals for these parameters, but no details on the correlation or full joint distribution of these parameters. When neither posterior nor data are available, but only summary statistics such as nominal values and confidence intervals, a joint PDF must be chosen. Given the summary statistics it may not be reasonable nor necessary to assume the parameters are independent random variables. We demonstrate, using a Bayesian inference procedure, how to construct a posterior density for the parameters exhibiting self consistent correlations, in the absence of data, given (1) the fit-model, (2) nominal parameter values, (3) bounds on the parameters, and (4) a postulated statistical model, around the fit-model, for the missing data. Our approach ensures external Bayesian updating while marginalizing over possible data realizations. We then address the matching of given parameter bounds through the choice of hyperparameters, which are introduced in postulating the statistical model, but are not given nominal values. We discuss some possible approaches, including (1) inferring them in a separate Bayesian inference loop and (2) optimization. We also perform an empirical evaluation of the algorithm showing the posterior obtained with this data free inference compares well with the true posterior obtained from inference against the full data set.
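The core idea, that marginalizing over possible data realizations induces self-consistent parameter correlations, can be sketched in miniature. The linear fit-model, noise level, and design points below are illustrative stand-ins, and ordinary least squares refitting substitutes for the full Bayesian updating of the paper; the point is only that the resulting parameter ensemble is correlated even though no real data were used.

```python
# Hypothetical data-free construction: postulate Gaussian noise
# around a nominal linear fit-model y = a + b*x, generate synthetic
# data sets, refit each, and examine the induced (a, b) correlation.
import math
import random

random.seed(1)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]   # design points of the "missing" data
a0, b0 = 2.0, -1.0               # nominal parameter values
sigma = 0.3                      # postulated noise hyperparameter

def ols_line(x, y):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    b = num / den
    return ybar - b * xbar, b

# Marginalize over data realizations drawn around the fit-model.
samples = []
for _ in range(5000):
    ys = [a0 + b0 * xi + random.gauss(0.0, sigma) for xi in xs]
    samples.append(ols_line(xs, ys))

a_s = [s[0] for s in samples]
b_s = [s[1] for s in samples]
ma, mb = sum(a_s) / len(a_s), sum(b_s) / len(b_s)
cov = sum((a - ma) * (b - mb) for a, b in samples) / len(samples)
var_a = sum((a - ma) ** 2 for a in a_s) / len(a_s)
var_b = sum((b - mb) ** 2 for b in b_s) / len(b_s)
corr = cov / math.sqrt(var_a * var_b)   # strongly negative here
```

The strong negative intercept-slope correlation emerges purely from the fit-model geometry and the postulated noise, which is the kind of structure an independence assumption would discard.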
One of the authors previously conjectured that the wrinkling of propagating fronts by weak random advection increases the bulk propagation rate (turbulent burning velocity) in proportion to the 4/3 power of the advection strength. An exact derivation of this scaling is reported. The analysis shows that the coefficient of this scaling is equal to the energy density of a lower-dimensional Burgers fluid with a white-in-time forcing whose spatial structure is expressed in terms of the spatial autocorrelation of the flow that advects the front. The replica method of field theory has been used to derive an upper bound on the coefficient as a function of the spatial autocorrelation. High precision numerics show that the bound is usefully sharp. Implications for strongly advected fronts (e.g., turbulent flames) are noted.
Recent work suggests that cloud effects remain one of the largest sources of uncertainty in model-based estimates of climate sensitivity. In particular, better models of the entrainment rate in stratocumulus-topped mixed layers are needed. More than thirty years ago, a clever laboratory experiment was conducted by McEwan and Paltridge to examine an analog of the entrainment process at the top of stratiform clouds. Sayler and Breidenthal extended this pioneering work and determined the effect of the Richardson number on the dimensionless entrainment rate. The experiments gave hints that the interaction between molecular effects and the one-sided turbulence is crucial for understanding entrainment. From the numerical point of view, large-eddy simulation (LES) cannot explicitly resolve all the fine-scale processes at the entrainment interface. Direct numerical simulation (DNS) is limited in attainable Reynolds number and is not the tool of choice for parameter studies. It is therefore useful to investigate new modeling strategies, such as stochastic turbulence models, which allow sufficient resolution in at least one dimension while having acceptable run times. We will present results of the One-Dimensional Turbulence stochastic simulation model applied to the experimental setup of Sayler and Breidenthal. The results on radiatively induced entrainment follow quite well the scaling of the entrainment rate with the Richardson number that was found experimentally for a set of trials. Moreover, we investigate the influence of molecular effects, the fluid's optical properties, and the artifact of parasitic turbulence observed experimentally in the laminar layer. In the simulations, the parameters are varied systematically over even larger ranges than in the experiment. Based on the results obtained, a more complex parameterization of the entrainment rate than currently discussed in the literature seems to be necessary.
We report air filamentation by a 1550 nm subpicosecond pulse. During filamentation, the continuum generated was less than expected. A large amount of third harmonic was also generated.
In this paper, we report the progress made in our project recently funded by the US Department of Energy (DOE) toward developing a computational capability, which includes a two-phase, three-dimensional PEM (polymer electrolyte membrane) fuel cell model and its coupling with DAKOTA (a design and optimization toolkit developed and being enhanced by Sandia National Laboratories). We first present a brief literature survey in which the prominent/notable PEM fuel cell models developed by various researchers or groups are reviewed. Next, we describe the two-phase, three-dimensional PEM fuel cell model being developed, tested, and later validated by experimental data. Results from case studies are presented to illustrate the utility of our comprehensive, integrated cell model. The coupling between the PEM fuel cell model and DAKOTA is briefly discussed. Our efforts in this DOE-funded project are focused on developing a validated computational capability that can be employed for PEM fuel cell design and optimization.
This paper focuses on the extraction of skeletons of CAD models and its applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton is equidistant from at least two points on the boundary; (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet; and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion on the related work from other researchers in the area of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. The skeletons have proved to be a great shape abstraction tool in analyzing the geometric complexity of CAD models as they are symmetric, simpler (reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and the stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.
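The grassfire view of the skeleton admits a very small didactic sketch: burn inward from the boundary of a discrete shape and keep the cells where opposing fronts meet, i.e. the ridge of the boundary-distance field. The grid, shape, and ridge test below are illustrative simplifications, far cruder than the MAT/VD/CAT algorithms this paper surveys, but they show how a skeleton exposes symmetry and local thickness.

```python
# Toy grassfire skeleton of a 7x15 rectangle of grid cells:
# multi-source BFS from the boundary gives the distance field,
# and local maxima of that field approximate the medial axis.
from collections import deque

H, W = 7, 15
shape = {(r, c) for r in range(H) for c in range(W)}

# Grassfire propagation: BFS seeded at every boundary cell.
dist = {}
frontier = deque()
for (r, c) in shape:
    if r in (0, H - 1) or c in (0, W - 1):
        dist[(r, c)] = 1
        frontier.append((r, c))
while frontier:
    r, c = frontier.popleft()
    for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if nb in shape and nb not in dist:
            dist[nb] = dist[(r, c)] + 1
            frontier.append(nb)

# Skeleton: cells at least as far from the boundary as every
# 4-neighbor (fronts meet here); dist doubles as local thickness.
skeleton = {p for p in shape
            if all(dist.get(nb, 0) <= dist[p]
                   for nb in ((p[0] - 1, p[1]), (p[0] + 1, p[1]),
                              (p[0], p[1] - 1), (p[0], p[1] + 1)))}
```

For the rectangle this recovers the familiar X-with-central-bar medial axis; real CAD skeletons need the cleanup and stability control noted above.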
Communications of the Association for Computing Machinery
In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.
High Energy Density Physics
Results of several experiments aimed at remedying photoresist adhesion failure during spray wet chemical etching of InGaP/GaAs NPN HBTs are reported. Several factors were identified that could influence adhesion and a Design of Experiment (DOE) approach was used to study the effects and interactions of selected factors. The most significant adhesion improvement identified is the incorporation of a native oxide etch immediately prior to the photoresist coat. In addition to improving adhesion, this pre-coat treatment also alters the wet etch profile of (100) GaAs so that the reaction limited etch is more isotropic compared to wafers without surface treatment; the profiles have a positive taper in both the [011] and [011] directions, but the taper angles are not identical. The altered profiles have allowed us to predictably yield fully probe-able HBTs with 5 x 5 µm emitters using 5200 Å evaporated metal without planarization.
Scoping studies have demonstrated that ceragenins, when linked to water-treatment membranes, have the potential to create biofouling-resistant membranes. Ceragenins are synthetically produced molecules that mimic antimicrobial peptides. Evidence includes measurements of CSA-13 inhibiting the growth of and killing planktonic Pseudomonas fluorescens. In addition, imaging of biofilms that were in contact with a ceragenin showed more dead cells relative to live cells than in a biofilm that had not been treated with a ceragenin. This work has demonstrated that ceragenins can be attached to polyamide reverse osmosis (RO) membranes, though further work is needed to improve the uniformity of the attachment. Finally, methods have been developed to use hyperspectral imaging with multivariate curve resolution to view ceragenins attached to the RO membrane. Future work will be conducted to better attach the ceragenins to the RO membranes and more completely test the biocidal effectiveness of the ceragenins on the membranes.
Journal of Chemical Physics
Objectives of the Office of Energy Efficiency and Renewable Energy (EERE) 2009-2010 Studies (Solar, Wind, Geothermal, & Combustion Engine R&D) are to: (1) Demonstrate to investors that EERE research and technology development (R&D) programs & subprograms are 'Worth It'; (2) Develop an improved Benefit-Cost methodology for determining realized economic and other benefits of EERE R&D programs - (a) Model government additionality more thoroughly and on a case-by-case basis; (b) Move beyond economic benefits; and (c) Have each study calculate returns to a whole EERE program/subprogram; and (3) Develop a consistent, workable Methods Guide for independent contractors who will perform the evaluation studies.
This paper describes the development and implementation of an integrated resistor process based on reactively sputtered tantalum nitride. Image reversal lithography was shown to be a superior method for liftoff patterning of these films. The results of a response surface DOE for the sputter deposition of the films are discussed. Several approaches to stabilization baking were examined and the advantages of the hot plate method are shown. In support of a new capability to produce special-purpose HBT-based Small-Scale Integrated Circuits (SSICs), we developed our existing TaN resistor process, designed for research prototyping, into one with greater maturity and robustness. Included in this work was the migration of our TaN deposition process from a research-oriented tool to a tool more suitable for production. Also included was implementation and optimization of a liftoff process for the sputtered TaN to avoid the complicating effects of subtractive etching over potentially sensitive surfaces. Finally, the method and conditions for stabilization baking of the resistors were experimentally determined to complete the full implementation of the resistor module. Much of the work to be described involves the migration between sputter deposition tools - from a Kurt J. Lesker CMS-18 to a Denton Discovery 550. Though they use nominally the same deposition technique (reactive sputtering of Ta with N⁺ in an RF-excited Ar plasma), they differ substantially in their design and produce clearly different results in terms of resistivity, conformity of the film and the difference between as-deposited and stabilized films. We will describe the design of and results from the design of experiments (DOE)-based method of process optimization on the new tool and compare this to what had been used on the old tool.
Most far-field optical imaging systems rely on a lens and spatially-resolved detection to probe distinct locations on the object. We describe and demonstrate a novel high-speed wide-field approach to imaging that instead measures the complex spatial Fourier transform of the object by detecting its spatially-integrated response to dynamic acousto-optically synthesized structured illumination. Tomographic filtered backprojection is applied to reconstruct the object in two or three dimensions. This technique decouples depth-of-field and working-distance from resolution, in contrast to conventional imaging, and can be used to image biological and synthetic structures in fluoresced or scattered light employing coherent or broadband illumination. We discuss the electronically programmable transfer function of the optical system and its implications for imaging dynamic processes. Finally, we present for the first time two-dimensional high-resolution image reconstructions demonstrating a three-orders-of-magnitude improvement in depth-of-field over conventional lens-based microscopy.
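The measurement principle, recording only a spatially integrated response to each synthesized fringe pattern, can be sketched with an idealized one-dimensional toy. Everything below is a conceptual stand-in: a discrete object, perfect cosine fringes, and a noiseless bucket detector, in place of the acousto-optic fringe synthesis and tomographic filtered backprojection of the actual system.

```python
# Toy Fourier-domain imaging: each "measurement" is the object's
# integrated response to one complex fringe pattern, i.e. one
# spatial Fourier coefficient; an inverse DFT recovers the object.
import cmath

N = 16
obj = [0.0] * N
obj[4], obj[10] = 1.0, 0.5       # two point-like emitting features

# One measurement per spatial frequency k: project fringe, then
# integrate over the whole field of view (no spatial resolution).
measurements = []
for k in range(N):
    fringe = [cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N)]
    measurements.append(sum(o * f for o, f in zip(obj, fringe)))

# Reconstruction: inverse discrete Fourier transform of the record.
recon = [abs(sum(m * cmath.exp(2j * cmath.pi * k * x / N)
                 for k, m in enumerate(measurements)) / N)
         for x in range(N)]
```

Because resolution comes from the highest fringe frequency rather than from a lens aperture, depth-of-field and working distance decouple from resolution, as described above.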
We consider the problem of placing sensors in a municipal water network when we can choose both the location of sensors and the sensitivity and specificity of the contamination warning system. Sensor stations in a municipal water distribution network continuously send sensor output information to a centralized computing facility, and event detection systems at the control center determine when to signal an anomaly worthy of response. Although most sensor placement research has assumed perfect anomaly detection, signal analysis software has parameters that control the tradeoff between false alarms and false negatives. We describe a nonlinear sensor placement formulation, which we heuristically optimize with a linear approximation that can be solved as a mixed-integer linear program. We report the results of initial experiments on a real network and discuss tradeoffs between early detection of contamination incidents, and control of false alarms.
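A miniature of this placement problem can make the formulation concrete. The network, scenario impacts, and sensor true-positive rate below are hypothetical, and brute-force enumeration stands in for the mixed-integer linear program of the paper; the true-positive rate is a crude proxy for the sensitivity/specificity tradeoff in the event detection software.

```python
# Hypothetical toy: choose 2 sensor nodes to minimize expected
# detection time over contamination scenarios, with imperfect
# sensors (true-positive rate tpr) and a miss penalty.
from itertools import combinations

T_MISS = 100.0                      # penalty time if never detected
impact = {                          # impact[s][v]: time scenario s
    "s1": {"A": 5.0, "B": 12.0, "C": 30.0},   # first reaches node v
    "s2": {"B": 4.0, "C": 9.0},
    "s3": {"A": 20.0, "C": 6.0},
}
nodes = ["A", "B", "C"]
tpr = 0.9                           # sensor true-positive rate

def expected_time(placement):
    total = 0.0
    for times in impact.values():
        hits = sorted(t for v, t in times.items() if v in placement)
        # First hit sensor detects w.p. tpr; otherwise the plume
        # passes on to the next sensor in time order.
        e, p_reach = 0.0, 1.0
        for t in hits:
            e += p_reach * tpr * t
            p_reach *= (1.0 - tpr)
        e += p_reach * T_MISS       # undetected by all sensors
        total += e
    return total / len(impact)

best = min(combinations(nodes, 2), key=expected_time)
```

At realistic network sizes this enumeration explodes combinatorially, which is why the paper linearizes the formulation into a mixed-integer linear program.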
Future Photovoltaics
Since the energy storage technology market is in a relatively emergent phase, narrowing the gap between pilot-project status and commercialization is fundamental to accelerating this innovative market space. This session will explore regional market design factors to facilitate the storage enterprise. You will also hear about: quantifying transmission and generation efficiency enhancements; resource planning for storage; and assessing market mechanisms to accelerate storage adoption regionally.
We will discuss general mathematical ideas arising in the problems of laser beam shaping and splitting. We will be particularly concerned with questions of scaling and symmetry in such systems.