AUVSI Unmanned Systems North America Conference 2011
Salton, Jonathan R.; Buerger, Stephen P.; Marron, Lisa; Feddema, John; Fischer, Gary; Little, Charles; Spletzer, Barry; Xavier, Patrick; Rizzi, Alfred A.; Murphy, Michael P.; Giarratana, John; Malchano, Matthew D.; Weagle, Christian A.
This paper explores various frameworks to quantify and propagate sources of epistemic and aleatory uncertainty within the context of decision making for assessing system performance relative to design margins of a complex mechanical system. If sufficient data is available for characterizing aleatory uncertainties, probabilistic methods are commonly used for computing response distribution statistics based on input probability distribution specifications. Conversely, for epistemic uncertainties, data is generally too sparse to support objective probabilistic input descriptions, leading to either subjective probabilistic descriptions (e.g., assumed priors in Bayesian analysis) or non-probabilistic methods based on interval specifications. Among the techniques examined in this work are (1) interval analysis; (2) Dempster-Shafer theory of evidence; (3) second-order probability (SOP) analysis, in which the aleatory and epistemic variables are treated separately and a nested iteration is performed, typically sampling epistemic variables on the outer loop and then sampling over aleatory variables on the inner loop; and (4) a Bayesian approach, in which plausible prior distributions describing the epistemic variable are created and updated using available experimental data. This paper compares the results and the information provided by different methods to enable decision making in the context of performance assessment when epistemic uncertainty is considered.
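The nested loop of the second-order probability analysis in item (3) can be sketched as follows; the response function, interval bounds, and sample counts are illustrative assumptions, not the paper's actual problem.

```python
import random

def second_order_probability(response, epistemic_intervals, n_outer=50, n_inner=200, seed=0):
    """Nested-loop second-order probability (SOP) sampling.

    Outer loop: draw epistemic variables from their intervals.
    Inner loop: sample the aleatory variable and collect response statistics.
    Returns one (mean, p95) pair per epistemic realization, i.e. a family
    of response distributions rather than a single one.
    """
    rng = random.Random(seed)
    families = []
    for _ in range(n_outer):
        # Epistemic draw: one point from each interval (no distribution is claimed).
        e = [rng.uniform(lo, hi) for lo, hi in epistemic_intervals]
        # Aleatory draw: inner-loop Monte Carlo over the random variable.
        samples = sorted(response(e, rng.gauss(0.0, 1.0)) for _ in range(n_inner))
        mean = sum(samples) / n_inner
        p95 = samples[int(0.95 * n_inner)]
        families.append((mean, p95))
    return families

# Hypothetical response: epistemic shift plus aleatory scatter.
def demo_response(e, z):
    return e[0] + 0.5 * z

fams = second_order_probability(demo_response, [(1.0, 2.0)])
```

The spread across the outer-loop results conveys the epistemic uncertainty; each inner-loop distribution conveys the aleatory variability for one epistemic realization.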
The inability to do visual solder joint inspection has been a major roadblock to using advanced ICs with high I/O count in area array packaging technologies such as flip-chip, Quad Flat No Lead (QFN), and Ball Grid Arrays (BGAs). In this paper, we report the results of a study to evaluate 3D X-ray computed tomography (3D X-ray CT) as a solder inspection technique for area array package assemblies. We conducted an experiment with board assemblies having intentionally designed solder defects such as cold solder joints, solder-mask defects, unfilled vias in solder pads, and solder pads of different shapes and sizes. We demonstrated that the 3D X-ray CT technique was able to detect all of these defects, making it a valid technique for inspecting solder joints in area array packaging technologies.
This paper discusses the handling and treatment of uncertainties corresponding to relatively few data samples in experimental characterization of random quantities. The importance of this topic extends beyond experimental uncertainty to situations where the derived experimental information is used for model validation or calibration. With very sparse data it is not practical to have a goal of accurately estimating the underlying variability distribution (probability density function, PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a desired percentage of the actual PDF, say 95% included probability, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the random-variable range corresponding to the desired percentage of the actual PDF. The performance of a variety of uncertainty representation techniques is tested and characterized in this paper according to these two opposing objectives. An initial set of test problems and results is presented here from a larger study currently underway.
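One common way to meet these two opposing objectives with sparse data is a distribution-free (Wilks-type) bound based on order statistics; whether the study tests this particular representation is not stated here, so the sketch below is illustrative only.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the `coverage`
    quantile of the (unknown) distribution with the given confidence.
    Distribution-free, one-sided, first-order Wilks criterion:
        achieved confidence = 1 - coverage**n
    """
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

def conservative_upper_bound(samples):
    """Distribution-free conservative bound on the covered range: the sample maximum."""
    return max(samples)
```

For 95% included probability at 95% confidence this gives the classical n = 59; with fewer samples the representation cannot make that guarantee without distributional assumptions, which is precisely the sparse-data tension the paper examines.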
As semantic graph database technology grows to address components ranging from large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a high performance hybrid system comprising computational capability for semantic graph database processing utilizing the multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths of the Billion Triple Challenge 2010 dataset.
This paper presents some statistical concepts and techniques for refining the expression of uncertainty arising from: a) random variability (aleatory uncertainty) of a random quantity; and b) contributed epistemic uncertainty due to limited sampling of the random quantity. The treatment is tailored to handling experimental uncertainty in a context of model validation and calibration. Two particular problems are considered. One involves deconvolving random measurement error from measured random response. The other involves exploiting a relationship between two random variates of a system and an independently characterized probability density of one of the variates.
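For the first problem, if the measurement error is assumed independent and zero-mean, the simplest moment-level deconvolution subtracts variances; this sketch illustrates that assumption and is not necessarily the paper's full treatment.

```python
import math
import statistics

def deconvolved_std(measured, error_std):
    """Remove independent, zero-mean measurement error from observed scatter.

    Assumes measured = true + error, with true and error independent, so
        Var(true) = Var(measured) - Var(error).
    Clamped at zero when sampling noise makes the difference negative.
    """
    var_true = statistics.variance(measured) - error_std ** 2
    return math.sqrt(max(var_true, 0.0))
```

With very few samples the estimated measured variance is itself uncertain, which is why the clamp (and, more generally, the uncertainty treatment in the paper) is needed.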
With increasing levels of parallelism within a compute node, it is important to exploit multiple levels of parallelism even within a single node. We present ShyLU (pronounced "Shy-loo", for Scalable Hybrid LU), a "hybrid-hybrid" solver for general sparse linear systems that is hybrid in two ways: first, it combines direct and iterative methods, with the iterative method based on approximate Schur complements; second, the solver uses two levels of parallelism via hybrid programming (MPI+threads). Our solver is useful both in shared-memory environments and on large parallel computers with distributed memory (as a subdomain solver). We compare the robustness of ShyLU against other algebraic preconditioners. ShyLU scales well up to 192 cores for a given problem size. We also compare the flat MPI performance of ShyLU against a hybrid implementation. We conclude that on present multicore nodes flat MPI is better; however, for future manycore machines (48 or more cores), hybrid/hierarchical algorithms and implementations are important for sustained performance.
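The block-partitioned Schur-complement structure that such hybrid solvers exploit can be illustrated with dense linear algebra; ShyLU itself works on sparse systems and replaces the Schur-complement solve with an iterative method on an approximation, whereas this sketch uses exact dense solves for clarity.

```python
import numpy as np

def schur_solve(A, k):
    """Solve A x = b via a 2x2 block partition and the Schur complement:
        A = [B E; F C],  S = C - F B^{-1} E.
    Returns a solver closure; both block solves are direct here, while a
    ShyLU-style hybrid method would solve S iteratively and approximately.
    """
    B, E = A[:k, :k], A[:k, k:]
    F, C = A[k:, :k], A[k:, k:]
    S = C - F @ np.linalg.solve(B, E)   # Schur complement of B in A

    def solve(b):
        b1, b2 = b[:k], b[k:]
        y = np.linalg.solve(B, b1)                  # forward elimination
        x2 = np.linalg.solve(S, b2 - F @ y)         # interface (Schur) solve
        x1 = np.linalg.solve(B, b1 - E @ x2)        # back substitution
        return np.concatenate([x1, x2])

    return solve
```

The two-level parallelism maps naturally onto this structure: independent subdomain solves with B across MPI ranks, threaded work within each rank.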
As part of the liquefied natural gas (LNG) Cascading Damage Study, a series of structural tests were conducted to investigate the thermally induced fracture of steel plate structures. The thermal stresses were achieved by applying liquid nitrogen (LN2) onto sections of each steel plate. In addition to inducing large thermal stresses, the lowering of the steel temperature simultaneously reduced the fracture toughness. Liquid nitrogen was used as a surrogate for LNG due to safety concerns and because its temperature (-190 °C) is similar to that of LNG (-161 °C). The use of LN2 ensured that the tests could achieve cryogenic temperatures in the range an actual vessel would encounter during an LNG spill. There were four phases to this test series. Phase I was the initial exploratory stage, which was used to develop the testing process. In the Phase II series of tests, larger plates were used and tested until fracture. The plate sizes ranged from 4 ft square pieces to 6 ft square sections with thicknesses from 1/4 inch to 3/4 inch. This phase investigated the cooling rates on larger plates and the effect of different notch geometries (stress concentrations used to initiate brittle fracture). Phase II was divided into two sections, Phase II-A and Phase II-B. Phase II-A used standard A36 steel, while Phase II-B used marine-grade steels. In Phase III, the test structures were significantly larger, in the range of 12 ft by 12 ft by 3 ft high. These structures were designed with more complex geometries to include features similar to those on LNG vessels. The final test phase, Phase IV, investigated differences in the heat transfer (cooling rates) between LNG and LN2. All of the tests conducted in this study are used in subsequent parts of the LNG Cascading Damage Study, specifically the computational analyses.
We analyze the artificial dissipation introduced by a streamline-upwind Petrov-Galerkin finite element method and consider its effect on the conservation of total enthalpy for the Euler and laminar Navier-Stokes equations. We also consider the chemically reacting case. We demonstrate that in general, total enthalpy is not conserved for the important special case of the steady-state Euler equations. A modification to the artificial dissipation is proposed and shown to significantly improve the conservation of total enthalpy.
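For reference, the quantity at issue is the total enthalpy, which the steady Euler equations should conserve along streamlines in the absence of sources:

```latex
H_0 = h + \tfrac{1}{2}\,\lVert \mathbf{u} \rVert^{2},
\qquad
\nabla \cdot \left( \rho\, \mathbf{u}\, H_0 \right) = 0
\quad \text{(steady Euler, no sources)}
```

Artificial dissipation that acts on the conserved variables individually need not respect this combined balance, which is why the stabilization terms require the proposed modification.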
The Sunshine to Petrol effort at Sandia National Laboratories aims to convert CO2 and water to liquid hydrocarbon fuel precursors using concentrated solar energy with redox-active metal oxide systems, such as ferrites: Fe3O4 → 3FeO + 0.5O2 (>1350 °C); 3FeO + CO2 → Fe3O4 + CO (<1200 °C). However, the ferrite materials are not repeatedly reactive on their own and require a support, such as yttria-stabilized zirconia (YSZ). The ferrite-support interaction is not well defined, as there has been little fundamental characterization of these oxides at the high temperatures and conditions present in these cycles. We have investigated the microstructure, structure-property relationships, and the role of the support on redox behavior of the ferrite composites. In-situ capabilities to elucidate chemical reactions under operating conditions have been developed. The synthesis, structural characterization (room- and high-temperature X-ray diffraction, secondary ion mass spectrometry, scanning electron microscopy), and thermogravimetric analysis of YSZ-supported ferrites will be discussed.
A microfluidic RNA interference screening device was designed to study which genes are involved in Rift Valley Fever Virus (RVFV) infection. Spots of small interfering RNA (siRNA) are manually spotted onto a glass microscope slide and aligned to a screening device designed to accommodate cell seeding, siRNA transfection, cell culture, virus infection, and imaging analysis. This portable and disposable PDMS-based microfluidic device for RNAi screening was designed for a 96-well library of transfections against a variety of gene targets. Current results show that transfection of GFP-22 siRNA within the device, as compared to controls, inhibits the expression of GFP produced by recombinant RVFV. This technique can be applied to host-pathogen interactions for highly dangerous systems in BSL-3/4 laboratories, where bulky robotic equipment is not ideal.
This presentation will discuss progress towards developing a large-scale parallel CFD capability using stabilized finite element formulations to simulate turbulent reacting flow and heat transfer in light water nuclear reactors (LWRs). Numerical simulation plays a critical role in the design, certification, and operation of LWRs. The Consortium for Advanced Simulation of Light Water Reactors is a U.S. Department of Energy Innovation Hub that is developing a virtual reactor toolkit that will incorporate science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against operating pressurized water reactors. It will couple state-of-the-art fuel performance, neutronics, thermal-hydraulics (T-H), and structural models with existing tools for systems and safety analysis and will be designed for implementation on both today's leadership-class computers and next-generation advanced architecture platforms. We will first describe the finite element discretization utilizing PSPG, SUPG, and discontinuity-capturing stabilization. We will then discuss our initial turbulence modeling formulations (LES and URANS) and the scalable fully implicit, fully coupled solution methods that are used to solve the challenging systems. These include globalized Newton-Krylov methods for solving the nonlinear systems of equations and preconditioned Krylov techniques. The preconditioners are based on fully-coupled algebraic multigrid and approximate block factorization preconditioners. We will discuss how these methods provide a powerful integration path for multiscale coupling to the neutronics and structures applications. Initial results on scalability will be presented. Finally, we will comment on our use of embedded technology and how this capability impacts the application of implicit methods, sensitivity analysis, and UQ.
Proceedings of the European Conference on Radiation and its Effects on Components and Systems, RADECS
Schwank, James R.; Shaneyfelt, Marty R.; Ferlet-Cavrois, Véronique; Dodd, Paul E.; Blackmore, Ewart W.; Pellish, Jonathan A.; Rodbell, Kenneth P.; Heidel, David F.; Marshall, Paul W.; LaBel, Kenneth A.; Gouker, Pascale M.; Tam, Nelson; Wong, Richard; Wen, Shi J.; Reed, Robert A.; Dalton, Scott M.; Swanson, Scot E.
Marine hydrokinetic energy is renewable electricity converted from the kinetic energy of ocean waves, currents, and tides, or from thermal gradients. Currently an emerging global industry is focused on developing novel technology to harness this sustainable power. These alternative energy devices require advances in anticorrosion and antibiofouling coatings to enhance lifetime and performance. In order to understand the microbial-nanomaterial interaction as well as the nanomaterial corrosion process, we have elected to examine a variety of metallic, oxide, and phosphate based nanomaterials. The synthesis of these materials using solution precipitation and solvothermal routes, along with their full characterization, will be presented.
Low Temperature Co-Fired Ceramic (LTCC) is a very attractive material option for advanced packaging. For applications, a variety of features are printed in the base material: thermal and electrical vias, resistors, and solder pads, to name a few. Most of these features contain materials that are thermally and elastically mismatched with the LTCC, producing a localized residual stress. These stresses impact the strength and reliability of the LTCC package. Here we present results and analysis for the strength and reliability assessment of an LTCC (DuPont™ 951) with and without Au vias. The reliability of the ceramic material is assessed from the perspective of its susceptibility to sub-critical crack growth (SCG). Metallic vias can significantly lower the strength of the LTCC; however, their presence does not change the measured susceptibility of the material to SCG. Using our experimental data and empirical descriptions of SCG laws, the safe design life for LTCC packages under a particular stress state is estimated.
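The power-law scaling behind such SCG life estimates can be sketched as follows; the reference stress, life, and exponent below are hypothetical values, not the fitted parameters for DuPont 951.

```python
def scg_design_life(sigma_applied, sigma_ref, t_ref, n):
    """Power-law subcritical-crack-growth life scaling.

    For a crack-velocity law da/dt = A * K^n, time to failure scales as
    sigma**(-n), so the life at an applied stress can be scaled from a
    reference (stress, life) pair measured experimentally. Illustrative
    only; a real assessment uses the fitted SCG parameters and the full
    stress state of the package.
    """
    return t_ref * (sigma_ref / sigma_applied) ** n
```

With the large exponents typical of ceramics, halving the applied stress extends the predicted life by orders of magnitude, which is why accurately measuring n matters for safe-life design.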
Tran, Hy D.; Emtman, Casey; Salsbury, James G.; Wright, William; Zwilling, Avron
A mesoscale dimensional artifact based on silicon bulk micromachining fabrication has been developed and manufactured with the intention of evaluating the artifact both on a high precision coordinate measuring machine (CMM) and video-probe based measuring systems. This hybrid artifact has features that can be located by both a touch probe and a video probe system. A key feature is that the physical edge can be located using a touch probe CMM, and this same physical edge can also be located using a video probe. While video-probe based systems are commonly used to inspect mesoscale mechanical components, a video-probe system's certified accuracy is generally much worse than its repeatability. To solve this problem, an artifact has been developed which can be calibrated using a commercially available high-accuracy tactile system and then be used to calibrate typical production vision-based measurement systems. This allows for error mapping to a higher degree of accuracy than is possible with a typical chrome-on-glass reference artifact. Details of the designed features and manufacturing process of the hybrid dimensional artifact are given, and a comparison of the designed features to the measured features of the manufactured artifact is presented and discussed. Measurement results are presented using a meter-scale CMM with submicron measurement uncertainty; an optical CMM with submicron measurement uncertainty; a micro-CMM with submicron measurement uncertainty using three different probes; and a form contour instrument.
Because of past military operations, lack of upkeep, and looting, there are now enormous radioactive waste problems in Iraq. These waste problems include destroyed nuclear facilities, uncharacterized radioactive wastes, liquid radioactive waste in underground tanks, wastes related to the production of yellowcake, sealed radioactive sources, activated metals, and contaminated metals that must be constantly guarded. Iraq currently lacks the trained personnel and the regulatory and physical infrastructure to safely and securely manage these facilities and wastes. In 2005 the International Atomic Energy Agency (IAEA) agreed to organize an international cooperative program to assist Iraq with these issues. Soon after, the Iraq Nuclear Facility Dismantlement and Disposal Program (the NDs Program) was initiated by the U.S. Department of State (DOS) to support the IAEA and assist the Government of Iraq (GOI) in eliminating the threats from poorly controlled radioactive materials. The Iraq NDs Program is providing support for the IAEA plus training, consultation, and limited equipment to the GOI. The GOI owns the problems and will be responsible for implementation of the Iraq NDs Program.
Inference techniques play a central role in many cognitive systems. They transform low-level observations of the environment into high-level, actionable knowledge which then gets used by mechanisms that drive action, problem-solving, and learning. This paper presents an initial effort at combining results from AI and psychology into a pragmatic and scalable computational reasoning system. Our approach combines a numeric notion of plausibility with first-order logic to produce an incremental inference engine that is guided by heuristics derived from the psychological literature. We illustrate core ideas with detailed examples and discuss the advantages of the approach with respect to cognitive systems.
Conventional full spectrum gamma spectroscopic analysis has the objective of quantitative identification of all the radionuclides present in a measurement. For detectors with low energy resolution, such as NaI, when photopeaks alone are not sufficient for complete isotopic identification, such analysis requires template spectra for all the radionuclides present in the measurement. When many radionuclides are present it is difficult to make the correct identification, and this process often requires many attempts by highly skilled spectroscopists to obtain a statistically valid solution. A previous report investigated using the targeted principal component analysis (TPCA) method for detection of embedded sources in RPM applications. This method uses spatial/temporal information from multiple spectral measurements to test the hypothesis of the presence of a target spectrum of interest in these measurements, without the need to identify all the other radionuclides present. The previous analysis showed that the TPCA method has significant potential for automated detection of target radionuclides of interest, but did not include the effects of shielding. This report complements the previous analysis by including the effects of spectral distortion due to shielding for the same problem of detection of embedded sources. Two examples, one with one target radionuclide and the other with two, show that the TPCA method can successfully detect shielded targets in the presence of many other radionuclides. The shielding parameters are determined as part of the optimization process using interpolation of library spectra defined on a 2D grid of atomic numbers and areal densities.
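A simplified stand-in for the targeted principal-component idea, without the shielding-interpolation step and not the report's exact TPCA algorithm, can be sketched as: build a principal subspace from multiple spectral measurements, then score how much of a candidate target spectrum that subspace explains.

```python
import numpy as np

def target_subspace_score(spectra, target, n_components=3):
    """Schematic targeted principal-component test (illustrative only).

    spectra: 2-D array, one measured spectrum per row.
    target:  1-D candidate target spectrum.
    Returns the fraction of the target's energy explained by the top
    principal directions of the measurements; scores near 1 are
    consistent with the target being present, without identifying any
    of the other radionuclides contributing to the measurements.
    """
    X = spectra - spectra.mean(axis=0)            # mean-center measurements
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_components].T                       # top principal directions
    proj = V @ (V.T @ target)                     # project target onto subspace
    return float((proj @ proj) / (target @ target))
```

The report's method additionally optimizes shielding parameters (atomic number, areal density) over a 2D library grid so that distorted targets still score well.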
In 1965 Gordon Moore wrote an article claiming that integrated circuit density would scale exponentially. His prediction has remained valid for more than four decades. Integrated circuits have changed all aspects of everyday life. They are also the 'heart and soul' of modern systems for defense, national infrastructure, and intelligence applications. The United States government needs an assured and trusted microelectronics supply for military systems. However, migration of microelectronics design and manufacturing from the United States to other countries in recent years has placed the supply of trusted microelectronics in jeopardy. Prevailing wisdom dictates that it is necessary to use microelectronics fabricated in a state-of-the-art technology for highest performance and military system superiority. Close examination of silicon microelectronics technology evolution and Moore's Law reveals that this prevailing wisdom is not necessarily true. This presents the US government with the possibility of a totally new approach to acquiring trusted microelectronics.
This Pollution Prevention Opportunity Assessment (PPOA) was conducted for the MicroFab and SiFab facilities at Sandia National Laboratories/New Mexico in Fiscal Year 2011. The primary purpose of this PPOA is to provide recommendations to assist organizations in reducing the generation of waste and improving the efficiency of their processes and procedures. This report contains a summary of the information collected, the analyses performed, and recommended options for implementation. The Sandia National Laboratories Environmental Management System (EMS) and Pollution Prevention (P2) staff will continue to work with the organizations to implement the recommendations.
A dynamic reactor model has been developed for pulse-type reactor applications. The model predicts reactor power, axial and radial fuel expansion, prompt and delayed neutron population, and prompt and delayed gamma population. All model predictions are made as a function of time. The model includes the reactivity effect of fuel expansion on a dynamic timescale as a feedback mechanism for reactor power. All inputs to the model are calculated from first principles, either directly by solving systems of equations, or indirectly from Monte Carlo N-Particle Transport Code (MCNP) derived results. The model does not include any empirical parameters that can be adjusted to match experimental data. Comparisons of model predictions to actual Sandia Pulse Reactor SPR-III pulses show very good agreement for a full range of pulse magnitudes. The model is also applied to Z-pinch externally driven neutron assembly (ZEDNA) type reactor designs to model both normal and off-normal ZEDNA operations.
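The prompt/delayed neutron bookkeeping at the core of such a model can be sketched with one delayed-neutron group; the kinetics parameters below are generic illustrative values, and the reported model additionally includes fuel-expansion reactivity feedback and gamma populations computed from first principles.

```python
def point_kinetics(rho, beta=0.0065, Lambda=1e-5, lam=0.08,
                   n0=1.0, dt=1e-4, t_end=0.1):
    """Minimal one-delayed-group point-kinetics integration (explicit Euler):
        dn/dt = ((rho - beta)/Lambda) * n + lam * C
        dC/dt = (beta/Lambda) * n - lam * C
    n is the neutron population, C the delayed-neutron precursor
    concentration, rho the (here constant) reactivity. A sketch of the
    prompt/delayed coupling only; no feedback mechanisms are modeled.
    """
    n = n0
    C = beta * n0 / (lam * Lambda)   # equilibrium precursor level for n0
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n
```

At zero reactivity the population holds steady; a positive step below prompt critical produces the familiar prompt jump followed by slow delayed-neutron growth, which in the full model is then turned over by fuel-expansion feedback.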
We present the results of a three year LDRD project that has focused on overcoming major materials roadblocks to achieving AlGaN-based deep-UV laser diodes. We describe our growth approach to achieving AlGaN templates with greater than ten times reduction of threading dislocations which resulted in greater than seven times enhancement of AlGaN quantum well photoluminescence and 15 times increase in electroluminescence from LED test structures. We describe the application of deep-level optical spectroscopy to AlGaN epilayers to quantify deep level energies and densities and further correlate defect properties with AlGaN luminescence efficiency. We further review our development of p-type short period superlattice structures as an approach to mitigate the high acceptor activation energies in AlGaN alloys. Finally, we describe our laser diode fabrication process, highlighting the development of highly vertical and smooth etched laser facets, as well as characterization of resulting laser heterostructures.
Structural Considerations for Solar Installers provides a comprehensive outline of structural considerations associated with simplified solar installations and recommends a set of best practices installers can follow when assessing such considerations. Information in the manual comes from engineering and solar experts as well as case studies. The objectives of the manual are to ensure safety and structural durability for rooftop solar installations and to potentially accelerate the permitting process by identifying and remedying structural issues prior to installation. The purpose of this document is to provide tools and guidelines for installers to help ensure that residential photovoltaic (PV) power systems are properly specified and installed with respect to the continuing structural integrity of the building.
A microkinetic chemical reaction mechanism capable of describing both the storage and regeneration processes in a fully formulated lean NOx trap (LNT) is presented. The mechanism includes steps occurring on the precious metal, barium oxide (NOx storage), and cerium oxide (oxygen storage) sites of the catalyst. The complete reaction set is used in conjunction with a transient plug flow reactor code (including boundary layer mass transfer) to simulate not only a set of long storage/regeneration cycles with a CO/H2 reductant, but also a series of steady flow temperature sweep experiments that were previously analyzed with just a precious metal mechanism and a steady state code neglecting mass transfer. The results show that, while mass transfer effects are generally minor, NOx storage is not negligible during some of the temperature ramps, necessitating a re-evaluation of the precious metal kinetic parameters. The parameters for the entire mechanism are inferred by finding the best overall fit to the complete set of experiments. Rigorous thermodynamic consistency is enforced for parallel reaction pathways and with respect to known data for all of the gas phase species involved. It is found that, with a few minor exceptions, all of the basic experimental observations can be reproduced with the transient simulations. In addition to accounting for normal cycling behavior, the final mechanism should provide a starting point for the description of further LNT phenomena such as desulfation and the role of alternative reductants.
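The thermodynamic-consistency constraint, i.e., that each reverse rate constant follows from the forward rate and the reaction thermochemistry rather than being fit independently, can be sketched as follows with illustrative parameters (not the LNT mechanism's actual values).

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Forward rate constant, k = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

def reverse_rate(A, Ea, dH, dS, T):
    """Thermodynamically consistent reverse rate constant:
        k_r = k_f / K_eq,   K_eq = exp(dS/R - dH/(R T)).
    Enforcing this for every reversible step is what keeps parallel
    pathways mutually consistent with the known gas-phase thermochemistry.
    """
    K_eq = math.exp(dS / R - dH / (R * T))
    return arrhenius(A, Ea, T) / K_eq
```

Because K_eq is fixed by the species' enthalpies and entropies, only the forward parameters remain free during the fit to the storage/regeneration and temperature-sweep experiments.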
Development of an effective strategy for shelter and evacuation is among the most important planning tasks in preparation for response to a low-yield nuclear detonation in an urban area. Extensive studies have been performed and guidance published that highlight the key principles for saving lives following such an event. However, region-specific data are important in the planning process as well. This study examines some of the unique regional factors that impact planning for a 10 kT detonation in the National Capital Region. The work utilizes a single scenario to examine regional impacts as well as the shelter-evacuate decision alternatives at one exemplary point. For most Washington, DC neighborhoods, the excellent assessed quality of available shelter makes shelter-in-place or selective transit to a nearby shelter a compelling post-detonation strategy.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
For a CASL grid-to-rod fretting problem, Sandia's Percept software was used in conjunction with the Sierra Mechanics suite to analyze the convergence behavior of the data transfer from a fluid simulation to a solid mechanics simulation. An analytic function, with properties relatively close to numerically computed fluid approximations, was chosen to represent the pressure solution in the fluid domain. The analytic pressure was interpolated on a sequence of grids on the fluid domain, and transferred onto a separate sequence of grids in the solid domain. The error in the resulting pressure in the solid domain was measured with respect to the analytic pressure. The error in pressure approached zero as both the fluid and solid meshes were refined. The convergence rate of the transfer algorithm depended on whether the source grid resolution was the same as or finer than the target grid resolution. In addition, using a feature coverage analysis, we found gaps in the solid mechanics code verification test suite directly relevant to the prototype CASL GTRF simulations.
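The transfer-convergence measurement can be illustrated in one dimension: sample an analytic field on a source grid, interpolate it onto a target grid, and track the error against the analytic solution under refinement. This is a schematic analogue of the study, not the Percept/Sierra transfer algorithm.

```python
import math

def interp(xs, ys, x):
    """Piecewise-linear interpolation on a sorted 1-D grid (linear scan)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - w) * ys[i] + w * ys[i + 1]
    return ys[-1]

def transfer_error(n_src, n_tgt, f=lambda x: math.sin(math.pi * x)):
    """1-D analogue of the transfer study: sample an analytic 'pressure'
    on a source (fluid) grid, transfer it to a target (solid) grid by
    interpolation, and return the max error against the analytic field.
    """
    xs = [i / n_src for i in range(n_src + 1)]
    ys = [f(x) for x in xs]
    xt = [i / n_tgt for i in range(n_tgt + 1)]
    return max(abs(interp(xs, ys, x) - f(x)) for x in xt)
```

With linear interpolation the error falls roughly as the square of the source spacing once the source grid is at least as fine as the target, mirroring the resolution-dependence observed in the study.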
Engineered nanomaterials (ENMs) are increasingly being used in commercial products, particularly in the biomedical, cosmetic, and clothing industries. For example, pants and shirts are routinely manufactured with silver nanoparticles to render them 'wrinkle-free.' Despite the growing applications, the associated environmental health and safety (EHS) impacts are completely unknown. The significance of this problem became pervasive within the general public when Prince Charles authored an article in 2004 warning of the potential social, ethical, health, and environmental issues connected to nanotechnology. The EHS concerns, however, continued to receive relatively little consideration from federal agencies as compared with large investments in basic nanoscience R&D. The mounting literature regarding the toxicology of ENMs (e.g., the ability of inhaled nanoparticles to cross the blood-brain barrier; Kwon et al., 2008, J. Occup. Health 50, 1) has spurred a recent realization within the NNI and other federal agencies that the EHS impacts related to nanotechnology must be addressed now. In our study we proposed to address critical aspects of this problem by developing primary correlations between nanoparticle properties and their effects on cell health and toxicity. A critical challenge embodied within this problem arises from the ability to synthesize nanoparticles with a wide array of physical properties (e.g., size, shape, composition, surface chemistry, etc.), which in turn creates an immense, multidimensional problem in assessing toxicological effects. In this work we first investigated varying sizes of quantum dots (Qdots) and their ability to cross cell membranes based on their aspect ratio utilizing hyperspectral confocal fluorescence microscopy. We then studied toxicity of epithelial cell lines that were exposed to different sized gold and silver nanoparticles using advanced imaging techniques, biochemical analyses, and optical and mass spectrometry methods. 
Finally, we evaluated a new assay to measure transglutaminase (TG) activity, a potential marker for cell toxicity.
The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and in the constitutive models those simulations inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale M&S, and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.
The United States and China are committed to cooperation to address the challenges of the next century. Technical cooperation, building on a long tradition of technical exchange between the two countries, can play an important role. This paper focuses on technical cooperation between the United States and China in the areas of nonproliferation, arms control and other nuclear security topics. It reviews cooperation during the 1990s on nonproliferation and arms control under the U.S.-China Arms Control Exchange, discusses examples of ongoing activities under the Peaceful Uses of Technology Agreement to enhance security of nuclear and radiological material, and suggests opportunities for expanding technical cooperation between the defense nuclear laboratories of both countries to address a broader range of nuclear security topics.
Increasing interest in marine hydrokinetic (MHK) energy has spurred significant research on optimal placement of emerging technologies to maximize energy conversion and minimize potential effects on the environment. However, these devices will be deployed in arrays in order to reduce the cost of energy, and little work has been done to understand the impact these arrays will have on flow dynamics, sediment transport, and benthic habitats, or how best to optimize arrays for both performance and environmental considerations. An "MHK-friendly" routine has been developed and implemented by Sandia National Laboratories (SNL) in the flow, sediment dynamics, and water-quality code SNL-EFDC, and has been verified and validated against three separate sets of experimental data. With SNL-EFDC, water-quality and array-optimization studies can be carried out to optimize an MHK array within a resource and study its effects on the environment. The present study examines the effect of streamwise and spanwise spacing on array performance. Various hypothetical MHK array configurations are simulated within a trapezoidal river channel. Results show a non-linear increase in array-power efficiency as turbine spacing is increased in each direction, which matches trends seen experimentally. While the sediment transport routines were not used in these simulations, the flow acceleration seen around the MHK arrays has the potential to significantly affect the sediment transport characteristics and benthic habitat of a resource.
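The non-linear spacing trend can be sketched with a toy saturating wake-recovery curve: downstream turbines see more of the free-stream power as spacing grows, with diminishing returns once wakes have largely recovered. The exponential form and the coefficient `k` are illustrative assumptions only, not the SNL-EFDC model or its calibrated parameters.

```python
import math

def array_efficiency(spacing_diameters, k=0.35):
    """Toy wake-recovery model: fraction of free-stream power available
    to a downstream turbine as a function of spacing (rotor diameters).
    Illustrative of the qualitative trend only; not SNL-EFDC."""
    return 1.0 - math.exp(-k * spacing_diameters)

# Efficiency rises non-linearly with spacing: large early gains,
# then diminishing returns as the wake fully recovers.
gains = [array_efficiency(s) for s in (2, 4, 8, 16)]
assert all(b > a for a, b in zip(gains, gains[1:]))    # increasing
assert (gains[1] - gains[0]) > (gains[3] - gains[2])   # diminishing returns
```

A curve of this shape is why spacing studies matter economically: beyond some spacing, extra river frontage buys almost no additional power.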
The preliminary design for a three-bladed cross-flow rotor for a reference marine hydrokinetic turbine is presented. A rotor performance design code is described, along with modifications to the code to allow prediction of blade support strut drag as well as interference between two counter-rotating rotors. The rotor is designed to operate in a reference site corresponding to a riverine environment. Basic rotor performance and rigid-body loads calculations are performed to size the rotor elements and select the operating speed range. The preliminary design is verified with a simple finite element model that provides estimates of bending stresses during operation. A concept for joining the blades and support struts is developed and analyzed with a separate finite element analysis. Rotor mass, production costs, and annual energy capture are estimated in order to allow calculations of system cost-of-energy.
In the interest of providing an economically sensible use for the copious small-diameter wood in Northern New Mexico, an economic study focused on mobile pyrolysis was performed. Mobile pyrolysis was selected because transportation costs limit the viability of a dedicated pyrolysis plant, and the relative simplicity of pyrolysis compared to other technology solutions lends itself to mobile reactor design. A bench-scale pyrolysis system was used to study the wood pyrolysis process and to obtain otherwise unavailable performance data under conditions theorized to be optimal for the regional problem. Pyrolysis converts wood to three main products: fixed gases, liquid pyrolysis oil, and char. The fixed gases are useful as low-quality fuel and may have sufficient chemical energy to power a mobile system, eliminating the need for an external power source. The majority of the energy content of the pyrolysis gas is associated with carbon monoxide, followed by light hydrocarbons. The liquids are well characterized in the historical literature and have heating values slightly lower than, but comparable to, the feedstock. They consist of water and a mix of hundreds of hydrocarbons, are acidic, and are unstable, increasing in viscosity during storage. Up to 60% of the biomass in bench-scale testing was converted to liquids. Lower furnace temperatures (approximately 550 °C) are preferred because of the decreased propensity for deposits and the high liquid yields. A mobile pyrolysis system would be designed with low maintenance requirements, should be able to access wilderness areas, and should require no more than one or two people to operate. The techno-economic analysis assesses fixed and variable costs. It suggests that economy of scale is an important factor, as higher throughput directly leads to improved system economic viability. Labor and capital equipment are the driving factors in the viability of the system.
The break-even selling price for the baseline assumptions is about $11/GJ; however, it may be possible to reduce this value by 20-30% depending on other factors evaluated in the non-baseline scenarios. Assigning a value to the char co-product improves the economics. Significantly lower break-even costs are possible in an international setting, as labor is the dominant production cost.
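The scale and co-product effects described above follow from a very simple cost identity: break-even price is amortized fixed cost plus per-unit variable cost, less any co-product credit. The sketch below uses invented placeholder numbers, not the study's actual cost inputs.

```python
def break_even_price(fixed_cost, variable_cost_per_gj, output_gj, char_credit=0.0):
    """Break-even selling price ($/GJ) for a toy mobile-pyrolysis cost
    model: fixed costs spread over annual output, plus variable cost,
    minus a credit for the char co-product. Inputs are illustrative."""
    return fixed_cost / output_gj + variable_cost_per_gj - char_credit

# Economy of scale: doubling throughput halves the fixed-cost share.
small = break_even_price(fixed_cost=50_000, variable_cost_per_gj=6.0, output_gj=10_000)
large = break_even_price(fixed_cost=50_000, variable_cost_per_gj=6.0, output_gj=20_000)
assert large < small

# A char co-product credit lowers the break-even price directly.
assert break_even_price(50_000, 6.0, 10_000, char_credit=1.5) < small
```

Even this toy form shows why labor and capital dominate: both enter the fixed and variable terms that the selling price must cover, while throughput only dilutes the fixed term.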
Since completion of the Solar Two molten-salt power tower demonstration in 1999, the solar industry has been developing initial commercial-scale projects that are 3 to 14 times larger. Like Solar Two, these initial plants will power subcritical steam-Rankine cycles using molten salt at a temperature of 565 °C. The main question explored in this study is whether there is significant economic benefit in developing future molten-salt plants that operate at a higher receiver outlet temperature. Higher temperatures would allow the use of supercritical steam cycles that achieve improved efficiency relative to today's subcritical cycle (approximately 50% versus 42%). The levelized cost of electricity (LCOE) of a 565 °C subcritical baseline plant was compared with possible future-generation plants operating at 600 or 650 °C. The analysis suggests that an approximately 8% reduction in LCOE can be expected by raising the salt temperature to 650 °C. However, most of that benefit can be achieved by raising the temperature to only 600 °C. The study also produced several other important results for possible next-generation power towers: (1) an evaluation of receiver-tube materials capable of higher fluxes and temperatures, (2) suggested plant reliability improvements based on a detailed evaluation of the Solar Two experience, and (3) a thorough evaluation of analysis uncertainties.
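The direction of the efficiency argument can be sketched with a bare-bones LCOE ratio: a higher-efficiency cycle converts the same collected thermal energy into more electricity, so cost per MWh falls. All numbers below (capital, O&M, capital recovery factor, thermal output) are illustrative placeholders, not the study's inputs.

```python
def lcoe(capital, o_and_m, annual_mwh, crf=0.08):
    """Simplified levelized cost of electricity ($/MWh): annualized
    capital plus annual O&M, divided by annual generation.
    All inputs here are illustrative, not the study's cost data."""
    return (capital * crf + o_and_m) / annual_mwh

# Raising the salt temperature lifts cycle efficiency (roughly 42% -> 50%),
# so the same thermal energy yields more electricity.
thermal_mwh = 1_000_000
base = lcoe(capital=1.0e9, o_and_m=2.0e7, annual_mwh=0.42 * thermal_mwh)
hot = lcoe(capital=1.0e9, o_and_m=2.0e7, annual_mwh=0.50 * thermal_mwh)
assert hot < base
```

This toy model overstates the benefit relative to the study's approximately 8% finding, because it holds capital and O&M fixed; in practice, higher temperatures require more expensive receiver-tube materials and incur other losses that claw back part of the efficiency gain.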
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality-constrained optimization problems with constraints given by partial differential equations (PDEs).