A variety of performance tests are described relating to: Material Transfers; Emergency Evacuation; Alarm Response Assessment; and an Enhanced Limited Scope Performance Test (ELSPT). Procedures are given for: nuclear material physical inventory and discrepancy; material transfers; and emergency evacuation.
A laboratory system was constructed that allows super-micron particles to be aged for long periods of time under conditions that can simulate a range of natural environments, including relative humidity, oxidizing chemicals, organics, and simulated solar radiation. Two proof-of-concept experiments, one using a non-biological simulant for biological particles and one using a biological simulant, demonstrate the utility of these types of aging experiments. Green Visolite®, which is often used as a tracer material for model validation experiments, does not degrade with exposure to simulated solar radiation, whereas the actual biological material does. This indicates that Visolite® should be a good tracer compound for mapping the extent of a biological release using fluorescence as an indicator, but that it should not be used to simulate the decay of a biological particle exposed to sunlight. The decay in the fluorescence measured for B. thuringiensis is similar to what has been previously observed in outdoor environments.
This report describes the condition of the research environment at Sandia National Laboratories and outlines key environment improvement activities undertaken by the Office of the Chief Technology Officer and the Sandia Research Leadership Team during fiscal year 2013. The report also outlines Lab-level objectives related to the research environment for fiscal year 2014.
When multiple channels are employed in a pulse-Doppler radar, achieving and maintaining balance between the channels is problematic. In some circumstances the channels may be commutated to achieve adequate balance. Commutation is the switching, trading, toggling, or multiplexing of the channels between signal paths. Commutation allows modulating the imbalance energy away from the balanced energy in Doppler, where it can be mitigated with filtering.
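As a minimal illustrative sketch (not the radar's actual implementation; the two-channel gains and pulse count below are hypothetical), the Python snippet shows the core idea: toggling the channel assignment from pulse to pulse multiplies the differential (imbalance) term by (-1)^n in slow time, which moves its energy from zero Doppler to the PRF/2 bin, where a Doppler filter can remove it while the balanced energy remains at DC.

import numpy as np

n_pulses = 64
g_a, g_b = 1.00, 1.05              # hypothetical channel gains (5% imbalance)
target = np.ones(n_pulses)         # stationary scatterer: energy at zero Doppler

# Commutate: use channel A on even pulses and channel B on odd pulses, so the
# common (balanced) gain stays at DC while the differential (imbalance) gain is
# modulated by (-1)^n and lands in the PRF/2 Doppler bin.
gains = np.where(np.arange(n_pulses) % 2 == 0, g_a, g_b)
spectrum = np.abs(np.fft.fft(gains * target))

print("zero-Doppler bin (balanced energy):", spectrum[0])
print("PRF/2 bin (modulated imbalance):   ", spectrum[n_pulses // 2])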
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor such models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Here we discuss an improved Corcos (1963) style cross-spectral density utilizing zero-pressure-gradient, supersonic data sets (Beresh et al. (2013)). Using the connection between narrow-band measurements and the broadband cross-spectral density, i.e., Γ(ξ, η, ω) = Φ(ω) A(ωξ/U) B(ωη/U) exp(-iωξ/U), we focus on estimating coherence expressions of the form A(ξω_nb/U) and B(ηω_nb/U), where ω_nb denotes the narrow-band frequency, i.e., the band center frequency, and ξ and η are the sensor spacings in the streamwise/longitudinal and cross-stream/lateral directions, respectively. A methodology is discussed for estimating the parameters that retains the Corcos exponential functional form, A(ξω/U) = exp(-k_long ξω/U) and B(ηω/U) = exp(-k_lat ηω/U), but identifies new parameters (constants) consistent with the Beresh et al. data sets. The Corcos result requires that the data be properly explained by the self-similar variables ξω/U and ηω/U. The longitudinal (streamwise) variable ξω/U tends to provide a better data collapse, while, consistent with the literature, the lateral variable ηω/U is only successful at higher band center frequencies. Assuming the similarity variables provide a useful description of the data, the Beresh et al. data sets yield a longitudinal coherence decay constant of k_long ≈ 0.28-0.36, approximately 3x larger than the “traditional” (low speed, large Reynolds number, zero pressure gradient) value of k_long ≈ 0.11. We suggest that the most likely reason the Beresh et al. data sets incur increased longitudinal decay, and hence reduced coherence lengths, is wall-shear-induced compression causing an adverse pressure gradient. Focusing on the higher band center frequency measurements where the frequency-dependent similarity variables are applicable, the lateral or transverse coherence decay constant k_lat ≈ 0.7 is consistent with the “traditional” (low speed, large Reynolds number, zero pressure gradient) value. It should be noted that the longitudinal/streamwise coherence decay deviates from the value observed by other researchers, while the lateral/cross-stream value is consistent with what has been observed by other researchers. We believe that although the measurements used to obtain the new decay constant estimates are from internal wind tunnel tests, they likely provide a useful estimate of expected reentry flow behavior and are therefore recommended for use. These data could also be useful in determining the uncertainty of correlation length for an uncertainty quantification (UQ) analysis.
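Collecting the relations above into one expression (a restatement for reference, with absolute values made explicit; U is the convection velocity), the Corcos-form model with the constants reported here reads

\[
\Gamma(\xi,\eta,\omega) = \Phi(\omega)\, e^{-k_{\mathrm{long}}\,|\omega\xi/U|}\, e^{-k_{\mathrm{lat}}\,|\omega\eta/U|}\, e^{-i\,\omega\xi/U},
\qquad k_{\mathrm{long}} \approx 0.28 \text{ to } 0.36 \ (\text{vs. } \approx 0.11 \text{ traditional}), \quad k_{\mathrm{lat}} \approx 0.7 .
\]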
This report reviews the method recommended by the U.S. Food and Drug Administration (FDA) for calculating Derived Intervention Levels (DILs) and identifies potential improvements to the DIL calculation method to support more accurate ingestion pathway analyses and protective action decisions. Further, this report proposes an alternate method for use by the Federal Radiological Monitoring and Assessment Center (FRMAC) to calculate FRMAC Intervention Levels (FILs). The default approach of the FRMAC during an emergency response is to use the FDA-recommended methods. However, FRMAC recommends implementing the FIL method because we believe it to be more technically accurate. FRMAC will only implement the FIL method when approved by the FDA representative on the Federal Advisory Team for Environment, Food, and Health.
A model is presented for recombination of charge carriers at displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers and defects within a representative spherically symmetric cluster. The initial radial defect profiles within the cluster were chosen through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Charging of the defects can produce high electric fields within the cluster which may influence transport and reaction of carriers and defects, and which may enhance carrier recombination through band-to-trap tunneling. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to pulsed neutron irradiation.
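For orientation only, the continuum balance described above, written for a single carrier or defect concentration n(r, t) in the spherically symmetric cluster, has the generic drift-diffusion-reaction form (the specific reaction terms, charge states, and sign conventions are those of the report's model and are not reproduced here):

\[
\frac{\partial n}{\partial t} = \frac{1}{r^{2}} \frac{\partial}{\partial r}\!\left[ r^{2} \left( D \frac{\partial n}{\partial r} - \mu\, n\, E(r) \right) \right] + R(n, \ldots),
\]

with D the diffusivity, \(\mu\) the mobility, E(r) the radial electric field arising from charged defects, and R the net rate of capture, emission, and defect reactions.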
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi Unit 1 (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the U.S. Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). However, that study examined only a handful of model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures of merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
Macon, David J.; Brannon, Rebecca M.; Strack, Otto E.
Mechanical testing of porous materials generates physical data that contain contributions from more than one underlying physical phenomenon. All that is measurable is the "ensemble" hardening modulus. This thesis is concerned with the phenomenon of dilatation in triaxial compression (TXC) of porous media, which has been modeled quite accurately in the literature for monotonic loading by presuming that dilatation causes the cap to move outwards. These existing models, however, predict a counter-intuitive (and never validated) increase in hydrostatic compression strength. This work explores an alternative approach for modeling TXC dilatation based on allowing induced elastic anisotropy (which makes the material both less stiff and less strong in the lateral direction) with no increase in hydrostatic strength. Induced elastic anisotropy is introduced through the use of a distortion operator. This operator is a fourth-order tensor consisting of a combination of the undeformed stiffness and deformed compliance and has the same eigenprojectors as the elastic compliance. In the undeformed state, the distortion operator is equal to the fourth-order identity. Through the use of the distortion operator, an evolved stress tensor is introduced. When the evolved stress tensor is substituted into an isotropic yield function, a new anisotropic yield function results. In the case of the von Mises isotropic yield function (which contains only deviatoric components), it is shown that the distortion operator introduces a dilatational contribution without requiring an increase in hydrostatic strength. The thesis gives an introduction to and literature review of the cap function. A transversely isotropic compliance is presented, based on a linear combination of natural bases constructed about a transverse-symmetry axis. Using a probabilistic distribution of cracks constructed for the case of transverse isotropy, a compliance expression is presented that demonstrates a decrease in lateral stiffness but leaves axial stiffness unchanged. A demonstration of how the distortion operator could be used in the elastic/plastic analysis of a von Mises surface loaded in TXC is also presented.
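In symbols, and only as a sketch of the construction just described (the notation here is ours): with \(\mathcal{D}\) the fourth-order distortion operator, equal to the identity in the undeformed state, the evolved stress and the induced anisotropic yield function are

\[
\hat{\boldsymbol{\sigma}} = \mathcal{D} : \boldsymbol{\sigma}, \qquad f_{\mathrm{aniso}}(\boldsymbol{\sigma}) = f_{\mathrm{iso}}\!\left(\hat{\boldsymbol{\sigma}}\right),
\]

so that even a purely deviatoric isotropic criterion such as von Mises, \( f_{\mathrm{iso}}(\boldsymbol{\sigma}) = \sqrt{3 J_{2}(\boldsymbol{\sigma})} - Y \), acquires a pressure-dependent (dilatational) contribution through \(\mathcal{D}\) without any increase in hydrostatic strength.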
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
The nation depends on Sandia National Laboratories' technology solutions to address national and global threats to peace and freedom. Through science and technology, people, infrastructure, and partnerships, part of Sandia's mission is to meet national needs in the areas of energy, climate, and infrastructure security. Within this mission to ensure clean, abundant, and affordable energy and water are the Nuclear Energy and Fuel Cycle Programs. The Nuclear Energy and Fuel Cycle Programs have a broad range of capabilities, with both physical facilities and intellectual expertise. These resources are brought to bear upon the key scientific and engineering challenges facing the nation and can be made available to address the research needs of others. Sandia can support the safe, secure, reliable, and sustainable use of nuclear power worldwide by incorporating state-of-the-art technologies in safety, security, nonproliferation, transportation, modeling, repository science, and system demonstrations.
Fine resolution Synthetic Aperture Radar (SAR) systems necessarily require wide bandwidths that often overlap spectrum utilized by other wireless services. These other emitters pose a source of Radio Frequency Interference (RFI) to the SAR echo signals that degrades SAR image quality. Filtering, or excising, the offending spectral contaminants will mitigate the interference, but at a cost of often degrading the SAR image in other ways, notably by raising offensive sidelobe levels. This report proposes borrowing an idea from nonlinear sidelobe apodization techniques to suppress interference without the attendant increase in sidelobe levels. The simple post-processing technique is termed Apodized RFI Filtering (ARF).
While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses. These templates can be used to automatically and efficiently search large graph databases, for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.
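As a toy illustration only (the report's graph encoding, node attributes, and scale are far richer than this sketch, and the identifiers below are hypothetical), attributed subgraph matching of an analyst template against a semantic graph can be expressed with the networkx library:

import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical geospatial semantic graph: nodes are image-derived objects with
# a semantic label; edges encode a spatial "adjacent to" relation.
scene = nx.Graph()
scene.add_node("b1", label="building")
scene.add_node("p1", label="parking_lot")
scene.add_node("r1", label="road")
scene.add_edges_from([("b1", "p1"), ("p1", "r1")])

# Analyst template in domain terms: a building adjacent to a parking lot that
# touches a road.
template = nx.Graph()
template.add_node("B", label="building")
template.add_node("P", label="parking_lot")
template.add_node("R", label="road")
template.add_edges_from([("B", "P"), ("P", "R")])

matcher = isomorphism.GraphMatcher(
    scene, template,
    node_match=lambda a, b: a["label"] == b["label"])
print(list(matcher.subgraph_isomorphisms_iter()))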
The screening process for DG interconnection procedures needs to be improved in order to increase the PV deployment level on the distribution grid. A significant improvement in the current screening process could be achieved by finding a method to classify the feeders in a utility service territory and determine the sensitivity of particular groups of distribution feeders to the impacts of high PV deployment levels. This report describes the utility distribution feeder characteristics in California for a large dataset of 8,163 feeders and summarizes the California feeder population, including the range of characteristics identified and those most important to hosting capacity. The report describes the set of feeders identified for modeling and analysis, as well as the feeders identified for the control group. The report presents a method for separating a utility's distribution feeders into unique clusters using the k-means clustering algorithm, an approach for choosing the feeder variables to be utilized in the clustering process, and a method for determining the optimal number of representative clusters.
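A minimal sketch of the clustering step, assuming the feeder characteristics have already been assembled into a numeric table (the feature set, its size, and the cluster count k below are placeholders, not the variables or number of clusters selected in the report):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feeder characteristics, one row per feeder
# (e.g., circuit length, peak load, voltage class, fraction residential).
rng = np.random.default_rng(0)
features = rng.random((8163, 4))

# Standardize so no single variable dominates the distance metric, then
# partition the feeder population into k representative clusters.
k = 12
X = StandardScaler().fit_transform(features)
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

labels = model.labels_             # cluster assignment for each feeder
centers = model.cluster_centers_   # representative feeder "prototypes"
print(np.bincount(labels))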
A laboratory testing program was developed to examine the short-term mechanical and time-dependent (creep) behavior of salt from the Bayou Choctaw Salt Dome. This report documents the test methodologies and the constitutive properties inferred from the tests performed. These are used to extend our understanding of the mechanical behavior of the Bayou Choctaw domal salt and provide a data set for numerical analyses. The resulting information will be used to support numerical analyses of the current state of the Bayou Choctaw Dome as it relates to its crude oil storage function as part of the US Strategic Petroleum Reserve. Core obtained from Drill Hole BC-102B was tested under creep and quasi-static constant mean stress axisymmetric compression, and constant mean stress axisymmetric extension conditions. Creep tests were performed at 100 degrees Fahrenheit, and the axisymmetric tests were performed at ambient temperatures (72-78 degrees Fahrenheit). The testing performed indicates that the dilation criterion is pressure and stress state dependent. It was found that as the mean stress increases, the shear stress required to cause dilation increases. The results for this salt are reasonably consistent with those observed for other domal salts. It was also observed that tests performed under extensile conditions required consistently lower shear stress to cause dilation at the same mean stress, which is consistent with other domal salts. Young's moduli ranged from 3.95 × 10^6 to 8.51 × 10^6 psi with an average of 6.44 × 10^6 psi, and Poisson's ratios ranged from 0.10 to 0.43 with an average of 0.30. Creep testing indicates that the BC salt is intermediate in creep resistance when compared with other bedded and domal salt steady-state behavior.
This report summarizes the work performed in developing a framework for the prioritization of cavern access wells for remediation and monitoring at the Big Hill Strategic Petroleum Reserve site. This framework was then applied to all 28 wells at the Big Hill site, with each well receiving a grade for remediation and monitoring. Numerous factors affecting well integrity were incorporated into the grading framework, including casing survey results, cavern pressure history, results from geomechanical simulations, and site geologic factors. The framework was developed in such a way as to be applicable to all four of the Strategic Petroleum Reserve sites.
This white paper focuses on "advanced microgrids," but sections do, out of necessity, reference today's commercially available systems and installations in order to clearly distinguish the differences and advances. Advanced microgrids have been identified as a necessary part of the modern electrical grid through two DOE microgrid workshops, the National Institute of Standards and Technology Smart Grid Interoperability Panel, and other related sources. With their grid-interconnectivity advantages, advanced microgrids will improve system energy efficiency and reliability and provide enabling technologies for grid-independence to end-user sites. One popular definition that has evolved and is used in multiple references is that a microgrid is a group of interconnected loads and distributed-energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect to and disconnect from the grid, enabling it to operate in either grid-connected or island mode. Further, an advanced microgrid can then be loosely defined as a dynamic microgrid.
The power output variability of photovoltaic systems can affect local electrical grids in locations with high renewable energy penetrations or weak distribution or transmission systems. In those rare cases, quick controllable generators (e.g., energy storage systems) or loads can counteract the destabilizing effects by compensating for the power fluctuations. Previously, control algorithms for coordinated and uncoordinated operation of a small natural gas engine-generator (genset) and a battery for smoothing PV plant output were optimized using MATLAB/Simulink simulations. The simulations demonstrated that a traditional generation resource such as a natural gas genset in combination with a battery would smooth the photovoltaic output while using a smaller battery state of charge (SOC) range and extending the life of the battery. This paper reports on the experimental implementation of the coordinated and uncoordinated controllers to verify the simulations and determine the differences between the controllers. The experiments were performed using the PNM Prosperity PV and energy storage site and a gas engine-generator located at the Aperture Center at Mesa Del Sol in Albuquerque, New Mexico. Two field demonstrations were performed to compare the different PV smoothing control algorithms: (1) implementing the coordinated and uncoordinated controls while switching off a subsection of the PV array at precise times on successive clear days, and (2) comparing the results of the battery and genset outputs for the coordinated control on a high-variability day with simulations of the coordinated and uncoordinated controls. It was found that for certain PV power profiles the SOC range of the battery may be larger with the coordinated control, but the total amp-hours through the battery, which approximates battery wear, will always be smaller with the coordinated control.
The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al. (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate, and temperature, as well as by the inclusion of adiabatic heating.
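For reference, the standard Johnson-Cook flow stress and failure-strain forms being calibrated are (conventional parameter symbols; the calibrated values appear in the report, not here):

\[
\sigma = \left(A + B\,\varepsilon_{p}^{\,n}\right)\left(1 + C \ln \dot{\varepsilon}^{*}\right)\left(1 - T^{*m}\right),
\qquad
\varepsilon_{f} = \left(D_{1} + D_{2}\, e^{D_{3}\sigma^{*}}\right)\left(1 + D_{4} \ln \dot{\varepsilon}^{*}\right)\left(1 + D_{5} T^{*}\right),
\]

where \(\varepsilon_{p}\) is the equivalent plastic strain, \(\dot{\varepsilon}^{*}\) the normalized plastic strain rate, \(T^{*}\) the homologous temperature, and \(\sigma^{*}\) the stress triaxiality.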
The line-imaging ORVIS or VISAR provides velocity as a function of position and time for a line on an experimental setup via a streak camera record of interference fringes. This document describes a Matlab-based program which guides the user through the process of converting these fringe data to a velocity surface. The data reduction is of the "fringe trace" type, wherein the changes in velocity at a given position on the line are calculated based on fringe motion past that point. The analyst must establish the fringe behavior up front, aided by peak-finding routines in the program. However, the later work of using fringe jumps to compensate for phase problems in other analysis techniques is greatly reduced. The program is not a standard GUI construction; it is prescriptive, and at various points it saves the analyst's progress, allowing later restarts from those points.
The Saturn accelerator, owned by Sandia National Laboratories, has been in operation since the early 1980s and still has many of the original systems. A critical legacy system is the oil transfer system, which transfers 250,000 gallons of transformer oil from outside storage tanks to the Saturn facility. The oil transfer system was identified for upgrade to current technology standards. Using the existing valves, pumps, and relay controls, the system was automated using the National Instruments cRIO FPGA platform. Engineered safety practices, including a failure mode effects analysis, were used to develop error handling requirements. The uniqueness of the Saturn Oil Automated Transfer System (SOATS) is in the graphical user interface. The SOATS uses an HTML interface to communicate with the cRIO, creating a platform-independent control system. The SOATS was commissioned in April 2013.
Proceedings of Co-HPC 2014: 1st International Workshop on Hardware-Software Co-Design for High Performance Computing - Held in Conjunction with SC 2014: The International Conference for High Performance Computing, Networking, Storage and Analysis
The Piecewise Parabolic Method (PPM) was designed as a means of exploring compressible gas dynamics problems of interest in astrophysics, including supersonic jets, compressible turbulence, stellar convection, and turbulent mixing and burning of gases in stellar interiors. Over time, the capabilities encapsulated in PPM have co-evolved with the availability of a series of high performance computing platforms. Implementation of the algorithm has adapted to and advanced with the architectural capabilities and characteristics of these machines. This adaptability of our PPM codes has enabled targeted astrophysical applications of PPM to exploit these scarce resources to explore complex physical phenomena. Here we describe the means by which this was accomplished, and set a path forward, with a new miniapp, mPPM, for continuing this process in a diverse and dynamic architecture design environment. Adaptations in mPPM for the latest high performance machines are discussed that address the important issue of limited bandwidth from locally attached main memory to the microprocessor chip.
Proceedings of Co-HPC 2014: 1st International Workshop on Hardware-Software Co-Design for High Performance Computing - Held in Conjunction with SC 2014: The International Conference for High Performance Computing, Networking, Storage and Analysis
Disruptive changes to computer architecture are paving the way toward extreme scale computing. The co-design strategy of collaborative research and development among computer architects, system software designers, and application teams can help to ensure that applications not only cope but thrive with these changes. In this paper, we present a novel combined co-design approach of emulation and simulation in the context of investigating future Processing in Memory (PIM) architectures. PIM enables co-location of data and computation to decrease data movement, to provide increases in memory speed and capacity compared to existing technologies and, perhaps most importantly for extreme scale, to improve energy efficiency. Our evaluation of PIM focuses on three mini-applications representing important production applications. The emulation and simulation studies examine the effects of locality-aware versus locality-oblivious data distribution and computation, and they compare PIM to conventional architectures. Both studies contribute in their own way to the overall understanding of the application-architecture interactions, and our results suggest that PIM technology shows great potential for efficient computation without negatively impacting productivity.
Understanding how resources of High Performance Compute platforms are utilized by applications both individually and as a composite is key to application and platform performance. Typical system monitoring tools do not provide sufficient fidelity while application profiling tools do not capture the complex interplay between applications competing for shared resources. To gain new insights, monitoring tools must run continuously, system wide, at frequencies appropriate to the metrics of interest while having minimal impact on application performance. We introduce the Lightweight Distributed Metric Service for scalable, lightweight monitoring of large scale computing systems and applications. We describe issues and constraints guiding deployment in Sandia National Laboratories' capacity computing environment and on the National Center for Supercomputing Applications' Blue Waters platform including motivations, metrics of choice, and requirements relating to the scale and specialized nature of Blue Waters. We address monitoring overhead and impact on application performance and provide illustrative profiling results.
Krylov subspace projection methods are widely used iterative methods for solving large-scale linear systems of equations. Researchers have demonstrated that communication avoiding (CA) techniques can improve Krylov methods' performance on modern computers, where communication is becoming increasingly expensive compared to arithmetic operations. In this paper, we extend these studies by two major contributions. First, we present our implementation of a CA variant of the Generalized Minimum Residual (GMRES) method, called CA-GMRES, for solving nonsymmetric linear systems of equations on a hybrid CPU/GPU cluster. Our performance results on up to 120 GPUs show that CA-GMRES gives a speedup of up to 2.5x in total solution time over standard GMRES on a hybrid cluster with twelve Intel Xeon CPUs and three Nvidia Fermi GPUs on each node. We then outline a domain decomposition framework to introduce a family of preconditioners that are suitable for CA Krylov methods. Our preconditioners do not incur any additional communication and allow the easy reuse of existing algorithms and software for the subdomain solves. Experimental results on the hybrid CPU/GPU cluster demonstrate that CA-GMRES with preconditioning achieves a speedup of up to 7.4x over CA-GMRES without preconditioning, and a speedup of up to 1.7x over GMRES with preconditioning in total solution time. These results confirm the potential of our framework to develop a practical and effective preconditioned CA Krylov method.
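The flavor of the preconditioning framework, reusing existing solvers for the independent subdomain solves, can be sketched with a simple non-overlapping (block-Jacobi style) example in SciPy; the paper's actual domain decomposition, overlap handling, and communication-avoiding kernels are not reproduced here:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric test system (1-D convection-diffusion discretization).
n = 400
A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Partition the unknowns into contiguous "subdomains" and factor each diagonal
# block independently; each subdomain solve reuses an existing sparse direct
# solver and needs no communication with the other blocks.
nblocks = 4
bounds = np.linspace(0, n, nblocks + 1, dtype=int)
blocks = list(zip(bounds[:-1], bounds[1:]))
factors = [spla.splu(A[s:e, s:e]) for s, e in blocks]

def apply_prec(r):
    z = np.empty_like(r)
    for (s, e), lu in zip(blocks, factors):
        z[s:e] = lu.solve(r[s:e])
    return z

M = spla.LinearOperator(A.shape, matvec=apply_prec)
x, info = spla.gmres(A, b, M=M, restart=30)
print("converged" if info == 0 else f"info = {info}")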
Lee, David S.; Wirthlin, Michael; Swift, Gary; Le, Anthony C.
This study examines the single-event response of the Xilinx 28 nm Kintex-7 FPGA irradiated with heavy ions. Results for single-event effects on configuration SRAM cells, user-accessible Flip-Flop cells, and BlockRAM™ memory are provided. This study also describes an unconventional single event latch-up signature observed during testing.
Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.
We present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the coupling as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. We present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.
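Schematically (notation ours; the paper specifies the precise function spaces, operators, and norms), the coupling is posed as

\[
\min_{\theta}\ \tfrac{1}{2}\,\big\| u_{a}(\theta) - u_{c}(\theta) \big\|^{2}_{\Omega_{o}}
\quad\text{subject to}\quad
F_{a}\big(u_{a}\big) = 0 \ \text{in } \Omega_{a},
\qquad
F_{c}\big(u_{c}\big) = 0 \ \text{in } \Omega_{c},
\]

where \(u_{a}\) and \(u_{c}\) satisfy the atomistic and continuum force-balance equations on their subdomains \(\Omega_{a}\) and \(\Omega_{c}\) with virtual Dirichlet data \(\theta\) imposed on the coupling interfaces, and \(\Omega_{o}\) is the overlap region on which the mismatch is minimized.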
Computational fluid dynamics (CFD) is a powerful analysis tool for engineering analysis of aerodynamic devices. Though great effort has been expended to assist the CFD practitioner in mesh generation efforts, investigation of spatial discretization error is still one of the primary time costs associated with field simulations. As complexity in both physics and geometry continues to increase, uniform grid refinement studies are not always practical from either a time or computational cost perspective. Error transport equations have been investigated by many researchers with the goal of providing greater confidence in simulation results while utilizing only a single mesh. One of the primary difficulties in applying these methods is the computation of a reliable error source model. This work presents a method for approximating these error sources with the intent of creating a general model which is applicable to all flux types within a general gas dynamics framework. Adaptivity results as well as comparison with a popular error source model are presented.
Microalgae have been identified as a promising renewable feedstock for production of lipids for feeds and fuels. Current methods for identifying algae strains and growth conditions that support high lipid production require a variety of fluorescent chemical indicators, such as Nile Red and, more recently, Bodipy. Despite notable successes using these approaches, chemical indicators exhibit several drawbacks, including non-uniform staining, low lipid specificity, cellular toxicity, and variable permeability based on cell type, limiting their applicability for high-throughput bioprospecting. In this work, we used in vivo hyperspectral confocal fluorescence microscopy of a variety of potential microalgae production strains (Nannochloropsis sp., Dunaliella salina, Neochloris oleoabundans, and Chlamydomonas reinhardtii) to identify a label-free method for localizing lipid bodies and quantifying the lipid yield on a single-cell basis. By analyzing endogenous fluorescence from chlorophyll and resonance Raman emission from lipid-solubilized carotenoids, we deconvolved pure component emission spectra and generated diffraction-limited projections of the lipid bodies and chloroplast organelles, respectively. Applying this imaging method to nutrient depletion time-courses from lab-scale and outdoor cultivation systems revealed an additional autofluorescence spectral component that became more prominent over time and varied inversely with the chlorophyll intensity, indicative of physiological compromise of the algal cell. This signal could result in false positives for conventional measurements of lipid accumulation (via spectral overlap with Nile Red); however, the additional spectral feature was found to be useful for classification of lipid enrichment and culture crash conditions in the outdoor cultivation system. Under nutrient deprivation, increases in the lipid fraction of the cellular volume of ~500% were observed, as well as a correlated decrease in the chloroplast fraction of the total cellular volume. The results suggest that a membrane recycling mechanism dominates for nutrient deprivation-based lipid accumulation in the microalgae tested.
Constitutive models in nanoscience and engineering often poorly represent the physics due to significant deviations in model form from their macroscale counterparts. In Part 1 of this study, this problem was explored by considering a continuum scale heat conduction constitutive law inferred directly from molecular dynamics (MD) simulations. In contrast, this work uses Bayesian inference based on the MD data to construct a Gaussian process emulator of the heat flux as a function of temperature and temperature gradient. No assumption of Fourier-like behavior is made, requiring alternative approaches to assess the well-posedness and accuracy of the emulator. Validation is provided by comparing continuum scale predictions using the emulator model against a larger all-MD simulation representing the true solution. The results show that a Gaussian process emulator of the heat conduction constitutive law produces an empirically unbiased prediction of the continuum scale temperature field for a variety of time scales, which was not observed when Fourier’s law is assumed to hold. Finally, uncertainty is propagated in the continuum model and quantified in the temperature field so the impact of errors in the model on continuum quantities can be determined.
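A minimal sketch of the emulation step, assuming the MD data have already been reduced to (temperature, temperature gradient, heat flux) samples; the synthetic data, kernel choice, and noise handling below are illustrative placeholders, not those of the paper:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data extracted from MD: columns are temperature T and
# temperature gradient dT/dx; the target is the observed heat flux q.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(300.0, 600.0, 200),   # T
                     rng.uniform(-5.0, 5.0, 200)])     # dT/dx
q = -0.1 * X[:, 1] * (1.0 + 1e-3 * X[:, 0]) + 0.01 * rng.standard_normal(200)

# Gaussian process emulator of the constitutive law q = q(T, dT/dx);
# no Fourier-like (linear-in-gradient) form is assumed.
kernel = RBF(length_scale=[50.0, 1.0]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, q)

# The continuum solver would query the emulator (mean and uncertainty)
# wherever it would otherwise evaluate -k * dT/dx.
q_mean, q_std = gp.predict(np.array([[450.0, 2.0]]), return_std=True)
print(q_mean, q_std)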
The use of computational models to simulate the behavior of complex mechanical systems is ubiquitous in many high consequence applications such as aerospace systems. Results from these simulations are being used, among other things, to inform decisions regarding system reliability and margin assessment. In order to properly support these decisions, uncertainty needs to be accounted for. To this end, it is necessary to identify, quantify, and propagate different sources of uncertainty as they relate to these modeling efforts. Some sources of uncertainty arise from the following: (1) modeling assumptions and approximations, (2) solution convergence, (3) differences between model predictions and experiments, (4) physical variability, (5) the coupling of various components, and (6) unknown unknowns. An additional aspect of the problem is the limited information available at the full system level in the application space. This is offset, in some instances, by information on individual components at testable conditions. In this paper, we focus on the quantification of uncertainty due to differences between model predictions and experiments, and present a technique to aggregate and propagate uncertainty from the component level to the full system in the application space. A numerical example based on a structural dynamics application is used to demonstrate the technique.
In the coming decades, vehicle and fuel options and their supporting infrastructure must undergo significant transformations to achieve aggressive national targets for reducing petroleum consumption and lowering greenhouse gas (GHG) emissions. Vehicle electrification, advanced biofuels, natural gas, and hydrogen fuel cells are among the promising technology options being explored as future alternatives. A number of recent U.S. studies have examined how a mix of technology and policy options can contribute to the aggressive goals of a 50-80% reduction in petroleum consumption and an 80% reduction in GHG emissions by 2050. These include reports issued by the National Petroleum Council, the National Academies, and the U.S. Department of Energy. While these studies all generally point to the need for a portfolio of technologies for the transportation sector, they do not draw the same set of conclusions for the portfolio mix. Moreover, they were commissioned for a variety of reasons, applied different modeling and analytical approaches in their assessments, and used a variety of assumptions in reaching their findings and recommendations. Using four recent major U.S. scenario analyses, this paper illustrates several factors that can influence the interpretation of their results. Consideration of the underlying technology and policy assumptions, analytical approaches, and presentation of results can enable a more robust comparison across projections for the vehicle and fuel mix.
In response to the accident at the Fukushima Daiichi nuclear power station in Japan, the U.S. Nuclear Regulatory Commission and U.S. Department of Energy agreed to jointly sponsor an accident reconstruction study as a means of assessing the severe accident modeling capability of the MELCOR code and developing an understanding of the likely accident progression. Objectives of the project included reconstruction of the accident progressions using computer models and accident data, and validation of MELCOR and the Fukushima models against plant data. In this study Sandia National Laboratories developed MELCOR 2.1 models of Fukushima Daiichi Units 1 (1F1), 2, and 3 as well as the Unit 4 spent fuel pool. This paper reports on the analysis of the 1F1 accident. Details are presented on the modeled accident progression, hypothesized modes of failure in the reactor pressure vessel (RPV) and containment pressure boundary, and release of fission products to the environment. The MELCOR-predicted RPV and containment pressure trends compare well with available measured pressures. Conditions leading up to the observed explosion of the reactor building are postulated based on this analysis, where drywell head flange leakage is thought to have led to accumulation of flammable gases in the refueling bay. The favorable comparison of the results from the analyses with the data from the plant provides additional confidence in MELCOR to reliably predict real-world accident progression. The modeling effort has also provided insights into future data needs for both model development and validation.
Controlling the materials chemistry of the solid-state ion conductor NaSICON is key to realizing its potential utility in emerging sodium-based battery technologies. We describe here the influence of excess sodium on phase evolution of sol-gel synthesized NaSICON. Alkoxide-based sol-gel processing was used to produce powders of Na3Zr2PSi2O12 NaSICON with 0-2 atomic % excess sodium. Phase formation and component volatility were studied as a function of temperature. NaSICON synthesis at temperatures between 900 and 1100 °C with up to 2% excess sodium significantly reduced the presence of zirconia, sodium phosphate, and sodium silicate secondary phases in fired NaSICON powders. Insights into the role of sodium in the phase chemistry of sol-gel processed NaSICON may inform key improvements in NaSICON development.
Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image square centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, imaging time is proportional to the number of measurements taken of each sample; in a traditional SEM, large collections can lead to weeks of around-the-clock imaging time. We previously reported a single-beam sparse sampling approach that we have demonstrated on an operational SEM for collecting "smooth" images. In this paper, we analyze how measurements from a hypothetical multi-beam system would compare to the single-beam approach in a compressed sensing framework. To that end, multi-beam measurements are synthesized on a single-beam SEM, and the fidelity of reconstructed images is compared to the previously demonstrated approach. Since taking fewer measurements comes at the cost of reduced SNR, image fidelity as a function of undersampling ratio is reported.