International Journal of Electrical Power and Energy Systems
Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; Wilson, David G.
Energy storage is an important design component in microgrids with high-penetration renewable sources because of the highly variable and sometimes stochastic nature of those sources. Storage devices can be distributed close to the sources and/or placed at the microgrid bus. Storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but it makes the energy storage optimization more difficult. This paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, the droop settings of the sources are determined that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
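For context, a minimal sketch of the linear droop law commonly used in such designs (the symbols V_{0,i} and d_i are illustrative; the paper's exact formulation may differ):

```latex
% Linear droop for source i: the local voltage set-point falls with
% delivered power. V_{0,i} is the no-load voltage, d_i > 0 the droop gain.
\[
  V_i \;=\; V_{0,i} \;-\; d_i \, P_i
\]
```

A larger d_i shifts more of a load or generation transient onto neighboring sources, which is what couples the droop settings to both the bus voltage variation and the storage sizing.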
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
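As a self-contained illustration (not the paper's algorithm), the sketch below shows the greedy, indicator-driven refinement pattern described above for a 1-D piecewise-linear interpolant; in the paper's setting the indicator would combine the hierarchical surplus with the adjoint-based estimate of the physical discretization error, rather than the raw surplus used here:

```python
# Greedy surplus-driven refinement of a 1-D piecewise-linear interpolant.
# Illustrative only; abs(surplus) stands in for an enhanced error indicator.
import bisect

def adaptive_interpolate(f, tol=1e-3, max_pts=100):
    xs = [0.0, 1.0]
    ys = [f(0.0), f(1.0)]
    while len(xs) < max_pts:
        best = None
        for i in range(len(xs) - 1):
            xm = 0.5 * (xs[i] + xs[i + 1])
            # Hierarchical surplus: new value minus current interpolant.
            surplus = f(xm) - 0.5 * (ys[i] + ys[i + 1])
            if best is None or abs(surplus) > abs(best[1]):
                best = (xm, surplus)
        if abs(best[1]) < tol:          # indicator below tolerance: stop
            break
        j = bisect.bisect(xs, best[0])  # refine where the indicator is largest
        xs.insert(j, best[0])
        ys.insert(j, f(best[0]))
    return xs, ys

xs, ys = adaptive_interpolate(lambda x: x ** 3 + 0.1 * abs(x - 0.3))
```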
Griffiths, R.A.C.; Chen, J.H.; Kolla, Hemanth; Cant, R.S.; Kollmann, W.
The topology of turbulent premixed flames is analysed using data from Direct Numerical Simulation (DNS), with emphasis on the statistical geometry of flame-flame interaction. A general method for obtaining the critical points of line, surface and volume fields is outlined, and the method is applied to isosurfaces of reaction progress variable in a DNS configuration involving a pair of freely-propagating hydrogen-air flames in a field of intense shear-generated turbulence. A complete set of possible flame-interaction topologies is derived using the eigenvalues of the scalar Hessian, and the topologies are parametrised using a pair of shape factors. The frequency of occurrence of each type of topology is evaluated from the DNS dataset for two different Damköhler numbers. Different types of flame-interaction topology are found to be favoured in various regions of the turbulent flame, and the physical significance of each interaction is discussed.
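A minimal sketch of the eigenvalue-based classification step (illustrative; the paper's shape-factor parametrisation of the eigenvalues is not reproduced here):

```python
# Classify a critical point of a scalar field by the signs of the eigenvalues
# of its (symmetric) 3x3 Hessian. Illustrative sketch, not the paper's code.
import numpy as np

def classify_critical_point(hessian):
    lam = np.linalg.eigvalsh(hessian)   # eigenvalues in ascending order
    if np.all(lam > 0):
        return "minimum"                # e.g. a pocket of low progress variable
    if np.all(lam < 0):
        return "maximum"                # e.g. a pocket of high progress variable
    return "saddle"                     # mixed signs: flame-interaction types

H = np.array([[2.0, 0.1, 0.0],
              [0.1, -1.5, 0.0],
              [0.0, 0.0, 0.7]])
print(classify_critical_point(H))       # -> "saddle"
```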
Results are presented here from a three-dimensional direct numerical simulation of a temporally-evolving planar slot jet flame and from experimental measurements within a spatially-evolving axisymmetric jet flame operating with DME (dimethyl ether, CH3OCH3) as the fuel. Both simulation and experiment are conducted at a Reynolds number of 13050. The Damköhler number, stoichiometric mixture fraction, and fuel and oxidizer compositions are also matched between simulation and experiment. Simultaneous OH/CH2O PLIF imaging is performed experimentally to characterize the spatial structure of the turbulent DME flames. The simulation shows a fully burning flame initially, which undergoes partial extinction and subsequent reignition. The scalar dissipation rate (χ) increases to a value much greater than that calculated from near-extinction strained laminar flames, leading to the observed local extinction. As the turbulence decays, the local values of χ decrease and the flame reignites. The reignition process appears to be strongly dependent on the local χ value, which is consistent with previous results for simpler fuels. Statistics of OH and CH2O are compared between simulation and experiment and found to agree. The applicability of OH/CH2O (formaldehyde) product imaging as a surrogate for peak heat release rate is investigated. The concentration product is found to predict peak heat release rate extremely well in the simulation data. When this product imaging is applied to the experimental data, a similar extinction/reignition pattern is also observed in the experiments as a function of axial position. A new 30-species reduced chemical mechanism for DME was also developed as part of this work.
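For reference, the scalar dissipation rate invoked above is conventionally defined in terms of the mixture fraction ξ and its diffusivity D (standard definition; individual papers differ by a factor-of-two convention):

```latex
\[
  \chi \;=\; 2\,D\,\lvert \nabla \xi \rvert^{2}
\]
```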
Turbulent dimethyl ether (DME) jet flames provide a canonical flame geometry for studying turbulence-flame interactions in oxygenated fuels and for developing predictive models of these interactions. The development of accurate models for DME/air flames would establish a foundation for studies of more complex oxygenated fuels. We present a joint experimental and computational investigation of the velocity field and OH and CH2O distributions in a piloted, partially-premixed turbulent DME/air jet flame with a jet exit Reynolds number, ReD, of 29,300. The turbulent DME/air flame is analogous to the well-studied, partially-premixed methane/air jet flame, Sandia Flame D, with identical stoichiometric mixture fraction, ξst = 0.35, and bulk jet exit velocity, Vbulk = 45.9 m/s. Measurements include particle image velocimetry (PIV) and simultaneous CH2O and OH laser-induced fluorescence (LIF) imaging. Simulations are performed using a large eddy simulation combined with conditional moment closure (LES-CMC) on an intermediate size grid of 1.3 million cells. Overall, the downstream evolution of the mean and RMS profiles of velocity, OH, and CH2O are well predicted, with the largest discrepancies occurring for CH2O at x/D = 20-25. LES-CMC simulations employing two different chemical reaction mechanisms (Kaiser et al., 2000 [20] and Zhao et al., 2008 [21]) show approximately a factor of two difference in the peak CH2O mole fractions, whereas OH mole fractions are in good agreement between the two mechanisms. The single-shot LIF measurements of OH and CH2O show a wide range of separation distances between the spatial distributions of these intermediate species with gaps on the order of millimeters. The intermittency in the overlap between these species indicates that the consumption rates of formaldehyde by OH in the turbulent DME/air jet flame may be highly intermittent with significant departures from flamelet models.
The effects of combustion on the strain rate field in turbulent jets were studied using 10 kHz tomographic particle image velocimetry (TPIV). Measurements were performed in three turbulent jets: a well-studied, piloted partially-premixed methane/air jet flame, Sandia flame C, with low probability of localized extinction; a second piloted jet flame, analogous to flame C but with a reduced pilot flow rate and a high probability of localized extinction; and a non-reacting air jet. Since the jet exit Reynolds number of approximately 13000 was nearly identical in the three jets, differences in the strain rate fields were attributed to the effects of combustion. Spatiotemporal characteristics of the strain rate field were analyzed. Overall, the strain rate norm was larger in the flames than in the non-reacting jet with the most stable flame having the largest values. In all three jets, the compressive strain rate was on average the largest of the three principal strain rates. At high strain rates, the ratios of the compressive and extensive strain rate to the intermediate strain rate were similar to those found in isotropic incompressible turbulent flows. The three-dimensional velocity measurements were used to analyze the spatial distribution of strain rate clusters, defined as singly-connected groups of voxels where the strain rate magnitude exceeded a threshold value. The presence of a stable flame significantly attenuated the number of clusters of intermediate strain rate. Strain rate bursts, corresponding to sudden increases in the number of clusters, were identified in the three jets. Bursts in the non-reacting jet and the unstable flame contained up to twice as many clusters as in the stable flame. The temporal intermittency of intense strain rate clusters was analyzed using the time-series measurements. Clusters with strain rates greater than five times the standard deviation of the strain rate norm were highly intermittent.
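The quantities analyzed above follow the standard definitions (assumed here; the paper's norm convention may differ by a constant factor): the strain-rate tensor is the symmetric part of the velocity gradient, its ordered eigenvalues are the extensive, intermediate, and compressive principal strain rates, and the norm is

```latex
\[
  S_{ij} \;=\; \tfrac{1}{2}\!\left(\frac{\partial u_i}{\partial x_j}
             + \frac{\partial u_j}{\partial x_i}\right),
  \qquad
  \lVert S \rVert \;=\; \sqrt{2\,S_{ij} S_{ij}} .
\]
```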
The Transportation Security Administration has a large workforce of Transportation Security Officers (TSOs), most of whom perform interrogation of x-ray images at the passenger checkpoint. To date, TSOs on the x-ray have been limited to a 30-min session at a time; however, it is unclear where this limit originated. The current paper outlines methods for empirically determining whether that 30-min duty cycle is optimal and whether there are differences between individual TSOs. This work can inform the scheduling of TSOs at the checkpoint and can also inform whether TSOs should continue to be cross-trained (i.e., performing all 6 checkpoint duties) or whether specialization makes more sense.
A new open-source project evaluation tool, entitled the Materials Engineering Tetrahedron (MET), has been developed to determine the economic viability of materials design, selection, processing, and validation costs associated with any infrastructure-based project. MET improves project design by providing an economic perspective to the traditional materials science tetrahedron, relating microstructure, processing, property, and performance through the introduction of value-based economic costs for each side of the tetrahedron. The resulting size and distortion from a regular tetrahedron illustrate the balance among the system, component, or material fabrication aspects of the project being detailed. Furthermore, the MET model also allows for increased budget efficiency and the potential for improved identification of cost-saving mechanisms.
Motivated by the disagreement between experiments and recent diffusion Monte Carlo calculations of the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation; after removing it, we find excellent agreement with experiment for the ambient HCP phase and results similar to careful density functional theory calculations for the phase transition pressure.
This report describes a simple, quasi-static, closed-form, parameterized model that predicts the contact forces acting between axially-engaging electrical contact receptacles and a pin. This approach is useful for design studies and reduced-order mechanism modeling, where receptacle-pin insertion forces have traditionally been difficult to quantify without high-fidelity (e.g. rigid body dynamics, finite element analysis) simulations. A Matlab implementation of the model is provided and is demonstrated for three receptacle geometries. Results are compared to rigid body dynamics simulations for the first two geometries and experimental insertion force measurements for the third.
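A minimal sketch of the kind of closed-form estimate such a model enables (illustrative assumptions only, not the report's parameterization: tines treated as cantilever springs, steady sliding, chamfer/lead-in effects omitted; Python here rather than the report's Matlab):

```python
# Quasi-static insertion force for a receptacle whose tines act as cantilever
# springs deflected by the pin's radial interference. All values illustrative.

def insertion_force(n_tines, E, w, t, L, interference, mu):
    """Steady sliding insertion force [N].

    E             tine elastic modulus [Pa]
    w, t, L       tine width, thickness, length [m]
    interference  radial tip deflection imposed by the pin [m]
    mu            pin/tine friction coefficient
    """
    I = w * t ** 3 / 12.0                  # area moment of the tine section
    k = 3.0 * E * I / L ** 3               # cantilever tip stiffness
    normal_per_tine = k * interference     # radial contact load per tine
    return n_tines * mu * normal_per_tine  # axial friction over all tines

# Example: four 5-mm beryllium-copper tines with 50 um interference.
print(insertion_force(4, 130e9, 1e-3, 0.2e-3, 5e-3, 50e-6, 0.3))  # ~0.12 N
```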
Pulsed laser irradiation is used to mark 13-8 steel and Nitronic 60 parts in order to create observable markings on the surfaces. The best optical contrast ratio between marked and unmarked regions is desired for digital image correlation. The contrast is optimized by varying the laser power, pulse length, and scan speed. X-ray diffraction was used to characterize the laser-irradiated surface, and it was found that oxide formation and surface roughness are responsible for the observed contrast.
The Trilinos Project is an effort to develop algorithms and enabling technologies within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. A unique design feature of Trilinos is its focus on packages. While the abstractions make it easy to incorporate advanced processing and data manipulation tools, it is not always obvious how to take advantage of these features. The Trios package, incorporated two years ago, offers general data management services, but has yet to offer integrated support for core Trilinos data structures, such as those offered in the Tpetra package. An initial attempt to incorporate native Trilinos data structure support into Trios services revealed the complexity, from a non-mathematician's perspective, of using Trilinos. This project sought to understand the complexities and potential barriers not just for non-mathematicians who want to contribute to or use Trilinos, but also for new, mathematically inclined users who may want to offer services supporting other users. This report documents the challenges for Trios in offering some simple data manipulation required as a precursor to any direct data services integration, and makes recommendations for clarifying the performance implications and the general approach to use.
The objective of this research has been to develop a method to induce a high frequency, large amplitude shock pulse into materials and structures as an above-ground laboratory simulation of an exo-atmospheric cold x-ray induced blow-off event. This work builds on the successes of the direct-spray Light Initiated High Explosive (LIHE) impulse delivery technique, in order to drive a flyer to a desired impact velocity and induce the proper combined material and structural response of the target. The reported development focuses on flyer velocity from explosive initiation to target impact to flyer rebound. A comprehensive derivation of an analytical model to predict flyer velocity as a function of explosive deposition and flyer properties is presented. One- and two-dimensional test series were conducted to evaluate impulse delivery and impact pressure, as well as target material and structural response. Experimental results show good agreement between the flyer velocity predicted by the developed theory and that inferred from impulse delivery. A definitive material response was measured in each of the one-dimensional targets. The structural strain response measured in the ring experiments showed excellent agreement with both the predicted flyer performance and the analytical strain solution for a cosine-distributed impulsive loading. This work has focused on the use of analytical, hydrocode, and test analysis to confirm that a LIHE-driven flyer impulse technique can be an effective simulation of a cold x-ray blow-off event. It is shown that a thin metallic flyer plate can be explosively accelerated to impact a target with sufficient energy to generate an impulsive load which induces both structural and material response in a test item.
Photovoltaic (PV) systems using microinverters are becoming increasingly popular in the residential system market, as such systems offer several advantages over PV systems using central inverters. PV modules with integrated microinverters, termed AC modules, are emerging to fill this market space. Existing test procedures and performance models designed for separate DC and AC components are unusable for AC modules because AC modules do not allow ready access to the intermediate DC bus. Sandia National Laboratories' Photovoltaics and Distributed Systems department has developed a set of procedures to test, characterize, and model PV modules with integrated microinverters. The resulting empirical model is able to predict the output AC power with an RMS error of 1-2%. This document describes these procedures and provides the results of model validation efforts.
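As a hypothetical illustration of the fit-and-validate workflow (not Sandia's published model form; the model equation, coefficient names, and data below are all stand-ins):

```python
# Fit a simple empirical AC-module model to (irradiance, temperature, power)
# data and report the RMS error. Model form and data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def model(X, a0, a1, a2, b):
    G, Tm = X                                   # irradiance [W/m^2], temp [C]
    return (a0 + a1 * G + a2 * G ** 2) * (1.0 + b * (Tm - 25.0))

# Synthetic stand-in for the measured test matrix.
rng = np.random.default_rng(0)
G = np.linspace(100, 1000, 50)
Tm = 25 + 0.03 * G
P = model((G, Tm), 1.0, 0.28, -4e-5, -0.004) + rng.normal(0, 1.0, 50)

params, _ = curve_fit(model, (G, Tm), P, p0=[0, 0.3, 0, 0])
rms = np.sqrt(np.mean((model((G, Tm), *params) - P) ** 2))
print(f"RMS error: {100 * rms / P.max():.1f}% of max power")
```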
Four PV power plant variability simulation methods - no-smoothing, time average, Marcos, and the wavelet variability model (WVM) - were compared to measured data from a 19 MW PV power plant to test the relative accuracy of each method. Errors (simulated vs. measured) were quantified using five application-specific metrics: the largest down ramps, the largest up ramps, the mean absolute error in matching the cumulative distribution of large ramps, the total energy contained in down ramps over the entire period considered, and the total energy in down ramps on the worst day. These errors were evaluated over timescales ranging from 1 second to 1 hour and over plant sizes from 1 to 14 MW, plus the total plant size of 19 MW, to determine trends in model errors as a function of timescale and plant size. Overall, the WVM was found to most often have the smallest errors. The Marcos method also often had small errors, including having the smallest errors of all methods at small PV plant sizes (1 to 7 MW). The no-smoothing method had large errors and should not be used. The time average method was an improvement over the no-smoothing method, but generally had larger errors than the WVM and Marcos methods.
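A sketch of one of the five metrics under an assumed ramp definition (a "ramp" at timescale tau is taken as the change in plant power over tau; the paper's exact definitions and data are not reproduced, and the arrays below are stand-ins):

```python
# Compare the largest simulated and measured down-ramps at a given timescale.
import numpy as np

def ramps(power, tau_samples):
    """Power changes over a window of tau_samples samples."""
    return power[tau_samples:] - power[:-tau_samples]

def largest_downramp_error(measured, simulated, tau_samples):
    r_meas = ramps(measured, tau_samples).min()   # most negative ramp
    r_sim = ramps(simulated, tau_samples).min()
    return abs(r_sim - r_meas)

rng = np.random.default_rng(0)
t = np.arange(3600)                               # 1 s samples, 1 hour
measured = 19.0 * (1 + 0.2 * np.sin(t / 300.0)) + rng.normal(0, 0.5, t.size)
simulated = 19.0 * (1 + 0.2 * np.sin(t / 300.0))  # a smoothed model output
print(largest_downramp_error(measured, simulated, tau_samples=60))  # 60 s ramps
```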
This report documents early experiences with porting and performance analysis of the Tri-Lab Trinity benchmark applications on the Intel Xeon Phi (Knights Corner, KNC) processor. KNC, the second generation of the Intel Many Integrated Core (MIC) architecture, uses a large number of small P54C-x86 cores with wide vector units and is deployed as a PCI-bus-attached accelerator. Sandia has experimental test beds of small InfiniBand clusters and workstations to investigate the performance of the MIC architecture. On these test beds, the programming models that may be investigated are "offload", "symmetric", and "native". Among these usage models, our primary interest is in the so-called "native" mode, because the planned Trinity system to be deployed in 2016, using the next-generation MIC processor architecture called Knights Landing, would be self-hosted. The Trinity / NERSC-8 benchmark programs cover a variety of scientific disciplines and were used to guide the procurement of these systems. Architectures such as the Intel MIC are well suited to studying evolving processor architectures and a usage model commonly referred to as MPI + X that facilitates migration of our applications to use both coarse-grain and fine-grain parallelism. Our focus with the selected applications is on the efficacy of their algorithms in taking advantage of features such as a large number of cores, wide vector units, and a higher-bandwidth, deeper memory subsystem. This is a first step towards understanding applications, algorithms, and programming environments for Trinity and future exascale computing systems.
We derive from first principles a mathematical physics model useful for understanding nonlinear optical propagation (including filamentation). All assumptions necessary for the development are clearly explained. We include the Kerr effect, Raman scattering, and ionization (as well as linear and nonlinear shock, diffraction, and dispersion). We explain the phenomenological sub-models and each assumption required to arrive at a complete and consistent theoretical description. The development includes the relationship between shock and ionization and demonstrates why inclusion of Drude model impedance effects alters the nature of the shock operator.
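For orientation, a representative textbook envelope equation of the class derived in the report (shown schematically; the report's full model also carries Raman and self-steepening/shock corrections and a Drude treatment of the plasma response):

```latex
% Diffraction, group-velocity dispersion, Kerr self-focusing, plasma
% absorption/defocusing (electron density rho), and K-photon ionization loss:
\[
  \partial_z \mathcal{E}
  = \frac{i}{2k_0}\nabla_\perp^2 \mathcal{E}
  - \frac{i k''}{2}\,\partial_t^2 \mathcal{E}
  + i k_0 n_2 \lvert\mathcal{E}\rvert^2 \mathcal{E}
  - \frac{\sigma}{2}\bigl(1 + i\omega_0 \tau_c\bigr)\rho\,\mathcal{E}
  - \frac{\beta_K}{2}\,\lvert\mathcal{E}\rvert^{2K-2}\mathcal{E}
\]
```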
Silicon usage in fixed, flat-panel photovoltaic systems can be reduced by 60 to 75% with no efficiency loss through use of arrays of mini-concentrators. These concentrators are simple trough-like reflectors that are formed in flat sheets of ~1-mm-thick optical plastic. Concentration ratios of 2.55X can be achieved on rooftops and 4.0X on walls while collecting all of the direct sun and scattered skylight. The concentrators are fabricated in optical plastic, preferably polycarbonate for its high refractive index. The panels are typically 1 mm thick, so the weight of a panel is ~1 kg/m2. In addition to the rooftop, wall, and window blind designs, a design is proposed that can be tilted toward the sun position at the equinox. These systems are all designed so they can be mass-produced.
Emerging infectious diseases present a profound threat to global health, economic development, and political stability, and therefore represent a significant national security concern for the United States. The increased prevalence of international travel and globalized trade further amplify the threat of infectious disease outbreaks of catastrophic effect. The key to containing and eradicating an outbreak before it goes global is rapid identification of index cases and initial clusters of affected individuals. This depends upon establishment of a biosurveillance network that effectively reaches infectious disease hotspots in even the most remote regions of the world and provides a network-integrated, location-appropriate diagnostic capability. At present, there are two critical needs which must be addressed in order to extend biosurveillance activities beyond centralized laboratory facilities: 1) A simple, reliable, and safe method for immediate stabilization of clinical specimens in the field; and 2) A flexible sample processing platform that enables in-field preparation of clinical specimens for rapid, on-site analysis using a variety of diagnostic assay platforms. These needs are not necessarily mutually exclusive; in fact, we propose that they are most efficiently addressed by a deployable sample processing platform that immediately stabilizes the information content of clinical specimens through transformation of the inherently unstable analytes of interest into stable equivalents that are appropriately formatted for downstream analysis. In order to address this problem, we have developed a sample processing pipeline and microfluidics-based platform modules enabling: 1) Extraction of total RNA from finger-stick quantities of human whole blood; and 2) Microscale synthesis of appropriately-formatted cDNA products that capture the information content of blood RNA in a stable form that supports pathogen detection and/or characterization via PCR and/or Second Generation Sequencing (SGS). Through this research we have discovered new, effective solutions for problems that thus far have hindered use of digital microfluidics (DMF) in biomedical applications. Our work reveals a clear path forward to fieldable, automated sample processing systems that will enable rapid, on-site identification of usual-suspect and novel pathogens in clinical specimens for improved biosurveillance.
Sandia journal manuscript; Not yet accepted for publication
Koh, Chung Y.; Piccini, Matthew E.; Schaff, Ulrich Y.; Stanker, Larry H.; Cheng, Luisa W.; Ravichandran, Easwaran; Singh, Bal-Ram; Sommer, Greg J.; Singh, Anup K.
Multiple cases of attempted bioterrorism events using biotoxins have highlighted the urgent need for tools capable of rapid screening of suspect samples in the field (e.g., mailroom and public events). We present a portable microfluidic device capable of analyzing environmental (e.g., white powder), food (e.g., milk) and clinical (e.g., blood) samples for multiplexed detection of biotoxins. The device is rapid (<15-30 min sample-to-answer), sensitive (<0.08 pg/mL detection limit for botulinum toxin), multiplexed (up to 64 parallel assays) and capable of analyzing small-volume samples (<20 μL total sample input). The immunoassay approach (SpinDx) is based on binding of toxins in a sample to antibody-laden capture particles, followed by sedimentation of the particles through a density medium in a microfluidic disk and quantification using a laser-induced fluorescence detector. A direct, blinded comparison with a gold-standard ELISA revealed a 5-fold more sensitive detection limit for botulinum toxin while requiring 250-fold less sample volume and a 30-minute assay time, with a near-unity correlation. A key advantage of the technique is its compatibility with a variety of sample matrices with no additional sample preparation required. Ultrasensitive quantification has been demonstrated from direct analysis of multiple clinical, environmental, and food samples, including white powder, whole blood, saliva, salad dressing, whole milk, peanut butter, half and half, honey, and canned meat. We believe that this device can meet an urgent need in screening both potentially exposed people and suspicious samples in mailrooms, airports, public sporting venues, and emergency rooms. The general-purpose immunodiagnostics device can also find applications in screening of infectious and systemic diseases or serve as a lab device for conducting rapid immunoassays.
This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection-mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during the chronic-phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary when implementing the explicit version of the finite-difference method but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady-state approximate solution to the diffusion equation. Figure 1 presents the evolution of the diffusion profiles of a containment granuloma over time.
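A minimal sketch of the matrix-based, steady-state approach described above (illustrative details assumed: a 2-D grid, a fixed far-field oxygen level, and a local consumption term `source`; not the paper's implementation). For comparison, an explicit scheme would require a time step obeying the CFL-type bound dt <= dx^2/(4D) in two dimensions:

```python
# Solve D * Laplacian(c) = -source on an n x n grid with c = 1 on the boundary,
# replacing explicit time stepping with a single sparse linear solve.
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def steady_oxygen(n, dx, D, source):
    N = n * n
    A = lil_matrix((N, N))
    b = np.zeros(N)
    for i in range(n):
        for j in range(n):
            k = i * n + j
            if i in (0, n - 1) or j in (0, n - 1):
                A[k, k] = 1.0
                b[k] = 1.0                      # far-field oxygen level
            else:
                A[k, k] = -4.0 * D / dx ** 2    # 5-point Laplacian stencil
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    A[k, (i + di) * n + (j + dj)] = D / dx ** 2
                b[k] = -source[i, j]
    return spsolve(csr_matrix(A), b).reshape(n, n)

src = np.zeros((40, 40))
src[18:22, 18:22] = -5.0    # oxygen consumption at the granuloma site
c = steady_oxygen(40, 1.0, 1.0, src)
```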
This report documents work that was performed under the Laboratory Directed Research and Development project, Science of Battery Degradation. The focus of this work was on the creation of new experimental and theoretical approaches to understand atomistic mechanisms of degradation in battery electrodes that result in loss of electrical energy storage capacity. Several unique approaches were developed during the course of the project, including the invention of a technique based on ultramicrotoming to cross-section commercial scale battery electrodes, the demonstration of scanning transmission x-ray microscopy (STXM) to probe lithium transport mechanisms within Li-ion battery electrodes, the creation of in-situ liquid cells to observe electrochemical reactions in real-time using both transmission electron microscopy (TEM) and STXM, the creation of an in-situ optical cell utilizing Raman spectroscopy and the application of the cell for analyzing redox flow batteries, the invention of an approach for performing ab initio simulation of electrochemical reactions under potential control and its application for the study of electrolyte degradation, and the development of an electrochemical entropy technique combined with x-ray based structural measurements for understanding origins of battery degradation. These approaches led to a number of scientific discoveries. Using STXM we learned that lithium iron phosphate battery cathodes display unexpected behavior during lithiation wherein lithium transport is controlled by nucleation of a lithiated phase, leading to high heterogeneity in lithium content at each particle and a surprising invariance of local current density with the overall electrode charging current. We discovered using in-situ transmission electron microscopy that there is a size limit to lithiation of silicon anode particles above which particle fracture controls electrode degradation. From electrochemical entropy measurements, we discovered that entropy changes little with degradation but the origin of degradation in cathodes is kinetic in nature, i.e. lower rate cycling recovers lost capacity. Finally, our modeling of electrode-electrolyte interfaces revealed that electrolyte degradation may occur by either a single or double electron transfer process depending on thickness of the solid-electrolyte-interphase layer, and this cross-over can be modeled and predicted.
Discussion of HP/fuel explosives in the scientific literature dates back to at least 1927. A paper was published that year in a German journal entitled On Hydrogen Peroxide Explosives [Bamberger and Nussbaum 1927]. The paper dealt with HP/cotton/Vaseline formulations, specifically HP89/cotton/Vaseline (76/15/9) and (70/8.5/12.5). The authors performed experiments with charge masses of 250-750 g and charge diameters of 35-45 mm. This short paper provides brief discussion on the observed qualitative effects of detonations but does not report detonation velocities.
The project began as an effort to support InLight and Lumidigm. With the sale of the companies to a non-New Mexico entity, the project then focused on supporting a new company, Medici Technologies. The Small Business (SB) is attempting to quantify glucose in tissue using a series of short interferometer scans of the finger. Each scan is produced from a novel presentation of the finger to the device. The intent of the project is to identify and, if possible, implement improved methods for classification, feature selection, and training to improve the performance of predictive algorithms used for tissue classification.
While working at Sandia National Laboratories as a graduate intern from September 2014 to January 2015, I spent most of my time on two projects. The first project involved designing a test fixture for circuit boards used in a recording device; the test fixture was needed to decrease test setup time. The second project was to use optimization techniques to determine the optimal G-switch for given acceleration profiles.
This paper describes the modelling and design development of an optical coating that is suitable for broad bandwidth high reflection (BBHR) at 45° angle of incidence (AOI), P polarization (Ppol), and fs-class laser pulses whose frequencies correspond to wavelengths from 800 to 1000 nm, and that can eventually be produced uniformly on meter-class optical substrates. The coating design process was guided by specifications of not only high reflection but also high laser-induced damage threshold (LIDT) as well as low group delay dispersion (GDD) for reflected light over the broad, 200 nm bandwidth, in order to minimize temporal broadening of the fs pulses upon reflection. The coating is based on TiO2/SiO2 layer pairs deposited by e-beam evaporation with ion-assisted deposition (IAD). We used OptiLayer Thin Film Software to explore coating designs with a limited optimization process, starting from TiO2/SiO2 layer pairs with layer thicknesses in an opposite “chirp” arrangement. This approach proved successful, leading to a design with R > 99.5% from 801 to 999 nm and GDD < 20 fs² from 843 to 949 nm (45° AOI, Ppol). The GDD behaves in a smooth way that lends itself to compensation of GDD effects. Also, the electric field intensities are favorable to high LIDT in that they quench rapidly into the outer coating layers or are of moderate strength, or they are located in the higher-band-gap SiO2 layers.
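For reference, the standard definition assumed above: with φ(ω) the spectral phase accumulated on reflection, the group delay dispersion is the curvature of that phase,

```latex
\[
  \mathrm{GDD}(\omega) \;=\; \frac{d^{2}\varphi}{d\omega^{2}} ,
\]
```

so keeping |GDD| below ~20 fs² across the band limits temporal broadening of the reflected fs pulse.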
This study sought to develop a framework which would assist project implementers in their efforts to effectively indigenize technical and policy expertise in SNL's international partners. The initial assumption was that current and past projects undertaken by the Center for Global Security and Cooperation (CGSC) had produced many successful and effective tools and techniques to achieve indigenization and sustainability. As such, this study would be able to leverage those tools and techniques to produce a common framework that would enhance SNL's ability to reproduce those past successes. Data were collected for this study by conducting a series of interviews and focus groups in order to elicit information about SNL's efforts and capabilities. The interviews focused on collecting data ranging from understanding definitions of sustainability and indigenization to financial considerations and various project management considerations. Initial findings showed that the problem statement originally formed in this study's hypothesis was missing elements of customer and in-country partner needs and goals, which led the interview team to adapt the original goal: rather than developing a framework, the study sought to determine what elements produced a successful project. Overall, the study found four main components that each successful project shared: answering the right question, involving an institutional champion, understanding key stakeholders, and continually surveying the project landscape.
This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring the homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and of properties impacting component development. Chapter 4 describes the first, although preliminary, success in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first (as far as the authors know) application of TDTR to actual pyrotechnic materials, and the first attempt to characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first-principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach, in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6). In both cases much more remains to be accomplished.
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Quantum tomography is used to characterize quantum operations implemented in quantum information processing (QIP) hardware. Traditionally, state tomography has been used to characterize the quantum state prepared in an initialization procedure, while quantum process tomography is used to characterize dynamical operations on a QIP system. As such, tomography is critical to the development of QIP hardware (since it is necessary both for debugging and validating as-built devices, and its results are used to influence the next generation of devices). But tomography suffers from several critical drawbacks. In this report, we present new research that resolves several of these flaws. We describe a new form of tomography called gate set tomography (GST), which unifies state and process tomography, avoids prior methods' critical reliance on precalibrated operations that are not generally available, and can achieve unprecedented accuracies. We report on theory and experimental development of adaptive tomography protocols that achieve far higher fidelity in state reconstruction than non-adaptive methods. Finally, we present a new theoretical and experimental analysis of process tomography on multispin systems, and demonstrate how to more effectively detect and characterize quantum noise using carefully tailored ensembles of input states.
As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need for real-world testing. However, current and future changes across several factors, including capabilities, policy, and funding, are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software coupled with advances in "cloud" computing and "big-data" methodologies can overcome many of these challenges. In particular, we propose a simple, horizontally-scalable distributed computing environment that could provide the foundation (i.e., "cloud") for next-generation M&S-based applications based on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible; with sufficient resources, the complexity is dominated by the cost of a single scenario run as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
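A minimal sketch of the parallel multi-simulation idea (illustrative only; not the Umbra-coupled prototype): many scenario variants are dispatched at once, so with enough workers the wall-clock cost approaches that of a single run.

```python
# Dispatch many independent scenario runs across worker processes.
from concurrent.futures import ProcessPoolExecutor

def run_scenario(params):
    """Stand-in for a single simulation run; returns a scalar outcome."""
    x, y = params
    return x * x + y            # placeholder physics

if __name__ == "__main__":
    scenarios = [(x, y) for x in range(10) for y in range(10)]
    with ProcessPoolExecutor() as pool:
        outcomes = list(pool.map(run_scenario, scenarios))
    print(max(outcomes))
```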
Results for the stability analysis are as follows: maximum N-factor trends agree well with previous data; the transition N-factor difference between Case 2 and Case 3 disagrees with previous data and requires another look; the analysis predicts disturbance frequencies that agree with experiments and with VESTA computations; and it predicts larger N factors than VESTA.
The Frequency Translation to Demonstrate a Hybrid Quantum Architecture project focused on developing nonlinear optics to couple two different ion species and make their emitted UV photons indistinguishable. Successful demonstration of photonic coupling of different ion species lays the foundation for coupling drastically different types of qubits, such as ions and quantum dots. Frequency conversion of single photons emitted from single ions remains a "hot" topic with many groups pursuing this effort; however, due to challenges in producing short-period periodically poled crystals, it has yet to be realized. This report details our efforts toward frequency converting single photons emitted from trapped ions to other wavelengths. We present our theoretical studies of candidate platforms for frequency conversion: photonic crystal fibers, χ(2) nonlinear crystals in optical cavities, and photonic crystal cavities. We also present experimental results on ion trapping, χ(2) nonlinear crystal measurements, and photonic crystal fabrication.
Aleph models continuum electrostatic and steady and transient thermal fields using a finite-element method. Much work has gone into expanding the core solver capability to support enriched modeling consisting of multiple interacting fields, special boundary conditions, and two-way interfacial coupling with particles modeled using Aleph's complementary particle-in-cell capability. This report provides quantitative evidence for correct implementation of Aleph's field solver via order-of-convergence assessments on a collection of problems of increasing complexity. It is intended to provide Aleph with a pedigree and to establish a basis for confidence in results for more challenging problems important to Sandia's mission that Aleph was specifically designed to address.
Aleph is an electrostatic particle-in-cell code which uses the finite element method to solve for the electric potential and field based on external potentials and discrete charged particles. The field solver in Aleph was verified for two problems and matched the analytic theory for finite elements. The first problem showed the mesh-refinement convergence for a nonlinear field with no particles within the domain; this matched the theoretical convergence rates of second order for the potential field and first order for the electric field. Then the solution for a single particle in an infinite domain was compared to the analytic solution; this also matched the theoretical first-order convergence in both the potential and electric fields, over a refinement factor of 16 for both problems. These solutions give confidence that the field solver and charge-weighting schemes are implemented correctly.
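For reference, a sketch of the order-of-convergence check used in such verification studies: given errors e1, e2 measured on grids with spacings h1, h2, the observed order is p = log(e1/e2) / log(h1/h2). The error values below are illustrative.

```python
# Observed order of convergence from errors on two successive refinements.
import math

def observed_order(e1, e2, h1, h2):
    return math.log(e1 / e2) / math.log(h1 / h2)

# Halving h should quarter the error for a second-order quantity.
print(observed_order(4.0e-3, 1.0e-3, 0.2, 0.1))   # -> 2.0 (potential)
print(observed_order(4.0e-3, 2.0e-3, 0.2, 0.1))   # -> 1.0 (electric field)
```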
To date, studies of the fractional quantum Hall effect (FQHE) states in the second Landau level have mainly been carried out in the high electron density regime, where the electron mobility is highest. Only recently, with the advance of high-quality low-density MBE growth, have experiments been pushed to the low-density regime [1], where the electron-electron interactions are strong and the Landau level mixing parameter, defined by κ = (e²/εlB)/(ℏωc), is large. Here, lB = (ℏ/eB)^(1/2) is the magnetic length and ωc = eB/m is the cyclotron frequency. All other parameters have their usual meanings. It has been shown that a large Landau level mixing effect strongly affects the electron physics in the second Landau level [2].
Recent studies have shown closed-loop supercritical carbon dioxide (s-CO2) Brayton cycles to offer higher energy density than conventional superheated-steam Rankine systems. At turbine inlet conditions of 923 K and 25 MPa, high thermal efficiency (~50%) can be achieved. Achieving these high efficiencies will make concentrating solar power (CSP) technologies a competitive alternative to current power generation methods. To incorporate a s-CO2 Brayton power cycle in a solar power tower system, the development of a solar receiver capable of providing an outlet temperature of 923 K (at 25 MPa) is necessary. To satisfy the temperature requirements of a s-CO2 Brayton cycle with recuperation and recompression, the s-CO2 must be heated by ~200 K as it passes through the solar receiver. Our objective was to develop an optical-thermal-fluid model to design and evaluate a tubular receiver that will receive a heat input of ~1 MWth from a heliostat field. We also undertook the documentation of design requirements for the development, testing, and safe operation of a direct s-CO2 solar receiver. The main purpose of this document is to serve as a reference and guideline for design and testing requirements, as well as to address the technical challenges and provide initial parameters for the computational models that will be employed for the development of s-CO2 receivers.
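A rough sizing check consistent with the numbers above (illustrative; the mean specific heat of s-CO2 near 25 MPa and 700-900 K is an assumed average, not a value from the report):

```python
# Required s-CO2 mass flow for a ~1 MWth receiver heating the fluid by ~200 K.
Q = 1.0e6          # receiver thermal input [W]
cp = 1.25e3        # assumed mean s-CO2 specific heat [J/(kg K)]
dT = 200.0         # required temperature rise [K]
m_dot = Q / (cp * dT)
print(f"required s-CO2 mass flow ~ {m_dot:.1f} kg/s")   # ~4 kg/s
```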
Tidwell, Vincent C.; Wolfsberg, Andrew; Macknick, Jordan; Middleton, Richard
In the Southwest and Southern Rocky Mountains (SWSRM), energy production, energy resource extraction, and other high volume uses depend on water supply from systems that are highly vulnerable to extreme, coupled hydro-ecosystem-climate events including prolonged drought, flooding, degrading snow cover, forest die off, and wildfire. These vulnerabilities, which increase under climate change, present a challenge for energy and resource planners in the region with the highest population growth rate in the nation. Currently, analytical tools are designed to address individual aspects of these regional energy and water vulnerabilities. Further, these tools are not linked, severely limiting the effectiveness of each individual tool. Linking established tools, which have varying degrees of spatial and temporal resolution as well as modeling objectives, and developing next-generation capabilities where needed would provide a unique and replicable platform for regional analyses of climate-water-ecosystem-energy interactions, while leveraging prior investments and current expertise (both within DOE and across other Federal agencies).
Hansen, Francis D.; SNL; Leigh, Christi; Steininger, W.K.; Bollingerfehr, Wilhelm; Von Berlepsche, Thilo; DBE Technology
The 5th US/German Workshop on Salt Repository Research, Design, and Operation was held in Santa Fe, New Mexico, September 8-10, 2014. The forty-seven registered participants were equally divided between the United States (US) and Germany, with one participant from The Netherlands. The agenda for the 2014 workshop was under development immediately upon finishing the 4th workshop. Ongoing, fundamental topics such as the thermomechanical behavior of salt, plugging and sealing, the safety case, and performance assessment continue to advance the basis for disposal of heat-generating nuclear waste in salt formations. The utility of a salt underground research laboratory (URL) remains an intriguing concept engendering discussion of testing protocol. By far the most interest in this year's workshop pertained to operational safety. Given events at the Waste Isolation Pilot Plant (WIPP), this discussion took on a new sense of relevance and urgency.
Sandia National Laboratories, New Mexico (SNL/NM) is a multi-program research and development facility owned by the Department of Energy (DOE) and operated by Sandia Corporation, a Lockheed Martin Company (LMC). Sandia waste management operations include radioactive and mixed waste and hazardous waste (including classified and explosive waste). Waste management activities include managing and characterizing the waste; completing waste disposal requests; providing guidance on sorting, packaging, storing, and recycling wastes; preparing all necessary disposal request and compliance-related documentation; maintaining accurate records; and helping the line organizations assure the compliance of all waste management activities. Additional contaminants may include polychlorinated biphenyls (PCBs), asbestos, and beryllium. Some radioactive and hazardous waste may also be managed as classified waste.
We are developing computational models to help understand the manufacturing processes, final properties, and aging of the structural polyurethane foam PMDI. The resulting model predictions of density and cure gradients from the manufacturing process will be used as input to foam heat transfer and mechanical models. BKC 44306 PMDI-10 and BKC 44307 PMDI-18 are the most prevalent foams used in structural parts. Experiments needed to parameterize models of the reaction kinetics and the equations of motion during the foam blowing stages were described for BKC 44306 PMDI-10 in the first of this report series (Mondy et al. 2014). BKC 44307 PMDI-18 is a new foam that will be used to make relatively dense structural supports via overpacking. It uses a different catalyst than those in the BKC 44306 family of foams; hence, we expect that the reaction kinetics models must be modified. Here we detail the experiments needed to characterize the reaction kinetics of BKC 44307 PMDI-18 and suggest parameters for the model based on these experiments. In addition, the second part of this report describes data taken to provide input to the preliminary nonlinear viscoelastic structural response model developed for BKC 44306 PMDI-10 foam. We show that the standard cure schedule used by KCP does not fully cure the material, and, upon temperature elevation above 150°C, oxidation or decomposition reactions occur that alter the composition of the foam. These findings suggest that achieving a fully cured foam part with this formulation may not be possible through thermal curing. As such, viscoelastic characterization procedures developed for curing thermosets can provide only approximate material properties, since the state of the material continuously evolves during tests.
Accident management is an important component of maintaining risk at acceptable levels for all complex systems, such as nuclear power plants. With the introduction of self-correcting, or inherently safe, reactor designs, the focus has shifted from management by operators to allowing the system's design to manage the accident. While inherently and passively safe designs are laudable, extreme boundary conditions can interfere with the design attributes which facilitate inherent safety, thus resulting in unanticipated and undesirable end states. This report examines an inherently safe and small sodium fast reactor experiencing a beyond-design-basis seismic event with the intent of exploring two issues: (1) can human intervention either improve or worsen the potential end states, and (2) can a Bayesian network be constructed to infer the state of the reactor to inform (1). ACKNOWLEDGEMENTS: The authors would like to acknowledge the U.S. Department of Energy's Office of Nuclear Energy for funding this research through Work Package SR-14SN100303 under the Advanced Reactor Concepts program. The authors also acknowledge the PRA teams at Argonne National Laboratory, Oak Ridge National Laboratory, and Idaho National Laboratory for their continued contributions to the advanced reactor PRA mission area.