Publications

Rechargeable solid-state copper sulfide cathodes for alkaline batteries: Importance of the copper valence state

Journal of the Electrochemical Society

Duay, Jonathon W.; Lambert, Timothy N.; Kelly, Maria; Pineda-Dominguez, Ivan

Batteries for grid storage applications must be inexpensive, safe, and reliable, and must have a high energy density. Here, we utilize the high capacity of sulfur (S) (1675 mAh g-1, based on the idealized redox couple of S2-/S) to demonstrate, for the first time, a reversible high-capacity solid-state S-based cathode for alkaline batteries. To maintain S in the solid state, it is bound to copper (Cu), initially in its fully reduced state as the sulfide. Upon charging, the sulfide is oxidized to a polysulfide species which is captured and maintained in the solid state by the Cu ions. This solid-state sulfide/polysulfide cathode was analyzed versus a zinc (Zn) anode, which gives a nominal >1.2 V cell voltage based on the sulfide/polysulfide redox cathode chemistry. It was found that for the S cathode to have the best cycle life in the solid state, it must be bound not merely to Cu ions but to Cu ions in the +1 valence state, forming Cu2S as a discharge product. Zn/Cu2S batteries cycled between 1.45 V and 0.4 V vs. Zn displayed capacities of 1500 mAh g-1 (based on mass of S) or ~300 mAh g-1 (based on mass of Cu2S) and high areal (>23 mAh cm-2) and energy densities (>135 Wh L-1), but suffered from moderate cycle lives (<250 cycles). The failure mechanism of this electrode was found to be disproportionation of the charged S species into irreversible sulfite, releasing the bound Cu ions. The freed Cu ions then perform Cu-specific redox reactions, which slowly changes the battery redox chemistry from that of S to that of Cu with a S additive. Batteries utilizing the Cu2S cathode and a 50% depth of charge (DOC) cathode cycling protocol, with 5 wt% Na2S added to the electrolyte, retained a cathode capacity of 838 mAh g-1 (based on mass of S) or 169 mAh g-1 (based on mass of Cu2S) after 450 cycles with >99.7% coulombic efficiency. These Zn/Cu2S batteries provided a grid-storage-relevant energy density of >42 Wh L-1 (at 65 wt% Cu2S loading), despite using only a 3% depth of discharge (DOD) for the Zn anode. This work opens the way to a new class of energy-dense grid storage batteries based on high-capacity solid-state S-based cathodes.
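The quoted 1675 mAh g-1 sulfur capacity follows from Faraday's law. As a quick sanity check (a generic sketch using standard constants and atomic masses, not code from the paper):

```python
# Theoretical specific capacity from Faraday's law: Q = n*F / (3.6 * M), in mAh g-1
F = 96485.0  # Faraday constant, C mol-1

def specific_capacity(n_electrons, molar_mass):
    """Capacity in mAh per gram for an n-electron redox couple of species with molar_mass in g mol-1."""
    return n_electrons * F / (3.6 * molar_mass)

q_sulfur = specific_capacity(2, 32.06)   # two-electron S/S2- couple: ~1672 mAh g-1
q_cu2s = specific_capacity(2, 159.16)    # same couple per gram of Cu2S: ~337 mAh g-1
```

The per-Cu2S figure also explains the ~300 mAh g-1 reported above: 1500 mAh g-1 of S times the ~20% sulfur mass fraction of Cu2S gives roughly 300 mAh g-1.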

More Details

DARMA-EMPIRE Integration and Performance Assessment – Interim Report

Lifflander, Jonathan; Bettencourt, Matthew T.; Slattengren, Nicole S.; Templet, Gary J.; Miller, Phil; Perrinel, Meriadeg; Rizzi, Francesco N.; Pebay, Philippe P.

We begin by presenting an overview of the general philosophy guiding the novel DARMA developments, followed by a brief reminder of the background of this project. We finally present the FY19 design requirements. As the Exascale era arrives, DARMA is uniquely positioned at the forefront of asynchronous many-task (AMT) research and development (R&D) to explore emerging programming-model paradigms for next-generation HPC applications at Sandia, across NNSA labs, and beyond. The DARMA project explores how to fundamentally shift the expression (PM) and execution (EM) of massively concurrent HPC scientific algorithms to be more asynchronous, resilient to executional aberrations in heterogeneous/unpredictable environments, and data-dependency conscious, thereby enabling an intelligent, dynamic, and self-aware runtime to guide execution.

More Details

Rigorous code verification: An additional tool to use with the method of manufactured solutions

ASME 2019 Verification and Validation Symposium, VVS 2019

Krueger, Aaron M.; Mousseau, Vincent A.; Hassan, Yassin A.

The Method of Manufactured Solutions (MMS) has proven to be useful for completing code verification studies. MMS allows the code developer to verify that the observed order-of-accuracy matches the theoretical order-of-accuracy. Even though the solution to the partial differential equation is not intuitive, it provides an exact solution to a problem that most likely could not be solved analytically. The code developer can then use the exact solution as a debugging tool. While the order-of-accuracy test has been historically treated as the most rigorous of all code verification methods, it fails to indicate code "bugs" that are of the same order as the theoretical order-of-accuracy. The only way to test for these types of code bugs is to verify that the theoretical local truncation error for a particular grid matches the difference between the manufactured solution (MS) and the solution on that grid. The theoretical local truncation error can be computed by using the modified equation analysis (MEA) with the MS and its analytic derivatives, which we call the modified equation analysis method of manufactured solutions (MEAMMS). In addition to describing the MEAMMS process, this study shows the results of completing a code verification study on a conservation of mass code. The code was able to compute the leading truncation error term as well as additional higher-order terms. When the code verification process was complete, not only did the observed order-of-accuracy match the theoretical order-of-accuracy for all numerical schemes implemented in the code, but it was also able to cancel the discretization error to within round-off error for a 64-bit system.
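The order-of-accuracy test at the heart of MMS reduces to comparing errors on two grids. A minimal sketch, with hypothetical error values chosen to illustrate a second-order scheme:

```python
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed order of accuracy from discretization errors on two grids
    related by a uniform refinement ratio r: p = log(e_coarse/e_fine) / log(r)."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Halving the grid spacing cuts the error by 4x for a second-order scheme
p = observed_order(1.0e-3, 2.5e-4)  # p = 2.0
```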

More Details

Human Factors Guidance for Building a Computer-Based Procedures System: How to Give the Users Something They Actually Want

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Gilmore, Walter E.

Historically, “skill-of-the-craft” was the single measure of job qualification. In those days, no one gave workers a procedure to follow. Today, large complex industries rely on procedures as a way of ensuring the job will be performed reliably and safely. Typically, these procedures provide a layer of protection to mitigate the severity of an accident or prevent it from happening. While paper-based procedures have long been the standard way of doing business, there is increasing interest in replacing this format with Computer-Based Procedures. However, the transition from paper to paperless can be more problematic than it seems. Some issues that have led to these problems are discussed here. It is hoped that, by knowing what these issues are, the same mistakes will not be repeated in the future. Mistake avoidance begins with a well-defined set of user requirements for the proposed system. In addition, it is important to realize that Computer-Based Procedures are likely going to be placed in a facility that has never used this type of technology before. As for any new technology, a new way of thinking must come with it. Otherwise, if attempts are made to intermingle old ideas with new ways of doing business, problems are destined to occur.

More Details

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing

Science

Fuller, Elliot J.; Keene, Scott T.; Melianas, Armantas; Wang, Zhongrui; Asapu, Shiva; Agarwal, Sapan A.; Li, Yiyang; Tuchman, Yaakov; James, Conrad D.; Marinella, Matthew J.; Yang, J.J.; Salleo, Alberto; Talin, A.A.

Neuromorphic computers could overcome efficiency bottlenecks inherent to conventional computing through parallel programming and readout of artificial neural network weights in a crossbar memory array. However, selective and linear weight updates and <10-nanoampere read currents are required for learning that surpasses conventional computing efficiency. We introduce an ionic floating-gate memory array based on a polymer redox transistor connected to a conductive-bridge memory (CBM). Selective and linear programming of a redox transistor array is executed in parallel by overcoming the bridging threshold voltage of the CBMs. Synaptic weight readout with currents <10 nanoamperes is achieved by diluting the conductive polymer with an insulator to decrease the conductance. The redox transistors endure >1 billion write-read operations and support >1-megahertz write-read frequencies.

More Details

Imaging effectiveness calculator for non-design microscope samples

Applied Optics

Anthony, Stephen M.; Miller, Philip R.; Timlin, Jerilyn A.; Polsky, Ronen P.

When attempting to integrate single-molecule fluorescence microscopy with microfabricated devices such as microfluidic channels, fabrication constraints may prevent using traditional coverslips. Instead, the fabricated devices may require imaging through material with a different thickness or index of refraction. Altering either can easily reduce the quality of the image formation (measured by the Strehl ratio) by a factor of 2 or more, reducing the signal-to-noise ratio accordingly. In such cases, successful detection of single-molecule fluorescence may prove difficult or impossible. Here we provide software to calculate the effect of non-design materials upon the Strehl ratio or ensquared energy and explore the impact of common materials used in microfabrication.
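For small aberrations, the Strehl ratio used above as the image-quality metric can be estimated from the RMS wavefront error via the Maréchal approximation. A generic sketch of that relation (not the paper's software):

```python
import math

def strehl_marechal(rms_error_waves):
    """Marechal approximation: S ~ exp(-(2*pi*sigma)^2),
    where sigma is the RMS wavefront error in units of wavelength."""
    return math.exp(-(2.0 * math.pi * rms_error_waves) ** 2)

s = strehl_marechal(0.075)  # ~0.80, near the conventional diffraction-limited threshold
```

A factor-of-2 Strehl loss from a non-design coverslip, as described above, thus corresponds to only a modest increase in RMS wavefront error.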

More Details

Effects of Note-Taking Method on Knowledge Transfer in Inspection Tasks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Stites, Mallory C.; Matzen, Laura E.; Smartt, Heidi A.; Gastelum, Zoe N.

International nuclear safeguards inspectors visit nuclear facilities to assess their compliance with international nonproliferation agreements. Inspectors note whether anything unusual is happening in the facility that might indicate the diversion or misuse of nuclear materials, or anything that changed since the last inspection. They must complete inspections under restrictions imposed by their hosts, regarding both their use of technology or equipment and time allotted. Moreover, because inspections are sometimes completed by different teams months apart, it is crucial that their notes accurately facilitate change detection across a delay. The current study addressed these issues by investigating how note-taking methods (e.g., digital camera, hand-written notes, or their combination) impacted memory in a delayed recall test of a complex visual array. Participants studied four arrays of abstract shapes and industrial objects using a different note-taking method for each, then returned 48–72 h later to complete a memory test using their notes to identify objects changed (e.g., location, material, orientation). Accuracy was highest for both conditions using a camera, followed by hand-written notes alone, and all were better than having no aid. Although the camera-only condition benefitted study times, this benefit was not observed at test, suggesting drawbacks to using just a camera to aid recall. Change type interacted with note-taking method; although certain changes were overall more difficult, the note-taking method used helped mitigate these deficits in performance. Finally, elaborative hand-written notes produced better performance than simple ones, suggesting strategies for individual note-takers to maximize their efficacy in the absence of a digital aid.

More Details

Strong and Weak Scaling of the Sierra/SD Eigenvector Problem to a Billion Degrees of Freedom

Bunting, Gregory B.

Sierra/SD is a structural dynamics finite element software package that is known for its scalability and performance on DOE supercomputers. While there are historical documents demonstrating weak and strong scaling on DOE systems such as Redsky, no such formal studies have been done on modern architectures. This report demonstrates that Sierra/SD still scales on modern architectures. Unstructured meshes in the shape of an I-beam are solved in sizes ranging from fifty thousand degrees of freedom in serial up to one and a half billion degrees of freedom on over eighteen thousand processors using only default solver options. The report serves as a baseline for users to estimate the computational cost of finite element analyses in Sierra/SD, understand how solver options relate to computational costs, and pick optimal processor counts to solve a given problem size, as well as a baseline for evaluating computational cost and scalability on next-generation architectures.
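Strong-scaling results like these are conventionally summarized as speedup and parallel efficiency. A minimal sketch with hypothetical timings (the numbers below are illustrative, not from the report):

```python
def parallel_efficiency(t_ref, n_ref, t_n, n):
    """Strong-scaling efficiency: measured speedup (t_ref / t_n) divided by
    the ideal speedup (n / n_ref) relative to an n_ref-process baseline."""
    return (t_ref / t_n) / (n / n_ref)

# Hypothetical: 1000 s on 64 processes vs. 140 s on 512 processes
eff = parallel_efficiency(1000.0, 64, 140.0, 512)  # ~0.89
```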

More Details

Structure and electronic properties of rare earth DOBDC metal-organic-frameworks

Physical Chemistry Chemical Physics

Vogel, Dayton J.; Sava Gallis, Dorina F.; Nenoff, T.M.; Rimsza, Jessica R.

Here, we apply density functional theory (DFT) to investigate rare-earth metal organic frameworks (RE-MOFs), RE12(μ3-OH)16(C8O6H4)8(C8O6H5)4 (RE = Y, Eu, Tb, Yb), and characterize the level of theory needed to accurately predict structural and electronic properties in MOF materials with 4f-electrons. A two-step calculation approach of geometry optimization with spin-restricted DFT and large core potential (LCPs), and detailed electronic structures with spin-unrestricted DFT with a full valence potential + Hubbard U correction is investigated. Spin-restricted DFT with LCPs resulted in good agreement between experimental lattice parameters and optimized geometries, while a full valence potential is necessary for accurate representation of the electronic structure. The electronic structure of Eu-DOBDC MOF indicated a strong dependence on the treatment of highly localized 4f-electrons and spin polarization, as well as variation within a range of Hubbard corrections (U = 1-9 eV). For Hubbard corrected spin-unrestricted calculations, a U value of 1-4 eV maintains the non-metallic character of the band gap with slight deviations in f-orbital energetics. When compared with experimentally reported results, the importance of the full valence calculation and the Hubbard correction in correctly predicting the electronic structure is highlighted.

More Details

Insights into the solvent-assisted degradation of organophosphorus compounds by a Zr-based metal-organic framework

Dalton Transactions

Harvey, Jacob H.; Pearce, Charles J.; Hall, Morgan G.; Bruni, Eric J.; Decoste, Jared B.; Sava Gallis, Dorina F.

The degradation of a chemical warfare agent simulant using a catalytically active Zr-based metal-organic framework (MOF) as a function of different solvent systems was investigated. Complementary molecular modelling studies indicate that the differences in the degradation rates are related to the increasing size in the nucleophile, which hinders the rotation of the product molecule during degradation. Methanol was identified as an appropriate solvent for non-aqueous degradation applications and demonstrated to support the MOF-based destruction of both sarin and soman.

More Details

Rapid Synthesis of Monodispersed TATB Microparticles in Ionic Liquid Micelles

MRS Advances

Fan, Hongyou F.; Karler, Casey K.; Alarid, Leanne; Rosenberg, David

Controlling the microscopic morphology of energetic materials is of significant interest for improving their performance and production consistency. As an important insensitive high-explosive material, triaminotrinitrobenzene (TATB) has attracted tremendous research effort for military-grade explosives and propellants. In this study, a new, rapid, and inexpensive synthesis method for monodispersed TATB microparticles based on micelle-confined precipitation was developed. A surfactant with the proper hydrophilic-lipophilic balance value was found to be critical to the success of this synthesis. The morphology of the TATB microparticles can be tuned between quasi-spherical and faceted by controlling the speed of recrystallization.

More Details

Distinguishing between bulk and edge hydroxyl vibrational properties of 2:1 phyllosilicates via deuteration

Chemical Communications

Harvey, Jacob H.; Johnston, Cliff T.; Criscenti, Louise C.; Greathouse, Jeffery A.

The vibrational properties of phyllosilicate edges were observed via a combined molecular modeling and experimental approach. Deuterium exchange was utilized to isolate edge vibrational modes from their internal counterparts. The appearance of a specific peak within the broader D2O band indicates the presence of deuteration on the edge surface, and this peak is confirmed by the simulated spectra. These results are the first to unambiguously identify spectroscopic features of phyllosilicate edge sites.

More Details

Crossover in membranes for aqueous soluble organic redox flow batteries

Journal of the Electrochemical Society

Small, Leo J.; Laros, James H.; Anderson, Travis M.

The performances of five commercial anion exchange membranes are compared in aqueous soluble organic redox flow batteries (RFBs) containing the TEMPO and methyl viologen (MV) redox pair. Capacities between RFBs with different membranes are found to vary by >50% of theoretical after 100 cycles. This capacity loss is attributed to crossover of TEMPO and MV across the membrane and is dominated by either diffusion, migration, or electroosmotic drag, depending on the membrane. Counterintuitively, the worst performing membranes display the lowest diffusion coefficients for TEMPO and MV, instead seeing high crossover fluxes due to electroosmotic drag. This trend is rationalized in terms of the ion exchange capacity and water content of these membranes. Decreasing these values in an effort to minimize diffusion of the redox-active species while the RFB rests can inadvertently exacerbate conditions for electroosmotic drag when the RFB operates. Using fundamental membrane properties, it is demonstrated that the relative magnitude of crossover and capacity loss during RFB operation may be understood.

More Details

Calibration strategies and modeling approaches for predicting load-displacement behavior and failure for multiaxial loadings in threaded fasteners

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Mersch, J.P.; Smith, J.A.; Orient, George E.; Grimmer, Peter W.; Gearhart, Jhana S.

Multiple fastener reduced-order models and fitting strategies are used on a multiaxial dataset and these models are further evaluated using a high-fidelity analysis model to demonstrate how well these strategies predict load-displacement behavior and failure. Two common reduced-order modeling approaches, the plug and spot weld, are calibrated, assessed, and compared to a more intensive approach – a “two-block” plug calibrated to multiple datasets. An optimization analysis workflow leveraging a genetic algorithm was exercised on a set of quasistatic test data where fasteners were pulled at angles from 0° to 90° in 15° increments to obtain material parameters for a fastener model that best capture the load-displacement behavior of the chosen datasets. The one-block plug is calibrated just to the tension data, the spot weld is calibrated to the tension (0°) and shear (90°), and the two-block plug is calibrated to all data available (0°-90°). These calibrations are further assessed by incorporating these models and modeling approaches into a high-fidelity analysis model of the test setup and comparing the load-displacement predictions to the raw test data.

More Details

Towards molecular dynamics studies of hydrogen effects in Fe-Cr-Ni stainless steels

Proceedings of the International Offshore and Polar Engineering Conference

Zhou, Xiaowang Z.; Foster, Michael E.; Sills, Ryan B.; Karnesky, Richard A.

Austenitic stainless steels (Fe-Cr-Ni) are resistant to hydrogen embrittlement but have not been studied using molecular dynamics simulations due to the lack of an Fe-Cr-Ni-H interatomic potential. Herein we describe our recent progress towards molecular dynamics studies of hydrogen effects in Fe-Cr-Ni stainless steels. We first describe our Fe-Cr-Ni-H interatomic potential and demonstrate its characteristics relevant to mechanical properties. We then demonstrate that our potential can be used in molecular dynamics simulations to derive Arrhenius equation of hydrogen diffusion and to reveal twinning and phase transformation deformation mechanisms in stainless steels.
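Deriving an Arrhenius equation for hydrogen diffusion, as described above, amounts to a linear fit of ln D against 1/T. A self-contained sketch using synthetic data (the D0 and Ea values below are illustrative placeholders, not results from the paper):

```python
import math

kB = 8.617333e-5  # Boltzmann constant, eV/K

def fit_arrhenius(temps_K, diffusivities):
    """Least-squares fit of ln D = ln D0 - (Ea/kB)*(1/T); returns (D0, Ea in eV)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(D) for D in diffusivities]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * kB

# Synthetic diffusivities generated from D0 = 1e-7 m^2/s, Ea = 0.5 eV
temps = [600.0, 800.0, 1000.0, 1200.0]
data = [1e-7 * math.exp(-0.5 / (kB * T)) for T in temps]
D0, Ea = fit_arrhenius(temps, data)  # recovers D0 ~ 1e-7, Ea ~ 0.5 eV
```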

More Details

Rethinking how external pressure can suppress dendrites in lithium metal batteries

Journal of the Electrochemical Society

Zhang, Xin; Wang, Q.J.; Harrison, Katharine L.; Jungjohann, Katherine; Boyce, Brad B.; Roberts, Scott A.; Attia, Peter M.; Harris, Stephen J.

We offer an explanation for how dendrite growth can be inhibited when Li metal pouch cells are subjected to external loads, even for cells using soft, thin separators. We develop a contact mechanics model for tracking Li surface and sub-surface stresses where electrodes have realistically (micron-scale) rough surfaces. Existing models examine a single, micron-scale Li metal protrusion under a fixed local current density that presses more or less conformally against a separator or stiff electrolyte. At the larger, sub-mm scales studied here, contact between the Li metal and the separator is heterogeneous and far from conformal for surfaces with realistic roughness: the load is carried at just the tallest asperities, where stresses reach tens of MPa, while most of the Li surface feels no force at all. Yet, dendrite growth is suppressed over the entire Li surface. To explain this dendrite suppression, our electrochemical/mechanics model suggests that Li avoids plating at the tips of growing Li dendrites if there is sufficient local stress; that local contact stresses there may be high enough to close separator pores so that incremental Li+ ions plate elsewhere; and that creep ensures that Li protrusions are gradually flattened. These mechanisms cannot be captured by single-dendrite-scale analyses.

More Details

Dual-wavelength laser-induced damage threshold of a HfO2/SiO2 dichroic coating developed for high transmission at 527 nm and high reflection at 1054 nm

Proceedings of SPIE - The International Society for Optical Engineering

Field, Ella S.; Galloway, B.R.; Kletecka, Damon E.; Rambo, Patrick K.; Smith, Ian C.

Dichroic coatings have been developed for high transmission at 527 nm and high reflection at 1054 nm for laser operations in the nanosecond pulse regime. The coatings consist of HfO2 and SiO2 layers deposited with e-beam evaporation, and laser-induced damage thresholds as high as 12.5 J/cm2 were measured at 532 nm with 3.5 ns pulses (22.5 degrees angle of incidence, in S-polarization). However, laser damage measurements at the single wavelength of 532 nm do not adequately characterize the laser damage resistance of these coatings, since they were designed to operate at dual wavelengths simultaneously. This became apparent after one of the coatings damaged prematurely at a lower fluence in the beam train, which inspired further investigations. To gain a more complete understanding of the laser damage resistance, results of a dual-wavelength laser damage test performed at both 532 nm and 1064 nm are presented.

More Details

Uncertainty in linewidth quantification of overlapping Raman bands

Review of Scientific Instruments

Saltonstall, Christopher B.; Laros, James H.; Floro, Jerrold; Hopkins, Patrick E.; Norris, Pamela M.

Spectral linewidths are used to assess a variety of physical properties, even as spectral overlap makes quantitative extraction difficult owing to uncertainty. Uncertainty, in turn, can be minimized with the choice of appropriate experimental conditions used in spectral collection. In response, we assess the experimental factors dictating uncertainty in the quantification of linewidth from a Raman experiment highlighting the comparative influence of (1) spectral resolution, (2) signal to noise, and (3) relative peak intensity (RPI) of the overlapping peaks. Practically, Raman spectra of SiGe thin films were obtained experimentally and simulated virtually under a variety of conditions. RPI is found to be the most impactful parameter in specifying linewidth followed by the spectral resolution and signal to noise. While developed for Raman experiments, the results are generally applicable to spectroscopic linewidth studies illuminating the experimental trade-offs inherent in quantification.

More Details

Diverse balances of tubulin interactions and shape change drive and interrupt microtubule depolymerization

Soft Matter

Bollinger, Jonathan B.; Stevens, Mark J.

Microtubules are stiff biopolymers that self-assemble via the addition of GTP-tubulin (αβ-dimer bound to GTP), but hydrolysis of GTP- to GDP-tubulin within the tubules destabilizes them toward catastrophically-fast depolymerization. The molecular mechanisms and features of the individual tubulin proteins that drive such behavior are still not well-understood. Using molecular dynamics simulations of whole microtubules built from a coarse-grained model of tubulin, we demonstrate how conformational shape changes (i.e., deformations) in subunits that frustrate tubulin-tubulin binding within microtubules drive depolymerization of stiff tubules via unpeeling "ram's horns" consistent with experiments. We calculate the sensitivity of these behaviors to the length scales and strengths of binding attractions and varying degrees of binding frustration driven by subunit shape change, and demonstrate that the dynamic instability and mechanical properties of microtubules can be produced based on either balanced or imbalanced strengths of lateral and vertical binding attractions. Finally, we show how catastrophic depolymerization can be interrupted by small regions of the microtubule containing undeformed dimers, corresponding to incomplete lattice hydrolysis. The results demonstrate a mechanism by which microtubule rescue can occur.

More Details

Model predictive control tuning by inverse matching for a wave energy converter

Energies

Cho, Hancheol; Bacelli, Giorgio; Coe, Ryan G.

This paper investigates the application of a method to find the cost function or the weight matrices to be used in model predictive control (MPC) such that the MPC has the same performance as a predesigned linear controller in state-feedback form when constraints are not active. This is potentially useful when a successful linear controller already exists and it is necessary to incorporate the constraint-handling capabilities of MPC. This is the case for a wave energy converter (WEC), where the maximum power transfer law is well-understood. In addition to solutions based on numerical optimization, a simple analytical solution is also derived for cases with a short prediction horizon. These methods are applied for the control of an empirically-based WEC model. The results show that the MPC can be successfully tuned to follow an existing linear control law and to comply with both input and state constraints, such as actuator force and actuator stroke.

More Details

Information Design for XR Immersive Environments: Challenges and Opportunities

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Raybourn, Elaine M.; Stubblefield, William A.; Trumbo, Michael C.; Jones, Aaron P.; Whetzel, Jonathan H.; Fabian, Nathan D.

Cross Reality (XR) immersive environments offer challenges and opportunities in designing for cognitive aspects (e.g. learning, memory, attention, etc.) of information design and interactions. Information design is a multidisciplinary endeavor involving data science, communication science, cognitive science, media, and technology. In the present paper the holodeck metaphor is extended to illustrate how information design practices and some of the qualities of this imaginary computationally augmented environment (a.k.a. the holodeck) may be achieved in XR environments to support information-rich storytelling and real life, face-to-face, and virtual collaborative interactions. The Simulation Experience Design Framework & Method is introduced to organize challenges and opportunities in the design of information for XR. The notion of carefully blending both real and virtual spaces to achieve total immersion is discussed as the reader moves through the elements of the cyclical framework. A solution space leveraging cognitive science, information design, and transmedia learning highlights key challenges facing contemporary XR designers. Challenges include but are not limited to interleaving information, technology, and media into the human storytelling process, and supporting narratives in a way that is memorable, robust, and extendable.

More Details

Fissile mass and concentration criteria for criticality in geologic media near bedded salt repository

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Rechard, Robert P.; Sanchez, Lawrence C.; Mcdaniel, Patrick K.; Hunt, Jacob; Broadous, Gabriella

This paper describes the fissile mass and concentration necessary for a critical event to occur outside containers disposed in a bedded salt repository. The criticality limits are based on modeling mixtures of water, salt, dolomite, concrete, rust, and fissile material using a neutron/photon transport computational code. Several idealized depositional configurations of fissile material in the host rock are analyzed: homogeneous spheres and heterogeneous arrangements of plate fractures in regular arrays. Deposition of large masses and concentrations is required for criticality to occur at low 235U enrichments. Homogeneous mixtures with deposition in all the porosity are more reactive at high enrichments of 235U and 239Pu. However, unlike typical engineered systems, heterogeneous configurations can be more reactive than homogeneous systems at high enrichment when deposition occurs in only a portion of the porosity and the total porosity is small, because the relationship between the porosity of the fractures and matrix also strongly influences the results.

More Details

SCO2 power cycle component cost correlations from DOE data spanning multiple scales and applications

Proceedings of the ASME Turbo Expo

Weiland, Nathan T.; Lance, Blake L.; Pidaparti, Sandeep R.

Supercritical CO2 (sCO2) power cycles find potential application with a variety of heat sources including nuclear, concentrated solar (CSP), coal, natural gas, and waste heat sources, and consequently cover a wide range of scales. Most studies to date have focused on the performance of sCO2 power cycles, while economic analyses have been less prevalent, due in large part to the relative scarcity of reliable cost estimates for sCO2 power cycle components. Further, the accuracy of existing sCO2 techno-economic analyses suffers from a small sample set of vendor-based component costs for any given study. Improved accuracy of sCO2 component cost estimation is desired to enable a shift in focus from plant efficiency to economics as a driver for commercialization of sCO2 technology. This study reports on sCO2 component cost scaling relationships that have been developed collaboratively from an aggregate set of vendor quotes, cost estimates, and published literature. As one of the world’s largest supporters of sCO2 research and development, the Department of Energy (DOE) National Laboratories have access to a considerable pool of vendor component costs that span multiple applications specific to each National Laboratory’s mission, including fossil-fueled sCO2 applications at the National Energy Technology Laboratory (NETL), CSP at the National Renewable Energy Laboratory (NREL), and CSP, nuclear, and distributed energy sources at Sandia National Laboratories (SNL). The resulting cost correlations are relevant to sCO2 components in all these applications, and for scales ranging from 5-750 MWe. This work builds upon prior work at SNL, in which sCO2 component cost models were developed for CSP applications ranging from 1-100 MWe in size. Similar to the earlier SNL efforts, vendor confidentiality has been maintained throughout this collaboration and in the published results.
Cost models for each component were correlated from 4-24 individual quotes from multiple vendors, although the individual cost data points are proprietary and not shown. Cost models are reported for radial and axial turbines, integrally-geared and barrel-style centrifugal compressors, high temperature and low temperature recuperators, dry sCO2 coolers, and primary heat exchangers for coal and natural gas fuel sources. These models are applicable to sCO2-specific components used in a variety of sCO2 cycle configurations, and include incremental cost factors for advanced, high temperature materials for relevant components. Non-sCO2-specific costs for motors, gearboxes, and generators have been included to allow cycle designers to explore the cost implications of various turbomachinery configurations. Finally, the uncertainty associated with these component cost models is quantified by using AACE International-style class ratings for vendor estimates, combined with component cost correlation statistics.
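The scaling form such correlations take can be illustrated with a minimal sketch. The power-law shape and the incremental material-cost multiplier follow the abstract's description; the coefficient values below are hypothetical placeholders, not the DOE-derived fits:

```python
# Hypothetical power-law cost correlation: cost = f_mat * a * scale**b.
# The form mirrors the abstract's description; a, b, and the material
# factor are illustrative placeholders, not the published coefficients.

def component_cost(scale, a, b, material_factor=1.0):
    """Estimate a component cost from a scaling parameter.

    scale           -- e.g. turbine shaft power in kW
    a, b            -- correlation coefficients fit to vendor data
    material_factor -- incremental multiplier for high-temperature alloys
    """
    return material_factor * a * scale**b

# Notional 10 MW turbine with placeholder coefficients and a 20% alloy premium
cost = component_cost(10_000, a=1000.0, b=0.7, material_factor=1.2)
print(f"estimated cost: ${cost:,.0f}")
```

Economies of scale appear as b < 1: doubling the scaling parameter less than doubles the estimated cost.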

Compact heat exchanger semi-circular header burst pressure and strain validation

Proceedings of the ASME Turbo Expo

Lance, Blake W.; Carlson, Matthew D.

Compact heat exchangers for supercritical CO2 (sCO2) service are often designed with external, semi-circular headers. Their design is governed by the ASME Boiler & Pressure Vessel Code (BPVC) whose equations were typically derived by following Castigliano’s Theorems. However, there are no known validation experiments to support their claims of pressure rating or burst pressure predictions, nor is there much information about how and where failures occur. This work includes high pressure bursting of three semi-circular header prototypes for the validation of three aspects: (1) burst pressure predictions from the BPVC, (2) strain predictions from Finite Element Analysis (FEA), and (3) deformation from FEA. The header prototypes were designed with geometry and weld specifications from the BPVC Section VIII Division 1, a design pressure typical of sCO2 service of 3,900 psi (26.9 MPa), and were built with 316 SS. Repeating the test in triplicate allows for greater confidence in the experimental results and enables data averaging. Burst pressure predictions are compared with experimental results for accuracy assessment. The prototypes are analyzed to understand their failure mechanism and locations. Experimental strain and deformation measurements were obtained optically with Digital Image Correlation (DIC). This technique allows strain to be measured in two dimensions and even allows for deformation measurements, all without contacting the prototype. Eight cameras are used for full coverage of both headers on the prototypes. The rich data from this technique are an excellent validation source for FEA strain and deformation predictions. Experimental data and simulation predictions are compared to assess simulation accuracy.

Performance assessment model for degradation of tristructural-isotropic (TRISO) coated particle spent fuel

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Sassani, David C.; Gelbard, Fred G.

The U.S. Department of Energy is conducting research and development on generic concepts for disposal of spent nuclear fuel and high-level radioactive waste in multiple lithologies, including salt, crystalline (granite/metamorphic), and argillaceous (clay/shale) host rock. These investigations benefit greatly from international experience gained in disposal programs in many countries around the world. The focus of this study is the post-closure degradation and radionuclide-release rates for tristructural-isotropic (TRISO) coated particle spent fuels for various generic geologic repository environments [1-3]. The TRISO particle coatings provide safety features during and after reactor operations, with the SiC layer representing the primary barrier. Three mechanisms that may lead to release of radionuclides from the TRISO particles are: (1) helium pressure buildup [4] that may eventually rupture the SiC layer, (2) diffusive transport through the layers (solid-state diffusion in reactor, aqueous diffusion in porous media at repository conditions), and (3) corrosion [5] of the layers in groundwater/brine. For TRISO particles in a graphite fuel element, the degradation in an oxidizing geologic repository was concluded to be directly dependent on the oxidative corrosion rate of the graphite matrix [4], which was analyzed to be much slower than SiC layer corrosion processes. However, accumulated physical damage to the graphite fuel element may decrease its post-closure barrier capability more rapidly. Our initial performance model focuses on the TRISO particles and includes SiC failure from pressure increase via alpha-decay helium, as exacerbated by SiC layer corrosion [5]. This corrosion mechanism is found to be much faster than solid-state diffusion at repository temperatures but includes no benefit of protection by the other outer layers, which may prolong lifetime.
Our current model enhancements include constraining the material properties of the layers for porous media diffusion analyses. In addition to evaluating the SiC layer porosity structure, this work focuses on the inner and outer pyrolytic carbon (IPyC/OPyC) layers and the graphite compact, which are to be analyzed with the SiC layer in two modes: (a) intact SiC barrier until corrosion failure and (b) SiC with porous media transport. Our detailed performance analyses will consider these processes together with uncertainties in the properties of the layers to assess radionuclide release from TRISO particles and their graphite compacts.

Shear behavior of bedded salt interfaces and clay seams

53rd U.S. Rock Mechanics/Geomechanics Symposium

Sobolik, Steven R.; Buchholz, S.A.; Keffeler, E.; Borglum, S.; Reedlunn, Benjamin R.

Bedded salt contains interfaces between the host salt and other in situ materials such as clay seams, or different materials such as anhydrite or polyhalite in contact with the salt. These inhomogeneities are thought to have first-order effects on the closure of nearby drifts and potential roof collapses. Despite their importance, characterizations of the peak shear strength and residual shear strength of interfaces in salt are extremely rare in the published literature. This paper presents results from laboratory experiments designed to measure the mechanical behavior of a bedding interface or clay seam as it is sheared. The series of laboratory direct shear tests reported in this paper were performed on several samples of materials from the Permian Basin in New Mexico. These tests were conducted at several normal and shear loads up to the expected in situ pre-mining stress conditions. Tests were performed on samples with a halite/clay contact, a halite/anhydrite contact, a halite/polyhalite contact, and on plain salt samples without an interface for comparison. Intact shear strength values were determined for all of the test samples along with residual values for the majority of the tests. The test results indicated only a minor variation in shear strength, at a given normal stress, across all samples. This result was surprising because sliding along clay seams is regularly observed in the underground, suggesting the clay seam interfaces should be weaker than plain salt. Post-test inspections of these samples noted that salt crystals were intrinsic to the structure of the seam, which probably increased the shear strength as compared to a more typical clay seam.

Coupled hydro-mechanical modeling of injection-induced seismicity in the multiphase flow system

53rd U.S. Rock Mechanics/Geomechanics Symposium

Chang, Kyung W.; Yoon, Hongkyu Y.; Martinez, Mario A.; Newell, Pania N.

Fluid injection into the subsurface perturbs the pore pressure and stress state on pre-existing faults, potentially causing earthquakes. In a multiphase flow system, contrasts in fluid and rock properties between different structures produce changes in pressure gradients and, subsequently, in the stress field. Assuming two-phase (gas-water) flow and poroelasticity, we simulate a three-layered formation containing a basement fault, in which injection-induced pressure perturbations reach the fault directly for the given injection scenarios. A single-phase poroelasticity model with the same setting is also run to evaluate the effect of multiphase flow on the poroelastic response of the fault to gas injection. Sensitivity tests are performed by varying the fault permeability. The presence of the gaseous phase reduces the pressure buildup within the highly gas-saturated region, causing smaller Coulomb stress changes, whereas capillarity increases the pore pressure within the gas-water mixed region. Even though the gaseous plume does not reach the fault, poroelastic stressing can affect fault stability and, potentially, earthquake occurrence.
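The fault-stability metric referenced here, the Coulomb stress change, can be sketched directly. The formula is standard in induced-seismicity work; the friction coefficient and stress values below are illustrative, not taken from the paper's simulations:

```python
# Coulomb failure stress change on a fault (illustrative; sign convention:
# compression positive). Positive dCFS moves the fault toward failure.
#   dCFS = d_tau + mu * (dP - d_sigma_n)

def coulomb_stress_change(d_tau, d_sigma_n, d_pressure, mu=0.6):
    """d_tau: shear stress change; d_sigma_n: fault-normal stress change
    (compression positive); d_pressure: pore pressure change; mu: friction
    coefficient. All stresses in consistent units (e.g. MPa)."""
    return d_tau + mu * (d_pressure - d_sigma_n)

# Pore pressure buildup alone destabilizes the fault (dCFS > 0)...
print(round(coulomb_stress_change(0.0, 0.0, 0.1), 3))   # 0.06
# ...while poroelastic clamping (extra normal compression) can offset it.
print(round(coulomb_stress_change(0.0, 0.2, 0.1), 3))   # -0.06
```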

Identification of the Criegee intermediate reaction network in ethylene ozonolysis: Impact on energy conversion strategies and atmospheric chemistry

Physical Chemistry Chemical Physics

Hansen, Nils H.; Rousso, Aric C.; Jasper, Ahren W.; Ju, Yiguang

The reaction network of the simplest Criegee intermediate (CI) CH2OO has been studied experimentally during the ozonolysis of ethylene. The results provide valuable information about plasma- and ozone-assisted combustion processes and atmospheric aerosol formation. A network of CI reactions was identified, which is best described by sequential addition of the CI to ethylene, water, formic acid, and other molecules containing hydroxy, aldehyde, and hydroperoxy functional groups. Species resulting from as many as four sequential CI addition reactions were observed, and these species are highly oxygenated oligomers that are known components of secondary organic aerosols in the atmosphere. Insights into these reaction pathways were obtained from a near-atmospheric pressure jet-stirred reactor coupled to a high-resolution molecular-beam mass spectrometer. The mass spectrometer employs single-photon ionization with synchrotron-generated, tunable vacuum-ultraviolet radiation to minimize fragmentation via near-threshold ionization and to observe mass-selected photoionization efficiency (PIE) curves. Species identification is supported by comparison of the mass-selected, experimentally observed photoionization thresholds with theoretical calculations for the ionization energies. A variety of multi-functional peroxide species are identified, including hydroxymethyl hydroperoxide (HOCH2OOH), hydroperoxymethyl formate (HOOCH2OCHO), methoxymethyl hydroperoxide (CH3OCH2OOH), ethoxymethyl hydroperoxide (C2H5OCH2OOH), 2-hydroxyethyl hydroperoxide (HOC2H4OOH), dihydroperoxy methane (HOOCH2OOH), and 1-hydroperoxypropan-2-one [CH3C(O)CH2OOH]. A semi-quantitative analysis of the signal intensities as a function of successive CI additions and temperature provides mechanistic insights and valuable information for future modeling work of the associated energy conversion processes and atmospheric chemistry.
This work provides further evidence that the CI is a key intermediate in the formation of oligomeric species via the formation of hydroperoxides.

Characterization of particle and heat losses from falling particle receivers

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Ho, Clifford K.; Kinahan, Sean; Ortega, Jesus D.; Vorobieff, Peter; Mammoli, Andrea; Martins, Vanderlei

Camera-based imaging methods were evaluated to quantify both particle and convective heat losses from the aperture of a high-temperature particle receiver. A bench-scale model of a field-tested on-sun particle receiver was built, and particle velocities and temperatures were recorded using the small-scale model. Particles heated to over 700 °C in a furnace were released from a slot aperture and allowed to fall through a region that was imaged by the cameras. Particle-image, particle-tracking, and image-correlation velocimetry methods were compared against one another to determine the best method to obtain particle velocities. A high-speed infrared camera was used to evaluate particle temperatures, and a model was developed to determine particle and convective heat losses. In addition, particle sampling instruments were deployed during on-sun field tests of the particle receiver to determine if small particles were being generated that can pose an inhalation hazard. Results showed that while there were some recordable emissions during the tests, the measured particle concentrations were much lower than the acceptable health standard of 15 mg/m3. Additional bench-scale tests were performed to quantify the formation of particles during continuous shaking and dropping of the particles. Continuous formation of small particles in two size ranges (< ~1 micron and ~8-10 microns) was observed, due to deagglomeration and mechanical fracturing, respectively, during particle collisions.

Bayesian calibration of empirical models common in MELCOR and other nuclear safety codes

18th International Topical Meeting on Nuclear Reactor Thermal Hydraulics, NURETH 2019

Porter, N.W.; Mousseau, Vincent A.

In modern scientific analyses, physical experiments are often supplemented with computational modeling and simulation. This is especially true in the nuclear power industry, where experiments are prohibitively expensive, or impossible, due to extreme scales, high temperatures, high pressures, and the presence of radiation. To qualify these computational tools, it is necessary to perform software quality assurance, verification, validation, and uncertainty quantification. As part of this broad process, the uncertainty of empirically derived models must be quantified. In this work, three commonly used thermal hydraulic models are calibrated to experimental data. The empirical equations are used to determine single phase friction factor in smooth tubes, single phase heat transfer coefficient for forced convection, and the transfer of mass between two phases. Bayesian calibration methods are used to estimate the posterior distribution of the parameters given the experimental data. In cases where it is appropriate, mixed-effects hierarchical calibration methods are utilized. The analyses presented in this work result in justified and reproducible joint parameter distributions which can be used in future uncertainty analysis of nuclear thermal hydraulic codes. When using these joint distributions, uncertainty in the output will be lower than traditional methods of determining parameter uncertainty. The lower uncertainties are more representative of the state of knowledge for the phenomena analyzed in this work.
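As a toy illustration of the calibration workflow described (not the paper's actual models or data), a grid-based Bayesian update of a single empirical coefficient, here a Blasius-type friction factor fit to synthetic observations, might look like:

```python
# Grid-based Bayesian calibration sketch: posterior of one empirical
# coefficient a in f = a / Re**0.25, given noisy synthetic "data".
# Model, data, noise level, and prior range are all placeholders.
import math

reynolds = [1e4, 2e4, 5e4, 1e5]
observed = [0.316 / re**0.25 * (1 + eps)
            for re, eps in zip(reynolds, [0.01, -0.02, 0.015, -0.005])]
sigma = 0.001  # assumed (synthetic) absolute measurement noise

grid = [0.25 + 0.001 * i for i in range(151)]  # uniform prior on a

def log_like(a):
    # Gaussian likelihood of the data given coefficient a
    return sum(-0.5 * ((obs - a / re**0.25) / sigma) ** 2
               for re, obs in zip(reynolds, observed))

logs = [log_like(a) for a in grid]
m = max(logs)                                  # subtract max for stability
weights = [math.exp(l - m) for l in logs]      # posterior ∝ prior × likelihood
posterior_mean = sum(a * w for a, w in zip(grid, weights)) / sum(weights)
print(f"posterior mean of a: {posterior_mean:.3f}")
```

The same structure (prior, likelihood, normalized posterior) underlies the MCMC-based and hierarchical calibrations used in practice; a grid is shown only because it keeps the example self-contained.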

Communication-efficient property preservation in tracer transport

SIAM Journal on Scientific Computing

Bradley, Andrew M.; Bosler, Peter A.; Guba, Oksana G.; Taylor, Mark A.; Barnett, Gregory A.

Atmospheric tracer transport is a computationally demanding component of the atmospheric dynamical core of weather and climate simulations. Simulations typically have tens to hundreds of tracers. A tracer field is required to preserve several properties, including mass, shape, and tracer consistency. To improve computational efficiency, it is common to apply different spatial and temporal discretizations to the tracer transport equations than to the dynamical equations. Using different discretizations increases the difficulty of preserving properties. This paper provides a unified framework to analyze the property preservation problem and classes of algorithms to solve it. We examine the primary problem and a safety problem; describe three classes of algorithms to solve these; introduce new algorithms in two of these classes; make connections among the algorithms; analyze each algorithm in terms of correctness, bound on its solution magnitude, and its communication efficiency; and study numerical results. A new algorithm, QLT, has the smallest communication volume, and in an important case it redistributes mass approximately locally. These algorithms are only very loosely coupled to the underlying discretizations of the dynamical and tracer transport equations and thus are broadly and efficiently applicable. In addition, they may be applied to remap problems in applications other than tracer transport.
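For intuition, a global clip-and-redistribute limiter illustrates the property preservation problem: enforce per-cell shape bounds while conserving total tracer mass. This is only a simple serial sketch, not the communication-efficient QLT algorithm introduced in the paper:

```python
# Global clip-and-redistribute limiter: enforce bounds per cell, then put
# the clipped-off mass back where capacity remains. Assumes the bounds
# make the problem feasible (enough total capacity to absorb the change).

def limit(values, lo, hi):
    clipped = [min(max(v, l), h) for v, l, h in zip(values, lo, hi)]
    deficit = sum(values) - sum(clipped)       # mass lost (+) or gained (-)
    if deficit == 0:
        return clipped
    if deficit > 0:                            # add mass where headroom exists
        cap = [h - c for c, h in zip(clipped, hi)]
    else:                                      # remove mass where slack exists
        cap = [c - l for c, l in zip(clipped, lo)]
    total = sum(cap)
    return [c + deficit * k / total for c, k in zip(clipped, cap)]

# An out-of-bounds field keeps its total mass (2.0) while landing in [0, 1].
vals = limit([1.4, -0.2, 0.8], lo=[0, 0, 0], hi=[1, 1, 1])
print(vals, sum(vals))
```

The proportional redistribution here is global; the point of algorithms like QLT is to achieve the same mass and shape guarantees while redistributing mass approximately locally and with minimal communication.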

Swelling during pyrolysis of fibre–resin composites when heated above normal operating temperatures

WIT Transactions on Engineering Sciences

Houchens, Brent C.; Scott, Sarah N.; Brunini, Victor E.; Jones, Elizabeth M.; Montoya, Michael M.; Flores-Brito, Wendy; Hoffmeister, Kathryn N.G.

It is experimentally observed that multilayer fibre–resin composites can soften and swell significantly when heated above their designed operating temperatures. This swelling is expected to further accelerate the pyrolysis, releasing volatile components which can ignite in an oxygenated environment if exposed to a spark, flame or sufficiently elevated temperature. Here the intumescent behaviour of resin-infused carbon-fibre is investigated. Preliminary experiments and simulations are compared for a carbon-fibre sample radiatively heated on the top side and insulated on the bottom. Simulations consider coupled thermal and porous media flow.

Hyperspectral vegetation identification at a legacy underground nuclear explosion test site

Proceedings of SPIE - The International Society for Optical Engineering

Redman, Brian J.; Laros, James H.; Anderson, Dylan Z.; Craven, Julia M.; Miller, Elizabeth D.; Collins, Adam D.; Swanson, Erika M.; Schultz-Fellenz, Emily S.

The detection, location, and identification of suspected underground nuclear explosions (UNEs) are global security priorities that rely on integrated analysis of multiple data modalities for uncertainty reduction in event analysis. Vegetation disturbances may provide complementary signatures that can confirm or build on the observables produced by prompt sensing techniques such as seismic or radionuclide monitoring networks. For instance, the emergence of non-native species in an area may be indicative of anthropogenic activity, or changes in vegetation health may reflect changes in the site conditions resulting from an underground explosion. Previously, we collected high spatial resolution (10 cm) hyperspectral data from an unmanned aerial system at a legacy underground nuclear explosion test site and its surroundings. These data consist of visible and near-infrared wavebands over 4.3 km2 of high desert terrain along with high spatial resolution (2.5 cm) RGB context imagery. In this work, we employ various spectral detection and classification algorithms to identify and map vegetation species in an area of interest containing the legacy test site. We employ a frequentist framework for fusing multiple spectral detections across various reference spectra captured at different times and sampled from multiple locations. The spatial distribution of vegetation species is compared to the location of the underground nuclear explosion. We find a difference in species abundance within a 130 m radius of the center of the test site.

Atomic-scale interaction of a crack and an infiltrating fluid

Chemical Physics Letters: X

Tucker, W.C.; Rimsza, Jessica R.; Criscenti, Louise C.; Jones, Reese E.

In this work we investigate the Orowan hypothesis, that decreases in surface energy due to surface adsorbates lead directly to lowered fracture toughness, at an atomic/molecular level. We employ a Lennard-Jones system with a slit crack and an infiltrating fluid, nominally with gold-water properties, and explore steric effects by varying the soft radius of fluid particles and the influence of surface energy/hydrophobicity via the solid–fluid binding energy. Using previously developed methods, we employ the J-integral to quantify the sensitivity of fracture toughness to the influence of the fluid on the crack tip, and exploit dimensionless scaling to discover universal trends in behavior.

Soft Magnetic Multilayered FeSiCrB-FexN Metallic Glass Composites Fabricated by Spark Plasma Sintering

IEEE Magnetics Letters

Monson, Todd M.; Zheng, Baolong; Delany, Robert E.; Pearce, Charles J.; Langlois, Eric L.; Lepkowski, Stefan M.; Stevens, Tyler E.; Zhou, Yizhang; Atcitty, Christopher B.; Lavernia, Enrique J.

Novel multilayered FeSiCrB-FexN (x = 2-4) metallic glass composites were fabricated using spark plasma sintering of FeSiCrB amorphous ribbons (Metglas 2605SA3 alloy) and FexN (x = 2-4) powder. Crystalline FexN can serve as a high magnetic moment, high electrical resistance binder, and lamination material in the consolidation of amorphous and nanocrystalline ribbons, mitigating eddy currents while boosting magnetic performance and stacking factor in both wound and stacked soft magnetic cores. Stacking factors of nearly 100% can be achieved in an amorphous ribbon/iron nitride composite. FeSiCrB-FexN multilayered metallic glass composites prepared by spark plasma sintering have the potential to serve as a next-generation soft magnetic material in power electronics and electrical machines.

Nonlinear dynamics of aqueous dissolution of silicate materials

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Wang, Yifeng

Aqueous dissolution of silicate materials exhibits complex temporal evolution and rich pattern formations. Mechanistic understanding of this process is critical for the development of a predictive model for a long-term performance assessment of silicate glass as a waste form for high-level radioactive waste disposal. Here we provide a summary of a recently developed nonlinear dynamic model for silicate material degradation in an aqueous environment. This model is based on a simple self-organizational mechanism: dissolution of the silica framework of a material is catalyzed by cations released from the material's degradation, which in turn accelerates the further release of cations. This model provides a systematic prediction of the key features observed in silicate glass dissolution, including the occurrence of a sharp corrosion front, oscillatory dissolution, multiple stages of the alteration process, wavy dissolution fronts, growth rings, incoherent bandings of alteration products, and corrosion pitting. This work provides a new perspective for understanding silicate material degradation and evaluating the long-term performance of these materials as a waste form for radioactive waste disposal.

Applications of evidence theory to issues with nuclear weapons

PSA 2019 - International Topical Meeting on Probabilistic Safety Assessment and Analysis

Darby, John

Over the last 13 years, at Sandia National Laboratories we have applied the belief/plausibility measure from evidence theory to estimate the uncertainty for numerous safety and security issues for nuclear weapons. For such issues we have significant epistemic uncertainty and are unable to assign probability distributions. We have developed and applied custom software to implement the belief/plausibility measure of uncertainty. For safety issues we perform a quantitative evaluation, and for security issues (e.g., terrorist acts) we use linguistic variables (fuzzy sets) combined with approximate reasoning. We perform the following steps: (1) train Subject Matter Experts (SMEs) on the assignment of evidence; (2) work with SMEs to identify the concern(s), i.e., the top-level variable(s); and (3) work with SMEs to identify lower-level variables and their functional relationship(s) to the top-level variable(s). The SMEs then gather their State of Knowledge (SOK) and assign evidence to the lower-level variables. Using this information, we evaluate the variables using custom software and produce an estimate for the top-level variable(s), including uncertainty. We have extended the Kaplan-Garrick risk triplet approach to use the belief/plausibility measure of uncertainty.
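The belief/plausibility measure the abstract applies can be sketched in a few lines. Given basic mass assignments over subsets of a frame of discernment, belief sums the masses of subsets contained in an event and plausibility sums the masses of subsets intersecting it; the evidence values below are invented for illustration:

```python
# Belief and plausibility from a basic mass assignment (evidence theory).
# Bel(A) = sum of masses of subsets contained in A (evidence implying A);
# Pl(A)  = sum of masses of subsets intersecting A (evidence consistent with A).

def belief(masses, event):
    return sum(m for s, m in masses.items() if s <= event)

def plausibility(masses, event):
    return sum(m for s, m in masses.items() if s & event)

# Invented SME evidence over outcomes {low, med, high}: 0.5 committed to
# "low", 0.3 ambiguous between "low" and "med", 0.2 pure ignorance.
masses = {frozenset({"low"}): 0.5,
          frozenset({"low", "med"}): 0.3,
          frozenset({"low", "med", "high"}): 0.2}

event = frozenset({"low", "med"})
print(belief(masses, event), plausibility(masses, event))   # 0.8 1.0
```

The gap between belief and plausibility is exactly the epistemic uncertainty the abstract describes: evidence too ambiguous to commit for or against the event.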

High performance erasure coding for very large stripe sizes

Simulation Series

Haddock, Walker; Bangalore, Purushotham V.; Curry, Matthew L.; Skjellum, Anthony

Exascale computing demands high bandwidth and low latency I/O on the computing edge. Object storage systems can provide higher bandwidth and lower latencies than tape archive. File transfer nodes present a single point of mediation through which data moving between these storage systems must pass. Increasing the performance of erasure coding allows stripes to be subdivided into large numbers of shards. This paper’s contribution is a prototype nearline disk object storage system based on Ceph. We show that using general purpose graphical processing units (GPGPUs) for erasure coding on file transfer nodes is effective when using a large number of shards. We describe an architecture for nearline disk archive storage for use with high performance computing (HPC) and demonstrate its performance with benchmarking results. We compare the benchmark performance of our design against the Intel® Storage Acceleration Library (ISA-L) CPU-based erasure coding libraries using the native Ceph erasure coding feature.
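The shard-recovery idea behind erasure coding can be shown with a minimal single-parity sketch; the production coders the paper benchmarks (Ceph, ISA-L, GPGPU-accelerated codes) use Reed-Solomon arithmetic over far more shards and tolerate multiple losses:

```python
from functools import reduce

# Single-parity erasure code: k data shards plus one XOR parity shard.
# Any one lost shard is rebuilt by XOR-ing the survivors.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    """k equal-length data shards -> k + 1 shards (parity appended)."""
    return shards + [reduce(xor_bytes, shards)]

def recover(shards, lost):
    """Rebuild the shard at index `lost` from the surviving shards."""
    return reduce(xor_bytes, (s for i, s in enumerate(shards) if i != lost))

data = [b"ABCD", b"EFGH", b"IJKL"]
coded = encode(data)
print(recover(coded, 1))   # b'EFGH'
```

Subdividing a stripe into more shards spreads reconstruction work across more devices, which is why faster encoders make very large stripe sizes practical.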

Elucidating non-aqueous solvent stability and associated decomposition mechanisms for Mg energy storage applications from first-principles

Frontiers in Chemistry

Seguin, Trevor J.; Hahn, Nathan T.; Zavadil, Kevin R.; Persson, Kristin A.

Rational design of novel electrolytes with enhanced functionality requires fundamental molecular-level understanding of structure-property relationships. Here we examine the suitability of a range of organic solvents for non-aqueous electrolytes in secondary magnesium batteries using density functional theory (DFT) calculations as well as experimental probes such as cyclic voltammetry and Raman spectroscopy. The solvents considered include ethereal solvents (e.g., glymes), sulfones (e.g., tetramethylene sulfone), and acetonitrile. Computed reduction potentials show that all solvents considered are stable against reduction by Mg metal. Additional computations were carried out to assess the stability of solvents in contact with partially reduced Mg cations (Mg2+ → Mg+) formed during cycling (e.g., deposition) by identifying reaction profiles of decomposition pathways. Most solvents, including some proposed for secondary Mg energy storage applications, exhibit decomposition pathways that are surprisingly exergonic. Interestingly, the stability of these solvents is largely dictated by the magnitude of the kinetic barrier to decomposition. This insight should be valuable toward rational design of improved Mg electrolytes.

Bootstrapping and jackknife resampling to improve sparse-sample UQ methods for tail probability estimation

ASME 2019 Verification and Validation Symposium, VVS 2019

Jekel, Charles F.; Romero, Vicente J.

Tolerance Interval Equivalent Normal (TI-EN) and Superdistribution (SD) sparse-sample uncertainty quantification (UQ) methods are used for conservative estimation of small tail probabilities. These methods are used to estimate the probability of a response lying beyond a specified threshold with limited data. The study focused on sparse-sample regimes ranging from N = 2 to 20 samples, because this is reflective of most experimental and some expensive computational situations. A tail probability magnitude of 10−4 was examined on four different distribution shapes, in order to be relevant for quantification of margins and uncertainty (QMU) problems that arise in risk and reliability analyses. In most cases the UQ methods were found to have optimal performance with a small number of samples, beyond which the performance deteriorated as samples were added. Using this observation, a generalized Jackknife resampling technique was developed to average many smaller subsamples. This improved the performance of the SD and TI-EN methods, specifically when a larger than optimal number of samples was available. A Complete Jackknifing technique, which considered all possible sub-sample combinations, was shown to perform better in most cases than an alternative Bootstrap resampling technique.
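The resampling structure described, averaging a sparse-sample tail estimate over leave-one-out subsamples, can be sketched as follows. A plain normal-theory tail estimate stands in for the TI-EN and SD methods, which are more involved; the data values are synthetic:

```python
import itertools
import math
import statistics

# Normal-theory tail estimate (a simple stand-in for TI-EN/SD), averaged
# over all leave-d-out subsamples in the jackknife style the paper studies.

def normal_tail_prob(samples, threshold):
    """P(X > threshold) under a normal fit to the samples."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)
    z = (threshold - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

def jackknife_tail_prob(samples, threshold, leave_out=1):
    """Average the tail estimate over all size n - leave_out subsamples
    (leave_out = 1 gives the leave-one-out jackknife)."""
    n = len(samples)
    subsamples = itertools.combinations(samples, n - leave_out)
    return statistics.mean(normal_tail_prob(list(s), threshold)
                           for s in subsamples)

samples = [1.0, 1.2, 0.9, 1.1, 1.3, 0.8, 1.05, 0.95]   # synthetic data
print(jackknife_tail_prob(samples, threshold=2.0))
```

Enumerating every sub-sample combination, as in the Complete Jackknifing variant, is exactly what `itertools.combinations` does here; with larger `leave_out` the number of subsamples grows combinatorially.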

Exploration of multifidelity approaches for uncertainty quantification in network applications

Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering, UNCECOMP 2019

Geraci, Gianluca G.; Swiler, Laura P.; Crussell, Jonathan C.; Debusschere, Bert D.

Communication networks have evolved to a level of sophistication that requires computer models and numerical simulations to understand and predict their behavior. A network simulator is software that enables the network designer to model several components of a computer network, such as nodes, routers, switches, and links, and events such as data transmissions and packet errors, in order to obtain device- and network-level metrics. Network simulations, like many other numerical approximations that model complex systems, are subject to the specification of parameters and operative conditions of the system. Very often a full characterization of the system and its inputs is not possible, therefore Uncertainty Quantification (UQ) strategies need to be deployed to evaluate the statistics of its response and behavior. UQ techniques, despite the advancements of the last two decades, still suffer in the presence of a large number of uncertain variables and when the regularity of the system's response cannot be guaranteed. In this context, multifidelity approaches have recently gained popularity in the UQ community due to their flexibility and robustness with respect to these challenges. The main idea behind these techniques is to extract information from a limited number of high-fidelity model realizations and complement them with a much larger set of lower-fidelity evaluations. The final result is an estimator with much lower variance, i.e., a more accurate and reliable estimate. In this contribution we investigate the possibility of deploying multifidelity UQ strategies for computer network analysis. Two numerical configurations are studied, based on a simplified network with one client and one server. Preliminary results for these tests suggest that multifidelity sampling techniques can serve as effective UQ tools in network applications.
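The high-/low-fidelity combination described can be sketched as a two-model control-variate estimator. The analytic models below are synthetic stand-ins for a network simulator and a cheap surrogate:

```python
import math
import random
import statistics

random.seed(1)

def hf(x):                       # "expensive" high-fidelity model (synthetic)
    return math.sin(x) + 0.1 * x

def lf(x):                       # cheap low-fidelity approximation
    return math.sin(x)

x_small = [random.uniform(0, 3) for _ in range(20)]     # HF budget
x_large = [random.uniform(0, 3) for _ in range(2000)]   # extra LF budget

hf_vals = [hf(x) for x in x_small]
lf_vals = [lf(x) for x in x_small]
mh, ml = statistics.mean(hf_vals), statistics.mean(lf_vals)

# Control-variate weight estimated from the paired HF/LF samples
cov = sum((h - mh) * (l - ml)
          for h, l in zip(hf_vals, lf_vals)) / (len(x_small) - 1)
alpha = cov / statistics.variance(lf_vals)

# Sparse HF mean, corrected by the densely sampled LF mean
estimate = mh + alpha * (statistics.mean(lf(x) for x in x_large) - ml)
print(estimate)
```

Because the two models are strongly correlated, the corrected estimate has a much smaller variance than the 20-sample high-fidelity mean alone, which is the core of the multifidelity argument.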

Finepoints: Partitioned multithreaded MPI communication

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Grant, Ryan E.; Dosanjh, Matthew D.; Levenhagen, Michael J.; Brightwell, Ronald B.; Skjellum, Anthony

The MPI multithreading model has been historically difficult to optimize; the interface that it provides for threads was designed as a process-level interface. This model has led to implementations that treat function calls as critical regions and protect them with locks to avoid race conditions. We hypothesize that an interface designed specifically for threads can provide superior performance to current approaches and even outperform single-threaded MPI. In this paper, we describe a design for partitioned communication in MPI that we call finepoints. First, we assess the existing communication models for MPI two-sided communication and then introduce finepoints as a hybrid of MPI models that has the best features of each existing MPI communication model. In addition, “partitioned communication” created with finepoints leverages new network hardware features that cannot be exploited with current MPI point-to-point semantics, making this new approach both innovative and useful now and in the future. To demonstrate the validity of our hypothesis, we implement a finepoints library and show improvements against a state-of-the-art multithreaded optimized Open MPI implementation on a Cray XC40 with an Aries network. Our experiments demonstrate up to a 12× reduction in wait time for completion of send operations. This new model is shown working on a nuclear reactor physics neutron-transport proxy-application, providing up to 26.1% improvement in communication time and up to 4.8% improvement in runtime over the best performing MPI communication mode, single-threaded MPI.

More Details

Structural properties of crystalline and amorphous zirconium tungstate from classical molecular dynamics simulations

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Greathouse, Jeffery A.; Weck, Philippe F.; Gordon, Margaret E.; Kim, Eunja; Bryan, Charles R.

We use molecular simulations to provide a conceptual understanding of a crystalline-amorphous interface for a candidate negative thermal expansion (NTE) material. Specifically, classical molecular dynamics (MD) simulations were used to investigate the temperature and pressure dependence of the structural properties of ZrW2O8. Polarizability of the oxygen atoms was included to better account for the electronic charge distribution within the lattice. Constant-pressure simulations of cubic crystalline ZrW2O8 at ambient pressure reveal slight NTE behavior, characterized by a small structural rearrangement resulting in oxygen sharing between adjacent WO4 tetrahedra. Periodic quantum calculations confirm that the MD-optimized structure is lower in energy than the idealized structure obtained from neutron diffraction experiments. Additionally, simulations of pressure-induced amorphization of ZrW2O8 at 300 K indicate that an amorphous phase forms at pressures greater than 10 GPa, and this phase persists when the pressure is decreased to 1 bar. Simulations were also performed on a hybrid model consisting of amorphous ZrW2O8 in direct contact with the cubic crystalline phase. Upon equilibration at 300 K and 1 bar, the crystalline phase remains unchanged beyond a thin layer of disrupted structure at the amorphous interface. Detailed analysis reveals the transition in metal coordination at the interface.

More Details

Determination of factors influencing radionuclide transport in fractured crystalline rock

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Hadgu, Teklu H.; Kalinina, Elena; Wang, Yifeng

Numerical modeling of flow and transport through fractured crystalline rock was conducted to identify major factors that affect migration of radionuclides from a high-level nuclear waste repository. The study was based on data collected at the Mizunami Underground Research Laboratory (URL) in Japan. Distributions of fracture parameters were used to generate a selected number of discrete fracture network (DFN) realizations. For each realization the DFN was upscaled to a continuum mesh to provide permeability and porosity fields. The upscaled permeability and porosity fields were then used to study flow and transport through the fractured rock in a site-scale domain. The present study focuses on the effect of domain size and on the upscaling of the DFN to a continuum system. Simulation results and analysis of various upscaling and boundary condition assumptions are presented.

More Details

Evaluating the resistance of austenitic stainless steel welds to hydrogen embrittlement

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Ronevich, Joseph A.; San Marchi, Christopher W.; Balch, Dorian K.

Austenitic stainless steels are used extensively in hydrogen gas containment components due to their known resilience in hydrogen environments. Depending on the conditions, degradation can occur in austenitic stainless steels but typically the materials retain sufficient mechanical properties within such extreme environments. In many hydrogen containment applications, it is necessary or advantageous to join components through welding as it ensures minimal gas leakage, unlike mechanical fittings that can become leak paths that develop over time. Over the years many studies have focused on the mechanical behavior of austenitic stainless steels in hydrogen environments and determined their properties to be sufficient for most applications. However, significantly less data have been generated on austenitic stainless steel welds, which can exhibit more degradation than the base material. In this paper, we assess the trends observed in austenitic stainless steel welds tested in hydrogen. Experiments of welds including tensile and fracture toughness testing are assessed and comparisons to behavior of base metals are discussed.

More Details

Uranyl oxalate species in natural environments: Stability constants for aqueous and solid uranyl oxalate complexes

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Xiong, Yongliang X.; Wang, Yifeng

Uranyl ion, UO22+, and its aqueous complexes with organic and inorganic ligands are the dominant species for transport of naturally occurring uranium in Earth-surface environments. In nuclear waste management, uranyl ion and its aqueous complexes are expected to be responsible for uranium mobilization in disposal concepts where spent fuel is disposed in oxidizing environments such as unsaturated zones above the water table. In natural environments, oxalate in its fully deprotonated form, C2O42-, is ubiquitous, as oxalate is one of the most important degradation products of humic and fulvic acids. Oxalate is known to form aqueous complexes with uranyl ion that facilitate the transport of uranium. However, oxalate also forms solid phases with uranyl ion in certain environments, limiting the movement of uranium. Therefore, knowledge of the stability constants of aqueous and solid uranyl oxalate complexes is important not only to the understanding of the mobility of uranium in natural environments, but also to the performance assessment of radionuclides in geological repositories for spent nuclear fuel. In this work, we present the stability constants for UO2C2O4(aq) and UO2(C2O4)22- at infinite dilution based on our evaluation of the literature data over a wide range of ionic strengths up to 9.5 mol•kg-1. We also obtain the solubility constants at infinite dilution for the following solid uranyl oxalates, UO2C2O4•3H2O and UO2C2O4•H2O, based on solubility data over a wide range of ionic strengths up to 11 mol•kg-1. In our evaluation, we use the computer code EQ3/6 Version 8.0a. The model developed here is expected to enable researchers to accurately assess the role of oxalate in the mobilization/immobilization of uranium under various conditions, including those in geological repositories.

More Details

Design and evaluation of task-specific compressive optical systems

Proceedings of SPIE - The International Society for Optical Engineering

Redman, Brian J.; Birch, Gabriel C.; LaCasse, Charles F.; Dagel, Amber L.; Quach, Tu-Thach Q.; Sahakian, Meghan A.

Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers; however, machine learning classification algorithms do not require the same data representation used by humans. In this work we investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and a second architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and tradeoffs of these compressive imaging systems built for compressed classification of the MNIST data set. To evaluate the tradeoffs of the two architectures, we present radiometric and raytrace models for each system. Additionally, we investigate the impact of system aberrations on classification accuracy. We compare the performance of these systems over a range of compression ratios. Classification performance, radiometric throughput, and optical design manufacturability are discussed.
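
The sensing-matrix view of these architectures can be sketched abstractly: each prism/filter channel realizes one row of a matrix applied to the scene, and classification happens directly on the compressed measurements. This is a hypothetical illustration on synthetic data, not the paper's optimized matrix or its MNIST pipeline; the nonnegative entries reflect the fact that attenuation masks can only pass a fraction of the light.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_measurements = 256, 16   # heavy compression: 16 optical channels

# attenuation-only optics constrain the sensing matrix to [0, 1]
Phi = rng.uniform(0.0, 1.0, (n_measurements, n_pixels))

# two synthetic "scene" classes (stand-ins for digit images)
proto = rng.uniform(0.0, 1.0, (2, n_pixels))

def measure(x):
    # one detector reading per prism/filter channel
    return Phi @ x

centroids = np.array([measure(p) for p in proto])

def classify(x):
    # nearest-centroid classification performed in the compressed domain
    y = measure(x)
    return int(np.argmin(np.linalg.norm(centroids - y, axis=1)))

# a noisy instance of class 1 should still classify correctly
sample = proto[1] + 0.05 * rng.standard_normal(n_pixels)
label = classify(sample)
```
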

More Details

Robust uncertainty quantification using response surface approximations of discontinuous functions

International Journal for Uncertainty Quantification

Wildey, Timothy M.; Gorodetsky, A.A.; Belme, A.C.; Shadid, John N.

This paper considers response surface approximations for discontinuous quantities of interest. Our objective is not to adaptively characterize the interface defining the discontinuity. Instead, we utilize an epistemic description of the uncertainty in the location of a discontinuity to produce robust bounds on sample-based estimates of probabilistic quantities of interest. We demonstrate that two common machine learning strategies for classification, one based on nearest neighbors (Voronoi cells) and one based on support vector machines, provide reasonable descriptions of the region where the discontinuity may reside. In higher dimensional spaces, we demonstrate that support vector machines are more accurate for discontinuities defined by smooth interfaces. We also show how gradient information, often available via adjoint-based approaches, can be used to define indicators to effectively detect a discontinuity and to decompose the samples into clusters using an unsupervised learning technique. Numerical results demonstrate the epistemic bounds on probabilistic quantities of interest for simplistic models and for a compressible fluid model with a shock-induced discontinuity.
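
The nearest-neighbor (Voronoi-cell) description of the region containing a discontinuity can be illustrated in one dimension. This is a minimal sketch with a hypothetical step-function quantity of interest, not the paper's models: samples are clustered by which side of the jump they fall on, a 1-nearest-neighbor rule classifies new points, and the band where the two clusters' nearest neighbors are nearly equidistant is treated epistemically as the region where the discontinuity may reside.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy quantity of interest with a discontinuity at x = 0.5
def qoi(x):
    return np.where(x < 0.5, 1.0, 3.0)

x_train = rng.uniform(0.0, 1.0, 40)
side = (qoi(x_train) > 2.0).astype(int)   # cluster labels from the jump

def nearest_label(x):
    # Voronoi-cell (1-nearest-neighbor) classification of the region
    return side[np.argmin(np.abs(x_train - x))]

def in_uncertain_band(x, tol=0.05):
    # epistemic band: the nearest samples from the two clusters are
    # nearly equidistant, so the interface may lie anywhere in here
    d0 = np.min(np.abs(x_train[side == 0] - x))
    d1 = np.min(np.abs(x_train[side == 1] - x))
    return abs(d0 - d1) < tol
```

Robust bounds on a probabilistic quantity of interest then follow by assigning samples in the uncertain band to whichever side yields the extreme value.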

More Details

Decay Length Estimation of Single-, Two-, and Three-Wire Systems above Ground under HEMP Excitation

Progress In Electromagnetics Research B

Campione, Salvatore; Warne, Larry K.; Halligan, Matthew; Lavrova, Olga; Martin, Luis S.

We analytically model single-, two-, and three-wire systems above ground to determine the decay lengths of common and differential modes induced by an E1 high-altitude electromagnetic pulse (HEMP) excitation. Decay length information is pivotal for determining whether any two nodes in the power grid may be treated as uncoupled. We employ a frequency-domain method based on transmission line theory, ATLOG (Analytic Transmission Line Over Ground), to model infinitely long and finite single wires, as well as to solve the eigenvalue problem of single-, two-, and three-wire systems. Our calculations show that a single, semi-infinite power line can be approximated by a 10 km section of line and that the second electrical reflection for all line lengths longer than the decay length is below half the rated operating voltage. Furthermore, our results show that the differential mode propagates longer distances than the common mode in two- and three-wire systems, which should be taken into account when performing damage assessment for HEMP excitation. This analysis is a significant step toward simplifying the modeling of practical continental grid lengths while maintaining accuracy.
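
The decay length in a classical transmission-line picture follows directly from the propagation constant. This is an illustrative calculation, not the ATLOG model itself: the per-unit-length parameters below are representative guesses for a single wire over ground, with the series resistance dominated by a rough ground-return loss term (approximately ω·μ0/8 at 100 kHz), and the mode amplitude decaying as exp(-Re(γ)·z).

```python
import numpy as np

# representative per-unit-length line parameters (illustrative values)
R = 0.1       # ohm/m, dominated by lossy ground return (~w*mu0/8 at 100 kHz)
L = 2e-6      # H/m
C = 8e-12     # F/m
G = 1e-11     # S/m

f = 1e5       # Hz, within the E1 HEMP spectrum
w = 2 * np.pi * f

# classical transmission-line propagation constant
gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))

# fields of a propagating mode decay as exp(-Re(gamma) * z)
decay_length_km = 1.0 / gamma.real / 1e3
```

With these illustrative numbers the decay length comes out on the order of 10 km, the same order of magnitude as the semi-infinite-line approximation quoted above.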

More Details

Effects of extreme hydrogen environments on the fracture and fatigue behavior of additively manufactured stainless steels

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Smith, Thale R.; San Marchi, Christopher W.; Sugar, Joshua D.; Balch, Dorian K.

Additive manufacturing (AM) offers the potential for increased design flexibility in the low-volume production of complex engineering components for hydrogen service. However, the suitability of AM materials for such extreme service environments remains to be evaluated. This work examines the effects of internal and external hydrogen on AM type 304L austenitic stainless steels fabricated via directed-energy deposition (DED) and powder bed fusion (PBF) processes. Under ambient test conditions, AM materials with minimal manufacturing defects exhibit excellent combinations of tensile strength, tensile ductility, and fatigue resistance. To probe the effects of extreme hydrogen environments on the AM materials, tensile and fatigue tests were performed after thermal precharging in high-pressure gaseous hydrogen (internal hydrogen) or during exposure to high-pressure gaseous hydrogen (external hydrogen). Hydrogen appears to have a comparable influence on the AM 304L as on wrought materials, although the micromechanisms of tensile fracture and fatigue crack growth appear distinct. Specifically, microstructural characterization implicates the unique solidification microstructure of AM materials in the propagation of cracks under conditions of tensile fracture with hydrogen. These results highlight the need to establish comprehensive microstructure-property relationships for AM materials to ensure their suitability for use in extreme hydrogen environments.

More Details

Optimization of hardware and image processing for improved image quality in X-ray phase contrast imaging

Proceedings of SPIE - The International Society for Optical Engineering

Dagel, Amber L.; West, Roger D.; Goodner, Ryan N.; Grover, Steven M.; Epstein, Collin E.; Thompson, Kyle R.

High-quality image products in an X-Ray Phase Contrast Imaging (XPCI) system can be produced with proper system hardware and data acquisition. However, it may be possible to further increase the quality of the image products by addressing subtleties and imperfections in both hardware and the data acquisition process. Noting that addressing these issues entirely in hardware and data acquisition may not be practical, a more prudent approach is to determine the balance of how the apparatus may reasonably be improved and what can be accomplished with image post-processing techniques. Given a proper signal model for XPCI data, image processing techniques can be developed to compensate for many of the image quality degradations associated with higher-order hardware and data acquisition imperfections. However, processing techniques also have limitations and cannot entirely compensate for sub-par hardware or inaccurate data acquisition practices. Understanding system and image processing technique limitations enables balancing between hardware, data acquisition, and image post-processing. In this paper, we present some of the higher-order image degradation effects we have found associated with subtle imperfections in both hardware and data acquisition. We also discuss and demonstrate how a combination of hardware, data acquisition processes, and image processing techniques can increase the quality of XPCI image products. Finally, we assess the requirements for high-quality XPCI images and propose reasonable system hardware modifications and the limits of certain image processing techniques.

More Details

Perturbation theory to model shielding effectiveness of cavities loaded with electromagnetic dampeners

Electronics Letters

Campione, Salvatore; Reines, Isak C.; Warne, L.K.; Grimms, Caleb; Williams, Jeffery T.; Gutierrez, Roy K.; Coats, Rebecca S.; Basilio, Lorena I.

It is well known that a slotted resonant cavity with a high quality factor exhibits interior electromagnetic (EM) fields that may be even larger than the external field. The authors aim to reduce the cavity's EM fields and quality factor over a frequency band analytically, numerically, and experimentally by introducing microwave absorbing materials into the cavity. A perturbation model was developed to estimate the quality factor of loaded cavities, and is validated against full-wave simulations and experiments. Placing a 78.7 mil (2 mm) thick ECCOSORB-MCS absorber on the inside cavity wall above and below the aperture slot (occupying only 0.026% of the cavity volume) results in a reduction in shielding effectiveness of >19 dB and a reduction in quality factor of >91%, confirming the efficacy of this approach.
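
In the perturbation view, each loss mechanism contributes additively to 1/Q, so a small absorber patch with strong loss can dominate the loaded quality factor. The sketch below uses illustrative Q values, not the paper's cavity; the field enhancement at resonance scales with Q, so the change in shielding effectiveness is taken as 20·log10 of the Q ratio.

```python
import numpy as np

# perturbation view: loss mechanisms combine as 1/Q_loaded = sum(1/Q_i)
Q_walls = 5000.0      # illustrative unloaded (wall-loss) quality factor
Q_absorber = 400.0    # illustrative loss added by the thin absorber patch

Q_loaded = 1.0 / (1.0 / Q_walls + 1.0 / Q_absorber)

# the resonant interior field scales with Q, so the change in shielding
# effectiveness at resonance scales with the Q ratio
delta_SE_dB = 20.0 * np.log10(Q_walls / Q_loaded)
q_reduction_pct = 100.0 * (1.0 - Q_loaded / Q_walls)
```

With these illustrative numbers, a >90% reduction in Q corresponds to a roughly 20 dB change at resonance, the same scale reported above.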

More Details

Combined computational and experimental study of zirconium tungstate

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Kim, Eunja; Gordon, Margaret E.; Weck, Philippe F.; Greathouse, Jeffery A.; Meserole, S.P.; Rodriguez, Mark A.; Payne, Clay P.; Bryan, Charles R.

We have investigated cubic zirconium tungstate (ZrW2O8) using density functional perturbation theory (DFPT), along with experimental characterization to assess and validate the computational results. Cubic zirconium tungstate is among the few known materials exhibiting isotropic negative thermal expansion (NTE) over a broad temperature range, including room temperature, where it occurs metastably. Isotropic NTE materials are important for technological applications requiring thermal-expansion compensators in composites designed to have overall zero or adjustable thermal expansion. While cubic zirconium tungstate has attracted considerable attention experimentally, few computational studies have been dedicated to this well-known NTE material. Therefore, spectroscopic, mechanical, and thermodynamic properties have been derived from DFPT calculations. A systematic comparison of the calculated infrared, Raman, and phonon density-of-states spectra has been made with Fourier transform far-/mid-infrared and Raman data collected in this study, as well as with available inelastic neutron scattering measurements. The thermal evolution of the lattice parameter computed within the quasi-harmonic approximation exhibits negative expansion below the Debye temperature, consistent with the observed negative thermal expansion of cubic zirconium tungstate, α-ZrW2O8. These results show that this DFPT approach can be used for studying the spectroscopic, mechanical, and thermodynamic properties of prospective NTE ceramic waste forms for encapsulation of radionuclides produced during the nuclear fuel cycle.

More Details

Diagnostics and testing to assess the behavior of organic materials at high heat flux

Proceedings of the Thermal and Fluids Engineering Summer Conference

Brown, Alexander B.; Anderson, Ryan R.; Laros, James H.; Coombs, Deshawn

Pyrolysis of materials at high heat flux is less well studied because the high-heat-flux regime is not common to many practical fire applications. The fire behavior of organic materials in such an environment needs further characterization in order to construct models that predict the dynamics of this regime. The test regime is complicated by the temperatures achieved and the speed at which materials decompose under the flux condition. A series of tests was performed that exposed a variety of materials to this environment. The resulting imagery provides unique insights into the behavior of various materials under these conditions. Furthermore, the experimental and processing techniques suggest analytical methods that can be employed to extract quantitative information from pyrolysis experiments.

More Details

Pyrolysis under extreme heat flux characterized by mass loss and three-dimensional scans

Proceedings of the Thermal and Fluids Engineering Summer Conference

Engerer, Jeffrey D.; Brown, Alexander B.

A variety of energy sources produce intense radiative flux (≫100 kW/m2) well beyond those typical of fire environments. Such energy sources include directed energy, nuclear weapons, and propellant fires. Studies of material response to irradiation typically focus on much lower heat flux; characterization of materials at extreme flux is limited. Various common cellulosic and synthetic-polymer materials were exposed to intense irradiation (up to 3 MW/m2) using the Solar Furnace at Sandia National Laboratories. When irradiated, these materials typically pyrolyzed and ignited after a short time (<1 s). The mass loss for each sample was recorded; the topology of the pyrolysis crater was reconstructed using a commercial three-dimensional scanner. The scans spatially resolved the volumetric displacement, mapping this response to the radially varying flux and fluence. These experimental data better characterize material properties and responses, such as the pyrolysis efflux rate, aiding the development of pyrolysis and ignition models at extreme heat flux.

More Details

Sodium pump performance in the NASCORD database

PSA 2019 - International Topical Meeting on Probabilistic Safety Assessment and Analysis

Jankovsky, Zachary; Stuart, Zacharia W.; Denman, Matthew R.

Sodium-cooled Fast Reactors (SFRs) have an extended operational history that can be leveraged to accelerate the licensing process for modern designs. Sandia National Laboratories has recently reconstituted the United States SFR data from the Centralized Reliability Data Organization (CREDO) into a new modern database called the Sodium System Component Reliability Database (NaSCoRD). NaSCoRD contains a record of 117 pumps, 60 with a sodium working fluid, that have operated in EBR-II, FFTF, and test loops including those operated by both Westinghouse and the Energy Technology Engineering Center. This paper will present sodium pump failure probabilities for various conditions supported by the U.S. facility CREDO data recovered under NaSCoRD. The current sodium pump reliability estimates will be presented in comparison to estimates provided in historical studies. The impacts on these reliability estimates of the corrections suggested in an EG&G Idaho report and of various prior distributions will also be presented.
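
The influence of a prior distribution on a component failure-rate estimate can be illustrated with the standard conjugate gamma-Poisson update used in reliability analysis. The counts below are hypothetical illustrative numbers, not actual NaSCoRD or CREDO records.

```python
# conjugate gamma prior on the failure rate (failures per pump-hour),
# updated with a Poisson-distributed failure count
prior_alpha, prior_beta = 0.5, 1.0e5   # Jeffreys-like diffuse prior

failures = 3            # hypothetical observed pump failures
operating_hours = 2.0e5  # hypothetical cumulative pump-hours

# posterior gamma parameters
post_alpha = prior_alpha + failures
post_beta = prior_beta + operating_hours

rate_mean = post_alpha / post_beta   # posterior-mean failure rate
```

The prior's pseudo-count and pseudo-exposure (alpha, beta) pull the estimate away from the raw rate failures/hours, which is exactly the sensitivity to prior choice the paper examines.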

More Details

An extension of conditional point sampling to quantify uncertainty due to material mixing randomness

International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2019

Vu, Emily V.; Olson, Aaron J.

Radiation transport in stochastic media is a problem found in a multitude of applications, and tools capable of thoroughly modeling this type of problem are still needed. A collection of approximate methods has been developed to produce accurate mean results, but methods that quantify the spread of results caused by the randomness of material mixing remain in demand. In this work, the new stochastic-media transport algorithm Conditional Point Sampling is extended using Embedded Variance Deconvolution so that it can compute the variance caused by material mixing. The accuracy of this approach is assessed for 1D, binary, Markovian-mixed media by comparing results to published benchmark values, and the behavior of the method is numerically studied as a function of user parameters. We demonstrate that this extension of Conditional Point Sampling is able to compute the variance caused by material mixing with accuracy dependent on the accuracy of the conditional probability function used.
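
The idea behind variance deconvolution can be sketched with the law of total variance: the observed spread of per-realization sample means contains both the material-mixing variance and the per-realization statistical (Monte Carlo) noise, and the latter can be estimated and subtracted. This is a generic illustration on synthetic data, not the Conditional Point Sampling algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(3)

n_realizations = 200   # independent material-mixing realizations
n_histories = 50       # transport histories per realization

# synthetic stand-in: each realization has its own true QoI (mixing
# spread 0.2), observed through per-history statistical noise (0.5)
true_qoi = rng.normal(1.0, 0.2, n_realizations)
tallies = true_qoi[:, None] + rng.normal(0.0, 0.5,
                                         (n_realizations, n_histories))

means = tallies.mean(axis=1)
# unbiased estimate of each realization's squared standard error
sem_sq = tallies.var(axis=1, ddof=1) / n_histories

# deconvolution: subtract the average statistical variance from the
# observed spread to isolate the variance caused by material mixing
var_mixing = means.var(ddof=1) - sem_sq.mean()
```

Here the true mixing variance is 0.04, and the deconvolved estimate recovers it despite the much larger per-history noise.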

More Details

Methods of sensitivity analysis in geologic disposal safety assessment (GDSA) framework

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Stein, Emily S.; Swiler, Laura P.; Sevougian, Stephen D.

Probabilistic simulations of the post-closure performance of a generic deep geologic repository for commercial spent nuclear fuel in shale host rock provide a test case for comparing sensitivity analysis methods available in the Geologic Disposal Safety Assessment (GDSA) Framework, the U.S. Department of Energy's state-of-the-art toolkit for repository performance assessment. Simulations assume a thick low-permeability shale with aquifers (potential paths to the biosphere) above and below the host rock. Multi-physics simulations on the 7-million-cell grid are run in a high-performance computing environment with PFLOTRAN. Epistemically uncertain inputs include properties of the engineered and natural systems. The output variables of interest, maximum I-129 concentrations (independent of time) at observation points in the aquifers, vary over several orders of magnitude. Variance-based global sensitivity analyses (i.e., calculations of sensitivity indices) conducted with Dakota use polynomial chaos expansion (PCE) and Gaussian process (GP) surrogate models. Results of analyses conducted with raw output concentrations and with log-transformed output concentrations are compared. Using log-transformed concentrations results in larger sensitivity indices for more influential input variables, smaller sensitivity indices for less influential input variables, and more consistent values for sensitivity indices between methods (PCE and GP) and between analyses repeated with samples of different sizes.
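
The effect of log-transforming an output that spans orders of magnitude can be illustrated with a simple pick-freeze estimate of first-order variance-based (Sobol) indices. This is a toy model with an exponential response, not the GDSA/PFLOTRAN simulations or Dakota's PCE/GP surrogates; the point is only that the indices are estimated far more cleanly on the log-transformed output.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 20000

def model(x1, x2):
    # toy repository-style response spanning orders of magnitude
    return np.exp(4.0 * x1 + 0.5 * x2)

def first_order_indices(transform):
    # pick-freeze estimator: correlate runs that share one input column
    A1, A2 = rng.uniform(size=N), rng.uniform(size=N)
    B1, B2 = rng.uniform(size=N), rng.uniform(size=N)
    yA = transform(model(A1, A2))
    y1 = transform(model(A1, B2))   # freeze x1, resample x2
    y2 = transform(model(B1, A2))   # freeze x2, resample x1
    var = yA.var()
    s1 = np.cov(yA, y1)[0, 1] / var
    s2 = np.cov(yA, y2)[0, 1] / var
    return s1, s2

s_raw = first_order_indices(lambda y: y)
s_log = first_order_indices(np.log)
```

In log space the model is additive, so the indices have exact values (S1 = 16/16.25 ≈ 0.985) that the estimator recovers with small samples; the raw-space indices rank the inputs the same way but with noisier estimates.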

More Details

Post-closure performance assessment for deep borehole disposal of Cs/Sr capsules

Energies

Freeze, Geoffrey A.; Stein, Emily S.; Brady, Patrick V.

Post-closure performance assessment (PA) calculations suggest that deep borehole disposal of cesium (Cs)/strontium (Sr) capsules, a U.S. Department of Energy (DOE) waste form (WF), is safe, resulting in no releases to the biosphere over 10,000,000 years when the waste is placed in a 3-5 km deep waste disposal zone. The same is true when a hypothetical breach of a stuck waste package (WP) is assumed to occur at much shallower depths penetrated by through-going fractures. Cs and Sr retardation in the host rock is a key control on radionuclide movement. Calculated borehole performance would be even stronger if credit were taken for the presence of the WP.

More Details

Identification of Porphyrin-Silica Composite Nanoparticles using Atmospheric Solids Analysis Probe Mass Spectrometry

MRS Advances

Karler, Casey; Parchert, Kylea J.; Ricken, James B.; Carson, Bryan C.; Mowry, Curtis D.; Fan, Hongyou F.; Ye, Dongmei Y.

Porphyrins are vital pigments involved in biological energy transduction processes. Their ability to absorb light and convert it to energy has raised interest in using porphyrin nanoparticles as photosensitizers in photodynamic therapy. A recent study showed that self-assembled porphyrin-silica composite nanoparticles can selectively destroy tumor cells, but detection of the cellular uptake of these nanoparticles was limited to imaging microscopy. Here we developed a novel method to rapidly identify porphyrin-silica composite nanoparticles using Atmospheric Solids Analysis Probe-Mass Spectrometry (ASAP-MS). ASAP-MS can directly analyze complex mixtures without the need for sample preparation. Porphyrin-silica composite nanoparticles were vaporized using heated nitrogen desolvation gas, and their thermal profiles were examined to identify distinct mass-to-charge (m/z) signatures. HeLa cells were incubated in growth media containing the nanoparticles and, after sufficient washing to remove residual nanoparticles, the cell suspension was loaded onto the end of the ASAP glass capillary probe. Upon heating, the HeLa cells degraded and the porphyrin-silica composite nanoparticles were released. Vaporized nanoparticles were ionized and detected by MS. The cellular uptake of porphyrin-silica composite nanoparticles was thus identified using this ASAP-MS method.

More Details

A novel energy-conversion device for wind and hydrokinetic applications

ASME-JSME-KSME 2019 8th Joint Fluids Engineering Conference, AJKFluids 2019

Houchens, Brent C.; Marian, David V.; Pol, Suhas; Westergaard, Carsten H.

In its simplest implementation, patent-protected AeroMINE consists of two opposing foils, where a low-pressure zone is generated between them. The low pressure draws fluid through orifices in the foil surfaces from plenums inside the foils. The inner plenums are connected to ambient pressure. If an internal turbine-generator is placed in the path of the flow to the plenums, energy can be extracted. The fluid transports the energy through the plenums, and the turbine-generator can be located at ground level, inside a controlled environment for easy access and to avoid inclement weather conditions or harsh environments. This contained internal turbine-generator has the only moving parts in the system, isolated from people, birds and other wildlife. AeroMINEs could be used in distributed-wind energy settings, where the stationary foil pairs are located on warehouse rooftops, for example. Flow created by several such foil pairs could be combined to drive a common turbine-generator.

More Details

Synthesis of complex rare earth nanostructures using in situ liquid cell transmission electron microscopy

Nanoscale Advances

Laros, James H.; Nenoff, T.M.; Pratt, Sarah H.; Hattar, Khalid M.

Energy and cost efficient synthesis pathways are important for the production, processing, and recycling of rare earth metals necessary for a range of advanced energy and environmental applications. In this work, we present results of successful in situ liquid cell transmission electron microscopy experiments producing and imaging rare earth nanostructures synthesized from aqueous salt solutions via radiolysis under a 200 keV electron beam. Nucleation, growth, and crystallization processes for nanostructures formed in yttrium(iii) nitrate hydrate (Y(NO3)3·4H2O), europium(iii) chloride hydrate (EuCl3·6H2O), and lanthanum(iii) chloride hydrate (LaCl3·7H2O) solutions are discussed. In situ electron diffraction analysis in a closed microfluidic configuration indicated that rare earth metal, salt, and metal oxide structures were synthesized. Real-time imaging of nanostructure formation was compared in closed cell and flow cell configurations. Notably, this work also includes the first known collection of automated crystal orientation mapping data through liquid using a microfluidic transmission electron microscope stage, which permits the deconvolution of amorphous and crystalline features (orientation and interfaces) inside the resulting nanostructures.

More Details

EXPERIMENTAL TESTING OF A 1MW SCO2 TURBOCOMPRESSOR

Conference Proceedings of the European sCO2 Conference

Rapp, Logan M.; Stapp, David

The Nuclear Energy Systems Laboratory (NESL) Brayton Laboratory at Sandia National Laboratories has been at the forefront of supercritical carbon dioxide (sCO2) power cycle development since 2007 when internal R&D funds were used to investigate the stability of sCO2 as a working fluid for power cycles. Since then, Sandia has been a leader in research and development of sCO2 power cycles through government funded research and by partnering with industry to design and test components necessary for commercialization of sCO2 Brayton cycles. Peregrine Turbine Technologies (PTT) is a small business working to commercialize sCO2 power cycles with their proprietary thermodynamic cycles, heat exchangers, and turbomachinery designs. Under a Small Business Innovation Research (SBIR) program with the United States Air Force Research Laboratory, PTT has designed a novel motorless turbocompressor for sCO2 power cycles. In 2017, Sandia purchased the first sCO2 turbocompressor from PTT and installed it into the 1-MW thermal turbomachinery development platform at Sandia. PTT and Sandia have worked together to experimentally test the turbocompressor to the limits of the development platform (932 °F at 2500 psi). This report will detail the design of the turbomachinery development platform, the novel process used to start the turbomachinery, and the experimental results to date. The report will also look at lessons learned throughout the process of constructing and operating an experimental sCO2 loop.

More Details

Code-verification techniques for hypersonic reacting flows in thermochemical nonequilibrium

AIAA Aviation 2019 Forum

Freno, Brian A.; Carnes, Brian C.; Weirs, Vincent G.

The study of hypersonic flows and their underlying aerothermochemical reactions is particularly important in the design and analysis of vehicles exiting and reentering Earth’s atmosphere. Computational physics codes can be employed to simulate these phenomena; however, code verification of these codes is necessary to certify their credibility. To date, few approaches have been presented for verifying codes that simulate hypersonic flows, especially flows reacting in thermochemical nonequilibrium. In this paper, we present our code-verification techniques for hypersonic reacting flows in thermochemical nonequilibrium, as well as their deployment in the Sandia Parallel Aerodynamics and Reentry Code (SPARC).

More Details

Spectral and polarimetric remote sensing for CBRNE applications

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Dylan Z.; Appelhans, Leah A.; Craven, Julia M.; LaCasse, Charles F.; Vigil, Steven R.; Dzur, Robert; Briggs, Trevor; Miller, Elizabeth; Schultz-Fellenz, Emily

Optical remote sensing has become a valuable tool in many application spaces because it can be unobtrusive, search large areas efficiently, and is increasingly accessible through commercially available products and systems. In the application space of chemical, biological, radiological, nuclear, and explosives (CBRNE) sensing, optical remote sensing can be an especially valuable tool because it enables data to be collected from a safe standoff distance. Data products and results from remote sensing collections can be combined with results from other methods to offer an integrated understanding of the nature of activities in an area of interest and may be used to inform in-situ verification techniques. This work provides an overview of several independent research efforts focused on developing and leveraging spectral and polarimetric sensing techniques for CBRNE applications, including system development efforts, field deployment campaigns, and data exploitation and analysis results. While this body of work has primarily focused on the application spaces of chemical and underground nuclear explosion detection and characterization, the developed tools and techniques may have applicability to the broader CBRNE domain.

More Details

Near-wall modeling using coordinate frame invariant representations and neural networks

AIAA Aviation 2019 Forum

Miller, Nathan M.; Barone, Matthew F.; Davis, Warren L.; Fike, Jeffrey A.

Near-wall turbulence models in Large-Eddy Simulation (LES) typically approximate near-wall behavior using a solution to the mean flow equations. This approach inevitably leads to errors when the modeled flow does not satisfy the assumptions surrounding the use of a mean flow approximation for an unsteady boundary condition. Herein, modern machine learning (ML) techniques are utilized to implement a coordinate frame invariant model of the wall shear stress that is derived specifically for complex flows for which mean near-wall models are known to fail. The model operates on a set of scalar and vector invariants based on data taken from the first LES grid point off the wall. Neural networks were trained and validated on spatially filtered direct numerical simulation (DNS) data. The trained networks were then tested on data to which they were never previously exposed, and the accuracy of the networks’ predictions of wall shear stress was compared to both a standard mean wall model approach and to the true stress values taken from the DNS data. The ML approach showed considerable improvement in the accuracy of individual shear stress predictions and produced a more accurate distribution of wall shear stress values than did the standard mean wall model. This result held both in regions where the standard mean approach typically performs satisfactorily and in regions where it is known to fail, and it held whether the networks were trained and tested on data from the same flow type/region or on data from different flow topologies.
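The coordinate-frame-invariant inputs described above can be sketched minimally: scalar invariants built from a near-wall velocity sample and the wall-normal direction do not change when the coordinate frame is rotated. The particular invariants and values below are hypothetical illustrations, not the paper's actual input set.

```python
import numpy as np

def scalar_invariants(u, n):
    """Build coordinate-frame-invariant scalars from a velocity sample u
    and the wall-normal unit vector n (an illustrative choice of invariants)."""
    return np.array([
        np.dot(u, u),                     # squared speed
        np.dot(u, n),                     # wall-normal velocity component
        np.linalg.norm(np.cross(u, n)),   # wall-parallel speed
    ])

def rotation_z(theta):
    """Rotation matrix about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

u = np.array([1.2, 0.3, -0.5])   # hypothetical off-wall velocity sample
n = np.array([0.0, 0.0, 1.0])    # wall-normal direction

R = rotation_z(0.7)              # arbitrary rotation of the coordinate frame
inv_orig = scalar_invariants(u, n)
inv_rot = scalar_invariants(R @ u, R @ n)
print(np.allclose(inv_orig, inv_rot))  # True: the inputs do not depend on the frame
```

Because dot products and norms are preserved by orthogonal transformations, a network trained on such inputs inherits frame invariance by construction.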

More Details

Swelling during pyrolysis of fibre–resin composites when heated above normal operating temperatures

WIT Transactions on Engineering Sciences

Houchens, Brent C.; Scott, Sarah N.; Brunini, Victor E.; Jones, Elizabeth M.; Montoya, Michael M.; Flores-Brito, Wendy; Hoffmeister, Kathryn N.G.

It is experimentally observed that multilayer fibre–resin composites can soften and swell significantly when heated above their designed operating temperatures. This swelling is expected to further accelerate the pyrolysis, releasing volatile components which can ignite in an oxygenated environment if exposed to a spark, flame or sufficiently elevated temperature. Here the intumescent behaviour of resin-infused carbon-fibre is investigated. Preliminary experiments and simulations are compared for a carbon-fibre sample radiatively heated on the top side and insulated on the bottom. Simulations consider coupled thermal and porous media flow.

More Details

COARSE QUAD LAYOUTS THROUGH ROBUST SIMPLIFICATION OF CROSS FIELD SEPARATRIX PARTITIONS

Proceedings of the 28th International Meshing Roundtable, IMR 2019

Viertel, Ryan V.; Osting, Braxton; Staten, Matthew L.

Streamline-based quad meshing algorithms use smooth cross fields to partition surfaces into quadrilateral regions by tracing cross field separatrices. In practice, re-entrant corners and misalignment of singularities lead to small regions and limit cycles, negating some of the benefits a quad layout can provide in quad meshing. We introduce three novel methods to improve on a pipeline for coarse quad partitioning. First, we formulate an efficient method to compute high-quality cross fields on curved surfaces by extending the diffusion generated method from Viertel and Osting (SISC, 2019). Next, we introduce a method for accurately computing the trajectory of streamlines through singular triangles that prevents tangential crossings. Finally, we introduce a robust method to produce coarse quad layouts by simplifying the partitions obtained via naive separatrix tracing. Our methods are tested on a database of 100 objects and the results are analyzed. The algorithm performs well both in terms of efficiency and visual results on the database when compared to state-of-the-art methods.

More Details

Validation of calibrated K-ɛ model parameters for jet-in-crossflow

AIAA Aviation 2019 Forum

Miller, Nathan M.; Beresh, Steven J.; Ray, Jaideep R.

Previous efforts determined a set of calibrated model parameters for Reynolds-Averaged Navier-Stokes (RANS) simulations of a compressible jet in crossflow (JIC) using a k-ɛ turbulence model. These coefficients were derived from Particle Image Velocimetry (PIV) data of a complementary experiment using a limited set of flow conditions. Here, k-ɛ models using conventional (nominal) and calibrated parameters are rigorously validated against PIV data acquired under a much wider variety of JIC cases, including a flight configuration. The results from the simulations using the calibrated model parameters showed considerable improvements over those using the nominal values, even for cases that were not used in defining the calibrated parameters. This improvement is demonstrated using quality metrics defined specifically to test the spatial alignment of the jet core as well as the magnitudes of flow variables on the PIV planes. These results suggest that the calibrated parameters have applicability well outside the specific flow case used in defining them and that, with the right model parameters, RANS results can be improved significantly over those obtained with the nominal values.

More Details

Generalized entropy stable weighted essentially non-oscillatory finite difference scheme in multi-block domains

AIAA Aviation 2019 Forum

Maeng, Jungyeoul B.; Fisher, Travis C.; Carpenter, Mark H.

A new cell-centered third-order entropy stable Weighted Essentially Non-Oscillatory (SS-WENO) finite difference scheme in multi-block domains is developed for compressible flows. This new scheme overcomes shortcomings of the conventional SS-WENO finite difference scheme in multi-domain problems by incorporating non-dissipative Simultaneous Approximation Term (SAT) penalties into the construction of a dual flux. The stencil of the generalized dual flux allows for full stencil biasing across the interface while maintaining the nonlinear stability estimate. We demonstrate the shock capturing improvement across multi-block domain interfaces using the generalized SS-WENO in comparison to the conventional entropy stable high-order finite difference with interface penalty in shock problems. Furthermore, we test the new scheme in multi-dimensional turbulent flow problems to assess the accuracy and stability of the multi-block domain formulation.

More Details

Zirconium metal-organic framework functionalized plasmonic sensor

Proceedings of SPIE - The International Society for Optical Engineering

Briscoe, Jayson B.; Appelhans, Leah A.; Smith, Sean S.; Westlake, Karl W.; Brener, Igal B.; Wright, Jeremy B.

Exposure to chemicals in everyday life is now more prevalent than ever. Air and water pollution can be delivery mechanisms for toxins, carcinogens, and other chemicals of interest (COI). A compact, multiplexed chemical sensor with high responsivity and selectivity is desperately needed. We demonstrate the integration of unique Zr-based metal organic frameworks (MOFs) with a plasmonic transducer to demonstrate a nanoscale optical sensor that is both highly sensitive and selective to the presence of COI. MOFs are a product of coordination chemistry where a central ion is surrounded by a group of ligands, resulting in a thin film with nano- to micro-porosity, ultra-high surface area, and precise structural tunability. These properties make MOFs an ideal candidate for gaseous chemical sensing; however, transduction of a signal that probes changes in MOF films has been difficult. Plasmonic sensors have performed well in many sensing environments, but have had limited success detecting gaseous chemical analytes at low levels. This is due, in part, to the volume of molecules required to interact with the functionalized surface and produce a detectable shift in plasmonic resonance frequency. The fusion of a highly porous thin-film layer with an efficient plasmonic transduction platform is investigated and summarized. We will discuss the integration and characterization of the MOF/plasmonic sensor and summarize our results, which show that, upon exposure to COI, small changes in the optical characteristics of the MOF layer are effectively transduced as shifts in the plasmonic resonance.

More Details

A Hamiltonian Surface-Shaping approach for control system analysis and the design of nonlinear Wave Energy Converters

Journal of Marine Science and Engineering

Wilson, David G.; Darani, Shadi; Abdelkhalik, Ossama; Robinett, Rush D.

The dynamic model of Wave Energy Converters (WECs) may have nonlinearities due to several reasons, such as a nonuniform buoy shape and/or nonlinear power takeoff units. This paper presents the Hamiltonian Surface-Shaping (HSS) approach as a tool for the analysis and design of nonlinear control of WECs. The Hamiltonian represents the stored energy in the system and can be constructed as a function of the WEC's system states: its position and velocity. The Hamiltonian surface is defined by the energy storage, while the system trajectories are constrained to this surface and determined by the power flows of the applied non-conservative forces. The HSS approach presented in this paper can be used as a tool for the design of nonlinear control systems that are guaranteed to be stable. The optimality of the obtained solutions is not addressed in this paper. The case studies presented here cover regular and irregular waves and demonstrate that a nonlinear control system can result in a multiple-fold increase in the harvested energy.
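The energy-balance idea behind the HSS viewpoint can be sketched on a one-degree-of-freedom analogue of a heaving buoy: along any trajectory, the change in the Hamiltonian equals the work done by the applied non-conservative forces. All parameters and the excitation below are hypothetical, not taken from the paper's case studies.

```python
import numpy as np

# One-degree-of-freedom analogue of a heaving buoy (parameters hypothetical).
# Stored energy (Hamiltonian): H(x, v) = 0.5*m*v**2 + 0.5*k*x**2.
# Since dH/dt = F_nc * v, the change in H along a trajectory equals the
# work done by the non-conservative forces (damping plus excitation).
m, k, c = 1.0, 4.0, 0.3

def f_nc(v, t):
    """Non-conservative force: linear damping plus a wave-like excitation."""
    return -c * v + 0.5 * np.sin(2.0 * t)

def accel(x, v, t):
    return (-k * x + f_nc(v, t)) / m

def rk4_step(x, v, t, dt):
    """One classical Runge-Kutta step for the second-order dynamics."""
    k1x, k1v = v, accel(x, v, t)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v, t + dt)
    return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0)

H = lambda x, v: 0.5*m*v**2 + 0.5*k*x**2

x0, v0 = 0.2, 0.0
x, v, t, dt, work = x0, v0, 0.0, 1e-3, 0.0
for _ in range(2000):                         # integrate to t = 2 s
    p0 = f_nc(v, t) * v                       # non-conservative power at step start
    x, v = rk4_step(x, v, t, dt)
    t += dt
    work += 0.5 * (p0 + f_nc(v, t) * v) * dt  # trapezoidal work accumulation

balance_err = abs(H(x, v) - H(x0, v0) - work)
print(balance_err)  # ~0: energy change matches non-conservative work
```

The same bookkeeping is what constrains trajectories to the Hamiltonian surface: conservative forces reshape the surface, while non-conservative power moves the state along it.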

More Details

Enhancement of oil flow in shale nanopores by manipulating friction and viscosity

Physical Chemistry Chemical Physics

Ho, Tuan A.; Wang, Yifeng

Understanding the viscosity and friction of a fluid under nanoconfinement is the key to nanofluidics research. Existing work on nanochannel flow enhancement has been focused on simple systems with only one to two fluids considered such as water flow in carbon nanotubes, and large slip lengths have been found to be the main factor for the massive flow enhancement. In this study, we use molecular dynamics simulations to study the fluid flow of a ternary mixture of octane-carbon dioxide-water confined within two muscovite and kerogen surfaces. The results indicate that, in a muscovite slit, supercritical CO2 (scCO2) and H2O both enhance the flow of octane due to (i) a decrease in the friction of octane with the muscovite wall because of the formation of thin layers of H2O and scCO2 near the surfaces; and (ii) a reduction in the viscosity of octane in nanoconfinement. Water reduces octane viscosity by weakening the interaction of octane with the muscovite surface, while scCO2 reduces octane viscosity by weakening both octane-octane and octane-surface interactions. In a kerogen slit, water does not play any significant role in changing the friction or viscosity of octane. In contrast, scCO2 reduces both the friction and the viscosity of octane, and the enhancement of octane flow is mainly caused by the reduction of viscosity. Our results highlight the importance of multicomponent interactions in nanoscale fluid transport. The results presented here also have direct implications for enhanced oil recovery in unconventional reservoirs.

More Details

Contrasting Advantages of Learning With Random Weights and Backpropagation in Non-Volatile Memory Neural Networks

IEEE Access

Bennett, Christopher H.; Parmar, Vivek; Calvet, Laurie E.; Klein, Jacques O.; Suri, Manan; Marinella, Matthew J.; Querlioz, Damien

Recently, a Cambrian explosion of novel non-volatile memory (NVM) devices known as memristive devices has inspired efforts in building hardware neural networks that learn like the brain. Early experimental prototypes built simple perceptrons from nanosynapses, and recently, fully-connected multi-layer perceptron (MLP) learning systems have been realized. However, while backpropagating learning systems pair well with high-precision computer memories and achieve state-of-the-art performances, this typically comes with a massive energy budget. For future Internet of Things/peripheral use cases, system energy footprint will be a major constraint, and emerging NVM devices may fill the gap by sacrificing high bit precision for lower energy. In this paper, we contrast the well-known MLP approach with the extreme learning machine (ELM) or NoProp approach, which uses a large layer of random weights to improve the separability of high-dimensional tasks, and is usually considered inferior in a software context. However, we find that when taking the device non-linearity into account, NoProp manages to equal the hardware MLP system in terms of accuracy. When also using a sign-based adaptation of the delta rule for energy savings, we find that NoProp can learn effectively with four to six 'bits' of device analog capacity, while MLP requires eight-bit capacity with the same rule. This may allow the requirements for memristive devices to be relaxed in the context of online learning. By comparing the energy footprint of these systems for several candidate nanosynapses and realistic peripherals, we confirm that memristive NoProp systems save energy compared with MLP systems. Lastly, we show that ELM/NoProp systems can achieve better generalization abilities than nanosynaptic MLP systems when paired with pre-processing layers (which do not require backpropagated error). Collectively, these advantages make such systems worthy of consideration in future accelerators or embedded hardware.
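The ELM/NoProp principle, a fixed layer of random weights followed by a trained linear readout, can be sketched on a toy task. The layer size, task, and training rule below are illustrative assumptions, not the paper's benchmark or its sign-based delta rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear task (XOR): a linear readout on the raw inputs cannot solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# ELM/NoProp: a large layer of FIXED random weights lifts the inputs into a
# higher-dimensional space; only the linear readout is trained (no backprop).
H_DIM = 50
W = rng.normal(size=(2, H_DIM))   # random input weights, never updated
b = rng.normal(size=H_DIM)
H = np.tanh(X @ W + b)            # random nonlinear features

# Closed-form readout training by ridge-regularized least squares.
lam = 1e-6
beta = np.linalg.solve(H.T @ H + lam * np.eye(H_DIM), H.T @ y)

pred = (H @ beta > 0.5).astype(float)
print(pred.tolist())  # [0.0, 1.0, 1.0, 0.0]: the random features separate XOR
```

In a memristive realization, the trained readout would be adapted with an incremental rule (such as the sign-based delta rule mentioned above) rather than solved in closed form.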

More Details

U-Slot Patch Antenna Principle and Design Methodology Using Characteristic Mode Analysis and Coupled Mode Theory

IEEE Access

Borchardt, John J.; Lapointe, Tyler C.

Patch antennas incorporating a U-shaped slot are well-known to have relatively large (about 30%) impedance bandwidths. This work uses characteristic mode analysis (CMA) to explain the impedance behavior of a classic U-slot patch geometry in terms of coupled mode theory and shows the relevant modes are in-phase and anti-phase coupled modes whose resonant frequencies are governed by coupled mode theory. Additional analysis shows that one uncoupled resonator is the conventional TM01 patch mode and the other is a lumped LC resonator involving the slot and the probe. An equivalent circuit model for the antenna is given wherein element values are extracted from CMA data and which explicitly demonstrates coupling between these two resonators. The circuit model approximately reproduces the impedance locus of the driven simulation. A design methodology based on coupled mode theory and guided by CMA is presented that allows wideband U-slot patch geometries to be designed quickly and efficiently. The methodology is illustrated through an example.
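The coupled-mode picture invoked above can be illustrated with a generic two-resonator model: coupling between two uncoupled resonances produces in-phase and anti-phase modes whose frequencies split by the coupling strength. The numbers below are hypothetical normalized values, not the circuit elements extracted from CMA in the paper.

```python
import numpy as np

def coupled_modes(w1, w2, kappa):
    """Mode frequencies of two coupled resonators in a generic
    coupled-mode model: eigenvalues of the 2x2 coupling matrix."""
    M = np.array([[w1, kappa], [kappa, w2]])
    return np.sort(np.linalg.eigvalsh(M))   # anti-phase and in-phase modes

# Identical uncoupled resonators split symmetrically by the coupling strength:
w_lo, w_hi = coupled_modes(1.0, 1.0, 0.1)
print(w_lo, w_hi)   # approximately 0.9 and 1.1: a splitting of 2*kappa

# Detuned resonators exhibit an avoided crossing: the gap never closes.
gap = np.ptp(coupled_modes(1.0, 1.05, 0.1))
```

In the U-slot patch, the two uncoupled resonators are the TM01 patch mode and the slot/probe LC resonance; placing their split coupled modes appropriately is what yields the wide impedance bandwidth.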

More Details

Determination of ballistic limit of skin-stringer panels using nonlinear, strain-rate dependent peridynamics

AIAA Scitech 2019 Forum

Cuenca, Fernando; Weckner, Olaf; Silling, Stewart A.; Rassaian, Mostafa

Significant testing is required to design and certify primary aircraft structures subject to High Energy Dynamic Impact (HEDI) events; current work under the NASA Advanced Composites Consortium (ACC) HEDI Project seeks to determine the state-of-the-art of dynamic fracture simulations for composite structures in these events. This paper discusses one of three Progressive Damage Analysis (PDA) methods selected for the second phase of the NASA ACC project: peridynamics, through its implementation in EMU. A brief discussion of peridynamic theory is provided, including the effects of nonlinearity and strain rate dependence of the matrix, followed by a blind prediction and test-analysis correlation for ballistic impact testing performed for configured skin-stringer panels.

More Details

A low-rank solver for the Navier-Stokes equations with uncertain viscosity

SIAM-ASA Journal on Uncertainty Quantification

Lee, Kookjin; Elman, Howard C.; Sousedik, Bedrich

We study an iterative low-rank approximation method for the solution of the steady-state stochastic Navier-Stokes equations with uncertain viscosity. The method is based on linearization schemes using Picard and Newton iterations and stochastic finite element discretizations of the linearized problems. For computing the low-rank approximate solution, we adapt the nonlinear iterations to an inexact and low-rank variant, where the solution of the linear system at each nonlinear step is approximated by a quantity of low rank. This is achieved by using a tensor variant of the GMRES method as a solver for the linear systems. We explore the inexact low-rank nonlinear iteration with a set of benchmark problems, using a model of flow over an obstacle, under various configurations characterizing the statistical features of the uncertain viscosity, and we demonstrate its effectiveness by extensive numerical experiments.
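The compression step at the heart of such inexact low-rank iterations can be sketched on a small matrix equation. Here a hypothetical contractive Stein equation stands in for one linearized solver step; the operator, right-hand side, and rank are illustrative assumptions, not the paper's stochastic Galerkin systems or its tensor GMRES solver.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 8

# Hypothetical contractive operator (spectral norm 0.5) and a rank-1
# right-hand side, standing in for one linearized step of the solver.
A = rng.normal(size=(n, n))
A *= 0.5 / np.linalg.norm(A, 2)
cvec = rng.normal(size=(n, 1))
C = cvec @ cvec.T

def truncate(X, rank):
    """Best rank-`rank` approximation of X via SVD truncation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ (s[:rank, None] * Vt[:rank, :])

# Inexact low-rank fixed-point iteration for the Stein equation
# X = A X A^T + C: every iterate is compressed back to rank r, so memory
# stays O(n*r) if the low-rank factors are stored instead of X itself.
X = np.zeros((n, n))
for _ in range(60):
    X = truncate(A @ X @ A.T + C, r)

rel_residual = np.linalg.norm(X - (A @ X @ A.T + C)) / np.linalg.norm(C)
print(rel_residual)  # small: a rank-8 iterate nearly solves the full equation
```

The truncation error is bounded by the discarded singular values, which decay quickly here because the operator is contractive; the paper's tensor-GMRES variant applies the same idea inside a Krylov solver.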

More Details

Surfactant-Assisted Synthesis of Monodisperse Methylammonium Lead Iodide Perovskite Nanocrystals

Journal of Nanoscience and Nanotechnology

Fan, Hongyou F.; Billstrand, Brian; Bian, Kaifu; Alarid, Leanne

Lead iodide-based perovskites are promising optoelectronic materials, ideal for solar cells. Recently emerged perovskite nanocrystals (NCs) offer further advantages, including a size-tunable band gap, structural stability, and solvent-based processing. Here we report a simple surfactant-assisted two-step synthesis to produce monodisperse PbI2 NCs, which are then converted to methylammonium lead iodide perovskite NCs. Based on electron microscopy characterization, these NCs showed competitive monodispersity. Additionally, combined results from X-ray diffraction patterns, optical absorption, and photoluminescence confirmed the formation of high-quality methylammonium lead iodide perovskite NCs. More importantly, by avoiding the use of hard-to-remove chemicals, the resulting perovskite NCs can be readily integrated into applications, especially solar cells, through versatile solution/colloidal-based methods.

More Details

Low-temperature silicon epitaxy for atomic precision devices

ECS Transactions

Anderson, Evan M.; Katzenmeyer, Aaron M.; Luk, Ting S.; Campbell, DeAnna M.; Marshall, Michael T.; Bussmann, Ezra B.; Ohlhausen, J.A.; Lu, Ping L.; Kotula, Paul G.; Ward, Daniel R.; Lu, Tzu-Ming L.; Misra, Shashank M.

We discuss chemical, structural, and ellipsometry characterization of low temperature epitaxial Si. While low temperature growth is not ideal, we are still able to prepare crystalline Si to cap functional atomic precision devices.

More Details

Toroidal variation of the strike point in DIII-D

Nuclear Materials and Energy

Si, H.; Guo, H.Y.; Covele, B.M.; Leonard, A.W.; Watkins, J.G.; Thomas, D.M.

We report measurements of a ±5 mm toroidal variation of the outer strike point radial position using an array of three identical Langmuir probes distributed at 90° intervals around the torus (90°, 180°, 270°). The strike point radial location is determined from the profiles of floating potential (Vf) measured by the three 6 mm diameter domed Langmuir probes as the strike point is swept radially on a horizontal tile surface just outside of the upper small angle slot (SAS1) divertor. Based on the three probe measurements, the strike point variation is consistent with previous error field measurements by Schaffer [1,2] and estimates by Luxon [3], which indicated the strike point error could appear as an n = 1 radial variation of 4.5 mm at the outer midplane and thus could be effectively described with a three-point measurement. The results are also consistent with field line tracing calculations using the MAFOT code [4]. The small angle slot (SAS1) divertor performance is particularly sensitive to a misalignment with the divertor plasma, since enhanced neutral confinement and recycling in the slot and distribution of neutrals along the slot surfaces are important for achieving divertor detachment at the lowest possible core plasma separatrix density. These strike point measurements are discussed with regard to the slot divertor alignment.
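Why three probes suffice for an n = 1 variation can be shown directly: a once-per-turn sinusoid r(phi) = r0 + a·cos(phi) + b·sin(phi) has three unknowns, so three measurements at 90° intervals determine it exactly. The probe readings below are synthetic, chosen only to mimic a 4.5 mm amplitude; they are not the DIII-D data.

```python
import numpy as np

# n = 1 (once-per-turn) variation: r(phi) = r0 + a*cos(phi) + b*sin(phi).
# Three unknowns (r0, a, b) and three probes give an exactly determined system.
phis = np.deg2rad([90.0, 180.0, 270.0])

# Synthetic "measurements" (mm) with a 4.5 mm amplitude and arbitrary phase.
r0_true, A_true, phi0 = 100.0, 4.5, np.deg2rad(30.0)
meas = r0_true + A_true * np.cos(phis - phi0)

M = np.column_stack([np.ones(3), np.cos(phis), np.sin(phis)])
r0, a, b = np.linalg.solve(M, meas)             # solve the 3x3 linear system

amp = np.hypot(a, b)                            # recovered n = 1 amplitude
phase = np.rad2deg(np.arctan2(b, a))            # recovered phase (degrees)
print(round(r0, 3), round(amp, 3), round(phase, 1))  # 100.0 4.5 30.0
```

Any component with toroidal mode number n > 1 would alias onto this fit, which is why the interpretation rests on the error field being predominantly n = 1.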

More Details

Degradation processes and mechanisms of PV wires and connectors

Durability and Reliability of Polymers and Other Materials in Photovoltaic Modules

Lokanath, Sumanth V.; Skarbek, Bryan; Schindelholz, Eric J.

Photovoltaic (PV) power plants and their constituent components, by virtue of their application, are exposed to some of the harshest outdoor terrestrial environments. Most equipment is subject directly to the environment and myriad stresses (micro and macro environment). Other aspects including local site conditions, construction variability and quality, and maintenance practices also influence the likelihood of such hazards. Many discrete components, including PV modules, wires, connectors, wire management devices, combiner boxes, protection devices, inverters, and transformers, make up the PV generation system. While there are abundant data that illustrate PV modules and PV inverters to be the major contributors of PV system failures, the mentioned data illustrate the importance of minimizing failures in the often ignored components such as PV connectors, PV wires (both above and below ground), wire splices, fuses, fuse holders, fuse holder enclosures, and wire management devices. With the exception of PV fuses, these components predominantly use polymeric materials. Therefore, it is crucial to understand the typical materials used in components, degradation processes and mechanisms leading to component failure, and their impact on system performance or failure. This chapter further provides some practical considerations, approaches, and methods in addressing the problems with practical solutions in the design to assure the performance of the PV plant over the intended design lifetime.

More Details

Generating viable data to accurately quantify the performance of SHM systems

Structural Health Monitoring 2019: Enabling Intelligent Life-Cycle Health Management for Industry Internet of Things (IIOT) - Proceedings of the 12th International Workshop on Structural Health Monitoring

Roach, D.; Swindell, Paul

Reliable structural health monitoring (SHM) systems can automatically process data, assess structural condition and signal the need for human intervention. There is a significant need for formal SHM technology validation and quantitative performance assessment processes to uniformly and comprehensively support the evolution and adoption of SHM systems. In recent years, the SHM community has made significant advances in its efforts to evolve statistical methods for analyzing data from in-situ sensors. Several statistical approaches have been demonstrated using real data from multiple SHM technologies to produce Probability of Detection (POD) performance measures. Furthermore, limited comparisons of these methods - utilizing different simplification assumptions and data types - have shown them to produce similar POD values. Given these encouraging results, it is important to understand the circumstances under which the data was acquired. Thus far, the statistical analyses have assumed the viability of the data outright and focused on the performance quantification process once acceptable data has been compiled. This paper will address the array of parameters that must be considered when conducting tests to acquire representative SHM data. For some SHM applications, it may not be possible to simulate all environments in one single test. All relevant parameters must be identified and considered by properly merging results from multiple tests. Laboratory tests, for example, may have separate fatigue and environmental response components. Flight tests, which will likely not include statistically-relevant damage detection opportunities, will still play an important role in assessing overall SHM system performance under an aircraft operator's control. One statistical method, the One-Sided Tolerance Interval (OSTI) approach, will be discussed along with the test methods used to acquire the data. 
Finally, prospects for streamlining the deployment of SHM solutions will be considered by comparing SHM data needs during what is now an introductory phase of SHM usage with future data needs after a substantial database of SHM data and usage history has been compiled.

More Details

Hypersonic wake measurements behind a slender cone using fleet velocimetry

AIAA Aviation 2019 Forum

Zhang, Yibin Z.; Richardson, Daniel R.; Beresh, Steven J.; Casper, Katya M.; Soehnel, Melissa M.; Henfling, John F.; Spillers, Russell W.

Femtosecond Laser Electronic Excitation Tagging (FLEET) is used to measure velocity flowfields in the wake of a sharp 7° half-angle cone in nitrogen at Mach 8, over freestream Reynolds numbers from 4.3×10^6/m to 13.8×10^6/m. Flow tagging reveals expected wake features such as the separation shear layer and two-dimensional velocity components. Frequency-tripled FLEET has a longer lifetime and is tenfold more energy efficient compared to 800 nm FLEET. Additionally, FLEET lines written with 267 nm are three times longer and 25% thinner than those written with 800 nm at a 1 µs delay. Two gated detection systems are compared. While the PIMAX 3 ICCD offers variable gating and fewer imaging artifacts than a LaVision IRO coupled to a Photron SA-Z, its slow readout speed renders it ineffective for capturing hypersonic velocity fluctuations. FLEET can be detected up to 25 µs following excitation within 10 mm downstream of the model base, but delays greater than 4 µs show deteriorated signal-to-noise ratios and line-fit uncertainties greater than 10%. In a hypersonic nitrogen flow, exposures of just several hundred nanoseconds are long enough to produce saturated signals and/or increase the line thickness, thereby adding to measurement uncertainty. Velocities calculated between the first two delays offer the lowest uncertainty (less than 3% of the mean velocity).

More Details

Dynamic “what-if” modeling simulation

Chemical Engineering Transactions

Moyer, Eric

Dynamic modeling and simulation will be used to provide an understanding of the interactions between various complex systems. This dynamic model is based on an enterprise architecture framework whereby complex, dynamic and non-linear interactions, particularly those involving the human, can be understood and analyzed. Our modeling approach will include a synthesis of top-down and bottom-up strategies. The top-down portion will analyze high-level, mandated guidance and trace its tenets down to individually identifiable activities at the worker-level. We will then model these activities through the provision of a discrete event task model emphasizing research-based human performance and cognitive workload principles (bottom-up). These principles are based on accepted theories of the interaction between cognitive workload and human error. Synthesizing these two approaches will demonstrate both the impact and effect of high-level mandated activities and aid analysts in their understanding of how, why and when these impacts help or possibly hinder humans at the worker level. Benefits of using this model, namely the ability to predict “what if” scenarios in real time, will be discussed. The model will be tested across multiple domains to demonstrate the potential modeling approach and its application in future hazard analyses.

More Details

Mediating Data Center Storage Diversity in HPC Applications with FAODEL

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Widener, Patrick W.; Ulmer, Craig D.; Levy, Scott L.; Kordenbrock, Todd H.; Templet, Gary J.

Composition of computational science applications into both ad hoc pipelines for analysis of collected or generated data and into well-defined and repeatable workflows is becoming increasingly popular. Meanwhile, dedicated high performance computing storage environments are rapidly becoming more diverse, with both significant amounts of non-volatile memory storage and mature parallel file systems available. At the same time, computational science codes are being coupled to data analysis tools which are not filesystem-oriented. In this paper, we describe how the FAODEL data management service can expose different available data storage options and mediate among them in both application- and FAODEL-directed ways. These capabilities allow applications to exploit their knowledge of the different types of data they may exchange during a workflow execution, and also provide FAODEL with mechanisms to proactively tune data storage behavior when appropriate. We describe the implementation of these capabilities in FAODEL and how they are used by applications, and present preliminary performance results demonstrating the potential benefits of our approach.

More Details

Making bread: Biomimetic strategies for artificial intelligence now and in the future

Frontiers in Neuroscience

Krichmar, Jeffrey L.; Severa, William M.; Khan, Muhammad S.; Olds, James L.

The Artificial Intelligence (AI) revolution foretold of during the 1960s is well underway in the second decade of the twenty-first century. Its period of phenomenal growth likely lies ahead. AI-operated machines and technologies will extend the reach of Homo sapiens far beyond the biological constraints imposed by evolution: outwards further into deep space, as well as inwards into the nano-world of DNA sequences and relevant medical applications. And yet, we believe, there are crucial lessons that biology can offer that will enable a prosperous future for AI. For machines in general, and for AIs especially, operating over extended periods or in extreme environments will require energy usage orders of magnitude more efficient than exists today. In many operational environments, energy sources will be constrained. The AI's design and function may be dependent upon the type of energy source, as well as its availability and accessibility. Any plans for AI devices operating in a challenging environment must begin with the question of how they are powered, where fuel is located, how energy is stored and made available to the machine, and how long the machine can operate on specific energy units. While one of the key advantages of AI use is to reduce the dimensionality of a complex problem, the fact remains that some energy is required for functionality. Hence, the materials and technologies that provide the needed energy represent a critical challenge toward future use scenarios of AI and should be integrated into their design. Here we look to the brain and other aspects of biology as inspiration for Biomimetic Research for Energy-efficient AI Designs (BREAD).

More Details

Physics–Dynamics Coupling with Element-Based High-Order Galerkin Methods: Quasi-Equal-Area Physics Grid

Monthly Weather Review

Herrington, Adam R.; Lauritzen, Peter H.; Taylor, Mark A.; Goldhaber, Steve; Eaton; Reed; Ullrich, Paul A.

Atmospheric modeling with element-based high-order Galerkin methods presents a unique challenge to the conventional physics–dynamics coupling paradigm, due to the highly irregular distribution of nodes within an element and the distinct numerical characteristics of the Galerkin method. The conventional coupling procedure is to evaluate the physical parameterizations (physics) on the dynamical core grid. Evaluating the physics at the nodal points exacerbates numerical noise from the Galerkin method, enabling and amplifying local extrema at element boundaries. Grid imprinting may be substantially reduced through the introduction of an entirely separate, approximately isotropic finite-volume grid for evaluating the physics forcing. Integration of the spectral basis over the control volumes provides an area-average state to the physics, which is more representative of the state in the vicinity of the nodal points rather than the nodal point itself and is more consistent with the notion of a “large-scale state” required by conventional physics packages. This study documents the implementation of a quasi-equal-area physics grid into NCAR’s Community Atmosphere Model Spectral Element and is shown to be effective at mitigating grid imprinting in the solution. The physics grid is also appropriate for coupling to other components within the Community Earth System Model, since the coupler requires component fluxes to be defined on a finite-volume grid, and one can be certain that the fluxes on the physics grid are, indeed, volume averaged.

More Details

Novel ground test applications of high-frequency pressure sensitive paint

AIAA Aviation 2019 Forum

Casper, Katya M.; Spitzer, Seth M.; Glenn, Nathan; Schultz, Ryan S.

Two novel and challenging applications of high-frequency pressure-sensitive paint were attempted for ground testing at Sandia National Labs. Blast tube testing, typically used to assess the response of a system to an incident blast wave, was the first application. The paint was tested to show feasibility for supplementing traditional pressure instrumentation in the harsh outdoor environment. The primary challenge was the background illumination from sunlight and time-varying light contamination from the associated explosion. Optimal results were obtained in pre-dawn hours when sunlight contamination was absent; additional corrections must be made for the intensity of the explosive illumination. A separate application of the paint for acoustic testing was also explored to provide the spatial distribution of loading on systems that do not contain pressure instrumentation. In that case, the challenge was the extremely low level of pressure variations that the paint must resolve (120 dB). Initial testing indicated the paint technique merits further development for a larger scale reverberant chamber test with higher loading levels near 140 dB.

More Details

CAD DEFEATURING USING MACHINE LEARNING

Proceedings of the 28th International Meshing Roundtable, IMR 2019

Owen, Steven J.; Shead, Timothy M.; Martin, Shawn

We describe new machine-learning-based methods to defeature CAD models for tetrahedral meshing. Using machine learning predictions of mesh quality for geometric features of a CAD model prior to meshing, we can identify potential problem areas and improve meshing outcomes by presenting a prioritized list of suggested geometric operations to users. Our machine learning models are trained using a combination of geometric and topological features from the CAD model and local quality metrics for ground truth. We demonstrate a proof-of-concept implementation of the resulting workflow using Sandia's Cubit Geometry and Meshing Toolkit.

More Details

Making OpenMP ready for C++ executors

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Scogland, Thomas R.W.; Sunderland, Daniel S.; Olivier, Stephen L.; Hollman, David S.; Evans, Noah; De Supinski, Bronis R.

For at least the last 20 years, many have tried to create a general resource management system to support interoperability across various concurrent libraries. The previous strategies all suffered from additional toolchain requirements and/or use of a shared programming model that assumed it owned/controlled access to all resources available to the program. None of these techniques has achieved widespread adoption. The ubiquity of OpenMP coupled with C++ developing a standard way to describe many different concurrent paradigms (C++23 executors) would allow OpenMP to assume the role of a general resource manager without requiring user code written directly in OpenMP. With a few added features such as the ability to use otherwise idle threads to execute tasks and to specify a task “width”, many interesting concurrent frameworks could be developed in native OpenMP and achieve high performance. Further, one could create concrete C++ OpenMP executors that enable support for general C++ executor-based codes, which would allow Fortran, C, and C++ codes to use the same underlying concurrent framework when expressed as native OpenMP or using language-specific features. Effectively, OpenMP would become the de facto solution for a problem that has long plagued the HPC community.

More Details

The upcoming storm: The implications of increasing core count on scalable system software

Advances in Parallel Computing

Dosanjh, Matthew D.; Grant, Ryan E.; Hjelm, Nathan; Levy, Scott L.; Schonbein, William W.

As clock speeds have stagnated, the number of cores in a node has been drastically increased to improve processor throughput. Most scalable system software was designed and developed for single-threaded environments. Multithreaded environments are becoming increasingly prominent as application developers optimize their codes to leverage the full performance of the processor; however, these environments are incompatible with a number of assumptions that have driven scalable system software development. This paper will highlight a case study of this mismatch focusing on MPI message matching. MPI message matching has been designed and optimized for traditional serial execution. The reduced determinism in the order of MPI calls can significantly reduce the performance of MPI message matching, potentially exceeding the time-per-iteration targets of many applications. Different proposed techniques attempt to address these issues and enable multithreaded MPI usage. These approaches highlight a number of tradeoffs that make adapting MPI message matching complex. This case study and its proposed solutions highlight a number of general concepts that need to be leveraged in the design of next-generation scalable system software.

More Details

Options for modifying existing and future DPCs for disposal

Transactions of the American Nuclear Society

Hardin, Ernest H.; Alsaed, Abdelhalim; Damjanac, Branko

The overall DOE R&D strategy for DPC disposition includes a significant effort directed toward consequence screening to determine if engineered solutions discussed above are needed. Work to develop injectable filler technology will continue. The disposal criticality control features approach, and zone loading, have not been investigated since the EPRI studies in 2008-2009. The utility of such measures would be maximized by implementing them soon. This ongoing study is motivated by a comparative cost analysis [7] that showed the potential cost savings of the control rods/blades approach, compared to repackaging (comparing the two most technically mature options for DPC disposition and retaining the low-probability criticality screening objective), would be approximately $2 million per DPC.

More Details

Software for sparse tensor decomposition on emerging computing architectures

SIAM Journal on Scientific Computing

Phipps, Eric T.; Kolda, Tamara G.

In this paper, we develop software for decomposing sparse tensors that is portable to and performant on a variety of multicore, manycore, and GPU computing architectures. The result is a single code whose performance matches optimized architecture-specific implementations. The key to a portable approach is to determine multiple levels of parallelism that can be mapped in different ways to different architectures, and we explain how to do this for the matricized tensor times Khatri-Rao product (MTTKRP), which is the key kernel in canonical polyadic tensor decomposition. Our implementation leverages the Kokkos framework, which enables a single code to achieve high performance across multiple architectures that differ in how they approach fine-grained parallelism. We also introduce a new construct for portable thread-local arrays, which we call compile-time polymorphic arrays. Not only are the specifics of our approaches and implementation interesting for tuning tensor computations, but they also provide a roadmap for developing other portable high-performance codes. As a last step in optimizing performance, we modify the MTTKRP algorithm itself to do a permuted traversal of tensor nonzeros to reduce atomic-write contention. We test the performance of our implementation on 16- and 68-core Intel CPUs and the K80 and P100 NVIDIA GPUs, showing that we are competitive with state-of-the-art architecture-specific codes while having the advantage of being able to run on a variety of architectures.
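
For a third-order tensor stored in coordinate (COO) format, the MTTKRP kernel named above computes, for mode 0, M[i,:] += x · (B[j,:] ∗ C[k,:]) over all nonzeros. A serial reference sketch of that kernel (the paper's Kokkos implementation parallelizes this loop and permutes the nonzero traversal to reduce atomic-write contention):

```python
import numpy as np

def mttkrp_mode0(coords, vals, B, C, I):
    """MTTKRP for mode 0 of a sparse 3-way tensor in COO form:
    M[i, :] += x * (B[j, :] * C[k, :]) for each nonzero x at (i, j, k)."""
    R = B.shape[1]
    M = np.zeros((I, R))
    for (i, j, k), x in zip(coords, vals):
        # Elementwise product of factor rows = one row of the Khatri-Rao product.
        M[i, :] += x * B[j, :] * C[k, :]
    return M

# Tiny example: a 2x2x2 tensor with two nonzeros and rank-1 factor matrices.
coords = [(0, 0, 1), (1, 1, 0)]
vals = [2.0, 3.0]
B = np.array([[1.0], [4.0]])
C = np.array([[5.0], [6.0]])
M = mttkrp_mode0(coords, vals, B, C, I=2)
```

In a parallel version, two nonzeros sharing the same mode-0 index i update the same row of M, which is why the paper's permuted traversal and atomic-write strategy matter.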

More Details

Effect of mineral orientation on roughness and toughness of mode I fractures

53rd U.S. Rock Mechanics/Geomechanics Symposium

Jiang, Liyang; Yoon, Hongkyu Y.; Bobet, Antonio; Pyrak-Nolte, Laura J.

Anisotropy in the mechanical properties of rock is often attributed to layering or mineral texture. Here, results from a study on mode I fracturing are presented that examine the effect of layering and mineral orientation on fracture toughness and roughness. Additively manufactured gypsum rock was created through 3D printing with bassanite/gypsum. The 3D printing process enabled control of the orientation of the mineral texture within the printed layers. Three-point bending (3PB) experiments were performed on the 3D printed rock with a central notch. Unlike cast gypsum, the 3D-printed gypsum exhibited ductile post-peak behavior in all cases. The experiments also showed that the mode I fracture toughness and surface roughness of the induced fracture depended on both the orientation of the bedding relative to the load and the orientation of the mineral texture relative to the layering. This study found that mineral texture orientation, chemical bond strength and layer orientation play dominant roles in the formation of mode I fractures. The uniqueness of the induced fracture roughness is a potential method for the assessment of bonding strengths in rock.

More Details

3D strain-induced superconductivity in La2CuO4+δ using a simple vertically aligned nanocomposite approach

Science Advances

Lu, Ping L.

A long-term goal for superconductors is to increase the superconducting transition temperature, TC. In cuprates, TC depends strongly on the out-of-plane Cu-apical oxygen distance and the in-plane Cu-O distance, but there has been little attention paid to tuning them independently. Here, in simply grown, self-assembled, vertically aligned nanocomposite thin films of La2CuO4+δ + LaCuO3, by strongly increasing out-of-plane distances without reducing in-plane distances (three-dimensional strain engineering), we achieve superconductivity up to 50 K in the vertical interface regions, spaced ∼50 nm apart. No additional process to supply excess oxygen, e.g., by ozone or high-pressure oxygen annealing, was required, as is normally the case for plain La2CuO4+δ films. Our proof-of-concept work represents an entirely new approach to increasing TC in cuprates or other superconductors.

More Details

Hybrid plasmonic Au-TiN vertically aligned nanocomposites: A nanoscale platform towards tunable optical sensing

Nanoscale Advances

Lu, Ping L.

Tunable plasmonic structures at the nanometer scale present enormous opportunities for various photonic devices. In this work, we present a hybrid plasmonic thin film platform: i.e., a vertically aligned Au nanopillar array grown inside a TiN matrix with controllable Au pillar density. Compared to single-phase plasmonic materials, the presented tunable hybrid nanostructures attain optical flexibility including gradual tuning and anisotropic behavior of the complex dielectric function, resonant peak shifting and change of surface plasmon resonances (SPRs) in the UV-visible range, all confirmed by numerical simulations. The tailorable hybrid platform also demonstrates enhanced surface plasmon Raman response for Fourier-transform infrared spectroscopy (FTIR) and photoluminescence (PL) measurements, and presents great potential as a designable hybrid platform for tunable optical-based chemical sensing applications.

More Details

Enabling Resilience in Asynchronous Many-Task Programming Models

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Paul, Sri R.; Hayashi, Akihiro; Slattengren, Nicole S.; Kolla, Hemanth K.; Whitlock, Matthew J.; Bak, Seonmyeong; Teranishi, Keita T.; Mayo, Jackson M.; Sarkar, Vivek

Resilience is an imminent issue for next-generation platforms due to projected increases in soft/transient failures as part of the inherent trade-offs among performance, energy, and costs in system design. In this paper, we introduce a comprehensive approach to enabling application-level resilience in Asynchronous Many-Task (AMT) programming models with a focus on remedying Silent Data Corruption (SDC) that can often go undetected by the hardware and OS. Our approach makes it possible for the application programmer to declaratively express resilience attributes with minimal code changes, and to delegate the complexity of efficiently supporting resilience to our runtime system. We have created a prototype implementation of our approach as an extension to the Habanero C/C++ library (HClib), where different resilience techniques including task replay, task replication, algorithm-based fault tolerance (ABFT), and checkpointing are available. Our experimental results show that task replay incurs lower overhead than task replication when an appropriate error checking function is provided. Further, task replay matches the low overhead of ABFT. Our results also demonstrate the ability to combine different resilience schemes. To evaluate the effectiveness of our resilience mechanisms in the presence of errors, we injected synthetic errors at different error rates (1.0% and 10.0%) and found a modest increase in execution times. In summary, the results show that our approach supports efficient and scalable recovery, and that our approach can be used to influence the design of future AMT programming models and runtime systems that aim to integrate first-class support for user-level resilience.
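
The task-replay technique mentioned above can be illustrated in a few lines: re-execute a side-effect-free task until a user-supplied error-check function accepts its result. A minimal sketch with a simulated silent data corruption (this is not the HClib API, just the idea):

```python
def run_resilient(task, check, max_replays=3):
    """Task replay: re-run a side-effect-free task until the user-supplied
    error-check function accepts the result, up to a bounded replay budget."""
    for _ in range(max_replays + 1):
        result = task()
        if check(result):
            return result
    raise RuntimeError("task failed all replays")

# Simulated SDC: the first execution returns a corrupted value,
# the replay returns the correct one.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return 41 if calls["n"] == 1 else 42

result = run_resilient(flaky, check=lambda r: r == 42)
```

The quality of the check function is what drives the paper's observation that replay beats replication: replication pays the re-execution cost every time, replay only when the check fails.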

More Details

Calibration strategies and modeling approaches for predicting load-displacement behavior and failure for multiaxial loadings in threaded fasteners

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Mersch, J.P.; Smith, J.A.; Orient, George E.; Grimmer, Peter W.; Gearhart, Jhana S.

Multiple fastener reduced-order models and fitting strategies are used on a multiaxial dataset and these models are further evaluated using a high-fidelity analysis model to demonstrate how well these strategies predict load-displacement behavior and failure. Two common reduced-order modeling approaches, the plug and spot weld, are calibrated, assessed, and compared to a more intensive approach – a “two-block” plug calibrated to multiple datasets. An optimization analysis workflow leveraging a genetic algorithm was exercised on a set of quasistatic test data where fasteners were pulled at angles from 0° to 90° in 15° increments to obtain material parameters for a fastener model that best capture the load-displacement behavior of the chosen datasets. The one-block plug is calibrated just to the tension data, the spot weld is calibrated to the tension (0°) and shear (90°), and the two-block plug is calibrated to all data available (0°-90°). These calibrations are further assessed by incorporating these models and modeling approaches into a high-fidelity analysis model of the test setup and comparing the load-displacement predictions to the raw test data.

More Details

INVESTIGATING THE ELECTRICAL RESISTANCE TECHNIQUE FOR STRUCTURAL ALLOY CORROSION MONITORING WITHIN SUPERCRITICAL CO2 POWER CYCLES

Joint EPRI-123HiMAT International Conference on Advances in High-Temperature Materials - Proceedings from EPRI's 9th International Conference on Advances in Materials Technology for Fossil Power Plants and the 2nd International 123HiMAT Conference on High-Temperature Materials

Walker, Matthew W.; Chames, Jeffery M.; Cebrian, Javier C.; Vega, Heidy V.

Structural alloy corrosion is a major concern for the design and operation of supercritical carbon dioxide (sCO2) power cycles. Looking towards the future of sCO2 system development, the ability to measure real-time alloy corrosion would be invaluable to informing operation and maintenance of these systems. Sandia has recently explored methods available for in-situ alloy corrosion monitoring. Electrical resistance (ER) was chosen for initial tests due to its operational simplicity and commercial availability. A series of long-duration (>1000 hours) experiments has recently been completed at a range of temperatures (400-700 °C) using ER probes made from four important structural alloys (C1010 carbon steel, 410ss, 304L, 316L) being considered for sCO2 systems. Results from these tests are presented, including correlations between the probe-measured corrosion rate and that of witness coupons of the same alloys.
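
The ER principle behind these probes: for a sensing element of fixed length and width, resistance scales inversely with thickness, so temperature-compensated resistance growth maps directly to metal loss. A sketch with hypothetical probe numbers (all values below are illustrative, not from the tests):

```python
# Electrical-resistance (ER) probe principle: for a rectangular sensing element,
# R = rho * L / (w * d), so as corrosion thins the element (d decreases), R rises.
d0 = 250.0      # initial element thickness, micrometers (hypothetical)
R0 = 10.00      # initial resistance, milliohms (hypothetical)
R = 10.40       # resistance after exposure (temperature-compensated, hypothetical)

# Since d = d0 * R0 / R, the cumulative metal loss is:
metal_loss = d0 * (1.0 - R0 / R)   # micrometers of thickness lost
```

Dividing successive metal-loss readings by elapsed time yields the real-time corrosion rate that is compared against the witness coupons.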

More Details

Effects of spatial energy distribution on defects and fracture of LPBF 316L stainless steel

Solid Freeform Fabrication 2019: Proceedings of the 30th Annual International Solid Freeform Fabrication Symposium - An Additive Manufacturing Conference, SFF 2019

Jost, Elliott W.; Moore, David G.; Robbins, Aron R.; Miers, John C.; Saldana, Christopher

Measures of energy input and spatial energy distribution during laser powder bed fusion additive manufacturing have significant implications for the build quality of parts, specifically relating to formation of internal defects during processing. In this study, scanning electron microscopy was leveraged to investigate the effects of these distributions on the mechanical performance of parts manufactured using laser powder bed fusion as seen through the fracture surfaces resulting from uniaxial tensile testing. Variation in spatial energy density is shown to manifest in differences in defect morphology and mechanical properties. Computed tomography and scanning electron microscopy inspections revealed significant evidence of porosity acting as a failure mechanism in printed parts. These results establish an improved understanding of the effects of spatial energy distributions in laser powder bed fusion on mechanical performance.

More Details

Integrated optical probing of the thermal dynamics of wide bandgap power electronics

ASME 2019 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2019

Lundh, James S.; Song, Yiwen; Chatterjee, Bikram; Baca, A.G.; Kaplar, Robert K.; Allerman, A.A.; Armstrong, Andrew A.; Kim, Hyungtak; Choi, Sukwon

Researchers have been extensively studying wide-bandgap (WBG) semiconductor materials such as gallium nitride (GaN) with an aim to accomplish an improvement in size, weight, and power (SWaP) of power electronics beyond current devices based on silicon (Si). However, the increased operating power densities and reduced areal footprints of WBG device technologies result in significant levels of self-heating that can ultimately restrict device operation through performance degradation, reliability issues, and failure. Typically, self-heating in WBG devices is studied using a single measurement technique while operating the device under steady-state direct current (DC) measurement conditions. However, for switching applications, this steady-state thermal characterization may lose significance since high power dissipation occurs during fast transient switching events. Therefore, it can be useful to probe the WBG devices under transient measurement conditions in order to better understand the thermal dynamics of these systems in practical applications. In this work, the transient thermal dynamics of an AlGaN/GaN high electron mobility transistor (HEMT) were studied using thermoreflectance thermal imaging and Raman thermometry. Also, the proper use of iterative pulsed measurement schemes such as thermoreflectance thermal imaging to determine the steady-state operating temperature of devices is discussed. These studies are followed by transient thermal characterization to accurately probe the self-heating from steady-state down to sub-microsecond pulse conditions using both thermoreflectance thermal imaging and Raman thermometry with temporal resolutions down to 15 ns.

More Details

Evaluation of chlorine booster station placement for water security

Computer Aided Chemical Engineering

Seth, Arpan; Hackebeil, Gaberiel A.; Haxton, Terranna; Murray, Regan; Laird, Carl D.; Klise, Katherine A.

Drinking water utilities use booster stations to maintain chlorine residuals throughout water distribution systems. Booster stations could also be used as part of an emergency response plan to minimize health risks in the event of an unintentional or malicious contamination incident. The benefit of booster stations for emergency response depends on several factors, including the reaction between chlorine and an unknown contaminant species, the fate and transport of the contaminant in the water distribution system, and the time delay between detection and initiation of boosted levels of chlorine. This paper takes these aspects into account and proposes a mixed-integer linear program formulation for optimizing the placement of booster stations for emergency response. A case study is used to explore the ability of optimally placed booster stations to reduce the impact of contamination in water distribution systems.
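
The paper formulates placement as a mixed-integer linear program; the objective can be illustrated with a brute-force miniature (all data hypothetical): choose k booster nodes so that, across contamination scenarios, the best available booster minimizes total residual health impact.

```python
from itertools import combinations

# Hypothetical data: for each contamination scenario, the residual health
# impact if a given candidate booster station responds.
scenarios = ["s1", "s2", "s3"]
impact = {
    "s1": {"A": 10, "B": 40, "C": 25},
    "s2": {"A": 50, "B": 5,  "C": 30},
    "s3": {"A": 35, "B": 45, "C": 8},
}
baseline = {"s1": 60, "s2": 70, "s3": 50}  # impact with no boosting at all
k = 2  # number of stations the utility can afford

def total_impact(placement):
    # Each scenario is mitigated by the best booster in the placement,
    # capped at the unmitigated baseline impact.
    return sum(min(min(impact[s][b] for b in placement), baseline[s])
               for s in scenarios)

# Exhaustive search over candidate subsets; the MILP scales this idea up.
best = min(combinations(impact["s1"], k), key=total_impact)
```

A real network has far too many candidate nodes and scenarios for enumeration, which is what motivates the MILP formulation in the paper.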

More Details

Phenomenological versus random data augmentation for hyperspectral target detection

Proceedings of SPIE - The International Society for Optical Engineering

Zollweg, Joshua D.; LaCasse, Charles F.; Smith, Braden J.

In this effort, random noise data augmentation is compared to phenomenologically-inspired data augmentation for a target detection task, evaluated on the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model "MegaScene" simulated hyperspectral dataset. Random data augmentation is commonly used in the machine learning literature to improve model generalization. While random perturbations of an input may work well in certain fields such as image classification, they can be unhelpful in other applications such as hyperspectral target detection. For instance, random noise augmentation may not be beneficial when the applied noise distribution does not match underlying physical signal processes or sensor noise. In the context of a low-noise sensor, augmentation mimicking material mixing and other practical spectral modulations is likely to be more effective when used to train a target detector. It is therefore important to utilize a data augmentation strategy that emulates the natural variability in observed spectra. To validate this claim, a small fully connected neural network architecture is trained using an ideal hemispheric reflectance materials dataset as a trivial baseline. That dataset is then augmented using Gaussian random noise and the model is retrained and again applied to MegaScene. Finally, augmentation is instead performed using phenomenological insight and used to retrain and reevaluate the model. In this work, the phenomenological augmentation implements only simple and commonly encountered spectral permutations, namely linear mixing and shadowing. Comparison is made between the augmented models and the baseline model in terms of low constant false alarm rate (CFAR) performance.
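
The two phenomenological permutations named above, linear mixing and shadowing, amount to simple algebra on reflectance spectra. A minimal sketch with made-up spectra (the spectra, abundance range, and shading range are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(spectrum, library, n=4):
    """Phenomenological augmentation of a reflectance spectrum: random linear
    mixing with background spectra plus a random shadowing (brightness) factor."""
    out = []
    for _ in range(n):
        bg = library[rng.integers(len(library))]
        alpha = rng.uniform(0.5, 1.0)   # target abundance in the mixed pixel
        shade = rng.uniform(0.2, 1.0)   # illumination/shadowing scale
        out.append(shade * (alpha * spectrum + (1 - alpha) * bg))
    return np.array(out)

# Made-up 3-band target and background library spectra.
target = np.array([0.2, 0.6, 0.4])
library = np.array([[0.1, 0.1, 0.9], [0.8, 0.3, 0.2]])
samples = augment(target, library)
```

Contrast with random-noise augmentation, which would add i.i.d. perturbations per band regardless of whether any physical process could produce them.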

More Details

Evaluation of the transportable detonation chamber for processing recovered munitions

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Tribble, Megan K.; Stofleth, Jerome H.

Sandia National Laboratories was tasked by the United States Army Recovered Chemical Materiel Directorate with evaluating the fitness of the Transportable Detonation Chamber for use in demilitarization of chemical munitions. The chamber was instrumented with strain, pressure, and acceleration sensors to study its behavior during explosive tests ranging from 1.25 to 20 lb of explosive charge weight. The structural response of the chamber and techniques recommended by the manufacturer (use of water bags and sand-filled walls) were assessed. Through this testing, it was found that the two techniques did not significantly affect the chamber's response. It was also discovered that the structural integrity of the chamber (and, therefore, its suitability for use with chemical agents) was compromised, as some welds failed. Sandia does not recommend using this vessel for chemical munition demilitarization. This chamber is suitable, however, for demilitarization of conventional munitions, in which fragments and overpressure are the primary concern.

More Details

Performance of a tiled array compressive sensing spectrometer

Proceedings of SPIE - The International Society for Optical Engineering

Shields, Eric A.

A Compressive Sensing Snapshot Imaging Spectrometer (CSSIS) and its performance are described. The number of spectral bins recorded in a traditional tiled array spectrometer is limited to the number of filters. By properly designing the filters and leveraging compressive sensing techniques, more spectral bins can be reconstructed. Simulation results indicate that closely spaced spectral sources that are not resolved with a traditional spectrometer can be resolved with the CSSIS. The nature of the filters used in the CSSIS enables higher signal-to-noise ratios in measured signals. The filters are spectrally broad relative to narrow-line filters used in traditional systems, and hence more light reaches the imaging sensor. This enables the CSSIS to outperform a traditional system in a classification task in the presence of noise. Simulation results on classifying in the compressive domain, which obviates the need for the computationally intensive spectral reconstruction algorithm, are also shown.
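
The reconstruction idea, recovering more spectral bins than filter tiles by exploiting sparsity, can be sketched with a greedy solver such as orthogonal matching pursuit. Here a zero-mean random sensing matrix stands in for the (physically nonnegative) filter transmissions, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_filters = 32, 12                    # more spectral bins than filter tiles
A = rng.standard_normal((n_filters, n_bins))  # stand-in for filter response matrix

# Sparse test spectrum: two closely spaced lines a narrowband system might blur.
x = np.zeros(n_bins)
x[10], x[12] = 1.0, 0.7
y = A @ x                                     # one measurement per filter tile

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k bins, least-squares refit."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))  # most correlated bin
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                     # update residual
    xhat = np.zeros(A.shape[1])
    xhat[support] = coef
    return xhat

xhat = omp(A, y, k=2)
```

The classification-in-the-compressive-domain result in the abstract goes further: it operates on `y` directly and skips this reconstruction step entirely.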

More Details

Parmest: Parameter Estimation Via Pyomo

Computer Aided Chemical Engineering

Klise, Katherine A.; Nicholson, Bethany L.; Staid, Andrea; Woodruff, David L.

The ability to estimate a range of plausible parameter values, based on experimental data, is a critical aspect in process model validation and design optimization. In this paper, a Python software package is described that allows for model-based parameter estimation along with characterization of the uncertainty associated with the estimates. The software, called parmest, is available within the Pyomo open-source software project as a third-party contribution. The software includes options to obtain confidence regions that are based on single or multi-variate distributions, compute likelihood ratios, use bootstrap resampling in estimation, and make use of parallel processing capabilities.
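
parmest automates this workflow around Pyomo models; the core bootstrap idea can be shown library-free on a made-up linear-model dataset (estimate the parameters, then refit on resampled data to characterize their uncertainty):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical experiment: y = theta1 * x + theta0 plus measurement noise.
x = np.linspace(0.0, 5.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)

def fit(xs, ys):
    # Least-squares estimate of (theta0, theta1) for the linear model.
    A = np.column_stack([np.ones_like(xs), xs])
    theta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return theta

theta_hat = fit(x, y)

# Bootstrap resampling: refit on resampled datasets to estimate uncertainty.
boot = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)
    boot.append(fit(x[idx], y[idx]))
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # 95% interval per parameter
```

parmest replaces the hand-written `fit` with a Pyomo model solve and adds the confidence-region and likelihood-ratio machinery described in the abstract.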

More Details

Second-Order Multiplier Updates to Accelerate Admm Methods in Optimization Under Uncertainty

Computer Aided Chemical Engineering

Rodriguez, Jose S.; Hackebeil, Gabriel; Siirola, John D.; Zavala, Victor M.; Laird, Carl D.

There is a need for efficient strategies to solve large-scale, nonlinear optimization problems. Many problem classes, including design under uncertainty, are inherently structured and can be accelerated with decomposition approaches. This paper describes a second-order multiplier update for the alternating direction method of multipliers (ADMM) to solve nonlinear stochastic programming problems. We exploit connections between ADMM and the Schur-complement decomposition to derive an accelerated version of ADMM. Specifically, we study the effectiveness of performing a Newton-Raphson algorithm to compute multiplier estimates for the method of multipliers (MM). We interpret ADMM as a decomposable version of MM and propose modifications to the multiplier update of the standard ADMM scheme based on improvements observed in MM. The modifications to the ADMM algorithm seek to accelerate solutions of optimization problems for design under uncertainty, and the numerical effectiveness of the approaches is demonstrated on a set of ten stochastic programming problems. Practical strategies for improving computational performance are discussed along with comparisons between the algorithms. We observe that the second-order update achieves convergence in fewer unconstrained minimizations for MM on general nonlinear problems. In the case of ADMM, the second-order update significantly reduces the number of subproblem solves for convex quadratic programs (QPs).
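
For reference, the standard ADMM iteration for min f(x) + g(z) subject to Ax + Bz = c uses a fixed-steplength gradient-ascent step on the multipliers; the update the paper accelerates is the last line below (standard textbook form, not reproduced from the paper):

```latex
% Standard ADMM with penalty parameter rho; the paper replaces the
% first-order multiplier step (last line) with a Newton-Raphson-style
% second-order update derived from the method of multipliers.
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\big\|Ax + Bz^{k} - c + \lambda^{k}/\rho\big\|^{2},\\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\big\|Ax^{k+1} + Bz - c + \lambda^{k}/\rho\big\|^{2},\\
\lambda^{k+1} &= \lambda^{k} + \rho\,\big(Ax^{k+1} + Bz^{k+1} - c\big).
\end{aligned}
```

Schematically, a second-order variant replaces the scalar steplength ρ with a matrix built from (an approximation of) the dual Hessian, which is what connects the scheme back to the Schur-complement decomposition mentioned in the abstract.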


Gate-defined quantum dots in Ge/SiGe quantum wells as a platform for spin qubits

ECS Transactions

Hardy, Will H.; Su, Y.H.; Chuang, Y.; Maurer, Leon M.; Brickson, Mitchell I.; Baczewski, Andrew D.; Li, J.Y.; Lu, Tzu-Ming L.; Luhman, Dwight R.

In the field of semiconductor quantum dot spin qubits, there is growing interest in leveraging the unique properties of hole-carrier systems and their intrinsically strong spin-orbit coupling to engineer novel qubits. Recent advances in semiconductor heterostructure growth have made available high quality, undoped Ge/SiGe quantum wells, consisting of a pure strained Ge layer flanked by Ge-rich SiGe layers above and below. These quantum wells feature heavy hole carriers and a cubic Rashba-type spin-orbit interaction. Here, we describe progress toward realizing spin qubits in this platform, including development of multi-metal-layer gated device architectures, device tuning protocols, and charge-sensing capabilities. Iterative improvement of a three-layer metal gate architecture has significantly enhanced device performance over that achieved using an earlier single-layer gate design. We discuss ongoing, simulation-informed work to fine-tune the device geometry, as well as efforts toward a single-spin qubit demonstration.


Characterization of particle and heat losses from falling particle receivers

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Ho, Clifford K.; Kinahan, Sean; Ortega, Jesus D.; Vorobieff, Peter; Mammoli, Andrea; Martins, Vanderlei

Camera-based imaging methods were evaluated to quantify both particle and convective heat losses from the aperture of a high-temperature particle receiver. A bench-scale model of a field-tested on-sun particle receiver was built, and particle velocities and temperatures were recorded using the small-scale model. Particles heated to over 700 °C in a furnace were released from a slot aperture and allowed to fall through a region that was imaged by the cameras. Particle-image, particle-tracking, and image-correlation velocimetry methods were compared against one another to determine the best method to obtain particle velocities. A high-speed infrared camera was used to evaluate particle temperatures, and a model was developed to determine particle and convective heat losses. In addition, particle sampling instruments were deployed during on-sun field tests of the particle receiver to determine if small particles were being generated that could pose an inhalation hazard. Results showed that while there were some recordable emissions during the tests, the measured particle concentrations were much lower than the acceptable health standard of 15 mg/m3. Additional bench-scale tests were performed to quantify the formation of particles during continuous shaking and dropping of the particles. Continuous formation of small particles in two size ranges (< ~1 micron and ~8-10 microns) was observed due to deagglomeration and mechanical fracturing, respectively, during particle collisions.


Optimization of storage bin geometry for high temperature particle-based CSP systems

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Sment, Jeremy; Albrecht, Kevin J.; Christian, Joshua M.; Ho, Clifford K.

Solid particle receivers provide an opportunity to run concentrating solar tower receivers at higher temperatures and increased overall system efficiencies. The design of the bins used for storing and managing the flow of particles creates engineering challenges in minimizing thermomechanical stress and heat loss. An optimization study of mechanical stress and heat loss was performed at the National Solar Thermal Test Facility at Sandia National Laboratories to determine the geometry of the hot particle storage hopper for a 1 MWt pilot plant facility. Modeling of heat loss was performed on hopper designs with a range of geometric parameters with the goal of providing uniform mass flow of bulk solids with no clogging, minimizing heat loss, and reducing thermomechanical stresses. The heat loss calculation included an analysis of the particle temperatures using a thermal resistance network that included the insulation and hopper. A plot of the total heat loss as a function of geometry and required thicknesses to accommodate thermomechanical stresses revealed suitable designs. In addition to the geometries related to flow type and mechanical stress, this study characterized flow-related properties of CARBO HSP 40/70 and Accucast ID50-K in contact with refractory insulation. This insulation internally lines the hopper to reduce heat loss and allow low-cost structural materials to be used for bin construction. The wall friction angle, effective angle of friction, and cohesive strength of the bulk solid were variables that were determined from empirical analysis of the particles at temperatures up to 600°C.
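A thermal resistance network of the kind described reduces to series conduction and convection resistances. A minimal sketch, with illustrative layer thicknesses, conductivities, and temperatures (not the paper's data):

```python
# One-dimensional thermal resistance network for heat loss through an
# insulated hopper wall. All numbers below are invented for illustration.
def heat_loss(T_hot, T_amb, area, layers, h_out):
    """layers: list of (thickness_m, conductivity_W_per_mK) in series."""
    R = sum(t / (k * area) for t, k in layers)   # conduction resistances
    R += 1.0 / (h_out * area)                    # outer convective film
    return (T_hot - T_amb) / R                   # heat loss in W

# 0.2 m refractory (k = 0.3 W/m-K) + 0.01 m steel shell (k = 45 W/m-K),
# 10 W/m^2-K outside film coefficient, 10 m^2 wall area
q = heat_loss(600.0, 25.0, area=10.0,
              layers=[(0.2, 0.3), (0.01, 45.0)], h_out=10.0)
print(f"heat loss: {q:.0f} W")
```

Note that the refractory layer dominates the total resistance, which is why its thickness is a key design variable.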


Evaluating the resistance of austenitic stainless steel welds to hydrogen embrittlement

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Ronevich, Joseph A.; San Marchi, Christopher W.; Balch, Dorian K.

Austenitic stainless steels are used extensively in hydrogen gas containment components due to their known resilience in hydrogen environments. Depending on the conditions, degradation can occur in austenitic stainless steels, but typically the materials retain sufficient mechanical properties within such extreme environments. In many hydrogen containment applications, it is necessary or advantageous to join components through welding as it ensures minimal gas leakage, unlike mechanical fittings, which can develop leak paths over time. Over the years, many studies have focused on the mechanical behavior of austenitic stainless steels in hydrogen environments and determined their properties to be sufficient for most applications. However, significantly less data have been generated on austenitic stainless steel welds, which can exhibit more degradation than the base material. In this paper, we assess the trends observed in austenitic stainless steel welds tested in hydrogen. Experiments on welds, including tensile and fracture toughness testing, are assessed, and comparisons to the behavior of base metals are discussed.


Bayesian calibration of empirical models common in MELCOR and other nuclear safety codes

18th International Topical Meeting on Nuclear Reactor Thermal Hydraulics, NURETH 2019

Porter, N.W.; Mousseau, Vincent A.

In modern scientific analyses, physical experiments are often supplemented with computational modeling and simulation. This is especially true in the nuclear power industry, where experiments are prohibitively expensive, or impossible, due to extreme scales, high temperatures, high pressures, and the presence of radiation. To qualify these computational tools, it is necessary to perform software quality assurance, verification, validation, and uncertainty quantification. As part of this broad process, the uncertainty of empirically derived models must be quantified. In this work, three commonly used thermal hydraulic models are calibrated to experimental data. The empirical equations are used to determine single phase friction factor in smooth tubes, single phase heat transfer coefficient for forced convection, and the transfer of mass between two phases. Bayesian calibration methods are used to estimate the posterior distribution of the parameters given the experimental data. In cases where it is appropriate, mixed-effects hierarchical calibration methods are utilized. The analyses presented in this work result in justified and reproducible joint parameter distributions which can be used in future uncertainty analysis of nuclear thermal hydraulic codes. When using these joint distributions, uncertainty in the output will be lower than traditional methods of determining parameter uncertainty. The lower uncertainties are more representative of the state of knowledge for the phenomena analyzed in this work.
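The Bayesian calibration workflow can be sketched on a one-parameter empirical model: a laminar friction-factor correlation f = C/Re, calibrated to synthetic noisy "measurements" by random-walk Metropolis sampling (invented data and priors; not the paper's models or methods):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic laminar friction-factor data, true model f = 64/Re plus noise
Re = np.linspace(500, 2000, 30)
sigma = 0.002
f_obs = 64.0 / Re + rng.normal(0, sigma, Re.size)

def log_post(C):
    # Flat prior on (0, 200); Gaussian measurement likelihood
    if not (0.0 < C < 200.0):
        return -np.inf
    resid = f_obs - C / Re
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior over C
C, lp = 50.0, log_post(50.0)
samples = []
for _ in range(20000):
    C_new = C + rng.normal(0, 2.0)
    lp_new = log_post(C_new)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject step
        C, lp = C_new, lp_new
    samples.append(C)
post = np.array(samples[5000:])              # discard burn-in
print("posterior mean:", post.mean(), "posterior std:", post.std())
```

The resulting posterior distribution, rather than a single point estimate, is what gets propagated in downstream uncertainty analyses.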


Proteus: A DLT-agnostic emulation and analysis framework

12th USENIX Workshop on Cyber Security Experimentation and Test, CSET 2019, co-located with USENIX Security 2019

Van Dam, Russell V.; Dinh, Thien-Nam D.; Cordi, Christopher N.; Jacobus, Gregory J.; Pattengale, Nicholas D.; Elliott, Steven E.

This paper presents Proteus, a framework for conducting rapid, emulation-based analysis of distributed ledger technologies (DLTs) using FIREWHEEL, an orchestration tool that assists a user in building, controlling, observing, and analyzing realistic experiments of distributed systems. Proteus is designed to support any DLT that has some form of a “transaction” and which operates on a peer-to-peer network layer. Proteus provides a framework for an investigator to set up a network of nodes, execute rich agent-driven behaviors, and extract run-time observations. Proteus relies on common features of DLTs to define agent-driven scenarios in a DLT-agnostic way allowing for those scenarios to be executed against different DLTs. We demonstrate the utility of using Proteus by executing a 51% attack on an emulated Ethereum network containing 2000 nodes.


Parametric analysis of particle CSP system performance and cost to intrinsic particle properties and operating conditions

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Albrecht, Kevin J.; Bauer, Matthew L.; Ho, Clifford K.

The use of solid particles as a heat-transfer fluid and thermal storage media for concentrating solar power is a promising candidate for meeting levelized cost of electricity (LCOE) targets for next-generation CSP concepts. Meeting these cost targets for a given system concept will require optimization of the particle heat-transfer fluid with simultaneous consideration of all system components and operating conditions. This paper explores the trade-offs in system operating conditions and particle thermophysical properties on the levelized cost of electricity through parametric analysis. A steady-state modeling methodology for design point simulations dispatched against typical meteorological year (TMY) data is presented, which includes computationally efficient submodels of a falling particle receiver, moving packed-bed heat exchanger, storage bin, particle lift, and recompression supercritical CO2 (sCO2) cycle. The components selected for the baseline system configuration represent the most near-term realization of a particle-based CSP system developed to date. However, the methodology could be extended to consider alternative particle receiver and heat exchanger concepts. The detailed system-level model coupled to component cost models is capable of propagating component design and performance information directly into the plant performance and economics. The system-level model is used to investigate how the levelized cost of electricity varies with changes in particle absorptivity, hot storage bin temperature, heat exchanger approach temperature, and sCO2 cycle operating parameters. Trade-offs in system capital cost and solar-to-electric efficiency due to changes in the size of the heliostat field, storage bins, primary heat exchanger, and receiver efficiency are observed. Optimal system operating conditions are reported, which approach a levelized cost of electricity of $0.06/kWhe.
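The core LCOE calculation behind such parametric studies can be sketched in a few lines: annualized capital plus operating cost divided by annual energy production. The plant numbers below are invented for illustration, not outputs of the paper's system model:

```python
# Simplified LCOE estimate via a capital recovery factor (CRF).
def crf(rate, years):
    # Fraction of capital repaid per year for a given discount rate and life
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capital_usd, om_usd_per_yr, mwh_per_yr, rate=0.07, years=30):
    # $/kWh: annualized capital + O&M, over annual generation
    return (capital_usd * crf(rate, years) + om_usd_per_yr) / (mwh_per_yr * 1000)

# e.g. a hypothetical $300M plant, $5M/yr O&M, 400 GWh/yr generation
print(f"LCOE: ${lcoe(300e6, 5e6, 400e3):.3f}/kWh")
```

Parametric sweeps then vary the inputs (e.g. receiver efficiency changes both capital cost and annual generation) and search for the minimum of this ratio.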


Scaling Productivity and Innovation on the Path to Exascale with a “Team of Teams” Approach

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Raybourn, Elaine M.; Moulton, J.D.; Hungerford, Aimee

One of the core missions of the Department of Energy (DOE) is to move beyond current high performance computing (HPC) capabilities toward a capable exascale computing ecosystem that accelerates scientific discovery and addresses critical challenges in energy and national security. The very nature of this mission has drawn a wide range of talented and successful scientists to work together in new ways to push beyond the status quo toward this goal. For many scientists, their past success was achieved through efficient and agile collaboration within small trusted teams that rapidly innovate, prototype, and deliver. Thus, a key challenge for the ECP (Exascale Computing Project) is to scale this efficiency and innovation from small teams to aggregate teams of teams. While scaling agile collaboration from small teams to teams of teams may seem like a trivial transition, the path to exascale introduces significant uncertainty in HPC scientific software development for future modeling and simulation, and can cause unforeseen disruptions or inefficiencies that impede organizational productivity and innovation critical to achieving an integrated exascale vision. This paper identifies key challenges in scaling to a team of teams approach and recommends strategies for addressing them. The scientific community will take away lessons learned and recommended best practices from examples for enhancing productivity and innovation at scale for immediate use in modeling and simulation software engineering projects and programs.


The Impact of Information Presentation on Visual Inspection Performance in the International Nuclear Safeguards Domain

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Matzen, Laura E.; Stites, Mallory C.; Smartt, Heidi A.; Gastelum, Zoe N.

International nuclear safeguards inspectors are tasked with verifying that nuclear materials in facilities around the world are not misused or diverted from peaceful purposes. They must conduct detailed inspections in complex, information-rich environments, but there has been relatively little research into the cognitive aspects of their jobs. We posit that the speed and accuracy of the inspectors can be supported and improved by designing the materials they take into the field such that the information is optimized to meet their cognitive needs. Many in-field inspection activities involve comparing inventory or shipping records to other records or to physical items inside of a nuclear facility. The organization and presentation of the records that the inspectors bring into the field with them could have a substantial impact on the ease or difficulty of these comparison tasks. In this paper, we present a series of mock inspection activities in which we manipulated the formatting of the inspectors’ records. We used behavioral and eye tracking metrics to assess the impact of the different types of formatting on the participants’ performance on the inspection tasks. The results of these experiments show that matching the presentation of the records to the cognitive demands of the task led to substantially faster task completion.


Analysis of EDS vessel clamping system and door seal

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Stofleth, Jerome H.; Tribble, Megan K.; Ludwigsen, John S.; Crocker, Robert W.

The V26 containment vessel was procured by the Project Manager, Non-Stockpile Chemical Materiel (PMNSCM) for use on the Phase-2 Explosive Destruction Systems. The vessel was fabricated under Code Case 2564 of the ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the Code Case, is nine (9) pounds TNT-equivalent for up to 637 detonations, limited only by fatigue crack growth calculations initiated from a minimum detectable crack depth. The vessel consists of a cylindrical cup, a flat cover or door, and clamps to secure the door. The vessel is sealed with a metal gasket. The body is a deep cylindrical cup machined from a 316 stainless steel forging. The door is also machined from a 316 stainless steel forging. The closure clamps are secured with four 17-4 PH steel threaded rods with 4140 alloy steel threaded nuts on one end and hydraulic nuts on the other. A flange with four high-voltage electrical feedthroughs is bolted to the door and sealed with a small metal gasket. These feedthroughs conduct the firing signals for the high-voltage Exploding Bridge-wire detonators. Small blast plates on the inside of the door protect fluidic components and electrical feedthroughs. A large blast plate provides additional protection. Both the vessel door and feedthrough flange employ O-ring seals outside the metal seals in order to provide a mechanism for helium leak checks of the volume just outside the metal seal surface before and after detonation. In previous papers (References 2 and 3), the authors describe results from testing of the vessel body and ends under qualification loads, determining the effective TNT equivalency of Composition C4 (EDS Containment Vessel TNT Equivalence Testing) and analyzing the effects of distributed explosive charges versus unitary charges (EDS Containment Vessel Explosive Test and Analysis).
In addition to the measurements made on the vessel body and ends reported previously, bulk motion and deformation of the door and clamping system were measured. Strain gauges were positioned at various locations on the inner and outer surfaces of the clamping system and on the vessel door surface. Digital Image Correlation was employed during both hydrostatic testing and dynamic testing under full-load explosive detonation to determine bulk and bending motion of the door relative to the vessel body and clamping system. Some limited hydrocode and finite element code analysis was performed on the clamping system for comparison. The purpose of this analysis was to determine the likelihood of a change in the static sealing efficacy of the metal clamping system and to evaluate the possibility of dynamic burping of vessel contents during detonation. Those results are reported in this paper.



Terahertz detectors based on all-dielectric photoconductive metasurfaces

Proceedings of SPIE - The International Society for Optical Engineering

Mitrofanov, Oleg; Siday, Thomas; Vabishchevich, Polina V.; Hale, Lucy; Harris, Charles T.; Luk, Ting S.; Reno, J.L.; Brener, Igal B.

Performance of terahertz (THz) photoconductive devices, including detectors and emitters, has been improved recently by means of plasmonic nanoantennae and gratings. However, plasmonic nanostructures introduce Ohmic losses, which limit gains in device performance. In this presentation, we discuss an alternative approach, which eliminates the problem of Ohmic losses. We use all-dielectric photoconductive metasurfaces as the active region in THz switches to improve their efficiency. In particular, we discuss two approaches to realize perfect optical absorption in a thin photoconductive layer without introducing metallic elements. In addition to providing perfect optical absorption, a photoconductive channel based on an all-dielectric metasurface allows us to engineer desired electrical properties, specifically, fast and efficient conductivity switching with very high contrast. This approach thus promises a new generation of sensitive and efficient THz photoconductive detectors. Here we demonstrate and discuss the performance of two practical THz photoconductive detectors with integrated all-dielectric metasurfaces.


Production application performance data streaming for system monitoring

ACM Transactions on Modeling and Performance Evaluation of Computing Systems

Izadpanah, Ramin; Allan, Benjamin A.; Dechev, Damian; Brandt, James M.

In this article, we present an approach to streaming collection of application performance data. Practical application performance tuning and troubleshooting in production high-performance computing (HPC) environments requires an understanding of how applications interact with the platform, including (but not limited to) parallel programming libraries such as Message Passing Interface (MPI). Several profiling and tracing tools exist that collect heavy runtime data traces either in memory (released only at application exit) or on a file system (imposing an I/O load that may interfere with the performance being measured). Although these approaches are beneficial in development stages and post-run analysis, a systemwide and low-overhead method is required to monitor deployed applications continuously. This method must be able to collect information at both the application and system levels to yield a complete performance picture. In our approach, an application profiler collects application event counters. A sampler uses an efficient inter-process communication method to periodically extract the application counters and stream them into an infrastructure for performance data collection. We implement a tool-set based on our approach and integrate it with the Lightweight Distributed Metric Service (LDMS) system, a monitoring system used on large-scale computational platforms. LDMS provides the infrastructure to create and gather streams of performance data in a low overhead manner. We demonstrate our approach using applications implemented with MPI, as it is one of the most common standards for the development of large-scale scientific applications. We utilize our tool-set to study the impact of our approach on an open source HPC application, Nalu. Our tool-set enables us to efficiently identify patterns in the behavior of the application without source-level knowledge. 
We leverage LDMS to collect system-level performance data and explore the correlation between the system and application events. Also, we demonstrate how our tool-set can help detect anomalies with low latency. We run tests on two different architectures: a system enabled with Intel Xeon Phi processors and another system equipped with Intel Xeon processors. Our overhead study shows our method imposes at most 0.5% CPU usage overhead on the application in realistic deployment scenarios.


Paired neural networks for hyperspectral target detection

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Dylan Z.; Zollweg, Joshua D.; Smith, Braden J.

Spectral matched filtering and its variants (e.g. Adaptive Coherence Estimator or ACE) rely on strong assumptions about target and background distributions. For instance, ACE assumes a Gaussian distribution of background and an additive target model. In practice, natural spectral variation, due to effects such as material Bidirectional Reflectance Distribution Function, non-linear mixing with surrounding materials, or material impurities, degrades the performance of matched filter techniques and requires an ever-increasing library of target templates measured under different conditions. In this work, we employ the contrastive loss function and paired neural networks to create data-driven target detectors that do not rely on strong assumptions about target and background distributions. Furthermore, by matching spectra to templates in a highly nonlinear fashion via neural networks, our target detectors exhibit improved performance and greater resiliency to natural spectral variation; this performance improvement comes with no increase in target template library size. We evaluate and compare our paired neural network detector to matched filter-based target detectors on a synthetic hyperspectral scene and the well-known Indian Pines AVIRIS hyperspectral image.
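The ACE baseline that the paper compares against can be sketched on a synthetic scene: a squared cosine similarity between pixel and target signature in background-whitened space. The scene, signature, and detection margins below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic scene: Gaussian background plus an additive target in 100 pixels
bands, n = 20, 5000
t = rng.normal(size=bands)
t /= np.linalg.norm(t)                    # unit-norm target signature
X = rng.normal(size=(n, bands))           # background pixels
X[:100] += 3.0 * t                        # implant targets in first 100 pixels

# Background statistics (in practice estimated from target-free pixels)
mu = X.mean(axis=0)
Sinv = np.linalg.inv(np.cov(X, rowvar=False))

def ace(X, t, mu, Sinv):
    # Adaptive Coherence Estimator: squared cosine in whitened space
    Xc = X - mu
    num = (Xc @ Sinv @ t) ** 2
    den = (t @ Sinv @ t) * np.einsum('ij,jk,ik->i', Xc, Sinv, Xc)
    return num / den

scores = ace(X, t, mu, Sinv)
print("mean target score:    ", scores[:100].mean())
print("mean background score:", scores[100:].mean())
```

Target pixels score well above the background here because the implanted signature matches the template exactly; the spectral variation discussed in the abstract is precisely what breaks this assumption in real scenes.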


Optical ray-tracing performance modeling of quartz half-shell tubes aperture cover for falling particle receiver

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Yellowhair, Julius; Ho, Clifford K.

A 1 MWt falling particle receiver prototype was designed and built and is being evaluated at Sandia National Laboratories' National Solar Thermal Test Facility (NSTTF). The current prototype has a 1 m2 aperture facing the north field. The current aperture configuration is susceptible to heat and particle losses through the receiver aperture. Several options are being considered for the next design iteration to reduce the risk of heat and particle losses, in addition to improving the receiver efficiency to target levels of ~90%. One option is to cover the receiver aperture with a highly durable and transmissive material such as quartz glass. Quartz glass has high transmittance for wavelengths less than 2.5 microns and low transmittance for wavelengths greater than 2.5 microns to help trap the heat inside the receiver. To evaluate the receiver optical performance, ray-tracing models were set up for several different aperture cover configurations. The falling particle receiver is modeled as a box with a 1 m2 aperture on the north side wall. The box dimensions are 1.57 m wide x 1.77 m tall x 1.67 m deep. The walls are composed of RSLE material modeled as Lambertian surfaces with reflectance of either 0.9 for the pristine condition or 0.5 for soiled walls. The quartz half-shell tubes are 1.46 m long with 105 mm and 110 mm inner and outer diameters, respectively. The half-shell tubes are arranged vertically and slant forward at the top by 30 degrees. Four configurations were considered: concave side of the half-shells facing away from the receiver aperture with (1) no spacing and (2) high spacing between the tubes, and concave side of the half-shells facing the aperture with (3) no spacing and (4) high spacing between the tubes. The particle curtain, in the first modeling approach, is modeled as a diffuse surface with transmittance, reflectance, and absorptance values, which are based on estimates from previous experiments for varying particle flow rates.
The incident radiation is from the full NSTTF heliostat field with a single aimpoint at the center of the receiver aperture. The direct incident rays and reflected and scattered rays off the internal receiver surfaces are recorded on the internal walls and particle curtain surfaces as net incident irradiance. The net incident irradiances on the internal walls and particle curtain for the different aperture cover configurations are compared to the baseline configuration. In all cases, from optical performance alone, the net incident irradiance is reduced from the baseline. However, it is expected that the quartz half-shells will reduce the convective and thermal radiation losses through the aperture. These ray-tracing results will be used as boundary conditions in computational fluid dynamics (CFD) analyses to determine the net receiver efficiency and the optimal configuration for the quartz half-shells that minimizes heat losses and maximizes thermal efficiency.


Optical performance modeling and analysis of a tensile ganged heliostat concept

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Yellowhair, Julius; Armijo, Kenneth M.; Andraka, Charles E.; Ortega, J.; Clair, Jim

Designs of conventional heliostats have been varied to reduce cost, improve optical performance, or both. In one case, reflective mirror area on heliostats has been increased with the goal of reducing the number of pedestals and drives and consequently reducing the cost of those components. The larger reflective areas, however, increase torques due to larger mirror weights and wind loads. Higher cost heavy-duty motors and drives must be used, which negatively impacts any economic gains. To improve optical performance, the opposite may be true, where the mirror reflective areas are reduced for better control of heliostat pointing and tracking. For smaller heliostats, gravity and wind loads are reduced, but many more heliostats must be added to provide sufficient solar flux to the receiver. For conventional heliostats, there seems to be no clear cost advantage of one heliostat design over other designs. The advantage of ganged heliostats is that the pedestal and tracking motors are shared between multiple heliostats, which can significantly reduce the cost of those components. In this paper, a new concept of cable-suspended tensile ganged heliostats is introduced, a preliminary optical performance analysis is performed, and the concept is incorporated into a 10 MW conceptual power tower plant, where its performance is compared to that of a baseline plant with a conventional radially staggered heliostat field. The baseline plant uses conventional heliostats with the layout optimized in the System Advisor Model (SAM) tool. The ganged heliostats are suspended on two guide cables. The cables are attached to rotation arms which are anchored to end posts. The layout was optimized offline and then transferred to SAM for performance evaluation. In the initial modeling of the tensile ganged heliostats for a 10 MW power tower plant, equal heliostat spacing along the guide cables was assumed, which, as suspected, leads to high shading and blocking losses.
The goal was then to optimize the heliostat spacing such that annual shading and blocking losses were minimized. After adjusting the spacing of the tensile ganged heliostats for minimal blocking losses, the annual blocking/shading efficiency was greater than 90%, and the annual optical efficiency of the field became comparable to that of the conventional field, at slightly above 60%.


On-sun tracking evaluation of a small-scale tensile ganged heliostat prototype

ASME 2019 13th International Conference on Energy Sustainability, ES 2019, collocated with the ASME 2019 Heat Transfer Summer Conference

Yellowhair, Julius; Armijo, Kenneth M.; Ortega, J.; Clair, Jim

Various ganged heliostat concepts have been proposed in the past. The attractive aspect of ganged heliostat concepts is that multiple heliostats are grouped so that pedestals, tracking drives, and other components can be shared, thus reducing the number of components. The reduction in the number of components is thought to significantly reduce cost. However, since the drives and tracking mechanisms are shared, accurate on-sun tracking of grouped heliostats becomes challenging because the angular degrees of freedom are limited for the combined heliostats. In this paper, a preliminary evaluation of the on-sun tracking of a novel tensile-based, cable-suspended ganged heliostat concept is provided. In this concept, multiple heliostats are attached to two guide cables. The cables are attached to rotation spreader arms which are anchored to end posts on two ends. The guide cables form a catenary, which makes tracking on-sun interesting and challenging. Tracking is performed by rotating the end plates that the two cables are attached to and rotating the individual heliostats in one axis. An additional degree of freedom can be added by differentially tensioning the two cables, but this may be challenging to do in practice. Manual on-sun tracking was demonstrated on small-scale prototypes. The rotation arms were coarsely controlled with linear actuators, and the individual heliostats were hand-adjusted in local pitch angle and locked in place with set screws. The coarse angle adjustments showed the tracking accuracy was 3-4 milliradians. However, with better angle control mechanisms the tracking accuracy can be drastically improved. In this paper, we provide tracking data that was collected over a day, which showed feasibility for automated on-sun tracking. The next steps are to implement better angle control mechanisms and develop tracking algorithms so that the ganged heliostats can track automatically.

More Details

Conservative multimoment transport along characteristics for discontinuous Galerkin methods

SIAM Journal on Scientific Computing

Bosler, Peter A.; Bradley, Andrew M.; Taylor, Mark A.

A set of algorithms based on characteristic discontinuous Galerkin methods is presented for tracer transport on the sphere. The algorithms are designed to reduce message passing interface communication volume per unit of simulated time relative to current methods generally, and to the spectral element scheme employed by the U.S. Department of Energy's Exascale Earth System Model (E3SM) specifically. Two methods are developed to enforce discrete mass conservation when the transport schemes are coupled to a separate dynamics solver: constrained transport and Jacobian-combined transport. A communication-efficient method is introduced to enforce tracer consistency between the transport scheme and dynamics solver; this method also provides the transport scheme's shape preservation capability. A subset of the algorithms derived here is implemented in E3SM and shown to improve transport performance by a factor of 2.2 for the model's standard configuration with 40 tracers at the strong scaling limit of one element per core.
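The mass-conservation coupling described above can be illustrated in miniature: after a transport step, the tracer field is corrected so its discrete total mass matches the value implied by the dynamics solver. This sketch uses a simple multiplicative rescaling, not the paper's constrained or Jacobian-combined algorithms; all names and sizes are illustrative.

```python
import numpy as np

# Minimal illustration of a discrete mass-conservation fix-up (NOT the paper's
# constrained or Jacobian-combined algorithms): after a transport step, rescale
# the tracer field so that sum(q * vol) matches the target total mass while
# preserving nonnegativity.

def conserve_mass(q, vol, target_mass):
    """Rescale cell tracer densities q (per unit volume) to hit target_mass."""
    q = np.asarray(q, dtype=float)
    current = float(np.dot(q, vol))
    if current <= 0.0:
        raise ValueError("nonpositive tracer mass")
    return q * (target_mass / current)

q = np.array([1.0, 2.0, 3.0])            # tracer density per cell
vol = np.array([1.0, 1.0, 1.0])          # cell volumes
q_fixed = conserve_mass(q, vol, 5.4)     # e.g. dynamics solver implies mass 5.4
print(np.dot(q_fixed, vol))              # -> 5.4
```

A multiplicative correction preserves the sign and relative shape of the field, which is why schemes of this family pair naturally with shape-preservation limiters.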

More Details

Comparison of field measurements and large eddy simulations of the Scaled Wind Farm Technology (SWiFT) site

ASME-JSME-KSME 2019 8th Joint Fluids Engineering Conference, AJKFluids 2019

Blaylock, Myra L.; Houchens, Brent C.; Maniaci, David C.; Herges, Thomas H.; Laros, James H.; Knaus, Robert C.; Sakievich, Philip S.

Power production of the turbines at the Department of Energy/Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) facility located at the Texas Tech University’s National Wind Institute Research Center was measured experimentally and simulated for neutral atmospheric boundary layer operating conditions. Two V27 wind turbines were aligned in series with the dominant wind direction, and the upwind turbine was yawed to investigate the impact of wake steering on the downwind turbine. Two conditions were investigated, including that of the leading turbine operating alone and both turbines operating in series. The field measurements include meteorological evaluation tower (MET) data and light detection and ranging (lidar) data. Computations were performed by coupling large eddy simulations (LES) in the three-dimensional, transient code Nalu-Wind with engineering actuator line models of the turbines from OpenFAST. The simulations consist of a coarse precursor without the turbines to set up an atmospheric boundary layer inflow followed by a simulation with refinement near the turbines. Good agreement between simulations and field data is shown. These results demonstrate that Nalu-Wind holds promise for the prediction of wind plant power and loads for a range of yaw conditions.

More Details

Wave-powered AUV recharging: A feasibility study

Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE

Driscol, Blake P.; Gish, Andrew; Coe, Ryan G.

The aim of this study is to determine whether multiple U.S. Navy autonomous underwater vehicles (AUVs) could be supported using a small, heaving wave energy converter (WEC). The U.S. Navy operates numerous AUVs that need to be charged periodically onshore or onboard a support ship. Ocean waves provide a vast source of energy that can be converted into electricity using a wave energy converter and stored using a conventional battery. The Navy would benefit from the development of a wave energy converter that could store electrical power and autonomously charge its AUVs offshore. A feasibility analysis is required to ensure that the WEC could support the energy needs of multiple AUVs, remain covert, and offer a strategic military advantage. This paper investigates the Navy's power demands for AUVs and assesses whether these demands could be met under various assumptions about WEC efficiency. Wave data from a potential geographic region are analyzed to determine optimal converter locations for meeting the Navy's power demands and mission set.
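The core feasibility arithmetic behind a study like this can be sketched with the standard deep-water wave power flux formula, P ≈ ρg²H²T/(64π) per meter of wave crest. All numbers below (capture width, efficiency, sea state, AUV demand) are illustrative assumptions, not the paper's data.

```python
import math

# Rough feasibility arithmetic (illustrative values, not the paper's data):
# deep-water wave power flux per meter of crest is P ~ rho * g^2 * H^2 * T / (64 * pi).

def wave_power_flux_kw_per_m(hs_m: float, te_s: float) -> float:
    """Deep-water wave energy flux in kW per meter of wave crest."""
    rho, g = 1025.0, 9.81          # seawater density (kg/m^3), gravity (m/s^2)
    return rho * g**2 * hs_m**2 * te_s / (64.0 * math.pi) / 1000.0

def auvs_supported(capture_width_m, efficiency, hs_m, te_s, auv_demand_kw):
    """How many AUVs a small heaving WEC could keep charged, time-averaged."""
    harvested_kw = wave_power_flux_kw_per_m(hs_m, te_s) * capture_width_m * efficiency
    return int(harvested_kw // auv_demand_kw)

# e.g. 2 m capture width, 20% efficiency, Hs = 2 m, Te = 8 s, 1 kW per AUV
print(auvs_supported(2.0, 0.2, 2.0, 8.0, 1.0))   # -> 6
```

Even crude numbers like these show why WEC efficiency assumptions dominate the answer: halving the efficiency halves the supported fleet.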

More Details

Generalized entropy stable weighted essentially non-oscillatory finite difference scheme in multi-block domains

AIAA Aviation 2019 Forum

Maeng, Jungyeoul B.; Fisher, Travis C.; Carpenter, Mark H.

A new cell-centered third-order entropy stable Weighted Essentially Non-Oscillatory (SSWENO) finite difference scheme in multi-block domains is developed for compressible flows. This new scheme overcomes shortcomings of the conventional SSWENO finite difference scheme in multi-domain problems by incorporating non-dissipative Simultaneous Approximation Term (SAT) penalties into the construction of a dual flux. The stencil of the generalized dual flux allows for full stencil biasing across the interface while maintaining the nonlinear stability estimate. We demonstrate the shock capturing improvement across multi-block domain interfaces using the generalized SSWENO in comparison to the conventional entropy stable high-order finite difference with interface penalty on shock problems. Furthermore, we test the new scheme in multi-dimensional turbulent flow problems to assess the accuracy and stability of the multi-block domain formulation.

More Details

Toward the analysis of embedded firmware through automated re-hosting

RAID 2019 Proceedings - 22nd International Symposium on Research in Attacks, Intrusions and Defenses

Gustafson, Eric D.; Muench, Marius; Spensky, Chad; Redini, Nilo; Machiry, Aravind; Fratantonio, Yanick; Francillon, Aurelien; Balzarotti, Davide; Choe, Yung R.; Kruegel, Christopher; Vigna, Giovanni

The recent paradigm shift introduced by the Internet of Things (IoT) has brought embedded systems into focus as a target for both security analysts and malicious adversaries. Typified by their lack of standardized hardware, diverse software, and opaque functionality, IoT devices present unique challenges to security analysts due to the tight coupling between their firmware and the hardware for which it was designed. In order to take advantage of modern program analysis techniques, such as fuzzing or symbolic execution, with any kind of scale or depth, analysts must have the ability to execute firmware code in emulated (or virtualized) environments. However, these emulation environments are rarely available and are cumbersome to create through manual reverse engineering, greatly limiting the analysis of binary firmware. In this work, we explore the problem of firmware re-hosting, the process by which firmware is migrated from its original hardware environment into a virtualized one. We show that an approach capable of creating virtual, interactive environments in an automated manner is a necessity to enable firmware analysis at scale. We present the first proof-of-concept system aiming to achieve this goal, called PRETENDER, which uses observations of the interactions between the original hardware and the firmware to automatically create models of peripherals, and allows for the execution of the firmware in a fully-emulated environment. Unlike previous approaches, these models are interactive, stateful, and transferable, meaning they are designed to allow the program to receive and process new input, a requirement of many analyses. We demonstrate our approach on multiple hardware platforms and firmware samples, and show that the models are flexible enough to allow for virtualized code execution, the exploration of new code paths, and the identification of security vulnerabilities.
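A drastically simplified sketch of the record-and-replay idea behind peripheral modeling: observe (address, value) read interactions between firmware and real hardware, then serve reads in the emulator from the recorded per-address pattern. PRETENDER's actual models are stateful and interactive; this toy class, with hypothetical register addresses, only replays and then repeats the last observed value.

```python
from collections import defaultdict

# Toy record/replay peripheral model (a simplification of the idea, NOT the
# PRETENDER implementation): record register reads from real hardware, then
# serve emulator reads from the per-address trace, repeating the last value.

class ReplayPeripheral:
    def __init__(self):
        self._trace = defaultdict(list)   # address -> recorded read values
        self._cursor = defaultdict(int)

    def record(self, addr: int, value: int) -> None:
        self._trace[addr].append(value)

    def read(self, addr: int) -> int:
        vals = self._trace[addr]
        if not vals:
            return 0                      # unknown register: default value
        i = min(self._cursor[addr], len(vals) - 1)
        self._cursor[addr] += 1
        return vals[i]

UART_STATUS = 0x4000_0000                 # hypothetical MMIO address
p = ReplayPeripheral()
for v in (0x00, 0x00, 0x20):              # e.g. a TX-ready bit appears on 3rd poll
    p.record(UART_STATUS, v)
print([p.read(UART_STATUS) for _ in range(4)])   # -> [0, 0, 32, 32]
```

Pure replay like this breaks as soon as the firmware takes a different code path than the recording; making the models stateful and transferable is precisely the contribution the abstract highlights.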

More Details

Enhanced Optical Nonlinearities in All-Dielectric Metasurfaces

Optics InfoBase Conference Papers

Vabishchevich, Polina V.; Vaskin, A.; Addamane, Sadhvikas J.; Karl, Nicholas J.; Liu, S.; Sharma, A.P.; Balakrishnan, G.; Reno, J.L.; Keeler, G.A.; Sinclair, Michael B.; Staude, I.; Brener, Igal B.

We experimentally demonstrate the simultaneous generation of second-, third-, and fourth-harmonic, sum-frequency, four-wave mixing, and six-wave mixing processes in III-V semiconductor metasurfaces, and show how to tailor second-harmonic generation into the zero diffraction order via crystal orientation.

More Details

III-Nitride ultra-wide-bandgap electronic devices

Semiconductors and Semimetals

Kaplar, Robert K.; Allerman, A.A.; Armstrong, Andrew A.; Baca, A.G.; Crawford, Mary H.; Dickerson, Jeramy R.; Douglas, Erica A.; Fischer, Arthur D.; Klein, Brianna A.; Reza, Shahed R.

This chapter discusses the motivation for the use of Ultra-Wide-Bandgap Aluminum Gallium Nitride semiconductors for power switching and radio-frequency applications. A review of the relevant figures of merit for both vertical and lateral power switching devices, as well as lateral radio-frequency devices, is presented, demonstrating the potential superior performance of these devices relative to Gallium Nitride. Additionally, representative results from the literature for each device type are reviewed, highlighting recent progress as well as areas for further research.

More Details

Deep neural networks for compressive hyperspectral imaging

Proceedings of SPIE - The International Society for Optical Engineering

Lee, Dennis J.

We investigate deep neural networks to reconstruct and classify hyperspectral images from compressive sensing measurements. Hyperspectral sensors provide detailed spectral information to differentiate materials. However, traditional imagers require scanning to acquire spatial and spectral information, which increases collection time. Compressive sensing is a technique to encode signals into fewer measurements. It can speed acquisition time, but the reconstruction can be computationally intensive. First we describe multilayer perceptrons to reconstruct compressive hyperspectral images. Then we compare two different inputs to machine learning classifiers: compressive sensing measurements and the reconstructed hyperspectral image. The classifiers include support vector machines, K nearest neighbors, and neural networks (3D convolutional neural networks and recurrent neural networks). The results show that deep neural networks can speed up the time for the acquisition, reconstruction, and classification of compressive hyperspectral images.
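The encode/decode pipeline the abstract describes can be sketched in a few lines: each spectral pixel x is compressed as y = Φx with far fewer measurements than spectral bands, then decoded back to a full spectrum. Here a minimum-norm linear decoder stands in for the paper's multilayer perceptron, and all sizes are illustrative.

```python
import numpy as np

# Minimal compressive-sensing sketch (a least-squares decoder stands in for
# the paper's multilayer perceptron; sizes are illustrative).

rng = np.random.default_rng(0)
n_bands, n_meas = 64, 16                  # 4x compression
Phi = rng.standard_normal((n_meas, n_bands)) / np.sqrt(n_meas)  # sensing matrix

x = np.zeros(n_bands)                     # a sparse toy spectrum
x[[5, 20, 41]] = [1.0, 0.5, 0.8]

y = Phi @ x                               # compressive measurement (what the sensor records)
x_hat = np.linalg.pinv(Phi) @ y           # minimum-norm linear decoder

print(y.shape, x_hat.shape)               # -> (16,) (64,)
```

The classification comparison in the paper asks whether a classifier should consume `y` directly or the decoded `x_hat`; the linear decoder here is far weaker than a trained network, which is the motivation for the learned reconstructions studied in the paper.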

More Details

SBF-BO-2CoGP: A sequential bi-fidelity constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.; Wildey, Timothy M.; Mccann, Scott

Bayesian optimization is an effective surrogate-based optimization method that has been widely used for simulation-based applications. However, the traditional Bayesian optimization (BO) method is only applicable to single-fidelity applications, whereas multiple levels of fidelity exist in reality. In this work, we propose a bi-fidelity known/unknown constrained Bayesian optimization method for design applications. The proposed framework, called sBF-BO-2CoGP, is built on a two-level CoKriging method to predict the objective function. An external binary classifier, which is also another CoKriging model, is used to distinguish between feasible and infeasible regions. The sBF-BO-2CoGP method is demonstrated using a numerical example and a flip-chip application for design optimization to minimize the warpage deformation under thermal loading conditions.
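The two-level surrogate idea can be illustrated with the autoregressive form often used in CoKriging, f_hi(x) ≈ ρ·f_lo(x) + δ(x). In this sketch, simple polynomial least squares stands in for the Gaussian processes used by sBF-BO-2CoGP, and the models, sample sizes, and test point are all illustrative assumptions.

```python
import numpy as np

# Minimal two-fidelity surrogate in the spirit of CoKriging's autoregressive
# form f_hi(x) ~ rho * f_lo(x) + delta(x). Polynomial least squares stands in
# for the Gaussian processes of sBF-BO-2CoGP (illustrative only).

def fit_bifidelity(x_lo, y_lo, x_hi, y_hi, deg=7):
    lo_poly = np.polynomial.Polynomial.fit(x_lo, y_lo, deg)      # cheap-model surrogate
    basis = np.vstack([lo_poly(x_hi), np.ones_like(x_hi)]).T
    rho, c = np.linalg.lstsq(basis, y_hi, rcond=None)[0]         # scale + offset discrepancy
    return lambda x: rho * lo_poly(x) + c

f_lo = lambda x: np.sin(2 * x)                 # plentiful low-fidelity model
f_hi = lambda x: 1.5 * np.sin(2 * x) + 0.2     # scarce high-fidelity model

x_lo = np.linspace(0, 3, 40)                   # many cheap evaluations
x_hi = np.linspace(0, 3, 6)                    # only a few expensive evaluations
model = fit_bifidelity(x_lo, f_lo(x_lo), x_hi, f_hi(x_hi))

err = abs(float(model(1.234) - f_hi(1.234)))
print(err < 0.05)
```

The design choice mirrors the paper's premise: the low-fidelity model supplies the shape of the response from many cheap samples, so only the low-dimensional discrepancy (here a scale and offset) must be learned from the few expensive ones.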

More Details

Impact of molecular dynamics simulations on research and development of semiconductor materials

MRS Advances

Zhou, Xiaowang Z.

Atomic scale defects critically limit performance of semiconductor materials. To improve materials, defect effects and defect formation mechanisms must be understood. In this paper, we demonstrate multiple examples where molecular dynamics simulations have effectively addressed these issues that were not well addressed in prior experiments. In the first case, we report our recent progress on modelling graphene growth, where we found that defects in graphene are created around periphery of islands throughout graphene growth, not just in regions where graphene islands impinge as believed previously. In the second case, we report our recent progress on modelling TlBr, where we discovered that under an electric field, edge dislocations in TlBr migrate in both slip and climb directions. The climb motion ejects extensive vacancies that can cause the rapid aging of the material seen in experiments. In the third case, we discovered that the growth of InGaN films on (0001) surfaces suffers from a serious polymorphism problem that creates enormous amounts of defects. Growth on surfaces, on the other hand, results in single crystalline wurtzite films without any of these defects. In the fourth case, we first used simulations to derive dislocation energies that do not possess any noticeable statistical errors, and then used these error-free methods to discover possible misuse of misfit dislocation theory in past thin film studies. Finally, we highlight the significance of molecular dynamics simulations in reducing defects in the design space of nanostructures.

More Details

Validation of shielded cable modeling in xyce based on transmission-line theory

Progress in Electromagnetics Research Letters

Campione, Salvatore; Pung, Aaron J.; Warne, Larry K.; Langston, William L.; Mei, Ting; Hudson, Howard G.

Cables and electronic devices typically employ electromagnetic shields to prevent coupling from external radiation. The imperfect nature of these shields allows external electric and magnetic fields to penetrate into the interior regions of the cable and induce unwanted currents and voltages on the inner conductor. In this paper, we verify a circuit model tool against a previously proposed analytic model [1] by evaluating the induced currents and voltages on the inner conductor of the shielded cable. Comparisons with experiments, aimed at validating the proposed circuit model, are also provided. We foresee that this circuit model will enable coupling between electromagnetic and circuit simulations.

More Details

High-fidelity calibration and characterization of a spectral computed tomography system

Proceedings of SPIE - The International Society for Optical Engineering

Gallegos, Isabel G.; Dalton, Gabriella D.; Stohn, Adriana M.; Koundinyan, Srivathsan P.; Thompson, Kyle R.; Jimenez, Edward S.

Sandia National Laboratories has developed a model characterizing the nonlinear encoding operator of the world's first hyperspectral x-ray computed tomography (H-CT) system as a sequence of discrete-to-discrete, linear image system matrices across unique and narrow energy windows. In fields such as national security, industry, and medicine, H-CT has various applications in the non-destructive analysis of objects such as material identification, anomaly detection, and quality assurance. However, many approaches to computed tomography (CT) make gross assumptions about the image formation process to apply post-processing and reconstruction techniques that lead to inferior data, resulting in faulty measurements, assessments, and quantifications. To abate this challenge, Sandia National Laboratories has modeled the H-CT system through a set of point response functions, which can be used for calibration and analysis of the real-world system. This work presents the numerical method used to produce the model through the collection of data needed to describe the system; the parameterization used to compress the model; and the decompression of the model for computation. By using this linear model, large amounts of accurate synthetic H-CT data can be efficiently produced, greatly reducing the costs associated with physical H-CT scans. Furthermore, successfully approximating the encoding operator for the H-CT system enables quick assessment of H-CT behavior for various applications in high-performance reconstruction, sensitivity analysis, and machine learning.
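The encoding model above reduces to one linear system matrix per narrow energy window, so synthetic data generation is just a matrix-vector product per window. This sketch uses random dense matrices and illustrative sizes; the real matrices are built from measured point response functions and would be stored in compressed form.

```python
import numpy as np

# Sketch of the per-energy-window linear encoding model: synthetic H-CT data
# is y_e = A_e @ x for each window e (sizes and matrices are illustrative;
# the real A_e come from measured point response functions).

rng = np.random.default_rng(1)
n_vox, n_det, n_windows = 100, 60, 8

A = [rng.random((n_det, n_vox)) * 0.01 for _ in range(n_windows)]  # per-window system matrices
x = rng.random(n_vox)                                              # object attenuation map

sinogram = np.stack([A_e @ x for A_e in A])    # synthetic data, one row block per window
print(sinogram.shape)                          # -> (8, 60)
```

Because each window is linear, sensitivity analysis and machine-learning data generation amount to cheap repeated matrix products once the matrices are calibrated.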

More Details

Geometric uncertainty quantification and robust design for 2D satellite shielding

International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2019

Pautz, Shawn D.; Adams, Brian M.; Bruss, Donald E.

The design of satellites usually includes the objective of minimizing mass due to high launch costs, which is challenging due to the need to protect sensitive electronics from the space radiation environment by means of radiation shielding. This is further complicated by the need to account for uncertainties, e.g. in manufacturing. There is growing interest in automated design optimization and uncertainty quantification (UQ) techniques to help achieve that objective. Traditional optimization and UQ approaches that rely exclusively on response functions (e.g. dose calculations) can be quite expensive when applied to transport problems. Previously we showed how adjoint-based transport sensitivities used in conjunction with gradient-based optimization algorithms can be quite effective in designing mass-efficient electron and/or proton shields in one- or two-dimensional Cartesian geometries. In this paper we extend that work to UQ and to robust design (i.e. optimization that considers uncertainties) in 2D. This consists primarily of using the sensitivities to geometric changes, originally derived for optimization, within relevant algorithms for UQ and robust design. We perform UQ analyses on previous optimized designs given some assumed manufacturing uncertainties. We also conduct a new optimization exercise that accounts for the same uncertainties. Our results show much improved computational efficiencies over previous approaches.

More Details

Characterization of 3D printed computational imaging element for use in task-specific compressive classification

Proceedings of SPIE - The International Society for Optical Engineering

Birch, Gabriel C.; Redman, Brian J.; Dagel, Amber L.; Kaehr, Bryan J.; Dagel, Daryl D.; LaCasse, Charles F.; Quach, Tu-Thach Q.; Sahakian, Meghan A.

We investigate the feasibility of additively manufacturing optical components to accomplish task-specific classification in a computational imaging device. We report on the design, fabrication, and characterization of a non-traditional optical element that physically realizes an extremely compressed, optimized sensing matrix. The compression is achieved by designing an optical element that only samples the regions of object space most relevant to the classification algorithms, as determined by machine learning algorithms. The design process for the proposed optical element converts the optimal sensing matrix to a refractive surface composed of a minimized set of non-repeating, unique prisms. The optical elements are 3D printed using a Nanoscribe, which uses two-photon polymerization for high-precision printing. We describe the design of several computational imaging prototype elements. We characterize these components, including surface topography, surface roughness, and angle of prism facets of the as-fabricated elements.

More Details

An extension of conditional point sampling to multi-dimensional transport

International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2019

Olson, Aaron J.; Vu, Emily V.

Radiation transport in stochastic media is a challenging problem type relevant for applications such as meteorological modeling, heterogeneous radiation shields, BWR coolant, and pebble-bed reactor fuel. A commonly cited challenge for methods performing transport in stochastic media is to simultaneously be accurate and efficient. Conditional Point Sampling (CoPS), a new method for transport in stochastic media, was recently shown to have accuracy comparable to the most accurate approximate methods for a common 1D benchmark set. In this paper, we use a pseudo-interface-based approach to extend CoPS to application in multi-D for Markovian-mixed media, compare its accuracy with published results for other approximate methods, and examine its accuracy and efficiency as a function of user options. CoPS is found to be the most accurate of the compared methods on the examined benchmark suite for transmittance and comparable in accuracy with the most accurate methods for reflectance and internal flux. Numerical studies examine accuracy and efficiency as a function of user parameters providing insight for effective parameter selection and further method development. Since the authors did not implement any of the other approximate methods, there is not yet a valid comparison for efficiency with the other methods.
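For intuition about the benchmark problem class, here is a sketch of chord-length sampling (CLS), one of the classic approximate methods CoPS is compared against; this is explicitly NOT the CoPS algorithm. In a 1D binary Markovian mixture, material chord lengths along a ray are exponentially distributed with the Markovian mean chord lengths, and transmittance is the ray-averaged attenuation. All parameter values are illustrative.

```python
import numpy as np

# Chord-length sampling (CLS) transmittance through a 1D binary Markovian
# slab -- a classic approximate baseline, NOT the CoPS method of the paper.

def cls_transmittance(slab_len, lam, sigma_t, n_rays=20000, seed=0):
    """Mean of exp(-optical depth) over rays through a Markovian binary slab."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_rays):
        # start in material 0 or 1 with probability proportional to mean chord
        m = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
        pos, tau = 0.0, 0.0
        while pos < slab_len:
            chord = rng.exponential(lam[m])       # sampled material chord length
            seg = min(chord, slab_len - pos)
            tau += sigma_t[m] * seg               # accumulate optical depth
            pos += seg
            m = 1 - m                             # alternate materials at interfaces
        total += np.exp(-tau)
    return total / n_rays

T = cls_transmittance(1.0, lam=(0.3, 0.7), sigma_t=(2.0, 0.1))
print(0.0 < T < 1.0)                              # -> True
```

CLS resamples the material realization independently along each ray, which is cheap but discards spatial correlation between histories; methods like CoPS aim to recover that correlation while staying efficient.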

More Details

Lessons learned from 10k experiments to compare virtual and physical testbeds

12th USENIX Workshop on Cyber Security Experimentation and Test, CSET 2019, co-located with USENIX Security 2019

Crussell, Jonathan C.; Kroeger, Thomas M.; Kavaler, David; Brown, Aaron B.; Phillips, Cynthia A.

Virtual testbeds are a core component of cyber experimentation as they allow for fast and relatively inexpensive modeling of computer systems. Unlike simulations, virtual testbeds run real software on virtual hardware which allows them to capture unknown or complex behaviors. However, virtualization is known to increase latency and decrease throughput. Could these and other artifacts from virtualization undermine the experiments that we wish to run? For the past three years, we have attempted to quantify where and how virtual testbeds differ from their physical counterparts to address this concern. While performance differences have been widely studied, we aim to uncover behavioral differences. We have run over 10,000 experiments and processed over half a petabyte of data. Complete details of our methodology and our experimental results from applying that methodology are published in previous work. In this paper, we describe our lessons learned in the process of constructing and instrumenting both physical and virtual testbeds and analyzing the results from each.

More Details

Uncertainty propagation using conditional random fields in large-eddy simulations of scramjet computations

AIAA Scitech 2019 Forum

Huan, Xun H.; Safta, Cosmin S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is crucial for attaining efficient and stable propulsion under hypersonic flight conditions. Design for well-performing scramjet engines requires accurate flow simulations in conjunction with uncertainty quantification (UQ). We advance computational methods in bringing together UQ and large-eddy simulations for scramjet computations, with a focus on the HIFiRE Direct Connect Rig combustor. In particular, we perform uncertainty propagation for spatially dependent field quantities of interest (QoIs) by treating them as random fields, and numerically compute low-dimensional Karhunen-Loève expansions (KLEs) using a finite number of simulations on non-uniform grids. We also describe a formulation and procedure to extract conditional KLEs that characterize the stochasticity induced by uncertain parameters at given designs. This is achieved by first building a single KLE for each QoI via samples drawn jointly from the parameter and design spaces, and then leveraging polynomial chaos expansions to insert input dependencies into the KLE. The ability to access conditional KLEs will be immensely useful for subsequent efforts in design optimization under uncertainty, as well as model calibration with field variable measurements.
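Computing a KLE numerically from an ensemble of field samples reduces, on a uniform grid, to a centered SVD of the snapshot matrix (the paper's non-uniform grids would additionally require quadrature weights; the synthetic ensemble below is an illustrative assumption).

```python
import numpy as np

# Minimal numerical Karhunen-Loeve expansion from field snapshots (uniform
# grid assumed; the paper handles non-uniform grids via quadrature weights).
# Each row of Y is one sampled field realization.

def fit_kle(Y, n_modes):
    """Return (mean, modes, coeffs) with Y ~ mean + coeffs @ modes."""
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    modes = Vt[:n_modes]                       # spatial KL modes (orthonormal rows)
    coeffs = U[:, :n_modes] * s[:n_modes]      # per-sample random coefficients
    return mean, modes, coeffs

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)
# synthetic ensemble: random amplitudes on two smooth structures + small noise
Y = (rng.standard_normal((200, 1)) * np.sin(np.pi * grid)
     + rng.standard_normal((200, 1)) * 0.3 * np.cos(2 * np.pi * grid)
     + 0.01 * rng.standard_normal((200, 50)))

mean, modes, coeffs = fit_kle(Y, n_modes=2)
Y_hat = mean + coeffs @ modes                  # low-dimensional reconstruction
rel_err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(rel_err < 0.05)                          # -> True
```

The conditional-KLE step in the paper then models how `coeffs` depends on the uncertain parameters and design variables, e.g. via polynomial chaos expansions.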

More Details

Estimation of inflow uncertainties in laminar hypersonic double-cone experiments

AIAA Scitech 2019 Forum

Ray, Jaideep R.; Kieweg, Sarah K.; Dinzl, Derek J.; Carnes, Brian C.; Weirs, Vincent G.; Freno, Brian A.; Howard, Micah A.; Smith, Thomas M.

We propose a probabilistic framework for assessing the consistency of an experimental dataset, i.e., whether the stated experimental conditions are consistent with the measurements provided. In case the dataset is inconsistent, our framework allows one to hypothesize and test sources of inconsistencies. This is crucial in model validation efforts. The framework relies on statistical inference to estimate experimental settings deemed untrustworthy, from measurements deemed accurate. The quality of the inferred variables is gauged by their ability to reproduce held-out experimental measurements; if the new predictions are closer to measurements than before, the cause of the discrepancy is deemed to have been found. The framework brings together recent advances in the use of Bayesian inference and statistical emulators in fluid dynamics with similarity measures for random variables to construct the hypothesis testing approach. We test the framework on two double-cone experiments executed in the LENS-XX wind tunnel and one in the LENS-I tunnel; all three have encountered difficulties when used in model validation exercises. However, the cause behind the difficulties with the LENS-I experiment is known, and our inferential framework recovers it. We also detect an inconsistency with one of the LENS-XX experiments, and hypothesize three causes for it. We check two of the hypotheses using our framework, and we find evidence that rejects them. We end by proposing that uncertainty quantification methods be used more widely to understand experiments and characterize facilities, and we cite three different methods to do so, the third of which we present in this paper.

More Details

Rock-welding materials development for deep borehole nuclear waste disposal

Materials Chemistry and Physics

Yang, Pin Y.; Wang, Yifeng; Rodriguez, Mark A.; Brady, Patrick V.; Swift, Peter N.

Various versions of deep borehole nuclear waste disposal have been proposed in the past, all of which generally require effective sealing of the borehole after waste emplacement. In a high temperature disposal mode, the sealing function will be fulfilled by melting the ambient granitic rock with waste decay heat or an external heating source, creating a melt that will encapsulate waste containers or plug a portion of the borehole above a stack of the containers. However, natural materials have certain drawbacks: high melting temperatures, inefficient consolidation, slow crystallization kinetics, sealing products that are generally porous with low mechanical strength, insufficient adhesion to the waste container surface, and a lack of flexibility for engineering controls. In this study, we showed that natural granitic materials can be purposefully engineered through chemical modifications to enhance their sealing capability for deep borehole disposal. The present work systematically explores the effect of chemical modification and crystallinity (amorphous vs. crystalline) on the melting and crystallization processes of a granitic rock system. The approach can be applied to modify granites excavated from different geological sites. Several engineered granitic materials have been explored which possess significantly lower processing and densification temperatures than natural granites. These new materials consolidate more efficiently by viscous flow and accelerated recrystallization without compromising their mechanical integrity and properties.

More Details

Sparse Sampling in Microscopy

Statistical Methods for Materials Science: The Data Science of Microstructure Characterization

Larson, K.W.; Anderson, Hyrum; Wheeler, Jason W.

This chapter considers the collection of sparse samples in electron microscopy, either by modification of the sampling methods utilized on existing microscopes, or with new microscope concepts that are specifically designed and optimized for collection of sparse samples. It explores potential embodiments of a multi-beam compressive sensing electron microscope. Sparse measurement matrices offer an advantage of efficient image recovery, since each iteration of the process becomes a simple multiplication by a sparse matrix. Electron microscopy is well suited to compressed or sparse sampling due to the difficulty of building electron microscopes that can accurately record more than one electron signal at a time. Sparse sampling in electron microscopy has been considered for dose reduction, improving three-dimensional reconstructions and accelerating data acquisition. For sparse sampling, variations of scanning transmission electron microscopy (STEM) are typically used. In STEM, the electron probe is scanned across the specimen, and the detector measurement is recorded as a function of probe location.
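The efficiency point above (each recovery iteration becomes a multiplication by a sparse matrix) can be shown in miniature. Here a sparse sampling matrix A is stored in coordinate form, and a Landweber-style iteration x ← x + a·Aᵀ(y − Ax) fits the measurements using only the stored nonzeros. The sizes, sampling pattern, and step size are all illustrative assumptions.

```python
import numpy as np

# Sparse-matrix recovery in miniature: A is stored in coordinate (COO) form,
# so each Landweber iteration x <- x + a * A^T (y - A x) touches only the
# stored nonzeros (sizes, pattern, and step size are illustrative).

rng = np.random.default_rng(4)
n_pix, n_meas, nnz_per_row = 64, 48, 4

rows = np.repeat(np.arange(n_meas), nnz_per_row)            # each measurement sums 4 pixels
cols = rng.integers(0, n_pix, size=n_meas * nnz_per_row)
vals = np.full(n_meas * nnz_per_row, 1.0 / nnz_per_row)

def matvec(x):   # y = A @ x using only stored nonzeros
    return np.bincount(rows, weights=vals * x[cols], minlength=n_meas)

def rmatvec(r):  # A.T @ r
    return np.bincount(cols, weights=vals * r[rows], minlength=n_pix)

x_true = rng.random(n_pix)
y = matvec(x_true)                                          # sparse measurements

x = np.zeros(n_pix)
for _ in range(300):                                        # Landweber iterations
    x = x + 0.3 * rmatvec(y - matvec(x))

res = np.linalg.norm(matvec(x) - y)
print(res < 0.5 * np.linalg.norm(y))
```

The per-iteration cost scales with the number of nonzeros rather than the full image size, which is the recovery-efficiency advantage the chapter attributes to sparse measurement matrices.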

More Details

From Nanofluidics to Basin‐Scale Flow in Shale: Tracer Investigations

Geophysical Monograph Series

Wang, Yifeng

Understanding fluid flow and transport in shale is of great importance to the development of unconventional hydrocarbon reservoirs and nuclear waste repositories. Tracer techniques have proven to be a useful tool for gaining such understanding. Shale is characterized by the presence of nanometer‐sized pores and the resulting extremely low permeability. Chemical species confined in nanopores could behave drastically differently from those in a bulk system and the interaction of these species with pore surfaces is much enhanced due to a high surface/fluid volume ratio, both of which could potentially affect tracer migration and chromatographic differentiation in shale. Nanoconfinement manifests the discrete nature of fluid molecules in transport, therefore enhancing mass‐dependent isotope fractionations. All these effects combined lead to a distinct set of tracer signatures that may not be observed in a conventional hydrocarbon reservoir or highly permeable groundwater aquifer system. These signatures can be used to delineate flow regimes, trace fluid sources, and quantify the rate and extent of a physical/chemical process. Such signatures can be used for the evaluation of cap rock structural integrity, the postclosure monitoring of a geologic repository, or the detection of a possible contamination in a water aquifer by a shale oil/gas extraction.

More Details

Off-axis input characterization of random vibration laboratory data for model credibility

Conference Proceedings of the Society for Experimental Mechanics Series

Blecke, Jill B.; Freymiller, J.E.; Ross, Michael R.

The goal of this work is to build model credibility of a structural dynamics model by comparing simulated responses to measured responses in random vibration environments, with limited knowledge of the true test input. Oftentimes off-axis excitations can be introduced during single axis vibration testing in the laboratory due to shaker or test fixture dynamics and interface variation. Model credibility cannot be improved by comparing predicted responses to measured responses with unknown excitation profiles. In the absence of sufficient time domain response measurements, the true multi-degree-of-freedom input cannot be exactly characterized for a fair comparison between the model and experiment. Methods exist, however, to estimate multi-degree-of-freedom (MDOF) inputs required to replicate field test data in the laboratory (Ross et al., "6-DOF Shaker Test Input Derivation from Field Test," in Proceedings of the 35th IMAC, A Conference and Exposition on Structural Dynamics, Bethel, 2017). This work focuses on utilizing one of these methods to approximately characterize the off-axis excitation present during laboratory random vibration testing. The method selects a sub-set of the experimental output spectral density matrix, in combination with the system transmissibility matrix, to estimate the input spectral density matrix required to drive the selected measurement responses. Using the estimated multi-degree-of-freedom input generated from this method, the error between simulated predictions and measured responses was significantly reduced across the frequency range of interest, compared to the error between measured responses and simulated responses generated assuming single-axis excitation.
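At a single frequency line, the input-estimation step described above can be written with a pseudo-inverse: given the transmissibility matrix H and a measured output spectral density matrix Syy, one estimate of the input spectral density is Sxx = H⁺ Syy (H⁺)ᴴ. This is a generic sketch consistent with the description, not necessarily the exact formulation of the cited method; sizes and values are illustrative.

```python
import numpy as np

# Single-frequency sketch of MDOF input PSD estimation: given transmissibility
# H (outputs per unit input) and measured output PSD matrix Syy, estimate
# Sxx = pinv(H) @ Syy @ pinv(H)^H. A generic pseudo-inverse formulation, not
# necessarily the exact method of the cited work.

def estimate_input_psd(H, Syy):
    Hp = np.linalg.pinv(H)
    return Hp @ Syy @ Hp.conj().T

# toy check: 4 response DOFs driven by 2 inputs with a known input PSD
rng = np.random.default_rng(3)
H = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
Sxx_true = np.diag([2.0, 0.5]).astype(complex)
Syy = H @ Sxx_true @ H.conj().T                # what the accelerometers "see"

Sxx_est = estimate_input_psd(H, Syy)
print(np.allclose(Sxx_est, Sxx_true))          # -> True
```

With more response DOFs than inputs and a full-rank H, the pseudo-inverse gives the least-squares input estimate; ill-conditioned transmissibilities (e.g., near antiresonances) are what make subset selection of the output matrix important in practice.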

Derivation of six degree of freedom shaker inputs using sub-structuring techniques

Conference Proceedings of the Society for Experimental Mechanics Series

Schoenherr, Tyler F.

Multi-degree of freedom testing is growing in popularity and in practice. This is largely due to its inherent benefits: it produces realistic stresses similar to those the test article experiences in its working environment, and it tests all axes at one time instead of individually. However, deriving and applying the “correct” inputs to a test has been a challenge. This paper explores a recently developed theory for deriving rigid body accelerations as inputs to a test article through sub-structuring techniques. The theory develops a transformation matrix that separates the complete system dynamics into two sub-structures, the test article and the next level assembly. The transformation does this by segregating the test article’s fixed base modal coordinates and the next level assembly’s free modal coordinates. This transformation provides insight into the damage that the test article acquires from its excited fixed base shapes and into how to properly excite the test article by observing the next level assembly’s rigid body motion. This paper examines using the next level assembly’s rigid body motion as a direct input in a multi-degree of freedom test to excite the test article’s fixed base shapes in the same way as the working environment does.

Interface reduction on Hurty/Craig-Bampton substructures with frictionless contact

Conference Proceedings of the Society for Experimental Mechanics Series

Hughes, Patrick J.; Scott, Wesley; Wu, Wensi; Kuether, Robert J.; Allen, Matthew S.; Tiso, Paolo

Contact in structures with mechanical interfaces can significantly influence the system dynamics, such that the energy dissipation and resonant frequencies vary as a function of the response amplitude. Finite element analysis is commonly used to study the physics of such problems, particularly when examining the local behavior at the interfaces. These high-fidelity, nonlinear models are computationally expensive to run with time-stepping solvers due to their large mesh densities at the interface and the high cost of updating the tangent operators. Hurty/Craig-Bampton substructuring and interface reduction techniques are commonly utilized to reduce computation time for jointed structures. In the past, these methods have only been applied to substructures rigidly attached to one another, resulting in a linear model. The present work explores the performance of a particular interface reduction technique (system-level characteristic constraint modes) on a nonlinear model with node-to-node contact for a benchmark structure consisting of two C-shape beams bolted together at each end.

Pushing 3D Scanning Laser Doppler Vibrometry to Capture Time Varying Dynamic Characteristics

Conference Proceedings of the Society for Experimental Mechanics Series

Witt, Bryan; Zwink, Brandon R.

3D scanning laser Doppler vibrometry (LDV) systems are well known for modal testing of articles whose excited dynamic properties are time-invariant over the duration of all scans. However, several potential test situations can arise in which the modal parameters of a given article will change over the course of a typical LDV scan. One such instance is considered in this work, in which the internal state of a thermal battery changes at different rates over its activation lifetime. These changes substantially alter its dynamic properties as a function of time. Due to the extreme external temperatures of the battery, non-contact LDV was the preferred method of response measurement. However, scanning such an object is not optimal due to the non-simultaneous nature of the scanning LDV when capturing a full set of data. Nonetheless, by carefully considering the test configuration, hardware and software setup, and data acquisition and processing methods, it was possible to utilize a scanning LDV system to collect sufficient information to provide a measure of the time-varying dynamic characteristics of the test article. This work demonstrates the techniques used, presents the acquired results, and discusses the technical issues encountered.

Numerical Modeling of an Enclosed Cylinder

Conference Proceedings of the Society for Experimental Mechanics Series

Schultz, Ryan S.; Shepherd, Micah

Finite element models are regularly used in many disciplines to predict the dynamic behavior of a structure under certain loads and subject to various boundary conditions, in particular when analytical models cannot be used due to geometric complexity. One such example is a structure with an entrained fluid cavity. To assist an experimental study of the acoustoelastic effect, numerical studies of an enclosed cylinder were performed to design the test hardware. With a system that demonstrates acoustoelastic coupling, it was then desired to decouple the structure from the fluid by modifying either the fluid or the structure. In this paper, simulation is used to apply various changes and observe the effects on the structural response in order to choose an effective decoupling approach for the experimental study.

Experimental Demonstration of a Tunable Acoustoelastic System

Conference Proceedings of the Society for Experimental Mechanics Series

Fowler, Deborah; Lopp, Garrett; Bansal, Dhiraj; Schultz, Ryan S.; Brake, Matthew; Shepherd, Micah

Acoustoelastic coupling occurs when a hollow structure’s in-vacuo mode aligns with an acoustic mode of the internal cavity. The impact of this coupling on the total dynamic response of the structure can be quite severe depending on the similarity of the modal frequencies and shapes. Typically, acoustoelastic coupling is not a design feature, but rather an unfortunate result that must be remedied, as modal tests are often used to correlate or validate finite element models of the uncoupled structure. Here, however, a test structure is intentionally designed such that multiple structural and acoustic modes are well-aligned, resulting in a coupled system that allows for an experimental investigation. Coupling in the system is first identified using a measure termed the magnification factor, and the structural-acoustic interaction for a target mode is then measured. Modifications to the system demonstrate the dependency of the coupling with respect to changes in the mode shape and frequency proximity. This includes an investigation of several practical techniques used to decouple the system by altering the internal acoustic cavity, as well as the structure itself. Of these, acoustic absorption material effectively decoupled the structure, while structural modifications, in their current form, proved unsuccessful. The most effective acoustic absorption method consisted of randomly distributing typical household paper towels in the acoustic cavity; a method that introduces negligible mass to the structural system with the additional advantages of being inexpensive and readily available.

Probability Distribution of von Mises Stress in the Presence of Pre-load

Conference Proceedings of the Society for Experimental Mechanics Series

Segalman, Daniel J.; Reese, Garth M.; Field, Richard V.

Random vibration under preload is important in multiple endeavors, including those involving launch and re-entry. In these days of increasing reliance on predictive simulation, it is important to address this problem in a probabilistic manner; this is the appropriate flavor of quantification of margin and uncertainty in the context of random vibration. One of the quantities of particular interest in design is the probability distribution of von Mises stress. Some methods in the literature begin to address this problem, but they are generally extremely restricted; astonishingly, the most common restriction of these techniques is that they assume zero-mean loads. The work presented here employs modal tools to suggest an approach for estimating the probability distribution of von Mises stress in a linear structure for the case of multiple independent Gaussian random loadings combined with a nonzero pre-load.
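The nonzero-mean case can also be explored by brute force. The following Monte Carlo sketch illustrates the problem setup only, not the authors' modal approach; all stress means, covariances, and the allowable are invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

def von_mises(s):
    """von Mises stress from components s = [sxx, syy, szz, sxy, syz, szx]."""
    sxx, syy, szz, sxy, syz, szx = np.moveaxis(s, -1, 0)
    return np.sqrt(0.5 * ((sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2)
                   + 3.0 * (sxy**2 + syz**2 + szx**2))

# Invented numbers: nonzero mean = static pre-load, covariance = random vibration.
mean = np.array([80.0, 0.0, 0.0, 10.0, 0.0, 0.0])   # MPa
cov = np.diag([25.0, 16.0, 16.0, 4.0, 4.0, 4.0])    # MPa^2

# Sample the Gaussian stress components and evaluate the von Mises stress.
samples = rng.multivariate_normal(mean, cov, size=200_000)
svm = von_mises(samples)

# Empirical exceedance probability for a (made-up) allowable of 95 MPa.
p_exceed = (svm > 95.0).mean()
```

Note that even with Gaussian component stresses, the von Mises stress itself is not Gaussian, which is why its distribution must be estimated rather than read off directly.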

Time-resolved planar velocimetry of the supersonic wake of a wall-mounted hemisphere

AIAA Journal

Beresh, Steven J.; Henfling, John F.; Spillers, Russell W.

Time-resolved particle image velocimetry was conducted at 40 kHz using a pulse-burst laser in the supersonic wake of a wall-mounted hemisphere. Velocity fields suggest a recirculation region with two lobes, in which flow moves away from the wall near the centerline and recirculates back toward the hemisphere off the centerline, contrary to transonic configurations. Spatio-temporal cross-correlations and conditional ensemble averages relate the characteristic behavior of the unsteady shock motion to the flapping of the shear layer. At Mach 1.5, oblique shocks develop, associated with vortical structures in the shear layer and convect downstream in tandem; a weak periodicity is observed. Shock motion at Mach 2.0 appears somewhat different, wherein multiple weak disturbances propagate from shear-layer turbulent structures to form an oblique shock that ripples as these vortices pass by. Bifurcated shock feet coalesce and break apart without evident periodicity. Power spectra show a preferred frequency of shear-layer flapping and shock motion for Mach 1.5, but at Mach 2.0, a weak preferred frequency at the same Strouhal number of 0.32 is found only for oblique shock motion and not shear-layer unsteadiness.

Assessing the relative importance of flame regimes in Raman/Rayleigh line measurements of turbulent lifted flames

Proceedings of the Combustion Institute

Hartl, S.; Van Winkle, R.; Geyer, D.; Dreizler, A.; Magnotti, G.; Hasse, C.; Barlow, R.S.

Understanding and quantifying the relative importance of premixed and non-premixed reaction zones within turbulent partially premixed flames is an important issue for multi-regime combustion. In the present work, the recently developed method of gradient-free regime identification (GFRI) is applied to instantaneous 1D Raman/Rayleigh measurements of temperature and major species from two turbulent lifted methane/air flames. Local premixed and non-premixed reaction zones are identified using criteria based on the mixture fraction, the chemical explosive mode, and the heat release rate, the latter two being calculated from an approximation of the full thermochemical state of each measured sample. A chemical mode (CM) zero-crossing is a previously documented marker for a premixed reaction zone. Results from the lifted flames show strong correlations among the mixture fraction at the CM zero-crossing, the magnitude of the change in CM at the zero-crossing, and the local heat release rate at the CM zero-crossing relative to the maximum heat release rate. The trends are confirmed through a comparable analysis of numerical simulations of two laminar triple flames. These newly documented trends are associated with the transition from dominantly premixed flame structures to dominantly non-premixed flame structures. The methods introduced for assessing the relative importance of local premixed and non-premixed reaction zones have potential for application to a broad range of turbulent flames.

Space-time least-squares Petrov-Galerkin projection for nonlinear model reduction

SIAM Journal on Scientific Computing

Choi, Youngsoo; Carlberg, Kevin T.

This work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply (Petrov-)Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time Gauss-Newton with Approximated Tensors (GNAT) variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model-reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include a reduction of both the spatial and temporal dimensions of the dynamical system, and a priori error bounds that bound the solution error by the best space-time approximation error and whose stability constants exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy for a fixed spatio-temporal discretization.
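The notion of a space-time trial subspace can be illustrated with a toy example. The sketch below substitutes random training trajectories and a plain SVD for the tensor decompositions, weighted norm, and hyper-reduction of the actual method; it only shows building one basis over space and time jointly and fitting a trajectory to it by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_t, n_train = 30, 20, 8   # spatial DOFs, time steps, training trajectories

# Stand-in training data; a real use would collect full-order-model solutions.
trajs = np.cumsum(rng.standard_normal((n_train, n_x, n_t)), axis=2)

# Vectorize each space-time trajectory and build a low-dimensional trial subspace.
snapshots = trajs.reshape(n_train, -1).T        # shape (n_x * n_t, n_train)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :4]                                  # space-time basis, k = 4

# Least-squares fit of a (perturbed) trajectory in the space-time subspace.
x_new = trajs[0].ravel() + 0.01 * rng.standard_normal(n_x * n_t)
y_hat, *_ = np.linalg.lstsq(Phi, x_new, rcond=None)
x_rec = Phi @ y_hat                             # reduced-order reconstruction

rel_err = np.linalg.norm(x_rec - x_new) / np.linalg.norm(x_new)
```

The key structural point is that `Phi` compresses space and time together, so the reduced coordinates `y_hat` describe an entire trajectory at once rather than a state advanced step by step.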

Dynamic compressive strength of rock salts

International Journal of Rock Mechanics and Mining Sciences

Bauer, Stephen J.; Song, Bo S.; Sanborn, Brett S.

Mining rock salt results in subsurface damage, which may affect the strength because of applied stress, anisotropy, and deformation rate. In this study, we used a Kolsky compression bar to measure the high-strain-rate response of bedded and domal salt at strain rates up to approximately 50 s−1, in directions parallel and perpendicular to the bedding or foliation, depending on rock salt type. Both types of salt exhibited a negative strain rate effect, wherein a decrease in strength was observed with increasing strain rate compared to the strength measured in the quasi-static regime. Both materials exhibited strength anisotropy. Fracturing and microfracturing were the dominant deformation mechanisms. High pore pressures and frictional heating due to the high loading rate may have contributed to the reduction in strength.

Integrated membrane-electrode-assembly photoelectrochemical cell under various feed conditions for solar water splitting

Journal of the Electrochemical Society

Walczak, Karl A.

Photoelectrochemical (PEC) water splitting has the potential to significantly reduce the costs associated with electrochemical hydrogen production through the direct utilization of solar energy. Many PEC cells utilize liquid electrolytes that are detrimental to the durability of the photovoltaic (PV) or photoactive materials at the heart of the device. The membrane-electrode-assembly (MEA)-style PEC cell presented herein is a deviation from that paradigm: a solid electrolyte is used, which allows the use of a water vapor feed. The result is a corresponding reduction in the amount of liquid and electrolyte contact with the PV, thereby opening the possibility of longer PEC device lifetimes. In this study, we demonstrate the operation of a liquid- and vapor-fed PEC device utilizing a commercial III-V photovoltaic that achieves a solar-to-hydrogen (STH) efficiency of 7.5% (12% as a PV-electrolyzer). While device longevity using liquid water was limited to less than 24 hours, replacement of the reactant with water vapor permitted 100 hours of continuous operation under steady-state conditions and diurnal cycling. Key findings include the observations that exposure of the PV to bulk water or water vapor must be minimized, and that operating in the mass-transport-limited regime gave preferable performance.

Mesoscale electrochemical performance simulation of 3D interpenetrating lithium-ion battery electrodes

Journal of the Electrochemical Society

Trembacki, Bradley T.; Duoss, Eric; Oxberry, Geoffrey; Stadermann, Michael; Murthy, Jayathi

Advancements in micro-scale additive manufacturing techniques have made it possible to fabricate intricate architectures including 3D interpenetrating electrode microstructures. A mesoscale electrochemical lithium-ion battery model is presented and implemented in the PETSc software framework using a finite volume scheme. The model is used to investigate interpenetrating 3D electrode architectures that offer potential energy density and power density improvements over traditional particle bed battery geometries. Using the computational model, a variety of battery electrode geometries are simulated and compared across various battery discharge rates and length scales to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density vs. power density relationships of the electrode microstructures are compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle-bed electrode designs are predicted, and electrode microarchitectures derived from minimal surfaces are shown to be superior under a minimum feature size constraint, especially when subjected to high discharge currents. An average Thiele modulus formulation is presented as a back-of-the-envelope calculation to predict the performance trends of microbattery electrode geometries.
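A back-of-the-envelope Thiele-modulus estimate of this kind can be written out in a few lines. For a planar geometry with first-order kinetics the classical forms are φ = L·√(k/D) and effectiveness factor η = tanh(φ)/φ; the numerical values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def thiele_modulus(L, k, D):
    """phi = L * sqrt(k / D): reaction rate vs. diffusion over length scale L."""
    return L * np.sqrt(k / D)

def effectiveness_factor(phi):
    """eta = tanh(phi) / phi for planar geometry: utilized fraction of the electrode."""
    return np.tanh(phi) / phi

# Illustrative electrode parameters (assumed, not from the paper).
L = 5e-6    # characteristic half-width of an electrode feature, m
k = 1.0     # effective first-order reaction rate constant, 1/s
D = 1e-10   # effective diffusivity in the electrode, m^2/s

phi = thiele_modulus(L, k, D)     # 0.5 for these numbers
eta = effectiveness_factor(phi)   # close to 1: nearly fully utilized electrode
```

A small φ (η near 1) indicates a reaction-limited, well-utilized electrode; a large φ (η « 1) indicates transport limitation, consistent with fine-featured architectures performing better at high discharge currents.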

Defending our public biological databases as a global critical infrastructure

Frontiers in Bioengineering and Biotechnology

Caswell, Jacob; Gans, Jason D.; Generous, Nicholas; Hudson, Corey M.; Merkley, Eric; Johnson, Curtis; Oehmen, Christopher; Omberg, Kristin; Purvine, Emilie; Taylor, Karen; Ting, Christina L.; Wolinsky, Murray; Xie, Gary

Progress in modern biology is being driven, in part, by the large amounts of freely available data in public resources such as the International Nucleotide Sequence Database Collaboration (INSDC), the world's primary database of biological sequence (and related) information. INSDC and similar databases have dramatically increased the pace of fundamental biological discovery and enabled a host of innovative therapeutic, diagnostic, and forensic applications. However, as high-value, openly shared resources with a high degree of assumed trust, these repositories share compelling similarities to the early days of the Internet. Consequently, as public biological databases continue to increase in size and importance, we expect that they will face the same threats as undefended cyberspace. There is a unique opportunity, before a significant breach and loss of trust occurs, to ensure they evolve with quality and security as a design philosophy rather than costly "retrofitted" mitigations. This Perspective surveys some potential quality assurance and security weaknesses in existing open genomic and proteomic repositories, describes methods to mitigate the likelihood of both intentional and unintentional errors, and offers recommendations for risk mitigation based on lessons learned from cybersecurity.

Single-molecule polarization microscopy of DNA intercalators sheds light on the structure of S-DNA

Science Advances

Backer, Adam B.; Biebricher, Andreas S.; King, Graeme A.; Wuite, Gijs J.L.; Heller, Iddo; Peterman, Erwin J.G.

DNA structural transitions facilitate genomic processes, mediate drug-DNA interactions, and inform the development of emerging DNA-based biotechnology such as programmable materials and DNA origami. While some features of DNA conformational changes are well characterized, fundamental information such as the orientations of the DNA base pairs is unknown. Here, we use concurrent fluorescence polarization imaging and DNA manipulation experiments to probe the structure of S-DNA, an elusive, elongated conformation that can be accessed by mechanical overstretching. To this end, we directly quantify the orientations and rotational dynamics of fluorescent DNA-intercalated dyes. At extensions beyond the DNA overstretching transition, intercalators adopt a tilted (θ ≈ 54°) orientation relative to the DNA axis, distinct from the nearly perpendicular orientation (θ ≈ 90°) normally assumed at lower extensions. These results provide the first experimental evidence that S-DNA has substantially inclined base pairs relative to those of the standard (Watson-Crick) B-DNA conformation.

Exploring the limits of bottom-up gold filling to fabricate diffraction gratings

Journal of the Electrochemical Society

Hollowell, Andrew E.; Arrington, Christian L.; Josell, D.; Ambrozik, S.; Williams, M.E.; Muramoto, S.

Gold deposition on rotating disk electrodes, Bi3+ adsorption on planar Au films and superconformal Au filling of trenches up to 45 μm deep are examined in Bi3+-containing Na3Au(SO3)2 electrolytes with pH between 9.5 and 11.5. Higher pH is found to increase the potential-dependent rate of Bi3+ adsorption on planar Au surfaces, shortening the incubation period that precedes active Au deposition on planar surfaces and bottom-up filling in patterned features. Decreased contact angles between the Au seeded sidewalls and bottom-up growth front also suggest improved wetting. The bottom-up filling dynamic in trenches is, however, lost at pH 11.5. The impact of Au concentration, 80 mmol/L versus 160 mmol/L Na3Au(SO3)2, on bottom-up filling is examined in trenches up to ≈ 210 μm deep with aspect ratio of depth/width ≈ 30. The microstructures of void-free, bottom-up filled trench arrays used as X-ray diffraction gratings are characterized by scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD), revealing marked spatial variation of the grain size and orientation within the filled features.

Recovery and calibration of legacy analog data from the Leo Brady Seismic Network for the Source Physics Experiment

Young, Brian A.; Abbott, Robert A.

The Leo Brady Seismic Network (LBSN) was established in 1960 by Sandia National Laboratories for monitoring underground nuclear tests (UGTs) at the Nevada Test Site (NTS), renamed the Nevada National Security Site (NNSS) in 2010. The LBSN has existed in various configurations, but it has generally comprised four to six stations at regional distances from the NNSS with evenly spaced azimuthal coverage. Between 1962 and the early 1980s, the LBSN, together with a sister network operated by Lawrence Livermore National Laboratory, was the most comprehensive U.S. source of regional seismic data on UGTs. During the pre-digital era, LBSN data were transmitted as frequency-modulated (FM) audio over telephone lines to the NTS and recorded in analog on hi-fi 8-track AMPEX tapes. These tapes have been stored for decades in temperature-stable buildings or bunkers on the NNSS and Kirtland Air Force Base in Albuquerque, NM, and contain the sole record of this irreplaceable data from the analog era; full waveforms of UGTs during this time were never routinely converted to digital form. Over the past few years we have been developing a process to recover and calibrate data from these tapes, converting them from FM audio to digital waveforms in ground motion units. The calibration of legacy data from the LBSN is still ongoing. To date, we have digitized tapes from 592 separate UGTs. As a proof of concept, we calibrated data from the BOXCAR event.

The 2018 Nonlinear Mechanics and Dynamics Research Institute

Kuether, Robert J.; Allensworth, Brooke M.; Smith, J.A.; Peebles, Diane E.

The 2018 Nonlinear Mechanics and Dynamics (NOMAD) Research Institute was successfully held from June 18 to August 2, 2018. NOMAD brings together participants with diverse technical backgrounds to work in small teams to cultivate new ideas and approaches in engineering mechanics and dynamics research. NOMAD provides an opportunity for researchers, especially early career researchers, to develop lasting collaborations that go beyond what can be established from the limited interactions at their institutions or at annual conferences. A total of 17 students came to Albuquerque, New Mexico to participate in the seven-week program held at the Mechanical Engineering building on the University of New Mexico campus. The students collaborated on one of six research projects developed by mentors from Sandia National Laboratories, the University of New Mexico, and other academic institutions. In addition to the research activities, the students attended weekly technical seminars, took various tours, and socialized at off-hour events including an Albuquerque Isotopes baseball game. At the end of the summer, the students gave final technical presentations on their research findings. Many of the research discoveries made at NOMAD are published as proceedings at technical conferences and have direct alignment with the critical mission work performed at Sandia.

November 2016 HERMES Outdoor Shot Series 10268-313: Courtyard Dosimetry and Parametric Fits

Cartwright, Keith C.; Yee, Benjamin T.; Pointon, Timothy D.; Gooding, Renee L.

A series of outdoor shots was conducted at the HERMES III facility in November 2016. These experiments had several goals, one of which was an improved understanding of the courtyard radiation environment. Previous work had developed parametric fits to the spatial and temporal dose rate in the area of interest. This work explores the inter-shot variation of the dose in the courtyard, updated fit parameters, and an improved dose rate model that better captures high frequency content. The parametric fit for the spatial profile is found to be adequate in the far field; however, the near-field radiation dose is still not well understood.

Low Energy Photon Filter Box Optimization Study for the Gamma Irradiation Facility (GIF)

Depriest, Kendall D.

As a follow-up to results presented at the 16th International Symposium on Reactor Dosimetry, a new set of low energy photon filter box designs was evaluated for potential testing at the Gamma Irradiation Facility in Sandia National Laboratories' Technical Area V. The goal of this filter box design study is to produce the highest fidelity gamma ray test environment for electronic parts. Using Monte Carlo coupled photon/electron transport, approximately a dozen different designs were evaluated for their effectiveness in reducing the dose enhancement in a silicon sensor. The completion of this study provides the Radiation Metrology Laboratory staff with a starting point for experimental test plans that could lead to improvement in the gamma ray test environment at the Gamma Irradiation Facility.

Codes and Standards Update January 2019

Conover, David R.

The goal of the DOE OE Energy Storage System Safety Roadmap is to foster confidence in the safety and reliability of energy storage systems. There are three interrelated objectives to support the realization of that goal: research, codes and standards (C/S), and communication/coordination. The objective focused on C/S is "to apply research and development to support efforts that are focused on ensuring that codes and standards are available to enable the safe implementation of energy storage systems in a comprehensive, non-discriminatory and science-based manner."
