This paper describes an efficient reverse-mode differentiation algorithm for contraction operations for arbitrary and unconventional tensor network topologies. The approach leverages the tensor contraction tree of Evenbly and Pfeifer (2014), which provides an instruction set for the contraction sequence of a network. We show that this tree can be efficiently leveraged for differentiation of a full tensor network contraction using a recursive scheme that exploits (1) the bilinear property of contraction and (2) the property that trees have a single path from root to leaves. While differentiation of tensor-tensor contraction is already possible in most automatic differentiation packages, we show that exploiting these two additional properties in the specific context of contraction sequences can improve efficiency. Following a description of the algorithm and a computational complexity analysis, we investigate its utility for gradient-based supervised learning for low-rank function recovery and for fitting real-world unstructured datasets. We demonstrate improved performance over alternating least-squares optimization approaches and the capability to handle heterogeneous and arbitrary tensor network formats. When compared to alternating minimization algorithms, we find that the gradient-based approach requires a smaller oversampling ratio (number of samples relative to the number of model parameters) for recovery. This increased efficiency extends to fitting unstructured data of varying dimensionality and when employing a variety of tensor network formats. Here, we show improved learning using the hierarchical Tucker method over the tensor-train in high-dimensional settings on a number of benchmark problems.
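A minimal sketch of the kind of recursion the abstract describes, assuming a binary contraction tree whose internal nodes store einsum specifications (illustrative only, not the paper's implementation; it also assumes no internal traces, so the simple einsum adjoint rule applies):

```python
# Minimal sketch (not the paper's code): reverse-mode differentiation of a
# binary tensor contraction tree, exploiting bilinearity of contraction.
# Assumes every index of each operand also appears in the other operand or
# in the output (no internal traces), so the einsum adjoint rule below holds.
import numpy as np


class Node:
    def __init__(self, spec=None, left=None, right=None, tensor=None, name=None):
        self.spec = spec          # e.g. "ij,jk->ik" for internal nodes
        self.left, self.right = left, right
        self.tensor = tensor      # set on leaves; filled in by forward() otherwise
        self.name = name          # leaf label


def forward(node):
    """Contract the tree bottom-up, caching intermediates at each node."""
    if node.tensor is None:
        node.tensor = np.einsum(node.spec, forward(node.left), forward(node.right))
    return node.tensor


def backward(node, adjoint, grads):
    """Push the adjoint down the single root-to-leaf path of each leaf."""
    if node.spec is None:                      # leaf: accumulate gradient
        grads[node.name] = grads.get(node.name, 0) + adjoint
        return
    ins, out = node.spec.split("->")
    a, b = ins.split(",")
    # Bilinearity: d(out)/d(left) contracts the adjoint with the right operand.
    backward(node.left,  np.einsum(f"{out},{b}->{a}", adjoint, node.right.tensor), grads)
    backward(node.right, np.einsum(f"{out},{a}->{b}", adjoint, node.left.tensor), grads)


# Example: scalar loss L = sum over (A @ B @ C), gradients w.r.t. A, B, C.
A, B, C = np.random.rand(3, 4), np.random.rand(4, 5), np.random.rand(5, 2)
tree = Node("ik,kl->il",
            Node("ij,jk->ik", Node(tensor=A, name="A"), Node(tensor=B, name="B")),
            Node(tensor=C, name="C"))
out = forward(tree)
grads = {}
backward(tree, np.ones_like(out), grads)       # adjoint of sum() is all ones
```

Because each leaf has a single path to the root, every leaf's adjoint is assembled by reusing the intermediates cached along that path, which is the source of the efficiency gain claimed above.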
A crucial component of field testing is the utilization of numerical models to better understand the system and the experimental data being collected. Meshing and modeling field tests is a complex and computationally demanding problem. Hexahedral elements cannot always reproduce experimental dimensions, leading to grid-orientation or geometric errors. Voronoi meshes can match complex geometries without sacrificing orthogonality. As a result, here we present a high-resolution 3D numerical study for the BATS heater test at the WIPP that compares a standard non-deformed Cartesian mesh and a Voronoi mesh against field data collected during a salt heater experiment.
This report describes recommended abuse testing procedures for rechargeable energy storage systems (RESSs) for electric vehicles. This report serves as a revision to the USABC Electrical Energy Storage System Abuse Test Manual for Electric and Hybrid Electric Vehicle Applications (SAND99-0497).
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray Computed Tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but lacks spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratios in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen for their k-lines, which are positioned within distinct energy intervals of interest and are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high transmittance of X-rays. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 µm diameter of the impinging electron beam in the X-ray tube. This effectively shrinks the X-ray focal spot in the selected energy bands. Once fabricated, our transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
Measurements of gas-phase pressure and temperature in hypersonic flows are important to understanding fluid–structure interactions on vehicle surfaces, and to develop compressible flow turbulence models. To achieve this measurement capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied at Sandia National Laboratories’ hypersonic wind tunnel. After excitation of rotational Raman transitions by a broadband femtosecond laser pulse, two probe pulses are used: one at an early time where the collisional environment has largely not affected the Raman coherence, and another at a later time after the collisional environment has led to significant J-dependent dephasing of the Raman coherence. CARS spectra from the early probe are fit for temperature, while the later CARS spectra are fit for pressure. Challenges related to implementing fs CARS in cold-flow hypersonic facilities are discussed. Excessive fs pump energy can lead to flow perturbations. The output of a second-harmonic bandwidth compressor (SHBC) is spectrally filtered using a volume Bragg grating to provide the narrowband ps probe pulses and enable single-shot CARS measurements at 1 kHz. Measurements are demonstrated at temperatures and pressures relevant to cold-flow hypersonic wind tunnels in a low-pressure cryostat with an initial demonstration in the hypersonic wind tunnel.
The focus of this study is on spectral equivalence results for higher-order tensor product finite elements in the H(curl), H(div), and L2 function spaces. For certain choices of the higher-order shape functions, the resulting mass and stiffness matrices are spectrally equivalent to those for an assembly of lowest-order edge-, face- or interior-based elements on the associated Gauss–Lobatto–Legendre (GLL) mesh.
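For reference, "spectrally equivalent" here is meant in the usual sense: two families of symmetric positive (semi)definite matrices $A_h$ and $B_h$ are spectrally equivalent if there exist constants $0 < c_1 \le c_2$, independent of the mesh size and polynomial order, such that
$$c_1\, x^{\mathsf T} B_h x \;\le\; x^{\mathsf T} A_h x \;\le\; c_2\, x^{\mathsf T} B_h x \quad \text{for all } x,$$
so that, for example, the lowest-order assembly on the GLL mesh can serve as a uniform preconditioner for the higher-order mass and stiffness matrices.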
The Arroyo Seco Improvement Program (ASIP) is intended to provide active channel improvements and stream zone management activities that will reduce current flood and erosion risk while providing additional and improved habitat for critical species that may use the Arroyo Seco at the United States Department of Energy (DOE), Sandia National Laboratories, California (SNL/CA) location. The objectives of the ASIP are to: correct existing channel stability problems associated with existing arroyo structures (i.e., bridges, security grates, utility crossings, and drain structures), correct bank erosion and provide protection against future erosion, reduce the risk of future flooding, and provide habitat improvement and creation of a mitigation credit for site development and management activities.
Many teams struggle to adapt and right-size software engineering best practices for quality assurance to fit their context. Introducing software quality is not usually framed in a way that motivates teams to take action, thus resulting in it becoming a "check the box for compliance" activity instead of a cultural practice that values software quality and the effort to achieve it. When and how can we provide effective incentives for software teams to adopt and integrate meaningful and enduring software quality practices? We explored this question through a persona-based ideation exercise at the 2021 Collegeville Workshop on Scientific Software in which we created three unique personas that represent different scientific software developer perspectives.
Geothermal energy has been underutilized in the U.S., primarily due to the high cost of drilling in the harsh environments encountered during the development of geothermal resources. Drilling depths can approach 5,000 m with temperatures reaching 170 °C. In situ geothermal fluids are up to ten times more saline than seawater and highly corrosive, and hard rock formations often exceed 240 MPa compressive strength. This combination of extreme conditions pushes the limits of most conventional drilling equipment. Furthermore, enhanced geothermal systems are expected to reach depths of 10,000 m and temperatures more than 300 °C. To address these drilling challenges, Sandia developed a proof-of-concept tool called the auto indexer under an annual operating plan task funded by the Geothermal Technologies Program (GTP) of the U.S. Department of Energy Geothermal Technologies Office. The auto indexer is a relatively simple, elastomer-free motor that was shown previously to be compatible with pneumatic hammers in bench-top testing. Pneumatic hammers can improve penetration rates and potentially reduce drilling costs when deployed in appropriate conditions. The current effort, also funded by DOE GTP, increased the technology readiness level of the auto indexer, producing a scaled prototype for drilling larger diameter boreholes using pneumatic hammers. The results presented herein include design details, modeling and simulation results, and testing results, as well as background on percussive hammers and downhole rotation.
The Sandia Optical Fringe Analysis Slope Tool (SOFAST) is a tool that has been developed at Sandia to measure the surface slope of concentrating solar power optics. This tool has largely remained of research quality over the past few years. Since SOFAST is important to ongoing tests happening at Sandia as well as of interest to others outside Sandia, there is a desire to bring SOFAST up to professional software standards. The goal of this effort was to make progress in several broad areas including: code quality, sample data collection, and validation and testing. During the course of this effort, much progress was made in these areas. SOFAST is now a much more professional-grade tool. There are, however, some areas of improvement that could not be addressed in the timeframe of this work and will be addressed in the continuation of this effort.
Sandia provided technical assistance to Kit Carson Electric Cooperative (KCEC) to assess the technical merits of a proposed community resilience microgrid project in the Village of El Rito, New Mexico (NM). The project includes a proposed community resilience microgrid in the Village of El Rito, NM, around the campus of Northern New Mexico College (NNMC). A conceptual microgrid analysis was performed, considering both a campus and a community-wide approach. The analysis results provided conceptual microgrid configurations, optimized according to the performance metrics defined. The campus microgrid was studied independently, and many conceptual microgrid solutions were provided that met the performance requirements. Because the existing 1.5 MW PV system on campus far exceeds the simulated campus peak load and energy demand, a small battery installation was deemed sufficient to support the campus microgrid goals. Following the analysis and consultation, it was determined that the core Resilient El Rito team will need to further investigate the results for additional economic and environmental considerations to continue toward the best approach for their goals and needs.
Incorrect modeling of control characteristics for inverter-based resources (IBRs) can affect the accuracy of electric power system studies. In many distribution system contexts, the control settings for behind-the-meter (BTM) IBRs are unknown. This paper presents an efficient method for selecting a small number of time series samples from net load meter data that can be used to reconstruct or classify the control settings of BTM IBRs. Sparse approximation techniques are used to select the time series samples that make the inversion of a matrix of candidate responses as well conditioned as possible. We verify these methods on 451 actual advanced metering infrastructure (AMI) datasets from loads with BTM IBRs. Selecting 60 samples from 15-minute granularity time series data, we recover BTM control characteristics with a mean error of less than 0.2 kVAR.
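A toy illustration of the selection idea (a naive greedy conditioning heuristic with assumed data shapes, not the sparse approximation technique used in the paper):

```python
# Toy illustration (not the paper's algorithm): greedily pick time-series
# samples (rows of a candidate-response matrix) so that the resulting
# submatrix is as well conditioned as possible for inversion.
import numpy as np


def select_samples(responses, n_select):
    """responses: (n_times, n_settings) matrix of candidate IBR responses."""
    chosen = []
    remaining = list(range(responses.shape[0]))
    for _ in range(n_select):
        best_row, best_cond = None, np.inf
        for r in remaining:
            cond = np.linalg.cond(responses[chosen + [r], :])
            if cond < best_cond:
                best_row, best_cond = r, cond
        chosen.append(best_row)
        remaining.remove(best_row)
    return chosen


# Example with synthetic data: 96 candidate 15-minute samples, 6 unknown settings.
rng = np.random.default_rng(0)
R = rng.standard_normal((96, 6))
rows = select_samples(R, 10)   # indices of the 10 most informative samples
```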
As presented above, because similar existing DOE-managed SNF (DSNF) from previous reactors has been evaluated for disposal pathways, we use this knowledge and experience as a broad reference point for the initial technical bases for preliminary dispositioning of potential AR SNF. The strategy for developing fully formed gap analyses for AR SNF entails first obtaining all the defining characteristics of the AR SNF waste stream from the AR developers. Utilizing specific and accurate information and data to develop the potential disposal inventory to be evaluated is a key starting principle for success. Once the AR SNF waste streams are defined, the initial assessments would be based on comparison to appropriate existing SNF and waste forms previously analyzed (prior experience) to determine the feasibility of direct disposal, or the need for further evaluation due to differences specific to the AR SNF. Assessments of criticality potential and controls would also be performed to identify any R&D gaps in that regard. Although some AR SNF may need additional treatment for waste form development, these aspects may also be constrained and evaluated within the context of disposal options, including detailed gap analysis to identify further R&D activities to close the gaps.
We present a procedure for randomly generating realistic steady-state contingency scenarios based on historical outage data from a particular event. First, we divide generation into classes and fit a probability distribution of outage magnitude for each class. Second, we provide a method for randomly synthesizing generator resilience levels in a way that preserves the data-driven probability distributions of outage magnitude. Finally, we devise a simple method of scaling the storm effects based on a single global parameter. We apply our methods using historical data from Winter Storm Uri to simulate contingency events for the ACTIVSg2000 synthetic grid on the footprint of Texas.
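The three steps can be sketched as follows (a hedged illustration with placeholder data and an assumed beta-distribution fit; the paper's fitted distributions and scaling rule may differ):

```python
# Hedged sketch of the general workflow: class-specific outage-magnitude
# distributions, per-generator sampling, and a single global severity scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 1) Fit an outage-magnitude distribution per generation class from historical data.
historical = {"gas": rng.beta(2, 5, 200), "wind": rng.beta(5, 2, 200)}   # placeholder data
fitted = {cls: stats.beta(*stats.beta.fit(x, floc=0, fscale=1)[:2])
          for cls, x in historical.items()}


# 2) Sample a resilience level per generator so the class-level outage
#    distribution is preserved, then 3) scale by a single global storm severity.
def sample_outages(gen_classes, severity=1.0):
    frac = np.array([fitted[c].rvs(random_state=rng) for c in gen_classes])
    return np.clip(severity * frac, 0.0, 1.0)       # fraction of capacity lost


outage_fractions = sample_outages(["gas", "wind", "gas"], severity=0.8)
```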
This study presents a method that can be used to gain information relevant to determining the corrosion risk for spent nuclear fuel (SNF) canisters during extended dry storage. Currently, it is known that stainless steel canisters are susceptible to chloride-induced stress corrosion cracking (CISCC). However, the rate of CISCC degradation and the likelihood that it could lead to a through-wall crack is unknown. This study uses well-developed computational fluid dynamics and particle-tracking tools and applies them to SNF storage to determine the rate of deposition on canisters. The deposition rate is determined for a vertical canister system and a horizontal canister system, at various decay heat rates with a uniform particle size distribution, ranging from 0.25 to 25 µm, used as an input. In all cases, most of the dust entering the overpack passed through without depositing. Most of what was retained in the overpack was deposited on overpack surfaces (e.g., inlet and outlet vents); only a small fraction was deposited on the canister itself. These results are provided for generalized canister systems with a generalized input; as such, this technical note is intended to demonstrate the technique. This study is a part of an ongoing effort funded by the U.S. Department of Energy Office of Nuclear Energy, Spent Fuel and Waste Science and Technology, which is tasked with research relevant to developing a sound technical basis for ensuring the safe extended storage and subsequent transport of SNF. This work is being presented to demonstrate a potentially useful technique for SNF canister vendors, utilities, regulators, and stakeholders to utilize and further develop for their own designs and site-specific studies.
Metal additive manufacturing allows for the fabrication of parts at the point of use as well as the manufacture of parts with complex geometries that would be difficult to manufacture via conventional methods (milling, casting, etc.). Additively manufactured parts are likely to contain internal defects due to the melt pool, powder material, and laser velocity conditions when printing. Two different types of defects were present in the CT scans of printed AlSi10Mg dogbones: spherical porosity and irregular porosity. Identification of these pores via a machine learning approach (i.e., support vector machines, convolutional neural networks, k-nearest neighbors classifiers) could be helpful with part qualification and inspections. The machine learning approach aims to label the regions of porosity and the type of porosity present. The results showed that a combination approach of Canny edge detection and a classification-based machine learning model (k-nearest neighbors or support vector machine) outperformed the convolutional neural network in segmenting and labeling different types of porosity.
To keep pace with the demand for innovation through scientific computing, modern scientific software development is increasingly reliant upon a rich and diverse ecosystem of software libraries and toolchains. Research software engineers (RSEs) responsible for that infrastructure perform highly integrative work, acting as a bridge between the hardware, the needs of researchers, and the software layers situated between them; relatively little, however, has been written about the role played by RSEs in that work and what support they need to thrive. To that end, we present a two-part report on the development of half-precision floating point support in the Kokkos Ecosystem. Half-precision computation is a promising strategy for increasing performance in numerical computing and is particularly attractive for emerging application areas (e.g., machine learning), but developing practicable, portable, and user-friendly abstractions is a nontrivial task. In the first half of the paper, we conduct an engineering study on the technical implementation of the Kokkos half-precision scalar feature and showcase experimental results; in the second half, we offer an experience report on the challenges and lessons learned during feature development by the first author. We hope our study provides a holistic view on scientific library development and surfaces opportunities for future studies into effective strategies for RSEs engaged in such work.
Type 2 high-pressure hydrogen vessels for storage at hydrogen refueling stations are designed assuming a predefined operational pressure cycle and targeted autofrettage conditions. However, the resulting finite life depends significantly on variables associated with the autofrettage process and the pressure cycles actually realized during service, which often do not span the full design pressure range. Clear guidance for cycle counting is lacking; therefore, industry often defaults to counting every repressurization as a full-range pressure cycle, which is an overly conservative approach. Using in-service pressure cycles to predict the growth of cracks in operational pressure vessels results in significantly longer predicted life, since most in-service pressure cycles are only a fraction of the full design pressure range. Fatigue crack growth rates can vary widely for a given pressure range depending on the details of the residual strains imparted during the autofrettage process because of their influence on crack driving forces. Small changes in variables associated with the autofrettage process, e.g., the target autofrettage overburden pressure, can result in large changes in the residual stress profile, leading to possibly degraded fatigue life. In this paper, computational simulation was used for sensitivity studies to evaluate the effect of both operating conditions and autofrettage conditions on fatigue life for Type 2 high-pressure hydrogen vessels. The analysis in this paper explores these sensitivities, and the results are used to provide guidance on cycle counting. In particular, we identify the pressure cycle ranges that can be ignored over the life of the vessel as having negligible effect on fatigue life. This study also examines the sensitivity of design life to the autofrettage process and the impact on life if the targeted residual strain is not achieved during manufacturing.
Growing interest in renewable energy sources has led to an increased installation rate of distributed energy resources (DERs) such as solar photovoltaics (PVs) and wind turbine generators (WTGs). The variable nature of DERs has created several challenges for utilities and system operators related to maintaining voltage and frequency. New grid standards are requiring DERs to provide voltage regulation across distribution networks. Volt-Var Curve (VVC) control is an autonomous grid-support function that provides voltage regulation based on the relationship between voltage and reactive power. This paper evaluates the performance of a WTG operating with VVC control. The evaluation of the model involves a MATLAB/Simulink simulation of a distribution system. For this simulation, the model considers three WTGs and a variable load that creates a voltage event.
Fires of practical interest are often large in scale and involve turbulent behavior. Fire simulation tools are often utilized in an under-resolved prediction to assess fire behavior. Data are scarce for large fires because they are difficult to instrument. A helium plume scenario has been used as a surrogate for much of the fire phenomenology (O'Hern et al., 2005), including buoyancy, mixing, and advection. A clean dataset of this nature makes an excellent platform for assessing model accuracy. We have been participating in a community effort to validate fire simulation tools, and the SIERRA/Fuego code is compared here with the historical dataset. Our predictions span a wide range of length-scales, and comparisons are made to species mass fraction and two velocity components for a number of heights in the core of the plume. We detail our approach to the comparisons, which involves some accommodation for the uncertainty in the inflow boundary condition from the test. We show evolving improvement in simulation accuracy with increasing mesh resolution and benchmark the accuracy through comparisons with the data.
Zhang, Chen; Jacobson, Clas; Zhang, Qi; Biegler, Lorenz T.; Eslick, John C.; Zamarripa, Miguel A.; Stinchfeld, Georgia; Siirola, John D.; Laird, Carl D.
For many industries, addressing varied customer needs means producing a family of products that satisfy a range of design requirements. Manufacturers seek to design this family of products while exploiting opportunities for shared components to reduce manufacturing cost and complexity. We present a mixed-integer programming formulation that determines the optimal design for each product, the number and design of shared components, and the allocation of those shared components across the products in the family. This formulation and workflow for product family design have created significant business impact on the industrial design of product families for large-scale commercial HVAC chillers at Carrier Global Corporation. We demonstrate the approach on an open case study based on a transcritical CO2 refrigeration cycle. This case study and our industrial experience show that the formulation is computationally tractable and can significantly reduce engineering time by replacing the manual design process with an automated approach.
In high temperature (HT) environments often encountered in geothermal wells, data rate transfers for downhole instrumentation are relatively limited due to transmission line bandwidth and insertion loss and the processing speed of HT microcontrollers. In previous research, the Sandia National Laboratories Geothermal Department obtained 3.8 Mbps data rates over 1524 m (5000 ft) of single conductor wireline cable with a bit error rate of less than 1×10⁻⁸ utilizing low temperature NI™ hardware (formerly National Instruments™). Our protocol technique was a combination of orthogonal frequency-division multiplexing and quadrature amplitude modulation across the bandwidth of the single conductor wireline. This showed it is possible to obtain high data rates in low-bandwidth wirelines. This paper focuses on commercial HT microcontrollers (µC), rather than low temperature NI™ modules, to enable high-speed communication in an HT environment. As part of this effort, four devices were evaluated, and an optimal device (SM320F28335-HT) was selected for its high clock rates, floating-point unit, and on-board analog-to-digital converter. A printed circuit board was assembled with the HT µC, an HT resistor digital-to-analog converter, and an HT line driver. The board was tested at the microcontroller's rated maximum temperature (210°C) for a week while transmitting through a 1524 m (5000 ft) wireline. A final test was conducted to the point of failure at elevated temperatures. This paper will discuss communication methods, achieved data rates, and hardware selection. This effort contributes to the enhancement of HT instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
Proceedings of Correctness 2022: 6th International Workshop on Software Correctness for HPC Applications, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
Iterative methods for solving linear systems serve as a basic building block for computational science. The computational cost of these methods can be significantly influenced by the round-off errors that accumulate as a result of their implementation in finite precision. In the extreme case, round-off errors that occur in practice can completely prevent an implementation from satisfying the accuracy and convergence behavior prescribed by its underlying algorithm. In the exascale era where cost is paramount, a thorough and rigorous analysis of the delay of convergence due to round-off should not be ignored. In this paper, we use a small model problem and the Jacobi iterative method to demonstrate how the Coq proof assistant can be used to formally specify the floating-point behavior of iterative methods, and to rigorously prove the accuracy of these methods.
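For context, the method under study is the classical Jacobi iteration for $Ax = b$ with $A = D + L + U$ split into its diagonal, strictly lower, and strictly upper parts,
$$x^{(k+1)} = D^{-1}\bigl(b - (L + U)\,x^{(k)}\bigr),$$
where, in finite precision, each update introduces a rounding perturbation whose accumulated effect on the convergence guarantee is what the formal proof must bound.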
Unpredictable disturbances with dynamic trajectories such as extreme weather events and cyber attacks require adaptive, cyber-physical special protection schemes to mitigate cascading impact in the electric grid. A harmonized automatic relay mitigation of nefarious intentional events (HARMONIE) special protection scheme (SPS) is being developed to address that need. However, to evaluate the HARMONIE-SPS performance in classifying system disturbances and mitigating consequences, a cyber-physical testbed is required to further develop and validate the methodology. In this paper, we present a design for a co-simulation testbed leveraging the SCEPTRE™ platform and the real-time digital simulator (RTDS). The integration of these two platforms is detailed, as well as the unique, specific needs for testing HARMONIE-SPS within the environment. Results are presented from tests involving a WSCC 9-bus system with different load-shedding scenarios of varying cyber-physical impact.
This document presents tests from the Sierra Structural Mechanics verification test suite. Each of these tests is run nightly with the Sierra/SD code suite, and the results are checked against the correct analytic result. For each of the tests presented in this document, the test setup, derivation of the analytic solution, and comparison of the Sierra/SD code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.
Geographic analysis of photovoltaic (PV) performance factors across large regions can help relevant stakeholders make informed, reduced-risk decisions. High temporal and spatial resolution meteorological data from the National Solar Radiation Database are used to investigate performance and cost as a function of varying system characteristics such as module temperature coefficients, mounting configurations, and coatings. The results demonstrated the strong climatic dependence of annual energy yield on these characteristics, whereas revenues were dominated by the electricity price.
This is an addendum to the Sierra/SolidMechanics 5.4 User’s Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State’s International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra/SolidMechanics 5.4 User’s Guide should be referenced for most general descriptions of code capability and use.
This paper describes how the performance of motion primitive-based planning algorithms can be improved using reinforcement learning. Specifically, we describe and evaluate a framework that autonomously improves the performance of a primitive-based motion planner. The improvement process consists of three phases: exploration, extraction, and reward updates. This process can be iterated continuously to provide successive improvement. The exploration step generates new trajectories, and the extraction step identifies new primitives from these trajectories. These primitives are then used to update rewards for continued exploration. This framework required novel shaping rewards, development of a primitive extraction algorithm, and modification of the Hybrid A* algorithm. The framework is tested on a navigation task using a nonlinear F-16 model. The framework autonomously added 91 motion primitives to the primitive library and reduced average path cost by 21.6 s, or 35.75% of the original cost. The learned primitives are applied to an obstacle field navigation task, which was not used in training, and reduced path cost by 16.3 s, or 24.1%. Additionally, two heuristics for the modified Hybrid A* algorithm are designed to improve effective branching factor.
We examine coupling into azimuthal slots on an infinite cylinder with an infinite-length interior cavity, operating both at the fundamental cavity modal frequencies, with small slots and a resonant slot, as well as at higher frequencies. The coupling model considers both radiation on an infinite cylindrical exterior as well as a half-space approximation. Bounding calculations based on maximum slot power reception and interior power balance are also discussed in detail and compared with the prior calculations. For higher frequencies, limitations on matching are imposed by restricting the load's ability to shift the slot operation to the nearest slot resonance; this is done in combination with maximizing the power reception as a function of angle of incidence. Finally, slot power mismatch based on limited cavity load quality factor is considered below the first slot resonance.
Reno, Matthew J.; Blakely, Logan; Trevizan, Rodrigo D.; Pena, Bethany D.; Lave, Matthew S.; Azzolini, Joseph A.; Yusuf, Jubair; Jones, Christian B.; Furlani Bastos, Alvaro F.; Chalamala, Rohit; Korkali, Mert; Sun, Chih-Che; Donadee, Jonathan; Stewart, Emma M.; Donde, Vaibhav; Peppanen, Jouni; Hernandez, Miguel; Deboever, Jeremiah; Rocha, Celso; Rylander, Matthew; Siratarnsophon, Piyapath; Grijalva, Santiago; Talkington, Samuel; Gomez-Peces, Cristian; Mason, Karl; Vejdan, Sadegh; Khan, Ahmad U.; Mbeleg, Jordan S.; Ashok, Kavya; Divan, Deepak; Li, Feng; Therrien, Francis; Jacques, Patrick; Rao, Vittal; Francis, Cody; Zaragoza, Nicholas; Nordy, David; Glass, Jim
This report summarizes the work performed under a project funded by U.S. DOE Solar Energy Technologies Office (SETO) to use grid edge measurements to calibrate distribution system models for improved planning and grid integration of solar PV. Several physics-based data-driven algorithms are developed to identify inaccuracies in models and to bring increased visibility into distribution system planning. This includes phase identification, secondary system topology and parameter estimation, meter-to-transformer pairing, medium-voltage reconfiguration detection, determination of regulator and capacitor settings, PV system detection, PV parameter and setting estimation, PV dynamic models, and improved load modeling. Each of the algorithms is tested using simulation data and demonstrated on real feeders with our utility partners. The final algorithms demonstrate the potential for future planning and operations of the electric power grid to be more automated and data-driven, with more granularity, higher accuracy, and more comprehensive visibility into the system.
Structural alloys may experience corrosion when exposed to molten chloride salts due to selective dissolution of active alloying elements. One way to prevent this is to make the molten salt reducing. For the KCl + MgCl2 eutectic salt mixture, pure Mg can be added to achieve this. However, Mg can form intermetallic compounds with nickel at high temperatures, which may cause alloy embrittlement. This study shows that an optimum level of excess Mg can be added to the molten salt that prevents corrosion of alloys such as 316H while not forming any detectable Ni-Mg intermetallic phases on Ni-rich alloy surfaces.
Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and necessary security controls is a powerful tool for ensuring due diligence in evaluation and acquisition of mission critical algorithms. This paper describes the Seascape system and its place in such a process.
In recent years we have been exploring a novel asynchronous, ballistic physical model of reversible computing, variously termed ABRC (Asynchronous Ballistic Reversible Computing) or BARC (Ballistic Asynchronous Reversible Computing). In this model, localized information-bearing pulses propagate bidirectionally along nonbranching interconnects between I/O ports of stateful circuit elements, which carry out reversible transformations of the local digital state. The model appears suitable for implementation in superconducting circuits, using the naturally quantized configuration of magnetic flux in the circuit to encode digital information. One of the early research thrusts in this effort involves the enumeration and classification, at an abstract theoretical level, of the distinct possible reversible digital functional behaviors that primitive BARC circuit elements may exhibit, given the applicable conservation and symmetry constraints in superconducting implementations. In this paper, we describe the motivations for this work, outline our research methodology, and summarize some of the noteworthy preliminary results to date from our theoretical study of BARC elements for bipolarized pulses, and having up to three I/O ports and two internal digital states.
The precise estimation of performance loss rate (PLR) of photovoltaic (PV) systems is vital for reducing investment risks and increasing the bankability of the technology. Until recently, the PLR of fielded PV systems was mainly estimated through the extraction of a linear trend from a time series of performance indicators. However, operating PV systems exhibit failures and performance losses that cause variability in the performance and may bias the PLR results obtained from linear trend techniques. Change-point (CP) methods were thus introduced to identify nonlinear trend changes and behaviour. The aim of this work is to perform a comparative analysis among different CP techniques for estimating the annual PLR of eleven grid-connected PV systems installed in Cyprus. Outdoor field measurements over an 8-year period (June 2006-June 2014) were used for the analysis. The obtained results when applying different CP algorithms to the performance ratio time series (aggregated into monthly blocks) demonstrated that the extracted trend may not always be linear but can sometimes exhibit nonlinearities. The application of different CP methods resulted in PLR values that differ by up to 0.85% per year (for the same number of CPs/segments).
This work presents an experimental investigation of the deformation and breakup of water drops behind conical shock waves. A conical shock is generated by firing a bullet at Mach 4.5 past a vertical column of drops with a mean initial diameter of 192 µm. The time-resolved drop position and maximum transverse dimension are characterized using backlit stereo videos taken at 500 kHz. A Reynolds-Averaged Navier-Stokes (RANS) simulation of the bullet is used to estimate the gas density and velocity fields experienced by the drops. Classical correlations for breakup times derived from planar-shock/drop interactions are evaluated. Predicted drop breakup times are found to be in error by a factor of three or more, indicating that existing correlations are inadequate for predicting the response to the three-dimensional relaxation of the velocity and thermodynamic properties downstream of the conical shock. Next, the Taylor Analogy Breakup (TAB) model, which solves a transient equation for drop deformation, is evaluated. TAB predictions for drop diameter calculated using a dimensionless constant of C2 = 2, as compared to the accepted value of C2 = 2/3, are found to agree within the confidence bounds of the ensemble-averaged experimental values for all drops studied. These results suggest that the three-dimensional relaxation effects behind conical shock waves alter the drop response in comparison to a step change across a planar shock, and that future models describing the interaction between a drop and a non-planar shock wave should account for flow field variations.
We demonstrate an optical waveguide device capable of supporting the optical power necessary for trapping a single atom or a cold-atom ensemble with evanescent fields. Our photonic integrated platform successfully manages optical powers of ~30 mW.
Rock salt is being considered as a medium for energy storage and radioactive waste disposal. A Disturbed Rock Zone (DRZ) develops in the immediate vicinity of excavations in rock salt, with an increase in permeability that alters the migration of gases and liquids around the excavation. When creep occurs adjacent to a stiff inclusion such as a concrete plug, it is expected that the stress state near the inclusion will become more hydrostatic and less deviatoric, promoting healing (permeability reduction) of the DRZ. In this scoping study, we measured the permeability of DRZ rock salt over time adjacent to inclusions (plugs) of varying stiffness to determine how the healing of rock salt, as reflected in the permeability changes, depends on stress and time. Samples were created with three different inclusion materials in a central hole along the axis of a salt core: (i) very soft silicone sealant, (ii) Sorel cement, and (iii) carbon steel. The measured permeabilities are corrected for the gas slippage effect. We observed that the permeability change is a function of the inclusion material: the stiffer the inclusion, the more rapidly the permeability decreases with time.
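The gas slippage correction mentioned above is conventionally the Klinkenberg relation,
$$k_g = k_\infty\left(1 + \frac{b}{P_m}\right),$$
where $k_g$ is the apparent gas permeability, $k_\infty$ the slip-corrected (intrinsic) permeability, $P_m$ the mean pore pressure, and $b$ the Klinkenberg slip factor (the specific fitting procedure used in the study is not detailed here).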
Expansion techniques are powerful tools that can take a limited measurement set and provide information on responses at unmeasured locations. Expansion techniques are used in dynamic environments specifications, full-field stress measurements, model calibration, and other calculations that require response at locations not measured. However, modal expansion techniques such as SEREP (System Equivalent Reduction Expansion Process) introduce error in the projection from the measured set of degrees of freedom to the expanded degrees of freedom. Empirical evidence has been used in the past to qualitatively determine the error. In recent years, the modal projection error was developed to quantify the error through a projection between different domains. The modal projection error is used in this paper to demonstrate the use of the metric in quantifying the error of the expansion process and to identify which modes are most important to the expansion.
Carbon sequestration is a growing field that requires subsurface monitoring for potential leakage of the sequestered fluids through the casing annulus. Sandia National Laboratories (SNL) is developing a smart collar system for downhole fluid monitoring during carbon sequestration. This technology is part of a collaboration between SNL, University of Texas at Austin (UT Austin) (project lead), California Institute of Technology (Caltech), and Research Triangle Institute (RTI) to obtain real-time monitoring of the movement of fluids in the subsurface through direct formation measurements. Caltech and RTI are developing millimeter-scale radio frequency identification (RFID) sensors that can sense carbon dioxide, pH, and methane. These sensors will be impervious to cement, and as such, can be mixed with cement and poured into the casing annulus. The sensors are powered and communicate via standard RFID protocol at 902-928 MHz. SNL is developing a smart collar system that wirelessly gathers RFID sensor data from the sensors embedded in the cement annulus and relays that data to the surface via a wired pipe that utilizes inductive coupling at the collar to transfer data through each segment of pipe. This system cannot transfer a direct current signal to power the smart collar, and therefore, both power and communications will be implemented using alternating current and electromagnetic signals at different frequencies. The complete system will be evaluated at UT Austin's Devine Test Site, which is a highly characterized and hydraulically fractured site. This is the second year of the three-year effort, and a review of SNL's progress on the design and implementation of the smart collar system is provided.
Proceedings of the Nuclear Criticality Safety Division Topical Meeting, NCSD 2022 - Embedded with the 2022 ANS Annual Meeting
Salazar, Alex
The postclosure criticality safety assessment for the direct disposal of dual-purpose canisters (DPCs) in a geologic repository includes considerations of transient criticality phenomena. The power pulse from a hypothetical transient criticality event in an unsaturated alluvial repository is evaluated for a DPC containing 37 spent pressurized water reactor (PWR) assemblies. The scenario assumes that the conditions for baseline criticality are achieved through flooding with groundwater and progressive failure of neutron absorbing media. A preliminary series of steady-state criticality calculations is conducted to characterize reactivity feedback due to absorber degradation, Doppler broadening, and thermal expansion. These feedback coefficients are used in an analysis with a reactor kinetics code to characterize the transient pulse given a positive reactivity insertion for a given length of time. The time-integrated behavior of the pulse can be used to model effects on the DPC and surrounding barriers in future studies and determine if transient criticality effects are consequential.
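Such a transient analysis is commonly built on the point reactor kinetics equations (shown here only as a sketch; the specific kinetics code and feedback treatment used in the assessment may differ):
$$\frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\,n + \sum_i \lambda_i C_i, \qquad \frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\,n - \lambda_i C_i,$$
where $n$ is the neutron population, $C_i$ are the delayed-neutron precursor concentrations, $\rho(t)$ is the net reactivity (including the absorber, Doppler, and thermal-expansion feedback characterized by the steady-state calculations), $\beta = \sum_i \beta_i$ is the delayed-neutron fraction, and $\Lambda$ is the neutron generation time.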
We have extended the computational singular perturbation (CSP) method to differential algebraic equation (DAE) systems and demonstrated its application in a heterogeneous-catalysis problem. The extended method obtains the CSP basis vectors for DAEs from a reduced Jacobian matrix that takes the algebraic constraints into account. We use a canonical problem in heterogeneous catalysis, the transient continuous stirred tank reactor (T-CSTR), for illustration. The T-CSTR problem is modelled fundamentally as an ordinary differential equation (ODE) system, but it can be transformed to a DAE system if one approximates typically fast surface processes using algebraic constraints for the surface species. We demonstrate the application of CSP analysis for both ODE and DAE constructions of a T-CSTR problem, illustrating the dynamical response of the system in each case. We also highlight the utility of the analysis in commenting on the quality of any particular DAE approximation built using the quasi-steady state approximation (QSSA), relative to the ODE reference case.
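One common way to form such a reduced Jacobian (a sketch consistent with, but not necessarily identical to, the construction in the paper): for a semi-explicit index-1 DAE $\dot{x} = f(x, z)$, $0 = g(x, z)$ with $\partial g/\partial z$ nonsingular, differentiating the constraint gives $\dot{z} = -(\partial g/\partial z)^{-1}(\partial g/\partial x)\,\dot{x}$, so the differential variables evolve under
$$J_r = \frac{\partial f}{\partial x} - \frac{\partial f}{\partial z}\left(\frac{\partial g}{\partial z}\right)^{-1}\frac{\partial g}{\partial x},$$
and the CSP basis vectors for the DAE system follow from the eigen-decomposition of $J_r$ rather than of the full ODE Jacobian.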
The state of charge (SoC) estimated by Battery Management Systems (BMSs) could be vulnerable to False Data Injection Attacks (FDIAs), which aim to disturb state estimation. Inaccurate SoC estimation, due to attacks or suboptimal estimators, could lead to thermal runaway, accelerated degradation of batteries, and other undesirable events. In this paper, an ambient temperature-dependent model is adopted to represent the physics of a stack of three series-connected battery cells, and an Unscented Kalman Filter (UKF) is utilized to estimate the SoC for each cell. A Cumulative Sum (CUSUM) algorithm is used to detect FDIAs targeting the voltage sensors in the battery stack. The UKF was more accurate in state and measurement estimation than the Extended Kalman Filter (EKF) in terms of Maximum Absolute Error (MAE) and Root Mean Squared Error (RMSE). The CUSUM algorithm described in this paper was able to detect attacks as small as ±1 mV when one or more voltage sensors were attacked under various ambient temperatures and attack injection times.
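A minimal sketch of a two-sided CUSUM detector applied to voltage residuals (thresholds, drift, and noise levels below are illustrative assumptions, not the tuning used in the paper):

```python
# Hedged sketch of a two-sided CUSUM detector on voltage-measurement residuals.
import numpy as np


def cusum_detect(residuals, drift=0.5e-3, threshold=5e-3):
    """residuals: measured voltage minus UKF-predicted voltage, in volts."""
    g_pos = g_neg = 0.0
    alarms = []
    for k, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - drift)      # accumulates positive bias
        g_neg = max(0.0, g_neg - r - drift)      # accumulates negative bias
        if g_pos > threshold or g_neg > threshold:
            alarms.append(k)
            g_pos = g_neg = 0.0                  # reset after an alarm
    return alarms


# Example: a +1 mV bias injected into the residual stream halfway through.
rng = np.random.default_rng(1)
res = rng.normal(0.0, 0.2e-3, 400)
res[200:] += 1e-3
print(cusum_detect(res))
```

The drift term suppresses alarms from zero-mean estimation noise, while a sustained bias on the order of the ±1 mV attacks discussed above accumulates until the threshold is crossed.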
This paper presents a visualization technique for incorporating eigenvector estimates with geospatial data to create inter-area mode shape maps. For each point of measurement, the method specifies the radius, color, and angular orientation of a circular map marker. These characteristics are determined by the elements of the right eigenvector corresponding to the mode of interest. The markers are then overlaid on a map of the system to create a physically intuitive visualization of the mode shape. This technique serves as a valuable tool for differentiating oscillatory modes that have similar frequencies but different shapes. This work was conducted within the Western Interconnection Modes Review Group (WIMRG) in the Western Electricity Coordinating Council (WECC). For testing, we employ the WECC 2021 Heavy Summer base case, which features a high-fidelity, industry-standard dynamic model of the North American Western Interconnection. Mode estimates are produced via eigen-decomposition of a reduced-order state matrix identified from simulated ringdown data. The results provide improved physical intuition about the spatial characteristics of the inter-area modes. In addition to offline applications, this visualization technique could also enhance situational awareness for system operators when paired with online mode shape estimates.
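A rough sketch of the marker construction (illustrative coordinates and eigenvector values; not the WIMRG/WECC tooling):

```python
# Illustrative sketch: map a complex right-eigenvector entry at each
# measurement point to a circular marker's size, color, and angle.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measurement points: (longitude, latitude, eigenvector element).
points = [(-120.0, 45.5, 0.9 * np.exp(1j * 0.1)),
          (-112.0, 40.7, 0.6 * np.exp(1j * 2.4)),
          (-105.0, 39.7, 0.4 * np.exp(1j * -2.9))]

fig, ax = plt.subplots()
for lon, lat, v in points:
    r, phase = np.abs(v), np.angle(v)
    # Radius from magnitude, color from phase (cyclic colormap).
    ax.scatter(lon, lat, s=2000 * r**2,
               c=[plt.cm.hsv((phase + np.pi) / (2 * np.pi))], edgecolors="k")
    # Angular orientation: a radius line drawn at the eigenvector's phase angle.
    ax.plot([lon, lon + 0.8 * r * np.cos(phase)],
            [lat, lat + 0.8 * r * np.sin(phase)], "k-")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
plt.show()
```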
Hu, Xuan; Walker, Benjamin W.; Garcia-Sanchez, Felipe; Edwards, Alexander J.; Zhou, Peng; Incorvia, Jean A.C.; Paler, Alexandru; Frank, Michael P.; Friedman, Joseph S.
Magnetic skyrmions are nanoscale whirls of magnetism that can be propagated with electrical currents. The repulsion between skyrmions inspires their use for reversible computing based on the elastic billiard ball collisions proposed for conservative logic in 1982. In this letter, we evaluate the logical and physical reversibility of this skyrmion logic paradigm, as well as the limitations that must be addressed before dissipation-free computation can be realized.
Neural networks (NNs) have been increasingly proposed as surrogates for approximating systems with computationally expensive physics for rapid online evaluation or exploration. As these surrogate models are integrated into larger optimization problems used for decision making, there is a need to verify their behavior to ensure adequate performance over the desired parameter space. We extend the ideas of optimization-based neural network verification to provide guarantees of surrogate performance over the feasible optimization space. In doing so, we present formulations to represent neural networks within decision-making problems, and we develop verification approaches that use model constraints to provide increasingly tight error estimates. We demonstrate the capabilities on a simple steady-state reactor design problem.
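As a minimal sketch of representing a trained network inside a decision-making problem, a tiny ReLU layer can be encoded with big-M mixed-integer constraints in Pyomo (illustrative weights and bounds; the paper's formulations and error-estimation constraints go beyond this):

```python
# Minimal sketch: embed a tiny ReLU network y = w2·relu(W1 x + b1) + b2 as
# big-M mixed-integer constraints in a Pyomo model (illustrative weights;
# real verification formulations use tighter, activation-specific bounds).
import numpy as np
import pyomo.environ as pyo

W1, b1 = np.array([[1.0, -2.0], [0.5, 1.5]]), np.array([0.1, -0.2])
w2, b2 = np.array([1.0, -1.0]), 0.05
M = 100.0                                  # big-M bound on hidden pre-activations

m = pyo.ConcreteModel()
m.x = pyo.Var(range(2), bounds=(-1, 1))    # network inputs (decision variables)
m.z = pyo.Var(range(2), within=pyo.NonNegativeReals)   # ReLU outputs
m.a = pyo.Var(range(2), within=pyo.Binary)              # ReLU on/off indicators
m.y = pyo.Var()


def pre(mdl, i):                           # pre-activation expression
    return sum(W1[i, j] * mdl.x[j] for j in range(2)) + b1[i]


m.relu_lb = pyo.Constraint(range(2), rule=lambda mdl, i: mdl.z[i] >= pre(mdl, i))
m.relu_ub1 = pyo.Constraint(range(2), rule=lambda mdl, i: mdl.z[i] <= pre(mdl, i) + M * (1 - mdl.a[i]))
m.relu_ub2 = pyo.Constraint(range(2), rule=lambda mdl, i: mdl.z[i] <= M * mdl.a[i])
m.out = pyo.Constraint(expr=m.y == sum(w2[i] * m.z[i] for i in range(2)) + b2)

# Verification-style query: worst-case surrogate output over the input box.
m.obj = pyo.Objective(expr=m.y, sense=pyo.maximize)
# pyo.SolverFactory("gurobi").solve(m)     # any MILP solver
```

Maximizing the surrogate output (or the surrogate-to-truth error) over the feasible region then becomes a mixed-integer query of the kind used for verification.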
Software is ubiquitous in society, but understanding it, especially without access to source code, is both non-trivial and critical to security. A specialized group of cyber defenders conducts reverse engineering (RE) to analyze software. The expertise-driven process of software RE is not well understood, especially from the perspective of workflows and automated tools. We conducted a task analysis to explore the cognitive processes that analysts follow when using static techniques on binary code. Experienced analysts were asked to statically find a vulnerability in a small binary that could allow for unverified access to root privileges. Results show a highly iterative process with commonly used cognitive states across participants of varying expertise, but little standardization in process order and structure. A goal-centered analysis offers a different perspective about dominant RE states. We discuss implications about the nature of RE expertise and opportunities for new automation to assist analysts using static techniques.
A newly developed variable-weight DSMC collision scheme for inelastic collision events is applied to PIC-DSMC modelling of electrical breakdown in one-dimensional helium- and argon-filled gaps. Application of the collision scheme to various inelastic collisional and gas-surface interaction processes (electron-impact ionization, electronic excitation, secondary electron emission) is considered. The collision scheme is shown to reduce the noise level in the computed current density compared to the commonly used approach of sampling a single process, whilst maintaining a comparable level of computational cost and providing less variance in the average number of particles per cell.
The Cramér-Rao Lower Bound (CRLB) is used as a classical benchmark to assess estimators. Online algorithms for estimating modal properties from ambient data, i.e., mode meters, can benefit from accurate estimates of forced oscillations. The CRLB provides insight into how well forced oscillation parameters, e.g., frequency and amplitude, can be estimated. Previous works have derived the lower bound for single-channel PMU measurements; this paper extends that work to the CRLB under two-channel PMU measurements. The goal is to study how correlated/uncorrelated noise affects estimation accuracy. Interestingly, these studies show that correlated noise can decrease the CRLB in some cases. This paper derives the CRLB for the two-channel case and discusses factors that affect the bound.
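For context, the CRLB states that any unbiased estimator $\hat{\theta}$ of the oscillation parameters satisfies
$$\operatorname{var}(\hat{\theta}_i) \;\ge\; \bigl[\mathcal{I}(\theta)^{-1}\bigr]_{ii}, \qquad [\mathcal{I}(\theta)]_{ij} = -\,\mathbb{E}\!\left[\frac{\partial^2 \ln p(\mathbf{y};\theta)}{\partial \theta_i\, \partial \theta_j}\right],$$
where $p(\mathbf{y};\theta)$ is the likelihood of the stacked two-channel PMU measurements; any correlation between the noise on the two channels enters through the measurement covariance inside the Fisher information $\mathcal{I}(\theta)$, which is how correlated noise can lower the bound.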
In transit visualization offers a desirable approach to performing in situ visualization by decoupling the simulation and visualization components. This decoupling requires that the data be transferred from the simulation to the visualization, which is typically done using some form of aggregation and redistribution. As the data distribution is adjusted to match the visualization’s parallelism during redistribution, the data transport layer must have knowledge of the input data structures to partition or merge them. In this chapter, we will discuss an alternative approach suitable for quickly integrating in transit visualization into simulations without incurring significant overhead or aggregation cost. Our approach adopts an abstract view of the input simulation data and works only on regions of space owned by the simulation ranks, which are sent to visualization clients on demand.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) or level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton's method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity and dynamic load balancing are some of Aria's more advanced capabilities.
Software sustainability is critical for Computational Science and Engineering (CSE) software. Measuring sustainability is challenging because sustainability consists of many attributes. One factor that impacts software sustainability is the complexity of the source code. This paper introduces an approach for utilizing complexity data, with a focus on hotspots of and changes in complexity, to assist developers in performing code reviews and inform project teams about longer-term changes in sustainability and maintainability from the perspective of cyclomatic complexity. We present an analysis of data associated with four real-world pull requests to demonstrate how the metrics may help guide and inform the code review process and how the data can be used to measure changes in complexity over time.
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power loss due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients are greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short-circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash-test or nameplate short-circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle-of-incidence, and irradiance nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance as ∼3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
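The short-circuit-current conversion described above commonly takes the form (a generic version; the exact corrections applied in the study may differ):
$$G_{\mathrm{eff}} = G_{\mathrm{ref}}\,\frac{I_{sc}}{I_{sc,\mathrm{ref}}\bigl[1 + \alpha_{I_{sc}}\,(T_c - T_{\mathrm{ref}})\bigr]},$$
where $I_{sc,\mathrm{ref}}$ is the flash-test or nameplate short-circuit current at reference conditions ($G_{\mathrm{ref}} = 1000\ \mathrm{W/m^2}$, $T_{\mathrm{ref}} = 25\,^{\circ}\mathrm{C}$), $\alpha_{I_{sc}}$ is the short-circuit current temperature coefficient, and $T_c$ is the cell temperature.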
This user’s guide documents capabilities in Sierra/SolidMechanics which remain “in development” and thus are not tested and hardened to the standards of the capabilities listed in the Sierra/SM 5.4 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and the J-integral, explicit time step control techniques, dynamic mesh rebalancing, and a variety of new material models and finite element formulations.
In the summer of 2020, the National Aeronautics and Space Administration (NASA) launched a spacecraft as part of the Mars 2020 mission. The rover on the spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. The MMRTG uses radioactive plutonium dioxide. NASA prepared a Supplemental Environmental Impact Statement (SEIS) for the mission in accordance with the National Environmental Policy Act. The SEIS provides information related to updates to the potential environmental impacts associated with the Mars 2020 mission as outlined in the Final Environmental Impact Statement (FEIS) for the Mars 2020 Mission issued in 2014 and associated Record of Decision (ROD) issued in January 2015. The Nuclear Risk Assessment (NRA) 2019 Update includes new and updated Mars 2020 mission information since the publication of the 2014 FEIS and the updates to the Launch Approval Process with the issuance of Presidential Memorandum on Launch of Spacecraft Containing Space Nuclear Systems, National Security Presidential Memorandum 20 (NSPM-20). The NRA 2019 Update addresses the responses of the MMRTG to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks discussed in the SEIS. This paper provides a summary of the methods and results used in the NRA 2019 Update.
Femtosecond laser electronic excitation tagging (FLEET) is a powerful unseeded velocimetry technique typically used to measure one component of velocity along a line, or two or three components from a dot. In this Letter, we demonstrate a dotted-line FLEET technique which combines the dense profile capability of a line with the ability to perform two-component velocimetry with a single camera on a dot. Our set-up uses a single beam path to create multiple simultaneous spots, more than previously achieved in other FLEET spot configurations. We perform dotted-line FLEET measurements downstream of a highly turbulent, supersonic nitrogen free jet. Dotted-line FLEET is created by focusing light transmitted by a periodic mask with rectangular slits of 1.6 × 40 mm² and an edge-to-edge spacing of 0.5 mm, then focusing the imaged light at the measurement region. Up to seven symmetric dots spaced approximately 0.9 mm apart, with mean full-width at half maximum diameters between 150 and 350 µm, are simultaneously imaged. Both streamwise and radial velocities are computed and presented in this Letter.
In accident scenarios involving release of tritium during handling and storage, the level of risk to human health is dominated by the extent to which radioactive tritium is oxidized to the water form (T2O or THO). At some facilities, tritium inventories consist of very small quantities stored at sub-atmospheric pressure, which means that tritium release accident scenarios will likely produce concentrations in air that are well below the lower flammability limit. It is known that isotope effects on reaction rates should result in slower oxidation rates for heavier isotopes of hydrogen, but this effect has not previously been quantified for oxidation at concentrations well below the lower flammability limit for hydrogen. This work describes hydrogen isotope oxidation measurements in an atmospheric tube furnace reactor. These measurements consist of five concentration levels between 0.01% and 1% protium or deuterium and two residence times. Oxidation is observed to occur between about 550°C and 800°C, with higher levels of conversion achieved at lower temperatures for protium with respect to deuterium at the same volumetric inlet concentration and residence time. Computational fluid dynamics simulations of the experiments were used to customize reaction orders and Arrhenius parameters in a 1-step oxidation mechanism. The trends in the rates for protium and deuterium are extrapolated based on guidance from literature to produce kinetic rate parameters appropriate for tritium oxidation at low concentrations.
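For reference, a one-step global oxidation mechanism of the kind fitted here is typically written with an Arrhenius rate constant and adjustable reaction orders; the generic form is shown below, while the specific parameters calibrated in this work are not reproduced:

\[
r = A \exp\!\left(-\frac{E_a}{R T}\right)\,[\mathrm{H_2}]^{a}\,[\mathrm{O_2}]^{b},
\]

where A is the pre-exponential factor, E_a the activation energy, and a and b the reaction orders adjusted to match the measured conversion at each inlet concentration and residence time.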
For the model-based control of low-voltage microgrids, state and parameter information is required. Different optimal estimation techniques can be employed for this purpose. However, these estimation techniques require knowledge of the noise covariances (process and measurement noise). Incorrect values of the noise covariances can degrade estimator performance, which in turn can reduce overall controller performance. This paper presents a method to identify noise covariances for voltage dynamics estimation in a microgrid. The method is based on the autocovariance least squares technique. A simulation study of a simplified 100 kVA, 208 V microgrid system in MATLAB/Simulink validates the method. Results show that the estimated covariances are close to the actual values for Gaussian noise, with slightly larger errors for non-Gaussian noise.
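As a minimal sketch of one ingredient of the autocovariance least squares technique (illustrative only, assuming a single measured output; the subsequent ALS step that maps these lags to the process and measurement noise covariances through a linear least-squares problem is omitted), the sample autocovariance of the filter innovation sequence can be computed as follows:

#include <vector>

// Sample autocovariance of a scalar innovation sequence for lags 0..max_lag.
// These lagged autocovariances are the data that ALS fits to recover the
// unknown process and measurement noise covariances.
std::vector<double> innovation_autocovariance(const std::vector<double>& innov,
                                              int max_lag) {
    const int n = static_cast<int>(innov.size());
    double mean = 0.0;
    for (double v : innov) mean += v;
    mean /= n;

    std::vector<double> acov(max_lag + 1, 0.0);
    for (int lag = 0; lag <= max_lag; ++lag) {
        for (int k = 0; k + lag < n; ++k)
            acov[lag] += (innov[k] - mean) * (innov[k + lag] - mean);
        acov[lag] /= n;  // biased estimator; adequate for long records
    }
    return acov;
}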
OpenMP 5.0 added support for reductions over explicit tasks. This expands the previous reduction support that was limited primarily to worksharing and parallel constructs. While the scope of a reduction operation in a worksharing construct is the scope of the construct itself, the scope of a task reduction can vary. This difference requires syntactical means to define the scope of reductions, e.g., the task_reduction clause, and to associate participating tasks, e.g., the in_reduction clause. Furthermore, the disassociation of the number of threads and the number of tasks creates space for different implementations in the OpenMP runtime. In this work, we provide insights into the behavior and performance of task reduction implementations in GCC/g++ and LLVM/Clang. Our results indicate that task reductions are well supported by both compilers, but their performance differs in some cases and is often determined by the efficiency of the underlying task management.
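A minimal example of the syntax in question (a generic sum over index ranges, not one of the benchmarks studied in this work; compile with -fopenmp or the compiler's equivalent flag): the task_reduction clause on the taskgroup defines the scope of the reduction, and each participating task opts in with in_reduction.

#include <cstdio>

int main() {
    const int n = 1 << 20;
    long long sum = 0;
    #pragma omp parallel
    #pragma omp single
    {
        // task_reduction on the taskgroup defines the reduction scope.
        #pragma omp taskgroup task_reduction(+: sum)
        {
            for (int start = 0; start < n; start += 4096) {
                // Each participating task associates with the reduction via in_reduction.
                #pragma omp task in_reduction(+: sum) firstprivate(start)
                {
                    const int end = (start + 4096 < n) ? start + 4096 : n;
                    for (int i = start; i < end; ++i) sum += i;
                }
            }
        } // the reduced value of sum is available once the taskgroup completes
    }
    std::printf("sum = %lld\n", sum);
    return 0;
}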
Dynamical systems subject to intermittent contact are often modeled with piecewise-smooth contact forces. However, the discontinuous nature of the contact can cause inaccuracies in numerical results or failure in numerical solvers. Representing the piecewise contact force with a continuous and smooth function can mitigate these problems, but not all continuous representations may be appropriate for this use. In this work, five representations used by previous researchers (polynomial, rational polynomial, hyperbolic tangent, arctangent, and logarithm-arctangent functions) are studied to determine which ones most accurately capture nonlinear behaviors including super- and subharmonic resonances, multiple solutions, and chaos. The test case is a single-DOF forced Duffing oscillator with freeplay nonlinearity, solved using direct time integration. This work intends to expand on past studies by determining the limits of applicability for each representation and what numerical problems may occur.
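As an illustration of one such representation (the hyperbolic tangent form; the exact expressions and parameters studied in this work are not reproduced here), a symmetric freeplay force with contact stiffness k and gap g,

\[
F(x) =
\begin{cases}
k\,(x-g), & x > g,\\[2pt]
0, & |x| \le g,\\[2pt]
k\,(x+g), & x < -g,
\end{cases}
\]

can be approximated by the smooth function

\[
F(x) \approx \tfrac{k}{2}\Big[(x-g)\big(1+\tanh\!\big(\varepsilon\,(x-g)\big)\big)
+ (x+g)\big(1-\tanh\!\big(\varepsilon\,(x+g)\big)\big)\Big],
\]

where the sharpening parameter ε controls how closely the smooth force tracks the piecewise-linear one; the trade-off between a large ε (fidelity to the discontinuous model) and a small ε (well-behaved numerics) is central to comparisons of this kind.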
Refractory complex concentrated alloys are an emerging class of materials that attracts attention due to their stability and performance at high temperatures. In this study, we investigate the variations in mechanical and thermal properties across a broad compositional space for the refractory MoNbTaTi quaternary using high-throughput ab-initio calculations and experimental characterization. For all the properties surveyed, we note good agreement between our modeling predictions and the experimentally measured values. We reveal the particular role of molybdenum (Mo) in achieving high strength when present in high concentrations. We trace the origin of this phenomenon to a shift from metallic to covalent bonding as the Mo content is increased. Additionally, a mechanistic, dislocation-based description of the yield strength further explains this high strength as arising from a combination of high bulk and shear moduli, together with the relatively small size of the Mo atom compared to the other atoms in the alloy. Our analysis of the thermodynamic properties shows that, regardless of composition, this class of quaternary alloys exhibits good stability and low sensitivity to temperature. Taken together, these results pave the way for the design of new high-performance refractory alloys beyond the equimolar compositions found in high-entropy alloys.
Effects of gamma and proton irradiation, and of forward-bias minority carrier injection, on minority carrier diffusion and photoresponse were investigated for long-wave (LW) and mid-wave (MW) infrared detectors with engineered majority-carrier barriers. The LWIR detector was a type-II GaSb/InAs strained-layer superlattice pBiBn structure. The MWIR detector was an InAsSb/AlAsSb nBp structure without superlattices. Room-temperature gamma irradiations degraded the minority carrier diffusion length of the LWIR structure, and minority carrier injection caused dramatic improvements, though there was little effect from either treatment on photoresponse. For the MWIR detector, the effects of room-temperature gamma irradiation and injection on minority carrier diffusion and photoresponse were negligible. Subsequently, both types of detectors were subjected to gamma irradiation at 77 K. In-situ photoresponse was unchanged for the LWIR detectors, while that of the MWIR detectors decreased 19% after a cumulative dose of ~500 krad(Si). Minority carrier injection had no effect on photoresponse for either. The LWIR detector was then subjected to 4 Mrad(Si) of 30 MeV proton irradiation at 77 K and showed a 35% decrease in photoresponse, but again no effect from forward-bias injection. These results suggest that the photoresponse of the LWIR detectors is not limited by minority carrier diffusion.
Neural networks (NN) have become almost ubiquitous in image classification, but in their standard form produce point estimates with no measure of confidence. Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates through the posterior distribution. As NN are applied in more high-consequence applications, UQ is becoming a requirement. Automating systems can save time and money, but only if the operator can trust what the system outputs. BNN provide a solution to this problem by not only giving accurate predictions and estimates, but also an interval that includes reasonable values within a desired probability. Despite their positive attributes, BNN are notoriously difficult and time consuming to train. Traditional Bayesian methods use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being too slow. The most common method is variational inference (VI) due to its fast computation, but there are multiple concerns with its efficacy. MCMC is the gold standard and, given enough time, will produce the correct result. VI, alternatively, is an approximation that converges asymptotically. Unfortunately (or fortunately), high-consequence problems often do not live in the land of asymptopia, so solutions like MCMC are preferable to approximations. We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI), where materials of interest can be identified by their unique spectral signatures. This is a challenging field due to the numerous permuting effects that practical collection of HSI has on measured spectra. Both models are trained using out-of-the-box tools on a high-fidelity HSI target detection scene. Both MCMC- and VI-trained BNN perform well overall at target detection on a simulated HSI scene. Splitting the test set predictions into two classes, high-confidence and low-confidence predictions, presents a path to automation. For the MCMC-trained BNN, the high-confidence predictions have a 0.95 probability of detection with a false alarm rate of 0.05 when considering pixels with target abundance of 0.2. The VI-trained BNN have a 0.25 probability of detection for the same, but their performance on high-confidence sets matched MCMC for abundances >0.4. However, the VI-trained BNN on this scene required significant expert tuning to achieve these results, while MCMC worked immediately. On neither scene was MCMC prohibitively time consuming, as is often assumed, though the networks we used were relatively small. This paper provides an example of how to utilize the benefits of UQ, but also aims to increase awareness that different training methods can give different results for the same model. If sufficient computational resources are available, the best approach, rather than the fastest or most efficient, should be used, especially for high-consequence problems.
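As a small sketch of the confidence split described above (illustrative only; the 95% interval level and the width threshold are assumptions, not values tied to the paper's results), a prediction can be labeled high confidence when the central credible interval computed from its posterior samples of the detection score is sufficiently narrow:

#include <algorithm>
#include <vector>

// Flag a prediction as "high confidence" when the central 95% credible interval
// of its posterior samples (e.g., from an MCMC- or VI-trained BNN) is narrower
// than max_width. Low-confidence predictions would be routed to an analyst.
bool is_high_confidence(std::vector<double> samples, double max_width = 0.2) {
    std::sort(samples.begin(), samples.end());
    const std::size_t n = samples.size();
    const double lo = samples[static_cast<std::size_t>(0.025 * (n - 1))];
    const double hi = samples[static_cast<std::size_t>(0.975 * (n - 1))];
    return (hi - lo) <= max_width;
}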
Interest in the application of DC Microgrids to distribution systems has been spurred by the continued rise of renewable energy resources and the dependence on DC loads. However, in comparison to AC systems, the lack of a natural zero crossing in DC Microgrids makes the interruption of fault currents with fuses and circuit breakers more difficult. DC faults can cause severe damage to voltage-source converters within a few milliseconds, hence the need to quickly detect and isolate the fault. In this paper, the potential for five different Machine Learning (ML) classifiers to identify fault type and fault resistance in a DC Microgrid is explored. The ML algorithms are trained using simulated fault data recorded from a 750 VDC Microgrid modeled in PSCAD/EMTDC. The performance of the trained algorithms is tested using real fault data gathered from an operational DC Microgrid located on Kirtland Air Force Base. Of the five ML algorithms, three could detect the fault and determine the fault type with at least 99% accuracy, and only one could estimate the fault resistance with at least 99% accuracy. By performing self-learning monitoring and decision-making analysis, protection relays equipped with ML algorithms can quickly detect and isolate faults to improve protection operations on DC Microgrids.
In this paper, we address the convergence of the sequential variational inference filter (VIF) through the application of a robust variational objective and an H∞-norm-based correction for a linear Gaussian system. As the dimension of the state or parameter space grows, performing the full Kalman update with the dense covariance matrix of a large-scale system requires increased storage and computational complexity, making it impractical. The VIF approach, based on mean-field Gaussian variational inference, reduces this burden through a variational approximation to the covariance, usually in the form of a diagonal covariance approximation. The challenge is to retain convergence and correct for biases introduced by the sequential VIF steps. We desire a framework that improves feasibility while still maintaining reasonable proximity to the optimal Kalman filter as data are assimilated. To accomplish this goal, an H∞-norm-based optimization perturbs the VIF covariance matrix to improve robustness. This yields a novel VIF-H∞ recursion that employs consecutive variational inference and H∞-based optimization steps. We explore the development of this method and investigate a numerical example to illustrate the effectiveness of the proposed filter.
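For context, the mean-field Gaussian approximation underlying the VIF restricts the filtering posterior to a product of independent Gaussians, chosen at each assimilation step to minimize the Kullback-Leibler divergence to the full posterior (shown below in its generic form; the H∞-based perturbation introduced in this paper acts on top of this approximation and is not reproduced here):

\[
q(x) = \prod_{i=1}^{n} \mathcal{N}\!\left(x_i;\, m_i,\, \sigma_i^{2}\right),
\qquad
q^{\star} = \arg\min_{q}\; \mathrm{KL}\!\left(q(x)\,\middle\|\,p(x \mid y_{1:k})\right),
\]

so only the n diagonal variances σ_i² are stored and updated rather than the full n × n covariance matrix.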
This SAND Report provides an overview of AniMACCS, the animation software developed for the MELCOR Accident Consequence Code System (MACCS). It details what users need to know in order to successfully generate animations from MACCS results. It also includes information on the capabilities, requirements, testing, limitations, input settings, and problem reporting instructions for AniMACCS version 1.3.1. Supporting information is provided in the appendices, such as guidance on the required input files when using WinMACCS and when running MACCS from the command line.