Romero-Fiances, Irene; Livera, Andreas; Theristis, Marios; Makrides, George; Stein, Joshua; Nofuentes, Gustavo; De La Casa, Juan; Georghiou, George E.
Accurate quantification of the photovoltaic (PV) system degradation rate (RD) is essential for lifetime yield predictions. Although RD is a critical parameter, its estimation lacks a standardized methodology that can be applied to outdoor field data. The purpose of this paper is to investigate the impact of time period duration and missing data on RD by analyzing the performance of different techniques applied to synthetic PV system data at different linear RD patterns and known noise conditions. The analysis includes the application of different techniques to a 10-year synthetic dataset of a crystalline silicon PV system, with emulated degradation levels and imputed missing data. The analysis demonstrated that the accuracy of the ordinary least squares (OLS), year-on-year (YOY), autoregressive integrated moving average (ARIMA), and robust principal component analysis (RPCA) techniques is affected by the evaluation duration, with all techniques converging to lower RD deviations over the 10-year evaluation, apart from RPCA at high degradation levels. Moreover, the estimated RD is strongly affected by the amount of missing data. Filtering out the corrupted data yielded more accurate RD results for all techniques. It is shown that a change-point detection stage is necessary, and guidelines for accurate RD estimation are provided.
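As a rough illustration of two of the techniques compared in this abstract, the sketch below estimates a degradation rate from a synthetic monthly performance-ratio series using OLS and YOY. The degradation level, noise level, and series length here are hypothetical stand-ins, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly performance-ratio series: a hypothetical -0.8 %/yr
# linear degradation plus Gaussian noise (illustrative values only).
months = np.arange(120)                      # 10 years of monthly data
true_rd = -0.8                               # %/year
pr = 1.0 + (true_rd / 100) * (months / 12) + rng.normal(0, 0.01, months.size)

# OLS: fit a linear trend and express the slope as %/year of the intercept.
slope, intercept = np.polyfit(months / 12, pr, 1)
rd_ols = 100 * slope / intercept

# Year-on-year: median of all 12-month-apart relative differences.
yoy = 100 * (pr[12:] - pr[:-12]) / pr[:-12]
rd_yoy = np.median(yoy)

print(f"OLS RD: {rd_ols:.2f} %/yr")
print(f"YOY RD: {rd_yoy:.2f} %/yr")
```

With a clean linear trend both estimators land near the emulated rate; the paper's point is how that agreement degrades with shorter evaluation windows and missing data.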
Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Seraj, Esmaeil; Wang, Zheyuan; Paleja, Rohan; Patel, Anirudh; Gombolay, Matthew
High-performing teams learn intelligent and efficient communication and coordination strategies to maximize their joint utility. These teams implicitly understand the different roles of heterogeneous team members and adapt their communication protocols accordingly. Multi-Agent Reinforcement Learning (MARL) seeks to develop computational methods for synthesizing such coordination strategies, but formulating models for heterogeneous teams with different state, action, and observation spaces has remained an open problem. Without properly modeling agent heterogeneity, as in prior MARL work that leverages homogeneous graph networks, communication becomes less helpful and can even deteriorate the cooperativity and team performance. We propose Heterogeneous Policy Networks (HetNet) to learn efficient and diverse communication models for coordinating cooperative heterogeneous teams. Building on heterogeneous graph-attention networks, we show that HetNet not only facilitates learning heterogeneous collaborative policies per existing agent-class but also enables end-to-end training for learning highly efficient binarized messaging. Our empirical evaluation shows that HetNet sets a new state of the art in learning coordination and communication strategies for heterogeneous multi-agent teams by achieving an 8.1% to 434.7% performance improvement over the next-best baseline across multiple domains while simultaneously achieving a 200× reduction in the required communication bandwidth.
Dynamic injection shift factor (DISF) is the linear sensitivity factor that estimates the incremental line flows in a transmission network subject to load disturbances. The DISF provides fast computation of post-disturbance line flows without solving nonlinear equations of power-system dynamics for a given pre-disturbance operating condition. Furthermore, DISF can be utilized to derive other critical sensitivity factors used for fast contingency screening and generation dispatch in real-time markets. However, deriving the DISF analytically is difficult due to the nonlinearity of power-system models. In this paper, we propose an approach based on a linear Koopman operator and a data-driven algorithm to construct a representative linear model for generator and network dynamics. The linear model constructed by the proposed approach is utilized to find an analytic expression of the DISF. Then, the DISF provides numerical tools to estimate line flows accurately subject to power injection changes in the network at any instant in time without solving nonlinear power-system equations.
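The core data-driven step, fitting a linear model to snapshot data, can be sketched in the dynamic-mode-decomposition style. This toy example uses a known linear two-state map in place of lifted generator/network dynamics; the Koopman observable lifting the paper relies on is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for generator/network dynamics: an unknown stable linear map.
# (The paper lifts nonlinear dynamics via Koopman observables; here we only
# illustrate the data-driven regression step on a linear toy system.)
A_true = np.array([[0.95, 0.10],
                   [-0.10, 0.95]])

# Collect snapshot pairs (x_k, x_{k+1}) from simulated trajectories.
X = rng.normal(size=(2, 200))
Y = A_true @ X

# DMD-style least-squares fit: A ~ Y X+ (Moore-Penrose pseudoinverse).
A_fit = Y @ np.linalg.pinv(X)

# Once a linear model is in hand, sensitivities analogous to the DISF
# follow analytically, e.g. d(x_{k+1})/d(x_k) = A_fit.
print(np.round(A_fit, 3))
```

Because the linear model is explicit, post-disturbance quantities can be propagated by matrix multiplication rather than by re-solving nonlinear dynamic equations, which is the computational advantage the abstract describes.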
There are unique challenges associated with protection and self-healing of microgrids energized by multiple inverter-based distributed energy resources. In this study, prioritized undervoltage load shedding and undervoltage-supervised overcurrent (UVOC) for fault isolation are demonstrated using PSCAD. The PSCAD implementations of these relays are described in detail, and their operation in a self-healing microgrid is demonstrated.
We report an analysis quantifying the contribution to uncertainty in annual energy projections from uncertainty in ground-measured irradiance. Uncertainty in measured irradiance is quantified for eight instruments by the difference from a well-maintained, secondary-standard pyranometer, which is regarded as ground truth. We construct a statistical model of irradiance uncertainty and apply the model to generate a sample of 100 annual time series of irradiance for each instrument. The sample is propagated through a common performance model for a reference photovoltaic system to quantify variation in annual energy. Although the measured irradiance varies from the reference by a few percent (standard deviation of 1-2%), the uncertainty in annual energy is on the order of a fraction of one percent. We propose a model for a factor that represents uncertainty in modeled annual energy that arises from uncertainty in ground-measured irradiance.
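The propagation step described above can be illustrated with a minimal Monte Carlo sketch, assuming (hypothetically) independent hourly multiplicative errors and a performance model where annual energy is simply proportional to summed irradiance; the paper's statistical error model and PV performance model are more detailed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hourly irradiance profile for one year (W/m^2): a repeated
# half-sine daily shape, purely for illustration.
hours = 8760
base = np.clip(1000 * np.sin(np.linspace(0, np.pi, 24)), 0, None)
ghi = np.tile(base, hours // 24)

# Draw 100 perturbed annual series with 1.5% hour-to-hour multiplicative
# error (a simplification of the paper's irradiance uncertainty model).
n_draws = 100
errors = rng.normal(1.0, 0.015, size=(n_draws, ghi.size))
samples = ghi * errors

# Minimal performance model: annual energy proportional to summed irradiance.
annual_energy = samples.sum(axis=1)
rel_std = annual_energy.std() / annual_energy.mean()

# Independent hourly errors largely cancel in the annual sum, so the spread
# in annual energy sits far below the 1.5% per-sample irradiance uncertainty.
print(f"relative std of annual energy: {100 * rel_std:.3f}%")
```

The cancellation effect is why the abstract can report percent-level irradiance deviations but only fraction-of-a-percent annual energy uncertainty; correlated instrument errors would cancel less, which is what a realistic error model must capture.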
Many teams struggle to adapt and right-size software engineering best practices for quality assurance to fit their context. Introducing software quality is not usually framed in a way that motivates teams to take action, thus resulting in it becoming a "check the box for compliance" activity instead of a cultural practice that values software quality and the effort to achieve it. When and how can we provide effective incentives for software teams to adopt and integrate meaningful and enduring software quality practices? We explored this question through a persona-based ideation exercise at the 2021 Collegeville Workshop on Scientific Software, in which we created three unique personas that represent different scientific software developer perspectives.
This paper demonstrates that a faster Automatic Generation Control (AGC) response provided by Inverter-Based Resources (IBRs) can improve a performance-based regulation (PBR) metric. The improvement in performance has a direct effect on operational income. The PBR metric used in this work was obtained from a California ISO (CAISO) example and is fully described herein. A single generator in a modified three area IEEE 39 bus system was replaced with a group of co-located IBRs to present possible responses using different plant controls and variable resource conditions. We show how a group of IBRs that rely on variable resources may negatively affect the described PBR metric of all connected areas if adequate plant control is not employed. However, increasing the dispatch rate of internal plant controls may positively affect the PBR metric of all connected areas despite variable resource conditions.
Modern distribution systems can accommodate different topologies through controllable tie lines for increasing the reliability of the system. Estimating the prevailing circuit topology or configuration is of particular importance at the substation for different applications to properly operate and control the distribution system. One of the applications of circuit configuration estimation is adaptive protection. An adaptive protection system relies on the communication system infrastructure to identify the latest status of the power system. However, when the communication links to some of the equipment are out of service, the adaptive protection system may lose its awareness of the status of the system. Therefore, it is necessary to estimate the circuit status using the available healthy communicated data. This paper proposes the use of machine learning algorithms at the substation to estimate the circuit configuration when the communication to the tie breakers is compromised. In doing so, the adaptive protection system can identify the correct protection settings corresponding to the estimated circuit topology. The effectiveness of the proposed approach is verified on the IEEE 123-bus test system.
This paper applies sensitivity and uncertainty analysis to compare two model alternatives for fuel matrix degradation for performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of this choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and differences in epistemic uncertainty between the alternatives.
At Sandia National Laboratories, QSCOUT (the Quantum Scientific Computing Open User Testbed) is an ion-trap based quantum computer built for the purpose of allowing users low-level access to quantum hardware. Commands are executed on the hardware using Jaqal (Just Another Quantum Assembly Language), a programming language designed in-house to support the unique capabilities of QSCOUT. In this work, we describe a batching implementation of our custom software that speeds the experimental run-time through the reduction of communication and upload times. Reducing the code upload time during experimental runs improves system performance by mitigating the effects of drift. We demonstrate this implementation through a set of quantum chemistry experiments using a variational quantum eigensolver (VQE). While developed specifically for this testbed, this idea finds application across many similar experimental platforms that seek greater hardware control or reduced overhead.
A critical parameter for well integrity in geothermal storage and production wells subjected to frequent thermal cycling is the interface between the metal casing and the cement sheath. In geothermal energy storage and energy production wells, an insulating cement sheath is necessary to minimize heat losses through heat uptake by cooler rock formations with high thermal conductivity. A team from Sandia and Brookhaven National Labs is evaluating special cement formulations to facilitate use during severe and repeated thermal cycling in geothermal wells; this paper reports on recent findings using these more recently developed cements. For this portion of the laboratory study, we report on preliminary results from subjecting this cement to high temperature (T > 200°C), at a confining pressure of 13.8 MPa and a pore water pressure of 10.4 MPa. Building on previous work, we studied two sample types: solid cement and a steel cylinder sheathed with cement. In the first sample type, we measured fluid flow at increasing elevated temperatures and pressure. In the second sample type, we flowed water rapidly through the inside of the steel cylinder to develop an inner-to-outer thermal gradient using this specialized test geometry. In this paper we report on water permeability estimates at elevated temperatures and the results of rapid thermal cycling of a steel/cement interface. Post-test observations of the steel-cement interface reveal insight into the nature of the steel/cement bond.
A high-speed, two-color pyrometer was developed and employed to characterize the temperature of the ejecta from pyrotechnic igniters. The pyrometer used a single objective lens, beamsplitter, and two high-speed cameras to maximize the spatial and temporal resolutions. The pyrometer used the integrated intensity of under-resolved particles to maintain a large region of interest to capture more particles. The spectral response of the pyrometer was determined based on the response of each optical component and the total system was calibrated using a black body source to ensure accurate intensity ratios over the range of interest.
The penetration of renewable energy resources (RER) and energy storage systems (ESS) into the power grid has accelerated in recent times due to aggressive emission and RER penetration targets. An integrated resource planning (IRP) framework can help ensure long-term resource adequacy while satisfying RER integration and emission reduction targets in a cost-effective and reliable manner. In this paper, we present pIRP (probabilistic Integrated Resource Planning), an open-source Python-based software tool designed for optimal portfolio planning for an RER- and ESS-rich future grid and for addressing the capacity expansion problem. The tool, which is planned to be released publicly, offers ESS and RER modeling capabilities along with enhanced uncertainty handling that make it one of the more advanced non-commercial IRP tools currently available. Additionally, the tool is equipped with an intuitive graphical user interface and expansive plotting capabilities. Impacts of uncertainties in the system are captured using Monte Carlo simulations, letting users analyze hundreds of scenarios with detailed scenario reports. A linear programming based architecture is adopted, which ensures sufficiently fast solution times while considering hundreds of scenarios and characterizing profile risks with varying levels of RER and ESS penetration. Results for a test case using data from parts of the Eastern Interconnection are provided in this paper to demonstrate the capabilities offered by the tool.
This paper proposes an implementation of Graph Neural Networks (GNNs) for distribution power system Traveling Wave (TW)-based protection schemes. Simulated faults on the IEEE 34 system are processed using the Karrenbauer Transform and the Stationary Wavelet Transform (SWT), and the energy of the resulting signals is calculated using Parseval's theorem. This data is used to train Graph Convolutional Networks (GCNs) to perform fault zone location. Several levels of measurement noise are considered for comparison. The results show outstanding performance, with accuracy above 90% for the most developed models, and outline a fast, reliable, asynchronous, and distributed protection scheme for distribution-level networks.
Surface flashover is a significant issue impacting the reliability of high voltage, high current gas switches. The goal of this work is to determine if poly(dicyclopentadiene) (pDCPD) coatings can be used to mitigate surface flashover on insulators compared to crosslinked polystyrene (Rexolite), cast poly(methyl methacrylate) (PMMA), and extruded PMMA. The pDCPD coating is expected to have a higher flashover voltage threshold to an initial flashover due to the oxidation of the polymer, creating trap sites for any free electrons that would otherwise serve as primary electrons in a surface electron avalanche. This is tested by measuring the flashover threshold for different extents of oxidation caused by thermally treating the samples for different durations. For subsequent flashover events the pDCPD coating is also expected to have a higher flashover threshold due to its high oxygen/hydrogen to carbon ratio, which is expected to preferentially create gaseous products, such as CO2 after a flashover event, rather than conductive carbon deposits. The control and pDCPD-coated test coupons are repeatedly subjected to increasing voltage stresses until flashover occurs to determine both the initial and subsequent flashover thresholds.
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray Computed Tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but lacks spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratios in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen for their k-lines, which are positioned within distinct energy intervals of interest and are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high transmittance of X-rays. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 μm diameter of the impinging electron beam in the X-ray tube. This effectively shrinks the X-ray focal spot in the selected energy bands. Once fabricated, our transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
A crucial component of field testing is the utilization of numerical models to better understand the system and the experimental data being collected. Meshing and modeling field tests is a complex and computationally demanding problem. Hexahedral elements cannot always reproduce experimental dimensions, leading to grid orientation or geometric errors. Voronoi meshes can match complex geometries without sacrificing orthogonality. As a result, here we present a high-resolution 3D numerical study for the BATS heater test at the WIPP that compares a standard non-deformed Cartesian mesh against a Voronoi mesh to match field data collected during a salt heater experiment.
This paper presents a simulation and analysis of traveling waves in a 5-bus distribution system connected to a grid-forming inverter (GFMI). The goal is to analyze the numerical differences in traveling waves when a GFMI is used in place of a traditional generator. The paper introduces the topic of traveling waves and their use in distribution systems for fault clearing. It then introduces the Simulink design of the 5-bus system on which this paper is centered. The system is subjected to various simulation tests, whose design and results are explained further in the paper, to discuss if and how inverters affect traveling waves and how different design choices for the system can impact these waves. Finally, a consideration is made for what these traveling waves represent in a practical environment and how to properly address them using the information derived in this study.
Incorrect modeling of control characteristics for inverter-based resources (IBRs) can affect the accuracy of electric power system studies. In many distribution system contexts, the control settings for behind-the-meter (BTM) IBRs are unknown. This paper presents an efficient method for selecting a small number of time series samples from net load meter data that can be used for reconstructing or classifying the control settings of BTM IBRs. Sparse approximation techniques are used to select the time series samples that cause the inversion of a matrix of candidate responses to be as well-conditioned as possible. We verify these methods on 451 actual advanced metering infrastructure (AMI) datasets from loads with BTM IBRs. Selecting 60 15-minute granularity time series samples, we recover BTM control characteristics with a mean error less than 0.2 kVAR.
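The sample-selection idea in this abstract can be sketched with a greedy heuristic: add, one at a time, the time sample that best improves the conditioning of the selected sub-matrix of candidate responses. The dictionary, its dimensions, and the greedy criterion below are illustrative assumptions, not the authors' actual algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dictionary of candidate responses: rows are time samples,
# columns are candidate BTM control-setting signatures (illustrative only).
n_samples, n_settings = 96, 5
D = rng.normal(size=(n_samples, n_settings))

def select_samples(D, k):
    """Greedily pick k rows maximizing the smallest singular value of the
    selected sub-matrix, a proxy for keeping its inversion well-conditioned."""
    chosen = []
    for _ in range(k):
        best, best_sv = None, -np.inf
        for i in range(D.shape[0]):
            if i in chosen:
                continue
            sv = np.linalg.svd(D[chosen + [i], :], compute_uv=False)[-1]
            if sv > best_sv:
                best, best_sv = i, sv
        chosen.append(best)
    return chosen

rows = select_samples(D, 10)
sub = D[rows, :]
print("condition number of selected sub-matrix:", np.linalg.cond(sub))
```

A well-conditioned sub-matrix means the least-squares inversion that recovers control characteristics amplifies measurement noise as little as possible, which is the stated motivation for the sparse-approximation selection.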
Thermal and hydrological behaviors of multiphase pore fluids in the presence of heat cause near-field thermo-hydro-mechanical-chemical (THMC) coupled processes that can influence the performance of geologic radioactive waste repositories. These hydro-thermal impacts may perturb the geomechanical stability of the disturbed rock zone (DRZ) surrounding the drifts in a shale-hosted deep geologic repository, which links heat/fluid flow and chemical/reactive transport between the engineered barrier system (EBS) and the host rock. This work focuses on integrating the effects of a near-field geomechanical process driven by buffer swelling into TH simulations to reduce dimensionality and improve computational efficiency. This geomechanical process can reduce the DRZ permeability, potentially influencing the rate of radionuclide transport and exchange with corrosive species in host rock groundwater that could accelerate waste package degradation. A sensitivity test with variation in host rock permeability indicates that less permeable shale retards re-saturation of the buffer, such that a slower increase of swelling pressure delays the reduction of DRZ permeability.
As legacy distance protection schemes are starting to transition from impedance-based to traveling wave (TW) time-based, it is important to perform diligent simulations prior to commissioning the TW relay. Since Control-Hardware-In-the-Loop (CHIL) simulations have recently become common practice for power system research, this work aims to illustrate some limitations in the integration of commercially available TW relays in CHIL for transmission-level simulations. The interconnection of Frequency-Dependent (FD) with PI-modeled transmission lines, which is a common practice in CHIL, may lead to sharp reflections that ease the relaying task. However, modeling contiguous lines as FD, or the presence of certain shunt loads, may mask certain TW reflections. As a consequence, the fault location algorithm in the relay may produce an incorrect result. In this paper, a qualitative comparison of the performance of a commercially available TW relay is carried out to show how the system modeling in CHIL may affect the fault location accuracy.
This paper reports the experimental comparison of two silicon photomultipliers (SiPMs), the MicroFJ-30035 by ONSemi and the ASD-NUV3S-P by AdvanSiD, in terms of gain, dark count rate, and crosstalk probability. SiPMs are solid-state photon detectors that enable high-sensitivity light readout. They have low-voltage power requirements, a small form factor, and are durable. For these reasons, they are being considered as replacements for vacuum photomultiplier tubes in some applications. However, their performance relies on several parameters, which need to be carefully characterized to enable high-fidelity simulation and SiPM-based design of devices capable of operating in harsh environments. These parameters tend to vary between manufacturers and processing technologies. We found that the dark count rate of the MicroFJ was 16% higher than that of the ASD, while the gain of the MicroFJ was 3.5 times higher. Finally, the crosstalk probability of the ASD was 1.96 times higher than that of the MicroFJ. Our findings are in good agreement with manufacturer-reported values.
In this paper, we present a sensor encoding technique for the detection of stealthy false data injection attacks in static power system state estimation. This method implements low-cost verification of the integrity of measurement data, allowing for the detection of stealthy additive attack vectors. It is considered that these attacks are crafted by malicious actors with knowledge of the system models who are capable of tampering with any number of measurements. The solution involves encoding all vulnerable measurements. The effectiveness of the method was demonstrated through a simulation where a stealthy attack on an encoded measurement vector generates large residuals that trigger a chi-squared (χ2) anomaly detector. Following a defense-in-depth approach, this method could be used with other security features, such as communications encryption, to provide an additional line of defense against cyberattacks.
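The mechanism can be illustrated with a toy DC state-estimation example: a stealthy attack aligned with the measurement model leaves the residual untouched, while a secret multiplicative encoding (a simple diagonal code assumed here for illustration, not necessarily the paper's exact scheme) pushes the decoded attack out of the model's column space and inflates the residual.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy DC state estimation: z = H x + noise (illustrative 6-measurement system).
H = rng.normal(size=(6, 3))
x = rng.normal(size=3)
z = H @ x + rng.normal(0, 0.01, 6)

def residual_norm(z, H):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

# Stealthy additive attack a = H c lies in the column space of H,
# so it leaves the residual (and a chi-squared test) unchanged.
a = H @ rng.normal(size=3)
r_clean = residual_norm(z, H)
r_stealthy = residual_norm(z + a, H)

# Encoding: measurements are multiplied by a secret diagonal matrix before
# transmission and decoded with its inverse. An attacker unaware of the code
# injects a on the encoded channel, so after decoding the attack becomes
# D^{-1} a, which generically leaves the column space of H.
D = np.diag(rng.uniform(0.5, 2.0, 6))
r_attacked = residual_norm(z + np.linalg.inv(D) @ a, H)

print(r_clean, r_stealthy, r_attacked)
```

The decoded attack no longer cancels in the residual, so the same chi-squared test that the stealthy attack evaded now fires.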
A high-throughput experimental setup was used to characterize initiation threshold and growth to detonation in the explosives hexanitrostilbene (HNS) and pentaerythritol tetranitrate (PETN). The experiment sequentially launched an array of laser-driven flyers to shock samples arranged in a 96-well microplate geometry, with photonic Doppler velocimetry diagnostics to characterize flyer velocity and particle velocity at the explosive-substrate interface. Vapor-deposited films of HNS and PETN were used to provide numerous samples with various thicknesses, enabling characterization of the evolution of growth to detonation. One-dimensional hydrocode simulations were performed with reactions disabled to illustrate where the experimental data deviate from the predicted inert response. Prompt initiation was observed in 144 μm thick HNS films at flyer velocities near 3000 m/s and in 125 μm thick PETN films at flyer velocities near 2400 m/s. This experimental setup enables rapid quantification of the growth of reactions in explosive materials that can reach detonation at sub-millimeter length scales. These data can subsequently be used for parameterizing reactive burn models in hydrocode simulations, as discussed in Paper II [D. E. Kittell, R. Knepper, and A. S. Tappan, J. Appl. Phys. 131, 154902 (2022)].
This paper summarizes the development of post-closure safety assessment for radioactive waste disposal from the point of view of scenarios, which occupy a key point in the process between features, events, and processes (FEPs) and assessment using conceptual, mathematical, and numerical models. Scenarios are used in other fields for similar purposes, but they have a central role in safety assessment for radioactive waste disposal, given the large uncertainties in natural and engineered systems over long time periods. Repository design and assessments are built around a base scenario, which is usually built up from FEPs in a deductive, bottom-up fashion. The alternative scenarios are often a perturbation of the base scenario, constructed in a top-down fashion around individual safety functions of key repository features. Despite differences between nations in how they implement scenarios, arising largely from regulatory differences, the concept of scenarios is beneficial and is used universally in the development of deep geological repositories. The methodology has also seen some use outside the field of radioactive waste disposal, and its wider adoption might be warranted.
Hu, Xuan; Walker, Benjamin W.; Garcia-Sanchez, Felipe; Edwards, Alexander J.; Zhou, Peng; Incorvia, Jean A.C.; Paler, Alexandru; Frank, Michael P.; Friedman, Joseph S.
Magnetic skyrmions are nanoscale whirls of magnetism that can be propagated with electrical currents. The repulsion between skyrmions inspires their use for reversible computing based on the elastic billiard ball collisions proposed for conservative logic in 1982. In this letter, we evaluate the logical and physical reversibility of this skyrmion logic paradigm, as well as the limitations that must be addressed before dissipation-free computation can be realized.
Sanders, Stephen; Dowran, Mohammadjavad; Jain, Umang; Lu, Tzu M.; Marino, Alberto M.; Manjavacas, Alejandro
Periodic arrays of nanoholes perforated in metallic thin films interact strongly with light and produce large electromagnetic near-field enhancements in their vicinity. As a result, the optical response of these systems is very sensitive to changes in their dielectric environment, thus making them an exceptional platform for the development of compact optical sensors. Given that these systems already operate at the shot-noise limit when used as optical sensors, their sensing capabilities can be enhanced beyond this limit by probing them with quantum light, such as squeezed or entangled states. Motivated by this goal, here, we present a comparative theoretical analysis of the quantum enhanced sensing capabilities of metallic nanohole arrays with one and two holes per unit cell. Through a detailed investigation of their optical response, we find that the two-hole array supports resonances that are narrower and stronger than its one-hole counterpart, and therefore have a higher fundamental sensitivity limit as defined by the quantum Cramér-Rao bound. We validate the optical response of the analyzed arrays with experimental measurements of the reflectance of representative samples. The results of this work advance our understanding of the optical response of these systems and pave the way for developing sensing platforms capable of taking full advantage of the resources offered by quantum states of light.
The use of containerization technology in high performance computing (HPC) workflows has substantially increased recently because it makes workflows much easier to develop and deploy. Although many HPC workflows include multiple datasets and multiple applications, they have traditionally all been bundled together into one monolithic container. This hinders the ability to trace the thread of execution, thus preventing scientists from establishing data provenance or achieving workflow reproducibility. To provide a solution to this problem, we extend the functionality of a popular HPC container runtime, Singularity. We implement both the ability to compose fine-grained containerized workflows and to execute these workflows within the Singularity runtime with automatic metadata collection. Specifically, the new functionality collects a record trail of execution and creates data provenance. The use of our augmented Singularity is demonstrated with an earth science workflow, SOMOSPIE. The workflow is composed via our augmented Singularity, which creates fine-grained containers and collects the metadata to trace, explain, and reproduce the prediction of soil moisture at a fine resolution.
We present a field-deployable microfluidic immunoassay device in response to the need for sensitive, quantitative, and high-throughput protein detection at point-of-need. The portable microfluidic system facilitates eight magnetic bead-based sandwich immunoassays from raw samples in 45 minutes. An innovative bead actuation strategy was incorporated into the system to automate multiple sample process steps with minimal user intervention. The device is capable of quantitative and sensitive protein analysis with a 10 pg/ml detection limit from interleukin 6-spiked human serum samples. We envision the reported device offering ultrasensitive point-of-care immunoassay tests for timely and accurate clinical diagnosis.
Many, if not all, Waste Management Organisation (WMO) programs will include criticality safety. As criticality safety in the long term, i.e., considered over post-closure timescales in dedicated disposal facilities, is a unique challenge for geological disposal, there is limited opportunity for sharing of experience within an individual organization/country. Therefore, sharing of experience and knowledge between WMOs will be beneficial in understanding where the approaches are similar, where they are not, and the reasons for this. To achieve this benefit, a project on Post-Closure Criticality Safety has been established through the Implementing Geological Disposal - Technology Platform, with the overall aim of facilitating the sharing of this knowledge. This project currently has 11 participating nations, including the United States, and this paper presents the current position in the United States.
In high temperature (HT) environments often encountered in geothermal wells, data rate transfers for downhole instrumentation are relatively limited due to transmission line bandwidth and insertion loss and the processing speed of HT microcontrollers. In previous research, the Sandia National Laboratories Geothermal Department obtained 3.8 Mbps data rates over 1524 m (5000 ft) of single conductor wireline cable with less than a 1×10⁻⁸ bit error rate utilizing low temperature NI™ hardware (formerly National Instruments™). Our protocol technique was a combination of orthogonal frequency-division multiplexing and quadrature amplitude modulation across the bandwidth of the single conductor wireline. This showed it is possible to obtain high data rates in low bandwidth wirelines. This paper focuses on commercial HT microcontrollers (µC), rather than low temperature NI™ modules, to enable high-speed communication in an HT environment. As part of this effort, four devices were evaluated, and an optimal device (SM320F28335-HT) was selected for its high clock rates, floating-point unit, and on-board analog-to-digital converter. A printed circuit board was assembled with the HT µC, an HT resistor digital-to-analog converter, and an HT line driver. The board was tested at the microcontroller's rated maximum temperature (210°C) for a week while transmitting through a 1524 m (5000 ft) wireline. A final test was conducted to the point of failure at elevated temperatures. This paper will discuss communication methods, achieved data rates, and hardware selection. This effort contributes to the enhancement of HT instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates.
Simple but mission-critical internet-based applications that require extremely high reliability, availability, and verifiability (e.g., auditability) could benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is normally publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. In this work, we investigate using secure multi-party computation (MPC) techniques to protect the privacy of a blockchain computation. While our main goal is to hide both the data and the computed function itself, we also consider the standard MPC setting where the function is public. We describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a blockchain MPC architecture and system. The GABLE architecture specifies the roles and capabilities of the players. GABLE includes two approaches for implementing MPC over blockchain: Garbled Circuits (GC), evaluating universal circuits, and Garbled Finite State Automata (GFSA). We formally model and prove the security of GABLE implemented over garbling schemes, a popular abstraction of GC and GFSA (Bellare et al., CCS 2012). We analyze in detail the performance (including Ethereum gas costs) of both approaches and discuss the trade-offs. We implement a simple prototype of GABLE and report on the implementation issues and experience.
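To make the garbled-circuit idea underlying the GC approach concrete, the toy sketch below garbles a single AND gate: each output-wire label is encrypted under the pair of input-wire labels for that truth-table row, so an evaluator holding one label per input wire learns only the output label. This is a teaching sketch (using zero padding to mark valid decryptions), not GABLE's actual construction.

```python
import os
import random
import hashlib
from itertools import product

LABEL = 32           # bytes per wire label
PAD = b"\x00" * 16   # marker so the evaluator can recognize a valid decryption

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def mask(ka, kb):
    # Row key derived from the two input-wire labels.
    return hashlib.sha512(ka + kb).digest()[:LABEL + len(PAD)]

def garble_and():
    # One random label per wire value; wires a, b are inputs, c is the output.
    wires = {w: (os.urandom(LABEL), os.urandom(LABEL)) for w in "abc"}
    table = [xor(mask(wires["a"][va], wires["b"][vb]),
                 wires["c"][va & vb] + PAD)
             for va, vb in product((0, 1), repeat=2)]
    random.shuffle(table)   # hide which row corresponds to which inputs
    return wires, table

def evaluate(table, label_a, label_b):
    # The evaluator tries every row; only the matching row decrypts to a
    # plaintext carrying the zero padding.
    for row in table:
        pt = xor(mask(label_a, label_b), row)
        if pt.endswith(PAD):
            return pt[:LABEL]
    raise ValueError("no row decrypted")

wires, table = garble_and()
out = evaluate(table, wires["a"][1], wires["b"][1])   # inputs a=1, b=1
```

A full garbled circuit chains such gates so that each gate's output label feeds the next gate as an input label; a production scheme would use point-and-permute rather than trial decryption.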
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power losses due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients are greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short-circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash-test or nameplate short-circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle-of-incidence, and irradiance-nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance at ∼3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
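The conversion from reference-module short-circuit current to effective irradiance described above rests on the standard proportionality of Isc to irradiance with a linear temperature correction. A minimal sketch follows; the temperature-coefficient default is a typical crystalline-silicon value, not one reported for this plant.

```python
def effective_irradiance(isc, cell_temp_c, isc_ref,
                         alpha_isc=0.0005, g_ref=1000.0, t_ref_c=25.0):
    # Translate a measured short-circuit current (A) to effective irradiance
    # (W/m^2): correct Isc back to the reference temperature, then scale by
    # the flash-test or nameplate Isc at standard test conditions.
    #   isc         measured short-circuit current (A)
    #   cell_temp_c measured cell temperature (deg C)
    #   isc_ref     flash-test or nameplate Isc at STC (A)
    #   alpha_isc   relative Isc temperature coefficient (1/deg C), assumed
    isc_at_ref = isc / (1.0 + alpha_isc * (cell_temp_c - t_ref_c))
    return g_ref * isc_at_ref / isc_ref

# A module reading its nameplate Isc at 25 C reports the reference irradiance.
g = effective_irradiance(isc=9.2, cell_temp_c=25.0, isc_ref=9.2)
```

For a bifacial reference module this single number is the combined front-plus-rear effective irradiance, which is what makes the comparison against separate POA and RPOA pyranometers meaningful.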
Conference Proceedings of the Society for Experimental Mechanics Series
Saunders, Brian E.; Vasconcellos, Rui M.G.; Kuether, Robert J.; Abdelkefi, Abdessattar
Physical systems that are subject to intermittent contact/impact are often studied using piecewise-smooth models. Freeplay is a common type of piecewise-smooth system and has been studied extensively for gear systems (backlash) and aeroelastic systems (control surfaces like ailerons and rudders). These systems can experience complex nonlinear behavior including isolated resonance, chaos, and discontinuity-induced bifurcations. This behavior can lead to undesired damaging responses in the system. In this work, bifurcation analysis is performed for a forced Duffing oscillator with freeplay. The freeplay nonlinearity in this system is dependent on the contact stiffness, the size of the freeplay region, and the symmetry/asymmetry of the freeplay region with respect to the system’s equilibrium. Past work on this system has shown that a rich variety of nonlinear behaviors is present. Modern methods of nonlinear dynamics are used to characterize the transitions in system response including phase portraits, frequency spectra, and Poincaré maps. Different freeplay contact stiffnesses are studied including soft, medium, and hard in order to determine how the system response changes as the freeplay transitions from soft contact to near-impact. Particular focus is given to the effects of different initial conditions on the activation of secondary- and isolated-resonance responses. Preliminary results show isolated resonances to occur only for softer-contact cases, regions of superharmonic resonances are more prevalent for harder-contact cases, and more nonlinear behavior occurs for higher initial conditions.
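As a concrete illustration of the class of models studied, the sketch below integrates a forced Duffing oscillator with a symmetric freeplay gap and samples once per forcing period to build a Poincaré map. All parameter values are illustrative, not those used in the paper.

```python
import numpy as np

def freeplay_force(x, k_contact=25.0, gap=0.1):
    # Symmetric freeplay: no contact force inside |x| < gap; a linear
    # contact stiffness engages once the gap is exceeded.
    if x > gap:
        return k_contact * (x - gap)
    if x < -gap:
        return k_contact * (x + gap)
    return 0.0

def rhs(t, y, zeta=0.05, k_cubic=1.0, amp=0.3, omega=1.2):
    # Forced Duffing oscillator plus the piecewise-linear freeplay force.
    x, v = y
    a = (amp * np.cos(omega * t) - 2 * zeta * v
         - x - k_cubic * x**3 - freeplay_force(x))
    return np.array([v, a])

def rk4(y0, t0, t1, dt):
    # Fixed-step 4th-order Runge-Kutta integration of the smooth pieces.
    n = int(round((t1 - t0) / dt))
    y, t = np.asarray(y0, float), t0
    out = np.empty((n + 1, 2))
    out[0] = y
    for i in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + dt / 2, y + dt / 2 * k1)
        k3 = rhs(t + dt / 2, y + dt / 2 * k2)
        k4 = rhs(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out[i + 1] = y
    return out

# Integrate 150 forcing periods, then keep one (x, v) sample per period
# after discarding a transient: the Poincare section used for bifurcation
# diagrams and for detecting isolated or secondary resonances.
omega = 1.2
period = 2 * np.pi / omega
dt = period / 200
traj = rk4([0.5, 0.0], 0.0, 150 * period, dt)
poincare = traj[::200][100:]
```

Sweeping the contact stiffness, gap size, or initial conditions and re-collecting such Poincaré samples is the basic workflow behind the bifurcation analysis the abstract describes; a production study would also locate the crossing times of the freeplay boundaries rather than stepping blindly across them.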
A proper understanding of the complex physics associated with nonlinear dynamics can improve the accuracy of predictive engineering models and provide a foundation for understanding nonlinear response during environmental testing. Several researchers and studies have previously shown how localized nonlinearities can influence the global vibration modes of a system. The current work builds upon the study of a demonstration aluminum aircraft whose mock pylon contains an intentionally designed, localized nonlinearity. To simplify identification of the localized nonlinearity, previous work developed a simplified experimental setup to collect data for the isolated pylon mounted to a stiff fixture. This study builds on those test results by correlating a multi-degree-of-freedom model of the pylon to identify the appropriate model form and parameters of the nonlinear element. The experimentally measured backbone curves are correlated with a nonlinear Hurty/Craig-Bampton (HCB) reduced order model (ROM) using the calculated nonlinear normal modes (NNMs). Following the calibration, the nonlinear HCB ROM of the pylon is attached to a linear HCB ROM of the wing to predict the NNMs of the next-level wing-pylon assembly as a pre-test analysis, to better understand the significance of the localized nonlinearity on the global modes of the wing structure.
Two techniques were developed to allow users of microfabricated surface ion traps to detect RF breakdown as soon as it occurs, without needing to remove devices from vacuum and inspect them under a microscope.
The exceedingly high computational demands of simulating mechanical response in complex engineered systems with finely resolved finite element models create a critical need to optimally reduce the fidelity of such simulations. The minimum required fidelity is constrained by error tolerances on the simulation results, but error bounds are often impossible to obtain a priori. One such source of error is the variability of material properties within a body due to spatially non-uniform processing conditions and inherent stochasticity in the material microstructure. This study seeks to quantify the effects of microstructural heterogeneity on component- and system-scale performance to aid in the choice of an appropriate material model and spatial resolution for finite element analysis.
This user’s guide documents capabilities in Sierra/SolidMechanics which remain “in-development” and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.4 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
In 2016, the National Nuclear Security Administration (NNSA) initiated the Minority Serving Institutions Partnership Program (MSIPP) targeting Tribal Colleges and Universities (TCUs) to offer programs that will prepare students for technical careers in NNSA’s laboratories and production plants. The MSIPP consortium approach is as follows: 1) align investments at the college and university level to develop the curriculum and workforce needed to support NNSA’s nuclear weapon enterprise mission, and 2) enhance research and education at under-represented colleges and universities. The first TCU consortium that MSIPP launched was known as the Advanced Manufacturing Network Initiative (AMNI), whose purpose was to develop additive manufacturing (AM) learning opportunities. The AMNI consortium consisted of Bay Mills Community College, Cankdeska Cikana Community College, Navajo Tech University, Salish Kootenai Community College, Turtle Mountain Community College, and United Tribes Technical College. In 2016, the American Indian Higher Education Consortium (AIHEC), the AMNI consortium, and the Southwestern Indian Polytechnic Institute (SIPI), in collaboration with Sandia National Laboratories and using a grant from NNSA, hosted the first TCU Advanced Manufacturing Technology Summer Institute (TCU AMTSI). The AMNI consortium will officially end in September 2022.
However, building on the successes of AMNI, in FY22 NNSA’s MSIPP launched three additional consortia: (1) the Indigenous Mutual Partnership to Advance Cybersecurity Technology (IMPACT), which focuses on STEM and cybersecurity; (2) the Advanced Synergistic Program for Indigenous Research in Engineering (ASPIRE), which focuses on STEM and the electrical and mechanical engineering skill sets needed for renewable and distributed energy systems; and (3) the Partnership for Advanced Manufacturing Education and Research (PAMER), which focuses on developing and maintaining a sustainable pathway for a highly trained, next-generation additive manufacturing workforce and a corresponding community of subject matter experts for NNSA enterprises. The following report summarizes the status of the ASPIRE program during this quarter.
We examine coupling into azimuthal slots on an infinite cylinder with an infinite-length interior cavity, operating both at the fundamental cavity modal frequencies (with small slots and a resonant slot) and at higher frequencies. The coupling model considers both radiation on an infinite cylindrical exterior and a half-space approximation. Bounding calculations based on maximum slot power reception and interior power balance are also discussed in detail and compared with the prior calculations. For higher frequencies, limitations on matching are imposed by restricting the load's ability to shift the slot operation to the nearest slot resonance; this is done in combination with maximizing the power reception as a function of angle of incidence. Finally, slot power mismatch based on limited cavity load quality factor is considered below the first slot resonance.
The Arroyo Seco Improvement Program (ASIP) is intended to provide active channel improvements and stream zone management activities that will reduce current flood and erosion risk while providing additional and improved habitat for critical species that may use the Arroyo Seco at the United States Department of Energy (DOE), Sandia National Laboratories, California (SNL/CA) location. The objectives of the ASIP are to: correct existing channel stability problems associated with existing arroyo structures (i.e., bridges, security grates, utility crossings, and drain structures); correct bank erosion and provide protection against future erosion; reduce the risk of future flooding; and provide habitat improvement and creation of a mitigation credit for site development and management activities.
Sandia provided technical assistance to Kit Carson Electric Cooperative (KCEC) to assess the technical merits of a proposed community resilience microgrid project in the Village of El Rito, New Mexico (NM), around the campus of Northern New Mexico College (NNMC). A conceptual microgrid analysis was performed, considering both a campus and a community-wide approach. The analysis results provided conceptual microgrid configurations, optimized according to the performance metrics defined. The campus microgrid was studied independently, and many conceptual microgrid solutions were provided that met the performance requirements. Considering that the existing 1.5 MW PV system on campus far exceeds the simulated campus peak load and energy demand, a small battery installation was deemed sufficient to support the campus microgrid goals. Following the analysis and consultation, it was determined that the core Resilient El Rito team will need to further investigate the results for additional economic and environmental considerations to continue toward the best approach for their goals and needs.
This document presents tests from the Sierra Structural Mechanics verification test suite. Each of these tests is run nightly with the Sierra/SD code suite, and the results are checked against the known analytic result. For each test presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the Sierra/SD results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems.
This report describes recommended abuse testing procedures for rechargeable energy storage systems (RESSs) for electric vehicles. This report serves as a revision to the USABC Electrical Energy Storage System Abuse Test Manual for Electric and Hybrid Electric Vehicle Applications (SAND99-0497).
This SAND Report provides an overview of AniMACCS, the animation software developed for the MELCOR Accident Consequence Code System (MACCS). It details what users need to know in order to successfully generate animations from MACCS results. It also includes information on the capabilities, requirements, testing, limitations, input settings, and problem-reporting instructions for AniMACCS version 1.3.1. Supporting information is provided in the appendices, such as guidance on the required input files when using WinMACCS and when running MACCS from the command line.
Based on the rationale presented, nuclear criticality is improbable after salt creep causes compaction of criticality control overpacks (CCOs) disposed at the Waste Isolation Pilot Plant, an operating repository in bedded salt for the disposal of transuranic (TRU) waste from atomic energy defense activities. For most TRU waste, the possibility of post-closure criticality is exceedingly small, either because the salt neutronically isolates TRU waste canisters or because closure of a disposal room from salt creep does not sufficiently compact the low mass of fissile material. The criticality potential has been updated here because of the introduction of CCOs, each of which may be used to dispose of up to 380 fissile gram equivalents (FGE) of plutonium-239. The criticality potential is evaluated through high-fidelity geomechanical modeling of a disposal room filled with CCOs under two representative conditions: (1) a large salt block fall, and (2) gradual salt compaction (assuming no brine seepage and subsequent gas generation, to permit maximum room closure). Geomechanical models of rock fall demonstrate that three tiers of CCOs are not greatly disrupted. Geomechanical models of gradual room closure from salt creep predict irregular arrays of closely packed CCOs after 1000 years, when room closure has asymptotically approached maximum compaction. Criticality models of spheres and cylinders of 380 FGE of plutonium (as oxide) at the predicted irregular spacing demonstrate that an array of CCOs is not critical when surrounded by salt and magnesium oxide, provided the amount of hydrogenous material shipped in the CCO (usually water and plastics) is controlled or boron carbide (a neutron poison) is mixed with the fissile contents.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) or level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton’s method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity and dynamic load balancing are some of Aria’s more advanced capabilities.
Study looks at the effect that failed links have on the throughput of HPC systems: Which workloads are most affected? How many links need to be down before the throughput of the machine is noticeably affected?
The resonant plate shock test is a dynamic test of a mid-field pyroshock environment in which a projectile is struck against a plate. The structure undergoing the simulated field shock is mounted to the plate. The plate resonates when struck and provides a two-sided shock representative of the shock observed in the field. This test simulates a shock in a single coordinate direction for components seeking evidence that they will survive a similar or lesser shock when deployed in their operating environment. However, testing one axis at a time presents many challenges: the true environment is a multi-axis environment; the test environment exhibits strong off-axis motion when motion in only one axis is desired; and multiple fixtures are needed for a single test series. It would be advantageous if a single test could be developed that exercises the multi-axis environment simultaneously. To design such a test, a model must be developed and validated. The model can be iterated in design and configuration until the specified multi-axis environment is met, and the test can then execute the model-driven design. This report discusses the resonant plate model needed to design future tests and the steps and methods used to obtain the model. It also details aspects of the resonant plate test discovered during model development that aid in our understanding of the test.
Water and climate change pose many potential challenges to the electric power system. Substantial water is withdrawn every day to support thermoelectric power generating unit operations, and changes to water supply have the potential to affect generation dispatch. Climate change can accelerate growing demand for electricity, which can necessitate additional generating capacity, often in locations with limited water supply. Drought conditions also threaten thermoelectric power plant operations due to streamflow and reservoir levels dropping below intake structures, or water temperatures exceeding a power plant's permitted operating conditions. Here we explore how future climate change might influence decisions related to electricity capacity expansion planning in the Electric Reliability Council of Texas (ERCOT) region using a multi-model framework. Specifically, water resource modeling is used to simulate climate impacts on the future water supply for thermoelectric and hydropower generation for four future climate projections. Separately, temperature impacts on electricity load are evaluated for these scenarios. These climate impacts are applied to five alternative electricity futures in an electricity capacity expansion model that projects future generation and transmission capacity additions in ERCOT. Results indicate that climate has a measurable influence on future generation and transmission capacity needs, with temperature-driven increases in peak and average load resulting in 5-15 GW of additional generating capacity and up to 1 GW of additional transmission capacity. The additional capacity is a diverse mix of PV, natural gas, and wind, depending on the makeup of economic, policy, and technology assumptions. Climate impacts increase total system costs by 2-5%, while the marginal cost of energy and emissions are not affected substantially by climate change effects.
Using the power balance method, we estimate the maximum electric field on a conducting wall of a cavity containing an interior structure supporting eccentric coaxial modes, in the frequency regime where the resonant modes are isolated from each other.
High-temperature particle receivers are being pursued to enable next-generation concentrating solar thermal power (CSP) systems that can achieve higher temperatures (>700 °C) to enable more efficient power cycles, lower overall system costs, and emerging CSP-based process-heat applications. The objective of this work was to develop characterization methods to quantify the particle and heat losses from the open aperture of the particle receiver. Novel camera-based imaging methods were developed and applied to both laboratory-scale and larger 1 MWt on-sun tests at the National Solar Thermal Test Facility in Albuquerque, New Mexico. Validation of the imaging methods was performed using gravimetric and calorimetric methods. In addition, conventional particle-sampling methods using volumetric particle-air samplers were applied to the on-sun tests to compare particle emission rates with regulatory standards for worker safety and pollution. Novel particle-sampling methods using 3-D printed tipping buckets and tethered balloons were also developed and applied to the on-sun particle-receiver tests. Finally, models were developed to simulate the impact of particle size and wind on particle emissions and concentrations as a function of location. Results showed that particle emissions and concentrations were well below regulatory standards for worker safety and pollution. In addition, estimated particle temperatures and advective heat losses from the camera-based imaging methods correlated well with measured values during the on-sun tests.
Neural operators have recently become popular tools for designing solution maps between function spaces in the form of neural networks. Unlike classical scientific machine learning approaches, which learn the parameters of a known partial differential equation (PDE) for a single instance of the input parameters at a fixed resolution, neural operators approximate the solution map of a family of PDEs [6, 7]. Despite their success, uses of neural operators have so far been restricted to relatively shallow neural networks and confined to learning hidden governing laws. In this work, we propose a novel nonlocal neural operator, which we refer to as the nonlocal kernel network (NKN), that is resolution independent, characterized by deep neural networks, and capable of handling a variety of tasks such as learning governing equations and classifying images. Our NKN stems from the interpretation of the neural network as a discrete nonlocal diffusion-reaction equation that, in the limit of infinite layers, is equivalent to a parabolic nonlocal equation, whose stability is analyzed via nonlocal vector calculus. The resemblance to integral forms of neural operators allows NKNs to capture long-range dependencies in the feature space, while the continuous treatment of node-to-node interactions makes NKNs resolution independent. The resemblance to neural ODEs, reinterpreted in a nonlocal sense, and the stable network dynamics between layers allow the generalization of NKN’s optimal parameters from shallow to deep networks, enabling the use of shallow-to-deep initialization techniques [8]. Our tests show that NKNs outperform baseline methods in both learning governing equations and image classification tasks and generalize well to different resolutions and depths.
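The layer structure described above, a neural network read as an explicit step of a discrete nonlocal diffusion-reaction equation, can be sketched on a 1-D grid as follows. The kernel, widths, and weights are arbitrary placeholders, not the paper's trained architecture.

```python
import numpy as np

def make_kernel_matrix(x, kernel):
    # Quadrature approximation of the nonlocal integral operator on a
    # uniform grid: K[i, j] = k(x_i, x_j) * dx.
    dx = x[1] - x[0]
    return kernel(x[:, None], x[None, :]) * dx

def nkn_layer(u, W, K, b, tau=0.1):
    # One layer = one explicit step of a nonlocal diffusion-reaction update:
    # u_{l+1} = u_l + tau * sigma(W u_l + K u_l + b).
    return u + tau * np.tanh(W @ u + K @ u + b)

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0.0, 1.0, n)
K = make_kernel_matrix(x, lambda xi, yj: np.exp(-8.0 * np.abs(xi - yj)))
W = 0.1 * rng.standard_normal((n, n))
b = np.zeros(n)

u = np.sin(2 * np.pi * x)
for _ in range(10):            # ten layers sharing the same parameters
    u = nkn_layer(u, W, K, b)
```

Because `K` is a quadrature of a continuous kernel, the same kernel function can be re-discretized on a finer or coarser grid, which is the mechanism behind the resolution independence claimed in the abstract; the shared-parameter residual form is likewise what makes a shallow network a natural initializer for a deeper one.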
Numerical simulations of pressure-shear loading of a granular material are performed using the shock physics code CTH. A simple mesoscale model for the granular material is used that consists of a randomly packed arrangement of solid circular or spherical grains of uniform size separated by vacuum. The grain material is described by a simple shock equation of state, elastic perfectly plastic strength model, and fracture model with baseline parameters for WC taken from previous mesoscale modeling work. Simulations using the baseline material parameters are performed at the same initial conditions of pressure-shear experiments on dry WC powders. Except for some localized flow regions appearing in simulations with an approximate treatment of sliding interfaces among grains, the samples respond elastically during shear, which is in contrast to experimental observations. By extending the simulations to higher shear wave amplitudes, macroscopic shear failure of the simulated samples is observed with the shear strength increasing with increasing stress confinement. The shear strength is also found to be strongly dependent on the grain interface treatment and on the fracture stress of the grains, though the variation in shear strength due to fracture stress decreases with increasing stress confinement. At partial compactions, the transverse velocity histories show strain-hardening behavior followed by formation of a shear interface that extends through the transverse dimensions of the sample. Near full compaction, no strain hardening is observed and, instead, the sample transitions sharply from an elastic response to formation of an internal shear interface. Agreement with experiment is shown to worsen with increasing confinement stress with simulations overpredicting the shear strengths measured in experiment. The source of the disagreement can be ultimately attributed to the Eulerian nature of the simulations, which do not treat contact and fracture realistically.
This study investigates the impact that operations and market strategy have on the design and value of an energy storage system at three levels of the facility: the cell level, the system level, and the project level. The study provides insights for developers, capital providers, customers, and policy makers into the impact that different operational strategies have on the effectiveness of energy storage systems in today's emerging market. Energy storage systems can be used for a variety of usage profiles, with the choice having a profound impact on their performance, lifespan, and revenue potential. Most evaluations of application stacking consider only the possible revenue potential, without accounting for the increased costs and the potential for major damage to the cells. Evaluating the impact of operational choices is critical to understanding the risk-adjusted return from an energy storage project investment. This is the fifth study in the Energy Storage Financing Study series, which is designed to investigate challenges surrounding the financing of energy storage projects in the U.S., promoting greater technology and project risk transparency, reducing project transaction costs, and supporting a level playing field for innovative energy storage technologies.
Reno, Matthew J.; Blakely, Logan; Trevizan, Rodrigo D.; Pena, Bethany D.; Lave, Matt; Azzolini, Joseph A.; Yusuf, Jubair; Jones, Christian B.; Furlani Bastos, Alvaro; Chalamala, Rohit; Korkali, Mert; Sun, Chih-Che; Donadee, Jonathan; Stewart, Emma M.; Donde, Vaibhav; Peppanen, Jouni; Hernandez, Miguel; Deboever, Jeremiah; Rocha, Celso; Rylander, Matthew; Siratarnsophon, Piyapath; Grijalva, Santiago; Talkington, Samuel; Gomez-Peces, Cristian; Mason, Karl; Vejdan, Sadegh; Khan, Ahmad U.; Mbeleg, Jordan S.; Ashok, Kavya; Divan, Deepak; Li, Feng; Therrien, Francis; Jacques, Patrick; Rao, Vittal; Francis, Cody; Zaragoza, Nicholas; Nordy, David; Glass, Jim
This report summarizes the work performed under a project funded by U.S. DOE Solar Energy Technologies Office (SETO) to use grid edge measurements to calibrate distribution system models for improved planning and grid integration of solar PV. Several physics-based data-driven algorithms are developed to identify inaccuracies in models and to bring increased visibility into distribution system planning. This includes phase identification, secondary system topology and parameter estimation, meter-to-transformer pairing, medium-voltage reconfiguration detection, determination of regulator and capacitor settings, PV system detection, PV parameter and setting estimation, PV dynamic models, and improved load modeling. Each of the algorithms is tested using simulation data and demonstrated on real feeders with our utility partners. The final algorithms demonstrate the potential for future planning and operations of the electric power grid to be more automated and data-driven, with more granularity, higher accuracy, and more comprehensive visibility into the system.
The Fusion Energy Sciences office supported “A Pilot Program for Research Traineeships to Broaden and Diversify Fusion Energy Sciences” at Sandia National Laboratories during the summer of 2021. This pilot project was motivated in part by the Fusion Energy Sciences Advisory Committee report’s observation that “The multidisciplinary workforce needed for fusion energy and plasma science requires that the community commit to the creation and maintenance of a healthy climate of diversity, equity, and inclusion, which will benefit the community as a whole and the mission of FES”. The pilot project was designed to work with North Carolina A&T (NCAT) University and leverage SNL efforts in FES to engage underrepresented students in developing and assessing advanced material solutions for plasma-facing components in fusion systems. The intent was to create an environment conducive to the development of a sense of belonging amongst participants, foster a strong sense of physics identity among the participants, and provide financial support to enable students to advance academically while earning money. The purpose of this assessment is to review what worked well and the lessons that can be learned. We review the implementation and execution of the pilot, describe successes and areas for improvement, and propose a no-cost extension of the pilot project to apply these lessons and continue engagement activities in the summer of 2022.
Monitoring cavern leaching after each calendar year of oil sales is necessary to support cavern stability efforts and long-term availability for oil drawdowns in the U.S. Strategic Petroleum Reserve. Modeling results from the SANSMIC code and recent sonars are compared to show projected changes in the cavern’s geometry due to leaching from raw-water injections. This report aims to give background on the importance of monitoring cavern leaching and provide a detailed explanation of the process used to create the leaching plots used to monitor cavern leaching. In the past, generating leaching plots for each cavern in a given leaching year was done manually, and every cavern had to be processed individually. A Python script, compatible with Earth Volumetric Studio, was created to automate most of the process. The script makes a total of 26 plots per cavern to show leaching history, axisymmetric representation of leaching, and SANSMIC modeling of future leaching. The script runs in about one hour, replacing the 40-50 hours previously required for the manual monitoring process.
In March 2021, a functional area drill was held at the Remote Sensing Laboratory–Nellis that focused on using CBRNResponder and the Digital Field Monitoring (DFM) tablets for sample hotline operations and the new paper Sample Control Forms (SCFs) for sample collection. Participants included staff trained and billeted as sample control specialists and Consequence Management Response Team (CMRT) field monitoring personnel. Teams were able to successfully gather and transfer samples to the sample control hotline staff through the manual process, though there were several noted areas for improvement. In July and October 2021, two additional functional area drills were held at Sandia National Laboratories that focused on field sample collection and custody transfer at the sample control hotline for the Consequence Management (CM) Radiological Assistance Program (RAP). The overarching goal of the drills was to evaluate the current CM process for sample collection, sample drop off, and sample control using the CBRNResponder mobile and web-based applications. The July 2021 drill had an additional focus to have a subset of samples analyzed by the local analytical laboratory, the Radiation Protection Sample Diagnostics (RPSD) laboratory, to evaluate the Laboratory Access portal on CBRNResponder. All three drills accomplished their objectives; however, several issues were noted (Observations: 25 Urgent, 29 Important, and 22 Improvement Opportunities). The observations were prioritized according to their impact on the mission as well as categorized to align with the programmatic functional area required to address the issue. This report provides additional detail on each observation for skillset/program leads and software developers to consider for future improvement or mandatory efforts.
Understanding the lightning science behind the lightning detected by remote sensing systems is crucial to Sandia’s remote sensing program. Improved understanding of lightning properties can lead to improvements of onboard and/or ground-based background signal discrimination.
Organizations that monitor for underground nuclear explosive tests are interested in techniques that automatically characterize mining blasts to reduce the human analyst effort required to produce high-quality event bulletins. Waveform correlation is effective in finding similar waveforms from repeating seismic events, including mining blasts. In this study we use waveform template event metadata to seek corroborating detections from multiple stations in the International Monitoring System of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization. We build upon events detected in a prior waveform correlation study of mining blasts in two geographic regions, Wyoming and Scandinavia. Using a set of expert analyst-reviewed waveform correlation events that were declared to be true positive detections, we explore criteria for choosing the waveform correlation detections that are most likely to lead to bulletin-worthy events and reduction of analyst effort.
Organizations that monitor for underground nuclear explosive tests are interested in techniques that automatically characterize recurring events such as aftershocks to reduce the human analyst effort required to produce high-quality event bulletins. Waveform correlation is a technique that is effective in finding similar waveforms from repeating seismic events. In this study, we apply waveform correlation in combination with template event metadata to two aftershock sequences in the Middle East to seek corroborating detections from multiple stations in the International Monitoring System of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization. We use waveform templates from stations that are within regional distance of aftershock sequences to detect subsequent events, then use template event metadata to discover what stations are likely to record corroborating arrival waveforms for recurring aftershock events at the same location, and develop additional waveform templates to seek corroborating detections. We evaluate the results with the goal of determining whether applying the method to aftershock events will improve the choice of waveform correlation detections that lead to bulletin-worthy events and reduction of analyst effort.
This is an addendum to the Sierra/SolidMechanics 5.4 User’s Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State’s International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra/SolidMechanics 5.4 User’s Guide should be referenced for most general descriptions of code capability and use.
Currently, the solar industry is operating with little application-specific guidance on how to protect and defend their systems from cyberattacks. This 3-year Department of Energy (DOE) Solar Energy Technologies Office-funded project helped advance the distributed energy resource (DER) cybersecurity state-of-the-art by (a) bolstering industry awareness of cybersecurity concepts, risks, and solutions through a webinar series and (b) developing recommendations for DER cybersecurity standards to improve the security performance of DER products and networks. Drafting DER standards is a lengthy, consensus-based process requiring effective leadership and stakeholder participation. This project was designed to reduce standard and guide writing times by creating well-researched recommendations that could act as a starting place for national and international standards development organizations. Working within the SunSpec/Sandia DER Cybersecurity Workgroup, the team produced guidance for DER cybersecurity certification, communication protocol standards, network architectures, access control, and patching. The team also led subgroups within the IEEE P1547.3 Guide for Cybersecurity of Distributed Energy Resources Interconnected with Electric Power Systems committee and pushed a draft to ballot in October 2021.
Downtown low-voltage (LV) distribution networks are generally protected with network protectors that detect faults by restricting reverse power flow out of the network. This creates challenges for protecting the system as new smart grid technologies and distributed generation are installed. This report summarizes well-established methods for the control and protection of LV secondary network systems and spot networks, including operating features of network relays. Some current challenges and findings are presented from interviews with three utilities: PHI PEPCO, Oncor Energy Delivery, and Consolidated Edison Company of New York. Opportunities for technical exploration are presented with an assessment of the importance or value and the difficulty or cost. Finally, this leads to some recommendations for research to improve protection in secondary networks.
Early in 2018, Sandia recognized that the Microsystems Engineering, Science and Applications (MESA) Programmatic Asset Lifecycle Planning capability was unpredictable, inconsistent, reactive, and unable to provide strong linkage to the sponsor's needs. The impetus for this report is to share lessons learned from MESA's journey toward maturing this capability. This report describes re-building the foundational elements of MESA's Programmatic Asset Lifecycle Planning capability using a risk-based, Multi-Criteria Decision Analysis (MCDA) approach. To begin, MESA's decades-old Piano Chart + Ad Hoc Hybrid Methodology is described with a narrative of its strengths and weaknesses. Then its replacement, the MCDA/Analytic Hierarchy Process, is introduced with a discussion of its strengths and weaknesses. To generate a realistic Programmatic Asset Lifecycle Planning budget outlook, MESA used its rolling 20-year Extended Life Program Plan (MELPP) as a baseline. The new MCDA risk-based prioritization methodology implements DOE/NNSA guidelines for prioritization of DOE activities and provides a reliable, structured framework for combining expert judgement and stakeholder preferences according to an established scientific technique. An in-house Hybrid Decision Support System (HDSS) software application was developed to facilitate production of several key deliverables. The application enables analysis of the prioritization decisions with charts that display and link MESA's funding requests to the stakeholders' priorities, strategic objectives, nuclear deterrence programs, MESA priorities, and much more.
The Sandia Optical Fringe Analysis Slope Tool (SOFAST) is a tool developed at Sandia to measure the surface slope of concentrating solar power optics. This tool has largely remained of research quality over the past few years. Since SOFAST is important to ongoing tests at Sandia as well as of interest to others outside Sandia, there is a desire to bring SOFAST up to professional software standards. The goal of this effort was to make progress in several broad areas: code quality, sample data collection, and validation and testing. Substantial progress was made in each of these areas, and SOFAST is now a much more professional-grade tool. There are, however, some areas of improvement that could not be addressed in the timeframe of this work and will be addressed in the continuation of this effort.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
Geothermal energy has been underutilized in the U.S., primarily due to the high cost of drilling in the harsh environments encountered during the development of geothermal resources. Drilling depths can approach 5,000 m with temperatures reaching 170 °C. In situ geothermal fluids are up to ten times more saline than seawater and highly corrosive, and hard rock formations often exceed 240 MPa compressive strength. This combination of extreme conditions pushes the limits of most conventional drilling equipment. Furthermore, enhanced geothermal systems are expected to reach depths of 10,000 m and temperatures exceeding 300 °C. To address these drilling challenges, Sandia developed a proof-of-concept tool called the auto indexer under an annual operating plan task funded by the Geothermal Technologies Program (GTP) of the U.S. Department of Energy Geothermal Technologies Office. The auto indexer is a relatively simple, elastomer-free motor that was shown previously to be compatible with pneumatic hammers in bench-top testing. Pneumatic hammers can improve penetration rates and potentially reduce drilling costs when deployed in appropriate conditions. The current effort, also funded by DOE GTP, increased the technology readiness level of the auto indexer, producing a scaled prototype for drilling larger diameter boreholes using pneumatic hammers. The results presented herein include design details, modeling and simulation results, and testing results, as well as background on percussive hammers and downhole rotation.
Zhang, Chen; Jacobson, Clas; Zhang, Qi; Biegler, Lorenz T.; Eslick, John C.; Zamarripa, Miguel A.; Stinchfield, Georgia; Siirola, John D.; Laird, Carl D.
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.1.2 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
Sandia National Laboratories has been tasked to operate and maintain the National Solar Thermal Test Facility (NSTTF) located on Kirtland Air Force Base near Albuquerque, NM. The NSTTF provides established test platforms and highly experienced researchers and technologists in the fields of Concentrating Solar Technologies (CST) and Concentrating Solar Power (CSP). This three-year project seeks to maintain the NSTTF for development, testing, and application of new CSP technologies that are instrumental in advancing the state-of-the-art in support of SunShot and Generation 3 CSP technology goals. In turn, these technologies will form the foundation of the global CSP industry and continue to advance the technology to new levels of efficiency, higher temperatures, lower costs, lower risk, and higher reliability.
We measured the Hugoniot, Hugoniot elastic limit (HEL), and spallation strength of laser powder bed fusion (LPBF) AlSi10Mg via uniaxial plate-impact experiments to stresses greater than 13 GPa. Despite its complex anisotropic microstructure, the LPBF AlSi10Mg did not exhibit significant orientation dependence or sample-to-sample variability in these measured quantities. We found that the Hugoniot response of the LPBF AlSi10Mg is similar to that of other Al-based alloys and is well approximated by a linear relationship: u_s = 5.49 + 1.39 u_p. Additionally, the measured HELs ranged from 0.25 to 0.30 GPa and spallation strengths ranged from 1.16 to 1.45 GPa, consistent with values reported in other studies of LPBF AlSi10Mg and Al-based alloys. Furthermore, strain-rate and stress dependence of the spallation strength were also observed.
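As a quick illustration, the reported linear Hugoniot fit can be evaluated directly; the sketch below assumes velocities in km/s (the standard convention for such fits, though the abstract does not state units) and uses hypothetical function and variable names:

```python
def shock_velocity(up):
    """Shock velocity u_s from particle velocity u_p via the linear
    Hugoniot fit u_s = c0 + s * u_p reported for LPBF AlSi10Mg.

    Units of km/s are assumed here; the abstract does not state them.
    """
    c0 = 5.49  # intercept of the fit (km/s assumed)
    s = 1.39   # dimensionless slope of the fit
    return c0 + s * up

# Example: a particle velocity of 1.0 km/s gives u_s = 5.49 + 1.39 = 6.88
print(shock_velocity(1.0))  # → 6.88
```

This is simply the stated fit restated as code, not an independent model of the material response.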
Lithium/fluorinated graphite (Li/CFx) primary batteries show great promise for applications in a wide range of energy storage systems due to their high energy density (>2100 Wh kg–1) and low self-discharge rate (<0.5% per year at 25 °C). While the electrochemical performance of the CFx cathode is indeed promising, the discharge reaction mechanism is not thoroughly understood to date. In this article, a multiscale investigation of the CFx discharge mechanism is performed using a novel cathode structure to minimize the carbon and fluorine additives for precise cathode characterizations. Titration gas chromatography, X-ray diffraction, Raman spectroscopy, X-ray photoelectron spectroscopy, scanning electron microscopy, cross-sectional focused ion beam, high-resolution transmission electron microscopy, and scanning transmission electron microscopy with electron energy loss spectroscopy are utilized to investigate this system. Results show no metallic lithium deposition or intercalation during the discharge reaction. The discharge products are crystalline lithium fluoride particles, <10 nm in size, distributed uniformly within the CFx layers, together with carbon of reduced sp2 content similar to a hard-carbon structure. This article deepens the understanding of CFx as a high energy density cathode material and highlights the need for future investigations on primary battery materials to advance performance.
Despite there being an infinite variety of types of flow, most rheological studies focus on a single type such as simple shear. Using discrete element simulations, we explore bulk granular systems in a wide range of flow types at large strains and characterize invariants of the stress tensor for different inertial numbers and interparticle friction coefficients. We identify a strong dependence on the type of flow, which grows with increasing inertial number or friction. Standard models of yielding, repurposed to describe the dependence of the stress on flow type in steady-state flow and at finite rates, are compared with data.
The How To Manual supplements the User’s Manual and the Theory Manual. The goal of the How To Manual is to reduce learning time for complex end-to-end analyses. These documents are intended to be used together. See the User’s Manual for a complete list of the options for a solution case. All the examples are part of the Sierra/SD test suite. Each runs as is. The organization is similar to the other documents: How to run, Commands, Solution cases, Materials, Elements, Boundary conditions, and then Contact. The table of contents and index are indispensable. The Geometric Rigid Body Modes section is shared with the User’s Manual.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user's guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the User's Manual. Many of the constructs in Sierra/SD are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources including general notes, a programmer_notes manual, the user's notes and of course the material in the open literature.
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAMÉ advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of capabilities, features, and responses, however, comes at the cost of considerable complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAMÉ library in application, this effort seeks to document and verify the various models in the LAMÉ library. Specifically, the broader strategy, organization, and interface of the library itself is first presented. The physical theory, numerical implementation, and user guide for a large set of models is then discussed. Importantly, a number of verification tests are performed with each model to not only build confidence in the model itself but also highlight some important response characteristics and features that may be of interest to end-users. Finally, in looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
The purpose of this Seedling project is to couple a marine renewable energy (MRE) dynamics simulation software with the soil-foundation models in the OC6 Phase II project [Bergua et al., 2021] and evaluate the software’s performance. This is a first step to accurately evaluating soil-foundation impacts on other types of MRE, like wave or current energy converters (WECs, CECs). OC6 Phase II compares offshore wind turbine (OWT) simulations using several different soil-foundation models to identify and fill key gaps in soil-foundation analyses. WEC-Sim was chosen to model the OC6 Phase II offshore wind turbine and various load cases because of its adaptability, accuracy of hydrodynamic loads, and ability to apply an arbitrary wind loading. Of the four methods used in OC6, the apparent fixity soil-foundation method was coupled with WEC-Sim. Technical challenges with flexible hydrodynamic bodies, added mass, and external function libraries inhibited the ability to compare the WEC-Sim results to other OC6 participants. These challenges required that the WEC-Sim model of the OC6 OWT use a combination of rigid and flexible bodies to ensure a numerically stable solution. The rigid monopile creates a stiffer system and causes smaller amplitude motion under hydrodynamic loading and a higher dominant frequency of motion under wind loading. These discrepancies are expected based on the increased stiffness of the WEC-Sim case.
Rapid molecular-weight growth of hydrocarbons occurs in flames, in industrial synthesis, and potentially in cold astrochemical environments. A variety of high- and low-temperature chemical mechanisms have been proposed and confirmed, but more facile pathways may be needed to explain observations. We provide laboratory confirmation in a controlled pyrolysis environment of a recently proposed mechanism, radical–radical chain reactions of resonance-stabilized species. The recombination reaction of phenyl (c-C6H5) and benzyl (c-C6H5CH2) radicals produces both diphenylmethane and diphenylmethyl radicals, the concentration of the latter increasing with rising temperature. A second phenyl addition to the product radical forms both triphenylmethane and triphenylmethyl radicals, confirming the propagation of radical–radical chain reactions under the experimental conditions of high temperature (1100–1600 K) and low pressure (ca. 3 kPa). Similar chain reactions may contribute to particle growth in flames, the interstellar medium, and industrial reactors.
The canonical beam splitter, a fundamental building block of quantum optical systems, is a reciprocal element. It operates on forward- and backward-propagating modes in the same way, regardless of direction. The concept of nonreciprocal quantum photonic operations, by contrast, could be used to transform quantum states in a momentum- and direction-selective fashion. Here we demonstrate the basis for such a nonreciprocal transformation in the frequency domain through intermodal Bragg scattering four-wave mixing (BSFWM). Since the total number of idler and signal photons is conserved, the process can preserve coherence of quantum optical states, functioning as a nonreciprocal frequency beam splitter. We explore the origin of this nonreciprocity and find that the phase-matching requirements of intermodal BSFWM produce an enormous asymmetry (76×) in the conversion bandwidths for forward and backward configurations, yielding ∼25 dB of nonreciprocal contrast over several hundred GHz. We also outline how the demonstrated efficiencies (∼10⁻⁴) may be scaled to near-unity values with readily accessible powers and pumping configurations for applications in integrated quantum photonics.