Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP).
A method is presented to detect clear-sky periods in plane-of-array, time-averaged irradiance data, based on the algorithm originally described by Reno and Hansen. We show that this new method improves on the state of the art by providing accurate detection at longer data intervals and by detecting clear periods in plane-of-array data, which is novel. We illustrate how accurate determination of clear-sky conditions helps to eliminate data noise and bias in the assessment of the long-term performance of PV plants.
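As a minimal illustration of the underlying Reno-Hansen-style detection, the sketch below uses the open-source pvlib implementation on modeled global horizontal irradiance; the plane-of-array extension and long-interval handling described above are not shown, and the site, time range, and thresholds are illustrative assumptions.

```python
# Minimal sketch of Reno-Hansen-style clear-sky detection using pvlib.
# The plane-of-array extension and long-interval tuning described in the
# abstract are not shown; site parameters and thresholds are illustrative.
import pandas as pd
import pvlib

site = pvlib.location.Location(35.05, -106.54, tz="Etc/GMT+7", altitude=1600)
times = pd.date_range("2022-06-01 06:00", "2022-06-01 18:00",
                      freq="1min", tz=site.tz)

# Modeled clear-sky GHI for the site (Ineichen model by default).
clearsky_ghi = site.get_clearsky(times)["ghi"]

# 'measured_ghi' would come from the plant's irradiance sensors; here we
# simply perturb the model so the example runs end to end.
measured_ghi = clearsky_ghi * 0.98

clear_mask = pvlib.clearsky.detect_clearsky(
    measured_ghi, clearsky_ghi, times=times, window_length=10)
print(f"{clear_mask.mean():.0%} of samples flagged as clear")
```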
High penetrations of residential solar PV can cause voltage issues on low-voltage (LV) secondary networks. Distribution utility planners often utilize model-based power flow solvers to address these voltage issues and accommodate more PV installations without disrupting the customers already connected to the system. These model-based studies are computationally expensive and often prone to errors. In this paper, two novel deep learning-based, model-free algorithms are proposed that can predict the change in voltages for PV installations without any network model information. These algorithms use only the real power (P), reactive power (Q), and voltage (V) data from Advanced Metering Infrastructure (AMI) to calculate the change in voltages for an additional PV installation at any customer location in the LV secondary network. Both algorithms are tested on three datasets from two feeders and compared to conventional model-based methods and existing model-free methods. The proposed methods are also applied to estimate the locational PV hosting capacity for both feeders and show better accuracy than an existing model-free method. Results show that data filtering or pre-processing can improve model performance if the testing data point exists in the training dataset used for that model.
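A minimal sketch of the general idea, not the paper's specific algorithms: a small feedforward network that maps AMI-derived P, Q, and V features (plus a proposed PV size) to a predicted voltage change. The architecture, feature set, and synthetic training data are all illustrative assumptions.

```python
# Minimal sketch of a model-free voltage-change predictor trained on AMI
# P, Q, V measurements. The architecture, features, and training setup are
# illustrative assumptions, not the paper's specific algorithms.
import torch
import torch.nn as nn

class DeltaVNet(nn.Module):
    """Maps a customer's (P, Q, V) features plus proposed PV size to dV."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

# Synthetic stand-in for AMI-derived features and measured voltage changes.
X = torch.randn(1024, 8)          # e.g., P, Q, V statistics + PV kW
y = torch.randn(1024, 1) * 0.01   # per-unit voltage change

model = DeltaVNet(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```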
This presentation describes a new effort to better understand insulator flashover in high-current, high-voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashovers that initiate at the anode triple junction (anode-vacuum-dielectric).
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or from the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the temperature sensitivity of the decay lifetime of the phosphor Gd2O2S:Tb after excitation by a pulsed x-ray source. These results are compared to the lifetime decays found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable temperature sensitivity for both excitation sources over a temperature range from 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
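For context, lifetime-based phosphor thermometry typically fits the post-excitation emission decay to extract a time constant; the sketch below shows such a fit with a mono-exponential model and synthetic data, which is only an assumption about the functional form of the Gd2O2S:Tb decay.

```python
# Sketch of extracting a radiative decay lifetime from phosphor emission,
# as is commonly done in lifetime-based phosphor thermometry. Data values
# are synthetic; the actual Gd2O2S:Tb decay need not be mono-exponential.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

t = np.linspace(0, 3e-3, 500)                       # seconds
signal = mono_exp(t, 1.0, 0.6e-3, 0.02)             # synthetic decay
signal += np.random.normal(0, 0.01, t.size)         # detector noise

popt, _ = curve_fit(mono_exp, t, signal, p0=(1.0, 1e-3, 0.0))
print(f"fitted lifetime tau = {popt[1] * 1e3:.3f} ms")
```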
High-pressure, ultra-zero air is being evaluated as a potential replacement for SF6 as part of a strategic focus on moving away from environmentally damaging insulating gases. Many unknowns remain about the dominant breakdown mechanisms of ultra-zero air in the high-pressure regime: the classical equations for Paschen curves appear not to be valid above 500 psig. In order to better understand the phenomena of gas breakdown in the high-pressure regime, Sandia National Laboratories is evaluating the basic physics of gas breakdown using nonuniform-field electrode designs. Recent data have been collected at SNL to study breakdown in this high-pressure regime over the range of 300-1500 psi, with gaps on the order of 0.6-1 cm and different electrode designs. The self-breakdown voltages range from 200-900 kV, with pulse-charge rise times of 200-300 ns and discharge currents from 25-60 kA. This research investigates the phenomenon of high-pressure breakdown, highlights the data collected, and presents a few of the mechanisms that dominate in the high-pressure regime for electronegative gases.
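For reference, the classical Paschen relation in question takes the standard textbook form below, with gas-dependent constants A and B and secondary-emission coefficient \gamma_{se}:

```latex
% Classical Paschen relation for the breakdown voltage of a gas gap,
% with gas-dependent constants A, B and secondary-emission coefficient
% \gamma_{se}; this is the form whose validity above ~500 psig is in question.
V_b \;=\; \frac{B\,p d}{\ln\!\left(\dfrac{A\,p d}{\ln\!\left(1 + 1/\gamma_{se}\right)}\right)}
```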
Quantum computing testbeds exhibit high-fidelity quantum control over small collections of qubits, enabling performance of precise, repeatable operations followed by measurements. Currently, these noisy intermediate-scale devices can support a sufficient number of sequential operations prior to decoherence that near-term algorithms can be performed with approximate accuracy (such as chemical accuracy for quantum chemistry problems). While the results of these algorithms are imperfect, these imperfections can help bootstrap quantum computer testbed development. Demonstrations of these algorithms over the past few years, coupled with the idea that imperfect algorithm performance can be caused by several dominant noise sources in the quantum processor, which can be measured and calibrated during algorithm execution or in post-processing, have led to the use of noise mitigation to improve typical computational results. Conversely, benchmark algorithms coupled with noise mitigation can help diagnose the nature of the noise, whether systematic or purely random. Here, we outline the use of coherent noise mitigation techniques as a characterization tool in trapped-ion testbeds. We perform model fitting of the noisy data to determine the noise source based on realistic, physics-focused noise models and demonstrate that systematic noise amplification coupled with error mitigation schemes provides useful data for noise model deduction. Further, in order to connect lower-level noise model details with application-specific performance of near-term algorithms, we experimentally construct the loss landscape of a variational algorithm under various injected noise sources coupled with error mitigation techniques. This type of connection enables application-aware hardware co-design, in which the most important noise sources in specific applications, like quantum chemistry, become foci of improvement in subsequent hardware generations.
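As a toy illustration of the noise-amplification-plus-mitigation idea referenced above, the sketch below performs a zero-noise-extrapolation-style fit on synthetic expectation values that decay exponentially with an amplified noise scale; the decay model and numbers are assumptions, not trapped-ion testbed data.

```python
# Minimal numerical sketch of zero-noise extrapolation, one family of noise
# amplification + mitigation schemes. The decay model and data are synthetic,
# not measurements from the trapped-ion testbed described in the abstract.
import numpy as np

def noisy_expectation(scale, ideal=1.0, decay_rate=0.15):
    """Toy model: expectation value decays exponentially with noise scale."""
    return ideal * np.exp(-decay_rate * scale)

scales = np.array([1.0, 2.0, 3.0])          # amplified noise levels
values = noisy_expectation(scales)

# Fit log(expectation) linearly in the noise scale and extrapolate to zero.
slope, intercept = np.polyfit(scales, np.log(values), 1)
mitigated = np.exp(intercept)
print(f"raw (scale=1): {values[0]:.4f}, mitigated estimate: {mitigated:.4f}")
```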
Bao, Jichao; Lee, Jonghyun; Yoon, Hongkyu Y.; Pyrak-Nolte, Laura
Characterization of geologic heterogeneity at an enhanced geothermal system (EGS) is crucial for cost-effective stimulation planning and reliable heat production. With recent advances in computational power and sensor technology, large-scale, fine-resolution simulations of coupled thermal-hydraulic-mechanical (THM) processes have become available. However, traditional large-scale inversion approaches have limited utility for sites with complex subsurface structures unless one can afford the often prohibitive computational cost. The key computational burdens are the large number of large-scale coupled numerical simulations and the large dense matrix multiplications that arise from fine discretization of the field-site domain and a large number of THM and chemical (THMC) measurements. In this work, we present deep generative model-based Bayesian inversion methods for the computationally efficient and accurate characterization of EGS sites. Deep generative models are used to learn the approximate subsurface property distribution (e.g., permeability, thermal conductivity, and elastic rock properties) from multipoint geostatistics-derived training images or discrete fracture network models as a prior, and accelerated stochastic inversion is performed on the low-dimensional latent space in a Bayesian framework. Numerical examples with synthetic permeability fields containing fracture inclusions and THM data sets based on the Utah FORGE geothermal site will be presented to test the accuracy, speed, and uncertainty quantification capability of our proposed joint data inversion method.
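A schematic (not the paper's accelerated scheme) of latent-space Bayesian inversion with a generative prior: a trained decoder maps a low-dimensional latent vector to a property field, a forward model predicts observations, and a plain random-walk Metropolis sampler explores the latent posterior. The 'decoder' and 'forward_model' below are placeholders.

```python
# Schematic of latent-space Bayesian inversion with a deep generative prior.
# 'decoder' and 'forward_model' are placeholders for a trained generative
# network and a coupled THM simulator; the sampler below is a plain
# random-walk Metropolis scheme, not the paper's accelerated inversion.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_obs = 16, 32
W = rng.standard_normal((latent_dim, 64))    # fixed toy "decoder" weights

def decoder(z):
    """Placeholder: map latent vector to a (flattened) permeability field."""
    return np.tanh(z @ W)

def forward_model(field):
    """Placeholder for the coupled THM simulation -> predicted observations."""
    return field[:n_obs]

obs = forward_model(decoder(rng.standard_normal(latent_dim)))  # synthetic data
sigma = 0.05

def log_post(z):
    misfit = forward_model(decoder(z)) - obs
    return -0.5 * (misfit @ misfit) / sigma**2 - 0.5 * (z @ z)  # Gaussian prior

z = np.zeros(latent_dim)
for it in range(5000):
    z_prop = z + 0.1 * rng.standard_normal(latent_dim)
    if np.log(rng.random()) < log_post(z_prop) - log_post(z):
        z = z_prop
posterior_field = decoder(z)     # one posterior sample of the property field
```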
Recently, stochastic control methods such as deep reinforcement learning (DRL) have proven to be efficient, quickly converging methods for providing localized grid voltage control. Because of the random dynamical characteristics of grid reactive loads and bus voltages, such stochastic control methods are particularly useful in accurately predicting future voltage levels and in minimizing associated cost functions. Although DRL is capable of quickly inferring future voltage levels given specific voltage control actions, it is prone to high variance when the learning rate or discount factors are set for rapid convergence in the presence of bus noise. Evolutionary learning is also capable of minimizing a cost function and can be leveraged for localized grid control, but it does not infer future voltage levels given specific control inputs and instead simply selects the control actions that result in the best voltage control. For this reason, evolutionary learning is better suited than DRL for voltage control in noisy grid environments. To illustrate this, using a cyber adversary to inject random noise, we compare the use of evolutionary learning and DRL for autonomous voltage control (AVC) under noisy control conditions and show that it is possible to achieve accurate mean voltage control using a genetic algorithm (GA). We show that the GA can additionally provide AVC superior to DRL with comparable computational efficiency. We illustrate that the superior noise immunity of evolutionary learning makes it a good choice for implementing AVC in noisy environments or in the presence of random cyber-attacks.
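A minimal genetic-algorithm sketch of the selection-and-mutation loop for voltage control: candidate reactive-power setpoints are scored on a toy linear sensitivity model with injected bus noise. The grid model, fitness function, and GA settings are illustrative assumptions, not those used in the study.

```python
# Minimal genetic-algorithm sketch for selecting control setpoints that
# minimize bus-voltage deviation. The linear sensitivity model, noise, and
# GA settings are toy assumptions, not the paper's grid or cost function.
import numpy as np

rng = np.random.default_rng(1)
n_buses, n_controls, pop_size = 10, 4, 40
S = rng.uniform(0.005, 0.02, (n_buses, n_controls))   # dV/dQ sensitivities
v_base = 1.0 + rng.normal(0, 0.03, n_buses)           # uncontrolled voltages

def fitness(q):
    v = v_base + S @ q + rng.normal(0, 0.002, n_buses)  # injected bus noise
    return -np.mean(np.abs(v - 1.0))                     # minimize |V - 1 pu|

pop = rng.uniform(-1, 1, (pop_size, n_controls))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]           # selection
    children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
    children = children + rng.normal(0, 0.05, children.shape)    # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best reactive-power setpoints:", np.round(best, 3))
```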
Risk and resilience assessments for critical infrastructure focus on myriad objectives, from natural hazard evaluations to optimizing investments. Although research has started to characterize externalities associated with current or possible future states, incorporation of equity priorities at project inception is increasingly being recognized as critical for planning-related activities. However, there is no standard methodology that guides the development of equity-informed quantitative approaches for infrastructure planning activities. To address this gap, we introduce a logic model that can be tailored to capture nuances about specific geographies and community priorities, effectively incorporating them into different mathematical approaches for quantitative risk assessments. Specifically, the logic model uses a graded, iterative approach to clarify specific equity objectives as well as inform the development of the equations being used to support analysis. We demonstrate the utility of this framework using case studies spanning aviation fuel, produced water, and microgrid electricity infrastructures. For each case study, the use of the logic model helps clarify the ways that local priorities and infrastructure needs drive the types of data and quantitative methodologies used in the respective analyses. The explicit consideration of methodological limitations (e.g., data mismatches) and stakeholder engagements serves to increase the transparency of the associated findings as well as effectively integrate community nuances (e.g., ownership of assets) into infrastructure assessments. Such integration will become increasingly important to ensure that planning activities (which occur throughout the lifecycle of infrastructure projects) lead to long-lasting solutions that meet both energy and sustainable development goals for communities.
Direct numerical simulations (DNS) of a high-velocity flat-plate boundary layer with time-periodic fluctuating inflow were conducted. The DNS fluctuation growth and evolution over the plate are then compared to the second-mode eigenfunction solution computed using classical linear stability theory (LST) and the parabolized stability equations (PSE). The decay rate of the free-stream perturbations is also compared to LST, and the choice of shock-capturing method and the associated dissipation rate is characterized. The agreement observed between the eigenfunction from LST and the fundamental harmonic of the temporal Fourier transform (FT) of the DNS demonstrates the ability of the solver to capture the initiation and linear growth of a hypersonic boundary layer instability. The work characterizes the shock-capturing numerical dissipation for slow-mode and second-mode growth and provides confidence in the numerical solver for studying further development toward nonlinear growth and eventual transition to turbulence.
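The comparison with LST relies on extracting the fundamental harmonic from the periodically forced DNS; a sketch of that temporal-FT step on synthetic wall-pressure traces is shown below (the forcing frequency, sampling, and growth profile are illustrative).

```python
# Sketch of extracting the fundamental-harmonic mode shape from periodic
# DNS time series via a temporal Fourier transform, for comparison with an
# LST/PSE eigenfunction. The signal here is synthetic and single-frequency.
import numpy as np

f0 = 250e3                          # forcing frequency, Hz (illustrative)
fs = 16 * f0                        # sampling rate
t = np.arange(0, 40 / f0, 1 / fs)   # 40 forcing periods
n_wall_points = 64

# Synthetic wall-pressure traces whose amplitude grows slowly downstream.
x_growth = np.linspace(1.0, 3.0, n_wall_points)
p_wall = x_growth[:, None] * np.sin(2 * np.pi * f0 * t)[None, :]

spectra = np.fft.rfft(p_wall, axis=1) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k0 = np.argmin(np.abs(freqs - f0))          # bin of the fundamental
fundamental_amplitude = 2 * np.abs(spectra[:, k0])
# 'fundamental_amplitude' vs. streamwise position is what would be compared
# with the LST/PSE growth prediction.
```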
Distribution systems may experience fast voltage swings in a matter of seconds from distributed energy resources, such as Wind Turbine Generators (WTGs) and Photovoltaic (PV) inverters, due to their dependency on variable and intermittent wind speed and solar irradiance. This work proposes a WTG reactive power controller for fast voltage regulation. The controller is tested on a simulation model of a real distribution system. Real wind speed, solar irradiance, and load consumption data are used. The controller is based on a Reinforcement Learning Deep Deterministic Policy Gradient (DDPG) model that determines optimal control actions to avoid significant voltage deviations across the system. The controller has access to voltage measurements at all system buses. Results show that the proposed WTG reactive power controller significantly reduces system-wide voltage deviations across a large number of generation scenarios in order to comply with standardized voltage tolerances.
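A compact skeleton of a DDPG-style controller of this kind is sketched below, with bus voltages as the state and the WTG reactive-power setpoint as the action; the environment is a toy linear voltage model, and full DDPG machinery (replay buffer, target networks) is omitted for brevity.

```python
# Skeleton of a DDPG-style reactive-power controller: state = bus voltages,
# action = WTG reactive-power setpoint in [-1, 1] (scaled to Mvar limits).
# The environment is a toy linear voltage model; the replay buffer and
# target networks of full DDPG are omitted to keep the sketch short.
import torch
import torch.nn as nn

N_BUSES = 12

actor = nn.Sequential(nn.Linear(N_BUSES, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Tanh())
critic = nn.Sequential(nn.Linear(N_BUSES + 1, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

sens = 0.02 * torch.rand(N_BUSES)          # toy dV/dQ sensitivities

def step(v, q):
    """Toy environment: apply reactive power, return next voltages and reward."""
    v_next = v + sens * q + 0.005 * torch.randn(N_BUSES)
    reward = -torch.sum(torch.abs(v_next - 1.0))
    return v_next, reward

v = 1.0 + 0.03 * torch.randn(N_BUSES)
for t in range(200):
    q = actor(v).squeeze() + 0.05 * torch.randn(1).squeeze()   # exploration noise
    v_next, r = step(v, q.detach())

    # Critic update toward the one-step return (no target networks here).
    q_pred = critic(torch.cat([v, q.detach().view(1)]))
    with torch.no_grad():
        q_target = r + 0.95 * critic(torch.cat([v_next, actor(v_next).view(1)]))
    critic_loss = (q_pred - q_target).pow(2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor update: ascend the critic's value of the actor's own action.
    actor_loss = -critic(torch.cat([v, actor(v).view(1)])).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    v = v_next
```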
Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.
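A minimal sketch of the warm-starting stage: behavior-clone a policy network on state-action pairs from a classical guidance law before handing it to PPO (the PPO loop is not reproduced). The state/action dimensions and the placeholder guidance law are assumptions, not the 6-DOF setup used in the study.

```python
# Sketch of the warm-starting stage: behavior-clone a policy network from
# state-action pairs generated by a classical guidance law, before handing
# the initialized policy to PPO for further training (PPO loop not shown).
# The dimensions and the 'classical_guidance' law are toy placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 4     # e.g., full 6-DOF state, control commands

def classical_guidance(state):
    """Placeholder classical guidance law providing demonstration actions."""
    return torch.tanh(state[..., :ACTION_DIM])

policy = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, ACTION_DIM), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Imitation learning: regress the policy onto demonstrated actions.
for step in range(2000):
    states = torch.randn(256, STATE_DIM)          # sampled flight states
    demo_actions = classical_guidance(states)
    loss = nn.functional.mse_loss(policy(states), demo_actions)
    opt.zero_grad(); loss.backward(); opt.step()

# The warm-started 'policy' would then initialize the PPO actor.
```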
Springs play important roles in many mechanisms, including critical safety components employed by Sandia National Laboratories. Due to the nature of these safety component applications, serious concerns arise if their springs become damaged or unhook from their posts. Finite element analysis (FEA) is one technique employed to ensure such adverse scenarios do not occur. Ideally, a very fine spring mesh would be used to make the simulation as accurate as possible with respect to mesh convergence. While this method does yield the best results, it is also the most time-consuming and therefore most computationally expensive approach. In some situations, reduced order models (ROMs) can be adopted to lower this cost at the expense of some accuracy. This study quantifies the error between a fine, solid-element mesh and a reduced order spring beam model, with the aim of finding the best balance between low computational cost and high accuracy. Two types of analyses were performed: a quasi-static displacement-controlled pull and a haversine shock. The first used implicit methods to examine basic properties as the elastic limit of the spring material was reached. This analysis was also used to study the convergence and residual tolerance of the models. The second used explicit dynamics methods to investigate spring dynamics and stress/strain properties, as well as examine the impact of the chosen friction coefficient. Both the implicit displacement-controlled pull test and the explicit haversine shock test showed good agreement between the hexahedral and beam meshes. The results were especially favorable when comparing reaction force and stress trends and maximums. However, the equivalent plastic strain (EQPS) results were not as favorable. This could be due to differences in how the shear stress is calculated in the two models, and future studies will need to investigate the exact causes. The data indicate that the beam model may be less likely to correctly predict spring failure, defined as inappropriate application of tension and/or compressive forces to a larger assembly. Additionally, this study was able to quantify the computational cost advantage of using a reduced order beam mesh. In the transverse haversine shock case, the hexahedral mesh took over three days with 228 processors to solve, compared to under 10 hours for the ROM on a single processor. Depending on the required use case for the results, using the beam mesh will significantly improve the speed of workflows, especially when integrated into larger safety component models. However, appropriate use of the ROM should carefully balance these optimized run times with its reduction in accuracy, especially when examining spring failure and outputting variables such as equivalent plastic strain. Current investigations are broadening the scope of this work to include a validation study comparing the beam ROM to physical testing data.
This study explores the evolution of a turbulent hypersonic boundary layer over a spanwise-finite expansion-compression geometry. The geometry is based on a slender cone with an axial slice that subjects the cone boundary layer to a favorable pressure gradient. The mean flow field was obtained from a hybrid RANS-LES computation that showed thickening of the boundary layer, a decrease in the mean pressure, and the development of incipient streamwise vortical structures on the slice. The experiments use fluctuating surface pressure and shear-stress sensors along the centerline of the slice, which demonstrate a significant reduction in turbulence activity on the slice, indicating relaminarization of the boundary layer. These observations were corroborated by high-framerate schlieren, filtered Rayleigh scattering, and scanning focused laser differential interferometry. When a 10° ramp is introduced at the aft end of the slice, the effectively relaminarized boundary layer separates upstream of the slice-ramp corner due to its increased susceptibility to separation in comparison to a turbulent boundary layer.
Modern Industrial Control System (ICS) attacks evade existing tools by using knowledge of ICS processes to blend their activities with benign Supervisory Control and Data Acquisition (SCADA) operation, causing physical-world damage. We present Scaphy, which detects ICS attacks in SCADA by leveraging the unique execution phases of SCADA to identify the limited set of legitimate behaviors that control the physical world in each phase, differentiating them from an attacker's activities. For example, it is typical for SCADA to set up ICS device objects during initialization, but anomalous to do so during process control. To extract the unique behaviors of SCADA execution phases, Scaphy first leverages open ICS conventions to generate a novel physical process dependency and impact graph (PDIG) that identifies disruptive physical states. Scaphy then uses the PDIG to inform a physical-process-aware dynamic analysis, whereby code paths of SCADA process-control execution are induced to reveal API call behaviors unique to legitimate process-control phases. Using this established behavior, Scaphy selectively monitors an attacker's physical-world-targeted activities that violate legitimate process-control behaviors. We evaluated Scaphy in a U.S. national laboratory ICS testbed environment. Using diverse ICS deployment scenarios and attacks across four ICS industries, Scaphy achieved 95% accuracy and 3.5% false positives (FP), compared to 47.5% accuracy and 25% FP for existing work. We also analyze Scaphy's resilience to future attacks in which the attacker knows our approach.
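As a toy illustration of the PDIG concept (the tag names, edges, and disruptive-state set below are hypothetical, not drawn from any ICS convention), a directed graph can link writable SCADA tags to downstream physical states and flag writes that can reach a disruptive state:

```python
# Toy sketch of a physical process dependency and impact graph (PDIG):
# a directed graph from controllable SCADA tags to downstream physical
# states, used to flag writes that can reach a disruptive state. Tag names,
# edges, and the 'disruptive' set are hypothetical illustrations only.
import networkx as nx

pdig = nx.DiGraph()
pdig.add_edges_from([
    ("pump1.setpoint", "tank1.level"),
    ("valve2.command", "tank1.level"),
    ("tank1.level", "boiler.pressure"),
    ("boiler.pressure", "relief_valve.state"),
])
disruptive_states = {"boiler.pressure"}

def writes_reaching_disruption(graph, writable_tags):
    """Return writable tags with a dependency path to a disruptive state."""
    flagged = set()
    for tag in writable_tags:
        reachable = nx.descendants(graph, tag)
        if reachable & disruptive_states:
            flagged.add(tag)
    return flagged

print(writes_reaching_disruption(pdig, {"pump1.setpoint", "valve2.command"}))
```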
A microgrid is characterized by a high R/X ratio, making its voltage more sensitive to active power changes, unlike in bulk power systems, where voltage is mostly regulated by reactive power. Because of this sensitivity to active power, the control approach should incorporate active power as well; thus, the voltage control approach for microgrids is very different from that of conventional power systems. The energy costs associated with active and reactive power are also different. Furthermore, because of diverse generation sources and different components such as distributed energy resources, energy storage systems, etc., model-based control approaches might not perform very well. This paper proposes a reinforcement learning-based voltage support framework for a microgrid in which an agent learns a control policy by interacting with the microgrid, without requiring a mathematical model of the system. A MATLAB/Simulink simulation study on a test system from Cordova, Alaska shows a large reduction in voltage deviation (by a factor of about 2.5-4.5). This reduction in voltage deviation can improve the power quality of the microgrid, ensuring a reliable supply, longer equipment lifespan, and stable user operations.
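A minimal sketch of how a learning agent's reward could account for both active- and reactive-power actions, reflecting the high R/X point above; the weights, sensitivities, and linear voltage model are illustrative assumptions, not the paper's Simulink model or cost terms.

```python
# Minimal sketch of a microgrid voltage-support reward that penalizes
# voltage deviation while costing both active- and reactive-power actions,
# consistent with the high R/X ratio noted in the abstract. All numbers and
# the linear voltage model are illustrative assumptions.
import numpy as np

R_OVER_X = 2.0                     # high R/X: dV is sensitive to P as well as Q
W_DEV, W_P, W_Q = 10.0, 0.5, 0.1   # deviation penalty and energy-cost weights

def reward(v_pu, delta_p, delta_q):
    deviation = np.sum(np.abs(v_pu - 1.0))
    energy_cost = W_P * np.abs(delta_p) + W_Q * np.abs(delta_q)
    return -(W_DEV * deviation + energy_cost)

def next_voltages(v_pu, delta_p, delta_q, sens_q=0.01):
    # With high R/X, the active-power term dominates the voltage change.
    return v_pu + sens_q * (R_OVER_X * delta_p + delta_q)

v = np.array([0.96, 0.97, 0.95])
print(reward(next_voltages(v, delta_p=1.5, delta_q=0.5), 1.5, 0.5))
```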
Proceedings - 2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications, SERA 2023
Shrestha, Madhukar; Kim, Yonghyun; Oh, Jeehyun; Rhee, Junghwan; Choe, Yung R.; Zuo, Fei; Park, Myungah; Qian, Gang
System provenance forensic analysis has been studied by a large body of research work. This area needs fine-granularity data, such as system calls along with event fields, to track the dependencies of events. While prior security datasets have been proposed, we found that a useful dataset of realistic attacks with the detail needed for provenance tracking is lacking. We created a new dataset of eleven vulnerable cases for system forensic analysis. It includes full details of system calls, including syscall parameters. Realistic attack scenarios with real software vulnerabilities and exploits are used. We also created two sets of benign and adversarial scenarios, which are manually labeled for supervised machine-learning analysis. We demonstrate the details of the dataset events and dependency analysis.
The Synchronic Web is a highly scalable notary infrastructure that provides tamper-evident data provenance for historical web data. In this document, we describe the applicability of this infrastructure for web archiving across three envisioned stages of adoption. We codify the core mechanism enabling the value proposition: a procedure for splitting and merging cryptographic information fluidly across blockchain-backed ledgers. Finally, we present preliminary performance results that indicate the feasibility of our approach for modern web archiving scales.
Introduction: The SARS-CoV-2 pandemic, and the subsequent limitations on standard diagnostics, has vastly expanded the user base of Reverse Transcription Loop-mediated isothermal Amplification (RT-LAMP) in fundamental research and development. RT-LAMP has also penetrated commercial markets, with emergency use authorizations for clinical diagnosis. Areas covered: This review discusses the role of RT-LAMP within the context of other technologies like RT-qPCR and rapid antigen tests, progress in sample preparation strategies to enable a simplified workflow for RT-LAMP directly from clinical specimens, new challenges with primer and assay design for the evolving pandemic, prominent detection modalities including colorimetric and CRISPR-mediated methods, and translational research and commercial development of RT-LAMP for clinical applications. Expert opinion: RT-LAMP occupies a middle ground between RT-qPCR and rapid antigen tests. Its simplicity approaches that of rapid antigen tests, making it suitable for point-of-care use, while its sensitivity nears that of RT-qPCR. RT-LAMP still lags RT-qPCR in fundamental understanding of the amplification mechanism and of the interplay between sample preparation and assay performance. Industry is now beginning to address issues around scalability and usability, which could finally enable LAMP and RT-LAMP to find widespread application as a diagnostic for other conditions, including other pathogens with pandemic potential.
Motion primitives provide an approach to kinodynamic path planning that does not require online solution of the equations of motion, permitting the use of complex high-fidelity models. The path planning problem with motion primitives is a Markov Decision Process (MDP), with the primitives defining the available actions. Uncertainty in the evolution of the primitives means that there is uncertainty in the state resulting from each action. In this work, uncertain motion primitive planning is demonstrated for a high-speed glide vehicle. A nonlinear 6-degree-of-freedom model is used to generate the primitive library, and the motion primitive planning problem is formulated so that the transition probabilities in the MDP may have a functional dependence on the state of the system. Single-query solutions to planning problems incorporating operational envelope constraints and no-fly zones are obtained using AO* under chance constraints on the risk of mission failure.
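A tiny sketch of the problem structure, under toy assumptions (the primitives, state-dependent failure probabilities, and 0.05 risk bound are illustrative, and the AO* search is not reproduced): each primitive has stochastic outcomes, and a candidate plan is checked against a chance constraint on mission failure.

```python
# Tiny sketch of motion-primitive planning as an MDP: each primitive has
# stochastic outcomes whose probabilities may depend on the current state,
# and a candidate plan is checked against a chance constraint on mission
# failure. Primitives, dynamics, and the 0.05 risk bound are illustrative;
# the AO* search itself is not shown.
from dataclasses import dataclass

@dataclass
class Outcome:
    next_state: tuple      # (downrange_km, altitude_km)
    probability: float
    fails: bool            # e.g., violates envelope or enters a no-fly zone

def primitive_outcomes(state, primitive):
    """State-dependent transition model for one primitive (toy numbers)."""
    x, h = state
    p_fail = 0.02 if h > 20 else 0.08          # riskier at low altitude
    nominal = (x + primitive["dx"], h + primitive["dh"])
    return [Outcome(nominal, 1 - p_fail, False),
            Outcome(nominal, p_fail, True)]

def plan_failure_probability(start, plan):
    """Probability that any step of an open-loop primitive sequence fails."""
    p_ok, state = 1.0, start
    for prim in plan:
        outcomes = primitive_outcomes(state, prim)
        p_ok *= sum(o.probability for o in outcomes if not o.fails)
        state = next(o.next_state for o in outcomes if not o.fails)
    return 1.0 - p_ok

plan = [{"dx": 50, "dh": -5}] * 4
risk = plan_failure_probability((0.0, 40.0), plan)
print(f"plan failure probability {risk:.3f} (chance constraint: <= 0.05)")
```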