Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard of a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be in reasonable agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. The analysis suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for blast damage, so the buy-down of risk gained by considering the fireball is minimal relative to the blast hazard. Assessment tools for fireball predictions are not particularly mature, and ways to improve them for higher-fidelity estimates are noted.
Plasma sprays can be used to melt particles, which may be deposited on an engineered surface to impart unique properties to the part. Because of the extreme temperatures involved (≫3000 °C), it is desirable to conduct the process in a way that avoids melting the parts being coated. A jet of ambient gas is sometimes used to deflect the hot gases while allowing the melted particles to impact and adhere to the substrate; this is known as a plume quench. While plume quenching is done in practice, to our knowledge there have not been any studies on how to apply a plume quench or how it affects the flow. We have recently adapted our fire simulation tool to simulate argon plasma sprays with a variety of metal particles. Two nozzle conditions are considered, with very different gas flow and power conditions. Two particle materials are considered: tantalum and nickel. For the model, the k-epsilon turbulence model is compared with a more dynamic TFNS turbulence model. Limited data comparisons suggest the higher-fidelity TFNS model is significantly more accurate than the k-epsilon model. Additionally, the plume quench is found to have a noticeable effect in the low inlet flow case but minimal effect in the high flow case, suggesting the effectiveness of a quench relates to the relative momentum of the intersecting gas jets.
This work proposes a method of designing adaptive controllers for reliable and stable operation of a Grid-Forming Inverter (GFI) during black-start. The characteristic loci method is used to guide the adaptation and tuning of the control parameters, based on a thorough sensitivity analysis of the system over a desired frequency bandwidth. The control hierarchy comprises active-reactive (P-Q) power support, voltage regulation, current control, and frequency recovery over the sequence of events during black-starting. These events comprise energization of transformers and different types of loads, along with post-fault recovery. The developed method is tested on a 75 MVA inverter system simulated in PSCAD®. The inverter energizes static and induction motor loads in addition to transformers. The system is also subjected to a line-ground fault to validate the robustness of the proposed adaptive control structure in post-fault recovery.
Aperture near-field microscopy and spectroscopy (a-SNOM) enables the direct experimental investigation of subwavelength-sized resonators by sampling highly confined local evanescent fields on the sample surface. Despite its success, the versatility and applicability of a-SNOM are limited by the sensitivity of the aperture probe, as well as by the power and versatility of the THz sources used to excite samples. Recently, perfectly absorbing photoconductive metasurfaces have been integrated into THz photoconductive antenna detectors, enhancing their efficiency and enabling high signal-to-noise-ratio THz detection at significantly reduced optical pump powers. Here, we discuss how this technology can be applied to aperture near-field probes to improve the sensitivity and, potentially, the spatial resolution of a-SNOM systems. In addition, we explore the application of photoconductive metasurfaces as near-field THz sources, providing the possibility of tailoring the beam profile, polarity, and phase of the THz excitation. Photoconductive metasurfaces therefore have the potential to broaden the application scope of aperture near-field microscopy to samples and material systems that currently require improved spatial resolution, signal-to-noise ratio, or more complex excitation conditions.
We present the single-event upset (SEU) sensitivity and single-event latchup (SEL) results from proton and heavy-ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
In this study, we develop an end-to-end deep learning-based inverse design approach to determine the scatterer shape necessary to achieve a target acoustic field. This approach integrates non-uniform rational B-splines (NURBS) into a convolutional autoencoder (CAE) architecture while concurrently leveraging (in a weak sense) the governing physics of the acoustic problem. By utilizing prior physical knowledge and NURBS parameterization to regularize the ill-posed inverse problem, this method does not require enforcing any geometric constraint on the inverse design space, hence allowing the determination of scatterers of potentially arbitrary shape (within the set representable by NURBS). A numerical study is presented to showcase the ability of this approach to identify physically consistent scatterer shapes capable of producing user-defined acoustic fields.
The error detection performance of cyclic redundancy check (CRC) codes combined with bit framing in digital serial communication systems is evaluated. Advantages and disadvantages of the combined method are treated in light of the probability of undetected errors. It is shown that bit framing can increase the burst error detection of the CRC, but it can also adversely affect CRC random error detection performance. To quantify the effect of bit framing on CRC error detection, the concept of error "exposure" is introduced. Our investigations lead us to propose resilient generator polynomials that, when combined with bit framing, can result in improved CRC error detection performance at no additional implementation cost. Example results are generated for short codewords, showing that proper choice of CRC generator polynomial can improve error detection performance when combined with bit framing. The implication is that CRC combined with bit framing can reduce the probability of undetected errors even under high error rate conditions.
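To make the division-based mechanism concrete, here is a minimal Python sketch of CRC encoding and single-bit error detection; the CRC-8 generator 0x107 and the bit patterns are illustrative choices only, not the resilient polynomials proposed in the paper, and no bit framing is modeled.

# Minimal CRC sketch: append a check field so the codeword is divisible by
# the generator polynomial over GF(2), then verify that a bit flip is caught.
def crc_remainder(bits, poly=0x107, width=8):
    """Remainder of polynomial long division over GF(2)."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> width:              # high bit set: subtract (XOR) the generator
            reg ^= poly
    return reg

data = [1, 0, 1, 1, 0, 0, 1, 0] * 4
codeword = data + [0] * 8             # zeroed check field
check = crc_remainder(codeword)
codeword[-8:] = [(check >> i) & 1 for i in range(7, -1, -1)]
assert crc_remainder(codeword) == 0   # a valid codeword divides evenly
codeword[3] ^= 1                      # inject a single bit error
assert crc_remainder(codeword) != 0   # the error is detected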
Compared with traditional base-excitation vibration qualification testing, multi-axis vibration testing methods can be significantly faster and more accurate. Here, a 12-shaker multiple-input/multiple-output (MIMO) test method called intrinsic connection excitation (ICE) is developed and assessed for use on an example aerospace component. In this study, the ICE technique utilizes 12 shakers (one for each boundary-condition attachment degree of freedom of the component), specially designed fixtures, and MIMO control to provide an accurate set of loads and boundary conditions during the test. Acceleration, force, and voltage control provide insight into the viability of this testing method. System field test and ICE test results are compared to traditional single-degree-of-freedom specification development and testing. Results indicate the multi-shaker ICE test provided a much more accurate replication of the system field test response than single-degree-of-freedom testing.
This paper elaborates the results of the hardware implementation of a traveling wave (TW) protection device (PD) for DC microgrids. The proposed TWPD is implemented on a commercial digital signal processor (DSP) board. In the developed TWPD, the DSP board's analog-to-digital converter (ADC) first samples the input at a 1 MHz sampling rate; the analog input card of the DSP board measures the pole current at the TWPD location in the DC microgrid. Then, a TW detection algorithm is applied to the output of the ADC to detect the fault occurrence instant. Once this instant is detected, multi-resolution analysis (MRA) is performed on a 128-sample data buffer created around the fault instant. The MRA utilizes the discrete wavelet transform (DWT) to extract the high-frequency signatures of the measured pole current. To quantify the extracted TW features, the Parseval theorem is used to calculate the Parseval energy of the reconstructed wavelet coefficients created by the MRA. These Parseval energy values are then used as inputs to a polynomial linear regression tool to estimate the fault location. The performance of the created TWPD is verified using an experimental testbed.
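A minimal sketch of the MRA feature extraction step is shown below, assuming the PyWavelets library; the wavelet family and level count are illustrative choices, not necessarily the paper's settings.

# Parseval-energy features from a 128-sample pole-current buffer (Python).
import numpy as np
import pywt  # PyWavelets

def parseval_energies(buffer, wavelet="db4", levels=4):
    """DWT-based MRA; returns the Parseval energy (sum of squared
    coefficients) of each detail level, coarsest first."""
    coeffs = pywt.wavedec(buffer, wavelet, level=levels)
    return np.array([float(np.sum(c ** 2)) for c in coeffs[1:]])

# In the scheme above, energies from faults at known locations would then
# train a polynomial regression (e.g., numpy.polyfit) that maps the feature
# vector to the estimated fault distance.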
This work investigates the low- and high-temperature ignition and combustion processes, applied to the Engine Combustion Network Spray A flame, combining advanced optical diagnostics and large-eddy simulations (LES). Simultaneous high-speed (50 kHz) formaldehyde (CH2O) planar laser-induced fluorescence (PLIF) and line-of-sight OH* chemiluminescence imaging were used to measure the low- and high-temperature flame, during ignition as well as during quasi-steady combustion. While tracking the cool flame at the laser sheet plane, the present experimental setup allows detection of distinct ignition spots and dynamic fluctuations of the lift-off length over time, which overcomes limitations for flame tracking when using schlieren imaging [Sim et al., Proc. Combust. Inst. 38 (4) (2021) 5713–5721]. After significant development to improve LES prediction of the low- and high-temperature flame position, both during the ignition processes and quasi-steady combustion, the simulations were analyzed to gain understanding of the mixture variance and how this variance affects formation/consumption of CH2O. Analysis of the high-temperature ignition period shows that a key improvement in the LES is the ability to predict heterogeneous ignition sites, not only in the head of the jet, but also in shear layers at the jet edge close to the position where flame lift-off eventually stabilizes. The LES analysis also shows concentrated pockets of CH2O, in the center of the jet and at 20 mm downstream of the injector (in regions where the equivalence ratio is greater than 6), that are similar in length scale and frequency to the experiment (approximately 5–6 kHz). The periodic oscillation of CH2O matches the frequency of pressure waves generated during auto-ignition and reflected within the constant-volume vessel throughout injection. The ability of LES to capture the periodic appearance and destruction of CH2O is particularly important because these structures travel downstream and become rich premixed flames that affect soot production.
Event-based sensors are a novel sensing technology that captures the dynamics of a scene via pixel-level change detection. This technology operates with high speed (>10 kHz), low latency (10 µs), low power consumption (<1 W), and high dynamic range (120 dB). Compared to conventional, frame-based architectures that report data for every pixel at a fixed frame rate, event-based sensor pixels only report data if a change in pixel intensity occurs. This affords the possibility of dramatically reducing the data reported in bandwidth-limited environments (e.g., remote sensing), and thus the data that must be processed, while still recovering significant events. Degraded visual environments, such as those generated by fog, often hinder situational awareness by decreasing optical resolution and transmission range via random scattering of light. To respond to this challenge, we present the deployment of an event-based sensor in a controlled, experimentally generated, well-characterized degraded visual environment (a fog analogue) for detection of a modulated signal, and a comparison of data collected from the event-based sensor and from a traditional framing sensor.
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
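As background for the dual-zone strategy, the following is a hedged sketch in our own notation (not necessarily the paper's exact formulation): for a line of sight crossing two uniform zones, the measured absorbance at optical frequency \nu is the sum of Beer-Lambert contributions,

\alpha(\nu) = \sum_{j=1}^{2} \sum_{i} S_i(T_j)\, \phi_i(\nu; T_j, P_j)\, x_{\mathrm{CO},j}\, P_j\, L_j ,

where, for zone j, T_j is the temperature, P_j the pressure, L_j the path length, x_{\mathrm{CO},j} P_j L_j the CO column pressure-length, S_i the linestrength of transition i, and \phi_i its lineshape function. Fitting such a two-zone model to the measured spectra yields the per-zone temperature, pressure, and CO quantities described above.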
A comprehensive study of the mechanical response of a 316 stainless steel is presented. The split-Hopkinson bar technique was used to evaluate the mechanical behavior at dynamic strain rates of 500 s−1, 1500 s−1, and 3000 s−1 and temperatures of 22 °C and 300 °C under tension and compression loading, while the Drop-Hopkinson bar was used to characterize the tension behavior at an intermediate strain rate of 200 s−1. The experimental results show that the tension and compression flow stress are reasonably symmetric, exhibit positive strain rate sensitivity, and are inversely dependent on temperature. The true failure strain was determined by measuring the minimum diameter of the post-test tension specimen. The 316 stainless steel exhibited a ductile response, and the true failure strain increased with increasing temperature and decreased with increasing strain rate.
A high altitude electromagnetic pulse (HEMP) or other similar geomagnetic disturbance (GMD) has the potential to severely impact the operation of large-scale electric power grids. By introducing low-frequency common-mode (CM) currents, these events can impact the performance of key system components such as large power transformers. In this work, a solid-state transformer (SST) that can replace susceptible equipment and improve grid resiliency by safely absorbing these CM insults is described. An overview of the proposed SST power electronics and controls architecture is provided, a system model is developed, and the performance of the SST in response to a simulated CM insult is evaluated. Compared to a conventional magnetic transformer, the SST is found to recover quickly from the insult while maintaining nominal ac input/output behavior.
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms, allowing for measuring algorithm improvement over time. An absence of such tests contributes to the proliferation of fitting methods and inhibits achieving consensus on best practices. Benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
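As one concrete example of a benchmark curve, the sketch below generates a simulated IV curve with a known parameter solution plus measurement noise; it assumes the common single-diode model, and all parameter values are illustrative, since the paper does not fix a specific model here.

# Simulated IV benchmark curve (Python): single-diode model with known
# parameters, solved implicitly per voltage point, plus Gaussian noise.
import numpy as np
from scipy.optimize import brentq

IL, I0, Rs, Rsh, n, Vth = 8.0, 1e-9, 0.2, 300.0, 1.1, 0.02569  # known "truth"

def current(v):
    f = lambda i: IL - I0 * (np.exp((v + i * Rs) / (n * Vth)) - 1) \
        - (v + i * Rs) / Rsh - i
    return brentq(f, -2 * IL, 2 * IL)

voltages = np.linspace(0.0, 0.64, 50)
curve = np.array([current(v) for v in voltages])
noisy = curve + np.random.default_rng(0).normal(0.0, 0.01, curve.shape)
# A fitting algorithm under test would recover (IL, I0, Rs, Rsh, n) from
# (voltages, noisy) and be scored against the known parameter solution.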
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
Advancements in photonic quantum information systems (QIS) have driven the development of high-brightness, on-demand, and indistinguishable semiconductor epitaxial quantum dots (QDs) as single-photon sources. Strain-free, monodisperse, and spatially sparse local-droplet-etched (LDE) QDs have recently been demonstrated as a superior alternative to traditional Stranski-Krastanov QDs. However, integration of LDE QDs into nanophotonic architectures with the ability to scale to many interacting QDs has yet to be demonstrated. We present a potential solution by embedding isolated LDE GaAs QDs within an Al0.4Ga0.6As Huygens' metasurface with spectrally overlapping fundamental electric and magnetic dipolar resonances. We demonstrate for the first time a position- and size-independent, order-of-magnitude increase in the collection efficiency, along with emission lifetime control, for single-photon emission from LDE QDs embedded within the Huygens' metasurfaces. Our results represent a significant step toward leveraging the advantages of LDE QDs within nanophotonic architectures to meet the scalability demands of photonic QIS.
Filamentous fungi can synthesize a variety of nanoparticles (NPs) through a process referred to as mycosynthesis, which requires little energy input, does not require the use of harsh chemicals, occurs at near-neutral pH, and does not produce toxic byproducts. While NP synthesis involves reactions between metal ions and exudates produced by the fungi, the chemical and biochemical parameters underlying this process remain poorly understood. Here, the roles of fungal species and precursor salt in the mycosynthesis of zinc oxide (ZnO) NPs are investigated. These data demonstrate that all five fungal species tested are able to produce ZnO structures that can be morphologically classified into i) well-defined NPs, ii) coalesced/dissolving NPs, and iii) micron-sized square plates. Further, species-dependent preferences for these morphologies are observed, suggesting potential differences in the profile or concentration of the biochemical constituents of their individual exudates. These data also demonstrate that mycosynthesis of ZnO NPs is independent of the anion species, with nitrate, sulfate, and chloride showing no effect on NP production. These results enhance the understanding of the factors controlling the mycosynthesis of ceramic NPs, supporting future studies that can enable control over the physical and chemical properties of NPs formed through this "green" synthesis method.
In this work, we introduce several methods for determining the horizon profile at a PV site and compare their results, use cases, and limitations. The methods in this paper include horizon detection from time-series irradiance or performance data, modeling from GIS topography data, manual theodolite measurements, and camera-based horizon detection. We compare various combinations of these methods using data from four Regional Test Center sites in the US and three World Bank sites in Nepal. The results show many differences between these methods, and we recommend the most practical solutions for various use cases.
A microgrid is characterized by a high R/X ratio, making its voltage more sensitive to active power changes, unlike in bulk power systems where voltage is mostly regulated by reactive power. Because of this sensitivity, the control approach should incorporate active power as well; thus, the voltage control approach for microgrids is very different from that of conventional power systems. The energy costs associated with active and reactive power also differ. Furthermore, because of diverse generation sources and components such as distributed energy resources and energy storage systems, model-based control approaches might not perform very well. This paper proposes a reinforcement learning-based voltage support framework for a microgrid in which an agent learns a control policy by interacting with the microgrid, without requiring a mathematical model of the system. A MATLAB/Simulink simulation study on a test system from Cordova, Alaska shows a large reduction in voltage deviation (by a factor of about 2.5-4.5). This reduction in voltage deviation can improve the power quality of the microgrid, ensuring a reliable supply, longer equipment lifespan, and stable user operations.
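As a hedged illustration of the learning setup (the reward definition here is our assumption, not the paper's design), a reward that couples voltage deviation with the cost of the active-power adjustment might look like:

# Illustrative reward for RL-based microgrid voltage support (Python).
# Penalizes deviation from 1.0 p.u. and the active-power adjustment used,
# reflecting that high-R/X microgrids regulate voltage partly with P.
import numpy as np

def reward(bus_voltages_pu, p_adjust_kw, cost_per_kw=0.01):
    deviation = float(np.sum(np.abs(np.asarray(bus_voltages_pu) - 1.0)))
    return -(deviation + cost_per_kw * abs(p_adjust_kw))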
Inverse problems constrained by partial differential equations (PDEs) play a critical role in model development and calibration. In many applications, there are multiple uncertain parameters in a model that must be estimated. However, the high dimensionality of the parameters and the computational complexity of the PDE solves make such problems challenging. A common approach is to reduce the dimension by fixing some parameters (which we will call auxiliary parameters) at a best estimate and using techniques from PDE-constrained optimization to estimate the other parameters. In this article, hyper-differential sensitivity analysis (HDSA) is used to assess the sensitivity of the solution of the PDE-constrained optimization problem to changes in the auxiliary parameters. Foundational assumptions for HDSA require satisfaction of the optimality conditions, which is not always practically feasible as a result of ill-posedness in the inverse problem. We introduce novel theoretical and computational approaches to justify and enable HDSA for ill-posed inverse problems by projecting the sensitivities onto likelihood-informed subspaces and defining a posteriori updates. Our proposed framework is demonstrated on a nonlinear multiphysics inverse problem motivated by estimation of spatially heterogeneous material properties in the presence of spatially distributed parametric modeling uncertainties.
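In outline (our notation, a hedged sketch rather than the article's exact formulation), the setting is the PDE-constrained problem

\min_{z} \; J(u, z) \quad \text{subject to} \quad c(u, z, \theta) = 0 ,

where z are the inverted parameters and \theta the fixed auxiliary parameters. Writing the first-order optimality conditions as g(z^*(\theta), \theta) = 0, HDSA differentiates through them via the implicit function theorem,

\frac{\partial z^*}{\partial \theta} = -\left( \nabla_z g \right)^{-1} \nabla_\theta g ,

which quantifies how the optimal solution shifts when the auxiliary parameters are perturbed; ill-posedness makes \nabla_z g poorly conditioned, motivating the projections and updates introduced in the article.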
Surrogate construction is an essential component of all non-deterministic analyses in science and engineering. The efficient construction of easy and cheap-to-run alternatives to a computationally expensive code paves the way for outer-loop workflows for forward and inverse uncertainty quantification and optimization. Unfortunately, the accurate construction of a surrogate still remains a task that often requires a prohibitive number of computations, making the approach unattainable for large-scale and high-fidelity applications. Multifidelity approaches offer the possibility of lowering the computational expense required of the high-fidelity code by fusing data from additional sources. In this context, we have demonstrated that multifidelity Bayesian networks (MFNets) can efficiently fuse information derived from models with an underlying complex dependency structure. In this contribution, we expand on our previous work by adopting a basis adaptation procedure for the selection of the linear model representing each data source. Our numerical results demonstrate that this procedure is computationally advantageous because it can maximize the use of limited data to learn and exploit the important structures shared among models. Two examples are considered to demonstrate the benefits of the proposed approach: an analytical problem and a nuclear fuel finite element assembly. From these two applications, a lower dependency of MFNets on the model graph has also been observed.
We demonstrate an InAs-based nonlinear dielectric metasurface that can generate terahertz (THz) pulses with opposite phase compared to an unpatterned InAs layer. This enables binary-phase THz metasurfaces for the generation and focusing of THz pulses.
Previous research has provided strong evidence that CO2 and H2O gasification reactions can provide non-negligible contributions to the consumption rates of pulverized coal (pc) char during combustion, particularly in oxy-fuel environments. Fully quantifying the contribution of these gasification reactions has proven difficult, due to the dearth of knowledge of gasification rates at the elevated particle temperatures associated with typical pc char combustion processes, as well as the complex interaction of oxidation and gasification reactions. Gasification reactions tend to become more important at higher char particle temperatures (because of their high activation energy), and they tend to reduce pc oxidation through their endothermicity (i.e., a cooling effect). The work reported here attempts to quantify the influence of the CO2 gasification reaction in a rigorous manner by combining experimental measurements of the particle temperatures and consumption rates of size-classified pc char particles in tailored oxy-fuel environments with simulations from a detailed reacting porous particle model. The results demonstrate that a specific gasification reaction rate relative to the oxidation rate (within an accuracy of approximately +/- 20% of the pre-exponential value) is consistent with the experimentally measured char particle temperatures and burnout rates in oxy-fuel combustion environments. Conversely, the results also show, in agreement with past calculations, that it is extremely difficult to construct a set of kinetics that does not substantially overpredict the particle temperature increase in strongly oxygen-enriched N2 environments. This latter result is believed to stem from deficiencies in standard oxidation mechanisms that fail to account for the falloff in char oxidation rates at high temperatures.
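The temperature argument above can be made explicit with a short, hedged sketch (generic Arrhenius notation, not the paper's fitted kinetics): if each surface reaction rate follows R_i = A_i \exp(-E_i / R_u T_p), then the gasification-to-oxidation rate ratio is

\frac{R_{\mathrm{gas}}}{R_{\mathrm{ox}}} = \frac{A_{\mathrm{gas}}}{A_{\mathrm{ox}}} \exp\!\left(-\frac{E_{\mathrm{gas}} - E_{\mathrm{ox}}}{R_u T_p}\right) ,

which grows with particle temperature T_p whenever E_gas > E_ox, so gasification claims a larger share of char consumption for the hottest particles.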
Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.
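The warm-start stage can be pictured as ordinary supervised imitation before any RL update; the sketch below, with hypothetical network sizes and a placeholder demonstration set, shows the idea (it is not the study's implementation).

# Warm-starting sketch (Python/PyTorch): behavior-clone a policy on
# state-action pairs from classical guidance, then hand it to PPO.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(12, 64), nn.Tanh(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def warm_start(demos, epochs=50):
    """demos: list of (state, expert_action) tensor pairs from classical guidance."""
    for _ in range(epochs):
        for state, expert_action in demos:
            loss = nn.functional.mse_loss(policy(state), expert_action)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

# After warm_start(demos), the cloned `policy` initializes PPO fine-tuning
# rather than starting PPO from random weights.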
High penetrations of residential solar PV can cause voltage issues on low-voltage (LV) secondary networks. Distribution utility planners often use model-based power flow solvers to address these voltage issues and accommodate more PV installations without disrupting the customers already connected to the system. These model-based analyses are computationally expensive and often prone to errors. In this paper, two novel deep learning-based model-free algorithms are proposed that can predict the change in voltages for PV installations without any inherent network information. These algorithms use only the real power (P), reactive power (Q), and voltage (V) data from Advanced Metering Infrastructure (AMI) to calculate the change in voltages for an additional PV installation at any customer location in the LV secondary network. Both algorithms are tested on three datasets from two feeders and compared to conventional model-based methods and existing model-free methods. The proposed methods are also applied to estimate the locational PV hosting capacity for both feeders and show better accuracy than an existing model-free method. Results show that data filtering or pre-processing can improve model performance if the testing data point exists in the training dataset used for that model.
The novel Hydromine harvests energy from flowing water with no external moving parts, resulting in a robust system with minimal environmental impact. Here two deployment scenarios are considered: an offshore floating platform configuration to capture energy from relatively steady ocean currents at megawatt-scale, and a river-based system at kilowatt-scale mounted on a pylon. Hydrodynamic and techno-economic models are developed. The hydrodynamic models are used to maximize the efficiency of the power conversion. The techno-economic models optimize the system size and layout and ultimately seek to minimize the levelized-cost-of-electricity produced. Parametric and sensitivity analyses are performed on the models to optimize performance and reduce costs.
Holography is an effective diagnostic for the three-dimensional imaging of multiphase and particle-laden flows. Traditional digital inline holography (DIH), however, is subject to distortions from phase delays caused by index-of-refraction changes. This prevents DIH from being implemented in extreme conditions where shockwaves and significant thermal gradients are present. To overcome this challenge, multiple techniques have been developed to correct for the phase distortions. In this work, several holography techniques for distortion removal are discussed, including digital off-axis holography, phase conjugate digital in-line holography, and electric field techniques. Then, a distortion cancelling off-axis holography configuration is implemented for distortion removal and a high-magnification phase conjugate system is evaluated. Finally, both diagnostics are applied to study extreme pyrotechnic igniter environments.
We demonstrate the use of a low-temperature-grown GaAs (LT-GaAs) metasurface as an ultrafast photoconductive switching element gated with 1550 nm laser pulses. The metasurface is designed to enhance weak two-step photon absorption at 1550 nm, enabling THz pulse detection.
A high altitude electromagnetic pulse (HEMP) caused by a nuclear explosion has the potential to severely impact the operation of large-scale electric power grids. This paper presents a top-down mitigation design strategy that considers grid-wide dynamic behavior during a simulated HEMP event and uses optimal control theory to determine the compensation signals required to protect critical grid assets. The approach is applied to both a standalone transformer system and a demonstrative 3-bus grid model. The performance of the top-down approach relative to conventional protection solutions is evaluated, and several optimal control objective functions are explored. Finally, directions for future research are proposed.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published that very same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic "Python in neuroscience" by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10^13 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10^18 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is quite remarkable that, apart from the change in semantics for the parallelization, this has mostly happened without users noticing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
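The doubling count follows directly from the quoted figures:

\frac{10^{18}\ \text{FLOPS}}{10^{13}\ \text{FLOPS}} = 10^{5}, \qquad \log_2 10^{5} = \frac{5}{\log_{10} 2} \approx 16.6 ,

i.e., roughly 17 doublings over 22 years, or one doubling about every 16 months.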
This is an investigation of two experimental datasets of laminar hypersonic flows over a double-cone geometry, acquired in the Calspan-University at Buffalo Research Center's Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could partly be due to mis-specified inlet conditions. The authors of this paper solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset. However, that inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier-Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that the deterministic inversion yields inlet conditions that do not agree with what was stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
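The Gaussian uncertainty bounds mentioned above correspond to the standard Laplace-approximation construction (our notation, a hedged sketch): near the deterministic optimum of a least-squares inversion with forward Jacobian J, noise covariance \Gamma_{\text{noise}}, and prior covariance \Gamma_{\text{prior}}, the posterior is approximated as Gaussian with covariance

\Gamma_{\text{post}} \approx \left( J^{\top} \Gamma_{\text{noise}}^{-1} J + \Gamma_{\text{prior}}^{-1} \right)^{-1} ,

so the credibility of the reported bounds rests on how nearly Gaussian the true posterior is, which is exactly what the a posteriori check assesses.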
The International Electrotechnical Commission (IEC) Subcommittee SC45A has been active in the development of cybersecurity standards and technical reports on the protection of Instrumentation and Control (I&C) and Electrical Power Systems (EPS) that perform significant functions necessary for the safe and secure operation of Nuclear Power Plants (NPPs). These international standards and reports advance and promote the implementation of good practices around the world. In recent years, there have been advances in NPP cybersecurity risk management nationally and internationally. For example, IAEA publications NSS 17-T [1] and NSS 33-T [2] propose a framework for computer security risk management that implements a risk management program at both the facility and individual system levels. These international approaches (i.e., IAEA), national approaches (e.g., Canada's HTRA [3]), and technical methods (e.g., HAZCADS [4], Cyber Informed Engineering [5], France's EBIOS [6]) have advanced risk management within NPP cybersecurity programmes that implement international and national standards. This paper summarizes key elements of the analysis behind the new IEC Technical Report. The paper identifies the eleven challenges of applying ISO/IEC 27005:2018 [7] cybersecurity risk management to the I&C systems and EPS of NPPs, and provides a summary comparison of how national approaches address these challenges.
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Foulk, James W.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted on a wave tank model of a bottom-raised oscillating surge wave energy converter (OSWEC) in regular waves. The OSWEC model was a thin rectangular flap, allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for the added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study in which system parameters, including damping and added mass values, are varied. The likely contributors to the differences between predictions and experiments are tank reflections, standing waves that can occur in long, narrow wave tanks, and the thin-plate assumption employed in the analytical approach.
This research investigates novel techniques to enhance supply chain security via the addition of configuration management controls to protect the Instrumentation and Control (I&C) systems of a Nuclear Power Plant (NPP). A secure element (SE) is integrated into a proof-of-concept testbed by means of a commercially available smart card, which provides tamper-resistant key storage and a cryptographic coprocessor. The secure element simplifies the setup and establishment of a secure communications channel between the configuration management and verification system and the I&C system (running OpenPLC). This secure channel can be used to provide copies of commands and configuration changes of the I&C system for analysis.
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as "tail-slapping," or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front face chamfer angle has the largest influence on stability, with low angles being more stable.
Raffaelle, Patrick R.; Wang, George T.; Shestopalov, Alexander A.
The focus of this study was to demonstrate the vapor-phase halogenation of Si(100) and subsequently evaluate the inhibiting ability of the halogenated surfaces toward atomic layer deposition (ALD) of aluminum oxide (Al2O3). Hydrogen-terminated silicon ⟨100⟩ (H−Si(100)) was halogenated using N-chlorosuccinimide (NCS), N-bromosuccinimide (NBS), and N-iodosuccinimide (NIS) in a vacuum-based chemical process. The composition and physical properties of the prepared monolayers were analyzed using X-ray photoelectron spectroscopy (XPS) and contact angle (CA) goniometry. These measurements confirmed that all three reagents were more effective in halogenating H−Si(100) than OH−Si(100) in the vapor phase. The stability of the modified surfaces in air was also tested, with the chlorinated surface showing the greatest resistance to monolayer degradation and silicon oxide (SiO2) generation within the first 24 h of exposure to air. XPS and atomic force microscopy (AFM) measurements showed that the succinimide-derived Hal−Si(100) surfaces exhibited blocking ability superior to that of H−Si(100), a commonly used ALD resist. This halogenation method provides a dry-chemistry alternative for creating halogen-based ALD resists on Si(100) in near-ambient environments.
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and Light Water Reactor Sustainability Programs, have conducted testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 91.4 cm in depth) and more complex three-dimensional (circular cross sections of longer lengths up to 9.1 m and changes in direction) opening configurations. The primary impact of this effort is to define the physical designs through which an adversary could successfully pass, even for a potentially complex opening, as well as the designs that an adversary would not be expected to successfully traverse. These data can then be used to support risk-informed decision making.
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture without increasing rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own, very closely spaced towers avoid these disadvantages but create a different one: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to designing a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions of paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with a hub-to-hub separation distance of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
Open Charge Point Protocol (OCPP) 1.6 is widely used in the electric vehicle (EV) charging industry to communicate between Charging System Management Services (CSMSs) and Electric Vehicle Supply Equipment (EVSE). Unlike OCPP 2.0.1, OCPP 1.6 uses unencrypted websocket communications to exchange information between EVSE devices and an on-premise or cloud-based CSMS. In this work, we first demonstrate two machine-in-the-middle attacks on OCPP sessions to terminate charging sessions and gain root access to the EVSE equipment via remote code execution. Second, we demonstrate a malicious firmware update with a code injection payload to compromise an EVSE. Lastly, we demonstrate two methods to prevent availability of the EVSE or CSMS. One of these, originally reported by SaiFlow, prevents traffic to legitimate EVSE equipment using a DoS-like attack on CSMSs by repeatedly connecting and authenticating several charge points (CPs) with the same identities as the legitimate CP. These vulnerabilities were demonstrated with proof-of-concept exploits in a virtualized Cyber Range at Wright State University and/or with a 350 kW Direct Current Fast Charger at Idaho National Laboratory. The team found that OCPP 1.6 could be protected from these attacks by adding secure shell tunnels to the protocol if upgrading to OCPP 2.0.1 was not an option.
The increased complexity of high-consequence digital system designs with intricate interactions between numerous components has placed a greater need on ensuring that the design satisfies its intended requirements. This digital assurance can only come about through rigorous mathematical analysis of the design. This manuscript provides a detailed description of a formal language semantics that can be used for modeling and verification of systems. We use Event-B to build a formalized semantics that supports the construction of triggered enable statecharts with a run-to-completion scheduling. Rodin has previously been used to develop and analyse models using this semantics.
Visualization of mode shapes is a crucial step in modal analysis. However, the methods to create the test geometry, which typically require arduous hand measurements and approximations of rotation matrices, are crude. This leads to a lengthy test set-up process and a test geometry with potentially high measurement errors. Test and analysis delays can also be experienced if the orientation of an accelerometer is documented incorrectly, which happens more often than engineers would like to admit. To mitigate these issues, a methodology has been created to generate the test geometry (coordinates and rotation matrices) with probe data from a portable coordinate measurement machine (PCMM). This methodology has led to significant reductions in the test geometry measurement time, reductions in test geometry measurement errors, and even reduced test times. Simultaneously, a methodology has also been created to use the PCMM to easily identify desired measurement locations, as specified by a model. This paper will discuss the general framework of these methods and the realized benefits, using examples from actual tests.
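To illustrate the kind of computation involved, the sketch below builds a rotation matrix from three probed points; this is a generic construction under our own assumptions about the probing pattern, not necessarily the paper's PCMM workflow.

# Hedged sketch (Python): rotation matrix for a sensor from three PCMM probe
# points: the sensor origin, a point along its +x axis, and a point in its
# x-y plane. Columns of R are the sensor axes expressed in the global frame.
import numpy as np

def rotation_from_probes(p_origin, p_x, p_xy):
    x = p_x - p_origin
    x /= np.linalg.norm(x)
    z = np.cross(x, p_xy - p_origin)   # normal to the probed x-y plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                 # completes the right-handed triad
    return np.column_stack((x, y, z))

R = rotation_from_probes(np.array([0.0, 0.0, 0.0]),
                         np.array([1.0, 0.0, 0.0]),
                         np.array([0.5, 0.5, 0.0]))  # identity for these points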
Polymers are widely used as damping materials in vibration and impact applications. Liquid crystal elastomers (LCEs) are a unique class of polymers that may offer enhanced energy absorption capacity under impact conditions over conventional polymers due to their ability to align the nematic phase during loading. Because LCEs are relatively new materials, their high-rate compressive properties have been minimally studied. Here, we investigated the high strain rate compression behavior of different solid LCEs, including cast polydomain and 3D-printed, preferentially oriented monodomain samples. Direct ink write (DIW) 3D-printed samples allow unique sample designs, namely a specific orientation of the mesogens with respect to the loading direction. Loading the sample in different orientations can induce mesogen rotation during mechanical loading and, subsequently, different stress-strain responses under impact. We also used a reference polymer, a bisphenol-A (BPA) cross-linked resin, to contrast LCE behavior with conventional elastomer behavior.
Sustainable use of water resources continues to be a challenge across the globe, in part because of the complex set of physical and social behaviors that interact to influence water management from local to global scales. Analyses of water resources have been conducted using a variety of techniques, including qualitative evaluations of media narratives. This study aims to augment these methods by leveraging computational and quantitative techniques from the social sciences focused on text analysis. Specifically, we use natural language processing methods to investigate a large corpus (approximately 1.8 million newspaper articles) spanning approximately 35 years (1982–2017) for insights into human-nature interactions with water. Focusing on local and regional United States publications, our analysis demonstrates important dynamics in water-related dialogue, from drinking water and pollution to connections with other critical infrastructures, such as energy, across different parts of the country. Our assessment, which looks at water as a system, also highlights key actors and sentiments surrounding water. Extending these analytical methods could further improve our understanding of the complex roles of water in current society, which should be considered in emerging activities to mitigate and respond to resource conflicts and climate change.