Large-scale non-intrusive inspection (NII) of commercial vehicles is being adopted in the U.S. at a pace and scale that will result in a commensurate growth in adjudication burdens at land ports of entry. The use of computer vision and machine learning models to augment human operator capabilities is critical in this sector to ensure the flow of commerce and to maintain efficient and reliable security operations. The development of models for this scale and speed requires novel approaches to object detection and novel adjudication pipelines. Here we propose a notional combination of existing object detection tools using a novel ensembling framework to demonstrate the potential for hierarchical and recursive operations. Further, we explore the combination of object detection with image similarity as an adjacent capability to provide post-hoc oversight to the detection framework. The experiments described herein, while notional and intended for illustrative purposes, demonstrate that the judicious combination of diverse algorithms can result in a resilient workflow for the NII environment.
It is impossible in practice to comprehensively test even small software programs due to the vastness of the reachable state space; however, modern cyber-physical systems such as aircraft require a high degree of confidence in software safety and reliability. Here we explore methods of generating test sets to effectively and efficiently explore the state space for a module based on the Traffic Collision Avoidance System (TCAS) used on commercial aircraft. A formal model of TCAS in the model-checking language NuSMV provides an output oracle. We compare test sets generated using various methods (covering arrays, random inputs, and a low-complexity input paradigm), applied to 28 versions of the TCAS C program containing seeded errors. Faults are triggered by tests for all 28 programs using a combination of covering arrays and random input generation. Complexity-based inputs perform more efficiently than covering arrays, and can be paired with random input generation to create efficient and effective test sets. A random forest classifier identifies variable values that can be targeted to generate tests even more efficiently in future work by combining a machine-learned fuzzing algorithm with more complex model oracles developed in model-based systems engineering (MBSE) software.
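For illustration, a minimal sketch of the variable-ranking step is given below, using scikit-learn's RandomForestClassifier on hypothetical test vectors and fault-trigger labels; the input encoding, the stand-in seeded-fault rule, and all parameter values are assumptions, not the paper's data.

```python
# Minimal sketch (not the paper's pipeline): rank TCAS-like input variables by how
# strongly they influence fault triggering, using a random forest's feature importances.
# X (test input vectors) and y (1 = fault triggered) are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(500, 12))        # 500 tests, 12 discretized input variables
y = ((X[:, 3] == 2) & (X[:, 7] > 0)).astype(int)   # stand-in rule for a seeded fault

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("variables most associated with fault triggering:", ranking[:3])
```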
This paper presents the formulation, implementation, and demonstration of a new, largely phenomenological, model for the damage-free (micro-crack-free) thermomechanical behavior of rock salt. Unlike most salt constitutive models, the new model includes both drag stress (isotropic) and back stress (kinematic) hardening. The implementation utilizes a semi-implicit scheme and a fall-back fully-implicit scheme to numerically integrate the model's differential equations. Particular attention was paid to the initial guesses for the fully-implicit scheme. Of the four guesses investigated, an initial guess that interpolated between the previous converged state and the fully saturated hardening state had the best performance. The numerical implementation was then used in simulations that highlighted the difference between drag stress hardening versus combined drag and back stress hardening. Simulations of multi-stage constant stress tests showed that only combined hardening could qualitatively represent reverse (inverse transient) creep, as well as the large transient strains experimentally observed upon switching from axisymmetric compression to axisymmetric extension. Simulations of a gas storage cavern subjected to high and low gas pressure cycles showed that combined hardening led to substantially greater volume loss over time than drag stress hardening alone.
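As a schematic of the best-performing initial guess described above, the interpolation can be written as follows; the symbols and the interpolation weight are assumed notation for this summary rather than the paper's.

```latex
% \boldsymbol{\xi}_{n}: previously converged hardening state;
% \boldsymbol{\xi}_{\mathrm{sat}}: fully saturated hardening state (notation assumed).
\boldsymbol{\xi}^{(0)}_{n+1} \;=\; (1-\theta)\,\boldsymbol{\xi}_{n} \;+\; \theta\,\boldsymbol{\xi}_{\mathrm{sat}},
\qquad \theta \in [0,1].
```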
This paper presents a run-to-run (R2R) controller for mechanical serial sectioning (MSS). MSS is a destructive material analysis process which repeatedly removes a thin layer of material and images the exposed surface. The images are then used to gain insight into the material properties and often to construct a 3-dimensional reconstruction of the material sample. Currently, an experienced human operator selects the parameters of the MSS to achieve the desired thickness. The proposed R2R controller will automate this process while improving the precision of the material removal. The proposed R2R controller solves an optimization problem designed to minimize the variance of the material removal subject to achieving the expected target removal. This optimization problem was embedded in an R2R framework to provide iterative feedback for disturbance rejection and convergence to the target removal amount. Since an analytic model of the MSS system is unavailable, we adopted a data-driven approach to synthesize our R2R controller from historical data. The proposed R2R controller is demonstrated through simulations. Future work will empirically demonstrate the proposed R2R controller through experiments with a real MSS system.
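As a generic illustration of run-to-run feedback (not the paper's variance-minimizing, data-driven controller), the sketch below shows an EWMA-style update in which the commanded removal is corrected after each run; the stand-in plant model, gain, and target values are hypothetical.

```python
# Generic run-to-run feedback loop: an EWMA-style update nudges the commanded
# removal so the realized removal converges to the target despite disturbances.
import numpy as np

target = 2.0          # desired removal per section (microns, hypothetical)
gain = 0.6            # feedback gain, hypothetical tuning
u = target            # initial commanded removal
rng = np.random.default_rng(1)

for run in range(10):
    removed = 0.9 * u + rng.normal(0, 0.05)   # stand-in plant: bias plus noise
    u += gain * (target - removed)            # feedback correction for the next run
    print(f"run {run}: removed {removed:.3f}, next command {u:.3f}")
```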
We have extended the computational singular perturbation (CSP) method to differential algebraic equation (DAE) systems and demonstrated its application in a heterogeneous-catalysis problem. The extended method obtains the CSP basis vectors for DAEs from a reduced Jacobian matrix that takes the algebraic constraints into account. We use a canonical problem in heterogeneous catalysis, the transient continuous stirred tank reactor (T-CSTR), for illustration. The T-CSTR problem is modelled fundamentally as an ordinary differential equation (ODE) system, but it can be transformed to a DAE system if one approximates typically fast surface processes using algebraic constraints for the surface species. We demonstrate the application of CSP analysis for both ODE and DAE constructions of a T-CSTR problem, illustrating the dynamical response of the system in each case. We also highlight the utility of the analysis in commenting on the quality of any particular DAE approximation built using the quasi-steady state approximation (QSSA), relative to the ODE reference case.
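One common form of such a reduced Jacobian, assuming a semi-explicit index-1 DAE (the paper's construction may differ in detail), is:

```latex
% For a semi-explicit index-1 DAE
%   \dot{y} = f(y,z), \qquad 0 = g(y,z),
% a constraint-consistent (reduced) Jacobian on the algebraic manifold is
J_{\mathrm{red}} \;=\; \frac{\partial f}{\partial y}
  \;-\; \frac{\partial f}{\partial z}
        \left(\frac{\partial g}{\partial z}\right)^{-1}
        \frac{\partial g}{\partial y}.
```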
Heat waves are increasing in severity, duration, and frequency, making historical weather patterns insufficient for assessments of building resilience. This work introduces a stochastic weather generator called the multi-scenario extreme weather simulator (MEWS) that produces credible future heat waves. MEWS calculates statistical parameters from historical weather data and then shifts them using climate projections of increasing severity and frequency. MEWS is demonstrated on the EnergyPlus medium office prototype model for climate zone 4B with five climate scenarios extending to 2060. The results show how changes in climate and heat waves affect electric loads, peak loads, and thermal comfort with uncertainty.
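A conceptual sketch of the parameter-shifting idea, not the MEWS interface itself, is shown below; the historical event rate, peak exceedance, and scenario multiplier are placeholder values.

```python
# Conceptual sketch only (not the MEWS API): sample heat-wave events from a Poisson
# process whose rate and peak-temperature exceedance are scaled by an assumed
# climate-projection factor.
import numpy as np

rng = np.random.default_rng(42)
hist_rate = 2.0            # historical heat waves per summer (placeholder)
hist_peak_shift = 4.0      # historical peak exceedance above normal, deg C (placeholder)
scenario_factor = 1.5      # frequency/intensity multiplier from a climate projection

n_events = rng.poisson(hist_rate * scenario_factor)
peaks = rng.gamma(shape=2.0, scale=hist_peak_shift * scenario_factor / 2.0, size=n_events)
durations = rng.integers(2, 8, size=n_events)   # event durations in days
print(f"{n_events} synthetic heat waves; peak exceedances (deg C): {np.round(peaks, 1)}")
```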
Density fluctuations in compressible turbulent boundary layers cause aero-optical distortions that affect the performance of optical systems such as sensors and lasers. The development of models for predicting the aero-optical distortions relies on theory and reference data that can be obtained from experiments and time-resolved simulations. This paper reports on wall-modeled large-eddy simulations of turbulent boundary layers over a flat plate at Mach 3.5, 7.87, and 13.64. The conditions for the Mach 3.5 case match those for the DNS presented by Miller et al. [1]. The conditions for the Mach 7.87 simulation match those inside the Hypersonic Wind Tunnel at Sandia National Laboratories, and for the Mach 13.64 case the conditions match those inside the Arnold Engineering Development Complex Hypervelocity Tunnel 9. Overall, adequate agreement of the velocity and temperature as well as Reynolds stress profiles with reference data from direct numerical simulations is obtained for the different Mach numbers. For all three cases, the normalized root-mean-square optical path difference was computed and compared with data obtained from the reference direct numerical simulations and experiments, as well as predictions obtained with a semi-analytical relationship from the University of Notre Dame. Above Mach 5, the normalized path difference obtained from the simulations is above the model prediction. This provides motivation for future work aimed at evaluating the assumptions behind the Notre Dame model for hypersonic boundary layer flows.
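For reference, the standard Gladstone-Dale relations used to convert a density field into optical path difference are summarized below; the notation is generic, and the simulations may use equivalent discrete forms.

```latex
n(\mathbf{x},t) = 1 + K_{GD}\,\rho(\mathbf{x},t), \qquad
\mathrm{OPL}(x,y,t) = \int_{0}^{\delta} n(x,y,z,t)\,\mathrm{d}z, \qquad
\mathrm{OPD} = \mathrm{OPL} - \langle \mathrm{OPL} \rangle, \qquad
\mathrm{OPD}_{\mathrm{rms}} = \sqrt{\overline{\mathrm{OPD}^{2}}}.
```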
Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
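A toy sketch of the mirroring idea follows: appending the inverse of each layer in reverse order makes the ideal mirrored circuit compose to the identity, so its correct output is known in advance and efficiently verifiable. The published benchmarks add randomization layers not shown here, and the explicit-matrix representation is purely illustrative.

```python
# Toy illustration of circuit mirroring: a "program" is a list of unitary layers;
# the mirror appends each layer's inverse in reverse order, so the ideal
# composite operation is the identity.
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(dim):
    # QR-based random unitary, for illustration only
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

layers = [random_unitary(4) for _ in range(5)]              # a 5-layer, 2-qubit "program"
mirror = layers + [U.conj().T for U in reversed(layers)]    # program followed by its mirror

total = np.eye(4)
for U in mirror:
    total = U @ total
print("mirrored circuit composes to identity:", np.allclose(total, np.eye(4)))
```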
The penetration of renewable energy resources (RER) and energy storage systems (ESS) into the power grid has accelerated in recent times due to aggressive emission and RER penetration targets. The integrated resource planning (IRP) framework can help ensure long-term resource adequacy while satisfying RER integration and emission reduction targets in a cost-effective and reliable manner. In this paper, we present pIRP (probabilistic Integrated Resource Planning), an open-source Python-based software tool designed for optimal portfolio planning for an RER- and ESS-rich future grid and for addressing the capacity expansion problem. The tool, which is planned to be released publicly, offers ESS and RER modeling capabilities along with enhanced uncertainty handling, making it one of the more advanced non-commercial IRP tools currently available. Additionally, the tool is equipped with an intuitive graphical user interface and expansive plotting capabilities. Impacts of uncertainties in the system are captured using Monte Carlo simulations, letting users analyze hundreds of scenarios with detailed scenario reports. A linear programming based architecture is adopted, which ensures sufficiently fast solution times while considering hundreds of scenarios and characterizing profile risks at varying levels of RER and ESS penetration. Results for a test case using data from parts of the Eastern Interconnection are provided to demonstrate the capabilities offered by the tool.
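A toy capacity-expansion linear program in the same spirit is sketched below using scipy.optimize.linprog; the resource set, costs, and capacity credits are hypothetical and do not reflect pIRP's formulation.

```python
# Toy capacity-expansion LP: choose solar and storage capacity to meet a
# peak-demand requirement at minimum annualized cost (all numbers are placeholders).
import numpy as np
from scipy.optimize import linprog

cost = np.array([60.0, 90.0])                 # annualized $/MW-yr for solar, storage
peak_contribution = np.array([0.5, 0.95])     # capacity credit toward the peak
peak_demand = 100.0                           # MW

# linprog handles <= constraints, so the >= adequacy constraint is negated.
res = linprog(c=cost,
              A_ub=-peak_contribution.reshape(1, -1),
              b_ub=[-peak_demand],
              bounds=[(0, None), (0, None)])
print("solar MW, storage MW:", np.round(res.x, 1), " annual cost:", round(res.fun, 1))
```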
In accident scenarios involving release of tritium during handling and storage, the level of risk to human health is dominated by the extent to which radioactive tritium is oxidized to the water form (T2O or THO). At some facilities, tritium inventories consist of very small quantities stored at sub-atmospheric pressure, which means that tritium release accident scenarios will likely produce concentrations in air that are well below the lower flammability limit. It is known that isotope effects on reaction rates should result in slower oxidation rates for heavier isotopes of hydrogen, but this effect has not previously been quantified for oxidation at concentrations well below the lower flammability limit for hydrogen. This work describes hydrogen isotope oxidation measurements in an atmospheric tube furnace reactor. These measurements consist of five concentration levels between 0.01% and 1% protium or deuterium and two residence times. Oxidation is observed to occur between about 550°C and 800°C, with higher levels of conversion achieved at lower temperatures for protium with respect to deuterium at the same volumetric inlet concentration and residence time. Computational fluid dynamics simulations of the experiments were used to customize reaction orders and Arrhenius parameters in a 1-step oxidation mechanism. The trends in the rates for protium and deuterium are extrapolated based on guidance from literature to produce kinetic rate parameters appropriate for tritium oxidation at low concentrations.
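The calibrated mechanism has the generic global one-step form below, where the reaction orders a and b and the Arrhenius constants A and E_a are the fitted quantities; their values are not reproduced here.

```latex
\mathrm{H}_2 + \tfrac{1}{2}\,\mathrm{O}_2 \rightarrow \mathrm{H}_2\mathrm{O}, \qquad
r \;=\; A\,\exp\!\left(-\frac{E_a}{R\,T}\right)[\mathrm{H}_2]^{a}\,[\mathrm{O}_2]^{b}.
```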
The precise estimation of performance loss rate (PLR) of photovoltaic (PV) systems is vital for reducing investment risks and increasing the bankability of the technology. Until recently, the PLR of fielded PV systems was mainly estimated through the extraction of a linear trend from a time series of performance indicators. However, operating PV systems exhibit failures and performance losses that cause variability in the performance and may bias the PLR results obtained from linear trend techniques. Change-point (CP) methods were thus introduced to identify nonlinear trend changes and behaviour. The aim of this work is to perform a comparative analysis among different CP techniques for estimating the annual PLR of eleven grid-connected PV systems installed in Cyprus. Outdoor field measurements over an 8-year period (June 2006-June 2014) were used for the analysis. The obtained results when applying different CP algorithms to the performance ratio time series (aggregated into monthly blocks) demonstrated that the extracted trend may not always be linear but can sometimes exhibit nonlinearities. The application of different CP methods resulted in PLR values that differ by up to 0.85% per year (for the same number of CPs/segments).
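A minimal post-processing sketch follows: once change-points are known (here they are simply assumed rather than detected by the CP algorithms compared in the paper), a linear trend is fit to each segment of a synthetic monthly performance-ratio series and converted to an annual PLR.

```python
# Fit a per-segment linear trend to a synthetic monthly performance-ratio series
# and express each slope as an annual performance loss rate (PLR).
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(96)                                    # 8 years of monthly PR values
pr = 0.82 - (0.005 / 12) * months + rng.normal(0, 0.005, 96)
pr[48:] -= 0.02                                           # synthetic step change at year 4
breakpoints = [0, 48, 96]                                 # assumed change-points

for start, stop in zip(breakpoints[:-1], breakpoints[1:]):
    slope, intercept = np.polyfit(months[start:stop], pr[start:stop], 1)
    start_value = slope * start + intercept               # PR at the segment start
    plr = 100 * 12 * slope / start_value                  # % per year
    print(f"segment {start}-{stop}: PLR = {plr:.2f} %/yr")
```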
Femtosecond laser electronic excitation tagging (FLEET) is a powerful unseeded velocimetry technique typically used to measure one component of velocity along a line, or two or three components from a dot. In this Letter, we demonstrate a dotted-line FLEET technique which combines the dense profile capability of a line with the ability to perform two-component velocimetry with a single camera on a dot. Our set-up uses a single beam path to create multiple simultaneous spots, more than previously achieved in other FLEET spot configurations. We perform dotted-line FLEET measurements downstream of a highly turbulent, supersonic nitrogen free jet. Dotted-line FLEET is created by focusing light transmitted by a periodic mask with rectangular slits of 1.6 × 40 mm² and an edge-to-edge spacing of 0.5 mm, then focusing the imaged light at the measurement region. Up to seven symmetric dots spaced approximately 0.9 mm apart, with mean full-width at half maximum diameters between 150 and 350 µm, are simultaneously imaged. Both streamwise and radial velocities are computed and presented in this Letter.
Rock salt is being considered as a medium for energy storage and radioactive waste disposal. A Disturbed Rock Zone (DRZ) develops in the immediate vicinity of excavations in rock salt, with an increase in permeability, which alters the migration of gases and liquids around the excavation. When creep occurs adjacent to a stiff inclusion such as a concrete plug, it is expected that the stress state near the inclusion will become more hydrostatic and less deviatoric, promoting healing (permeability reduction) of the DRZ. In this scoping study, we measured the permeability of DRZ rock salt over time adjacent to inclusions (plugs) of varying stiffness to determine how the healing of rock salt, as reflected in the permeability changes, is a function of stress and time. Samples were created with three different inclusion materials in a central hole along the axis of a salt core: (i) very soft silicone sealant, (ii) Sorel cement, and (iii) carbon steel. The measured permeabilities are corrected for the gas slippage effect. We observed that the permeability change is a function of the inclusion material. The stiffer the inclusion, the more rapidly the permeability reduces with time.
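The slippage correction follows the standard Klinkenberg form, where k_g is the apparent gas permeability, k_∞ the slip-corrected (intrinsic) permeability, P_m the mean pore pressure, and the slip factor b is fit to the measurements rather than quoted from the paper.

```latex
k_{g} \;=\; k_{\infty}\left(1 + \frac{b}{P_{m}}\right).
```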
For the model-based control of low-voltage microgrids, state and parameter information are required. Different optimal estimation techniques can be employed for this purpose. However, these estimation techniques require knowledge of noise covariances (process and measurement noise). Incorrect values of noise covariances can deteriorate the estimator performance, which in turn can reduce the overall controller performance. This paper presents a method to identify noise covariances for voltage dynamics estimation in a microgrid. The method is based on the autocovariance least squares technique. A simulation study of a simplified 100 kVA, 208 V microgrid system in MATLAB/Simulink validates the method. Results show that the estimates are close to the actual values for Gaussian noise, while non-Gaussian noise produces slightly larger estimation error.
Decarbonizing natural gas networks is a challenging enterprise. Replacing natural gas with renewable hydrogen is one option under global consideration to decarbonize heating, power, and residential uses of natural gas. Hydrogen is known to degrade fatigue and fracture properties of structural steels, including pipeline steels. In this study, we describe environmental testing strategies aimed at generating baseline fatigue and fracture trends with efficient use of testing resources. For example, by controlling the stress intensity factor (K) in both K-increasing and K-decreasing modes, fatigue crack growth can be measured for multiple load ratios with a single specimen. Additionally, tests can be designed such that fracture tests can be performed at the conclusion of the fatigue crack growth test, further reducing the resources needed to evaluate the fracture mechanics parameters utilized in design. These testing strategies are employed to establish the fatigue crack growth behavior and fracture resistance of API grade steels in gaseous hydrogen environments. In particular, we explore the effects of load ratio and hydrogen partial pressure on the baseline fatigue and fracture trends of line pipe steels in gaseous hydrogen. These data are then used to test the applicability of a simple, universal fatigue crack growth model that accounts for both load ratio and hydrogen partial pressure. The appropriateness of this model for use as an upper bound on fatigue crack growth is discussed.
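For orientation, the baseline Paris-law form is recalled below; the universal model evaluated in the paper adds dependence on the load ratio R and the hydrogen partial pressure, and its exact expression is not reproduced here.

```latex
\frac{\mathrm{d}a}{\mathrm{d}N} \;=\; C\,(\Delta K)^{m},
\qquad \Delta K = K_{\max} - K_{\min},
\qquad R = \frac{K_{\min}}{K_{\max}}.
```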
This study presents a method that can be used to gain information relevant to determining the corrosion risk for spent nuclear fuel (SNF) canisters during extended dry storage. Currently, it is known that stainless steel canisters are susceptible to chloride-induced stress corrosion cracking (CISCC). However, the rate of CISCC degradation and the likelihood that it could lead to a through-wall crack is unknown. This study uses well-developed computational fluid dynamics and particle-tracking tools and applies them to SNF storage to determine the rate of deposition on canisters. The deposition rate is determined for a vertical canister system and a horizontal canister system, at various decay heat rates with a uniform particle size distribution, ranging from 0.25 to 25 µm, used as an input. In all cases, most of the dust entering the overpack passed through without depositing. Most of what was retained in the overpack was deposited on overpack surfaces (e.g., inlet and outlet vents); only a small fraction was deposited on the canister itself. These results are provided for generalized canister systems with a generalized input; as such, this technical note is intended to demonstrate the technique. This study is a part of an ongoing effort funded by the U.S. Department of Energy, Nuclear Energy Office of Spent Fuel Waste Science and Technology, which is tasked with doing research relevant to developing a sound technical basis for ensuring the safe extended storage and subsequent transport of SNF. This work is being presented to demonstrate a potentially useful technique for SNF canister vendors, utilities, regulators, and stakeholders to utilize and further develop for their own designs and site-specific studies.
Neural networks (NN) have become almost ubiquitous with image classification, but in their standard form produce point estimates, with no measure of confidence. Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates through the posterior distribution. As NN are applied in more high-consequence applications, UQ is becoming a requirement. Automating systems can save time and money, but only if the operator can trust what the system outputs. BNN provide a solution to this problem by not only giving accurate predictions and estimates, but also an interval that includes reasonable values within a desired probability. Despite their positive attributes, BNN are notoriously difficult and time consuming to train. Traditional Bayesian methods use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being too slow. The most common method is variational inference (VI) due to its fast computation, but there are multiple concerns with its efficacy. MCMC is the gold standard and, given enough time, will produce the correct result. VI, alternatively, is an approximation that converges asymptotically. Unfortunately (or fortunately), high-consequence problems often do not live in the land of asymptopia, so solutions like MCMC are preferable to approximations. We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI), where materials of interest can be identified by their unique spectral signature. This is a challenging field due to the numerous permuting effects practical collection of HSI has on measured spectra. Both models are trained using out-of-the-box tools on a high-fidelity HSI target detection scene. Both MCMC- and VI-trained BNN perform well overall at target detection on a simulated HSI scene. Splitting the test set predictions into two classes, high confidence and low confidence predictions, presents a path to automation. For the MCMC-trained BNN, the high confidence predictions have a 0.95 probability of detection with a false alarm rate of 0.05 when considering pixels with target abundance of 0.2. VI-trained BNN have a 0.25 probability of detection for the same, but their performance on high confidence sets matched MCMC for abundances >0.4. However, the VI-trained BNN on this scene required significant expert tuning to get these results, while MCMC worked immediately. On neither scene was MCMC prohibitively time consuming, as is often assumed, but the networks we used were relatively small. This paper provides an example of how to utilize the benefits of UQ, but also aims to increase awareness that different training methods can give different results for the same model. If sufficient computational resources are available, the best approach, rather than the fastest or most efficient, should be used, especially for high-consequence problems.
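A schematic of the confidence split is sketched below, assuming posterior samples of per-pixel detection probability are available from a trained BNN; the credible-interval threshold and all array shapes are hypothetical rather than the paper's settings.

```python
# Schematic confidence split: call a prediction "high confidence" when its 90%
# credible interval lies entirely on one side of the 0.5 decision threshold.
# The posterior samples here are synthetic placeholders, not BNN output.
import numpy as np

rng = np.random.default_rng(5)
posterior = rng.beta(a=rng.uniform(1, 8, 1000)[:, None],
                     b=rng.uniform(1, 8, 1000)[:, None],
                     size=(1000, 200))                    # 1000 pixels x 200 posterior draws
lo, hi = np.percentile(posterior, [5, 95], axis=1)
high_conf = (lo > 0.5) | (hi < 0.5)                       # interval excludes the threshold
print(f"high-confidence fraction: {high_conf.mean():.2f}")
```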
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power loss due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients are greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short-circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash test or nameplate short-circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle-of-incidence, and irradiance nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance at ∼3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
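The conversion from measured short-circuit current to effective irradiance has the standard form sketched below; the reference Isc, temperature coefficient, and measurement values are placeholders rather than the plant's calibration data.

```python
# Standard Isc-to-effective-irradiance conversion with temperature correction.
def effective_irradiance(isc, t_mod, isc_ref=9.8, alpha_isc=0.0005, g_ref=1000.0):
    """Effective irradiance (W/m^2) from measured short-circuit current.

    isc: measured short-circuit current (A)
    t_mod: module temperature (deg C)
    isc_ref: flash-test/nameplate Isc at STC (A)  -- placeholder value
    alpha_isc: Isc temperature coefficient (1/deg C) -- placeholder value
    """
    return g_ref * isc / (isc_ref * (1.0 + alpha_isc * (t_mod - 25.0)))

print(round(effective_irradiance(isc=8.4, t_mod=52.0), 1), "W/m^2")
```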
The focus of this study is on spectral equivalence results for higher-order tensor product finite elements in the H(curl), H(div), and L2 function spaces. For certain choices of the higher-order shape functions, the resulting mass and stiffness matrices are spectrally equivalent to those for an assembly of lowest-order edge-, face- or interior-based elements on the associated Gauss–Lobatto–Legendre (GLL) mesh.
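Here, spectral equivalence is meant in the usual sense: for symmetric positive (semi)definite matrices A and B, there exist mesh-independent constants 0 < c_1 ≤ c_2 such that

```latex
c_1\,\mathbf{x}^{T} B\,\mathbf{x} \;\le\; \mathbf{x}^{T} A\,\mathbf{x} \;\le\; c_2\,\mathbf{x}^{T} B\,\mathbf{x}
\qquad \text{for all } \mathbf{x}.
```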
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support to the system has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLI) (most commonly photovoltaic inverters), it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs) (used for energy storage systems, standalone systems, and as uninterruptible power supplies), these requirements are either not yet documented or require more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desired. With the proper control schemes, a GFMI can help maintain grid stability through fast response compared to rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviations, as well as the GFMI operating as a standalone system and subjected to various changes in loads.
Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and necessary security controls is a powerful tool for ensuring due diligence in evaluation and acquisition of mission critical algorithms. This paper describes the Seascape system and its place in such a process.
Metasurface lenses are fabricated using membrane projection lithography following a CMOS-compatible process flow. The lenses are 10 mm in diameter and employ 3-dimensional unit cells designed to function in the mid-infrared spectral range.