Filamentous fungi can synthesize a variety of nanoparticles (NPs) through a process referred to as mycosynthesis, which requires little energy input, does not require harsh chemicals, occurs at near-neutral pH, and does not produce toxic byproducts. While NP synthesis involves reactions between metal ions and exudates produced by the fungi, the chemical and biochemical parameters underlying this process remain poorly understood. Here, the roles of fungal species and precursor salt in the mycosynthesis of zinc oxide (ZnO) NPs are investigated. These data demonstrate that all five fungal species tested can produce ZnO structures that can be morphologically classified into i) well-defined NPs, ii) coalesced/dissolving NPs, and iii) micron-sized square plates. Further, species-dependent preferences for these morphologies are observed, suggesting differences in the profile or concentration of the biochemical constituents of their individual exudates. These data also demonstrate that mycosynthesis of ZnO NPs is independent of the anion species, with nitrate, sulfate, and chloride showing no effect on NP production. Finally, these results enhance the understanding of the factors controlling the mycosynthesis of ceramic NPs, supporting future studies aimed at controlling the physical and chemical properties of NPs formed through this “green” synthesis method.
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Typically, phosphors are excited by a light source (usually emitting in the UV region), and their temperature-sensitive emission is captured; temperature can then be inferred from shifts in the emission spectra or from the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The present study characterizes the temperature sensitivity of the decay lifetime of the phosphor Gd2O2S:Tb following excitation by a pulsed x-ray source. These results are compared to the decay lifetimes measured when the same phosphor is excited with a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable temperature sensitivity for both excitation sources over the range 21 °C to 140 °C, evaluated in 20 °C increments. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
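The lifetime method referenced above is typically implemented by fitting a mono-exponential decay to the captured emission. The sketch below illustrates this under that assumption; all signal values are synthetic, and the study's actual decay model and fitting windows are not specified here.

```python
# A minimal sketch of lifetime-based phosphor thermometry, assuming a
# mono-exponential decay model; all values below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, tau, offset):
    """Mono-exponential radiative decay: I(t) = i0*exp(-t/tau) + offset."""
    return i0 * np.exp(-t / tau) + offset

# Synthetic decay trace with a hypothetical 550 us lifetime (arbitrary units)
t = np.linspace(0.0, 5e-3, 500)                         # time, s
signal = decay(t, 1.0, 550e-6, 0.01)
signal = signal + np.random.normal(0.0, 0.005, t.size)  # measurement noise

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3, 0.0))
print(f"fitted lifetime: {popt[1] * 1e6:.0f} us")
# Temperature is then read from a calibration of tau versus temperature,
# e.g., measured at known setpoints (21-140 C in 20 C steps in the study).
```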
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
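As background, breakdown in a Townsend experiment is governed by the standard self-sustainment criterion (textbook theory, not a formulation specific to [3], [4]), in which photoemission effectively augments the secondary electron yield:

```latex
\gamma_{\mathrm{eff}}\left(e^{\alpha d} - 1\right) = 1
```

Here \(\alpha\) is the first Townsend ionization coefficient, which depends on the reduced electric field \(E/N\), \(d\) is the gap distance, and \(\gamma_{\mathrm{eff}}\) is the effective secondary-emission coefficient to which laser-induced photoemission contributes.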
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms and allow algorithm improvement to be measured over time. The absence of such tests contributes to the proliferation of fitting methods and inhibits consensus on best practices. The benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
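As an illustration of what a benchmark curve with a known parameter solution might look like, the sketch below generates an IV curve from the single-diode model, a common choice for such benchmarks; the abstract does not name the underlying device model, and all parameter values here are hypothetical.

```python
# A hedged sketch of generating a benchmark IV curve from the single-diode
# model with known parameters; parameter values are illustrative only.
import numpy as np
from scipy.optimize import brentq

IL, I0, n, Rs, Rsh = 6.0, 1e-9, 1.1, 0.2, 300.0  # hypothetical "solution"
Vt = 0.025852 * 72                               # thermal voltage x cells, 25 C

def current(v):
    # Solve the implicit single-diode equation for I at a given voltage:
    # I = IL - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    f = lambda i: IL - I0 * (np.exp((v + i * Rs) / (n * Vt)) - 1.0) \
        - (v + i * Rs) / Rsh - i
    return brentq(f, -2.0 * IL, 2.0 * IL)

voltages = np.linspace(0.0, 48.0, 100)
curve = np.array([current(v) for v in voltages])
noisy = curve + np.random.normal(0.0, 0.002, curve.size)  # simulated error
# A candidate algorithm would be scored on how well it recovers
# (IL, I0, n, Rs, Rsh) from (voltages, noisy).
```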
This work proposes a method of designing adaptive controllers for reliable and stable operation of a Grid-Forming Inverter (GFI) during black start. The characteristic loci method is primarily used to guide the adaptation and tuning of the control parameters, based on a thorough sensitivity analysis of the system over a desired frequency bandwidth. The control hierarchy comprises active-reactive (P-Q) power support, voltage regulation, current control, and frequency recovery over the sequence of events during black-starting. These events comprise energization of transformers and different types of loads, alongside post-fault recovery. The developed method has been tested on a 75 MVA inverter system simulated in PSCAD®. The inverter energizes static and induction motor loads in addition to transformers. The system has also been subjected to a line-to-ground fault to validate the robustness of the proposed adaptive control structure during post-fault recovery.
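A minimal sketch of the characteristic loci computation is shown below: the eigenvalues of the open-loop transfer matrix are traced over the frequency band of interest. The 2x2 plant is purely illustrative; the paper's actual loop transfer functions are not reproduced here.

```python
# A minimal sketch of the characteristic loci method, assuming a generic
# 2x2 open-loop transfer matrix G(s); the plant here is illustrative.
import numpy as np

def G(s):
    # Hypothetical coupled 2x2 MIMO loop (e.g., interacting P-Q channels)
    return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                     [0.1 / (s + 3.0), 1.0 / (s + 0.5)]])

freqs = np.logspace(-2, 3, 400)  # Hz, spanning the desired bandwidth
loci = np.array([np.linalg.eigvals(G(1j * 2 * np.pi * f)) for f in freqs])
# Generalized Nyquist criterion: the closed loop is stable if the loci of
# G(jw) do not encircle the -1 point; the gain/phase margins of each locus
# guide how control parameters are adapted across black-start events.
```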
In recent years, high-altitude infrasound sensing has become more widespread, demonstrating enormous value especially when deployed over regions inaccessible to traditional ground-based sensing. Like ground-based infrasound detectors, airborne sensors take advantage of the fact that impulsive atmospheric events such as explosions generate low-frequency acoustic waves, also known as infrasound. Owing to negligible attenuation, infrasonic waves can travel over long distances and provide important clues about their source. Here, we report infrasound detections of the Apollo detonation carried out on 29 October 2020 as part of the Large Surface Explosion Coupling Experiment in Nevada, USA. Infrasound sensors attached to solar hot air balloons floating in the stratosphere detected the signals generated by the explosion at distances of 170–210 km. Three distinct arrival phases seen in the signals are indicative of multipathing caused by small-scale perturbations in the atmosphere. We also found that the local acoustic environment at these altitudes is more complex than previously thought.
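A first-order way to classify such arrivals is by celerity, the source-to-receiver range divided by total travel time. The numbers below are illustrative, not the reported arrival times.

```python
# Back-of-the-envelope celerity estimate for an infrasound arrival;
# the range and travel time here are hypothetical.
range_km = 190.0        # within the reported 170-210 km band
travel_time_s = 640.0   # hypothetical arrival time after detonation

celerity_km_s = range_km / travel_time_s
print(f"celerity: {celerity_km_s * 1000:.0f} m/s")  # ~300 m/s
# Distinct phases arriving with different celerities are a signature of
# multipathing through small-scale atmospheric structure.
```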
This work developed a transmission line modeling methodology for cable installations to predict the propagation of conducted high-altitude electromagnetic pulses in a substation or generating plant. The methodology was applied to a termination cabinet example modeled with SPICE transmission line elements, informed by electromagnetic field modeling and validated against experimental data. The experimental results showed reasonable agreement with the modeled propagating pulse, and the methodology can be applied to other installation structures in the future.
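For background, the per-unit-length parameters R, L, G, and C that SPICE transmission line elements require (and that electromagnetic field modeling would supply) are defined by the standard telegrapher's equations; this is textbook theory, not a formulation taken from the paper:

```latex
\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t},
\qquad
Z_0 = \sqrt{\frac{R + j\omega L}{G + j\omega C}}
```

Here \(Z_0\) is the characteristic impedance seen by the propagating pulse at angular frequency \(\omega\).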
Visualization of mode shapes is a crucial step in modal analysis. However, the methods used to create the test geometry, which typically require arduous hand measurements and approximations of rotation matrices, are crude. This leads to a lengthy test set-up process and a test geometry with potentially large measurement errors. Test and analysis delays can also occur if the orientation of an accelerometer is documented incorrectly, which happens more often than engineers would like to admit. To mitigate these issues, a methodology has been created to generate the test geometry (coordinates and rotation matrices) from probe data collected with a portable coordinate measurement machine (PCMM). This methodology has led to significant reductions in test geometry measurement time, reductions in test geometry measurement errors, and even reduced test times. Simultaneously, a methodology has been created to use the PCMM to easily identify desired measurement locations, as specified by a model. This paper discusses the general framework of these methods and the realized benefits, using examples from actual tests.
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front-face chamfer angle has the largest influence on stability, with low angles being more stable.
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000 gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint sized to hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities; various resources are used here to explore the potential hazard of a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work, and this work explores some options for assessing that hazard. The various methods for assessing the constrained conventional fires are found to agree to a reasonable degree. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, and they highlight some inadequacies in the existing toolsets for making predictions of this nature. The analysis suggests that the offset distance for the ignition hazard from a fireball will be of the same order as the offset distance for the blast damage, implying that the buy-down of risk gained by considering the fireball is minimal relative to the blast hazards. Assessment tools for fireball predictions are not particularly mature, and ways to improve them for higher-fidelity estimates are noted.
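One common screening approach for a fireball ignition offset distance is a point-source radiation model, sketched below; this is not necessarily the method used in the study, and every value is illustrative.

```python
# A hedged sketch of a point-source fireball flux estimate used to bound
# an ignition offset distance; all values below are hypothetical.
import math

chi_r = 0.3         # radiative fraction of heat release (assumed)
power_W = 5.0e9     # effective fire heat-release rate, W (assumed)
q_limit = 12.5e3    # W/m^2, a commonly cited piloted-ignition threshold

# q(R) = chi_r * power / (4*pi*R^2)  ->  solve for the standoff R
R = math.sqrt(chi_r * power_W / (4.0 * math.pi * q_limit))
print(f"offset distance for ignition hazard: {R:.0f} m")
# This offset would then be compared against blast-damage standoff
# distances, as the abstract does qualitatively.
```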
A high altitude electromagnetic pulse (HEMP) or other similar geomagnetic disturbance (GMD) has the potential to severely impact the operation of large-scale electric power grids. By introducing low-frequency common-mode (CM) currents, these events can impact the performance of key system components such as large power transformers. In this work, a solid-state transformer (SST) that can replace susceptible equipment and improve grid resiliency by safely absorbing these CM insults is described. An overview of the proposed SST power electronics and controls architecture is provided, a system model is developed, and the performance of the SST in response to a simulated CM insult is evaluated. Compared to a conventional magnetic transformer, the SST is found to recover quickly from the insult while maintaining nominal ac input/output behavior.
As the width and depth of quantum circuits implemented by state-of-the-art quantum processors rapidly increase, circuit analysis and assessment via classical simulation are becoming infeasible. It is crucial, therefore, to develop new methods to identify significant error sources in large and complex quantum circuits. In this work, we present a technique that pinpoints the sections of a quantum circuit that affect the circuit output the most and thus helps to identify the most significant sources of error. The technique requires no classical verification of the circuit output and is thus a scalable tool for debugging large quantum programs in the form of circuits. We demonstrate the practicality and efficacy of the proposed technique by applying it to example algorithmic circuits implemented on IBM quantum machines.
We present the single-event upset (SEU) sensitivity and single-event latch-up (SEL) results from proton and heavy-ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
While research into multiple-input/multiple-output (MIMO) random vibration testing techniques, control methods, and test design has been increasing in recent years, research into specifications for these types of tests has not kept pace. This is perhaps due to the very particular requirement placed on most MIMO random vibration control specifications: they must be narrowband, fully populated cross-power spectral density (CPSD) matrices. This requirement constrains the specification derivation process and restricts the application of many of the traditional techniques used to define single-axis random vibration specifications, such as averaging or straight-lining. It also restricts the applicability of MIMO testing by requiring a very specific and rich field test data set to serve as the basis for the MIMO test specification. Here, frequency-warping and channel-averaging techniques are proposed to soften the requirements for MIMO specifications, with the goal of expanding the applicability of MIMO random vibration testing and enabling tests to be run in the absence of the necessary field test data.
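To make the specification requirement concrete, the sketch below assembles a fully populated CPSD matrix from multichannel data using Welch-style estimates; the channel data are synthetic and the processing parameters are illustrative.

```python
# A minimal sketch of assembling a fully populated cross-power spectral
# density (CPSD) matrix from multichannel field data; data are synthetic.
import numpy as np
from scipy.signal import csd

fs = 2048.0
x = np.random.randn(4, 16384)  # 4 hypothetical response channels

f, _ = csd(x[0], x[1], fs=fs, nperseg=1024)  # frequency vector
n_ch, n_f = x.shape[0], f.size
cpsd = np.empty((n_ch, n_ch, n_f), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, cpsd[i, j] = csd(x[i], x[j], fs=fs, nperseg=1024)
# Diagonal terms are the auto-spectra; off-diagonal terms carry the phase
# and coherence information a MIMO specification must preserve, which is
# why straight-lining each term independently can break matrix realizability.
```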
Molten Salt Reactor (MSR) systems can be divided into two basic categories: liquid-fueled MSRs, in which the fuel is dissolved in the salt, and solid-fueled systems such as the Fluoride-salt-cooled High-temperature Reactor (FHR). The molten salt provides an impediment to fission product release because actinides and many fission products are soluble in the salt. Nonetheless, under accident conditions, some radionuclides may escape the salt by vaporization and aerosol formation, which may lead to release into the environment. We present recent enhancements to MELCOR that represent the transport of radionuclides in the salt and releases from the salt. Some soluble but volatile radionuclides may vaporize and subsequently condense to aerosol, while insoluble fission products can deposit on structures. Thermochimica, an open-source Gibbs Energy Minimization (GEM) code, has been integrated into MELCOR. With an appropriate thermochemical database, Thermochimica provides the solubility and vapor pressure of species as functions of temperature, pressure, and composition, which are needed to characterize the vaporization rate and the state of the salt with fission products. Because thermochemical databases for molten salt systems are still under active development, thermodynamic data for fission product solubility and vapor pressure may instead be user specified, enabling preliminary assessments of fission product transport in molten salt systems. In this paper, we discuss modeling of soluble and insoluble fission product releases in an MSR with Thermochimica incorporated into MELCOR. Separate-effects experiments performed as part of the Molten Salt Reactor Experiment, in which radioactive aerosol was released, are discussed as they inform determination of the source term.
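As an illustration of user-specified thermodynamic data, the sketch below evaluates a vapor-pressure correlation of the Antoine form, one common parameterization; the coefficients are placeholders, and this is not MELCOR's actual input format, which the abstract does not describe.

```python
# A hedged sketch of a user-specified vapor-pressure correlation of the
# Antoine form; coefficients are placeholders, not evaluated salt data.
import math

def vapor_pressure_pa(T_K, A=10.0, B=9000.0, C=0.0):
    """Antoine-form correlation: log10(p) = A - B / (C + T)."""
    return 10.0 ** (A - B / (C + T_K))

# The vaporization driving force for a volatile fission product scales
# with its vapor pressure at the local salt temperature.
p = vapor_pressure_pa(900.0 + 273.15)
print(f"vapor pressure at 900 C: {p:.3g} Pa")
```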
Puerto Rico faced a double strike from hurricanes Irma and Maria in 2017. The resulting damage required a comprehensive rebuild of the electric infrastructure, and there are plans and pilot projects to rebuild with microgrids to increase resilience. This paper provides a techno-economic analysis technique, and a case study of a potential future community in Puerto Rico, that combines probabilistic microgrid design analysis with tiered circuits in building energy modeling. Tiered circuits in buildings allow electric load reduction via remote disconnection of non-critical circuits during an emergency. When coupled to a microgrid, tiered circuitry can reduce the chance that a microgrid's storage and generation resources are depleted. The analysis technique is applied to show 1) approximate cost savings due to a tiered circuit structure and 2) approximate cost savings gained by simultaneously considering resilience and sustainability constraints in the microgrid optimization. The analysis technique uses a resistive-capacitive thermal model with load profiles for four tiers (tiers 1-3 and non-critical loads). Three analyses were conducted using 1) the open-source software Tiered Energy in Buildings and 2) the Microgrid Design Toolkit. For a fossil-fuel-based microgrid, cost savings of 30% on total microgrid costs of 1.18 million USD were calculated, where the non-tiered case keeps all loads 99.9% available and the tiered case keeps tier 1 at 99.9%, tier 2 at 95%, and tier 3 at 80% availability, with no requirement on non-critical loads. The same comparison for a sustainable microgrid showed 8% cost savings on a 5.10 million USD microgrid due to tiered circuits. The results also showed 6-7% cost savings when the analysis technique optimizes sustainability and resilience simultaneously, in comparison to performing microgrid resilience analysis and renewables net-present-value analysis independently. Though highly specific to our case study, similar assessments using this analysis technique can elucidate the value of tiered circuits, and of simultaneously considering sustainability and resilience, in other locations.
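A minimal sketch of a single-node resistive-capacitive thermal model of the type referenced above follows; the lumped parameters are illustrative, not those of the case study.

```python
# A minimal sketch of a single-node resistive-capacitive (RC) building
# thermal model with lumped, hypothetical parameters.
R = 0.005   # K/W, envelope thermal resistance (assumed)
C = 2.0e7   # J/K, lumped thermal capacitance (assumed)
dt = 60.0   # s, time step

def step(T_in, T_out, q_hvac):
    """Explicit Euler update of C*dT/dt = (T_out - T_in)/R + q_hvac."""
    return T_in + dt * ((T_out - T_in) / R + q_hvac) / C

T = 24.0
for _ in range(60):            # one hour with HVAC shed (non-critical tier off)
    T = step(T, 35.0, 0.0)
print(f"indoor temperature after 1 h of load shed: {T:.1f} C")
# Coupled to tiered load profiles, such a model lets the optimization trade
# occupant comfort against storage/generation depletion during an outage.
```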
Polymers are widely used as damping materials in vibration and impact applications. Liquid crystal elastomers (LCEs) are a unique class of polymers that may offer enhanced energy absorption capacity under impact relative to conventional polymers, owing to their ability to align the nematic phase during loading. Because LCEs are relatively new materials, their high-rate compressive properties have been minimally studied. Here, we investigated the high-strain-rate compression behavior of different solid LCEs, including cast polydomain samples and 3D-printed, preferentially oriented monodomain samples. Direct ink write (DIW) 3D printing allows unique sample designs, namely a specific orientation of mesogens with respect to the loading direction. Loading the sample in different orientations can induce mesogen rotation during mechanical loading and, consequently, different stress-strain responses under impact. We also used a reference polymer, a bisphenol-A (BPA) cross-linked resin, to contrast LCE behavior with conventional elastomer behavior.
Due to their increased levels of reliability, meshed low-voltage (LV) grid and spot networks are common topologies for supplying power to dense urban areas and critical customers. Protection schemes for LV networks often use highly sensitive reverse current trip settings to detect faults in the medium-voltage system. As a result, interconnecting even low levels of distributed energy resources (DERs) can impact the reliability of the protection system and cause nuisance tripping. This work analyzes the possibility of modifying the reverse current relay trip settings to increase the DER hosting capacity of LV networks without impacting fault detection performance. The results suggest that adjusting relay settings can significantly increase DER hosting capacity on LV networks without adverse effects, and that existing guidance on connecting DERs to secondary networks, such as that contained in IEEE Std 1547-2018, could potentially be modified to allow higher DER deployment levels.
This research investigates novel techniques to enhance supply chain security by adding configuration management controls that protect the Instrumentation and Control (I&C) systems of a Nuclear Power Plant (NPP). A secure element (SE) is integrated into a proof-of-concept testbed by means of a commercially available smart card, which provides tamper-resistant key storage and a cryptographic coprocessor. The secure element simplifies the setup and establishment of a secure communications channel between the configuration management and verification system and the I&C system (running OpenPLC). This secure channel can be used to provide copies of commands and configuration changes of the I&C system for analysis.
The widespread adoption of residential solar PV requires distribution system studies to ensure that the addition of solar PV at a customer location does not violate system constraints; the maximum such addition can be referred to as the locational hosting capacity (HC). These model-based analyses are prone to error because they depend on the accuracy of the system information. Model-free approaches to estimating a customer's solar PV hosting capacity are a good alternative, as their accuracy does not depend on detailed system information. In this paper, an Adaptive Boosting (AdaBoost) algorithm is deployed that uses statistical properties (mean, minimum, maximum, and standard deviation) of the customer's historical data (real power, reactive power, voltage) as inputs to estimate the voltage-constrained PV HC for the customer. A baseline comparison approach is also built that uses just the customer's maximum voltage to predict PV HC. The results show that the ensemble-based AdaBoost algorithm outperforms the baseline approach. The developed methods are also compared against, and validated with, existing state-of-the-art model-free PV HC estimation methods.
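A hedged sketch of this model-free setup follows: an AdaBoost regressor maps the twelve statistical features (four statistics x three measured quantities) to a hosting-capacity estimate. The data are synthetic placeholders; the paper's training data and hyperparameters are not reproduced here.

```python
# A hedged sketch of model-free HC estimation with AdaBoost; the feature
# and target arrays are synthetic placeholders, not real customer data.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(0)
# Features: mean/min/max/std of real power, reactive power, and voltage
X = rng.normal(size=(500, 12))
y = rng.uniform(2.0, 15.0, size=500)  # kW, placeholder HC labels

model = AdaBoostRegressor(n_estimators=200, random_state=0).fit(X, y)
hc_estimate = model.predict(X[:1])
# The paper's baseline uses only the customer's maximum voltage; the
# richer feature set above is what lets the ensemble outperform it.
```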
The structure-property linkage is one of the two most important relationships in materials science, alongside the process-structure linkage, especially for metals and polycrystalline alloys. The stochastic nature of microstructures calls for a robust approach to reliably address this linkage, so uncertainty quantification (UQ) plays an important role and cannot be ignored. To probe the structure-property linkage, many multi-scale integrated computational materials engineering (ICME) tools have been proposed and developed over the last decade to accelerate the materials design process in the spirit of the Materials Genome Initiative (MGI), notably crystal plasticity finite element models (CPFEM) and phase-field simulations. Machine learning (ML) methods, including deep learning and physics-informed/-constrained approaches, can also be applied to approximate the computationally expensive ICME models, allowing one to navigate both structure and property spaces efficiently. Since UQ also plays a crucial role in verification and validation for both ICME and ML models, it is important to include it in the picture. In this paper, we summarize several of our recent research efforts addressing UQ aspects of homogenized properties using CPFEM in a big-picture context.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published that same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely the adoption of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open-source and community-standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred that has been barely visible in neuroscientific circles beyond the community of simulator developers: supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10¹³ floating-point operations per second) in the early 2000s to above 1 ExaFLOPS (10¹⁸ floating-point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18-24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is actually quite remarkable that, apart from the change in semantics for the parallelization, this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
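For completeness, the doubling count quoted above follows directly from the stated performance ratio:

```latex
\frac{10^{18}\,\mathrm{FLOPS}}{10^{13}\,\mathrm{FLOPS}} = 10^{5},
\qquad
\log_2\!\left(10^{5}\right) \approx 16.6 \approx 17 \text{ doublings in 22 years}
```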
Mann, James B.; Mohanty, Debapriya P.; Kustas, Andrew B.; Stiven Puentes Rodriguez, B.; Issahaq, Mohammed N.; Udupa, Anirudh; Sugihara, Tatsuya; Trumble, Kevin P.; M'Saoubi, Rachid; Chandrasekar, Srinivasan
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with properties and quality suitable for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented from metal systems of varied workability, and at varied strip product scales in terms of size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for production of commercial strip for electric motor applications and battery electrodes are discussed.
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electrical signals. It is critical to understand the root causes of these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology, so a wide range of analyses is required to fully explore the physical phenomenon. This paper aims to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated structure was studied using high-fidelity simulations. Structural dynamic simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics, and subsequent multi-physics simulations are discussed that relate the contact mechanics of the dynamic pin-receptacle interactions to the chatter. Each simulation method was parametrized with data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in the simulation and experimental approaches, so that the relationship between the two could be established.