Granular metals (GMs), consisting of metal nanoparticles separated by an insulating matrix, frequently serve as a platform for fundamental electron transport studies. However, few technologically mature devices incorporating GMs have been realized, in large part because intrinsic defects (e.g., electron trapping sites and metal/insulator interfacial defects) frequently impede electron transport, particularly in GMs that do not contain noble metals. Here, we demonstrate that such defects can be minimized in molybdenum-silicon nitride (Mo-SiNx) GMs via optimization of the sputter deposition atmosphere. For Mo-SiNx GMs deposited in a mixed Ar/N2 environment, x-ray photoemission spectroscopy shows a 40%-60% reduction of interfacial Mo-silicide defects compared to Mo-SiNx GMs sputtered in a pure Ar environment. Electron transport measurements confirm the reduced defect density; the dc conductivity improved (decreased) by a factor of 10^4-10^5, and the activation energy for variable-range hopping increased 10×. Since GMs are disordered materials, the GM nanostructure should, theoretically, support a universal power law (UPL) response; in practice, that response is generally overwhelmed by resistive (defective) transport. Here, the defect-minimized Mo-SiNx GMs display a superlinear UPL response, which we quantify as the ratio of the conductivity at 1 MHz to that at dc, Δσω. Remarkably, these GMs display a Δσω of up to 10^7, a three-orders-of-magnitude improvement over responses previously reported for GMs. By enabling high-performance electron transport with a non-noble-metal GM, this work represents an important step toward both new fundamental UPL research and scalable, mature GM device applications.
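For context, the figure of merit described above can be stated compactly. The expressions below are the standard form of the universal power law (Jonscher) ac conductivity and the 1 MHz-to-dc conductivity ratio as defined in the abstract; the symbols A and s are generic UPL parameters, not values taken from the article:

```latex
% Universal power law (Jonscher) form of the ac conductivity:
\sigma(\omega) = \sigma_{\mathrm{dc}} + A\,\omega^{s}
% Figure of merit: ratio of the 1 MHz conductivity to the dc conductivity
\Delta\sigma_{\omega} \equiv \frac{\sigma(\omega = 2\pi \times 1\,\mathrm{MHz})}{\sigma_{\mathrm{dc}}}
```

A "superlinear" UPL response corresponds to an exponent s greater than unity, which makes the high-frequency term dominate and drives Δσω to the large values reported here.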
This report summarizes the work and accomplishments of DOE SETO-funded project 36533, “Adaptive Protection and Control for High Penetration PV and Grid Resilience”. To increase the amount of distributed solar power that can be integrated into the distribution system, new methods for optimal adaptive protection, artificial intelligence/machine learning-based protection, and time-domain traveling-wave protection were developed and demonstrated in hardware-in-the-loop testing and a field demonstration.
This article describes an optimal control algorithm for the operation and study of an electric microgrid designed to power a lunar habitat. A photovoltaic (PV) generator powers the habitat, and the presence of predictable lunar eclipses necessitates a system to prioritize and control loads within the microgrid. The algorithm consists of a reduced-order model (ROM) that describes the microgrid, a discretization of the equations that result from the ROM, and an optimization formulation that controls the microgrid's behavior. To validate this approach, the paper presents simulation results based on lunar eclipse information and a schedule of intended loads.
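The kind of discretized optimization the abstract describes can be sketched as a small linear program: serve prioritized loads from PV and a battery across an eclipse. Everything below (time horizon, PV profile, load demands, priority weights, battery parameters) is an invented illustration, not the paper's formulation:

```python
# Hypothetical sketch of priority-weighted load dispatch over a lunar eclipse.
# All profiles and parameters are invented placeholders.
import numpy as np
from scipy.optimize import linprog

T, dt = 6, 1.0                       # six one-hour steps
pv = np.array([5, 5, 0, 0, 0, 5.0])  # kW; steps 2-4 emulate an eclipse
demand = np.array([[2.0] * T,        # load 0: life support (critical)
                   [3.0] * T])       # load 1: science (sheddable)
w = np.array([10.0, 1.0])            # priority weights (critical >> sheddable)
e0, Emax, Pdis, Pch = 8.0, 10.0, 4.0, 4.0  # battery: kWh stored/capacity, kW limits

n = demand.shape[0]
nv = n * T + T                       # variables: served powers s[i,t], then battery b[t]
c = np.zeros(nv)
for i in range(n):                   # maximize weighted served load
    c[i * T:(i + 1) * T] = -w[i]

A_ub, b_ub = [], []
for t in range(T):                   # power balance: sum_i s[i,t] - b[t] <= pv[t]
    row = np.zeros(nv)
    for i in range(n):
        row[i * T + t] = 1.0
    row[n * T + t] = -1.0            # b[t] > 0 means battery discharge
    A_ub.append(row); b_ub.append(pv[t])
for t in range(T):                   # state-of-charge limits via cumulative energy
    row = np.zeros(nv)
    row[n * T:n * T + t + 1] = dt
    A_ub.append(row); b_ub.append(e0)            # cannot over-discharge
    A_ub.append(-row); b_ub.append(Emax - e0)    # cannot over-charge

bounds = ([(0, demand[i, t]) for i in range(n) for t in range(T)]
          + [(-Pch, Pdis)] * T)
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
s = res.x[:n * T].reshape(n, T)
print("critical load served (kW):", np.round(s[0], 2))
```

With these numbers the battery holds enough energy to carry the critical load through the eclipse, so the solver serves it fully and sheds science load as needed; the actual paper's ROM-based formulation would replace this toy energy balance with discretized microgrid dynamics.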
The National Aeronautics and Space Administration's (NASA) Artemis program seeks to establish the first long-term presence on the Moon as part of a larger goal of sending the first astronauts to Mars. To accomplish this, the Artemis program is designed to develop, test, and demonstrate many technologies needed for deep space exploration and supporting life on another planet. Long-term operations on the lunar base include habitation, science, logistics, and in-situ resource utilization (ISRU). In this paper, a lunar DC microgrid (LDCMG) structure is proposed as the backbone of the energy distribution, storage, and utilization infrastructure. The LDCMG power distribution network and energy storage system (ESS) design are analyzed using Hamiltonian surface shaping and power flow control (HSSPFC). The ISRU system will include a networked three-microgrid system, with a photovoltaic (PV) array (generation) on one sub-microgrid and water extraction (loads) on the other two microgrids. A reduced-order model (ROM) of the system will be used to create a closed-form analytical model, with ideal ESS devices placed alongside each state of the ROM. The ideal ESS devices determine the response needed to conform to a specific operating scenario and system specifications.
This report summarizes a 3-year LDRD project that developed novel methods to detect faults in the electric power grid dramatically faster than today's protection systems. Accurately detecting and quickly removing electrical faults is imperative for power system resilience and national security, to minimize impacts to defense critical infrastructure. The new protection schemes will improve grid stability during disturbances and allow additional integration of renewable energy technologies with low inertia and low fault currents. Signal-based fast-tripping schemes were developed that use the physics of the grid and do not rely on communication, reducing cyber risk while safely removing faults.
Structural modularity is critical to solid-state transformer (SST) and solid-state power substation (SSPS) concepts, but operational aspects related to this modularity are not yet fully understood. Previous studies and demonstrations of modular power conversion systems assume identical module compositions, but dependence on module uniformity undercuts the value of the modular framework. In this project, a hierarchical control approach was developed for modular SSTs which achieves system-level objectives while ensuring equitable power sharing between nonuniform building block modules. This enables module replacements and upgrades which leverage circuit and device technology advancements to improve system-level performance. The functionality of the control approach is demonstrated in detailed time-domain simulations. Results of this project provide context and strategic direction for future LDRD projects focusing on technologies supporting the SST crosscut outcome of the resilient energy systems mission campaign.
In the near future, grid operators are expected to regularly use advanced distributed energy resource (DER) functions, defined in IEEE 1547-2018, to perform a range of grid-support operations. Many of these functions adjust the active and reactive power of the device through commanded or autonomous operating modes, which induce new stresses on the power electronics components. In this work, an experimental and theoretical framework is introduced which couples laboratory-measured component stress with advanced inverter functionality and derives a reduction in useful lifetime based on an applicable reliability model. Multiple DER devices were instrumented to calculate the additional component stress under multiple reactive power setpoints and estimate the associated DER lifetime reductions. A clear increase in switch loss was demonstrated as a function of irradiance level and power factor. This trend is replicated in the system-level efficiency measurements, although the magnitudes differ, suggesting that other loss mechanisms exist. Using an approximate Arrhenius thermal model for the switches, the experimental data indicate a lifetime reduction of 1.5% when operating the inverter at 0.85 power factor (PF) compared to unity PF, assuming the DER failure mechanism is thermally driven within the H-bridge. If other failure mechanisms are identified for a set of power electronics devices, this testing and calculation framework can easily be tailored to them.
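The Arrhenius-style lifetime comparison the abstract alludes to can be illustrated in a few lines. The activation energy and junction temperatures below are invented placeholders chosen only to show the mechanics of the calculation; they are not the study's measured values:

```python
# Illustrative Arrhenius acceleration-factor calculation; Ea and the two
# junction temperatures are hypothetical, not taken from the study.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_ref_c, t_stress_c, ea_ev=0.7):
    """Failure-rate acceleration factor at t_stress_c relative to t_ref_c (deg C)."""
    t_ref, t_str = t_ref_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_ref - 1.0 / t_str))

# Hypothetical switch junction temperatures at unity PF and at 0.85 PF:
# reactive power adds switch loss and hence a small temperature rise.
t_unity, t_q = 85.0, 85.3
af = arrhenius_af(t_unity, t_q)
lifetime_reduction = 1.0 - 1.0 / af  # lifetime scales inversely with failure rate
print(f"lifetime reduction: {lifetime_reduction:.1%}")
```

Even a sub-degree rise in junction temperature produces a lifetime reduction on the order of a few percent under this model, which is consistent in spirit with the percent-level reduction the study reports for 0.85 PF operation.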
This quick note outlines what we found after our conversation with you and your team. As suggested, we loaded the 1547-2003 source requirements document (SRD) and then went back and loaded the 1547-2018 SRD; this did result in the new 1547-2018 settings being implemented. This short report focuses on the frequency-watt (FW) function and shows a couple of screenshots of the parameter settings in the Mojave HMI interface, along with plots of the inverter's response to frequency events with the FW function enabled at both the default and the most aggressive settings. The first screenshot shows 1547-2018 selected after 1547-2003 had been selected.