The peridynamic theory of solid mechanics is applied to the continuum modeling of the impact of small, high-velocity silica spheres on multilayer graphene targets. The model treats the laminate as a brittle elastic membrane. The material model includes separate failure criteria for the initial rupture of the membrane and for propagating cracks. Material variability is incorporated by assigning random variations in elastic properties within Voronoi cells. The computational model is shown to reproduce the primary aspects of the response observed in experiments, including the growth of a family of radial cracks from the point of impact.
Our goal is compression of massive-scale grid-structured data, such as the multi-terabyte output of a high-fidelity computational simulation. For such data sets, we have developed a new software package called TuckerMPI, a parallel C++/MPI software package for compressing distributed data. The approach is based on treating the data as a tensor, i.e., a multidimensional array, and computing its truncated Tucker decomposition, a higher-order analogue to the truncated singular value decomposition of a matrix. The result is a low-rank approximation of the original tensor-structured data. Compression efficiency is achieved by detecting latent global structure within the data, in contrast to most compression methods, which focus on local structure. In this work, we describe TuckerMPI, our implementation of the truncated Tucker decomposition, including details of the data distribution and in-memory layouts, the parallel and serial implementations of the key kernels, and analysis of the storage, communication, and computational costs. We test the software on 4.5 and 6.7 terabyte data sets distributed across 100s of nodes (1000s of MPI processes), achieving compression ratios between 100 and 200,000×, which equates to 99-99.999% compression (depending on the desired accuracy), in substantially less time than it would take to even read the same dataset from a parallel file system. Moreover, we show that our method also allows for reconstruction of partial or down-sampled data on a single node, without a parallel computer, so long as the reconstructed portion is small enough to fit on a single machine, e.g., in the instance of reconstructing/visualizing a single down-sampled time step or computing summary statistics. The code is available at https://gitlab.com/tensors/TuckerMPI.
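To make the underlying computation concrete, here is a minimal serial sketch of a sequentially truncated HOSVD, the kind of truncated Tucker algorithm that TuckerMPI distributes across MPI processes; the per-mode error-budget split below is a common heuristic, not a description of TuckerMPI's exact internals:

```python
import numpy as np

def st_hosvd(X, tol=1e-4):
    """Sequentially truncated HOSVD: a serial sketch of the truncated
    Tucker decomposition (TuckerMPI distributes and parallelizes this)."""
    G = X.copy()
    factors = []
    # Split the squared error budget evenly across modes (common heuristic).
    budget = (tol * np.linalg.norm(X)) ** 2 / X.ndim
    for n in range(X.ndim):
        # Unfold the current core along mode n into a matrix.
        Gn = np.moveaxis(G, n, 0).reshape(G.shape[n], -1)
        U, s, _ = np.linalg.svd(Gn, full_matrices=False)
        # Smallest rank whose discarded tail energy fits within the budget.
        tail = np.cumsum((s ** 2)[::-1])[::-1]
        rank = int(np.searchsorted(-tail, -budget))
        rank = max(1, min(rank, len(s)))
        factors.append(U[:, :rank])
        # Project the core onto the retained left singular vectors.
        rest = G.shape[:n] + G.shape[n + 1:]
        G = np.moveaxis((U[:, :rank].T @ Gn).reshape((rank,) + rest), 0, n)
    return G, factors  # X ~= G x_1 U1 x_2 U2 ... (mode products)

# Example: a rank-3 synthetic tensor compresses far below its dense size.
rng = np.random.default_rng(0)
X = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((40, 3)) for _ in range(3)))
G, Us = st_hosvd(X, tol=1e-8)
print(G.shape, [U.shape for U in Us])  # core ~ (3, 3, 3)
```

Storage drops from the full tensor to the small core plus one tall factor matrix per mode, which is the source of the large compression ratios reported above.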
Arc flash hazard prediction methods have become more sophisticated because knowledge of the arc flash phenomenon has advanced since the publication of IEEE Std. 1584-2002 [17]. IEEE Std. 1584-2018 [13] has added parameters for more accurate estimation of arc flash incident energy, arcing current, and protection boundary. The parameters in the updated estimation models include electrode configuration, open circuit voltage, bolted fault current, arc duration, gap width, working distance, and enclosure dimension. The sensitivity and effects of changes in other parameters have been discussed in the previous literature [8] [9] [11] [2] [12] [15]; this paper explains the fundamental theory behind the selection of electrode configurations and performs a sensitivity analysis of the enclosure dimension, both of which were introduced in IEEE Std. 1584-2018. According to the newly published model for incident energy (IE) estimation, the IE between VCB (Vertical Electrodes inside a metal Box) and HCB (Horizontal Electrodes inside a metal Box) configurations can differ by a factor of two with other parameters held constant. Using HCB as the worst-case scenario to determine personal protection requirements [7] [10] may not be the best practice in all circumstances. This paper provides guidance for electrode configuration selection and a sensitivity analysis for determining a reasonable engineering margin when the actual enclosure dimension is not available.
We illustrate a theoretical side-channel analysis of the intermediate rounds of AES, using only the Hamming weights of the bytes registered after the S-box operation. Input and output state values are unknown. Simulations and a blind test were used to show the feasibility of the analysis under ideal conditions. The general applicability of the idea and possible extensions are discussed, as well as its limiting assumptions. Appendix A describes some implementation approaches for the case of constrained computing capabilities (a desktop or laptop).
Ali, Amir; Kim, Hyun G.; Hattar, Khalid M.; Briggs, Samuel; Park, Dong Jun; Park, Jung Hwan; Lee, Youho
The concept of coating the currently used nuclear fuel cladding (zirconium-based alloy, typically Zircaloy-4, or Zirc-4) with an oxidation-preventive layer is a progressing accident tolerant fuel (ATF) candidate. The coated Zirc-4-based alloys could be a solution to suppress the undesirably fast reaction kinetics with high-temperature steam. Zirc-4 has been the most preferred cladding material in pressurized water reactors (PWRs). Chromium (Cr)-based alloys as coating materials provide excellent corrosion protection, good strength, and wear resistance. This paper presents surface wettability measurements and pool boiling critical heat flux (CHF) for Cr-coated Zirc-4 claddings pre- and post-exposure to an ion irradiation environment. The wettability measurements, including static contact angle (θ) and average surface roughness (Ra), are presented for samples of different coating thicknesses (5–30 μm). The coatings were fabricated by cold spray of Cr-Al particles onto 10 mm × 10 mm × 1.95 mm Zirc-4 substrates. Post-fabrication, a pilgering (cold rolling) process was applied to finalize the coating thickness, resulting in a significant reduction in the surface roughness of the initially rough as-fabricated surfaces. The process produced three distinct samples: 5-μm unpolished (as machined), and 5-μm and 30-μm polished (cold rolled). Measurements are presented for these three surfaces and for bare Zirc-4 as a baseline. The contact angle analyses were used in theoretical models from the literature to predict pool boiling CHF. Pool boiling experiments were conducted to measure CHF values and compare them to the predicted values. Scanning electron microscope (SEM) imaging and energy dispersive X-ray spectroscopy (EDS) analysis were performed to characterize the surfaces for better understanding and interpretation of the results. The SEM images showed localized surface damage due to ion irradiation, but no recognizable change in the measured surface roughness. The contact angles of irradiated Cr-coated surfaces are consistently higher (by ~10°) than those of pre-irradiated surfaces. Decreasing the Cr-coating layer thickness resulted in lower contact angles both pre- and post-ion irradiation. The pool boiling CHF predicted using the Kandlikar model agrees with the experimentally measured CHF values within ±12% for all samples.
Subscale wind turbines can be installed in the field for the development of wind technologies, and their blade aerodynamics can be designed in a way similar to that of a full-scale wind turbine. However, it is not clear whether the wake of a subscale turbine, which is located closer to the ground and faces different incoming turbulence, is also similar to that of a full-scale wind turbine. In this work we investigate the wakes of a full-scale wind turbine of rotor diameter 80 m and a subscale wind turbine of rotor diameter 27 m using large-eddy simulation, with the turbine blades and nacelle modeled using actuator surface models. The blade aerodynamics of the two turbines are the same, and in the simulations the two turbines face the same turbulent boundary-layer inflows. The computed results show differences between the two turbines in both velocity deficits and turbine-added turbulence kinetic energy. These differences are further analyzed by examining the mean kinetic energy equation.
This paper presents a techno-economic analysis of behind-the-meter (BTM) solar photovoltaic (PV) and battery energy storage systems (BESS) applied to an electric vehicle (EV) fast-charging station. The goal is to estimate the maximum return on investment (ROI) that can be obtained for optimal BESS and PV sizes and their operation. Fast charging is a technology that will speed up mass adoption of EVs, which currently require several hours to reach full recharge on level 1 or 2 chargers. Fast chargers demand tens to hundreds of kilowatts from the distribution grid, potentially leading to system congestion and overload. The problem is formulated as a linear program that determines the PV size, the power and energy ratings of the BESS, and the charging and discharging schedule of the storage system to maximize ROI under the operational constraints of the BESS and PV. The revenue consists of cost savings on demand and time-of-use charges, with a penalty for BESS degradation. We have considered the Los Angeles Department of Water and Power tariff A-2 and fast-charger data derived from the EV Project. The results show that a 46.5 kW/28.3 kWh BESS can achieve an ROI of about $22.4k over 10 years for a small 4-port fast-charging station.
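For orientation, the following is a hedged single-day sketch of such a sizing-plus-scheduling program in cvxpy; every number (load shape, tariff, efficiency, cost figures) is an illustrative placeholder rather than the LADWP A-2 tariff or EV Project data, and ROI maximization is recast here as cost minimization:

```python
import cvxpy as cp
import numpy as np

# One representative day; all numbers are illustrative placeholders.
T = 24
load = np.full(T, 50.0); load[17:21] = 150.0                  # charger demand, kW
pv_shape = np.clip(np.sin(np.linspace(0, np.pi, T)), 0, None)  # PV output per kW
tou = np.where((np.arange(T) >= 12) & (np.arange(T) < 20), 0.25, 0.10)  # $/kWh
demand_rate = 15.0                                             # $/kW peak demand
pv_cost, p_cost, e_cost = 1500.0, 300.0, 200.0                 # $/kW, $/kW, $/kWh

pv_kw = cp.Variable(nonneg=True)        # PV size
p_rat = cp.Variable(nonneg=True)        # BESS power rating, kW
e_rat = cp.Variable(nonneg=True)        # BESS energy rating, kWh
ch = cp.Variable(T, nonneg=True)        # BESS charging schedule, kW
dis = cp.Variable(T, nonneg=True)       # BESS discharging schedule, kW
soc = cp.Variable(T + 1, nonneg=True)   # state of charge, kWh

grid = load - pv_kw * pv_shape - dis + ch   # net import from the grid
cons = [soc[1:] == soc[:-1] + 0.95 * ch - dis / 0.95,  # 95% one-way efficiency
        soc[0] == soc[T], soc <= e_rat,                # cyclic day, capacity
        ch <= p_rat, dis <= p_rat, grid >= 0]          # ratings, no export
bill = tou @ grid + demand_rate * cp.max(grid)         # energy + demand charges
capex = pv_cost * pv_kw + p_cost * p_rat + e_cost * e_rat
prob = cp.Problem(cp.Minimize(365 * 10 * bill + capex), cons)  # 10-yr horizon
prob.solve()
print(round(pv_kw.value, 1), round(p_rat.value, 1), round(e_rat.value, 1))
```

The demand charge enters through the peak-import term, which is what the BESS schedule shaves; a BESS degradation penalty would add a cost proportional to the throughput variables ch and dis.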
The PSL has reviewed the documentation and data provided by NNSS–Livermore Operations with respect to this proficiency test. The proficiency test was performed to assess NNSS–Livermore Operations' ability to perform scattering parameter calibrations. The level of documentation was satisfactory. On May 19, 2020, NNSS–Livermore Operations reported the data for the proficiency test conducted on the attenuator. NNSS–Livermore Operations performed this proficiency test using an Anritsu vector network analyzer, an electronic calibration module, and a verification kit. The PSL used a Keysight vector network analyzer and a mechanical calibration kit. The PSL results included in this proficiency test report were taken on June 23, 2020.
In this work we provide direct evidence of shock-induced melting and the associated kinetics in a porous solid (aluminum powder) using time-resolved x-ray diffraction. Unambiguous evidence of melting in 50% porous aluminum (Al) powder samples, shocked to peak pressures between ∼13 and 19 GPa, was provided by the broadening of the Debye-Scherrer ring corresponding to the (111) peak. The shocked Al powder did not melt completely in any of our experiments within the measurement durations. Incomplete (partial) melting of the powder, even after several hundred nanoseconds of shock loading, provides insights into thermal transport within Al powder particles under high-pressure dynamic loading. Such insights are quite valuable for developing well-constrained melting models and thermodynamic equations of state for porous Al and other porous solids relevant to planetary and materials science.
Lim, Jinhyuk; Kim, Minseob; Duwal, Sakun D.; Ohishi, Yasuo; Hrubiak, Rostislav; Tse, John S.; Yoo, Choong-Shik
We have studied the compression behavior of H2-He mixtures in comparison with pure H2 and He using powder synchrotron x-ray diffraction and present the pressure-volume (PV) compression data of H2-He mixtures to 160 GPa. The results indicate that both H2 and He in H2-He mixtures remain hcp to the maximum pressure studied, yet develop a substantial level of lattice distortion in the (100) plane, most pronounced in He-rich solids and below 66 GPa. The measured PV data also indicate softening of the He (or H2)-rich lattice upon increasing the guest H2 (or He) concentration. We suggest that the observed softening and lattice distortion are due to substitutional incorporation of H2 (guest) molecules into the basal plane of the hcp-He (host) lattice and thereby reflect the miscibility between H2 and He in H2-He mixtures. Interestingly, solid He exhibits a lesser degree of preferred orientation in H2-He mixtures than in pure He, likely because the presence of solid H2 disturbs the crystalline ordering of He-rich solids. Finally, the present PV compression data of H2-rich and He-rich solids to 160 GPa deviate from those of pure H2 and pure He above ~70 and ~45 GPa, respectively, providing new constraints for the development of equations of state for H2-He mixtures for planetary models.
The Port of Alaska in Anchorage enables the economic vitality of the Municipality of Anchorage and the State of Alaska. It also provides significant support to defense activities across Alaska, especially to Joint Base Elmendorf-Richardson (JBER), which is immediately adjacent to the Port. For this reason, stakeholders are interested in the resilience of the Port's operations. This report documents a preliminary feasibility analysis for developing an energy system that increases electric supply resilience for the Port and for a specific location inside JBER. The project concept emerged from prior work led by the Municipality of Anchorage and consultation with Port stakeholders. The project consists of a microgrid with PV, storage, and diesel generation, capable of supplying electricity to loads at the Port and at a specific JBER location during utility outages, while also delivering economic value during blue-sky conditions. The study aims to estimate the size, configuration, and concept of operations based on existing infrastructure and limited demand data. It also explores potential project benefits and challenges. The report's goal is to inform further stakeholder consultation and next steps.
Richardson, Christopher J.K.; Lordi, Vincenzo; Misra, Shashank M.; Shabani, Javad
Quantum computing, sensing, and communications are emerging technologies that may circumvent known limitations of their existing traditional counterparts. While the promises of these technologies are currently narrow in scope, it is possible that they will broadly impact our lives by revolutionizing the capabilities of data centers and medical diagnostics, for example. At the heart of these technologies is the use of a quantum object to contain information, called a quantum bit or qubit. Current realizations of qubits exist in a broad variety of material systems, including individual spins in semiconductors or insulators, superconducting circuits, and trapped ions. Further advancement of qubits requires significant contributions from materials science in areas of materials selection, synthesis, fabrication, simulation and characterization. Here, we discuss some of the needs and opportunities for contributions to advance the fundamental understanding of materials used in quantum information applications.
The MACCS (MELCOR Accident Consequence Code System) code is the U.S. Nuclear Regulatory Commission (NRC) tool used to perform probabilistic health and economic consequence assessments for atmospheric releases of radionuclides. It is also used by international organizations, both reactor owners and regulators. It is intended and most commonly used for hypothetical accidents that could potentially occur in the future, rather than to evaluate past accidents or to provide emergency response during an ongoing accident. It is designed to support probabilistic risk and consequence analyses and is used by the NRC, U.S. nuclear licensees, the Department of Energy, and international vendors, licensees, and regulators. This report describes the modeling framework, implementation, verification, and benchmarking of a GDP-based model for economic losses that has recently been developed as an alternative to the original cost-based economic loss model in MACCS. The GDP-based model has its roots in a code developed by Sandia National Laboratories for the Department of Homeland Security to estimate short-term losses from natural and manmade accidents, called the Regional Economic Accounting analysis tool (REAcct). This model was adapted and modified for MACCS and is now called the Regional Disruption Economic Impact Model (RDEIM). It is based on input-output theory, which is widely used in economic modeling. It accounts for direct losses to a disrupted region affected by an accident, indirect losses to the national economy due to disruption of the supply chain, and induced losses from reduced spending by displaced workers. RDEIM differs from REAcct in its treatment and estimation of indirect loss multipliers, in its elimination of double counting associated with inter-industry trade in the affected area, and in that it is designed to estimate impacts over the extended periods that can follow a major nuclear reactor accident, such as the one that occurred at the Fukushima Daiichi site in Japan. Most input-output models do not account for economic adaptation and recovery; in this regard RDEIM also differs from its parent, REAcct, because it allows for a user-definable national recovery period. Implementation of a recovery period was one of several recommendations made by an independent peer review panel to ensure that RDEIM is state-of-practice. Both the original and the RDEIM economic loss models account for costs from evacuation and relocation, decontamination, depreciation, and condemnation. Where the original model accounts for an expected rate of return, based on the value of property, that is lost during interdiction, the RDEIM model instead accounts for losses of GDP based on the industrial sectors located within a county. The original model includes costs for disposal of crops and milk that the RDEIM model currently does not, but these costs tend to contribute insignificantly to the overall losses. This document discusses three verification exercises to demonstrate that the RDEIM model is implemented correctly in MACCS. It also describes a benchmark study at five nuclear power plants chosen to represent the spectrum of U.S. commercial sites. The benchmarks provide perspective on the expected differences between the RDEIM and the original cost-based economic loss models.
The RDEIM model is shown to consistently predict larger losses than the original model, probably in part because it accounts for national losses by including indirect and induced losses, whereas the original model accounts only for regional losses. Nonetheless, the RDEIM model predicts losses that are remarkably consistent with the original cost-based model, differing by at most 16% across the five sites combined with the three source terms considered in this benchmark.
Boronic acid-modified polymers (BAMPs) can interact with glycoproteins and other glycosylated compounds through covalent binding of the boronic acid moieties to saccharide residues. As a first step toward evaluating the utility of BAMPs as SARS-CoV-2 antiviral agents, this COVID-19 rapid response LDRD was intended to examine the effect of BAMPs on the SARS-CoV-2 spike glycoprotein and its subsequent binding with the ACE2 receptor protein. Multiple approaches were attempted to determine whether BAMPs based on poly(ethylene glycol) and poly(ethylenimine) bind the spike protein, but these failed to produce a definitive answer. However, two different enzyme-linked immunosorbent assays clearly showed no discernible effect of boronic acid in inhibiting spike-ACE2 binding.
A novel derivative of a previously published polymeric material has been synthesized and developed into an easily sprayable coating. Surface characterization of the coatings confirms the correct elemental presence, and viral assays reveal quantitative elimination of MS2 and Phi6 bacteriophages, surrogates for SARS-CoV-2, in as little as 5 minutes of contact. Furthermore, an N95 mask was dip-coated in the polymer solution and analyzed through microscopy and filtration efficacy testing. Though the coating was successful, electrostatic interactions between the mask layers and the polymer reduced filtration efficacy significantly. As such, we expect the current results of this work to be applicable to non-respiratory PPE and to solid substrates of commonly touched surfaces for rapid self-decontamination.
The Strategic Performance Evaluation Measurement Plan (PEMP) Scorecard, now housed in QuickScore, is an assurance and governance data scorecard that shows the health of the Labs against the current fiscal year PEMP objectives. POCs for PEMP objective owners are requested to provide the status of their PEMP objective, capture the top six cumulative accomplishments and the top six issues (including IFR issues), and provide mitigation and/or improvement action plans. Updated scorecards are used in OMR and BOM GS&S committee meetings.
In March and April of 2020 there was widespread concern about the availability of medical resources required to treat Covid-19 patients who become seriously ill. A simulation model of supply management was developed to aid understanding of how best to manage available supplies and channel new production. Forecasted demands for critical therapeutic resources have tremendous uncertainty, largely due to uncertainties about the number and timing of patient arrivals. It is therefore essential to evaluate any process for managing supplies in view of this uncertainty. To support such evaluations, we developed a modeling framework that allows an integrated assessment in the context of uncertainty quantification. At the time of writing there has been no need to execute this framework because adaptations of the medical system have been able to respond effectively to the outbreak. This report documents the framework and its implemented components should the need later arise for its application.
We present a numerical framework for recovering unknown non-autonomous dynamical systems with time-dependent inputs. To circumvent the difficulty presented by the non-autonomous nature of the system, our method transforms the solution state into piecewise integration of the system over a discrete set of time instances. The time-dependent inputs are then locally parameterized using a suitable model, for example polynomial regression, within the pieces determined by the time instances. This transforms the original system into a piecewise parametric system that is locally time invariant. We then design a deep neural network structure to learn the local models. Once the network model is constructed, it can be iterated over time to conduct global system prediction. We provide theoretical analysis of our algorithm and present a number of numerical examples to demonstrate the effectiveness of the method.
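A minimal sketch of the learning step, assuming a PyTorch residual network for the one-step flow map; the dimensions, architecture, and the polynomial-coefficient interface below are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

# Over each interval [t_k, t_{k+1}], the time-dependent input u(t) is fitted
# by a low-degree polynomial; the network learns the one-step flow map
#   x_{k+1} = x_k + N(x_k, c_k),  c_k = local polynomial coefficients of u.
d_state, d_coef, width = 3, 4, 64   # illustrative sizes
net = nn.Sequential(nn.Linear(d_state + d_coef, width), nn.Tanh(),
                    nn.Linear(width, width), nn.Tanh(),
                    nn.Linear(width, d_state))

def step(x, c):
    # Residual form: predict the state increment over one interval.
    return x + net(torch.cat([x, c], dim=-1))

def predict(x0, coef_seq):
    # Iterate the learned local map to obtain a global trajectory.
    xs, x = [x0], x0
    for c in coef_seq:          # coef_seq: per-interval input coefficients
        x = step(x, c)
        xs.append(x)
    return torch.stack(xs)
```

Training fits `net` to pairs of consecutive states with their local input coefficients; global prediction then only requires iterating `step`, which is what makes the locally time-invariant reformulation useful.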
This report documents a statistical method for the "real-time" characterization of partially observed epidemics. Observations consist of daily counts of symptomatic patients diagnosed with the disease. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and is predicated on a model for the distribution of the incubation period. The model parameters are estimated as distributions using a Markov Chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. The method is applied to the COVID-19 pandemic of 2020, using data at the country, provincial (e.g., state), and regional (e.g., county) levels. The epidemiological model includes a stochastic component due to uncertainties in the incubation period. This model-form uncertainty is accommodated by a pseudo-marginal Metropolis-Hastings MCMC sampler, which produces posterior distributions that reflect this uncertainty. We approximate the discrepancy between the data and the epidemiological model using Gaussian and negative binomial error models; the latter was motivated by the over-dispersed count data. For small daily counts we find the performance of the calibrated models to be similar for the two error models. For large daily counts the negative-binomial approximation is numerically unstable, unlike the Gaussian error model. Application of the model at the country level (for the United States, Germany, Italy, etc.) generally provided accurate forecasts, as the data consisted of large counts which suppressed the day-to-day variations in the observations. Further, the bulk of the data is sourced over the duration before the relaxation of the curbs on population mixing, and is not confounded by any discernible country-wide second wave of infections. At the state level, for states where reporting was poor or which evinced few infections (e.g., New Mexico), the variance in the data posed some, though not insurmountable, difficulties, and forecasts were able to capture the data with large uncertainty bounds. The method was found to be sufficiently sensitive to discern the flattening of the infection and epidemic curve due to shelter-in-place orders after around the 90% quantile of the incubation distribution (about 10 days for COVID-19). The proposed model was also used at a regional level to compare the forecasts for the central and north-west regions of New Mexico. Modeling the data for these regions illustrated different disease spread dynamics captured by the model. While in the central region the daily counts peaked in late April, in the north-west region the ramp-up continued for approximately three more weeks.
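To illustrate the two error models and the sampler, here is a compact sketch; the parameterizations and the random-walk Metropolis loop are generic stand-ins (a pseudo-marginal variant would replace the log-posterior with an unbiased Monte Carlo estimate), not the report's implementation:

```python
import numpy as np
from scipy import stats

# Daily counts y are compared to model predictions mu(theta) with either a
# Gaussian or a negative-binomial likelihood (alpha is the NB dispersion).
def loglik_gauss(y, mu, sigma):
    return stats.norm.logpdf(y, loc=mu, scale=sigma).sum()

def loglik_negbin(y, mu, alpha):
    # scipy's nbinom(n, p) written in terms of mean mu and dispersion alpha:
    # var = mu + alpha * mu^2  =>  n = 1/alpha, p = n / (n + mu)
    n = 1.0 / alpha
    return stats.nbinom.logpmf(y, n, n / (n + mu)).sum()

def metropolis(logpost, theta0, steps=10000, scale=0.1, rng=None):
    """Random-walk Metropolis; the pseudo-marginal variant used in the report
    would substitute a Monte Carlo estimate of logpost at each step."""
    rng = rng or np.random.default_rng(0)
    theta, lp = np.asarray(theta0, float), logpost(theta0)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

The negative-binomial variance mu + alpha*mu^2 is what captures over-dispersion; as mu grows, the logpmf evaluation is also where the numerical instability noted above can arise.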
The Sandia National Laboratories Physical Security Center of Excellence (PSCOE) has been tasked by the Department of State (DOS) Bureau of Diplomatic Security Research and Development branch to investigate the potential anti-climb benefits of newly developed skid-resistant paint coatings - one light base and one dark base. DOS is interested in studying the application of the coatings on passive barriers commonly used at diplomatic facilities. The purpose of the anti-climb coating in this context is to deter and delay adversaries from climbing onto the passive barriers. PSCOE was tasked to perform delay testing that focused on the effectiveness of the coatings on the two DOS perimeter passive barriers - the DS-41 anti-ram fence and a 9-foot-high by 1-foot-thick reinforced concrete wall intended to mimic the DS-30 anti-ram perimeter wall. PSCOE was also tasked with performing skid-resistance testing using a British Pendulum skid-resistance tester. Delay testing and skid-resistance testing were also performed on two different passive barriers without any anti-climb coating to determine a baseline.
The State-of-the-Art Reactor Consequence Analyses (SOARCA) project has focused on best estimate analyses and uncertainty analysis for postulated accidents at specific nuclear power plants. The consequences of these accidents are estimated using the simulation tools MELCOR and MACCS. To understand which uncertain input variables are important to determining these consequences, analysts have performed sensitivity analyses. The tool used to perform these sensitivity analyses in previous SOARCA work, CompModSA, is no longer supported. Therefore, the current work focuses on migrating these analyses to another tool and evaluating its performance. Dakota, which is a tool developed at Sandia National Laboratories, is used in this work. Sensitivity results are created for three analyses from the SOARCA Surry UA. Though CompModSA and Dakota vary slightly in their algorithms and implementation, their sensitivity results generally agree, which gives confidence in the Dakota approach and increases confidence in the original analyses. It is likely that this methodology is extendable to the rest of SOARCA analyses.
Attaway is a recently installed High-Performance Computing (HPC) machine at Sandia National Labs that is 70% water-cooled and 30% air-cooled. This machine, supplied by Penguin Computing, uses a novel cooling system from Chilldyne that operates under vacuum, preventing water leaks. If water cooling fails, fans inside each node ramp up to provide 100% of Attaway's cooling. Various tests were completed on Attaway to determine the robustness of its cooling system as well as its ability to respond to sudden changes in state. These changes include an immediate change from an idle compute load to full load (Linpack) as well as running Linpack without any water cooling from Attaway's CDUs. It was discovered that Attaway responds to sudden compute load changes very well, never throttling any nodes. When Linpack was run without water cooling, the system was able to operate for a short time before throttling occurred.
Compact diffusion-bonded heat exchangers are essential for high-pressure heat exchange, but they are subject to thermal fatigue and ramp rate limitations. Simulation of these geometries is challenging given the large range of length and time scales, from thousands of mm-sized microchannels inside a m-sized heat exchanger. Multi-physics simulations including thermal, fluid, and solid mechanics components are being used to predict stress within the heat exchangers under these conditions. These predictions can then be used to understand thermal ramp rate limitations that keep maximum stresses low, as well as to make fatigue-life predictions using well-known empirical models.
Machine learning models, trained on data from ab initio quantum simulations, are yielding molecular dynamics potentials with unprecedented accuracy. One limiting factor is the quantity of available training data, which can be expensive to obtain. A quantum simulation often provides all atomic forces in addition to the total energy of the system. These forces provide much more information than the energy alone. It may appear that training a model to this large quantity of force data would introduce significant computational costs; in fact, training to all available force data should be only a few times more expensive than training to energies alone. Here, we present a new algorithm for efficient force training, and benchmark its accuracy by training to forces from real-world datasets for organic chemistry and bulk aluminum.
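The key mechanics are sketched below for a generic toy potential: forces come from one differentiable backward pass through the predicted energy, so the force loss reuses the same model evaluation. This illustrates the general idea, not the paper's specific algorithm:

```python
import torch
import torch.nn as nn

# Toy potential: per-atom descriptor = raw coordinates (a real model would
# use symmetry-invariant descriptors); per-atom energies are summed.
model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))

def energy(pos):
    return model(pos).sum()

def loss_energy_forces(pos, e_ref, f_ref, w_f=1.0):
    pos = pos.detach().requires_grad_(True)
    e = energy(pos)
    # Forces are the negative gradient of the predicted energy; one extra
    # backward pass (create_graph keeps it differentiable for training).
    f = -torch.autograd.grad(e, pos, create_graph=True)[0]
    return (e - e_ref).pow(2) + w_f * (f - f_ref).pow(2).mean()

pos = torch.randn(8, 3)                    # 8 atoms, xyz coordinates
loss = loss_energy_forces(pos, e_ref=torch.tensor(0.0), f_ref=torch.zeros(8, 3))
loss.backward()                            # gradients w.r.t. model weights
```

Since a configuration of N atoms contributes 3N force labels but only one energy label, fitting forces extracts far more information per quantum calculation, at the cost of the extra differentiation shown above.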
The U.S. Nuclear Regulatory Commission (NRC) performed a first-of-a-kind uncertainty analysis (UA) of the accident progression, radiological releases, and offsite consequences for the State-of-the-Art Reactor Consequence Analyses (SOARCA) of an unmitigated long-term station blackout (LTSBO) severe accident scenario at the Peach Bottom Atomic Power Station. The objective of the UA was to evaluate the robustness of the SOARCA deterministic "best estimate" results and conclusions documented in NUREG-1935, and to develop insight into the overall sensitivity of the SOARCA results to uncertainty in key modeling inputs. The study was completed in 2015 and documented in NUREG/CR-7155. Since 2015, two other SOARCA UAs were completed for two pressurized water reactor (PWR) plants. The PWR UAs incrementally updated the approach and methodology, including using the latest release of the MELCOR 2.2 computer code. There were also advances made in state-of-the-art modeling related to NRC efforts using the Peach Bottom model in NUREG-2206, which provided the technical basis for the containment protection and release reduction rulemaking for boiling water reactors with Mark I and Mark II containments. This report documents the input model changes from the NUREG/CR-7155 study and performs a small number of reference calculations to assess the effects of the new computer code and the model input updates. The objective of the work is to verify whether the updated Peach Bottom MELCOR model and the updated version of MELCOR support the conclusions formed in the Peach Bottom SOARCA UA by performing these representative calculations.
This paper describes the source term calculation documented in SAND2011-0128, "Accident Source Terms for Light-Water Nuclear Power Plants Using High-Burnup or MOX Fuel," and presents one method for implementing the calculation in MATLAB.
The flow and cavitation behavior inside fuel injectors is known to affect spray development, mixing, and combustion characteristics. While diesel fuel injectors with converging and hydro-eroded holes are generally known to limit cavitation and feature higher discharge coefficients during the steady period of injection, less is known about the flow during the transient periods corresponding to needle opening and closing. Multiple-injection strategies involve short injections, multiplying these transient periods and giving them growing importance in the fuel delivery process. In this study, single-hole transparent nozzles were manufactured with the same hole inlet radius and diameter as the Engine Combustion Network Spray D nozzle and mounted to a modified version of a common-rail Spray A injector body and needle. Needle opening and closing periods were visualized with stereoscopic high-speed microscopy at injection pressures relevant to modern diesel engines. Time-resolved sac pressure was extracted via elastic deformation analysis of the transparent nozzles. Sources of cavitation were observed and tracked, enabling the identification of a gas exchange process after the end of injection, with ingestion of chamber gas into the sac and orifice. We observed that this gas exchange contributed markedly to disrupting the start of injection and the outlet flow during the subsequent injection event.
Brittan, Andrew M.; Mahaffey, Jacob T.; Colgan, Nathan E.; Elbakhshwan, Mohamed; Anderson, Mark H.
This study investigates the effectiveness of Cu as a corrosion barrier in supercritical carbon dioxide (s−CO2) by coating 316 stainless steel (316) with various thicknesses of Cu. 316 exposed to s−CO2 with 50 ppm CO showed a reduction in oxidation corresponding to the thickness of its Cu barrier coating. Additionally, a continuous Cu layer between the environment and the alloy was found to correlate to an elimination of carburization and corrosion-related mechanical degradation. This Cu coating technique could be applied over a variety of temperatures to improve the corrosion resistance of alloys that are susceptible to carburization.
Fuel injection rate laws are one of the most important pieces of information needed when modeling engine combustion with computational fluid dynamics. In this study, a simple phenomenological model of a common-rail injector was developed and calibrated for the Bosch CRI2.2 platform. The model requires three tunable parameter fits, making it relatively easy to calibrate and suitable for injector modeling when high-fidelity information about the injector's internal geometry and electrical circuit details is not available. Each injection pulse is modeled as a sequence of up to four stages: an injection needle mechanical opening transient, a full-lift viscous flow inertial transient, a Bernoulli steady-state stage, and a needle descent transient. Parameters for each stage are obtained as polynomial fits from measured injection rate properties. The model enforces total injected mass, and the intermediate stages are only introduced if the injection pulse duration is long enough. Experimental rates of injection from two separate campaigns on the same injector were used to calibrate the model. The model was first validated against measured injection rate laws featuring pilot injections, short partially premixed combustion pulses, and conventional diesel combustion injection strategies. Then, it was employed as an input to engine computational fluid dynamics simulations, which were run to simulate mixture-formation experiments in an optically accessible light-duty diesel engine. It was found that, though simple, this model is capable of predicting both pilot and main injection pulse mass flow rates well: the simulations yielded accurate predictions of in-cylinder equivalence ratio distributions from injection strategies for both partially premixed combustion and pilot injections. Also, once calibrated, the model produced appropriate results for a wide range of injected mass and rail pressure values. Finally, it was observed that usage of such a relatively simple model can be a good choice when high-fidelity injection rate input and highly detailed information about the injector's geometry and operation are not available, particularly as noticeable discrepancies can also be present among different experimental campaigns on similar hardware.
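A schematic of such a staged rate-of-injection profile is sketched below; the linear ramp shapes and the 0.6 fraction at end-of-opening are placeholders for illustration, not the calibrated Bosch CRI2.2 polynomial fits:

```python
import numpy as np

def roi_profile(t, t1, t2, t3, t4, mdot_ss):
    """Piecewise mass flow rate over one injection pulse.
    Stages: [0,t1] needle opening ramp; [t1,t2] full-lift inertial
    transient; [t2,t3] Bernoulli steady state; [t3,t4] needle descent.
    Shapes and the 0.6 split are placeholders, not calibrated fits."""
    t = np.asarray(t, float)
    out = np.zeros_like(t)
    opening = (t >= 0) & (t < t1)
    out[opening] = mdot_ss * 0.6 * t[opening] / t1          # opening ramp
    transient = (t >= t1) & (t < t2)
    frac = (t[transient] - t1) / (t2 - t1)
    out[transient] = mdot_ss * (0.6 + 0.4 * frac)           # approach steady state
    steady = (t >= t2) & (t < t3)
    out[steady] = mdot_ss                                    # Bernoulli stage
    closing = (t >= t3) & (t < t4)
    out[closing] = mdot_ss * (t4 - t[closing]) / (t4 - t3)  # descent ramp
    return out

# Example: a 1.5 ms pulse sampled every 10 us; short pulses would simply
# omit the intermediate stages, as the model described above does.
t = np.arange(0.0, 1.5, 0.01)
rate = roi_profile(t, 0.2, 0.5, 1.2, 1.5, mdot_ss=25.0)
```

In the actual model, the stage parameters come from the polynomial fits to measured injection rates, and the profile is rescaled so the integral matches the enforced total injected mass.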
Faraggiana, E.; Whitlam, C.; Chapman, J.; Hillis, A.; Roesner, J.; Hann, M.; Greaves, D.; Yu, Y.H.; Ruehl, Kelley M.; Masters, I.; Foster, G.; Stockman, G.
A submerged wave device generates energy from the relative motion of floating bodies. In WaveSub, three floats are joined to a reactor; each is connected to a spring and a generator. The electricity generated damps the orbital movements of the floats. The forces are non-linear and each float interacts with the others. Tuning to the wave climate is achieved by changing the line lengths, so there is a need to understand the performance trade-offs for a large number of configurations. This requires an efficient, large-displacement, multidirectional, multi-body numerical scheme. Results from a 1/25 scale wave basin experiment are described. Here, we show that a time domain linear potential flow formulation (Nemoh, WEC-Sim) can match the tank testing provided that suitably tuned drag coefficients are employed. Inviscid linear potential models can match some wave device experiments; however, additional viscous terms generally provide better accuracy. Scale experiments are also prone to mechanical friction, and we estimate friction terms to improve the correlation further. The resulting error in mean power between the numerical and physical models is approximately 10%, and the predicted device movement shows a good match. Overall, drag terms in time domain wave energy modelling will improve simulation accuracy in wave renewable energy device design.
The communication hypergraph model was proposed in a two-phase setting for encapsulating multiple communication cost metrics (bandwidth and latency), which are proven to be important in parallelizing irregular applications. In the first phase, computational-task-to-processor assignment is performed with the objective of minimizing total volume while maintaining computational load balance. In the second phase, communication-task-to-processor assignment is performed with the objective of minimizing the total number of messages while maintaining communication-volume balance. The reduce-communication hypergraph model, however, fails to correctly encapsulate send-volume balancing. We propose a novel vertex weighting scheme that enables part weights to correctly encode the send-volume loads of processors for send-volume balancing. The model also suffers from increasing the total communication volume during partitioning. To counteract this increase, we propose a method that utilizes the recursive bipartitioning framework and refines each bipartition by vertex swaps. For performance evaluation, we consider column-parallel SpMV, which is one of the most widely known applications in which the reduce-task assignment problem arises. Extensive experiments on 313 matrices show that, compared to the existing model, the proposed models achieve considerable improvements in all communication cost metrics. These improvements lead to an average decrease of 30 percent in parallel SpMV time on 512 processors for 70 matrices with high irregularity.
We present a novel formulation for startup cost computation in the unit commitment problem (UC). Both our proposed formulation and existing formulations in the literature are placed in a formal, theoretical dominance hierarchy based on their respective linear programming relaxations. Our proposed formulation is tested empirically against existing formulations on large-scale UC instances drawn from real-world data. While requiring more variables than the current state-of-the-art formulation, our proposed formulation requires fewer constraints, and is empirically demonstrated to be as tight as a perfect formulation for startup costs. This tightening can reduce the computational burden in comparison to existing formulations, especially for UC instances with large reserve margins and high penetration levels of renewables.
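For orientation, one standard "startup type" formulation from the UC literature reads as follows; the paper's proposed formulation differs in its variables and constraints, so this block only fixes the notation of the problem being tightened:

```latex
% Binary u_{g,t} is the on/off status of unit g, v/w indicate startup and
% shutdown, and \delta_{g,s,t} selects startup category s with cost K_{g,s},
% where category s applies when the unit has been offline between T_g^s and
% T_g^{s+1} periods (the upper bound is dropped for the deepest category).
\begin{align}
  u_{g,t} - u_{g,t-1} &= v_{g,t} - w_{g,t}, \\
  \sum_{s} \delta_{g,s,t} &= v_{g,t}, \\
  \delta_{g,s,t} &\le \sum_{\tau = T_g^{s}}^{T_g^{s+1}-1} w_{g,\,t-\tau}, \\
  c^{\mathrm{SU}}_{g,t} &= \sum_{s} K_{g,s}\, \delta_{g,s,t}.
\end{align}
```

The tightness comparisons in the paper concern how closely the linear programming relaxation of such constraint systems approximates the convex hull of feasible startup-cost schedules.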
Casali, L.; Osborne, T.H.; Wang, H.; Meier, E.T.; Ren, J.; Shafer, M.W.; Watkins, Jonathan G.
Impurity seeding studies in the small angle slot (SAS) divertor at DIII-D have revealed a strong dependence of the detachment onset and pedestal characteristics on both target geometry and impurity species. N2 seeding in the slot has led to the first simultaneous observation of detachment on the entire suite of boundary diagnostics viewing the SAS without degradation of core confinement. SOLPS-ITER simulations with D+C+N, full cross-field drifts, and neutral-neutral collisions activated are performed for the first time in DIII-D to interpret this behavior. These simulations highlight a strong effect of divertor configuration and plasma drifts on the recycling source distribution, with significant consequences for plasma flows. Flow reversal is found for both main ions and impurities, strongly affecting the impurity transport and providing an explanation for the dependence of the detachment onset and impurity leakage on the strike point location found in the experiments. Matched discharges with either nitrogen or neon injection show that while nitrogen does not significantly affect the pedestal, neon leads to increased pedestal pressure gradients and improved pedestal stability. Little nitrogen penetrates into the core, but a significant amount of neon is found in the pedestal, consistent with the different ionization potentials of the two impurities. This work demonstrates that neutral and impurity distributions in the divertor can be controlled through variations in strike point location in a fixed baffle structure. Divertor geometry combined with impurity seeding enables mitigated divertor heat flux while balancing core contamination, leading to enhanced divertor dissipation and improved core-edge compatibility, which are essential for ITER and future fusion reactors.
Magnetized Liner Inertial Fusion (MagLIF) at Sandia National Laboratories involves a laser preheating stage where a few-ns laser pulse passes through a few-micron-thick plastic window to preheat gaseous fusion fuel contained within the MagLIF target. Interactions with this window reduce heating efficiency and mix window and target materials into the fuel. A recently proposed idea called "Laser Gate" involves removing the window well before the preheating laser is applied. In this article, we present experimental proof-of-principle results for a pulsed-power implementation of Laser Gate, where a thin current-carrying wire weakens the perimeter of the window, allowing the fuel pressure to push the window open and away from the preheating laser path. For this effort, transparent targets were fabricated and a test facility capable of studying this version of Laser Gate was developed. A 12-frame bright-field laser schlieren/shadowgraphy imaging system captured the window opening dynamics on microsecond timescales. The images reveal that the window remains largely intact as it opens and detaches from the target. A column of escaping pressurized gas appears to prevent the detached window from inadvertently moving into the preheating laser path.
Deliverable Description: Identify and evaluate options for fuel and basket modifications, for dual-purpose canisters (DPCs) to be loaded in the future, that would substantially reduce the probability of post-closure criticality after waste package breach and flooding with groundwater. Planned work in FY20 will examine the feasibility of criticality control features, particularly neutron-absorbing inserts or replacement channels for boiling water reactor (BWR) fuel assemblies. The expected outcome is additional engineering information that can be used to guide the R&D program and to support future stakeholder interactions. This document will be incorporated into the planned deliverable DPC Disposal Concepts of Operation (M3SF-20SNO10305052, 18 Sep 20).
The Dial-A-Cluster (DAC) model allows interactive visualization of multivariate time series data. A multivariate time series dataset consists of an ensemble of data points, where each data point consists of a set of time series curves. The example of a DAC dataset used in this guide is a collection of 100 cities in the United States, where each city collects a year's worth of weather data, including daily temperature, humidity, and wind speed measurements.
Chemical kinetics simulations are used to explore whether detailed measurements of relevant chemical species during the oxidation of very dilute fuels (less than 1 Torr partial pressure) in a high-pressure plug flow reactor (PFR) can predict autoignition propensity. We find that for many fuels the timescale for the onset of spontaneous oxidation in dilute fuel/air mixtures in a simple PFR is similar to the 1st-stage ignition delay time (IDT) at stoichiometric engine-relevant conditions. For those fuels that deviate from this simple trend, the deviation is closely related to the peak rate of production of OH, HO2, CH2O, and CO2 formed during oxidation. We use these insights to show that an accurate correlation between simulated profiles of these species in a PFR and 1st-stage IDT can be developed using convolutional neural networks. Our simulations suggest that the accuracy of such a correlation is 10–50%, which is appropriate for rapid fuel screening and may be sufficient for predictive fuel performance modeling.
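As a sketch of the correlation step, a small 1-D convolutional network can map the simulated species profiles to a scalar ignition delay; the architecture and sizes below are assumptions for illustration, not the networks used in the study:

```python
import torch
import torch.nn as nn

# Input: simulated PFR profiles of OH, HO2, CH2O, and CO2 versus residence
# time, as channels of a 1-D signal; output: predicted log(1st-stage IDT).
n_species, n_points = 4, 128   # illustrative resolution
cnn = nn.Sequential(
    nn.Conv1d(n_species, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

profiles = torch.randn(8, n_species, n_points)  # a batch of PFR simulations
log_idt = cnn(profiles)                          # one prediction per fuel
```

The convolutional layers pick up the shape and timing of the species production peaks, which is consistent with the observation above that deviations from the simple PFR/IDT trend track the peak rates of production of OH, HO2, CH2O, and CO2.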
The US Department of Energy (DOE) Nuclear Energy Research Initiative funded the design and construction of the Seven Percent Critical Experiment (7uPCX) at Sandia National Laboratories. The start-up of the experiment facility and the execution of the experiments described here were funded by the DOE Nuclear Criticality Safety Program. The 7uPCX is designed to investigate critical systems with fuel for light water reactors in the enrichment range above 5% 235U. The 7uPCX assembly is a water-moderated and -reflected array of aluminum-clad, square-pitched U(6.90%)O2 fuel rods. Other critical experiments performed in the 7uPCX assembly are documented in LEU-COMP-THERM-078, LEU-COMP-THERM-080, LEU-COMP-THERM-096, and LEU-COMP-THERM-097.
Subsidence monitoring is a crucial component of understanding the integrity of salt storage caverns. This report reviews the historical and current subsidence monitoring program and includes interpretation of the data from the Bayou Choctaw Strategic Petroleum Reserve site. The current monitoring program consists of an annual elevation survey as well as GPS and tiltmeter instruments above both Cavern 4 and Cavern 20. This year's level and rod survey indicates little subsidence across the site. In addition, the GPS and tiltmeter instruments do not indicate any substantial movement above Caverns 4 and 20. As such, there is no indication that any of the caverns at Bayou Choctaw have lost integrity.
In this paper, we continue our efforts to exploit optimization and control ideas as a common foundation for the development of property-preserving numerical methods. Here we focus on a class of scalar advection equations whose solutions have fixed mass in a given Eulerian region and constant bounds in any Lagrangian volume. Our approach separates discretization of the equations from the preservation of their solution properties by treating the latter as optimization constraints. This relieves the discretization process from having to comply with additional restrictions and makes stability and accuracy the sole considerations in its design. A property-preserving solution is then sought as a state that minimizes the distance to an optimally accurate but not property-preserving target solution computed by the scheme, subject to constraints enforcing discrete proxies of the desired properties. Furthermore, we consider two such formulations in which the optimization variables are given by the nodal solution values and suitably defined nodal fluxes, respectively. A key result of the paper reveals that a standard Algebraic Flux Correction (AFC) scheme is a modified version of the second formulation obtained by shrinking its feasible set to a hypercube. In conclusion, we present numerical studies illustrating the optimization-based formulations and comparing them with AFC.
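In its simplest form, the repair step described above is a small quadratic program; the following sketch, with illustrative names and weights, finds the closest state to the target that conserves mass and respects local bounds:

```python
import numpy as np
from scipy.optimize import minimize

def repair(target, mass, lo, hi, weights):
    """Closest state to `target` with weights @ x == mass and lo <= x <= hi.
    A hedged stand-in for the paper's formulations, which also consider
    nodal fluxes as optimization variables."""
    return minimize(
        lambda x: 0.5 * np.sum((x - target) ** 2),
        x0=np.clip(target, lo, hi),
        jac=lambda x: x - target,
        bounds=list(zip(lo, hi)),
        constraints=[{"type": "eq",
                      "fun": lambda x: weights @ x - mass,
                      "jac": lambda x: weights}],
        method="SLSQP").x

# Example: a perturbed target violates the bounds [0, 1]; the repaired state
# stays as close as possible while conserving the weighted mass.
w = np.ones(5) / 5                      # lumped-mass (quadrature) weights
tgt = np.array([1.1, -0.05, 0.5, 0.7, 0.3])
x = repair(tgt, mass=w @ tgt, lo=np.zeros(5), hi=np.ones(5), weights=w)
print(x, w @ x)                         # bounds hold, mass preserved
```

The paper's observation about AFC can be read in these terms: restricting the feasible set of such a program to a hypercube recovers a standard flux-correction scheme.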
Li-ion batteries currently dominate electrochemical energy storage for grid-scale applications, but there are promising aqueous battery technologies on the path to commercial adoption. Though aqueous batteries are considered lower risk, they can still undergo problematic degradation processes. This perspective details the degradation that aqueous batteries can experience during normal and abusive operation, and how these processes can even lead to cascading failure. We outline methods for studying these phenomena at the material and single-cell level. Considering reliability and safety studies early in technology development will facilitate translation of emerging aqueous batteries from the lab to the field.
A physically unclonable function (PUF) is an embedded hardware security measure that provides protection against counterfeiting. In this article, we present our work on using an array of randomly magnetized micrometer-sized ferromagnetic bars (micromagnets) as a PUF. We employ a 4-μm-thick surface layer of nitrogen-vacancy (NV) centers in diamond to image the magnetic field from each micromagnet in the array, after which we extract the magnetic polarity of each micromagnet using image analysis techniques. Finally, after evaluating the randomness of the micromagnet array PUF and the sensitivity of the NV readout, we conclude by discussing possible future enhancements for improved security and magnetic readout.
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
Rempe, Susan R.; Lubkowski, Jacek; Vanegas, Juan; Chan, Wai K.; Lorenzi, Philip L.; Weinstein, John N.; Sukharev, Sergei; Fushman, David; Anishkin, Andriy; Wlodawer, Alexander
Two bacterial type II L-asparaginases, from Escherichia coli and Dickeya chrysanthemi, have played a critical role for more than 40 years as therapeutic agents against juvenile leukemias and lymphomas. Despite a long history of successful pharmacological applications and the apparent simplicity of the catalytic reaction, controversies still exist regarding major steps of the mechanism. In this report, we provide a detailed description of the reaction catalyzed by E. coli type II L-asparaginase (EcAII). Our model was developed on the basis of new structural and biochemical experiments combined with previously published data. The proposed mechanism is supported by quantum chemistry calculations based on density functional theory. We provide strong evidence that EcAII catalyzes the reaction according to the double-displacement (ping-pong) mechanism, with formation of a covalent intermediate. Several steps of catalysis by EcAII are unique when compared to reactions catalyzed by other known hydrolytic enzymes. Here, the reaction is initiated by a weak nucleophile, threonine, without direct assistance of a general base, although a distant general base is identified. Furthermore, tetrahedral intermediates formed during the catalytic process are stabilized by a never previously described motif. Although the scheme of the catalytic mechanism was developed only on the basis of data obtained from EcAII and its variants, this novel mechanism of enzymatic hydrolysis could potentially apply to most (and possibly all) L-asparaginases.
We extend and improve recent results given by Singh and Watson on using classical bounds on the union of sets in a chance-constrained optimization problem. Specifically, we revisit the so-called Dawson and Sankoff bound that provided one of the best approximations of a chance constraint in the previous analysis. We then show that our work generalizes the previous analysis and that, in fact, the inequality employed previously is a very loose approximation whose assumptions do not generally hold. Computational results demonstrate on average over a 43% improvement in the bounds. As a byproduct, we provide an exact reformulation of the floor function in optimization models.
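For reference, the Dawson-Sankoff degree-two lower bound in its standard form, stated in terms of the first two binomial moments of the events:

```latex
% With $S_1=\sum_i \Pr(A_i)$ and $S_2=\sum_{i<j}\Pr(A_i\cap A_j)$, the
% Dawson--Sankoff bound on the probability of the union reads
\[
  \Pr\Bigl(\bigcup_{i=1}^{n} A_i\Bigr)
  \;\ge\; \frac{2}{k+1}\,S_1 \;-\; \frac{2}{k(k+1)}\,S_2,
  \qquad k = 1 + \left\lfloor \frac{2S_2}{S_1} \right\rfloor .
\]
```

In a chance-constrained program, such a lower bound on the union of violation events yields a conservative but tractable surrogate for the chance constraint, which is the role it plays in the analysis revisited here; the floor in the definition of $k$ is what motivates the exact floor-function reformulation mentioned above.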
Maes, Noud; Skeen, Scott A.; Bardi, Michele; Fitzgerald, Russell P.; Malbec, Louis M.; Bruneaux, Gilles; Pickett, Lyle M.; Yasutomi, Koji; Martin, Glen
In a collaborative effort to identify key aspects of heavy-duty diesel injector behavior, the Engine Combustion Network (ECN) Spray C and Spray D injectors were characterized in three independent research laboratories using constant volume pre-burn vessels and a heated constant-pressure vessel. This work reports on experiments with nominally identical injectors used in different optically accessible combustion chambers, where one of the injectors was designed intentionally to promote cavitation. Optical diagnostic techniques specifically targeted liquid- and vapor-phase penetration, combustion indicators, and sooting behavior over a large range of ambient temperatures—from 850 K to 1100 K. Because the large-orifice injectors employed in this work result in flame lengths that extend well beyond the optical diagnostics’ field-of-view, a novel method using a characteristic volume is proposed for quantitative comparison of soot under such conditions. Further, the viability of extrapolating these measurements downstream is considered. The results reported in this publication explain trends and unique characteristics of the two different injectors over a range of conditions and serve as calibration targets for numerical efforts within the ECN consortium and beyond. Building on agreement for experimental results from different institutions under inert conditions, apparent differences found in combustion indicators and sooting behavior are addressed and explained. Ignition delay and soot onset are correlated and the results demonstrate the sensitivity of soot formation to the major species of the ambient gas (i.e., carbon dioxide, water, and nitrogen in the pre-burn ambient versus nitrogen only in the constant pressure vessel) when holding ambient oxygen volume percent constant.
We present the draft genome sequences of three Burkholderia thailandensis strains, E421, E426, and DW503. E421 consists of 90 contigs of 6,639,935 bp and 67.73% GC content. E426 consists of 106 contigs of 6,587,853 bp and 67.73% GC content. DW503 consists of 102 contigs of 6,458,767 bp and 67.64% GC content.
Density Functional Theory (DFT) calculations of electrode material properties in high energy density storage devices like lithium batteries have been standard practice for decades. In contrast, DFT modelling of explicit interfaces in batteries arguably lacks universally adopted methodology and needs further conceptual development. In this paper, we focus on solid-solid interfaces, which are ubiquitous not just in all-solid state batteries; liquid-electrolyte-based batteries often rely on thin, solid passivating films on electrode surfaces to function. We use metal anode calculations to illustrate that explicit interface models are critical for elucidating contact potentials, electric fields at interfaces, and kinetic stability with respect to parasitic reactions. The examples emphasize three key challenges: (1) the "dirty" nature of most battery electrode surfaces; (2) voltage calibration and control; and (3) the fact that interfacial structures are governed by kinetics, not thermodynamics. To meet these challenges, developing new computational techniques and importing insights from other electrochemical disciplines will be beneficial.
A novel metal-organic framework (MOF), Mn-DOBDC, has been synthesized in an effort to investigate the role of both the metal center and the presence of free linker hydroxyls on the luminescent properties of DOBDC (2,5-dihydroxyterephthalic acid) containing MOFs. Co-MOF-74, RE-DOBDC (RE = Eu, Tb), and Mn-DOBDC have been synthesized and analyzed by powder X-ray diffraction (PXRD), and their fluorescent properties probed by UV-Vis spectroscopy and density functional theory (DFT). Mn-DOBDC has been synthesized by a new method involving a concurrent facile reflux synthesis and slow crystallization, resulting in yellow single crystals in monoclinic space group C2/c. Mn-DOBDC was further analyzed by single-crystal X-ray diffraction (SCXRD), scanning electron microscopy-energy-dispersive spectroscopy (SEM-EDS), and photoluminescent emission. Results indicate that the luminescent properties of the DOBDC linker are transferred to the three-dimensional structures of both RE-DOBDC and Mn-DOBDC, which contain free hydroxyls on the linker. In Co-MOF-74, however, luminescence is quenched in the solid state due to binding of the phenolic hydroxyls within the MOF structure. Mn-DOBDC exhibits a ligand-based tunable emission that can be controlled in solution by the use of different solvents.
Accurate models of thermal runaway in lithium-ion batteries require quantitative knowledge of heat release during thermochemical processes. A capability to predict at least some aspects of heat release for a wide variety of candidate materials a priori is desirable. This work establishes a framework for predicting staged heat release from basic thermodynamic properties for layered metal-oxide cathodes. Available enthalpies relevant to the thermal decomposition of layered metal-oxide cathodes are reviewed and assembled in this work to predict potential heat release in the presence of alkyl-carbonate electrolytes with varying state of charge. Cathode delithiation leads to a less stable metal oxide subject to phase transformations, including oxygen release, when heated. We recommend reaction enthalpies and show that the thermal consequences of metal-oxide phase changes and solvent oxidation within the battery are of comparable magnitude. Heats of reaction are related in this work to typical observations reported in the literature for species characterization and calorimetry. The methods and assembled databases of formation and reaction enthalpies in this work lay the groundwork for a new generation of thermal runaway models based on fundamental material thermodynamics, capable of predicting accurate maximum cell temperatures and hence cascading cell-to-cell propagation rates.
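The framework's core bookkeeping is Hess's law; written schematically (our notation, not the paper's), the heat released by any candidate decomposition step follows from the tabulated formation enthalpies:

\[
\Delta H_{\mathrm{rxn}} \;=\; \sum_{i \in \text{products}} \nu_i \, \Delta H_{f,i} \;-\; \sum_{j \in \text{reactants}} \nu_j \, \Delta H_{f,j},
\]

with exothermic steps (\Delta H_{\mathrm{rxn}} < 0) contributing to runaway heating once summed over the staged reactions.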
This manual gives usage information for the Charon semiconductor device simulator. Charon was developed to meet the modeling needs of Sandia National Laboratories and to improve on the capabilities of commercial TCAD simulators; in particular, it adds the ability to run very large simulations on parallel computers and to model displacement damage and other radiation effects in significant detail. The parallel capabilities are based on the MPI interface, which allows the code to be ported to a large number of parallel systems, including Linux clusters and the proprietary "big iron" systems found at the national laboratories and in large industrial settings.
Terahertz laser frequency combs based on quantum cascade lasers provide coherent, broadband, electrically pumped, THz radiation sources for use in future spectroscopic applications. Here, we explore the feasibility of such lasers in a dual-comb spectroscopy configuration for the detection of multiple molecular samples in the gas phase. The lasers span approximately 180 GHz of optical bandwidth, centered at 3.4 THz, with submilliwatt total optical power. One of the main advantages of dual-comb spectroscopy is its high speed, which opens up the possibility for direct observations of chemical reaction dynamics in the terahertz spectral region. As a proof-of-concept, we recorded continuously evolving spectra from gas mixtures with 1 ms temporal resolution.
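For context, the speed advantage stems from the standard dual-comb down-conversion relation (a textbook identity, not a result specific to this work): two combs with repetition-rate difference \Delta f_{\mathrm{rep}} and offset difference \Delta f_{0} produce multiheterodyne beats at

\[
f_{\mathrm{RF},n} \;=\; \Delta f_{0} + n \, \Delta f_{\mathrm{rep}},
\]

so the optical spectrum is mapped into the radio-frequency domain, compressed by roughly f_{\mathrm{rep}} / \Delta f_{\mathrm{rep}}, where it can be digitized on millisecond timescales.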
Tetragonal tungsten bronze (TTB) materials are one of the most promising classes of materials for ferroelectric and nonlinear optical devices, owing to their unique noncentrosymmetric crystal structure. In this work, a new TTB phase of LiNb6Ba5Ti4O30 (LNBTO) has been discovered and studied. A small amount of a secondary phase, LiTiO2 (LTO), has been incorporated as nanopillars vertically embedded in the LNBTO matrix. The new multifunctional nanocomposite thin film presents an exotic, highly anisotropic microstructure and properties, e.g., strong ferroelectricity, high optical transparency, anisotropic dielectric function, and strong optical nonlinearity evidenced by second harmonic generation results. An optical waveguide structure based on stacks of α-Si on SiO2/LNBTO-LTO has been fabricated, exhibiting low optical dispersion with an optimized evanescent field staying in the LNBTO-LTO active layer. This work highlights the combination of new TTB material designs and vertically aligned nanocomposite structures for further enhanced anisotropic and nonlinear properties.
The dramatic increase in the scale of current and planned high-end HPC systems is leading to new challenges, such as the growing costs of data movement and IO and the reduced mean time between failures (MTBF) of system components. In-situ workflows, i.e., executing the entire application workflow on the HPC system, have emerged as an attractive approach to address data-related challenges by moving computations closer to the data, and staging-based frameworks have been effectively used to support in-situ workflows at scale. However, the resilience of these staging-based solutions has not been addressed, and they remain susceptible to expensive data failures. Furthermore, naive use of data resilience techniques such as n-way replication and erasure codes can impact latency and/or result in significant storage overheads. In this article, we present CoREC, a scalable and resilient in-memory data staging runtime for large-scale in-situ workflows. CoREC uses a novel hybrid approach that combines dynamic replication with erasure coding based on data access patterns. It also leverages multiple levels of replication and erasure coding to support diverse data resiliency requirements. Furthermore, the article presents optimizations for load balancing and conflict-avoiding encoding, and a low-overhead, lazy data recovery scheme. We have implemented the CoREC runtime, deployed it with the DataSpaces staging service on leadership-class computing machines, and present an experimental evaluation in this article. The experiments demonstrate that CoREC can tolerate in-memory data failures while maintaining low latency and sustaining high overall storage efficiency at large scales.
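To make the storage tradeoff concrete (illustrative numbers, not CoREC's actual configuration): n-way replication stores n full copies of each object, whereas a Reed-Solomon erasure code with k data fragments and m parity fragments stores only (k+m)/k times the raw data,

\[
\text{overhead}_{\text{rep}} = n \;\; (n = 3 \Rightarrow 3\times), \qquad \text{overhead}_{\mathrm{RS}(k,m)} = \frac{k+m}{k} \;\; (k = 4, \, m = 2 \Rightarrow 1.5\times),
\]

which is why a hybrid policy, replicating frequently accessed data for fast reads and erasure-coding cold data for storage efficiency, is attractive.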
We demonstrated production of a superior-performance biodiesel, referred to here as fatty acid fusel alcohol esters (FAFE), by reacting fusel alcohols (isobutanol, 3-methyl-1-butanol, and (S)-(-)-2-methyl-1-butanol) with oil (glyceryl trioleate) using lipase from Aspergillus oryzae. Reaction conditions corresponding to a molar ratio of 5:1 (fusel alcohols to oil), enzyme loading of 2% w/w, reaction temperature of 35 °C, shaking speed of 250 rpm, and reaction time of 24 h achieved >97% conversion to FAFE. Further, FAFE obtained from reacting a fusel alcohol mixture with corn oil were evaluated for use as a fuel for diesel engines. FAFE mixtures showed superior combustion and cold-flow properties, with derived cetane numbers up to 4.8 points higher, cloud points up to 6 °C lower, and heats of combustion up to 2.1% higher than the corresponding FAME samples, depending on the fusel mixture used. This represents a significant improvement in all three metrics, which are typically anti-correlated. FAFE provides a new opportunity for expanded usage of biodiesel by addressing feedstock limitations, fuel performance, and low-temperature tolerance.
The structure of the edge plasma in a magnetic confinement system has a strong impact on overall plasma performance. We uncover for the first time a magnetic-field-direction-dependent density shelf, i.e., a local flattening of the density radial profile near the magnetic separatrix, in high-confinement plasmas with low edge collisionality in the DIII-D tokamak. The density shelf is correlated with a doubly peaked density profile near the divertor target plate, which tends to occur for operation with the ion B×∇B drift direction away from the X-point, as currently employed for DIII-D advanced tokamak scenarios. This double-peaked divertor plasma profile is connected to the density shelf via E×B drifts arising from a strong radial electric field induced by the radial electron temperature gradient near the divertor target. The drifts lead to a reversal of the poloidal flow above the divertor target, resulting in the formation of the density shelf. The edge density shelf can be further enhanced at higher heating power, preventing large, periodic bursts of the plasma, i.e., edge-localized modes, in the edge region, consistent with ideal magnetohydrodynamics calculations.
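The drift invoked here is the standard guiding-center E×B drift; in the usual notation (quoted for orientation, not taken from the paper),

\[
\mathbf{v}_{E \times B} \;=\; \frac{\mathbf{E} \times \mathbf{B}}{B^{2}},
\]

so a strong radial electric field near the divertor target drives a poloidal flow whose direction is set by the field orientation, consistent with the reported dependence on the ion B×∇B drift direction.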
Mimetic methods discretize divergence by restricting the Gauss theorem to mesh cells. Because point clouds lack such geometric entities, construction of a compatible meshfree divergence remains a challenge. In this work, we define an abstract Meshfree Mimetic Divergence (MMD) operator on point clouds by contraction of field and virtual face moments. This MMD satisfies a discrete divergence theorem, provides a discrete local conservation principle, and is first-order accurate. We consider two MMD instantiations. The first one assumes a background mesh and uses generalized moving least squares (GMLS) to obtain the necessary field and face moments. This MMD instance is appropriate for settings where a mesh is available but its quality is insufficient for a robust and accurate mesh-based discretization. The second MMD operator retains the GMLS field moments but defines virtual face moments using computationally efficient weighted graph-Laplacian equations. This MMD instance does not require a background grid and is appropriate for applications where mesh generation creates a computational bottleneck. It allows one to trade an expensive mesh generation problem for a scalable algebraic one, without sacrificing compatibility with the divergence operator. We demonstrate the approach by using the MMD operator to obtain a virtual finite-volume discretization of conservation laws on point clouds. Numerical results in the paper confirm the mimetic properties of the method and show that it behaves similarly to standard finite volume methods.
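As a point of reference for the moment-based construction, the sketch below implements a plain moving-least-squares divergence estimate on a 2D point cloud. It is a simplified stand-in for the paper's GMLS field moments, not the MMD operator itself, and it does not inherit the mimetic (discrete divergence theorem) properties:

import numpy as np

def mls_divergence(points, field, n_neighbors=12):
    # Estimate div(F) at each point of a 2D cloud by fitting a local
    # linear model (basis: 1, x, y) to each field component with
    # distance-weighted least squares, then summing dFx/dx + dFy/dy.
    n = len(points)
    div = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        idx = np.argsort(d)[:n_neighbors]          # nearest neighbors
        A = np.hstack([np.ones((n_neighbors, 1)),
                       points[idx] - points[i]])   # local coordinates
        w = np.exp(-(d[idx] / (d[idx].max() + 1e-14)) ** 2)
        Aw = A * w[:, None]
        for c in range(2):
            coef, *_ = np.linalg.lstsq(Aw, w * field[idx, c], rcond=None)
            div[i] += coef[1 + c]                  # dF_c / dx_c
    return div

# Sanity check: F = (x, y) has divergence 2 everywhere; the linear fit
# reproduces linear fields exactly, including at cloud boundaries.
rng = np.random.default_rng(0)
pts = rng.random((400, 2))
print(mls_divergence(pts, pts.copy())[:5])  # ~[2. 2. 2. 2. 2.]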
We utilize generalized moving least squares (GMLS) to develop meshfree techniques for discretizing hydrodynamic flow problems on manifolds. We use exterior calculus to formulate incompressible hydrodynamic equations in the Stokesian regime and handle the divergence-free constraints via a generalized vector potential. This provides less coordinate-centric descriptions and enables the development of efficient numerical methods and splitting schemes for the fourth-order governing equations in terms of a system of second-order elliptic operators. Using a Hodge decomposition, we develop methods for manifolds having spherical topology. We show the methods exhibit high-order convergence rates for solving hydrodynamic flows on curved surfaces. The methods also provide general high-order approximations for the metric, curvature, and other geometric quantities of the manifold and associated exterior calculus operators. The approaches also can be utilized to develop high-order solvers for other scalar-valued and vector-valued problems on manifolds.
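The role of the generalized vector potential can be seen from the Hodge decomposition on a genus-zero surface (a standard result, stated here in our notation): every tangential vector field splits as

\[
\mathbf{v} \;=\; \operatorname{grad}_{\Gamma} \phi \;+\; \operatorname{curl}_{\Gamma} \psi,
\]

with no harmonic component on spherical topology, so a divergence-free field is represented by the scalar \psi alone and the fourth-order governing equations reduce to a sequence of second-order elliptic solves for scalar potentials.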
Despite extensive research on symmetric polynomial quadrature rules for triangles, as well as approaches to their calculation, few studies have focused on non-polynomial functions, particularly on their integration using symmetric triangle rules. In this paper, we present two approaches to computing symmetric triangle rules for singular integrands by developing rules that can integrate arbitrary functions. The first approach is well suited for a moderate number of points and retains much of the efficiency of polynomial quadrature rules. The second approach better addresses large numbers of points, though it is less efficient than the first approach. We demonstrate the effectiveness of both approaches on singular integrands, where they can often yield relative errors two orders of magnitude smaller than those from polynomial quadrature rules.
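For a concrete baseline of the kind such rules are measured against, the snippet below integrates a point-singular function over a triangle with the classical Duffy transformation, a standard textbook device and not the symmetric rules developed in the paper:

import numpy as np

def duffy_rule(n):
    # Tensor Gauss rule on the unit square pushed through the Duffy map
    # (x, y) = (u, u*v), which sends the square onto the triangle
    # {0 <= y <= x <= 1} and clusters points toward the vertex (0, 0).
    g, w = np.polynomial.legendre.leggauss(n)
    u, wu = 0.5 * (g + 1.0), 0.5 * w            # shift [-1, 1] -> [0, 1]
    U, V = np.meshgrid(u, u)                    # U: u-coords, V: v-coords
    W = np.outer(wu, wu) * U                    # Jacobian of the map is u
    return U.ravel(), (U * V).ravel(), W.ravel()

# Integrand with a point singularity at the origin; the exact integral
# over this triangle is log(1 + sqrt(2)) ~= 0.881374.
f = lambda x, y: 1.0 / np.sqrt(x ** 2 + y ** 2)
x, y, w = duffy_rule(8)
print(np.sum(w * f(x, y)))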
The goal of the ExaWind project is to enable predictive simulations of wind farms comprised of many megawatt-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources. The primary physics codes in the ExaWind project are Nalu-Wind, an unstructured-grid solver for the acoustically incompressible Navier-Stokes equations, and OpenFAST, a whole-turbine simulation code. The Nalu-Wind model consists of the mass-continuity Poisson-type equation for pressure and a momentum equation for the velocity. For such modeling approaches, simulation times are dominated by linear-system setup and solution for the continuity and momentum systems. For the ExaWind challenge problem, the moving meshes greatly affect overall solver costs because reinitialization of matrices and recomputation of preconditioners are required at every time step. In this report, we evaluate GPU performance baselines for the linear solvers in the Trilinos and hypre solver stacks using two representative Nalu-Wind simulations: an atmospheric boundary layer precursor simulation on a structured mesh, and a fixed-wing simulation using unstructured overset meshes. Both strong-scaling and weak-scaling experiments were conducted on the OLCF supercomputer Summit and similar proxy clusters. We focused on the performance of multi-threaded Gauss-Seidel and two-stage Gauss-Seidel, which are extensions of classical Gauss-Seidel; of one-reduce GMRES, a communication-reducing variant of Krylov GMRES; and of algebraic multigrid (AMG) methods that incorporate the aforementioned methods. The team has established that AMG methods are capable of solving linear systems arising from the fixed-wing overset meshes on CPU, a critical intermediate result for ExaWind FY20 Q3 and Q4 milestones. For the fixed-wing strong-scaling study (a model with 3M grid points), the team identified that Nalu-Wind simulations with the new Trilinos and hypre solvers scale to modest GPU counts, maintaining above 70% efficiency up to 6 GPUs. However, significant performance bottlenecks remain: matrix assembly (hypre) and AMG setup (hypre and Trilinos). In the weak-scaling experiments (going from 0.4M to 211M grid points), it is shown that the solver apply phases are faster on GPUs, but that Nalu-Wind simulation times grow, primarily due to the multigrid-setup process. Finally, based on the report outcomes, we propose a linear-solver path forward for the remainder of the ExaWind project. Near term, the NREL team will continue their work on GPU-based linear-system assembly. They will also investigate how the use of alternatives to the NVIDIA UVM (unified virtual memory) paradigm affects performance. Longer term, the NREL team will evaluate algorithmic performance on other types of accelerators and merge their improvements back to the main hypre repository branch. Near term, the Trilinos team will address performance bottlenecks identified in this milestone, such as implementing a GPU-based segregated momentum solve and reusing matrix graphs across linear-system assembly phases. Longer term, the Trilinos team will perform detailed analysis and optimization of the multigrid setup.
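For orientation, the serial kernel that the multi-threaded and two-stage variants generalize is classical Gauss-Seidel; a minimal sketch follows (illustrative only, not the Trilinos or hypre implementation):

import numpy as np

def gauss_seidel(A, b, x, sweeps=25):
    # Classical forward Gauss-Seidel: update unknowns in order, using
    # the newest values as soon as they are available.
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Diagonally dominant test system, for which the sweeps converge.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b, np.zeros(3))
print(np.allclose(A @ x, b))  # True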
Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools to support and guide experimental efforts and for the prediction of atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principle-driven methodologies to model complex chemical and materials processes. Over the past few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe correlated behavior of electrons in molecular and condensed phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have been able to take advantage of computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
Scientific knowledge and engineering tools for predicting coastal erosion are largely confined to temperate climate zones that are dominated by non-cohesive sediments. The pattern of erosion exhibited by the ice-bonded permafrost bluffs in Arctic Alaska, however, is not well-explained by these tools. Investigation of the oceanographic, thermal, and mechanical processes that are relevant to permafrost bluff failure along Arctic coastlines is needed. We conducted physics-based numerical simulations of mechanical response that focus on the impact of geometric and material variability on permafrost bluff stress states for a coastal setting in Arctic Alaska that is prone to toppling mode block failure. Our three-dimensional geomechanical boundary-value problems output static realizations of compressive and tensile stresses. We use these results to quantify variability in the loci of potential instability. We observe that niche dimension affects the location and magnitude of the simulated maximum tensile stress more strongly than the bluff height, ice wedge polygon size, ice wedge geometry, bulk density, Young's Modulus, and Poisson's Ratio. Our simulations indicate that variations in niche dimension can produce radically different potential failure areas and that even relatively shallow vertical cracks can concentrate displacement within ice-bonded permafrost bluffs. These findings suggest that stability assessment approaches, for which the geometry of the failure plane is delineated a priori, may not be ideal for coastlines similar to our study area and could hamper predictions of erosion rates and nearshore sediment/biogeochemical loading.
We report the formation mechanism and compositions of a solid-electrolyte interphase (SEI) on a microporous carbon/sulfur (MC/S) cathode in Li-S batteries using a carbonate-based electrolyte (1 M LiPF6 in ethylene carbonate (EC)/dimethyl carbonate, v:v = 1:1). Through characterizations using 1D and 2D solution-phase nuclear magnetic resonance spectroscopy, coupled with model chemical reactions and DFT calculations, we have identified two critical roles of Li+ in steering the SEI formation. First, the preferential solvation of Li+ by EC in the mixed carbonate electrolyte renders EC as the dominant participant in the SEI formation, and second, Li+ coordination to the EC carbonyl alters activation barriers and changes the reaction pathways relative to Na+. The main organic components in the SEI are identified as lithium ethylene monocarbonate and lithium methyl carbonate, which are virtually identical to those formed on Li and graphite anodes of lithium-ion batteries but via a different pathway.
Here, the design, fabrication, and characterization of an actively tunable long-wave infrared detector, made possible through direct integration of a graphene-enabled metasurface with a conventional type-II superlattice infrared detector, are reported. This structure allows for post-fabrication tuning of the detector spectral response through voltage-induced modification of the carrier density within graphene and, therefore, its plasmonic response. These changes modify the transmittance through the metasurface, which is fabricated monolithically atop the detector, allowing for spectral control of light reaching the detector. Importantly, this structure provides a fabrication-controlled alignment of the metasurface filter to the detector pixel and is entirely solid-state. Using single pixel devices, relative changes in the spectral response exceeding 8% have been realized. These proof-of-concept devices present a path toward solid-state hyperspectral imaging with independent pixel-to-pixel spectral control through a voltage-actuated dynamic response.
We review recent advances in the capabilities of the open source ab initio Quantum Monte Carlo (QMC) package QMCPACK and the workflow tool Nexus used for greater efficiency and reproducibility. The auxiliary field QMC (AFQMC) implementation has been greatly expanded to include k-point symmetries, tensor-hypercontraction, and accelerated graphical processing unit (GPU) support. These scaling and memory reductions greatly increase the number of orbitals that can practically be included in AFQMC calculations, increasing the accuracy. Advances in real space methods include techniques for accurate computation of bandgaps and for systematically improving the nodal surface of ground state wavefunctions. Results of these calculations can be used to validate application of more approximate electronic structure methods, including GW and density functional based techniques. To provide an improved foundation for these calculations, we utilize a new set of correlation-consistent effective core potentials (pseudopotentials) that are more accurate than previous sets; these can also be applied in quantum-chemical and other many-body applications, not only QMC. These advances increase the efficiency, accuracy, and range of properties that can be studied in both molecules and materials with QMC and QMCPACK.
Vansco, Michael F.; Caravan, Rebecca L.; Zuraski, Kristen; Winiberg, Frank A.F.; Au, Kendrew; Trongsiriwat, Nisalak; Walsh, Patrick J.; Osborn, David L.; Percival, Carl J.; Khan, M.A.H.; Shallcross, Dudley E.; Taatjes, Craig A.; Lester, Marsha I.
Ozonolysis of isoprene, one of the most abundant volatile organic compounds emitted into the Earth's atmosphere, generates two four-carbon unsaturated Criegee intermediates, methyl vinyl ketone oxide (MVK-oxide) and methacrolein oxide (MACR-oxide). The extended conjugation between the vinyl substituent and carbonyl oxide groups of these Criegee intermediates facilitates rapid electrocyclic ring closures that form five-membered cyclic peroxides, known as dioxoles. This study reports the first experimental evidence of this novel decay pathway, which is predicted to be the dominant atmospheric sink for specific conformational forms of MVK-oxide (anti) and MACR-oxide (syn) with the vinyl substituent adjacent to the terminal O atom. The resulting dioxoles are predicted to undergo rapid unimolecular decay to oxygenated hydrocarbon radical products, including acetyl, vinoxy, formyl, and 2-methylvinoxy radicals. In the presence of O2, these radicals rapidly react to form peroxy radicals (ROO), which quickly decay via carbon-centered radical intermediates (QOOH) to stable carbonyl products that were identified in this work. The carbonyl products were detected under thermal conditions (298 K, 10 Torr He) using multiplexed photoionization mass spectrometry (MPIMS). The main products (and associated relative abundances) originating from unimolecular decay of anti-MVK-oxide and subsequent reaction with O2 are formaldehyde (88 ± 5%), ketene (9 ± 1%), and glyoxal (3 ± 1%). Those identified from the unimolecular decay of syn-MACR-oxide and subsequent reaction with O2 are acetaldehyde (37 ± 7%), vinyl alcohol (9 ± 1%), methylketene (2 ± 1%), and acrolein (52 ± 5%). In addition to the stable carbonyl products, the secondary peroxy chemistry also generates OH or HO2 radical coproducts.
We developed a method for examining ice formation on solid substrates exposed to cloud-like atmospheres. Our experimental approach couples video-rate optical microscopy of ice formation with high-resolution atomic-force microscopy (AFM) of the initial mineral surface. We demonstrate how colocating stitched AFM images with video microscopy can be used to relate the likelihood of ice formation to nanoscale properties of a mineral substrate, e.g., the abundance of surface steps of a certain height. We also discuss the potential of this setup for future iterative investigations of the properties of ice nucleation sites on materials.
Easterling, Charles P.; Coste, Guilhem; Sanchez, Jose E.; Fanucci, Gail E.; Sumerlin, Brent S.
We report a post-polymerization modification strategy to functionalize methacrylic copolymers through enol-ester transesterification. A new monomer, vinyl methacryloxy acetate (VMAc), containing both enol-ester and methacryloyl functionality, was successfully copolymerized with methyl methacrylate (MMA) by selective reversible addition-fragmentation chain transfer (RAFT) polymerization. Post-polymerization modification of the pendent enol esters proceeded through an "irreversible" transesterification process, driven by the low nucleophilicity of the tautomerization product, resulting in high conversion under mild conditions.
Despite its scarcity in terrestrial life, helium's effects on microstructure evolution and thermo-mechanical properties can have a significant impact on the operation and lifetime of applications including advanced structural steels in fast fission reactors, plasma-facing and structural materials in fusion devices, spallation neutron target designs, energetic alpha emissions in actinides, helium precipitation in tritium-containing materials, and nuclear waste materials. The small size of a helium atom, combined with its near insolubility in almost every solid, makes the helium-solid interaction extremely complex over multiple length and time scales. This Special Issue, "Radiation Damage in Materials—Helium Effects", contains review articles and full-length papers on new irradiation material research activities and novel material ideas using experimental and/or modeling approaches. These studies elucidate the interactions of helium with various extreme environments and tailored nanostructures, as well as their impact on microstructural evolution and material properties.
Explosions detonated in geologic media damage it in various ways via processes that include vaporization, fracturing, crushing of interstitial pores, etc. Seismic waves interact with the altered media in ways that could be important to the discrimination, characterization, and location of the explosions. As part of the Source Physics Experiment, we acquired multiple pre- and post-explosion near-field seismic datasets and analyzed changes to seismic P-wave velocity. Our results indicate that the first explosion detonated in an intact media can cause fracturing and, consequently, a decrease in P-wave velocity. After the first explosion, subsequent detonations in the pre-damaged media have limited discernible effects. We hypothesize this is due to the stress-relief provided by a now pre-existing network of fractures into which gasses produced by the explosion migrate. We also see an overall increase in velocity of the damaged region over time, either due to a slow healing process or closing of the fractures by subsequent explosions.
The NASA Space Shuttle Tiles were used by Sandia in the process of developing the Laser Dynamic Range Imager (LDRI) in support of NASA's Return to Flight following the 2003 Space Shuttle Columbia disaster. The heat shield tiles, provided to Sandia by NASA, are identical to those that were located on the underbelly of the Space Shuttle Columbia's orbiter. Sandia used the tiles to test the efficacy of the LDRI's imaging capabilities. The LDRI was utilized during every space shuttle mission between 2005 and 2011. The tiles are currently located in Building 891 and need to be moved to free up space for operational use. Given their technical significance, Sandia would like to archive them as historically significant items in long-term storage until such time as they can be appropriately displayed or employed as a demonstration artifact. This document provides basic information about the provenance of this artifact.
We propose a bilevel optimization approach for the estimation of parameters in nonlocal image denoising models. The parameters we consider are both the space-dependent fidelity weight and weights within the kernel of the nonlocal operator. In both cases we investigate the differentiability of the solution operator in function spaces and derive a first order optimality system that characterizes local minima. For the numerical solution of the problems, we propose a second-order trust-region algorithm in combination with a finite element discretization of the nonlocal denoising models and we introduce a computational strategy for the solution of the resulting dense linear systems. Several experiments illustrate the applicability and effectiveness of our approach.
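Schematically (our notation, suppressing function-space details), the estimation problem has the bilevel form

\[
\min_{\alpha, \, w} \; \tfrac{1}{2} \, \| u(\alpha, w) - u_{\mathrm{true}} \|^{2}
\quad \text{s.t.} \quad
u(\alpha, w) \;=\; \arg\min_{u} \; \tfrac{1}{2} \int_{\Omega} \alpha(x) \, \big( u(x) - f(x) \big)^{2} \, dx \;+\; \tfrac{1}{4} \iint w(x, y) \, \big( u(y) - u(x) \big)^{2} \, dy \, dx,
\]

where \alpha is the space-dependent fidelity weight, w the kernel weight of the nonlocal operator, and f the noisy image; the differentiability of the solution map (\alpha, w) \mapsto u is what the derived first-order optimality system rests on.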
Nonlocal models provide accurate representations of physical phenomena ranging from fracture mechanics to complex subsurface flows, settings in which traditional partial differential equation models fail to capture effects caused by long-range forces at the microscale and mesoscale. However, the application of nonlocal models to problems involving interfaces, such as multimaterial simulations and fluid-structure interaction, is hampered by the lack of a physically consistent interface theory which is needed to support numerical developments and, among other features, reduces to classical models in the limit as the extent of nonlocal interactions vanish. In this paper, we use an energy-based approach to develop a formulation of a nonlocal interface problem which provides a physically consistent extension of the classical perfect interface formulation for partial differential equations. Numerical examples in one and two dimensions validate the proposed framework and demonstrate the scope of our theory.
Historically, neuroscience principles have heavily influenced artificial intelligence (AI), for example through the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, for example the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.
Materials exhibiting metal-to-insulator transitions (MITs) could enable low power neuromorphic computing, but progress is hindered by insufficient mechanistic understanding. In this issue of Matter, Banerjee and colleagues describe with intricate detail a new MIT mechanism in β′-CuxV2O5, with potential applications to neuromorphic computing.
Casias, Lilian K.; Morath, Christian P.; Steenbergen, Elizabeth H.; Umana-Membreno, Gilberto A.; Webster, Preston T.; Logan, Julie V.; Kim, Jin K.; Balakrishnan, Ganesh; Faraone, Lorenzo; Krishna, Sanjay
Anisotropic carrier transport properties of unintentionally doped InAs/InAs0.65Sb0.35 type-II strain-balanced superlattice material are evaluated using temperature- and field-dependent magnetotransport measurements performed in the vertical direction on a substrate-removed metal-semiconductor-metal device structure. To best isolate the measured transport to the superlattice, device fabrication entails flip-chip bonding and backside device processing to remove the substrate material and deposit contact metal directly to the bottom of an etched mesa. High-resolution mobility spectrum analysis is used to calculate the conductance contribution and corrected mixed vertical-lateral mobility of the two carrier species present. Combining the latter with lateral mobility results from in-plane magnetotransport measurements on identical superlattice material allows for the calculation of the true vertical majority electron and minority hole mobilities; amplitudes of 4.7 × 10³ cm²/V·s and 1.60 cm²/V·s are determined at 77 K, respectively. The temperature-dependent results show that vertical hole mobility rapidly decreases with decreasing temperature due to trap-induced localization and then hopping transport, whereas vertical electron mobility appears phonon-scattering-limited at high temperature, giving way to interface roughness scattering at low temperatures, analogous to the lateral electron mobility but with a lower overall magnitude.
The coordination behavior of the tridentate alkoxy ligand 6,6'-(((2-hydroxyethyl)azanediyl)bis(methylene))bis(2,4-di-tert-butylphenol) (termed H3-AM-DBP2) with group 4 metal alkoxides ([M(OR)4]) in a 1:1 ratio was previously found to generate [(ONep)Ti(κ⁴(O,O’,O”,N)-AM-DBP2)] and [(OR)M(κ⁴(μ-O,O’,O”,N)-AM-DBP2)]₂ (M = Zr, Hf). Additional studies revealed that increasing the stoichiometric ratio to 1:2 H3-AM-DBP2:[M(OR)4] led to the isolation of [(ONep)Ti(κ⁴(μ-O,O’,O”,N)-AM-DBP2)(μ-ONep)Ti(ONep)₃] (1)·tol, [(OBuᵗ)Zr(κ⁴(μ-O,O’,O”,N)-AM-DBP2)(μ-OBuᵗ)Zr(OBuᵗ)₃] (2), and [(OBuᵗ)Hf(κ⁴(μ-O,O’,O”,N)-AM-DBP2)(μ-OBuᵗ)Hf(OBuᵗ)₃] (3). The asymmetric dinuclear complexes 1-3 resemble the chelation of a [M(OR)4] moiety to a “(OR)M(κ⁴(O,O’,O”,N)-AM-DBP2)” fragment. The metal complexed by the AM-DBP2 ligand has a pseudo-octahedral geometry, while the other metal adopts an intermediate trigonal bipyramidal (TBP-5)/square base pyramidal (SBP-5) geometry for 1 but a distorted SBP-5 geometry for both 2 and 3. The structure and properties of 1-3 were analyzed by computational modeling and fully characterized by standard analytical methods.
Many types of vehicles using fuels that differ from typical hydrocarbons such as gasoline and diesel are in use throughout the world. These include vehicles running on the combustion of natural gas and propane, as well as electric-drive vehicles utilizing batteries or hydrogen as energy storage. These alternative fuels pose hazards that are different from those of traditional fuels, and the safety of these vehicles is being questioned in areas such as tunnels and other enclosed spaces. Much scientific research and analysis has been conducted on tunnel and garage hazard scenarios; however, the data and conclusions may not be immediately applicable to highway tunnel owners and authorities having jurisdiction over tunnels. This report provides a comprehensive, concise summary of the literature available characterizing the various hazards presented by all alternative fuel vehicles, including light-duty, medium- and heavy-duty vehicles, as well as buses. Research characterizing both worst-case and more plausible scenarios, along with risk-based analysis, is also summarized. Gaps in the research are identified in order to guide future research efforts to provide a complete analysis of the hazards and recommendations for the use of alternative fuel vehicles in tunnels.
This interim report is an update of the report by Jové Colón et al. (2019; M4SF-19SN010301091) describing international collaboration activities pertaining to the FEBEX-DP and DECOVALEX19 Task C projects. Although work on these two international repository science activities is no longer continuing by the international partners, investigations on the collected data and samples are still ongoing. Descriptions of these underground research laboratory (URL) R&D activities are given in Jové Colón et al. (2018; 2019) but are repeated here for completeness. The 2019 status of work conducted at Sandia National Laboratories (SNL) on these two activities is summarized along with other international collaboration activities in Birkholzer et al. (2019).
The Biometric Access Control System Industrialization project was initiated as Project 2 under the umbrella Cooperative Research and Development Agreement (CRADA) No. SC 14/01816.00.00 between National Technology and Engineering Solutions of Sandia (NTESS) and AQUILA on July 16, 2019. The purpose of this project has been to evaluate alternatives to the more traditional biometric access control methods, such as fingerprints and retinal scanners.
A useful performance metric for Intelligence, Surveillance, and Reconnaissance (ISR) radar systems is the Impulse Response (IPR). This is true both as a fidelity metric for the signal channel and as a stability measure across multiple pulses. The IPR represents performance with respect to both amplitude and phase modulations of the transfer function for components, circuits, subassemblies, and even the looped radar hardware. The proper IPR performance specification limits will depend on the radar operating mode. Generally, the specification will be the intersection of the strictest requirements across modes.
This study evaluated gamma irradiation as a sterilization method to enable reuse of two models of N95 respirators and thereby increase their availability during a shortage. The Sandia National Laboratories Gamma Irradiation Facility was used to irradiate two different models of N95 filtering facepiece respirators at doses ranging from 0 kGy(tissue) to 50 kGy(tissue). The following tests were used to determine the efficacy of the respirators after irradiation sterilization: Ambient Aerosol Condensation Nuclei Counter Quantitative Fit Test, tensile test, strain cycling, oscillatory dynamic mechanical analysis, microscopic image analysis of fiber layers, and electrostatic field measurements. Both respirator models exhibited statistically significant changes after gamma irradiation, as shown by the Quantitative Fit Test, electrostatic testing, and aerosol testing. The change in the electrostatic capability of the filter reduced filtration efficiency for challenge particles near 200 nm in size by approximately 40-50%. Both tested respirators showed statistically significant changes associated with gamma sterilization. However, our results indicate that choices in materials and manufacturing methods used to achieve N95 filtration lead to different magnitudes of damage when exposed to gamma radiation at sterilization-relevant doses. This damage results in lower filtration performance. While our sample size (two types of respirators) was small, we did observe a change in electrostatic properties on a filter layer that coincided with failure of the Quantitative Fit Test.
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
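To give a flavor of the input the reference guide documents, here is a minimal SPICE-style netlist of the kind Xyce accepts (a generic illustration, not an example from the guide itself):

* minimal RC transient example
V1 1 0 5V
R1 1 2 1k
C1 2 0 1uF
.TRAN 10us 10ms
.PRINT TRAN V(1) V(2)
.END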
This study evaluated gamma irradiation as a sterilization method to enable reuse of two models of N95 respirators and thereby increase their availability during a shortage. The Sandia National Laboratories Gamma Irradiation Facility was used to irradiate two different models of N95 filtering facepiece respirators at doses ranging from 0 kGy(tissue) to 50 kGy(tissue). The following tests were used to determine the efficacy of the respirators after irradiation sterilization: Ambient Aerosol Condensation Nuclei Counter Quantitative Fit Test, tensile test, strain cycling, oscillatory dynamic mechanical analysis, microscopic image analysis of fiber layers, and electrostatic field measurements. Both respirator models exhibited statistically significant changes after gamma irradiation, as shown by the Quantitative Fit Test, electrostatic testing, and aerosol testing. The change in the electrostatic charge of the filter was correlated with an approximately 40-50% reduction in the capture of particles near 200 nm in size. Both tested respirators showed statistically significant changes associated with gamma sterilization. However, our results indicate that choices in materials and manufacturing methods used to achieve N95 filtration lead to different magnitudes of damage when exposed to gamma radiation at sterilization-relevant doses. This damage results in lower filtration performance. While our sample size (two types of respirators) was small, we did observe a change in electrostatic properties on a filter layer that coincided with the failure of the Quantitative Fit Test and a reduction in aerosol filtering efficiency. Key Words: N95 respirators, respirators, airborne transmission, pandemic prevention, COVID-19, gamma sterilization
The Centers for Disease Control and Prevention (CDC) has recommended that the public wear cloth face coverings in public settings. Face coverings and face shields can be made using Commonly Available Materials (CAMs). As part of the Sandia COVID-19 LDRD effort (funded under the Materials Science Investment Area), the Sandia E-PiPEline task evaluated design options for face coverings and face shields, considering their effectiveness, durability, build difficulty, build cost, and comfort. Observations from this investigation are presented here to provide guidelines for home construction of face coverings and face shields. This executive summary includes a brief roadmap of the analysis methodology, two one-page handouts intended for distribution to the public at large (one for face coverings and one for face shields), and additional observations regarding potential solutions for face coverings and face shields that further support the one-page handouts.
The solderability of Metglas™ (a subsidiary of Hitachi Metals America, Ltd.) 2826 MB, a rapidly solidified metallic foil, was evaluated by the meniscus height/wetting force method for tin-silver-copper (SnAgCu) and tin-silver-bismuth (SnAgBi) solders to understand the effects of the extreme non-equilibrium condition of the Metglas™ surface on solderability performance. Of the variables studied here (solder temperature, heat treatment, and solder composition), solder composition had the largest impact on contact angle. Flux and foil composition remained constant throughout; however, these factors would also be predicted to significantly affect solderability. A greater understanding of the manner whereby non-equilibrium cooling affects the solderability of these foils will broaden the application of soldering technology to structures fabricated by rapid-cooling processes (e.g., additively manufactured coatings and parts). Developing a robust database of Pb-free solderability behavior is also necessary as industry transitions from tin-lead (SnPb) to lead-free (Pb-free) solders.
As part of the Department of Energy response to the novel coronavirus disease (COVID-19) pandemic of 2020, a modeling effort was sponsored by the DOE Office of Science. Through this effort, an integrated planning framework was developed whose capabilities were demonstrated with the combination of a treatment resource demand model and an optimization model for routing supplies. This report documents the framework and models, along with an application involving ventilator demands and supplies in the continental United States. The goal of this application is to test the feasibility of implementing nationwide ventilator sharing in response to the COVID-19 crisis. Multiple scenarios were run using different combinations of forecasted and observed patient streams, and it is demonstrated that using a "worst-case" forecast for planning may be preferable to best mitigate supply-demand risks in an uncertain future. There is also a brief discussion of model uncertainty and its implications for the results.
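As a toy illustration of the routing component (hypothetical numbers and a generic transportation formulation, not the report's actual model), a linear program that ships ventilators from supply points to demand points at minimum cost can be set up as follows:

import numpy as np
from scipy.optimize import linprog

# Two supply sites and three demand sites (hypothetical values).
supply = [500, 300]                 # ventilators available at each site
demand = [200, 350, 250]            # ventilators required at each site
cost = np.array([[1.0, 2.0, 3.0],   # cost[i, j]: shipping cost per unit
                 [4.0, 1.0, 2.5]])  # from supply site i to demand site j

c = cost.ravel()                    # decision variable x[i, j], row-major
A_ub = np.kron(np.eye(2), np.ones(3))      # sum_j x[i, j] <= supply[i]
A_eq = np.hstack([np.eye(3), np.eye(3)])   # sum_i x[i, j] == demand[j]

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
print(res.x.reshape(2, 3))          # optimal shipment plan
print(res.fun)                      # total shipping cost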
Simple but mission-critical internet-based applications that require extremely high reliability and availability could potentially benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is ordinarily publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. However, might it be possible to encode an application's business logic and data for these platforms in such a way that it becomes impossible for unauthorized parties to infer any meaningful information whatsoever about the semantics of the data and the operations being performed on that data? In this report, we describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a system concept developed at Sandia that achieves this security goal in a limited, but still useful, range of circumstances. GABLE uses simple but effective algorithms to permit secure private execution of garbled state machines (and more efficient garbled circuits) on public computing resources. We give an example working implementation for garbled state machines, written using the Python and Solidity programming languages, and outline how our methods can be extended to support a more powerful garbled universal circuit model of computation. The capability embodied by the GABLE system has significant potential applications, a few of which we discuss in this report.
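To convey the flavor of the garbled-state-machine idea (a toy construction for intuition only; GABLE's actual algorithms and on-chain encoding differ), each transition-table row can be masked so that an evaluator holding exactly one label per wire learns only the next-state label:

import os, hashlib

def H(*parts):
    # Hash used both to index rows and to derive one-time pads.
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical 2-state toggle machine: input '1' flips the state.
states, inputs = ["A", "B"], ["0", "1"]
transition = {("A", "0"): "A", ("A", "1"): "B",
              ("B", "0"): "B", ("B", "1"): "A"}

# Random labels stand in for the plaintext names; only labels and the
# masked table would ever be published.
lab = {name: os.urandom(16) for name in states + inputs}

# Each row stores the next-state label XOR-masked by a hash of the
# current state and input labels (a real garbling would also hide row
# identity and use authenticated encryption).
table = {H(lab[s], lab[i]):
         xor(H(lab[s], lab[i], b"pad")[:16], lab[transition[s, i]])
         for s in states for i in inputs}

def step(state_label, input_label):
    masked = table[H(state_label, input_label)]
    return xor(masked, H(state_label, input_label, b"pad")[:16])

assert step(lab["A"], lab["1"]) == lab["B"]  # evaluator learns only a label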
Moriarty, Patrick; Hamilton, Nicholas; Debnath, Mithu; Herges, Thomas H.; Isom, Brad; Lundquist, Julie K.; Maniaci, David C.; Naughton, Brian T.; Pauly, Rebecca; Roadman, Jason; Shaw, Will; Van Dam, Jeroen; Wharton, Sonia
The American WAKE experimeNt (AWAKEN) is an international multi-institutional wind energy field campaign to better understand wake losses within operational wind farms. Wake interactions are among the least understood physical interactions in wind plants today, leading to unexpected power and profit losses. For example, Ørsted, the world's largest offshore wind farm developer, recently announced a downward revision in energy estimates across their energy generation portfolio, primarily caused by underprediction of energy losses from wind farm blockage and wakes. In their announcement, they noted that the standard industry models used for their original energy estimates were inaccurate, and that this was likely an industry-wide issue. To help improve and validate wind plant models across scales, from individual turbines to interfarm interactions between plants, new observations such as those planned for AWAKEN are critical. These model improvements will enable both improved layout and more optimal operation of wind farms, with greater power production and improved reliability, ultimately leading to lower wind energy costs.
We discuss experimental and modeling results for the x-ray heating and temperature of laboratory photoionized plasmas. A method independent of atomic kinetics modeling is used to extract the electron temperature from the analysis of transmission spectroscopy data. The results emphasize the critical role of x-ray heating and radiation cooling in determining the energy balance of the plasma. They also demonstrate the dramatic impact of photoexcitation on excited-state populations, line emissivity, and radiation cooling. Modeling calculations performed with astrophysical codes significantly overestimate the measured temperature.
Partial differential equations (PDEs) are used with huge success to model phenomena across all scientific and engineering disciplines. However, across an equally wide swath, there exist situations in which PDEs fail to adequately model observed phenomena, or are not the best available model for that purpose. On the other hand, in many situations, nonlocal models that account for interaction occurring at a distance have been shown to more faithfully and effectively model observed phenomena that involve possible singularities and other anomalies. In this article we consider a generic nonlocal model, beginning with a short review of its definition, the properties of its solution, its mathematical analysis and of specific concrete examples. We then provide extensive discussions about numerical methods, including finite element, finite difference and spectral methods, for determining approximate solutions of the nonlocal models considered. In that discussion, we pay particular attention to a special class of nonlocal models that are the most widely studied in the literature, namely those involving fractional derivatives. The article ends with brief considerations of several modelling and algorithmic extensions, which serve to show the wide applicability of nonlocal modelling.
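The generic operator in question, in the form common to this literature (minor notational variations exist across papers), is

\[
\mathcal{L}_{\delta} u(x) \;=\; 2 \int_{B_{\delta}(x)} \big( u(y) - u(x) \big) \, \gamma(x, y) \, dy,
\]

where \gamma is a nonnegative kernel and \delta the interaction horizon; choosing \gamma(x, y) = c_{d,s} |y - x|^{-(d + 2s)} with \delta = \infty recovers, up to sign, the fractional Laplacian (-\Delta)^{s}, the most widely studied special case.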
Jaervinen, A.E.; Allen, S.L.; Eldon, D.; Fenstermacher, M.E.; Groth, M.; Hill, D.N.; Lasnier, C.J.; Leonard, A.W.; Mclean, A.G.; Moser, A.L.; Porter, G.D.; Rognlien, T.D.; Samuell, C.M.; Wang, H.Q.; Watkins, Jonathan G.
UEDGE simulations highlight the role of cross-field drifts in the onset of detached conditions, and new calibrated divertor vacuum ultraviolet (VUV) spectroscopy is used to challenge the predictions of radiative constituents in these simulations. UEDGE simulations for DIII-D H-mode plasmas with the open divertor and the ion ∇B drift towards the X-point show a bifurcated onset of the low-field-side (LFS) divertor detachment, consistent with the experimentally observed step-like detachment onset (Jaervinen A.E. et al 2018 Phys. Rev. Lett. 121 075001). The divertor plasma in the simulations exhibits hysteresis in upstream separatrix density between the attached and detached solution branches. Reducing the drift magnitude by a factor of 3 eliminates the step-like detachment onset in the simulations, confirming the strong role of drifts in the bifurcated detachment onset. When the measured local plasma densities and temperatures are within proximity of the predicted values in the simulations, there is no shortfall in the local emission of the dominant resonant radiating lines. However, the simulations systematically predict a factor of two lower total integrated radiated power than measured by the bolometer, with the difference lost through radial heat flow out of the computational domain. Even though there is no shortfall in the emission of the dominant lines, a shortfall of total radiated power can be caused by an underpredicted spatial extent of the radiation front, indicating a potential upstream or divertor transport physics origin for the radiation shortfall, or a shortfall of radiated power in the spectrum between the dominant lines. In addition to the underpredicted spatial extent, in detached conditions the simulations overpredict the peak radiation and dominant carbon lines near the X-point, which can be alleviated by manually increasing divertor diffusivity in the simulations, highlighting the ad hoc cross-field transport as one of the key limitations of the predictive capability of these divertor fluid codes.
Quasi-static structural finite-element models of an aluminum-framed crystalline silicon photovoltaic module and a glass-glass thin-film module were constructed and validated against experimental measurements of deflection under uniform pressure loading. Specific practices in the computational representation of module assembly were identified as influential to matching experimental deflection observations. Additionally, parametric analyses using Latin hypercube sampling were performed to propagate input uncertainties related to module materials, dimensions, and tolerances into uncertainties in simulated deflection. Sensitivity analyses were performed on the uncertainty quantification datasets using linear correlation coefficients and variance-based sensitivity indices to elucidate key parameters influencing module deformation. Results identified edge tape and adhesive material properties as being strongly correlated to module deflection, suggesting that optimization of these materials could yield module stiffness gains at par with the conventionally structural parameters, such as glass thickness. This exercise verifies the applicability of finite-element models for accurately predicting mechanical behavior of solar modules and demonstrates a workflow for model-based parametric uncertainty quantification and sensitivity analysis. Applications of this capability include the assessment of field environment loads, derivation of representative loading conditions for reduced-scale testing, and module design optimization, among others.
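The sampling-and-correlation workflow can be sketched in a few lines (hypothetical parameter ranges and a stand-in response in place of the finite-element model; the paper's actual inputs and model differ):

import numpy as np
from scipy.stats import qmc

# Hypothetical ranges: glass thickness [mm], edge-tape modulus [MPa],
# adhesive modulus [MPa]. Illustrative only.
lower, upper = [2.8, 5.0, 1.0], [3.6, 50.0, 10.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(n=200), lower, upper)

# Each row would drive one finite-element run; here, a stand-in response.
deflection = 1.0 / X[:, 0] + 0.5 / X[:, 1] + 0.2 / X[:, 2]

# Linear (Pearson) correlation of each input with the response, the
# screening step used before variance-based sensitivity indices.
r = [np.corrcoef(X[:, j], deflection)[0, 1] for j in range(3)]
print(r)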
Using the analogy between hydrodynamic and electrical current flow, we study how electrical current density j redistributes and amplifies due to two commonly encountered inhomogeneities in metals. First, we consider flow around a spherical resistive inclusion and find significant j amplification, independent of inclusion size. Hence, even μm-scale inclusions can affect performance in applications by creating localized regions of enhanced Joule heating. Next, we investigate j redistribution due to surface roughness, idealized as a sinusoidal perturbation with amplitude A and wavelength λ. Theory predicts that j amplification is determined by the ratio A/λ, so that even "smooth"surface finishes (i.e., small A) can generate significant amplification, if λ is correspondingly small. We compare theory with magnetohydrodynamic simulation to illustrate both the utility and limitations of the steady-state theory.
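The classical potential-theory estimates behind these statements (quoted for orientation; the paper's analysis is more general) are, for a sphere of conductivity \sigma_2 embedded in a matrix of conductivity \sigma_1 carrying far-field current density j_\infty, and for a shallow sinusoidal surface profile,

\[
\frac{j_{\max}}{j_{\infty}} \;=\; 1 + \frac{\sigma_1 - \sigma_2}{2\sigma_1 + \sigma_2} \;\xrightarrow{\;\sigma_2 \to 0\;}\; \frac{3}{2},
\qquad
\frac{j_{\max}}{j_{\infty}} \;\approx\; 1 + \frac{2\pi A}{\lambda} \quad (A \ll \lambda),
\]

the first independent of inclusion size and the second depending on A and \lambda only through their ratio.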
Using LGR on ATS-2 hardware, we have simulated the performance of nearly ten thousand possible component designs, allowing designers to map and assess the design space prior to experimental testing.
Nonlocal and fractional-order models capture effects that classical partial differential equations cannot describe; for this reason, they are suitable for a broad class of engineering and scientific applications that feature multiscale or anomalous behavior. This has driven a desire for a vector calculus that includes nonlocal and fractional gradient, divergence and Laplacian type operators, as well as tools such as Green's identities, to model subsurface transport, turbulence, and conservation laws. In the literature, several independent definitions and theories of nonlocal and fractional vector calculus have been put forward. Some have been studied rigorously and in depth, while others have been introduced ad-hoc for specific applications. The goal of this work is to provide foundations for a unified vector calculus by (1) consolidating fractional vector calculus as a special case of nonlocal vector calculus, (2) relating unweighted and weighted Laplacian operators by introducing an equivalence kernel, and (3) proving a form of Green's identity to unify the corresponding variational frameworks for the resulting nonlocal volume-constrained problems. The proposed framework goes beyond the analysis of nonlocal equations by supporting new model discovery, establishing theory and interpretation for a broad class of operators, and providing useful analogues of standard tools from the classical vector calculus.
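For reference, in the unweighted nonlocal vector calculus this work consolidates (following the notation of Du, Gunzburger, Lehoucq, and Zhou), the nonlocal divergence of a two-point vector function \boldsymbol{\nu} and its adjoint, a nonlocal gradient, are

\[
\mathcal{D}(\boldsymbol{\nu})(x) \;:=\; \int_{\mathbb{R}^{d}} \big( \boldsymbol{\nu}(x, y) + \boldsymbol{\nu}(y, x) \big) \cdot \boldsymbol{\alpha}(x, y) \, dy,
\qquad
\mathcal{D}^{*}(u)(x, y) \;=\; -\big( u(y) - u(x) \big) \, \boldsymbol{\alpha}(x, y),
\]

for an antisymmetric kernel \boldsymbol{\alpha}(x, y) = -\boldsymbol{\alpha}(y, x); composing the two produces nonlocal diffusion operators, and the weighted variants together with the equivalence kernel connect these to the fractional operators above.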
This document contains UUR survey questions for use in an exploratory express LDRD experiment. The purpose of the study is to understand whether people overestimate their performance only in some situations or whether some people are more prone to it due to an underlying trait. To investigate our aims, we use three experimental tasks: two domain-general tasks (an English grammar task and a logic task) and one domain-specific task (a science & technology questionnaire). The reason we are using these tasks is to see if people overestimate their abilities on tasks they are more familiar with (grammar and logic) but not in domains in which they are more specialized (science and technology). To understand the traits and characteristics of our participants, we are using seven well-validated assessments from the field of psychology. All questionnaires are available for research and teaching purposes. Citations for all materials have been included.
Growing interest in compact, easily transportable sources of baseload electricity has manifested in the proposal and early deployment of portable nuclear reactors (PNRs). PNRs are sought because they are scalable, efficient, and cost-effective for meeting energy demands in unique, remote, or contested areas. For example, Russia's KLT-40S Akademik Lomonosov is a floating nuclear power plant (FNPP) that successfully reached the Arctic coastal city of Pevek. It began providing power to the local grid in December 2019. While providing such key advantages as a highly flexible power generation mechanism, FNPPs appear to directly challenge international norms and conventions for nuclear safety, safeguards, and security. FNPPs are neither a purely fixed nuclear fuel cycle activity nor a purely transportation-based nuclear fuel cycle activity. In response, Sandia's Mitigating International Nuclear Energy Risks (MINER) research perspective frames this discussion in terms of risk complexity and the interdependencies between safety, safeguards, and security in FNPPs, and PNRs more generally. This systems study is a technically rigorous analysis of the safety, safeguards, and security risks of FNPP technologies. The research's aims are three-fold. The first aim is to provide analytical evidence to support safety, safeguards, and security claims related to PNRs and FNPPs (Study Report Volume I). Second, this study aims to introduce a systems-theoretic approach for exploring interdependencies between the technical evaluations (Study Report Volume II). The third aim is to demonstrate Sandia's ability to provide prompt, rigorous technical analysis in support of emerging complex MINER mission objectives.
Nuclear weapons stockpile planning is a complex process. The Non-Proliferation Treaty, New START Treaty, DOE/NNSA, STRATCOM, Navy, Air Force, and Executive Branch all have objectives that drive requirements for the types and quantities of nuclear weapons, which in turn drive how nuclear weapons are designed, manufactured, tested, maintained, deployed, transported, stored, retired, and ultimately dismantled. An estimated 200 distinct individuals contribute to the development, completion, and approval of this plan. Once that plan, herein called the NNSA Program of Record (POR), is completed, ensuring that it is feasible (that the stockpile work can get done) is what allows the Nuclear Security Enterprise (NSE) to deliver the intended nuclear force posture.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
In 2015, an incident released approximately 40 Ci of T2 gas directly into the Tritium Exhaust System. Data from a bubbler system that monitored the stack effluent during the time period encompassing the release, from 9 days prior through approximately 26 hours following it, indicated that approximately 0.25% of the total accumulated tritium gas was in the form of tritiated water; however, this value does not account for sources of tritium exhaust from other building operations and processes during the 9 days prior to the incident. Further analysis of the bubbler data around this time period considered the 9-day background contributions and shows that the actual fraction of the tritium released as tritiated water vapor (during and within 26 hours after the release) was likely lower than 0.1%.
One of the earliest applications for radar was to search for and find maritime vessels on the open sea. Proper design and operation of an airborne Maritime Wide Area Search (MWAS) radar requires an understanding of system performance characteristics and limitations, as well as of the trades among a large number of interdependent system parameters. This report identifies and explores those characteristics and limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall MWAS radar system. While the information herein is not new to the literature, its collection into a single report is intended to offer some value by reducing the 'seek time'. Acknowledgements: This report was funded by General Atomics Aeronautical Systems, Inc. (GA-ASI) Mission Systems under Cooperative Research and Development Agreement (CRADA) SC08/01749 between Sandia National Laboratories and GA-ASI. GA-ASI, an affiliate of privately held General Atomics, is a leading manufacturer of Remotely Piloted Aircraft (RPA) systems, radars, and electro-optic and related mission systems, including the Predator/Gray Eagle-series and the Lynx Multi-mode Radar.
This paper examines national and tribal collaborative opportunities to get ahead of the critical infrastructure insecurity problem. Recommendations are viewed through the lens of the Sandia Labs Tribal Cyber-Energy initiative and national security projects. Recommendations include: 1) collaboratively address national priority and shared challenges to gain faster and better solutions to national priority problems on a smaller yet comprehensive American Indian and Alaskan Native sovereign single-point-of-authority scale; 2) utilize newer standards-based technologies to provide scalable, capable, and manageable solutions for greatly expanded and connected national critical infrastructures; 3) employ Cyber-Physical-Resilient design preliminary analysis to define concept-to-disposition design requirements for preemptive critical infrastructure risk mitigation and baked-in security; 4) develop data-centric protection to provide increased information asset protection as data shifts from data-owner-operated, on-premises infrastructure to data-steward-owned and -operated, off-premises virtual service provider infrastructure; and 5) balance shared solutions with the National Institute of Standards and Technology (NIST) Cybersecurity and Risk Management frameworks and the System Security Engineering Guidelines. As yet unallocated federal funding would support research, development, the timely application of National-Tribal critical infrastructure protection, and critical infrastructure cyber disruption response and recovery, with extraordinary mutual benefits for the foreseeable future. The critical infrastructure insecurity problem: rapid modernization and expansive connectivity stem from advances in Information and Communications Technologies that have sweeping cyber impact across all critical infrastructure sectors. Supervisory Control and Data Acquisition and Industrial Control Systems are particularly affected, as systems long separated from the Internet are now being connected and computerized. Virtualization and mobility create a Data Everywhere-User Anywhere paradigm that has dissolved the enterprise network perimeter. There are multi-front technological challenges at play: long-depended-on technologies simply do not scale to current needs, resulting in a digital dichotomy of competing old and new standards. New standards-based technologies scale but are not as well known or as widely deployed, which leaves decision makers, stakeholders, and the workforce in a quandary, caught mid-stream between the technological past and the virtual future. Rapid and expansive cyber threat accompanies disruptive change in connectivity and computational dependencies. A lack of action will exacerbate the problem if new technologies roll out without baked-in security design. The risk: if National-Tribal CIP collaboration to design in security is not undertaken, an ongoing state of insufficient bolt-on security and elevated threat exposure will remain for years to come.
Sandia's Academic Alliance (SAA) program takes a deliberate approach to building partnerships with universities that combine strengths in key academic disciplines, contain sizable portfolios of relevant research capabilities, and demonstrate a strong institutional commitment to national security. The SAA Program aims to solve significant problems that Sandia could not address alone, sustain and enrich Sandia's talent pipeline, and accelerate the commercialization and adoption of new technologies.
The purpose of this report is to provide updates on the experimental components, methodology, and instrumentation under development for use in advanced studies of realistic drying operations conducted on surrogate spent nuclear fuel. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying may have potential impacts on the fuel, cladding, and other components in the system. Water remaining in canisters upon completion of drying procedures can lead to cladding corrosion, embrittlement, and breaching, as well as fuel degradation. Additional information is needed on the efficacy of the drying process to help evaluate the potential impacts of water retention on extended long-term dry storage. A general lack of data suitable for model validation of commercial nuclear canister drying processes necessitates additional, well-designed investigations. Smaller-scale tests that incorporate relevant physics and well-controlled boundary conditions are essential to provide insight and guidance to the simulation of prototypic systems undergoing drying processes. This report describes the implementation of moisture monitoring equipment on a pressurized, submersible system employing a single waterproof, electrically heated spent fuel rod simulator as a demonstration of analytical capabilities during a drying process. A mass spectrometer with specially designed inlets was used to monitor moisture and other gases at 150 kPa to 800 kPa for a test simulating a forced helium dehydration procedure and below 1 Torr for tests mimicking a vacuum drying process. The dew point data from the mass spectrometer were found to be in good agreement with a solid-state moisture probe. A distinct advantage of the mass spectrometer system was the capability to directly sample from the high-temperature (>200 °C) head space expected in a prototypic-scale experiment, where a solid-state moisture probe would suffer considerable loss of accuracy or fail altogether. The operational and analytical experiences gained from this test series are poised to support an expansion to assembly-scale tests at prototypic length. These assemblies are designed to feature prototypic assembly hardware, advanced diagnostics for in situ internal rod pressure monitoring, and failed fuel rod simulators with engineered cladding defects to challenge the drying system with waterlogged fuel.
Low frequency electromechanical oscillations can pose a threat to the stability of power systems if not properly addressed. This paper proposes a novel methodology to damp these inter-area oscillations using loads, i.e., the demand side of the system. In the proposed methodology, loads are assigned to an aggregated cluster whose demand is modulated for oscillation damping. The load cluster control action is obtained from an optimal output feedback control (OOFC) strategy. The paper presents an extension to the regular OOFC formulation by imposing a constraint on the sum of the rows in the optimal gain matrix. This constraint is useful when the feedback signals are generator speeds. In this case, the sum of the rows of the optimal gain matrix is the droop gain of each load actuator. Time-domain simulations of a large-scale power system are used to demonstrate the efficacy of the proposed control algorithm. Two different cases are considered: a power imbalance and a line fault. The simulation results show that the proposed controllers successfully damp inter-area oscillations under different operating conditions and with different clusterings for the events considered. In addition, the simulations illustrate the benefit of the proposed extension to the OOFC, which enables loads to provide a combination of droop control and small-signal stability augmentation.
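To make the row-sum constraint concrete (in notation introduced here for illustration, not taken from the paper): with measured generator speed deviations $\Delta\boldsymbol{\omega}$ as feedback signals, the load modulation is $\Delta\mathbf{P}_L = -K\,\Delta\boldsymbol{\omega}$, and the constraint reads
\[
\sum_{j} K_{ij} = k_i^{\mathrm{droop}},
\]
so that when all machines swing together ($\Delta\omega_j = \Delta\omega$ for every $j$), load cluster $i$ responds as $\Delta P_{L,i} = -k_i^{\mathrm{droop}}\,\Delta\omega$, i.e., with a pure droop characteristic, while differences between machine speeds excite the oscillation-damping component of the gain.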
Nonlinear WKB is a multiscale technique for studying locally plane-wave solutions of nonlinear partial differential equations (PDEs). Its application comprises two steps: (1) replacement of the original PDE with an extended system separating the large scales from the small and (2) reduction of the extended system to its slow manifold. In the context of variational fluid theories with particle relabeling symmetry, nonlinear WKB in the mean Eulerian frame is known to possess a variational structure. This much has been demonstrated using, for instance, the theoretical apparatus known as the generalized Lagrangian mean. On the other hand, the variational structure of nonlinear WKB in the conventional Eulerian frame remains mysterious. By exhibiting a variational principle for the extended equations from step (1) above, we demonstrate that nonlinear WKB in the Eulerian frame is in fact variational. Remarkably, the variational principle for the extended system admits loops of relabeling transformations as a symmetry group. Noether's theorem therefore implies that the extended Eulerian equations possess a family of circulation invariants parameterized by S1. As an illustrative example, we use our results to systematically deduce a variational model of high-frequency acoustic waves interacting with a larger-scale compressible isothermal flow.
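Schematically (with symbols chosen here for illustration), the nonlinear WKB ansatz separates a slowly varying mean flow from a locally plane-wave disturbance,
\[
u^{\varepsilon}(\mathbf{x},t) = \bar{u}(\mathbf{x},t) + \operatorname{Re}\!\left[a(\mathbf{x},t)\,e^{\,i\theta(\mathbf{x},t)/\varepsilon}\right] + O(\varepsilon),
\]
with local wavevector $\mathbf{k} = \nabla\theta$ and frequency $\omega = -\partial_t\theta$; step (1) promotes $(\bar{u}, a, \theta)$ to independent unknowns in an extended system, and step (2) restricts that system to its slow manifold.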
We use ab initio spin-polarized density functional theory to study the magnetic order in a Kagomé-like 2D metamaterial consisting of pristine or substitutionally doped phenalenyl radicals polymerized into a nanoporous, graphene-like structure. In this and in a larger class of related structures, the constituent polyaromatic hydrocarbon molecules can be considered as quantum dots that may carry a net magnetic moment. The structure of this porous system and the coupling between the quantum dots may be changed significantly by applying moderate strain, thus allowing control of the magnetic order and the underlying electronic structure.
The Leo Brady Seismic Network (LBSN, originally the Sandia Seismic Network) was established in 1960 by Sandia National Laboratories to monitor underground nuclear tests (UGTs) at the Nevada National Security Site (NNSS, formerly named the Nevada Test Site). The LBSN has operated in various configurations throughout its existence, but it has generally consisted of four to six stations at regional distances (∼150-400 km) from the NNSS with approximately evenly spaced azimuthal coverage. Between 1962 and the end of nuclear testing in 1992, the LBSN, together with a sister network operated by Lawrence Livermore National Laboratory, was the most comprehensive United States source of regional seismic data of UGTs. Approximately 75% of all UGTs performed by the United States occurred in the predigital era. At that time, LBSN data were transmitted as frequency-modulated (FM) audio over telephone lines to a central location and recorded as analog waveforms on high-fidelity magnetic audio tapes. These tapes have been in dry, temperature-stable storage for decades and contain the sole record of this irreplaceable data; full waveforms of LBSN-recorded UGTs from this era were not routinely digitized or otherwise published. We have developed a process to recover and calibrate data from these tapes. First, we play back and digitize the tapes as audio. Next, we demodulate the FM “audio” into individual waveforms. We then estimate the various instrument constants through careful measurement of “weight-lift” tests performed prior to each UGT on each instrument. Finally, these coefficients allow us to scale and shape the derived instrument response of the seismographs and compute poles and zeros. The result of this process is a digital record of the recorded seismic ground motion in a modern data format, stored in a searchable database. To date, we have digitized tapes from 592 UGTs.
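As a rough illustration of the demodulation step (a minimal sketch of one standard technique, not the authors' production pipeline; the carrier frequency and deviation constant below are hypothetical placeholders), the instantaneous frequency of an FM channel can be recovered from the phase of its analytic signal:

```python
import numpy as np
from scipy.signal import hilbert

def fm_demodulate(audio, fs, f_carrier, k_dev):
    """Recover a waveform from a digitized FM tape channel.

    audio     : digitized channel samples (1-D array)
    fs        : digitization sample rate [Hz]
    f_carrier : nominal FM center frequency [Hz] (placeholder value)
    k_dev     : frequency deviation per unit ground motion (placeholder)
    """
    analytic = hilbert(audio)                      # analytic signal
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase [rad]
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency [Hz]
    return (inst_freq - f_carrier) / k_dev         # frequency deviation -> motion
```

In the actual workflow, the scaling from frequency deviation back to ground motion would come from the per-instrument constants estimated from the weight-lift tests.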
Additive manufacturing (AM) promises rapid development cycles and fabrication of ready-to-use, geometrically complex parts. The metallic parts produced by AM often contain highly non-equilibrium microstructures, e.g., chemical microsegregation and residual dislocation networks. While such microstructures can enhance some material properties, they are often undesirable. Many AM parts are thus heat-treated after fabrication, a process that significantly slows production. This study investigated whether electropulsing, the process of sending high-current-density electrical pulses through a metallic part, could be used to modify the microstructures of AM 316L stainless steel (SS) and AlSi10Mg parts fabricated by selective laser melting (SLM) more rapidly than thermal annealing. Electropulsing has shown promise as a rapid postprocessing method for materials fabricated using conventional methods, e.g., casting and rolling, but has never been applied to AM materials. For both materials used in this study, as-fabricated SLM parts contained significant chemical heterogeneity, either chemical microsegregation (316L SS) or a cellular interdendritic phase (AlSi10Mg). In both cases, annealing times on the order of hours at high homologous temperatures are necessary for homogenization. Using electropulsing, chemical microsegregation was eliminated in 316L SS samples after ten 16 ms electrical pulses. In AlSi10Mg parts, electropulsing produced spheroidized Si-rich particles after as few as fifteen 16 ms electrical pulses, with a corresponding increase in ductility. This study demonstrated that electropulsing can be used to modify the microstructures of AM metals.
We present a new, optimally accurate finite element method for interface problems that does not require matching interface grids or spatially coincident interfaces. The key idea is to enforce “extended” interface conditions through pullbacks onto the discretized interfaces. In so doing, our approach circumvents the accuracy barriers prompted by polytopial approximations of the subdomains and enables high-order finite element solutions without needing more expensive curvilinear maps. Since the discrete interfaces are not required to match, the approach is also appropriate for multiphysics couplings where each subdomain is meshed independently and solved by a separate code. Error analysis reveals that the new approach is well posed and optimally convergent with respect to a broken H1 norm. Numerical examples confirm this result and also indicate optimal convergence in a broken L2 norm.
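Here the broken $H^1$ norm is the usual subdomain-wise quantity
\[
\|v\|_{1,h}^{2} := \sum_{i} \|v\|_{H^{1}(\Omega_{i})}^{2},
\]
i.e., full $H^1$ regularity is required within each (independently meshed) subdomain but not across the non-matching interface.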
Uncertainty pervades virtually every branch of science and engineering, and in many disciplines, the underlying phenomena can be modeled by partial differential equations (PDEs) with uncertain or random inputs. This work is motivated by risk-averse stochastic programming problems constrained by PDEs. These problems are posed in infinite dimensions, which leads to a significant increase in the scale of the (discretized) problem. In order to handle the inherent nonsmoothness of, for example, coherent risk measures and to exploit existing solution techniques for smooth, PDE-constrained optimization problems, we propose a variational smoothing technique called epigraphical (epi-)regularization. We investigate the effects of epi-regularization on the axioms of coherency and prove differentiability of the smoothed risk measures. In addition, we demonstrate variational convergence of the epi-regularized risk measures and prove the consistency of minimizers and first-order stationary points for the approximate risk-averse optimization problem. We conclude with numerical experiments confirming our theoretical results.
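A canonical example of the nonsmoothness in question (stated here for context, not as the paper's specific choice) is the conditional value-at-risk at level $\beta \in (0,1)$, which admits the Rockafellar-Uryasev representation
\[
\mathrm{CVaR}_{\beta}(X) = \inf_{t\in\mathbb{R}}\left\{\, t + \frac{1}{1-\beta}\,\mathbb{E}\big[(X - t)^{+}\big] \right\},
\]
where the plus function $(\cdot)^{+}$ is the source of nonsmoothness that a variational smoothing such as epi-regularization is designed to temper while preserving the coherency properties in the limit.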
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers. However, machine learning classification algorithms do not require the same data representation used by humans. We investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and another architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these systems built for compressed classification of the Modified National Institute of Standards and Technology (MNIST) dataset. Both architectures achieve classification accuracies within 3% of the optimized sensing matrix for compression ranging from 98.85% to 99.87%. The performance of the systems with 98.85% compression was between that of an F/2 and an F/4 imaging system in the presence of noise.
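The flavor of compressed classification can be reproduced in a few lines (a toy sketch: a random Gaussian matrix stands in for the optimized sensing matrix, and scikit-learn's small load_digits set stands in for MNIST; the numbers here are illustrative, not the paper's):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # 8x8 digit images, N = 64 pixels
N = X.shape[1]
M = 8                                        # measurements per image
rng = np.random.default_rng(0)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # sensing matrix (one row per prism/filter pair)

Y = X @ Phi.T                                # compressed measurements
Xtr, Xte, ytr, yte = train_test_split(Y, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print(f"compression {1 - M / N:.2%}, accuracy {clf.score(Xte, yte):.3f}")
```

The classifier is trained directly on the compressed measurements, never on reconstructed images, which is the central idea behind task-specific compressive sensing.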
Through-silicon vias (TSVs) are a critical technology for three-dimensional integrated circuits. These through-substrate interconnects allow electronic devices to be stacked vertically for a broad range of applications and performance improvements, such as increased bandwidth, reduced signal delay, improved power management, and smaller form factors. There are many interdependent processing steps involved in the successful integration of TSVs. This article provides a tutorial-style review of the following semiconductor fabrication process steps that are commonly used in forming TSVs: deep etching of silicon to form the via; thin-film deposition to provide insulation, barrier, and seed layers; electroplating of copper for the conductive metal; and wafer thinning to reveal the TSVs. Recent work in copper electrochemical deposition is highlighted, analyzing the effect of accelerator and suppressor additives in the electrolyte to enable void-free bottom-up filling from a conformally lined seed metal.
We describe and analyze a variance reduction approach for Monte Carlo (MC) sampling that accelerates the estimation of statistics of computationally expensive simulation models using an ensemble of models with lower cost. These lower cost models — which are typically lower fidelity with unknown statistics — are used to reduce the variance in statistical estimators relative to a MC estimator with equivalent cost. We derive the conditions under which our proposed approximate control variate framework recovers existing multifidelity variance reduction schemes as special cases. We demonstrate that existing recursive/nested strategies are suboptimal because they use the additional low-fidelity models only to efficiently estimate the unknown mean of the first low-fidelity model. As a result, they cannot achieve variance reduction beyond that of a control variate estimator that uses a single low-fidelity model with known mean. However, there often exists about an order-of-magnitude gap between the maximum achievable variance reduction using all low-fidelity models and that achieved by a single low-fidelity model with known mean. We show that our proposed approach can exploit this gap to achieve greater variance reduction by using non-recursive sampling schemes. The proposed strategy reduces the total cost of accurately estimating statistics, especially in cases where only low-fidelity simulation models are accessible for additional evaluations. Several analytic examples and an example with a hyperbolic PDE describing elastic wave propagation in heterogeneous media are used to illustrate the main features of the methodology.
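A stripped-down instance with a single low-fidelity model conveys the idea (a toy sketch under stated assumptions; the paper's framework generalizes this to ensembles of models and optimal sample allocation): the unknown low-fidelity mean is itself estimated with extra cheap samples, and the control variate shifts the high-fidelity estimate accordingly.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(x):   # expensive high-fidelity model (toy stand-in)
    return np.exp(x) * np.sin(5 * x)

def f_lo(x):   # cheap low-fidelity model with *unknown* mean
    return x * np.sin(5 * x)

# Shared samples for the high-fidelity estimator and the control variate
N = 100
x = rng.uniform(0, 1, N)
fh, fl = f_hi(x), f_lo(x)

# Extra cheap samples approximate the unknown low-fidelity mean
x_extra = rng.uniform(0, 1, 50 * N)
mu_lo_hat = f_lo(x_extra).mean()

# Control variate weight estimated from the shared samples
alpha = np.cov(fh, fl)[0, 1] / np.var(fl, ddof=1)

q_mc = fh.mean()                                     # plain Monte Carlo
q_acv = fh.mean() - alpha * (fl.mean() - mu_lo_hat)  # approximate control variate
print(q_mc, q_acv)
```

The variance reduction comes from the correlation between fh and fl; the "approximate" qualifier reflects that mu_lo_hat is estimated rather than known exactly.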
Prior to implosion in Magnetized Liner Inertial Fusion (MagLIF), the fuel is heated to temperatures on the order of several hundred eV with a multi-kJ, multi-ns laser pulse. We present two laser-heated plasma experiments, relevant to the MagLIF preheat stage, performed at Z with beryllium liners filled with deuterium and a trace amount of argon. In one experiment there is no magnetic field; in the other, the liner and fuel are magnetized with an 8.5 T axial magnetic field. The recorded time-integrated, spatially resolved spectra of the Ar K-shell emission are sensitive to the electron temperature Te. Individual analysis of the spatially resolved spectra produces electron temperature distributions Te(z) that are resolved along the axis of laser propagation. In the experiment with the magnetic field, the plasma reaches higher temperatures and the heated region extends deeper within the liner than in the unmagnetized case. Radiation magnetohydrodynamics simulations of the experiments are presented and post-processed. A comparison of the experimental and simulated data reveals that the simulations underpredict Te in both cases, but the differences are larger in the magnetized case.
The high-cycle fatigue life of nanocrystalline and ultrafine-grained Ni-Fe was examined for five distinct grain sizes ranging from approximately 50 to 600 nm. The fatigue properties were strongly dependent on grain size, with the endurance limit changing by a factor of 4 over this narrow range of grain sizes. The dataset suggests a breakdown in fatigue improvement for the smallest grain sizes (<100 nm), likely associated with a transition to grain coarsening as the dominant rate-limiting mechanism. The dataset is also used to explore fatigue prediction from monotonic tensile properties, suggesting that a characteristic flow strength is more meaningful than the widely utilized ultimate tensile strength.
The development of a next generation high-fidelity modeling code for wind plant applications is one of the central focus areas of the U.S. Department of Energy Atmosphere to Electrons (A2e) initiative. The code is based on a highly scalable framework, currently called Nalu-Wind. One key aspect of the model development is a coordinated formal validation program undertaken specifically to establish the predictive capability of Nalu-Wind for wind plant applications. The purpose of this document is to define the verification and validation (V&V) plan for the A2e high-fidelity modeling capability. It summarizes the V&V framework, identifies code capability users and use cases, describes model validation needs, and presents a timeline to meet those needs.
Rydberg-assisted atomic electrometry using alkali-metal atoms contained inside a vacuum environment for detecting external electric fields at frequencies below a few kilohertz has been quite challenging due to the low-frequency electric-field-screening effect caused by the alkali-metal atoms adsorbed on the inner surface of the container. We report a very slow electric-field-screening phenomenon, with a time scale up to the order of seconds, in a rubidium-vapor cell made of monocrystalline sapphire. Using this sapphire rubidium-vapor cell with an optically induced, internal bias electric field, we demonstrate vapor-cell-based, low-frequency atomic electrometry that responds linearly to the electric field strength. Limited by the given experimental conditions, this demonstrated atomic electrometer uses an active volume of 11 mm³ and delivers a spectral noise floor of around 0.34 mV/m/√Hz and a 3-dB low cutoff frequency of around 770 Hz inside the vapor cell. This work investigates a regime of vapor-cell-based atomic electrometry that has seldom been studied before, which may enable more applications of atomic electric-field-sensing technology.
The mission of Employee Health Services (EHS) at Sandia National Laboratories is to positively and efficiently impact the health of Sandia's workforce through patient-centered, cost-effective, community-connected care in support of the Laboratories' mission and people. We strive to continuously improve the delivery of health and wellness services. For the past few years, we have modeled our health programs around findings from the 2010 World Economic Forum (WEF): there are eight top health risks and behaviors that drive 15 chronic conditions, which account for 80% of the total health-care costs for all chronic illness worldwide. The WEF goes on to state that information and innovation are the keys to prevention, and EHS has decided to take a strong stand in both categories by utilizing health scorecards and department dashboards to visualize and share metrics with Sandia leadership, foster improvements in employee wellness, and optimize wellness offerings. Finally, these two types of information-sharing tools allow us to track the dollars spent and saved on our wellness programs and to show leadership where there is risk, where there is progress, and where there is need, by providing current data with monthly, quarterly, and yearly feedback.
Remote Direct Memory Access (RDMA) is an increasingly important technology in high-performance computing (HPC). RDMA provides low-latency, high-bandwidth data transfer between compute nodes. Additionally, it does not require explicit synchronization with the destination processor. Eliminating unnecessary synchronization can significantly improve the communication performance of large-scale scientific codes. A long-standing challenge presented by RDMA communication is mitigating the cost of registering memory with the network interface controller (NIC). Reusing memory once it is registered has been shown to significantly reduce the cost of RDMA communication. However, existing approaches for reusing memory rely on implicit memory semantics. In this paper, we introduce an approach that makes memory reuse semantics explicit by exposing a separate allocator for registered memory. The data and analysis in this paper yield the following contributions: (i) managing registered memory explicitly enables efficient reuse of registered memory; (ii) registering large memory regions to amortize the registration cost over multiple user requests can significantly reduce cost of acquiring new registered memory; and (iii) reducing the cost of acquiring registered memory can significantly improve the performance of RDMA communication. Reusing registered memory is key to high-performance RDMA communication. By making reuse semantics explicit, our approach has the potential to improve RDMA performance by making it significantly easier for programmers to efficiently reuse registered memory.
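The general idea can be caricatured in a few lines (a conceptual sketch only, not the paper's interface; register_fn stands in for the expensive NIC registration call, e.g., a verbs ibv_reg_mr wrapper): registration happens once per large slab, and released buffers are recycled rather than deregistered.

```python
# Conceptual sketch of an allocator with explicit reuse semantics for
# registered memory.  Registration is expensive, so the pool registers
# large slabs once and recycles buffers instead of deregistering them.

class RegisteredPool:
    SLAB_BYTES = 4 * 1024 * 1024        # register in large slabs (illustrative size)

    def __init__(self, register_fn):
        self.register_fn = register_fn  # stand-in for the NIC registration call
        self.free_list = []             # returned buffers, kept registered

    def alloc(self, nbytes):
        # First try to recycle an already-registered buffer of sufficient size.
        for i, buf in enumerate(self.free_list):
            if len(buf) >= nbytes:
                return self.free_list.pop(i)
        # Otherwise pay the registration cost once for a whole slab,
        # amortizing it over subsequent requests.
        return self.register_fn(bytearray(max(nbytes, self.SLAB_BYTES)))

    def release(self, buf):
        # Explicit reuse semantics: the buffer stays registered for later allocs.
        self.free_list.append(buf)
```

The point of making the allocator explicit is that the programmer, not a heuristic cache, decides which buffers stay registered, which is the behavior the paper's measurements reward.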
Graph partitioning has been an important tool for dividing work among processors to minimize communication cost and balance the workload. As accelerator-based supercomputers become the standard, graph partitioning becomes even more important, because applications are rapidly moving to these architectures. However, no scalable, distributed-memory, multi-GPU graph partitioner has been available for applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on GPU performance. We perform these evaluations on irregular graphs, because state-of-the-art partitioners have the most difficulty on them. We demonstrate that Sphynx is up to 17x faster on GPUs than on CPUs, and up to 580x faster than a state-of-the-art multilevel partitioner. Sphynx provides a robust alternative for applications looking for a GPU-based partitioner.
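The core kernel of spectral partitioning is easy to state (a minimal dense-algebra sketch for illustration; a production partitioner like Sphynx instead relies on iterative eigensolvers that map well to GPUs): compute the Fiedler vector of the graph Laplacian and split around its median.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian

def spectral_bisect(A):
    """Split a graph (adjacency matrix A) in two using the Fiedler vector."""
    L = laplacian(sp.csr_matrix(A))
    # Dense eigensolve for demo purposes only; at scale one would use an
    # iterative method (e.g., LOBPCG) on the sparse Laplacian.
    vals, vecs = np.linalg.eigh(L.toarray())
    fiedler = vecs[:, np.argsort(vals)[1]]   # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= np.median(fiedler)     # median split balances the parts

# Two triangles joined by a single edge split cleanly along that edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
print(spectral_bisect(A))
```

The algorithmic choices the paper evaluates (eigensolver, preconditioner, Laplacian variant) all live inside the eigensolve step sketched above.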
Lateral inhibition is an important functionality in neuromorphic computing, modeled after the biological behavior in which a firing neuron deactivates its neighbors in the same layer and prevents them from firing. In most neuromorphic hardware platforms, lateral inhibition is implemented by external circuitry, thereby decreasing the energy efficiency and increasing the area overhead of such systems. Recently, the domain-wall magnetic tunnel junction (DW-MTJ) artificial neuron has been shown in modeling to be intrinsically inhibitory. Without peripheral circuitry, lateral inhibition in DW-MTJ neurons results from magnetostatic interaction between neighboring neuron cells. However, the lateral inhibition mechanism in DW-MTJ neurons has not been studied thoroughly, leading to weak inhibition only in very closely spaced devices. This work approaches these problems by modeling current- and field-driven DW motion in a pair of adjacent DW-MTJ neurons. We maximize the magnitude of lateral inhibition by tuning the magnetic interaction between the neurons. The results are explained by current-driven DW velocity characteristics in response to an external magnetic field and quantified by an analytical model. The dependence of lateral inhibition strength on device parameters is also studied. Finally, lateral inhibition behavior in an array of 1000 DW-MTJ neurons is demonstrated. Our results provide a guideline for optimizing the implementation of lateral inhibition in DW-MTJ neurons. With strong lateral inhibition achieved, a path toward competitive learning algorithms such as winner-take-all becomes possible on such neuromorphic devices.
Understanding the nature of fluid flow through fractured wellbore cement is fundamental for evaluating the leakage potential and risk assessment of leaky wellbores. In this study, the conditions that require considering visco-inertial flow when describing gas flow through wellbore cement fractures were investigated. Nitrogen gas flow tests were conducted on fractured cement samples under varying pressure conditions and flow rates, covering both viscous and visco-inertial flow regimes. The data deviated substantially from Darcy's law at higher flow rates and were well fit by Forchheimer's equation for visco-inertial flow. The inertial coefficient and critical Reynolds number were expressed as functions of the hydraulic aperture. The empirical function obtained from the experiments was used as an input to numerical simulations, which showed the significant role of visco-inertial flow in wellhead pressure build-up and leakage rates and demonstrated the importance of visco-inertial flow when modeling gas flow through wellbore cement fractures.
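For context, Forchheimer's equation augments Darcy's law with a quadratic inertial term,
\[
-\frac{dp}{dx} = \frac{\mu}{k}\,v + \beta\,\rho\,v^{2},
\]
where $\mu$ is the gas viscosity, $k$ the permeability, $\rho$ the density, $v$ the volumetric flux, and $\beta$ the inertial (Forchheimer) coefficient; the viscous first term dominates at low flow rates, and the study's empirical correlations tie $\beta$ and the critical Reynolds number to the hydraulic aperture.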
Equal channel angular extrusion (ECAE) of 49Fe-49Co-2V, also known as Hiperco® 50A or Permendur-2V, greatly improves the strength and ductility of this alloy while sacrificing soft magnetic performance. In this work, ECAE Hiperco specimens were subjected to post-ECAE annealing in order to improve soft magnetic properties. The microstructure, mechanical properties, and magnetic performance are summarized in this study. Annealing begins above 650°C, and a steep decline in yield strength is observed for heat treatments between 700 and 840°C due to grain growth and the Hall-Petch effect, although some strength benefit is still observed in fully annealed ECAE material compared to conventionally processed bar. Soft magnetic properties were assessed through B-H hysteresis curves, from which coercivity (Hc) values were extracted. Hc likewise decreases rapidly with annealing above 650°C, i.e., soft magnetic behavior improves. The observed trend is attributed to annealing and grain growth in this temperature regime, which facilitate magnetic domain wall movement. The coercivity vs. grain size results generally follow the trend predicted in the literature. The magnetic behavior of annealed ECAE material compares favorably to conventional bar, possibly due to mild crystallographic texturing that enhances properties in the post-ECAE annealed material. Overall, this study highlights a definitive tradeoff between mechanical and magnetic properties brought about by post-ECAE annealing and grain growth.
National security decisions are driven by complex, interconnected contextual, individual, and strategic variables. Modeling and simulation tools are often used to identify relevant patterns, which can then be shaped through policy remedies. In the paper to follow, however, we argue that models of these scenarios may be prone to the complexity-scarcity gap, in which relevant scenarios are too complex to model from first principles and data from historical scenarios are too sparse, making it difficult to draw representative conclusions. The result is models that are either too simple or unduly biased by the assumptions of the analyst. We outline a new method of quantitative inquiry, experimental wargaming, as a means to bridge the complexity-scarcity gap by offering human-generated, empirical data to inform a variety of modeling and simulation tasks (model building, calibration, testing, and validation). Below, we briefly describe SIGNAL, our first-of-a-kind experimental wargame designed to study strategic stability in conflict settings with nuclear weapons, and highlight the potential utility of these data for future modeling and simulation efforts.
Rushdi, Mostafa A.; Dief, Tarek N.; Yoshida, Shigeo; Schmehl, Roland; Rushdi, Ahmad R.
Kites can be used to harvest wind energy at higher altitudes while using only a fraction of the material required for conventional wind turbines. In this work, we present the kite system of Kyushu University and demonstrate how experimental data can be used to train machine learning regression models. The system is designed for 7 kW traction power and comprises an inflatable wing with suspended kite control unit that is either tethered to a fixed ground anchor or to a towing vehicle to produce a controlled relative flow environment. A measurement unit was attached to the kite for data acquisition. To predict the generated tether force, we collected input–output samples from a set of well-designed experimental runs to act as our labeled training data in a supervised machine learning setting. We then identified a set of key input parameters which were found to be consistent with our sensitivity analysis using Pearson input–output correlation metrics. Finally, we designed and tested the accuracy of a neural network, among other multivariate regression models. The quality metrics of our models show great promise in accurately predicting the tether force for new input/feature combinations and potentially guide new designs for optimal power generation.
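The regression setup can be sketched as follows (a self-contained toy: the feature names and the synthetic aerodynamic relation are illustrative placeholders, not the paper's measured channels or model):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data; in the real study these come from flight tests.
rng = np.random.default_rng(0)
n = 2000
v_app = rng.uniform(4, 16, n)    # apparent wind speed [m/s] (illustrative feature)
aoa   = rng.uniform(2, 14, n)    # angle of attack [deg]     (illustrative feature)
elev  = rng.uniform(10, 60, n)   # tether elevation [deg]    (illustrative feature)
# Toy relation: force ~ dynamic-pressure-like term times a lift-like term
force = 0.6 * v_app**2 * np.sin(np.radians(aoa)) * np.cos(np.radians(elev))
force += rng.normal(0, 2, n)     # measurement noise

X = np.column_stack([v_app, aoa, elev])
Xtr, Xte, ytr, yte = train_test_split(X, force, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(Xtr, ytr)
print(f"R^2 on held-out data: {model.score(Xte, yte):.3f}")
```

Feature screening of the kind described in the abstract could be done with a Pearson correlation matrix (np.corrcoef) between each candidate input and the measured tether force before training.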
Grade 92 ferritic-martensitic steel is a candidate alloy for medium-temperature (<550 °C) components of the supercritical carbon dioxide (s-CO2) Brayton cycle. Exposures of 1000 hours were performed on base and welded material in s-CO2 at temperatures of 450 °C or 550 °C and compared to samples aged in Ar at 550 °C. Both s-CO2 exposures resulted in duplex oxide growth and carburization: the 450 °C exposure exhibited carburization in a power-law diffusion profile to a depth of 200-250 µm, while the 550 °C exposure showed a linear profile to a depth of 100 µm. The different profiles indicate much slower precipitation and coarsening of carbides at the lower temperature, allowing carbon to diffuse deeper into the material. However, exposure at 450 °C improved mechanical properties, while exposure at 550 °C deteriorated them. This was attributed to the higher density of carbon near the metal–oxide interface at 550 °C, which leads to significant carbide coarsening and, subsequently, crack initiation and early failure. Additional exposure at 450 °C is predicted to increase deposited carbon, but further study would be needed to understand if and when carburization will produce a negative mechanical effect.
Kustas, Jessica K.; Hoffman, Jacob B.; Reed, Julian H.; Gonsalves, Andrew E.; Oh, Junho; Li, Longnan; Hong, Sungmin; Jo, Kyoo D.; Dana, Catherine E.; Cropek, Donald M.; Alleyne, Marianne
Numerous natural surfaces have micro/nanostructures that result in extraordinary functionality, such as superhydrophobicity, self-cleaning, antifogging, and antimicrobial properties. One such example is the cicada wing, where differences in nanopillar geometry and composition among species can influence the degree to which these properties are exhibited. To understand the relationships of surface topography and chemical composition with multifunctionality, the wing properties of Neotibicen pruinosus (superhydrophobic) and Magicicada cassinii (hydrophobic) cicadas are investigated at time points after microwave-assisted extraction of surface molecules to characterize the chemical contribution to nanopillar functionality. Electron microscopy of the wings throughout the extraction process illustrates nanoscale topographical changes, while concomitant changes in hydrophobicity, bacterial fouling, and bactericidal properties are also measured. Extract analysis reveals the major components of the nanostructures to be fatty acids and saturated hydrocarbons ranging from C17 to C44. The effects of the extracted chemicals on the antimicrobial character of a wing surface suggest that the molecular composition of the nanopillars plays both a direct role and an indirect role in concert with nanopillar geometry. The data presented not only correlate nanopillar molecular organization with macroscale functional properties but also provide design guidelines to consider during the replication of natural nanostructures onto engineered substrates to induce desired properties.
Detection of radioxenon and radioargon produced by underground nuclear explosions is one of the primary methods by which the Comprehensive Nuclear-Test-Ban Treaty (CTBT) monitors for nuclear activities. However, transport of these noble gases to the surface via barometric pumping is a complex process, relying on advective and diffusive processes in a fractured porous medium to bring detectable levels to the surface. To better understand this process, experimental measurements of noble gas and chemical-surrogate diffusivity in relevant lithologies are necessary. Such measurements in tight or partially saturated porous media are challenging due to the transparent nature of noble gases, the lengthy diffusion times, and the difficulty of maintaining consistent water saturation. Here, the quasi-steady-state Ney–Armistead method is modified to accommodate continuous gas sampling via effusive flow to a mass spectrometer. An analytical solution accounting for the cumulative sampling losses and induced advective flow is then derived. Experimental results appear in good agreement with the proposed theory, suggesting that the presence of retained groundwater reduces the effective diffusivity of the gas tracers by 10–1000 times. Furthermore, by using a mass spectrometer, the method described herein is applicable to a broad range of gas species and porous media.
Generally, scientific simulations load the entire simulation domain into memory because most, if not all, of the data changes with each time step. This has driven application structures that have, in turn, affected the design of popular IO libraries, such as HDF5, ADIOS, and NetCDF. This assumption makes sense for many cases, but there is also a significant collection of simulations where this approach results in vast swaths of unchanged data written each time step. This paper explores a new IO approach that is capable of stitching together a coherent global view of the total simulation space at any given time. This benefit is achieved with no performance penalty compared to running with the full data set in memory, with a radically smaller process requirement, and with substantial data reduction at no loss of fidelity. Additionally, the structures employed enable online simulation monitoring.
AlGaN polarization-doped field-effect transistors were characterized by DC and pulsed measurements from room temperature to 500 °C in ambient air. DC current-voltage characteristics demonstrated only a 70% reduction in on-state current from 25 to 500 °C and full gate modulation regardless of the operating temperature. Near-ideal gate lag measurements were realized across the temperature range, indicative of a high-quality substrate and sufficient surface passivation. Operation at high temperature is enabled by the high Schottky barrier height of the Ni/Au gate contact, with values of 2.05 and 2.76 eV at 25 and 500 °C, respectively. The high barrier height, due to the insulator-like aluminum nitride layer, leads to ION/IOFF ratios of 1.5 × 10⁹ and 6 × 10³ at room temperature and 500 °C, respectively. Transmission electron microscopy was used to confirm the stability of the heterostructure even after extended high-temperature operation, with only minor interdiffusion of the Ni/Au Schottky contact. The use of refractory metals in all contacts will be key to ensuring stable extended high-temperature operation.
We design a resonant metasurface that uses Mie quadrupole modes to suppress the −1 diffraction order. We show that this suppression can be spectrally tuned using optical pumping on a picosecond timescale.
Impacts of a high-altitude electromagnetic pulse (HEMP) on the power grid are a growing concern due to society's increasing reliance on electric power. A critical area of research is quantifying power system equipment response to HEMP, since this response is not known in general. Substation site surveys were performed at seven high-voltage substations across the United States to gather substation layout and construction details pertinent to HEMP coupling calculations and component vulnerability assessments. The primary objective of the surveys was to gather information on cable layouts and cable construction within substations. Additional information was also gathered on equipment present within the substations and on control house layouts. This report provides the information gathered from the substation surveys.