We used the CTH shock physics code to simulate the explosion of an 18-t chemical explosive at a depth of 250 m. We ran CTH in two-dimensional axisymmetric (cylindrical) geometry (2DC), and most simulations modeled fully tamped explosions in wet tuff. Our study focused on parametric studies of three of the traditional strength models available in CTH, namely, the geologic-yield, elastic perfectly-plastic von Mises, and Johnson-Cook strength (flow stress) models. We processed CTH results through a code that generates Reduced Displacement Potential (RDP) histories for each simulation. Since the RDP is the solution of the linear wave equation in spherical coordinates, it is valid mainly at distances from the explosion beyond the elastic radius. Among the parameters examined, we found the yield strength to have the greatest effect on the resulting RDP: the peak RDP decreases almost linearly in log-log space as the yield strength increases. Moreover, an underground chemical explosion produces a cavity whose final radius is inversely related to the material yield strength; i.e., as the material's yield strength increases, the resulting final cavity radius decreases. Additionally, we found that the choice of explosive material (COMP-C4 versus COMP-B) has minor effects on the peak RDP, with the denser COMP-C4 showing a peak RDP higher than that of the less dense COMP-B by a factor of ~1.1. In addition to wet tuff, we studied explosions in dry tuff, salt, and basalt for a single strength model and yield strength value. We found that wet tuff has the highest peak RDP, followed by dry tuff, salt, and basalt. 2DC simulations of explosions in 11-m-radius spherical, hemispherical, and cylindrical cavities showed RDP signals of much lower magnitude than those of tamped explosions; the cavity explosions mimicked nearly decoupled explosions.
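For reference, the standard far-field relation behind the RDP (the textbook definition the abstract invokes, not an equation quoted from this work) connects the radial displacement to the potential ψ:

\[ u_r(r,t) = \frac{\psi(\tau)}{r^2} + \frac{1}{c\,r}\,\frac{d\psi}{d\tau}, \qquad \tau = t - \frac{r - r_{el}}{c}, \]

where c is the elastic P-wave speed and r_el is the elastic radius. Outside r_el the medium responds linearly, which is why the RDP description holds only beyond that distance; the late-time (static) level of ψ scales with the explosion-driven cavity volume change, consistent with the dependence of both peak RDP and cavity radius on yield strength noted above.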
Security assessments support decision-makers' ability to evaluate the current capabilities of high consequence facilities (HCFs) to respond to possible attacks. However, the increasing complexity of today's operational environment requires a critical review of traditional approaches to ensure that implemented assessments are providing relevant and timely insights into the security of HCFs. Using interviews and focus groups with diverse subject matter experts (SMEs), this study evaluated the current state of security assessments and identified opportunities to achieve a more "ideal" state. The SME-based data underscored the value of a systems approach for understanding the impacts of changing operational designs and contexts (as well as cultural influences) on security, addressing methodological shortcomings of traditional assessment processes. These findings can be used to inform the development of new approaches to HCF security assessments that more accurately reflect changing operational environments and effectively mitigate concerns arising from new adversary capabilities.
Progress and status reviews allow teams to provide updates and targeted information designed to inform the customer of progress and to help the customer understand current risks and challenges. Both presenters and the customer should have well-calibrated expectations for the level of content and information. However, what needs to be covered in systems-level management reviews is too often poorly defined. These unclear expectations can lead teams to overprepare or to guess what information the customer considers most critical. This aspect of the review process is stressful, disruptive, and bad for morale, and time spent overpreparing reports is time spent not focusing on the technical work necessary to stay on schedule. To define and address these issues, this effort observed various aspects of development program coordination and review activities for NNSA and Navy customers and then conducted unbiased, independent Human Factors observation and analysis from an outside perspective. The report concludes with suggestions and recommendations for improving the efficiency of information flow related to reviews, with the goals of increasing productivity and benefitting both Sandia and the customer.
Although unique expected energy models can be generated for a given photovoltaic (PV) site, a standardized model is also needed to facilitate performance comparisons across fleets. Current standardized expected energy models for PV work well with sparse data, but they have demonstrated significant overestimation, which impacts accurate diagnosis of field operations and maintenance issues. This research addresses this issue by using machine learning to develop a data-driven expected energy model that can more accurately generate inferences for the energy production of PV systems. Irradiance and system capacity information from 172 sites across the United States was used to train a series of models using Lasso linear regression. The trained models generally perform better than the commonly used expected energy model from the international standard (IEC 61724-1), with the two highest-performing models ranging in complexity from a third-order polynomial with 10 parameters (adjusted R² = 0.994) to a simpler, second-order polynomial with 4 parameters (adjusted R² = 0.993), the latter of which is subject to further evaluation. The trained models thus provide a more robust basis for identifying potential energy anomalies for operations and maintenance activities as well as for informing planning-related financial assessments. We conclude with directions for future research, such as using splines to improve model continuity and better capture systems with low (≤1000 kW DC) capacity.
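A minimal sketch of this modeling approach, assuming hypothetical feature definitions and synthetic stand-in data (the study's actual pipeline, regularization strength, and data handling may differ):

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    # Features: plane-of-array irradiance (W/m^2) and system capacity (kW);
    # target: energy production. Synthetic stand-in for the 172-site dataset.
    rng = np.random.default_rng(0)
    X = rng.uniform([0.0, 100.0], [1000.0, 5000.0], size=(500, 2))
    y = 0.8 * X[:, 0] * X[:, 1] / 1000.0 + rng.normal(0.0, 50.0, 500)

    # Second-order polynomial features + Lasso shrinkage: the sparsity of the
    # Lasso fit is what yields compact models like the 4-parameter one above.
    model = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), Lasso(alpha=0.1))
    model.fit(X, y)
    print(model.score(X, y))  # R^2 on the training data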
De Lucia, Frank C.; Giri, Lily; Pesce-Rodriguez, Rose A.; Wu, Chi C.; Dean, Steven W.; Tovar, Trenton M.; Sausa, Rosario C.; Wainwright, Elliot R.; Gottfried, Jennifer L.
We characterized nine commercial aluminum (Al) powders using several particle characterization methods and thermal analysis, with the goal of understanding how these parameters influence energy release. Although it is well known that lot-to-lot variations in commercial nanoparticles are common, the Al powders were more heterogeneous than anticipated, both with regard to particle size distributions and impurities. Manufacturer specifications, often quoted in the literature without confirmation, were not always accurate for the specific sample lots we investigated. In several cases, different conclusions could be drawn from individual particle size techniques; a combination of multiple techniques provides a clearer picture of the powder properties. Thorough characterization of Al powders is therefore required prior to interpretation of experimental results in a variety of applications. In particular, previous studies have shown contradictory results on the influence of Al on detonation performance, perhaps partially due to insufficient characterization of the Al powders themselves.
Radiographic diodes focus an intense electron beam to a small spot size to minimize the source area of energetic photons for radiographic interrogation. The self-magnetic pinch (SMP) diode has been developed as such a source and operated as a load for the six-cavity radiographic integrated test stand (RITS-6) inductive voltage adder driver. While experiments support the generally accepted conclusion that a 1:1 aspect diode (cathode diameter equals anode-cathode gap) delivers optimum SMP performance, such experiments also show that reducing the cathode diameter, while reducing spot size, also reduces radiation dose, by as much as 50%, and degrades shot reproducibility. Analysis of the time-dependent effective electron impingement angle on the anode converter, made possible by a newly developed dose-rate array diagnostic, indicates that fast-developing oscillations of the angle are correlated with early termination of the radiation pulse on many of the smaller-diameter SMP shots. This behavior as a function of relative cathode size persists through experiments with output voltages and currents up to 11.5 MV and 225 kA, respectively, and with spot sizes below approximately a few millimeters. Since simulations to date have not predicted such oscillatory behavior, the angle behavior of SMP shots is discussed at length to lend credence to this inference. There is clear anecdotal evidence that DC heating of the SMP diode region stabilizes this oscillatory behavior. This is the first of two papers on the performance of the SMP diode on the RITS-6 accelerator.
Dual-fuel (DF) engines, in which a premixed charge of natural gas and air in an open-type combustion chamber is ignited by diesel-fuel pilot sprays, have been more popular for marine use than pre-chamber spark ignition (PCSI) engines because of their superior durability. However, control of ignition and combustion in DF engines is more difficult than in PCSI engines. In this context, this study focuses on the ignition stability of n-heptane pilot-fuel jets injected into a compressed premixed charge of natural gas and air at low-load conditions. To aid understanding of the experimental data, chemical-kinetics simulations were carried out in a simplified engine environment, providing insight into the chemical effects of methane (CH4) on pilot-fuel ignition. The simulations reveal that CH4 affects both stages of n-heptane autoignition: the small, first-stage, cool-flame-type, low-temperature ignition (LTI) and the larger, second-stage, high-temperature ignition (HTI). As the ratio of pilot fuel to CH4 entrained into the spray decreases, the initial oxidation of CH4 consumes the OH radicals produced by pilot-fuel decomposition during LTI, thereby inhibiting its progression to HTI. Using imaging diagnostics, the spatial and temporal progression of LTI and HTI in DF combustion is measured in a heavy-duty optical engine, and the imaging data are analyzed to understand the cause of severe fluctuations in ignition timing and combustion completeness at low-load conditions. Images of cool-flame and hydroxyl radical (OH*) chemiluminescence serve as indicators of LTI and HTI, respectively. The cycle-to-cycle and spatial variations in ignition extracted from the imaging data are used as key metrics of comparison. The imaging data indicate that the local concentration of the pilot fuel and the richness of the surrounding natural-gas-air mixture are important for both LTI and HTI, but in different ways. In particular, higher injection pressures and shorter injection durations increase the mixing rate, leading more quickly to lower concentrations of pilot fuel, which can inhibit HTI even as LTI remains relatively robust. Decreasing the injection pressure from 80 MPa to 40 MPa and increasing the injection duration from 500 µs to 760 µs maintained constant pilot-fuel mass while promoting a robust transition from LTI to HTI by effectively slowing the mixing rate. This allows enough residence time for the OH radicals produced by the two-stage ignition chemistry of the pilot fuel to accelerate the transition from LTI to HTI before being consumed by CH4 oxidation. Thus, from a practical perspective, for a given premixed natural-gas fuel-air equivalence ratio, it is possible to improve the "stability" of the combustion process solely by manipulating the pilot-fuel injection parameters while maintaining a constant mass of injected pilot fuel. This allows for tailoring mixing trajectories to offset changes in fuel ignition chemistry, so as to promote a robust transition from LTI to HTI by changing the balance between the local concentration of the pilot fuel and the richness of the premixed natural gas and air. This could prove to be a valuable tool for combustion design to improve fuel efficiency, reduce noise, or perhaps even reduce heat-transfer losses by locating early combustion away from in-cylinder walls.
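A minimal sketch of such a chemical-kinetics calculation using Cantera, with a hypothetical mechanism file name and illustrative charge conditions (species names depend on the mechanism; nothing here is taken from the study's actual setup):

    import numpy as np
    import cantera as ct

    # Hypothetical mechanism containing both n-heptane and CH4 chemistry.
    gas = ct.Solution('heptane_ch4.yaml')
    # Compressed lean CH4/air charge with a small n-heptane pilot fraction.
    gas.TPX = 800.0, 50e5, 'NC7H16:0.01, CH4:0.05, O2:0.21, N2:0.79'

    reactor = ct.IdealGasReactor(gas)
    sim = ct.ReactorNet([reactor])

    times, temps = [], []
    while sim.time < 0.05:
        sim.step()
        times.append(sim.time)
        temps.append(reactor.T)

    # Crude two-stage detection from the temperature history:
    # LTI = first modest (cool-flame) rise, HTI = thermal runaway.
    T = np.array(temps)
    lti = times[int(np.argmax(T > T[0] + 50.0))]
    hti = times[int(np.argmax(T > T[0] + 800.0))]
    print(f'LTI ~ {lti * 1e3:.2f} ms, HTI ~ {hti * 1e3:.2f} ms')

Sweeping the CH4 mole fraction in such a reactor isolates its chemical effect on the two ignition stages, which is the kind of insight the simulations above provided.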
Kiani, Mehrdad T.; Gan, Lucia T.; Traylor, Rachel; Yang, Rui; Barr, Christopher M.; Hattar, Khalid M.; Fan, Jonathan A.; Gu, X. Wendy
Grain boundaries have complex structural features that influence strength, ductility and fracture in metals and alloys. Grain boundary misorientation angle has been identified as a key parameter that controls their mechanical behavior, but the effect of misorientation angle has been challenging to isolate in polycrystalline materials. Here, we describe the use of bicrystal Au thin films made using a rapid melt growth process to study deformation at a single grain boundary. Tensile testing is performed on bicrystals with different misorientation angles using in situ TEM, as well as on a single crystalline sample. Plastic deformation is initiated through dislocation nucleation from free surfaces. Grain boundary sliding is not observed, and failure occurs away from the grain boundary through plastic collapse in all cases. The failure behavior in these nanoscale bicrystals does not appear to depend on the misorientation angle or grain boundary energy but instead has a more complex dependence on sample surface structure and dislocation activity.
An improved electrical contact resistance (ECR) model for elastic rough electrode contact is proposed, incorporating the effects of asperity interactions and of the temperature rise caused by frictional and Joule heating. The analytical simulation results show that the ECR decreases steeply at the beginning of contact between Al and Cu but stabilizes after a specific contact force is reached. It is also found that the longer the elapsed sliding-contact time, the higher the ECR, due to the increase in the electrical resistivity of the electrode materials with the frictional temperature rise at the interface. The effects of surface roughness parameters on ECR are studied through a 3² full-factorial design-of-experiments analysis. Based on the two representative roughness parameters, i.e., root-mean-square (rms) roughness and asperity radius, their individual and coupled effects on the saturated ECR are examined. The saturated ECR increases with the rms roughness for a rough machined surface condition but is hardly affected by the asperity radius. On the other hand, the saturated ECR increases with both the rms roughness and the asperity radius under a smooth thin-film surface condition.
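For background, the classical building block of such ECR models is Holm's constriction resistance for a single circular contact spot, with the rough-interface resistance obtained by combining the spots in parallel (a textbook sketch, not the paper's full formulation):

\[ R_c = \frac{\rho_1 + \rho_2}{4a}, \qquad R_{ECR} \approx \left( \sum_{i=1}^{N} \frac{1}{R_{c,i}} \right)^{-1}, \]

where ρ₁ and ρ₂ are the electrode resistivities (temperature-dependent, which is how frictional and Joule heating raise the ECR over sliding time), a is the contact-spot radius, and N is the number of asperity contacts.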
With a wide variety of wave energy device archetypes currently under consideration, it is a major challenge to ensure that research findings and methods are broadly applicable. Of particular interest is the design and testing of wave energy control systems, a process that includes experimental design, empirical modeling, control design, and performance evaluation. This goal motivated the redesign and testing of a floating dual-flap wave energy converter (WEC). As summarized in this paper, the steps taken in the design, testing, and analysis of the device mirrored those previously demonstrated on a three-degree-of-freedom point absorber device. The method proposed does not require locking WEC degrees of freedom to develop an excitation model, and it presents a more attainable system identification procedure for at-sea deployments. The results show that the methods employed work well for this dual-flap device, lending additional support to the broad applicability of the design and testing methods applied here. The aim of this paper is to demonstrate that these models are particularly useful for deducing areas of device design or controller implementation that can reasonably be improved to increase device power capture.
The self-magnetic pinch (SMP) diode is a type of radiographic diode used to generate an intense electron beam for radiographic applications. At Sandia National Laboratories, the SMP diode was the load for the six-cavity radiographic integrated test stand inductive voltage adder (IVA) driver operated with a magnetically insulated transmission line (MITL). The MITL contributes a flow current in addition to the current generated within the diode itself. Extensive experiments with a MITL of 40 Ω load impedance [T. J. Renk et al., Phys. Plasmas 29, 023105 (2022)] indicate that the additional flow current leads to results similar to what might be expected from a conventional high-voltage interface driver, where flow current is not present. However, when the MITL flow impedance was increased to 80 Ω, qualitatively different diode behavior was observed. This includes large retrapping waves suggestive of an initial coupling to low impedance, as well as diode current decreasing with time even as the total current does not. A key observation is that the driver generates total current (flow + diode) consistent with the flow impedance of the MITL used. The case is made in this paper that the 80 Ω MITL experiments detailed here can only be understood when the IVA-MITL-SMP diode is considered as a total system. The constraint of fixed total current plus the relatively high flow impedance limits the ability of the diode (whether SMP or another type) to act as an independent load. An unexpected new result is that in tracking the behavior of the electron strike angle on the converter as a function of time, we observed that the conventional radiographic dose scaling of dose ∝ IV^x (where x ≈ 2.2) begins to break down for voltages above 8 MV, and cubic scaling is required to recover accurate angle tracking.
Social Infrastructure Service Burden (abbr. Social Burden) is defined as the burden borne by a population in attaining services needed from infrastructure. Infrastructure services represent opportunities to acquire things that people need, such as food, water, healthcare, and financial services. Accessing services requires effort, disruption to schedules, expenditure of money, etc. Social Burden represents the relative hardship people experience in the process of acquiring needed services. Social Burden comprises several components. One component is the effort associated with travel to a facility that provides a needed service. Another is the financial impact of acquiring resources once at the providing location. We are applying Social Burden as a resilience metric by quantifying it following a major disruption to infrastructure. Specifically, we are most interested in quantifying this metric for events in which energy systems are a major component of the disruption. We do not believe this is the only use of the Social Burden metric, and we will therefore also explore its use to describe blue-sky conditions of a society in the future. Furthermore, while the construct can be applied to a dynamically changing situation, we are applying it statically, directly following a disruption. This notably ignores recovery dynamics, which are a key capability of resilient systems. This too will be explored in future research.
In 2010, nuclear weapon effects experts at Sandia National Laboratories (SNL) were asked to provide a quick reference document containing estimated prompt nuclear effects. This report is an update to the 2010 document that includes updated model assumptions. This report addresses only the prompt effects associated with a nuclear detonation (e.g., blast, thermal fluence, and prompt ionizing radiation). The potential medium- and longer-term health effects associated with nuclear fallout are not considered, in part because of the impracticality of making generic estimates given the high dependency of fallout predictions on the local meteorological conditions at the time of the event. The results in this report also do not consider the urban environment (e.g., shielding by or collapse of structures), which may affect the extent of prompt effects. It is important to note that any operational recommendations made using the estimates in this report are limited by the generic assumptions considered in the analysis and should not replace analyses made for a specific scenario/device. Furthermore, nuclear effects experts (John Hogan, SNL, and Byron Ristvet, Defense Threat Reduction Agency (DTRA)) have indicated that effects predictions below 0.5 kilotons (kT), or 500 tons, of nuclear yield have greater uncertainty because of the limited data available for prompt effects in this regime. The Specialized Hazard Assessment Response Capability (SHARC) effects prediction tool was used for these analyses; specifically, the NUKE model within SHARC 2021 Version 10.2. NUKE models only the prompt effects following a nuclear detonation. The algorithms for predicting range-to-output data contained within the NUKE model are primarily based on nuclear test effects data. Probits have been derived from nuclear test data and the U.S. Environmental Protection Agency (EPA) protective action guides. Probits relate the probability of a harmful outcome (e.g., fatality or injury) to a given insult (e.g., overpressure, thermal fluence, or dose level). Several probits have been built into SHARC to determine the fatality and injury probabilities associated with a given level of insult. Some of these probits differ with varying yield. These probits were used to develop the tables and plots in this report.
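For background, the standard probit form used in hazard analysis (the generic textbook relation; the coefficients built into SHARC are insult- and outcome-specific and are not quoted here) is

\[ Y = a + b \ln D, \qquad P = \Phi(Y - 5), \]

where D is the insult level (e.g., overpressure or dose), Y is the probit value with empirically fitted constants a and b, Φ is the standard normal cumulative distribution, and the conventional offset of 5 keeps probit values positive; P is the resulting probability of the outcome.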
The Multi-Fidelity Toolkit (MFTK) is a simulation tool being developed at Sandia National Laboratories for aerodynamic predictions of compressible flows over a range of physics fidelities and computational speeds. These models include the Reynolds-Averaged Navier-Stokes (RANS) equations, the Euler equations, and the modified Newtonian aerodynamics (MNA) equations, and they can be invoked independently or coupled with hierarchical Kriging to interpolate between high-fidelity simulations using lower-fidelity data. However, as with any new simulation capability, verification and validation are necessary to gather credibility evidence. This work describes formal code- and solution-verification activities as well as model validation with uncertainty considerations. Code verification is performed on the MNA model by comparison with analytical solutions for flat-plate and inclined-plate geometries. Solution-verification activities include grid-refinement studies, at all model fidelities, of simulations of the HIFiRE-1 wind tunnel experiments, whose measurements are used for validation. A thorough treatment of the validation comparison, with prediction error and validation uncertainty, is also presented.
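A minimal sketch of the modified Newtonian relation behind the MNA code-verification comparison (the standard textbook form; the Mach number and plate angle below are illustrative, not conditions from the study):

    import numpy as np

    def modified_newtonian_cp(delta_rad, mach, gamma=1.4):
        # Pressure coefficient on a surface inclined at angle delta to the
        # flow: Cp = Cp_max * sin^2(delta), with Cp_max from the stagnation
        # pressure behind a normal shock (Rayleigh pitot formula).
        g = gamma
        p02_pinf = (((g + 1)**2 * mach**2 / (4 * g * mach**2 - 2 * (g - 1)))**(g / (g - 1))
                    * ((1 - g + 2 * g * mach**2) / (g + 1)))
        cp_max = 2.0 / (g * mach**2) * (p02_pinf - 1.0)
        return cp_max * np.sin(delta_rad)**2

    # Example: inclined plate at 10 degrees in a Mach 7 freestream.
    print(modified_newtonian_cp(np.radians(10.0), 7.0))

Closed-form surface pressures of this kind are what make flat and inclined plates convenient code-verification cases.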
Magnetized Liner Inertial Fusion (MagLIF) [Slutz et al., Phys. Plasmas 17, 056303 (2010)] experiments driven by the Z machine produce >10¹³ deuterium-deuterium fusion reactions [Gomez et al., Phys. Rev. Lett. 125, 155002 (2020)]. Simulations indicate high yields and gains (1000) with increased current and deuterium-tritium (DT) layers for burn propagation [Slutz et al., Phys. Plasmas 23, 022702 (2016)]. Such a layer also isolates the metal liner from the gaseous fuel, which should reduce mixing of liner material into the fuel. However, the DT vapor density at the triple point is only 0.3 kg/m³, which is not high enough for MagLIF operation. We present two solutions to this problem. First, a fuel-wetted low-density plastic foam can be used to form a layer on the inside of the liner. The desired vapor density can be obtained by controlling the temperature. This does, however, introduce carbon into the layer, which will enhance radiation losses. Simulations indicate that this wetted-foam layer can significantly contribute to the fusion yield when the foam density is less than 35 kg/m³. Second, we show that a pure frozen-fuel layer can first be formed on the inside of the liner and then low-temperature gaseous fuel can be introduced just before the implosion without melting a significant amount of the ice layer. This approach is the most promising for MagLIF to produce high yield and gain.
Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
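A minimal conventional-CPU sketch of the underlying idea (an illustration of the discrete-time Markov chain random walk, not the neuromorphic implementation itself): many independent walkers whose density histogram approximates the solution of the diffusion equation:

    import numpy as np

    rng = np.random.default_rng(1)

    # 1D lattice random walk as a discrete-time Markov chain: from state i,
    # move to i-1 or i+1 with probability 1/2 each.
    n_walkers, n_steps = 20_000, 400
    steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
    positions = steps.sum(axis=1)

    # The walker density approximates the diffusion (heat) equation solution;
    # the variance grows linearly in time: var ~ n_steps for unit steps.
    print(positions.var(), '~', n_steps)

On a spiking architecture, each walker's state update maps naturally onto parallel neuron dynamics, which is the source of the energy-efficiency advantage the abstract reports.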
For the PACT center to both develop testing protocols and provide service to the metal halide perovskite (MHP) PV community, PACT will seek modules (mini and full-sized) for testing purposes. To ensure both safety and high-quality samples, PACT publishes acceptance criteria to define the minimum characteristics of modules the center will accept for testing. These criteria help to ensure we are accepting technologies that are compatible with our technical facilities and testing equipment and can transition to large-scale commercial manufacturing. This module design acceptance criteria document is for industry partners and is different from the acceptance criteria for research partners (academia, national laboratories).
A data analysis automation interface that incorporates machine learning (ML) has been developed to improve productivity, efficiency, and consistency in identifying and defining critical load values (or other values associated with optically identifiable characteristics) of a coating when a scratch test is performed. In this specific program, the machine learning component has been trained to identify the Critical Load 2 (LC2) value by analyzing images of the scratch tracks created in each test. An optical examination of the scratch by a human operator is currently used to determine where this value occurs. However, the vagueness of the standard has led to varying interpretations and nonuniform usage by different operators at different laboratories where the test is implemented, resulting in multiple definitions of the desired parameter. Using a standard set of training and validation images to create the dataset, the critical load can be identified consistently across different laboratories using the automation interface, without requiring the training of human operators. When the model was used in conjunction with an instrument manufacturer's scratch test software, it produced accurate and repeatable results and defined LC2 values in as little as half the time required by a human operator. When combined with a program that automates other aspects of the scratch testing process usually conducted by a human operator, scratch testing and analysis can occur with little to no intervention beyond initial setup, freeing the operator to complete other work in the lab.
Triangle counting is a fundamental building block in graph algorithms. In this article, we propose a block-based triangle counting algorithm to reduce data movement during both sequential and parallel execution. Our block-based formulation makes the algorithm naturally suitable for heterogeneous architectures. The problem of partitioning the adjacency matrix of a graph is well-studied. Our task decomposition goes one step further: it partitions the set of triangles in the graph. By streaming these small tasks to compute resources, we can solve problems that do not fit on a device. We demonstrate the effectiveness of our approach by providing an implementation on a compute node with multiple sockets, cores and GPUs. The current state-of-the-art in triangle enumeration processes the Friendster graph in 2.1 seconds, not including data copy time between CPU and GPU. Using that metric, our approach is 20 percent faster. When copy times are included, our algorithm takes 3.2 seconds. This is 5.6 times faster than the fastest published CPU-only time.
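A minimal dense-matrix sketch of the blocked formulation (illustrative only; the paper's implementation partitions the set of triangles and streams those tasks to CPUs and GPUs): the triangle count equals trace(A³)/6, and each block triple below is an independent task with a bounded working set:

    import numpy as np

    def triangles_blocked(A, b=2):
        # A: dense 0/1 symmetric adjacency matrix with zero diagonal,
        # size divisible by block width b. Each (i, j, k) block triple is
        # an independent task, which is what streams well to devices.
        n = A.shape[0]
        total = 0
        blocks = range(0, n, b)
        for i in blocks:
            for j in blocks:
                for k in blocks:
                    total += np.trace(A[i:i+b, j:j+b] @ A[j:j+b, k:k+b] @ A[k:k+b, i:i+b])
        return total // 6  # each triangle is counted 6 times in trace(A^3)

    # 4-vertex example with edges 0-1, 0-2, 0-3, 1-2, 2-3: two triangles.
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [1, 0, 1, 0]])
    print(triangles_blocked(A))  # 2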
For the PACT center to both develop testing protocols and provide service to the metal halide perovskite (MHP) PV community, PACT will seek modules (mini and full-sized) for testing purposes. To ensure both safety and high-quality samples, PACT publishes acceptance criteria to define the minimum characteristics of modules the center will accept for testing. These criteria help to ensure we are accepting technologies that are compatible with our technical facilities and testing equipment and can transition to large-scale commercial manufacturing. This module design acceptance criteria document is for research partners (academia, national laboratories) and is different from the acceptance criteria for industry partners.
The purpose of this protocol is to use accelerated stress testing to assess the durability of metal halide perovskite (MHP) photovoltaic (PV) modules. The protocol aims to apply field-relevant stressors to packaged MHP modules to screen for early failures that may be observed in the field. The current protocol has been designed with a glass/glass package with a PIB edge seal and no encapsulant in mind. PACT anticipates adding testing sequences to evaluate additional stressors (e.g., PID, reverse bias) in the future.
He, Xu; Zhou, Yang; Liu, Zechang; Yang, Qing; Sjoberg, Carl M.; Vuilleumier, David; Ding, Carl P.; Liu, Fushui
The direct injection spark ignition (DISI) engine has received considerable attention due to its potential to increase the power density of traditional spark ignition engines while significantly improving fuel economy through lean, unthrottled combustion. However, the market introduction of DISI engines operated in a lean combustion mode is inhibited by their unsatisfactory emissions, especially during cold-start conditions that make proper mixture formation more challenging. Ethanol-blended gasoline, now a widely used fuel, makes the cold start of a DISI engine more difficult, leading to higher HC and soot emissions because of the high latent heat of vaporization of ethanol relative to gasoline. This work investigated the impact of coolant temperature on the combustion and emissions characteristics of a stratified-charge DISI engine fueled with an E30 fuel (i.e., 30% ethanol in gasoline), with the coolant temperature set to four levels (45, 60, 75, and 90 °C) to simulate different conditions throughout the warm-up process. The experiments showed that the coolant temperature affected the post-spark inflammation time, as well as the speed, intensity, and stability of the combustion process in the engine. As the coolant temperature rose, the engine produced more NOx and less CO, PM, and HC. In addition, high-speed direct photography was used to obtain crank-angle-resolved images of fuel sprays and flames in the cylinder. As the coolant temperature rose, the liquid spray lengths became shorter, reducing the possibility of wall wetting, and reduced irradiance from soot particles also indicated less non-premixed combustion. The in-cylinder imaging results are consistent with the observed combustion and emission characteristics and shed light on the underlying processes. Potential solutions to the emissions challenges observed here include raising in-cylinder temperatures by using trapped residuals or modifying the injection schedule, for example by increasing the number of injections or by injecting later in the cycle into a higher-density environment.
Levelized costs of electricity (LCOE) approaching the U.S. Department of Energy Solar Energy Technologies Office 2030 goal of $0.05/kWh may be achievable using Brayton power cycles that use supercritical CO2 as the working fluid and flowing solid particles at temperatures >700 °C as the heat transfer media. The handling and conveyance of bulk solid particles at these temperatures in an insulated environment is a critical technical challenge that must be solved for this approach to be used. A design study was conducted at the National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories in Albuquerque, NM, with the objective of identifying the technology readiness level, performance limits, capital and O&M costs, and expected thermal losses of particle handling and conveyance components in a particle-based CSP plant. Key findings indicated that chutes can be a low-cost option for particle handling, but uncertainties in tower costs make it difficult to know whether they can be cost-effective in areas above the receiver if tower heights must then be increased. Skips and other high-temperature particle conveyance technologies are available for moving particles at up to 640 °C. This limits the use of mechanical conveyance above the heat exchanger and suggests vertical integration of the hot storage bin and heat exchanger to facilitate direct gravity-fed handling of particles.
This white paper represents the status of Proliferation Resistance and Physical Protection (PR&PP) characteristics for the Gas-cooled Fast Reactor (GFR) reference designs selected by the Generation IV International Forum (GIF) GFR System Steering Committee (SSC). The intent is to generate preliminary information about the PR&PP features of GFR reactor technology and to provide insights for optimizing its PR&PP performance for the benefit of GFR system designers. It updates the GFR analysis published in the 2011 report "Proliferation Resistance and Physical Protection of the Six Generation IV Nuclear Energy Systems", prepared jointly by the Proliferation Resistance and Physical Protection Working Group (PRPPWG) and the System Steering Committees and provisional System Steering Committees of the Generation IV International Forum, taking into account the evolution of the systems, the GIF R&D activities, and an increased understanding of the PR&PP features. The white paper, prepared jointly by the GIF PRPPWG and the GIF GFR SSC, follows the high-level paradigm of the GIF PR&PP Evaluation Methodology to investigate the PR&PP features of the GIF GFR 2400 MWth reference design. The ALLEGRO reactor is also described, and the EM2 and HEN MHR reactors are mentioned. An overview of the fuel cycle for the GFR reference design and for the ALLEGRO reactor is provided. For PR, the document analyzes and discusses the proliferation resistance aspects in terms of robustness against State-based threats associated with diversion of materials, misuse of facilities, breakout scenarios, and production in clandestine facilities. Similarly, for PP, the document discusses robustness against theft of material and sabotage by non-State actors. The document follows a common template adopted by all the white papers in the updated series.
Fast detection and isolation of faults in a DC microgrid is of particular importance. Fast-tripping protection (i) increases the lifetime of power electronics (PE) switches by avoiding high fault-current magnitudes and (ii) enhances the controllability of PE converters. This paper proposes a traveling wave (TW) based scheme for fast-tripping protection of DC microgrids. The proposed scheme utilizes a discrete wavelet transform (DWT) to calculate the high-frequency components of DC fault currents. Multiresolution analysis (MRA) using the DWT is utilized to detect TW components in different frequency ranges. The Parseval energy of the MRA coefficients is then calculated to establish a quantitative relationship between the fault-current signal energy and the coefficients' energy. The calculated Parseval energy values are used to train a Support Vector Machine classifier to identify the fault type and a Gaussian Process regression engine to estimate the fault location on the DC cables. The proposed approach is verified by simulating two microgrid test systems in PSCAD/EMTDC.
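A minimal sketch of the feature-extraction and classification steps (illustrative; the wavelet family, decomposition depth, and the synthetic stand-in signals and labels are assumptions, and the actual training data would come from the PSCAD/EMTDC simulations):

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def parseval_energies(signal, wavelet='db4', level=4):
        # Per-band energy of the DWT coefficients of a fault-current trace.
        # For an orthogonal wavelet, Parseval's relation partitions the
        # signal energy across the approximation and detail bands.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return np.array([np.sum(c**2) for c in coeffs])

    # Hypothetical training set: fault-current records with known fault types.
    rng = np.random.default_rng(2)
    X = np.array([parseval_energies(rng.normal(size=1024)) for _ in range(40)])
    y = rng.integers(0, 2, size=40)  # stand-in labels, e.g., pole-pole vs. pole-ground

    clf = SVC(kernel='rbf').fit(X, y)  # fault-type classifier
    print(clf.predict(X[:3]))

A Gaussian Process regressor trained on the same energy features (e.g., sklearn's GaussianProcessRegressor) would play the fault-location role described above.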
We explore the character angle dependence of dislocation-solute interactions in a face-centered cubic random Fe0.70Ni0.11Cr0.19 alloy through molecular dynamics (MD) simulations of dislocation mobility. Using the MD mobility data, we determine the phonon and thermally activated solute drag parameters which govern mobility for each dislocation character angle. The resulting parameter set indicates that, surprisingly, the solute energy barrier does not depend on character angle. Instead, only the zero-temperature flow stress—which is dictated by the activation area for thermal activation—is dependent on character angle. By analyzing the line roughness from MD simulations and the geometry of a bowing dislocation line undergoing thermal activation, we conclude that the character angle dependence of the activation area in this alloy is governed by the dislocation line tension, rather than the dislocation-solute interaction itself. Our findings motivate further investigation into the line geometry of dislocations in solid solutions.
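For reference, a generic textbook form of the mobility framework such parameter fits typically assume (my sketch of the standard picture, not the paper's exact expressions): phonon drag at high stress and thermally activated glide over solute barriers at low stress,

\[ v_{drag} = \frac{\tau b}{B}, \qquad v_{ta} \propto \exp\!\left[-\frac{\Delta G(\tau)}{k_B T}\right], \qquad A^* = -\frac{1}{b}\,\frac{\partial \Delta G}{\partial \tau}, \]

where τ is the resolved shear stress, b the Burgers vector, B the phonon-drag coefficient, ΔG(τ) the stress-dependent solute energy barrier, and A* the activation area. In these terms, the abstract's finding is that ΔG is independent of character angle, while A*, set by the line tension of the bowing segment, is not.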
Cookoff experiments of powdered and pressed TATB-based plastic bonded explosives (PBXs) have been modeled using a pressure-dependent universal cookoff model (UCM) in combination with a micromechanics pressurization (MMP) model described in a companion paper. The MMP model is based on the accumulation of decomposition gases at nucleation sites that load the surrounding TATB crystals and binder. This is the first cookoff model to use an analytical mechanics solution for compressibility and thermal expansion to describe internal pressurization caused by both temperature and decomposition occurring within closed-pore explosives. This approach produces more accurate predictions of ignition time and pressurization within high-density explosives than simple equation-of-state models. The current paper gives details of the reaction chemistry, model parameters, predicted uncertainty, and validation against experiments from multiple laboratories, with errors of less than 6%. The UCM/MMP model framework gives more accurate thermal ignition predictions for high-density explosives that are initially impermeable to decomposition gases.
Economically successful microalgal mass cultivation depends on overcoming several barriers that contribute to the cost of production. The severity of these barriers depends on the market value of the final product. These barriers prevent the commercially viable production of algal biofuels, but they are also faced by producers of any algal product. General barriers include the cost of water and limits on its recycling, the costs and recycling of nutrients, CO2 utilization, the energy costs associated with harvesting, and biomass loss due to biocontamination and pond crashes. In this paper, recent advances in overcoming these barriers are discussed.
Electromagnetic (EM) methods are among the original techniques for subsurface characterization in exploration geophysics because of their particular sensitivity to the earth electrical conductivity, a physical property of rocks distinct yet complementary to density, magnetization, and strength. However, this unique ability also makes them sensitive to metallic artifacts - infrastructure such as pipes, cables, and other forms of cultural clutter - the EM footprint of which often far exceeds their diminutive stature when compared to that of bulk rock itself. In the hunt for buried treasure or unexploded ordnance, this is an advantage; in the long-term monitoring of mature oil fields after decades of production, it is quite troublesome indeed. Here we consider the latter through the lens of an evolving energy industry landscape in which the traditional methods of EM characterization for the exploration geophysicist are applied toward emergent problems in well-casing integrity, carbon capture and storage, and overall situational awareness in the oil field. We introduce case studies from these exemplars, showing how signals from metallic artifacts can dominate those from the target itself and impose significant burdens on the requisite simulation complexity. We also show how recent advances in numerical methods mitigate the computational explosivity of infrastructure modeling, providing feasible and real-time analysis tools for the desktop geophysicist. Lastly, we demonstrate through comparison of field data and simulation results that incorporation of infrastructure into the analysis of such geophysical data is, in a growing number of cases, a requisite but now manageable step.
Montmorillonite (MMT) clays are important industrial materials used as catalysts, chemical sorbents and fillers in polymer–clay nanocomposites. The layered structure of these clays has motivated research into further applications of these low-cost materials, including use as ion exchange media and solid-state ionic conductors. In these applications, the mechanical properties of MMT are key when considering long-term, reliable performance. Previous studies have focused on the mechanical properties of nanocomposites with MMT as the minority component or pure MMT thin films. In this work, the microstructure and mechanical properties of pure MMT and majority MMT/polyethylene composites pressed into dense pellets are examined. Characterization methods such as X-ray diffraction, atomic force microscopy and scanning electron microscopy together with nanoindentation reveal important structure–property relationships in the clay-based materials. Utilizing these techniques, we have discovered that MMT processing impacts the layered microstructure, chemical stability and, critically, the elastic modulus and hardness of bulk MMT samples. In particular, the density of the pellets and the ordering of the clay platelets within them strongly influence the elastic modulus and hardness of the pellets. By increasing pressing force or by incorporating secondary components, the density, and therefore the mechanical properties, can be increased. If the layered structure of the clay is destroyed by exfoliation, the mechanical properties are compromised. Understanding these relationships will help guide new studies to engineer mechanically stable MMT-based materials for industrial applications.
Agencies that monitor for underground nuclear tests are interested in techniques that automatically characterize mining blasts to reduce the human analyst effort required to produce high-quality event bulletins. Waveform correlation is effective in finding similar waveforms from repeating seismic events, including mining blasts. We report the results of an experiment to detect and identify mining blasts for two regions, Wyoming (U.S.A.) and Scandinavia, using waveform templates recorded by multiple International Monitoring System stations of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO PrepCom) for up to 10 yr prior to the time of interest. We discuss approaches for template selection, threshold setting, and event detection that are specialized for characterizing mining blasts using a sparse, global network. We apply the approaches to one week of data for each of the two regions to evaluate the potential for establishing a set of standards for waveform correlation processing of mining blasts that can be generally applied to operational monitoring systems with a sparse network. We compare candidate events detected with our processing methods to the Reviewed Event Bulletin of the International Data Centre to assess potential reduction in analyst workload.
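A minimal sketch of the waveform-correlation detector at the core of such processing (illustrative; the window length, synthetic signals, and the 0.7 threshold are assumptions, not values from the study):

    import numpy as np

    def correlation_detector(template, stream, threshold=0.7):
        # Slide a waveform template along a continuous trace and flag samples
        # where the normalized cross-correlation exceeds the threshold.
        n = len(template)
        t = (template - template.mean()) / (template.std() * n)
        detections = []
        for i in range(len(stream) - n):
            w = stream[i:i + n]
            cc = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
            if cc > threshold:
                detections.append((i, cc))
        return detections

    # Synthetic example: a known blast signature buried in noise.
    rng = np.random.default_rng(3)
    template = np.sin(np.linspace(0, 20, 200)) * np.hanning(200)
    stream = rng.normal(0, 0.2, 5000)
    stream[3000:3200] += template
    print(correlation_detector(template, stream)[:3])

Template selection and threshold setting, the specialized steps the abstract highlights, amount to choosing which past blast waveforms to use as templates and how high to set this correlation threshold for a sparse, global network.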
Stainless steels are susceptible to localized forms of corrosion attack, such as pitting. The size and lifetime of a nucleated pit can vary, depending on a critical potential or current density criterion that determines whether the pit repassivates or continues growing. This work uses finite element method (FEM) modeling to compare the critical pit radii predicted by thermodynamic and kinetic repassivation criteria. Experimental electrochemical boundary conditions are used to capture the active pit kinetics. Geometric and environmental parameters, such as the pit shape and size (analogous to additively manufactured lack-of-fusion pores), solution concentration, and water layer thickness, were considered to assess their impact on the pit repassivation criterion. The critical pit radius (the transition point from stable growth to repassivation) predicted for a hemispherical pit was larger when using the repassivation potential (Erp) criterion than when using the current density criterion (pit stability product). Incorporating both the pit stability product and Erp into its calculations, the analytical maximum pit model predicted a critical radius a factor of two more conservative than the FEM approach under the conditions studied herein. The complex pits representing lack-of-fusion pores were shown to have minimal impact on the critical radius in atmospheric conditions.
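For background, the two repassivation criteria being compared take the classical forms (textbook expressions; the specific thresholds are alloy- and environment-dependent and not quoted from the paper):

\[ i_{pit} \cdot r \geq (i \cdot r)_{crit} \quad \text{(kinetic: pit stability product)}, \qquad E_{surf} > E_{rp} \quad \text{(thermodynamic: repassivation potential)}, \]

where i_pit is the pit current density, r the pit radius, and E_surf the potential at the pit surface. A pit keeps growing while its criterion is satisfied, and the critical radius is the size at which growth hands off to repassivation, which is why the two criteria can predict different critical radii for the same pit geometry.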
The progression of wind turbine technology has led to wind turbines being highly optimized machines, often approaching their theoretical maximum production capabilities. When placed together in arrays to make wind farms, however, they are subject to wake interference that greatly reduces downstream turbines' power production, increases structural loading and maintenance, reduces their lifetimes, and ultimately increases the levelized cost of energy. The development of techniques to manage wakes and operate ever larger arrays of turbines more efficiently is now a crucial field of research. Herein, four wake management techniques in various states of development are reviewed: axial induction control, wake steering, the combination of the two, and active wake control. Each is reviewed in terms of its control strategies and its use for power maximization, load reduction, and ancillary services. By evaluating existing research, several directions for future research are suggested.
The purpose of the Sandia CIO Cloud Strategy is to establish the strategic direction for the adoption of cloud services and technologies as the prevailing IT solution for Sandia National Laboratories. Sandia's Chief Information Officer (CIO) will champion unified, site-wide adoption of cloud and will amplify business and mission impacts across the Labs. Sandia's CIO Cloud Strategy aligns with the Federal Cloud Computing Strategy (Cloud Smart) and the Sandia Management and Operating Contract (Prime Contract).
Poondla, Yasvanth; Goldstein, David; Varghese, Philip; Clarke, Peter; Moore, Christopher H.
The goal of this work is to build up the capability of quasi-particle simulation (QuiPS), a novel flow solver, such that it can adequately model the rarefied portion of an atmospheric reentry trajectory. Direct simulation Monte Carlo (DSMC) is the conventional solver for such conditions but struggles to resolve transient flows, trace species, and high-level internal energy states due to stochastic noise. QuiPS is a Boltzmann solver that describes a system with a discretized, truncated velocity distribution function. The resulting fixed-velocity, variable-weight quasi-particles enable smooth variation of macroscopic properties. The distribution-function description enables the use of a variance-reduced collision model, greatly reducing expense near equilibrium. This work presents the addition of a neutral air chemistry model to QuiPS and some demonstrative 0D simulations. The explicit representation of internal-state distributions in QuiPS reveals some of the flaws in existing physics models. Variance reduction, a key feature of QuiPS, can greatly reduce the expense of multi-dimensional calculations, but it is only cheaper when the gas composition is near chemical equilibrium.