A new non-neutral generalized Ohm's law (GOL) model for atomic plasmas is presented. This model differs from previous models of this type in that quasi-neutrality is not assumed at any point. Collisional effects due to ionization, recombination, and elastic scattering are included, and an expression for the associated plasma conductivity is derived. An initial set of numerical simulations is considered that compares the GOL model to a two-fluid model in the ideal (collisionless) case. The results demonstrate that solutions obtained from the two models are essentially indistinguishable in most cases when the ion-electron mass ratio is within the range of physical values for atomic plasmas. Additionally, some limitations of the model are discussed.
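For orientation, a sketch of the familiar quasi-neutral generalized Ohm's law is given below; this is not the paper's non-neutral form, but it indicates the terms such models balance. Here n is the (assumed common) number density, p_e the electron pressure, and η the resistivity:

```latex
\mathbf{E} + \mathbf{u}\times\mathbf{B}
  = \eta\,\mathbf{J}
  + \frac{1}{n e}\left(\mathbf{J}\times\mathbf{B} - \nabla p_e\right)
  + \frac{m_e}{n e^2}\,\frac{\partial \mathbf{J}}{\partial t}
```

The non-neutral model described in the abstract instead evolves ion and electron densities separately, so no single n appears and quasi-neutrality (n_i ≈ n_e) is never invoked.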
This paper describes the implementation of the stress-fluctuation technique in the LAMMPS code to compute the anisotropic thermal (finite-temperature) elastic constant tensor of materials. The implementation provides both the analytical fluctuation expressions and a generic numerical-derivative method. The former makes the extension to new potentials straightforward, as it requires writing code only for the second derivatives of each energy term with respect to distance, angle, etc. The latter provides a generic interface for computing an accurate approximation of the elastic constants for any potential already implemented in LAMMPS. We show how both methods compare with the direct deformation computation in several test cases and discuss the advantages and limitations of the implementation.
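As a hedged illustration of the numerical-derivative idea (not the actual LAMMPS commands), the sketch below estimates the 6x6 elastic constant matrix by central differences of the time-averaged stress with respect to an applied Voigt strain; `average_stress` is a hypothetical placeholder for whatever routine equilibrates the strained cell and returns its average stress.

```python
import numpy as np

def elastic_constants_fd(average_stress, eps=1.0e-4):
    """Central-difference estimate of the 6x6 elastic constant matrix.

    average_stress(strain) is a hypothetical placeholder routine that
    applies a small homogeneous strain, given as a Voigt 6-vector, to the
    equilibrated cell, averages the stress over a simulation, and returns
    it as a Voigt 6-vector.
    """
    C = np.zeros((6, 6))
    for j in range(6):
        d = np.zeros(6)
        d[j] = eps
        # C[:, j] ~ d(sigma)/d(strain_j), estimated by central differences.
        C[:, j] = (average_stress(+d) - average_stress(-d)) / (2.0 * eps)
    return C
```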
Solving large numbers of small linear systems is increasingly becoming a bottleneck in computational science applications. While dense linear solvers for such systems have been studied before, batched sparse linear solvers are only starting to emerge. In this paper, we discuss algorithms for solving batched sparse linear systems and their implementation in the Kokkos Kernels library. The new algorithms are performance portable and map well to the hierarchical parallelism available in modern accelerator architectures. The sparse matrix-vector product (SPMV) kernel is the main performance bottleneck of the Krylov solvers we implement in this work, so the implementation of the batched SPMV and its performance are discussed thoroughly in this paper. The implemented kernels are tested on different Central Processing Unit (CPU) and Graphics Processing Unit (GPU) architectures. We also develop batched Conjugate Gradient (CG) and batched Generalized Minimum Residual (GMRES) solvers on top of the batched SPMV. Our proposed solver solves 20,000 sparse linear systems on V100 GPUs with mean speedups of 76x over a parallel sparse solver applied to a single block-diagonal system assembled from all the small systems, and 924x over solving the small systems one at a time. We see a mean speedup of 0.51x compared to the dense batched solver of cuSOLVER on the V100, while using far less memory. A thorough performance evaluation on three different architectures and an analysis of the performance are presented.
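The sketch below illustrates the batched SPMV idea in plain NumPy, assuming the common case in which every matrix in the batch shares a single sparsity pattern and only the values differ; it is an illustration only, not the Kokkos Kernels interface.

```python
import numpy as np

def batched_csr_spmv(vals, col_idx, row_ptr, X):
    """Batched sparse matrix-vector product Y[b] = A[b] @ X[b].

    All matrices share one CSR sparsity pattern (col_idx, row_ptr);
    vals has shape (n_batch, nnz) and X has shape (n_batch, n_rows).
    """
    n_batch, n_rows = X.shape
    Y = np.zeros_like(X)
    for row in range(n_rows):
        start, end = row_ptr[row], row_ptr[row + 1]
        cols = col_idx[start:end]
        # One vectorized update handles this row for every system in the batch.
        Y[:, row] = np.einsum("bk,bk->b", vals[:, start:end], X[:, cols])
    return Y
```

On an accelerator the same structure maps naturally to hierarchical parallelism, with batch entries distributed across teams and rows or nonzeros across threads within a team.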
Fluid–structure interactions were measured between a representative control surface and the hypersonic flow deflected by it. The control surface is simplified as a spanwise-finite ramp placed on a longitudinal slice of a cone. The front surface of the ramp contains a thin panel designed to respond to the unsteady fluid loading arising from the shock-wave/boundary-layer interactions. Experiments were conducted at Mach 5 and Mach 8 with ramps of different angles. High-speed schlieren captured the unsteady flow dynamics, and accelerometers behind the thin panel measured its structural response. Panel vibrations were dominated by natural modes excited by the broadband aerodynamic fluctuations in the flowfield. However, increased structural response was observed in two distinct flow regimes: 1) attached or small-separation interactions, where the transitional regime induced the strongest panel fluctuations, in agreement with the observation of increased convective undulations or bulges in the separation shock generated by the passage of turbulent spots; and 2) large separated interactions, where shear-layer flapping in the laminar regime produced a strong panel response at the flapping frequency. In addition, panel heating during the experiments caused a downward shift in the panel's natural-mode frequencies.
Induced seismicity is an inherent risk associated with geologic carbon storage (GCS) in deep rock formations, which may contain undetected faults prone to failure. Modeling-based risk assessments have been used to quantify the potential for injection-induced seismicity, but they have typically simplified multiscale geologic features or neglected coupled multiphysics mechanisms because of uncertainty in field data and the computational cost of field-scale simulations, which may limit reliable prediction of the seismic hazard posed by industrial-scale CO2 storage. In particular, the degree of lateral continuity of the stratigraphic interbedding below the reservoir and depth-dependent fault permeability can enhance or inhibit pore-pressure diffusion and the corresponding poroelastic stressing along a basement fault. This study presents a rigorous modeling scheme with the optimal geological and operational parameters that need to be considered in seismic monitoring and mitigation strategies for safe GCS.
Characterizing explosion sources and differentiating between earthquakes and underground explosions using distributed seismic networks becomes non-trivial when explosions are detonated in cavities or in heterogeneous ground material. Moreover, there is little understanding of how changes in subsurface physical properties affect the far-field waveforms we record and use to infer information about the source. Simulations of underground explosions and the resultant ground motions can be a powerful tool to systematically explore how different subsurface properties affect far-field waveform features, but added variables arising from how we choose to model the explosions can confound interpretation. To assess how both subsurface properties and algorithmic choices affect the seismic wavefield and the estimated source functions, we ran a series of 2-D axisymmetric non-linear numerical explosion experiments and wave propagation simulations that explore a wide array of parameters. We then inverted the synthetic far-field waveform data using a linear inversion scheme to estimate source–time functions (STFs) for each simulation case. We applied principal component analysis (PCA), an unsupervised machine learning method, to both the far-field waveforms and the STFs to identify the factors that contribute most to the variance in the waveform data and to the differences between cases. For the far-field waveforms, the largest variance occurs in the shallower radial receiver channels in the 0–50 Hz frequency band. For the STFs, both the peak amplitude and the rise times across different frequencies contribute to the variance. We find that the ground equation of state (i.e. lithology and rheology) and the explosion emplacement conditions (i.e. tamped versus cavity) have the greatest effect on the variance of the far-field waveforms and STFs, with the ground yield strength and fracture pressure being secondary factors. Differences in the PCA results between the far-field waveforms and the STFs may be due to near-field non-linearities of the source that are not accounted for in the estimation of the STFs and could be associated with the yield strength, fracture pressure, cavity radius and cavity shape parameters. Other algorithmic parameters are found to cause less variance in both the far-field waveforms and the STFs, meaning that the choices made in how we model explosions matter less than the physical properties; this is encouraging for the further use of explosion simulations to study how physical Earth properties affect seismic waveform features and estimated STFs.
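The PCA step can be pictured with the short sketch below, which uses an SVD of the centered data matrix; it is a generic illustration, not the authors' processing code, and the array shapes are assumptions.

```python
import numpy as np

def pca_modes(waveforms, n_modes=3):
    """PCA of an ensemble of waveforms (or STFs) via SVD.

    waveforms: array of shape (n_cases, n_samples), one row per simulation.
    Returns the leading principal components, the score of each case on
    those components, and the fraction of variance each mode explains.
    """
    X = waveforms - waveforms.mean(axis=0)        # remove the ensemble mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    scores = U[:, :n_modes] * s[:n_modes]         # projection of each case
    return Vt[:n_modes], scores, explained[:n_modes]
```

Examining how the scores vary with simulation parameters (e.g. equation of state, emplacement) is one way such an analysis links waveform variance back to the controlling physical factors.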
Diesel piston-bowl shape is a key design parameter that affects spray-wall interactions and turbulent flow development, and in turn the engine's thermal efficiency and emissions. It is hypothesized that thermal efficiency can be improved by enhancing squish-region vortices, which are thought to promote fuel-air mixing and thereby faster heat-release rates. However, the strength and longevity of these vortices decrease with advanced injection timings for typical stepped-lip (SL) piston geometries. Dimpled stepped-lip (DSL) pistons enhance vortex formation at early injection timings. Previous engine experiments with such a bowl showed 1.4% thermal efficiency gains over an SL piston, but soot increased dramatically [SAE 2022-01-0400]. In a previous study, a new DSL bowl was designed using non-combusting computational fluid dynamics simulations. This improved DSL bowl is predicted to promote stronger, more rotationally energetic vortices than the baseline DSL piston: it employs shallower, narrower, and more steeply curved dimples that are placed farther out into the squish region. In the current experimental study, this improved bowl is tested in a medium-duty diesel engine and compared against the SL piston over an injection-timing sweep at low-load and part-load operating conditions. No substantial thermal efficiency gains are achieved at the early injection timing with the improved DSL design, but soot emissions are lowered by 45% relative to the production SL piston, likely due to improved air utilization and soot oxidation. However, these benefits are lost at late injection timings, where the DSL piston yields a lower thermal efficiency than the SL piston. Energy-balance analyses show higher wall heat transfer with the DSL piston than with the SL piston despite a 1.3% reduction in piston surface area. Vortex enhancement may therefore not lead to improved efficiency, as more energetic squish-region vortices can incur higher convective heat-transfer losses.
Protecting against multi-step attacks with uncertain start times and durations forces defenders into an indefinite, always-ongoing, resource-intensive response. To allocate resources effectively, the defender must analyze and respond to an uncertain stream of potentially undetected, multiple multi-step attacks and take measures of attack and response intensity over time into account. Such a response requires estimating overall attack-success metrics and evaluating the effect of defender strategies and actions associated with specific attack steps on those metrics. We present GPLADD, a novel game-theoretic approach to estimating attack metrics, and demonstrate it on attack data derived from MITRE's ATT&CK Framework and other sources. In GPLADD, the time to complete attack steps is explicit; the attack dynamics emerge from the attack graph and the attacker's and defender's capabilities and strategies, and therefore reflect the 'physics' of attacks. The time the attacker takes to complete an attack step is drawn from a probability distribution determined by the attacker's and defender's strategies and capabilities. This makes time a physical constraint on attack-success parameters and enables comparing different defender resource-allocation strategies across different attacks. We solve for attack-success metrics by approximating attacker-defender games as discrete-time Markov chains and show how to evaluate the return on detection investments associated with different attack steps. We apply GPLADD to MITRE's APT3 data from the ATT&CK Framework and show that there are substantial and unintuitive differences in estimated real-world vendor performance against a simplified APT3 attack. We focus on metrics that reflect attack difficulty versus the attacker's ability to remain hidden in the system after gaining control. This enables practical defender optimization and resource allocation against multi-step attacks.
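The discrete-time Markov-chain approximation reduces attack-success estimation to a standard absorbing-chain calculation, sketched below; the state space, labels, and transition probabilities here are illustrative stand-ins, not the GPLADD model itself.

```python
import numpy as np

def success_probability(P, start, success, failure):
    """Probability that an absorbing Markov chain ends in the success state.

    P: (n, n) one-step transition matrix over attack-progress states, with
    `success` and `failure` absorbing. Uses the fundamental matrix
    N = (I - Q)^-1 of the transient block Q.
    """
    n = P.shape[0]
    transient = [i for i in range(n) if i not in (success, failure)]
    Q = P[np.ix_(transient, transient)]     # transient-to-transient block
    R = P[np.ix_(transient, [success])]     # transient-to-success column
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    absorb = (N @ R).ravel()                # absorption probability per transient state
    return absorb[transient.index(start)]

# Toy 4-state example: 0 = initial foothold, 1 = lateral movement,
# 2 = attack succeeds (absorbing), 3 = attacker evicted (absorbing).
P = np.array([[0.6, 0.3, 0.0, 0.1],
              [0.1, 0.5, 0.3, 0.1],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(success_probability(P, start=0, success=2, failure=3))
```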
This document provides the instructions for participating in the 2021 blind photovoltaic (PV) modeling intercomparison organized by the PV Performance Modeling Collaborative (PVPMC). It describes the system configurations, metadata, and other information necessary for the modeling exercise. The practical details of the validation datasets are also described. The datasets were published online in open access in April 2023, after the analysis of the results was completed.
In magneto-inertial fusion, the ratio of the characteristic fuel length perpendicular to the applied magnetic field, R, to the α-particle Larmor radius, Qα, is a critical parameter setting the scale of electron thermal-conduction loss and charged burn-product confinement. Using a previously developed deep-learning-based Bayesian inference tool, we obtain the magnetic-field–fuel-radius product BR ∝ R/Qα from an ensemble of 16 magnetized liner inertial fusion (MagLIF) experiments. The observed trends in BR are consistent with the relative trade-offs between compression and flux loss, as well as with the impact of mix, predicted by 1D resistive radiation-magnetohydrodynamic simulations in all but two experiments, for which 3D effects are hypothesized to play a significant role. Finally, we explain the relationship between BR and the generalized Lawson parameter χ. Our results indicate the ability to improve performance in MagLIF through careful tuning of experimental inputs, while also highlighting key risks from mix and 3D effects that must be mitigated in scaling MagLIF to higher currents with a next-generation driver.
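The proportionality quoted above follows from the standard expression for the Larmor radius; writing the α-particle birth energy as E_α and its charge as Z_α e (these symbols are ours, not necessarily the paper's notation):

```latex
\varrho_\alpha = \frac{\sqrt{2 m_\alpha E_\alpha}}{Z_\alpha e B}
\quad\Longrightarrow\quad
\frac{R}{\varrho_\alpha} = \frac{Z_\alpha e}{\sqrt{2 m_\alpha E_\alpha}}\, BR ,
```

so for a fixed birth energy the confinement parameter R/Qα is proportional to the product BR, which is why BR is the quantity inferred from the experiments.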
We propose primal–dual mesh optimization algorithms that overcome shortcomings of the standard algorithm while retaining some of its desirable features. “Hodge-Optimized Triangulations” defines the “HOT energy” as a bound on the discretization error of the diagonalized Delaunay Hodge star operator. The HOT energy is a natural choice of objective function, but it is unstable for both mathematical and algorithmic reasons: it has minima at collapsed edges, and its extrapolation to non-regular triangulations is inaccurate and has unbounded minima. We propose a different extrapolation with a stronger theoretical foundation, and also avoid extrapolation altogether by recalculating the objective just beyond the flip threshold. We further propose new objectives, based on normalizations of the HOT energy, with barriers against edge collapses and other undesirable configurations. We then propose mesh improvement algorithms that couple these objectives with discrete operations: when HOT optimization nearly collapses an edge, we actually collapse the edge; otherwise, we use the barrier objective to update positions and weights and to remove vertices. By combining discrete connectivity changes with continuous optimization, we more fully explore the space of possible meshes and obtain higher-quality solutions.
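The coupling of continuous optimization with discrete operations can be summarized by the loop sketched below; every method name is a hypothetical placeholder chosen for illustration, not an existing API.

```python
def improve_mesh(mesh, collapse_tol=1e-3, max_passes=10):
    """Sketch of the combined continuous/discrete improvement loop.

    `mesh` is assumed (hypothetically) to provide smooth(), shortest_edge(),
    collapse_edge(), and remove_vertex_if_beneficial(); these stand in for
    whatever mesh data structure is actually used.
    """
    for _ in range(max_passes):
        # Continuous step: move vertex positions and weights downhill on the
        # barrier objective (a normalized HOT energy with collapse barriers).
        mesh.smooth(objective="barrier_hot")

        # Discrete step: if optimization has driven an edge nearly to zero
        # length, perform the topological edge collapse explicitly.
        edge = mesh.shortest_edge()
        if edge.length() < collapse_tol * mesh.mean_edge_length():
            mesh.collapse_edge(edge)

        # Vertex removal, again scored with the barrier objective.
        mesh.remove_vertex_if_beneficial(objective="barrier_hot")
    return mesh
```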
Interval Assignment (IA) is the problem of selecting the number of mesh edges (intervals) for each curve for conforming quad and hex meshing. The interval vector x is fundamentally integer-valued. Many other approaches perform floating-point numerical optimization and then convert the solution into an integer one, which is slow and error-prone. We avoid such steps: we start integer and stay integer. Incremental Interval Assignment (IIA) uses integer linear algebra (the Hermite normal form) to find an initial solution to the meshing constraints, satisfying the integer matrix equation Ax=b. Solving for the reduced row echelon form provides integer vectors spanning the nullspace of A. We add vectors from the nullspace to improve the initial solution while maintaining Ax=b. Heuristics find good integer linear combinations of nullspace vectors that provide strict improvement toward variable bounds or goals. IIA always produces an integer solution if one exists. In practice we usually achieve solutions close to the user goals, but there is no guarantee that the solution is optimal, nor even that it satisfies the variable bounds, e.g., has positive intervals. We describe several algorithmic changes since the first publication that tend to improve the final solution. The software is freely available.
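A toy example of the "start integer, stay integer" strategy is shown below; the constraint matrix, goals, and greedy search are deliberately simplistic stand-ins for the HNF/RREF machinery and heuristics described above.

```python
import numpy as np

# Three curves in a chain must carry equal interval counts: A x = b.
A = np.array([[1, -1,  0],
              [0,  1, -1]])
b = np.array([0, 0])

x = np.array([1, 1, 1])             # integer particular solution, A @ x == b
nullspace = [np.array([1, 1, 1])]   # integer vector spanning the nullspace of A
goals = np.array([4, 4, 4])         # user-requested interval counts

# Greedily add integer multiples of nullspace vectors while doing so moves x
# strictly closer to the goals; A @ x == b holds at every step.
improved = True
while improved:
    improved = False
    for m in nullspace:
        for step in (+1, -1):
            trial = x + step * m
            if np.abs(trial - goals).sum() < np.abs(x - goals).sum():
                x, improved = trial, True

print(x, A @ x)  # -> [4 4 4] [0 0]
```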