Protecting against multi-step attacks of uncertain start time and duration forces defenders into an indefinite, always-ongoing, resource-intensive response. To allocate resources effectively, the defender must analyze and respond to an uncertain stream of multiple, potentially undetected multi-step attacks and must take the intensity of attack and response over time into account. Such a response requires estimating overall attack success metrics and evaluating how defender strategies and actions tied to specific attack steps affect those overall metrics. We present GPLADD, a novel game-theoretic approach to estimating attack metrics, and demonstrate it on attack data derived from MITRE's ATT&CK Framework and other sources. In GPLADD, the time to complete attack steps is explicit; the attack dynamics emerge from the attack graph and the attacker-defender capabilities and strategies and therefore reflect the 'physics' of attacks. The time the attacker takes to complete an attack step is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This makes time a physical constraint on attack success parameters and enables comparison of different defender resource allocation strategies across different attacks. We solve for attack success metrics by approximating attacker-defender games as discrete-time Markov chains and show how to evaluate the return on detection investments associated with different attack steps. We apply GPLADD to MITRE's APT3 data from the ATT&CK Framework and show substantial and unintuitive differences in estimated real-world vendor performance against a simplified APT3 attack. We focus on metrics that reflect attack difficulty versus the attacker's ability to remain hidden in the system after gaining control. This enables practical defender optimization and resource allocation against multi-step attacks.
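As an illustration of the discrete-time Markov-chain approximation used for attack success metrics, the sketch below builds an absorbing chain over a notional three-step attack; the per-tick completion and detection probabilities and the horizon are hypothetical values chosen for illustration, not parameters from the APT3 analysis.

    import numpy as np

    # Hypothetical 3-step attack: at each time tick the attacker either completes
    # the current step (p_step), is detected (p_det), or remains in place.
    p_step = np.array([0.20, 0.10, 0.15])   # assumed per-tick completion probabilities
    p_det  = np.array([0.02, 0.05, 0.03])   # assumed per-tick detection probabilities

    n = len(p_step)
    S, D = n, n + 1                          # absorbing states: attack success, detection
    P = np.zeros((n + 2, n + 2))
    for i in range(n):
        nxt = S if i == n - 1 else i + 1
        P[i, nxt] = p_step[i]
        P[i, D]   = p_det[i]
        P[i, i]   = 1.0 - p_step[i] - p_det[i]
    P[S, S] = P[D, D] = 1.0

    # Probability the attack has succeeded (or been detected) within 200 ticks,
    # starting at the first attack step.
    state = np.zeros(n + 2); state[0] = 1.0
    for _ in range(200):
        state = state @ P
    print("P(success within horizon):", state[S])
    print("P(detected within horizon):", state[D])

Rerunning the same chain with a larger p_det on a single step gives a simple picture of how a detection investment tied to that step changes the overall success metric.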
A large body of work has demonstrated that parameterized artificial neural networks (ANNs) can efficiently describe ground states of numerous interesting quantum many-body Hamiltonians. However, the standard variational algorithms used to update or train the ANN parameters can get trapped in local minima, especially for frustrated systems, even when the representation is sufficiently expressive. We propose a parallel tempering method that facilitates escape from such local minima. This method involves training multiple ANNs independently, with each simulation governed by a Hamiltonian with a different “driver” strength, in analogy to quantum parallel tempering, and it incorporates an update step into the training that allows for the exchange of neighboring ANN configurations. We study instances from two classes of Hamiltonians to demonstrate the utility of our approach, using Restricted Boltzmann Machines as our parameterized ANN. The first instance is based on a permutation-invariant Hamiltonian whose landscape stymies the standard training algorithm by drawing it increasingly toward a false local minimum. The second instance is four hydrogen atoms arranged in a rectangle, an instance of the second-quantized electronic structure Hamiltonian discretized using Gaussian basis functions. We study this problem in a minimal basis set, which exhibits false minima that can trap the standard variational algorithm despite the problem’s small size. We show that augmenting the training with quantum parallel tempering helps to find good approximations to the ground states of these problem instances.
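The exchange step can be pictured as a replica-exchange sweep over ANNs trained at different driver strengths; the energy stand-in, the driver schedule, and the Metropolis-style acceptance rule in the sketch below are assumptions made for illustration, not the specific scheme of this work.

    import numpy as np

    rng = np.random.default_rng(0)

    def variational_energy(params, s):
        """Stand-in for a Monte Carlo estimate of <psi_params| H(s) |psi_params>,
        with H(s) = H_target + s * H_driver.  Replace with an RBM/VMC estimator."""
        return float(np.sum(params**2)) + s * float(np.sum(np.cos(params)))

    driver_strengths = np.linspace(0.0, 1.0, 8)      # one replica per driver strength
    replicas = [rng.normal(size=16) for _ in driver_strengths]

    def exchange_sweep(replicas, strengths, temperature=0.05):
        """Attempt swaps between neighboring replicas using a Metropolis-style rule
        based on the change in cross-evaluated energies (illustrative rule)."""
        for i in range(len(replicas) - 1):
            si, sj = strengths[i], strengths[i + 1]
            e_ii = variational_energy(replicas[i], si)
            e_jj = variational_energy(replicas[i + 1], sj)
            e_ij = variational_energy(replicas[i], sj)
            e_ji = variational_energy(replicas[i + 1], si)
            delta = (e_ij + e_ji) - (e_ii + e_jj)
            if delta <= 0 or rng.random() < np.exp(-delta / temperature):
                replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
        return replicas

    # Interleave exchange sweeps with ordinary per-replica variational updates.
    replicas = exchange_sweep(replicas, driver_strengths)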
HyRAM+ is a toolkit that includes fast-running models for unconstrained (i.e., no wall interactions) dispersion and non-premixed flames. The models were developed for use with hydrogen, but the toolkit was expanded in a recent release to include propane and methane. In this work we validate the dispersion and flame models for these additional fuels against data reported in the literature. The validation spans a range of release conditions, from subsonic to underexpanded jets and flames, over a range of mass flow rates. In general, the dispersion model works well for both propane and methane, although in some cases the predicted width of the jet/plume is wider than observed. The flame model tends to over-predict the induced buoyancy for low-momentum flames, while the radiative heat flux agrees reasonably well with the experimental data for both fuels. The models could be improved but give acceptable predictions of propane and methane behavior for the purposes of risk assessment.
Induced seismicity is an inherent risk associated with geologic carbon storage (GCS) in deep rock formations, which can contain undetected faults prone to failure. Modeling-based risk assessment has been used to quantify the potential for injection-induced seismicity, but such assessments have typically simplified multiscale geologic features or neglected coupled multiphysics mechanisms because of uncertainty in field data and the computational cost of field-scale simulations, which may limit reliable prediction of the seismic hazard posed by industrial-scale CO2 storage. The degree of lateral continuity of the stratigraphic interbedding below the reservoir and depth-dependent fault permeability can enhance or inhibit pore-pressure diffusion and the corresponding poroelastic stressing along a basement fault. This study presents a rigorous modeling scheme and identifies the geological and operational parameters that need to be considered in seismic monitoring and mitigation strategies for safe GCS.
Characterizing the shallow structure of the Rock Valley region of the Nevada National Security Site is a critical component of the Rock Valley Direct Comparison project. Geophysical data for the region are needed for operational decisions, to constrain geologic models used for simulation, and to facilitate the analysis of future explosive-source data. Local gravity measurements are a key piece of geophysical information that helps resolve the underlying geologic composition, fault structure, and density characteristics, yet in the Rock Valley region these measurements are sparse on the scale of the testbed. In this report, we present the details of a recent gravity data acquisition survey designed to collect a dense dataset in the region of interest that complements the existing gravity work while greatly enhancing its resolution. This dataset will be integrated with a complementary Los Alamos National Laboratory gravity collection and combined with the existing seismic data in a joint inversion. The measurements were conducted over two weeks with a portable gravimeter and high-resolution GPS and include repeat measurements at a USGS base station as well as reoccupation of gravity sites in the regional dataset. This collection of over 100 new, densely spaced gravity measurements will facilitate refinement of the existing Geologic Framework Model and directly complement newly acquired dense seismic data, ultimately improving the project’s ability to investigate the direct comparison of shallow earthquake and explosive sources.
This work presents measurements of liquid drop deformation and breakup time behind approximately conical shock waves and evaluates the predictive capabilities of low-order models and correlations developed using planar shock experiments. A conical shock was approximated by firing a Mach 4.5 bullet past a vertical column of water drops with a mean initial diameter of 192 µm. The time-resolved drop position and maximum transverse dimension were characterized using backlit stereo images taken at 500 kHz. The gas density and velocity fields experienced by the drops were estimated using a Reynolds-averaged Navier-Stokes simulation of the bullet. Classical correlations predict drop breakup times and deformations that are in error by a factor of 3 or more. The Taylor analogy breakup (TAB) model predicts deformed drop diameters that agree within the confidence bounds of the ensemble-averaged experimental values when the dimensionless constant C2 = 2 is used instead of the accepted value C2 = 2/3. The results demonstrate that existing correlations are inadequate for predicting the drop response to the three-dimensional relaxation of the flowfield downstream of a conical-like shock and suggest the TAB model results represent a path toward improved predictions.
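To make the TAB comparison concrete, the standard TAB oscillator for the drop distortion y can be integrated directly; in the sketch below the post-shock gas state and the mapping from y to a maximum transverse diameter through C2 are illustrative assumptions, not the experimental conditions of this work.

    # Illustrative integration of the Taylor analogy breakup (TAB) oscillator for
    # a single water drop, using the standard TAB constants.
    rho_l, sigma, mu_l = 1000.0, 0.072, 1.0e-3       # water properties (SI units)
    d0 = 192e-6; r = d0 / 2.0                        # initial drop diameter / radius
    rho_g, u_rel = 2.0, 700.0                        # assumed post-shock gas state
    CF, Ck, Cd, Cb = 1.0 / 3.0, 8.0, 5.0, 0.5        # standard TAB constants
    C2 = 2.0                                         # constant examined in this work

    forcing = (CF / Cb) * rho_g * u_rel**2 / (rho_l * r**2)
    omega2 = Ck * sigma / (rho_l * r**3)             # restoring (surface tension) term
    damping = Cd * mu_l / (rho_l * r**2)             # viscous damping term

    y, ydot, t, dt = 0.0, 0.0, 0.0, 1.0e-8
    while y < 1.0 and t < 1.0e-3:                    # y >= 1 is the TAB breakup criterion
        yddot = forcing - omega2 * y - damping * ydot
        ydot += yddot * dt
        y += ydot * dt
        t += dt

    d_max = d0 * (1.0 + C2 * y)                      # assumed mapping to transverse diameter
    print(f"breakup time ~ {t*1e6:.1f} us, max transverse diameter ~ {d_max*1e6:.0f} um")

Swapping C2 = 2 for C2 = 2/3 in the last step shows directly how sensitive the predicted deformed diameter is to this single constant.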
In this work, we use the Brillouin flow analytic framework to examine the physics of magnetically insulated transmission lines (MITLs). We derive a model applicable to any particle species, including both positive and negative ions, in planar and cylindrical configurations. We then show how to self-consistently solve for two species simultaneously, using magnetically insulated electrons and positive ions as an example. We require both layers to be spatially separated and magnetically insulated (mutually magnetically insulated); for a 7.5 cm gap with a 2 MV bias voltage, this condition requires magnetic fields in excess of 2.73 T. We see a close match between mutually insulated MITL performance and “superinsulated” (high degree of magnetic insulation) electron-only theory, as may be expected for such high magnetic fields. However, the presence of ions leads to several novel effects: (1) In contrast to electron-only theory, total electron currents increase rather than decrease as the degree of magnetic insulation becomes stronger; the common assumption of neglecting electrons in superinsulated MITL operation must therefore be revisited when ions are present, and we calculate up to a 20× current enhancement. (2) The electron flow layer thickness increases by up to a factor of two due to ion space-charge enhancement. (3) The contributions of both ions and electrons to the MITL flow impedance are calculated; the flow impedance drops by over 50% when ions fill the gap, which can cause significant reflections at the load and degrade performance if not anticipated. Additional effects and results from the inclusion of the ion layer are discussed.
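The quoted 2.73 T threshold is consistent with the planar Hull cutoff B* = (mc/qd) sqrt(gamma^2 - 1), with gamma = 1 + qV/mc^2, evaluated for the ion species rather than the electrons, under the assumption that the positive ions are protons. The short check below evaluates this cutoff for both species at 2 MV across a 7.5 cm gap.

    import numpy as np

    c = 2.998e8                      # speed of light, m/s
    V = 2.0e6                        # gap voltage, volts
    d = 0.075                        # gap width, m
    rest_energy_eV = {"electron": 0.511e6, "proton": 938.3e6}

    for species, mc2 in rest_energy_eV.items():
        gamma = 1.0 + V / mc2
        # With mc^2 in eV and charge e, mc/(e d) = mc^2 / (c d) in tesla.
        B_crit = (mc2 / (c * d)) * np.sqrt(gamma**2 - 1.0)
        print(f"{species}: Hull cutoff ~ {B_crit:.2f} T")
    # -> electron ~ 0.11 T, proton ~ 2.73 T: insulating both layers is set by the
    #    ion cutoff, matching the field threshold quoted above.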
Interval Assignment (IA) is the problem of selecting the number of mesh edges (intervals) for each curve for conforming quad and hex meshing. The vector of intervals x is fundamentally integer-valued. Many other approaches perform numerical optimization and then convert a floating-point solution into an integer solution, which is slow and error-prone. We avoid such steps: we start integer and stay integer. Incremental Interval Assignment (IIA) uses integer linear algebra (Hermite normal form) to find an initial solution to the meshing constraints, satisfying the integer matrix equation Ax=b. Solving for reduced row echelon form provides integer vectors spanning the nullspace of A. We add vectors from the nullspace to improve the initial solution while maintaining Ax=b. Heuristics find good integer linear combinations of nullspace vectors that provide strict improvement toward variable bounds or goals. IIA always produces an integer solution if one exists. In practice we usually achieve solutions close to the user goals, but there is no guarantee that the solution is optimal, or even that it satisfies the variable bounds, e.g., has positive intervals. We describe several algorithmic changes since the first publication that tend to improve the final solution. The software is freely available.
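A minimal sketch of the start-integer, stay-integer idea on a toy IA instance (a single mapped-face constraint, x1 + x2 = x3 + x4, with hypothetical goals; this is not the IIA implementation itself): take an integer particular solution of Ax=b, compute integer vectors spanning the nullspace of A, and greedily add them whenever they strictly improve the deviation from the goals while respecting the lower bound of one interval per curve.

    from math import lcm
    from sympy import Matrix

    A = Matrix([[1, 1, -1, -1]])     # opposite chains of a mapped face must match
    b = Matrix([0])
    goals = [3, 5, 4, 2]             # hypothetical per-curve interval goals

    # Integer vectors spanning the nullspace of A (clear denominators to stay integer).
    null_basis = []
    for v in A.nullspace():
        scale = lcm(*(int(entry.q) for entry in v))
        null_basis.append(v * scale)

    # An integer particular solution of A x = b (easy to read off for this toy case).
    x = Matrix([3, 5, 4, 4])
    assert A * x == b

    def deviation(vec):
        return sum(abs(int(xi) - g) for xi, g in zip(vec, goals))

    # Greedy improvement by integer nullspace moves preserves A x = b and integrality.
    improved = True
    while improved:
        improved = False
        for v in null_basis:
            for step in (v, -v):
                cand = x + step
                if all(int(c) >= 1 for c in cand) and deviation(cand) < deviation(x):
                    x, improved = cand, True
    print("intervals:", list(x), "deviation from goals:", deviation(x))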