Probabilistic Interpretation of Improved Neural Operators for Large-Scale Geological Carbon Storage
Abstract not provided.
Abstract not provided.
Abstract not provided.
Securing satellite groundstations against cyber-attacks is vital to national security missions. However, these cyber threats are constantly evolving: as vulnerabilities are discovered and patched, new vulnerabilities are discovered and exploited. To automate the process of discovering existing vulnerabilities and the means to exploit them, this report presents a reinforcement learning framework. We demonstrate that this framework can learn to successfully navigate an unknown network and detect nodes of interest despite the presence of a moving target defense. The agent then exfiltrates a file of interest from the node as quickly as possible. The framework also incorporates a defensive software agent that learns to impede the attacking agent's progress. This setup allows the agents to work against each other and improve their abilities. We anticipate that this capability will help uncover unforeseen vulnerabilities and the means to mitigate them. The modular nature of the framework enables users to swap out learning algorithms and modify the reward functions in order to adapt the learning tasks to various use cases and environments. Several algorithms, viz., tabular Q-learning, deep Q-networks, proximal policy optimization, advantage actor-critic, and generative adversarial imitation learning, are explored for the agents and their results highlighted. The agent learns to solve the tasks in a lightweight abstract environment. Once the agent performs sufficiently well, it can be deployed in a minimega virtual machine environment (or a real network) with wrappers that map abstract actions to software commands. The agent also uses a local representation of the actions called a ‘slot-mechanism’, which allows the agent to learn in one network and generalize to different networks. The defensive agent learns to predict the actions taken by an offensive agent and uses that information to anticipate the threat.
This information can then be used either to raise an alarm or to take actions that thwart the attack. We believe that with appropriate reward design, a representative environment, and a suitable action set, this framework can be generalized to tackle other cybersecurity tasks. By sufficiently training these agents, we can anticipate vulnerabilities, leading to more robust future designs. We can also deploy automated defensive agents that help secure satellite groundstations and their vital national security missions.
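Several of the algorithms explored above build on tabular Q-learning. The following is a minimal sketch of that baseline on a toy task; the linear "network" whose last node holds the target file, the rewards, and the hyperparameters are illustrative assumptions, not the report's actual abstract environment or slot-mechanism.

```python
import random

# Toy environment: nodes 0..4 in a line; node 4 holds the file of interest.
N_STATES = 5
ACTIONS = (-1, 1)     # move to the previous / next node
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    reward = 1.0 if done else -0.01   # small step cost: exfiltrate quickly
    return nxt, reward, done

def train(episodes=1000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)                    # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
            s2, r, done = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            target = r + (0.0 if done else GAMMA * max(q[(s2, b)] for b in ACTIONS))
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

After training, the greedy policy moves toward the target node from every non-terminal state.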
Both human subject experiments and computational modeling and simulation have been used to study detection of deception. This work aims to combine these two methods by integrating empirically-derived information (from human subject experiments) into agent-based models to generate novel insights into the complex problems of detection of disinformation content. Computational experiments are used to simulate multiple scenarios for evaluation and decision-making regarding the validity of potentially deceptive scientific documents. Factors influencing the human agent behaviors in the model were identified through a human subject experiment that was conducted to evaluate and characterize decision making related to disinformation discernment. Correlation and regression analyses were used to translate insights from the human subject experiment to inform the parameterization of agent features and scenario development. Three scenarios were evaluated with the agent-based models to help evaluate the replicability of the simulations (validation analysis) and assess the influence of human agent and document features (sensitivity analyses). A replication of the human participant experiment demonstrated that the agent-based simulations compare favorably to empirical findings. The agent-based modeling was then used to conduct sensitivity analysis on the accuracy of deception detection as a function of document proportions and human agent features. Results indicate that precision values are adversely impacted when the proportion of deceptive documents is lower in the overall sample, whereas recall values are more sensitive to changes in human agent features. These findings indicate important nuances in accuracy evaluations that should be further considered (including consideration of potential alternate metrics) in future agent-based models of disinformation.
Additional areas for future exploration include extension of simulations to consider other ways to align the agent-based model design with psychological theory and inclusion of agent-agent interactions, especially as it pertains to sharing of scientific information within an organizational context.
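The base-rate effect on precision reported above can be illustrated with a short numerical sketch. The detector's sensitivity and specificity values below are assumed for illustration, not fitted to the human subject data.

```python
# Expected precision/recall of a fixed detector as the proportion of
# deceptive documents (prevalence) changes. Sensitivity and specificity
# are assumed illustrative values.

def precision_recall(prevalence, sensitivity=0.80, specificity=0.90, n=10_000):
    deceptive = prevalence * n
    genuine = n - deceptive
    tp = sensitivity * deceptive          # deceptive docs correctly flagged
    fp = (1 - specificity) * genuine      # genuine docs wrongly flagged
    fn = (1 - sensitivity) * deceptive    # deceptive docs missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

p_high, r_high = precision_recall(prevalence=0.50)
p_low, r_low = precision_recall(prevalence=0.05)
# Recall is unchanged by prevalence, but precision degrades sharply
# when deceptive documents are rare in the sample.
```

With a 50% deceptive share, precision is roughly 0.89; at a 5% share the same detector's precision falls below 0.30 while recall stays at 0.80, mirroring the sensitivity-analysis finding.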
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Computational and Mathematical Organization Theory
Abstract not provided.
Journal of Simulation
Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
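The metrics described above can be sketched on a toy causal structure diagram represented as a directed graph. The example graph, the McCabe-style formula M = E - N + 2P, and the operationalization of feedback density as the fraction of edges lying on a directed cycle are illustrative assumptions; the article's exact definitions may differ.

```python
# Causal structure diagram as adjacency lists: cause -> list of effects.

def reachable(graph, start):
    """Nodes reachable from `start` via one or more edges."""
    seen, stack = set(), [start]
    while stack:
        for v in graph.get(stack.pop(), ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def cyclomatic_complexity(graph):
    """McCabe-style M = E - N + 2P, with P = connected components."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    e = sum(len(vs) for vs in graph.values())
    undirected = {n: set() for n in nodes}
    for u, vs in graph.items():
        for v in vs:
            undirected[u].add(v)
            undirected[v].add(u)
    remaining, p = set(nodes), 0
    while remaining:
        p += 1
        start = remaining.pop()
        remaining -= reachable(undirected, start)
    return e - len(nodes) + 2 * p

def feedback_density(graph):
    """Fraction of edges on at least one directed cycle (u->v with v reaching u)."""
    edges = [(u, v) for u, vs in graph.items() for v in vs]
    on_cycle = sum(1 for u, v in edges if u in reachable(graph, v))
    return on_cycle / len(edges)

# Toy diagram: one feedback loop A -> B -> C -> A plus a dangling effect D.
causal = {"A": ["B", "D"], "B": ["C"], "C": ["A"]}
```

For this diagram the cyclomatic complexity is 2 and three of the four causal edges participate in feedback, giving a feedback density of 0.75.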
Grid operating security studies are typically employed to establish operating boundaries, ensuring secure and stable operation over a range of operating conditions under NERC guidelines. However, if these boundaries are severely violated, existing system security margins will be largely unknown, as will be a secure incremental dispatch path to higher security margins while continuing to serve load. As an alternative to complex optimizations over dynamic conditions, this work employs machine learning to identify a sequence of secure state transitions that place the grid in a higher degree of operating security with greater static and dynamic stability margins. Several reinforcement learning solution methods were developed using deep neural networks, including Deep Q-learning, MuZero, and the continuous-action algorithms Proximal Policy Optimization and Advantage Actor-Critic. The work is demonstrated on a power grid with three control dimensions but can be scaled in size and dimensionality, which is the subject of ongoing research.
The prevalence of COVID-19 is shaped by behavioral responses to recommendations and warnings. Available information on the disease determines the population’s perception of danger and thus its behavior; this information changes dynamically, and different sources may report conflicting information. We study the feedback between disease, information, and stay-at-home behavior using a hybrid agent-based/system dynamics model that incorporates evolving trust in sources of information. We use this model to investigate how divergent reporting and conflicting information can alter the trajectory of a public health crisis. The model shows that divergent reporting not only alters disease prevalence over time, but also increases polarization of the population’s behaviors and trust in different sources of information.
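The disease-information-behavior feedback described above can be sketched in a few lines of compartment-model code. The SIR-style structure, all parameter values, and the single "reporting bias" factor standing in for divergent sources are assumed simplifications of the hybrid model.

```python
# Toy SIR model where the contact rate falls as *reported* prevalence rises.
# report_bias < 1 models a source that downplays the outbreak.

def simulate(days=200, beta0=0.4, gamma=0.1, report_bias=1.0, response=20.0):
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(days):
        perceived = report_bias * i                  # prevalence as reported
        beta = beta0 / (1.0 + response * perceived)  # stay-at-home response
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

peak_accurate = simulate(report_bias=1.0)
peak_downplayed = simulate(report_bias=0.2)  # source under-reports prevalence
```

Under-reporting weakens the behavioral response, so the epidemic peaks higher, consistent with the qualitative claim that reporting shapes disease trajectory.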
Abstract not provided.
Abstract not provided.
In recent years, infections and damage caused by malware have increased at exponential rates. At the same time, machine learning (ML) techniques have shown tremendous promise in many domains, often outperforming human efforts by learning from large amounts of data. Results in the open literature suggest that ML is able to provide similar results for malware detection, achieving greater than 99% classification accuracy [49]. However, the same detection rates have not been achieved in deployed settings. Malware is distinct from many other domains in which ML has shown success in that (1) it purposefully tries to hide, leading to noisy labels, and (2) its behavior is often similar to that of benign software, differing only in intent, among other complicating factors. This report details the reasons for the difficulty of detecting novel malware by ML methods and offers solutions to improve the detection of novel malware.
Abstract not provided.
The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
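As a concrete, much simpler stand-in for the article's information-theoretic metrics, the idea of comparing models by causal structure rather than by output can be illustrated with a Jaccard similarity over causal edge sets. The toy "epidemic" graphs below are assumptions for illustration only.

```python
# Compare two models' causal structures by the overlap of their edge sets.

def edge_set(graph):
    """All directed cause -> effect edges in an adjacency-list graph."""
    return {(u, v) for u, vs in graph.items() for v in vs}

def structural_similarity(g1, g2):
    """Jaccard similarity of the two models' causal edge sets."""
    e1, e2 = edge_set(g1), edge_set(g2)
    return len(e1 & e2) / len(e1 | e2)

# Toy epidemic models: an aggregate SIR-style structure, and a variant
# adding a fear -> contacts behavioral feedback.
sir = {"infected": ["recovered", "contacts"],
       "contacts": ["infected"]}
sir_behavior = {"infected": ["recovered", "contacts", "fear"],
                "fear": ["contacts"],
                "contacts": ["infected"]}
```

The divergence between the two scores (1.0 for a model against itself, 0.6 here between variants) localizes exactly which causal assumptions, the fear pathway, differ between the models.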
Abstract not provided.
Abstract not provided.
Abstract not provided.
Computational and Mathematical Organization Theory
This report describes research conducted to use data science and machine learning methods to distinguish targeted genome editing from natural mutation and sequencer machine noise. Genome editing capabilities have been around for more than 20 years, and the efficiencies of these techniques have improved dramatically in the last 5+ years, notably with the rise of CRISPR-Cas technology. Whether or not a specific genome has been the target of an edit is a concern for U.S. national security. The research detailed in this report provides first steps to address this concern. A large amount of data is necessary for our research, so we invested considerable time collecting and processing it. We use an ensemble of decision tree and deep neural network machine learning methods, as well as anomaly detection, to detect genome edits given either whole exome or whole genome DNA reads. The edit detection results we obtained with our algorithms, tested against samples held out during training, are significantly better than random guessing, achieving high F1 and recall scores as well as high precision overall.
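The ensemble idea above can be sketched with a trivial majority vote. The threshold rules and made-up feature vectors below are illustrative placeholders; the report's actual members (decision trees, deep networks, anomaly detectors) are far richer.

```python
# Majority-vote ensemble sketch: several weak classifiers each vote on
# whether a sample looks "edited" (1) or "natural" (0).

def make_rule(feature_index, threshold):
    """Hypothetical weak classifier: fires when one feature exceeds a threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

def ensemble_predict(rules, x):
    votes = sum(rule(x) for rule in rules)
    return 1 if votes > len(rules) / 2 else 0   # majority vote -> "edited"

rules = [make_rule(0, 0.5), make_rule(1, 0.3), make_rule(2, 0.7)]
pred_edited = ensemble_predict(rules, (0.9, 0.8, 0.2))   # two of three fire
pred_natural = ensemble_predict(rules, (0.1, 0.2, 0.9))  # only one fires
```

Voting across heterogeneous members is one common way to trade individual-model variance for ensemble robustness, which matters when edit signatures are subtle relative to sequencing noise.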
IEEE Transactions on Reliability
Complex networks of information processing systems, or information supply chains, present challenges for performance analysis. We establish a mathematical setting in which a process within an information supply chain can be analyzed in terms of the functionality of the system's components. Principles of this methodology are rigorously defended and induce a model for determining the reliability of the various products in these networks. Our model permits cycles in the network, as long as the cycles do not contain negation. It is shown that our approach to reliability resolves the nonuniqueness caused by cycles in a probabilistic Boolean network. An iterative algorithm is given to find the reliability values of the model, using a process that can be fully automated. This automated method of discerning reliability is beneficial for systems managers. As a systems manager considers systems modification, such as the replacement of owned and maintained hardware systems with cloud computing resources, the need for comparative analysis of system reliability is paramount. The model is extended to handle conditional knowledge about the network, allowing one to make predictions of weaknesses in the system. Finally, to illustrate the model's flexibility over different forms, it is demonstrated on a system of components and subcomponents.
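The iterative, fully automatable computation described above can be sketched as a monotone fixed-point iteration. The update rule (a node works with its own probability and needs at least one working input, assumed independent), the topology, and all probability values below are illustrative assumptions, not the paper's actual model.

```python
# Fixed-point iteration for component reliabilities in a cyclic,
# negation-free network. Starting from 0 and iterating the monotone
# update selects a single fixed point, resolving cycle nonuniqueness.

def reliabilities(own_prob, inputs, tol=1e-12, max_iter=10_000):
    """Iterate r_n = p_n * (1 - prod over inputs m of (1 - r_m)) to a fixed point."""
    r = {n: 0.0 for n in own_prob}
    for _ in range(max_iter):
        new = {}
        for n, p in own_prob.items():
            ins = inputs.get(n, ())
            if ins:
                fail_all = 1.0
                for m in ins:
                    fail_all *= 1.0 - r[m]
                new[n] = p * (1.0 - fail_all)  # needs >= 1 working input
            else:
                new[n] = p                     # source node: own reliability only
        if max(abs(new[n] - r[n]) for n in new) < tol:
            return new
        r = new
    return r

# A source feeding a two-node cycle A <-> B (negation-free, as required).
own = {"src": 0.99, "A": 0.95, "B": 0.9}
deps = {"A": ["src", "B"], "B": ["A"]}
r = reliabilities(own, deps)
```

Despite the A-B cycle, the iteration converges to a single self-consistent assignment (here r_A satisfies r_A = 0.9405 + 0.00855 r_A), which is the kind of nonuniqueness resolution the article formalizes.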