The research team developed models of Attentional Control (AC) that are distinct from existing modeling approaches in the literature. The goal was to enable the team to (1) make predictions about AC and human performance in real-world scenarios and (2) make predictions about individual characteristics based on human data. First, the team developed a proof-of-concept approach for representing an experimental design and human-subjects data in a Bayesian model, then demonstrated an ability to draw inferences about conditions of interest relevant to real-world scenarios. This effort was successful: we were able to make reasonable (i.e., supported by behavioral data) inferences about conditions of interest and to develop a risk model for AC, where risk is defined as a mismatch between AC and attentional demand. The team additionally defined a path forward for a human-constrained machine learning (HCML) approach to predict an individual's state based on performance data. The effort represents a successful first step in both modeling efforts, serves as a basis for future work, and has identified numerous opportunities for follow-on activities.
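A minimal sketch of the flavor of such a Bayesian analysis, assuming hypothetical condition labels, trial counts, and a conjugate Beta-Binomial structure (the team's actual model is not specified here):

```python
import numpy as np
from scipy import stats

# Hypothetical trial counts under two attentional-demand conditions (not the team's data).
correct = {"low_demand": 92, "high_demand": 71}
trials = {"low_demand": 100, "high_demand": 100}

# Conjugate Beta(1, 1) prior -> Beta posterior over each condition's accuracy.
post = {c: stats.beta(1 + correct[c], 1 + trials[c] - correct[c]) for c in correct}

# Monte Carlo estimate of P(accuracy under high demand < accuracy under low demand):
# a value near 1 signals a mismatch between attentional demand and available
# control, i.e., the "risk" quantity defined above.
rng = np.random.default_rng(0)
samples = {c: post[c].rvs(10_000, random_state=rng) for c in post}
print(np.mean(samples["high_demand"] < samples["low_demand"]))
```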
To date, disinformation research has focused largely on the production of false information, ignoring the suppression of select information. We term this alternative form of disinformation "information suppression": the withholding of facts with the intent to mislead. To detect information suppression, we focus on understanding the actors who withhold information. In this research, we use knowledge of human behavior to find signatures of different gatekeeping behaviors in text. Specifically, we build a model to classify different types of edits on Wikipedia using the added text alone, comparing a human-informed feature-engineering approach to a featureless algorithm. Being able to computationally distinguish gatekeeping behaviors is a first step toward identifying when information suppression is occurring.
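A minimal sketch of the comparison described above, assuming placeholder edit texts, labels, and feature cues (not the study's data or feature set):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical labeled Wikipedia edits: the added text and an edit-type label.
texts = ["reverted unsourced claim", "added citation to journal article",
         "removed section per talk page discussion", "fixed typo in infobox"] * 25
labels = ["gatekeeping", "content", "gatekeeping", "maintenance"] * 25

# "Featureless" baseline: the raw added text, vectorized directly.
raw_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
print("featureless:", cross_val_score(raw_model, texts, labels, cv=5).mean())

# Human-informed features: behaviorally motivated cues (edit length, policy language).
def engineer(text):
    return [len(text.split()),
            sum(w in text.lower() for w in ("revert", "removed", "per", "policy"))]

X = np.array([engineer(t) for t in texts])
print("engineered: ",
      cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean())
```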
Vaccination, and its alternative, vaccine refusal, are a classic example of behaviors driven by social norms and norm violations. Establishing how norms emerge, and under what circumstances people choose to violate them, is key to modeling epidemics. Interactions between individuals can lead to large-scale patterning of behavior (emergent phenomena). Because norm violations are revealed through human behavior, drawing on psychological theory and principles to predict those violations is a viable approach for building more human-constrained epidemiological models. As an example of the implications at scale, vaccine refusal is correlated with the spread of mis/disinformation about vaccine side effects. Given the complexities of network dynamics, the downstream effect is that if even a small group within a population is persuaded against vaccination, there is a reservoir from which disease outbreaks can propagate. This work will attempt to identify those psychological indicators, define circumstances that predict health behaviors, and identify potentially modifiable antecedents of health behavior and factors that influence changes toward health-protective behaviors.
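To make the reservoir argument concrete, a minimal well-mixed SIR sketch with illustrative parameters (a measles-like basic reproduction number of 15; not a fitted model, and it ignores the network clustering noted above):

```python
import numpy as np

# Minimal SIR sketch: a small unvaccinated group can sustain a large outbreak
# once refusal exceeds the herd-immunity margin. Parameters are illustrative.
beta, gamma, N, days = 1.5, 0.1, 100_000, 365   # R0 = beta/gamma = 15

def outbreak_size(refusal_rate):
    vaccinated = (1 - refusal_rate) * N          # assume vaccination blocks infection
    S, I, R = N - vaccinated - 10, 10.0, vaccinated
    for _ in range(days):                        # daily Euler steps
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return R - vaccinated                        # cumulative infections

for rate in (0.02, 0.05, 0.10, 0.20):
    print(f"refusal {rate:.0%}: ~{outbreak_size(rate):,.0f} infections")
```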
Data science encompasses a variety of scientific methods and processes for extracting insight from data drawn from various sources. The integration of interdisciplinary fields such as mathematics, statistics, information science, and computer science affords techniques to analyze large volumes of data to arrive at unique insights and make data-driven decisions (Sinelnikov et al., 2015) in real time. These techniques lend themselves to applications across many domains, including hazard assessments, analysis of near-miss data, and identification of leading and lagging indicators from past accidents. Benefits include efficiency due to improved data acquisition. Near-miss data represent an important source for identifying conditions that lead to accidents and for developing strategies to prevent them. Analysis of near-miss data sets can involve various techniques. This paper explores the use of data science to mine accident reports, with a special emphasis on near misses, to uncover occurrences that were not initially identified in the documentation. Data-science techniques such as text analysis facilitate searching large volumes of data to uncover patterns for more informed decisions. Regarding near-miss data, data-science techniques can be used to test the ability to uncover new hazards and hazardous preconditions, and the accuracy of those findings. Alongside the benefits of processing large data sets and uncovering new hazards, we also consider how these capabilities might influence safety culture.
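As a hedged sketch of this kind of text mining, with placeholder near-miss narratives and an assumed TF-IDF-plus-topic-model pipeline (not the paper's actual data or method):

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical near-miss narratives (placeholder text, not a real data set).
reports = [
    "ladder slipped on wet floor near loading dock",
    "forklift nearly struck worker at blind corner",
    "chemical splash avoided due to late goggle use",
    "scaffold plank cracked under load, no injury reported",
] * 10

# Vectorize the narratives, then factor them into recurring hazard themes
# that were never explicitly tagged in the source reports.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reports)
topics = NMF(n_components=3, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, comp in enumerate(topics.components_):
    top = [terms[j] for j in comp.argsort()[-4:][::-1]]
    print(f"hazard theme {i}: {', '.join(top)}")
```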
Malicious cyber-attacks have become increasingly prominent with the advance of technology and attack methods over the last decade. These attacks have the potential to bring down critical infrastructures, such as nuclear power plants (NPPs), which are so vital to the country that their incapacitation would have debilitating effects on national security, public health, or safety. Despite the devastating effects a cyber-attack could have on NPPs, it is unclear how control room operations would be affected in such a situation. In this project, the authors are collaborating with NPP operators to discern the impact of cyber-attacks on control room operations and lay out a framework to better understand control room operators' tasks and decision points. A cyber emulation of a digital control system was developed and coupled with a generic pressurized water reactor (GPWR) training simulator at Idaho National Laboratory. Licensed operators were asked to complete a series of scenarios on the simulator, some of which were purposely obfuscated; that is, indicators were purposely displaying inaccurate information. Of interest is how this obfuscation impacts the ability to keep the plant safe and how it affects operators' perceptions of workload and performance. Results, conclusions, and lessons learned from this pilot experiment will be discussed. This research sheds light on how cyber events impact plant operations.
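A toy sketch of the obfuscation concept (the indicator name and values are invented for illustration; this is not the emulation used in the study), showing how a spoofed indicator decouples what operators see from the true plant state:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A single control-room indicator whose display can be spoofed."""
    name: str
    true_value: float
    spoofed: bool = False
    spoof_value: float = 0.0

    def displayed(self) -> float:
        # Under a spoof, the display no longer tracks the actual process value.
        return self.spoof_value if self.spoofed else self.true_value

# Hypothetical example: pressurizer pressure frozen at a nominal reading.
pzr = Indicator("pressurizer_pressure_psig", true_value=2185.0)
pzr.spoofed, pzr.spoof_value = True, 2235.0
print(f"operator sees {pzr.displayed()} psig; plant is actually at {pzr.true_value} psig")
```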
There are differences in how cyber-attack, sabotage, or discrete component-failure mechanisms manifest within power plants and in what these events would look like within the control room from an operator's perspective. This research focuses on understanding how a cyber event would affect the operation of the plant, how an operator would perceive the event, and whether the operator's actions based on those perceptions will allow him/her to maintain plant safety. This research is funded as part of Sandia's Laboratory Directed Research and Development (LDRD) program to develop scenarios with cyber-induced failures of plant systems coupled with a generic pressurized water reactor plant training simulator. The cyber scenarios were developed separately and injected into the simulator's operational state to simulate an attack. These scenarios will determine whether nuclear power plant (NPP) operators can (1) recognize that the control room indicators are presenting incorrect or erroneous information and (2) take appropriate actions to keep the plant safe. This will also provide the opportunity to assess operator cognitive workload during such events and identify where improvements might be made. This paper reviews the results of a pilot study run with NPP operators to investigate performance under various cyber scenarios. The discussion provides an overview of the approach, scenario selection, metrics captured, and resulting insights into operator actions and plant response across multiple scenarios of the NPP system.
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low-probability, high-consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
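A small numerical sketch makes the magnitude of the risk concrete (the distributions and sample size are assumptions for illustration, not the project's scaling study): a normal model fit to lognormal data describes the bulk of the sample, yet badly misestimates an extrapolated 1-in-10^6 quantile:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_dist = stats.lognorm(s=0.5)              # the "truth", unknown to the analyst
data = true_dist.rvs(50, random_state=rng)    # a modest physical-experiment budget

# Fit a normal model; it matches the bulk of the data reasonably well...
fitted = stats.norm(data.mean(), data.std(ddof=1))

# ...but extrapolating to an extreme quantile far beyond the observed range
# inherits the unvalidated parametric tail assumption.
q = 1 - 1e-6
print("fitted tail quantile:", round(fitted.ppf(q), 2))
print("true tail quantile:  ", round(true_dist.ppf(q), 2))
```

In this sketch the fitted model severely underestimates the extreme quantile, even though nothing in the 50 observations flags the misfit.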
Cyber defense is an asymmetric battle today. We need a better understanding of what options are available for providing defenders with possible advantages. Our project combines machine learning, optimization, and game theory to obscure our defensive posture given the information adversaries are able to observe. The main conceptual contribution of this research is to separate the problem of prediction, for which machine learning is used, from the problem of computing optimal operational decisions based on such predictions, coupled with a model of adversarial response. This research includes modeling of the attacker and defender, formulation of useful optimization models for studying adversarial interactions, and user studies to measure the impact of the modeling approaches in realistic settings.
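As a sketch of the decision layer only (the payoff matrix below is hypothetical; in the project, machine-learned predictions of adversary behavior would feed a model like this), the defender's optimal randomized posture against a best-responding attacker is a standard minimax linear program:

```python
import numpy as np
from scipy.optimize import linprog

# loss[i, j]: defender loss when posture i is observable and attack j is launched.
# Values are hypothetical, for illustration only.
loss = np.array([[0.0, 0.8, 0.3],
                 [0.6, 0.1, 0.5],
                 [0.4, 0.7, 0.2]])
n, m = loss.shape

# Variables z = [x_1..x_n, v]: minimize worst-case expected loss v subject to
# (loss^T x)_j <= v for every attack j, sum(x) = 1, x >= 0.
c = np.concatenate([np.zeros(n), [1.0]])
A_ub = np.hstack([loss.T, -np.ones((m, 1))])
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])

print("posture mix:", res.x[:n].round(3))
print("worst-case expected loss:", round(res.x[-1], 3))
```

Randomizing the observable posture in this way is what obscures the defensive stance: no pure attack choice can exploit a fixed configuration.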