Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.
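To make the two formats concrete, the sketch below renders the same risk value as a natural frequency string and as a simple text-based icon array. It is a minimal illustration only; the function names, wording, and layout are hypothetical and are not the stimulus code used in the experiments.

```python
# Minimal illustrative sketch (hypothetical helper names); the actual study
# stimuli are not described here. Renders one risk value two ways.

def natural_frequency(numerator: int, denominator: int = 100) -> str:
    """Numerical representation, e.g. '20 out of 100 homes are expected to burn.'"""
    return f"{numerator} out of {denominator} homes are expected to burn."

def icon_array(numerator: int, denominator: int = 100, per_row: int = 10) -> str:
    """Visual representation: filled icons for affected cases, open icons otherwise."""
    icons = ["X"] * numerator + ["."] * (denominator - numerator)
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, denominator, per_row)]
    return "\n".join(rows)

if __name__ == "__main__":
    risk = 20  # hypothetical 20-in-100 wildfire risk
    print(natural_frequency(risk))
    print(icon_array(risk))
```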
This research explores novel methods for extracting relevant information from EEG data to characterize individual differences in cognitive processing. Our approach combines expertise in machine learning, statistics, and cognitive science, advancing the state of the art in all three domains. Specifically, by using cognitive science expertise to interpret results and inform algorithm development, we have developed a generalizable and interpretable machine learning method that can accurately predict individual differences in cognition. The output of the machine learning method revealed surprising features of the EEG data that, when interpreted by the cognitive science experts, provided novel insights into the underlying cognitive task. Additionally, the outputs of the statistical methods show promise as a principled approach to quickly finding regions within the EEG data where individual differences lie, thereby supporting cognitive science analysis and informing machine learning models. This work lays the methodological groundwork for applying the large body of cognitive science literature on individual differences to high-consequence mission applications.
The goal of this project was to test how different representations of state uncertainty impact human decision making. Across a series of experiments, we sought to answer fundamental questions about human cognitive biases and how they are impacted by visual and numerical information. The results of these experiments identify problems and pitfalls to avoid when presenting algorithmic outputs that include state uncertainty to human decision makers. Our findings also point to important areas for future research that will enable system designers to minimize biases in human interpretation of the outputs of artificial intelligence, machine learning, and other advanced analytic systems.
Creating streaming video stimuli that allow for strict experimental control while providing ease of scene manipulation is difficult but desirable for researchers seeking to approach ecological validity in contexts that involve processing streaming visual information. To that end, we propose leveraging video game modding tools as a method of creating research-quality stimuli. As a pilot effort, we used a video game sandbox tool (Garry’s Mod) to create three streaming video scenarios designed to mimic video feeds that physical security personnel might observe. All scenarios required participants to identify the presence of a threat appearing during the video feed. Each scenario differed in level of complexity: one scenario required only location monitoring, one required location and action monitoring, and one required location, action, and conjunction monitoring, in which an action was considered a threat only when performed by a certain character model. While there was no behavioral effect of scenario in terms of accuracy or response times, in all scenarios we found evidence of a P300 when comparing responses to threatening stimuli with responses to standard stimuli. Results therefore indicate that sufficient levels of experimental control may be achieved to allow for the precise timing required for ERP analysis. Thus, we demonstrate the feasibility of using existing modding tools to create video scenarios amenable to neuroimaging analysis.
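For readers unfamiliar with the analysis, the sketch below shows the general shape of a threat-versus-standard ERP comparison, assuming MNE-Python. The file name, trigger codes, electrode, and time window are hypothetical placeholders, not the parameters used in this study.

```python
# Minimal ERP sketch assuming MNE-Python; file name, event codes, electrode,
# and the P300 window below are hypothetical, not the study's actual parameters.
import mne

raw = mne.io.read_raw_fif("scenario_eeg_raw.fif", preload=True)  # hypothetical file
raw.filter(0.1, 30.0)  # typical ERP band-pass

events = mne.find_events(raw)
event_id = {"threat": 1, "standard": 2}  # hypothetical trigger codes

# Epoch around stimulus onset and baseline-correct
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Average within condition to form ERPs, then compare in a P300-like window
evoked_threat = epochs["threat"].average()
evoked_standard = epochs["standard"].average()
diff = mne.combine_evoked([evoked_threat, evoked_standard], weights=[1, -1])
print(diff.copy().pick(["Pz"]).crop(0.3, 0.5).data.mean())  # mean difference at Pz, 300-500 ms
```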
With machine learning (ML) technologies rapidly expanding to new applications and domains, users are collaborating with artificial intelligence-assisted diagnostic tools to an ever larger extent. But what impact does ML aid have on cognitive performance, especially when the ML output is not always accurate? Here, we examined the cognitive effects of the presence of simulated ML assistance, including both accurate and inaccurate output, on two tasks (a domain-specific nuclear safeguards task and a domain-general visual search task). Patterns of performance varied across the two tasks for both the presence of ML aid and the category of ML feedback (e.g., false alarm). These results indicate that differences such as domain could influence users' performance with ML aid, and suggest the need to test the effects of ML output (and associated errors) in the specific context of use, especially when the stimuli of interest are vague or ill-defined.
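As a concrete illustration of the feedback categories referred to above, the sketch below scores simulated ML output against ground truth on each trial using standard signal-detection labels. It is illustrative only and is not the study's analysis code.

```python
# Illustrative scoring of simulated ML feedback against ground truth; the
# category labels follow standard signal-detection terminology.
def ml_feedback_category(target_present: bool, ml_says_present: bool) -> str:
    if target_present and ml_says_present:
        return "hit"
    if target_present and not ml_says_present:
        return "miss"
    if not target_present and ml_says_present:
        return "false alarm"
    return "correct rejection"

# Hypothetical trials: (ground truth, ML output)
trials = [(True, True), (False, True), (True, False), (False, False)]
print([ml_feedback_category(truth, ml) for truth, ml in trials])
```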
Eye tracking is a useful tool for studying human cognition, both in the laboratory and in real-world applications. However, there are cases in which eye tracking is not possible, such as in high-security environments where recording devices cannot be introduced. After facing this challenge in our own work, we sought to test the effectiveness of using artificial foveation as an alternative to eye tracking for studying visual search performance. Two groups of participants completed the same list comparison task, which was a computer-based task designed to mimic an inventory verification process that is commonly performed by international nuclear safeguards inspectors. We manipulated the way in which the items on the inventory list were ordered and color-coded. For the eye tracking group, an eye tracker was used to assess the order in which participants viewed the items and the number of fixations per trial in each list condition. For the artificial foveation group, the items were covered with a blurry mask except when participants moused over them. We tracked the order in which participants moused over the items and the number of items viewed per trial in each list condition. We observed the same overall pattern of performance for the various list display conditions, regardless of the method. However, participants were much slower to complete the task when using artificial foveation and had more variability in their accuracy. Our results indicate that the artificial foveation method can reveal the same pattern of differences across conditions as eye tracking, but it can also impact participants’ task performance.
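The core of the artificial foveation manipulation can be sketched as follows, assuming Pillow for the blur; the helper names, box geometry, and blur radius are hypothetical and are not the task's actual implementation. Each item is shown blurred unless the mouse is over it.

```python
# Minimal sketch of the artificial-foveation idea (not the study's task code):
# every list item is drawn blurred unless the mouse is inside its bounding box.
from PIL import Image, ImageFilter

def make_blurred(item_image: Image.Image, radius: float = 8.0) -> Image.Image:
    """Pre-compute the masked (blurred) version of an item."""
    return item_image.filter(ImageFilter.GaussianBlur(radius))

def image_to_draw(mouse_xy, item_box, clear_img, blurred_img):
    """Return the clear image only when the mouse is inside the item's box."""
    x, y = mouse_xy
    left, top, right, bottom = item_box
    hovered = left <= x <= right and top <= y <= bottom
    return clear_img if hovered else blurred_img
```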
Reverse engineering (RE) analysts struggle to answer critical questions about the safety of binary code accurately and promptly, and their supporting program analysis tools are sometimes simply wrong. The analysis tools have to approximate in order to provide any information at all, but this means that they introduce uncertainty into their results, and those uncertainties chain from analysis to analysis. We hypothesize that exposing the sources, impacts, and control of uncertainty to human binary analysts will allow them to approach their hardest problems with high-powered analytic techniques that they know when to trust. Combining expertise in binary analysis algorithms, human cognition, uncertainty quantification, verification and validation, and visualization, we pursue research that should benefit binary software analysis efforts across the board. We find a strong analogy between RE and exploratory data analysis (EDA); we begin to characterize the sources and types of uncertainty found in practice in RE (both in the process and in supporting analyses); we explore a domain-specific focus on uncertainty in pointer analysis, showing that more precise models do help analysts answer small information flow questions faster and more accurately; and we test a general population with domain-general sudoku problems, showing that adding "knobs" to an analysis does not significantly slow down performance. This document describes our explorations of uncertainty in binary analysis.
In this project, our goal was to develop methods that would allow us to make accurate predictions about individual differences in human cognition. Understanding such differences is important for maximizing human and human-system performance. There is a large body of research on individual differences in the academic literature. Unfortunately, it is often difficult to connect this literature to applied problems, where we must predict how specific people will perform or process information. In an effort to bridge this gap, we set out to answer the question: can we train a model to make predictions about which people understand which languages? We chose language processing as our domain of interest because of the well-characterized differences in neural processing that occur when people are presented with linguistic stimuli that they do or do not understand. Although our original plan to conduct several electroencephalography (EEG) studies was disrupted by the COVID-19 pandemic, we were able to collect data from one EEG study and a series of behavioral experiments in which data were collected online. The results of this project indicate that machine learning tools can make reasonably accurate predictions about an individual's proficiency in different languages, using EEG data or behavioral data alone.
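The prediction setup can be sketched in a few lines, assuming scikit-learn; the feature matrix, labels, and classifier choice below are placeholders rather than the project's actual data or models.

```python
# Minimal cross-validated prediction sketch assuming scikit-learn; features
# and labels are synthetic placeholders, not the project's EEG or behavioral data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))        # per-participant EEG or behavioral features (placeholder)
y = rng.integers(0, 2, size=40)      # 1 = understands the language, 0 = does not (placeholder)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated prediction accuracy
print(scores.mean())
```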
Due to their recent increases in performance, machine learning and deep learning models are being increasingly adopted across many domains for visual processing tasks. One such domain is international nuclear safeguards, which seeks to verify the peaceful use of commercial nuclear energy across the globe. Despite recent impressive performance results from machine learning and deep learning algorithms, there is always at least some small level of error. Given the significant consequences of international nuclear safeguards conclusions, we sought to characterize how incorrect responses from a machine or deep learning-assisted visual search task would cognitively impact users. We found that not only do some types of model errors have larger negative impacts on human performance than others, but the scale of those impacts also changes depending on the accuracy of the model with which participants are presented, and the impacts persist in scenarios of evenly distributed errors and single-error presentations. Further, we found that experiments conducted using a common visual search dataset from the psychology community have similar implications to a safeguards-relevant dataset of images containing hyperboloid cooling towers when the cooling tower images are presented to expert participants. While novice performance was considerably different (and worse) on the cooling tower task, we saw increased novice reliance on the model's output for the most challenging cooling tower images compared to experts. These findings are relevant not just to the cognitive science community, but also to developers of machine and deep learning models that will be implemented in multiple domains. For safeguards, this research provides key insights into how machine and deep learning projects should be implemented, given the domain's special requirement that information not be missed.