This project was inspired by two needs. The first is the need for tools to help scientists and engineers design effective data visualizations for communicating information, whether to the user of a system, to an analyst who must make decisions based on complex data, or to the readers of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g., color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
Today’s international nuclear safeguards inspectors have access to an increasing volume of supplemental information about the facilities under their purview, including commercial satellite imagery, nuclear trade data, open source information, and results from previous safeguards activities. In addition to completing traditional in-field safeguards activities, inspectors are now responsible for being able to act upon this growing corpus of supplemental safeguards-relevant data and for maintaining situational awareness of unusual activities taking place in their environment. However, cognitive science research suggests that maintaining too much information can be detrimental to a user’s understanding, and externalizing information (for example, to a mobile device) to reduce cognitive burden can decrease cognitive function related to memory, navigation, and attention. Given this dichotomy, how can international nuclear safeguards inspectors better synthesize information to enhance situational awareness, decision making, and performance in the field? This paper examines literature from the fields of cognitive science and human factors in the areas of wayfinding, situational awareness, equipment and technical assistance, and knowledge transfer, and describes the implications for the provision of, and interaction with, safeguards-relevant information for international nuclear safeguards inspectors working in the field.
Data visualizations are used to communicate information to people in a wide variety of contexts, but few tools are available to help visualization designers evaluate the effectiveness of their designs. Visual saliency maps that predict which regions of an image are likely to draw the viewer’s attention could be a useful evaluation tool, but existing models of visual saliency often make poor predictions for abstract data visualizations. These models do not take into account the importance of features like text in visualizations, which may lead to inaccurate saliency maps. In this paper we use data from two eye tracking experiments to investigate attention to text in data visualizations. The data sets were collected under two different task conditions: a memory task and a free viewing task. Across both tasks, the text elements in the visualizations consistently drew attention, especially during early stages of viewing. These findings highlight the need to incorporate additional features into saliency models that will be applied to visualizations.
Although working memory (WM) training programs consistently result in improvement on the trained task, benefit is typically short-lived and extends only to tasks very similar to the trained task (i.e., near transfer). It is possible that pairing repeated performance of a WM task with brain stimulation encourages plasticity in brain networks involved in WM task performance, thereby improving the training benefit. In the current study, transcranial direct current stimulation (tDCS) was paired with performance of a WM task (n-back). In Experiment 1, participants performed a spatial location-monitoring n-back during stimulation, while Experiment 2 used a verbal identity-monitoring n-back. In each experiment, participants received either active (2.0 mA) or sham (0.1 mA) stimulation with the anode placed over either the right or the left dorsolateral prefrontal cortex (DLPFC) and the cathode placed extracephalically. In Experiment 1, only participants receiving active stimulation with the anode placed over the right DLPFC showed marginal improvement on the trained spatial n-back, which did not extend to a near transfer (verbal n-back) or far transfer task (a matrix-reasoning task designed to measure fluid intelligence). In Experiment 2, both left and right anode placements led to improvement, and right DLPFC stimulation resulted in numerical (though not sham-adjusted) improvement on the near transfer (spatial n-back) and far transfer (fluid intelligence) task. Results suggest that WM training paired with brain stimulation may result in cognitive enhancement that transfers to performance on other tasks, depending on the combination of training task and tDCS parameters used.
There is a great deal of debate concerning the benefits of working memory (WM) training and whether that training can transfer to other tasks. Although a consistent finding is that WM training programs elicit a short-term near-transfer effect (i.e., improvement in WM skills), results are inconsistent when considering persistence of such improvement and far transfer effects. In this study, we compared three groups of participants: a group that received WM training, a group that received training on how to use a mental imagery memory strategy, and a control group that received no training. Although the WM training group improved on the trained task, their posttraining performance on nontrained WM tasks did not differ from that of the other two groups. In addition, although the imagery training group’s performance on a recognition memory task increased after training, the WM training group’s performance on the task decreased after training. Participants’ descriptions of the strategies they used to remember the studied items indicated that WM training may lead people to adopt memory strategies that are less effective for other types of memory tasks. These results indicate that WM training may have unintended consequences for other types of memory performance.
In this paper, we argue that information theoretic measures may provide a robust, broadly applicable, repeatable metric to assess how a system enables people to reduce high-dimensional data into topically relevant subsets of information. Explosive growth in electronic data necessitates the development of systems that balance automation with human cognitive engagement to facilitate pattern discovery, analysis and characterization, variously described as "cognitive augmentation" or "insight generation." However, operationalizing the concept of insight in any measurable way remains a difficult challenge for visualization researchers. The "golden ticket" of insight evaluation would be a precise, generalizable, repeatable, and ecologically valid metric that indicates the relative utility of a system in heightening cognitive performance or facilitating insights. Unfortunately, the golden ticket does not yet exist. In its place, we are exploring information theoretic measures derived from Shannon's ideas about information and entropy as a starting point for precise, repeatable, and generalizable approaches for evaluating analytic tools. We are specifically concerned with needle-in-haystack workflows that require interactive search, classification, and reduction of very large heterogeneous datasets into manageable, task-relevant subsets of information. We assert that systems aimed at facilitating pattern discovery, characterization and analysis - i.e., "insight" - must afford an efficient means of sorting the needles from the chaff; and simple compressibility measures provide a way of tracking changes in information content as people shape meaning from data.
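The compressibility idea sketched above can be made concrete with a few lines of code. The following is a minimal illustration, not the authors' actual metric: it uses the zlib compression ratio as a rough proxy for Shannon information content, so that reducing a redundant "haystack" to a task-relevant subset shows up as a change in measured information density. The function name and sample strings are illustrative assumptions.

```python
import zlib

def compressibility(text: str) -> float:
    """Ratio of compressed size to raw size. Lower values indicate
    redundant, low-entropy content; values near (or above) 1 indicate
    information-dense, nearly incompressible content."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# A repetitive "haystack" compresses far better than a varied "needle".
haystack = "status: nominal\n" * 200
needle = "alert 7f3a: pump pressure spike at 02:14, valve V-9 closed\n"

print(compressibility(haystack))  # low ratio: highly redundant
print(compressibility(needle))    # higher ratio: information-dense
```

Tracking this ratio as an analyst filters a dataset down to a subset gives one simple, repeatable signal of whether the remaining material is closer to "needles" than "chaff," in the spirit of the information-theoretic measures proposed here.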
Transcranial direct current stimulation (tDCS) is a potential tool for alleviating various forms of cognitive decline, including memory loss, in older adults. However, past effects of tDCS on cognitive ability have been mixed. One important potential moderator of tDCS effects is the baseline level of cognitive performance. We tested the effects of tDCS on face–name associative memory in older adults, who suffer from performance deficits in this task relative to younger adults. Stimulation was applied to the left inferior prefrontal cortex during encoding of face–name pairs, and memory was assessed with both a recognition and a recall task. Face–name memory performance decreased with the use of tDCS, an effect driven by increased false alarms when recognizing rearranged face–name pairs.