Publications

Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

IEEE Computer Graphics and Applications

Matzen, Laura E.; Howell, Breannan C.; Trumbo, Michael C.S.; Divis, Kristin M.

Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.

Using Eye-Tracking to Quantify Reverse Engineering Expertise

Stites, Mallory C.; Matzen, Laura E.; Rodhouse, Kathryn N.; Howell, Breannan C.; Rogers, Alisa

Software reverse engineering (RE) requires analysts to closely read and make decisions about code. Little is known about what makes an analyst successful, making it difficult to train new analysts or design tools to augment existing ones. The goal of this project was to quantify the eye movement behaviors supporting RE and code comprehension more generally. We applied eye-tracking methods from the language comprehension literature to understand where analysts direct their attention over time when completing tasks (e.g., function identification, bug detection). Across three studies, we manipulated aspects of code hypothesized to impact comprehension (e.g., variable name meaningfulness, code complexity) and presentation methods (e.g., line-by-line, free viewing, gaze-contingent moving window) to understand effects on accuracy and gaze patterns. Results showed clear benefits of meaningful variable names, and effects of expertise on global and line-specific viewing patterns. Findings could inspire empirically-supported tool or analytic adaptations that help to reduce analyst workload.

The Impact of Specificity on Human Interpretations of State Uncertainty

Matzen, Laura E.; Howell, Breannan C.; Trumbo, Michael C.S.

The goal of this project was to test how different representations of state uncertainty impact human decision making. Across a series of experiments, we sought to answer fundamental questions about human cognitive biases and how they are affected by visual and numerical information. The results of these experiments identify problems and pitfalls to avoid when presenting algorithmic outputs that include state uncertainty to human decision makers. Our findings also point to important areas for future research that will enable system designers to minimize biases in human interpretation of the outputs of artificial intelligence, machine learning, and other advanced analytic systems.

MIDAS: Modeling Individual Differences using Advanced Statistics

Wisniewski, Kyra L.; Matzen, Laura E.; Stites, Mallory C.; Ting, Christina; Tuft, Marie; Sorge, Marieke A.

This research explores novel methods for extracting relevant information from EEG data to characterize individual differences in cognitive processing. Our approach combines expertise in machine learning, statistics, and cognitive science, advancing the state of the art in all three domains. Specifically, by using cognitive science expertise to interpret results and inform algorithm development, we have developed a generalizable and interpretable machine learning method that can accurately predict individual differences in cognition. The output of the machine learning method revealed surprising features of the EEG data that, when interpreted by the cognitive science experts, provided novel insights into the underlying cognitive task. Additionally, the outputs of the statistical methods show promise as a principled approach to quickly finding regions within the EEG data where individual differences lie, thereby supporting cognitive science analysis and informing machine learning models. This work lays the methodological groundwork for applying the large body of cognitive science literature on individual differences to high-consequence mission applications.

The Cognitive Effects of Machine Learning Aid in Domain-Specific and Domain-General Tasks

Proceedings of the Annual Hawaii International Conference on System Sciences

Divis, Kristin M.; Howell, Breannan C.; Matzen, Laura E.; Stites, Mallory C.; Gastelum, Zoe N.

With machine learning (ML) technologies rapidly expanding to new applications and domains, users are collaborating with artificial intelligence-assisted diagnostic tools to a larger and larger extent. But what impact does ML aid have on cognitive performance, especially when the ML output is not always accurate? Here, we examined the cognitive effects of the presence of simulated ML assistance, including both accurate and inaccurate output, on two tasks (a domain-specific nuclear safeguards task and a domain-general visual search task). Patterns of performance varied across the two tasks both for the presence of ML aid and for the category of ML feedback (e.g., false alarm). These results indicate that differences such as domain can influence users' performance with ML aid, and suggest the need to test the effects of ML output (and associated errors) in the specific context of use, especially when the stimuli of interest are vague or ill-defined.

A Method of Developing Video Stimuli that Are Amenable to Neuroimaging Analysis: An EEG Pilot Study

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Trumbo, Michael C.S.; Jones, Aaron; Robert, Bradley; Trumbo, Derek; Matzen, Laura E.

Creating streaming video stimuli that allow for strict experimental control while providing ease of scene manipulation is difficult to achieve, but is desired by researchers seeking to approach ecological validity in contexts that involve processing streaming visual information. To that end, we propose leveraging video game modding tools as a method of creating research-quality stimuli. As a pilot effort, we used a video game sandbox tool (Garry’s Mod) to create three streaming video scenarios designed to mimic video feeds that physical security personnel might observe. All scenarios required participants to identify the presence of a threat appearing during the video feed. Each scenario differed in level of complexity: one scenario required only location monitoring, one required location and action monitoring, and one required location, action, and conjunction monitoring, in that an action was only considered a threat when performed by a certain character model. While there was no behavioral effect of scenario in terms of accuracy or response times, in all scenarios we found evidence of a P300 when comparing responses to threatening stimuli with responses to standard stimuli. Results therefore indicate that sufficient levels of experimental control can be achieved to allow for the precise timing required for ERP analysis. Thus, we demonstrate the feasibility of using existing modding tools to create video scenarios amenable to neuroimaging analysis.

Studying visual search without an eye tracker: an assessment of artificial foveation

Cognitive Research: Principles and Implications

Matzen, Laura E.; Stites, Mallory C.; Gastelum, Zoe N.

Eye tracking is a useful tool for studying human cognition, both in the laboratory and in real-world applications. However, there are cases in which eye tracking is not possible, such as in high-security environments where recording devices cannot be introduced. After facing this challenge in our own work, we sought to test the effectiveness of using artificial foveation as an alternative to eye tracking for studying visual search performance. Two groups of participants completed the same list comparison task, which was a computer-based task designed to mimic an inventory verification process that is commonly performed by international nuclear safeguards inspectors. We manipulated the way in which the items on the inventory list were ordered and color coded. For the eye tracking group, an eye tracker was used to assess the order in which participants viewed the items and the number of fixations per trial in each list condition. For the artificial foveation group, the items were covered with a blurry mask except when participants moused over them. We tracked the order in which participants viewed the items by moving their mouse and the number of items viewed per trial in each list condition. We observed the same overall pattern of performance for the various list display conditions, regardless of the method. However, participants were much slower to complete the task when using artificial foveation and had more variability in their accuracy. Our results indicate that the artificial foveation method can reveal the same pattern of differences across conditions as eye tracking, but it can also impact participants’ task performance.

Exploring Explicit Uncertainty for Binary Analysis (EUBA)

Leger, Michelle A.; Darling, Michael C.; Jones, Stephen T.; Matzen, Laura E.; Stracuzzi, David J.; Wilson, Andrew T.; Bueno, Denis; Christentsen, Matthew; Ginaldi, Melissa; Foulk, James W.; Heidbrink, Scott; Howell, Breannan C.; Leger, Chris; Reedy, Geoffrey; Rogers, Alisa; Williams, Jack

Reverse engineering (RE) analysts struggle to address critical questions about the safety of binary code accurately and promptly, and their supporting program analysis tools are sometimes simply wrong. The analysis tools have to approximate in order to provide any information at all, but this means that they introduce uncertainty into their results, and those uncertainties chain from analysis to analysis. We hypothesize that exposing the sources, impacts, and control of uncertainty to human binary analysts will allow them to approach their hardest problems with high-powered analytic techniques that they know when to trust. Combining expertise in binary analysis algorithms, human cognition, uncertainty quantification, verification and validation, and visualization, we pursue research that should benefit binary software analysis efforts across the board. We find a strong analogy between RE and exploratory data analysis (EDA); we begin to characterize the sources and types of uncertainty found in practice in RE (both in the process and in supporting analyses); we explore a domain-specific focus on uncertainty in pointer analysis, showing that more precise models do help analysts answer small information flow questions faster and more accurately; and we test a general population with domain-general Sudoku problems, showing that adding "knobs" to an analysis does not significantly slow down performance. This document describes our explorations of uncertainty in binary analysis.

Assessing Cognitive Impacts of Errors from Machine Learning and Deep Learning Models: Final Report

Gastelum, Zoe N.; Matzen, Laura E.; Stites, Mallory C.; Divis, Kristin M.; Howell, Breannan C.; Jones, Aaron; Trumbo, Michael C.S.

Due to their recent increases in performance, machine learning and deep learning models are being increasingly adopted across many domains for visual processing tasks. One such domain is international nuclear safeguards, which seeks to verify the peaceful use of commercial nuclear energy across the globe. Despite recent impressive performance results from machine learning and deep learning algorithms, there is always at least some small level of error. Given the significant consequences of international nuclear safeguards conclusions, we sought to characterize how incorrect responses from a machine or deep learning-assisted visual search task would cognitively impact users. We found that not only do some types of model errors have larger negative impacts on human performance than others, but the scale of those impacts changes depending on the accuracy of the model with which they are presented, and the impacts persist in scenarios of evenly distributed errors and single-error presentations. Further, we found that experiments conducted using a common visual search dataset from the psychology community have similar implications to a safeguards-relevant dataset of images containing hyperboloid cooling towers when the cooling tower images are presented to expert participants. While novice performance was considerably different (and worse) on the cooling tower task, we saw increased novice reliance on the most challenging cooling tower images compared to experts. These findings are relevant not just to the cognitive science community, but also to developers of machine and deep learning models that will be implemented in multiple domains. For safeguards, this research provides key insights into how machine and deep learning projects should be implemented, considering the domain's special requirement that information not be missed.

Physiological Characterization of Language Comprehension

Matzen, Laura E.; Stites, Mallory C.; Ting, Christina; Howell, Breannan C.; Wisniewski, Kyra L.

In this project, our goal was to develop methods that would allow us to make accurate predictions about individual differences in human cognition. Understanding such differences is important for maximizing human and human-system performance. There is a large body of research on individual differences in the academic literature. Unfortunately, it is often difficult to connect this literature to applied problems, where we must predict how specific people will perform or process information. In an effort to bridge this gap, we set out to answer the question: can we train a model to make predictions about which people understand which languages? We chose language processing as our domain of interest because of the well-characterized differences in neural processing that occur when people are presented with linguistic stimuli that they do or do not understand. Although our original plan to conduct several electroencephalography (EEG) studies was disrupted by the COVID-19 pandemic, we were able to collect data from one EEG study and a series of behavioral experiments in which data were collected online. The results of this project indicate that machine learning tools can make reasonably accurate predictions about an individual's proficiency in different languages, using EEG data or behavioral data alone.

Evaluating the Impact of Algorithm Confidence Ratings on Human Decision Making in Visual Search

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Jones, Aaron; Trumbo, Michael C.S.; Matzen, Laura E.; Stites, Mallory C.; Howell, Breannan C.; Divis, Kristin M.; Gastelum, Zoe N.

As the ability to collect and store data grows, so does the need to efficiently analyze that data. As human-machine teams that use machine learning (ML) algorithms to inform human decision-making grow in popularity, it becomes increasingly critical to understand the optimal methods of implementing algorithm-assisted search. In order to better understand how algorithm confidence values associated with object identification can influence participant accuracy and response times during a visual search task, we compared models that provided appropriate confidence, random confidence, and no confidence, as well as a model biased toward overconfidence and a model biased toward underconfidence. Results indicate that randomized confidence is likely harmful to performance, while non-random confidence values are likely better than no confidence value for maintaining accuracy over time. Providing participants with appropriate confidence values did not seem to benefit performance any more than providing underconfident or overconfident models.

Using Machine Learning to Predict Bilingual Language Proficiency from Reaction Time Priming Data

Proceedings of the 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021

Matzen, Laura E.; Ting, Christina; Stites, Mallory C.

Studies of bilingual language processing typically assign participants to groups based on their language proficiency and average across participants in order to compare the two groups. This approach loses much of the nuance and individual differences that could be important for furthering theories of bilingual language comprehension. In this study, we present a novel use of machine learning (ML) to develop a predictive model of language proficiency based on behavioral data collected in a priming task. The model achieved 75% accuracy in predicting which participants were proficient in both Spanish and English. Our results indicate that ML can be a useful tool for characterizing and studying individual differences.
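The abstract does not specify which ML model was used, so the following is only a hypothetical illustration of the general approach: summarizing each participant's priming data as a feature vector and classifying participants from those features. The feature definition (priming effect per language), the numbers, and the nearest-centroid classifier are all invented for illustration.

```python
from statistics import mean

def priming_effect(related_rts, unrelated_rts):
    """Hypothetical feature: mean RT (ms) on unrelated-prime trials minus
    mean RT on related-prime trials. Larger values suggest the participant
    processed the primes in that language."""
    return mean(unrelated_rts) - mean(related_rts)

def fit_centroids(features, labels):
    """Nearest-centroid classifier: the average feature vector per class."""
    centroids = {}
    for label in set(labels):
        rows = [f for f, l in zip(features, labels) if l == label]
        centroids[label] = [mean(col) for col in zip(*rows)]
    return centroids

def predict(centroids, feature):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feature, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented training data: [english_priming_ms, spanish_priming_ms] per participant.
X = [[42.0, 38.0], [55.0, 61.0], [48.0, 3.0], [51.0, -2.0]]
y = ["bilingual", "bilingual", "monolingual", "monolingual"]
centroids = fit_centroids(X, y)
```

A new participant showing robust priming in both languages would land near the "bilingual" centroid, while one showing priming in only one language would not.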

Is the Testing Effect Ready to Be Put to Work? Evidence From the Laboratory to the Classroom

Translational Issues in Psychological Science

Trumbo, Michael C.S.; Mcdaniel, Mark A.; Hodge, Gordon K.; Jones, Aaron; Matzen, Laura E.; Kittinger, Liza I.; Kittinger, Robert; Clark, Vincent P.

The testing effect refers to the benefits to retention that result from structuring learning activities in the form of a test. As educators consider implementing test-enhanced learning paradigms in real classroom environments, we think it is critical to consider how an array of factors affecting test-enhanced learning in laboratory studies bear on test-enhanced learning in real-world classroom environments. This review discusses the degree to which test feedback, test format (of formative tests), number of tests, level of the test questions, timing of tests (relative to initial learning), and retention duration have import for testing effects in ecologically valid contexts (e.g., classroom studies). Attention is also devoted to characteristics of much laboratory testing-effect research that may limit translation to classroom environments, such as the complexity of the material being learned, the value of the testing effect relative to other generative learning activities in classrooms, an educational orientation that favors criterial tests focused on transfer of learning, and online instructional modalities. We consider how student-centric variables present in the classroom (e.g., cognitive abilities, motivation) may have bearing on the effects of testing-effect techniques implemented in the classroom. We conclude that the testing effect is a robust phenomenon that benefits a wide variety of learners in a broad array of learning domains. Still, studies are needed to compare the benefit of testing to other learning strategies, to further characterize how individual differences relate to testing benefits, and to examine whether testing benefits learners at advanced levels.

Measuring Intelligence with the Sandia Matrices: Psychometric Review and Recommendations for Free Raven-Like Item Sets

Personnel Assessment and Decisions

Harris, Alexandra; Mcmillan, Jeremiah T.; Listyg, Ben J.; Matzen, Laura E.; Carter, Nathan T.

The Sandia Matrices are a free alternative to the Raven’s Progressive Matrices (RPMs). This study offers a psychometric review of Sandia Matrices items focused on two of the most commonly investigated issues regarding the RPMs: (a) dimensionality and (b) sex differences. Model-data fit of three alternative factor structures is compared using confirmatory multidimensional item response theory (IRT) analyses, and measurement equivalence analyses are conducted to evaluate potential sex bias. Although results are somewhat inconclusive regarding factor structure, they do not show evidence of bias or mean differences by sex. Finally, although the Sandia Matrices software can generate an unlimited number of items, editing and validating items may be infeasible for many researchers. Therefore, to aid implementation of the Sandia Matrices, we provide scoring materials for two brief static tests and a computer adaptive test. Implications and suggestions for future research using the Sandia Matrices are discussed.

Applying Compression-Based Metrics to Seismic Data in Support of Global Nuclear Explosion Monitoring

Matzen, Laura E.; Ting, Christina; Field, Richard V.; Morrow, J.D.; Brogan, Ronald; Young, Christopher J.; Zhou, Angela; Trumbo, Michael C.S.; Coram, Jamie L.

The analysis of seismic data for evidence of possible nuclear explosion testing is a critical global security mission that relies heavily on human expertise to identify and mark seismic signals embedded in background noise. To assist analysts in making these determinations, we adapted two compression distance metrics for use with seismic data. First, we demonstrated that the Normalized Compression Distance (NCD) metric can be adapted for use with waveform data and can identify the arrival times of seismic signals. Then we tested an approximation for the NCD called Sliding Information Distance (SLID), which can be computed much faster than NCD. We assessed the accuracy of the SLID output by comparing it to both the Akaike Information Criterion (AIC) and the judgments of expert seismic analysts. Our results indicate that SLID effectively identifies arrival times and provides analysts with useful information that can aid their analysis process.
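The NCD mentioned above has a standard form: NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length of its argument. A minimal sketch using zlib as the compressor follows; the seismic-specific adaptation and the SLID approximation are not shown, and the byte streams standing in for waveform windows are invented.

```python
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized Compression Distance:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    using zlib's compressed length as the compressor C."""
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented streams standing in for windows of waveform samples:
# a repetitive "background" stream and a dissimilar "signal" stream.
background = bytes([17, 3, 250, 91] * 64)
signal = bytes(range(256)) * 2

# Similar windows yield a small NCD; dissimilar windows yield a value near 1,
# which is the basic cue for flagging a change such as a signal arrival.
d_same = ncd(background, background)
d_diff = ncd(background, signal)
```

Sliding such a comparison across consecutive windows of a waveform is one way a compression-based change point (e.g., an arrival time) could be surfaced to an analyst.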

A heuristic approach to value-driven evaluation of visualizations

IEEE Transactions on Visualization and Computer Graphics

Wall, Emily; Agnihotri, Meeshu; Matzen, Laura E.; Divis, Kristin M.; Haass, Michael J.; Endert, Alex; Stasko, John

Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.
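As a rough illustration of the hierarchy the methodology describes (heuristics grouped under guidelines, guidelines grouped under the four value components), the sketch below averages ratings up the hierarchy. The component and guideline names, the rating scale, and the averaging scheme are invented for illustration and are not the published ICE-T worksheet.

```python
from statistics import mean

# Hypothetical ratings (1-7 scale) from one evaluator, keyed as
# component -> guideline -> list of low-level heuristic ratings.
ratings = {
    "Insight": {"spurs questions": [6, 5], "supports discovery": [7]},
    "Confidence": {"conveys data quality": [4, 5]},
    "Essence": {"conveys the big picture": [6]},
    "Time": {"answers questions efficiently": [5, 6, 6]},
}

def component_scores(ratings):
    """Average heuristic ratings within each guideline, then average
    guideline scores within each value component."""
    return {
        component: mean(mean(hs) for hs in guidelines.values())
        for component, guidelines in ratings.items()
    }

scores = component_scores(ratings)
```

In practice, scores from multiple evaluators would then be combined and compared across candidate visualizations.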

Effects of Note-Taking Method on Knowledge Transfer in Inspection Tasks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Stites, Mallory C.; Matzen, Laura E.; Smartt, Heidi A.; Gastelum, Zoe N.

International nuclear safeguards inspectors visit nuclear facilities to assess their compliance with international nonproliferation agreements. Inspectors note whether anything unusual is happening in the facility that might indicate the diversion or misuse of nuclear materials, or anything that changed since the last inspection. They must complete inspections under restrictions imposed by their hosts, regarding both their use of technology or equipment and time allotted. Moreover, because inspections are sometimes completed by different teams months apart, it is crucial that their notes accurately facilitate change detection across a delay. The current study addressed these issues by investigating how note-taking methods (e.g., digital camera, hand-written notes, or their combination) impacted memory in a delayed recall test of a complex visual array. Participants studied four arrays of abstract shapes and industrial objects using a different note-taking method for each, then returned 48–72 h later to complete a memory test using their notes to identify objects that had changed (e.g., in location, material, or orientation). Accuracy was highest for both conditions using a camera, followed by hand-written notes alone, and all were better than having no aid. Although the camera-only condition benefitted study times, this benefit was not observed at test, suggesting drawbacks to using just a camera to aid recall. Change type interacted with note-taking method; although certain changes were overall more difficult, the note-taking method used helped mitigate these deficits in performance. Finally, elaborative hand-written notes produced better performance than simple ones, suggesting strategies for individual note-takers to maximize their efficacy in the absence of a digital aid.

The Impact of Information Presentation on Visual Inspection Performance in the International Nuclear Safeguards Domain

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Matzen, Laura E.; Stites, Mallory C.; Smartt, Heidi A.; Gastelum, Zoe N.

International nuclear safeguards inspectors are tasked with verifying that nuclear materials in facilities around the world are not misused or diverted from peaceful purposes. They must conduct detailed inspections in complex, information-rich environments, but there has been relatively little research into the cognitive aspects of their jobs. We posit that the speed and accuracy of the inspectors can be supported and improved by designing the materials they take into the field such that the information is optimized to meet their cognitive needs. Many in-field inspection activities involve comparing inventory or shipping records to other records or to physical items inside of a nuclear facility. The organization and presentation of the records that the inspectors bring into the field with them could have a substantial impact on the ease or difficulty of these comparison tasks. In this paper, we present a series of mock inspection activities in which we manipulated the formatting of the inspectors’ records. We used behavioral and eye tracking metrics to assess the impact of the different types of formatting on the participants’ performance on the inspection tasks. The results of these experiments show that matching the presentation of the records to the cognitive demands of the task led to substantially faster task completion.

Creating an Interprocedural Analyst-Oriented Data Flow Representation for Binary Analysts (CIAO)

Leger, Michelle A.; Butler, Karin; Bueno, Denis; Crepeau, Matthew; Cuellar, Christopher R.; Godwin, Alex; Haass, Michael J.; Loffredo, Timothy J.; Mangal, Ravi; Matzen, Laura E.; Nguyen, Vivian; Orso, Alessandro; Reedy, Geoffrey; Stasko, John T.; Stites, Mallory C.; Tuminaro, Julian; Wilson, Andrew T.

National security missions require understanding third-party software binaries, a key element of which is reasoning about how data flows through a program. However, vulnerability analysts protecting software lack adequate tools for understanding data flow in binaries. To reduce the human time burden for these analysts, we used human factors methods in a rolling discovery process to derive user-centric visual representation requirements. We encountered three main challenges: analysis projects span weeks, analysis goals significantly affect approaches and required knowledge, and analyst tools, techniques, conventions, and prioritization are based on personal preference. To address these challenges, we initially focused our human factors methods on an attack surface characterization task. We generalized our results using a two-stage modified sorting task, creating requirements for a data flow visualization. We implemented these requirements partially in manual static visualizations, which we informally evaluated, and partially in automatically generated interactive visualizations, which have yet to be integrated into workflows for evaluation. Our observations and results indicate that 1) this data flow visualization has the potential to enable novel code navigation, information presentation, and information sharing, and 2) it is an excellent time to pursue research applying human factors methods to binary analysis workflows.

More Details

Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

IEEE Transactions on Visualization and Computer Graphics

Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; Wang, Zhiyuan; Wilson, Andrew T.

Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
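As an illustration of the kind of model-to-gaze comparison described above, here is a minimal sketch of one common benchmark metric: the Pearson correlation between a model's saliency map and an empirical fixation map built from eye tracking data. The Gaussian-blob construction and the `sigma` value are our assumptions for illustration, not the paper's exact evaluation procedure.

```python
import numpy as np

def saliency_correlation(saliency_map, fixation_points, shape, sigma=15):
    """Pearson correlation between a model saliency map and an empirical
    fixation density map (one common saliency benchmark metric).
    `fixation_points` is a list of (row, col) gaze coordinates."""
    # Build an empirical fixation map by placing a Gaussian at each fixation.
    fixmap = np.zeros(shape)
    rows, cols = np.indices(shape)
    for r, c in fixation_points:
        fixmap += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    # Normalize both maps to zero mean, unit variance before correlating.
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    f = (fixmap - fixmap.mean()) / fixmap.std()
    return float((s * f).mean())
```

A model whose saliency map peaks where viewers actually fixated scores near 1.0; a map concentrated elsewhere scores lower.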

More Details

Transcranial direct current stimulation of dorsolateral prefrontal cortex during encoding improves recall but not recognition memory

Neuropsychologia

Trumbo, Michael C.S.; Leshikar, Eric D.; Leach, Ryan C.; Mccurdy, Matthew P.; Sklenar, Allison M.; Frankenstein, Andrea N.; Matzen, Laura E.

Prior work demonstrates that application of transcranial direct current stimulation (tDCS) improves memory. In this study, we investigated tDCS effects on face-name associative memory using both recall and recognition tests. Participants encoded face-name pairs under either active (1.5 mA) or sham (0.1 mA) stimulation applied to the scalp adjacent to the left dorsolateral prefrontal cortex (dlPFC), an area known to support associative memory. Participants’ memory was then tested after study (day one) and again after a 24-h delay (day two) to assess both immediate and delayed stimulation effects on memory. Results indicated that active relative to sham stimulation led to substantially improved recall (more than 50%) at both day one and day two. Recognition memory performance did not differ between stimulation groups at either time point. These results suggest that stimulation at encoding improves memory performance by enhancing memory for details that enable a rich recollective experience, but that these improvements are evident only under some testing conditions, especially those that rely on recollection. Overall, stimulation of the dlPFC could have led to recall improvement through enhanced encoding, through carryover effects of stimulation that influenced retrieval processes, or both.

More Details

Feature Selection and Inferential Procedures for Video Data [Slides]

Chen, Maximillian G.; Bapst, Aleksander B.; Busche, Kirk R.; Do, Minh N.; Matzen, Laura E.; Mcnamara, Laura A.; Yeh, Raymond A.

With the rise of electronic and high-dimensional data, new and innovative feature detection and statistical methods are required to perform accurate and meaningful analysis of datasets that pose unique statistical challenges. In the area of feature detection, much recent research in the computer vision community has focused on deep learning methods, which require large amounts of labeled training data. However, in many application areas, training data is very limited and often difficult to obtain. We develop methods for fast, unsupervised, precise feature detection for video data based on optical flows, edge detection, and clustering methods. We also use pretrained neural networks and interpretable linear models to extract features using very limited training data. In the area of statistics, while high-dimensional data analysis has been a main focus of recent statistical methodological research, much of that focus has been on populations of high-dimensional vectors rather than populations of high-dimensional tensors: three-dimensional arrays that can be used to model dependent images, such as images of the same person or frames extracted from a video. Our feature detection method is a non-model-based method that fuses information from dense optical flow, raw image pixels, and frame differences to generate detections. Our hypothesis testing methods are based on the assumption that dependent images are concatenated into a tensor that follows a tensor normal distribution, and from this assumption, we derive likelihood-ratio, score, and regression-based tests for one- and multiple-sample testing problems. Our methods are illustrated on simulated and real datasets. We conclude this report with comments on the relationship between feature detection and hypothesis testing methods.
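The abstract describes fusing frame differences with dense optical flow and raw pixel features. As an illustration of the simplest of those ingredients, here is a minimal frame-differencing detector; the threshold value is a hypothetical choice of ours, and this sketch is not the authors' pipeline.

```python
import numpy as np

def frame_difference_detections(frames, threshold=30):
    """Flag moving regions by thresholding absolute differences between
    consecutive grayscale frames. Returns, for each consecutive pair of
    frames, a boolean mask of pixels whose intensity changed by more
    than `threshold`. A full pipeline like the one described above would
    fuse such masks with dense optical flow and raw pixels before
    clustering them into detections."""
    masks = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Cast to a signed type so the subtraction cannot wrap around.
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        masks.append(diff > threshold)
    return masks
```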

More Details

Modeling human comprehension of data visualizations

Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; Wilson, Andrew T.

This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether for the user of a system, for an analyst who must make decisions based on complex data, or for a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

More Details

Brain Science and International Nuclear Safeguards: Implications from Cognitive Science and Human Factors Research on the Provision and Use of Safeguards-Relevant Information in the Field

ESARDA Bulletin

Gastelum, Zoe N.; Matzen, Laura E.; Smartt, Heidi A.; Horak, Karl E.; Moyer, Eric M.; St Pierre, M.E.

Today’s international nuclear safeguards inspectors have access to an increasing volume of supplemental information about the facilities under their purview, including commercial satellite imagery, nuclear trade data, open source information, and results from previous safeguards activities. In addition to completing traditional in-field safeguards activities, inspectors are now responsible for being able to act upon this growing corpus of supplemental safeguards-relevant data and for maintaining situational awareness of unusual activities taking place in their environment. However, cognitive science research suggests that maintaining too much information can be detrimental to a user’s understanding, and externalizing information (for example, to a mobile device) to reduce cognitive burden can decrease cognitive function related to memory, navigation, and attention. Given this dichotomy, how can international nuclear safeguards inspectors better synthesize information to enhance situational awareness, decision making, and performance in the field? This paper examines literature from the fields of cognitive science and human factors in the areas of wayfinding, situational awareness, equipment and technical assistance, and knowledge transfer, and describes the implications for the provision of, and interaction with, safeguards-relevant information for international nuclear safeguards inspectors working in the field.

More Details

Patterns of attention: How data visualizations are read

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; Stites, Mallory C.

Data visualizations are used to communicate information to people in a wide variety of contexts, but few tools are available to help visualization designers evaluate the effectiveness of their designs. Visual saliency maps that predict which regions of an image are likely to draw the viewer’s attention could be a useful evaluation tool, but existing models of visual saliency often make poor predictions for abstract data visualizations. These models do not take into account the importance of features like text in visualizations, which may lead to inaccurate saliency maps. In this paper we use data from two eye tracking experiments to investigate attention to text in data visualizations. The data sets were collected under two different task conditions: a memory task and a free viewing task. Across both tasks, the text elements in the visualizations consistently drew attention, especially during early stages of viewing. These findings highlight the need to incorporate additional features into saliency models that will be applied to visualizations.

More Details

Enhanced working memory performance via transcranial direct current stimulation: The possibility of near and far transfer

Neuropsychologia

Trumbo, Michael C.S.; Matzen, Laura E.; Coffman, Brian A.; Hunter, Michael A.; Jones, Aaron P.; Robinson, Charles S.H.; Clark, Vincent P.

Although working memory (WM) training programs consistently result in improvement on the trained task, benefit is typically short-lived and extends only to tasks very similar to the trained task (i.e., near transfer). It is possible that pairing repeated performance of a WM task with brain stimulation encourages plasticity in brain networks involved in WM task performance, thereby improving the training benefit. In the current study, transcranial direct current stimulation (tDCS) was paired with performance of a WM task (n-back). In Experiment 1, participants performed a spatial location-monitoring n-back during stimulation, while Experiment 2 used a verbal identity-monitoring n-back. In each experiment, participants received either active (2.0 mA) or sham (0.1 mA) stimulation with the anode placed over either the right or the left dorsolateral prefrontal cortex (DLPFC) and the cathode placed extracephalically. In Experiment 1, only participants receiving active stimulation with the anode placed over the right DLPFC showed marginal improvement on the trained spatial n-back, which did not extend to a near transfer (verbal n-back) or far transfer task (a matrix-reasoning task designed to measure fluid intelligence). In Experiment 2, both left and right anode placements led to improvement, and right DLPFC stimulation resulted in numerical (though not sham-adjusted) improvement on the near transfer (spatial n-back) and far transfer (fluid intelligence) task. Results suggest that WM training paired with brain stimulation may result in cognitive enhancement that transfers to performance on other tasks, depending on the combination of training task and tDCS parameters used.

More Details

Practice makes imperfect: Working memory training can harm recognition memory performance

Memory and Cognition

Matzen, Laura E.; Trumbo, Michael C.S.; Haass, Michael J.; Silva, Austin R.; Adams, Susan S.; Bunting, Michael F.; O'Rourke, Polly

There is a great deal of debate concerning the benefits of working memory (WM) training and whether that training can transfer to other tasks. Although a consistent finding is that WM training programs elicit a short-term near-transfer effect (i.e., improvement in WM skills), results are inconsistent when considering persistence of such improvement and far transfer effects. In this study, we compared three groups of participants: a group that received WM training, a group that received training on how to use a mental imagery memory strategy, and a control group that received no training. Although the WM training group improved on the trained task, their posttraining performance on nontrained WM tasks did not differ from that of the other two groups. In addition, although the imagery training group’s performance on a recognition memory task increased after training, the WM training group’s performance on the task decreased after training. Participants’ descriptions of the strategies they used to remember the studied items indicated that WM training may lead people to adopt memory strategies that are less effective for other types of memory tasks. These results indicate that WM training may have unintended consequences for other types of memory performance.

More Details

Information theoretic measures for visual analytics: The silver ticket?

ACM International Conference Proceeding Series

Mcnamara, Laura A.; Bauer, Travis L.; Haass, Michael J.; Matzen, Laura E.

In this paper, we argue that information theoretic measures may provide a robust, broadly applicable, repeatable metric to assess how a system enables people to reduce high-dimensional data into topically relevant subsets of information. Explosive growth in electronic data necessitates the development of systems that balance automation with human cognitive engagement to facilitate pattern discovery, analysis and characterization, variously described as "cognitive augmentation" or "insight generation." However, operationalizing the concept of insight in any measurable way remains a difficult challenge for visualization researchers. The "golden ticket" of insight evaluation would be a precise, generalizable, repeatable, and ecologically valid metric that indicates the relative utility of a system in heightening cognitive performance or facilitating insights. Unfortunately, the golden ticket does not yet exist. In its place, we are exploring information theoretic measures derived from Shannon's ideas about information and entropy as a starting point for precise, repeatable, and generalizable approaches for evaluating analytic tools. We are specifically concerned with needle-in-haystack workflows that require interactive search, classification, and reduction of very large heterogeneous datasets into manageable, task-relevant subsets of information. We assert that systems aimed at facilitating pattern discovery, characterization and analysis - i.e., "insight" - must afford an efficient means of sorting the needles from the chaff; and simple compressibility measures provide a way of tracking changes in information content as people shape meaning from data.
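One simple way to operationalize the compressibility idea is the ratio of compressed size to raw size; this sketch uses zlib as the compressor, which is our own assumption for illustration rather than the authors' exact measure.

```python
import zlib

def compressibility(text: str) -> float:
    """Ratio of zlib-compressed size to raw size for a text corpus.
    Lower values indicate more redundancy (less information per byte).
    Tracking how this ratio changes as an analyst reduces a large
    heterogeneous corpus to a task-relevant subset gives one repeatable
    signal of how information content is being shaped."""
    raw = text.encode("utf-8")
    if not raw:
        return 0.0
    return len(zlib.compress(raw)) / len(raw)
```

Highly repetitive data compresses to a small fraction of its raw size, while high-entropy data stays close to incompressible.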

More Details

Transcranial stimulation over the left inferior frontal gyrus increases false alarms in an associative memory task in older adults

Healthy Aging Research

Leach, Ryan C.; Mccurdy, Matthew P.; Trumbo, Michael C.S.; Matzen, Laura E.; Leshikar, Eric D.

Transcranial direct current stimulation (tDCS) is a potential tool for alleviating various forms of cognitive decline, including memory loss, in older adults. However, past effects of tDCS on cognitive ability have been mixed. One important potential moderator of tDCS effects is the baseline level of cognitive performance. We tested the effects of tDCS on face-name associative memory in older adults, who show performance deficits in this task relative to younger adults. Stimulation was applied to the left inferior prefrontal cortex during encoding of face-name pairs, and memory was assessed with both a recognition and a recall task. Face–name memory performance decreased with the use of tDCS, a result driven by increased false alarms when recognizing rearranged face–name pairs.

More Details

PANTHER Grand Challenge LDRD: Human Analytics Research Summary

Mcnamara, Laura A.; Czuchlewski, Kristina R.; Cole, Kerstan; Ganter, John H.; Haass, Michael J.; Matzen, Laura E.; Adams, Susan S.; Stracuzzi, David J.

This summary of PANTHER Human Analytics work describes three of the team's major work activities: research with teams to elicit and document work practices; experimental studies of visual search performance and visual attention; and the application of spatio-temporal algorithms to the analysis of eye tracking data. Our intent is to provide a basic introduction to the work area and a selected set of representative HA team publications as a starting point for readers interested in our team's work.

More Details

Using eye tracking metrics and visual saliency maps to assess image utility

Human Vision and Electronic Imaging 2016, HVEI 2016

Matzen, Laura E.; Haass, Michael J.; Tran, Jonathan; Mcnamara, Laura A.

In this study, eye tracking metrics and visual saliency maps were used to assess analysts' interactions with synthetic aperture radar (SAR) imagery. Participants with varying levels of experience with SAR imagery completed a target detection task while their eye movements and behavioral responses were recorded. The resulting gaze maps were compared with maps of bottom-up visual saliency and with maps of automatically detected image features. The results showed striking differences between professional SAR analysts and novices in terms of how their visual search patterns related to the visual saliency of features in the imagery. They also revealed patterns that reflect the utility of various features in the images for the professional analysts. These findings have implications for system design and for the design and use of automatic feature classification algorithms.

More Details

Modeling human comprehension of data visualizations

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Haass, Michael J.; Matzen, Laura E.; Wilson, Andrew T.; Divis, Kristin M.

A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously and driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for assessment of new data visualization saliency models.
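A baseline comparison like the one described can score each saliency model against eye tracking data with an AUC-style metric: the probability that the model assigns higher saliency to fixated pixels than to randomly sampled ones. The following is an illustrative sketch under our own assumptions, not the exact metric used in the paper.

```python
import numpy as np

def saliency_auc(saliency_map, fixations, n_random=1000, seed=0):
    """AUC-style score for a saliency model: the probability that the
    model assigns higher saliency to a fixated pixel than to a randomly
    sampled pixel. 0.5 is chance; 1.0 is perfect separation.
    `fixations` is a list of (row, col) gaze coordinates."""
    rng = np.random.default_rng(seed)
    fix_vals = np.array([saliency_map[r, c] for r, c in fixations])
    rand_r = rng.integers(0, saliency_map.shape[0], n_random)
    rand_c = rng.integers(0, saliency_map.shape[1], n_random)
    rand_vals = saliency_map[rand_r, rand_c]
    # Fraction of (fixation, random) pairs where the fixated pixel wins;
    # ties count as half a win.
    wins = (fix_vals[:, None] > rand_vals[None, :]).mean()
    ties = (fix_vals[:, None] == rand_vals[None, :]).mean()
    return float(wins + 0.5 * ties)
```

Running several models through the same scorer on a shared set of visualizations and fixations yields the kind of performance baseline the paper describes.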

More Details

Assessment of expert interaction with multivariate time series ‘big data’

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Adams, Susan S.; Haass, Michael J.; Matzen, Laura E.; King, Saskia H.

‘Big data’ is a phrase that has gained much traction recently. It has been defined as ‘a broad term for data sets so large or complex that traditional data processing applications are inadequate and there are challenges with analysis, searching and visualization’ [1]. Many domains struggle with providing experts accurate visualizations of massive data sets so that the experts can understand and make decisions about the data, e.g., [2, 3, 4, 5]. Abductive reasoning is the process of forming a conclusion that best explains observed facts, and this type of reasoning plays an important role in process and product engineering. Throughout a production lifecycle, engineers test subsystems for critical functions and use the test results to diagnose and improve production processes. This paper describes a value-driven evaluation study [7] of expert analyst interactions with big data in a complex visual abductive reasoning task. Participants were asked to perform different tasks using a new tool while eye tracking data of their interactions with the tool was collected. The participants were also asked to give feedback on the usability of the tool. The results showed that the interactive nature of the new tool allowed participants to gain new insights into their data sets, and all participants indicated that they would begin using the tool in its current state.

More Details

Through a scanner quickly: Elicitation of P3 in transportation security officers following rapid image presentation and categorization

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Trumbo, Michael C.S.; Matzen, Laura E.; Silva, Austin R.; Haass, Michael J.; Divis, Kristin M.; Speed, Ann E.

Numerous domains, ranging from medical diagnostics to intelligence analysis, involve visual search tasks in which people must find and identify specific items within large sets of imagery. These tasks rely heavily on human judgment, making fully automated systems infeasible in many cases. Researchers have investigated methods for combining human judgment with computational processing to increase the speed at which humans can triage large image sets. One such method is rapid serial visual presentation (RSVP), in which images are presented in rapid succession to a human viewer. While viewing the images and looking for targets of interest, the participant’s brain activity is recorded using electroencephalography (EEG). The EEG signals can be time-locked to the presentation of each image, producing event-related potentials (ERPs) that provide information about the brain’s response to those stimuli. The participants’ judgments about whether or not each set of images contained a target and the ERPs elicited by target and non-target images are used to identify subsets of images that merit close expert scrutiny [1]. Although the RSVP/EEG paradigm holds promise for helping professional visual searchers to triage imagery rapidly, it may be limited by the nature of the target items. Targets that do not vary a great deal in appearance are likely to elicit useable ERPs, but more variable targets may not. In the present study, we sought to extend the RSVP/EEG paradigm to the domain of aviation security screening, and in doing so to explore the limitations of the technique for different types of targets. Professional Transportation Security Officers (TSOs) viewed bag X-rays that were presented using an RSVP paradigm. The TSOs viewed bursts of images containing 50 segments of bag X-rays that were presented for 100 ms each. Following each burst of images, the TSOs indicated whether or not they thought there was a threat item in any of the images in that set. EEG was recorded during each burst of images and ERPs were calculated by time-locking the EEG signal to the presentation of images containing threats and matched images that were identical except for the presence of the threat item. Half of the threat items had a prototypical appearance and half did not. We found that the bag images containing threat items with a prototypical appearance reliably elicited a P300 ERP component, while those without a prototypical appearance did not. These findings have implications for the application of the RSVP/EEG technique to real-world visual search domains.
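The time-locking step described above can be sketched as simple epoch extraction and averaging. The array layout, window lengths, and baseline-correction choice here are our assumptions for illustration, not the study's actual analysis code.

```python
import numpy as np

def compute_erp(eeg, onsets, pre=50, post=300):
    """Average event-related potential from a continuous recording.
    `eeg` is a (channels, samples) array; `onsets` are stimulus onset
    sample indices (e.g., one per image in an RSVP burst). Each epoch
    spans `pre` samples before onset to `post` samples after; epochs
    are baseline-corrected to the pre-stimulus mean, then averaged so
    that stimulus-locked components (like the P300) stand out from
    background activity."""
    epochs = []
    for t in onsets:
        if t - pre < 0 or t + post > eeg.shape[1]:
            continue  # skip events too close to the recording edges
        epoch = eeg[:, t - pre:t + post].astype(float)
        baseline = epoch[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)
    return np.mean(epochs, axis=0)  # shape: (channels, pre + post)
```

Averaging across many epochs cancels activity that is not time-locked to the stimuli, which is what lets a reliable post-stimulus component emerge for prototypical targets.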

More Details

Methodology for knowledge elicitation in visual abductive reasoning tasks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Haass, Michael J.; Matzen, Laura E.; Adams, Susan S.; Roach, Robert A.

The potential for bias to affect the results of knowledge elicitation studies is well recognized. Researchers and knowledge engineers attempt to control for bias through careful selection of elicitation and analysis methods. Recently, the development of a wide range of physiological sensors, coupled with fast, portable and inexpensive computing platforms, has added an additional dimension of objective measurement that can reduce bias effects. In the case of an abductive reasoning task, bias can be introduced through design of the stimuli, cues from researchers, or omissions by the experts. We describe a knowledge elicitation methodology robust to various sources of bias, incorporating objective and cross-referenced measurements. The methodology was applied in a study of engineers who use multivariate time series data to diagnose the performance of devices throughout the production lifecycle. For visual reasoning tasks, eye tracking is particularly effective at controlling for biases of omission by providing a record of the subject’s attention allocation.

More Details

Ethnographic methods for experimental design: Case studies in visual search

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Mcnamara, Laura A.; Cole, Kerstan; Haass, Michael J.; Matzen, Laura E.; Morrow, James D.; Adams, Susan S.; Mcmichael, Stephanie N.

Researchers at Sandia National Laboratories are integrating qualitative and quantitative methods from anthropology, human factors and cognitive psychology in the study of military and civilian intelligence analyst workflows in the United States’ national security community. Researchers who study human work processes often use qualitative theory and methods, including grounded theory, cognitive work analysis, and ethnography, to generate rich descriptive models of human behavior in context. In contrast, experimental psychologists typically do not receive training in qualitative induction, nor are they likely to practice ethnographic methods in their work, since experimental psychology tends to emphasize generalizability and quantitative hypothesis testing over qualitative description. However, qualitative frameworks and methods from anthropology, sociology, and human factors can play an important role in enhancing the ecological validity of experimental research designs.

More Details

Measuring expert and novice performance within computer security incident response teams

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Silva, Austin R.; Avina, Glory E.; Mcclain, Jonathan T.; Matzen, Laura E.; Forsythe, James C.

There is a great need for creating cohesive, expert cybersecurity incident response teams and training them effectively. This paper discusses new methodologies for measuring and understanding expert and novice differences within a cybersecurity environment to bolster training, selection, and teaming. This methodology for baselining and characterizing individuals and teams relies on relating eye tracking gaze patterns to psychological assessments, human-machine transaction monitoring, and electroencephalography data that are collected during participation in the game-based training platform Tracer FIRE. We discuss preliminary findings from two pilot studies using novice and professional teams.

More Details

Effects of non-invasive brain stimulation on associative memory

Brain Research

Matzen, Laura E.; Trumbo, Michael C.S.; Leach, Ryan C.; Leshikar, Eric D.

Associative memory refers to remembering the association between two items, such as a face and a name. It is a crucial part of daily life, but it is also one of the first aspects of memory performance that is impacted by aging and by Alzheimer's disease. Evidence suggests that transcranial direct current stimulation (tDCS) can improve memory performance, but few tDCS studies have investigated its impact on associative memory. In addition, no prior study of the effects of tDCS on memory performance has systematically evaluated the impact of tDCS on different types of memory assessments, such as recognition and recall tests. In this study, we measured the effects of tDCS on associative memory performance in healthy adults, using both recognition and recall tests. Participants studied face-name pairs while receiving either active (30 min, 2 mA) or sham (30 min, 0.1 mA) stimulation with the anode placed at F9 and the cathode placed on the contralateral upper arm. Participants in the active stimulation group performed significantly better on the recall test than participants in the sham group, recalling 50% more names, on average, and making fewer recall errors. However, the two groups did not differ significantly in terms of their performance on the recognition memory test. This investigation provides evidence that stimulation at the time of study improves associative memory encoding, but that this memory benefit is evident only under certain retrieval conditions.

Effects of Transcranial Direct Current Stimulation (tDCS) on Human Memory

Matzen, Laura E.; Trumbo, Michael C.S.

Training a person in a new knowledge base or skill set is extremely time consuming and costly, particularly in highly specialized domains such as the military and the intelligence community. Recent research in cognitive neuroscience has suggested that a technique called transcranial direct current stimulation (tDCS) has the potential to revolutionize training by enabling learners to acquire new skills faster, more efficiently, and more robustly (Bullard et al., 2011). In this project, we tested the effects of tDCS on two types of memory performance that are critical for learning new skills: associative memory and working memory. Associative memory is memory for the relationship between two items or events. It forms the foundation of all episodic memories, so enhancing associative memory could provide substantial benefits to the speed and robustness of learning new information. We tested the effects of tDCS on associative memory, using a real-world associative memory task: remembering the links between faces and names. Working memory refers to the amount of information that can be held in mind and processed at one time, and it forms the basis for all higher-level cognitive processing. We investigated the degree of transfer between various working memory tasks (the N-back task as a measure of verbal working memory, the rotation-span task as a measure of visuospatial working memory, and Raven's progressive matrices as a measure of fluid intelligence) in order to determine if tDCS-induced facilitation of performance is task-specific or general.
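The N-back task mentioned above requires participants to indicate whether the current item matches the one presented n trials earlier. As a minimal sketch (not from the report; the function name and data are hypothetical), scoring such a block could look like this:

```python
# Illustrative sketch: scoring a verbal N-back block. A trial is a
# "target" when the current letter matches the letter presented n
# trials earlier; accuracy counts hits and correct rejections over
# all scoreable trials (the first n trials cannot be scored).

def score_n_back(letters, responses, n=2):
    """letters: presented stimuli; responses: True where the
    participant pressed the target key. Returns proportion correct."""
    correct = 0
    total = 0
    for i in range(n, len(letters)):
        is_target = letters[i] == letters[i - n]
        if responses[i] == is_target:
            correct += 1
        total += 1
    return correct / total if total else 0.0

# Example: a 2-back block over an eight-letter stream.
letters = ["B", "K", "B", "T", "B", "T", "R", "T"]
responses = [False, False, True, False, True, True, False, True]
print(score_n_back(letters, responses, n=2))  # 1.0 (all trials correct)
```

Raising n increases the number of items that must be held and updated in working memory, which is what makes the task a graded load manipulation.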

Frequency-Dependent Enhancement of Fluid Intelligence Induced by Transcranial Oscillatory Potentials

Current Biology

Matzen, Laura E.

Everyday problem solving requires the ability to go beyond experience by efficiently encoding and manipulating new information, i.e., fluid intelligence (Gf) [1]. Performance in tasks involving Gf, such as logical and abstract reasoning, has been shown to rely on distributed neural networks, with a crucial role played by prefrontal regions [2]. Synchronization of neuronal activity in the gamma band is a ubiquitous phenomenon within the brain; however, no evidence of its causal involvement in cognition exists to date [3]. Here, we show an enhancement of Gf ability in a cognitive task induced by exogenous rhythmic stimulation within the gamma band. Imperceptible alternating current [4] delivered through the scalp over the left middle frontal gyrus resulted in a frequency-specific shortening of the time required to find the correct solution in a visuospatial abstract reasoning task classically employed to measure Gf abilities (i.e., Raven’s matrices) [5]. Crucially, gamma-band stimulation (γ-tACS) selectively enhanced performance only on more complex trials involving conditional/logical reasoning. The finding presented here supports a direct involvement of gamma oscillatory activity in the mechanisms underlying higher-order human cognition.

Evaluating information visualizations with working memory metrics

Communications in Computer and Information Science

Bandlow, Alisa; Matzen, Laura E.; Cole, Kerstan; Dornburg, Courtney C.; Geiseler, Charles J.; Mcnamara, Laura A.; Adams, Susan S.

Information visualization tools are being promoted to aid decision support. These tools assist in the analysis and comprehension of ambiguous and conflicting data sets. Formal evaluations are necessary to demonstrate the effectiveness of visualization tools, yet conducting these studies is difficult. Objective metrics that allow designers to compare the amount of work required for users to operate a particular interface are lacking. This in turn makes it difficult to compare workload across different interfaces, which is problematic for complicated information visualization and visual analytics packages. We believe that measures of working memory load can provide a more objective and consistent way of assessing visualizations and user interfaces across a range of applications. We present initial findings from a study using measures of working memory load to compare the usability of two graph representations. © 2011 Springer-Verlag.

Using computational modeling to assess use of cognitive strategies

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Haass, Michael J.; Matzen, Laura E.

Although there are many strategies and techniques that can improve memory, cognitive biases generally lead people to choose suboptimal memory strategies. In this study, participants were asked to memorize words while their brain activity was recorded using electroencephalography (EEG). The participants' memory performance and EEG data revealed that a self-testing (retrieval practice) strategy could improve memory. The majority of the participants did not use self-testing, but computational modeling revealed that a subset of the participants had brain activity that was consistent with this optimal strategy. We developed a model that characterized the brain activity associated with passive study and with explicit memory testing. We used that model to predict which participants adopted a self-testing strategy, and then evaluated the behavioral performance of those participants. This analysis revealed that, as predicted, the participants whose brain activity was consistent with a self-testing strategy had better memory performance at test. © 2011 Springer-Verlag.
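The abstract describes building a model of the brain activity associated with passive study versus explicit testing and using it to label each participant's study-phase activity. A minimal sketch of that idea (the nearest-centroid classifier and all data below are assumptions for illustration, not the authors' actual model) could look like:

```python
# Illustrative sketch: label a participant's study-phase EEG features
# as closer to "passive study" or "explicit test" activity using a
# nearest-centroid classifier over hand-picked feature vectors.

import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_strategy(study_features, passive_examples, testing_examples):
    """Return 'self-testing' if the participant's study-phase features
    are closer to the explicit-testing centroid, else 'passive'."""
    c_passive = centroid(passive_examples)
    c_testing = centroid(testing_examples)
    if distance(study_features, c_testing) < distance(study_features, c_passive):
        return "self-testing"
    return "passive"

# Hypothetical feature vectors (e.g., band power in two channels)
# recorded during known passive-study and explicit-test conditions.
passive = [[0.2, 0.1], [0.3, 0.2]]
testing = [[0.8, 0.9], [0.9, 0.7]]
print(predict_strategy([0.85, 0.80], passive, testing))  # self-testing
```

The prediction can then be checked against behavior, as in the study: participants labeled "self-testing" should show better performance on the later memory test.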

Cultural neuroscience and individual differences: Implications for augmented cognition

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Matzen, Laura E.

Technologies that augment human cognition have the potential to enhance human performance in a wide variety of domains. However, there are a number of individual differences in brain activity that must be taken into account during the development, validation, and application of augmented cognition tools. A growing body of research in cultural neuroscience has shown that there are substantial differences in how people from different cultural backgrounds approach various cognitive tasks. In addition, there are many other types of individual differences and even changes in a single individual over time that have implications for augmented cognition research and development. The aim of this session is to highlight a few of those differences and to discuss how they might impact augmented cognition technologies. © 2011 Springer-Verlag.

Recommendations for reducing ambiguity in written procedures

Matzen, Laura E.

Previous studies in the nuclear weapons complex have shown that ambiguous work instructions (WIs) and operating procedures (OPs) can lead to human error, which is a major cause for concern. This report outlines some of the sources of ambiguity in written English and describes three recommendations for reducing ambiguity in WIs and OPs. The recommendations are based on commonly used research techniques in the fields of linguistics and cognitive psychology. The first recommendation is to gather empirical data that can be used to improve the recommended word lists that are provided to technical writers. The second recommendation is to conduct a review in which new WIs and OPs are checked for ambiguity and clarity. The third recommendation is to use self-paced reading time studies to identify any remaining ambiguities before the new WIs and OPs are put into use. If these three steps are followed for new WIs and OPs, the likelihood of human errors related to ambiguity could be greatly reduced.

A study of potential sources of linguistic ambiguity in written work instructions

Matzen, Laura E.

This report describes the results of a small experimental study that investigated potential sources of ambiguity in written work instructions (WIs). The English language can be highly ambiguous because words with different meanings can share the same spelling. Previous studies in the nuclear weapons complex have shown that ambiguous WIs can lead to human error, which is a major cause for concern. To study possible sources of ambiguity in WIs, we determined which of the recommended action verbs in the DOE and BWXT writer's manuals have numerous meanings to their intended audience, making them potentially ambiguous. We used cognitive psychology techniques to conduct a survey in which technicians who use WIs in their jobs indicated the first meaning that came to mind for each of the words. Although the findings of this study are limited by the small number of respondents, we identified words that had many different meanings even within this limited sample. WI writers should pay particular attention to these words and to their most frequent meanings so that they can avoid ambiguity in their writing.
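The survey analysis described above amounts to tallying, for each recommended action verb, how many distinct first meanings respondents reported. A minimal sketch (the function name and example responses are hypothetical, not the study's data) could look like:

```python
# Illustrative sketch: count how many distinct first-reported meanings
# each action verb received in a survey, so that verbs with many
# meanings can be flagged as potentially ambiguous in work instructions.

from collections import Counter

def meaning_diversity(responses):
    """responses: {verb: list of first meanings, one per respondent}.
    Returns {verb: number of distinct meanings}, most diverse first."""
    counts = {verb: len(Counter(meanings))
              for verb, meanings in responses.items()}
    return dict(sorted(counts.items(), key=lambda kv: -kv[1]))

# Hypothetical survey responses from four technicians.
survey = {
    "check": ["inspect", "verify", "mark", "stop"],
    "secure": ["fasten", "lock", "obtain", "fasten"],
    "install": ["put in place", "put in place", "put in place", "put in place"],
}
print(meaning_diversity(survey))
# {'check': 4, 'secure': 3, 'install': 1}
```

Verbs at the top of such a ranking are the ones writers would need to use with extra care, or accompany with the intended sense.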
