Publications

Results 1–25 of 53

Activity Theory Literature Review

Greenwald-Yarnell, Megan G.; Divis, Kristin; Fleming Lindsley, Elizabeth S.; Heiden, Siobhan M.; Nyre-Yu, Megan N.; Odom, Peter W.; Pang, Michelle A.; Salmon, Madison M.; Silva, Austin R.

Complex challenges across Sandia National Laboratories' (SNL) mission areas underscore the need for systems-level thinking, resulting in a better understanding of the organizational work systems and environments in which our hardware and software will be used. SNL researchers have successfully used Activity Theory (AT) as a framework to clarify work systems, informing product design, delivery, acceptance, and use. To increase familiarity with AT, a working group assembled to select key resources on the topic and generate an annotated bibliography. The resources in this bibliography are arranged in six categories: 1) An introduction to AT; 2) Advanced readings in AT; 3) AT and human computer interaction (HCI); 4) Methodological resources for practitioners; 5) Case studies; and 6) Related frameworks that have been used to study work systems. This annotated bibliography is expected to improve the reader's understanding of AT and enable more efficient and effective application of it.

More Details

The Cognitive Effects of Machine Learning Aid in Domain-Specific and Domain-General Tasks

Proceedings of the Annual Hawaii International Conference on System Sciences

Divis, Kristin; Howell, Breannan C.; Matzen, Laura E.; Stites, Mallory C.; Gastelum, Zoe N.

With machine learning (ML) technologies rapidly expanding to new applications and domains, users are collaborating with artificial intelligence-assisted diagnostic tools to an ever greater extent. But what impact does ML aid have on cognitive performance, especially when the ML output is not always accurate? Here, we examined the cognitive effects of the presence of simulated ML assistance—including both accurate and inaccurate output—on two tasks (a domain-specific nuclear safeguards task and a domain-general visual search task). Patterns of performance varied across the two tasks for both the presence of ML aid and the category of ML feedback (e.g., false alarm). These results indicate that differences such as domain could influence users' performance with ML aid, and suggest the need to test the effects of ML output (and associated errors) in the specific context of use, especially when the stimuli of interest are vague or ill-defined.

More Details

Assessing Cognitive Impacts of Errors from Machine Learning and Deep Learning Models: Final Report

Gastelum, Zoe N.; Matzen, Laura E.; Stites, Mallory C.; Divis, Kristin; Howell, Breannan C.; Jones, Aaron P.; Trumbo, Michael C.

Due to their recent increases in performance, machine learning and deep learning models are being increasingly adopted across many domains for visual processing tasks. One such domain is international nuclear safeguards, which seeks to verify the peaceful use of commercial nuclear energy across the globe. Despite recent impressive performance results from machine learning and deep learning algorithms, there is always at least some small level of error. Given the significant consequences of international nuclear safeguards conclusions, we sought to characterize how incorrect responses from a machine or deep learning-assisted visual search task would cognitively impact users. We found that not only do some types of model errors have larger negative impacts on human performance than others, but the scale of those impacts changes depending on the accuracy of the model with which they are presented, and the impacts persist in scenarios of evenly distributed errors and single-error presentations. Further, we found that experiments conducted using a common visual search dataset from the psychology community have similar implications to a safeguards-relevant dataset of images containing hyperboloid cooling towers when the cooling tower images are presented to expert participants. While novice performance was considerably different (and worse) on the cooling tower task, we saw increased novice reliance on the most challenging cooling tower images compared to experts. These findings are relevant not just to the cognitive science community, but also for developers of machine and deep learning models that will be implemented in multiple domains. For safeguards, this research provides key insights into how machine and deep learning projects should be implemented, given the domain's special requirement that information not be missed.

More Details

Evaluating the Impact of Algorithm Confidence Ratings on Human Decision Making in Visual Search

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Jones, Aaron P.; Trumbo, Michael C.; Matzen, Laura E.; Stites, Mallory C.; Howell, Breannan C.; Divis, Kristin; Gastelum, Zoe N.

As the ability to collect and store data grows, so does the need to efficiently analyze that data. As human-machine teams that use machine learning (ML) algorithms to inform human decision-making grow in popularity, it becomes increasingly critical to understand the optimal methods of implementing algorithm-assisted search. In order to better understand how algorithm confidence values associated with object identification can influence participant accuracy and response times during a visual search task, we compared models that provided appropriate confidence, random confidence, and no confidence, as well as a model biased toward overconfidence and a model biased toward underconfidence. Results indicate that randomized confidence is likely harmful to performance, while non-random confidence values are likely better than no confidence value for maintaining accuracy over time. Providing participants with appropriate confidence values did not seem to benefit performance any more than providing participants with under- or overconfident models.

More Details

Rim-to-Rim Wearables at The Canyon for Health (R2R WATCH): Physiological, Cognitive, and Biological Markers of Performance Decline in an Extreme Environment

Journal of Human Performance in Extreme Environments

Divis, Kristin; Abbott, Robert G.; Branda, Catherine B.; Avina, Glory E.; Femling, Jon F.; Huerta, Jose G.; Jelinkova, Lucie J.; Jennings, Jeremy K.; Pearce, Emily P.; Ries, Daniel R.; Sanchez, Danielle; Silva, Austin R.

Abstract not provided.

A heuristic approach to value-driven evaluation of visualizations

IEEE Transactions on Visualization and Computer Graphics

Wall, Emily; Agnihotri, Meeshu; Matzen, Laura E.; Divis, Kristin; Haass, Michael J.; Endert, Alex; Stasko, John

Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.
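The value equation that ICE-T operationalizes can be sketched schematically. The formulation below is a hedged reading of the four components named in the abstract (time savings, insight, essence, confidence); the original articulation is descriptive rather than quantitative, so treat the symbols and the simple additive form as illustrative assumptions, not the paper's literal formula:

```latex
% Schematic sketch of the visualization value equation,
% assuming a simple additive combination of the four components
% named in the abstract (T, I, E, C are illustrative symbols):
V = \underbrace{T}_{\text{time savings}}
  + \underbrace{I}_{\text{insight}}
  + \underbrace{E}_{\text{essence}}
  + \underbrace{C}_{\text{confidence}}
```

The ICE-T methodology decomposes each of these four components into guidelines and low-level heuristics that evaluators score, rather than computing the sum directly.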

More Details

The Tularosa study: An experimental design and implementation to quantify the effectiveness of cyber deception

Proceedings of the Annual Hawaii International Conference on System Sciences

Ferguson-Walter, Kimberly J.; Shade, Temmie B.; Rogers, Andrew V.; Niedbala, Elizabeth M.; Trumbo, Michael C.; Nauer, Kevin S.; Divis, Kristin; Jones, Aaron P.; Combs, Angela C.; Abbott, Robert G.

The Tularosa study was designed to understand how defensive deception—both cyber and psychological—affects cyber attackers. Over 130 red teamers participated in a network penetration task over two days in which we controlled both the presence of and explicit mention of deceptive defensive techniques. To our knowledge, this represents the largest study of its kind ever conducted on a professional red team population. The study included a battery of questionnaires (e.g., experience, personality, etc.) and cognitive tasks (e.g., fluid intelligence, working memory, etc.), allowing for the characterization of a "typical" red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate, etc.) to be correlated with the cyber events. This paper describes the design, implementation, data, and population characteristics, and begins to examine preliminary results.

More Details

Challenges in Eye Tracking for Dynamic User-Driven Workflows

McNamara, Laura A.; Divis, Kristin; Morrow, James D.; Chen, Maximillian G.; Perkins, David P.

This three-year Laboratory Directed Research and Development (LDRD) project developed a prototype data collection system and analysis techniques to enable the measurement and analysis of user-driven dynamic workflows. Over three years, our team developed software, algorithms, and analysis techniques to explore the feasibility of capturing and automatically associating eye tracking data with geospatial content in a user-directed, dynamic visual search task. Although this was a small LDRD, we demonstrated the feasibility of automatically capturing, associating, and expressing gaze events in terms of geospatial image coordinates, even as the human "analyst" is given complete freedom to manipulate the stimulus image during a visual search task. This report describes the problem under examination, our approach, the techniques and software we developed, key achievements, ideas that did not work as we had hoped, and unsolved problems we hope to tackle in future projects.

More Details