This three-year Laboratory Directed Research and Development (LDRD) project developed a prototype data collection system and analysis techniques to enable the measurement and analysis of user-driven dynamic workflows. Over three years, our team developed software, algorithms, and analysis techniques to explore the feasibility of capturing and automatically associating eye tracking data with geospatial content in a user-directed, dynamic visual search task. Although this was a small LDRD, we demonstrated the feasibility of automatically capturing, associating, and expressing gaze events in terms of geospatial image coordinates, even as the human "analyst" is given complete freedom to manipulate the stimulus image during a visual search task. This report describes the problem under examination, our approach, the techniques and software we developed, key achievements, ideas that did not work as we had hoped, and unsolved problems we hope to tackle in future projects.
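As a minimal illustration of the kind of association involved, the sketch below (hypothetical names and viewer state, not the project's actual software) maps a gaze sample recorded in screen coordinates back to image-pixel coordinates, assuming the viewer's pan offset and zoom factor are logged with each sample:

```python
# Minimal sketch (not the project's actual software): mapping a gaze sample
# recorded in screen coordinates back to image-pixel coordinates, assuming the
# image viewer's pan offset and zoom factor are known at the time of the sample.
from dataclasses import dataclass

@dataclass
class ViewportState:
    """Hypothetical viewer state captured alongside each gaze sample."""
    pan_x: float   # image pixel shown at the viewport's left edge
    pan_y: float   # image pixel shown at the viewport's top edge
    zoom: float    # screen pixels per image pixel

def gaze_to_image_coords(gaze_x: float, gaze_y: float, view: ViewportState):
    """Convert a screen-space gaze point to image-space coordinates."""
    img_x = view.pan_x + gaze_x / view.zoom
    img_y = view.pan_y + gaze_y / view.zoom
    return img_x, img_y

# Example: a fixation at screen (640, 360) while the viewer is panned to
# image pixel (1200, 800) at 2x zoom lands at image pixel (1520, 980).
print(gaze_to_image_coords(640, 360, ViewportState(1200, 800, 2.0)))
```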
Multivariate time-series datasets are intrinsic to the study of dynamic, naturalistic behavior, such as in the applications of finance and motion video analysis. Statistical models provide the ability to identify event patterns in these data under conditions of uncertainty, but researchers must be able to evaluate how well a model uses the available information in a dataset for its clustering decisions and how it represents uncertainty. The Hidden Markov Model (HMM) is an established method for clustering time-series data, where the hidden states of the HMM are the clusters. We develop novel methods for quantifying and visualizing the clustering performance and uncertainty of fitting an HMM to multivariate time-series data. We explain how uncertainty quantification and visualization are useful for evaluating the performance of clustering models, and how they can enhance the exploitation of information in time-series datasets. We implement our methods to cluster patterns of scanpaths from raw eye tracking data.
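As a rough sketch of the general approach (using the open-source hmmlearn package, which is not necessarily the tooling used in this work), an HMM can be fit to a multivariate series, its hidden states read off as cluster labels, and its posterior state probabilities used as a per-sample uncertainty signal:

```python
# Sketch of HMM-based time-series clustering with hmmlearn (a third-party package,
# not necessarily what the authors used). Hidden states play the role of clusters,
# and posterior state probabilities give a per-sample measure of clustering uncertainty.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic 2-D "gaze feature" sequence, e.g. fixation duration and saccade length.
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 2)) for m in ([0, 0], [3, 3])])

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)            # hard cluster labels (Viterbi path)
posteriors = model.predict_proba(X)  # soft assignments: rows sum to 1
# A simple uncertainty summary: entropy of the posterior at each time step.
entropy = -(posteriors * np.log(posteriors + 1e-12)).sum(axis=1)
print(states[:10], entropy.mean())
```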
Many companies rely on user experience metrics, such as Net Promoter scores, to monitor changes in customer attitudes toward their products. This paper suggests that similar metrics can be used to assess the user experience of the pilots and sensor operators who are tasked with using our radar, EO/IR, and other remote sensing technologies. As we have previously discussed, making our national security remote sensing systems useful, usable, and adoptable is a human-system integration problem that does not get the sustained attention it deserves, particularly given the high-throughput, information-dense task environments common to military operations. In previous papers, we have demonstrated how engineering teams can adopt well-established human-computer interaction principles to fix significant usability problems in radar operational interfaces. In this paper, we describe how we are using a combination of Situation Awareness design methods, along with techniques from the consumer sector, to identify opportunities for improving human-system interactions. We explain why we believe that all stakeholders in remote sensing, including program managers, engineers, and operational users, can benefit from systematically incorporating some of these measures into the evaluation of our national security sensor systems. We also provide examples of our own experience adapting consumer user experience metrics in operator-focused evaluation of currently deployed radar interfaces.
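For reference, the consumer-sector Net Promoter Score mentioned above is computed from 0-10 "likelihood to recommend" ratings as the percentage of promoters minus the percentage of detractors; the sketch below shows that standard calculation (the ratings are made up, and adapting the question wording to operators is a matter of evaluation design, not shown here):

```python
# Standard Net Promoter Score calculation from 0-10 survey ratings.
def net_promoter_score(ratings):
    """Ratings are 0-10 responses to 'How likely are you to recommend ...?'"""
    promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0 through 6
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 9, 10]))  # 25.0
```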
Fatigued driving contributes to a substantial number of motor vehicle accidents each year. Music listening is often employed as a countermeasure during driving in order to mitigate the effects of fatigue. Though music listening has been established as a distractor in the sense that it increases cognitive load during driving, it is possible that increased cognitive load is desirable under particular circumstances. For instance, during situations that typically result in cognitive underload, such as driving in a low-traffic monotonous stretch of highway, it may be beneficial for cognitive load to increase, thereby necessitating allocation of greater cognitive resources to the task of driving and attenuating fatigue. In the current study, we employed a song-naming game as a countermeasure to fatigued driving in a simulated monotonous environment. During the first driving session, we established that driving performance deteriorates in the absence of an intervention following 30 min of simulated driving. During the second session, we found that a song-naming game employed at the point of fatigue onset was an effective countermeasure, as reflected by simulated driving performance that met or exceeded fresh driving behavior and was significantly better relative to fatigued performance during the first driving session.
We address the problem of wide-area search of overhead imagery. Given a time sequence of overhead images, we construct a geospatial-temporal semantic graph, which expresses the complex continuous information in the overhead images in a discrete, searchable form, including explicit modeling of changes seen from one image to the next. We can then express desired search goals as a template graph and search for matches using simple and efficient graph search algorithms. This produces a set of potential matches that provide cues for where to examine the imagery in detail, so that human expertise can be applied to determine which matches are correct. We include a match quality metric that scores the matches according to how well they match the stated search goal. This enables matches to be presented in sorted order with the best matches first, similar to the results returned by a web search engine. We present an evaluation of the method applied to several examples and data sets, and show that it can be used successfully for some problems. We also remark on several limitations of the method and note additional work needed to improve its scope and robustness.
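A toy sketch of the template-matching idea, using the open-source networkx library as a stand-in for the actual graph algorithms and with a placeholder quality score, is shown below:

```python
# Toy sketch of template-graph search over an attributed graph using networkx
# (an open-source stand-in, not the graph library the report describes).
# Candidate matches are scored so results can be returned best-first.
import networkx as nx
from networkx.algorithms import isomorphism

# Toy "scene" graph: nodes are image regions with a semantic label.
scene = nx.Graph()
scene.add_nodes_from([(1, {"label": "building"}), (2, {"label": "road"}),
                      (3, {"label": "parking_lot"}), (4, {"label": "building"})])
scene.add_edges_from([(1, 2), (2, 3), (3, 4)])

# Search template: a building adjacent to a road.
template = nx.Graph()
template.add_nodes_from([("b", {"label": "building"}), ("r", {"label": "road"})])
template.add_edge("b", "r")

matcher = isomorphism.GraphMatcher(
    scene, template, node_match=isomorphism.categorical_node_match("label", None))

def match_quality(mapping):
    """Placeholder score: fraction of template nodes matched (the report's actual
    metric scores how well a match satisfies the stated search goal)."""
    return len(mapping) / template.number_of_nodes()

matches = sorted(matcher.subgraph_isomorphisms_iter(), key=match_quality, reverse=True)
for m in matches:
    print(m, match_quality(m))
```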
This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geospatial and temporal visualizations. Development of this application has made user experience an explicit driver for project- and algorithm-level decisions that will affect how analysts one day make use of PANTHER technologies.
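Purely as a hypothetical illustration (not the application's actual query model or the GeoGraphy API), a search template of nodes, edges, and constraints might be expressed as a small data structure like this:

```python
# Hypothetical sketch of a search template of nodes, edges, and constraints;
# the PANTHER R&D Application's real query model and the GeoGraphy API are not
# reproduced here.
from dataclasses import dataclass, field

@dataclass
class TemplateNode:
    name: str
    semantic_label: str                              # e.g. "vehicle", "building"

@dataclass
class TemplateEdge:
    source: str
    target: str
    relation: str                                    # e.g. "adjacent_to"
    constraints: dict = field(default_factory=dict)  # e.g. {"max_distance_m": 50}

@dataclass
class SearchTemplate:
    nodes: list
    edges: list

template = SearchTemplate(
    nodes=[TemplateNode("v", "vehicle"), TemplateNode("b", "building")],
    edges=[TemplateEdge("v", "b", "adjacent_to", {"max_distance_m": 50})],
)
print(len(template.nodes), len(template.edges))
```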
Researchers at Sandia National Laboratories are integrating qualitative and quantitative methods from anthropology, human factors and cognitive psychology in the study of military and civilian intelligence analyst workflows in the United States’ national security community. Researchers who study human work processes often use qualitative theory and methods, including grounded theory, cognitive work analysis, and ethnography, to generate rich descriptive models of human behavior in context. In contrast, experimental psychologists typically do not receive training in qualitative induction, nor are they likely to practice ethnographic methods in their work, since experimental psychology tends to emphasize generalizability and quantitative hypothesis testing over qualitative description. However, qualitative frameworks and methods from anthropology, sociology, and human factors can play an important role in enhancing the ecological validity of experimental research designs.
Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling that are commonly used in this context often do not adequately account for important human factors. Current techniques provide insights into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have system-level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision making and dispute resolution processes. Development of integrated tools is further hampered by the fact that collection of primary data for decision-making modeling is costly and time consuming. This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture. The modeling platform is enhanced by use of traditional and advanced processes and tools for expedited data capture. Specific objectives of the project are to: (1) develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent-based modeling; (2) develop an iterative, participatory modeling process, supported with traditional and advanced data capture techniques, that may be utilized to facilitate decision making, dispute resolution, and collaborative learning processes; and (3) examine potential applications of this technology and process. The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply the technology. Stakeholder interaction with the model and associated data capture was facilitated through two very different modes of engagement: one a standard interface involving radio buttons, slider bars, graphs, and plots, and the other an immersive serious gaming interface. The decision support architecture developed through this project was piloted in the Middle Rio Grande Basin to examine how these tools might be utilized to promote enhanced understanding and decision making in the context of complex water resource management issues. Potential applications of this architecture and its capacity to lead to enhanced understanding and decision making were assessed through qualitative interviews with study participants who represented key stakeholders in the basin.
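A toy illustration of the hybrid technique, with made-up parameters rather than anything from the project's model, couples a system-dynamics stock update (a shared reservoir) with agent-based withdrawal decisions:

```python
# Toy illustration (hypothetical parameters, not the project's model) of coupling a
# system-dynamics stock update with agent-based decisions: agents choose how much
# water to draw each step, and the shared reservoir stock evolves in response.
import random

random.seed(1)

class Farmer:
    def __init__(self, demand):
        self.demand = demand
    def request(self, reservoir_level, capacity):
        # Agents voluntarily cut back when the reservoir runs low.
        scarcity = reservoir_level / capacity
        return self.demand * (1.0 if scarcity > 0.5 else scarcity * 2.0)

capacity, reservoir = 1000.0, 800.0
inflow = 40.0                                   # per-step recharge (stock inflow)
agents = [Farmer(random.uniform(10, 30)) for _ in range(5)]

for step in range(10):
    withdrawals = sum(a.request(reservoir, capacity) for a in agents)
    reservoir = min(capacity, max(0.0, reservoir + inflow - withdrawals))
    print(f"step {step}: withdrawals={withdrawals:.1f}, reservoir={reservoir:.1f}")
```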
The purpose of this LDRD is to develop technology allowing warfighters to provide high-level commands to their unmanned assets, freeing them to command a group of such assets or to commit the bulk of their attention elsewhere. To this end, a brain-emulating cognition and control architecture (BECCA) was developed, incorporating novel and uniquely capable feature creation and reinforcement learning algorithms. BECCA was demonstrated on both a mobile manipulator platform and a seven-degree-of-freedom serial-link robot arm. Existing military ground robots are almost universally teleoperated and occupy the complete attention of an operator. They may remove a soldier from harm's way, but they do not necessarily reduce manpower requirements. Current research efforts to solve the problem of autonomous operation in an unstructured, dynamic environment fall short of the desired performance. In order to increase the effectiveness of unmanned vehicle (UV) operators, we proposed to develop robots that can be 'directed' rather than remote-controlled. They are instructed and trained by human operators, rather than driven. The technical approach is modeled closely on psychological and neuroscientific models of human learning. Two Sandia-developed models are utilized in this effort: the Sandia Cognitive Framework (SCF), a cognitive psychology-based model of human processes, and BECCA, a psychophysics-based model of learning, motor control, and conceptualization. Together, these models span the functional space from perceptuo-motor abilities to high-level motivational and attentional processes.
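For readers unfamiliar with this learning style, the sketch below shows a generic tabular Q-learning loop; it is only an illustration of reinforcement learning in general and does not reproduce BECCA's feature-creation or learning algorithms:

```python
# Generic tabular Q-learning sketch illustrating reinforcement learning; this is
# not BECCA, whose algorithms are considerably more involved.
import random

random.seed(0)
n_states, n_actions = 5, 2          # toy 1-D world: move left/right toward a goal state
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else 0.0   # reward only at the goal state
    return nxt, reward

for episode in range(200):
    s = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if r:
            break

print([[round(q, 2) for q in row] for row in Q])   # learned action values
```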
Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process is the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.
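As a highly simplified, generic illustration of episodic encoding and cue-based retrieval (not a representation of the cortical-hippocampal architecture itself), consider a store that keeps individual episodes and recalls the ones most similar to a retrieval cue:

```python
# Generic sketch of episodic storage and cue-based retrieval; it only illustrates
# the idea of updating knowledge from individual episodes and does not represent
# the cortical-hippocampal architecture described above.
import numpy as np

class EpisodicStore:
    def __init__(self):
        self.episodes = []                       # each episode is a feature vector

    def encode(self, features):
        self.episodes.append(np.asarray(features, dtype=float))

    def recall(self, cue, k=1):
        """Return the k stored episodes most similar (cosine) to the retrieval cue."""
        cue = np.asarray(cue, dtype=float)
        sims = [float(e @ cue / (np.linalg.norm(e) * np.linalg.norm(cue) + 1e-12))
                for e in self.episodes]
        order = np.argsort(sims)[::-1][:k]
        return [self.episodes[i] for i in order]

store = EpisodicStore()
store.encode([1.0, 0.0, 0.2])
store.encode([0.1, 1.0, 0.9])
print(store.recall([0.9, 0.1, 0.3], k=1))
```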