Publications

Results 26–37 of 37

Robust automated knowledge capture

Trumbo, Michael C.; Haass, Michael J.; Adams, Susan S.; Hendrickson, Stacey M.; Abbott, Robert G.

This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project developed a quantitative model, known as RumRunner, that has proven effective in predicting an individual's propensity to shift strategies on the basis of task- and experience-related parameters. Three separate studies that validated the basic RumRunner model are described. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular the individual characteristics that underlie adaptive thinking.

Communications-based automated assessment of team cognitive performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Lakkaraju, Kiran; Adams, Susan S.; Abbott, Robert G.; Forsythe, James C.

In this paper we analyzed speech communications to determine whether expert and novice teams can be differentiated based on their communication patterns. Two pairs of experts and novices performed numerous test sessions on the E-2 Enhanced Deployable Readiness Trainer (EDRT), a medium-fidelity simulator of the Naval Flight Officer (NFO) stations positioned at the back end of the E-2 Hawkeye. Results indicate that experts and novices can indeed be differentiated by their communication patterns. First, experts and novices differ significantly in the frequency of utterances, with both expert teams making far fewer radio calls than both novice teams. Next, the semantic content of utterances was considered. Using both manual and automated speech-to-text conversion, the resulting text documents were compared. For 7 of 8 subjects, the two most similar subjects (by cosine similarity of term vectors) were in the same expertise category (novice/expert); that is, the semantic content of experts' utterances was more similar to that of other experts than to that of novices, and vice versa. Finally, using machine learning techniques we constructed a classifier that, given the text of a subject's speech as input, could identify whether the individual was an expert or a novice with a very low error rate. By inspecting the parameters of the machine learning algorithm we were also able to identify terms strongly associated with novices and with experts. © 2011 Springer-Verlag.
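
The cosine-similarity comparison described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the tokenization, the sample transcripts, and all names below are invented for the example.

```python
# Illustrative sketch: comparing transcripts by cosine similarity of
# term-frequency vectors, as described in the abstract above.
from collections import Counter
import math

def term_vector(text):
    """Term-frequency vector from a whitespace-tokenized transcript."""
    return Counter(text.lower().split())

def cosine_similarity(u, v):
    """Cosine of the angle between two sparse term vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# Hypothetical transcripts keyed by (subject, expertise); the real study
# used recorded radio calls from the EDRT sessions.
transcripts = {
    ("s1", "expert"): "bogey inbound bearing two seven zero request intercept",
    ("s2", "expert"): "bogey bearing two seven zero cleared hot for intercept",
    ("s3", "novice"): "um I think there is a plane over there somewhere",
    ("s4", "novice"): "uh I see a plane maybe over there I am not sure",
}

vectors = {k: term_vector(t) for k, t in transcripts.items()}
for key, vec in vectors.items():
    # Rank the other subjects by similarity; in the study, a subject's
    # nearest neighbors tended to share the same expertise level.
    best = max((k for k in vectors if k != key),
               key=lambda k: cosine_similarity(vec, vectors[k]))
    print(key, "is most similar to", best)
```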

Using after-action review based on automated performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two of three domain-specific performance metrics.

Performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Automated Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83–99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two of the three metrics. Future work will focus on extending these developments for automated assessment of teamwork.
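
As a concrete illustration of the agreement figures quoted above, percent agreement between an automated grader and a human grader can be computed as the fraction of identically graded events. This is a minimal sketch with invented data, not the study's actual scoring pipeline:

```python
# Minimal sketch (hypothetical data): percent agreement between an
# automated grader and a human grader over scored scenario events.
def percent_agreement(auto_grades, human_grades):
    """Fraction of events on which the two graders agree."""
    assert len(auto_grades) == len(human_grades)
    matches = sum(a == h for a, h in zip(auto_grades, human_grades))
    return matches / len(auto_grades)

# Invented pass/fail grades for ten scenario events.
auto  = ["pass", "pass", "fail", "pass", "fail",
         "pass", "pass", "fail", "pass", "pass"]
human = ["pass", "pass", "fail", "pass", "pass",
         "pass", "pass", "fail", "pass", "pass"]
print(f"{percent_agreement(auto, human):.0%} agreement")  # -> 90% agreement
```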

Enabling immersive simulation

Abbott, Robert G.; Basilico, Justin D.; Glickman, Matthew R.; Hart, Derek H.; Whetzel, Jonathan H.

The objective of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations that (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide learned behaviors for 'virtual teammates' derived directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.
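
The abstract does not specify how the Cognitive Foundry learns teammate behavior, but one common approach to learning behavior directly from observed human play is behavior cloning. The sketch below, with entirely hypothetical states and actions, builds a tabular policy that imitates the most frequent human action in each observed state:

```python
# Hedged sketch of behavior cloning from observed human (state, action)
# pairs; all states, actions, and data here are hypothetical.
from collections import Counter, defaultdict

demonstrations = [
    (("enemy_near", "low_health"), "retreat"),
    (("enemy_near", "high_health"), "engage"),
    (("enemy_near", "high_health"), "engage"),
    (("no_enemy", "low_health"), "regroup"),
    (("no_enemy", "high_health"), "patrol"),
]

# Tabular policy: for each observed state, take the action the human
# demonstrator chose most often in that state.
counts = defaultdict(Counter)
for state, action in demonstrations:
    counts[state][action] += 1
policy = {state: c.most_common(1)[0][0] for state, c in counts.items()}

def act(state, default="patrol"):
    """Virtual teammate imitates the most frequent human action."""
    return policy.get(state, default)

print(act(("enemy_near", "high_health")))  # -> engage
```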

Automated expert modeling for automated student evaluation

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abbott, Robert G.

This paper presents automated expert modeling for automated student evaluation, or AEMASE (pronounced "amaze"). This technique grades students by comparing their actions to a model of expert behavior. The expert model is constructed with machine learning techniques, avoiding the costly and time-consuming process of manual knowledge elicitation and expert system implementation. A brief summary of after action review (AAR) and intelligent tutoring systems (ITS) provides background for a prototype AAR application with a learning expert model. A validation experiment confirms that the prototype accurately grades student behavior on a tactical aircraft maneuver application. Finally, several topics for further research are proposed. © Springer-Verlag Berlin Heidelberg 2006.
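
The abstract describes grading by comparing student actions to a learned model of expert behavior. One simple instantiation, sketched here purely for illustration (the paper's actual model and features are not given in this abstract), is a nearest-neighbor grader over labeled example maneuvers:

```python
# Illustrative nearest-neighbor grader in the spirit of AEMASE; the
# feature vectors and labels below are assumptions, not the paper's data.
import math

# Example maneuvers as (airspeed, bank angle) feature vectors with
# labels drawn from observed good and bad performance.
examples = [
    ((300.0, 60.0), "good"),
    ((310.0, 55.0), "good"),
    ((250.0, 20.0), "bad"),
    ((240.0, 15.0), "bad"),
]

def grade(observation):
    """Grade a student maneuver by its nearest labeled example."""
    _, label = min((math.dist(observation, x), lbl) for x, lbl in examples)
    return label

print(grade((305.0, 58.0)))  # -> good
```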

Engineering a transformation of human-machine interaction to an augmented cognitive relationship

Forsythe, James C.; Bernard, Michael L.; Xavier, Patrick G.; Abbott, Robert G.; Speed, Ann S.; Brannon, Nathan B.

This project is being conducted by Sandia National Laboratories in support of the DARPA Augmented Cognition program. Work commenced in April of 2002. The objective for the DARPA program is to 'extend, by an order of magnitude or more, the information management capacity of the human-computer warfighter.' Initially, emphasis has been placed on detection of an operator's cognitive state so that systems may adapt accordingly (e.g., adjust information throughput to the operator in response to workload). Work conducted by Sandia focuses on development of technologies to infer an operator's ongoing cognitive processes, with specific emphasis on detecting discrepancies between machine state and an operator's ongoing interpretation of events.
