This project studied the potential for multiscale group dynamics in complex social systems, including emergent recursive interaction. Current social theory on group formation and interaction focuses on a single scale (individuals forming groups) and is largely qualitative in its explanation of mechanisms. We combined theory, modeling, and data analysis to find evidence that these multiscale phenomena exist, and to investigate their potential consequences and develop predictive capabilities. In this report, we discuss the results of data analysis showing that some group dynamics theory holds at multiple scales. We introduce a new theory on communicative vibration that uses social network dynamics to predict group life cycle events. We discuss a model of behavioral responses to the COVID-19 pandemic that incorporates influence and social pressures. Finally, we discuss a set of modeling techniques that can be used to simulate multiscale group phenomena.
The testing effect refers to the benefits to retention that result from structuring learning activities in the form of a test. As educators consider implementing test-enhanced learning paradigms in real classroom environments, we think it is critical to consider how an array of factors affecting test-enhanced learning in laboratory studies bear on test-enhanced learning in real-world classroom environments. This review discusses the degree to which test feedback, test format (of formative tests), number of tests, level of the test questions, timing of tests (relative to initial learning), and retention duration matter for testing effects in ecologically valid contexts (e.g., classroom studies). Attention is also devoted to characteristics of much laboratory testing-effect research that may limit translation to classroom environments, such as the complexity of the material being learned, the value of the testing effect relative to other generative learning activities in classrooms, an educational orientation that favors criterial tests focused on transfer of learning, and online instructional modalities. We consider how student-centric variables present in the classroom (e.g., cognitive abilities, motivation) may bear on the effects of testing-effect techniques implemented in the classroom. We conclude that the testing effect is a robust phenomenon that benefits a wide variety of learners in a broad array of learning domains. Still, studies are needed to compare the benefit of testing to other learning strategies, to further characterize how individual differences relate to testing benefits, and to examine whether testing benefits learners at advanced levels.
There is a wealth of psychological theory regarding the drive for individuals to congregate and form social groups, positing that people may organize out of fear, social pressure, or even to manage their self-esteem. We evaluate three such theories for multi-scale validity by studying them not only at the individual scale for which they were originally developed, but also for applicability to group interactions and behavior. We implement this multi-scale analysis using a dataset of communications and group membership derived from a long-running online game, matching the intent behind the theories to quantitative measures that describe players’ behavior. Once we establish that the theories hold for the dataset, we increase the scope to test the theories at the higher scale of group interactions. Despite being formulated to describe individual cognition and motivation, we show that some group dynamics theories hold at the higher level of group cognition and can effectively describe the behavior of joint decision making and higher-level interactions.
Current quantification of margin and uncertainty (QMU) guidance lacks a consistent framework for communicating the credibility of analysis results. Recent efforts at providing QMU guidance have pushed for broadening the analyses supporting QMU results beyond extrapolative statistical models to include a more holistic picture of risk, including information garnered from both experimental campaigns and computational simulations. Credibility guidance would assist in the consideration of belief-based aspects of an analysis. Such guidance exists for presenting computational simulation-based analyses and is under development for the integration of experimental data into computational simulations (calibration or validation), but is absent for the ultimate QMU product resulting from experimental or computational analyses. A QMU credibility assessment framework comprising five elements is proposed: requirement definitions and quantity of interest selection, data quality, model uncertainty, calibration/parameter estimation, and validation. By considering and reporting on these elements during a QMU analysis, the decision-maker receives a more complete description of the analysis and is better positioned to understand the risks involved with using the analysis to support a decision. A molten salt battery application is used to demonstrate the proposed QMU credibility framework.
As system of systems (SoS) models become increasingly complex and interconnected, a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel, cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS, specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS, termed a sociotechnical SoS, allow for quantification of the interplay of effectiveness and efficiency seen in detection theory to measure the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof of concept.
As concerns with cyber security and network protection increase, there is a greater need for organizations to deploy state-of-the-art technology to keep their cyber information safe. However, foolproof cyber security and network protection are a difficult feat, since a security breach can be caused simply by a single employee who unknowingly succumbs to a cyber threat. It is critical for an organization’s workforce to holistically adopt cyber technologies that enable enhanced protection, help ward off cyber threats, and are efficient at encouraging human behavior towards safer cyber practices. It is also crucial for the workforce, once they have adopted cyber technologies, to remain consistent and thoughtful in their use of these technologies to keep resistance strong against cyber threats and vulnerabilities. Adoption of cyber technology can be difficult. Many organizations struggle with their workforce adopting newly introduced cyber technologies, even when the technologies themselves have proven to be worthy solutions. Research, especially in the domain of cognitive science and the human dimension, has sought to understand how technology adoption works and can be leveraged. This paper reviews what empirical literature has found regarding cyber technology adoption, the current research gaps, and how non-research-based efforts can influence adoption. Focusing on current efforts accomplished by a government-sponsored activity entitled “ACT” (Adoption of Cybersecurity Technologies), the aim of this paper is to empirically study cyber technology adoption to better understand how to influence operational adoption across the government sector and what can be done to develop a model that enables cyber technology adoption.
Modern systems, such as physical security systems, are often designed to involve complex interactions of technological and human elements. Evaluation of the performance of these systems often overlooks the human element. A method is proposed here to expand the concept of sensitivity—as denoted by d’—from signal detection theory (Green & Swets 1966; Macmillan & Creelman 2005), which came out of the field of psychophysics, to cover not only human threat detection but also other human functions plus the performance of technical systems in a physical security system, thereby including humans in the overall evaluation of system performance. New in this method is the idea that probabilities of hits (accurate identification of threats) and false alarms (saying “threat” when there is not one), which are used to calculate d’ of the system, can be applied to technologies and, furthermore, to different functions in the system beyond simple yes-no threat detection. At the most succinct level, the method returns a single number that represents the effectiveness of a physical security system; specifically, the balance between the handling of actual threats and the distraction of false alarms. The method can be automated, and the constituent parts revealed, such that given an interaction graph that indicates the functional associations of system elements and the individual probabilities of hits and false alarms for those elements, it will return the d’ of the entire system as well as d’ values for individual parts. The method can also return a measure of the response bias of the system. One finding of this work is that the d’ for a physical security system can be relatively poor in spite of having excellent d’s for each of its individual functional elements.
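The core d’ and response-bias computations referenced above are standard signal detection theory: z-transform the hit and false-alarm probabilities and combine them. A minimal sketch follows; the serial combination rule in `serial_system` is an illustrative assumption for demonstration, not necessarily the paper’s graph-based aggregation method.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard-normal CDF, i.e., z(p)

def d_prime(p_hit: float, p_fa: float) -> float:
    """Sensitivity: d' = z(P_hit) - z(P_fa)."""
    return _z(p_hit) - _z(p_fa)

def response_bias(p_hit: float, p_fa: float) -> float:
    """Criterion: c = -(z(P_hit) + z(P_fa)) / 2; positive = conservative."""
    return -(_z(p_hit) + _z(p_fa)) / 2

def serial_system(elements: list[tuple[float, float]]) -> float:
    """Illustrative serial chain (an assumption, not the paper's method):
    a threat is detected only if EVERY element detects it, and a false
    alarm occurs if ANY element false-alarms."""
    p_hit_sys = 1.0
    p_quiet = 1.0  # probability that no element false-alarms
    for p_hit, p_fa in elements:
        p_hit_sys *= p_hit
        p_quiet *= (1.0 - p_fa)
    return d_prime(p_hit_sys, 1.0 - p_quiet)

# Five elements, each individually strong (d' ~ 3.29 apiece) ...
elements = [(0.95, 0.05)] * 5
system_d = serial_system(elements)
# ... yet the chained system is far less sensitive, echoing the finding
# that system d' can be poor despite excellent element-level d's.
```

Under this toy serial rule, each element has d’ ≈ 3.29, but the five-element chain drops to roughly d’ ≈ 1.5, illustrating how composition can degrade overall sensitivity.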
Analysts across national security domains are required to sift through large amounts of data to find and compile relevant information in a form that enables decision makers to take action in high-consequence scenarios. However, even the most experienced analysts cannot be perfectly consistent and accurate across an entire dataset, cannot remain unbiased toward familiar documentation, and cannot synthesize and process large amounts of information in a small amount of time. Sandia National Laboratories has attempted to solve this problem by developing an intelligent web crawler called Huntsman. Huntsman acts as a personal research assistant by browsing the internet or offline datasets in a way similar to the human search process, only much faster (millions of documents per day), submitting queries to search engines and assessing the usefulness of page results through analysis of full-page content with a suite of text analytics. This paper discusses Huntsman’s capability to both mirror and enhance human analysts using intelligent web crawling with analysts-in-the-loop. The goal is to demonstrate how weaknesses in human cognitive processing can be compensated for by fusing human processes with text analytics and web crawling systems, which ultimately reduces analysts’ cognitive burden and increases mission effectiveness.
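The query-and-score loop described above can be sketched in miniature. All names here are hypothetical, and the length-normalized term-frequency score is a deliberately simple stand-in for Huntsman’s actual text-analytics suite, which is not described in detail here.

```python
from collections import Counter

def relevance(text: str, query_terms: list[str]) -> float:
    """Toy relevance score: length-normalized term frequency of the
    query terms over the full page text (stand-in for real analytics)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[term] for term in query_terms) / len(tokens)

def rank_corpus(corpus: dict[str, str], query: str, top_k: int = 3) -> list[str]:
    """Score every document against the query; return the top-k IDs."""
    terms = query.lower().split()
    ranked = sorted(corpus,
                    key=lambda doc_id: relevance(corpus[doc_id], terms),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical offline corpus, mirroring the offline-dataset mode above.
corpus = {
    "doc1": "reactor safety report with reactor margins",
    "doc2": "cafeteria menu for the week",
    "doc3": "reactor incident analysis",
}
top = rank_corpus(corpus, "reactor safety", top_k=2)
```

Here `top` ranks `doc1` first (two query-term hits in six tokens) ahead of `doc3`; a real crawler would additionally feed the top results back as new queries to search engines.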
System-of-systems modeling has traditionally focused on physical systems rather than humans, but recent events have proved the necessity of considering the human in the loop. As technology becomes more complex and layered security continues to increase in importance, capturing humans and their interactions with technologies within the system-of-systems will be increasingly necessary. After an extensive job-task analysis, a novel type of system-of-systems simulation model has been created to capture the human-technology interactions on an extra-small forward operating base to better understand performance, key security drivers, and the robustness of the base. In addition to the model, an innovative framework for using detection theory to calculate d’ for individual elements of the layered security system, and for the entire security system as a whole, is under development.
Performance at Transportation Security Administration (TSA) airport checkpoints must be consistently high to skillfully mitigate national security threats and incidents. To accomplish this, Transportation Security Officers (TSOs) must perform exceptionally in threat detection, interaction with passengers, and efficiency. It is difficult to measure the human attributes that contribute to high-performing TSOs because attributes such as memory, personality, and competence are inherently latent variables. Cognitive scientists at Sandia National Laboratories have developed a methodology that links TSOs’ cognitive ability to their performance. This paper discusses how the methodology was developed using a strict quantitative process, its strengths and weaknesses, and how it could be generalized to other, non-TSA contexts. The scope of this project is to identify attributes that distinguish high and low TSO performance for the duties at the checkpoint that involve direct interaction with people going through the checkpoint.