The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and pose questions to these simulations much as social scientists study real-world systems, and enabled robust evaluation of the teams' causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically examining the influence of system complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out the factors that determine the success of social science research.
Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model's causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model's causal structure diagram, which characterizes the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model's causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model's fundamental assumptions and design.
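The abstract above mentions cyclomatic complexity and feedback density computed over a causal structure diagram. As a rough illustration only, the sketch below treats the diagram as a directed graph and uses plausible stand-in definitions, since the article's exact formulas are not reproduced here: McCabe's cyclomatic number M = E - N + 2P (with P assumed to be 1 for a connected model) and a "feedback density" taken to be the fraction of causal links that lie on some directed cycle. A link (u, v) lies on a cycle exactly when u and v share a strongly connected component.

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm over an adjacency-dict digraph; returns node -> SCC id."""
    order, seen = [], set()

    def visit(u):
        seen.add(u)
        for v in graph.get(u, ()):
            if v not in seen:
                visit(v)
        order.append(u)  # record DFS finish order

    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for u in nodes:
        if u not in seen:
            visit(u)

    reverse = {n: [] for n in nodes}
    for u, vs in graph.items():
        for v in vs:
            reverse[v].append(u)

    # Sweep the reverse graph in reverse finish order; each sweep is one SCC.
    scc_id, assigned = {}, set()
    for root in reversed(order):
        if root in assigned:
            continue
        stack = [root]
        while stack:
            x = stack.pop()
            if x not in assigned:
                assigned.add(x)
                scc_id[x] = root
                stack.extend(reverse[x])
    return scc_id

def causal_complexity(graph):
    """Return (cyclomatic complexity, feedback density) for a causal diagram."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    edges = [(u, v) for u, vs in graph.items() for v in vs]
    cyclomatic = len(edges) - len(nodes) + 2          # assumes one connected model
    scc = strongly_connected_components(graph)
    in_cycle = sum(1 for u, v in edges if scc[u] == scc[v])
    feedback_density = in_cycle / len(edges) if edges else 0.0
    return cyclomatic, feedback_density
```

For the small diagram `{"A": ["B"], "B": ["C"], "C": ["A", "D"]}`, three of the four links sit on the A-B-C feedback loop, giving a cyclomatic number of 2 and a feedback density of 0.75 under these stand-in definitions.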
The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information-theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
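One way to make the information-theoretic comparison concrete, purely as an illustrative stand-in for whatever metric the paper actually uses, is to summarize each causal graph by a discrete statistic (here, its out-degree distribution) and score the divergence between the two summaries with the Jensen-Shannon divergence, which is symmetric and bounded in [0, 1] bits:

```python
import math
from collections import Counter

def out_degree_dist(graph):
    """Normalized out-degree distribution of a causal graph (dict: node -> targets)."""
    counts = Counter(len(targets) for targets in graph.values())
    total = sum(counts.values())
    return {degree: c / total for degree, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a, b):  # Kullback-Leibler divergence, ignoring zero-probability terms
        return sum(a[k] * math.log2(a[k] / b[k]) for k in keys if a.get(k, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Two structurally identical graphs score 0; graphs with no overlap in their degree statistics score 1 bit. A real comparison would likely use a richer structural summary than out-degree alone; this is just the smallest version of the idea.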
The retina plays an important role in animal vision, namely pre-processing visual information before sending it to the brain. The goal of this LDRD was to develop models of motion-sensitive retinal cells for the purpose of developing retinal-inspired algorithms to be applied to real-world data specific to Sandia's national security missions. We specifically focus on detection of small, dim moving targets amidst varying types of clutter or distractor signals. We compare a classic motion-sensitive model, the Hassenstein-Reichardt model, to a model of the OMS (object-motion-sensitive) cell, and find that the Reichardt model performs better under continuous clutter (e.g., white noise) but is very sensitive to particular stimulus conditions (e.g., target velocity). We also demonstrate that lateral inhibition, a ubiquitous characteristic of neural circuitry, can produce target-size tuning, improving detection specifically of small targets.
The retina plays an important role in animal vision, namely preprocessing visual information before sending it to the brain through the optic nerve. Understanding how the retina does this is of particular relevance for the development and design of neuromorphic sensors, especially those focused on image processing. Our research examines mechanisms of motion processing in the retina. We are specifically interested in detection of moving targets under challenging conditions: small or low-contrast (dim) targets amidst high quantities of clutter or distractor signals. In this paper we compare a classic motion-sensitive cell model, the Hassenstein-Reichardt model, to a model of the OMS (object-motion-sensitive) cell, which relies primarily on change detection, and describe the scenarios for which each model is better suited. We also examine mechanisms, inspired by features of retinal circuitry, by which performance may be enhanced. For example, lateral inhibition (mediated by amacrine cells) conveys selectivity for small targets to the W3 ganglion cell; we demonstrate that a similar mechanism can be combined with the previously mentioned motion-processing cell models to select small moving targets for further processing.
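The Hassenstein-Reichardt correlator named in the two abstracts above has a standard textbook form: each detector multiplies a delayed copy of the signal at one photoreceptor with the current signal at a neighboring one, and subtracts the mirror-symmetric arm, so motion in the preferred direction yields a positive output and motion in the opposite direction a negative one. The following 1-D sketch (stimulus generator and delay value are illustrative choices, not the papers' actual parameters) shows that direction selectivity:

```python
def reichardt_response(frames, delay=1):
    """Summed 1-D Hassenstein-Reichardt correlator output over an image sequence.

    Each detector correlates the delayed signal at pixel i with the current
    signal at pixel i+1, minus the mirror-symmetric arm, so rightward motion
    drives the sum positive and leftward motion drives it negative.
    """
    total = 0.0
    for t in range(delay, len(frames)):
        past, now = frames[t - delay], frames[t]
        for i in range(len(now) - 1):
            total += past[i] * now[i + 1] - past[i + 1] * now[i]
    return total

def moving_spot(n_pixels, n_steps, velocity):
    """Binary stimulus: a single bright pixel moving at `velocity` pixels/frame."""
    frames = []
    for t in range(n_steps):
        frame = [0.0] * n_pixels
        frame[(velocity * t) % n_pixels] = 1.0
        frames.append(frame)
    return frames
```

A rightward-moving spot (`velocity=1`) produces a positive summed response and a leftward one (`velocity=-1`) a negative response. The abstracts' observation that the correlator is sensitive to stimulus conditions shows up here too: if the spot's speed is mismatched to the delay, the delayed and current signals stop coinciding and the response collapses.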
Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.
The Transportation Security Administration has a large workforce of Transportation Security Officers, most of whom perform interrogation of x-ray images at the passenger checkpoint. To date, TSOs on the x-ray have been limited to a 30-min session at a time; however, it is unclear where this limit originated. The current paper outlines methods for empirically determining whether that 30-min duty cycle is optimal and whether there are differences between individual TSOs. This work can inform scheduling of TSOs at the checkpoint and can also inform whether TSOs should continue to be cross-trained (i.e., performing all six checkpoint duties) or whether specialization makes more sense.
Electric distribution utilities, the companies that feed electricity to end users, are overseeing a technological transformation of their networks, installing sensors and other automated equipment that are fundamentally changing the way the grid operates. These grid modernization efforts will allow utilities to incorporate some of the newer technology available to the home user, such as solar panels and electric cars, which will result in a bi-directional flow of energy and information. How will this new flow of information affect control room operations? How will the increased automation associated with smart grid technologies influence control room operators' decisions? And how will changes in control room operations and operator decision making impact grid resilience? These questions have not been thoroughly studied, despite the enormous changes that are taking place. In this study, which involved collaborating with utility companies in the state of Vermont, the authors proposed to advance the science of control-room decision making by understanding the impact of distribution grid modernization on operator performance. Distribution control room operators were interviewed to understand daily tasks and decisions and to gain an understanding of how these impending changes will impact control room operations. Situation awareness was found to be a major contributor to successful control room operations. However, the impact of growing levels of automation due to smart grid technology on operators' situation awareness is not well understood. Future work includes performing a naturalistic field study in which operator situation awareness will be measured in real time during normal operations and correlated with the technological changes that are underway. The results of this future study will inform tools and strategies that will help system operators adapt to a changing grid, respond to critical incidents, and maintain critical performance skills.
The impact of automation on human performance has been studied by human factors researchers for over 35 years. One unresolved facet of this research is measurement of the level of automation across and within engineered systems. Repeatable methods of observing, measuring, and documenting the level of automation are critical to the creation and validation of generalized theories of automation's impact on the reliability and resilience of human-in-the-loop systems. Numerous qualitative scales for measuring automation have been proposed. However, these methods require subjective assessments based on the researcher's knowledge and experience, or expert knowledge elicitation involving highly experienced individuals from each work domain. More recently, quantitative scales have been proposed, but they have yet to be widely adopted, likely due to the difficulty of obtaining a sufficient number of empirical measurements from each system component. Our research suggests the need for a quantitative method that enables rapid measurement of a system's level of automation, is applicable across domains, and can be used by human factors practitioners in field studies or by system engineers as part of their technical planning processes. In this paper we present our research methodology and early research results from studies of electricity grid distribution control rooms. Using a system analysis approach based on quantitative measures of level of automation, we provide an illustrative analysis of select grid modernization efforts. This measure of the level of automation can be displayed either as a static, historical view of the system's automation dynamics (the dynamic interplay between human and automation required to maintain system performance) or it can be incorporated into real-time visualization systems already present in control rooms.
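To make the idea of a rapid, quantitative level-of-automation measure concrete, here is one plausible operationalization, not the authors' actual metric: score each system function on a 0-1 scale (0 = fully manual, 1 = fully automated), group the scores by the four information-processing stages of Parasuraman, Sheridan, and Wickens (information acquisition, information analysis, decision selection, action implementation), and report per-stage means plus an overall average. The stage names are from that published model; the scoring scheme itself is a hypothetical sketch.

```python
STAGES = ("acquisition", "analysis", "decision", "action")

def level_of_automation(allocations):
    """Summarize function allocations as per-stage and overall automation levels.

    allocations: list of (stage, score) pairs, where stage is one of STAGES and
    score is in [0, 1] (0 = fully manual, 1 = fully automated).
    """
    per_stage = {stage: [] for stage in STAGES}
    for stage, score in allocations:
        per_stage[stage].append(score)
    stage_means = {stage: (sum(v) / len(v) if v else 0.0)
                   for stage, v in per_stage.items()}
    overall = sum(stage_means.values()) / len(STAGES)
    return stage_means, overall
```

A profile like this could be recomputed as grid modernization projects land, yielding the kind of static historical view of automation dynamics the abstract describes, e.g., sensor rollouts raising the acquisition-stage mean while decision selection stays manual.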