Neural Computing at Sandia National Laboratories
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings of the International Joint Conference on Neural Networks
The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task that it was not explicitly trained for. An added challenge arises when the problem domain is dynamic or non-stationary, with the data distributions or categorizations changing over time. This phenomenon is known as concept drift. Game-theoretic algorithms are often iterative by nature, consisting of repeated game play rather than a single interaction. Effectively, rather than requiring extensive retraining to update a learning model, a game-theoretic learner can adjust its strategies between rounds of play, offering a novel approach to concept drift. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in an adaptive manner with repeated play to address concept drift, and show results of applying this algorithm to synthetic as well as real data.
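To illustrate repeated play against a drifting concept, the sketch below uses a plain online hinge-loss learner as a stand-in (it is not the SVM Game classifier itself); the rotating class boundary and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n, angle):
    """Two-class data whose true separating direction rotates with `angle`."""
    w_true = np.array([np.cos(angle), np.sin(angle)])
    X = rng.normal(size=(n, 2))
    y = np.sign(X @ w_true)
    y[y == 0] = 1.0
    return X, y

def play_round(w, X, y, lr=0.1, lam=0.01):
    """One round of play: a single hinge-loss SGD pass over the batch."""
    for xi, yi in zip(X, y):
        w = (1 - lr * lam) * w           # regularization shrinkage
        if yi * (w @ xi) < 1:            # margin violated: adjust strategy
            w = w + lr * yi * xi
    return w

w = np.zeros(2)
for t in range(40):                       # the concept drifts: boundary rotates
    X, y = make_batch(50, angle=0.05 * t)
    w = play_round(w, X, y)

# Accuracy on fresh data from the *final* concept stays high because the
# learner adapted round by round instead of being retrained from scratch.
X_test, y_test = make_batch(500, angle=0.05 * 39)
acc = float(np.mean(np.sign(X_test @ w) == y_test))
```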
This project evaluates the effectiveness of moving target defense (MTD) techniques using a new game we have designed, called PLADD, inspired by the game FlipIt [28]. PLADD extends FlipIt by incorporating what we believe are key MTD concepts. We have analyzed PLADD and proven the existence of a defender strategy that pushes a rational attacker out of the game, demonstrated how limited the strategies available to an attacker are in PLADD, and derived analytic expressions for the expected utility of the game’s players in multiple game variants. We have created an algorithm for finding a defender’s optimal PLADD strategy. We show that in the special case of achieving deterrence in PLADD, MTD is not always cost effective and that its optimal deployment may shift abruptly from not using MTD at all to using it as aggressively as possible. We believe our effort provides basic, fundamental insights into the use of MTD, but conclude that a truly practical analysis requires model selection and calibration based on real scenarios and empirical data. We propose several avenues for further inquiry, including (1) agents with adaptive capabilities more reflective of real world adversaries, (2) the presence of multiple, heterogeneous adversaries, (3) computational game theory-based approaches such as coevolution to allow scaling to the real world beyond the limitations of analytical analysis and classical game theory, (4) mapping the game to real-world scenarios, (5) taking player risk into account when designing a strategy (in addition to expected payoff), (6) improving our understanding of the dynamic nature of MTD-inspired games by using a martingale representation, defensive forecasting, and techniques from signal processing, and (7) using adversarial games to develop inherently resilient cyber systems.
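FlipIt-style games of this kind are easy to explore numerically. The sketch below is a hypothetical Monte Carlo setup, not the PLADD model itself (the periodic defender, Poisson attacker, and all costs are assumptions); it illustrates the deterrence result mentioned above, in which a sufficiently aggressive defender makes the attacker's expected utility negative:

```python
import numpy as np

rng = np.random.default_rng(1)

def attacker_utility(delta, atk_rate, atk_cost, horizon=10_000.0):
    """Monte-Carlo attacker utility in a FlipIt-style takeover game.

    The defender retakes the resource periodically (every `delta`); the
    attacker moves at Poisson-random times. Whoever moved last holds the
    resource; attacker utility is its fraction of holding time minus a
    per-move cost. All parameter values here are illustrative.
    """
    d_moves = [(float(t), "D") for t in np.arange(0.0, horizon, delta)]
    n_atk = rng.poisson(atk_rate * horizon)
    a_moves = [(float(t), "A") for t in np.sort(rng.uniform(0.0, horizon, n_atk))]
    events = sorted(d_moves + a_moves)

    held = 0.0
    for (t0, who), (t1, _) in zip(events, events[1:]):
        if who == "A":
            held += t1 - t0
    if events and events[-1][1] == "A":
        held += horizon - events[-1][0]
    return held / horizon - atk_cost * n_atk / horizon

# A slow periodic defender leaves the attacker a profitable game ...
u_lazy = attacker_utility(delta=10.0, atk_rate=0.5, atk_cost=0.4)
# ... while an aggressive one drives expected attacker utility negative,
# pushing a rational attacker out of the game (deterrence).
u_aggressive = attacker_utility(delta=0.5, atk_rate=0.5, atk_cost=0.4)
```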
Abstract not provided.
Abstract not provided.
Abstract not provided.
PLoS ONE
Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. 
Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health outcomes such as morbidity and disability.
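A minimal sketch of the multi-state transition idea (hypothetical states and annual transition probabilities, not the paper's calibrated inputs; mortality and demographic structure are omitted for brevity):

```python
import numpy as np

# States: never-user, cigarette smoker, lower-risk product user, former user.
states = ["never", "cigarette", "lower_risk", "former"]

# Hypothetical annual transition probabilities (each row sums to 1),
# encoding initiation, switching, cessation, and relapse behaviors.
P = np.array([
    [0.97, 0.02, 0.01, 0.00],   # never ->
    [0.00, 0.90, 0.04, 0.06],   # cigarette -> (switching + cessation)
    [0.00, 0.01, 0.94, 0.05],   # lower_risk ->
    [0.00, 0.02, 0.01, 0.97],   # former -> (relapse)
])

x = np.array([0.70, 0.20, 0.00, 0.10])   # initial prevalence by state

trajectory = [x]
for year in range(20):         # project prevalence 20 years forward
    x = x @ P
    trajectory.append(x)

final = trajectory[-1]          # cigarette use falls; lower-risk use grows
```

Varying entries of `P` between scenarios is the analogue of the paper's sensitivity analyses over initiation, switching, and dual-use assumptions.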
Abstract not provided.
Abstract not provided.
Abstract not provided.
Prehospital and Disaster Medicine
Hospital evacuations that occur during, or as a result of, infrastructure outages are complicated and demanding. Loss of infrastructure services can initiate a chain of events with corresponding management challenges. This report describes a modeling case study of the 2001 evacuation of the Memorial Hermann Hospital in Houston, Texas (USA). The study uses a model designed to track such cascading events following loss of infrastructure services and to identify the staff, resources, and operational adaptations required to sustain patient care and/or conduct an evacuation. The model is based on the assumption that a hospital's primary mission is to provide necessary medical care to all of its patients, even when critical infrastructure services to the hospital and surrounding areas are disrupted. Model logic evaluates the hospital's ability to provide an adequate level of care for all of its patients throughout a period of disruption. If hospital resources are insufficient to provide such care, the model recommends an evacuation. Model features also provide information to support evacuation and resource allocation decisions for optimizing care over the entire population of patients. This report documents the application of the model to a scenario designed to resemble the 2001 evacuation of the Memorial Hermann Hospital, demonstrating the model's ability to recreate the timeline of an actual evacuation. The model is also applied to scenarios demonstrating how its output can inform evacuation planning activities and timing.
This report presents a mathematical model of the way in which a hospital uses a variety of resources, utilities, and consumables to provide care to a set of in-patients, and how that hospital might adapt to provide treatment to a few patients with a serious infectious disease, like the Ebola virus. The intended purpose of the model is to support requirements planning studies, so that hospitals may be better prepared for situations that are likely to strain their available resources. The current model is a prototype designed to present the basic structural elements of a requirements planning analysis. Some simple illustrative experiments establish the model's general capabilities. With additional investment in model enhancement and calibration, this prototype could be developed into a useful planning tool for hospital administrators and health care policy makers.
Procedia Computer Science
Despite technological advances making computing devices faster, smaller, and more prevalent, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires problem decompositions with less stringent data-passing constraints. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited to the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
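The partition/recombine pattern can be sketched in miniature as follows (illustrative only: the local sub-models are hinge-loss linear classifiers combined by averaging, not the SVM Game formulation itself):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)

# Toy data: the label is the sign of the first coordinate.
X = rng.normal(size=(1200, 2))
y = np.sign(X[:, 0])
y[y == 0] = 1.0

def map_train(shard):
    """Map step: fit a linear classifier on one shard, independently."""
    Xs, ys = shard
    w = np.zeros(Xs.shape[1])
    for xi, yi in zip(Xs, ys):
        if yi * (w @ xi) < 1:        # hinge-loss SGD update
            w += 0.1 * yi * xi
    return w

def combine(w1, w2):
    """Reduce step: recombine independently computed sub-models."""
    return w1 + w2

shards = [(X[i::4], y[i::4]) for i in range(4)]          # partition
local = [map_train(s) for s in shards]                   # map phase
w = reduce(combine, local) / len(local)                  # reduce phase
acc = float(np.mean(np.sign(X @ w) == y))
```

Each shard is processed with no communication to the others, which is exactly the property that makes the decomposition MapReduce-friendly.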
Abstract not provided.
Abstract not provided.
Environment, Systems, & Decisions
Adaptation is believed to be a source of resilience in systems. It has been difficult to measure the contribution of adaptation to resilience, unlike other resilience mechanisms such as restoration and recovery. One difficulty comes from treating adaptation as a deus ex machina that is interjected after a disruption. This provides no basis for bounding possible adaptive responses. We can bracket the possible effects of adaptation when we recognize that it occurs continuously, and is in part responsible for the current system’s properties. In this way the dynamics of the system’s pre-disruption structure provides information about post-disruption adaptive reaction. Seen as an ongoing process, adaptation has been argued to produce “robust-yet-fragile” systems. Such systems perform well under historical stresses but become committed to specific features of those stresses in a way that makes them vulnerable to system-level collapse when those features change. In effect adaptation lessens the cost of disruptions within a certain historical range, at the expense of increased cost from disruptions outside that range. Historical adaptive responses leave a signature in the structure of the system. Studies of ecological networks have suggested structural metrics that pick out systemic resilience in the underlying ecosystems. If these metrics are generally reliable indicators of resilience, they provide another strategy for gauging adaptive resilience. To progress in understanding how the process of adaptation and the property of resilience interrelate in infrastructure systems, we pose some specific questions: Does adaptation confer resilience? Does it confer resilience to novel shocks as well, or does it tune the system to fragility? Can structural features predict resilience to novel shocks? Are there policies or constraints on the adaptive process that improve resilience?
Abstract not provided.
Abstract not provided.
Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in Security Informatics.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report describes the laboratory directed research and development work to model relevant areas of the brain that associate multi-modal information for long-term storage for the purpose of creating a more effective, and more automated, association mechanism to support rapid decision making. Using the biology and functionality of the hippocampus as an analogy or inspiration, we have developed an artificial neural network architecture to associate k-tuples (paired associates) of multimodal input records. The architecture is composed of coupled unimodal self-organizing neural modules that learn generalizations of unimodal components of the input record. Cross modal associations, stored as a higher-order tensor, are learned incrementally as these generalizations form. Graph algorithms are then applied to the tensor to extract multi-modal association networks formed during learning. Doing so yields a novel approach to data mining for knowledge discovery. This report describes the neurobiological inspiration, architecture, and operational characteristics of our model, and also provides a real world terrorist network example to illustrate the model's functionality.
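A toy sketch of the tensor-and-graph idea described above (the records are hypothetical, and raw co-occurrence counts stand in for the learned unimodal generalizations):

```python
import numpy as np

# Hypothetical multimodal input records: (person, location) paired associates.
people = ["alice", "bob", "carol"]
places = ["airport", "bank", "cafe"]
records = [("alice", "airport"), ("alice", "bank"),
           ("bob", "bank"), ("bob", "bank"),
           ("carol", "cafe")]

# Cross-modal associations accumulated incrementally in an order-2 tensor
# (one mode per modality; higher-order tensors handle k-tuples, k > 2).
T = np.zeros((len(people), len(places)))
for person, place in records:
    T[people.index(person), places.index(place)] += 1.0

# Extract the association network: an edge wherever the stored strength
# clears a threshold (a stand-in for the report's graph-algorithm step).
edges = [(people[i], places[j])
         for i in range(len(people)) for j in range(len(places))
         if T[i, j] >= 1.0]
```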
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The neocortex is perhaps the highest region of the human brain, where audio and visual perception takes place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software which runs on parallel computing hardware.
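A single such unit can be written as a first-order difference equation. The sketch below (illustrative parameters; a logistic activation is assumed) shows a leaky-integrator perceptron-like unit relaxing to its fixed point:

```python
import numpy as np

def f(u):
    """Saturating activation (logistic), standing in for a firing-rate curve."""
    return 1.0 / (1.0 + np.exp(-u))

# One enhanced perceptron-like unit updated by a difference equation:
#   a[t+1] = (1 - lam) * a[t] + lam * f(w @ x + b)
# where lam sets the unit's integration time constant.
w = np.array([0.8, -0.5, 0.3])   # illustrative synaptic weights
b = 0.1
x = np.array([1.0, 0.5, -1.0])   # a fixed input pattern
lam = 0.2

a = 0.0
trace = [a]
for t in range(60):
    a = (1 - lam) * a + lam * f(w @ x + b)
    trace.append(a)

steady = f(w @ x + b)   # the fixed point of the difference equation
```

Coupling many such units, with the output of one feeding the input of another, gives the larger structures the report organizes according to physiological observations.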
Behavioral and Brain Sciences
We propose an analogy between optical holography and neural behavior as a hypothesis about the physical mechanisms of neural reuse. Specifically, parameters in optical holography (frequency, amplitude, and phase of the reference beam) may provide useful analogues for understanding the role of different parameters in determining the behavior of neurons (e.g., frequency, amplitude, and phase of spiking behavior). © 2010 Cambridge University Press.
Abstract not provided.
Behavioral and Brain Sciences
Abstract not provided.
Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process is the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.
Abstract not provided.
2008 IEEE Symposium on Computational Intelligence and Games, CIG 2008
Ground Truth, a training game developed by Sandia National Laboratories in partnership with the University of Southern California GamePipe Lab, puts a player in the role of an Incident Commander working with teammate agents to respond to urban threats. These agents simulate certain emotions that a responder may feel during this high-stress situation. We construct psychologically plausible models compliant with the Sandia Human Embodiment and Representation Cognitive Architecture (SHERCA) that are run on the Sandia Cognitive Runtime Engine with Active Memory (SCREAM) software. SCREAM's computational representations for modeling human decision-making combine aspects of ANNs and fuzzy logic networks. This paper gives an overview of Ground Truth and discusses the adaptation of the SHERCA and SCREAM into the game. We include a semiformal description of SCREAM. ©2008 IEEE.
Proposed for publication in Computational Linguistics.
Abstract not provided.
This report describes the Licensing Support Network (LSN) Assistant--a set of tools for categorizing e-mail messages and documents, and investigating and correcting existing archives of categorized e-mail messages and documents. The two main tools in the LSN Assistant are the LSN Archive Assistant (LSNAA) tool for recategorizing manually labeled e-mail messages and documents and the LSN Realtime Assistant (LSNRA) tool for categorizing new e-mail messages and documents. This report focuses on the LSNAA tool. There are two main components of the LSNAA tool. The first is the Sandia Categorization Framework, which is responsible for providing categorizations for documents in an archive and storing them in an appropriate Categorization Database. The second is the actual user interface, which primarily interacts with the Categorization Database, providing a way to find and correct categorization errors in the database. A procedure for applying the LSNAA tool and an example use case of the LSNAA tool applied to a set of e-mail messages are provided. Performance results of the categorization model designed for this example use case are presented.
This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the ''Simulating Human Behavior for National Security Human Interactions'' project was to demonstrate initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant progress was made toward simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humans were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made toward enhancing Sandia National Laboratories' existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra® simulation framework.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.