As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software, which we used as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments in real time. Future work on the camera system may include three-dimensional volumetric tracking using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation that includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the sought result is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, in which things or people are organized or self-organize to produce a solution. Entity-based problems lack an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. That said, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
The global market for wireless sensor networks in 2010 will be valued close to $10 B, or 200 M units. TPL, Inc. is a small Albuquerque-based business that has positioned itself to be a leader in providing uninterruptible power supplies in this growing market, with projected revenues expected to exceed $26 M in 5 years. This project focused on improving TPL, Inc.'s patent-pending EnerPak™ device, which converts small amounts of energy from the environment (e.g., vibrations, light, or temperature differences) into electrical energy that can be used to charge small energy storage devices. A critical component of the EnerPak™ is the supercapacitor that handles high power delivery for wireless communications; however, optimization and miniaturization of this critical component is required. This proposal aimed to produce prototype microsupercapacitors through the integration of novel materials and fabrication processes developed at New Mexico Technology Research Collaborative (NMTRC) member institutions. In particular, we focused on developing novel ruthenium oxide nanomaterials and incorporating them into carbon supports to significantly increase the energy density of the supercapacitor. These improvements were expected to reduce maintenance costs and expand the utility of TPL, Inc.'s device, enabling New Mexico to become the leader in the growing global wireless power supply market. By dominating this niche, new customers were expected to be attracted to TPL, Inc., yielding new technical opportunities and increased job opportunities for New Mexico.
The Hypervelocity Impact Society is devoted to the advancement of the science and technology of hypervelocity impact and the related technical areas required to facilitate and understand hypervelocity impact phenomena. Topics of interest include experimental methods, theoretical techniques, analytical studies, phenomenological studies, dynamic material response as related to material properties (e.g., equation of state), penetration mechanics, dynamic failure of materials, planetary physics, and other related phenomena. The objectives of the Society are to foster the development and exchange of technical information in the discipline of hypervelocity impact phenomena, promote technical excellence, encourage peer-reviewed publications, and hold technical symposia on a regular basis. It was sometime in 1985, partly in response to the Strategic Defense Initiative (SDI), that a small group of visionaries decided that a conference or symposium on hypervelocity science would be useful and began the necessary planning. A major objective of the first Symposium was to bring scientists and researchers up to date by reviewing the essential developments of hypervelocity science and technology between 1955 and 1985. This symposium, HVIS 2007, is the tenth since that beginning. The papers presented at every HVIS are peer reviewed and published as a special volume of the archival journal International Journal of Impact Engineering. HVIS 2007 followed the same high standards, and its proceedings will add to this body of work.
Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, the dynamics inherent in these methods present significant challenges in data partitioning and load balancing. Significant human resources, including time, effort, experience, and knowledge, are required for determining the optimal partitioning technique for each new simulation. In reality, scientists resort to using the on-board partitioner of the computational framework, or to using the partitioning industry standard, ParMetis. Adaptive partitioning refers to repeatedly selecting, configuring and invoking the optimal partitioning technique at run-time, based on the current state of the computer and application. In theory, adaptive partitioning automatically delivers superior performance and eliminates the need for repeatedly spending valuable human resources for determining the optimal static partitioning technique. In practice, however, enabling frameworks are non-existent due to the inherent significant inter-disciplinary research challenges. This paper presents a study of a simple implementation of adaptive partitioning and discusses implied potential benefits from the perspective of common groups of users within computational science. The study is based on a large set of data derived from experiments including six real-life, multi-time-step adaptive applications from various scientific domains, five complementing and fundamentally different partitioning techniques, a large set of parameters corresponding to a wide spectrum of computing environments, and a flexible cost function that considers the relative impact of multiple partitioning metrics and diverse partitioning objectives. The results show that even a simple implementation of adaptive partitioning can automatically generate results statistically equivalent to the best static partitioning. 
Thus, it is possible to effectively eliminate the problem of determining the best partitioning technique for new simulations. Moreover, the results show that adaptive partitioning can provide a performance gain of about 10 percent on average compared to routinely using the industry standard, ParMetis.
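The run-time selection described above can be sketched as minimizing a weighted cost function over candidate partitioners. This is a minimal illustrative sketch: the partitioner names, the three metrics (edge cut, load imbalance, data migration volume), and the weights are assumptions for illustration, not values from the study.

```python
# Hypothetical sketch of run-time partitioner selection via a cost function.
# Metric values would in practice come from measurements or models of the
# current state of the computer and application; here they are invented.

def cost(metrics, weights):
    """Weighted sum of normalized partitioning metrics (lower is better)."""
    return sum(weights[k] * metrics[k] for k in weights)

def select_partitioner(candidates, weights):
    """Pick the candidate whose predicted metrics minimize the cost."""
    return min(candidates, key=lambda c: cost(c["metrics"], weights))

candidates = [
    {"name": "graph-based",   "metrics": {"edge_cut": 0.20, "imbalance": 0.05, "migration": 0.30}},
    {"name": "geometric",     "metrics": {"edge_cut": 0.35, "imbalance": 0.02, "migration": 0.10}},
    {"name": "space-filling", "metrics": {"edge_cut": 0.30, "imbalance": 0.03, "migration": 0.05}},
]

# A multi-time-step adaptive application that repartitions often would
# weight migration cost heavily, shifting the choice of technique.
weights = {"edge_cut": 0.3, "imbalance": 0.2, "migration": 0.5}
best = select_partitioner(candidates, weights)
```

With a different weighting (say, edge cut dominant for a communication-bound solver), the same mechanism would select a different technique, which is the sense in which the adaptive approach adjusts to the current state of the computer and application.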
This paper describes the concept for augmenting the SEGIS Program (an industry-led effort to greatly enhance the utility of distributed PV systems) with energy storage in residential and small commercial applications (SEGIS-ES). The goal of SEGIS-ES is to develop electrical energy storage components and systems specifically designed and optimized for grid-tied PV applications. This report describes the scope of the proposed SEGIS-ES Program and why it will be necessary to integrate energy storage with PV systems as PV-generated energy becomes more prevalent on the nation's utility grid. It also discusses the applications for which energy storage is most suited and for which it will provide the greatest economic and operational benefits to customers and utilities. Included is a detailed summary of the various storage technologies available, comparisons of their relative costs and development status, and a summary of key R&D needs for PV-storage systems. The report concludes with highlights of areas where further PV-specific R&D is needed and offers recommendations about how to proceed with their development.
The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.
Tsao, Jeffrey Y.; Huey, Mark C.; Boyack, Kevin W.; Miksovic, Ann E.
We present an analysis of the literature of solid-state lighting, based on a comprehensive dataset of 35,851 English-language articles and 12,420 U.S. patents published or issued during the years 1977-2004 in the foundational knowledge domain of electroluminescent materials and phenomena. The dataset was created using a complex, iteratively developed search string. The records in the dataset were then partitioned according to: whether they are articles or patents, their publication or issue date, their national or continental origin, whether the active electroluminescent material was inorganic or organic, and which of a number of emergent knowledge sub-domains they aggregate into on the basis of bibliographic coupling. From these partitionings, we performed a number of analyses, including: identification of knowledge sub-domains of historical and recent importance, and trends over time of the contributions of various nations and continents to the knowledge domain and its sub-domains. Among the key results: (1) The knowledge domain as a whole has been growing quickly: the average growth rates of the inorganic and organic knowledge sub-domains have been 8%/yr and 25%/yr, respectively, compared to average growth rates less than 5%/yr for English-language articles and U.S. patents in other knowledge domains. The growth rate of the organic knowledge sub-domain is so high that its historical dominance by the inorganic knowledge sub-domain will, at current trajectories, be reversed in the coming decade. (2) Amongst nations, the U.S. is the largest contributor to the overall knowledge domain, but Japan is on a trajectory to become the largest contributor within the coming half-decade. Amongst continents, Asia became the largest contributor during the past half-decade, overwhelmingly so for the organic knowledge sub-domain. 
(3) The relative contributions to the article and patent datasets differ for the major continents: North America contributing relatively more patents, Europe contributing relatively more articles, and Asia contributing in a more balanced fashion. (4) For the article dataset, the nations that contribute most in quantity also contribute most in breadth, while the nations that contribute less in quantity concentrate their contributions in particular knowledge sub-domains. For the patent dataset, North America and Europe tend to contribute improvements in end-use applications (e.g., in sensing, phototherapy and communications), while Asia tends to contribute improvements at the materials and chip levels. (5) The knowledge sub-domains that emerge from aggregations based on bibliographic coupling are roughly organized, for articles, by the degree of localization of electrons and holes in the material or phenomenon of interest, and for patents, according to both their emphasis on chips, systems or applications, and their emphasis on organic or inorganic materials. (6) The six 'hottest' topics in the article dataset are: spintronics, AlGaN UV LEDs, nanowires, nanophosphors, polyfluorenes and electrophosphorescence. The nine 'hottest' topics in the patent dataset are: OLED encapsulation, active-matrix displays, multicolor OLEDs, thermal transfer for OLED fabrication, ink-jet printed OLEDs, phosphor-converted LEDs, ornamental LED packages, photocuring and phototherapy, and LED retrofitting lamps. A significant caution in interpreting these results is that they are based on English-language articles and U.S. patents, and hence will tend to over-represent the strength of English-speaking nations (particularly the U.S.), and under-represent the strength of non-English-speaking nations (particularly China).
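The claim in (1) that the organic sub-domain will overtake the inorganic one "in the coming decade" follows from simple compound-growth arithmetic. The 8%/yr and 25%/yr rates come from the text; the assumed 3:1 initial size ratio is illustrative only, chosen to show the shape of the calculation.

```python
import math

# Back-of-the-envelope crossover estimate from the reported growth rates.
r_inorganic = 0.08   # 8%/yr average growth (from the text)
r_organic = 0.25     # 25%/yr average growth (from the text)
size_ratio = 3.0     # assumed initial inorganic/organic size ratio (illustrative)

# Organic output overtakes inorganic when
#   (1 + r_organic)^t / (1 + r_inorganic)^t == size_ratio,
# so t = ln(size_ratio) / ln((1 + r_organic) / (1 + r_inorganic)).
years_to_crossover = math.log(size_ratio) / math.log(
    (1 + r_organic) / (1 + r_inorganic)
)
```

Under these assumptions the crossover falls at roughly seven to eight years, consistent with the "coming decade" projection; a larger initial ratio would push it out proportionally to its logarithm.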
A loose two-way coupling of SNL's Presto v2.8 and CTH v8.1 analysis codes has been developed to support the analysis of explosive loading of structures. Presto is a Lagrangian, three-dimensional, explicit transient dynamics code in the SIERRA mechanics suite for the analysis of structures subjected to impact-like loads. CTH is a hydrocode for modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A fundamental assumption in this loose coupling is that the compliance of the structure modeled with Presto is significantly smaller than the compliance of the surrounding medium (e.g., air) modeled with CTH. A current limitation of the coupled code is that the interaction between CTH and thin structures modeled in Presto (e.g., shells) is not supported. Research is in progress to relax this thin-structure limitation.
This report documents a demonstration model of interacting insurgent leadership, military leadership, government leadership, and societal dynamics under a variety of interventions. The primary focus of the work is the portrayal of a token societal model that responds to leadership activities. The model also includes a linkage between leadership and society that implicitly represents the leadership subordinates as they directly interact with the population. The societal model is meant to demonstrate the efficacy and viability of using System Dynamics (SD) methods to simulate populations, and to show that these can then connect to cognitive models depicting individuals. SD models typically focus on average behavior and thus have limited applicability to describing small groups or individuals. On the other hand, cognitive models readily describe individual behavior but can become cumbersome when used to describe populations. Realistic security situations are invariably a mix of individual and population dynamics. Therefore, the ability to tie SD models to cognitive models provides a critical capability that would otherwise be unavailable.
Hexanitrostilbene (HNS) is a widely used explosive, due in part to its high thermal stability. Degradation of HNS is known to occur through UV, chemical, and heat exposure, which can lead to reduced performance of the material. Common methods of testing for HNS degradation include wet chemical and surface area testing of the material itself, and performance testing of devices that use HNS. The commonly used chemical tests, such as volatility, conductivity, and contaminant trapping, provide information on contaminants rather than the chemical stability of the HNS itself. Additionally, these tests are destructive in nature. As an alternative to these methods, we have been exploring the use of vibrational spectroscopy as a means of monitoring HNS degradation non-destructively. In particular, infrared (IR) spectroscopy lends itself well to non-destructive analysis. Molecular variations in the material can be identified and compared to pure samples. The utility of IR spectroscopy was evaluated using pressed pellets of HNS exposed to DETA (diethylenetriamine). Amines are known to degrade HNS, with the proposed product being a σ-adduct. We have followed these changes as a function of time using various IR sampling techniques, including photoacoustic and attenuated total reflectance (ATR).
This report provides an overview of the current state of wind turbine control and introduces a number of active techniques that could potentially be used for control of wind turbine blades. The focus is on research regarding active flow control (AFC) as it applies to wind turbine performance and loads. The techniques and concepts described here are often referred to as 'smart structures' or 'smart rotor control'. This field is rapidly growing, and there are numerous concepts currently being investigated around the world; some concepts are already focused on the wind energy industry, while others are intended for use in other fields but have the potential for wind turbine control. An AFC system can be broken into three categories: controls and sensors, actuators and devices, and the flow phenomena. This report focuses on the research involved with the actuators and devices and the flow phenomena generated by each device.