The prospect of a nuclear renaissance raises concerns about the effective design of nuclear fuel cycle systems that are safe, secure, nonproliferating, and cost-effective. We propose to integrate the monitoring of the four major factors of nuclear facility performance by focusing on the interactions between Safeguards, Operations, Security, and Safety (SOSS). Specifically, we propose to develop a framework that monitors process information continuously, measures and reduces the relevant SOSS risks, and thereby helps ensure the safe and legitimate use of a nuclear fuel cycle facility. A real-time comparison between expected and observed operations provides the foundation for the calculation of SOSS risk. The high degree of automation in new nuclear facilities, which require minimal manual operation, provides an opportunity to use the abundance of process information for monitoring SOSS risk, and continuous monitoring of this information can also lead to greater transparency of nuclear fuel cycle activities. Sandia National Laboratories (SNL) has developed a risk algorithm for safeguards and is demonstrating the ability to monitor operational signals in real time through a cooperative research project with the Japan Atomic Energy Agency (JAEA). The risk algorithms for safety, operations, and security are under development. The next stage of this work will be to integrate the four algorithms into a single framework.
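As an illustrative sketch only, and not SNL's actual risk algorithm, the comparison of expected and observed process signals can be reduced to a weighted, normalized deviation score; the signal names, weights, and threshold below are hypothetical placeholders.

```python
import numpy as np

def soss_risk_indicator(expected, observed, weights, threshold=3.0):
    """Illustrative deviation-based risk score (hypothetical, not SNL's algorithm).

    expected: dict mapping signal name -> (expected mean, expected std dev)
    observed: dict mapping signal name -> current measured value
    weights:  dict mapping signal name -> relative importance
    Returns a score in [0, 1]; values near 1 indicate large deviations.
    """
    score = 0.0
    total_weight = sum(weights.values())
    for name, (mean, std) in expected.items():
        z = abs(observed[name] - mean) / max(std, 1e-9)   # standardized deviation
        score += weights[name] * min(z / threshold, 1.0)  # saturate beyond threshold
    return score / total_weight

# Hypothetical example: tank level and flow rate during a process step
expected = {"tank_level_m3": (12.0, 0.2), "flow_rate_lpm": (40.0, 1.5)}
observed = {"tank_level_m3": 12.1, "flow_rate_lpm": 46.0}
weights = {"tank_level_m3": 0.6, "flow_rate_lpm": 0.4}
print(soss_risk_indicator(expected, observed, weights))
```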
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th- and 95th-percentile circles and ellipses to convey horizontal error, and linear intervals to give vertical error. We consider both the case in which an assumed altitude is used and the case in which it is not. We also analyze velocity errors, deriving confidence intervals for horizontal velocity magnitude and direction, including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm for combining multiple position fixes to reduce location error. Once more than one location estimate is available for an emitter, the algorithm uses all available data in a mathematically optimal way.
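The paper's combination algorithm is not reproduced here; as a hedged illustration of the general idea of fusing several fixes optimally, the sketch below combines independent Gaussian position estimates by inverse-covariance (information) weighting. The coordinates and covariances are hypothetical.

```python
import numpy as np

def fuse_position_fixes(fixes):
    """Combine independent position estimates given as (x, P) pairs, where x is a
    position vector and P its error covariance, using inverse-covariance weighting.
    Illustrative only; not the paper's exact algorithm."""
    dim = len(fixes[0][0])
    info = np.zeros((dim, dim))
    info_state = np.zeros(dim)
    for x, P in fixes:
        P_inv = np.linalg.inv(P)
        info += P_inv              # accumulate information matrices
        info_state += P_inv @ x    # accumulate information-weighted states
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ info_state
    return x_fused, P_fused

# Two hypothetical horizontal fixes (km) with different error ellipses
fix1 = (np.array([10.2, 4.1]), np.diag([4.0, 1.0]))
fix2 = (np.array([10.6, 3.8]), np.diag([1.0, 4.0]))
print(fuse_position_fixes([fix1, fix2]))
```

The fused covariance is never larger than that of any single fix, which is why accumulating all available fixes reduces location error.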
As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial off-the-shelf (COTS) tracking system developed by TYZX, a Bay Area start-up with its own tracking hardware and software, was examined as a COTS option for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image-processing solutions show the most promise in their ability to track multiple targets in complex environments in real time. Future work on the camera system may include three-dimensional volumetric tracking using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
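The report's proposed data fusion method is not reproduced here; as a minimal sketch of the kind of Kalman filtering mentioned as future work, the snippet below tracks a single target's 2-D image position with a constant-velocity model. All matrices, noise values, and measurements are hypothetical.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one 2-D target (illustrative only).
dt = 1.0 / 30.0                                   # hypothetical 30 fps camera
F = np.array([[1, 0, dt, 0],                      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],                       # cameras measure position only
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)                              # hypothetical process noise
R = 4.0 * np.eye(2)                               # hypothetical measurement noise

x = np.zeros(4)                                   # initial state
P = 100.0 * np.eye(4)                             # large initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a pixel measurement z = [u, v]."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([120.0, 80.0]), np.array([122.5, 81.0])]:
    x, P = kalman_step(x, P, z)
print(x[:2])   # smoothed position estimate
```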
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation that includes both discrete-event simulation and agent-based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, in which things or people are organized, or self-organize, to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify the calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
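As a hedged illustration of the decomposition problem described above (not the Parallel Particle Data Model itself), the sketch below assigns entities to processors by hashing entity IDs, the simplest strategy available when no spatial locality can be exploited; the entity and rank counts are hypothetical.

```python
from collections import defaultdict

def assign_entities_to_ranks(entity_ids, num_ranks):
    """Hash-based decomposition of entities onto processor ranks.
    Illustrative only; the Parallel Particle Data Model's actual
    strategy is not reproduced here."""
    partitions = defaultdict(list)
    for eid in entity_ids:
        partitions[hash(eid) % num_ranks].append(eid)
    return partitions

# Hypothetical example: one million agents spread over 64 ranks
parts = assign_entities_to_ranks(range(1_000_000), 64)
sizes = [len(v) for v in parts.values()]
print(min(sizes), max(sizes))   # balanced load, but communication locality is ignored
```

Hashing balances the entity count per processor, but it says nothing about which entities interact; the harder part of the problem is keeping inter-processor communication manageable without a spatial organizing principle.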
Artificial lighting for general illumination purposes accounts for over 8% of global primary energy consumption. However, the traditional lighting technologies in use today, i.e., incandescent, fluorescent, and high-intensity discharge lamps, are not very efficient, with less than about 25% of the input power being converted to useful light. Solid-state lighting is a rapidly evolving, emerging technology whose efficiency of conversion of electricity to visible white light is likely to approach 50% in the coming years. This efficiency is significantly higher than that of traditional lighting technologies, with the potential to enable a marked reduction in the rate of world energy consumption. There is no fundamental physical reason why efficiencies well beyond 50% could not be achieved, which could enable even greater world energy savings. The maximum achievable luminous efficacy for a solid-state lighting source depends on many physical parameters, for example the color rendering quality that is required, the architecture employed to produce the component light colors that are mixed to produce white, and the efficiency of the light sources producing each color component. In this article, we discuss in some detail several approaches to solid-state lighting and the maximum luminous efficacy that could be attained given various constraints such as those listed above.
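As a hedged numerical illustration (the component efficacies and power fractions below are hypothetical placeholders, not data from the article), the luminous efficacy of a color-mixed white source can be estimated from the efficacy of each color component and the fraction of electrical power it receives.

```python
# Illustrative efficacy estimate for a color-mixed white source.
# All component efficacies (lm per electrical W) and power fractions
# are hypothetical placeholders, not values from the article.
components = {
    "red":   {"efficacy_lm_per_W": 150.0, "power_fraction": 0.25},
    "green": {"efficacy_lm_per_W": 200.0, "power_fraction": 0.40},
    "blue":  {"efficacy_lm_per_W": 60.0,  "power_fraction": 0.35},
}

# Lumens per watt of total electrical input is the power-fraction-weighted
# sum of component efficacies (the fractions must sum to 1).
white_efficacy = sum(c["efficacy_lm_per_W"] * c["power_fraction"]
                     for c in components.values())
print(f"mixed-white luminous efficacy: {white_efficacy:.0f} lm/W")
```

With these placeholder numbers the mixture yields about 139 lm/W, which shows how the least efficient color component and its required share of the white spectrum limit the overall efficacy.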
The global market for wireless sensor networks in 2010 will be valued at close to $10 B, or 200 M units. TPL, Inc. is a small Albuquerque-based business that has positioned itself to be a leader in providing uninterruptible power supplies in this growing market, with projected revenues expected to exceed $26 M in 5 years. This project focused on improving TPL, Inc.'s patent-pending EnerPak™ device, which converts small amounts of energy from the environment (e.g., vibrations, light, or temperature differences) into electrical energy that can be used to charge small energy storage devices. A critical component of the EnerPak™ is the supercapacitor that handles high power delivery for wireless communications; however, optimization and miniaturization of this critical component are required. This proposal aimed to produce prototype microsupercapacitors through the integration of novel materials and fabrication processes developed at New Mexico Technology Research Collaborative (NMTRC) member institutions. In particular, we focused on developing novel ruthenium oxide nanomaterials and incorporating them into carbon supports to significantly increase the energy density of the supercapacitor. These improvements were expected to reduce maintenance costs and expand the utility of TPL, Inc.'s device, enabling New Mexico to become a leader in the growing global wireless power supply market. By dominating this niche, TPL, Inc. was expected to attract new customers, yielding new technical opportunities and increased job opportunities for New Mexico.
The Hypervelocity Impact Society is devoted to the advancement of the science and technology of hypervelocity impact and the related technical areas required to facilitate and understand hypervelocity impact phenomena. Topics of interest include experimental methods, theoretical techniques, analytical studies, phenomenological studies, dynamic material response as related to material properties (e.g., equation of state), penetration mechanics, dynamic failure of materials, planetary physics, and other related phenomena. The objectives of the Society are to foster the development and exchange of technical information in the discipline of hypervelocity impact phenomena, promote technical excellence, encourage peer-reviewed publications, and hold technical symposia on a regular basis. It was sometime in 1985, partly in response to the Strategic Defense Initiative (SDI), that a small group of visionaries decided that a conference or symposium on hypervelocity science would be useful and began the necessary planning. A major objective of the first Symposium was to bring scientists and researchers up to date by reviewing the essential developments of hypervelocity science and technology between 1955 and 1985. HVIS 2007 is the tenth Symposium since that beginning. The papers presented at all HVIS meetings are peer reviewed and published as a special volume of the archival journal International Journal of Impact Engineering. HVIS 2007 followed the same high standards, and its proceedings will add to this body of work.
Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, the dynamics inherent in these methods present significant challenges in data partitioning and load balancing. Significant human resources, including time, effort, experience, and knowledge, are required to determine the optimal partitioning technique for each new simulation. In reality, scientists resort to using the on-board partitioner of the computational framework, or to using the partitioning industry standard, ParMetis. Adaptive partitioning refers to repeatedly selecting, configuring, and invoking the optimal partitioning technique at run time, based on the current state of the computer and application. In theory, adaptive partitioning automatically delivers superior performance and eliminates the need to repeatedly spend valuable human resources determining the optimal static partitioning technique. In practice, however, enabling frameworks are non-existent because of the significant inherent interdisciplinary research challenges. This paper presents a study of a simple implementation of adaptive partitioning and discusses the implied potential benefits from the perspective of common groups of users within computational science. The study is based on a large set of data derived from experiments including six real-life, multi-time-step adaptive applications from various scientific domains, five complementary and fundamentally different partitioning techniques, a large set of parameters corresponding to a wide spectrum of computing environments, and a flexible cost function that considers the relative impact of multiple partitioning metrics and diverse partitioning objectives. The results show that even a simple implementation of adaptive partitioning can automatically generate results statistically equivalent to the best static partitioning. Thus, it is possible to effectively eliminate the problem of determining the best partitioning technique for new simulations. Moreover, the results show that adaptive partitioning can provide a performance gain of about 10 percent on average compared to routinely using the industry standard, ParMetis.
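As a hedged sketch of the adaptive-selection idea (the study's actual cost function and partitioner interfaces are not reproduced; the metric names, weights, and candidate techniques below are hypothetical), a run-time chooser can score each candidate technique against weighted partitioning metrics and invoke the one with the lowest cost.

```python
def score(metrics, weights):
    """Weighted cost over normalized partitioning metrics (lower is better).
    Metric names and weights are hypothetical placeholders."""
    return sum(weights[name] * value for name, value in metrics.items())

def choose_partitioner(candidates, weights):
    """Pick the candidate technique with the lowest weighted cost at run time."""
    return min(candidates, key=lambda name: score(candidates[name], weights))

# Hypothetical normalized metrics, e.g., gathered from trial runs or models:
candidates = {
    "geometric_rcb":       {"edge_cut": 0.80, "imbalance": 0.20, "migration": 0.10},
    "graph_multilevel":    {"edge_cut": 0.30, "imbalance": 0.30, "migration": 0.60},
    "space_filling_curve": {"edge_cut": 0.60, "imbalance": 0.25, "migration": 0.20},
}
weights = {"edge_cut": 0.5, "imbalance": 0.3, "migration": 0.2}
print(choose_partitioner(candidates, weights))   # -> "graph_multilevel" here
```

Re-evaluating the weights at each adaptation step is what lets the chooser respond to the current state of the computer and application, e.g., favoring low data migration when repartitioning is frequent.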
This report describes the concept of augmenting the SEGIS Program (an industry-led effort to greatly enhance the utility of distributed PV systems) with energy storage in residential and small commercial applications (SEGIS-ES). The goal of SEGIS-ES is to develop electrical energy storage components and systems specifically designed and optimized for grid-tied PV applications. The report describes the scope of the proposed SEGIS-ES Program and why it will be necessary to integrate energy storage with PV systems as PV-generated energy becomes more prevalent on the nation's utility grid. It also discusses the applications for which energy storage is best suited and for which it will provide the greatest economic and operational benefits to customers and utilities. Included is a detailed summary of the various storage technologies available, comparisons of their relative costs and development status, and a summary of key R&D needs for PV-storage systems. The report concludes with highlights of areas where further PV-specific R&D is needed and offers recommendations on how to proceed with their development.