Vehicle Classification in Infrared Video using the Sequential Probability Ratio Test
Abstract not provided.
This document describes new advances in hybrid reachability techniques accomplished during the course of a one-year Truman Postdoctoral Fellowship. These techniques provide guarantees of safety in complex systems, which is especially important in high-risk, expensive, or safety-critical systems. My work focused on new approaches to two specific problems motivated by real-world issues in complex systems: (1) multi-objective controller synthesis, and (2) control for recovery from error. Regarding the first problem, a novel application of reachability analysis allowed controller synthesis in a single step to achieve (a) safety, (b) stability, and (c) prevention of input saturation. By extending the state to include the input parameters, constraints for stability, saturation, and envelope protection are incorporated into a single reachability analysis. Regarding the second problem, a new approach to the problem of recovery provides (a) states from which recovery is possible, and (b) controllers to guide the system during a recovery maneuver from an error state to a safe state in minimal time. For both problems, results are computed on nonlinear models of single-aircraft longitudinal dynamics and two-aircraft lateral collision-avoidance dynamics.
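The backward-reachability computation underlying these results can be illustrated with a deliberately simple sketch. The following is not the report's aircraft model or toolchain; it assumes a hypothetical 1-D discrete-time system x[k+1] = x[k] + u[k] with bounded input, for which the N-step backward reachable set of an interval target can be computed with interval arithmetic.

```python
def backward_reach(target, u_max, n_steps):
    """N-step backward reachable set of an interval target for the
    hypothetical 1-D system x[k+1] = x[k] + u[k], |u[k]| <= u_max.
    Each backward step enlarges the interval by u_max on both sides,
    since an admissible input can close any gap of up to u_max per step."""
    lo, hi = target
    for _ in range(n_steps):
        lo -= u_max
        hi += u_max
    return lo, hi

# States within 5 * 0.1 of the target interval can be driven into it.
lo, hi = backward_reach(target=(0.0, 0.2), u_max=0.1, n_steps=5)
```

Practical reachability tools perform this kind of propagation on grids (e.g., level-set methods) for nonlinear dynamics; the augmented-state idea in the abstract simply adds the input parameters as extra state dimensions of the same computation.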
Proposed for publication in the American Journal of Health Promotion.
Abstract not provided.
Proposed for publication in Science.
Abstract not provided.
Proposed for publication in Physics of Plasmas.
The growth of the flute-type instability for a field-aligned plasma column immersed in a uniform magnetic field is studied. Particle-in-cell simulations are compared with a semi-analytic dispersion analysis of the drift cyclotron instability in cylindrical geometry with a Gaussian density profile in the radial direction. For the parameters considered here, the dispersion analysis gives a local maximum for the peak growth rates as a function of R/rᵢ, where R is the Gaussian characteristic radius and rᵢ is the ion gyroradius. The electrostatic and electromagnetic particle-in-cell simulation results give azimuthal and radial mode numbers that are in reasonable agreement with the dispersion analysis. The electrostatic simulations give linear growth rates that are in good agreement with the dispersion analysis results, while the electromagnetic simulations yield growth rate trends that are similar to the dispersion analysis but that are not in quantitative agreement. These differences are ascribed to higher initial field fluctuation levels in the electromagnetic field solver. Overall, the simulations allow the examination of both the linear and nonlinear evolution of the instability in this physical system up to and beyond the point of wave energy saturation. Keywords: Microinstabilities, Magnetic confinement and equilibrium, Particle-in-cell method.
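The cylindrical drift cyclotron dispersion relation analyzed in the paper is too involved to reproduce here, but the workflow of extracting a linear growth rate as the imaginary part of a complex root of a dispersion relation is generic. As an illustrative stand-in (not the paper's relation), the sketch below solves the textbook cold symmetric two-stream dispersion relation, whose quartic in ω, (ω² − k²v²)² − ωₚ²(ω² + k²v²) = 0, has a known maximum growth rate of ωₚ/(2√2) at kv = √(3/8)·ωₚ.

```python
import numpy as np

def two_stream_growth_rate(k_v, omega_p=1.0):
    """Linear growth rate Im(omega) for the cold symmetric two-stream
    instability, 1 = (wp^2/2)[1/(w - kv)^2 + 1/(w + kv)^2], rewritten
    as the quartic (w^2 - a^2)^2 - wp^2 (w^2 + a^2) = 0 with a = k*v."""
    a, wp = k_v, omega_p
    # Coefficients of w^4 - (2 a^2 + wp^2) w^2 + (a^4 - wp^2 a^2) = 0
    roots = np.roots([1.0, 0.0, -(2 * a**2 + wp**2), 0.0,
                      a**4 - wp**2 * a**2])
    return max(roots.imag.max(), 0.0)

# Fastest-growing wavenumber: k*v = sqrt(3/8) * wp, gamma = wp/(2*sqrt(2))
gamma_max = two_stream_growth_rate(np.sqrt(3.0 / 8.0))
```

The same extract-Im(ω) step applied to the cylindrical drift cyclotron relation yields the growth rates against which the PIC simulations are benchmarked.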
Transactions of the American Nuclear Society
Abstract not provided.
Transactions of the American Nuclear Society
Abstract not provided.
Although high-level nuclear wastes (HLW) contain a daunting array of radioisotopes, only a restricted number are long-lived enough to be problematic, and of these many are either effectively insoluble or are likely to be scavenged from solution by minerals indigenous to all aquifers. Those few constituents likely to travel significant distances through aquifers either form colloids (and travel as particulates) or anions--which are not sorbed onto the predominantly negatively charged mineral surfaces. Iodine (¹²⁹I) is one such constituent and may travel as either iodide (I⁻) or iodate (IO₃⁻) depending on whether conditions are mildly reducing or oxidizing. Conventionally, ⁹⁹Tc (traveling as TcO₄⁻) is regarded as being of greater concern since it is both more abundant and has a shorter half-life (i.e., a higher specific activity). However, it is unclear whether TcO₄⁻ will ever actually form in the mildly reducing environments thought likely within degrading HLW canisters. Instead, technetium may remain reduced as highly insoluble Tc(IV), in which case ¹²⁹I might become a significant risk driver in performance assessment (PA) calculations. In the 2004-2005 time frame, the US Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM), Office of Science and Technology International (S&T), funded a program to identify "getters" for possible placement in the invert beneath HLW packages in the repository being planned by the Yucca Mountain Project (YMP). This document reports on progress made during the first (and only) year of this activity. The problem is not a new one, and the project did not proceed in a complete vacuum of information. Potential leads came from past studies directed at developing anion getters for a near-surface low-level waste facility at Hanford, which suggested that both copper-containing compounds and hydrotalcite-group minerals might be promising.
Later work relating to closing HLW tanks (Hanford and Savannah River) added layered bismuth hydroxides to the list of candidates. In fact, even in the first year the project had considerable success in meeting its objectives (Krumhansl et al., 2005). "Batch Kd" testing was used to screen a wide variety of materials from the above-mentioned groups. Some materials tested were, in fact, archived samples from prior studies, but a significant amount of effort was also put into synthesizing new--and novel--phases. A useful rule of thumb in judging getter performance is that the Kd should exceed a value of roughly 1000 before a getter's placement can materially decrease the potential dose at a hypothetical (distant) point of compliance (MacNeil et al., 1999). Materials from each of the groups met this criterion for both iodide and iodate (though, of course, the actual chemistry operating in batch Kd runs is unknown, which casts a rather long shadow over the meaning of such comparisons). Additionally, as a sideline, a few materials were also tested for TcO₄⁻, and occasionally Kd values in excess of 10³ were found for this constituent as well. It is to be stressed that the batch Kd test was used as a convenient screening tool, but in most cases nothing is known about the chemical processes responsible for removing iodine from the test solutions. It follows that the real meaning of such tests is as a relative measure of iodine-scavenging ability, and they may say nothing about sorption processes (in which case evaluating a Kd is irrelevant). Numerous questions also remain regarding the longevity and functionality of materials in the diverse environments in, and around, the proposed YMP repository. Thus, although we had a highly successful first year, we are still far from being able either to qualify any material for placement in the repository or to quantify a getter's performance for use in PA assessments.
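A batch Kd is conventionally computed from the drop in solution concentration over the course of the test. The sketch below shows the arithmetic behind the ~1000 mL/g screening threshold; all numerical values are hypothetical, not measurements from this program.

```python
def batch_kd(c_initial, c_final, solution_ml, solid_g):
    """Distribution coefficient Kd (mL/g) from a batch sorption test:
    mass sorbed per gram of solid divided by the equilibrium solution
    concentration, i.e. Kd = ((C0 - C) / C) * (V / m)."""
    return (c_initial - c_final) / c_final * (solution_ml / solid_g)

# Hypothetical screening run: 25 mL of solution over 0.25 g of getter,
# with concentration dropping from 5.0 to 0.05 (arbitrary units).
kd = batch_kd(5.0, 0.05, solution_ml=25.0, solid_g=0.25)
passes_screen = kd > 1000.0  # rule-of-thumb threshold from the text
```

As the abstract stresses, a high batch Kd says nothing by itself about the removal mechanism; it only ranks candidates by relative scavenging ability under the test conditions.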
Journal of Water Resources Planning and Management
Real-time water quality sensors are becoming commonplace in water distribution systems. However, field deployable, contaminant-specific sensors are still in the development stage. As development proceeds, the necessary operating parameters of these sensors must be determined to protect consumers from accidental and malevolent contamination events. This objective can be quantified in several different ways including minimization of: the time necessary to detect a contamination event, the population exposed to contaminated water, the extent of the contamination within the network, and others. We examine the ability of a sensor set to meet these objectives as a function of both the detection limit of the sensors and the number of sensors in the network. A moderately sized distribution network is used as an example and different sized sets of randomly placed sensors are considered. For each combination of a certain number of sensors and a detection limit, the mean values of the different objectives across multiple random sensor placements are calculated. The tradeoff between the necessary detection limit in a sensor and the number of sensors is evaluated. Results show that for the example problem examined here, a sensor detection limit of 0.01 of the average source concentration is adequate for maximum protection. Detection of events is dependent on the detection limit of the sensors, but for those events that are detected, the values of the performance measures are not a function of the sensor detection limit. The results of replacing a single sensor in a network with a sensor having a much lower detection limit show that while this replacement can improve results, the majority of the additional events detected had performance measures of relatively low consequence. © 2006 ASCE.
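The placement-averaging step of such a tradeoff study can be mimicked on a toy network. The sketch below is not the paper's distribution model: it assumes hypothetical per-node first-detection times for two simulated contamination events and averages time-to-detection over every possible k-sensor placement, assigning a fixed horizon penalty to undetected events.

```python
from itertools import combinations

# Hypothetical per-node first-detection times (hours) for two simulated
# contamination events; a node is absent if the plume never reaches it
# above the sensor detection limit. Values are illustrative only.
DETECT = {
    "e1": {"A": 1.0, "B": 3.0},
    "e2": {"B": 2.0, "C": 4.0},
}
NODES = ["A", "B", "C"]
HORIZON = 10.0  # penalty time assigned to undetected events

def mean_detection_time(k):
    """Mean time-to-detection over all k-sensor placements and events."""
    times = []
    for placement in combinations(NODES, k):
        for event_times in DETECT.values():
            hits = [event_times[n] for n in placement if n in event_times]
            times.append(min(hits) if hits else HORIZON)
    return sum(times) / len(times)
```

Sweeping k (and pruning DETECT entries below a candidate detection limit) reproduces, in miniature, the sensor-count versus detection-limit tradeoff curves examined in the paper.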
Abstract not provided.
Journal of Vacuum Science and Technology B: Microelectronics and Nanometer Structures
This work demonstrates accurate sculpting of predetermined micron-scale, curved shapes in initially planar solids. Using a 20 keV focused Ga⁺ ion beam, various features are sputtered, including hemispheres, parabolas, and sinusoidal waveforms having dimensions from 1 to 30 μm. Ion sculpting is accomplished by varying the dose at different points within individual scans. The doses calculated per point account for the material-specific, angle-dependent sputter yield, Y(θ), the beam current, and the ion beam spatial distribution. Several target materials are sculpted using this technique. These include semiconductors that are made amorphous or disordered by the high-energy beam and metals that remain crystalline with ion exposure. For several target materials, curved feature shapes closely match desired geometries, with milled depths within 5% of intended values. Deposition of sputtered material and reflection of ions from sloped surfaces are important factors in feature depth and profile evolution. Materials that are subject to severe effects of redeposition (e.g., C and Si) require additional dose in certain regions in order to achieve desired geometries. The angle-dependent sputter yields of Si, C, Au, Al, W, SiC, and Al₂O₃ are reported. This includes normal incidence values, Y(0°), and Yamamura parameters f and Σ. © 2006 American Vacuum Society.
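The per-point dose compensation can be sketched from the angular yield alone. One commonly quoted parametrization of the Yamamura angular dependence (the paper's exact fit form may differ) is Y(θ) = Y(0°)·xᶠ·exp(−Σ(x − 1)) with x = 1/cos θ, and the local dose is scaled inversely with the local yield. Parameter values below are illustrative, not the paper's measured values.

```python
import math

def yamamura_yield(theta_deg, y0, f, sigma):
    """Angle-dependent sputter yield Y(theta) = Y0 * x**f * exp(-sigma*(x - 1)),
    with x = 1/cos(theta). The yield peaks where cos(theta) = sigma/f and
    falls off toward grazing incidence."""
    x = 1.0 / math.cos(math.radians(theta_deg))
    return y0 * x**f * math.exp(-sigma * (x - 1.0))

def dose_scale(theta_deg, f, sigma):
    """Relative dose needed at local incidence angle theta to remove the
    same depth as at normal incidence (ignoring redeposition and ion
    reflection, which the paper shows can matter on sloped surfaces)."""
    return yamamura_yield(0.0, 1.0, f, sigma) / yamamura_yield(theta_deg, 1.0, f, sigma)

# Illustrative parameters f=2, sigma=1: yield peaks near theta = 60 deg,
# so walls at that slope need less dose per unit depth than flat regions.
```

The comment in `dose_scale` is where the paper's redeposition correction enters: for materials like C and Si, extra dose must be added on top of this purely geometric scaling.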
Journal of Micromechanics and Microengineering
During x-ray exposure in the LIGA process, the polymethylmethacrylate (PMMA) photoresist undergoes chain scission, which reduces the molecular weight of the exposed material. Under some exposure and development conditions, cracking is observed on the PMMA sidewall, creating undesirable surface texture. In this research, exposed and developed PMMA sidewalls were examined for evidence of crack formation using optical profilometry. PMMA thickness, exposure dose, and the delay time between the end of exposure and the beginning of development were varied. Our analysis of samples, with three different radiation doses and four different delay times from the end of exposure to the beginning of development, indicates that both the first occurrence of cracking and the extent of cracking are affected by the dose and the development delay time. This work includes examination of the depth of cracks into the PMMA, the distance between cracks, the width of cracks, and the relationship between crack occurrence and dose profile. An empirical predictive model correlating the delay time to the observance of sidewall cracking, based on the deposited dose, is presented. This information has direct implication for predicting processing conditions and logistics for LIGA-fabricated parts. © 2006 IOP Publishing Ltd.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report provides relevant information and analysis to the Department of Homeland Security (DHS) that will assist DHS in determining how to meet the requirements of federal technology transfer legislation. These legal requirements are grouped into five categories: (1) establishing an Office of Research and Technology Applications, or providing the functions thereof; (2) information management; (3) enabling agreements with non-federal partners; (4) royalty sharing; and (5) invention ownership/obligations. These five categories provide the organizing framework for this study, which benchmarks other federal agencies/laboratories engaged in technology transfer/transition. Four key agencies--the Department of Health & Human Services (HHS), the U.S. Department of Agriculture (USDA), the Department of Energy (DOE), and the Department of Defense (DoD)--and several of their laboratories have been surveyed. An analysis of DHS's mission needs for commercializing R&D compared to those agencies/laboratories is presented, with implications and next steps for DHS's consideration. Federal technology transfer legislation, requirements, and practices have evolved over the decades as agencies and laboratories have grown more knowledgeable and sophisticated in their efforts to conduct technology transfer and as needs and opinions in the federal sector have changed with regard to what is appropriate. The need to address requirements in a fairly thorough manner has, therefore, resulted in a lengthy paper. There are two ways to find summary information: each chapter concludes with a summary, and there is an overall "Summary and Next Steps" chapter on pages 57-60. For those readers who are unable to read the entire document, we recommend referring to these pages.
Ensuring the reliability of all components within a weapon system becomes increasingly important as the stockpile ages. One of the most noteworthy surveillance techniques designed to circumvent (or take place alongside) traditional D&I operations is to collect a sample of gas from the internal atmosphere of a particular region in a weapon. While a wealth of information about the weapon may be encoded within the composition of its gas sample, our access to that information is only as good as the method used to analyze the sample. It has been shown that cryofocusing-GC/MS offers advantages in terms of sensitivity, ease of sample collection, and robustness of the equipment/hardware used. Attention is therefore focused on qualifying a cryo-GC/MS system for routine stockpile surveillance operations at Pantex. A series of tests was performed on this instrument to characterize the linearity and repeatability of its response using two different standard gas mixes (ozone precursor and TO-14) at various concentrations. This paper outlines the methods used and the results of these tests in order to establish a baseline against which to compare future cryo-GC/MS analyses. A summary of the results is presented.
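Linearity and repeatability characterizations of this kind reduce to simple statistics. The sketch below is generic, with made-up calibration data rather than Pantex results: an ordinary least-squares fit with R² for linearity, and percent relative standard deviation of replicate responses for repeatability.

```python
import statistics

def linear_fit_r2(x, y):
    """Ordinary least-squares slope, intercept, and R^2 for a
    calibration curve (instrument response vs. concentration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

def percent_rsd(replicates):
    """Repeatability as 100 * stdev / mean of replicate responses."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Made-up 5-point calibration (concentration vs. peak area) and replicates.
conc = [1.0, 5.0, 10.0, 50.0, 100.0]
area = [2.1, 10.0, 20.3, 99.8, 200.5]
slope, intercept, r2 = linear_fit_r2(conc, area)
rsd = percent_rsd([100.2, 99.5, 101.1, 100.7, 99.9])
```

In a qualification baseline, acceptance limits would be placed on R² and %RSD per analyte and concentration level; the specific limits belong to the qualification plan, not this sketch.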
This report presents computational analyses to determine the structural integrity of different salt cavern shapes. Three characteristic shapes for increasing cavern volumes are evaluated and compared to the baseline shape of a cylindrical cavern. Caverns with enlarged tops, bottoms, and mid-sections are modeled. The results address pillar to diameter ratios of some existing caverns in the system and will represent the final shape of other caverns if they are repeatedly drawn down. This deliverable is performed in support of the U.S. Strategic Petroleum Reserve. Several three-dimensional models using a close-packed arrangement of 19 caverns have been built and analyzed using a simplified symmetry involving a 30-degree wedge portion of the model. This approach has been used previously for West Hackberry (Ehgartner and Sobolik, 2002) and Big Hill (Park et al., 2005) analyses. A stratigraphy based on the Big Hill site has been incorporated into the model. The caverns are modeled without wells and casing to simplify the calculations. These calculations have been made using the power law creep model. The four cavern shapes were evaluated at several different cavern radii against four design factors. These factors included the dilatant damage safety factor in salt, the cavern volume closure, axial well strain in the caprock, and surface subsidence. The relative performance of each of the cavern shapes varies for the different design factors, although it is apparent that the enlarged bottom design provides the worst overall performance. The results of the calculations are put in the context of the history of cavern analyses assuming cylindrical caverns, and how these results affect previous understanding of cavern behavior in a salt dome.
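The dilatant damage safety factor named above is typically evaluated from stress invariants. One widely quoted form (the constant 0.27 and this exact criterion are assumptions here; the report's analyses may use a different damage criterion) compares √J₂ against 0.27·I₁, giving a factor of safety FS = 0.27·I₁/√J₂.

```python
import math

def dilatancy_safety_factor(s1, s2, s3, c=0.27):
    """Factor of safety against dilatant damage in salt from principal
    stresses (compression positive, e.g. in MPa), using the assumed
    criterion sqrt(J2) <= c * I1. FS > 1 means the stress state lies
    inside the non-dilatant domain."""
    i1 = s1 + s2 + s3
    j2 = ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 6.0
    if j2 == 0.0:  # hydrostatic state: no shear stress, no dilatancy
        return math.inf
    return c * i1 / math.sqrt(j2)

fs = dilatancy_safety_factor(20.0, 15.0, 10.0)  # illustrative stress state
```

In a cavern analysis this factor is evaluated element by element in the salt around each cavern shape; the enlarged-bottom design performing worst means its stress states approach the dilatant boundary at lower pressures than the other shapes.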
Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequences of coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits as compositions of sequences of generic elementary gates.
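The flavor of such constructions can be shown at the single-qubit level. As a hedged illustration (this is a standard textbook fact, not the n-qubit method of [1]): any single-qubit unitary factors, up to a global phase, into elementary rotations; for example, the Hadamard gate satisfies H = i·Ry(π/2)·Rz(π).

```python
import numpy as np

def Rz(theta):
    """Elementary z-rotation gate."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def Ry(theta):
    """Elementary y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# Hadamard gate, and its factorization into elementary rotations
# up to the global phase i:
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
built = 1j * (Ry(np.pi / 2) @ Rz(np.pi))
assert np.allclose(built, H)
```

The reviewed method generalizes this idea to n qubits, composing generic elementary gates (together with entangling operations) to realize an arbitrary multi-qubit unitary.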
This report serves to document contract deliverables considered to be of continuing interest associated with two workshops conducted as part of an initial assessment of Material Protection, Control, and Accounting (MPC&A) training needs of the Newly Independent and Baltic States (NIS/Baltics). These workshops were held in Kiev, Ukraine, ca. 2003-2004, with the assistance of personnel from the George Kuzmycz Training Center (GKTC) of the Kiev Institute of Nuclear Research (KINR). Because of the dominant role Ukraine plays in the region in terms of the nuclear industry, one workshop focused exclusively on Ukrainian training needs, with participants attending from twelve Ukrainian organizations (plus U.S. DOE/NNSA representatives). The second workshop included participation by a further ten countries from the NIS/Baltics region. In addition, the training needs data developed during the workshop were supplemented by the outcomes of surveys and studies conducted by the GKTC.
Abstract not provided.
In recent years, modeling and simulation has played an increasingly important role in the maintenance of the nuclear stockpile. The Advanced Simulation and Computing (ASC) program continues to support and encourage the development of a modeling and simulation infrastructure to make these goals a reality. The Distance Computing Network has been making ASC resources available to users throughout the tri-lab environment for over five years. This network relies on the Transmission Control Protocol/Internet Protocol (TCP/IP) suite to provide high-performance and reliable communications. Understanding TCP/IP operation in this unique environment is critical. Software modeling has been used to analyze current network performance and predict the effect of proposed changes. Recently the network architecture was radically changed, and the software model had to be changed as well. Whereas the original network was based on 2.5 gigabit-per-second ATM links, the redesigned network comprises 10-gigabit Ethernet links arranged as a 3-node ring. Therefore, a new software model was needed to continue to predict the performance of proposed changes and to allow engineers to experiment with new network applications without the risk of interfering with critical operations.
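Two back-of-the-envelope TCP relations drive much of this kind of performance analysis: the sender's window must cover the bandwidth-delay product to fill a link, and the Mathis approximation bounds steady-state throughput by the loss rate. The link speed below matches the abstract; the RTT, MSS, and loss-rate values are illustrative assumptions.

```python
import math

def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: the TCP window (bytes) needed to keep
    a link full, window = rate * RTT."""
    return link_bps * rtt_s / 8.0

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. steady-state TCP throughput approximation:
    BW ~ (MSS / RTT) * C / sqrt(p), with C ~ 1.22 under typical
    assumptions. Valid for moderate, random loss."""
    return c * mss_bytes * 8.0 / (rtt_s * math.sqrt(loss_rate))

# A 10-gigabit Ethernet link at an assumed 10 ms RTT needs a ~12.5 MB
# window; even tiny loss rates then cap achievable throughput.
window = bdp_bytes(10e9, 0.010)
bw = mathis_throughput_bps(1460, 0.010, 1e-7)  # illustrative loss rate
```

Relations like these are what a packet-level software model of the ring must reproduce before it can credibly predict the effect of proposed changes.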
A growing recognition exists in companies worldwide that, when employees leave, they take with them valuable knowledge that is difficult and expensive to recreate. The concern is now particularly acute as the large ''baby boomer'' generation is reaching retirement age. A new field of science, Knowledge Continuity Management (KCM), is designed to capture and catalog the acquired knowledge and wisdom from experience of these employees before they leave. The KCM concept is in the final stages of being adopted by the Energy, Infrastructure, and Knowledge Systems Center and a program is being applied that should produce significant annual cost savings. This report discusses how the Center can use KCM to mitigate knowledge loss from employee departures, including a concise description of a proposed plan tailored to the Center's specific needs and resources.
This document provides the status of the Virtual Control System Environment (VCSE) under development at Sandia National Laboratories. This development effort is funded by the Department of Energy's (DOE) National SCADA Test Bed (NSTB) Program. Specifically, the document presents a Modeling and Simulation (M&S) and software interface capability that supports the analysis of Process Control Systems (PCS) used in critical infrastructures. This document describes the development activities performed through June 2006 and the current status of the VCSE development task. Initial activities performed by the development team included researching the needs of critical infrastructure systems that depend on PCS. A primary source describing the security needs of a critical infrastructure is the Roadmap to Secure Control Systems in the Energy Sector. A literature search of PCS analysis tools was performed, and we identified a void in system-wide PCS M&S capability: no existing tool provides a capability to simulate control system devices together with the underlying supporting communication network. The design team identified the requirements for an analysis tool to fill this void. Since PCS are composed of multiple subsystems, a modular analysis framework was selected for the VCSE. The need for a framework to support the interoperability of multiple simulators with a PCS device model library was identified. The framework supports emulation of a system that is represented by models in a simulation interacting with actual hardware via a System-in-the-Loop (SITL) interface. To identify specific features for the VCSE analysis tool, the design team created a questionnaire that briefly described the range of potential capabilities the analysis tool could include and requested feedback from potential industry users.
This initial industry outreach was also intended to identify several industry users willing to participate in a dialog throughout the development process so that we maximize the usefulness of the VCSE to industry. Industry involvement will continue throughout the VCSE development process. The team's activities have focused on creating a modeling and simulation capability that will support the analysis of PCS. An M&S methodology that is modular in structure was selected. The framework is able to support a range of model fidelities depending on the analysis being performed. In some cases, high-fidelity network communication protocol and device models are necessary, which can be accomplished by including a high-fidelity communication network simulator such as OPNET Modeler. In other cases, lower fidelity models could be used, in which case the high-fidelity communication network simulator is not needed. In addition, the framework supports a range of control system device behavior models, from simple functional models to very detailed vendor-specific models. Included in the FY05 funding milestones was a demonstration of the framework. The development team created two scenarios that demonstrated the VCSE modular framework. The first demonstration provided a co-simulation using a high-fidelity communication network simulator interoperating with a custom-developed control system simulator and device library. The second scenario provided a system-in-the-loop demonstration that emulated a system with a virtual network segment interoperating with a real-device network segment.
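The modular-framework idea can be caricatured in a few lines. Everything below is invented for illustration (the class and method names are not the VCSE API): simulated device models and an adapter standing in for real hardware share one message-passing interface, so a virtual segment and a system-in-the-loop segment can interoperate on the same bus.

```python
class Endpoint:
    """Common interface shared by simulated models and real-device adapters."""
    def receive(self, msg):
        raise NotImplementedError

class SimulatedPLC(Endpoint):
    """Toy control-system device model: logs and acknowledges each poll."""
    def __init__(self):
        self.log = []
    def receive(self, msg):
        self.log.append(msg)
        return {"status": "ok", "reply_to": msg["cmd"]}

class RealDeviceAdapter(Endpoint):
    """Stand-in for a system-in-the-loop interface; a real adapter would
    serialize msg onto a physical network segment."""
    def receive(self, msg):
        return {"status": "ok", "reply_to": msg["cmd"], "hardware": True}

class Bus:
    """Minimal communication-network model routing messages by address.
    A high-fidelity study would replace this with a network simulator."""
    def __init__(self):
        self.endpoints = {}
    def attach(self, addr, endpoint):
        self.endpoints[addr] = endpoint
    def send(self, addr, msg):
        return self.endpoints[addr].receive(msg)

bus = Bus()
bus.attach("plc1", SimulatedPLC())        # virtual segment
bus.attach("rtu7", RealDeviceAdapter())   # emulated real-device segment
reply_sim = bus.send("plc1", {"cmd": "poll"})
reply_hw = bus.send("rtu7", {"cmd": "poll"})
```

Because both endpoints implement the same interface, swapping a low-fidelity device model for a vendor-specific one, or for actual hardware, changes only what is attached to the bus, which is the modularity the framework is built around.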