Risk assessment methodology for bioscience facilities
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in Journal of Vacuum Science and Technology A.
We have conducted an extensive study of the evolution of surface morphology of single-crystal diamond surfaces during sputtering by 20 keV Ga⁺ and Ga⁺ + H₂O. We observe the formation of well-ordered ripples on the surface for angles of incidence between 40° and 70°. We have also measured sputter yields as a function of angle of incidence, and the dependence of ripple wavelength and amplitude on angle of incidence and ion fluence. Smooth surface morphology is observed for <40°, and a transition to a step-and-terrace structure is observed for >70°. The formation and evolution of well-ordered surface ripples is well characterized by the model of Bradley and Harper, in which sputter-induced roughening is balanced by surface-transport smoothing. Smoothing is consistent with an ion-induced viscous relaxation mechanism. Ripple amplitude saturates at high ion fluence, confirming the effect of nonlinear processes. Differences between Ga⁺ and Ga⁺ + H₂O in ripple wavelength, amplitude, and time to saturation of amplitude are consistent with the increased sputter yield observed for Ga⁺ + H₂O. For angles of incidence <40°, an ion-bombardment-induced 'atomic drift' mechanism for surface smoothing may be responsible for suppression of ripple formation. For Ga⁺ + H₂O, we observe anomalous formation of very large-amplitude, long-wavelength, poorly ordered surface ridges for angles of incidence near 40°. Finally, we observe that ripple initiation on smooth surfaces can take place by initial stochastic roughening followed by evolution of increasingly well-ordered ripples.
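For reference, a minimal statement of the Bradley-Harper picture invoked above (written in its standard linearized form; the coefficients are those of the general theory, not values fitted in this work) is

    \frac{\partial h}{\partial t} = -v_0(\theta) + \nu_x(\theta)\,\frac{\partial^2 h}{\partial x^2} + \nu_y(\theta)\,\frac{\partial^2 h}{\partial y^2} - K\,\nabla^4 h,

where h(x, y, t) is the surface height, \theta is the angle of incidence, \nu_x and \nu_y are the curvature-dependent erosion coefficients responsible for sputter-induced roughening, and K is the smoothing coefficient (surface diffusion in the original model; an ion-induced viscous relaxation term plays the analogous role here). Linear theory selects ripples oriented along the direction with the more negative coefficient \nu_i, with wavelength \lambda = 2\pi \sqrt{2K / |\nu_i|}.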
Nuclear fuel cycle transparency can be defined as a confidence-building approach among political entities to ensure that civilian nuclear facilities are not being used for the development of nuclear weapons. Transparency concepts facilitate the transfer of nuclear technology, as the current international political climate indicates a need for increased methods of assuring non-proliferation. This research develops a system that will augment current non-proliferation assessment activities undertaken by U.S. and international regulatory agencies. It will support the export of nuclear technologies, as well as the design and construction of Gen. IV energy systems. Additionally, the framework developed by this research will provide feedback to cooperating parties, thus ensuring full transparency of a nuclear fuel cycle. As fuel handling activities become increasingly automated, the proliferation or diversion potential of nuclear material still needs to be assessed. However, with increased automation there exists a vast amount of process data to be monitored. By designing a system that monitors process data continuously and compares this data to declared process information and plant designs, a faster and more efficient assessment of proliferation risk can be made. Figure 1 illustrates the transparency framework that has been developed. As shown in the figure, real-time process data is collected at the fuel cycle facility (a reactor, a fabrication plant, a recycle facility, etc.). Data is sent to the monitoring organization and assessed for proliferation risk. Analysis and recommendations are made to cooperating parties, and feedback is provided to the facility. The analysis of proliferation risk is based on the following factors: (1) Material attractiveness: the quantification of factors relevant to the proliferation risk of a given material (e.g., highly enriched Pu-239 is more attractive than material of lower enrichment); (2) The static (baseline) risk: the quantification of risk factors regarding the expected value of proliferation risk under normal (non-proliferating) operations; and (3) The dynamic (changing) risk: the quantification of risk factors regarding the observed value of proliferation risk, based on monitor signals from facility operations. This framework could be implemented at facilities that have been exported (for instance, to third world countries) or at facilities located in sensitive countries. Sandia National Laboratories is currently working with the Japan Nuclear Cycle Development Institute (JNC) to implement a demonstration of nuclear fuel cycle transparency technology at the Fuel Handling Training Model designed for the Monju Fast Reactor at the International Cooperation and Development Training Center in Japan. This technology has broad applications, both in the U.S. and abroad. Following the demonstration, we expect to begin further testing of the technology at an enrichment facility, a fast reactor, and a recycle facility.
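As a rough illustration only, the three factors above could be combined into a single indicator along the following lines; the weighted-sum form, the weights, and the field names are hypothetical placeholders, not the model used in this framework.

    # Illustrative sketch (assumed weighting scheme, not the deployed risk model).
    from dataclasses import dataclass

    @dataclass
    class RiskInputs:
        attractiveness: float  # material attractiveness, scaled to 0..1
        static_risk: float     # baseline risk under declared (non-proliferating) operations, 0..1
        dynamic_risk: float    # risk inferred from live monitor signals, 0..1

    def proliferation_risk(x: RiskInputs, w_static: float = 0.4, w_dynamic: float = 0.6) -> float:
        # Weight the baseline and observed process risk, then scale by how
        # attractive the material itself would be to a proliferator.
        process_risk = w_static * x.static_risk + w_dynamic * x.dynamic_risk
        return x.attractiveness * process_risk

    # Example: a monitor signal that departs from the declared process raises the dynamic term.
    print(proliferation_risk(RiskInputs(attractiveness=0.8, static_risk=0.1, dynamic_risk=0.7)))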
Proposed for publication in Encyclopedia of Material: Science & Technology.
The use of a lower-melting-point molten metal to join metallic components is perhaps the earliest example of processing which employs metallurgical bonding principles, having roots as far back as 4200 BC (Peaslee 2003). More than 6000 years later, brazing occupies a prominent position in our suite of joining processes, where it offers cost and/or performance advantages in the fabrication of many structures. More precisely, brazing can be described as the use of a molten filler metal to wet the closely fitting faying surfaces of a joint, leading to formation of metallurgical bonds between the filler metal and the substrates. Historically, brazing processes employ filler metals whose solidus temperature exceeds 723 K, as opposed to soldering processes, which use lower-melting-temperature filler materials. In the past several decades, technological advances have facilitated a broadening of applications for brazing while simultaneously contradicting some of the traditional perceptions of the process. However, many of those tenets remain appropriate for the majority of brazing processes and products. Accordingly, this article provides a brief description of traditional brazing and some important factors to be considered when designing and producing brazed structures. An additional section describes the technical advances in the field.
Proposed for publication in IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control.
The effect of critical dimension (CD) variation and metallization ratio on the efficiency of energy conversion of a surface acoustic wave (SAW) correlator is examined. We find that a 10% variation in the width of the finger electrodes results in only a 1% decrease in the efficiency of energy conversion. Furthermore, our model predicts that a metallization ratio of 0.74 represents an optimum value for energy extraction from the SAW by the interdigitated transducer (IDT).
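Assuming the standard interdigitated-transducer definition, the metallization ratio here is \eta = a / p, where a is the finger (electrode) width and p is the finger pitch (center-to-center electrode spacing); the reported optimum then corresponds to \eta \approx 0.74.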
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in Structure & Infrastructure Engineering: Maintenance, Management, Life-Cycle Design & Performance.
Abstract not provided.
This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
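To make the range-image formulation concrete, a minimal sketch of a directed, outlier-robust Hausdorff-style comparison between two pre-aligned range images is given below; the windowed search, the depth-difference metric, and the quantile parameter are illustrative assumptions, not the exact formulation of the published algorithm.

    # Illustrative sketch: directed, partial Hausdorff-style distance on aligned range images.
    # A fixed search window keeps the cost linear in the number of pixels.
    import numpy as np

    def directed_hausdorff_range(probe, template, window=2, quantile=0.95):
        h, w = probe.shape
        per_pixel = []
        for r in range(h):
            for c in range(w):
                z = probe[r, c]
                if np.isnan(z):
                    continue  # missing data (e.g., self-occlusion)
                r0, r1 = max(0, r - window), min(h, r + window + 1)
                c0, c1 = max(0, c - window), min(w, c + window + 1)
                diffs = np.abs(template[r0:r1, c0:c1] - z)
                diffs = diffs[~np.isnan(diffs)]
                if diffs.size:
                    per_pixel.append(diffs.min())
        if not per_pixel:
            return np.inf
        # Using a quantile instead of the maximum gives the partial, outlier-robust variant.
        return float(np.quantile(per_pixel, quantile))

    # Toy usage: an image compared against itself gives distance 0.
    probe = np.random.rand(64, 64)
    print(directed_hausdorff_range(probe, probe.copy()))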
Abstract not provided.
While loss of life is the operating concern of the Department of Homeland Security (DHS), the security of the economy ultimately decides the success of the war on terrorism. This project focuses on mitigation, containment, response, and the impact of terrorist events on the economy. Conventional economic methods are inadequate, but agent-based methods (discrete simulation) appear to uniquely capture the dynamics and emergent (human) behaviors.
Low-temperature co-fired ceramic (LTCC) enables development and testing of critical elements on microsystem boards as well as nonmicroelectronic meso-scale applications. We describe silicon-based microelectromechanical systems packaging and LTCC meso-scale applications. Microfluidic interposers permit rapid testing of varied silicon designs. The application of LTCC to micro-high-performance liquid chromatography (µ-HPLC) demonstrates performance advantages at very high pressures. At intermediate pressures, a ceramic thermal cell lyser has lysed bacteria spores without damaging the proteins. The stability and sensitivity of LTCC/chemiresistor smart channels are comparable to the performance of silicon-based chemiresistors. A variant of the use of sacrificial volume materials has created channels, suspended thick films, cavities, and techniques for pressure and flow sensing. We report on inductors, diaphragms, cantilevers, antennae, switch structures, and thermal sensors suspended in air. The development of 'functional-as-released' moving parts has resulted in wheels, impellers, tethered plates, and related new LTCC mechanical roles for actuation and sensing. High-temperature metal-to-LTCC joining has been developed with metal thin films for the strong, hermetic interfaces necessary for pins, leads, and tubes.
Abstract not provided.
Abstract not provided.
Proposed for publication as a book chapter in "Parallel Scientific Computing".
Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing. Graph algorithms in particular arise in load balancing, scheduling, mapping and many other aspects of the parallelization of irregular applications. These are still active research areas, mostly due to evolving computational techniques and rapidly changing computational platforms. But the relationship between parallel computing and discrete algorithms is much richer than the mere use of graph algorithms to support the parallelization of traditional scientific computations. Important, emerging areas of science are fundamentally discrete, and they are increasingly reliant on the power of parallel computing. Examples include computational biology, scientific data mining, and network analysis. These applications are changing the relationship between discrete algorithms and parallel computing. In addition to their traditional role as enablers of high performance, combinatorial algorithms are now customers for parallel computing. New parallelization techniques for combinatorial algorithms need to be developed to support these nontraditional scientific approaches. This chapter will describe some of the many areas of intersection between discrete algorithms and parallel scientific computing. Due to space limitations, this chapter is not a comprehensive survey, but rather an introduction to a diverse set of techniques and applications with a particular emphasis on work presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing. Some topics highly relevant to this chapter (e.g. load balancing) are addressed elsewhere in this book, and so we will not discuss them here.
Abstract not provided.
Red Storm is a massively parallel processor. The Red Storm design goals are: (1) Balanced system performance - CPU, memory, interconnect, and I/O; (2) Usability - functionality of hardware and software meets the needs of users for massively parallel computing; (3) Scalability - system hardware and software scale from a single-cabinet system to an approximately 30,000-processor system; (4) Reliability - the machine stays up long enough between interrupts to make real progress on completing an application run (at least 50 hours MTBI), which requires full system RAS capability; (5) Upgradability - the system can be upgraded with a processor swap and additional cabinets to 100T or greater; (6) Red/black switching - the capability to switch major portions of the machine between classified and unclassified computing environments; (7) Space, power, and cooling - a high-density, low-power system; and (8) Price/performance - excellent performance per dollar, using high-volume commodity parts where feasible.
Abstract not provided.
Abstract not provided.
Abstract not provided.