Nanometer-Thick Cobalt-Iron Spinel Oxide Films for High Temperature Splitting of H2O and CO2
Abstract not provided.
The location of the liquid-vapor critical point (c.p.) is one of the key features of equation of state (EOS) models used in simulating high energy density physics and pulsed power experiments. For example, material behavior near the vapor dome is critical in determining how and when coronal plasmas form in expanding wires. Transport properties, such as conductivity and opacity, can vary by an order of magnitude depending on whether the state of the material is inside or outside the vapor dome. Because states near the vapor dome are difficult to produce experimentally, the uncertainty in the location of the c.p. is of order 100% for all but a few materials, such as cesium and mercury. These states of interest can be produced on Z through high-velocity shock and release experiments. For example, it is estimated that release adiabats from ~1000 GPa in aluminum would skirt the vapor dome, allowing estimates of the c.p. to be made. This is within the reach of Z experiments (flyer plate velocity of ~30 km/s). Recent high-fidelity EOS models and hydrocode simulations suggest that the dynamic two-phase flow behavior observed in initial scoping experiments can be reproduced, providing a link between theory and experiment. Experimental identification of the c.p. in aluminum would be the first measurement of its kind in a dynamic experiment. Furthermore, once the c.p. has been determined experimentally, it should be possible to probe the electrical conductivity, opacity, reflectivity, and related properties of the material near the vapor dome using a variety of diagnostics. We propose a combined experimental and theoretical investigation with the initial emphasis on aluminum.
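For intuition on what "locating the c.p." means in an EOS model, the sketch below computes the critical point of a van der Waals fluid, where the conditions dP/dV = 0 and d2P/dV2 = 0 on the critical isotherm have a closed-form solution. This toy EOS is only illustrative; it is not the high-fidelity tabular EOS referenced above, and the coefficients a and b are hypothetical, not fitted to aluminum.

```python
# Minimal sketch: critical point of a van der Waals fluid,
# P = RT/(V - b) - a/V^2. An illustrative toy EOS, not the
# high-fidelity models referenced above; a and b are hypothetical.

R = 8.314  # J/(mol K), universal gas constant

def vdw_critical_point(a, b):
    """Critical point from dP/dV = 0 and d2P/dV2 = 0."""
    Vc = 3.0 * b                    # critical molar volume
    Tc = 8.0 * a / (27.0 * R * b)   # critical temperature
    Pc = a / (27.0 * b ** 2)        # critical pressure
    return Pc, Vc, Tc

# Hypothetical coefficients (SI units: Pa m^6/mol^2 and m^3/mol).
a, b = 0.55, 3.0e-5
Pc, Vc, Tc = vdw_critical_point(a, b)
print(f"Pc = {Pc:.3e} Pa, Vc = {Vc:.3e} m^3/mol, Tc = {Tc:.1f} K")
```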
Petaflops systems will have tens to hundreds of thousands of compute nodes, which increases the likelihood of faults. Applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 30,000 nodes will likely spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost. We created a library that performs redundant computations on additional nodes allocated to the application. An active node and its redundant partner form a node bundle, which fails, and causes an application restart, only when both nodes in the bundle fail. The goals of this work are to learn whether redundancy can be implemented entirely at the user level, what requirements such a library places on a Reliability, Availability, and Serviceability (RAS) system, and what its impact on performance and run time is. We find that our redundant MPI layer library imposes a relatively modest performance penalty for applications but greatly reduces the number of application interrupts. This reduction in interrupts leads to large savings in restart and rework time. For large-scale applications the savings compensate for the performance loss and for the additional nodes required for redundant computation.
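The savings follow directly from the bundle failure rule: a bundle interrupts the application only when both of its nodes fail. A minimal sketch of that arithmetic, assuming independent node failures; the node count and per-node failure probability below are hypothetical illustration values, not measurements from the study:

```python
# Minimal sketch: probability of an application interrupt with and
# without node bundles, assuming independent node failures. N and p
# below are hypothetical, chosen only to show why pairing nodes
# sharply reduces interrupts.

def p_app_interrupt(n_units, p_node, redundant):
    """Probability that at least one failure unit goes down.
    Without redundancy the unit is a single node (fails with p_node);
    with redundancy it is a bundle, which fails only when both of its
    nodes fail (probability p_node**2)."""
    p_unit = p_node ** 2 if redundant else p_node
    return 1.0 - (1.0 - p_unit) ** n_units

N, p = 30_000, 1e-4  # hypothetical: active nodes, per-node failure prob per interval
print(f"no redundancy: {p_app_interrupt(N, p, redundant=False):.3f}")
# N bundles; the redundant partners come from additional allocated nodes.
print(f"node bundles:  {p_app_interrupt(N, p, redundant=True):.2e}")
```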
Ionizing radiation is known to cause Single Event Effects (SEEs) in a variety of electronic devices. The mechanism that leads to these SEEs is the current induced by the radiation in the device. While this phenomenon is detrimental in ICs, it is also the basic mechanism behind the operation of semiconductor radiation detectors. To predict SEEs in ICs and detector responses, we need to be able to simulate the radiation-induced current as a function of time. Analytical models exist, but they work only for very simple detector configurations and fail for anything more complex. At the other extreme, TCAD programs can simulate this process in microelectronic devices, but these codes cost hundreds of thousands of dollars and require substantial computing resources; in addition, in certain cases they fail to predict the correct behavior. A simulation model based on the Gunn theorem was developed and used within the COMSOL Multiphysics framework.
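As an illustration of the induced-current picture underlying Gunn's theorem, the sketch below evaluates the closely related Shockley-Ramo expression i(t) = q v(t) E_w for a single carrier drifting across a parallel-plate detector, where the weighting field E_w is simply 1/d. This is a toy geometry standing in for the COMSOL model above; all parameter values are hypothetical.

```python
# Minimal sketch: radiation-induced current from one drifting carrier
# via the Shockley-Ramo form i = q * v * E_w, a special case of the
# more general Gunn theorem used in the work above. Parallel-plate
# geometry: weighting field E_w = 1/d. All numbers are hypothetical.

Q = 1.602e-19    # C, elementary charge
d = 300e-6       # m, electrode spacing
mu = 0.135       # m^2/(V s), electron mobility (silicon-like)
V_bias = 100.0   # V, applied bias

E = V_bias / d   # V/m, uniform drift field in this toy model
v = mu * E       # m/s, constant drift velocity
E_w = 1.0 / d    # 1/m, weighting field of a parallel-plate electrode

i = Q * v * E_w      # A, induced current while the carrier drifts
t_transit = d / v    # s, time for the carrier to cross the gap
print(f"induced current: {i:.3e} A for {t_transit:.3e} s")
# Sanity check: the integrated charge i * t_transit equals Q.
```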
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need to make these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically, we describe the design and implementation of an open source toolkit for graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
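For a flavor of the kind of analysis such a toolkit exposes, the sketch below uses the open source networkx package as a stand-in (this is not the Sandia toolkit's actual API) to rank entities in a small relationship graph by betweenness centrality, a common starting point in link analysis. The entities and edges are made up.

```python
# Minimal sketch of a typical graph-informatics query, using the open
# source networkx package as a stand-in; NOT the API of the toolkit
# described above. The graph below is hypothetical.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"),
    ("B", "E"), ("E", "F"), ("C", "E"),
])

# Betweenness centrality highlights entities that broker many
# shortest paths between others.
for node, score in sorted(nx.betweenness_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```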
Traditionally, safeguards and security systems for fuel cycle facilities are designed separately, and only after the facility design is near completion. This can result in higher costs due to retrofits and in redundant collection of data. Future facilities will incorporate safeguards and security early in the design process and will integrate the two systems to make better use of plant data and strengthen both. The purpose of this project was to evaluate the integration of materials control and accounting (MC&A) measurements with physical security design for a nuclear reprocessing plant. Locations throughout the plant where the two data streams overlap, or where MC&A data could benefit physical security, were identified. This mapping is presented along with a methodology for including the additional data in the existing probabilistic assessments used to evaluate safeguards and security system designs.
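One simple way to fold an MC&A measurement into an existing probabilistic assessment is to treat it as an additional independent detection layer; a hypothetical sketch of that arithmetic follows (the layer names and probabilities are illustrative, not values from the study):

```python
# Minimal sketch: combining independent detection layers in a
# probabilistic assessment, P_total = 1 - prod(1 - p_i). Adding an
# MC&A measurement as an extra layer raises the overall detection
# probability. All probabilities below are hypothetical.

def combined_detection(probabilities):
    """System-level detection probability for independent layers."""
    p_miss = 1.0
    for p in probabilities:
        p_miss *= 1.0 - p
    return 1.0 - p_miss

physical_security = [0.70, 0.50]       # e.g., portal monitor, guard check
with_mca = physical_security + [0.60]  # add an MC&A mass-balance check

print(f"security only: {combined_detection(physical_security):.3f}")
print(f"with MC&A:     {combined_detection(with_mca):.3f}")
```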
Progress in MEMS fabrication has enabled a wide variety of force- and displacement-sensing devices to be constructed. One device under intense development at Sandia is a passive shock switch, described elsewhere (Mitchell 2008). A goal of all MEMS devices, including the shock switch, is a high degree of reliability, which in turn requires systematic methods for validating device performance during each design iteration. Once a design is finalized, suitable tools are needed to provide quality assurance for manufactured devices. To ensure device performance, measurements on these devices must be traceable to NIST standards. In addition, accurate metrology of MEMS components is needed to validate the mechanical models used to design devices, accelerating development to meet emerging needs. We describe progress toward a NIST-traceable calibration method for a next-generation, 2D Interfacial Force Microscope (IFM) for applications in MEMS metrology and qualification, including the results of screening several candidate calibration methods and the known sources of uncertainty in each.
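When comparing candidate calibration methods, the individual uncertainty sources are typically combined in quadrature to obtain a standard combined uncertainty, per the GUM. A minimal sketch for a force calibration; the source names and magnitudes are hypothetical, not the actual IFM uncertainty budget:

```python
# Minimal sketch: root-sum-of-squares combination of independent
# uncertainty sources for a force calibration (standard GUM
# treatment). All entries below are hypothetical placeholders.
import math

sources_nN = {
    "reference artifact": 0.8,    # calibrated reference spring/balance
    "displacement readout": 0.5,
    "thermal drift": 0.3,
    "alignment": 0.2,
}

u_combined = math.sqrt(sum(u ** 2 for u in sources_nN.values()))
U_expanded = 2.0 * u_combined  # k = 2 coverage factor (approx. 95%)
print(f"combined: {u_combined:.2f} nN, expanded (k=2): {U_expanded:.2f} nN")
```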