Inertial Confinement Fusion (ICF) is a grand challenge for high-energy-density science. In traditional ICF, alpha heating is achieved by producing a T ~ 4 keV hot spot surrounded by a high areal density (ρR ~ 1 g/cm²) of cold fuel, requiring hot-spot pressures above 400 Gbar. In Magneto-Inertial Fusion (MIF), the pressure and areal density requirements are relaxed by the presence of a magnetic field strong enough to magnetically confine charged particles within a radius smaller than the fuel radius. The key parameter for MIF is thus BR rather than ρR, and for BR > 0.5 MG-cm, the magnetic field effectively traps electrons, 1 MeV tritons, and fusion-produced alpha particles for almost arbitrarily small ρR. Consequently, thermal conduction losses are reduced and trapped alpha particles return much of their energy to the burning plasma. Operating at plasma densities and pressures intermediate between traditional magnetic confinement fusion and ICF systems, MIF concepts achieve self-heating at pressures of only ~5 Gbar, with GJ-scale yields appearing possible. Ongoing experiments demonstrate good inertial and magnetic confinement, stable implosions, and promising yields, but challenges remain in optimizing preheat, mitigating mix, and understanding fundamental plasma physics and magnetohydrodynamic behavior in these extreme conditions.
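As a rough, back-of-the-envelope check (not taken from the report), the condition behind the BR criterion is that the alpha-particle gyroradius r_L = mv/(qB) be smaller than the fuel radius R, i.e. BR > mv/q. For a 3.5 MeV D-T alpha this gives BR ≳ 0.27 MG-cm, consistent in magnitude with the ~0.5 MG-cm threshold quoted above (which corresponds to confining the alpha orbit within a fraction of the fuel radius). A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope check of the B*R trapping criterion (standard
# formulas only; the 0.5 MG-cm figure itself comes from the report).
import math

E_ALPHA = 3.5e6 * 1.602e-19   # D-T alpha birth energy, 3.5 MeV in joules
M_ALPHA = 6.644e-27           # alpha-particle mass, kg
Q_ALPHA = 2 * 1.602e-19       # alpha-particle charge, C

v_alpha = math.sqrt(2 * E_ALPHA / M_ALPHA)     # ~1.3e7 m/s (non-relativistic)

# Gyroradius r_L = m v / (q B) < fuel radius R  <=>  B*R > m v / q.
# Conveniently, 1 T*m = 1e4 G * 1e2 cm = 1 MG-cm, so no unit conversion is needed.
BR_min = M_ALPHA * v_alpha / Q_ALPHA           # tesla-meters, numerically equal to MG-cm

print(f"alpha speed        ~ {v_alpha:.2e} m/s")
print(f"B*R for r_L = R    ~ {BR_min:.2f} MG-cm")   # ~0.27 MG-cm
```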
This report describes the Standard Unified Modeling and Mapping Integration Toolkit (SUMMIT) and how it can be used to prepare and plan for homeland emergencies.
This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing are being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship between the two. In addition to informing design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

ACKNOWLEDGEMENTS: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.
How does energy propagate from the solar core to the surface of the sun, where it emerges to warm the Earth? How old are the stellar systems that host the numerous exoplanets that have now been discovered outside our solar system? How does radiation penetrate and heat an inertial fusion capsule? The answers to these seemingly disparate questions hinge on knowledge of the fundamental material property that controls the absorption of radiation: opacity. Opacity plays a critical role in many high energy density (HED) systems and is highly important for the NNSA stewardship mission. In addition, laboratory astrophysics research serves as a conduit for establishing collaborations between the NNSA laboratories, between the NNSA laboratories and universities, and between the NNSA laboratories and our international partners. Exposure to open peer review sharpens the research capabilities of NNSA scientists, and interactions with students and professors provide a natural path for recruiting the next generation of stockpile stewards.
This Quick Start Guide is an abbreviated version of the Contingency Contractor Optimization Phase 3 User Manual for the Contingency Contractor Optimization Tool engineering prototype. It focuses on providing quick-access instructions for the core activities of the two main user roles: Planning Manager and Analyst. Based on an electronic storyboard prototype developed in Phase 2, the Contingency Contractor Optimization Tool engineering prototype was refined in Phase 3 of the OSD ATL Contingency Contractor Optimization project to support strategic planning for contingency contractors. The tool uses a model to optimize the total workforce mix by minimizing the combined total costs for the selected mission scenarios. The model will optimize the match of personnel types (military, DoD civilian, and contractors) and capabilities to meet the mission requirements as effectively as possible, based on risk, cost, and other requirements.
This User Manual provides step-by-step instructions on the Contingency Contractor Optimization Tool's major features. Activities are organized by user role. The Contingency Contractor Optimization project is intended to address former Secretary Gates' mandate in a January 2011 memo and DoDI 3020.41 by delivering a centralized strategic planning tool that allows senior decision makers to quickly and accurately assess the impacts, risks, and mitigation strategies associated with utilizing contract support. Based on an electronic storyboard prototype developed in Phase 2, the Contingency Contractor Optimization Tool engineering prototype was refined in Phase 3 of the OSD ATL Contingency Contractor Optimization project to support strategic planning for contingency contractors. The planning tool uses a model to optimize the Total Force mix by minimizing the combined total costs for the selected mission scenarios. The model will optimize the match of personnel groups (military, DoD civilian, and contractors) and capabilities to meet the mission requirements as effectively as possible, based on risk, cost, and other requirements.
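To make the cost-minimizing workforce-mix idea in the two abstracts above concrete, the following is a minimal, purely illustrative sketch: a linear program that minimizes total cost while meeting notional mission requirements and availability ceilings. The personnel categories, capabilities, costs, and constraint structure are assumptions for illustration only and are not taken from the actual tool.

```python
# Illustrative workforce-mix LP: minimize cost of meeting notional mission
# requirements with three personnel types. All numbers are made up.
import numpy as np
from scipy.optimize import linprog

types = ["military", "dod_civilian", "contractor"]
caps  = ["logistics", "base_ops"]

cost  = {"military": 120, "dod_civilian": 100, "contractor": 90}    # $K per person-year (notional)
avail = {"military": 300, "dod_civilian": 150, "contractor": 1000}  # headcount ceilings (notional)
req   = {"logistics": 400, "base_ops": 250}                         # required headcount (notional)

n_t, n_c = len(types), len(caps)
idx = lambda t, c: t * n_c + c          # flatten x[type, capability] into a vector

# Objective: total cost of all assignments
c_vec = np.array([cost[types[t]] for t in range(n_t) for c in range(n_c)])

A_ub, b_ub = [], []
# Meet each capability requirement: sum over types >= req  ->  -sum <= -req
for c in range(n_c):
    row = np.zeros(n_t * n_c)
    for t in range(n_t):
        row[idx(t, c)] = -1.0
    A_ub.append(row); b_ub.append(-req[caps[c]])
# Respect each personnel type's availability ceiling
for t in range(n_t):
    row = np.zeros(n_t * n_c)
    for c in range(n_c):
        row[idx(t, c)] = 1.0
    A_ub.append(row); b_ub.append(avail[types[t]])

res = linprog(c_vec, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
for t in range(n_t):
    for c in range(n_c):
        print(f"{types[t]:>13} -> {caps[c]:<10} {res.x[idx(t, c)]:6.0f}")
print(f"total cost: ${res.fun:,.0f}K")
```

The real tool additionally weighs risk and other requirements, which in an LP/MIP formulation would appear as extra constraints or penalty terms; this sketch shows only the cost-minimization core.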
Sirocco is a massively parallel, high-performance storage system for the exascale era. It emphasizes client-to-client coordination, low server-side coupling, and free data movement to improve resilience and performance. Its architecture is inspired by peer-to-peer and victim-cache architectures. By leveraging these ideas, Sirocco natively supports several media types, including RAM, flash, disk, and archival storage, with automatic migration between levels. Sirocco also includes storage interfaces and support that are more advanced than typical block storage. Sirocco enables clients to efficiently use key-value storage or block-based storage with the same interface. It also provides several levels of transactional data updates within a single storage command, including full ACID-compliant updates. This transaction support extends to updating several objects within a single transaction. Further support is provided for concurrency control, enabling greater performance for workloads while providing safe concurrent modification. By pioneering these and other technologies and techniques in the storage system, Sirocco is poised to fulfill a need for a massively scalable, write-optimized storage system for exascale systems. This is version 1.0 of a document reflecting the current and planned state of Sirocco. Further versions of this document will be accessible at http://www.cs.sandia.gov/Scalable_IO/sirocco .
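To illustrate the interface concepts described above (one interface serving key-value and block-style access, with multi-object updates grouped in a transaction), here is a hypothetical toy sketch; none of these names or semantics are Sirocco's actual API.

```python
# Hypothetical toy model (NOT Sirocco's API): key-value and block-style
# writes against the same handle, applied atomically at commit time.
class Txn:
    def __init__(self, store):
        self.store, self.writes = store, []

    def put(self, obj_id, key, value):            # key-value style update
        self.writes.append((obj_id, key, value))

    def write_block(self, obj_id, offset, data):  # block style: offset acts as the key
        self.writes.append((obj_id, offset, data))

    def commit(self):
        # All-or-nothing apply across several objects (ACID-style intent);
        # a real system would add logging, locking/validation, and durability.
        for obj_id, key, value in self.writes:
            self.store.setdefault(obj_id, {})[key] = value
        self.writes.clear()


store = {}                     # stand-in for a storage server's object space
txn = Txn(store)
txn.put("inode.42", "owner", "alice")                       # metadata object
txn.write_block("data.42", 0, b"first bytes of file data")  # data object
txn.commit()                   # both objects updated in one transaction
print(store["inode.42"]["owner"], len(store["data.42"][0]))
```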