Dynamic compression experiments on liquid deuterium above the melt boundary
Abstract not provided.
ESARDA Bulletin
As the 21st century progresses, new nuclear facilities and the expansion of nuclear activities into new countries will require the International Atomic Energy Agency (IAEA) to place a higher reliance on attaining and maintaining Continuity of Knowledge (CoK) of its safeguards information than is currently practiced. Additionally, the conceptual view of where and how CoK can be applied will need to evolve to support improved efficiency and efficacy in drawing a safeguards conclusion for each Member State. The ability to draw a safeguards conclusion for a Member State will be predicated on the confidence that CoK has been attained and subsequently maintained with respect to the data and information streams used by the IAEA. This confidence can be described as a function of factors such as elapsed time since the measurement, surveillance of attributes, authentication of information, historic knowledge of potential system failures, and the number and type of data collections. A set of general scenarios is further described for determining what is required to attain CoK and whether CoK has been maintained. A high-level analysis of example scenarios is presented to identify failures or gaps that could cause a loss of CoK. Potential areas for technological research and development are discussed for the next generation of CoK tools.
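The abstract states only that CoK confidence is a function of these factors, not how they combine. A purely illustrative sketch, with hypothetical weights and an assumed exponential time decay, might look like:

```python
import math

def cok_confidence(hours_since_measurement, surveillance_intact,
                   data_authenticated, known_failure_rate,
                   num_collections, decay_hours=720.0):
    """Illustrative Continuity-of-Knowledge confidence score in [0, 1].

    Every weight and the exponential decay constant below are
    hypothetical choices for this sketch; the source describes the
    factors but not a specific combining function.
    """
    # Confidence erodes as time passes since the last measurement.
    time_factor = math.exp(-hours_since_measurement / decay_hours)
    # Intact surveillance and authenticated data preserve confidence.
    surveillance_factor = 1.0 if surveillance_intact else 0.5
    auth_factor = 1.0 if data_authenticated else 0.6
    # Historic system failure rates discount the score.
    reliability_factor = 1.0 - known_failure_rate
    # Additional independent data collections give diminishing returns.
    collection_factor = 1.0 - 0.5 ** num_collections
    return (time_factor * surveillance_factor * auth_factor
            * reliability_factor * collection_factor)
```

Any real scoring scheme would need validated weights; the sketch only shows the qualitative behavior (decay with elapsed time, gain from more collections) that the abstract describes.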
Polymer nanocomposite films consisting of polystyrene (PS) and lead sulfide (PbS) quantum dots, as well as pure PbS quantum dot films were synthesized for the purpose of investigating the pressure directed assembly (PDA) of the nanomaterials and the interactions of polystyrene and the quantum dot superlattice under pressure. Samples were compressed using a diamond anvil cell (DAC) to pressures greater than 15 GPa and studied using x-ray synchrotron radiation in order to show the changes in the d-spacing of the superlattice with respect to pressure. Absorption characteristics were investigated with ultraviolet visible spectroscopy (UV/Vis), while structure and long range ordering of the lattice were studied using small angle x-ray scattering (SAXS) as well as grazing incidence small angle scattering (GISAXS). Particle size was examined with transmission electron microscopy (TEM). These inquiries into size, structure, and interactions were performed in order to gain a baseline understanding of the interplay between nanoparticles and a simple polymer in a composite system and how the composite systems can be composed in future experiments.
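The superlattice d-spacing tracked in these SAXS measurements follows directly from the peak position q via d = 2π/q. A minimal sketch, with hypothetical peak positions chosen only to illustrate lattice contraction under compression:

```python
import math

def d_spacing_nm(q_inv_nm):
    """Superlattice d-spacing (nm) from a SAXS peak position q (nm^-1).

    Standard relation d = 2*pi/q for the first-order scattering peak.
    """
    return 2.0 * math.pi / q_inv_nm

# Hypothetical peak positions (not measured values from this work):
# the peak shifts to higher q as the DAC compresses the superlattice.
ambient = d_spacing_nm(0.85)      # before compression
compressed = d_spacing_nm(1.02)   # at elevated pressure
strain = (ambient - compressed) / ambient
```

Plotting d against the DAC pressure in this way is how the d-spacing changes described above would be quantified.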
Thermal analysts address a wide variety of applications requiring the simulation of radiation heat transfer phenomena. There are gaps in the currently available modeling capabilities. Addressing these gaps would allow for the consideration of additional physics and increase confidence in simulation predictions. This document outlines a five year plan to address the current and future needs of the analyst community with regards to modeling radiation heat transfer processes. This plan represents a significant multi-year effort that must be supported on an ongoing basis.
Frontiers in Chemistry
This report provides a short overview of the DNN R&D funded project, Time-Encoded Imagers. The project began in FY11 and concluded in FY14. The Project Description below provides the overall motivation and objectives for the project as well as a summary of programmatic direction. It is followed by a short description of each task and the resulting deliverables.
Solar spectral data for the US are limited, due in part to the high cost of commercial spectrometers. Solar spectral information is necessary for accurate photovoltaic (PV) performance forecasting, especially for large utility-scale PV installations. A low-cost solar spectral sensor would address these obstacles. In this report, a novel low-cost, discrete-band sensor device, comprised of five narrow-band sensors, is described. The hardware is built from commercial-off-the-shelf components to keep the cost low. Data processing algorithms were developed and are being refined for robustness. PV module short-circuit current ($I_{sc}$) prediction methods were developed based on an interaction-terms regression methodology and a spectrum reconstruction methodology for computing $I_{sc}$. The results suggest the spectrum computed using the reconstruction method agreed well with the measured spectrum from the wide-band spectrometer (RMS error of 38.2 W/m$^2$-nm). Further analysis of the computed $I_{sc}$ found close agreement, with an RMS error of 0.05 A. The goal is ubiquitous adoption of the low-cost spectral sensor in solar PV and other applications such as weather forecasting.
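An interaction-terms regression for $I_{sc}$ prediction from the five band readings can be sketched as follows. The feature construction (pairwise products of band irradiances) and the ordinary least-squares fit are assumptions for illustration, not the report's exact algorithm:

```python
import numpy as np

def interaction_features(bands):
    """Augment five narrow-band irradiance readings with interaction terms.

    `bands` is an (n_samples, 5) array. The choice of pairwise products
    as interaction terms is an assumption for this sketch.
    """
    bands = np.asarray(bands, dtype=float)
    n, k = bands.shape
    cross = [bands[:, i] * bands[:, j]
             for i in range(k) for j in range(i + 1, k)]
    # Intercept column, raw bands, then the pairwise interactions.
    return np.column_stack([np.ones(n), bands] + cross)

def fit_isc_model(bands, isc):
    """Least-squares fit of measured short-circuit current to the features."""
    X = interaction_features(bands)
    coef, *_ = np.linalg.lstsq(X, np.asarray(isc, dtype=float), rcond=None)
    return coef

def predict_isc(coef, bands):
    """Predict I_sc for new band readings using the fitted coefficients."""
    return interaction_features(bands) @ coef
```

Training data would pair logged band readings with reference $I_{sc}$ measurements; the fitted model then predicts $I_{sc}$ from the low-cost sensor alone.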
IEEE Standard 1547-2003 conformance testing of several interconnected microinverters was performed by Sandia National Laboratories (SNL) to determine whether there were emergent adverse behaviors of co-located, aggregated distributed energy resources. Experiments demonstrated that the certification tests could be expanded for multi-manufacturer microinverter interoperability. Evaluations determined the microinverters' response to abnormal voltage and frequency conditions, interruption of grid service, and cumulative power quality. No issues were identified to be caused by the interconnection of multiple devices.
Journal of Applied Physics
International Journal of Micro-Nano Scale Transport
The objectives of this project are to address the root cause implications of thermal runaway of Li-ion batteries by delivering a software architecture solution that can lead to the development of predictive mechanisms that are based on identification of species.
The goal of the workshop and this report is to identify common themes and standardize concepts for locality-preserving abstractions for exascale programming models.
For over two years, Sandia National Laboratories has been using a Gigabit Passive Optical Network (GPON) access layer for selected networks. The GPON equipment includes the Tellabs 1150 Multiservice Access Platform (MSAP) Optical Line Terminal (OLT), the Tellabs ONT709 and ONT709GP Optical Network Terminals (ONTs), and the Panorama PON Network Manager. In late 2013, the Tellabs equipment was updated to Software Release FP27.1_015130. Because a new software release has the potential to affect performance and functionality, it needed to be thoroughly tested. This report documents that testing. It also provides a comparison between the current release and the previous Software Release FP25.5.1_013274 that was being used.
Journal of Complex Networks
Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization, to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
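As a concrete special case, the classic k-core decomposition that the nucleus decomposition generalizes can be computed by degree peeling. A minimal sketch (this uses a simple minimum-degree scan for clarity, not the bucket-based linear-time peeling used in practice, and covers only k-cores, not general nuclei):

```python
def core_numbers(edges):
    """k-core decomposition by degree peeling.

    Repeatedly remove a minimum-degree vertex; the running maximum of
    removal degrees is each removed vertex's core number.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    degree = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        v = min(remaining, key=lambda x: degree[x])
        k = max(k, degree[v])
        core[v] = k
        remaining.remove(v)
        # Peeling v lowers the degree of its surviving neighbors.
        for w in adj[v]:
            if w in remaining:
                degree[w] -= 1
    return core
```

For example, a triangle with one pendant vertex yields core number 2 for the triangle vertices and 1 for the pendant. The nucleus decomposition replaces "vertices contained in edges" with "r-cliques contained in s-cliques", which is where the hierarchy of overlapping dense subgraphs comes from.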