GICNT Nuclear Detection Working Group Meeting: Exercise Playbook Scenario Summaries
Physics of Plasmas
Active brazes have been used for many years to produce bonds between metal and ceramic objects. By including a relatively small amount of a reactive additive in the braze, one seeks to improve its wetting and spreading behavior. The additive modifies the substrate, either by a chemical surface reaction or possibly by alloying. By its nature, joining with active brazes is a complex, nonequilibrium, non-steady-state process that couples chemical reaction and reactant and product diffusion to the rheology and wetting behavior of the braze. Most of these subprocesses take place in the interfacial region and are difficult to access by experiment. To improve control over the brazing process, one requires a better understanding of the melting of the active braze, the rate of the chemical reaction, reactant and product diffusion rates, and the nonequilibrium, composition-dependent surface tension and viscosity. This report identifies ways in which modeling and theory can assist in improving our understanding.
Visible spectroscopy is a powerful diagnostic, allowing plasma parameters ranging from temperature and density to electric and magnetic fields to be measured. Spectroscopic dopants are commonly introduced to make these measurements. On Z, dopants are introduced passively (e.g., a salt deposited on a current-carrying surface); however, in some cases, passive doping can limit the times and locations at which measurements can be made. Active doping utilizes an auxiliary energy source to disperse the dopant independently from the rest of the experiment. The objective of this LDRD project was to explore laser ablation as a method of actively introducing spectroscopic dopants. Ideally, the laser energy would be delivered to the dopant via fiber optic, which would eliminate the need for time-intensive laser alignments in the Z chamber. Experiments conducted in a light lab to assess the feasibility of fiber-coupled and open-beam laser-ablated doping are discussed.
We develop a novel calibration approach to address the problem of predictive k-ε RANS simulations of jet-in-crossflow. Our approach is based on the hypothesis that predictive k-ε parameters can be obtained by estimating them from a strongly vortical flow, specifically, flow over a square cylinder. In this study, we estimate three k-ε parameters, Cμ, Cε2, and Cε1, by fitting 2D RANS simulations to experimental data. We use polynomial surrogates of 2D RANS for this purpose. We conduct an ensemble of 2D RANS runs using samples of (Cμ, Cε2, Cε1) and regress Reynolds stresses onto the samples using a simple polynomial. We then use this surrogate of the 2D RANS model to infer a joint distribution for the k-ε parameters by solving a Bayesian inverse problem conditioned on the experimental data. The calibrated (Cμ, Cε2, Cε1) distribution is used to seed an ensemble of 3D jet-in-crossflow simulations. We compare the ensemble's predictions of the flowfield, at two planes, to PIV measurements and estimate the predictive skill of the calibrated 3D RANS model. We also compare it against 3D RANS predictions using the nominal (uncalibrated) values of (Cμ, Cε2, Cε1), and find that calibration delivers a significant improvement to the predictive skill of the 3D RANS model. We repeat the calibration using surrogate models based on kriging and find that the calibration based on these more accurate models is not much better than that obtained with simple polynomial surrogates. We discuss the reasons for this rather surprising outcome.
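As a hedged sketch of the surrogate-plus-Bayes workflow described above, the following example calibrates three parameters of a cheap stand-in for the 2D RANS model (the response function, parameter bounds, and noise level are all invented for illustration): an ensemble of runs is regressed onto a quadratic polynomial surrogate, and a grid posterior is then evaluated conditioned on one synthetic observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a 2D RANS run: maps (Cmu, Ce2, Ce1) to a
# Reynolds-stress-like scalar. The real simulator is far more expensive.
def rans_output(theta):
    cmu, ce2, ce1 = theta
    return 1.5 * cmu + 0.8 * ce2 - 0.5 * ce1 + 0.3 * cmu * ce2

# 1) Ensemble of "simulator" runs at sampled parameter values.
thetas = rng.uniform([0.05, 1.6, 1.2], [0.15, 2.1, 1.6], size=(200, 3))
y = np.array([rans_output(t) for t in thetas])

# 2) Regress the outputs onto a simple quadratic polynomial surrogate.
def features(t):
    cmu, ce2, ce1 = t.T
    return np.column_stack([np.ones(len(t)), cmu, ce2, ce1,
                            cmu**2, ce2**2, ce1**2,
                            cmu * ce2, cmu * ce1, ce2 * ce1])

coef, *_ = np.linalg.lstsq(features(thetas), y, rcond=None)
surrogate = lambda t: features(np.atleast_2d(t)) @ coef

# 3) Bayesian inverse problem on a parameter grid, conditioned on one
#    noisy synthetic "experimental" observation.
y_obs = rans_output(np.array([0.09, 1.92, 1.44])) + rng.normal(0, 0.01)
grid = np.array(np.meshgrid(np.linspace(0.05, 0.15, 15),
                            np.linspace(1.6, 2.1, 15),
                            np.linspace(1.2, 1.6, 15))).reshape(3, -1).T
loglike = -0.5 * ((surrogate(grid) - y_obs) / 0.01) ** 2
post = np.exp(loglike - loglike.max())
post /= post.sum()
theta_map = grid[np.argmax(post)]   # posterior mode of (Cmu, Ce2, Ce1)
```

Because a single scalar observation cannot pin down three parameters, this toy posterior concentrates on a ridge; the study above conditions on Reynolds-stress data at many locations, which is what makes the inferred joint distribution informative.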
As alternative energy generating devices (e.g., solar, wind) are added onto the electrical energy grid (AC grid), irregularities in the available electricity due to natural occurrences (e.g., clouds reducing solar input or wind bursts increasing wind-powered turbine output) will increase dramatically. Due to their almost instantaneous response, modern flywheel-based energy storage devices can act as a mechanical mechanism to regulate the AC grid; however, improved spin speeds will be required to meet the energy levels needed to balance these green-energy variances. Focusing on composite flywheels, we have investigated methods for improving spin speeds based on materials needs. These composite flywheels are composed of carbon fiber (C-fiber), glass fiber, and a glue (resin) that holds them together. For this effort, we focused on adding fillers to the resin in order to improve its properties. Because standard meso-sized fillers require high loadings, this project investigated the utility of ceramic nanofillers, which can be added at very low loadings due to their high surface area. The impact of TiO2 nanowires on the final strength of the flywheel material was determined by a three-point-bend test. The introduction of nanomaterials increased the strength of the flywheel's C-fiber-resin component, with an upper limit of a 30% increase being observed. An analysis of the economic impact of utilizing the nanowires was undertaken; after accounting for new-technology and additional production costs, the return on improved-nanocomposite investment was approximated at 4-6% per year over the 20-year expected service life. Further, it was determined that the 30% improvement in strength may enable a 20-30% reduction in flywheel energy storage cost ($/kW-h).
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
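The sample-size logic above can be sketched numerically. Assuming a normal population, the exact one-sided tolerance-bound factor follows from the noncentral t distribution; the smallest n whose factor times an assumed scatter fits inside the margin is then found by direct search (the margin and sigma values below are hypothetical):

```python
import numpy as np
from scipy.stats import nct, norm

def k_factor(n, coverage=0.90, confidence=0.95):
    """One-sided normal tolerance-bound factor (exact, via noncentral t)."""
    delta = norm.ppf(coverage) * np.sqrt(n)
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

def min_sample_size(margin, sigma, n_max=200):
    """Smallest n whose tolerance-bound uncertainty k(n)*sigma fits inside the margin."""
    for n in range(3, n_max + 1):
        if k_factor(n) * sigma < margin:
            return n
    return None  # margin too small for any n <= n_max

# Hypothetical planning case: assumed scatter sigma = 1, estimated margin = 2.
n_req = min_sample_size(margin=2.0, sigma=1.0)
```

In practice the assumed sigma would itself come from prior data, and the search would be repeated across the plausible range of margins to quantify risk when only a few test units are available.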
Journal of Physical Chemistry C
We performed optical electric field measurements on nanosecond time scales using the electro-optic crystal beta barium borate (BBO). Tests were based on a preliminary benchtop design intended as a proof-of-principle stepping stone towards a modular-design optical E-field diagnostic that has no metal in the interrogated environment. The long-term goal is to field a modular version of the diagnostic in experiments on large-scale x-ray source facilities, or in similarly harsh environments.
Journal of Materials Science
In the supercritical CO2-water-mineral systems relevant to subsurface CO2 sequestration, interfacial processes at the supercritical fluid-mineral interface will strongly affect core- and reservoir-scale hydrologic properties. Experimental and theoretical studies have shown that water films will form on mineral surfaces in supercritical CO2, but will be thinner than those that form in vadose zone environments at any given matric potential. The theoretical model presented here allows assessment of water saturation as a function of matric potential, a critical step for evaluating relative permeabilities in the CO2 sequestration environment. The experimental water adsorption studies, using quartz crystal microbalance and Fourier transform infrared (FTIR) spectroscopy methods, confirm the major conclusions of the adsorption/condensation model. The FTIR study additionally showed that CO2 intercalation into clays, if it occurs, does not involve carbonate or bicarbonate formation or significant restriction of CO2 mobility. We have shown that the water film that forms in supercritical CO2 is reactive with common rock-forming minerals, including albite, orthoclase, labradorite, and muscovite. The experimental data indicate that reactivity is a function of water film thickness; at a water activity of 0.9, the greatest extent of reaction in scCO2 occurred in areas (step edges, surface pits) where capillary condensation thickened the water films. This suggests that dissolution/precipitation reactions may occur preferentially in small pores and pore throats, where they may have a disproportionately large effect on rock hydrologic properties. Finally, a theoretical model is presented here that describes the formation and movement of CO2 ganglia in porous media, allowing assessment of the effect of pore size and structural heterogeneity on capillary trapping efficiency.
The model results also suggest possible engineering approaches for optimizing trapping capacity and for monitoring ganglion formation in the subsurface.
As with other large healthcare organizations, medical adverse events at the Department of Veterans Affairs (VA) facilities can expose patients to unforeseen negative risks. VHA leadership recognizes that properly handled disclosure of adverse events can minimize potential harm to patients and negative consequences for the effective functioning of the organization. The work documented here seeks to help improve the disclosure process by situating it within the broader theoretical framework of issues management, and to identify opportunities for process improvement through modeling disclosure and reactions to disclosure. The computational model will allow a variety of disclosure actions to be tested across a range of incident scenarios. Our conceptual model will be refined in collaboration with domain experts, especially by continuing to draw on insights from VA Study of the Communication of Adverse Large-Scale Events (SCALE) project researchers.
Corrosion tests at 400, 500, and 680°C were performed using four high-temperature alloys: 347SS, 321SS, In625, and HA230. Molten salt chemistry was monitored over time through analysis of nitrite, carbonate, and dissolved metals. Metallography was performed on alloys tested at 500 and 680°C, due to the relatively thin oxide scale observed at 400°C. At 500°C, corrosion of iron-based alloys took the form of chromium depletion and iron oxides, while nickel-based alloys also showed chromium depletion and formation of NiO. Chromium was detected in relatively low concentrations at this temperature. At 680°C, significant surface corrosion occurred, with metal losses greater than 450 microns/year after 1025 hours of exposure. Iron-based alloys formed complex iron, sodium, and chromium oxides. Some data suggest grain-boundary chromium depletion of 321SS. Nickel alloys formed NiO and metallic nickel corrosion morphologies, with HA230 displaying significant internal oxidation in the form of chromia. Both nickel alloys exhibited worse corrosion than the iron-based alloys, likely due to preferential dissolution of chromium, molybdenum, and tungsten.
Cell Transplantation
Proceedings of SPIE
Journal of Vacuum Science & Technology B
Rock Mechanics and Rock Engineering
The performance, reproducibility, and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic (flow) mass transport inside the liquid, followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity, and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.
This report documents work conducted in FY13 on electrical discharge experiments performed to develop predictive computational models of the fundamental processes of surface breakdown in the vicinity of high-permittivity material interfaces. Further, experiments were conducted to determine if free carrier electrons could be excited into the conduction band thus lowering the effective breakdown voltage when UV photons (4.66 eV) from a high energy pulsed laser were incident on the rutile sample. This report documents the numerical approach, the experimental setup, and summarizes the data and simulations. Lastly, it describes the path forward and challenges that must be overcome in order to improve future experiments for characterizing the breakdown behavior for rutile.
Computers, Materials & Continua
Nano Letters
Materials Letters
Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veterans Affairs (VA), motivated by a varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of the comparative effectiveness of interventions.
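A toy version of the first, contact-network model can be sketched as a discrete-day SIR simulation (the network, transmission probability, and infectious period below are invented for illustration; the VA models described above add staff categories, locations, and schedules):

```python
import random

def simulate_sir(adj, seeds, p_transmit=0.10, days_infectious=5, rng=None):
    """Discrete-day SIR on a contact network; returns total ever infected."""
    rng = rng or random.Random(0)
    state = {v: "S" for v in adj}
    timer = {}
    for s in seeds:
        state[s] = "I"
        timer[s] = days_infectious
    while any(st == "I" for st in state.values()):
        newly = []
        for v, st in state.items():
            if st != "I":
                continue
            for nb in adj[v]:                 # contacts of an infectious node
                if state[nb] == "S" and rng.random() < p_transmit:
                    newly.append(nb)
            timer[v] -= 1
        for v in adj:                         # recoveries
            if state[v] == "I" and timer[v] <= 0:
                state[v] = "R"
        for v in newly:                       # new infections start tomorrow
            if state[v] == "S":
                state[v] = "I"
                timer[v] = days_infectious
    return sum(1 for st in state.values() if st == "R")

# Toy contact network: a ring of 30 staff/patients plus one shortcut contact.
adj = {i: [(i - 1) % 30, (i + 1) % 30] for i in range(30)}
adj[0].append(15)
adj[15].append(0)
outbreak_size = simulate_sir(adj, seeds=[0], rng=random.Random(42))
```

Intervention comparisons then amount to re-running the simulation with modified parameters, e.g., a reduced p_transmit for masking or edges removed for isolation, and comparing outbreak-size distributions.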
An ideal 3He detector replacement for the near- to medium-term future will use materials that are easy to produce and well understood, while maintaining thermal neutron detection efficiency and gamma rejection close to the 3He standard. Toward this end, we investigated the use of standard alkali halide scintillators interfaced with 6Li and read out with photomultiplier tubes (PMTs). Thermal neutrons are captured on 6Li with high efficiency, emitting high-energy alpha and triton (3H) reaction products. These particles deposit energy in the scintillator, providing a thermal neutron signal; discrimination against gamma interactions is possible via pulse shape discrimination (PSD), since heavy particles produce faster pulses in alkali halide crystals. We constructed and tested two classes of detectors based on this concept. In one case 6Li is used as a dopant in polycrystalline NaI; in the other, a thin Li foil is used as a conversion layer. In the configurations studied here, these systems are sensitive to both gamma and neutron radiation, with discrimination between the two and good energy resolution for gamma spectroscopy. We present results from our investigations, including measurements of the neutron efficiency and gamma rejection for the two detector types. We also show a comparison with Cs2LiYCl6:Ce (CLYC), which is emerging as the standard scintillator for simultaneous gamma and thermal neutron detection, and which also allows PSD. We conclude that 6Li foil with CsI scintillating crystals has near-term promise as a thermal neutron detector in applications previously dominated by 3He detectors. The other approach, 6Li-doped alkali halides, has some potential but requires more work to understand material properties and improve fabrication processes.
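The charge-comparison PSD idea above can be illustrated with synthetic pulses: heavy charged particles from 6Li capture excite proportionally less of the slow scintillation component, so the fraction of charge in the pulse tail separates neutron-like from gamma-like events. The decay constants, component fractions, and split time below are illustrative, not measured values.

```python
import numpy as np

t = np.linspace(0, 2000, 2001)  # time axis, ns

def pulse(tau_fast, tau_slow, slow_frac):
    """Two-component exponential scintillation pulse, normalized to unit area."""
    p = (1 - slow_frac) * np.exp(-t / tau_fast) + slow_frac * np.exp(-t / tau_slow)
    return p / p.sum()

def tail_fraction(p, t_split=150.0):
    """Charge-comparison PSD parameter: fraction of total charge after t_split."""
    return p[t > t_split].sum()

# Heavy particles -> faster pulse (smaller slow fraction); gammas -> slower pulse.
neutron_like = pulse(tau_fast=30.0, tau_slow=600.0, slow_frac=0.1)
gamma_like = pulse(tau_fast=30.0, tau_slow=600.0, slow_frac=0.5)
psd_neutron = tail_fraction(neutron_like)
psd_gamma = tail_fraction(gamma_like)
```

Plotting the tail fraction against total pulse area for many events yields the familiar two-band PSD scatter plot from which neutron/gamma cuts are drawn.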
Journal of Computational Physics
Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum, and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated through a Station Blackout analysis of a simplified pressurized water reactor (PWR).
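A minimal sketch of the underlying reliability computation, assuming a simple linear-Gaussian limit state rather than a RELAP-7 response: failure is the event g(X) < 0, its probability is estimated by Monte Carlo, and for this special case the reliability-index structure that methods such as FORM exploit gives the same answer in closed form. All distributions and thresholds here are invented for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# g(x) < 0 denotes failure: capacity ~ N(10, 1), demand ~ N(7, 1.5).
def limit_state(x):
    return x[:, 0] - x[:, 1]

n = 200_000
x = np.column_stack([rng.normal(10, 1, n), rng.normal(7, 1.5, n)])
p_fail_mc = float(np.mean(limit_state(x) < 0))

# Linear-Gaussian case: reliability index beta = mu_g / sigma_g and
# p_fail = Phi(-beta); FORM-type methods locate this beta by optimization
# (the most probable failure point) when g is nonlinear and expensive.
beta = (10 - 7) / math.hypot(1.0, 1.5)
p_fail_form = 0.5 * math.erfc(beta / math.sqrt(2))
```

For an expensive simulator the Monte Carlo step is what becomes infeasible, which is exactly why the optimization-based reliability methods described above are attractive.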
Most microstructural evolution in materials progresses with multiple processes occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, radiation-induced defect formation, and swelling, often with temperature gradients present. All of these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements from Potts Monte Carlo, phase-field, and other models have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for simulating microstructural evolution processes at the mesoscale by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.
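A minimal Potts-model grain-growth sweep, in the spirit of (but far simpler than) the SPPARKS applications described above, can be sketched as follows; the lattice size, number of spin states, and the zero-temperature acceptance rule are illustrative choices:

```python
import random

def unlike_neighbors(lat, i, j):
    """Nearest neighbors (periodic) with a different spin: local boundary energy."""
    n = len(lat)
    s = lat[i][j]
    nbrs = (lat[(i - 1) % n][j], lat[(i + 1) % n][j],
            lat[i][(j - 1) % n], lat[i][(j + 1) % n])
    return sum(1 for q in nbrs if q != s)

def mc_sweep(lat, q_states, rng):
    """One Monte Carlo sweep: attempt n*n spin flips, rejecting energy increases."""
    n = len(lat)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        old = lat[i][j]
        e_old = unlike_neighbors(lat, i, j)
        lat[i][j] = rng.randrange(q_states)
        if unlike_neighbors(lat, i, j) > e_old:   # zero-temperature Metropolis
            lat[i][j] = old

def total_energy(lat):
    n = len(lat)
    return sum(unlike_neighbors(lat, i, j) for i in range(n) for j in range(n))

rng = random.Random(0)
n, q = 32, 8
lat = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
e_initial = total_energy(lat)
for _ in range(20):
    mc_sweep(lat, q, rng)
e_final = total_energy(lat)   # boundary energy drops as grains coarsen
```

Hybrid models of the kind described above couple such a spin lattice to additional fields (composition, temperature, defect concentration) updated by phase-field or diffusion solvers between sweeps.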
We created interactive demonstration activities for Take Our Daughters & Sons to Work Day (TODSTWD) 2013 in order to promote general interest in chemistry and also generate awareness of the type of work our laboratories can perform. "Curious about Mars Rover Curiosity?" performed an elemental analysis on rocks brought to our lab using the same technique utilized on the planet Mars by the NASA robotic explorer Curiosity. "Food is Chemistry?" utilized a mass spectrometer to measure, in seconds, each participant's breath in order to identify the food item consumed for the activity. A total of over 130 children participated in these activities over a 3-hour block, and feedback was positive. This document reports the materials (including handouts), experimental procedures, and lessons learned so that future demonstrations can benefit from the baseline work performed. We also present example results used to prepare the Food activity and example results collected during the Curiosity demo.
Radiation transport calculations were performed to compute the angular tallies for scattered gamma rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons throughout scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Information Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods.
ECS Transactions (Online)
Resistive random access memory (ReRAM) has become a promising candidate for next-generation high-performance non-volatile memory; it operates by electrically tuning resistance states via modulation of vacancy concentrations. Here, we demonstrate a wafer-scale process for resistive switching in tantalum oxide that is completely CMOS compatible. The resulting devices are forming-free, with greater than 1x10^5 cycle endurance.
2013 Optical Interconnects Conference, OI 2013
As high-performance scientific computing continues to advance to higher degrees of parallel computing power, the system interconnect becomes a more critical performance-related resource. Optical links have been used strategically to reduce cost per performance in HPC system interconnects for the past decade. In this paper, we explore the performance implications of optical link performance on scientific applications in leading network designs, placing optical signaling technology in the context of its usefulness to HPC system interconnects. © 2013 IEEE.
Journal of Chemical Physics
Density Functional Theory points to a key role of K+ solvation in the low-energy two-dimensional arrangement of water molecules on the basal surface of muscovite. At a coverage of 9 water molecules per 2 surface potassium ions, there is room to accommodate the ions into wetting layers wherein half of them are hydrated by 3 and the other half by 4 water molecules, with no broken H-bonds, or wherein all are hydrated by 4. Relative to the "fully connected network of H-bonded water molecules" that Odelius found to form "a cage around the potassium ions," the hydrating arrangements are several tens of meV/H2O better bound. Thus, low-temperature wetting on muscovite is not driven towards "ice-like" hexagonal coordination. Instead, solvation forces dominate. © 2013 AIP Publishing LLC.
51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition 2013
Sandia National Laboratories has concluded field testing of its wind turbine rotor equipped with trailing-edge flaps. The blade design, fabrication, and integration which have been described in previous papers are briefly reviewed and then a portion of the data is presented and analyzed. Time delays observed in the time-averaged response to stepwise flap motions are consistent with the expected time scales of the structural and aerodynamic phenomena involved. Control authority of the flaps is clearly seen in the blade strain data and in hub-mounted video of the blade tip movement. © 2013 by the American Institute of Aeronautics and Astronautics, Inc.
Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference
This paper discusses the treatment of uncertainties corresponding to relatively few samples of random-variable quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse samples it is not practical to have a goal of accurately estimating the underlying variability distribution (probability density function, PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a desired percentage of the actual PDF, say 95% included probability, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the random-variable range corresponding to the desired percentage of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem an interesting and difficult one. In this paper the performance of five uncertainty representation techniques is characterized on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods exhibit significantly better overall performance than the others according to the objectives and performance measures emphasized. © 2012 AIAA.
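The two opposing objectives can be checked empirically. The sketch below (population, n, and target levels are illustrative) verifies that a one-sided 95%-coverage/90%-confidence normal tolerance bound computed from ten samples bounds the true 95th percentile in roughly 90% of trials, while the naive sample maximum does so far less reliably:

```python
import numpy as np
from scipy.stats import nct, norm

rng = np.random.default_rng(2)
n, trials = 10, 5000
coverage, confidence = 0.95, 0.90

# Exact one-sided normal tolerance-bound factor for this (coverage, confidence, n).
k = nct.ppf(confidence, df=n - 1, nc=norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)

true_p95 = norm.ppf(coverage)   # 95th percentile of the N(0,1) population
samples = rng.normal(size=(trials, n))
tol_bounds = samples.mean(axis=1) + k * samples.std(axis=1, ddof=1)

cover_tol = np.mean(tol_bounds >= true_p95)            # ~confidence = 0.90
cover_max = np.mean(samples.max(axis=1) >= true_p95)   # = 1 - 0.95**10, ~0.40
```

The second, over-conservatism objective can be probed the same way, by recording how far the tolerance bound overshoots the true percentile in each trial.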
Langmuir
The characterization of liposomes was undertaken using in-situ microfluidic transmission electron microscopy. Liposomes were imaged without contrast enhancement staining or cryogenic treatment, allowing for the observation of functional liposomes in an aqueous environment. The stability and quality of the liposome structures observed were found to be highly dependent on the surface and liposome chemistries within the liquid cell. The successful imaging of liposomes suggests the potential for the extension of in-situ microfluidic TEM to a wide variety of other biological and soft matter systems and processes. © 2013 American Chemical Society.
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
We design a space-efficient algorithm that approximates the transitivity (global clustering coefficient) and total triangle count with only a single pass through a graph given as a stream of edges. Our procedure is based on a classic probabilistic result, the birthday paradox. When the transitivity is constant and there are more edges than wedges (common properties for social networks), we can prove that our algorithm requires O(√n) space (n is the number of vertices) to provide accurate estimates. We run a detailed set of experiments on a variety of real graphs and demonstrate that the memory requirement of the algorithm is a tiny fraction of the graph. For example, even for a graph with 200 million edges, our algorithm stores just 60,000 edges to give accurate results. Being a single-pass streaming algorithm, our procedure also maintains a real-time estimate of the transitivity/number of triangles of a graph, by storing a minuscule fraction of edges.
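The wedge-sampling idea underlying the estimator can be sketched on a stored graph (this is the non-streaming analogue; the algorithm above replaces full storage with birthday-paradox-style edge reservoirs). Transitivity is the fraction of wedges, i.e. paths of length two, that are closed into triangles:

```python
import random
from itertools import combinations

def transitivity_exact(adj):
    """Closed wedges / total wedges over the whole stored graph."""
    wedges = sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())
    closed = sum(1 for v in adj
                 for a, b in combinations(sorted(adj[v]), 2) if b in adj[a])
    return closed / wedges

def transitivity_estimate(adj, n_samples, rng):
    """Sample wedges uniformly (centers weighted by d*(d-1)/2), check closure."""
    nodes = list(adj)
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    closed = 0
    for _ in range(n_samples):
        v = rng.choices(nodes, weights=weights)[0]
        a, b = rng.sample(sorted(adj[v]), 2)   # a uniform wedge centered at v
        closed += b in adj[a]
    return closed / n_samples

# Toy graph: two triangles sharing a vertex, plus a pendant edge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

exact = transitivity_exact(adj)   # 6 closed wedges / 12 wedges = 0.5
est = transitivity_estimate(adj, 20000, random.Random(0))
```

The accuracy of the sampled estimate depends only on the number of wedge samples, not on the graph size, which is why a small stored sample suffices even for graphs with hundreds of millions of edges.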