There are several machines in this country that produce short bursts of neutrons for various applications. A few examples are the Z machine, operated by Sandia National Laboratories in Albuquerque, NM; the OMEGA Laser Facility at the University of Rochester in Rochester, NY; and the National Ignition Facility (NIF) operated by the Department of Energy at Lawrence Livermore National Laboratory in Livermore, California. They all incorporate neutron time-of-flight (nTOF) detectors that measure neutron yield, and the shapes of the waveforms from these detectors contain germane information about the plasma conditions that produce the neutrons. However, the signals can also be “clouded” by a certain fraction of neutrons that scatter off structural components and also arrive at the detectors, making analysis of the plasma conditions more difficult. These detectors operate in current mode, i.e., they have no discrimination, and all the photomultiplier anode charges are integrated rather than counted individually as they are in single-event counting. Up to now, there has not been a method for modeling an nTOF detector operating in current mode. MCNP-PoliMi was developed in 2002 to simulate neutron and gamma-ray detection in a plastic scintillator, and it produces a collision data output table describing each neutron and photon interaction occurring within the scintillator; however, the post-processing code that accompanies MCNP-PoliMi assumes a detector operating in single-event counting mode, not current mode. Therefore, the idea for this work was born: could a new post-processing code be written to simulate an nTOF detector operating in current mode? If so, could this process be used to address such issues as the impact of neutron scattering on the primary signal? Could it even identify sources of scattering (i.e., structural materials) that could be removed or modified to produce “cleaner” neutron signals? This process was first developed and then applied to the axial neutron time-of-flight detectors at the Z Facility mentioned above. First, MCNP-PoliMi was used to model relevant portions of the facility between the source and the detector locations. To obtain useful statistics, variance reduction was utilized. Then, the resulting collision output table produced by MCNP-PoliMi was further analyzed by a MATLAB post-processing code, which converted the energy deposited by neutron and photon interactions in the plastic scintillator (i.e., the nTOF detector) into light output, in units of MeVee (electron equivalent) versus time. The time response of the detector was then folded into the signal via another MATLAB code. The simulated response was then compared with experimental data and shown to be in good agreement. To address the issue of neutron scattering, an “Ideal Case” was constructed in which a plastic scintillator was placed at the same distance from the source for each detector location, but with no structural components in the problem. This was done to produce as “pure” a neutron signal as possible. The simulated waveform from this “Ideal Case” was then compared with the simulated data from the “Full Scale” geometry (i.e., the detector at the same location, but with all the structural materials now included). The “Ideal Case” was subtracted from the “Full Scale” geometry case, and the difference was taken to be the contribution due to scattering.
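A minimal sketch of the time-response folding step mentioned above is shown below: the simulated light-output history is convolved with an assumed detector impulse response. The single-exponential impulse-response shape, time constants, and variable names are illustrative assumptions only, not the actual MATLAB implementation used in this work.

```python
import numpy as np

# Illustrative sketch (not the original MATLAB code): fold an assumed detector
# impulse response into a simulated light-output history.
dt = 0.1e-9                                            # time bin width, s (assumed)
t = np.arange(0.0, 200e-9, dt)                         # time axis
light_output = np.exp(-((t - 80e-9) / 10e-9) ** 2)     # placeholder MeVee/s history

# Assumed single-exponential impulse response with a 2 ns decay constant.
tau = 2e-9
h = np.exp(-t / tau)
h /= h.sum()                                           # normalize so total charge is preserved

# Folding the detector time response into the signal is a discrete convolution.
detector_signal = np.convolve(light_output, h)[: t.size]
```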
The time response was deconvolved from the empirical data, and the contribution due to scattering was then subtracted. A transformation was then made from dN/dt to dN/dE to obtain neutron spectra at two different detector locations.
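The transformation from a time-of-flight histogram dN/dt to an energy spectrum dN/dE follows from the non-relativistic relation E = m d² / (2 t²) for a known flight distance d, so that dN/dE = (dN/dt) / |dE/dt| with |dE/dt| = m d² / t³. The sketch below illustrates this change of variables; the flight distance and histogram are placeholders, not values from the work described above.

```python
import numpy as np

M_N = 1.674927e-27          # neutron mass, kg
J_PER_MEV = 1.602177e-13    # joules per MeV

def tof_to_energy_spectrum(t, dN_dt, d):
    """Convert dN/dt (per second) at flight distance d (m) to dN/dE (per MeV).

    Assumes non-relativistic neutrons: E = m d^2 / (2 t^2).
    """
    E = M_N * d**2 / (2.0 * t**2) / J_PER_MEV      # neutron energy, MeV
    dE_dt = M_N * d**2 / t**3 / J_PER_MEV          # |dE/dt|, MeV per second
    dN_dE = dN_dt / dE_dt
    return E, dN_dE

# Placeholder example: a Gaussian arrival-time peak at an assumed 10 m flight path.
t = np.linspace(200e-9, 2000e-9, 500)
dN_dt = np.exp(-((t - 700e-9) / 50e-9) ** 2)
E, dN_dE = tof_to_energy_spectrum(t, dN_dt, d=10.0)
```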
The article briefly summarizes the siting history of salt nuclear waste repositories as it relates to the research that has been conducted in support of this overall mission. Project Salt Vault was a solid-waste disposal demonstration in bedded salt performed by Oak Ridge National Laboratory (ORNL) in Lyons, Kan. The US Atomic Energy Commission (AEC) intended to convert the project into a pilot plant for the storage of high-level waste. Despite these intentions, nearby solution mining and questionably plugged oil and gas boreholes resulted in the abandonment of the Lyons site. With help from the USGS, in 1972 ORNL began looking in the Permian Basin for a different disposal site in Texas or New Mexico. The WIPP project was discontinued in 1974 in favor of concentrating efforts on a Retrievable Surface Storage Facility. After the demise of that project in 1975, work resumed on WIPP and its scope was temporarily expanded to include defense HLW. A location a few miles northeast of the current WIPP site was chosen for further study.
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative, so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative, i.e., that it minimally over-estimate the desired percentile range of the actual PDF. The presence of these two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
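As one concrete illustration of the first class of methods, the sketch below computes a two-sided normal tolerance interval from a small sample using Howe's approximation for the tolerance factor. It is a generic textbook construction offered for orientation only; the report's own implementations and the kernel density approach are not reproduced here.

```python
import numpy as np
from scipy.stats import norm, chi2

def normal_tolerance_interval(x, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance interval for a small sample x.

    Uses Howe's approximation for the tolerance factor k:
        k = z_{(1+p)/2} * sqrt( (n-1) * (1 + 1/n) / chi2_{alpha, n-1} )
    where p is the desired coverage and alpha = 1 - confidence (lower quantile).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    z = norm.ppf((1.0 + coverage) / 2.0)
    chi2_q = chi2.ppf(1.0 - confidence, df=n - 1)
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2_q)
    mean, std = x.mean(), x.std(ddof=1)
    return mean - k * std, mean + k * std

# Example with a sparse sample of 5 observations (illustrative values).
lo, hi = normal_tolerance_interval([9.8, 10.1, 10.4, 9.9, 10.2])
```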
The storage caverns of the US Strategic Petroleum Reserve (SPR) exhibit creep behavior, resulting in a reduction of storage capacity over time. Maintenance of oil storage capacity therefore requires periodic controlled leaching, termed remedial leaching. The 30 MMB sale in summer 2011 provided space to facilitate leaching operations. The objective of this report is to present the results and analyses of remedial leach activity at the SPR from the 2011 sale until mid-January 2013. This report focuses on caverns BH101, BH104, WH105, and WH106. Three of the four hanging strings were damaged, resulting in deviations from normal leach patterns; however, the deviations did not affect the immediate geomechanical stability of the caverns. Significant leaching occurred in the toes of the caverns, likely decreasing the number of available drawdowns until P/D ratio criteria are met. SANSMIC shows good agreement with sonar data and reasonably predicted the location and size of the enhanced leaching region resulting from string breakage.
Laser-driven proton radiography provides electromagnetic field mapping with high spatiotemporal resolution, and has been applied to many laser-driven High Energy Density Physics (HEDP) experiments. Our report addresses key questions about the feasibility of ion radiography at the Z-Accelerator (“Z”), concerning laser configuration, hardware, and radiation background. Charged particle tracking revealed that radiography at Z requires GeV-scale protons, which is out of reach for existing and near-future laser systems. However, it might be possible to perform proton deflectometry to detect magnetic flux compression in the fringe-field region of a magnetized liner inertial fusion experiment. Experiments with the Z-Petawatt laser to enhance proton yield and energy showed an unexpected scaling with target thickness. Full-scale, 3D radiation-hydrodynamics simulations, coupled to fully explicit and kinetic 2D particle-in-cell simulations running for over 10 ps, explain the scaling by a complex interplay of laser prepulse, preplasma, and the ps-scale temporal rising edge of the laser.
This report considers and prioritizes the primary potential technical cost-reduction pathways for offshore wave-activated body attenuators designed for ocean wave resources. This report focuses on technical research and development cost-reduction pathways related to the device technology rather than environmental monitoring or permitting opportunities. Three sources of information were used to understand current cost drivers and develop a prioritized list of potential cost-reduction pathways: a literature review of technical work related to attenuators, a reference device compiled from literature sources, and a webinar with each of three industry device developers. Data from these information sources were aggregated and prioritized with respect to the potential impact on the lifetime levelized cost of energy, the potential for progress, the potential for success, and the confidence in success. Results indicate that the five most promising cost-reduction pathways include advanced controls, an optimized structural design, improved power conversion, planned maintenance scheduling, and an optimized device profile.
To benchmark the current U.S. wind turbine fleet reliability performance and identify the major contributors to component-level failures and other downtime events, the Department of Energy funded the development of the Continuous Reliability Enhancement for Wind (CREW) database by Sandia National Laboratories. This report is the third annual Wind Plant Reliability Benchmark, which publicly reports CREW findings for the wind industry. The CREW database uses both high-resolution Supervisory Control and Data Acquisition (SCADA) data from operating plants and Strategic Power Systems ORAPWind (Operational Reliability Analysis Program for Wind) data, which consist of downtime and reserve event records and daily summaries of various time categories for each turbine. Together, these data are used as inputs to CREW's reliability modeling. The results presented here include: the primary CREW Benchmark statistics (operational availability, utilization, capacity factor, mean time between events, and mean downtime); time accounting from an availability perspective; time accounting in terms of the combination of wind speed and generation levels; power curve analysis; and the top system and component contributors to unavailability.
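For orientation, the sketch below computes the kinds of fleet statistics listed above from a simple table of per-turbine downtime records. The record format, field names, and the specific definitions (e.g., availability as uptime over total time) are generic assumptions for illustration; they are not the CREW database schema or its exact benchmark definitions.

```python
from dataclasses import dataclass

@dataclass
class TurbineEvent:
    """One downtime event for a turbine (illustrative record, not the CREW schema)."""
    turbine_id: str
    downtime_hours: float

def benchmark_stats(events, period_hours, rated_kw, energy_kwh, n_turbines):
    """Generic reliability statistics assuming simple textbook definitions."""
    total_turbine_hours = period_hours * n_turbines
    total_downtime = sum(e.downtime_hours for e in events)
    uptime = total_turbine_hours - total_downtime
    return {
        # Fraction of time the fleet was available to operate.
        "operational_availability": uptime / total_turbine_hours,
        # Energy produced relative to the maximum possible at rated power.
        "capacity_factor": energy_kwh / (rated_kw * total_turbine_hours),
        # Average operating time between downtime events.
        "mean_time_between_events": uptime / max(len(events), 1),
        # Average duration of a downtime event.
        "mean_downtime": total_downtime / max(len(events), 1),
    }

# Example: two turbines over one month (720 h) with three downtime events.
events = [TurbineEvent("T1", 12.0), TurbineEvent("T1", 3.5), TurbineEvent("T2", 8.0)]
stats = benchmark_stats(events, period_hours=720, rated_kw=1500,
                        energy_kwh=650_000, n_turbines=2)
```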
This report develops and documents nonlinear kinematic relations needed to implement piezoelectric constitutive models in ALEGRA-EMMA [5], where calculations involving large displacements and rotations are routine. Kinematic relationships are established using Gauss's law and Faraday's law; this presentation on kinematics goes beyond piezoelectric materials and is applicable to all dielectric materials. The report then turns to practical details of implementing piezoelectric models in an application code where material principal axes are rarely aligned with user-defined problem coordinate axes. This portion of the report is somewhat pedagogical but is necessary in order to establish documentation for the piezoelectric implementation in ALEGRA-EMMA. This involves transforming elastic, piezoelectric, and permittivity moduli from material principal axes to problem coordinate axes. The report concludes with an overview of the piezoelectric implementation in ALEGRA-EMMA and small verification examples.
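The moduli transformation mentioned above amounts to rotating each material tensor by the direction-cosine matrix relating principal axes to problem axes: a rank-2 permittivity transforms as κ'_{ij} = R_{ia} R_{jb} κ_{ab}, a rank-3 piezoelectric tensor as e'_{ijk} = R_{ia} R_{jb} R_{kc} e_{abc}, and the rank-4 elasticity analogously. The sketch below illustrates these rotations in full tensor (not Voigt) notation; the numerical values are placeholders and the ALEGRA-EMMA implementation details are not reproduced here.

```python
import numpy as np

def rotation_about_z(theta):
    """Direction-cosine matrix for a rotation by theta (radians) about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rotate_tensor(T, R):
    """Rotate a tensor of any rank by applying R to every index."""
    for axis in range(T.ndim):
        T = np.tensordot(R, T, axes=([1], [axis]))
        T = np.moveaxis(T, 0, axis)
    return T

# Placeholder material tensors in principal axes (illustrative values only).
kappa = np.diag([9.0e-9, 9.0e-9, 11.0e-9])       # permittivity, rank 2
e_piezo = np.zeros((3, 3, 3))                     # piezoelectric moduli, rank 3
e_piezo[2, 0, 0] = e_piezo[2, 1, 1] = -5.2        # e31-type entries
e_piezo[2, 2, 2] = 15.1                           # e33-type entry

R = rotation_about_z(np.deg2rad(30.0))
kappa_problem = rotate_tensor(kappa, R)           # kappa' = R kappa R^T
e_problem = rotate_tensor(e_piezo, R)             # e'_{ijk} = R_ia R_jb R_kc e_abc
```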
Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at different conditions and configurations than those for an intended application, with each experiment testing only part of the physics relevant for the application. The purpose of the present work is to develop methodology to roll up validation results to an application, and to assess the impact the validation hierarchy design has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.
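A minimal sketch of one way such a roll-up can be realized is shown below: the simulation models for the experiments and the application are sampled over shared uncertain inputs, a linear least-squares meta-model is fit mapping experiment responses to the application response, and the fitted map is then evaluated at measured experiment values. The linear form, toy models, and all variable names are illustrative assumptions; the report's actual meta-model construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "simulation models": two validation experiments and one
# application response, all driven by the same uncertain inputs theta.
def sim_experiment_1(theta): return 2.0 * theta[:, 0] + 0.5 * theta[:, 1]
def sim_experiment_2(theta): return 0.3 * theta[:, 0] + 1.5 * theta[:, 1]
def sim_application(theta):  return 1.2 * theta[:, 0] + 1.1 * theta[:, 1]

# Sample the uncertain inputs and run all three simulation models.
theta = rng.normal(size=(200, 2))
Y_exp = np.column_stack([sim_experiment_1(theta), sim_experiment_2(theta)])
y_app = sim_application(theta)

# Fit a linear meta-model: application response as a function of the
# experiment responses (augmented with a constant term).
A = np.column_stack([Y_exp, np.ones(len(Y_exp))])
coeffs, *_ = np.linalg.lstsq(A, y_app, rcond=None)

# Roll up hypothetical validation measurements to a predicted application response.
measured = np.array([1.8, 0.9])
predicted_app = coeffs[:2] @ measured + coeffs[2]
```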
This is the final SAND report for the Early-Career LDRD (# 158477) “Sublinear Algorithms for Massive Data Sets”. We provide a list of the various publications and achievements in the project.
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results from asymmetric double-cantilever beam (ADCB) and end-notched flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to more than a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since this variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
Accurate knowledge of the thermophysical properties of concrete is extremely important for developing meaningful models of scenarios in which the concrete is rapidly heated. Tests of solid propellant burns on samples of concrete from Launch Complex 17 at Cape Canaveral show spallation and fragmentation. In response to the need for accurate modeling of these observations, an experimental program to determine the permeability and thermal properties of the concrete was developed. Room-temperature gas permeability measurements of the Launch Complex 17 concrete dried at 50°C yield permeability estimates of 0.07 mD (mean), and the thermal properties (thermal conductivity, diffusivity, and specific heat) were found to vary with temperature from room temperature to 300°C. Thermal conductivity ranges from 1.7-1.9 W/m·K at 50°C to 1.0-1.15 W/m·K at 300°C, thermal diffusivity ranges from 0.75-0.96 mm²/s at 50°C to 0.44-0.58 mm²/s at 300°C, and volumetric specific heat ranges from 1.76-2.32 MJ/m³·K at 50°C to 2.00-2.50 MJ/m³·K at 300°C.
The Gamma Detector Response and Analysis Software-Detector Response Function (GADRAS-DRF) application computes the response of gamma-ray detectors to incoming radiation. This manual provides step-by-step procedures to acquaint new users with the use of the application. The capabilities include characterization of detector response parameters, plotting and viewing measured and computed spectra, and analyzing spectra to identify isotopes or to estimate flux profiles. GADRAS-DRF can compute and provide detector responses quickly and accurately, giving researchers and other users the ability to obtain usable results in a timely manner (a matter of seconds or minutes).
The National Solar Thermal Test Facility at Sandia National Laboratories has a unique test capability called the Molten Salt Test Loop (MSTL) system. MSTL is a test capability that allows customers and researchers to test components in flowing, molten nitrate salt. The components tested can range from materials samples, to individual components such as flex hoses, ball joints, and valves, up to full solar collecting systems such as central receiver panels, parabolic troughs, or linear Fresnel systems. MSTL provides realistic conditions similar to a portion of a concentrating solar power facility. The facility currently uses 60/40 nitrate “solar salt” and can circulate the salt at pressures up to 40 bar (600 psi), temperatures to 585°C, and flow rates of 44-50 kg/s (400-600 GPM) depending on temperature. The purpose of this document is to provide a basis for customers to evaluate the applicability of MSTL to their testing needs, and to provide an outline of expectations for conducting testing on MSTL. The document can serve as the basis for testing agreements including Work for Others (WFO) and Cooperative Research and Development Agreements (CRADA). While this document provides the basis for these agreements and describes some of the requirements for testing using MSTL and on the site at Sandia, it is not sufficient by itself as a test agreement. The document does, however, provide customers with a uniform set of information to begin the test planning process.
A method for measuring the relative performance of energy dispersive spectrometers (EDS) on a TEM is discussed. A NiO thin-film standard fabricated at Sandia CA is used. A performance parameter is measured and compared to values for several TEM systems.
This report details the progress made in measuring the temperature dependence of the electronic and optoelectronic properties of devices made with individual carbon nanotubes.
Electric energy storage technologies can provide numerous grid services; however, a number of factors restrict their current deployment. The most significant barrier to deployment is high capital cost, though several recent deployments indicate that capital costs are decreasing and energy storage may be the preferred economic alternative in certain situations. However, a number of other market and regulatory barriers persist, limiting further deployment. These barriers can be categorized into regulatory barriers, market (economic) barriers, utility and developer business model barriers, cross-cutting barriers, and technology barriers.
The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gas-phase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine; these in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3-D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year because the Sandia-sponsored student left the research group at the University of Michigan.
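In two-color ratiometric phosphor thermometry, the ratio of phosphorescence intensities collected in two spectral bands varies monotonically with temperature, so a calibration of that ratio against known temperatures can be inverted to infer temperature from a measured ratio. The sketch below illustrates this with a polynomial calibration fit; the calibration data, band definitions, and fit order are placeholders, not values from the BAM experiments described above.

```python
import numpy as np

# Placeholder calibration data: intensity ratio I_band1 / I_band2 recorded at
# known temperatures (values are illustrative, not measured BAM data).
T_cal = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # temperature, K
ratio_cal = np.array([1.00, 1.18, 1.41, 1.69, 2.03])     # dimensionless ratio

# Fit temperature as a smooth function of the intensity ratio.
# A low-order polynomial is a simple, common choice for a monotonic calibration.
coeffs = np.polyfit(ratio_cal, T_cal, deg=2)

def temperature_from_ratio(i_band1, i_band2):
    """Infer temperature (K) from the two-band intensity ratio via the calibration fit."""
    return np.polyval(coeffs, i_band1 / i_band2)

# Example: a measured ratio of 1.5 maps to an interpolated temperature.
T_measured = temperature_from_ratio(1.5, 1.0)
```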
We have measured time-resolved laser-induced incandescence (LII) from combustion-generated mature soot extracted from a burner and (1) coated with oleic acid or (2) coated with oleic acid and then thermally denuded using a thermodenuder. The soot samples were size selected using a differential mobility analyser and characterized with a scanning mobility particle sizer, a centrifugal particle mass analyser, and a transmission electron microscope. The results demonstrate a strong influence of coatings on particle morphology and on the magnitude and temporal evolution of the LII signal. For coated particles, higher laser fluences are required to reach LII signal levels comparable to those of uncoated particles. This effect is predominantly attributable to the additional energy needed to vaporize the coating while heating the particle. LII signals are higher and signal decay rates are significantly slower for thermally denuded particles relative to coated or uncoated particles, particularly at low and intermediate laser fluences.