Turbulent fluctuation behavior is approximately modeled using a pulsatile flow analogy. The model extends the turbulent laminar sublayer model developed by Sternberg (1962) to be valid for a fully turbulent flow domain. Here, unsteady turbulent behavior is modeled via a sinusoidal pulsatile approach. While the individual modes of the turbulent fluctuation behavior are modeled rather crudely, approximate temporal integration yields plausible estimates for root-mean-square (RMS) velocity fluctuations. RMS pressure fluctuations and spectra are of particular interest and are estimated via the pressure Poisson expression. Both RMS values and power spectral densities (PSD), i.e., spectra, are developed. Comparison with available measurements suggests reasonable agreement. An additional fluctuating quantity, the RMS wall shear fluctuation, is also estimated, yielding reasonable agreement with measurement.
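As an illustrative check of the RMS estimate (not the report's full derivation), temporal integration of a single sinusoidal fluctuation mode of amplitude $A$ gives

$$u'_{\mathrm{rms}} = \sqrt{\frac{1}{T}\int_0^T A^2 \sin^2(\omega t)\, dt} = \frac{A}{\sqrt{2}}, \qquad T = \frac{2\pi}{\omega},$$

so each modeled mode contributes a well-defined RMS level regardless of how crudely its phase and frequency are represented.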
We examine the role of periodic sinusoidal free-stream disturbances on the inner-law (log-law) region of turbulent boundary layers. This model serves as a surrogate for the interaction of flight vehicles with atmospheric disturbances. The approximate skin friction expression that is derived suggests that free-stream disturbances can enhance the mean skin friction. When the influence of grid-generated free-stream turbulence on the laminar sublayer/log-law region (small scale/high frequency) is considered, the model recovers the well-known shear layer enhancement, suggesting overall validity for the approach. The effect on wall shear associated with the lower frequencies due to the passage of the vehicle through large (vehicle-scale) atmospheric disturbances is likely small, i.e., on the order of a 1% increase for turbulence intensities on the order of 2%. The increase in wall pressure fluctuation, which is directly proportional to the wall shear stress, is correspondingly small.
This initial draft document contains formative data model content for select areas of the IDC Re-Engineering Phase 2 System. The purpose of this document is to facilitate discussion among the stakeholders. It is not intended as a definitive proposal.
Flywheel energy storage systems are in use globally in increasing numbers. No codes pertaining specifically to flywheel energy storage exist, and a number of industrial incidents have occurred. This protocol recommends a technical basis for safe flywheel design and operation for consideration by flywheel developers, users of flywheel systems, and standards-setting organizations.
The objective of this project was to develop a plan for modifying the Turbulent Combustion Laboratory (TCL) with the necessary infrastructure to produce a cold (near-liquid-temperature) hydrogen jet. The necessary infrastructure has been specified and laboratory modifications are currently underway. Once complete, experiments from this platform will be used to develop and validate models that inform codes and standards specifying protection criteria for unintended releases from liquid hydrogen storage, transport, and delivery infrastructure.
This report analyzes the permeation resistance to hydrogen isotopes of a novel, proprietary polymer coating developed by New Mexico State University. Thermal gravimetric analysis and thermal desorption spectroscopy show the polymer is thermally stable to approximately 250 °C. Deuterium gas-driven permeation experiments were conducted at Sandia to explore early evidence (obtained using Brunauer-Emmett-Teller analysis) of the polymer's strong resistance to hydrogen. With a relatively small amount of the polymer in solution (0.15%), a decrease in diffusion by a factor of 2 is observed at 100 and 150 °C. While there was very little reduction in permeability, the preliminary findings reported here are meant to demonstrate the sensitivity of Sandia's permeation measurements and are intended to motivate the future exploration of thicker barriers with greater polymer coverage.
With rising grid interconnections of solar photovoltaic (PV) systems, greater attention is being trained on lifecycle performance, reliability, and project economics. Expected to meet production thresholds over a 20-30 year timeframe, PV plants require a steady diet of operations and maintenance (O&M) oversight to meet contractual terms. However, industry best practices are only just beginning to emerge, and O&M budgets—given the arrangement of the solar project value chain—appear to vary widely. Based on insights from in-depth interviews and survey research, this paper presents an overview of the utility-scale PV O&M budgeting process along with guiding rationales, before detailing perspectives on current plant upkeep activities and price points, largely in the U.S. It concludes by considering potential opportunities for improving upon existing O&M budgeting approaches in ways that can benefit the industry at large.
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, along with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented in a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test, which are compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.
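For context, kink-pair-based constitutive laws of this kind commonly take a thermally activated form; a representative (assumed, not the report's exact) expression for the plastic slip rate at effective resolved shear stress $\tau_{\mathrm{eff}}$ is

$$\dot{\gamma} = \dot{\gamma}_0 \exp\!\left[-\frac{\Delta H_0}{k_B T}\left(1 - \left(\frac{\tau_{\mathrm{eff}}}{\tau_0}\right)^{p}\right)^{q}\right], \qquad 0 < p \le 1,\; 1 \le q \le 2,$$

where $\Delta H_0$ is the kink-pair activation enthalpy at zero stress and $\tau_0$ is the athermal strength limit. This is the standard Kocks-Argon-Ashby phenomenology that such calibrations typically build on.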
Final report for the Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This report explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.
This project was intended to enable SNL-CA to produce appropriate specimens of relevant stainless steels for testing and to perform baseline testing of the weld heat-affected zone and weld fusion zone. One of the key deliverables in this project was to establish a procedure for fracture testing stainless steel weld fusion zones and heat-affected zones that were pre-charged with hydrogen. Following the establishment of the procedure, a round robin was planned between SNL-CA and SRNL to ensure testing consistency between laboratories. SNL-CA and SRNL would then develop a comprehensive test plan, which would include tritium exposures of several years at SRNL on samples delivered by SNL-CA. Testing would follow the procedures developed at SNL-CA. SRNL would also purchase tritium charging vessels to perform the tritium exposures. Although comprehensive understanding of isotope-induced fracture in GTS reservoir materials is a several-year effort, the FY15 work would have enabled us to jump-start the tests and initiate long-term tritium exposures to aid comprehensive future investigations. Development of a procedure and laboratory testing consistency between SNL-CA and SRNL ensures reliability in results as future evaluations are performed on aluminum alloys and potentially additively manufactured components.
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
This report documents work done for the ITER International Fusion Energy Organization (Sponsor) under a Funds-In Agreement FI 011140916 with Sandia National Laboratories. The work consists of preparing and analyzing samples for an experiment to measure material erosion and deposition in the EAST Tokamak. Sample preparation consisted of depositing thin films of carbon and aluminum onto molybdenum tiles. Analysis consists of measuring the thickness of films before and after exposure to helium plasma in EAST. From these measurements the net erosion and deposition of material will be quantified. Film thickness measurements are made at the Sandia Ion Beam Laboratory using Rutherford backscattering spectrometry and nuclear reaction analysis, as described in this report. This report describes the film deposition and pre-exposure analysis. Results from analysis after plasma exposure will be given in a subsequent report.
A series of quasi-static indentation experiments are conducted on carbon fiber reinforced polymer laminates with a systematic variation of thicknesses and fixture boundary conditions. Different deformation mechanisms and their resulting damage mechanisms are activated by changing the thickness and boundary conditions. The quasi-static indentation experiments have been shown to achieve damage mechanisms similar to impact and penetration, but without strain-rate effects. The low rate allows for detailed analysis of the load response. Moreover, interrupted tests allow for incremental analysis of the various damage mechanisms and their progressions. The experimentally tested specimens are non-destructively evaluated (NDE) with optical imaging, ultrasonics, and computed tomography. The load-displacement responses and the NDE are then utilized in numerical simulations for the purpose of model validation and vetting. The accompanying numerical simulation work serves two purposes. First, the results further reveal the time sequence of events and the meaning behind load drops that are not clear from NDE. Second, the simulations demonstrate insufficiencies in the code and can then direct future efforts for development.
Ensuring that real applications perform well on Trinity is key to its success. Performance is assessed with four components: ASC applications, Sustained System Performance (SSP), extra-large mini-application problems, and micro-benchmarks.
Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have previously been investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and the tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of 3D particle positions, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index-of-refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration and, due to the use of diffuse, white-light illumination, is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as to develop data processing methodologies optimized for the plenoptic measurement.
Sandia National Laboratories (Sandia) manages four of the five PV Regional Test Centers (RTCs). This report reviews accomplishments made by the four Sandia-managed RTCs during FY2015 (October 1, 2014 to September 30, 2015), as well as some programmatic improvements that apply to all five sites. The report is structured first by Site and then by Partner within each site, followed by the Current and Potential Partner summary table, the New Business Process, and finally the Plan for FY16 and beyond. Since no official SOPO was ever agreed to for FY15, this report does not include reporting on specific milestones and go/no-go decisions.
The SunShot Initiative coordinates research, development, demonstration, and deployment activities aimed at dramatically reducing the total installed cost of solar power. It focuses on removing critical technical and non-technical barriers to installing and integrating solar energy into the electricity grid. Uncertainty in projected power and energy production from solar power systems contributes to these barriers by increasing financial risks to photovoltaic (PV) deployment and by exacerbating the technical challenges of integrating solar power on the electricity grid.
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible so that finance charges can be minimized; higher accuracy means lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
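As a minimal illustration of how such models combine system characteristics with irradiance and weather inputs, the sketch below uses a simplified PVWatts-style DC power relation (an assumption for illustration, not Sandia's full performance model):

```python
def pv_dc_power(g_poa, t_cell, p_dc0, gamma=-0.004):
    """Simplified PVWatts-style DC power estimate (illustrative only).

    g_poa  -- plane-of-array irradiance [W/m^2]
    t_cell -- cell temperature [deg C]
    p_dc0  -- nameplate DC rating at 1000 W/m^2 and 25 deg C [W]
    gamma  -- power temperature coefficient [1/deg C], typical for c-Si
    """
    return p_dc0 * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# Example: a 5 kW array at 800 W/m^2 and 45 deg C cell temperature
print(pv_dc_power(800.0, 45.0, 5000.0))  # ~3680 W
```

Errors in any of these inputs propagate directly into the energy prediction, which is why reducing their uncertainty lowers project risk.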
The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.
The Advanced Measurement and Analysis of PV Derate Factors project focuses on improving the accuracy and reducing the uncertainty of PV performance model predictions by addressing a common element of all PV performance models referred to as “derates”. Widespread use of “rules of thumb”, combined with significant uncertainty regarding appropriate values for these factors, contributes to uncertainty in projected energy production.
We are currently investigating the inclusion of organotin compounds in polystyrene to improve plastic scintillators' full gamma-ray energy sensitivity, with the ultimate goal of achieving spectroscopy. Accurate evaluation of the light yield from the newly developed scintillators is crucial to assess the merits of the compounds and chemical processes used in scintillator development. A full gamma-ray energy peak in the measured spectrum, resulting from total absorption of the gamma-ray energy, would be ideal for evaluating the light yield from the new scintillators. However, the full-energy sensitivity achieved thus far is not statistically viable for fast and accurate light yield energy calibration of the new scintillators. The Compton edge in the measured gamma-ray spectrum has been identified as an alternative spectral feature that can be exploited for characterizing the light yield energy of the newly developed plastic scintillators. In this study we present the technique implemented for accurate light yield energy calibration using the Compton edge. The results obtained are very encouraging and suggest the possibility of using the Compton edge for energy calibration in detectors with poor energy resolution, such as plastic and liquid scintillators.
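For reference, the calibration point being exploited follows from standard Compton kinematics (this snippet illustrates the physics, not the report's specific procedure):

```python
M_E_C2_KEV = 511.0  # electron rest energy [keV]

def compton_edge(e_gamma_kev):
    """Compton edge: maximum energy transferred to an electron
    (180-degree backscatter) for an incident gamma ray of energy E."""
    return 2.0 * e_gamma_kev**2 / (M_E_C2_KEV + 2.0 * e_gamma_kev)

# Example: Cs-137 (662 keV) places the Compton edge near 478 keV,
# a feature resolvable even in low-resolution plastic scintillators.
print(compton_edge(662.0))
```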
An Evaluation and Screening team supporting the Fuel Cycle Technologies Program Office of the United States Department of Energy, Office of Nuclear Energy is conducting an evaluation and screening of a comprehensive set of fuel cycle options. These options have been assigned to one of 40 evaluation groups, each of which has a representative fuel cycle option [Todosow 2013]. A Fuel Cycle Data Package System Datasheet has been prepared for each representative fuel cycle option to ensure that the technical information used in the evaluation is high-quality and traceable [Kim, et al., 2013]. The information contained in the Fuel Cycle Data Packages has been entered into the Nuclear Fuel Cycle Options Catalog at Sandia National Laboratories so that it is accessible to the evaluation and screening team and other interested parties. In addition, an independent team at Savannah River National Laboratory has verified that the information has been entered into the catalog correctly. This report documents that the 40 representative fuel cycle options have been entered into the catalog and that the data for each option have been entered correctly.
Systematic verification and validation (V&V) is necessary to establish credibility for high-consequence simulations. In this paper, we focus on a radiation-induced plasma experimental validation exercise for simulations that uses both numerical error estimation and input parameter uncertainty quantification to provide a direct comparison between Particle-In-Cell (PIC) plasma simulations and experiments. This approach demonstrates how careful validation can uncover missing physics in the simulation. Three different validation examples are shown: a vacuum space-charge-limited cavity, a gas-filled space-charge-limited cavity, and a vacuum non-space-charge-limited cavity. Two of the examples are chosen to show the importance of error estimation in uncovering inaccurate or incomplete simulation models. We also report on a newly developed numerical error estimation approach, StREEQ, which is a notable improvement over past approaches. In the StREEQ approach, a multi-fitting scheme based on $L_1$, $L_2$, and $L_\infty$ error norms and alternate weightings is used to propagate uncertainties in the relative importance of outliers and coarse/refined discretization levels. Bootstrap sampling is used to represent the stochasticity in the response data. The resulting method appears to robustly and conservatively predict the fully converged response within estimated numerical error bounds for stochastic simulations. The StREEQ approach is demonstrated on two related prototype electron diode problems, and preliminary results are reported for a radiation-induced plasma simulation.
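A minimal sketch of the multi-norm, bootstrap-resampled fitting idea is shown below; all function names and the noise model are assumptions for illustration, not the StREEQ implementation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_converged_value(h, f, norm_p):
    """Fit f(h) ~ f0 + a*h^p under a chosen error norm and return f0,
    the estimate of the fully converged (h -> 0) response."""
    def loss(x):
        f0, a, p = x
        r = f - (f0 + a * h**p)
        return np.max(np.abs(r)) if norm_p == np.inf else np.sum(np.abs(r)**norm_p)
    return minimize(loss, x0=[f[-1], 1.0, 1.0], method="Nelder-Mead").x[0]

rng = np.random.default_rng(0)
h = np.array([0.4, 0.2, 0.1, 0.05])                   # discretization levels
f = 1.0 + 2.0 * h**2 + rng.normal(0.0, 1e-3, h.size)  # noisy stochastic responses

# Sweep norms and bootstrap replicates to band the extrapolated value
estimates = []
for _ in range(200):
    fb = f + rng.normal(0.0, 1e-3, h.size)  # stand-in for resampled response replicates
    for p in (1, 2, np.inf):
        estimates.append(fit_converged_value(h, fb, p))
print(np.mean(estimates), np.std(estimates))  # central estimate and uncertainty band
```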
The goal is to make software developers aware of common issues that can impede the adoption of analytic tools. This paper provides a summary of guidelines, lessons learned, and existing research to explain what is currently known about what analysts want and how to better understand which tools they do and do not need.
The Characterizing Emerging Technologies project focuses on developing, improving and validating characterization methods for PV modules, inverters and embedded power electronics. Characterization methods and associated analysis techniques are at the heart of technology assessments and accurate component and system modeling. Outputs of the project include measurement and analysis procedures that industry can use to accurately model performance of PV system components, in order to better distinguish and understand the performance differences between competing products (module and inverters) and new component designs and technologies (e.g., new PV cell designs, inverter topologies, etc.).
We study a time-parallel approach to solving quadratic optimization problems with linear time-dependent partial differential equation (PDE) constraints. These problems arise in formulations of optimal control, optimal design, and inverse problems that are governed by parabolic PDE models. They may also arise as subproblems in algorithms for the solution of optimization problems with nonlinear time-dependent PDE constraints, e.g., in sequential quadratic programming methods. We apply a piecewise linear finite element discretization in space to the PDE constraint, followed by the Crank-Nicolson discretization in time. The objective function is discretized using finite elements in space and the trapezoidal rule in time. At this point in the discretization, auxiliary state variables are introduced at each discrete time interval with the goal of enabling: (i) a decoupling in time; and (ii) a fixed-point iteration to recover the solution of the discrete optimality system. The fixed-point iterative schemes can be used either as preconditioners for Krylov subspace methods or as smoothers for multigrid (in time) schemes. We present promising numerical results for both use cases.
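For reference, the two time discretizations named above take their standard forms; writing the spatially discretized PDE constraint as a generic ODE system $\dot{y} = f(t, y)$:

$$\frac{y^{n+1} - y^n}{\Delta t} = \frac{1}{2}\left[f(t^{n+1}, y^{n+1}) + f(t^n, y^n)\right], \qquad \int_0^T g(t)\,dt \;\approx\; \sum_{n=0}^{N-1} \frac{\Delta t}{2}\left[g(t^n) + g(t^{n+1})\right].$$

Both are second-order accurate, which keeps the temporal accuracy of the state and objective discretizations consistent.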
Increasing the penetration of distributed renewable sources, including photovoltaic (PV) sources, poses technical challenges for grid management. The grid has been optimized over decades to rely upon large centralized power plants with well-established feedback controls, but now non-dispatchable, renewable sources are displacing these controllable generators. This one-year study was funded by the Department of Energy (DOE) SunShot program and is intended to better utilize those variable resources by providing electric utilities with the tools to implement frequency regulation and primary frequency reserves using aggregated renewable resources, known as a virtual power plant. The goal is to eventually enable the integration of hundreds of gigawatts of renewable generation into U.S. power systems.
The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; the Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., a two-area, three-area, and four-area power system). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data, as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate-sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate its utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.
This project aimed to identify the path forward for dynamic simulation tools to accommodate the needs of power systems with high PV penetration by characterizing the properties of such systems, analyzing how these properties affect dynamic simulation software, and offering solutions for potential problems.
In this report we formulate eigenvalue-based methods for model calibration using a PDE-constrained optimization framework. We derive the abstract optimization operators from first principles and implement these methods using Sierra-SD and the Rapid Optimization Library (ROL). To demonstrate this approach, we use experimental measurements and an inverse solution to compute the joint and elastic foam properties of a low-fidelity unit (LFU) model.
This paper presents an end-to-end design process for compliance-minimization-based topological optimization of cellular structures, through to the realization of a final printed product. Homogenization is used to derive properties representative of these structures through direct numerical simulation of unit cell models of the underlying periodic structure. The resulting homogenized properties are then used, assuming uniform distribution of the cellular structure, to compute the final macro-scale structure. A new method is then presented for generating an STL representation of the final optimized part that is suitable for printing on typical industrial machines. Quite fine cellular structures are shown to be possible using this method compared with other approaches that use NURBS-based CAD representations of the geometry. Finally, results are presented that illustrate the fine-scale stresses developed in the final macro-scale optimized part, and suggestions are made as to how to incorporate these features into the overall optimization process.
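The compliance minimization problem underlying this process is commonly posed as follows (a standard statement with assumed symbols, not necessarily the paper's exact formulation):

$$\min_{\rho}\; c(\rho) = \mathbf{f}^{\mathsf{T}} \mathbf{u}(\rho) \quad \text{s.t.} \quad \mathbf{K}(\rho)\,\mathbf{u} = \mathbf{f}, \qquad \sum_e \rho_e v_e \le V, \qquad 0 < \rho_{\min} \le \rho_e \le 1,$$

where here the element stiffness would be interpolated from the homogenized cellular properties rather than from a solid/void material model.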
Interface-conforming elements generated by the conformal decomposition finite element method can have arbitrarily poor quality due to the arbitrary intersection of the base triangular or tetrahedral mesh with material interfaces. This can have severe consequences for both the solvability of linear systems and for the interpolation error of fields represented on these meshes. The present work demonstrates that snapping the base mesh nodes to the interface whenever the interface cuts close to a node results in conforming meshes of good quality. Theoretical limits on the snapping tolerance are derived, and even conservative tolerance choices result in limiting the stiffness matrix condition number to within a small multiple of that of the base mesh. Interpolation errors are also well controlled in the norms of interest. In 3D, use of node-to-interface snapping also permits a simpler and more robust vertex ID-based element decomposition algorithm to be used with no serious detriment to mesh quality.
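A minimal sketch of the snapping rule is given below; the level-set interface representation and all names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def snap_nodes_to_interface(nodes, phi, grad_phi, h, tol=0.1):
    """Move base-mesh nodes onto the interface when it cuts within
    tol*h of a node, with the interface given as the zero level set.

    nodes    : (n, d) node coordinates
    phi      : (n,) level-set values at the nodes (interface at phi = 0)
    grad_phi : (n, d) level-set gradients at the nodes
    h        : (n,) local mesh size at each node
    """
    snapped = nodes.copy()
    close = np.abs(phi) < tol * h
    # First-order projection of each close node onto the zero level set
    norm2 = np.sum(grad_phi[close]**2, axis=1, keepdims=True)
    snapped[close] -= phi[close, None] * grad_phi[close] / norm2
    return snapped
```

Nodes that are snapped no longer generate the sliver elements that would otherwise appear when the interface cuts arbitrarily close to them, which is what bounds the stiffness matrix conditioning.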
Increasing the penetration of distributed renewable sources, including photovoltaic (PV) generators, poses technical challenges for grid management. The grid has been optimized over decades to rely on large centralized power plants with well-established feedback controls. Conventional generators provide relatively constant dispatchable power and help to regulate both voltage and frequency. In contrast, photovoltaic (PV) power is variable, is only as predictable as the weather, and provides no control action. Thus, as conventional generation is displaced by PV power, utility operations stakeholders are concerned about managing fluctuations in grid voltage and frequency. Furthermore, since the operation of these distributed resources is bound by rules that require them to stop delivering power when the measured voltage or frequency deviates from the nominal operating point, there are also concerns that a single grid event may cause a large fraction of generation to turn off, triggering a blackout or break-up of an electric power system.
A great deal of research has been carried out on oxide material systems. Among them, ZnO and La0.7Sr0.3MnO3 (LSMO) are of particular interest due to their superb optical properties and colossal magnetoresistive effect, respectively. Here, we report our recent results of magneto-transport studies in self-assembled, epitaxial (ZnO)0.5:(La0.7Sr0.3MnO3)0.5 nanocomposite films.
At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media (Hardin et al., 2012). Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (in which emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design (CRWMS M&O, 1999). Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9 BWR) in all media except salt. This information motivated two separate studies: 1) extending the peak temperature tolerance of backfill materials, which is ongoing; and 2) developing small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems (EnergySolutions, 2015). This report identifies disposal concepts for the small canisters (4-PWR size), drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).
This document defines the concept of operations (CONOPS) and the requirements for the Buddy Tag, which is conceived and designed in collaboration between Sandia National Laboratories and Princeton University under the Department of State Key Verification Assets Fund. The CONOPS describes how the tags are used to support verification of treaty limitations and is defined only to the extent necessary to support a tag design. The requirements define the necessary functions and desired non-functional features of the Buddy Tag at a high level.
This System Requirements Document (SRD) defines waveform data processing requirements for the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The IDC applies, on a routine basis, automatic processing methods and interactive analysis to raw International Monitoring System (IMS) data in order to produce, archive, and distribute standard IDC products on behalf of all States Parties. The routine processing includes characterization of events with the objective of screening out events considered to be consistent with natural phenomena or non-nuclear, man-made phenomena. This document does not address requirements concerning the acquisition, processing, and analysis of radionuclide data, but it does include requirements for the dissemination of radionuclide data and products.
This document contains the system specifications derived to satisfy the system requirements found in the IDC System Requirements Document for the IDC Re-Engineering Phase 2 project.
The U.S. Department of Energy (DOE) has embarked on the Deep Borehole Field Test (DBFT), which will investigate whether conditions suitable for disposal of radioactive waste can be found at a depth of up to 5 km in the earth's crust. As planned, the DBFT will demonstrate drilling and construction of two boreholes, one for initial scientific characterization, and the other at a larger diameter such as could be appropriate for waste disposal (the DBFT will not involve radioactive waste). A wide range of geoscience activities is planned for the Characterization Borehole, and an engineering demonstration of test package emplacement and retrieval is planned for the larger Field Test Borehole. Characterization activities will focus on measurements and samples that are important for evaluating the long-term isolation capability of the deep borehole disposal (DBD) concept. Engineering demonstration activities will focus on providing data to evaluate the concept's operational safety and practicality. Procurement of a scientifically acceptable DBFT site and a site management contractor is now underway. The DBD concept is not new. It was considered by the National Academy of Sciences (NAS 1957) for liquid waste, studied in the 1980s in the U.S. (Woodward–Clyde 1983), and has been evaluated by European waste disposal R&D programs in the past few decades (for example, Grundfelt and Crawford 2014; Grundfelt 2010). Deep injection of wastewater, including hazardous wastes, is ongoing in the U.S. and regulated by the Environmental Protection Agency (EPA 2001). The DBFT is being conducted with a view to using the DBD concept for future disposal of smaller-quantity, DOE-managed wastes from nuclear weapons production (i.e., Cs/Sr capsules and granular solid wastes). However, the concept may also have broader applicability for nations that need to dispose of limited amounts of spent fuel from nuclear power reactors. For such nations, the cost of disposing of volumetrically limited waste streams could be lower than that of mined geologic repositories.
A hierarchical methodology is introduced to predict the effects of radiation damage and irradiation conditions on the yield stress and on the development of internal stress heterogeneity in polycrystalline α-Fe. Simulations of defect accumulation under displacement cascade damage conditions are performed using spatially resolved stochastic cluster dynamics. The resulting void and dislocation loop concentrations and average sizes are then input into a crystal plasticity formulation that accounts for the change in critical resolved shear stress due to the presence of radiation-induced defects. The simulated polycrystalline tensile tests show a good match to experimental hardening data over a wide range of irradiation doses. With this capability, stress heterogeneity development and the effect of dose rate on hardening are investigated. The model predicts increased hardening at higher dose rates for low total doses. By contrast, at doses above $10^{-2}$ dpa, when cascade overlap becomes significant, the model does not predict significantly different hardening for different dose rates. The development of such a model enables simulation of radiation damage accumulation and associated hardening without relying on experimental data as an input, under a wide range of irradiation conditions such as dose, dose rate, and temperature.
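The handoff from defect statistics to the crystal plasticity strength is often expressed through a dispersed-barrier hardening term; a representative (assumed) form using the number density $N_i$ and mean size $d_i$ of each defect population $i$ (voids, loops) is

$$\Delta\tau_c = \mu b \sqrt{\sum_i \alpha_i^2\, N_i\, d_i},$$

where $\mu$ is the shear modulus, $b$ the Burgers vector magnitude, and $\alpha_i$ the obstacle strength of defect type $i$; the root-sum-square superposition accounts for mixed obstacle populations.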
Advanced semiconductor devices often utilize structural and geometrical effects to tailor their characteristics and improve their performance. We report here a detailed understanding of such geometrical effects in the epitaxial selective area growth of GaN on sapphire substrates and utilize them to enhance light extraction from GaN light-emitting diodes. Systematic size- and spacing-effect studies were performed side by side on a single 2" sapphire substrate to minimize experimental sampling errors for a set of 144 pattern arrays with circular mask opening windows in SiO2. We show that varying the mask opening diameter leads to as much as a 4-fold increase in the thickness of the grown layers for 20 μm spacings and that spacing effects can lead to as much as a 3-fold increase in thickness for a 350 μm dot diameter. We observed that the facet evolution, in comparison with extracted Ga adatom diffusion lengths, directly influences the vertical and lateral overgrowth rates and can be controlled with pattern geometry. Such control over facet development led to 2.5 times stronger electroluminescence from well-faceted GaN/InGaN multiple-quantum-well LEDs compared to non-faceted structures.
The thermoelectric properties of unintentionally n-doped GaN/AlGaN core/shell N-face nanowires are reported. We found that the temperature dependence of the electrical conductivity is consistent with thermally activated carriers with two distinct donor energies. The Seebeck coefficient of the GaN/AlGaN nanowires is more than twice as large as that of the GaN nanowires alone. However, an outer layer of GaN deposited onto the GaN/AlGaN core/shell nanowires decreases the Seebeck coefficient at room temperature, while the temperature dependence of the electrical conductivity remains the same. We attribute these observations to the formation of an electron gas channel within the heavily doped GaN core of the GaN/AlGaN nanowires. The room-temperature thermoelectric power factor of the GaN/AlGaN nanowires can be four times higher than that of the GaN nanowires. Selective doping in bandgap-engineered core/shell nanowires is proposed for enhancing the thermoelectric power.
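The metric behind the factor-of-four claim is the standard thermoelectric power factor,

$$\mathrm{PF} = S^2 \sigma,$$

so a Seebeck coefficient $S$ that is twice as large at comparable electrical conductivity $\sigma$ yields a fourfold higher power factor, consistent with the enhancement reported here.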
As transistors start to approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data[1, 2]. One of the most promising applications for RRAM crossbars is brain inspired or neuromorphic computing[3, 4].
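The data processing capability referenced here rests on the crossbar's ability to perform an analog vector-matrix multiply in a single step via Ohm's and Kirchhoff's laws; below is a minimal sketch (illustrative names and values, not a device model):

```python
import numpy as np

def crossbar_vmm(conductances, row_voltages):
    """Ideal RRAM crossbar read: voltages applied to the rows produce
    column currents I = G^T V, i.e. a vector-matrix product in one step."""
    return conductances.T @ row_voltages

G = np.array([[1e-6, 5e-6],
              [2e-6, 1e-6]])   # device conductances, one per crosspoint [S]
V = np.array([0.2, 0.1])       # read voltages on the rows [V]
print(crossbar_vmm(G, V))      # summed column currents [A]
```

Because the multiply-accumulate happens in the analog array itself, the matrix never has to move through a conventional memory hierarchy, which is the source of the expected efficiency gains.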
This study examines the single-event response of the Xilinx 20 nm Kintex UltraScale Field-Programmable Gate Array irradiated with heavy ions. Results for single-event latch-up and single-event upset on configuration SRAM cells and Block RAM memories are provided.
Millivolt switches will not only improve energy efficiency, but will enable a new capability to manage the energy-reliability tradeoff. By effectively utilizing this system-level capability, it may be possible to obtain one or two additional generations of scaling beyond current projections. Millivolt switches will enable further energy scaling, a process that is expected to continue until the technology encounters thermal noise errors [Theis 10]. If thermal noise errors can be accommodated at higher levels through a new form of error correction, it may be possible to scale about 3× lower in system energy than is currently projected. A general solution to errors would also address long-standing problems with cosmic ray strikes, weak and aging parts, some cybersecurity vulnerabilities, etc.