This report documents the key findings from the Reservoir Maintenance and Development (RM&D) Task of the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) Geothermal Vision Study (GeoVision Study). The GeoVision Study had the objective of conducting analyses of future geothermal growth based on sets of current and future geothermal technology developments. The RM&D Task is one of seven tasks within the GeoVision Study, the others being Exploration and Confirmation, Potential to Penetration, Institutional Market Barriers, Environmental and Social Impacts, Thermal Applications, and Hybrid Systems. The full set of findings and the details of the GeoVision Study can be found in the final GeoVision Study report on the DOE-GTO website. As applied here, RM&D refers to the activities associated with developing, exploiting, and maintaining a known geothermal resource. It assumes that the site has already been vetted and that the resource has been evaluated to be of sufficient quality to move toward full-scale development. It also assumes that the resource is to be developed for power generation, as opposed to low-temperature or direct-use applications. This document presents the key factors influencing RM&D from both a technological and an operational standpoint and provides a baseline of its current state. It also looks forward to describe areas of research and development that must be pursued if the development of geothermal energy is to reach its full potential.
Our nation's dependence on information networks makes it vital to anticipate disruptions and find weaknesses in these networks. But networks like the Internet are vast and distributed, and there is no mechanism to completely collect their structure. We are restricted to specific data collection schemes (like traceroute samples from router interfaces) that examine tiny portions of such a network. It has been empirically documented and theoretically proven that these measurements have significant biases, and direct inferences from them will be wrong. These data collection mechanisms also have limited flexibility and cannot be easily modified. Moreover, in many applications there are limits on how much data can be collected. How do we make accurate inferences of network properties with biased and limited measurements? The general problem this report deals with is how to work with incompletely observed networks. We present several different approaches to this problem. First, we present an approach to estimate the degree distribution of a graph by sampling only a small portion of the vertices. This algorithm provides provably accurate results with sublinear samples. An alternative approach is to enhance the information in the sample by selectively collecting new information, probing for the neighbors of a vertex or for the presence of individual edges. A different setting for working with incomplete data arises when we have full access to local information but do not have any global view of the graph. Can we still identify critical nodes in such a graph? We present an approach to identify such nodes efficiently. Finally, how can we put these ideas together to identify the structure of a network? We present an approach that complements existing approaches for network mapping. We start with an estimate of network structure based on existing network mapping methods. We then find a critical router in the network and use the traffic through this router to selectively collect new data to enhance our prediction.
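As a minimal illustration of the sampling setting described above (not the provably accurate sublinear algorithm itself), the following Python sketch estimates a degree distribution from a uniform sample of vertices; the toy graph, the sample size, and the adjacency-dictionary representation are all assumed for the example.

```python
import random
from collections import Counter

def sample_degree_distribution(adj, num_samples, rng=random):
    """Estimate the degree distribution of a graph from a uniform sample
    of vertices.  `adj` maps each vertex to its neighbor list.  This is
    only the naive estimator; the report's algorithm adds machinery to
    achieve provable accuracy with sublinear samples."""
    vertices = list(adj)
    sampled = [rng.choice(vertices) for _ in range(num_samples)]
    counts = Counter(len(adj[v]) for v in sampled)
    total = sum(counts.values())
    return {deg: c / total for deg, c in sorted(counts.items())}

# Toy usage: a small graph given as an adjacency dictionary.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(sample_degree_distribution(adj, num_samples=100))
```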
For a heave-pitch-surge three-degree-of-freedom wave energy converter, the heave mode is usually decoupled from the pitch-surge modes for small motions. The pitch and surge modes are usually coupled and are parametrically excited by the heave mode, depending on the buoy geometry. In this paper, Model Predictive Control is applied to the parametrically excited pitch-surge motion, while the heave motion is optimized independently. The optimality conditions are derived, and a gradient-based numerical optimization algorithm is used to search for the optimal control. Numerical tests are conducted for regular and Bretschneider waves. The results demonstrate that the proposed control can harvest more than three times the energy that can be harvested using a heave-only wave energy converter. The energy harvested using the parametrically excited model is also higher than that harvested using a linear model.
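The sketch below shows the flavor of a horizon-based optimal control solve in the spirit of the approach above, applied to a toy heave-only buoy. It is not the paper's parametrically excited three-degree-of-freedom model or its derived optimality conditions, and every parameter value (mass, damping, wave forcing, PTO force bounds) is assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy heave-only buoy: m*z'' + b*z' + k*z = Fw(t) + u(t), where u is the
# power take-off (PTO) force.  Energy absorbed by the PTO over the horizon
# is -sum(u * zdot) * dt.  This is a minimal single-horizon sketch, not the
# parametrically excited 3-DOF model of the paper; values are illustrative.
m, b, k = 1.0e3, 50.0, 2.0e3
dt, N = 0.1, 50
t = np.arange(N) * dt
Fw = 500.0 * np.cos(0.8 * t)          # regular-wave excitation (assumed)

def simulate(u):
    z, zd, e = 0.0, 0.0, 0.0
    for i in range(N):
        zdd = (Fw[i] + u[i] - b * zd - k * z) / m
        zd += zdd * dt
        z += zd * dt
        e += -u[i] * zd * dt          # energy captured by the PTO
    return e

res = minimize(lambda u: -simulate(u), np.zeros(N), method="L-BFGS-B",
               bounds=[(-2e3, 2e3)] * N)
print("captured energy over horizon [J]:", -res.fun)
```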
We report on the development of a frequency-domain method of analysis within the Panzer foundation of Charon. We first present a harmonic balance approach for calculating the frequency-domain response (in its weak form) of a nonlinear system of partial differential equations (PDEs). Our approach is amenable to the adaptation of Charon's transient PDE models for frequency-domain analysis. We make an observation that allows us to analyze either small-signal or large-signal responses with minimal specialization of the algorithm. We conclude by confirming our small- and large-signal analyses of a transient, linear Helmholtz equation through comparison of its analytic solution with our results. We include figures from a sequence of nonlinear perturbations of this system, showcasing the fact that, when the nonlinearities are insignificant, the small- and large-signal analyses obtain similar solutions. Conversely, we show that a small-signal analysis cannot accurately capture the response in the presence of a large nonlinearity, underscoring the need for a large-signal analysis when modelling highly nonlinear systems.
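A scalar analogue may help fix ideas: the sketch below applies single-harmonic balance to a Duffing oscillator, solving for the first-harmonic coefficients with a nonlinear root finder. It is only a toy stand-in for the weak-form PDE harmonic balance described above, and all coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

# Single-harmonic balance for a Duffing oscillator
#   x'' + c x' + k x + alpha x^3 = F cos(w t),
# assuming x(t) ~ A cos(w t) + B sin(w t) and keeping only the first
# harmonic of the cubic term (x^3 -> (3/4)(A^2 + B^2) x).  This is a toy
# scalar analogue of the weak-form PDE harmonic balance described above.
c, k, alpha, F, w = 0.2, 1.0, 0.5, 0.3, 1.2

def residual(coeffs):
    A, B = coeffs
    R2 = A * A + B * B
    r_cos = -w * w * A + c * w * B + k * A + 0.75 * alpha * R2 * A - F
    r_sin = -w * w * B - c * w * A + k * B + 0.75 * alpha * R2 * B
    return [r_cos, r_sin]

A, B = fsolve(residual, [0.1, 0.0])
print("first-harmonic amplitude:", np.hypot(A, B))
```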
The purpose of the scenarios workshop held for the Civilian Nuclear component of the Global Nuclear Assured Security Mission Integration Initiative was to identify sources of risk in the global civilian nuclear enterprise. The risks identified are inadequately addressed through current technical measures, regulatory frameworks and institutions and should be considered for further research. The workshop participants also developed four high level scenarios describing different sequences of events that could result in radiological releases, widespread loss of electric power, and loss of public confidence in segments of the nuclear industry. The scenarios are intended for further analysis and as the basis for simulation exercises.
IEEE Transactions on Microwave Theory and Techniques
Mincey, John S.; Silva-Martinez, Jose; Karsilayan, Aydin I.; Rodenbeck, Christopher T.
In this paper, a coherent subsampling digitizer for pulsed Doppler radar systems is proposed. Prior to transmission, the radar system modulates the RF pulse with a known pseudorandom binary phase shift keying (BPSK) sequence. Upon reception, the radar digitizer uses a programmable sample-and-hold circuit to multiply the received waveform by a properly time-delayed version of the a priori known BPSK sequence. This operation demodulates the desired echo signal while suppressing the spectrum of all in-band noncorrelated interferers, making them appear as noise in the frequency domain. The resulting demodulated narrowband Doppler waveform is then subsampled at the IF frequency by a delta-sigma modulator. Because the digitization bandwidth within the delta-sigma feedback loop is much less than the input bandwidth to the digitizer, the thermal noise outside of the Doppler bandwidth is effectively filtered prior to quantization, providing an increase in signal-to-noise ratio (SNR) at the digitizer's output compared with the input SNR. As a demonstration, a delta-sigma correlation digitizer is fabricated in a 0.18-μm CMOS technology. The digitizer has a power consumption of 1.12 mW with an IIP3 of 7.5 dBm, and it is able to recover Doppler tones in the presence of blockers up to 40 dB greater than the Doppler tone.
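A baseband sketch of the despreading idea follows: multiplying the received waveform by a correctly aligned copy of the known BPSK code collapses the echo back to a narrowband tone while spreading an uncorrelated interferer. The chip rate, Doppler frequency, and interferer level are assumed values for illustration, not the fabricated digitizer's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy baseband illustration of code despreading: the echo carries a known
# pseudorandom BPSK chip sequence; multiplying the received signal by a
# time-aligned copy of that sequence recovers the narrowband Doppler tone
# while spreading uncorrelated interferers.  All values are illustrative.
n_chips, samples_per_chip = 256, 8
fs = n_chips * samples_per_chip                   # one second of data
t = np.arange(fs) / fs
chips = rng.choice([-1.0, 1.0], n_chips)
code = np.repeat(chips, samples_per_chip)         # BPSK spreading sequence

doppler = np.cos(2 * np.pi * 5 * t)               # slow Doppler modulation
echo = doppler * code                             # code-modulated echo
interferer = 0.5 * np.cos(2 * np.pi * 37 * t)     # uncorrelated in-band tone
received = echo + interferer

demod = received * code                           # multiply by aligned code
spectrum = np.abs(np.fft.rfft(demod)) / fs
print("bin 5 (Doppler) amplitude:", spectrum[5])
print("bin 37 (interferer residue):", spectrum[37])
```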
A distributed impedance 'field cage' structure is proposed and evaluated for electric field control in GaN-based, lateral high electron mobility transistors operating as kilovolt-range power devices. In this structure, a resistive voltage divider is used to control the electric field throughout the active region. The structure complements earlier proposals utilizing floating field plates that did not employ resistively connected elements. Transient results, not previously reported for field plate schemes using either floating or resistively connected field plates, are presented for ramps of dVds/dt = 100 V/ns. For both dc and transient results, the voltage between the gate and drain is laterally distributed, ensuring that the electric field profile between the gate and drain remains below the critical breakdown field as the source-to-drain voltage is increased. Our scheme shows promise for achieving breakdown-voltage scalability to a few kilovolts.
Robinson, Brandon; Rocha Da Costa, Leandro J.; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad K.; Sarkar, Abhijit
Our study details the derivation of the nonlinear equations of motion for the axial, biaxial bending, and torsional vibrations of an aeroelastic cantilever undergoing rigid body (pitch) rotation at the base. The primary attention is focused on the geometric nonlinearities of the system, whereby the aeroelastic load is modeled by the theory of linear quasi-steady aerodynamics. This modelling effort is intended to mimic the wind-tunnel experimental setup at the Royal Military College of Canada. While the derivation closely follows the work of Hodges and Dowell [1] for rotor blades, this aeroelastic system contains new inertial terms which stem from kinematics fundamentally different than those exhibited by helicopter or wind turbine blades. Using Hamilton's principle, a set of coupled nonlinear partial differential equations (PDEs) and an ordinary differential equation (ODE) are derived which describe the coupled axial-bending-bending-torsion-pitch motion of the aeroelastic cantilever with pitch rotation. The finite-dimensional approximation of the coupled system of PDEs is obtained using the Galerkin projection, leading to a coupled system of ODEs. Subsequently, these nonlinear ODEs are solved numerically using the built-in MATLAB implicit ODE solver, and the associated numerical results are compared with those obtained using Houbolt's method. It is demonstrated that the system undergoes coalescence flutter, leading to a limit cycle oscillation (LCO) due to coupling between the rigid body pitching mode and the flexible mode arising from the flapwise bending motion.
In this study, solubility measurements on tri-calcium di-citrate tetrahydrate [Ca3[C3H5O(COO)3]2•4H2O, abbreviated as Ca3[Citrate]2•4H2O] as a function of ionic strength are conducted in NaCl solutions up to I = 5.0 mol•kg–1 and in MgCl2 solutions up to I = 7.5 mol•kg–1, at room temperature (22.5 ± 0.5°C). The solubility constant (log K°sp) for Ca3[Citrate]2•4H2O and the formation constant (log β°1) for Ca[C3H5O(COO)3]–, corresponding to the reactions

Ca3[C3H5O(COO)3]2•4H2O (earlandite) = 3Ca2+ + 2[C3H5O(COO)3]3– + 4H2O (1)

Ca2+ + [C3H5O(COO)3]3– = Ca[C3H5O(COO)3]– (2)

are determined as –18.11 ± 0.05 and 4.97 ± 0.05, respectively, based on the Pitzer model with a set of Pitzer parameters describing the specific interactions in NaCl and MgCl2 media.
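For orientation, the sketch below uses the infinite-dilution constants reported above to estimate the solubility of earlandite in dilute water, treating activities as concentrations. It deliberately omits the Pitzer activity-coefficient corrections that are the point of the study, so it should be read only as an order-of-magnitude check under that stated assumption.

```python
import numpy as np
from scipy.optimize import fsolve

# Back-of-the-envelope solubility of earlandite, Ca3[Citrate]2·4H2O, in
# dilute water from the infinite-dilution constants reported above
# (log K°sp = -18.11, log beta°1 = 4.97), treating activities as
# concentrations.  The Pitzer corrections used in the study for
# concentrated NaCl/MgCl2 media are intentionally omitted.
log_Ksp, log_beta1 = -18.11, 4.97

def residuals(logx):
    s, ca, cit = 10.0 ** np.asarray(logx)     # mol per kg water
    cacit = 10.0 ** log_beta1 * ca * cit      # Ca[Citrate]- complex
    return [3 * np.log10(ca) + 2 * np.log10(cit) - log_Ksp,  # solubility product
            (ca + cacit) / (3 * s) - 1.0,                     # Ca mass balance
            (cit + cacit) / (2 * s) - 1.0]                    # citrate mass balance

log_s, log_ca, log_cit = fsolve(residuals, [-3.0, -3.0, -5.0])
print(f"estimated solubility ~ {10**log_s:.1e} mol Ca3[Citrate]2.4H2O per kg water")
```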
For this study, the interactions of lead with citrate and ethylenediaminetetraacetate (EDTA) are investigated based on solubility measurements as a function of ionic strength at room temperature (22.5 ± 0.5°C) in NaCl and MgCl2 solutions. The formation constants (log β°1) for Pb[C3H5O(COO)3]– (abbreviated as PbCitrate–) and Pb[(CH2COO)2N(CH2)2N(CH2COO)2]2– (abbreviated as PbEDTA2–), corresponding to the reactions

Pb2+ + [C3H5O(COO)3]3– = Pb[C3H5O(COO)3]– (1)

Pb2+ + [(CH2COO)2N(CH2)2N(CH2COO)2]4– = Pb[(CH2COO)2N(CH2)2N(CH2COO)2]2– (2)

are evaluated as 7.28 ± 0.18 (2σ) and 20.00 ± 0.20 (2σ), respectively, with a set of Pitzer parameters describing the specific interactions in NaCl and MgCl2 media. Based on these parameters, the interactions of lead with citrate and EDTA in various low-temperature environments can be accurately modelled.
Significant quantities of water are produced during enhanced oil recovery, making these “produced water” streams attractive candidates for treatment and reuse. However, high concentrations of dissolved silica raise the propensity for fouling. In this paper, we report the design and economic analysis of a new ion exchange process using calcined hydrotalcite (HTC) to remove silica from water. This process improves upon known technologies by minimizing sludge product, reducing process fouling, and lowering energy use. Process modeling outputs included raw material requirements, energy use, and the minimum water treatment price (MWTP). Monte Carlo simulations quantified the impact of uncertainty and variability in process inputs on MWTP. These analyses showed that cost can be significantly reduced if the HTC materials are optimized. Specifically, R&D improving HTC reusability, silica binding capacity, and raw material price can reduce MWTP by 40%, 13%, and 20%, respectively. Optimizing geographic deployment further improves cost competitiveness.
The objective of the Crystalline Disposal R&D control account is to advance our understanding of long-term disposal of used fuel in crystalline rocks and to develop necessary experimental and computational capabilities to evaluate various disposal concepts in such media.
Current models for hydrogen embrittlement rely on adjustable parameters to correct for uncertainties in crack-tip stress fields and the resulting hydrogen concentrations. Techniques are needed to quantify these concentrations ahead of crack tips in mechanically loaded materials, providing data for model calibration and validation. The goal of this work was to establish advanced analytical techniques to detect and quantitatively measure hydrogen ahead of cracks in stressed solids. Two advanced analytical techniques, Kelvin probe force microscopy (KPFM) and nuclear reaction analysis (NRA), were explored to evaluate their feasibility for providing qualitative and quantitative hydrogen-concentration fields in geometries designed to be 'loaded' while under observation. The feasibility of the KPFM technique for detecting hydrogen was evaluated using electrochemically precharged hydrogen as well as a mixed hydrogen gas atmosphere. The KPFM technique was able to detect the presence of elevated stress and hydrogen concentrations ahead of a tensile-loaded crack tip. The results suggest that KPFM is a viable technique for qualitatively imaging changes in stress and hydrogen concentrations on the scale needed to inform predictive models. KPFM could be used to resolve local stress and hydrogen variations associated with hydrogen traps or different phases, which requires sensitive measurements at the micron scale. NRA provided quantitative measurements of the hydrogen isotope deuterium ahead of a tensile-loaded notch; however, vacancy formation due to the incident high-energy 3He beam overwhelmed the stress-assisted enhancement of deuterium concentrations, such that the effect of stress was overshadowed in this analysis. Modeling of the chemo-mechanical hydrogen concentration change was used to verify this observation.
The energy density of nonaqueous redox flow batteries is often limited by the concentration of the redox-active species soluble in solution. A possible route to increasing this energy density is through the use of energy-dense solid materials such as polyoxometalates, LiFePO4, or LixTiO2. These solid materials can be contained in canisters through which an electrolyte with dissolved redox-active species is flowed. The redox potentials of the flowing species are chosen specifically so that they mediate the chemical reduction and oxidation of the solid components. This strategy is advantageous in that it allows for independent optimization of the flow electrolyte (e.g., for low viscosity and high charging rate) and the solid energy-storing media (e.g., for high energy density). This report summarizes results using a variety of redox-active organic and metalorganic species to mediate the oxidation and reduction of polyoxometalate and Li-ion battery chemistries in a redox flow battery system.
Deep borehole disposal (DBD) has been suggested as an option for disposing of spent nuclear fuel in a number of countries, including several countries that are subject to international safeguards. DBD presents some distinct challenges for safeguards compared to a conventional mined geological repository (MGR), including the ability to verify declared design information about the borehole. The ability to verify a borehole's design is crucial for assuring that spent fuel or other accountable nuclear materials are disposed as declared in a borehole of known and verifiable design. This study reviews existing commercial off-the-shelf (COTS) borehole inspection tools currently used by the drilling industry and evaluates how well those COTS inspection tools can meet the potential needs and requirements of Design Information Verification (DIV) inspections for international safeguards. The study identifies several promising COTS borehole inspection tools that might be used for DIV safeguards inspections and recommends possible modifications and future testing.
Progress toward quantitative measurements and simulations of 3D, temporally resolved, aerodynamically induced liquid atomization is reported. Columns of water and galinstan (a liquid metal at room temperature) are subjected to a step change in relative gas velocity within a shock tube. Breakup morphologies are shown to closely resemble previous observations of spherical drops. The 3D position, size, and velocity of secondary fragments are quantified by a high-speed digital inline holography (DIH) system developed for this measurement campaign. For the first time, breakup dynamics are temporally resolved at 100 kHz close to the atomization zone where secondary drops are highly non-spherical. Experimental results are compared to interface-capturing simulations using a combined level-set moment-of-fluid (CLSMOF) approach. Initial simulation results show good agreement with observed breakup morphologies and rates of deformation.
Graphs are widely used to model relationships in a wide variety of domains such as sociology, bioinformatics, infrastructure, and the WWW, to name a few. One key observation is that while real-world graphs are often globally sparse, they are locally dense. In other words, the average degree is often quite small (say, at most 10 in a million-vertex graph), but vertex neighborhoods are often dense. Finding dense subgraphs is a critical aspect of graph mining. It has been used for finding communities and spam link farms in web graphs, graph visualization, real-time story identification, DNA motif detection in biological networks, finding correlated genes, epilepsy prediction, finding price value motifs in financial data, graph compression, distance query indexing, and increasing the throughput of social networking site servers. However, most standard formulations of this problem (like clique, quasi-clique, and k-densest subgraph) are NP-hard. Furthermore, current dense subgraph finding algorithms usually optimize some objective and find only a few such subgraphs without providing any structural relations, whereas the goal is rarely to find the "true optimum" but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine relationships among them. In this project, we first aim to devise algorithms, and provide three implementations with visualizations, to find the hierarchy among dense subgraphs, and then to understand the structure of the hierarchy to gain more insight into the hidden patterns in real-world networks. Another important aspect of graph analysis is the temporal nature of networks. Networks evolve over time, and in many applications data arrives at high velocity, so it is important to design algorithms that can process data efficiently. We report three main results toward identifying dense structures in large evolving graphs. First, we show how the hierarchical connectedness structure can be maintained efficiently, where connectedness is defined by increasing levels of connectivity strength. Next, we show how dense structures can be identified in bipartite graphs without building projection graphs. Finally, we present a new method for peeling algorithms. This new approach avoids the sequential nature of peeling algorithms and is amenable to parallelization, which is crucial for processing high-velocity data.
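For reference, the sketch below implements the classic sequential peeling (core decomposition) baseline that the new method seeks to parallelize; the toy graph and adjacency-dictionary input format are assumed for illustration.

```python
from collections import defaultdict

def core_decomposition(adj):
    """Classic peeling: repeatedly remove a minimum-degree vertex; the
    largest degree seen at removal time is each vertex's core number.
    This is the sequential baseline, not the project's new parallel
    approach."""
    degree = {v: len(ns) for v, ns in adj.items()}
    buckets = defaultdict(set)
    for v, d in degree.items():
        buckets[d].add(v)
    core, removed, k = {}, set(), 0
    for _ in range(len(adj)):
        d = min(d for d, b in buckets.items() if b)   # current minimum degree
        k = max(k, d)
        v = buckets[d].pop()
        core[v] = k
        removed.add(v)
        for u in adj[v]:                              # update surviving neighbors
            if u not in removed:
                buckets[degree[u]].discard(u)
                degree[u] -= 1
                buckets[degree[u]].add(u)
    return core

# Toy usage: a triangle with a pendant vertex; the pendant lands in the
# 1-core and the triangle vertices in the 2-core.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(core_decomposition(adj))
```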
A three-year LDRD was undertaken to examine the feasibility of using magnetic sensing to determine flows within sealed vessels at high temperatures and pressures. Uniqueness proofs were developed for tracking single magnetic particles with multiple sensors. Experiments demonstrated the ability to track up to three dipole particles undergoing rigid-body rotational motion. Temperature was wirelessly monitored using magnetic particles in static and predictable motions. Finally, high-speed vibrational motion was tracked using magnetic particles. Ideas for future work include using small particles to measure vorticity and better calibration methods for tracking multiple particles.
The Experimental Breeder Reactor II (EBR-II) used fuel with a layer of sodium surrounding the uranium-zirconium fuel to improve heat transfer. Disposing of this EBR-II used fuel in a geologic repository without treatment is not prudent because of the potentially energetic reaction of the sodium with water. In 2000, the US Department of Energy decided to treat the EBR-II sodium-bonded used fuel in an electrorefiner (ER), which produces a metallic waste, mostly from the cladding. The salt remaining in the ER contains most of the actinides and fission products. Two baseline waste forms were proposed for disposal in a mined repository: the metallic waste, which was to be cast into ingots, and the ER salt waste, which was to be further treated to produce a ceramic waste form. However, alternative disposal pathways for the metallic and salt waste streams are being investigated that may reduce this complexity. For example, performance assessments show that both mined repositories in salt and deep boreholes in basement crystalline rock can easily accommodate the ER salt waste without treating it to form a ceramic waste form. Hence the focus of a direct disposal option, as described herein, is now on the feasibility of packaging the ER salt waste in the near term such that it can be transported to a repository in the future without repackaging. A vessel for direct disposal of ER salt waste has been previously proposed and designed, and a prototype manufactured, based on desirable features for use in the hot cell. The reported analysis focused on the feasibility of transporting this proposed vessel and whether any issues would suggest that a smaller or larger size is more appropriate. Specifically, three issues are addressed: (1) the shielding necessary to reduce doses to acceptable levels; (2) the criticality potential and the ease with which it can be shown to be inconsequential; and (3) the temperatures of the containers in relation to acceptable cask limits. The generally positive results demonstrate that direct disposal of the ER salt waste in the proposed packaging is feasible without the need to secure funding to modify the facility.
This report is an outcome of the ASC ATDM Level 2 Milestone 6015: Asynchronous Many-Task Software Stack Demonstration. It comprises a summary and in-depth analysis of DARMA and a DARMA-compliant Asynchronous Many-Task (AMT) runtime software stack. Herein, performance and productivity of the overall approach are assessed on benchmarks and proxy applications representative of the Sandia ATDM applications. As part of the effort to assess the perceived strengths and weaknesses of AMT models compared to more traditional methods, experiments were performed on ATS-1 (Advanced Technology Systems) test bed machines and Trinity. In addition to productivity and performance assessments, this report includes findings on the generality of DARMA's backend API as well as findings on interoperability with node-level and network-level system libraries. Together, this information provides a clear understanding of the strengths and limitations of the DARMA approach in the context of Sandia's ATDM codes, to guide our future research and development in this area.
The quality of automatic detections from sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings, yet achieving superior automatic detection of events is closely tied to these parameters. We present an automated sensor tuning (AST) system that tunes effective parameter settings for each sensor detector to the current state of the environment by leveraging cooperation within a neighborhood of sensors. After a stabilization period, the AST algorithm can adapt in near real time to changing conditions and automatically self-tune a signal detector to identify (detect) only signals from events of interest. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Our current work focuses on reducing false signal detections early in the seismic signal processing pipeline, which leads to fewer false events and has a significant impact on reducing analyst time and effort. Applicable both to boosting the performance of existing sensors and to new sensor deployments, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. Using ground truth on detections from a seismic sensor network monitoring the Mount Erebus Volcano in Antarctica, we show that AST increases the probability of detection while decreasing false alarms.
Razorback is a research reactor transient analysis computer code designed to simulate the operation of a research reactor (such as Sandia National Laboratories' Annular Core Research Reactor (ACRR)). The code provides a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This input manual describes how an input file is composed and facilitates an understanding of the various code input parameters. The makeup of the various code output files is also described. This manual also provides instructions for the installation and setup of the code, and how to report bugs and/or errors.
This report contains work completed by a group of student interns during the summer of 2017. Under the guidance of Ryan Coe, Aubrey Eckert-Gallup, and Nevin Martin, a series of interrelated projects were completed on topics relating to extreme response and survival analysis of wave energy converters (WECs). Jarred Canning studied long-term design response analysis methods for WECs. Sam Edwards studied how variation in the selection of an environmental contour affects the characterization of WEC response in extreme conditions. Sam also led the integration of various components of this report and overall editing. Tyler Esterly produced a catalog of analyses for different ocean sites. Bibiana Seng studied clustering analyses for comparing the wave environments of different ocean sites. Lori Smith performed a comparison between analyses conducted using spectral wave data and analyses using deterministic time-domain wave data. William ("Zach") Stuart studied the sensitivity and convergence of environmental contour methods.
This report describes a model of the displacement of one hydrogen isotope within a metal hydride tube by a different isotope in the gas phase that is blown through the tube. The model incorporates only the most basic parameters to make a clear connection to the theory of open-tube gas chromatography, and to provide a simple description of how the behavior of the system scales with controllable parameters such as gas velocity and tube radius. A single tube can be seen as a building block for more complex architectures that provide higher molar flow rates or other advanced design goals.
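In the spirit of the open-tube chromatography analogy, the sketch below evaluates a generic retarded-front breakthrough-time relation to show how displacement time scales with gas velocity and tube radius. The capacity-factor formula and all numerical values are textbook chromatography assumptions, not the report's calibrated model.

```python
# Minimal scaling sketch in the spirit of open-tube gas chromatography:
# a displacing isotope front travels at the gas velocity v, slowed by the
# capacity factor k', which measures how much isotope is held in the
# hydride layer on the tube wall relative to the gas phase.  The relations
# and numbers below are generic chromatography assumptions.
def breakthrough_time(length_m, velocity_m_s, radius_m, film_m, partition_K):
    k_prime = 2.0 * partition_K * film_m / radius_m   # stationary/mobile ratio
    return length_m * (1.0 + k_prime) / velocity_m_s  # retarded front arrival

# Doubling the gas velocity halves the breakthrough time; shrinking the
# tube radius increases k' and delays the front.
for v in (0.1, 0.2):
    for r in (2e-3, 1e-3):
        t = breakthrough_time(length_m=1.0, velocity_m_s=v, radius_m=r,
                              film_m=1e-4, partition_K=50.0)
        print(f"v={v} m/s, r={r*1e3:.0f} mm -> breakthrough ~ {t:.0f} s")
```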
This simple Microgrid Design Toolkit (MDT) use case will provide you with an example of a basic microgrid design. It will introduce basic principles of using the MDT islanded-mode optimization by modifying a baseline microgrid design and performing an analysis of the results. Please reference the MDT User Guide (SAND2017-9374) for detailed instructions on how to use the tool.
This simple Microgrid Design Toolkit (MDT) use case will provide you with an example of performing microgrid sizing by identifying the types and quantities of technology to be purchased for use in a microgrid. It will introduce basic principles of using the MDT microgrid sizing capability by comparing the results of two microgrids in two different markets. Please reference the MDT User Guide (SAND2017-9374) for detailed instructions on how to use the tool.
This report presents multi-phase modeling approaches developed for simulating rubble fire scenarios similar to a large-scale rubble pool fire test at Sandia National Laboratories using composite materials and jet fuel. The rubble pool fire test burned oddly shaped combustible solid objects submerged in liquid fuel. As an intermediate step toward a full-scale rubble fire simulation, various model improvement tasks were performed. For modeling solid decomposition, a multi-step degradation model was used for canonical verification problems and the Chemical Percolation Devolatilization (CPD) approach was implemented. Capabilities of the Lagrangian particle approach have been extended such that a group of particles may represent a solid bulk. For the gas-liquid interface, the volume-of-fluid (VOF) technique was implemented and the relevant physics were added. The developed tools offer a potential for simulating three-phase (gas, liquid, and solid) combustion applications.
Our team has investigated a series of soluble coordination complexes for use as tags to monitor underground fluid flows in reservoirs. While most of the metal-ligand (M-L) complexes were based on the dianionic salen family of ligands, conceptually other ligands such as porphyrins or phthalocyanines could be used with similar success. Detection and identification of these species in solution were performed by inductively coupled plasma (ICP) or Raman/resonance Raman (rR) spectroscopy. The preparation of a large number of new M-L salen complexes was accomplished. Complexes were prepared that were soluble in either water or hydrocarbons to allow for flexibility in use. Unambiguous identification of these complexes allowed for meaningful molecular dynamics (MD) calculations to be performed, so that the attraction of the M-L complexes to either the rock formation or the liquid media could be evaluated. The use of soluble M-L species was found to avoid issues of rock deposition.
Imaging techniques for the analysis of porous structures have revolutionized our ability to quantitatively characterize geomaterials. Digital representations of rock from CT images and physics modeling based on these pore structures provide the opportunity to further advance our quantitative understanding of fluid flow, geomechanics, and geochemistry, and the emergence of coupled behaviors. Additive manufacturing, commonly known as 3D printing, has revolutionized production of custom parts with complex internal geometries. For the geosciences, recent advances in 3D printing technology may be co-opted to print reproducible porous structures derived from CT imaging of actual rocks for experimental testing. The use of 3D-printed microstructures allows us to surmount typical problems associated with sample-to-sample heterogeneity that plague rock physics testing and to test material response independent of pore-structure variability. Together, imaging, digital rocks, and 3D printing potentially enable a new workflow for understanding coupled geophysical processes in a real but well-defined setting, circumventing typical issues associated with reproducibility and enabling full characterization and thus connection of physical phenomena to structure. Here we report on our research exploring the possibilities that these technologies can bring to the geosciences for coupled multiscale experimental and numerical analysis using 3D-printed fractured rock specimens.
Visual clutter metrics play an important role in both the design of information visualizations and in the continued theoretical development of visual search models. In visualization design, clutter metrics provide a mathematical prediction of the complexity of the display and the difficulty associated with locating and identifying key pieces of information. In visual search models, they offer a proxy to set size, which represents the number of objects in the search scene, but is difficult to estimate in real-world imagery. In this article, we first briefly review the literature on clutter metrics and then contribute our own results drawn from studies in two security-oriented visual search domains: airport X-ray imagery and radar imagery. We analyze our results with an eye toward bridging the gap between the scene features evaluated by current clutter metrics and the features that are relevant to our security tasks. The article concludes with a brief discussion of possible research steps to close this gap.
We developed an experimental system that combines triaxial rock deformation and mass spectrometry to measure noble gas flow before, during, and after rock fracture. Geogenic noble gas is released during triaxial deformation (in real time) and is related to volume strain and acoustic emissions. The noble gas release thus represents a signal of deformation during its stages of development. Noble gases are contained in most crustal rock at inter- and intragranular sites. Their release during natural and man-made stress and strain changes represents a signal of deformation in brittle and semi-brittle conditions. The noble gas composition depends on lithology, geologic history, age of the rock, and fluids present. Uranium, thorium, and potassium-40 concentrations in the rocks also affect the production of radiogenic noble gases (4He, Ar). Noble gas emission and its relationship to crustal processes have been studied for many years in the geologic community, including correlations to tectonic velocities, qualitative estimates of deep permeability from surface measurements, fingerprints of nuclear weapon detonation, and potential precursory signals to earthquakes attributed to gas release due to pre-seismic stress, dilatancy, and/or fracturing of the rock. Helium emission has been shown to be a precursor of volcanic activity. We present empirical results and relationships of specimen strain, microstructural evolution, acoustic emissions, and noble gas release from laboratory triaxial experiments performed on a granite, a young basalt, bedded salt, and a marine shale.
Lotfi, Hossein; Li, Lu; Lei, Lin; Yang, Rui Q.; Klem, John F.; Johnson, Matthew B.
We report on the characterization of narrow-bandgap (Eg ≈ 0.4 eV, at 300 K) interband cascade thermophotovoltaic (TPV) devices with InAs/GaSb/AlSb type-II superlattice absorbers. Two device structures with different numbers of stages (two and three) were designed and grown to study the influence of the number of stages and absorber thicknesses on the device performance at high temperatures (300-340 K). Maximum power efficiencies of 9.6% and 6.5% with open-circuit voltages of 800 and 530 mV were achieved in the three- and two-stage devices at 300 K, respectively. These results validate the benefits of a multiple-stage architecture with thin individual absorbers for efficient conversion of infrared radiation into electricity from low-temperature heat sources. Additionally, we developed an effective characterization method, based on an adapted version of Suns-Voc technique, to extract the device series and shunt resistance in these TPV cells.
International Conference on Transparent Optical Networks
Chow, Weng W.; Kreinberg, S.; Wolters, J.; Schneider, C.; Gies, C.; Jahnke, F.; Hofling, S.; Kamp, M.; Reitzenstein, S.
We report on a theoretical and experimental study performed on AlAs/GaAs micropillar cavities containing InGaAs quantum dots as active medium. The devices have the interesting property of having almost all emission (spontaneous and stimulated) channelled into one cavity mode. They are excellent experimental platforms for studying laser physics because their emission behaviours question our understanding of lasing action. Analysis of spectrally-resolved photoluminescence and photon autocorrelation will be discussed and a physically definitive criterion for lasing applicable to all systems will be presented.
A systematic approach is presented for increasing the concentration of redox-active species in electrolytes for nonaqueous redox flow batteries (RFBs). Starting with an ionic liquid consisting of a metal coordination cation (MetIL), ferrocene-containing ligands and iodide anions are substituted incrementally into the structure. While chemical structures can be drawn for molecules with 10 m redox-active electrons (RAE), practical limitations such as melting point and phase stability constrain the structures to 4.2 m RAE, a 2.3× improvement over the original MetIL. Dubbed “MetILs3,” these ionic liquids possess redox activity in the cation core, ligands, and anions. Throughout all compositions, infrared spectroscopy shows the ethanolamine-based ligands primarily coordinate to the Fe2+ core via hydroxyl groups. Calorimetry conveys a profound change in thermophysical properties, not only in melting temperature but also in suppression of a cold crystallization only observed in the original MetIL. Square wave voltammetry reveals redox processes characteristic of each molecular location. Testing a laboratory-scale RFB demonstrates Coulombic efficiencies >95% and increased voltage efficiencies due to more facile redox kinetics, effectively increasing capacity 4×. Application of this strategy to other chemistries, optimizing melting point and conductivity, can yield >10 m RAE, making nonaqueous RFB a viable technology for grid scale storage.
The National Nuclear Security Administration (NNSA) created the Minority Serving Institution Partnership Program (MSIPP) to (1) align investments in university capacity and workforce development with the NNSA mission, developing the skills and talent needed for NNSA's enduring technical workforce at the laboratories and production plants, and (2) enhance research and education at under-represented colleges and universities. Out of this effort, MSIPP launched a new program in early FY17 focused on Tribal Colleges and Universities (TCUs). The following report summarizes the project focus and status during this reporting period.
This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to a hypothetical nuclear device with the same output energy spectrum as the Fat Man device. We use a frequency-domain method based on transmission line theory, implemented in a code we call ATLOG (Analytic Transmission Line Over Ground). Select results are compared to ones computed using the circuit simulator Xyce.
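For context, the sketch below performs the kind of frequency-domain transmission-line bookkeeping involved: per-unit-length parameters give a propagation constant and characteristic impedance, from which the input impedance of a terminated line follows. It omits the ground return and source-region field coupling that ATLOG models, and all parameter values are assumed.

```python
import numpy as np

# Generic frequency-domain treatment of a lossy transmission line via the
# telegrapher's equations: propagation constant gamma and characteristic
# impedance Zc from per-unit-length R, L, G, C, then the input impedance
# of a terminated line.  This is only a generic illustration; it does not
# include the conducting-ground return path or EMP field coupling.
length = 100.0                                 # line length, m
R, Lp, G, C = 0.1, 1.0e-6, 1.0e-9, 1.0e-11     # per-unit-length parameters (assumed)
Z_load = 50.0                                  # termination, ohm

f = np.logspace(3, 8, 6)                       # 1 kHz to 100 MHz
w = 2.0 * np.pi * f
Z_series = R + 1j * w * Lp
Y_shunt = G + 1j * w * C
gamma = np.sqrt(Z_series * Y_shunt)            # propagation constant
Zc = np.sqrt(Z_series / Y_shunt)               # characteristic impedance

# Input impedance of the line terminated in Z_load.
Z_in = Zc * (Z_load + Zc * np.tanh(gamma * length)) / (Zc + Z_load * np.tanh(gamma * length))
for fi, zi in zip(f, Z_in):
    print(f"f = {fi:9.1e} Hz   |Zin| = {abs(zi):8.2f} ohm")
```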
This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to a hypothetical nuclear device with the same output energy spectrum as the Fat Man device. We use a time-domain method based on transmission line theory that allows accounting for time-varying air conductivities. We implemented this method in a code we call ATLOG (Analytic Transmission Line Over Ground). Results are compared to the frequency-domain version of ATLOG previously developed and, in some instances, to the circuit simulator Xyce.
This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to the Bell Labs electromagnetic pulse excitation. We use both a frequency-domain and a time-domain method based on transmission line theory through a code we call ATLOG (Analytic Transmission Line Over Ground). Results are compared to the circuit simulator Xyce for selected cases.
This report details the comparison of ATLOG modeling results for the response of a finite-length dissipative aerial conductor interacting with a conducting ground to a measurement taken in November 2016 at the High-Energy Radiation Megavolt Electron Source (HERMES) facility. We use the ATLOG time-domain method based on transmission line theory. Good agreement is observed between simulations and experiments.
We present the design and performance of a proof-of-concept 32-channel material identification system. Our system is based on the energy-dependent attenuation of fast neutrons for four elements: hydrogen, carbon, nitrogen, and oxygen. We describe a new approach to obtaining a broad range of neutron energies to probe a sample, as well as our technique for reconstructing the molar densities within a sample. The system's performance as a function of time-of-flight energy resolution is explored using a Geant4-based Monte Carlo simulation. Our results indicate that, with the expected detector response of our system, we will be able to determine the molar density of all four elements to within 20-30% accuracy in a two-hour scan time. In many cases this error is systematically low, so the ratio between elements is more accurate. This degree of accuracy is enough to distinguish, for example, a sample of water from a sample of pure hydrogen peroxide: the ratio of oxygen to hydrogen is reconstructed to within ±0.5% of the true value. Finally, with future algorithm development that accounts for backgrounds caused by scattering within the sample itself, the accuracy of molar densities, not ratios, may improve to the 5-10% level for a two-hour scan time. Experimental performance was evaluated with various thicknesses of polyethylene. The detector response in terms of energy, particle identification, and timing is presented as well.
This project investigated a recently patented Sandia technology known as visible light Laser Voltage Probing (LVP). In this effort we carefully prepared well-understood and well-characterized samples for testing. These samples were then operated across a range of configurations to minimize the possibility of superposition of multiple photon-carrier interactions as data were taken with conventional and visible light LVP systems. Data consisted of LVP waveforms and Laser Voltage Images (LVI). Visible light (633 nm) LVP data were compared against 1319 nm and 1064 nm conventional LVP data to better understand the similarities and differences in mechanisms for all wavelengths of light investigated. The full text can be obtained by contacting the project manager, Ed Cole, or the Cyber IA lead, Justin Ford.
The objectives of this project are to elucidate degradation mechanisms, decomposition products, and abuse response for next-generation silicon-based anodes; to understand the contribution of various materials properties and cell build parameters to thermal runaway enthalpies; and to quantify the contributions from cell parameters such as particle size, composition, state of charge (SOC), and electrolyte-to-active-materials ratio.
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
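The toy Monte Carlo sketch below illustrates the uncertainty-propagation problem on a trivial one-dimensional amplitude model with an assumed attenuation distribution; it is intended only to contrast the brute-force MC approach with the SPDE strategy, not to represent Paracousti-UQ or its acoustic solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Monte Carlo illustration of the uncertainty-propagation problem that
# Paracousti-UQ addresses with stochastic PDEs: propagate an assumed
# distribution of medium attenuation through a trivial 1-D spreading-plus-
# absorption model and inspect the statistics of the received level.  The
# model and all numbers are placeholders, not the 3-D acoustic solver.
range_m = 2000.0
source_level = 180.0                              # dB re 1 uPa at 1 m (assumed)
alpha = rng.normal(0.01, 0.002, size=20000)       # dB/m attenuation (assumed)
alpha = np.clip(alpha, 0.0, None)

received = source_level - 20.0 * np.log10(range_m) - alpha * range_m
print(f"mean received level: {received.mean():.1f} dB")
print(f"std of received level: {received.std():.1f} dB")
```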
Materials that incorporate hydrogen and helium isotopes are of great interest at Sandia and throughout the NNSA and DOE. The Ion Beam Lab at SNL-NM has invented techniques using micron- to mm-size MeV ion beams to recoil these light isotopes (Elastic Recoil Detection, or ERD) that can make such measurements very accurately. However, there are many measurements that would benefit NW and DOE programs but require much better resolution, such as the distribution of H isotopes (and 3He) in individual grains of materials relevant to TPBARs, H and He embrittlement of weapon components important to Tritium Sustainment Programs, issues with GTSs, batteries… Higher resolution would also benefit the field of materials science in general. To address these and many other issues, nm-scale lateral resolution is required. This LDRD demonstrated that neutral H atoms could be recoiled through a thin film by 70 keV electrons and detected with a Channeltron electron multiplier (CEM). The electrons were steered away from the CEM by strong permanent magnets. This proved the feasibility that the high-energy electrons from a transmission electron microscope (TEM) can potentially be used to recoil and subsequently detect (e-ERD), quantify, and map the concentration of H and He isotopes with nm resolution. This discovery could lead to a TEM-based H/He-isotope nanoprobe with 1000x higher resolution than currently available.
As part of analysis support for FCTO, Sandia assesses the factors that influence the future of FCEVs and hydrogen in the US vehicle fleet. Using ParaChoice, we model competition between FCEVs, conventional vehicles, and other alternative vehicle technologies in order to understand the drivers and sensitivities of FCEV adoption. ParaChoice leverages existing tools such as Autonomie (Moawad et al., 2016), AEO (U.S. Energy Information Administration, 2016), and the Macro System Model (Ruth et al., 2009) in order to synthesize a complete picture of the co-evolution of vehicle technology development, energy price evolution, and hydrogen production and pricing with consumer demand for vehicles and fuel. We then assess the impacts of FCEV market penetration and hydrogen use on greenhouse gas (GHG) emissions and petroleum consumption, providing context for the role of policy, technology development, infrastructure, and consumer behavior on the vehicle and fuel mix through parametric and sensitivity analyses.
For three years, Sandia National Laboratories, the Georgia Institute of Technology, and the University of Illinois at Urbana-Champaign investigated a smart grid vision in which renewable-centric Virtual Power Plants (VPPs) provide ancillary services with interoperable distributed energy resources (DER). This team researched, designed, built, and evaluated real-time VPP designs incorporating DER forecasting, stochastic optimization, controls, and cyber security to construct a system capable of delivering reliable ancillary services, which have traditionally been provided by large power plants or other dedicated equipment. VPPs have become possible through an evolving landscape of state and national interconnection standards, which now require DER to include grid-support functionality and communications capabilities. This makes it possible for third-party aggregators to provide a range of critical grid services such as voltage regulation, frequency regulation, and contingency reserves to grid operators. This paradigm (a) enables renewable energy, demand response, and energy storage to participate in grid operations and provide grid services, (b) improves grid reliability by providing additional operating reserves for utilities, independent system operators (ISOs), and regional transmission organizations (RTOs), and (c) removes barriers to high penetrations of renewables by providing services with photovoltaic and wind resources that traditionally were the jobs of thermal generators. It is therefore believed that VPP deployment will have far-reaching positive consequences for grid operations and may provide a robust pathway to high penetrations of renewables on US power systems. In this report, we design VPPs to provide a range of grid-support services and demonstrate one VPP that simultaneously provides bulk-system energy and ancillary reserves.
This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPPs). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling of all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived, and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solution of the distributed OPF to provide optimal control commands to the DERs of the VPP.
A variety of Earth surface and atmospheric sources generate low frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at the Earth's surface. The experiments have been limited to at most two stations at altitude, limiting their utility in acoustic event detection and localization. We describe the deployment of five drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based explosions as well as the ocean microbarom while traveling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic broad band signals similar to those seen on previous flights in the same region were noted as well, but their source remains unclear. Background noise levels were commensurate with those on infrasound stations in the International Monitoring System (IMS) below 2 seconds, but sensor self noise appears to dominate at higher frequencies.
Potentially, radiation detectors at ports of entry could be mounted on container gantry crane spreaders to monitor cargo containers entering and leaving the country. These detectors would have to withstand the extreme physical environment experienced by these spreaders during normal operations. Physical shock data from the gable ends of a spreader were recorded during the loading and unloading of a cargo ship with two Lansmont SAVER 9X30 units (with padding) and two PCB Piezotronics model 340A50 accelerometers (hard mounted). Physical shocks in the form of rapid acceleration were observed in all accelerometer units with values ranging from 0.20 g’s to 199.99 g’s. The majority of the shocks for all the Lansmont and PCB accelerometers were below 50 g’s. The Lansmont recorded mean shocks of 21.83 ± 13.62 g’s and 24.78 ± 11.49 g’s while the PCB accelerometers experienced mean shocks of 34.39 ± 25.51 g’s and 41.77 ± 22.68 g’s for the landside and waterside units, respectively. Encased detector units with external padding should be designed to withstand at least 200 g’s of acceleration without padding and typical shocks of 30 g’s with padding for mounting on a spreader.
The presentation documented the technical approach of the team and summary of the results with sufficient detail to demonstrate both the value and the completion of the milestone. A separate SAND report was also generated with more detail to supplement the presentation.
The overall goal of this work was to utilize the Advanced Power Management (APM) capabilities of the ATS-1 Trinity platform to understand the power usage behavior of ASC workloads running on Trinity and gain insight into the potential for utilizing power management techniques on future ASC platforms.
This report summarizes the work performed as part of a FY17 CSSE L2 milestone to investigate the power usage behavior of ASC workloads running on the ATS-1 Trinity platform. Techniques were developed to instrument application code regions of interest using the Power API together with the Kokkos profiling interface and Caliper annotation library. Experiments were performed to understand the power usage behavior of mini-applications and the SNL/ATDM SPARC application running on ATS-1 Trinity Haswell and Knights Landing compute nodes. A taxonomy of power measurement approaches was identified and presented, providing a guide for application developers to follow. Controlled scaling study experiments were performed on up to 2048 nodes of Trinity along with smaller scale experiments on Trinity testbed systems. Additionally, power and energy system monitoring information from Trinity was collected and archived for post analysis of "in-the-wild" workloads. Results were analyzed to assess the sensitivity of the workloads to ATS-1 compute node type (Haswell vs. Knights Landing), CPU frequency control, node-level power capping control, OpenMP configuration, Knights Landing on-package memory configuration, and algorithm/solver configuration. Overall, this milestone lays groundwork for addressing the long-term goal of determining how to best use and operate future ASC platforms to achieve the greatest benefit subject to a constrained power budget.
Pore-scale aperture effects on flow in pore networks were studied in the laboratory to provide a parameterization for use in transport models. Four cases were considered: regular and irregular pillar/pore alignment, each with and without an aperture. The velocity field of each case was measured and simulated, providing quantitatively comparable results. Two aperture-effect parameterizations were considered: permeability and transmission. Permeability values varied by an order of magnitude between the cases with and without apertures. However, transmission did not correlate with permeability. Despite having much greater permeability, the regular aperture case permitted less transmission than the regular case. Moreover, both irregular cases had greater transmission than the regular cases, a difference not supported by the permeabilities. Overall, these findings suggest that pore-scale aperture effects on flow through a pore network may not be adequately captured by properties such as permeability for applications interested in determining particle transport volume and timing.
Metal organic frameworks (MOFs) are extended, nanoporous crystalline compounds consisting of metal ions interconnected by organic ligands. Their synthetic versatility suggests a disruptive class of opto-electronic materials with a high degree of electrical tunability and without the property-degrading disorder of organic conductors. In this project we determined the factors controlling charge and energy transport in MOFs and evaluated their potential for thermoelectric energy conversion. Two strategies for achieving electronic conductivity in MOFs were explored: (1) using redox-active 'guest' molecules introduced into the pores to dope the framework via charge-transfer coupling (Guest@MOF), and (2) metal organic graphene analogs (MOGs) with dispersive band structures arising from strong electronic overlap between the MOG metal ions and their coordinating linker groups. Inkjet deposition methods were developed to facilitate integration of the guest@MOF and MOG materials into practical devices.
Nelsen, Nicholas H.; Kolb, James D.; Kulkarni, Akshay G.; Sorscher, Zachary; Habing, Clayton D.; Mathis, Allen; Beller, Zachary J.
Mechanical component response to shock environments must be predictable in order to ensure reliability and safety. Whether the shock input results from an accidental drop during transportation or a projectile impact, the system must irreversibly transition into a safe state that is incapable of triggering the component. With this critical need in mind, the 2017 Nuclear Weapons Summer Product Realization Institute (NW SPRINT) program sought the design of a passive shock failsafe with emphasis on additively manufactured (AM) components. Team Advanced and Exploratory (A&E) responded to the challenge by designing and delivering multiple passive shock sensing mechanisms that activate within a prescribed mechanical shock threshold. These AM failsafe designs were tuned and validated using analytical and computational techniques including the shock response spectrum (SRS) and finite element analysis (FEA). After rapid prototyping, the devices underwent physical shock tests conducted on Sandia drop tables to experimentally verify performance. Keywords: Additive manufacturing, dynamic system, failsafe, finite element analysis, mechanical shock, NW SPRINT, shock response spectrum
Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve these properties over large time and length scales, it is imperative to develop coarse-grained models that retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
In response to the expansion of nuclear fuel cycle (NFC) activities -- and the associated suite of risks -- around the world, this project evaluated systems-based solutions for managing such risk complexity in multimodal and multi-jurisdictional international spent nuclear fuel (SNF) transportation. By better understanding systemic risks in SNF transportation, developing SNF transportation risk assessment frameworks, and evaluating these systems-based risk assessment frameworks, this research illustrated that interdependency between safety, security, and safeguards risks is inherent in NFC activities and can go unidentified when each "S" is independently evaluated. Two novel system-theoretic analysis techniques -- dynamic probabilistic risk assessment (DPRA) and system-theoretic process analysis (STPA) -- provide integrated "3S" analysis to address these interdependencies, and the research results suggest a need -- and provide a way -- to reprioritize United States engagement efforts to reduce global nuclear risks. Lastly, this research identifies areas where Sandia National Laboratories can spearhead technical advances to reduce global nuclear dangers.
Predicting transient effects caused by short-pulse neutron irradiation of electronic devices is an important part of Sandia's mission. For example, predicting the diffusion of radiation-induced point defects is needed within Sandia's Qualification Alternative to the Sandia Pulsed Reactor (QASPR) program since defect diffusion mediates transient gain recovery in QASPR electronic devices. Recently, the semiconductors used to fabricate radiation-hard electronic devices have begun to shift from silicon to III-V compounds such as GaAs, InAs, GaP, and InP. An advantage of this shift is that it allows engineers to optimize the radiation hardness of electronic devices by using alloys such as InGaAs and InGaP. However, the computer codes currently being used to simulate transient radiation effects in QASPR devices will need to be modified since they presume that defect properties (charge states, energy levels, and diffusivities) in these alloys do not change with time. This is not realistic since the energy and properties of a defect depend on the types of atoms near it and, therefore, on its location in the alloy. In particular, radiation-induced defects are created at nearly random locations in an alloy, and the distribution of their local environments -- and thus their energies and properties -- evolves with time as the defects diffuse through the alloy. To incorporate these consequential effects into computer codes used to simulate transient radiation effects, we have developed procedures to accurately compute the time dependence of defect energies and properties and then formulate them within compact models that can be employed in these computer codes. In this document, we demonstrate these procedures for the case of the highly mobile P interstitial (IP) in an InGaP alloy.
In this work, shock-induced reactions in high explosives and their chemical mechanisms were investigated using state-of-the-art experimental and theoretical techniques. Experimentally, ultrafast shock interrogation (USI, an ultrafast interferometry technique) and ultrafast absorption spectroscopy were used to interrogate shock compression and initiation of reaction on the picosecond timescale. The experiments yielded important new data that appear to indicate reaction of high explosives on the timescale of tens of picoseconds in response to shock compression, potentially setting new upper limits on the timescale of reaction. Theoretically, chemical mechanisms of shock-induced reactions were investigated using density functional theory. The calculations generated important insights regarding the ability of several hypothesized mechanisms to account for shock-induced reactions in explosive materials. The results of this work constitute significant advances in our understanding of the fundamental chemical reaction mechanisms that control explosive sensitivity and initiation of detonation.
The purpose of the project was to perform multiscale characterization of low permeability rocks to determine the effect of physical and chemical heterogeneity on the poromechanical and flow responses of shales and carbonate rocks spanning a broad range of heterogeneity. An integrated multiscale imaging of shale and carbonate rocks from nanometer to centimeter scales includes dual focused ion beam-scanning electron microscopy (FIB-SEM), micro computed tomography (micro-CT), optical and confocal microscopy, and 2D and 3D energy dispersive spectroscopy (EDS). In addition, mineralogical mapping and backscattered imaging with nanoindentation testing advanced the quantitative evaluation of the relationship between material heterogeneity and mechanical behavior. The spatial distribution of compositional heterogeneity, anisotropic bedding patterns, and mechanical anisotropy were employed as inputs for brittle fracture simulations using a phase field model. Comparison of experimental and numerical simulations revealed that proper incorporation of additional material information, such as bedding layer thickness and other geometrical attributes of the microstructures, can yield improvements in the numerical prediction of the mesoscale fracture patterns and hence the macroscopic effective toughness. Overall, a comprehensive framework to evaluate the relationship between mechanical response and micro-lithofacial features can allow us to make more accurate predictions of reservoir performance by developing a multi-scale understanding of poromechanical response to coupled chemical and mechanical interactions for subsurface energy related activities.
Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
We have developed two advanced designs of a field-distortion air-insulated spark-gap switch that reduce the size of a linear-transformer-driver (LTD) brick. Both designs operate at 200 kV and a peak current of ~50 kA. At these parameters, both achieve a jitter of less than 2 ns and a prefire rate of ~0.1% over 5000 shots. We have reduced the number of switch parts and assembly steps, which has resulted in a more uniform, design-driven assembly process. We will characterize the performance of tungsten-copper and graphite electrodes, and two different electrode geometries. The new switch designs will substantially improve the electrical and operational performance of next-generation pulsed-power accelerators.
Instrumentation and control of nuclear power is transforming from analog to modern digital assets. These control systems perform key safety and security functions. This transformation is occurring in new plant designs as well as in the existing fleet of plants as the operation of those plants is extended to 60 years. This transformation introduces new and unknown issues involving both digital-asset-induced safety issues and security issues. Traditional nuclear power risk assessment tools and cyber security assessment methods have not been modified or developed to address the unique nature of cyber failure modes and of cyber security threat vulnerabilities. This Lab-Directed Research and Development project has developed a dynamic cyber-risk informed tool to facilitate the analysis of unique cyber failure modes and the time sequencing of cyber faults, both malicious and non-malicious, and to impose those cyber exploits and cyber faults onto a nuclear power plant accident sequence simulator code to assess how cyber exploits and cyber faults could interact with a plant's digital instrumentation and control (DI&C) system and defeat or circumvent a plant's cyber security controls. This was achieved by coupling an existing Sandia National Laboratories nuclear accident dynamic simulator code with a cyber emulytics code to demonstrate real-time simulation of cyber exploits and their impact on automatic DI&C responses. Studying such potential time-sequenced cyber-attacks and their risks (i.e., the associated impact and the associated degree of difficulty to achieve the attack vector) on accident management establishes a technical risk-informed framework for developing effective cyber security controls for nuclear power.
Sintering is a component fabrication process in which powder is compacted by pressing or some other means and then held at elevated temperature for a period of hours. The powder grains bond with each other, leading to the formation of a solid component with much lower porosity, and therefore higher density and higher strength, than the original powder compact. In this project, we investigated a new way of computationally modeling sintering at the length scale of grains. The model uses a high-fidelity, three-dimensional representation with a few hundred nodes per grain. The numerical model solves the peridynamic equations, in which nonlocal forces allow representation of the attraction, adhesion, and mass diffusion between grains. The deformation of the grains is represented through a viscoelastic material model. The project successfully demonstrated the use of this method to reproduce experimentally observed features of material behavior in sintering, including densification, the evolution of microstructure, and the occurrence of random defects in the sintered solid.
Silica is ubiquitous in produced and industrial waters and plays a major disruptive role in water recycle. Herein we have investigated the use of mixed oxides for the removal of silica from these waters and their incorporation into a low-cost and low-energy water purification process. High-selectivity hydrotalcite (HTC, Mg6Al2(OH)16(CO3)•4H2O) is combined in series with high-surface-area active alumina (AA, Al2O3) as the dissolved silica removal media. Batch test results indicated that combined HTC/AA is a more effective method for removing silica from industrial cooling tower waters (CTW) than using HTC or AA separately. The silica uptake via ion exchange on the mixed oxides was confirmed by Fourier transform infrared (FTIR) spectroscopy and energy dispersive spectroscopy (EDS). Furthermore, HTC/AA effectively removes silica from CTW even in the presence of large concentrations of competing anions, such as Cl-, NO3-, HCO3-, CO32-, and SO42-. Consistent with the batch tests, Single Path Flow Through (SPFT) tests with sequential HTC/AA column filtration also showed very high silica removal. A technoeconomic analysis (TEA) was performed in parallel for cost comparisons to existing silica removal technologies.
Nenoff, T.M.; Moore, Sarah E.; Mirchandani, Sera; Karanikola, Vasiliki; Arnold, Robert G.; Saez, Eduardo
Securing additional water sources remains a primary concern for arid regions in both the developed and developing world. Climate change is causing fluctuations in the frequency and duration of precipitation, which can be seen as prolonged droughts in some arid areas. Droughts decrease the reliability of surface water supplies, which forces communities to find alternate primary water sources. In many cases, ground water can supplement the use of surface supplies during periods of drought, reducing the need for above-ground storage without sacrificing reliability objectives. Unfortunately, accessible ground waters are often brackish, requiring desalination prior to use, and underdeveloped infrastructure and inconsistent electrical grid access can create obstacles to groundwater desalination in developing regions. The objectives of the proposed project are to (i) mathematically simulate the operation of hollow fiber membrane distillation systems and (ii) optimize system design for off-grid treatment of brackish water. It is anticipated that methods developed here can be used to supply potable water at many off-grid locations in semi-arid regions including parts of the Navajo Reservation. This research is a collaborative project between Sandia and the University of Arizona.
As the penetration of renewables in distribution systems increases and microgrids are conceived with high penetrations of such generation connected through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, presents models for locating faults in such microgrids, and comments on the protection models that may be considered for microgrids. Simulation studies are reported to justify the models.
Multiphase computational models and tests of falling water droplets on inclined glass surfaces were developed to investigate the physics of impingement and the potential of these droplets to self-clean glass surfaces for photovoltaic modules and heliostats. A multiphase volume-of-fluid model was developed in ANSYS Fluent to simulate the impinging droplets. The simulations considered different droplet sizes (1 mm and 3 mm), tilt angles (0°, 10°, and 45°), droplet velocities (1 m/s and 3 m/s), and wetting characteristics (wetting = 47° contact angle and non-wetting = 93° contact angle). Results showed that the spread factor (maximum droplet diameter during impact divided by the initial droplet diameter) decreased with increasing inclination angle due to the reduced normal force on the surface. The hydrophilic surface yielded greater spread factors than the hydrophobic surface in all cases. With regard to impact forces, the greater surface tilt angles yielded lower normal forces but higher shear forces. Experiments showed that the observed spread factor was significantly larger than the simulated spread factor: observed spread factors were on the order of 5 - 6 for droplet velocities of ~3 m/s, whereas the simulated spread factors were on the order of 2. Droplets were observed to be mobile following impact only for the cases with a 45° tilt angle, which matched the simulations. An interesting observed phenomenon was that, shortly after being released from the nozzle, the water droplet oscillated (like a trampoline) due to the "snapback" caused by surface tension as the droplet detached from the nozzle. This oscillation affected the velocity immediately after release. Future work should evaluate the impact of parameters such as tilt angle and surface wettability on particle/soiling uptake and removal to investigate ways that photovoltaic modules and heliostats can be designed to maximize self-cleaning.
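For reference, the spread factor quoted above can be written compactly; the symbols below are notation introduced here, not taken from the report:

    \[
      \beta \;=\; \frac{D_{\max}}{D_{0}}, \qquad
      \beta_{\text{observed}} \approx 5\text{--}6
      \quad\text{vs.}\quad
      \beta_{\text{simulated}} \approx 2
      \quad (\text{droplet velocity} \sim 3~\text{m/s}),
    \]

where D_max is the maximum droplet diameter during impact and D_0 is the initial droplet diameter.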
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we overview briefly the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
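As a minimal sketch of the POD/LSPG idea described above (illustrative only; the snapshot matrix and linear residual below are placeholders, not anything from SPARC), the reduced basis comes from a truncated SVD of solution snapshots and the reduced state is chosen to minimize the full-order residual in a least-squares sense:

    import numpy as np

    # Hypothetical snapshot matrix: each column is a full-order state at one time.
    rng = np.random.default_rng(0)
    n_dof, n_snaps, k = 1000, 50, 10
    X = rng.standard_normal((n_dof, n_snaps))

    # POD: truncated SVD of the snapshots gives an orthonormal reduced basis Phi.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Phi = U[:, :k]

    # Placeholder linear full-order residual r(w) = A w - b (stands in for the
    # discretized flow residual; a nonlinear residual would use Gauss-Newton).
    A = rng.standard_normal((n_dof, n_dof)) / np.sqrt(n_dof)
    b = rng.standard_normal(n_dof)
    residual = lambda w: A @ w - b

    # LSPG step: choose reduced coordinates q minimizing ||r(Phi q)||_2.
    q, *_ = np.linalg.lstsq(A @ Phi, b, rcond=None)
    w_rom = Phi @ q
    print("reduced residual norm:", np.linalg.norm(residual(w_rom)))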
This progress report describes work done in FY17 at Sandia National Laboratories (SNL) to assess the localized corrosion performance of container/cask materials used in the interim storage of spent nuclear fuel (SNF). Of particular concern is stress corrosion cracking (SCC), by which a through-wall crack could potentially form in a canister outer wall over time intervals that are shorter than possible dry storage times. Work in FY17 refined our understanding of the chemical and physical environment on canister surfaces and evaluated the relationship between that environment and the form and extent of corrosion that occurs. The SNL corrosion work focused predominantly on pitting corrosion, a necessary precursor for SCC, and the process of pit-to-crack transition. SNL is also collaborating with several university partners to investigate SCC crack growth experimentally, providing guidance for the design and interpretation of experiments.
This report catalogues the results of a project exploring the incorporation of organometallic compounds into thermosetting polymers as a means to reduce their residual stress. Various syntheses of polymerizable ferrocene derivatives were attempted with mixed success. Ultimately, a diamine derivative of ferrocene was used as a curing agent for a commercial epoxy resin, where it was found to give similar cure kinetics and mechanical properties in comparison to conventional curing agents. The ferrocene-based material is uniquely able to relax stress above the glass transition, leading to reduced cure stress. We propose that this behavior arises from the fluxional capacity of ferrocene. In support of this notion, nuclear magnetic resonance spectroscopy indicates a substantial increase in chain flexibility in the ferrocene-containing network. Although the utilization of fluxionality is a novel approach to stress management in epoxy thermosets, it is anticipated to have greater impact in radical-cured thermosets and linear polymers.
With the rise of electronic and high-dimensional data, new and innovative feature detection and statistical methods are required to perform accurate and meaningful statistical analysis of datasets that pose unique statistical challenges. In the area of feature detection, much recent research in the computer vision community has focused on deep learning methods, which require large amounts of labeled training data. However, in many application areas, training data are very limited and often difficult to obtain. We develop methods for fast, unsupervised, precise feature detection for video data based on optical flows, edge detection, and clustering methods. We also use pretrained neural networks and interpretable linear models to extract features using very limited training data. In the area of statistics, while high-dimensional data analysis has been a main focus of recent statistical methodological research, much of that focus has been on populations of high-dimensional vectors rather than populations of high-dimensional tensors, which are three-dimensional arrays that can be used to model dependent images, such as images taken of the same person or frames extracted from the same video. Our feature detection method is a non-model-based method that fuses information from dense optical flow, raw image pixels, and frame differences to generate detections. Our hypothesis testing methods are based on the assumption that dependent images are concatenated into a tensor that follows a tensor normal distribution; from this assumption, we derive likelihood-ratio, score, and regression-based tests for one- and multiple-sample testing problems. Our methods are illustrated on simulated and real datasets. We conclude this report with comments on the relationship between feature detection and hypothesis testing methods.
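As an illustrative sketch of fusing the same cues named above (dense optical flow and frame differences) into unsupervised detections, here is one way such a fusion could look; this is not the authors' pipeline, and the file name, weights, and thresholds are hypothetical:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("video.mp4")          # hypothetical input video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Dense optical flow magnitude (Farneback) and a simple frame difference.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion = np.linalg.norm(flow, axis=2)
        diff = cv2.absdiff(gray, prev_gray).astype(np.float32)

        # Fuse the cues, threshold into a mask, and cluster into detections.
        score = 0.7 * motion / (motion.max() + 1e-6) + 0.3 * diff / 255.0
        mask = (score > 0.25).astype(np.uint8)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        boxes = [tuple(stats[i][:4]) for i in range(1, n)
                 if stats[i][cv2.CC_STAT_AREA] > 50]
        prev_gray = gray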
This report presents the results of instrumentation cable tests sponsored by the US Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research and performed at Sandia National Laboratories (SNL). The goal of the tests was to assess thermal and electrical response behavior under fire-exposure conditions for instrumentation cables and circuits. The test objective was to assess how severe radiant heating conditions surrounding an instrumentation cable affect current or voltage signals in an instrumentation circuit. A total of thirty-nine small-scale tests were conducted. Ten different instrumentation cables were tested, ranging from one conductor to eight twisted pairs. Because the focus of the tests was thermoset (TS) cables, only two of the ten cables had thermoplastic (TP) insulation and jacket material; the remaining eight cables used one of three different TS insulation and jacket materials. Two instrumentation cables from previous cable fire testing were included, one TS and one TP. Three test circuits were used to simulate instrumentation circuits present in nuclear power plants: a 4–20 mA current loop, a 10–50 mA current loop, and a 1–5 VDC voltage loop. A regression analysis was conducted to determine key variables affecting signal leakage time.
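As a simplified, hypothetical illustration of why leakage matters in such circuits (not data or a model from the tests): in a 4–20 mA loop the indicated process value scales linearly with loop current, so even a modest leakage current shifts the reading noticeably if it is assumed to add directly to the loop current:

    # Toy 4-20 mA loop scaling; the leakage value below is invented.
    def indicated_percent(loop_current_mA):
        return (loop_current_mA - 4.0) / 16.0 * 100.0

    true_current_mA = 12.0     # transmitter sending 50% of range
    leakage_mA = 1.5           # assumed fire-induced leakage added to the loop
    print(indicated_percent(true_current_mA))                 # 50.0
    print(indicated_percent(true_current_mA + leakage_mA))    # ~59.4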
As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show that a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.
A nanoscale, microfabricated waveguide structure can in principle be used to trap atoms in well-defined locations and enable strong photon-atom interactions. A neutral-atom platform based on this microfabrication technology will be prealigned, which is especially important for quantum-control applications. At present, there is still no reported demonstration of evanescent-field atom trapping using a microfabricated waveguide structure. We describe the capabilities established by our team for future development of waveguide atom-trapping technology at SNL and report our studies to overcome the technical challenges of loading cold atoms into the waveguide atom traps, coupling light efficiently and over a broad band into a waveguide, and selecting waveguide materials for high-power optical transmission. From atomic-physics and waveguide modeling, we have shown for the first time that a square nano-waveguide can be utilized to achieve better atomic spin squeezing than a nanofiber.
This report is a summary of the international collaboration and laboratory work funded by the US Department of Energy Office of Nuclear Energy Spent Fuel and Waste Science & Technology (SFWST) as part of the Sandia National Laboratories Salt R&D work package. This report satisfies level-four milestone M4SF-17SN010303014. Several stand-alone sections make up this summary report, each completed by the participants. The first two sections discuss international collaborations on geomechanical benchmarking exercises (WEIMOS) and bedded salt investigations (KOSINA), while the last three sections discuss laboratory work conducted on brucite solubility in brine, dissolution of borosilicate glass into brine, and partitioning of fission products into salt phases.
Impedance spectroscopy was leveraged to directly detect the sorption of I2 by selective adsorption into nanoporous metal organic frameworks (MOFs). Films of three different MOF frameworks were drop cast onto platinum interdigitated electrodes, dried, and exposed to gaseous I2 at 25, 40, or 70 °C. The MOF frameworks varied in topology from small-pore frameworks (pores comparable to the I2 diameter) to large-pore frameworks. The combination of the chemistry of the framework and the pore size dictated the quantity and kinetics of I2 adsorption. Air, argon, methanol, and water were found to produce minimal changes in ZIF-8 impedance. Independent of MOF framework characteristics, all resultant sensors showed a high response to I2 in air. As an example of sensor output, I2 was readily detected at 25 °C in air within 720 s of exposure, using an un-optimized sensor geometry with a small-pore MOF. Further optimization of sensor geometry, decreasing MOF film thicknesses and maximizing sensor capacitance, will enable faster detection of trace I2.
As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) and Advanced Technology Development and Mitigation (ATDM) program elements. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on migrating the knowledge gained and/or code revisions back into production applications.
SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid flows and to high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds-Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.
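For orientation, the governing system approximated by such a code can be written in the standard conservative form of the compressible Navier-Stokes equations (this is the textbook form, not text reproduced from the Aero manual):

    \[
    \begin{aligned}
    \partial_t \rho + \nabla\!\cdot(\rho\mathbf{u}) &= 0,\\
    \partial_t(\rho\mathbf{u}) + \nabla\!\cdot\big(\rho\mathbf{u}\otimes\mathbf{u} + p\mathbf{I}\big) &= \nabla\!\cdot\boldsymbol{\tau},\\
    \partial_t(\rho E) + \nabla\!\cdot\big((\rho E + p)\mathbf{u}\big) &= \nabla\!\cdot\big(\boldsymbol{\tau}\cdot\mathbf{u} - \mathbf{q}\big),
    \end{aligned}
    \]

where \(\rho\) is density, \(\mathbf{u}\) velocity, \(p\) pressure, \(E\) total specific energy, \(\boldsymbol{\tau}\) the viscous stress tensor, and \(\mathbf{q}\) the heat flux; the inviscid case drops the right-hand sides.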
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment. In the definitions of the commands that follow, the term Real_Max denotes the largest floating point value that can be represented on a given computer. Int_Max is the largest such integer value.
To help effectively plan the management and modernization of their large and diverse fleets of vehicles, the Program Executive Office Ground Combat Systems (PEO GCS) and the Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This report contains a description of the organizational fleet structure and a thorough explanation of the business rules that the CPAT formulation follows involving performance, scheduling, production, and budgets. This report, which is an update to the original CPAT domain model published in 2015 (SAND2015-4009), covers important new CPAT features.
In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.
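To give a flavor of the kind of scheduling decisions such a MILP encodes, here is a toy sketch (this is not the CPAT formulation; the groups, years, costs, performance values, and budgets are invented) that chooses which vehicle groups to modernize in which year to maximize fleet performance subject to annual budget caps:

    # Toy fleet-modernization MILP sketch using the PuLP modeling library.
    import pulp

    groups = ["A", "B", "C"]          # hypothetical vehicle groups
    years = [2025, 2026]              # hypothetical planning years
    perf = {"A": 10, "B": 7, "C": 5}  # performance gain if modernized
    cost = {"A": 4, "B": 3, "C": 2}   # modernization cost
    budget = {2025: 5, 2026: 4}       # annual budget caps

    m = pulp.LpProblem("toy_fleet_plan", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("mod", (groups, years), cat="Binary")

    # Objective: total performance gained over the planning horizon.
    m += pulp.lpSum(perf[g] * x[g][y] for g in groups for y in years)

    # Business-rule-like constraints: each group modernized at most once,
    # and spending in each year stays within that year's budget.
    for g in groups:
        m += pulp.lpSum(x[g][y] for y in years) <= 1
    for y in years:
        m += pulp.lpSum(cost[g] * x[g][y] for g in groups) <= budget[y]

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print({(g, y): int(x[g][y].value()) for g in groups for y in years})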
The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady-state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics, as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (< 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) or level-set-based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways, including a fully-coupled Newton's method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods, and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity, and dynamic load balancing are some of Aria's more advanced capabilities. Aria is based upon the Sierra Framework.
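As a minimal sketch of a fully coupled Newton iteration with numerical (finite-difference) sensitivities of the kind mentioned above (the two-field toy residuals are invented and merely stand in for discretized coupled-physics residuals; this is not Aria code):

    import numpy as np

    def residual(u):
        # Invented two-field "coupled" residuals standing in for real physics.
        T, c = u
        return np.array([T - 2.0 + 0.1 * c * T,
                         c - 1.0 + 0.05 * T**2])

    def numerical_jacobian(R, u, eps=1e-7):
        # Finite-difference approximation of dR/du, column by column.
        r0, J = R(u), np.zeros((len(u), len(u)))
        for j in range(len(u)):
            up = u.copy()
            up[j] += eps
            J[:, j] = (R(up) - r0) / eps
        return J

    u = np.array([1.0, 1.0])                 # initial guess for both fields
    for _ in range(20):                      # fully coupled Newton iterations
        r = residual(u)
        if np.linalg.norm(r) < 1e-10:
            break
        u = u + np.linalg.solve(numerical_jacobian(residual, u), -r)
    print("solution:", u, "residual norm:", np.linalg.norm(residual(u)))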
The goal of this LDRD is to develop a quantum nanophotonics capability that will allow practical control over electron (hole) and photon confinement in more than one dimension. We plan to use quantum dots (QDs) to control electrons, and photonic crystals to control photons. InGaN QDs will be fabricated using quantum size control processes, and methods will be developed to add epitaxial layers for hole injection and surface passivation. We will also explore photonic crystal nanofabrication techniques using both additive and subtractive fabrication processes, which can tailor photonic crystal properties. These two efforts will be combined by incorporating the QDs into photonic crystal surface emitting lasers (PCSELs). Modeling will be performed using finite-difference time-domain and gain analysis to optimize QD-PCSEL designs that balance laser performance with the ability to nano-fabricate structures. Finally, we will develop design rules for QD-PCSEL architectures, to understand their performance possibilities and limits.