This report is a guide to the use of the Sandia-developed Ringdown Parasitic Extractor (RPE) software tool. It explains the theory behind performing parasitic extraction from current ringdown waveforms and describes how to use the tool to achieve good results. Further dissemination only as authorized to U.S. Government agencies and their contractors; other requests shall be approved by the originating facility or higher DOE programmatic authority.
The Zeiss Multi-Beam Scanning Electron Microscope (MultiSEM) was used to image a wide array of samples using non-standard operating conditions. The ability of this new, high-throughput imaging technique to produce high-quality images was assessed during this one-year LDRD. In addition to exploring new imaging conditions, sample preparation techniques, coupled with theoretical simulations, were explored to optimize the MultiSEM images. For details about the devices imaged, as well as the experimental details, please refer to the classified report from the project manager, Bradley Gabel, or the Cyber IA lead, Justin Ford.
The recent boom in shale gas production through hydrofracturing has reshaped the energy production landscape in the United States. Wellbore production rates vary greatly among the wells within a single field and decline rapidly with time, raising serious concerns about the sustainability of shale gas production. Shale gas production starts with creating a fracture network by injecting a pressurized fluid into a wellbore. The induced fractures are then held open by proppant particles. During production, gas releases from the mudstone matrix, migrates to nearby fractures, and ultimately reaches a production wellbore. Given the relatively high permeability of the induced fractures, gas release and migration in the low-permeability shale matrix is likely to be a limiting step for long-term wellbore production. Therefore, a clear understanding of the underlying mechanisms of methane disposition and release in the shale matrix is crucial for the development of new technologies to maximize gas production and recovery. Shale is a natural nanocomposite material with distinct characteristics of nanometer-scale pore sizes, extremely low permeability, high clay content, significant amounts of organic carbon, and large spatial heterogeneity. Our work has shown that nanopore confinement plays an important role in methane disposition and release in the shale matrix. Using molecular simulations, we show that methane release in a nanoporous kerogen matrix is characterized by fast release of pressurized free gas (accounting for ~30-47% recovery) followed by slow release of adsorbed gas as the gas pressure decreases. The first stage is driven by the gas pressure gradient, while the second stage is controlled by gas desorption and diffusion. The long-term production decline appears to be controlled by the second stage of gas release. We further show that diffusion of methane in nanoporous kerogen behaves differently from the bulk phase, with much smaller diffusion coefficients. The MD simulations also indicate that a significant fraction (3-35%) of methane deposited in kerogen can potentially become trapped in isolated nanopores and thus is not recoverable. We have successfully established experimental capabilities for measuring gas sorption and desorption on shale and model materials under a wide range of physical and chemical conditions. Both low- and high-pressure measurements show significant sorption of CH4 and CO2 onto clays, implying that methane adsorbed on clay minerals could contribute a significant portion of gas-in-place in an unconventional reservoir. We have also studied the potential impact of the interaction of shale with hydrofracking fluid on gas sorption. We have found that the CH4-CO2 sorption capacity of the reacted sample is systematically lower (by a factor of ~2) than that of the unreacted (raw) sample. This difference in sorption capacity may result from a mineralogical or surface chemistry change of the shale sample induced by fluid-rock interaction. Our results shed new light on the mechanistic understanding of gas release and production decline in unconventional reservoirs.
Residual stresses induced during forging and welding can cause detrimental failure in reservoirs due to an enhanced possibility of crack propagation. Therefore, reservoirs must be designed with yield strengths in a tight range. This report summarizes an effort to verify and validate a computational tool that was developed to aid in predicting the evolution of residual stresses throughout the manufacturing process. The application requirements are identified and summarized in the context of the Predictive Capability Maturity Model (PCMM). The phenomena of interest that the model attempts to capture are discussed and prioritized using the Phenomena Identification and Ranking Table (PIRT) to identify any gaps in our approach. The fidelity of the modeling approach is outlined, and details on the implementation and boundary conditions are provided. The code verification requirements are discussed and solution verification is performed, including a mesh convergence study on the series of modeling steps (forging, machining, and welding). Validation activities are summarized, including validation of the displacements, residual stresses, recrystallization, yield strength, and thermal history. A sensitivity analysis and uncertainty quantification are also performed to understand how variations in the manufacturing process affect the residual stresses.
Under the fourth solicitation of the California Solar Initiative (CSI) Research, Development, Demonstration and Deployment (RD&D) Program established by the California Public Utilities Commission (CPUC), the Electric Power Research Institute (EPRI), National Renewable Energy Laboratory (NREL), and Sandia National Laboratories (SNL), with data provided by Pacific Gas and Electric (PG&E), Southern California Edison (SCE), and San Diego Gas and Electric (SDG&E), conducted research to determine optimal default settings for distributed energy resource advanced inverter controls. The inverter functions studied are aligned with those developed by the California Smart Inverter Working Group (SIWG) and those being considered by the IEEE 1547 Working Group. The advanced inverter controls examined to improve the distribution system response included power factor, volt-var, and volt-watt control. The advanced inverter controls examined to improve the transmission system response included frequency and voltage ride-through as well as Dynamic Voltage Support. This CSI RD&D project developed methods to derive distribution-focused advanced inverter control settings, selected a diverse set of feeders to evaluate the methods through detailed analysis, and evaluated the effectiveness of each method developed. Inverter settings focused on transmission system performance were also evaluated and verified. Based on the findings of this work, the suggested advanced inverter settings and the methods to determine settings can be used to improve the accommodation of distributed energy resources (PV specifically). The voltage impact from PV can be mitigated using power factor, volt-var, or volt-watt control, while the bulk system impact can be improved with frequency/voltage ride-through.
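To illustrate the kind of control function studied here, the following minimal sketch implements a typical piecewise-linear volt-var curve. The breakpoints are hypothetical placeholders, not the default settings recommended by this study:

```python
import numpy as np

def volt_var(v_pu, v_pts=(0.92, 0.98, 1.02, 1.08), q_pts=(0.44, 0.0, 0.0, -0.44)):
    """Piecewise-linear volt-var curve: reactive power (per unit of rated VA)
    as a function of terminal voltage (per unit). Breakpoints are illustrative
    only, not the settings derived in this project."""
    return np.interp(v_pu, v_pts, q_pts)

# Example: a high feeder voltage commands the inverter to absorb vars.
print(volt_var(1.05))  # -> -0.22 (absorbing reactive power)
```

The flat segment around nominal voltage (the "deadband") keeps the inverter at unity power factor during normal operation; the sloped segments engage only when voltage drifts.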
Gamble, Marc; Van Der Wijngaart, Rob; Teranishi, Keita; Parashar, Manish
This document provides a specification of Fenix, a software library compatible with the Message Passing Interface (MPI) to support fault recovery without application shutdown. The library consists of two modules. The first, termed process recovery, restores an application to a consistent state after it has suffered a loss of one or more MPI processes (ranks). The second specifies functions the user can invoke to store application data in Fenix-managed redundant storage and to retrieve it from that storage after process recovery.
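The usage pattern the two modules support can be sketched as follows. This is a purely illustrative Python sketch with hypothetical names; the actual Fenix API is C/MPI-based and is defined in the specification itself:

```python
# Illustrative sketch of the Fenix usage pattern (hypothetical Python names,
# not the real C/MPI API defined in this specification).
REDUNDANT_STORE = {}          # stand-in for Fenix-managed redundant storage
N_STEPS, CHECKPOINT_EVERY = 100, 10

def store(key, value):        # module 2: commit data to redundant storage
    REDUNDANT_STORE[key] = value

def retrieve(key):            # module 2: read data back after recovery
    return REDUNDANT_STORE[key]

def run(role="initial"):
    # Module 1 (process recovery) would return a role such as "survivor" or
    # "recovered" here when execution resumes after a rank failure.
    if role == "initial":
        state, step = 0.0, 0
    else:
        state, step = retrieve("ckpt")       # resume from last checkpoint
    while step < N_STEPS:
        state, step = state + 1.0, step + 1  # application work
        if step % CHECKPOINT_EVERY == 0:
            store("ckpt", (state, step))
    return state

run("initial")      # first launch
run("recovered")    # re-entry after a simulated process loss
```

The point of the pattern is that the application never shuts down: after a failure, all ranks re-enter the main loop and reload the last committed state.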
Bender, Donald A.; Akhil, Abbas A.; Huff, Georgianne; Currier, Aileen B.; Kaun, Benjamin C.; Rastler, Dan M.; Chen, Stella B.; Cotter, Andrew L.; Bradshaw, Dale T.; Gauntlett, William D.; Eyer, James; Olinsky-Paul, Todd; Ellison, Michelle; Schoenung, Susan
The Electricity Storage Handbook (Handbook) is a how-to guide for utility and rural cooperative engineers, planners, and decision makers to plan and implement energy storage projects. The Handbook also serves as an information resource for investors and venture capitalists, providing the latest developments in technologies and tools to guide their evaluations of energy storage opportunities. It includes a comprehensive database of the cost of current storage systems in a wide variety of electric utility and customer services, along with interconnection schematics. A list of significant past and present energy storage projects is provided for a practical perspective. This Handbook, jointly sponsored by the U.S. Department of Energy and the Electric Power Research Institute in collaboration with the National Rural Electric Cooperative Association, is published in electronic form at www.sandia.gov/ess.
The war to establish cyber supremacy continues, and the literature is crowded with strictly technical cyber security measures. We present the results of a three-year LDRD project that used linkography, a methodology new to the field of cyber security, to establish the foundation necessary to track and profile the microbehavior of humans attacking cyber systems. We also propose ways to leverage this understanding to influence and deceive these attackers. We studied the science of linkography, applied it to the cyber security domain, implemented a software package to manage linkographs, generated the preprocessing blocks necessary to ingest raw data, produced machine learning models, created ontology refinement algorithms, and prototyped a web application for researchers and practitioners to apply linkography. Machine learning produced some of our key results: we trained and validated multinomial classifiers with a real-world data set and predicted the attacker's next category of action with 86 to 98% accuracy, and dimension reduction techniques indicated that the linkography-based features were among the most powerful. We also developed ontology refinement algorithms that advanced the state of the art in linkography in general and cyber security in particular. We conclude that linkography is a viable tool for cyber security; we look forward to expanding our work to other data sources and using our prediction results to enable adversary deception techniques.
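As a hedged sketch of the next-action prediction step described above (using synthetic stand-in data; the project's real features were derived from linkographs of attacker sessions), a multinomial classifier can be trained as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # stand-in linkography-based features
y = rng.integers(0, 4, size=500)     # next-action category labels (0-3)

# LogisticRegression handles >2 classes with a multinomial objective.
clf = LogisticRegression(max_iter=1000)
clf.fit(X[:400], y[:400])
print(clf.predict(X[400:405]))        # predicted next-action categories
print(clf.predict_proba(X[400:401]))  # class probabilities for one session
```

On random data this predicts at chance level; the reported 86-98% accuracy reflects the informativeness of the real linkography features.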
This report analyzes data from multi-arm caliper (MAC) surveys taken at the Big Hill SPR site to determine the most likely casing weights within each well. Radial arm data from MAC surveys were used to calculate the approximate wall thickness of each well. Results from this study indicate that (1) most wells at the site have thinner wall thicknesses than expected, (2) most wells experienced an acute increase in diameter near the salt/caprock interface, and (3) there were isolated instances of well sections being the wrong casing weight. All three findings could have a negative impact on well integrity.
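The wall-thickness calculation is simple geometry: each caliper arm measures an internal radius, and subtracting that from the casing's nominal outer radius gives an approximate local wall thickness. A minimal sketch with illustrative numbers (not Big Hill data):

```python
def wall_thickness(outer_diameter_in, internal_radii_in):
    """Approximate wall thickness (inches) at each caliper arm:
    nominal outer radius minus the measured internal radius."""
    outer_r = outer_diameter_in / 2.0
    return [outer_r - r for r in internal_radii_in]

# Illustrative 4-arm reading for a nominal 10.75-in OD casing.
print(wall_thickness(10.75, [4.95, 5.02, 5.10, 4.98]))
# Unusually thin values flag a lighter casing weight or wall loss.
```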
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
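As a hedged illustration of the non-intrusive workflow UQTk supports (this sketch uses plain NumPy rather than the UQTk API): sample an uncertain input, evaluate the model at the samples, fit a polynomial chaos surrogate by regression, and read output statistics off the coefficients.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

model = lambda x: np.sin(x) + 0.1 * x**2       # stand-in computational model
rng = np.random.default_rng(1)

xi = rng.standard_normal(200)                  # samples of a Gaussian input
y = model(xi)                                  # non-intrusive model evaluations

P = 5                                          # PCE order
Psi = hermevander(xi, P)                       # probabilists' Hermite basis
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)    # regression for coefficients

mean = c[0]                                    # He_0 = 1; higher modes are mean-zero
var = sum(c[k]**2 * factorial(k) for k in range(1, P + 1))  # ||He_k||^2 = k!
print(f"PCE mean {mean:.4f}, variance {var:.4f}")
print(f"MC  mean {y.mean():.4f}, variance {y.var():.4f}")
```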
A 9.6 kW test array of Prism bifacial modules and reference monofacial modules installed in February 2016 at the New Mexico Regional Test Center has produced six months of performance data. The data reveal that the Prism modules are outperforming the monofacial modules, with bifacial gains in energy over the six-month period ranging from 18% to 136%, depending on the orientation and ground albedo. These measured bifacial gains were found to be in good agreement with modeled bifacial gains using equations previously published by Prism. The most dramatic increase in performance was seen among the vertically tilted, west-facing modules, where the bifacial modules produced more than double the energy of monofacial modules and more energy than monofacial modules at any orientation. Because peak energy generation (mid-morning and mid-afternoon) for these bifacial modules may best match load on the electric grid, the west-facing orientation may be more economically desirable than traditional south-facing module orientations (which peak at solar noon).
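The bifacial gain metric quoted above is a simple energy ratio relative to the monofacial reference at the same orientation. A one-line sketch with illustrative numbers:

```python
def bifacial_gain(e_bifacial_kwh, e_monofacial_kwh):
    """Bifacial energy gain: fractional extra energy relative to a
    monofacial reference module at the same orientation."""
    return (e_bifacial_kwh - e_monofacial_kwh) / e_monofacial_kwh

# A vertical west-facing pair where the bifacial module doubles the energy:
print(f"{bifacial_gain(212.0, 100.0):.0%}")   # -> 112%
```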
Remote sensing systems have firmly established a role in providing immense value to commercial industry, scientific exploration, and national security. Continued maturation of sensing technology has reduced the cost of deploying highly capable sensors while at the same time increasing reliance on the information these sensors can provide. The demand for time on these sensors is unlikely to diminish. Coordination of next-generation sensor systems, larger constellations of satellites, unmanned aerial vehicles, ground telescopes, etc. is prohibitively complex for existing heuristics-based scheduling techniques. The project was a two-year collaboration spanning multiple Sandia centers and included a partnership with Texas A&M University. We have developed algorithms and software for collection scheduling, remote sensor field-of-view pointing models, and bandwidth-constrained prioritization of sensor data. Our approach followed best practices from the operations research and computational geometry communities. These models provide several advantages over state-of-the-art techniques; in particular, our approach is more flexible than heuristics that tightly couple models and solution techniques. First, our mixed-integer linear models afford a rigorous analysis, so that sensor planners can quantitatively describe a schedule relative to the best possible. Optimal or near-optimal schedules can be produced with commercial solvers in operational run-times. These models can be modified and extended to incorporate different scheduling and resource constraints and objective function definitions. Further, we have extended these models to proactively schedule sensors under weather and ad hoc collection uncertainty. This approach stands in contrast to existing deterministic schedulers, which assume a single future weather or ad hoc collection scenario. The field-of-view pointing algorithm produces a mosaic with the fewest images required to fully cover a region of interest. The bandwidth-constrained algorithms find the highest-priority information that can be transmitted. All of these are based on mixed-integer linear programs so that, in the future, collection scheduling, field-of-view, and bandwidth prioritization can be combined into a single problem. Experiments conducted using the developed models, commercial solvers, and benchmark datasets have demonstrated that proactively scheduling against uncertainty regularly and significantly outperforms deterministic schedulers.
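A minimal sketch of the mixed-integer formulation at the heart of this approach, using toy data and the open-source PuLP modeler rather than the commercial solvers used in the project: binary variables assign collection targets to sensor time slots, and the solver maximizes the total collected priority.

```python
import pulp

targets = {"T1": 9, "T2": 5, "T3": 7, "T4": 3}   # target -> priority (toy data)
slots = ["s1", "s2", "s3"]                        # available sensor time slots

x = pulp.LpVariable.dicts("x", (targets, slots), cat="Binary")
prob = pulp.LpProblem("collection_scheduling", pulp.LpMaximize)
prob += pulp.lpSum(targets[t] * x[t][s] for t in targets for s in slots)

for s in slots:                                   # at most one target per slot
    prob += pulp.lpSum(x[t][s] for t in targets) <= 1
for t in targets:                                 # collect each target at most once
    prob += pulp.lpSum(x[t][s] for s in slots) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
schedule = [(t, s) for t in targets for s in slots if x[t][s].value() == 1]
print(schedule)   # the three highest-priority targets receive the slots
```

The real models add sensor pointing, timing, and resource constraints, and the MILP form is what makes the optimality-gap guarantees quoted above possible.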
Open-source indicators have been proposed as a way of tracking and forecasting disease outbreaks. Some, such as meteorological data, are readily available as reanalysis products. Others, such as those derived from our online behavior (web searches, media articles, etc.), are gathered easily and are more timely than public health reporting. In this study we investigate how these datastreams may be combined to provide useful epidemiological information. The investigation is performed by building data assimilation systems to track influenza in California and dengue in India. The first does not suffer from incomplete data and was chosen to explore disease modeling needs. The second explores the case when observational data are sparse and disease modeling complexities are beside the point. The two test cases thus sit at opposite ends of the disease tracking spectrum. We find that data assimilation systems that produce disease activity maps can be constructed. Further, the ability to combine multiple open-source datastreams is a necessity, as any one individually is not very informative. The data assimilation systems have very little in common except that they contain disease models, calibration algorithms, and some ability to impute missing data. Thus, while the data assimilation systems share the goal of accurate forecasting, they are in practice designed to compensate for the shortcomings of their datastreams, and we expect them to be disease- and location-specific.
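A minimal sketch of the model-calibration step inside such a data assimilation system, using a toy SIR model and synthetic observations (not the influenza or dengue configurations used in the study): fit a transmission rate to sparse case counts by least squares.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar

def sir(y, t, beta, gamma=0.2):
    """Classic SIR compartment model (fractions of the population)."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t_obs = np.array([5.0, 10.0, 20.0, 30.0])    # sparse observation times
i_obs = np.array([0.03, 0.08, 0.12, 0.05])   # noisy observed infected fraction

def misfit(beta):
    t = np.concatenate(([0.0], t_obs))
    traj = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta,))
    return np.sum((traj[1:, 1] - i_obs) ** 2)  # compare infected compartment

best = minimize_scalar(misfit, bounds=(0.1, 2.0), method="bounded")
print(f"calibrated beta = {best.x:.3f}")
```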
Paracousti is a parallelized acoustic wave propagation simulation package developed at Sandia National Laboratories. It solves the linearized coupled set of acousto-dynamic partial differential equations using finite-difference approximations that are second-order accurate in time and fourth-order accurate in space. Paracousti simulates sound wave propagation within realistic 3-D earth, static atmosphere, and hydroacoustic models, including 3-D variations in medium densities and acoustic sound speeds as well as topography or bathymetry. It can also incorporate attenuative media, as would be expected from physical mechanisms such as molecular dissipation. This report explains how to use Paracousti.
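The time-stepping scheme can be illustrated in one dimension. This sketch (plain NumPy, not Paracousti code) uses the same discretization orders stated above: a second-order update in time and a fourth-order stencil in space.

```python
import numpy as np

nx, dx, c, dt, nt = 400, 1.0, 340.0, 0.002, 500   # grid, sound speed, time step
assert c * dt / dx < 0.8                          # stability (CFL) check

p_old = np.zeros(nx)
p = np.exp(-0.01 * (np.arange(nx) - nx // 2) ** 2)  # initial pressure pulse

for _ in range(nt):
    lap = np.zeros(nx)
    # Fourth-order accurate second derivative in space.
    lap[2:-2] = (-p[:-4] + 16 * p[1:-3] - 30 * p[2:-2]
                 + 16 * p[3:-1] - p[4:]) / (12 * dx**2)
    # Second-order accurate leapfrog update in time.
    p_new = 2 * p - p_old + (c * dt) ** 2 * lap
    p_old, p = p, p_new

print(f"peak pressure after {nt} steps: {p.max():.3f}")
```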
Single photon sources (SPS) are quantum light sources in which photons are emitted one at a time rather than randomly (e.g., lasers) or in bunches (e.g., thermal sources); they can significantly impact quantum information science (computing and secure communications) and quantum metrology. Highly desirable features of an SPS include low second-order correlation (g2), controllable emission, electrically injected room temperature operation with a high photon rate, high extraction efficiency, and controllable directionality. Approaches taken thus far using different material systems have addressed only a subset of these features. A III-nitride-based approach offers a clear pathway to deterministic, room temperature (R.T.), electrically injected, practical SPS, as one can potentially also leverage the knowledge and technology of the light-emitting diode (LED) world. Here we describe a hybrid approach wherein a TiO2-based photonic crystal (PC) cavity is fabricated around a nanoscale post, containing an embedded InGaN quantum dot (QD), that is deterministically placed inside the cavity. This project takes the initial steps necessary to achieve a practical, compact SPS. We have used finite-difference time-domain simulations to optimize the cavity design to achieve a high quality factor, mode overlap with the QD, and high extraction. We have fabricated InGaN quantum dots using a top-down approach involving dry etch and photoelectrochemical etch, followed by electron-beam-lithography-based nanofabrication of photonic crystal cavities.
We have made the first continuous measurements of black carbon (BC) in Barrow, Alaska at the ARM aerosol-observing site at the NOAA Barrow Observatory using a Single-Particle Soot Photometer (SP2). These data demonstrate that the BC particles are extremely small, and a majority of the particles (by number density) are smaller than 0.5 fg, the lower limit of reliability of the SP2. We developed the first numerical model capable of quantitatively reproducing the laser-induced incandescence (LII) and scattering signals produced by the SP2, the industry-standard BC instrument. Our model reproduces the SP2 signal temporally and spectrally and demonstrates that the current SP2 optical design allows substantial contamination of the scattering signal by LII. We ran CAM5-SE in nudged mode, i.e., by constraining the transport used in the model with meteorological data. The results reproduce the previously observed problem of under-predicting BC at high latitudes. The cause of the discrepancy is currently unknown, but we suspect that it is associated with scavenging and rainout mechanisms.
Dynamic mode decomposition (DMD) is a method that has gained prominence in the field of turbulent fluid flows as a way of decomposing the flow field into modes that can be further deconstructed to understand their influence on the overall dynamics of the system. Applications to solids and non-linear systems had been considered but not pursued. In this work, DMD was applied for the first time to the heat-diffusion and reactive heat-diffusion equations on a random particle pack of uniform solid spheres. A verification on a linear heat-diffusion test problem was successful, showing equality between the normal modes and the Koopman modes obtained from DMD. Further application to a non-linear reactive system revealed stability limits of the underlying modes that depend on microstructure and chemical kinetics. This work will enable the development of reactive material models based on further analysis using DMD to quantify the statistical dependencies of transient response on microstructural characteristics.
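A hedged sketch of the core DMD algorithm used in such an analysis (plain NumPy on synthetic snapshot data): split the snapshot matrix into before/after pairs, project the linear propagator onto a truncated SVD basis, and recover modes and continuous-time decay rates.

```python
import numpy as np

# Synthetic snapshots of a decaying field: columns are states at successive times.
n, m, dt = 64, 40, 0.1
x = np.linspace(0, 1, n)[:, None]
t = np.arange(m)[None, :] * dt
snaps = np.exp(-t) * np.sin(np.pi * x) + 0.5 * np.exp(-3 * t) * np.sin(3 * np.pi * x)

X1, X2 = snaps[:, :-1], snaps[:, 1:]            # before/after snapshot pairs
U, S, Vt = np.linalg.svd(X1, full_matrices=False)
r = 2                                            # truncation rank
Ur, Sr, Vr = U[:, :r], np.diag(S[:r]), Vt[:r].T

A_tilde = Ur.T @ X2 @ Vr @ np.linalg.inv(Sr)     # reduced linear propagator
lam, W = np.linalg.eig(A_tilde)                  # DMD eigenvalues/eigenvectors
Phi = X2 @ Vr @ np.linalg.inv(Sr) @ W            # DMD modes
print(np.log(lam) / dt)                          # continuous-time rates: ~ -1, -3
```

On this synthetic data the recovered rates match the known decay constants, which is the sense in which DMD modes coincide with normal (Koopman) modes for a linear system.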
Sandia National Laboratories’ California site is celebrating its 60th anniversary (1956 to 2016), and high performance computing has been a key enabler for its scientists and engineers throughout much of its history. Since its founding, Sandia California has helped pioneer the use of HPC platforms, including hosting Sandia’s first Cray-1 supercomputer in the 1970s and supporting development of scalable cluster computing platforms to create a new paradigm for cost-effective supercomputing in the 1990s. Recent decades of investment in the creation of scalable application frameworks for scientific computing have also enabled new generations of modeling and simulation codes. These resources have facilitated computational analysis of complex phenomena in diverse applications spanning national defense, energy, and homeland security. Today, Sandia California researchers work with partners in academia, industry, and national labs to evolve the state of the art in HPC, modeling, and data analysis (including foundational capabilities for exascale computing platforms) and apply them in transformational ways. Research efforts include mitigating the effects of silent hardware failures that can jeopardize the results of large-scale computations, developing exascale-capable asynchronous task-parallel programming models and runtime systems, formulating new techniques to better explore and analyze extreme-scale data sets, and increasing our understanding of materials and chemical sciences, with applications spanning nuclear weapons stockpile stewardship to more efficient automobile engines. The following section highlights some of these research and applications projects and further illustrates the breadth of our HPC capabilities.
A viscoelastic constitutive analysis is used to investigate the counter-intuitive observation of “mobility decrease with increased deformation through yield” in a glass-forming polymer under compressive and tensile loading conditions. Current Sandia National Laboratories polymer models are built on the assumption that deformation enhances the mobility of the material. If this assumption is not true at small strain rates (e.g., thermal fluctuations in stockpile storage), then models will not be able to accurately predict stress evolution and potential failure of components during stockpile storage. The behavior of an epoxy thermoset is explored using an extensively validated material “clock” model, the Simplified Potential Energy Clock (SPEC) model. This methodology allows for a comparison between the linear viscoelastic (LVE) limit and the true non-linear viscoelastic (NLVE) representation and enables exploration of a wide range of conditions that are not practical to explore experimentally. The model predicts the behavior described as “mobility decrease with increased deformation” in the LVE limit and at low strain rates for NLVE. Only when loading rates are sufficient to decrease the material shift factor by multiple orders of magnitude is the anticipated deformation-induced mobility, or “mobility increase with increased deformation”, observed. While the model has not been “trained” for these behaviors, it also predicts that the normalized stress relaxation response is indistinguishable amongst strain levels in the “post-yield” region, as has been experimentally reported. At long times, which have not been examined experimentally, the model predicts that the normalized relaxation curves “cross over” and return to the LVE ordering. These findings further demonstrate the ability of rheologically simple models to represent experimentally measured material response and present predictions at long time scales that could be tested experimentally.
This project was funded through the Campus Executive Fellowship at the University of California (UC) Berkeley and had two principal aims. First, it sought to explore predictive tools for estimating fuel properties based on molecular structure, with the goal of identifying promising candidates for new fuels to be synthesized. Second, it sought to investigate the possibility of increasing engine efficiency by replacing air with a working fluid of higher efficiency potential employed in a closed loop, namely a mixture of argon and oxygen. In pursuing the predictive tool for novel fuels, a new model was built that proved to be highly predictive of autoignition characteristics for a wide variety of hydrocarbons, esters, ethers, and alcohols, and reasonably predictive for furan and tetrahydrofuran compounds, the target class of novel fuels. Obtaining more “training data” for the model improved its predictive capabilities, and further reductions in the uncertainty of the predictions would be possible with more training data. In investigating the concept of a closed-loop engine cycle using an argon-oxygen working fluid, substantial progress was made: initial engineering models were built showing the feasibility of the concept; numerous collaborations were formed with industry and academic partners; external funding was secured from the California Energy Commission (CEC) to build a dedicated engine platform for research; and this engine platform was designed and constructed. Experimental work and associated modeling studies will take place in late 2016 and early 2017.
Engineering decisions are often formulated as optimization problems such as the optimal design or control of physical systems. In these applications, the resulting optimization problems are constrained by large-scale simulations involving systems of partial differential equations (PDEs), ordinary differential equations (ODEs), and differential algebraic equations (DAEs). In addition, critical components of these systems are fraught with uncertainty, including unverifiable modeling assumptions, unknown boundary and initial conditions, and uncertain coefficients. Typically, these components are estimated using noisy and incomplete data from a variety of sources (e.g., physical experiments). The lack of knowledge of the true underlying probabilistic characterization of model inputs motivates the need for optimal solutions that are robust to this uncertainty. In this report, we introduce a framework for handling "distributional" uncertainties in the context of simulation-based optimization. This includes a novel measure discretization technique that will lead to an adaptive optimization algorithm tailored to exploit the structures inherent to simulation-based optimization.
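A minimal sketch of the distributional-robustness idea on a toy problem (SciPy; not the measure discretization technique introduced in this report): instead of minimizing expected cost under one assumed input distribution, minimize the worst-case expectation over a discretized set of candidate probability measures.

```python
import numpy as np
from scipy.optimize import minimize_scalar

xi = np.array([-1.0, 0.0, 1.0, 2.0])              # discretized uncertain input
measures = [np.array([0.25, 0.25, 0.25, 0.25]),   # candidate probability
            np.array([0.10, 0.20, 0.30, 0.40]),   # measures (the "distributional"
            np.array([0.40, 0.30, 0.20, 0.10])]   # uncertainty set)

cost = lambda x, xi: (x - xi) ** 2                # stand-in for a simulation

def worst_case(x):
    """Worst-case expected cost over all candidate measures."""
    return max(float(p @ cost(x, xi)) for p in measures)

res = minimize_scalar(worst_case, bounds=(-2.0, 3.0), method="bounded")
print(f"robust design x* = {res.x:.3f}, worst-case cost = {res.fun:.3f}")
```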
The Data Inferencing on Semantic Graphs (DISeG) project was a two-year investigation of applying inferencing techniques (focusing on belief propagation) to social graphs, with a focus on semantic graphs (also called multi-layer graphs). While working this problem, we developed a new directed version of inferencing we call Directed Propagation (Chapters 2 and 4) and identified new semantic graph sampling problems (Chapter 3).
This report summarizes the work performed as part of a Laboratory Directed Research and Development project focused on evaluating and mitigating risk associated with biological dual use research of concern. The academic and scientific community has identified the funding stage as the appropriate place to intervene and mitigate risk, so the framework developed here uses a portfolio-level approach and balances biosafety and biosecurity risks, anticipated project benefits, and available mitigations to identify the best available investment strategies subject to cost constraints. The modeling toolkit was designed for decision analysis for dual use research of concern, but is flexible enough to support a wide variety of portfolio-level funding decisions where risk/benefit tradeoffs are involved. Two mathematical optimization models with two solution methods are included to accommodate stakeholders with varying levels of certainty about priorities between metrics. An example case study is presented.
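A hedged sketch of the portfolio-selection formulation described above (toy numbers, open-source PuLP modeler; the report's actual models and metrics differ): maximize anticipated benefit subject to a cost budget and a cap on total residual risk, with per-project mitigations folded into the risk scores.

```python
import pulp

# project: (cost $M, benefit score, residual risk score after mitigation)
projects = {"P1": (2.0, 8, 3), "P2": (1.5, 5, 1),
            "P3": (3.0, 9, 6), "P4": (1.0, 4, 2)}
BUDGET, RISK_CAP = 4.5, 6

x = pulp.LpVariable.dicts("fund", projects, cat="Binary")
prob = pulp.LpProblem("dual_use_portfolio", pulp.LpMaximize)
prob += pulp.lpSum(projects[p][1] * x[p] for p in projects)            # benefit
prob += pulp.lpSum(projects[p][0] * x[p] for p in projects) <= BUDGET  # cost
prob += pulp.lpSum(projects[p][2] * x[p] for p in projects) <= RISK_CAP

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in projects if x[p].value() == 1])   # funded portfolio
```

Sweeping RISK_CAP or BUDGET and re-solving traces out the kind of risk/benefit trade-off frontier that portfolio-level decision analysis relies on.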
Tracking nuclear material is a challenge for the resiliency of complex systems, e.g., in harsh environments where RF tags, frequently used in national security applications, cannot be used for technological, operational, or safety reasons. Magnetic Smart Tags (MaST) is a novel tag technology based on magnetoelastic sensing that circumvents these issues. This technology is enabled by a new, cost-effective, batch-manufacturing electrochemical deposition (ECD) process. This advancement in fabrication enables multi-frequency tags capable of providing millions of possible identification codes, unlike existing theft-deterrent tags that can convey only a single bit of information. Magnetostrictive 70% Co : 30% Fe was developed as the base alloy comprising the magnetoelastic resonator transduction element. The saturation magnetostriction, λS, has been externally measured by the Naval Research Laboratory to be as high as 78 ppm. A novel MEMS variable-capacitance test structure is described for future measurements of this parameter.
Parametric sensitivities of dynamic system responses are very useful in a variety of applications, including circuit optimization and uncertainty quantification. Sensitivity calculation methods fall into two related categories: direct and adjoint methods. Effective implementation of such methods in a production circuit simulator poses a number of technical challenges, including instrumentation of device models. This report documents several years of work developing and implementing direct and adjoint sensitivity methods in the Xyce circuit simulator. Much of this work was sponsored by the Laboratory Directed Research and Development (LDRD) Program at Sandia National Laboratories, under project LDRD 14-0788.
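The relationship between the two method families can be shown on a small linear example (plain NumPy, not Xyce code): for a response J = gᵀx with A x = b(p), the direct method solves one extra system per parameter, while the adjoint method solves a single transposed system that is reused for all parameters.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # system matrix (parameter-independent here)
b = np.array([1.0, 2.0])                 # right-hand side, depends on parameter p
db_dp = np.array([0.5, -1.0])            # known derivative of b w.r.t. p
g = np.array([1.0, 1.0])                 # response J = g @ x

x = np.linalg.solve(A, b)

# Direct method: one extra forward solve per parameter.
dx_dp = np.linalg.solve(A, db_dp)
dJ_dp_direct = g @ dx_dp

# Adjoint method: one transposed solve, reused for every parameter.
lam = np.linalg.solve(A.T, g)
dJ_dp_adjoint = lam @ db_dp

print(dJ_dp_direct, dJ_dp_adjoint)       # identical sensitivities
```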
This report summarizes the methods and algorithms that were developed on the Sandia National Laboratories LDRD project entitled "Advanced Uncertainty Quantification Methods for Circuit Simulation", which was project # 173331 and proposal # 2016-0845. As much of our work has been published in other reports and publications, this report gives a brief summary. Those who are interested in the technical details are encouraged to read the full published results and also contact the report authors for the status of follow-on projects.
The Strategic Petroleum Reserve (SPR) is the largest stockpile of government-owned emergency crude oil in the world. The oil is stored in multiple salt caverns spread over four sites in Louisiana and Texas. Cavern infrastructure near the bottom of the cavern can be damaged by vertical floor movement. This report presents a comprehensive history of floor movements in each cavern. Most cavern floor rise rates ranged from 0.5 to 3.5 ft/yr; however, several caverns had much higher rise rates, with BH103, BM106, and BH105 the three highest. Information from this report will be used to better predict future vertical floor movements and optimally place cavern infrastructure. The reasons for floor rise are not entirely understood and should be investigated.
The Regional Test Centers (RTCs) are a group of sites around the US for testing photovoltaic systems and related components. The RTCs are managed by Sandia National Laboratories. The data collected by the RTCs must be transmitted to Sandia for storage, analysis, and reporting. This document describes the methods that transfer the data between remote sites and Sandia, as well as data movement within Sandia’s network. The methods described are in force as of September 2016.
This report summarizes FY16 progress towards enabling uncertainty quantification for compressible cavity simulations using model order reduction (MOR). The targeted application is the quantification of the captive-carry environment for the design and qualification of nuclear weapons systems. To accurately simulate this scenario, Large Eddy Simulations (LES) require very fine meshes and long run times, which lead to week-long runs even on parallel state-of-the-art supercomputers. MOR can substantially reduce the CPU-time requirement for these simulations. We describe two approaches for model order reduction for nonlinear systems, which can yield significant speed-ups when combined with hyper-reduction: the Proper Orthogonal Decomposition (POD)/Galerkin approach and the POD/Least-Squares Petrov-Galerkin (LSPG) approach. The implementation of these methods within the in-house compressible flow solver SPARC is discussed. Next, a method for stabilizing and enhancing low-dimensional reduced bases that was developed as a part of this project is detailed. This approach is based on a premise termed "minimal subspace rotation" and has the advantage of yielding ROMs that are more stable and accurate for long-time compressible cavity simulations. Numerical results for some laminar cavity problems aimed at gauging the viability of the proposed model reduction methodologies are presented and discussed.
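A hedged sketch of the POD/Galerkin projection step on a toy linear diffusion system (plain NumPy, not SPARC): extract a basis from snapshots via the SVD, project the full operator onto that basis, and integrate the much smaller reduced system.

```python
import numpy as np

n, r, dt = 200, 5, 1e-4
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2             # 1-D diffusion operator

u = np.exp(-100 * (np.linspace(dx, 1 - dx, n) - 0.5) ** 2)  # initial condition
snaps = [u.copy()]
for _ in range(100):                                    # full model (implicit Euler)
    u = np.linalg.solve(np.eye(n) - dt * A, u)
    snaps.append(u.copy())

Phi = np.linalg.svd(np.array(snaps).T, full_matrices=False)[0][:, :r]  # POD basis
Ar = Phi.T @ A @ Phi                                    # Galerkin-projected operator

a = Phi.T @ snaps[0]                                    # reduced coordinates
for _ in range(100):                                    # same integration, r x r solves
    a = np.linalg.solve(np.eye(r) - dt * Ar, a)
err = np.linalg.norm(Phi @ a - snaps[-1]) / np.linalg.norm(snaps[-1])
print(f"ROM dimension {r} vs full dimension {n}, relative error {err:.2e}")
```

Hyper-reduction and the LSPG variant address the nonlinear terms that this linear toy problem avoids.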
Optical diagnostics play a central role in dynamic compression research. Currently, streak cameras are employed to record temporal and spectroscopic information in single-event experiments, yet are limited in several ways; the tradeoff between time resolution and total record duration is one such limitation. This project addresses the limitations that streak cameras impose on dynamic compression experiments while reducing both cost and risk (equipment and labor) by utilizing standard high-speed digitizers and commercial telecommunications equipment. The missing link is the capability to convert the set of experimental (visible/x-ray) wavelengths to the infrared wavelengths used in telecommunications. In this report, we describe the problem we are solving, our approach, and our results, and we describe the system that was delivered to the customer. The system consists of an 8-channel visible-to-infrared converter with > 2 GHz 3-dB bandwidth.
The intention of this document is to provide a path forward for research and development (R&D) for two host-rock-media-specific (argillite and crystalline) disposal research work packages within the Used Fuel Disposition Campaign (UFDC). The two work packages, Argillite Disposal R&D and Crystalline Disposal R&D, support the achievement of the overarching mission and objectives of the Department of Energy Office of Nuclear Energy Fuel Cycle Technologies Program. These two work packages cover many of the fundamental technical issues that have implications for other disposal research work packages, bridging knowledge gaps to support the development of the safety case. The path forward begins with the assumption of the target dates set out in the January 2013 DOE Strategy for the Management and Disposal of Used Nuclear Fuel and High-Level Radioactive Waste (http://energy.gov/downloads/strategy-management-and-disposal-used-nuclear-fuel-and-high-level-radioactive-waste). The path forward will be maintained as a living document and will be updated as needed in response to available funding and the progress of multiple R&D tasks in the Used Fuel Disposition Campaign and the Fuel Cycle Technologies Program. It is based on the report “Used Fuel Disposition Campaign Disposal Research and Development Roadmap (FCR&D-USED-2011-000065 REV0)” (DOE, 2011). This document delineates the goals and objectives of the UFDC R&D program and the needs for generic disposal concept design, and summarizes the prioritization of R&D issues.
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
An electromagnetic finite volume forward solver is implemented to create a suite of forward models that provide the expected response for an air-filled buried structure constructed of concrete and rebar. Model parameters considered are the conductivities and thicknesses of a two-layer subsurface and the nature of the VLF plane-wave source. By building this suite of models, the results can be packaged into a data set that is both easily callable and requires minimal storage. More importantly, the user is relieved of the time required to manually execute a large number of models; instead, the results are already provided, along with an interpolation tool for immediate data access. This document is written in compliance with the LDRD reporting requirements for a close-out report on Project 180848.
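A hedged sketch of the lookup pattern described above (SciPy, with a synthetic stand-in for the precomputed responses): the model suite is stored on a regular parameter grid and queried through an interpolator instead of re-running the forward solver.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Precomputed forward-model grid (synthetic stand-in for the solver output);
# axes are layer-1 conductivity (S/m) and layer-1 thickness (m).
sigma1 = np.logspace(-3, -1, 20)
thick1 = np.linspace(1.0, 50.0, 25)
S, T = np.meshgrid(sigma1, thick1, indexing="ij")
response = np.log10(S) * np.sqrt(T)          # fake stored response values

lookup = RegularGridInterpolator((sigma1, thick1), response)

# Query the suite at arbitrary parameters without re-running the solver.
print(lookup([[0.005, 12.3], [0.02, 33.0]]))
```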
Novel designs to increase light trapping and thermal efficiency of concentrating solar receivers at multiple length scales have been conceived, designed, and tested. The fractal-like geometries and features are introduced at both macro (meters) and meso (millimeters to centimeters) scales. Advantages include increased solar absorptance, reduced thermal emittance, and increased thermal efficiency. Radial and linear structures at the meso (tube shape and geometry) and macro (total receiver geometry and configuration) scales redirect reflected solar radiation toward the interior of the receiver for increased absorptance. Hotter regions within the interior of the receiver can reduce thermal emittance due to reduced local view factors to the environment, and higher concentration ratios can be employed with similar surface irradiances to reduce the effective optical aperture, footprint, and thermal losses. Coupled optical/fluid/thermal models have been developed to evaluate the performance of these designs relative to conventional designs. Modeling results showed that fractal-like structures and geometries can increase the effective solar absorptance by 5-20% and the thermal efficiency by several percentage points at both the meso and macro scales, depending on factors such as intrinsic absorptance. Meso-scale prototypes were fabricated using additive manufacturing techniques, and a macro-scale bladed receiver design was fabricated using Inconel 625 tubes. On-sun tests were performed using the solar furnace and solar tower at the National Solar Thermal Test Facility. The test results demonstrated enhanced solar absorptance and thermal efficiency of the fractal-like designs.
Additive Manufacturing (AM) provides a new avenue to design innovative materials and components that cannot be created using traditional machining operations. With current AM capabilities, complex designs (such as those required in weapon systems) can be readily manufactured with laser powder forming (or Laser-Engineered Net Shaping (LENS™)) [1] that would be otherwise cost prohibitive or impossible to produce. However, before an AM product can be qualified for weapon applications, the characteristics of the metals produced by additive manufacturing processes need to be well understood. This work focuses on the development of computational simulation tools to model the metal additive manufacturing process. This work extends and integrates existing Sandia National Laboratories tools to accomplish the following: (i) better predict residual stresses in the AM product, (ii) extend high-fidelity material models to capture material evolution during the formation process, leading to prediction of end-state material properties, and (iii) provide a basis for engineering tools to propose improvements to additive manufacturing process variables, including those that minimize process variation. While this work in its current state is directly applicable to additive manufacturing processes, the tools developed may also help enable modeling welding processes such as gas tungsten arc (GTA), electron beam, and laser welding.
Operation of concentrated solar power receivers at higher temperatures (>700°C) would enable supercritical carbon dioxide (sCO2) power cycles for improved power cycle efficiencies (>50%) and cost-effective solar thermal power. Unfortunately, radiative losses at higher temperatures in conventional receivers can negatively impact the system efficiency gains. One approach to improve receiver thermal efficiency is to utilize selective coatings that enhance absorption across the visible solar spectrum while minimizing emission in the infrared to reduce radiative losses. Existing coatings, however, tend to degrade rapidly at elevated temperatures. In this report, we describe the initial designs and fabrication of spectrally selective metamaterial-based absorbers for the high-temperature, high-thermal-flux environments important for solarized sCO2 power cycles. Metamaterials are structured media whose optical properties are determined by sub-wavelength structural features instead of bulk material properties, providing unique solutions by decoupling the optical absorption spectrum from thermal stability requirements. The key enabling concept proposed is the use of structured surfaces with spectral responses that can be tailored to optimize the absorption and retention of solar energy for a given temperature range. In this initial study, through the Academic Alliance partnership with the University of Texas at Austin, we use tungsten for its stability in the expected harsh environments, compatibility with microfabrication techniques, and required optical performance. Our goal is to tailor the optical properties for high (near-unity) absorptivity across the majority of the solar spectrum and over a broad range of incidence angles, while achieving negligible absorptivity in the near infrared, to optimize the energy absorbed and retained. To this end, we apply the recently developed concept of the plasmonic Brewster angle to suitably designed nanostructured tungsten surfaces. We predict that this will improve receiver thermal efficiencies by at least 10% over current solar receivers.
In July 2012, protestors cut through security fences and gained access to the Y-12 National Security Complex, a facility believed to be protected by a highly reliable, multi-layered security system. This report documents the results of a Laboratory Directed Research and Development (LDRD) project that created a consistent, robust mathematical framework using complex systems analysis algorithms and techniques to better understand the emergent behavior, vulnerabilities, and resiliency of multi-layered security systems subject to budget constraints and competing security priorities. Because there are several dimensions to security system performance and a range of attacks that might occur, the framework is multi-objective, allowing a performance frontier to be estimated. This research explicitly uses the probability of intruder interruption given detection (PI) as the primary resilience metric. We demonstrate the utility of this framework with both notional and real-world examples of Physical Protection Systems (PPSs) and validate it using a well-established force-on-force simulation tool, Umbra.