Publications

Temperature distributions and gradients in laser-heated plasmas relevant to magnetized liner inertial fusion

Physical Review E

Harding, Eric H.; Harvey-Thompson, Adam J.; Geissel, Matthias; Weis, Matthew R.; Hansen, Stephanie B.; Peterson, K.J.; Rochau, G.A.; Carpenter, K.R.; Mancini, R.C.

We present two-dimensional temperature measurements of magnetized and unmagnetized plasma experiments performed at Z relevant to the preheat stage in magnetized liner inertial fusion. The deuterium gas fill was doped with a trace amount of argon for spectroscopy purposes, and time-integrated spatially resolved spectra and narrow-band images were collected in both experiments. The spectrum and image data were included in two separate multiobjective analysis methods to extract the electron temperature spatial distribution Te(r,z). The results indicate that the magnetic field increases Te, the axial extent of the laser heating, and the magnitude of the radial temperature gradients. Comparisons with simulations reveal that the simulations overpredict the extent of the laser heating and underpredict the temperature. Temperature gradient scale lengths extracted from the measurements also permit an assessment of the importance of nonlocal heat transport.

High-resolution hindcasts for U.S. wave energy resource characterization

International Marine Energy Journal

Yang, Zhaoqing; Neary, Vincent S.

The marine and hydrokinetic (MHK) industry is at an early stage of development and has the potential to play a significant role in diversifying the U.S. energy portfolio and reducing the U.S. carbon footprint. Wave energy is the largest among all the U.S. MHK energy resources, which include wave energy, ocean current, tidal-instream, ocean thermal energy conversion, and river-instream. Wave resource characterization is an essential step for regional wave energy assessments, Wave Energy Converter (WEC) project development, site selection and WEC design. The present paper provides an overview of a joint modelling effort by the Pacific Northwest National Laboratory and Sandia National Laboratories on high-resolution wave hindcasts to support the U.S. Department of Energy’s Water Power Technologies Office’s program of wave resource characterization, assessment and classifications in all US coastal regions. Topics covered include the modelling approach, model input requirements, model validation strategies, high performance computing resource requirements, model outputs and data management strategies. Examples of model setup and validation for different regions are provided along with application to development of classification systems, and analysis of regional wave climates. Lessons learned and technical challenges of the long-term, high-resolution regional wave hindcast are discussed.

An approximation algorithm for the MAX-2-Local Hamiltonian problem

Leibniz International Proceedings in Informatics, LIPIcs

Hallgren, Sean; Lee, Eunou; Parekh, Ojas D.

We present a classical approximation algorithm for the MAX-2-Local Hamiltonian problem. This is a maximization version of the QMA-complete 2-Local Hamiltonian problem in quantum computing, with the additional assumption that each local term is positive semidefinite. The MAX-2-Local Hamiltonian problem generalizes NP-hard constraint satisfaction problems, and our results may be viewed as generalizations of approximation approaches for the MAX-2-CSP problem. We work in the product state space and extend the framework of Goemans and Williamson for approximating MAX-2-CSPs. The key difference is that in the product state setting, a solution consists of a set of normalized 3-dimensional vectors rather than Boolean values, and we leverage approximation results for rank-constrained Grothendieck inequalities. For MAX-2-Local Hamiltonian we achieve an approximation ratio of 0.328. This is the first example of an approximation algorithm beating the random quantum assignment ratio of 0.25 by a constant factor.

Comparison of Orientation Mapping in SEM and TEM

Microscopy and Microanalysis

Sugar, Joshua D.; McKeown, Joseph T.; Banga, Dhego O.; Michael, Joseph R.

Multiple experimental configurations for performing nanoscale orientation mapping are compared to determine their fidelity to the true microstructure of a sample. Transmission Kikuchi diffraction (TKD) experiments in a scanning electron microscope (SEM) and nanobeam diffraction (NBD) experiments in a transmission electron microscope (TEM) were performed on thin electrodeposited hard Au films with two different microstructures. The Au samples had grain sizes of either >50 nm or <20 nm. The same regions of the samples were measured with TKD apparatuses at 30 kV in an SEM with detectors in the horizontal and vertical configurations and in the TEM at 300 kV. Under the proper conditions, we demonstrate that all three configurations can produce data of equivalent quality. Each method has strengths and challenges associated with its application and representation of the true microstructure. The conditions needed to obtain high-quality data for each acquisition method and the challenges associated with each are discussed.

Empirical scaling of the n = 2 error field penetration threshold in tokamaks

Nuclear Fusion

Logan, N.C.; Park, J.K.; Hu, Q.; Paz-Soldan, C.; Markovic, T.; Wang, H.; In, Y.; Piron, L.; Piovesan, P.; Myers, Clayton; Maraschek, M.; Wolfe, S.M.; Strait, E.J.; Munaretto, S.

This paper presents a multi-machine, multi-parameter scaling law for the n = 2 core resonant error field threshold that leads to field penetration, locked modes, and disruptions. Here, n is the toroidal harmonic of the non-axisymmetric error field (EF). While density scalings have been reported by individual tokamaks in the past, this work performs a regression across a comprehensive range of densities, toroidal fields, and pressures accessible across three devices using a common metric to quantify the EF in each device. The metric used is the amount of overlap between an EF and the spectrum that drives the largest linear ideal MHD resonance, known as the "dominant mode overlap". This metric, which takes into account both the external field and plasma response, is scaled against experimental parameters known to be important for the inner layer physics. These scalings validate non-linear MHD simulation scalings, which are used to elucidate the dominant inner layer physics. Both experiments and simulations show that core penetration thresholds for EFs with toroidal mode number n = 2 are of the same order as the n = 1 thresholds that are considered most dangerous on current devices. Both n = 1 and n = 2 thresholds scale to values within the ITER design tolerances, but data from additional devices with a range of sizes are needed in order to increase confidence in quantitative extrapolations of n = 2 thresholds to ITER.
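The multi-machine regression described above is, at its core, a power-law fit in log space. The sketch below (with purely synthetic, illustrative data, not the paper's measurements) shows the single-parameter version of such a scaling fit:

```python
import math
import random

# Hypothetical sketch of a power-law scaling regression: fit
# threshold = C * x**alpha by ordinary least squares in log-log space.
# Data here is synthetic with true C = 2.0, alpha = 0.8.

def fit_power_law(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx = sum(lx) / n
    my = sum(ly) / n
    # slope = covariance / variance in log space
    alpha = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
            sum((a - mx) ** 2 for a in lx)
    log_c = my - alpha * mx
    return math.exp(log_c), alpha

rng = random.Random(1)
xs = [1.0 + i for i in range(10)]
ys = [2.0 * x ** 0.8 * math.exp(rng.gauss(0, 0.01)) for x in xs]
C, alpha = fit_power_law(xs, ys)
print(round(C, 2), round(alpha, 2))
```

A multi-parameter scaling (density, toroidal field, pressure) would extend this to a multiple linear regression on several logged predictors, but the log-space least-squares idea is the same.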

Redox transistors based on TiO2 for analogue neuromorphic computing

Li, Yiyang; Fuller, Elliot J.; Talin, Albert A.

The ability to train deep neural networks on large data sets has had a significant impact on artificial intelligence, but training consumes significant amounts of energy due to the need to move information from memory to logic units. In-memory "neuromorphic" computing presents an alternative framework that processes information directly on memory elements. In-memory computing has been limited by the poor performance of the analogue information storage element, often phase-change memory or memristors. To solve this problem, we developed two types of "redox transistors" using TiO2 (anatase), which store analogue information states through the electrochemical concentration of dopants in the crystal. The first type of redox transistor uses lithium as the electrochemical dopant ion, and its key advantage is low operating voltage. The second uses oxygen vacancies as the dopant, which is CMOS compatible and can retain state even when scaled to nanosized dimensions. Both devices offer significant advantages in terms of predictable analogue switching over conventional filamentary devices, and provide a significant advance in developing materials and devices for neuromorphic computing.

A Novel use of Direct Simulation Monte-Carlo to Model Dynamics of COVID-19 Pandemic Spread

Pacheco, Jose L.; Echo, Zakari S.; Hooper, Russell; Finley, Melissa; Manginell, Ronald

In this report, we evaluate a novel method for modeling the spread of the COVID-19 pandemic. In this new approach, we leverage methods and algorithms developed for fully-kinetic plasma physics simulations using Particle-In-Cell (PIC) Direct Simulation Monte-Carlo (DSMC) models. This approach leverages Sandia-unique simulation capabilities, High-Performance Computing (HPC) resources, and expertise in modeling particle-particle interactions with stochastic processes. Our hypothesis is that this approach provides a more efficient platform, with assumptions based on physical data, that enables the user to assess the impact of mitigation strategies and forecast different phases of infection. This work addresses key scientific questions related to the assumptions this new approach must make to model the interactions of people using algorithms typically used for modeling particle interactions in physics codes (kinetic plasma, gas dynamics). The model developed uses rational/physical inputs while also providing critical insight; the results could serve as inputs to, or alternatives for, existing models. The model presented was developed over a four-week time frame, thus far showing promising results and many ways in which this model/approach could be improved. This work is aimed at providing a proof of concept for this new pandemic modeling approach, which could have an immediate impact on COVID-19 pandemic modeling, while laying a basis to model future pandemic scenarios in a manner that is timely and efficient. Additionally, this new approach provides new visualization tools to help epidemiologists comprehend and articulate the spread of this and other pandemics, as well as a more general tool for determining the key parameters needed to improve pandemic prediction in the future.
In the report we describe our pandemic model, apply it to COVID-19 data for New York City (NYC), assess model sensitivities to different inputs and parameters, and propagate the model forward under different conditions to assess the effects of mitigation and its timing. Finally, our approach helps clarify the role of asymptomatic cases, and could be extended to elucidate the role of recovered individuals in a second round of infection, which current models ignore.
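The collision-sampling analogy described above can be illustrated with a minimal, hypothetical Python sketch (not the report's code): people are treated as particles binned into spatial cells, and each time step random pairs within a cell "collide," transmitting infection with an assumed probability, mirroring how DSMC samples particle collisions.

```python
import random

# Minimal DSMC-style epidemic sketch (a hypothetical illustration, not the
# report's model): people are "particles" binned into spatial cells, and
# each step random disjoint pairs within a cell "collide"; an
# infected-susceptible pair transmits with an assumed probability.

def step(states, cells, p_transmit, rng):
    """Advance one step; states[i] is 'S' (susceptible) or 'I' (infected)."""
    by_cell = {}
    for person, cell in enumerate(cells):
        by_cell.setdefault(cell, []).append(person)
    for members in by_cell.values():
        rng.shuffle(members)
        # sample collisions as disjoint random pairs within the cell
        for a, b in zip(members[::2], members[1::2]):
            if {states[a], states[b]} == {"S", "I"} and rng.random() < p_transmit:
                states[a] = states[b] = "I"

rng = random.Random(0)
n = 200
states = ["I" if i < 5 else "S" for i in range(n)]   # 5 seed infections
for _ in range(50):
    cells = [rng.randrange(10) for _ in range(n)]    # random movement
    step(states, cells, p_transmit=0.3, rng=rng)
print(states.count("I"))
```

A real model would add recovery, asymptomatic states, and data-driven contact rates, but the kernel above captures the particle-pairing mechanics the report borrows from plasma codes.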

Compression-induced solidification of shock-melted cerium

Physical Review B

Seagle, Christopher T.; Desjarlais, Michael P.; Porwitzky, Andrew J.; Jensen, Brian J.

Compression-induced solidification has been observed in cerium on nanosecond timescales. A series of experiments was conducted in Sandia National Laboratories' Z Facility in which cerium was shock melted and subsequently shocklessly, or ramp, loaded across the melt line inducing solidification. The signature of solidification manifested in the recovery of material strength and the propagation of waves at the local elastic sound velocity. Density functional theory simulations of cerium along the experimental phase-space path exhibit spontaneous freezing to a tetragonal phase at the same pressure and closely predict the observed physical properties of solid and liquid cerium near melt.

Multi-Node Program Fuzzing on High Performance Computing Resources

Cioce, Christian R.; Salim, Nasser J.; Rigdon, James B.; Loffredo, Daniel G.

Significant effort is placed on tuning the internal parameters of fuzzers to explore the state space, measured as coverage, of binaries. In this work, we investigate the effects of the external environment on the resulting coverage after fuzzing two binaries with AFL for 24 hours. Parameters such as scaling to multiple nodes, node saturation, and parallel file system type on HPC resources are controlled in order to maximize coverage. We show that employing a parallel file system such as IBM's General Parallel File System offers an advantage for fuzzing operations, since it contains enhancements for performance optimization. When combined with scaling to two and four nodes, while restricting the number of coordinated AFL tasks per node to the low end (10-50% of available physical cores), coverage can be increased in a shorter period of time. Controlling the external environment is thus a worthwhile complement to tuning the fuzzer itself.

Phase I Closeout Report: Invoking Artificial Neural Networks to Measure Insider Threat Mitigation

Williams, Adam D.; Foulk, James W.; Charlton, William

Researchers from Sandia National Laboratories (Sandia) and the University of Texas at Austin (UT) conducted this study to explore the effectiveness of commercial artificial neural network (ANN) software to improve insider threat detection and mitigation (ITDM). This study hypothesized that ANNs could be "trained" to learn patterns of organizational behaviors, detect off-normal (or anomalous) deviations from these patterns, and alert when certain types, frequencies, or quantities of deviations emerge. The ReconaSense ANN system was installed at UT's Nuclear Engineering Teaching Laboratory (NETL) and collected 13,653 access control data points and 694 intrusion sensor data points over a three-month period. Preliminary analysis of this baseline data demonstrated regular patterns of life in the facility, and that off-normal behaviors are detectable under certain situations -- even for a facility with anticipated highly non-routine operational behaviors. Completion of this pilot study demonstrated how the ReconaSense ANN could be used to identify expected operational patterns and detect unexpected anomalous behaviors in support of a data-analytic approach to ITDM. While additional studies are needed to fully understand and characterize this system, the results of this initial study are very promising for demonstrating a new framework for ITDM that utilizes ANNs and data analysis techniques.

Potential Academic Research Topics of National Security Relevance

Hernandez, Patricia M.; Lafleur, Jarret M.; Steinfeldt, Bradley; Uribe, Eva U.; Carlson, Lonnie; Nielan, Paul E.; Teclemariam, Nerayo P.

Since even before its establishment as an independent national security laboratory in 1949, Sandia has been devoted to an overarching mission of developing advanced technologies for global peace. These technologies have taken a variety of forms, and they exist in and must address an ever-changing global security environment. An understanding of that global security environment and its possible or likely evolution is therefore critical to ensuring that Sandia can maintain its focus on strategic technology investments that will benefit the nation in the next 20-30 years. Sandia sustains multiple Systems Analysis organizations whose responsibility includes maintaining an understanding of the global security environment as it applies across multiple mission domains. The topics below include two from Sandia's emerging threats and biodefense mission, three with relevance to Sandia's cyber defense mission, and four of particular but not exclusive relevance to Sandia's nuclear deterrence mission. All are intended to spur independent academic thought that could assist Sandia as well as the broader national security community in anticipating and adapting to a continually changing world. Sandia anticipates periodic interactions between Sandia Systems Analysis staff and SciPol Scholars Program faculty and students who choose to expand upon these topics in order to provide opportunities for feedback and communication throughout 2020-2021.

ECP ST Capability Assessment Report (CAR) for VTK-m (FY20)

Moreland, Kenneth D.

The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on Exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on accelerator processors such as graphics processing units. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, these projects depend on this ECP effort to be able to make effective use of ECP architectures. One of the biggest recent changes in high-performance computing is the increasing use of accelerators. Accelerators contain processing cores that individually are inferior to a core in a typical CPU, but these cores are replicated and grouped such that their aggregate execution provides a very high computation rate at much lower power. Current and future CPU processors also require much more explicit parallelism. Each successive version of the hardware packs more cores into each processor, and technologies like hyper-threading and vector operations require even more parallel processing to leverage each core's full potential. VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-m supports the fine-grained concurrency for data analysis and visualization algorithms required to drive extreme-scale computing by providing abstract models for data and execution that can be applied to a variety of algorithms across many different processor architectures. The ECP/VTK-m project is building up the VTK-m codebase with the necessary visualization algorithm implementations that run across the varied hardware platforms to be leveraged at the Exascale. We will be working with other ECP projects, such as ALPINE, to integrate the new VTK-m code into production software to enable visualization on our HPC systems.

Occurrence Causal Analysis Report: Inadvertent Reaction During the Pressing of Energetic Material

Minier, Leanna M.G.; Romero, Brittni; Braem, Maria

On June 30, 2020, an inadvertent reaction occurred during pressing of the energetic material pentaerythritol tetranitrate (PETN). The event occurred in the energetic component Rapid Prototype Facility (RPF), where similar operations on a variety of energetic materials are routinely performed for customers throughout Sandia National Laboratories (SNL). A background on the pressing of energetic materials is provided to enhance clarity in the description of the event. This background includes a description of the equipment, materials, and tooling present during the event.

A general framework for substructuring-based domain decomposition methods for models having nonlocal interactions [minus appendix B]

D'Elia, Marta; Bochev, Pavel B.; Gunzburger, Max D.; Capodaglio, Giacomo; Klar, Manuel; Vollman, Christian

A rigorous mathematical framework is provided for a substructuring-based domain-decomposition approach for nonlocal problems that feature interactions between points separated by a finite distance. Here, by substructuring it is meant that a traditional geometric configuration for local partial differential equation problems is used in which a computational domain is subdivided into non-overlapping subdomains. In the nonlocal setting, this approach is substructuring-based in the sense that those subdomains interact with neighboring domains over interface regions having finite volume, in contrast to the local PDE setting in which interfaces are lower-dimensional manifolds separating abutting subdomains. Key results include the equivalence between the global, single-domain nonlocal problem and its multi-domain reformulation, both at the continuous and discrete levels. These results provide the rigorous foundation necessary for the development of efficient solution strategies for nonlocal domain-decomposition methods.

GentenMPI: Distributed Memory Sparse Tensor Decomposition

Devine, Karen; Ballard, Grey

GentenMPI is a toolkit of sparse canonical polyadic (CP) tensor decomposition algorithms that is designed to run effectively on distributed-memory high-performance computers. Its use of distributed-memory parallelism enables it to efficiently decompose tensors that are too large for a single compute node's memory. GentenMPI leverages Sandia's decades-long investment in the Trilinos solver framework for much of its parallel-computation capability. Trilinos contains numerical algorithms and linear algebra classes that have been optimized for parallel simulation of complex physical phenomena. This work applies these tools to the data science problem of sparse tensor decomposition. In this report, we describe the use of Trilinos in GentenMPI, extensions needed for sparse tensor decomposition, and implementations of the CP-ALS (CP via alternating least squares) and GCP-SGD (generalized CP via stochastic gradient descent) sparse tensor decomposition algorithms. We show that GentenMPI can decompose sparse tensors of extreme size, e.g., a 12.6-terabyte tensor on 8192 computer cores. We demonstrate that the Trilinos backbone provides good strong and weak scaling of the tensor decomposition algorithms.
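The CP-ALS idea mentioned above can be sketched on a toy, rank-1, dense problem. This is an illustration of the alternating-least-squares mechanics only; GentenMPI handles the general rank-R, sparse, distributed-memory case. For a rank-1 model T[i][j][k] ≈ a[i]*b[j]*c[k], each ALS update solves a closed-form least-squares problem for one factor with the other two held fixed:

```python
# Toy rank-1 CP-ALS sketch (illustrative only, not GentenMPI code).

def als_rank1(T, iters=20):
    I, J, K = len(T), len(T[0]), len(T[0][0])
    a, b, c = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(iters):
        nb = sum(x * x for x in b)
        nc = sum(x * x for x in c)
        # a[i] = <T[i,:,:], b c^T> / (||b||^2 ||c||^2); b and c analogous
        a = [sum(T[i][j][k] * b[j] * c[k]
                 for j in range(J) for k in range(K)) / (nb * nc)
             for i in range(I)]
        na = sum(x * x for x in a)
        b = [sum(T[i][j][k] * a[i] * c[k]
                 for i in range(I) for k in range(K)) / (na * nc)
             for j in range(J)]
        nb = sum(x * x for x in b)
        c = [sum(T[i][j][k] * a[i] * b[j]
                 for i in range(I) for j in range(J)) / (na * nb)
             for k in range(K)]
    return a, b, c

# exact rank-1 test tensor: T[i][j][k] = (i+1)*(j+2)*(k+1)
T = [[[(i + 1) * (j + 2) * (k + 1) for k in range(4)]
      for j in range(3)] for i in range(2)]
a, b, c = als_rank1(T)
err = max(abs(T[i][j][k] - a[i] * b[j] * c[k])
          for i in range(2) for j in range(3) for k in range(4))
print(err)
```

On an exactly rank-1 tensor this recovers the factorization (up to scaling of the individual factors) after a single sweep; the sparse, distributed setting replaces the dense sums with matricized-tensor-times-Khatri-Rao products over distributed nonzeros.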

The Kokkos Ecosystem [Brief]

Trott, Christian R.

In 2016/2017, the field of High-Performance Computing (HPC) entered a new era driven by fundamental physics challenges to produce ever more energy- and cost-efficient processors. Since the convergence on the Message-Passing Interface (MPI) standard in the mid-1990s, application developers enjoyed a seemingly static view of the underlying machine: that of a distributed collection of homogeneous nodes executing in collaboration. However, after almost two decades of dominance, the sole use of MPI to derive parallelism acted as a limiter on future performance. While MPI is widely expected to continue to function as the basic mechanism for communication between compute nodes for the immediate future, additional parallelism is required on the compute node itself if high performance and efficiency goals are to be realized. When reviewing the architectures of the top HPC systems today, the change in paradigm is clear: the compute nodes of the leading machines in the world are either powered by many-core chips with a few dozen cores each, or use heterogeneous designs in which traditional CPUs marshal work to massively parallel compute accelerators that have as many as 200,000 processing threads in flight simultaneously. Complicating matters further for application developers, each processor vendor has its own preferred way of writing code for its architecture. The Kokkos EcoSystem was released by Sandia in 2017 to address this new era in HPC system design by providing a vendor-independent, performance-portable programming system for scientific, engineering, and mathematical software applications written in the C++ programming language. Using Kokkos, application developers can be more productive because they will not have to create and maintain separate versions of their software for each architecture, nor will they have to be experts in each architecture's peculiar requirements. Instead, they will have a single method of programming for the diverse set of modern HPC architectures. While Kokkos started in 2011 as a programming model only, it soon became clear that complex applications needed more. Portable mathematical functions are also critical, and developers need tools to debug their applications, gain insight into the performance characteristics of their codes, and tune algorithm performance parameters through automated processes. The Kokkos EcoSystem addresses those needs through its three main components: the Kokkos Core programming model, the Kokkos Kernels math library, and the Kokkos Tools project.

Battery Monitoring System

Kunzler, Kyler B.

The component that is powered by the battery pack being monitored is a valuable asset and must be in working condition at all times. Battery chemistry and characteristics play a major role in how the state of the battery is evaluated. The battery monitoring system has many parts that together produce an accurate battery reading: a coulomb counting device, end-of-life voltage detection, consideration of a real-time clock (RTC), a temperature sensor, and non-volatile random-access memory (NVRAM). The combination of these elements makes the monitoring system highly reliable. Moving forward, a better implementation of the ideas in this paper and further testing should ensure a high-quality battery monitoring system.
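The coulomb-counting element mentioned above can be sketched in a few lines: integrate measured current over time to track remaining charge, the core of a state-of-charge estimate. This is a hypothetical illustration (function name and numbers are invented); a real monitor would add temperature compensation, the end-of-life voltage check, and NVRAM persistence.

```python
# Hypothetical coulomb-counting sketch (not the paper's implementation):
# remaining charge = rated capacity minus the integral of discharge current.

def remaining_capacity_mah(capacity_mah, samples):
    """samples: (current_mA, dt_hours) pairs; positive current = discharge."""
    used_mah = sum(i_ma * dt_h for i_ma, dt_h in samples)
    return max(0.0, capacity_mah - used_mah)

# 2000 mAh pack discharged at 100 mA for 5 h, then 50 mA for 2 h
soc = remaining_capacity_mah(2000.0, [(100.0, 5.0), (50.0, 2.0)])
print(soc)  # 1400.0 mAh remaining
```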

Behavioral Health/Employee Assistance. Interim Program Report FY 2020

Carley, Valerie M.; Klein, Ben J.

The Sandia National Laboratories' (SNL) Corporate Behavioral Health Program is a workplace-based program that: 1) provides Employee Assistance Program (EAP) services including early identification and resolution of personal concerns which may impact job performance, 2) assists managers and the organization in addressing productivity issues, and 3) supports the SNL commitment to provide a safe and healthful work environment. The program is offered to approximately 13,500 employees in New Mexico. The Behavioral Health Program is a corporate program combining services in NM and CA. It is integrated with other occupational health and clinical services including disability, disease management and preventive health programs. In addition, Sandia's Behavioral Health Program provides critical management consultation and psychological assessment services for external organizations including Human Resources, the Department of Energy and Security through the Clinical Evaluation (CE) process, Human Reliability Program (HRP), Protective Force Program, Workplace Violence/Threat Assessment Team (TAT), and Insider Threat Working Group programs. The program supports Sandia National Laboratories' mission to safeguard national security, the environment, and the public; it is a proactive approach to early identification, intervention and assessment. Importantly, it reduces barriers to accessing mental health services and assists with reducing health care costs attributed to illness or injuries related to unhealthy lifestyles and behaviors. The team is composed of a professional staff including a licensed Clinical Psychologist, a licensed professional clinical counselor (LPCC) and a licensed Marriage and Family counselor (MFT) who is also a Certified Employee Assistance Professional and who holds a doctorate in counseling psychology.

Estimation of Respirable Aerosol Release Fractions through Stress Corrosion Crack-Like Geometries

Durbin, S.; Lindgren, Eric

The formation of a stress corrosion crack (SCC) in the canister wall of a dry cask storage system (DCSS) has been identified as a potential issue for the long-term storage of spent nuclear fuel. The presence of an SCC in a storage system could represent a through-wall flow path from the canister interior to the environment. Modern, vertical DCSSs are of particular interest due to the significant backfill pressurization of the canister, up to approximately 800 kPa. This pressure differential offers a relatively high driving potential for blowdown of any particulates that might be present in the canister. In this study, the carrier gas flow rates and aerosol transmission properties were evaluated for an engineered microchannel with characteristic dimensions similar to those of an SCC. The microchannel was formed by mating two gage blocks with a slot orifice measuring 28.9 μm (0.0011 in.) tall by 12.7 mm (0.500 in.) wide by 8.86 mm (0.349 in.) long (flow length). Surrogate aerosols of cerium oxide, CeO2, were seeded and mixed inside a pressurized tank. The aerosol characteristics were measured immediately upstream and downstream of the simulated SCC at elevated and ambient pressures, respectively. These data sets are intended to demonstrate a new capability to characterize SCCs under well-controlled boundary conditions. Separate modeling efforts are also underway that will be validated using these data. The test apparatus and procedures developed in this study can be easily modified for the evaluation of more complex SCC-like geometries including laboratory-grown SCC samples.

FY20 Update on Brine Availability Test in Salt. Revision 4

Kuhlman, Kristopher L.; Mills, Melissa M.; Jayne, Richard; Matteo, Edward N.; Herrick, Courtney G.; Nemer, Martin; Heath, Jason E.; Xiong, Yongliang; Choens II, Robert C.; Stauffer, Phil; Boukhalfa, Hakim; Guiltinan, Eric; Rahn, Thom; Weaver, Doug; Dozier, Brian; Otto, Shawn; Rutqvist, Jonny; Wu, Yuxin; Hu, Mengsu; Uhlemann, Sebastian; Wang, Jiannan

This report summarizes the 2020 fiscal year (FY20) status of the borehole heater test in salt funded by the US Department of Energy Office of Nuclear Energy (DOE-NE) Spent Fuel and Waste Science & Technology (SFWST) campaign. This report satisfies SFWST level-two milestone number M2SF-20SNO10303032. This report is an update of an August 2019 level-three milestone report to present the final as-built description of the test and the first phase of operational data (BATS 1a, January to March 2020) from the Brine Availability Test in Salt (BATS) field test.

Causal Trends for Occurrences at Sandia National Laboratories

Madrid, James D.

Computational analysis of deployable wind turbine systems in defense operational energy applications

Naughton, Brian; Gilletly, Samuel D.; Brown, Tamara; Kelley, Christopher L.

The U.S. military has been exploring pathways to reduce the logistical burden of fuel on virtually all of its missions globally. Energy harvesting of local resources such as wind and solar can help increase the resilience and operational effectiveness of military units, especially at the most forward operating bases, where fuel logistics are most challenging. This report considers the potential benefits of wind energy provided by deployable wind turbines, as measured by a reduction in fuel consumption and supply convoys, for a hypothetical network of Army Infantry Brigade Combat Team bases. Two modeling and simulation tools are used to represent the bases and their operations and quantify the impacts of system design variables that include wind turbine technologies, battery storage, number of turbines, and wind resource quality. The System of Systems Analysis Toolkit Joint Operational Energy Model serves as a baseline scenario for comparison. The Hybrid Optimization of Multiple Energy Resources simulation tool is used to optimize a single base within the larger Joint Operational Energy Model. The results of both tools show that wind turbines can provide significant benefits to contingency bases in terms of reduced fuel use and number of convoy trips to resupply the base. The match between the turbine design and the wind resource, which is statistically low across most of the global land area, is a critical design consideration. The addition of battery storage can enhance the benefits of wind turbines, especially in systems with more wind turbines and higher wind resources. Wind turbines may also provide additional benefits to other metrics, such as resilience, that may be important but are not fully considered in the current analysis.

Acknowledgements: The authors would like to thank the following individuals for their helpful support, feedback, and review to improve this report: U.S. Department of Energy Wind Energy Technologies Office, Patrick Gilman and Bret Barker; Idaho National Laboratory, Jake Gentle and Bradley Whipple; the National Renewable Energy Laboratory, Robert Preus and Tony Jimenez; Sandia National Laboratories, Alan Nanco, Dennis Anderson, and Hai Le. In addition, numerous discussions with military and industry stakeholders over the year were invaluable in focusing the efforts represented in this report.
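The fuel-displacement logic at the heart of such an analysis can be sketched as a back-of-the-envelope calculation. All numbers below (turbine rating, capacity factor, generator fuel rate) are hypothetical illustrations, not values from the report:

```python
# Back-of-the-envelope sketch (hypothetical numbers, not from the report):
# fuel displaced when wind energy offsets diesel generation at a base.

def fuel_saved_gal(wind_kw_rated, capacity_factor, hours, gen_gal_per_kwh):
    """Energy produced by the turbine times the generator's fuel rate."""
    wind_kwh = wind_kw_rated * capacity_factor * hours
    return wind_kwh * gen_gal_per_kwh

# one 10 kW turbine, 20% capacity factor, 30 days, 0.07 gal/kWh generator
saved = fuel_saved_gal(10.0, 0.20, 24 * 30, 0.07)
print(round(saved, 1))  # ~100.8 gallons
```

The full simulations additionally dispatch battery storage, vary the wind resource over time, and translate gallons saved into resupply convoy trips avoided.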

More Details

Additive Manufacturing Technologies Survey

Torres Chicon, Nesty R.

A literature search of the most prominent and widely available additive manufacturing technologies was conducted to understand current developments in this area. The first section provides the introduction and scope of this report. The second section describes in detail the different types of additive manufacturing technologies: how they work, the materials used, the advantages and disadvantages of each technology, and manufacturer information. For comparative purposes, the third section covers the most widely used subtractive technology, Computer Numerical Control (CNC) machining, including the parameters used and the benefits and limitations of this technology. A final section presents a summary and conclusions, comparing the power and utility of the different additive manufacturing technologies with traditional manufacturing.

More Details

Gen3CSP sCO2 Loop Scope of Supply (V0.0.3)

Alvarez, Francisco; Carlson, Matthew

The Generation 3 Concentrating Solar Power (Gen3CSP) supercritical carbon dioxide (sCO2) coolant loop, typically referred to here as the "sCO2 loop," is designed to continuously remove heat from a primary heat exchanger (PHX) subsystem through a flow of sCO2, serving as a substitute for an sCO2 Brayton power cycle as shown in Figure 1-1. This system is designed to function as a pumped coolant loop operating at a high baseline pressure with a high degree of flexibility, stability, and autonomy to simplify operation of a Gen3CSP Topic 1 team Phase 3 pilot plant. The complete system includes a dedicated inventory management module to fill the main flow loop with CO2 and to recover CO2 during heating and venting operations, minimizing the delivery of CO2 to the site.

More Details

Optical Engine Lockout System Design and Operation

Martinet, Vittorio C.; Mueller, Charles J.; Biles, Drummond E.

Engine run days in the Diesel Combustion and Fuel Effects Lab are hectic. The long mental lists that must be kept by engine operators, paired with the tight time constraints between experiments, can cause operational issues that may be dangerous to personnel and/or cause damage to test equipment. Until now, a paper sign has been used to warn operators not to motor the engine when a foreign object has been placed inside it. Unfortunately, this simple administrative control has failed in the past, motivating this effort to develop an improved system. The lockout system described in this document introduces an engineering control that, when activated, physically prevents the engine from being motored. The new system consists of a primary and a secondary control panel. Before placing a foreign object into the cylinder, an operator presses a button on the secondary control panel near the engine. This breaks the interlock circuit for the engine dynamometer and activates LEDs on both control panels to notify operators that a foreign object is present within the engine cylinder. Once the work is done and all foreign objects have been removed from the combustion chamber, two operators must be present to disable the system by simultaneously pressing the buttons on the primary and secondary control panels. Requiring a second operator to disable the system increases accountability and reduces the likelihood of potentially costly mistakes.

More Details

Automated Segmentation of Porous Thermal Spray Material CT Scans with Geometric Uncertainty Estimation

Martinez, Carianne; Bolintineanu, Dan S.; Olson, Aaron; Rodgers, Theron M.; Donohoe, Brendan D.; Potter, Kevin M.; Roberts, Scott A.; Moore, Nathan W.

Thermal sprayed metal coatings are used in many industrial applications, and characterizing the structure and performance of these materials is vital to understanding their behavior in the field. X-ray Computed Tomography (CT) machines enable volumetric, nondestructive imaging of these materials, but precise segmentation of this grayscale image data into discrete material phases is necessary to calculate quantities of interest related to material structure. In this work, we present a methodology to automate the CT segmentation process as well as quantify uncertainty in segmentations via deep learning. Neural networks (NNs) are shown to accurately segment full resolution CT scans of thermal sprayed materials and provide maps of uncertainty that conservatively bound the predicted geometry. These bounds are propagated through calculations of material properties such as porosity that may provide an understanding of anticipated behavior in the field.

More Details

Grid-scale Energy Storage Hazard Analysis & Design Objectives for System Safety

Rosewater, David; Lamb, Joshua; Hewson, John C.; Viswanathan, Vilayanur; Paiss, Matthew; Choi, Daiwon; Jaiswal, Abhishek

Battery-based energy storage systems are becoming a critical part of a modernized, resilient power system. However, batteries have a unique combination of hazards that can make the design and engineering of battery systems difficult. This report presents a systematic hazard analysis of a hypothetical, grid-scale lithium-ion battery power plant to produce sociotechnical "design objectives" for system safety. We applied systems-theoretic process analysis (STPA) for the hazard analysis, which is broken into four steps: purpose definition, modeling the safety control structure, identifying unsafe control actions, and identifying loss scenarios. The purpose of the analysis was defined as preventing event outcomes that can result in loss of battery assets due to fires and explosions, loss of health or life due to battery fires and explosions, and loss of energy storage services due to non-operational battery assets. The STPA analysis resulted in the identification of six loss scenarios, and their constituent unsafe control actions, which were used to define a series of design objectives that can be applied to reduce the likelihood and severity of thermal events in battery systems. These design objectives, in whole or in any subset, can be utilized by utilities and other industry stakeholders as "design requirements" in their storage requests for proposals (RFPs) and for evaluation of proposals. Further, these design objectives can help to protect firefighters and bring a system back to full functionality after a thermal event. We also comment on the hazards of flow battery technologies.

More Details