Streaks in the buffer layer of wall-bounded turbulence are tracked in time to study their life cycle. Spatially and temporally resolved direct numerical simulation data are used to analyze the strong wall-parallel motions conditioned on low-speed streamwise flow. The analysis shows a clear distinction between wall-attached and wall-detached streaks, and that the wall-attached streaks can be further categorized into those contained in the buffer layer and those that reach the outer region. The results reveal that streaks are born in the buffer layer and coalesce with each other to create larger streaks that remain attached to the wall. Once a streak becomes large enough, it starts to meander: its large streamwise-to-wall-normal aspect ratio, and the resulting elongation in the streamwise direction, make it more difficult for the streak to remain oriented strictly in the streamwise direction. While the continuous interaction of the streaks allows the superstructure to span extremely long time and length scales, individual streak components are relatively small and short-lived. Tall wall-attached streaks eventually split into wall-attached and wall-detached components. The wall-detached streaks have a strong wall-normal velocity away from the wall, similar to ejections or bursts reported in the literature. Conditionally averaging the flow fields around these split events shows that the detached streak not only has a larger wall-normal velocity than its wall-attached counterpart but also a larger (less negative) streamwise velocity, similar to the velocity field at the tip of a vortex cluster.
Computing k-cores on graphs is an important graph mining task, as it provides an efficient means of identifying a graph's dense and cohesive regions. Computing k-cores on hypergraphs has seen recent interest, as many datasets naturally produce hypergraphs. Maintaining k-cores as the underlying data change is important because graphs are large, growing, and continuously modified. In many practical applications the graph updates are bursty, with periods of significant activity and periods of relative calm. Existing maintenance algorithms fail to handle large bursts, and prior parallel approaches on both graphs and hypergraphs fail to scale as the number of available cores increases. We address these problems by presenting two parallel and scalable fully dynamic batch algorithms for maintaining k-cores on both graphs and hypergraphs. Both algorithms take advantage of the connection between k-cores and h-indices. One algorithm is well suited for large batches and the other for small ones. We provide the first algorithms that experimentally demonstrate scalability as the number of threads increases while sustaining high change rates in graphs and hypergraphs.
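As a minimal illustration of the k-core/h-index connection mentioned above (and not of the paper's parallel fully dynamic batch algorithms), the following Python sketch computes core numbers by repeatedly replacing each vertex's value with the h-index of its neighbors' values, which converges to the coreness; the example graph and function names are purely illustrative.

```python
# Illustrative (sequential) coreness computation via iterated h-indices.
# Each vertex starts at its degree; repeatedly replacing a vertex's value
# with the h-index of its neighbors' values converges to its core number.

def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    values = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(values, start=1):
        if v >= i:
            h = i
        else:
            break
    return h

def core_numbers(adj):
    """adj: dict mapping vertex -> iterable of neighbor vertices."""
    core = {v: len(adj[v]) for v in adj}   # initialize with degrees
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for v in adj:
            new = h_index([core[u] for u in adj[v]])
            if new != core[v]:
                core[v] = new
                changed = True
    return core

# Example: a triangle with a pendant vertex; the triangle is the 2-core.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(core_numbers(adj))  # {0: 2, 1: 2, 2: 2, 3: 1}
```

A maintenance algorithm built on this idea only needs to re-run the h-index updates on vertices affected by a batch of edge insertions or deletions, which is what makes the approach amenable to batching and parallelism.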
Support for lower precision computation is becoming more common in accelerator hardware due to lower power usage, reduced data movement, and increased computational performance. However, in many domains, computational science and engineering (CSE) problems require double precision accuracy. This conflict between hardware trends and application needs has created a need for multiprecision strategies at the level of linear algebra algorithms if the hardware is to be exploited to its full potential while meeting accuracy requirements. In this paper, we focus on preconditioned sparse iterative linear solvers, a key kernel in several CSE applications, and present a study of multiprecision strategies for accelerating this kernel on GPUs. We seek the best methods for incorporating multiple precisions into the GMRES linear solver, including iterative refinement and parallelizable preconditioners. Our work presents strategies to determine when multiprecision GMRES will be effective and how to choose parameters for a multiprecision iterative refinement solver to achieve better performance. We use an implementation based on the Trilinos library that employs Kokkos Kernels for performance portability of linear algebra kernels. Performance results demonstrate the promise of multiprecision approaches and show that further improvements are possible by optimizing low-level kernels.
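As a hedged sketch of the iterative-refinement idea discussed above (not the paper's Trilinos/Kokkos Kernels implementation), the Python code below wraps a single-precision inner GMRES solve in a double-precision refinement loop using SciPy; the matrix, tolerances, and iteration counts are placeholders.

```python
# Sketch of mixed-precision iterative refinement: the inner GMRES solve runs
# in single precision, while residuals and solution updates are kept in double.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mp_iterative_refinement(A, b, outer_iters=10, outer_tol=1e-10):
    A32 = A.astype(np.float32)                 # low-precision copy for the inner solve
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(outer_iters):
        r = b - A @ x                          # residual in double precision
        if np.linalg.norm(r) <= outer_tol * np.linalg.norm(b):
            break
        d, _ = spla.gmres(A32, r.astype(np.float32))  # inner solve in single precision
        x += d.astype(np.float64)              # correction accumulated in double
    return x

# Small example on a diagonally dominant sparse system (placeholder data).
n = 200
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = mp_iterative_refinement(A, b)
print(np.linalg.norm(b - A @ x))
```

The design point the paper studies is exactly the trade-off visible here: the inner solve and preconditioner do the expensive work in low precision, while the outer loop restores double precision accuracy at the cost of extra residual evaluations.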
Introduction: Empathy is critical for human interactions to become shared and meaningful, and it is facilitated by the expression and processing of facial emotions. Deficits in empathy and facial emotion recognition are associated with autism spectrum disorder (ASD), with specific concern over inaccurate recognition of facial emotion expressions conveying threat. Yet the number of evidence-based interventions for facial emotion recognition and processing (FERP), emotion, and empathy remains limited, particularly for adults with ASD. Transcranial direct current stimulation (tDCS), a noninvasive brain stimulation technique, may be a promising modality for safely accelerating or enhancing treatment interventions to increase their efficacy. Methods: This study investigates the effectiveness of FERP, emotion, and empathy treatment interventions paired with tDCS for adults with ASD. Verum or sham tDCS was randomly assigned in a within-subjects, double-blinded design with seven adults with ASD without intellectual disability. Outcomes were measured using scores from the Empathy Quotient (EQ) and a FERP test for both verum and sham tDCS. Results: Verum tDCS significantly improved EQ scores and FERP scores for emotions that conveyed threat. Conclusions: These results suggest the potential for increasing the efficacy of treatment interventions by pairing them with tDCS for individuals with ASD.
The reversible computation paradigm aims to provide a new foundation for general classical digital computing that is capable of circumventing the thermodynamic limits to the energy efficiency of the conventional, non-reversible digital paradigm. However, to date, the essential rationale for, and analysis of, classical reversible computing (RC) has not yet been expressed in terms that leverage the modern formal methods of non-equilibrium quantum thermodynamics (NEQT). In this paper, we begin developing an NEQT-based foundation for the physics of reversible computing. We use the framework of Gorini-Kossakowski-Sudarshan-Lindblad dynamics (a.k.a. Lindbladians) with multiple asymptotic states, incorporating recent results from resource theory, full counting statistics and stochastic thermodynamics. Important conclusions include that, as expected: (1) Landauer’s Principle indeed sets a strict lower bound on entropy generation in traditional non-reversible architectures for deterministic computing machines when we account for the loss of correlations; and (2) implementations of the alternative reversible computation paradigm can potentially avoid such losses, and thereby circumvent the Landauer limit, potentially allowing the efficiency of future digital computing technologies to continue improving indefinitely. We also outline a research plan for identifying the fundamental minimum energy dissipation of reversible computing machines as a function of speed.
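For reference, the Landauer limit invoked in conclusion (1) can be written in its standard form; this statement is generic and is not specific to the paper's Lindbladian treatment.

```latex
% Landauer's bound: erasing one bit of information into an environment at
% temperature T generates at least k_B \ln 2 of entropy, i.e. dissipates
% at least k_B T \ln 2 of energy.
\begin{align}
  \Delta S_{\mathrm{env}} \;\ge\; k_B \ln 2
  \qquad\Longrightarrow\qquad
  E_{\mathrm{diss}} \;\ge\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\,\mathrm{J}
  \quad (T = 300\,\mathrm{K}).
\end{align}
```

Reversible computing aims to avoid this per-bit cost by never irreversibly erasing the correlated information in the first place, which is the sense in which the paper argues the Landauer limit can be circumvented.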
The Geophysical Monitoring System (GMS) State-of-Health User Interface (SOH UI) is a web-based application that allows a user to view and acknowledge the SOH status of stations in the GMS system. The SOH UI will primarily be used by the System Controller, who monitors and controls the system and external data connections. The System Controller uses the station SOH UIs to monitor, detect, and troubleshoot problems with station data availability and quality.
This report details the findings of our investigation of the applicability of machine learning to the task of aftershock identification. The ability to automatically identify nuisance aftershock events, reducing analyst workload when searching for events of interest, is an important step in improving nuclear monitoring capabilities. While waveform cross-correlation methods have proven successful, they have limitations (e.g., difficulties with spike artifacts or multiple aftershocks in the same window) that machine learning may be able to overcome. Here we apply a Paired Neural Network (PNN) to a dataset consisting of real, high-quality signals added to real seismic noise in order to work with controlled, labeled data and establish a baseline of the PNN's capability to identify aftershocks. We compare against waveform cross-correlation and find that the PNN performs well, outperforming waveform cross-correlation when classifying similar waveform pairs, i.e., aftershocks.
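For context, the waveform cross-correlation baseline referred to above can be sketched as follows; this is an illustrative Python example of thresholding the peak normalized cross-correlation of a waveform pair, not the report's PNN or its exact baseline, and the threshold and toy data are assumptions.

```python
# Sketch of the waveform cross-correlation baseline: classify a waveform pair
# as "similar" (aftershock-like) when the peak of the normalized
# cross-correlation exceeds a threshold. Threshold and data are placeholders.
import numpy as np

def max_normalized_xcorr(x, y):
    """Peak of the normalized cross-correlation over all lags."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full"))

def is_aftershock_pair(x, y, threshold=0.7):
    return max_normalized_xcorr(x, y) >= threshold

# Toy example: a time-shifted copy of a signal embedded in noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
template = np.sin(2 * np.pi * 12 * t) * np.exp(-5 * t)
pair = np.roll(template, 50) + 0.1 * rng.standard_normal(t.size)
print(is_aftershock_pair(template, pair))  # expected: True
```

The limitations noted above (spike artifacts, multiple aftershocks within one window) arise because a single correlation peak summarizes the whole window, which is the weakness the learned pairwise classifier is intended to overcome.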
The U.S. Strategic Petroleum Reserve is moving toward an expanded enhanced monitoring program. In doing so, it has become apparent that a better project-wide understanding of the current stability of the abandoned Bryan Mound Cavern 3 is needed. Cavern 3 has been inaccessible since it was plugged and abandoned in 1988; this comprehensive report is therefore structured around 1) a summary of what can be discerned from historical records prior to 1988 and 2) a presentation and discussion of our current understanding of Cavern 3 based solely on surface monitoring and geomechanical analyses. The historical literature states that the cavern was deemed unsuitable for oil storage because it could not be definitively determined whether fluid pressure could be maintained in the borehole. Current surface monitoring indicates that the largest surface subsidence rates are occurring above Cavern 3. The subsidence rates are linear, with no evidence of acceleration. Cavern collapse could occur if there is insufficient pressure holding up the roof. The next step is to implement a microseismic system that will contribute to a better understanding of cavern stability and provide improved early warning of a loss of integrity.
A simple combination of the Planck blackbody emission law, optical filters, and digital image processing is demonstrated to enable most commercial color cameras (still and video) to be used as imaging pyrometers for flames and explosions. The hardware and data processing described take advantage of the color filter array (CFA) deposited on the surface of the light sensor array present in most digital color cameras. In this work, a triple-pass optical filter incorporated into the camera lens allows light in three 10 nm-wide bandpass regions to reach the CFA/light sensor array. These bandpass regions are centered over the maxima in the blue, green, and red transmission regions of the CFA, minimizing the spectral overlap normally present between these regions. A computer algorithm retrieves the blue, green, and red image matrices from camera memory and corrects for the remaining spectral overlap. A second algorithm calibrates the corrected intensities to a gray-body emitter of known temperature, producing a color intensity correction factor for the camera/filter system. The Wien approximation to the Planck blackbody emission law is used to construct temperature images from the three color (blue, green, red) matrices. A short-pass filter set eliminates light with wavelengths longer than 750 nm, providing reasonable accuracy (±10%) for temperatures between 1200 and 6000 K. The effectiveness of this system is demonstrated by measuring the temperature of several sources for which the temperature is known.
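As a hedged sketch of the underlying ratio-pyrometry relation (the report's calibration details are not reproduced here, and the specific channel wavelengths are assumptions), the Wien approximation gives, for two calibrated color channels at wavelengths λ1 and λ2:

```latex
% Two-color (ratio) pyrometry under the Wien approximation for a gray body:
% the ratio of calibrated band intensities I_1 and I_2 at wavelengths
% \lambda_1 and \lambda_2 determines the temperature T.
\begin{align}
  \frac{I_1}{I_2}
    = \left(\frac{\lambda_2}{\lambda_1}\right)^{5}
      \exp\!\left[\frac{C_2}{T}\left(\frac{1}{\lambda_2}-\frac{1}{\lambda_1}\right)\right]
  \quad\Longrightarrow\quad
  T = \frac{C_2\left(\frac{1}{\lambda_2}-\frac{1}{\lambda_1}\right)}
           {\ln\!\left(\frac{I_1}{I_2}\right) - 5\,\ln\!\left(\frac{\lambda_2}{\lambda_1}\right)},
\end{align}
```

where C2 = hc/kB ≈ 1.439 × 10^-2 m·K is the second radiation constant. Band centers near the blue, green, and red CFA maxima (roughly 450, 550, and 650 nm) are illustrative values only, and the equal-emissivity (gray-body) assumption mirrors the calibration described above.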
In 2019, Sandia National Laboratories contracted Synapse Energy Economics (Synapse) to research the integration of community and electric utility resilience investment planning as part of the Designing Resilient Communities: A Consequence-Based Approach for Grid Investment (DRC) project. Synapse produced a series of reports to explore the challenges and opportunities in several key areas, including benefit-cost analysis, performance metrics, microgrids, and regulatory mechanisms to promote investments in electric system resilience. This report focuses on regulatory mechanisms to improve resilience: approaches that electric utility regulators can use to align utility, customer, and third-party investments with regulatory, ratepayer, community, and other important stakeholder interests and priorities for resilience. Cost-of-service regulation may fail to provide utilities with adequate guidance or incentives regarding community priorities for infrastructure hardening and disaster recovery; other types of regulatory mechanisms can help fill this gap. In this report, we assess the effectiveness of a range of utility regulatory mechanisms at evaluating and prioritizing utility investments in grid resilience. We first characterize the regulatory objectives that underlie all regulatory mechanisms. We then describe seven types of regulatory mechanisms that are used or can be adapted to improve the resilience of the electric system, including performance-based regulation, integrated planning, tariffs and programs to leverage private investment, alternative lines of business for utilities, enhanced cost recovery, and securitization, and we provide a case study of each. We summarize our findings on the extent to which these regulatory mechanisms have supported resilience to date and conclude with suggestions on how they might be improved and applied to resilience moving forward.
This work uses accelerating rate calorimetry to evaluate the impact of cell chemistry, state of charge, cell capacity, and ultimately cell energy density on the total energy release and peak heating rates observed during thermal runaway of Li-ion batteries. While the traditional focus has been on using calorimetry to compare different chemistries in cells of similar size, this work seeks to better understand how applicable small-cell data are to the thermal runaway behavior of large cells, and to determine whether thermal runaway behavior can be tied more generally to properties of lithium-ion cells such as total stored energy and specific energy. We have found a strong linear correlation between the total enthalpy of the thermal runaway process and the stored energy of the cell, apparently independent of cell size and state of charge. We have also shown that peak heating rates and peak temperatures reached during thermal runaway events are more closely tied to specific energy, increasing exponentially in the case of peak heating rates.
Experimental measurements of room closure in salt repositories are valuable for understanding the evolution of the underground and for validating geomechanical models. Room closure was measured during a number of experiments at the Waste Isolation Pilot Plant (WIPP) during the 1980s and 1990s. Most rooms were excavated using a multi-pass mining sequence, where each pass necessarily destroyed some of the mining sequence closure measurement points. These measurement points were promptly reinstalled to capture the closure after the mining pass. After a room was complete, the mining sequence closure measurement stations were supplemented with remotely read closure measurement stations. Although many aspects of these experiments were thoroughly documented, the digital copies of the closure data were inadvertently destroyed, the non-trivial process of zeroing and shifting the raw closure measurements after each mining pass was not precisely described, the various closure measurements within a given room were not directly compared on the same plot, and the measurements were collected for several years longer than previously reported. Consequently, the hand-written mining sequence closure measurements for Rooms D, B, G, and Q were located in the WIPP archives, digitized, and reanalyzed for this report. The process of reconstructing the mining sequence closure histories is documented in detail, and the raw data can be found in the appendices. Within the mid-section of a given room, the reconstructed closure histories were largely consistent with the other mining sequence and remotely read closure histories, which builds confidence in the experiments and suggests that plane strain is an appropriate modeling assumption. The reconstructed closure histories were also reasonably consistent with previously published results, except in one notable case: the reconstructed Room Q closure histories 30 days after excavation were about 45% less than the corresponding closures reported in Munson's 1997 capstone paper.
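Because the report notes that the original zeroing-and-shifting procedure was not precisely described, the following Python sketch shows only one plausible, simplified approach (a cumulative offset carried from one gauge segment to the next), not the documented WIPP procedure; the data and function name are placeholders.

```python
# Illustrative reconstruction of a continuous closure history from gauge
# segments that were re-zeroed after each mining pass: each new segment is
# shifted by the closure accumulated up to the point where the previous
# gauge was destroyed. Simplified sketch only; not the documented procedure.

def reconstruct_closure(segments):
    """segments: list of (time, reading) lists, each starting near zero
    after a gauge was reinstalled following a mining pass."""
    history = []
    offset = 0.0
    for seg in segments:
        for t, reading in seg:
            history.append((t, reading + offset))  # shift by accumulated closure
        offset = history[-1][1]                    # carry last shifted value forward
    return history

# Placeholder data: three passes, times in days, readings in inches.
segments = [
    [(0, 0.00), (10, 0.40), (20, 0.65)],
    [(21, 0.00), (40, 0.55), (60, 0.80)],
    [(61, 0.00), (100, 0.70)],
]
for t, c in reconstruct_closure(segments):
    print(t, round(c, 2))
```

In practice any closure occurring between gauge destruction and reinstallation is lost, which is part of what makes the real reconstruction non-trivial.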
Carbon dioxide (CO2) is often treated as the sole culprit for global warming; however, nitrous oxide (N2O), a greenhouse gas (GHG) with approximately 300 times the global warming potential of CO2, accounts for 6% of GHG emissions in the United States. Seventy-five percent of N2O emissions come from synthetic nitrogen (N) fertilizer use in the agricultural sector, primarily due to excess fertilization. Numerous studies have shown that changes in soil management practices, specifically optimizing N fertilizer use and amending soil with organic and humate materials, can reverse soil damage and improve a farmer's or land reclamation company's balance sheet. Soil restoration is internationally recognized as one of the lowest-cost GHG abatement opportunities available. Profitability improves in two ways: (1) lower operating costs resulting from lower input costs (water and fertilizer); and (2) increased revenue through participation in emerging GHG offset markets and water quality trading markets.
The NRC’s Non-Light Water Reactor Vision and Strategy report discusses the readiness of the MACCS code for nearfield analyses. To increase the nearfield capabilities of MACCS, the plume meander model from Ramsdell and Fosmire was integrated into MACCS, and the existing MACCS plume meander model based on U.S. NRC Regulatory Guide 1.145 was updated. Test cases were developed to verify the plume meander model implementation in MACCS 4.1. The results using the implemented MACCS plume meander models agree with comparisons against other codes and with analytical calculations, verifying that the additional plume meander models have been successfully implemented in MACCS 4.1. This report documents the verification of these model implementations in MACCS and a comparison of the results from the implemented models with those from other codes and analytical calculations.