Proceedings of the 14th International Conference on Radiation Shielding and 21st Topical Meeting of the Radiation Protection and Shielding Division, ICRS 2022/RPSD 2022
Sedimentary-hosted geothermal energy systems are permeable structural, structural-stratigraphic, and/or stratigraphic horizons with sufficient temperature for direct use and/or electricity generation. Sedimentary-hosted (i.e., stratigraphic) geothermal reservoirs may be present in multiple locations across the central and eastern Great Basin of the USA, thereby constituting a potentially large base of untapped, economically accessible energy resources. Sandia National Laboratories has partnered with a multi-disciplinary group of collaborators to evaluate a stratigraphic system in Steptoe Valley, Nevada using both established and novel geophysical imaging techniques. The goal of this study is to inform an optimized strategy for subsequent exploration and development of this and analogous resources. Building from prior Nevada Play Fairway Analysis (PFA), this team is primarily 1) collecting additional geophysical data, 2) employing novel joint geophysical inversion/modeling techniques to update existing 3D geologic models, and 3) integrating the geophysical results to produce a working, geologically constrained thermo-hydrological reservoir model. Prior PFA work highlights Steptoe Valley as a favorable resource basin that likely has both sedimentary and hydrothermal characteristics. However, there remains significant uncertainty on the nature and architecture of the resource(s) at depth, which increases the risk in exploratory drilling. Newly acquired gravity, magnetic, magnetotelluric, and controlled-source electromagnetic data, in conjunction with new and preceding geoscientific measurements and observations, are being integrated and evaluated in this study for efficacy in understanding stratigraphic geothermal resources and mitigating exploration risk. Furthermore, the influence of hydrothermal activity on sedimentary-hosted reservoirs in favorable structural settings (i.e., whether fault-controlled systems may locally enhance temperature and permeability in some deep stratigraphic reservoirs) will also be evaluated. This paper provides details and current updates on the course of this study in-progress.
Cerjan, Alexander; Jorg, Christina; Vaidya, Sachin; Noh, Jiho; Augustine, Shyam; Von Freymann, Georg; Rechtsman, Mikael C.
Weyl points are point degeneracies that occur in momentum space of 3D periodic materials and are associated with a quantized topological charge. Here, the splitting of a quadratic (charge-2) Weyl point into two linear (charge-1) Weyl points in a 3D micro-printed photonic crystal is observed experimentally via Fourier-transform infrared spectroscopy. Using a theoretical analysis rooted in symmetry arguments, it is shown that this splitting occurs along high-symmetry directions in the Brillouin zone. This micro-scale observation and control of Weyl points is important for realizing robust topological devices in the near-infrared.
The development of the High-Resolution Wavelet Transform (HRWT) is driven by the need to increase the high-frequency resolution of widely used discrete Wavelet Transforms (WTs). Based on the Stationary Wavelet Transform (SWT), which is a modification of the Discrete Wavelet Transform (DWT), a novel WT that increases the number of decomposition levels (thereby increasing the aforementioned frequency resolution) is proposed. To show the validity of the HRWT, this paper provides a theoretical comparison with other discrete WT methods. First, a summary of the DWT and the SWT, along with a brief explanation of WT theory, is provided. Then, the concept of the HRWT is presented, followed by a discussion of how this new method adheres to the WT's common properties. Finally, an example application is presented, analyzing a transient waveform from a power system fault event and outlining the benefits that its use offers compared to the SWT.
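As a point of reference for the comparison above, the sketch below uses the PyWavelets library to perform the baseline SWT decomposition of a synthetic fault-like transient; the HRWT itself is not part of PyWavelets, so the signal, wavelet choice, and level count are purely illustrative assumptions.

```python
# Baseline SWT decomposition of a synthetic fault-like transient using
# PyWavelets; only the standard SWT that the HRWT builds on is shown here.
import numpy as np
import pywt

fs = 1024                                   # sampling rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 60 * t)         # 60 Hz fundamental
signal[512:540] += 0.8 * np.sin(2 * np.pi * 600 * t[512:540])   # injected transient

# The SWT keeps the full signal length at every level (no downsampling),
# which is the property the HRWT exploits to add further decomposition levels.
coeffs = pywt.swt(signal, wavelet="db4", level=4)

for i, (cA, cD) in enumerate(coeffs):
    print(f"decomposition band {i}: detail-coefficient energy = {np.sum(cD**2):.3f}")
```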
Proceedings of ROSS 2022: International Workshop on Runtime and Operating Systems for Supercomputers, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
Hardware design in high-performance computing (HPC) is often highly experimental. Exploring new designs is difficult and time-consuming, requiring lengthy vendor cooperation. RISC-V is an open-source processor ISA that improves the accessibility of chip design, including the ability to do hardware/software co-design using open-source hardware and tools. Co-design allows design decisions to easily flow across the hardware/software boundary and influence future design ideas. However, new hardware designs require corresponding software to drive and test them. Conventional operating systems like Linux are massively complex and modification is time-prohibitive. In this paper, we describe our port of the Kitten lightweight kernel operating system to RISC-V in order to provide an alternative to Linux for conducting co-design research. Kitten's small code base and simple resource management policies are well matched for quickly exploring new hardware ideas that may require radical operating system modifications and restructuring. Our evaluation shows that Kitten on RISC-V is functional and provides similar performance to Linux for single-core benchmarks. This provides a solid foundation for using Kitten in future co-design research involving RISC-V.
As presented above, because similar existing DOE-managed SNF (DSNF) from previous reactors have been evaluated for disposal pathways, we use this knowledge/experience as a broad reference point for the initial technical bases for preliminary dispositioning of potential AR SNF. The strategy for developing fully formed gap analyses for AR SNF begins with obtaining all the defining characteristics of the AR SNF waste stream from the AR developers. Utilizing specific and accurate information/data to develop the potential disposal inventory to be evaluated is a key starting point for success. Once the AR SNF waste streams are defined, the initial assessments would be based on comparison to appropriate existing SNF/waste forms previously analyzed (prior experience) to determine the feasibility of direct disposal, or the need for further evaluation due to differences specific to the AR SNF. Assessments of criticality potential and controls would also be performed to identify any R&D gaps in that regard. Although some AR SNF may need additional treatment for waste form development, these aspects may also be constrained and evaluated within the context of disposal options, including detailed gap analysis to identify further R&D activities to close the gaps.
The emerging use of the physics-based athermal recombination-corrected displacement per atom (arc-dpa) model for the displacement damage efficiency has motivated a re-evaluation of the historical empirically-derived GaAs damage response function with the purpose of highlighting needs for future analytical and experimental work. The 1-MeV neutron damage equivalence methodology used in the ASTM E-722 standard for GaAs has been re-evaluated using updated nuclear data. This yielded a higher fidelity representation of the GaAs displacement kerma and, through the use of the refined PKA recoil energy-dependent damage efficiency model, an updated 1-MeV(GaAs) displacement damage function. This re-evaluation included use of the Norgett-Robinson-Torrens (NRT) model for an updated threshold treatment, rather than the sharp-threshold Kinchin-Pease model used in the current ASTM standard. The underlying nuclear data evaluations have been updated to use the ENDF/B-VIII.0 75As and TENDL-2019 71Ga/69Ga evaluations. The displacement kerma and 1-MeV-equivalent damage responses were calculated using a modified NJOY-2016 code which allowed for refinements in some of the damage models. This paper shows that an updated displacement damage function, based upon the latest nuclear data, is consistent with the experimental data used to develop the current ASTM E-722 GaAs standard. Using a double ratio approach to compare the available experimental data with the calculated response, the average legacy double ratio was found to be 0.97±0.05 and the average updated double ratio was found to be 0.94±0.05.
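For illustration only, a double ratio as used above divides a measured response ratio (test environment relative to a reference environment) by the corresponding calculated damage-equivalent ratio; the sketch below shows the averaging step with placeholder values, not the data behind the 0.97±0.05 and 0.94±0.05 results reported in the paper.

```python
# Illustrative double-ratio averaging; all numerical values are placeholders.
import numpy as np

# measured device response and calculated 1-MeV(GaAs)-equivalent damage,
# each normalized to a chosen reference environment (hypothetical values)
measured_ratio   = np.array([1.02, 0.95, 0.98])    # M_i / M_ref
calculated_ratio = np.array([1.05, 1.01, 1.03])    # C_i / C_ref

double_ratio = measured_ratio / calculated_ratio
print(f"average double ratio = {double_ratio.mean():.2f} "
      f"+/- {double_ratio.std(ddof=1) / np.sqrt(double_ratio.size):.2f}")
```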
A 0.2-2 GHz digitally programmable RF delay element based on a time-interleaved multi-stage switched-capacitor (TIMS-SC) approach is presented. The proposed approach enables hundreds of ns of broadband RF delay by employing sample time expansion in multiple stages of switched-capacitor storage elements. The delay element was implemented in a 45 nm SOI CMOS process and achieves a 2.55-448.6 ns programmable delay range with < 0.12% delay variation across 1.8 GHz of bandwidth at maximum delay, 2.42 ns programmable delay steps, and 330 ns/mm2 area efficiency. The device achieves 24 dB gain, 7.1 dB noise figure, and consumes 80 mW from a 1 V supply with an active area of 1.36 mm2.
Numerical simulations of Greenland and Antarctic ice sheets involve the solution of large-scale highly nonlinear systems of equations on complex shallow geometries. This work is concerned with the construction of Schwarz preconditioners for the solution of the associated tangent problems, which are challenging for solvers mainly because of the strong anisotropy of the meshes and wildly changing boundary conditions that can lead to poorly constrained problems on large portions of the domain. Here, two-level generalized Dryja-Smith-Widlund (GDSW)-type Schwarz preconditioners are applied to different land ice problems, i.e., a velocity problem, a temperature problem, as well as the coupling of the former two problems. We employ the message passing interface (MPI)-parallel implementation of multilevel Schwarz preconditioners provided by the package FROSch (fast and robust overlapping Schwarz) from the Trilinos library. The strength of the proposed preconditioner is that it yields out-of-the-box scalable and robust preconditioners for the single physics problems. To the best of our knowledge, this is the first time two-level Schwarz preconditioners have been applied to the ice sheet problem and a scalable preconditioner has been used for the coupled problem. The preconditioner for the coupled problem differs from previous monolithic GDSW preconditioners in the sense that decoupled extension operators are used to compute the values in the interior of the subdomains. Several approaches for improving the performance, such as reuse strategies and shared memory OpenMP parallelization, are explored as well. In our numerical study we target both uniform meshes of varying resolution for the Antarctic ice sheet as well as nonuniform meshes for the Greenland ice sheet. We present several weak and strong scaling studies confirming the robustness of the approach and the parallel scalability of the FROSch implementation. Among the highlights of the numerical results are a weak scaling study for up to 32 K processor cores (8 K MPI ranks and 4 OpenMP threads) and 566 M degrees of freedom for the velocity problem as well as a strong scaling study for up to 4 K processor cores (and MPI ranks) and 68 M degrees of freedom for the coupled problem.
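To make the structure of a two-level Schwarz preconditioner concrete, the sketch below applies overlapping subdomain solves plus a coarse correction to a small problem in SciPy. This is not the FROSch/GDSW implementation used in the work above: the coarse space here is a simple piecewise-constant partition rather than GDSW energy-minimizing functions, and the 1D Poisson test problem is purely illustrative.

```python
# Conceptual two-level additive Schwarz preconditioner (not FROSch/GDSW).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def two_level_schwarz(A, subdomains, coarse_basis):
    """Apply M^{-1} r = sum_i R_i^T A_i^{-1} R_i r + Phi (Phi^T A Phi)^{-1} Phi^T r."""
    n = A.shape[0]
    local_factors = [(idx, spla.splu(A[idx, :][:, idx].tocsc()))
                     for idx in subdomains]
    coarse_lu = spla.splu((coarse_basis.T @ A @ coarse_basis).tocsc())

    def apply(r):
        z = np.zeros(n)
        for idx, lu in local_factors:          # level 1: overlapping subdomain solves
            z[idx] += lu.solve(r[idx])
        z += coarse_basis @ coarse_lu.solve(coarse_basis.T @ r)   # level 2: coarse correction
        return z

    return spla.LinearOperator((n, n), matvec=apply)

# Small usage example on a 1D Poisson problem (illustrative only).
n, nsub, overlap = 400, 8, 4
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
size = n // nsub
subdomains = [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
              for i in range(nsub)]
Phi = sp.lil_matrix((n, nsub))
for i in range(nsub):
    Phi[i * size:(i + 1) * size, i] = 1.0      # piecewise-constant coarse space
M = two_level_schwarz(A, subdomains, Phi.tocsr())
x, info = spla.cg(A, np.ones(n), M=M)
```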
A crucial component of field testing is the utilization of numerical models to better understand the system and the experimental data being collected. Meshing and modeling field tests is a complex and computationally demanding problem. Hexahedral elements cannot always reproduce experimental dimensions, leading to grid-orientation or geometric errors. Voronoi meshes can match complex geometries without sacrificing orthogonality. As a result, here we present a high-resolution 3D numerical study for the BATS heater test at the WIPP that compares a standard non-deformed Cartesian mesh with a Voronoi mesh in matching field data collected during a salt heater experiment.
The growing x-ray detection burden for vehicles at Ports of Entry in the US requires the development of efficient and reliable algorithms to assist human operators in detecting contraband. Developing algorithms for large-scale non-intrusive inspection (NII) that both meet operational performance requirements and are extensible for use in an evolving environment requires large volumes and varieties of training data, yet collecting and labeling data for these environments is prohibitively costly and time-consuming. Given these constraints, generating synthetic data to augment algorithm training has been a focus of recent research. Here we discuss the use of synthetic imagery in an object detection framework, and describe a simulation-based approach to determining domain-informed threat image projection (TIP) augmentation.
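As a rough illustration of the idea behind threat image projection, attenuations in transmission x-ray imagery combine additively in the log domain, so a threat patch can be composited onto a background radiograph by multiplying transmission values. The arrays, function name, and placement below are hypothetical, and the sketch omits the geometry and detector-response modeling that a simulation-based TIP pipeline would add.

```python
# Minimal sketch of threat image projection (TIP) in transmission space.
import numpy as np

def project_threat(background, threat_chip, row, col):
    """Insert a threat transmission patch into a background transmission image.

    Both arrays hold transmission values in (0, 1]; multiplying them is
    equivalent to summing attenuation line integrals (Beer-Lambert law).
    """
    out = background.copy()
    h, w = threat_chip.shape
    out[row:row + h, col:col + w] *= threat_chip
    return out

# toy example: uniform cargo background with a denser rectangular "threat"
background = np.full((300, 500), 0.7)
threat = np.full((40, 60), 0.3)
augmented = project_threat(background, threat, row=120, col=200)
```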
Hyperspectral computed tomography (HCT) data are often visualized using dimension reduction algorithms. However, these methods often fail to adequately differentiate between materials with similar spectral signatures. Previous work showed that a combination of image preprocessing, clustering, and dimension reduction techniques can be used to colorize simulated HCT data and enhance the contrast between similar materials. In this work, we evaluate the efficacy of these existing methods on experimental HCT data and propose new improvements to the robustness of these methods. We introduce an automated channel selection method and compare the Feldkamp, Davis, and Kress filtered back-projection (FBP) algorithm with the maximum-likelihood expectation-maximization (MLEM) algorithm in terms of HCT reconstruction image quality and its effect on different colorization methods. Additionally, we propose adaptations to the colorization process that eliminate the need for a priori knowledge of the number of distinct materials for material classification. Our results show that these methods generalize to materials in real-world experimental HCT data for both colorization and classification tasks; both tasks have applications in industry, medicine, and security, wherever rapid visualization and identification are needed.
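For context, the sketch below shows the standard MLEM multiplicative update written for a generic system matrix; the reconstructions compared in this work operate on cone-beam geometry and per-energy-channel sinograms, which are omitted here, so the matrix and measurement vector are stand-ins.

```python
# Standard MLEM iteration for a generic linear forward model (illustrative).
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """A: (n_rays, n_voxels) system matrix, y: measured sinogram (n_rays,)."""
    x = np.ones(A.shape[1])                   # non-negative initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity (back-projected ones)
    for _ in range(n_iters):
        forward = A @ x                       # forward projection of current image
        ratio = y / np.maximum(forward, eps)  # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)   # multiplicative update
    return x
```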
Sanders, Stephen; Dowran, Mohammadjavad; Jain, Umang; Lu, Tzu M.; Marino, Alberto M.; Manjavacas, Alejandro
Periodic arrays of nanoholes perforated in metallic thin films interact strongly with light and produce large electromagnetic near-field enhancements in their vicinity. As a result, the optical response of these systems is very sensitive to changes in their dielectric environment, thus making them an exceptional platform for the development of compact optical sensors. Given that these systems already operate at the shot-noise limit when used as optical sensors, their sensing capabilities can be enhanced beyond this limit by probing them with quantum light, such as squeezed or entangled states. Motivated by this goal, here, we present a comparative theoretical analysis of the quantum enhanced sensing capabilities of metallic nanohole arrays with one and two holes per unit cell. Through a detailed investigation of their optical response, we find that the two-hole array supports resonances that are narrower and stronger than those of its one-hole counterpart, and therefore offers a higher fundamental sensitivity limit as defined by the quantum Cramér-Rao bound. We validate the optical response of the analyzed arrays with experimental measurements of the reflectance of representative samples. The results of this work advance our understanding of the optical response of these systems and pave the way for developing sensing platforms capable of taking full advantage of the resources offered by quantum states of light.
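For reference, the quantum Cramér-Rao bound invoked above limits the precision of any unbiased estimate of a parameter encoded in a probe state; in its common form for M independent probes it reads as follows, with F_Q denoting the quantum Fisher information of the state ρ_θ:

```latex
% Quantum Cramér-Rao bound for M independent probes of a state \rho_\theta
\Delta\theta \;\ge\; \frac{1}{\sqrt{M \, F_Q[\rho_\theta]}}
```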
Magann, Alicia B.; Mccaul, Gerard; Rabitz, Herschel A.; Bondar, Denys I.
The characterization of mixtures of non-interacting, spectroscopically similar quantum components has important applications in chemistry, biology, and materials science. We introduce an approach based on quantum tracking control that allows for determining the relative concentrations of constituents in a quantum mixture, using a single pulse which enhances the distinguishability of components of the mixture and has a length that scales linearly with the number of mixture constituents. To illustrate the method, we consider two very distinct model systems: mixtures of diatomic molecules in the gas phase, as well as solid-state materials composed of a mixture of components. A set of numerical analyses are presented, showing strong performance in both settings.
This paper provides a study of the potential impacts of climate change on intermittent renewable energy resources, battery storage, and resource adequacy in Public Service Company of New Mexico's Integrated Resource Plan for 2020 - 2040. Climate change models and available data were first evaluated to determine uncertainty and potential changes in solar irradiance, temperature, and wind speed in NM in the coming decades. These changes were then implemented in solar and wind energy models to determine impacts on renewable energy resources in NM. Results for the extreme climate-change scenario show that the projected wind power may decrease by ~13% due to projected decreases in wind speed. Projected solar power may decrease by ~4% due to decreases in irradiance and increases in temperature in NM. Uncertainty in these climate-induced changes in wind and solar resources was accommodated in probabilistic models assuming uniform distributions in the annual reductions in solar and wind resources. Uncertainty in battery storage performance was also evaluated based on increased temperature, capacity fade, and degradation in round-trip efficiency. The hourly energy balance was determined throughout the year given uncertainties in the renewable energy resources and energy storage. The loss of load expectation (LOLE) was evaluated for the 2040 No New Combustion portfolio and found to increase from 0 days/year to a median value of ~2 days/year due to potential reductions in renewable energy resources and battery storage performance and capacity. A rank-regression analysis revealed that battery round-trip efficiency was the most significant parameter that impacted LOLE, followed by solar resource, wind resource, and battery fade. An increase in battery storage capacity to ~30,000 MWh from a baseline value of ~14,000 MWh was required to reduce the median value of LOLE to ~0.2 days/year with consideration of potential climate impacts and battery degradation.
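The sketch below illustrates, in miniature, the probabilistic LOLE calculation described above: uniform reductions in wind and solar output and a degraded round-trip efficiency are sampled, an hourly energy balance with simple storage dispatch is run for a year, and days with unserved load are counted. The hourly profiles, storage model, and sampled ranges are placeholders, not PNM IRP data or the study's actual models.

```python
# Illustrative Monte Carlo LOLE estimate; all inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
load  = 1500 + 500 * rng.random(hours)                        # MW, hypothetical demand
solar = np.maximum(0.0, 1000 * np.sin(np.linspace(0, 365 * np.pi, hours)))
wind  = 600 * rng.random(hours)                               # MW, hypothetical profiles

def lole_days(solar_cut, wind_cut, storage_mwh, rte):
    """Count days with at least one hour of unserved load for one sampled scenario."""
    soc = storage_mwh
    shortfall = np.zeros(hours, dtype=bool)
    for h in range(hours):
        net = (1 - solar_cut) * solar[h] + (1 - wind_cut) * wind[h] - load[h]
        if net >= 0:
            soc = min(storage_mwh, soc + rte * net)           # charge (1-hour steps)
        else:
            draw = min(soc, -net)
            soc -= draw
            shortfall[h] = draw < -net                        # unserved energy this hour
    return np.unique(np.flatnonzero(shortfall) // 24).size    # days per year

# Sample uniform reductions in solar (<=4%) and wind (<=13%) plus degraded
# round-trip efficiency, mirroring the probabilistic treatment described above.
samples = [lole_days(rng.uniform(0, 0.04), rng.uniform(0, 0.13),
                     storage_mwh=14000, rte=rng.uniform(0.75, 0.85))
           for _ in range(200)]
print("median LOLE (days/year):", np.median(samples))
```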
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray computed tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but lacks spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratios in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen because their K-lines fall within distinct energy intervals of interest and because they are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high transmittance of X-rays. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 µm diameter of the impinging electron beam in the X-ray tube. This effectively shrinks the X-ray focal spot in the selected energy bands. Once fabricated, our transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
Carbon sequestration is a growing field that requires subsurface monitoring for potential leakage of the sequestered fluids through the casing annulus. Sandia National Laboratories (SNL) is developing a smart collar system for downhole fluid monitoring during carbon sequestration. This technology is part of a collaboration between SNL, University of Texas at Austin (UT Austin) (project lead), California Institute of Technology (Caltech), and Research Triangle Institute (RTI) to obtain real-time monitoring of the movement of fluids in the subsurface through direct formation measurements. Caltech and RTI are developing millimeter-scale radio frequency identification (RFID) sensors that can sense carbon dioxide, pH, and methane. These sensors will be impervious to cement, and as such, can be mixed with cement and poured into the casing annulus. The sensors are powered and communicate via standard RFID protocol at 902-928 MHz. SNL is developing a smart collar system that wirelessly gathers RFID sensor data from the sensors embedded in the cement annulus and relays that data to the surface via a wired pipe that utilizes inductive coupling at the collar to transfer data through each segment of pipe. This system cannot transfer a direct current signal to power the smart collar, and therefore, both power and communications will be implemented using alternating current and electromagnetic signals at different frequencies. The complete system will be evaluated at UT Austin's Devine Test Site, which is a highly characterized and hydraulically fractured site. This is the second year of the three-year effort, and a review of SNL's progress on the design and implementation of the smart collar system is provided.
Firmware emulation is useful for finding vulnerabilities, performing debugging, and testing functionalities. However, the process of enabling firmware to execute in an emulator (i.e., re-hosting) is difficult. Each piece of the firmware may depend on hardware peripherals outside the microcontroller that are inaccessible during emulation. Current practices involve painstakingly disentangling these dependencies or replacing them with developed models that emulate functions interacting with hardware. Unfortunately, both are highly manual and error-prone. In this paper, we introduce a systematic graph-based approach to analyze firmware binaries and determine which functions need to be replaced. Our approach is customizable to balance the fidelity of the emulation and the amount of effort it would take to achieve the emulation by modeling functions. We run our algorithm across a number of firmware binaries and show its ability to capture and remove a large majority of hardware dependencies.
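A conceptual sketch of this kind of graph-based dependency analysis is shown below: starting from a call graph recovered from a firmware binary, functions that transitively reach known hardware-touching leaves (e.g., MMIO accessors) are flagged as hardware-dependent, and choosing how far up those call chains to cut is the fidelity-versus-effort knob described above. The call graph, function names, and leaf set are invented for illustration, not output from the paper's tooling.

```python
# Toy call-graph analysis marking functions that depend on hardware peripherals.
import networkx as nx

calls = [("main", "init_board"), ("main", "app_loop"),
         ("init_board", "uart_init"), ("app_loop", "read_sensor"),
         ("read_sensor", "spi_xfer"), ("app_loop", "compute")]
hardware_leaves = {"uart_init", "spi_xfer"}      # identified as touching peripherals

cg = nx.DiGraph(calls)
depends_on_hw = {f for f in cg.nodes
                 if f in hardware_leaves or nx.descendants(cg, f) & hardware_leaves}

print("hardware-dependent functions:", sorted(depends_on_hw))
print("pure-software functions:", sorted(set(cg.nodes) - depends_on_hw))
```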
Intracellular transport by kinesin motors moving along their associated cytoskeletal filaments, microtubules, is essential to many biological processes. This active transport system can be reconstituted in vitro with the surface-adhered motors transporting the microtubules across a planar surface. In this geometry, the kinesin-microtubule system has been used to study active self-assembly, to power microdevices, and to perform analyte detection. Fundamental to these applications is the ability to characterize the interactions between the surface tethered motors and microtubules. Fluorescence Interference Contrast (FLIC) microscopy can illuminate the height of the microtubule above a surface, which, at sufficiently low surface densities of kinesin, also reveals the number, locations, and dynamics of the bound motors.
This paper presents a type-IV wind turbine generator (WTG) model developed in MATLAB/Simulink. An aerodynamic model is used to improve an electromagnetic transient model. This model is further developed by incorporating a single-mass model of the turbine and including generator torque control derived from the aerodynamic model. The model is validated using field data collected from an actual WTG located at the Scaled Wind Farm Technology (SWiFT) facility. The model takes the nacelle wind speed as an input estimate. To ensure the model and the SWiFT WTG field data are compared accurately, the wind speed is estimated using a Kalman filter. Simulation results show that using a single-mass model instead of a two-mass model for aerodynamic torque, including the generator torque control from SWiFT, estimating wind speed via the Kalman filter, and tuning the synchronous generator accurately reproduce the generator torque, speed, and power compared to the SWiFT WTG field data.
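To illustrate the role of the wind-speed estimation step, the sketch below applies a scalar Kalman filter with a random-walk state model to a noisy nacelle wind-speed signal. The paper's actual estimator and its tuning are not reproduced here; the noise variances and the synthetic signal are illustrative assumptions.

```python
# Scalar Kalman filter smoothing a noisy wind-speed measurement (illustrative).
import numpy as np

def kalman_wind_speed(meas, q=0.01, r=0.5):
    """Random-walk state model: v_k = v_{k-1} + w,  measurement: z_k = v_k + n."""
    v, p = meas[0], 1.0                 # state estimate and its variance
    est = np.empty_like(meas)
    for k, z in enumerate(meas):
        p += q                          # predict (random-walk process noise)
        k_gain = p / (p + r)            # Kalman gain
        v += k_gain * (z - v)           # update with measurement residual
        p *= (1 - k_gain)
        est[k] = v
    return est

# toy usage: slowly varying ~8 m/s wind corrupted by sensor noise
rng = np.random.default_rng(1)
true_wind = 8 + 0.5 * np.sin(np.linspace(0, 20, 600))
measured = true_wind + rng.normal(0, 0.7, size=600)
smoothed = kalman_wind_speed(measured)
```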
We present a new strategy for automatically exploring the design space of key CUDA + MPI programs and providing design rules that discriminate slow from fast implementations. In such programs, the order of operations (e.g., GPU kernels, MPI communication) and assignment of operations to resources (e.g., GPU streams) makes the space of possible designs enormous. Systems experts have the task of redesigning and reoptimizing these programs to effectively utilize each new platform. This work provides a prototype tool to reduce that burden. In our approach, a directed acyclic graph of CUDA and MPI operations defines the design space for the program. Monte Carlo tree search discovers regions of the design space that have large impact on the program's performance. A sequence-to-vector transformation defines features for each explored implementation, and each implementation is assigned a class label according to its relative performance. A decision tree is trained on the features and labels to produce design rules for each class; these rules can be used by systems experts to guide their implementations. We demonstrate our strategy using a key kernel from scientific computing - sparse matrix-vector multiplication - on a platform with multiple MPI ranks and GPU streams.
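The final classification step can be sketched as follows: each explored implementation is embedded as a fixed-length feature vector, labeled fast or slow by relative runtime, and a shallow decision tree distills human-readable design rules. The feature definitions, runtimes, and thresholds below are synthetic placeholders, not the paper's benchmark data.

```python
# Illustrative decision-tree rule extraction over synthetic implementation features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n_impls = 200
# hypothetical sequence-to-vector features, e.g. where the MPI send lands in
# the operation order and how many GPU streams the implementation uses
X = np.column_stack([rng.integers(0, 8, n_impls),     # comm/kernel ordering slot
                     rng.integers(1, 5, n_impls)])    # GPU streams assigned
runtime = (1.0 + 0.3 * (X[:, 0] > 4) + 0.2 * (X[:, 1] == 1)
           + rng.normal(0, 0.02, n_impls))            # synthetic runtimes
labels = np.where(runtime < np.median(runtime), "fast", "slow")

tree = DecisionTreeClassifier(max_depth=2).fit(X, labels)
print(export_text(tree, feature_names=["isend_slot", "num_streams"]))
```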