Physical fatigue can have adverse effects on humans in extreme environments, so the ability to predict fatigue from easy-to-measure metrics such as heart rate (HR) signatures has potential for real-life impact. We apply a functional logistic regression model that uses HR signatures to predict physical fatigue, where physical fatigue is defined in a data-driven manner. Data were collected using commercially available wearable devices on 47 participants hiking the 20.7-mile Grand Canyon rim-to-rim trail in a single day. The fitted model provides good predictions and interpretable parameters for real-life application.
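The functional predictor at the heart of such a model can be sketched in a few lines: the whole HR curve x(t) enters through the integral of a coefficient function beta(t) times x(t), passed through the logistic link. The grid, coefficient function, and intercept below are hypothetical stand-ins, not the fitted values from the study:

```python
import numpy as np

# Sketch of a functional logistic predictor: on a discrete time grid the
# integral of beta(t) * x(t) becomes a weighted sum.
t = np.linspace(0.0, 1.0, 101)      # hike time, normalized to [0, 1]
dt = t[1] - t[0]
beta = 2.0 * np.sin(np.pi * t)      # hypothetical coefficient function beta(t)
alpha = -3.0                        # hypothetical intercept

def fatigue_probability(hr_curve):
    """P(fatigue) = logistic(alpha + integral of beta(t) * x(t) dt)."""
    eta = alpha + np.sum(beta * hr_curve) * dt
    return 1.0 / (1.0 + np.exp(-eta))
```

Because beta(t) is a curve rather than a scalar, inspecting where it is large shows which portion of the HR trace drives the prediction, which is what makes the fitted parameters interpretable.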
At a high level, the larger project in question, HydroGEN, aims to develop software for finding equations of state (EOS) to optimize catalyst configuration for H2 production through water splitting. This summer project focused on solving the nonlinear equations that arise when fitting the equations of state: a linear system with nonlinear constraints, solved in Python. The model was built in Pyomo, an optimization modeling language developed at Sandia, and solved with Ipopt, an interior-point optimizer. Rather than attacking the entire problem at once, a toy problem was created that reduced the problem to its most important features. Because the toy problem had a known solution, the computed solution could be checked for accuracy, and complexity was gradually added as progress was made toward finding solutions. The toy problem yielded reasonably accurate solutions and better functionality than the two solvers previously used on this project. The solver is now ready for implementation into the project's main software.
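The validation strategy (solve a toy problem with a known answer before scaling up) can be illustrated with a self-contained sketch. SciPy's SLSQP solver stands in here for the Pyomo/Ipopt stack, and the linear system and nonlinear constraint are invented for illustration, not taken from the project:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem with a known solution: satisfy the linear system A @ x = b
# subject to the nonlinear constraint x[0] * x[1] = 2.
A = np.array([[1.0, 1.0]])
b = np.array([3.0])
# Known solution: x = (1, 2) satisfies both x0 + x1 = 3 and x0 * x1 = 2.
res = minimize(
    lambda x: np.sum((A @ x - b) ** 2),  # least-squares residual of the linear system
    x0=[0.9, 2.1],                       # start near one of the two roots
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda x: x[0] * x[1] - 2.0}],
)
```

Comparing `res.x` against the known solution is exactly the accuracy check described above; once the toy problem passes, constraints and variables can be added incrementally.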
A predictive thermodynamic model is utilized for the calculation of fuel properties of oxymethylene dimethyl ethers (OME3–4), surrogates for gasoline, diesel and aviation fuel, as well as alcohol blends with gasoline and diesel. The alcohols used for these blends are methanol, ethanol, propanol, butanol and pentanol; their mixing ratio ranges from 10 to 50% by volume. The model is based on the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) equation of state (EoS) and Vapor Liquid Equilibrium (VLE) calculations at constant temperature, density and composition. The model includes the association term, with the assumption of two association sites (2B scheme), to enable the modeling of alcohols. The pure-component parameters are estimated based on the Group Contribution (GC) method of various sources, as well as a parametrization model specifically designed for the case of OME3–4. The results of the computational model for the density, vapor pressure and distillation curves at various conditions, including high-pressure, high-temperature (HPHT), are compared to experimental and computational data available in the literature. In the cases where no measurements are available for the surrogates, experimental data for the corresponding target fuel are used, taking into consideration the inherent deviation in properties between real and surrogate fuel. Overall, the results are in good agreement with the data from the literature, with the average deviation not exceeding 12% for temperature (Kelvin) on the distillation curves, 10% for density and 46% for vapor pressure, and the general trend being captured successfully. The use of different pure-component parameter estimation techniques can further improve the prediction quality in the cases of OME3–4 and the aviation fuel surrogate, especially for the vapor pressure, leading to an average deviation lower than 18%.
These results demonstrate the predictive capabilities of the model, which extend to a wide range of fuel types and pressure/temperature conditions. Through this investigation, the present work aims to establish the limits of applicability of this thermodynamic property prediction methodology.
Here, using a quantum computer to speed up one step in a textbook approach to generating random numbers proves to be a savvy strategy, and one that could make good use of quantum computers that will be available in the near future.
Porous liquids (PLs) based on the zeolitic imidazole framework ZIF-8 are attractive systems for carbon capture since the hydrophobic ZIF framework can be solvated in aqueous solvent systems without porous host degradation. However, solid ZIF-8 is known to degrade when exposed to CO2 in wet environments, and therefore the long-term stability of ZIF-8-based PLs is unknown. Through aging experiments, the long-term stability of a ZIF-8 PL formed using the water, ethylene glycol, and 2-methylimidazole solvent system was systematically examined, and the mechanisms of degradation were elucidated. The PL was found to be stable for several weeks, with no ZIF framework degradation observed after aging in N2 or air. However, for PLs aged in a CO2 atmosphere, formation of a secondary phase occurred within 1 day from the degradation of the ZIF-8 framework. From the computational and structural evaluation of the effects of CO2 on the PL solvent mixture, it was identified that the basic environment of the PL caused ethylene glycol to react with CO2, forming carbonate species. These carbonate species further react within the PL to degrade ZIF-8. The mechanism governing this process involves a multistep pathway for PL degradation, and this work lays out a long-term evaluation strategy for PLs for carbon capture. Additionally, it clearly demonstrates the need to examine the reactivity and aging properties of all components in these complex PL systems in order to fully assess their stabilities and lifetimes.
Astra, deployed in 2018, was the first petascale supercomputer to utilize processors based on the ARM instruction set. The system was also the first under Sandia's Vanguard program, which seeks to provide an evaluation vehicle for novel technologies that, with refinement, could be utilized in demanding, large-scale HPC environments. In addition to ARM, several other important first-of-a-kind developments were used in the machine, including new approaches to cooling the datacenter and machine. This article documents our experiences building a power measurement and control infrastructure for Astra. While this is often beyond the control of users today, the accurate measurement, cataloging, and evaluation of power, as our experiences show, is critical to the successful deployment of a large-scale platform. While such systems exist in part for other architectures, Astra required new development to support the novel Marvell ThunderX2 processor used in its compute nodes. In addition to documenting the measurement of power during system bring-up and subsequent ongoing routine use, we present results associated with controlling the power usage of the processor, an area of progressively greater interest as data centers and supercomputing sites look to improve compute/energy efficiency and find additional sources for full-system optimization.
Here, the effects of gamma and proton irradiation, and of forward bias minority carrier injection, on photo-response were investigated for InAsSb/AlAsSb pBn mid-wave infrared (MWIR) detectors with an engineered majority-carrier barrier. Room-temperature gamma irradiation had an insignificant effect on 77 K photo-response. Gamma irradiation at 77 K detector temperature, however, decreased in situ photo-response by 19% after a cumulative dose of ~ 500 krad(Si). Subsequent forward bias minority carrier injection had no effect on photo-response. The 77 K detectors irradiated with 30 MeV protons up to 2 Mrad(Si) had photo-response degraded by up to 70%, but here forward bias minority carrier (hole) injection caused up to 12% recovery that persisted more than 30 min. These results suggest a mitigation strategy for maintaining the photo-response of similar detectors in radiation environments that cause displacement damage defects.
The Proliferation Resistance Optimization (PRO-X) program is actively supporting the design of new-build nuclear systems by identifying intrinsic characteristics or design choices to minimize the potential for diversion or production of weapons-usable nuclear material. The PRO-X program looks to optimize the safety, security, and performance of the fuel cycle infrastructure for a wide array of reactor types, including research reactors, small modular reactors (SMRs), and large advanced reactors (ARs).
Vanadium dioxide (VO2) undergoes a metal-insulator phase transition at ∼70 °C and has attracted substantial interest for potential applications in electronics, including those in neuromorphic computing. The vanadium-oxygen system has a rather complicated phase diagram, and controlling the stoichiometry and the phase of thin films of vanadium oxides is a well-known challenge. We explore the novel combination of two methods of VO2 thin film deposition using off-axis RF magnetron sputtering on (100)- and (111)-oriented yttria-stabilized zirconia (YSZ) substrates: reactive sputtering of vanadium in an oxygen environment and sputtering of vanadium metal followed by oxidation to VO2. Interestingly, the reactive sputtering process on both substrate orientations yields the metastable semiconducting VO2 (B) phase, which is structurally stabilized by the YSZ surface. The metal sputtering and oxidation process on YSZ produces mainly the equilibrium monoclinic (or M1) phase of VO2 that exhibits a metal-insulator transition. Using this method, we obtained thin films of (010)-textured polycrystalline VO2 (M1) that show a metal-insulator transition with an on/off ratio larger than 1000.
The ability to rapidly screen material performance in the vast space of high entropy alloys is of critical importance to efficiently identify optimal hydride candidates for various use cases. Given the prohibitive complexity of first principles simulations and large-scale sampling required to rigorously predict hydrogen equilibrium in these systems, we turn to compositional machine learning models as the most feasible approach to screen on the order of tens of thousands of candidate equimolar high entropy alloys (HEAs). Critically, we show that machine learning models can predict hydride thermodynamics and capacities with reasonable accuracy (e.g. a mean absolute error in desorption enthalpy prediction of ∼5 kJ molH2−1) and that explainability analyses capture the competing trade-offs that arise from feature interdependence. We can therefore elucidate the multi-dimensional Pareto optimal set of materials, i.e., where two or more competing objective properties can't be simultaneously improved by another material. This provides rapid and efficient down-selection of the highest priority candidates for more time-consuming density functional theory investigations and experimental validation. Various targets were selected from the predicted Pareto front (with saturation capacities approaching two hydrogen per metal and desorption enthalpy less than 60 kJ molH2−1) and were experimentally synthesized, characterized, and tested amongst an international collaboration group to validate the proposed novel hydrides. Additional top-predicted candidates are suggested to the community for future synthesis efforts, and we conclude with an outlook on improving the current approach for the next generation of computational HEA hydride discovery efforts.
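The Pareto-front down-selection step can be sketched generically. The alloy names and objective values below are invented, and both objectives are oriented so that larger is better (e.g. hydrogen capacity and the negative of desorption enthalpy):

```python
def pareto_front(candidates):
    """Return names of non-dominated candidates, where candidates is a
    list of (name, objectives) tuples and every objective is oriented so
    that larger is better."""
    front = []
    for name, obj in candidates:
        # A candidate is dominated if some other candidate is at least as
        # good in every objective and strictly better in at least one.
        dominated = any(
            all(o >= mine for mine, o in zip(obj, other)) and other != obj
            for _, other in candidates
        )
        if not dominated:
            front.append(name)
    return front

# Invented example: (capacity [H/M], negative desorption enthalpy [kJ/mol H2])
alloys = [("A", (1.9, -55.0)), ("B", (1.5, -45.0)),
          ("C", (1.4, -50.0)), ("D", (1.2, -70.0))]
```

Here "C" is dominated by "B" (lower capacity and higher enthalpy), so only the genuinely non-improvable candidates survive for DFT follow-up.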
The penetration of wind power generation into the power grid has been accelerated in recent times due to the aggressive emission targets set by governments and other regulatory authorities. Although wind power has the advantage of being environment-friendly, wind as a resource is intermittent in nature. In addition, wind power contributes little inertia to the system as most wind turbines are connected to the grid via power electronic converters. These negative aspects of wind power pose serious challenges to the frequency security of power systems as penetration increases. In this work, an approach is proposed where an energy storage system (ESS) is used to mitigate frequency security issues of wind-integrated systems. ESSs are well equipped to supply virtual inertia to the grid due to their fast-acting nature, thus replenishing some of the energy storage capability of displaced inertial generation. In this work, a probabilistic approach is proposed to estimate the amount of inertia required by a system to ensure frequency security. Reduction in total system inertia due to the displacement of conventional synchronous generation by wind power generation is considered in this approach, while also taking into account the loss of inertia due to forced outages of conventional units. Monte Carlo simulation is employed for implementing the probabilistic estimation of system inertia. An ESS is then sized appropriately, using the system swing equation, to compensate for the lost inertia. The uncertainty associated with wind energy is modeled into the framework using an autoregressive moving average technique. Effects of increasing the system peak load and changing the wind profile on the expected system inertia are studied to illustrate various factors that might affect system frequency security. The proposed method is validated using the IEEE 39-bus test system.
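The two core steps, Monte Carlo estimation of available rotational energy under forced outages and ESS sizing from the swing equation so the rate of change of frequency (RoCoF) stays within a limit, can be sketched as follows. The fleet data, RoCoF limit, and contingency size are hypothetical, and the sketch omits the wind and ARMA modeling:

```python
import random

# Hypothetical fleet: (inertia constant H [s], rating S [MVA], forced outage rate)
fleet = [(4.0, 500.0, 0.05), (3.5, 300.0, 0.08), (5.0, 800.0, 0.04)]

def sampled_inertia(trials=20000, seed=1):
    """Monte Carlo estimate of expected system kinetic energy (MW*s),
    sampling each unit's availability against its forced outage rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum(H * S for H, S, q in fleet if rng.random() > q)
    return total / trials

def ess_power_for_rocof(delta_p_mw, ke_mws, f0=60.0, rocof_limit=0.5):
    """From the swing equation, RoCoF = f0 * dP / (2 * E_kinetic); return
    the ESS power (MW) needed so that losing delta_p_mw of generation keeps
    RoCoF within rocof_limit (Hz/s)."""
    dp_allowed = 2.0 * ke_mws * rocof_limit / f0
    return max(0.0, delta_p_mw - dp_allowed)
```

As wind displaces synchronous units, entries leave `fleet`, the sampled kinetic energy drops, and the required ESS power grows, which is the trade-off the probabilistic framework quantifies.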
Jiang, Kunyao; Tang, Jingyu; House, Stephen D.; Xu, Chengchao; Xiao, Kelly; Porter, Lisa M.; Davis, Robert F.
Ga2O3 films were deposited on (100) MgAl2O4 spinel substrates at 550, 650, 750, and 850 °C using metal-organic chemical vapor deposition and investigated using x-ray diffraction and transmission electron microscopy. A phase-pure γ-Ga2O3-based material having an inverse spinel structure was formed at 850 °C; a mixture of the γ-phase and β-Ga2O3 was detected in films grown at 750 °C. Only β-Ga2O3 was detected in the films deposited at 650 and 550 °C. A β- to γ-phase transition occurred from the substrate/film interface during growth at 750 °C. The growth and stabilization of the γ-phase at the outset of film growth at 850 °C are attributed to the substantial Mg and Al chemical interdiffusion from the MgAl2O4 substrate observed in the energy-dispersive x-ray spectrum. Further, atomic-scale investigations via scanning transmission electron microscopy of the films grown at 750 and 850 °C revealed a strong tetrahedral site preference for Ga and an octahedral site preference for Mg and Al. It is postulated that the occupation of these particular sites by these atoms drives the β- to γ-phase transition and markedly enhances the thermal stability of the latter phase at elevated temperatures.
Here, phase formation and stability of five-component compositionally complex rare earth zirconates (5RE2Zr2O7) were investigated by X-ray diffraction and electron microprobe analysis. Zirconates with different rare earth compositions (LaNdSmEuDy, LaNdSmEuYb, LaNdEuErYb, LaNdDyErYb, SmEuDyYHo, LaYHoErYb, and DyYHoErYb) were synthesized at 1700°C and 2000°C by the solid-state method to investigate the effect of A-site disorder (δA) on phase stability. Increased site disorder results from mixed cation occupancy with localized crystallographic strain and bond disorder. Compositions LaNdSmEuDy (δA = 4.6) and LaNdSmEuYb (δA = 6.0) produced a single pyrochlore phase, and compositions SmDyYHoErYb (δA = 2.8), LaYHoErYb (δA = 6.2), and DyYHoErYb (δA = 1.7) produced a single fluorite phase. High-δA compositions LaNdEuErYb (δA = 6.9) and LaNdDyErYb (δA = 7.2) produced a pyrochlore and fluorite phase mixture at 1700°C; a single phase was obtained for the latter composition at 2000°C. Of the single-phase compositions calcined at 1700°C, LaNdSmEuYb and LaYHoErYb (both with the largest δA) showed decomposition to mixed fluorite and pyrochlore phases during lower temperature anneals, indicating entropic stabilization. Comparison with prior work shows a temperature dependence of the critical δA for phase stability, and compositions near it are expected to be entropy stabilized.
Molecular dynamics (MD) has served as a powerful tool for designing materials with reduced reliance on laboratory testing. However, using MD directly to treat the deformation and failure of materials at the mesoscale is still largely beyond reach. In this work, we propose a learning framework to extract a peridynamics model as a mesoscale continuum surrogate from MD-simulated material fracture data sets. First, we develop a novel coarse-graining method to automatically handle material fracture and the corresponding discontinuities in the MD displacement data sets. Inspired by the weighted essentially non-oscillatory (WENO) scheme, the key idea lies in an adaptive procedure that automatically chooses the locally smoothest stencil and then reconstructs the coarse-grained material displacement field as piecewise smooth solutions containing discontinuities. Then, based on the coarse-grained MD data, a two-phase optimization-based learning approach is proposed to infer the optimal peridynamics model with a damage criterion. In the first phase, we identify the optimal nonlocal kernel function from the data sets without material damage to capture the material stiffness properties. In the second phase, the material damage criterion is learned as a smoothed step function from the data with fractures. As a result, a peridynamics surrogate is obtained. As a continuum model, the peridynamics surrogate can be employed in further prediction tasks with grid resolutions different from those used in training, and hence allows for substantial reductions in computational cost compared with MD. We illustrate the efficacy of the proposed approach with several numerical tests of the dynamic crack propagation problem in single-layer graphene.
Our tests show that the proposed data-driven model is robust and generalizable, in the sense that it is capable of modeling the initiation and growth of fractures under discretization and loading settings that differ from those used during training.
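The stencil-selection idea behind the coarse-graining can be sketched in a few lines: among candidate stencils containing a given point, average over the one with the smallest roughness, so a jump on one side of a crack does not bleed into coarse-grained values on the other side. The function below is an illustrative 1D stand-in, not the paper's implementation:

```python
def smoothest_stencil_average(u, i, width=3):
    """WENO-inspired selection: among stencils of length `width` that
    contain index i, pick the one with the smallest roughness (sum of
    squared neighbor differences) and return its average, so averaging
    never straddles a discontinuity unnecessarily."""
    best = None
    for start in range(max(0, i - width + 1), min(i, len(u) - width) + 1):
        stencil = u[start:start + width]
        rough = sum((a - b) ** 2 for a, b in zip(stencil, stencil[1:]))
        if best is None or rough < best[0]:
            best = (rough, sum(stencil) / width)
    return best[1]
```

For a step-like displacement field, points just left of the jump are averaged using only left-side values and points just right using only right-side values, preserving the discontinuity in the coarse-grained field.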
Modern density functional theory (DFT) is a powerful tool for accurately predicting self-consistent material properties such as equations of state, transport coefficients and opacities in high energy density plasmas, but it is generally restricted to conditions of local thermodynamic equilibrium (LTE) and produces only averaged electronic states instead of detailed configurations. We propose a simple modification to the bound-state occupation factor of a DFT-based average-atom model that captures essential non-LTE effects in plasmas, including autoionization and dielectronic recombination, thus extending DFT-based models to new regimes. Finally, we expand the self-consistent electronic orbitals of the non-LTE DFT-AA model to generate multi-configuration electronic structure and detailed opacity spectra. This article is part of the theme issue ‘Dynamic and transient processes in warm dense matter’.
Plastic scintillators are widely used as radiation detection media in homeland security and nuclear physics applications. Their attributes include low cost, scalability to large detector volumes, and additive compounding to enable additional material and detection features, such as pulse shape discrimination (PSD), gamma-ray spectroscopy, aging resistance, and coincidence timing. However, traditional chemically cured plastic scintillators (CCS) require long reaction times and hazardous wet chemical procedures performed by specially trained personnel, and can leave residual monomer, resulting in deleterious optical and material properties. Here, we synthesize melt blended scintillators (MBSs) in 2.5 days using easily accessible solid-state compounding of commercially available poly(styrene) with 30-60 wt% of the fluorene-based compound 'P2' to create monolithic detectors, in several form factors, with < 100 ppm residual monomer. The best scintillation performance was recorded for 60 wt% P2 in Styron 665, including a gamma-ray light yield 139% that of the EJ-200 commercial scintillator and a PSD figure of merit (FOM) of 2.65 at 478 keVee, approaching P2 organic glass scintillator (OGS). The capability of MBS to generate fog-resistant scintillators and poly(methyl methacrylate) (PMMA)-based scintillators for use in challenging environments is also demonstrated.
We develop an adaptive method for quantum state preparation that utilizes randomness as an essential component and that does not require classical optimization. Instead, a cost function is minimized to prepare a desired quantum state through an adaptively constructed quantum circuit, where each adaptive step is informed by feedback from gradient measurements in which the associated tangent space directions are randomized. We provide theoretical arguments and numerical evidence that convergence to the target state can be achieved for almost all initial states. We investigate different randomization procedures and develop lower bounds on the expected cost function change, which allows for drawing connections to barren plateaus and for assessing the applicability of the algorithm to large-scale problems.
High-consequence nuclear facilities are required to implement a physical protection system (PPS) capable of preventing a design basis threat (DBT) adversary from completing radiological sabotage or theft of nuclear material. Response forces are a core component of the PPS. A rigorous testing and evaluation program is required to ensure that the response forces are appropriately trained and equipped and are capable of meeting the regulatory and mission requirements.
This document outlines a statistical framework for establishing a shelf-life program for components whose performance is measured by a binary response, usually ‘pass’ or ‘fail.’ The approach applies to both single measurement devices and repeated measurement devices. The high-level objective of these plans is to quickly detect any sizeable increase in fraction defective as the product ages. The statistical approach is to choose a sample size and monitoring technique that alarms when the fraction defective increases to an unacceptably high level, but does not alarm when the process is at nominal. The nominal (acceptable) fraction defective is used, and an increased fraction defective (unacceptable) is assumed as part of the control chart design. The control chart recommended for this problem is the Bernoulli Cumulative Sum (CUSUM) control chart.
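A Bernoulli CUSUM chart accumulates, sample by sample, the log-likelihood ratio of the unacceptable fraction defective p1 against the nominal p0, resetting at zero and alarming when the sum crosses a decision threshold h. A minimal sketch follows; the p0, p1, and h values are illustrative, not recommended design parameters:

```python
import math

def bernoulli_cusum(outcomes, p0=0.01, p1=0.05, h=4.0):
    """Bernoulli CUSUM: accumulate the log-likelihood ratio of each 'fail'
    (1) vs 'pass' (0) outcome under p1 vs p0; return the sample index at
    which the statistic first reaches h, or None if it never alarms."""
    w_fail = math.log(p1 / p0)              # evidence added by a failure
    w_pass = math.log((1 - p1) / (1 - p0))  # evidence removed by a pass
    s = 0.0
    for i, x in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_fail if x else w_pass))
        if s >= h:
            return i
    return None
```

Choosing h trades average run length at the nominal fraction defective (false alarms) against detection speed once the fraction defective shifts, which is exactly the design trade-off described above.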
This work details the reconfiguration of the 4.5 m Gigahertz Transverse Electromagnetic test facility at Sandia National Laboratories to operate in accordance with the RS105 (radiated susceptibility) test from MIL-STD-461 representing a high-altitude electromagnetic pulse. This reconfiguration involved removal of the existing continuous wave source and connecting both a high voltage feed and a coaxial feed housing the Marx bank pulser. Marx control settings were calibrated for several voltage levels across two pulsers, and position-dependent measurements of the peak electric field were taken throughout the test volume for each pulser. The results showed field uniformity and purity across the test volume comparable to continuous wave operations, and field peaks were measured from 1.63 kV/m to 54.8 kV/m, with maximum capabilities expected to exceed 100 kV/m. Some challenges in consistent pulser operations at lower Marx bank voltages and high frequency reflections in the system were identified for future capability improvements.
In the 1970s and 1980s, researchers at Sandia National Laboratories produced electron albedo data for a range of materials. Since that time, the electron albedo data have been used for a wide variety of purposes, including the validation of Monte Carlo electron transport codes. This report was compiled to examine the electron albedo experiment results in the context of Integrated Tiger Series (ITS) validation. The report presents tables and figures that could provide insight into the underlying model form uncertainty present in the ITS code. Additionally, the report identifies potential means to reduce these model form errors by highlighting potential refinements in the cross-section generation process.
The U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA) and National Technology & Engineering Solutions of Sandia, LLC (NTESS), the management and operating contractor for Sandia National Laboratories/California (SNL/CA), have prepared this addendum to Soil Sampling Results for Closure of a Portion of Solid Waste Management Unit #16 to report the results of additional soil sampling relating to the closure of a portion of Solid Waste Management Unit (SWMU) #16. This additional sampling was in response to a request by the San Francisco Bay Regional Water Quality Control Board (SFRWQCB) in their letters dated February 16 and August 18, 2022, relating to the detection of benzidine above the defined project action level in a soil sample collected adjacent to the sanitary sewer line in borehole BH-056 (SFRWQCB, 2022a; 2022b).
Micro ribonucleic acids (miRNA) give our immune systems the ability to recognize viruses and other pathogens by the complementary single-stranded RNA (ssRNA) produced as the pathogen reproduces in our cells. When miRNA of a specific sequence is detected in a cell sample, it can be assumed that the immune system is activated and attempting to track down the infection. This pathway can be utilized to diagnose infection from a pathogen before the individual even develops symptoms, aiding in early disease detection and proper treatment. One way to detect miRNA is through an assay based on clustered regularly interspaced short palindromic repeats ("CRISPR") and the bacterial protein Cas13a. This report details discoveries made while attempting to optimize this assay for miRNA detection. After examining several factors within the assay, it was determined that some, such as reporter type and metallic ion concentration, affect overall assay sensitivity more than others, such as the overall concentration of Cas13a, CRISPR RNA (crRNA), or ssRNA reporter. It was also discovered that targets of different sequences and lengths require renewed optimization efforts, as each target has a unique binding affinity determined by its sequence length and composition. This information is crucial to the development of point-of-care molecular detection devices as they become sensitive enough to identify pathogens before they spread.
Avoiding stress concentrations is essential to achieve robust parts since failure tends to originate at such concentrations. With recent advances in multimaterial additive manufacturing, it is possible to alter the stress (or strain) distribution by adjusting the material properties in selected locations. Here, we investigate the use of grayscale digital light processing (g-DLP) 3D printing to create modulus gradients around areas of high stress. These gradients prevent failure by redistributing high stresses (or strains) to the neighboring material. The improved material distributions are calculated using finite element analysis. The much-enhanced properties are demonstrated experimentally for thin plates with circular, triangular, and elliptical holes. This work suggests that multimaterial additive manufacturing techniques like g-DLP printing provide a unique opportunity to create tougher engineering materials and parts.
Understanding titanium particle combustion processes is critical not only for characterizing existing pyrotechnic systems but also for creating new igniter designs. In order to characterize titanium particle combustion processes, morphologies, and temperatures, simultaneous spatially-resolved electric field holography and imaging pyrometry techniques were used to capture post-ignition data at up to 7 kHz. Due to the phase and thermal distortions present in the combustion cloud, traditional digital in-line holography techniques fail to capture accurate data. In this work, electric field holography techniques are used in order to cancel distortions and capture the three-dimensional spatial locations and diameters of the particles. In order to estimate the projected surface temperatures of the titanium particles, an imaging pyrometry method that ratios emission at 750 and 850 nm is utilized. Using these diagnostics, joint statistics are collected for particle size, morphology, velocity, and temperature. Results show that, early in the combustion process, the titanium particles are primarily oxidized by potassium perchlorate inside the igniter cup, resulting in projected surface temperatures near 3000 K. Later in the process, the particles interact with ambient air, resulting in lower surface temperatures around 2400 K and the formation of flame zones. These results are consistent with adiabatic flame temperature predictions as well as particle morphology observations of a titanium core with a TiO2 surface. Late stage particle expansion, star fragmentation, and molten droplet breakup events are also observed using the time-resolved morphology and temperature diagnostics. These results illustrate the different stages of titanium particle combustion in pyrotechnic environments, which can be used to inform improvements in next-generation igniters.
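The ratio pyrometry step can be illustrated with the Wien approximation to graybody emission: if emissivity is the same at both wavelengths, the 750/850 nm intensity ratio depends only on temperature and can be inverted in closed form. This is a generic sketch of two-color pyrometry, not the authors' calibration:

```python
import math

C2 = 1.4388e-2               # second radiation constant hc/k_B [m*K]
LAM1, LAM2 = 750e-9, 850e-9  # pyrometer wavelengths [m]

def wien_intensity(lam, T):
    """Graybody spectral intensity in the Wien approximation (arbitrary units)."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_to_temperature(R):
    """Invert the two-color ratio R = I(LAM1)/I(LAM2) for temperature [K]."""
    return (C2 * (1.0 / LAM1 - 1.0 / LAM2)
            / (5.0 * math.log(LAM2 / LAM1) - math.log(R)))
```

Because the ratio cancels the unknown (wavelength-independent) emissivity and geometric factors, it yields a projected surface temperature per pixel, which is what enables the joint particle size/temperature statistics described above.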
The Z Fundamental Science (ZFS) Program is intended to provide access to the Z machine and its diagnostics for high energy density (HED) experiments in collaboration with a broad community of academic, industrial, and national laboratory research interests. ZFS experiments on the Z Facility focus on conducting fundamental research in HED science and help provide research experience necessary to maintain and grow the HED community, especially through involvement of researchers from academia. This report serves as an executive summary of the ZFS Program and provides a succinct synopsis of the history of the ZFS Program, metrics and impacts of the Program, as well as a brief list of the most impactful publications that have resulted from the various ZFS Projects relevant to laboratory astrophysics, plasma physics, and planetary physics.
High-altitude balloons carrying infrasound sensor payloads can be leveraged toward monitoring efforts to provide some advantages over other sensing modalities. On 10 July 2020, three sets of controlled surface explosions generated infrasound waves detected by a high-altitude floating sensor. One of the signal arrivals, detected when the balloon was in the acoustic shadow zone, could not be predicted via propagation modeling using a model atmosphere. Considering that the balloon’s horizontal motion showed direct evidence of gravity waves, we examined their role in infrasound propagation. Implementation of gravity wave perturbations to the wind field explained the signal detection and aided in correctly predicting infrasound travel times. Our results show that the impact of gravity waves is negligible below 20 km altitude; however, their effect is important above that height. The results presented here demonstrate the utility of balloon-borne acoustic sensing toward constraining the source region of variability, as well as the relevance of complexities surrounding infrasound wave propagation at short ranges for elevated sensing platforms.
Many highly pixelated organic scintillator detection systems would benefit from independent readout of each scintillator pixel. Recent advances in Silicon Photomultiplier (SiPM) technology make this goal feasible; however, data acquisition from potentially hundreds or thousands of channels requires a low-cost and compact solution. For pixelated neutron detection with organic scintillators, the capability to distinguish between neutron and gamma interactions using Pulse Shape Discrimination (PSD) is required along with pulse charge and time of arrival. The TOFPET2 ASIC from PETsys Electronics is a 64-channel readout chip providing pulse time and charge integration measurements from SiPMs, and is specifically designed for time-of-flight positron-emission tomography. Using an 8 × 8 array of 6 mm × 6 mm J-series SiPMs from SensL/OnSemi (ArrayJ-60035-64P-PCB), we have studied the energy and PSD performance of the TOFPET2 ASIC with a 4 × 4 array of 6 mm × 6 mm × 30 mm trans-Stilbene crystals from Inrad Optics and a custom SiPM routing board from PETsys Electronics. Using a time-over-threshold method, we measure a maximum PSD figure of merit of approximately 1.2 at 478 keV (the Compton edge of 662 keV gamma rays) for a J-series SiPM operating at an over-voltage of 3 V.
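The PSD figure of merit quoted above is conventionally defined as the separation between the gamma and neutron peaks in the PSD-parameter histogram divided by the sum of their full widths at half maximum. A small sketch, assuming Gaussian peak fits (the example means and widths are invented):

```python
import math

def psd_figure_of_merit(mu_gamma, sigma_gamma, mu_neutron, sigma_neutron):
    """PSD figure of merit: peak separation divided by the sum of the
    FWHMs of the gamma and neutron PSD-parameter distributions, assuming
    both peaks are well fit by Gaussians."""
    fwhm_factor = 2.0 * math.sqrt(2.0 * math.log(2.0))  # FWHM = 2.355 * sigma
    return abs(mu_neutron - mu_gamma) / (fwhm_factor * (sigma_gamma + sigma_neutron))
```

Larger values mean cleaner neutron/gamma separation; an FOM above roughly 1 is commonly taken to indicate usable discrimination at that energy.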
SPPARKS is an open-source parallel simulation code for developing and running various kinds of on-lattice Monte Carlo models at the atomic or meso scales. It can be used to study the properties of solid-state materials as well as model their dynamic evolution during processing. The modular nature of the code allows new models and diagnostic computations to be added without modification to its core functionality, including its parallel algorithms. A variety of models for microstructural evolution (grain growth), solid-state diffusion, thin film deposition, and additive manufacturing (AM) processes are included in the code. SPPARKS can also be used to implement grid-based algorithms such as phase field or cellular automata models, to run either in tandem with a Monte Carlo method or independently. For very large systems such as AM applications, the Stitch I/O library is included, which enables only a small portion of a huge system to be resident in memory. In this paper we describe SPPARKS and its parallel algorithms and performance, explain how new Monte Carlo models can be added, and highlight a variety of applications which have been developed within the code.
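For readers unfamiliar with the kind of on-lattice Monte Carlo model SPPARKS runs in parallel, the following serial Python sketch of a q-state Potts model for grain growth illustrates the basic update loop: pick a site, propose a new grain ID, and accept the flip if it does not increase the number of unlike nearest-neighbor bonds. This is an illustrative toy, not SPPARKS code; the lattice size, q, and step count are arbitrary choices.

```python
import numpy as np

def potts_grain_growth(L=32, q=8, steps=20000, kT=0.0, seed=1):
    """Minimal serial Potts-model simulation on an L x L periodic lattice.
    Each Monte Carlo step picks a random site, proposes a random new spin
    (grain ID), and applies a Metropolis acceptance test on the change in
    unlike-neighbor bond count (at kT=0, only non-increasing moves pass)."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, q, size=(L, L))

    def site_energy(s, i, j, val):
        # Number of unlike bonds between site (i, j) holding `val`
        # and its four periodic nearest neighbors.
        nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j],
                s[i, (j + 1) % L], s[i, (j - 1) % L]]
        return sum(val != n for n in nbrs)

    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        new = rng.integers(0, q)
        dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
        if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
            spins[i, j] = new
    return spins
```

Running the loop coarsens the initially random lattice into growing grains, since every accepted move reduces (or preserves) the total boundary energy; SPPARKS implements this class of model with rejection and rejection-free kinetic Monte Carlo algorithms, parallelized across processors.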
It is necessary to establish confidence in high-consequence codes containing an extensive suite of physics algorithms in the regimes of interest. Verification problems allow code developers to assess numerical accuracy and increase confidence that specific sets of model physics were implemented correctly in the code. The two main verification techniques are code verification and solution verification. In this work, we present verification problems that can be used in other codes to increase confidence in simulations of relativistic beam transport. Specifically, we use the general plasma code EMPIRE to model the evolution of the outer radial envelope of a relativistic charged particle beam and compare with the analytical solution. We also outline a benchmark test of a relativistic beam propagating through a vacuum and pressurized gas cell, and compare the results from EMPIRE and the hybrid code GAZEL. Further, we discuss the subtle errors that were caught with these problems and detail lessons learned.
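Envelope comparisons of this kind are commonly based on an axisymmetric beam envelope equation; the sketch below integrates one standard drift-transport form, R'' = K/R + ε²/R³ (generalized perveance K, unnormalized emittance ε), with an RK4 stepper. The equation form, parameter values, and step size are illustrative assumptions and may differ from the exact model and verification problem used with EMPIRE.

```python
import numpy as np

def integrate_envelope(R0, Rp0, K, eps, z_end, dz=1e-3):
    """Integrate the axisymmetric drift-transport envelope equation
        R'' = K / R + eps**2 / R**3
    from z = 0 to z = z_end with a fixed-step RK4 scheme.
    Returns (R, R') at z_end. No external focusing term is included."""
    def deriv(y):
        R, Rp = y
        return np.array([Rp, K / R + eps**2 / R**3])

    y = np.array([R0, Rp0], dtype=float)
    for _ in range(round(z_end / dz)):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dz * k1)
        k3 = deriv(y + 0.5 * dz * k2)
        k4 = deriv(y + dz * k3)
        y = y + (dz / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y
```

This kind of solver is easy to verify against a closed form: for K = 0 and R'(0) = 0, the emittance-dominated expansion has the exact solution R(z) = sqrt(R0² + (ε z / R0)²), so the numerical envelope can be checked term by term against the analytic curve, mirroring the code-verification workflow described above.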
Sandia National Laboratories in California (Sandia/CA) is a research and development facility owned by the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA). The laboratory is located in the City of Livermore (the City) and comprises approximately 410 acres. The Sandia/CA facility is operated by National Technology and Engineering Solutions of Sandia, LLC (NTESS) under a contract with the DOE/NNSA. The DOE/NNSA’s Sandia Field Office (SFO) oversees the operations of the site. North of the Sandia/CA facility is the Lawrence Livermore National Laboratory (LLNL), with which Sandia/CA’s sewer system combines before discharging to the City’s Publicly Owned Treatment Works (POTW) for final treatment and processing. The City’s POTW authorizes the wastewater discharge from Sandia/CA via the assigned Wastewater Discharge Permit #1251 (the Permit), which is issued to the DOE/NNSA’s main office for Sandia National Laboratories, located in New Mexico (Sandia/NM). The Permit requires the submittal of this Monthly Sewer Monitoring Report to the City by the twenty-fifth day of each month.
As the frequency and quantities of nuclear material shipments escalate internationally to meet the increased demand for small modular reactors (SMRs) and advanced reactors (ARs), the risks and costs associated with shipping activities are also likely to increase. The primary objective of this study is to evaluate possibilities for risk reduction via avoidability, that is, avoiding or reducing the need for nuclear shipments where possible, by reducing either the frequency of shipments or the quantities of material they contain.