We have characterized the three-dimensional evolution of microstructural anisotropy of a family of elastomeric foams during uniaxial compression via in-situ X-ray computed tomography. Flexible polyurethane foam specimens with densities of 136, 160 and 240 kg/m³ were compressed in uniaxial stress tests both parallel and perpendicular to the foam rise direction, to engineering strains exceeding 70%. The uncompressed microstructures show slightly elongated ellipsoidal pores, with elongation aligned parallel to the foam rise direction. The evolution of this microstructural anisotropy during deformation is quantified based on the autocorrelation of the image intensity, and verified via the mean intercept length as well as the shape of individual pores. Trends are consistent across all three methods. In the rise direction, the material remains transversely anisotropic throughout compression. Anisotropy initially decreases with compression, reaches a minimum, then increases up to large strains, followed by a small decrease in anisotropy at the largest strains as pores collapse. Compression perpendicular to the foam rise direction induces secondary anisotropy with respect to the compression axis, in addition to primary anisotropy associated with the foam rise direction. In contrast to compression in the rise direction, primary anisotropy initially increases with compression, and shows a slight decrease at large strains. These surprising non-monotonic trends and qualitative differences in rise and transverse loading are explained based on the compression of initially ellipsoidal pores. Microstructural anisotropy trends reflect macroscopic stress-strain and lateral strain response. These findings provide novel quantitative connections between three-dimensional microstructure and anisotropy in moderate-density polymer foams up to large deformation, with important implications for understanding complex three-dimensional states of deformation.
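As an illustration of the autocorrelation-based quantification, the following minimal Python sketch (illustrative only, not the authors' code; the function name, the 0.5 decay threshold, and the assumption that the rise direction lies along the first array axis are hypothetical choices) computes a normalized three-dimensional autocorrelation of a grayscale tomography volume with FFTs and forms an anisotropy ratio from axial correlation lengths:

```python
import numpy as np

def anisotropy_from_autocorrelation(volume, threshold=0.5):
    """Estimate a simple anisotropy ratio from the normalized 3-D
    autocorrelation of a grayscale CT volume (rise direction = axis 0)."""
    v = volume.astype(float) - volume.mean()
    # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum
    acf = np.fft.ifftn(np.abs(np.fft.fftn(v)) ** 2).real
    acf = np.fft.fftshift(acf) / acf.max()          # zero lag moved to the volume center
    center = [s // 2 for s in acf.shape]
    lengths = []
    for axis in range(3):
        idx = list(center)
        idx[axis] = slice(center[axis], None)       # 1-D profile from zero lag outward
        profile = acf[tuple(idx)]
        below = np.nonzero(profile < threshold)[0]  # first lag where correlation has decayed
        lengths.append(int(below[0]) if below.size else profile.size)
    l_rise, l_t1, l_t2 = lengths
    return l_rise / (0.5 * (l_t1 + l_t2))           # > 1 indicates elongation along the rise axis
```

A full analysis would fit the complete autocorrelation (and, for the verification methods, ellipsoids to individual pores) rather than using single-axis profiles, but the ratio above captures the basic measure of elongation tracked during compression.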
The continuum-scale electrokinetic porous-media flow and excess charge redistribution equations are uncoupled using eigenvalue decomposition. The uncoupling results in a pair of independent diffusion equations for “intermediate” potentials subject to modified material properties and boundary conditions. The fluid pressure and electrostatic potential are then found by recombining the solutions to the two intermediate uncoupled problems in a matrix-vector multiplication. Expressions for the material properties or source terms in the intermediate uncoupled problem may require extended precision or careful rewriting to avoid numerical cancellation, but the solutions themselves can typically be computed in double precision. The approach works with analytical or gridded numerical solutions and is illustrated through two examples. The solution for flow to a pumping well is manipulated to predict streaming potential and electroosmosis, and a periodic one-dimensional analytical solution is derived and used to predict electroosmosis and streaming potential in a laboratory flow cell subjected to low frequency alternating current and pressure excitation. The examples illustrate the utility of the eigenvalue decoupling approach, repurposing existing analytical solutions or numerical models and leveraging solutions that are simpler to derive for coupled physics.
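As a generic sketch of the decoupling step (assuming spatially uniform material properties and a diagonalizable coupling matrix; the exact form used in the paper may differ), collect the fluid pressure and electrostatic potential in u = (p, φ)ᵀ and write the coupled transient equations as
\[
\mathbf{S}\,\frac{\partial \mathbf{u}}{\partial t} = \mathbf{K}\,\nabla^{2}\mathbf{u},
\]
where K collects the hydraulic, electrokinetic-coupling, and electrical-conduction coefficients and S the corresponding storage terms. If A = S⁻¹K is diagonalizable, A = PΛP⁻¹ with Λ = diag(λ₁, λ₂), then the intermediate potentials v = P⁻¹u satisfy two independent diffusion equations
\[
\frac{\partial v_{i}}{\partial t} = \lambda_{i}\,\nabla^{2} v_{i}, \qquad i = 1, 2,
\]
with correspondingly transformed source terms and boundary conditions, and the physical fields are recovered through the matrix-vector multiplication u = Pv.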
For a wide range of PDEs, the discontinuous Petrov–Galerkin (DPG) methodology of Demkowicz and Gopalakrishnan provides discrete stability starting from a coarse mesh and minimization of the residual in a user-controlled norm, among other appealing features. Research on DPG for transient problems has mainly focused on spacetime discretizations, which have theoretical advantages but practical costs for computations and software implementations. The sole examination of time-stepping DPG formulations was performed by Führer, Heuer, and Gupta, who applied Rothe's method to an ultraweak formulation of the heat equation to develop an implicit time-stepping scheme; their work emphasized theoretical results, including error estimates in time and space. In the present work, we follow Führer, Heuer, and Gupta in examining the heat equation; our focus is on numerical experiments, examining the stability and accuracy of several formulations, including primal as well as ultraweak, with explicit, implicit, and Crank–Nicolson time-stepping schemes. We are additionally interested in communication-avoiding algorithms, and we therefore include a highly experimental formulation that places all the trace terms on the right-hand side of the equation.
MTTKRP is the bottleneck operation in algorithms used to compute the CP tensor decomposition. For sparse tensors, utilizing the compressed sparse fibers (CSF) storage format and CSF-oriented MTTKRP algorithms is important for both memory and computational efficiency on distributed-memory architectures. Existing intelligent tensor partitioning models assume the computational cost of MTTKRP to be proportional to the total number of nonzeros in the tensor. However, this is not the case for CSF-oriented MTTKRP on distributed-memory architectures. We outline two deficiencies of nonzero-based intelligent partitioning models when CSF-oriented MTTKRP operations are performed locally: failure to encode processors' computational loads, and an increase in total computation due to fiber fragmentation. We focus on the existing fine-grain hypergraph model and propose a novel vertex weighting scheme that enables this model to encode the computational loads of processors correctly. We also propose augmenting the fine-grain model with fiber nets to reduce the increase in total computational load by minimizing fiber fragmentation. In this way, the proposed model encodes the minimization of the bottleneck processor's load. Parallel experiments with real-world sparse tensors on up to 1024 processors confirm the outlined deficiencies and demonstrate the merit of our proposed improvements in terms of parallel runtimes.
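For context, MTTKRP (the matricized tensor times Khatri-Rao product) for a third-order sparse tensor X and factor matrices B and C accumulates, for mode 0, M[i, :] += X[i, j, k] · (B[j, :] ∘ C[k, :]) over all nonzeros. A minimal coordinate-format Python sketch is shown below (illustrative only; the names and signature are hypothetical). CSF-oriented implementations instead traverse fibers so that per-fiber work, such as the multiplication by B[j, :], is done once per fiber rather than once per nonzero, which is why the cost depends on the number of fibers reached by a processor rather than on its nonzero count alone, and why fiber fragmentation across processors increases total computation:

```python
import numpy as np

def mttkrp_mode0(coords, vals, B, C, num_rows, rank):
    """Mode-0 MTTKRP for a sparse 3-way tensor stored in coordinate (COO) form.

    coords   : iterable of (i, j, k) index triples of the nonzeros
    vals     : corresponding nonzero values
    B, C     : factor matrices for modes 1 and 2, shapes (J, rank) and (K, rank)
    num_rows : dimension of mode 0 (I)
    """
    M = np.zeros((num_rows, rank))
    for (i, j, k), x in zip(coords, vals):
        # Hadamard product of the two factor rows, scaled by the nonzero value
        M[i, :] += x * B[j, :] * C[k, :]
    return M
```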
This article proposes a method of predicting the influence of random load behavior on the dynamics of dc microgrids and distribution systems. This is accomplished by combining stochastic load models and deterministic microgrid models. Together, these elements constitute a stochastic hybrid system. The resulting model enables straightforward calculation of dynamic state moments, which are used to assess the probability of desirable operating conditions. Specific consideration is given to systems based on the dual active bridge (DAB) topology. Bounds are derived for the probability of zero voltage switching (ZVS) in DAB converters. A simple example demonstrates how these bounds may be used in an optimization problem to improve ZVS performance. Predictions of state moment dynamics and ZVS probability assessments are verified through comparisons to Monte Carlo simulations.
Numerous MELCOR modeling improvements and analyses have been performed in the time since the severe accidents at Fukushima Daiichi Nuclear Power Station in March 2011. This report briefly summarizes the related accident reconstruction and uncertainty analysis efforts. It also discusses a number of potential pursuits to further advance MELCOR modeling and analysis of the severe accidents at Fukushima Daiichi and severe accident modeling in general. Proposed paths forward include further enhancements to identified MELCOR models primarily impacting core degradation calculations, and continued application of uncertainty analysis methods to improve model performance and develop a deeper understanding of severe accident progression.
Infrasound, or low-frequency sound below 20 Hz, is produced by a variety of natural and anthropogenic sources. Wind also generates signals within this frequency band and serves as a persistent source of infrasonic noise. Infrasound sensors measure pressure fluctuations, which scale with the ambient density and velocity fluctuations of ground winds. Here we compare four different wind noise reduction systems, or "filters", and make recommendations for their use in temporary infrasound deployments. Our results show that two filters are especially effective at reducing wind noise: (1) a Hyperion high frequency (HF) shroud with a 1 m diameter metal mesh dome placed on top and (2) a Hyperion Four Port Garden Hose shroud with 4 Miracle-Gro Soaker System garden hoses. We also find that placing a 5-gallon bucket over the HF wind shroud should be avoided, as it provides a negligible decrease in noise up to ~1 Hz and increases noise above that frequency. We conclude that it is up to the researcher to determine which of the other filters is best for their needs based on location and expense. We anticipate this study will be used as a resource for future deployments when a wind noise reduction method is necessary but only needed for a limited time period.
In this study, we examine the effects of the radiation reaction force on electrons in a radial magnetically insulated transmission line (MITL) near a load with peak currents of 60+ MA. More specifically, we study the differences in electron motion and kinetic energy with and without radiation reaction physics using a novel guiding center drift approach that incorporates E×B and ∇B drifts. A key finding of this study is that an electron's magnetic moment, which would be conserved if radiation reaction physics were not incorporated, can be significantly reduced in magnetic fields on the order of tens of thousands of tesla when radiation reaction is included. The reduction of magnetic moment gives rise to a significant reduction in cycloidal kinetic energy as well as a reduction in the electron's ∇B drift.
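For reference, the guiding-center quantities referred to above have the standard non-radiative forms (shown for a non-relativistic electron of mass m and charge q; the study's radiative treatment modifies their evolution):
\[
\mu = \frac{m v_{\perp}^{2}}{2B}, \qquad
\mathbf{v}_{E\times B} = \frac{\mathbf{E}\times\mathbf{B}}{B^{2}}, \qquad
\mathbf{v}_{\nabla B} = \frac{m v_{\perp}^{2}}{2 q B^{3}}\,\mathbf{B}\times\nabla B,
\]
so a radiation-driven loss of μ, and hence of the perpendicular (cycloidal) kinetic energy m v⊥²/2, directly reduces the ∇B drift speed.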
Underground explosions nonlinearly deform the surrounding earth material and can interact with the free surface to produce spall. However, at typical seismological observation distances the seismic wavefield can be accurately modeled using linear approximations. Although nonlinear algorithms can accurately simulate very near-field ground motions, they are computationally expensive and potentially unnecessary for far-field wave simulations. Conversely, linearized seismic wave propagation codes are orders of magnitude faster computationally and can accurately simulate the wavefield out to typical observational distances. Thus, devising a means of approximating a nonlinear source by a linear equivalent source would be advantageous both for scenario modeling and for interpretation of seismic source models that are based on linear, far-field approximations. Such an approach enables fast linear seismic modeling that still incorporates many features of the nonlinear source mechanics, providing many of the advantages of both types of simulations without the cost of the nonlinear computation. In this report we first show the computational advantage of using linear equivalent models, and then discuss how the near-source environment (within the nonlinear wavefield regime) affects linear source equivalents and how well we can fit seismic wavefields derived from nonlinear sources.
LocOO3D is a software tool that computes geographical locations for seismic events at regional to global scales. This software has a rich set of features, including the ability to use custom 3D velocity models, correlated observations, and master event locations. The LocOO3D software is especially useful for research related to seismic monitoring applications, since it allows users to easily explore a variety of location methods and scenarios and is compatible with the CSS3.0 data format used in monitoring applications. The LocOO3D software, User's Manual, and Examples are available on the web at https://github.com/sandialabs/LocOO3D. For additional information on GeoTess, SALSA3D, RSTT, and other related software, please see https://github.com/sandialabs/GeoTessJava, www.sandia.gov/geotess, www.sandia.gov/salsa3d, and www.sandia.gov/rstt.
PCalc is a software tool that computes travel-time predictions, ray path geometry and model queries. This software has a rich set of features, including the ability to use custom 3D velocity models to compute predictions using a variety of geometries. The PCalc software is especially useful for research related to seismic monitoring applications.
Automated vehicles (AV) hold great promise for improving safety, as well as reducing congestion and emissions. In order to make automated vehicles commercially viable, a reliable and high-performance vehicle-based computing platform that meets ever-increasing computational demands will be key. Given the state of existing digital computing technology, designers will face significant challenges in meeting the needs of highly automated vehicles without exceeding thermal constraints or consuming a large portion of the energy available on vehicles, thus reducing range between charges or refills. The accompanying increases in energy for AV use will place increased demand on energy production and distribution infrastructure, which also motivates increasing computational energy efficiency.
With the growing number of applications designed for heterogeneous HPC devices, application programmers and users are finding it challenging to compose scalable workflows from ensembles of these applications that are portable, performant, and resilient. The Kokkos C++ library has been designed to simplify this cumbersome procedure by providing an intra-application uniform programming model and portable performance. However, assembling multiple Kokkos-enabled applications into a complex workflow is still a challenge. Although Kokkos enables a uniform programming model, inter-application data exchange remains a challenge from both performance and software development cost perspectives. To address this issue, we propose a Kokkos data staging memory space, an extension of Kokkos' data abstraction (memory space) for heterogeneous computing systems. This new abstraction allows data to be expressed in a virtual shared space for multiple Kokkos applications, thus extending Kokkos to support inter-application data exchange to build an efficient application workflow. Additionally, we study the effectiveness of asynchronous data layout conversions for applications requiring different memory access patterns for the shared data. Our preliminary evaluation with a synthetic benchmark indicates the effectiveness of this conversion adapted to three different scenarios representing access frequency and use patterns of the shared data.
Independent gamma spectroscopy data analysis of a plutonium oxide sample was requested on July 23, 2021. The primary request was to assess the Pu-239 activity/mass of the sample using previously collected gamma spectral data. Using the provided gamma spectral analysis report and spectral files, an independent evaluation of the data was conducted without any prior knowledge of the isotopic activity/mass of the sample.
Management of spent nuclear fuel and high-level radioactive waste consists of three main phases – storage, transportation, and disposal – commonly referred to as the back end of the nuclear fuel cycle. Current practice for commercial spent nuclear fuel management in the United States (US) includes temporary storage of spent fuel in both pools and dry storage systems at operating or shutdown nuclear power plants. Storage pools are filling to their operational capacity, and management of the approximately 2,200 metric tons of spent fuel newly discharged each year requires transferring older and cooler spent fuel from pools into dry storage. Unless a repository becomes available that can accept spent fuel for permanent disposal, projections indicate that the US will have approximately 136,000 metric tons of spent fuel in dry storage systems by mid-century, when the last plants in the current reactor fleet are decommissioned. Current designs for dry storage systems rely on large multi-assembly canisters, the most common of which are so-called “dual-purpose canisters” (DPCs). DPCs are certified for both storage and transportation, but are not designed or licensed for permanent disposal. The large capacity (greater number of spent fuel assemblies) of these canisters can lead to higher canister temperatures, which can delay transportation and/or complicate disposal. This current management practice, in which the utilities continue loading an ever-increasing inventory of larger DPCs, does not emphasize integration among storage, transportation, and disposal. This lack of integration does not cause safety issues, but it does lead to a suboptimal system that increases costs, complicates storage and transportation operations, and limits options for permanent disposal. This paper describes strategies for improving integration of management practices in the US across the entire back end of the nuclear fuel cycle. The complex interactions between storage, transportation, and disposal make a single optimal solution unlikely. However, efforts to integrate various phases of nuclear waste management can have the greatest impact if they begin promptly and continue to evolve throughout the remaining life of the current fuel cycle. A key factor that influences the path forward for integration of nuclear waste management practices is the identification of the timing and location for a repository. The most cost-effective path forward would be to open a repository by mid-century with the capability to directly dispose of DPCs without repackaging the spent fuel into disposal-ready canisters. Options that involve repackaging of spent fuel from DPCs into disposal-ready canisters or that delay the repository opening significantly beyond mid-century could add tens of billions of dollars to the total system life cycle cost.
This report presents revisions to the Strategic Petroleum Reserve (SPR) well grading framework. The well grading framework is composed of multiple components and was developed as a guide in application of well remediation and monitoring resources. The revisions were applied to enhance the efficiency and consistency of the well grading process across the four SPR sites. Documentation of the revisions and any significant impact from these revisions are also discussed. The current general workflow for the application and updating of the well grades is also provided.
Using a combination of geospatial machine learning prediction and sediment thermodynamic/physical modeling, we have developed a novel software workflow to create probabilistic maps of geoacoustic and geomechanical sediment properties of the global seabed. This new technique for producing reliable estimates of seafloor properties can better support Naval operations relying on sonar performance and seabed strength, can constrain models of shallow tomographic structure important for nuclear treaty compliance monitoring/detection, and can provide constraints on the distribution and inventory of shallow methane gas and gas hydrate accumulations on the continental shelves.
The structures that surround and support optical components play a key role in the performance of the overall optical system. For aerospace applications, creating an opto-mechanical structure that is athermal, lightweight, robust, and can be quickly developed from concept through to hardware is challenging. This project demonstrates a design and fabrication method for optical structures using origami-style folded, photo-etched sheet metal pieces that are micro-welded to each other or to 3D-printed metal components. Thin flexures, critical for athermal mounting of optics, can be made thinner with sheet metal than with standard machining, which leads to more compact designs and the ability to mount smaller optics. Building a structure by starting with the thinnest features, then folding that thin material to make the "thicker" sections, is the opposite of standard machining (cutting thin features from thicker blocks). A design method is shown with mass savings of >90% and stiffness-to-weight ratio improvements of 5x to 10x compared to standard methods for space systems hardware. Designs and processes for small, flexured, actively aligned systems are demonstrated, as are methods for producing lightweight, structural, Miura-core sandwich panels in both flat and curved configurations. Concepts for deployable panels and component hinges are explored, as is a lens subcell with tunable piston movement with temperature change and an ultralight sunshade.
We report on progress in developing macroscopic balance equations for combustion and electrochemistry systems. A steady-state solution capability is described for the macroscopic reactor network, with an associated steady-state continuation method and solution storage capability. An example is provided of continuation of a hydrogen flame versus the equivalence ratio. The reactor modeling capability is extended to charged fluid systems, with a description of the new ChargedFluidReactor, SubstrateElement, and MetalCurrentElement reactor classes and a novel setup of unknowns within these reactors that preserves charge neutrality. Zuzax's setup for electrochemistry is explained, including the specification of the electron chemical potential and the adherence to the SHE reference electrode specification. The different ways to enter electrochemical reaction rates are described and contrasted, and their relationships to one another are derived. An example of using the ChargedFluidReactor within corrosion problems is provided. We present calculations to understand the phenomenon of corrosion of copper beneath a micron-sized NaCl water droplet, where secondary spreading occurs. An analysis of the discrepancies with experiment is carried out, demonstrating that macroscopic balances can be an important tool for identifying the major factors that must be addressed to better understand a physical system.
High-temperature particle receivers are being pursued to enable next-generation concentrating solar thermal power (CSP) systems that can achieve higher temperatures (> 700 °C) to enable more efficient power cycles, lower overall system costs, and emerging CSP-based process-heat applications. The objective of this work was to develop characterization methods to quantify the particle and heat losses from the open aperture of the particle receiver. Novel camera-based imaging methods were developed and applied to both laboratory-scale and larger 1 MWt on-sun tests at the National Solar Thermal Test Facility in Albuquerque, New Mexico. Validation of the imaging methods was performed using gravimetric and calorimetric methods. In addition, conventional particle-sampling methods using volumetric particle-air samplers were applied to the on-sun tests to compare particle emission rates with regulatory standards for worker safety and pollution. Novel particle sampling methods using 3-D printed tipping buckets and tethered balloons were also developed and applied to the on-sun particle-receiver tests. Finally, models were developed to simulate the impact of particle size and wind on particle emissions and concentrations as a function of location. Results showed that particle emissions and concentrations were well below regulatory standards for worker safety and pollution. In addition, estimated particle temperatures and advective heat losses from the camera-based imaging methods correlated well with measured values during the on-sun tests.
This report summarizes activities at Sandia National Laboratories as part of the Explosive Destruction System (EDS) Phase 3 (P3) system design. An exploration of chemical neutralization strategies for phosgene was conducted for safe disposal of recovered mortars and M79 1000 lb. bombs filled with carbonyl dichloride, "phosgene" or "CG agent" (molecular formula COCl2). The incumbent strategy, which utilized aqueous sodium hydroxide, was found to be the worst-case scenario, producing enough CO2 gas to cause an unacceptable pressure and temperature spike. Several chemical neutralization strategies were evaluated based on criteria set by the operating envelope of the P3 design. In the end, it was determined that a pure solution of N-methyl ethanolamine (MeEA) or 90% aqueous monoethanolamine (MEA) provided the best balance of reaction profile, cost, and safety.
Sub-channel codes are among the modeling and simulation tools used for thermal-hydraulic analysis of nuclear reactors. A few examples of such sub-channel codes are the COolant Boiling in Rod Arrays (COBRA) family of codes. The approximations that are used to simplify the fluid conservation equations into sub-channel form, mainly that of axially-dominated flow, lead to noticeable limitations on sub-channel solvers for problems with significant flow in lateral directions. In this report, a two-dimensional Cartesian solver is developed and implemented within CTF-R, the residual solver in the North Carolina State University version of COBRA-TF (CTF). The new solver will enable CTF to simulate flow that is not axially dominated. The appropriate Cartesian forms of the conservation equations are derived and implemented in the solver, and the process of constructing the matrix system is altered to solve a two-dimensional staggered-grid system. A simple case was used to test that the two-dimensional Cartesian solver is accurate. The test problem does not include any source terms or flow in the lateral direction. The results show that the solver was able to run the simple case and converge to a steady-state solution. Future work will focus on testing existing capabilities by using test cases that include transients and equation cross-terms. Future work will also include adding capabilities such as enabling the solver to handle cases with source terms and three-dimensional cases.
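As an illustration of the kind of equation set involved (a generic single-phase form, not the exact multi-field CTF-R equations), the two-dimensional Cartesian mass and momentum balances read
\[
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} = 0,
\]
\[
\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^{2})}{\partial x} + \frac{\partial (\rho u v)}{\partial y} = -\frac{\partial p}{\partial x} + \rho g_{x} + F_{x}, \qquad
\frac{\partial (\rho v)}{\partial t} + \frac{\partial (\rho u v)}{\partial x} + \frac{\partial (\rho v^{2})}{\partial y} = -\frac{\partial p}{\partial y} + \rho g_{y} + F_{y},
\]
where F_x and F_y collect friction and other source terms; on the staggered grid, pressure is stored at cell centers and the u and v velocities on the x- and y-faces, respectively.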
Based on the latest DOE (Department of Energy) milestones, Sandia needs to convert to IPv6 (Internet Protocol version 6)-only networks over the next 5 years. Our original IPv6 migration plan did not include migrating to IPv6-only networks at any point within the next 10 years, so it must necessarily change. To be successful in this endeavor, we need to evaluate technologies that will enable us to deploy IPv6-only networks early without creating system stability or security issues. We have set up a test environment using technology representative of our production network where we configured and evaluated industry standard translation technologies and techniques. Based on our results, bidirectional translation between IPv4 (Internet Protocol version 4) and IPv6 is achievable with our current equipment, but due to the complexity of the configuration, may not scale well to our production environment.
It may seem simple and trivial, but defining the difference between data and information is contested and has implications that may affect the security of United States interests and even cost lives. For security, data are raw facts or figures without context, while information is the compilation or articulation of data that forms context. Security depends on clarity in the differences between data and information and on controlling them. Control is necessary to ensure that data and information are not inadvertently released to foreign governments, the public, or those without a need to know. A primary concern in the practice of security is the control of data to avoid its inadvertent conversion to sensitive information. The complexity of this concern is further augmented when institutions are part of tightly coupled networks that informally share data and information. Additionally, those that share data as a function of legislative action, and/or formally integrate data and information system infrastructures, may pose a higher security risk. This paper presents a case study that utilizes elements of the Knowledge Management and networks literature to tell the story of an issue in security: controlling the conversion of data to information.
Stress corrosion cracking (SCC) is an important degradation mechanism for storage of spent nuclear fuel. Since 2014, Sandia National Laboratories has been developing a probabilistic methodology for predicting SCC. The model is intended to provide a qualitative assessment of data needs and model sensitivities and to guide future model development. In fiscal year 2021, improvement of the SCC model focused on the salt deposition, maximum pit size, and crack growth rate models.
Twin boundaries play an important role in the thermodynamics, stability, and mechanical properties of nanocrystalline metals. Understanding their structure and chemistry at the atomic scale is key to guide strategies for fabricating nanocrystalline materials with improved properties. We report an unusual segregation phenomenon at gold-doped platinum twin boundaries, which is arbitrated by the presence of disconnections, a type of interfacial line defect. By using atomistic simulations, we show that disconnections containing a stacking fault can induce an unexpected transition in the interfacial-segregation structure at the atomic scale, from a bilayer, alternating-segregation structure to a trilayer, segregation-only structure. This behavior is found for faulted disconnections of varying step heights and dislocation characters. Supported by a structural analysis and the classical Langmuir-McLean segregation model, we reveal that this phenomenon is driven by a structurally induced drop of the local pressure across the faulted disconnection accompanied by an increase in the segregation volume.
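For reference, the classical Langmuir-McLean isotherm invoked above relates the interfacial solute fraction X_gb to the bulk fraction X_b through the segregation free energy,
\[
\frac{X_{\mathrm{gb}}}{1 - X_{\mathrm{gb}}} = \frac{X_{\mathrm{b}}}{1 - X_{\mathrm{b}}}\,\exp\!\left(-\frac{\Delta G_{\mathrm{seg}}}{RT}\right),
\]
and if the segregation energy is written with an explicit pressure-volume contribution, ΔG_seg = ΔE_seg + P ΔV_seg (an illustrative decomposition consistent with the mechanism described above), a drop in the local pressure P across the faulted disconnection, together with a larger segregation volume ΔV_seg, lowers ΔG_seg and strengthens segregation there.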
The formation of a stress corrosion crack (SCC) in the canister wall of a dry cask storage system (DCSS) has been identified as a potential issue for the long-term storage of spent nuclear fuel. The presence of an SCC in a storage system could represent a through-wall flow path from the canister interior to the environment. Modern, vertical DCSSs are of particular interest due to the commercial practice of using relatively high backfill pressures (up to approximately 800 kPa) in the canister to enhance internal natural convection. This pressure differential offers a comparatively high driving potential for blowdown of any particulates that might be present in the canister. In this study, the rates of gas flow and aerosol transmission of a spent fuel surrogate through an engineered microchannel with dimensions representative of an SCC were evaluated experimentally using coupled mass flow and aerosol analyzers. The microchannel was formed by mating two gage blocks with a linearly tapering slot orifice nominally 13 μm (0.0005 in.) tall on the upstream side and 25 μm (0.0010 in.) tall on the downstream side. The orifice is 12.7 mm (0.500 in.) wide by 8.89 mm (0.350 in.) long (flow length). Surrogate aerosols of cerium oxide, CeO2, were seeded and mixed with either helium or air inside a pressurized tank. The aerosol characteristics were measured immediately upstream and downstream of the simulated SCC at elevated and ambient pressures, respectively. These data sets are intended to demonstrate a new capability to characterize SCCs under well-controlled boundary conditions. Modeling efforts were also initiated that evaluate the depletion of aerosols in a commercial dry storage canister. These preliminary modeling and ongoing testing efforts are focused on understanding the evolution in both size and quantity of a hypothetical release of aerosolized spent fuel particles from failed fuel to the canister interior and ultimately through an SCC.
Fungi produce and excrete various proteins, enzymes, and polysaccharides, which may be used for the synthesis of nanoparticles. This study investigated the effect of anion species on the synthesis of ceramic nanoparticles using fungal filtrates. In this work, ceramic zinc oxide (ZnO) nanoparticles ranging between 1 nm and 1000 nm were successfully synthesized using three different filamentous fungi: Aspergillus sp., Penicillium sp., and Paecilomyces variotii. Each fungus was cultured, and the filtrate was extracted and individually exposed to zinc nitrate, zinc sulfate, or zinc chloride. The formation of nanoparticles was characterized using UV-visible spectrophotometry (UV-Vis), fluorescence microscopy, and transmission electron microscopy (TEM). UV-Vis spectra exhibited broad increases in absorption across the range of 200 nm - 800 nm, which corresponded to the formation of ZnO nanoparticles under various conditions. Nanoparticle formation was confirmed with fluorescence microscopy and TEM analysis, and the particles were determined to have an irregular spherical shape. To date, our work demonstrates that the ability of fungi to synthesize ZnO nanoparticles is not genus/species-specific but is dependent on the starting composition of a given metal salt.
An approach to increase the value of carbon fiber for wind turbine blades, and other compressive-strength-driven designs, is to identify pathways to increase its cost-specific compressive strength. A finite element model has been developed to evaluate the predictiveness of current finite element methods and to lay groundwork for future studies that focus on improving the cost-specific compressive strength. Parametric studies are conducted to understand which uncertainties in the model inputs have the greatest impact on compressive strength predictions. A statistical approach is also presented that enables the micromechanical model, which is deterministic, to efficiently account for statistical variability in the fiber misalignment present in composite materials, especially if the results from the hexagonal and square pack models are averaged. The model was found to agree well with experimental results for a Zoltek PX-35 pultrusion. The sensitivity studies suggest that the fiber packing and the interface shear strength have the greatest impact on compressive strength prediction for the fiber-reinforced polymer studied here. Based on the performance of the modeling approach presented in this work, it is deemed sufficient for future work, which will seek to identify carbon fiber composites with improved cost-specific compressive strength.
Well integrity is one of the major concerns at long-term geologic storage sites due to the potential risk of well leakage and groundwater contamination. Evaluating changes in electrical responses due to energized steel-cased wells has the potential to quantify and predict possible wellbore failures, as any kind of breakage or corrosion along highly conductive well casings will have an impact on the distribution of subsurface electrical potential. However, realistic wellbore-geoelectrical models that can fully capture fine-scale details of well completion design and the state of well damage at the field scale require extensive computational effort, or can even be intractable to simulate. To overcome this computational burden while still keeping the model realistic, we use the hierarchical finite element method, which represents electrical conductivity at each dimensional component (1-D edges, 2-D planes, and 3-D cells) of a tetrahedral mesh. This allows well completion designs with real-life geometric scales and well systems with realistic, detailed, progressive corrosion and damage in our models. Here, we present a comparison of possible discretization approaches for a multi-casing completion design in the finite-element model. The effects of the surface casing length and the coupling between concentric well casings, as well as the effects of the degree and the location of well damage on the electrical responses, are also examined. Finally, we analyze real surface electric field data to detect wellbore integrity failure associated with damage.
In this study, the Raman biaxial stress coefficients KII and strain-free phonon frequencies ω0 have been determined for the E2 (low), E2 (high), and A1 (LO) phonon modes of aluminum nitride, AlN, using both experimental and theoretical approaches. The E2 (high) mode of AlN is recommended for the residual stress analysis of AlN due to its high sensitivity and the largest signal-to-noise ratio among the studied modes. The E2 (high) Raman biaxial stress coefficient of -3.8 cm⁻¹/GPa and strain-free phonon frequency of 656.68 cm⁻¹ were then applied to perform both macroscopic and microscopic stress mappings. For macroscopic stress evaluation, the spatial variation of residual stress was measured across an AlN-on-Si wafer prepared by sputter deposition. A cross-wafer variation in residual stress of ∼150 MPa was observed regardless of the average stress state of the film. Microscopic stress evaluation was performed on AlN piezoelectric micromachined ultrasonic transducers (pMUTs) with submicrometer spatial resolution. These measurements were used to assess the effect of device fabrication on residual stress distribution in an individual pMUT and the effect of residual stress on the resonance frequency. At ∼20 μm directly outside the outer edge of the pMUT electrode, a large lateral spatial variation in residual stress of ∼100 MPa was measured, highlighting the impact of metallization structures on residual stress in the AlN film.
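In practice, the reported coefficient is applied by converting the measured shift of the E2 (high) peak from its strain-free position into a biaxial stress via the standard linear relation implied by the quoted values,
\[
\sigma = \frac{\omega_{\mathrm{meas}} - \omega_{0}}{K_{\mathrm{II}}} = \frac{\omega_{\mathrm{meas}} - 656.68\ \mathrm{cm^{-1}}}{-3.8\ \mathrm{cm^{-1}/GPa}},
\]
so, for example, a peak measured 0.38 cm⁻¹ above the strain-free frequency corresponds to roughly -0.1 GPa, i.e., about 100 MPa of compressive stress, the magnitude of the microscopic variations reported near the pMUT electrode edge.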
Alkali metals, such as lithium, sodium, potassium, etc., are highly reactive elements. While researchers generally handle these metals with caution, less caution is taken when these elements have been “reacted”. In this work, a recent incident is examined in which a pair of researchers ignited a lithium silicide alloy sample that was assumed to be fully hydrated to lithium hydroxide and, thereby, no longer water-reactive. However, variations in the original chemical composition of the lithium compounds examined resulted in select mixtures failing to hydrate and react completely to lithium hydroxide in the time frame allowed. This gave rise to residual unreacted, water-sensitive lithium silicide which resulted in a violent exothermic reaction with water and autoignition of the produced hydrogen gas. This Article describes this incident and improvements that can be implemented to prevent similar incidents from occurring.
The commercial software package Barracuda, developed by CPFD Software for simulating particle-laden fluid flows, is evaluated as a means to simulate the motion of bubbles in vibrating liquid-filled containers. Demonstration simulations of bubbles rising due to buoyancy forces in a cylinder filled with silicone oil and angled at 0, 30, 45, and 60 degrees from the vertical were performed by CPFD Software. The results of these simulations are discussed, and the capabilities of Barracuda for simulating bubble motion are assessed. It was determined that at present Barracuda does not meet the needs of the desired application. Further developments that would enable its use for this application are highlighted.
A stable solid electrolyte interphase (SEI) layer is key to high-performing lithium ion batteries for metrics such as calendar and cycle life. The SEI must be mechanically robust to withstand large volumetric changes in anode materials such as lithium and silicon, so understanding the mechanical properties and behavior of the SEI is essential for the rational design of artificial SEI and anode form factors. The mechanical properties and mechanical failure of the SEI are challenging to study because the SEI is only ~10 - 200 nm thick and is air sensitive. Furthermore, the SEI changes as a function of electrode material, electrolyte and additives, temperature, potential, and formation protocols. A variety of in situ and ex situ techniques have been used to study the mechanics of the SEI on a variety of lithium ion battery anode candidates; however, there has not been a succinct review of the findings thus far. Because of the difficulty of isolating the true SEI and its mechanical properties, there have been a limited number of studies that can fully de-convolute the SEI from the anode it forms on. A review of past research will be helpful for consolidating current knowledge and helping to inspire new innovations to better quantify and understand the mechanical behavior of the SEI. This review summarizes the different experimental and theoretical techniques used to study the mechanics of the SEI on common lithium ion battery anodes, along with their strengths and weaknesses.
The electric power grid is one of the most critical national infrastructures, and determining the susceptibility of power grid elements to external factors is of significant importance for ensuring grid resilience. Reliable energy is vital to the safety and security of society. One potential threat to the power grid comes in the form of strong electromagnetic field transients arising from high-altitude nuclear weapon detonation. The radiated EM fields from these can affect the operation of electronic components via direct field exposure or from the conducted transients that arise from coupling onto long cables. Vulnerability to these pulses for many electrical components on the grid is unknown. This research focuses on conducted pulse testing of digital protective relays in a power substation and their associated high-voltage circuit breaker circuit and instrumentation transformer circuits. The relays, yard cables, power supplies, and components representing yard equipment were assembled in a manner consistent with installation in a substation to represent the pulse's propagation in the components and wiring. Equipment was tested using pulsed injection into the yard cable. The results showed no equipment damage or undesired operations for insult levels below 180 kV peak open circuit voltage, which is significantly higher than the anticipated coupling to substation yard cables.
Resurrecting a battery chemistry thought to be only primary, we demonstrate the first example of a rechargeable alkaline zinc/copper oxide battery. With the incorporation of a Bi2O3 additive to stabilize the copper oxide-based conversion cathode, Zn/(CuO-Bi2O3) cells are capable of cycling over 100 times at >124 W h/L, with capacities from 674 mA h/g (cycle 1) to 362 mA h/g (cycle 150). The crucial role of Bi2O3 in facilitating the electrochemical reversibility of Cu2O, Cu(OH)2, and Cu0 was supported by scanning and transmission electrochemical microscopy, cyclic voltammetry, and rotating ring-disc electrode voltammetry and monitored via operando energy-dispersive X-ray diffraction measurements. Bismuth was identified as serving two roles, decreasing the cell resistance and promoting Cu(I) and Cu(II) reduction. To mitigate the capacity losses of long-term cycling of CuO cells, we demonstrate two limited depth-of-discharge (DOD) strategies. First, a 30% DOD (202 mA h/g) retains 99.9% capacity over 250 cycles. Second, the modification of the CuO cathode by the inclusion of additional Cu metal enables performance at very high areal capacities of ∼40 mA h/cm² and unprecedented energy densities of ∼260 W h/L, with near 100% Coulombic efficiency. This work revitalizes a historically primary battery chemistry and opens opportunities for future work in developing copper-based conversion cathode chemistries toward the realization of low-cost, safe, and energy-dense secondary batteries.
Despite its promise as a safe, reliable system for grid-scale electrical energy storage, traditional molten sodium (Na) battery deployment remains limited by cost-inflating high-temperature operation. Here, we describe a high-performance sodium iodide-gallium chloride (NaI-GaCl3) molten salt catholyte that enables a dramatic reduction in molten Na battery operating temperature from near 300°C to 110°C. We demonstrate stable, high-performance electrochemical cycling in a high-voltage (3.65 V) Na-NaI battery for >8 months at 110°C. Supporting this demonstration, characterization of the catholyte physical and electrochemical properties identifies critical composition, voltage, and state of charge boundaries associated with this enabling inorganic molten salt electrolyte. Symmetric and full cell testing show that the catholyte salt can support practical current densities in a low-temperature system. Collectively, these studies describe the critical catholyte properties that may lead to the realization of a new class of low-temperature molten Na batteries.
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single layer metasurfaces of size 100λ × 100λ with an aperture density λ⁻² achieve ∼96% testing accuracy on the MNIST data set, for an optimized distance ∼100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
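To make the architecture concrete, the following minimal Python sketch (illustrative only; the complex transmission mask stands in for an optimized metasurface, and the grid size, propagation distance, and detector regions are placeholder choices) propagates the field leaving a single diffractive layer to the output plane with the angular-spectrum method and classifies by the detected intensity, which is where the photodetection nonlinearity |·|² enters:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, distance):
    """Propagate a complex scalar field by `distance` using the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel)                 # spatial frequencies (cycles per unit length)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    H = np.exp(1j * kz * distance)                   # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def classify(image, mask, wavelength, pixel, distance, detector_regions):
    """Single-layer diffractive classifier: illuminate the mask with the input image,
    propagate to the output plane, and pick the class whose detector region
    collects the most intensity."""
    field = image.astype(complex) * mask             # mask: complex transmission of the layer
    out = angular_spectrum_propagate(field, wavelength, pixel, distance)
    intensity = np.abs(out) ** 2                     # photodetection nonlinearity
    scores = [intensity[region].sum() for region in detector_regions]
    return int(np.argmax(scores))
```

Training, i.e., optimizing the mask so that each class directs light to its own detector region, is omitted; the sketch only shows the forward pass that a fabricated metasurface would perform passively.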
In polymer nanoparticle composites (PNCs) with attractive interactions between nanoparticles (NPs) and polymers, a bound layer of the polymer forms on the NP surface, with significant effects on the macroscopic properties of the PNCs. The adsorption and wetting behaviors of polymer solutions in the presence of a solid surface are critical to the fabrication process of PNCs. In this study, we use both classical density functional theory (cDFT) and molecular dynamics (MD) simulations to study dilute and semi-dilute solutions of short polymer chains near a solid surface. Using cDFT, we calculate the equilibrium properties of polymer solutions near a flat surface while varying the solvent quality, surface-fluid interactions, and the polymer chain lengths to investigate their effects on the polymer adsorption and wetting transitions. Using MD simulations, we simulate polymer solutions near solid surfaces with three different curvatures (a flat surface and NPs with two radii) to study the static conformation of the polymer bound layer near the surface and the dynamic chain adsorption process. We find that the bulk polymer concentration at which the wetting transition in the poor solvent system occurs is not affected by the difference in surface-fluid interactions; however, a threshold value of surface-fluid interaction is needed to observe the wetting transition. We also find that with good solvent, increasing the chain length or the difference in the surface-polymer interaction relative to the surface-solvent interaction increases the surface coverage of polymer segments and independent chains for all surface curvatures. Finally, we demonstrate that the polymer segmental adsorption times are heavily influenced only by the surface-fluid interactions, although polymers desorb more quickly from highly curved surfaces.
Fine control over the thermal expansion and contraction behavior of polymer materials is challenging. Most polymers have large coefficients of thermal expansion (CTEs), which preclude long performance lifetimes of composite materials. Herein, we report the design and synthesis of epoxy thermosets with low CTE values below their Tg and large contraction behavior above Tg by incorporating thermally contractile dibenzocyclooctane (DBCO) motifs within the thermoset network. This atypical thermomechanical behavior was rationalized in terms of a twist-boat to chair conformational equilibrium of the DBCO linkages. We anticipate these findings to be generally useful in the preparation of materials with designed CTE values.