Detonation of explosive devices produces extremely hazardous fragments and hot, luminous fireballs. Prior experimental investigations of these post-detonation environments have primarily considered devices containing hundreds of grams of explosives. While relevant to many applications, such large-scale testing also significantly restricts experimental diagnostics and provides limited data for model validation. As an alternative, the current work proposes experiments and simulations of the fragmentation and fireballs from commercial detonators with less than a gram of high explosive. As demonstrated here, reduced experimental hazards and increased optical access significantly expand the viability of advanced imaging and laser diagnostics. Notable developments include the first known validation of MHz-rate optical fragment tracking and the first Coherent Anti-Stokes Raman Scattering (CARS) measurements of post-detonation fireball temperatures. While certainly not replacing the need for full-scale verification testing, this work demonstrates new opportunities to accelerate development of diagnostics and predictive models of post-detonation environments.
Digital Image Correlation (DIC) is a well-established, non-contact diagnostic technique used to measure the shape, displacement, and strain of a solid specimen subjected to loading or deformation. However, measurements using standard DIC can have significant errors or be completely infeasible in challenging experiments, such as explosive, combustion, or fluid-structure interaction applications, where beam steering due to index-of-refraction variations biases measurements or where the sample is engulfed in flames or soot. To address these challenges, we propose using X-ray imaging instead of visible-light imaging for stereo-DIC, since refraction of X-rays is negligible in many situations and X-rays can penetrate occluding material. Two methods of creating an appropriate pattern for X-ray DIC are presented, both based on adding a dense material in a random speckle pattern on top of a less-dense specimen. A standard dot-calibration target is adapted for X-ray imaging, allowing the common bundle-adjustment calibration process in commercial stereo-DIC software to be used. High-quality X-ray images with sufficient signal-to-noise ratios for DIC are obtained for aluminum specimens with thicknesses up to 22.2 mm, using a tantalum speckle pattern only 80 μm thick. The accuracy and precision of X-ray DIC measurements are verified through simultaneous optical and X-ray stereo-DIC measurements during rigid in-plane and out-of-plane translations, where errors in the X-ray DIC displacements were approximately 2–10 μm for applied displacements up to 20 mm. Finally, a substantial reduction in measurement error is demonstrated by comparing X-ray and optical DIC when a hot plate induced a heterogeneous index-of-refraction field in the air between the specimen and the imaging systems: displacement errors were reduced by a factor of 5–20 and strain errors by a factor of 2–3. Collectively, these results show the feasibility of using X-ray-based stereo-DIC for non-contact measurements in exacting experimental conditions where optical DIC cannot be used.
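At the heart of any stereo-DIC measurement, whether the images are formed with visible light or X-rays, is the same matching step: a speckled subset of the reference image is located in the deformed image by maximizing a correlation criterion. The sketch below is only a minimal illustration of that step (zero-normalized cross-correlation with a parabolic sub-pixel refinement on a synthetic speckle pattern); it is not the commercial stereo-DIC software used in the work above, and all function names and parameter values are assumed for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

rng = np.random.default_rng(0)

# Synthetic speckle pattern (a stand-in for the imaged tantalum speckle)
ref = gaussian_filter(rng.random((128, 128)), sigma=2.0)
true_disp = (1.6, -2.3)                           # known (row, col) shift in pixels
defm = shift(ref, true_disp, order=3, mode="nearest")

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref_img, def_img, center, half=15, search=5):
    """Locate a reference subset in the deformed image: integer search over a
    window, then parabolic sub-pixel refinement of the correlation peak."""
    r, c = center
    sub = ref_img[r - half:r + half + 1, c - half:c + half + 1]
    scores = np.full((2 * search + 1, 2 * search + 1), -np.inf)
    for i, dr in enumerate(range(-search, search + 1)):
        for j, dc in enumerate(range(-search, search + 1)):
            cand = def_img[r + dr - half:r + dr + half + 1,
                           c + dc - half:c + dc + half + 1]
            scores[i, j] = zncc(sub, cand)
    i0, j0 = np.unravel_index(np.argmax(scores), scores.shape)

    def subpix(s_m, s_0, s_p):
        # Offset of the vertex of a parabola through three neighboring scores
        return 0.5 * (s_m - s_p) / (s_m - 2 * s_0 + s_p)

    dr = i0 - search + subpix(scores[i0 - 1, j0], scores[i0, j0], scores[i0 + 1, j0])
    dc = j0 - search + subpix(scores[i0, j0 - 1], scores[i0, j0], scores[i0, j0 + 1])
    return dr, dc

print("estimated displacement:", match_subset(ref, defm, (64, 64)))
print("applied displacement:  ", true_disp)
```

The same matching machinery applies to X-ray radiographs; what changes is the image formation, which is why the negligible refraction and penetration of X-rays matter for the challenging conditions described above.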
Digital image correlation (DIC) is an optical metrology method widely used in experimental mechanics for full-field shape, displacement, and strain measurements. The strain resolution required for engineering applications of interest mandates that DIC have a high image displacement matching accuracy, on the order of 1/100th of a pixel, which necessitates an understanding of DIC errors. In this paper, we examine two spatial bias terms that have been almost completely overlooked. They cause a persistent offset in the matching of image intensities and thus corrupt DIC results. We name them pattern-induced bias (PIB) and intensity discretization bias (IDB). We show that the PIB error occurs in the presence of an undermatched shape function and, for a fixed displacement field and DIC settings, is primarily dictated by the underlying intensity pattern. The IDB error is due to the quantization of the gray-level intensity values in the digital camera. We demonstrate these errors and quantify their magnitudes both experimentally and with synthetic images.
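Pattern-induced bias can be illustrated directly: when a displacement field with curvature is matched using an affine (undermatched) shape function, the recovered center displacement carries a systematic offset that depends on the particular random intensity pattern, even though the displacement field and DIC settings are held fixed. The 1-D sketch below is a hypothetical illustration of that mechanism, not the paper's 2-D analysis; the pattern generator, subset size, and displacement field are all assumed for demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x_fine = np.linspace(-50.0, 150.0, 20001)        # fine grid: the "continuous" image

def make_pattern():
    """Random band-limited 1-D intensity pattern evaluated on the fine grid."""
    p = np.zeros_like(x_fine)
    for _ in range(120):
        c, w = rng.uniform(-40, 140), rng.uniform(1.0, 2.5)
        p += rng.uniform(0.2, 1.0) * np.exp(-((x_fine - c) / w) ** 2)
    return p / p.max()

xc, half = 50.0, 15
xs = np.arange(xc - half, xc + half + 1)         # subset pixel coordinates
u0, c2 = 0.4, 2e-3

def u_true(x):
    """Quadratic displacement field: undermatched by an affine shape function."""
    return u0 + c2 * (x - xc) ** 2

for k in range(5):
    pattern = make_pattern()

    def ref(x):                                  # interpolated reference image
        return np.interp(x, x_fine, pattern)

    g = ref(xs - u_true(xs))                     # deformed subset samples

    def residual(p):                             # affine (first-order) shape function
        return g - ref(xs - p[0] - p[1] * (xs - xc))

    p_hat = least_squares(residual, x0=[0.0, 0.0]).x
    print(f"pattern {k}: center-displacement error = {p_hat[0] - u0:+.4f} px")
```

The printed errors change from pattern to pattern while everything else is fixed, which is the signature of PIB; IDB would be added on top of this by quantizing the sampled intensities to a finite number of gray levels before matching.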
Residual stress is a common result of manufacturing processes, but it is often overlooked in design and qualification activities. There are many reasons for this oversight, such as the lack of observable indicators and the difficulty of measurement. Traditional relaxation-based measurement methods use some type of material removal to cause surface displacements, which can then be used to solve for the residual stresses relieved by the removal. While widely used, these methods may recover only individual stress components or may be limited by part or cut geometry requirements. Diffraction-based methods, such as X-ray or neutron diffraction, offer non-destructive measurements but require access to a radiation source. With the goal of producing a more flexible solution, this LDRD developed a generalized residual stress inversion technique that can recover the residual stresses released by all traction components on a cut surface, with much greater freedom in part geometry and cut location. The developed method has been successfully demonstrated on both synthetic and experimental data. The project also investigated dislocation density quantification using nonlinear ultrasound, residual stress measurement using Electronic Speckle Pattern Interferometry Hole Drilling, and validation of residual stress predictions in Additive Manufacturing process models.
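Although the report's inversion algorithm is not reproduced here, relaxation-based residual stress measurement shares a common mathematical structure: the measured surface displacements are linear in the unknown tractions released on the cut, d = G m, and the resulting ill-conditioned system is solved with some form of regularization. The sketch below illustrates only that regularized least-squares step, using an entirely synthetic sensitivity matrix; in practice the columns of G would come from elastic finite-element solutions for unit tractions on the cut face, and all names and values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Column j of the sensitivity matrix G holds the surface displacements produced by
# a unit traction basis function acting on the cut face (computed, e.g., with an
# elastic finite-element model).  Here a synthetic G with a prescribed singular-value
# decay stands in for that operator, to mimic its mild ill-conditioning.
n_meas, n_basis = 200, 10
U, _ = np.linalg.qr(rng.standard_normal((n_meas, n_basis)))
V, _ = np.linalg.qr(rng.standard_normal((n_basis, n_basis)))
G = U @ np.diag(np.logspace(0, -3, n_basis)) @ V.T      # condition number ~1e3

m_true = rng.standard_normal(n_basis)                   # released-traction coefficients
d = G @ m_true + 1e-5 * rng.standard_normal(n_meas)     # noisy displacement data

# Tikhonov-regularized least squares:  minimize ||G m - d||^2 + alpha^2 ||m||^2,
# solved by stacking the regularization rows onto the system.
alpha = 1e-4
A = np.vstack([G, alpha * np.eye(n_basis)])
b = np.concatenate([d, np.zeros(n_basis)])
m_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error in recovered traction coefficients:",
      np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))
```

The regularization weight trades noise amplification against bias in the recovered tractions; choosing it (and the traction basis itself) is where a generalized inversion method differs from a toy example like this one.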
The Virtual Fields Method (VFM) is an inverse technique used for parameter estimation and calibration of constitutive models. Many assumptions and approximations—such as plane stress, incompressible plasticity, and spatial and temporal derivative calculations—are required to use VFM with full-field deformation data, for example, from Digital Image Correlation (DIC). This work presents a comprehensive discussion of the effects of these assumptions and approximations on parameters identified by VFM for a viscoplastic material model for 304L stainless steel. We generated synthetic data from a Finite-Element Analysis (FEA) in order to have a reference solution with a known material model and known model parameters, and we investigated four cases in which successively more assumptions and approximations were included in the data. We found that VFM is tolerant to small deviations from the plane stress condition in a small region of the sample, and that the incompressible plasticity assumption can be used to estimate thickness changes with little error. A local polynomial fit to the displacement data was successfully employed to compute the spatial displacement gradients. The choice of temporal derivative approximation (i.e., backwards difference versus central difference) was found to have a significant influence on the computed rate of deformation and on the VFM results for the rate-dependent model used in this work. Finally, the noise introduced into the displacement data from a stereo-DIC simulator was found to have negligible influence on the VFM results. Evaluating the effects of assumptions and approximations using synthetic data is a critical first step for verifying and validating VFM for specific applications. The results of this work provide the foundation for confidently using VFM for experimental data.
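Two of the approximations examined above, the temporal derivative of the displacement history and the local polynomial fit used for spatial displacement gradients, are easy to illustrate on synthetic data. The sketch below is a minimal, hypothetical example, not the VFM implementation from the study: it compares backward and central differences on a smooth displacement history, and compares a local quadratic fit against a plain finite difference for a gradient computed from noisy data.

```python
import numpy as np

rng = np.random.default_rng(4)

# --- Temporal derivative of a displacement history: backward vs central difference ---
dt = 1e-3                                       # frame interval [s] (illustrative)
t = np.arange(0.0, 0.05, dt)
u = 0.2 * t + 40.0 * t ** 2                     # synthetic displacement history [mm]
v_exact = 0.2 + 80.0 * t                        # exact velocity [mm/s]

v_bwd = np.full_like(u, np.nan)                 # backward difference (first-order, lags)
v_bwd[1:] = (u[1:] - u[:-1]) / dt
v_ctr = np.full_like(u, np.nan)                 # central difference (second-order, interior)
v_ctr[1:-1] = (u[2:] - u[:-2]) / (2.0 * dt)

i = len(t) // 2
print(f"rate at frame {i}: exact {v_exact[i]:.4f}, "
      f"backward {v_bwd[i]:.4f}, central {v_ctr[i]:.4f}  [mm/s]")

# --- Spatial displacement gradient from noisy data via a local quadratic fit ---
x = np.arange(0.0, 10.0, 0.1)                   # DIC grid coordinates [mm]
u_x = 0.01 * x + 0.002 * x ** 2                 # synthetic displacement field [mm]
u_noisy = u_x + 5e-4 * rng.standard_normal(x.size)

j, half = 50, 7                                 # 15-point window centered on x[j]
w = slice(j - half, j + half + 1)
coeffs = np.polyfit(x[w] - x[j], u_noisy[w], deg=2)
grad_poly = coeffs[1]                           # du/dx at the window center
grad_fd = (u_noisy[j + 1] - u_noisy[j - 1]) / (x[j + 1] - x[j - 1])
grad_exact = 0.01 + 0.004 * x[j]

print(f"du/dx at x = {x[j]:.1f} mm: exact {grad_exact:.5f}, "
      f"local quadratic fit {grad_poly:.5f}, plain finite difference {grad_fd:.5f}")
```

For this smooth history the backward difference lags the exact rate by half a frame's worth of acceleration while the central difference does not, which is the kind of difference that matters for a rate-dependent material model; the local polynomial fit likewise suppresses the noise that a point-wise finite difference amplifies.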
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Therefore, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
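The refraction mechanism can be estimated with textbook relations: the refractive index of air follows the Gladstone-Dale relation n - 1 = K ρ, with density from the ideal gas law, and a ray crossing a layer of thickness L with a transverse index gradient dn/dy is deflected by roughly θ ≈ L (dn/dy), so an object sitting a distance D behind the layer appears shifted by about θD. The sketch below applies these relations with assumed plume dimensions, temperatures, and image scale; the numbers are illustrative order-of-magnitude estimates, not results from the paper.

```python
import numpy as np

# Gladstone-Dale relation for air:  n - 1 = K * rho,  with rho from the ideal gas law.
K_GD = 2.26e-4          # Gladstone-Dale constant for air, visible light [m^3/kg]
R_AIR = 287.05          # specific gas constant of air [J/(kg K)]
P = 101325.0            # ambient pressure [Pa]

def n_air(T_kelvin):
    """Refractive index of air at pressure P and temperature T (Gladstone-Dale)."""
    rho = P / (R_AIR * T_kelvin)
    return 1.0 + K_GD * rho

# A plume of hot air between the specimen and the camera (assumed temperatures).
n_cold, n_hot = n_air(293.0), n_air(353.0)          # ~20 C ambient, ~80 C plume
print(f"n(20 C) = {n_cold:.6f},  n(80 C) = {n_hot:.6f}")

# Small-deflection estimate: theta ~ L * dn/dy across the plume, apparent shift ~ theta * D.
L = 0.05                                            # plume thickness along the line of sight [m]
h = 0.02                                            # transverse distance over which n varies [m]
D = 0.5                                             # plume-to-specimen distance [m]
theta = L * (n_cold - n_hot) / h                    # deflection angle [rad]
apparent_shift = theta * D                          # apparent shift at the specimen [m]

scale = 20e-6                                       # assumed image scale [m per pixel]
print(f"deflection angle ~ {theta:.2e} rad")
print(f"apparent shift   ~ {apparent_shift * 1e6:.1f} um "
      f"(~ {apparent_shift / scale:.2f} px at {scale * 1e6:.0f} um/px)")
```

Even these rough numbers land well above typical DIC noise floors of a few hundredths of a pixel, consistent with the conclusion that heat-wave errors are significant and are best avoided at the source rather than filtered out afterward.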
Modeling material and component behavior using finite element analysis (FEA) is critical for modern engineering. One key to a credible model is an accurate material model, with calibrated parameters, that describes the constitutive relationship between the deformation and the resulting stress in the material. As such, identifying material model parameters is critical to accurate and predictive FEA. Traditional calibration approaches use only global data (e.g., extensometers and resultant force) and simplified geometries to find the parameters. However, the combination of rapidly maturing full-field characterization techniques (e.g., Digital Image Correlation (DIC)) with inverse techniques (e.g., the Virtual Fields Method (VFM)) provides a new and improved method for parameter identification. This LDRD tested that idea: in particular, whether more parameters could be identified per test when using full-field data. The research described in this report confirms this hypothesis by comparing the VFM results with traditional calibration methods. Important products of the research include verified VFM codes for identifying model parameters, a new look at parameter covariance in material model parameter estimation, new validation techniques to better utilize full-field measurements, and an exploration of optimized specimen design for improved data richness.
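The "new look at parameter covariance" mentioned above can be connected to the standard Gauss-Newton estimate used in nonlinear least-squares calibration, cov(p) ≈ s^2 (J^T J)^(-1), where J is the residual Jacobian at the identified parameters. The sketch below applies that estimate to a hypothetical Voce-hardening fit on synthetic stress-strain data; it is not the report's code, and the model, parameter values, and noise level are assumptions. Off-diagonal correlation entries near ±1 would indicate parameters that the data cannot identify independently, which is exactly where richer full-field measurements are expected to help.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

def voce(params, strain):
    """Voce-type saturating hardening law (illustrative material model)."""
    sigma_y, Q, b = params
    return sigma_y + Q * (1.0 - np.exp(-b * strain))

strain = np.linspace(0.0, 0.2, 60)
p_true = np.array([250.0, 180.0, 20.0])            # yield stress, saturation, rate [MPa, MPa, -]
noise = 2.0                                        # stress noise standard deviation [MPa]
stress = voce(p_true, strain) + noise * rng.standard_normal(strain.size)

fit = least_squares(lambda p: voce(p, strain) - stress, x0=[200.0, 100.0, 10.0])

# Gauss-Newton covariance estimate from the residual Jacobian at the solution.
J = fit.jac
dof = strain.size - len(fit.x)
s2 = 2.0 * fit.cost / dof                          # residual variance (cost = 0.5 * ||r||^2)
cov = s2 * np.linalg.inv(J.T @ J)
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)

print("identified parameters:", np.round(fit.x, 2))
print("standard errors      :", np.round(std, 2))
print("correlation matrix   :\n", np.round(corr, 3))
```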