Material Testing 2.0 (MT2.0) is a paradigm that advocates for the use of rich, full-field data, such as from digital image correlation and infrared thermography, for material identification. By employing heterogeneous, multi-axial data in conjunction with sophisticated inverse calibration techniques such as finite element model updating and the virtual fields method, MT2.0 aims to reduce the number of specimens needed for material identification and to increase confidence in the calibration results. To support continued development, improvement, and validation of such inverse methods—specifically for rate-dependent, temperature-dependent, and anisotropic metal plasticity models—we provide here a thorough experimental data set for 304L stainless steel sheet metal. The data set includes full-field displacement, strain, and temperature data for seven unique specimen geometries tested at different strain rates and in different material orientations. Commensurate extensometer strain data from tensile dog bones is provided as well for comparison. We believe this complete data set will be a valuable contribution to the experimental and computational mechanics communities, supporting continued advances in material identification methods.
Stereo high-speed video of photovoltaic modules undergoing laboratory hail tests was processed using digital image correlation to determine module surface deformation during and immediately following impact. The purpose of this work was to demonstrate a methodology for characterizing module impact response differences as a function of construction and incident hail parameters. Video capture and digital image analysis resolved out-of-plane module deformation to a resolution of ±0.1 mm at 11 kHz on an in-plane grid of 10 × 10 mm over the area of a 1 × 2 m commercial photovoltaic module. With lighting and optical adjustments, the technique was adaptable to arbitrary module designs, including size, backsheet color, and cell interconnection. Impacts were observed to produce an initially localized dimple in the glass surface, with peak deflection proportional to the square root of incident energy. Subsequent deformation propagation and dissipation were also captured, along with behavior for instances when the module glass fractured. Natural frequencies of the module were identifiable by analyzing module oscillations post-impact. Limitations of the measurement technique were that the impacting ice ball obscured the data field immediately surrounding the point of contact, and both ice and glass fracture events occurred within 100 μs, which was not resolvable at the chosen frame rate. Increasing the frame rate and visualizing the back surface of the impact could avoid these issues. Applications for these data include validating computational models for hail impacts, identifying the natural frequencies of a module, and identifying damage initiation mechanisms.
High-speed, optical imaging diagnostics are presented for three-dimensional (3D) quantification of explosively driven metal fragmentation. At early times after detonation, Digital Image Correlation (DIC) provides non-contact measures of 3D case velocities, strains, and strain rates, while a proposed stereo imaging configuration quantifies in-flight fragment masses and velocities at later times. Experiments are performed using commercially obtained RP-80 detonators from Teledyne RISI, which are shown to create a reproducible fragment field at the benchtop scale. DIC measurements are compared with 3D simulations, which have been ‘leveled’ to match the spatial resolution of DIC. Results demonstrate improved ability to identify predicted quantities-of-interest that fall outside of measurement uncertainty and shot-to-shot variability. Similarly, video measures of fragment trajectories and masses allow rapid experimental repetition and provide correlated fragment size-velocity measurements. Measured and simulated fragment mass distributions are shown to agree within confidence bounds, while some statistically meaningful differences are observed between the measured and predicted conditionally averaged fragment velocities. Together these techniques demonstrate new opportunities to improve future model validation.
Residual stress is a contributor to stress corrosion cracking (SCC) and a common byproduct of additive manufacturing (AM). Here the relationship between residual stress and SCC susceptibility in laser powder bed fusion AM 316L stainless steel was studied through immersion in saturated boiling magnesium chloride per ASTM G36-94. The residual stress was varied by changing the sample height for the as-built condition and additionally by heat treatments at 600°C, 800°C, and 1,200°C to control, and in some cases reduce, residual stress. In general, all samples in the as-built condition showed susceptibility to SCC, with the thinner, lower residual stress samples showing shallower cracks and crack propagation occurring perpendicular to melt tracks due to local residual stress fields. The heat-treated samples showed a reduction in residual stress for the 800°C and 1,200°C samples. Both were free of cracks after >300 h of immersion in MgCl2, while the 600°C sample showed similar cracking to its as-built counterpart. Geometrically necessary dislocation (GND) density analysis indicates that the dislocation density may play a major role in the SCC susceptibility.
Phase-based motion processing and the associated Motion Magnification that it enables have become popular not only for the striking videos they can produce of traditionally stiff structures visualized with very large deflections, but also for their ability to pull information out of the noise floor of images so that they can be processed with more traditional optical techniques such as digital image correlation or feature tracking. While the majority of papers in the literature have utilized the Phase-based Image Processing approach as a pre-processor for more quantitative analyses, the technique itself can be used directly to extract modal parameters from an image, noting that the extracted phases are proportional to displacements in the image. Once phases are extracted, they can be fit using traditional experimental modal analysis techniques. This produces a mode “shape” where the degrees of freedom are phases instead of physical motions. These phases can be scaled to produce on-image visualizations of the mode shapes, rather than operational shapes produced by bandpass filtering. Modal filtering techniques can also be used to visualize motions from an environment on an image using the modal phases as a basis for the expansion.
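The proportionality between extracted phase and image displacement can be seen in a minimal one-dimensional sketch; the `local_phase` helper below is a hypothetical illustration, not the complex steerable pyramid used in practical phase-based motion processing:

```python
import numpy as np

def local_phase(signal, freq):
    """Phase of a single spatial-frequency component of a 1D signal.
    The phase shifts in proportion to sub-pixel displacement of the
    pattern, which is the property phase-based processing exploits."""
    x = np.arange(signal.size)
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * freq * x)))

# Two frames of a sinusoidal pattern; the second is shifted by 0.25 px.
freq = 0.05
x = np.arange(200)
frame0 = np.cos(2 * np.pi * freq * x)
frame1 = np.cos(2 * np.pi * freq * (x - 0.25))
dphi = local_phase(frame1, freq) - local_phase(frame0, freq)
displacement = -dphi / (2 * np.pi * freq)   # phase converted back to pixels
print(round(displacement, 3))  # 0.25
```

The recovered displacement matches the applied sub-pixel shift, which is why phases can be fit directly with modal analysis techniques in place of physical motions.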
This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. [2009]). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
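One plausible reading of such a gradient-alignment factor can be sketched as follows; the `gradient_motion_correction` function and its gradient-magnitude weighting are illustrative assumptions, not the published formula:

```python
import numpy as np

def gradient_motion_correction(gx, gy, motion):
    """Mean alignment between a motion direction and the image-gradient
    directions over a subset: |cos(theta)| per pixel, weighted by the
    gradient magnitude. Hypothetical illustration of the idea only."""
    m = np.asarray(motion, dtype=float)
    m /= np.linalg.norm(m)
    g = np.stack([np.ravel(gx), np.ravel(gy)], axis=1)
    gmag = np.linalg.norm(g, axis=1)
    keep = gmag > 0
    cos_theta = np.abs(g[keep] @ m) / gmag[keep]
    return np.average(cos_theta, weights=gmag[keep])

# A vertical-edge pattern (gradients purely in x) aligns fully with
# horizontal motion and not at all with vertical motion: the classic
# aperture problem that inflates DIC uncertainty.
gx = np.ones((11, 11))
gy = np.zeros((11, 11))
print(gradient_motion_correction(gx, gy, (1.0, 0.0)))  # 1.0
print(gradient_motion_correction(gx, gy, (0.0, 1.0)))  # 0.0
```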
We develop a generalized stress inversion technique (or the generalized inversion method) capable of recovering stresses in linear elastic bodies subjected to arbitrary cuts. Specifically, given a set of displacement measurements found experimentally from digital image correlation (DIC), we formulate a stress estimation inverse problem as a partial differential equation-constrained optimization problem. We use gradient-based optimization methods, and we accordingly derive the necessary gradient and Hessian information in a matrix-free form to allow for parallel, large-scale operations. By using a combination of finite elements, DIC, and a matrix-free optimization framework, the generalized inversion method can be used on any arbitrary geometry, provided that the DIC camera can view a sufficient part of the surface. We present numerical simulations and experiments, and we demonstrate that the generalized inversion method can be applied to estimate residual stress.
Detonation of explosive devices produces extremely hazardous fragments and hot, luminous fireballs. Prior experimental investigations of these post-detonation environments have primarily considered devices containing hundreds of grams of explosives. While relevant to many applications, such large-scale testing also significantly restricts experimental diagnostics and provides limited data for model validation. As an alternative, the current work proposes experiments and simulations of the fragmentation and fireballs from commercial detonators with less than a gram of high explosive. As demonstrated here, reduced experimental hazards and increased optical access significantly expand the viability of advanced imaging and laser diagnostics. Notable developments include the first known validation of MHz-rate optical fragment tracking and the first ever Coherent Anti-Stokes Raman Scattering (CARS) measurements of post-detonation fireball temperatures. While certainly not replacing the need for full-scale verification testing, this work demonstrates new opportunities to accelerate developments of diagnostics and predictive models of post-detonation environments.
Digital Image Correlation (DIC) is a well-established, non-contact diagnostic technique used to measure shape, displacement and strain of a solid specimen subjected to loading or deformation. However, measurements using standard DIC can have significant errors or be completely infeasible in challenging experiments, such as explosive, combustion, or fluid-structure interaction applications, where beam-steering due to index of refraction variation biases measurements or where the sample is engulfed in flames or soot. To address these challenges, we propose using X-ray imaging instead of visible light imaging for stereo-DIC, since refraction of X-rays is negligible in many situations, and X-rays can penetrate occluding material. Two methods of creating an appropriate pattern for X-ray DIC are presented, both based on adding a dense material in a random speckle pattern on top of a less-dense specimen. A standard dot-calibration target is adapted for X-ray imaging, allowing the common bundle-adjustment calibration process in commercial stereo-DIC software to be used. High-quality X-ray images with sufficient signal-to-noise ratios for DIC are obtained for aluminum specimens with thickness up to 22.2 mm, with a speckle pattern thickness of only 80 μm of tantalum. The accuracy and precision of X-ray DIC measurements are verified through simultaneous optical and X-ray stereo-DIC measurements during rigid in-plane and out-of-plane translations, where errors in the X-ray DIC displacements were approximately 2–10 μm for applied displacements up to 20 mm. Finally, a vast reduction in measurement error—5–20 times reduction of displacement error and 2–3 times reduction of strain error—is demonstrated, by comparing X-ray and optical DIC when a hot plate induced a heterogeneous index of refraction field in the air between the specimen and the imaging systems. 
Collectively, these results show the feasibility of using X-ray-based stereo-DIC for non-contact measurements in exacting experimental conditions, where optical DIC cannot be used.
Digital image correlation (DIC) is an optical metrology method widely used in experimental mechanics for full-field shape, displacement and strain measurements. The required strain resolution for engineering applications of interest mandates DIC to have a high image displacement matching accuracy, on the order of 1/100th of a pixel, which necessitates an understanding of DIC errors. In this paper, we examine two spatial bias terms that have been almost completely overlooked. They cause a persistent offset in the matching of image intensities and thus corrupt DIC results. We name them pattern-induced bias (PIB) and intensity discretization bias (IDB). We show that the PIB error occurs in the presence of an undermatched shape function and is primarily dictated by the underlying intensity pattern for a fixed displacement field and DIC settings. The IDB error is due to the quantization of the gray level intensity values in the digital camera. In this paper we demonstrate these errors and quantify their magnitudes both experimentally and with synthetic images.
Residual stress is a common result of manufacturing processes, but it is one that is often overlooked in design and qualification activities. There are many reasons for this oversight, such as lack of observable indicators and difficulty in measurement. Traditional relaxation-based measurement methods use some type of material removal to cause surface displacements, which can then be used to solve for the residual stresses relieved by the removal. While widely used, these methods may offer only individual stress components or may be limited by part or cut geometry requirements. Diffraction-based methods, such as X-ray or neutron, offer non-destructive results but require access to a radiation source. With the goal of producing a more flexible solution, this LDRD developed a generalized residual stress inversion technique that can recover residual stresses released by all traction components on a cut surface, with much greater freedom in part geometry and cut location. The developed method has been successfully demonstrated on both synthetic and experimental data. The project also investigated dislocation density quantification using nonlinear ultrasound, residual stress measurement using Electronic Speckle Pattern Interferometry Hole Drilling, and validation of residual stress predictions in Additive Manufacturing process models.
The Virtual Fields Method (VFM) is an inverse technique used for parameter estimation and calibration of constitutive models. Many assumptions and approximations—such as plane stress, incompressible plasticity, and spatial and temporal derivative calculations—are required to use VFM with full-field deformation data, for example, from Digital Image Correlation (DIC). This work presents a comprehensive discussion of the effects of these assumptions and approximations on parameters identified by VFM for a viscoplastic material model for 304L stainless steel. We generated synthetic data from a Finite-Element Analysis (FEA) in order to have a reference solution with a known material model and known model parameters, and we investigated four cases in which successively more assumptions and approximations were included in the data. We found that VFM is tolerant to small deviations from the plane stress condition in a small region of the sample, and that the incompressible plasticity assumption can be used to estimate thickness changes with little error. A local polynomial fit to the displacement data was successfully employed to compute the spatial displacement gradients. The choice of temporal derivative approximation (i.e., backwards difference versus central difference) was found to have a significant influence on the computed rate of deformation and on the VFM results for the rate-dependent model used in this work. Finally, the noise introduced into the displacement data from a stereo-DIC simulator was found to have negligible influence on the VFM results. Evaluating the effects of assumptions and approximations using synthetic data is a critical first step for verifying and validating VFM for specific applications. The results of this work provide the foundation for confidently using VFM for experimental data.
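The sensitivity to the temporal-derivative choice can be illustrated on a synthetic strain history (a quadratic history is assumed here purely for illustration; the paper's VFM pipeline operates on full-field DIC data):

```python
import numpy as np

# Synthetic strain history with a known rate: d/dt of 0.5*t**2 is t.
t = np.linspace(0.0, 1.0, 21)
strain = 0.5 * t**2
dt = t[1] - t[0]

backward = (strain[1:] - strain[:-1]) / dt          # rate at t[1:]
central = (strain[2:] - strain[:-2]) / (2.0 * dt)   # rate at t[1:-1]

err_backward = np.abs(backward - t[1:]).max()
err_central = np.abs(central - t[1:-1]).max()
print(err_backward)  # dt/2 = 0.025: backward difference lags the true rate
print(err_central)   # ~0: central difference is exact for a quadratic
```

The backward scheme's half-step lag is exactly the kind of systematic rate error that matters for a rate-dependent material model, whereas the central scheme recovers the true rate at interior time steps.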
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Therefore, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
Modeling material and component behavior using finite element analysis (FEA) is critical for modern engineering. One key to a credible model is having an accurate material model, with calibrated model parameters, which describes the constitutive relationship between the deformation and the resulting stress in the material. As such, identifying material model parameters is critical to accurate and predictive FEA. Traditional calibration approaches use only global data (e.g. extensometers and resultant force) and simplified geometries to find the parameters. However, the utilization of rapidly maturing full-field characterization techniques (e.g. Digital Image Correlation (DIC)) with inverse techniques (e.g. the Virtual Fields Method (VFM)) provides a novel and improved method for parameter identification. This LDRD tested that idea: in particular, whether more parameters could be identified per test when using full-field data. The research described in this report successfully proves this hypothesis by comparing the VFM results with traditional calibration methods. Important products of the research include: verified VFM codes for identifying model parameters, a new look at parameter covariance in material model parameter estimation, new validation techniques to better utilize full-field measurements, and an exploration of optimized specimen design for improved data richness.
Traditionally, material identification is performed using global load and displacement data from simple boundary-value problems such as uni-axial tensile and simple shear tests. More recently, however, inverse techniques such as the Virtual Fields Method (VFM) that capitalize on heterogeneous, full-field deformation data have gained popularity. In this work, we have written a VFM code in a finite-deformation framework for calibration of a viscoplastic (i.e. strain-rate dependent) material model for 304L stainless steel. Using simulated experimental data generated via finite-element analysis (FEA), we verified our VFM code and compared the identified parameters with the reference parameters input into the FEA. The identified material model parameters had surprisingly large error compared to the reference parameters, which was traced to parameter covariance and the existence of many essentially equivalent parameter sets. This parameter non-uniqueness and its implications for FEA predictions are discussed in detail. Lastly, we present two strategies to reduce parameter covariance: reduced parametrization of the material model and increased richness of the calibration data. Both allow for the recovery of a unique solution.
This document outlines the preliminary analysis of the 2D Challenge 2.0 images. They currently consist of a new Star pattern series of images created by Benoît Blaysat. Another image set may be created by Phillip with an unknown displacement field based on Sample 14 from the 2D Challenge 1.0.
This document will outline the test plans for the Hill AFB Mk 84 aging studies. The goal of the test series is to measure early case expansion velocities, sample the fragment field at various locations, and measure the overall shockwave and large fragment trajectories. This will be accomplished with three imaging systems as outlined in the sections below.
With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.
Our results for the two sets of impact experiments are reported here. In order to assist with model development using the impact data reported, the materials are mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution experiments, in which a 2-meter-long pendulum is used to study "in context" measurements of the coefficient of restitution for eight different materials (6061-T6 Aluminum, Phosphor Bronze alloy 510, Hiperco, Nitronic 60A, Stainless Steel 304, Titanium, Copper, and Annealed Copper). The coefficient of restitution is measured via two different techniques: digital image correlation and laser Doppler vibrometry. Due to the strong agreement of the two different methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" because the scales of the geometry and impact velocities are representative of common features in the motivating application for this research. Finally, a series of compliance measurements is detailed for the same set of materials. The compliance measurements are conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and μN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides a unique insight into the transitionary region.
A “good” speckle pattern enables DIC to make its full-field measurements, but oftentimes this artistic part of the DIC setup takes a considerable amount of time to develop and evaluate for a given optical configuration. A catalog of well-quantified speckle patterns for various fields of view would greatly decrease the time needed to start making DIC measurements. The purpose of this speckle patterning study is to evaluate various speckling techniques we had readily available in our laboratories for fields of view from around 100 mm down to 5 mm that are common for laboratory-scale experiments. The speckling techniques considered, while not exhaustive, include spray painting, UV-printing of computer-designed speckle patterns, airbrushing, and particle dispersion. First, we quantified the resolution of our optical configurations for each of the fields of view to determine the smallest speckle we could resolve. Second, we imaged several speckle patterns at each field of view. Third, we quantified the average and standard deviation of the speckle size, speckle contrast, and density to characterize the quality of the speckle pattern. Finally, we performed computer-aided sub-pixel translation of the speckle patterns and ran correlations to examine how well DIC tracked the pattern translations. We discuss our metrics for a “good” speckle pattern and outline how others may perform similar studies for their desired optical configurations.
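Size, contrast, and density metrics of this kind can be sketched as follows; the `speckle_metrics` helper, its threshold, and the equivalent-diameter definition are illustrative assumptions, not the study's exact definitions (requires NumPy and SciPy):

```python
import numpy as np
from scipy import ndimage

def speckle_metrics(image, threshold=0.5):
    """Rough pattern-quality metrics for a 2D float image in [0, 1] with
    dark speckles on a light background: mean equivalent speckle diameter
    (px), contrast (std/mean of intensity), and density (speckled-area
    fraction). Hypothetical definitions for illustration only."""
    speckles = image < threshold            # binary speckle mask
    labels, n = ndimage.label(speckles)     # connected components
    if n:
        sizes = ndimage.sum(speckles, labels, index=np.arange(1, n + 1))
        mean_diameter = 2.0 * np.sqrt(sizes.mean() / np.pi)
    else:
        mean_diameter = 0.0
    contrast = image.std() / image.mean()
    density = speckles.mean()
    return mean_diameter, contrast, density

# Synthetic check: one 5x5 square "speckle" on a 20x20 white field.
img = np.ones((20, 20))
img[5:10, 5:10] = 0.0
d, c, rho = speckle_metrics(img)
print(round(rho, 4))  # 25/400 = 0.0625
```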
One nearly ubiquitous, but often overlooked, source of measurement error in Digital Image Correlation (DIC) arises from imaging through heat waves. “Heat waves” is a colloquial term describing a heterogeneous refractive index field caused by temperature (and thus density) gradients in air. Many sources of heat waves exist in a typical DIC experiment, including hot lights, a heated sample, sunlight, or even a hot camera. This paper presents a detailed description of the error introduced to DIC measurements as a result of heat sources being present in the system. We present characteristic spatial and temporal frequencies of heat waves, and explore the relationships between the location of the heat source, the focal length of the lens, and the stand-off distance between the camera and the imaged object. Finally, we conclude with suggested methods of mitigating the effects of heat waves first by careful design of the experiment and second through data processing. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract No. DE-AC04-94AL85000.
A novel experimental method is presented for measuring the coefficient of restitution during impact events. These measurements are used to indirectly validate a new model of elastic-plastic contact. The experimental setup consists of a stainless steel sphere attached at the bottom of a 2.2 m long pendulum. The test materials take the form of 1-inch-diameter pucks that the sphere strikes over a range of velocities. Digital image correlation is used to measure the displacement and velocity of the ball. From these data, the coefficient of restitution is calculated as a function of velocity. This report details the experimental setup, the experimental process, the results acquired, and future work.
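A minimal sketch of extracting a coefficient of restitution from a tracked displacement history on synthetic data; the `coefficient_of_restitution` helper and its simple pre/post-impact averaging are hypothetical, not the report's processing pipeline:

```python
import numpy as np

def coefficient_of_restitution(time, position):
    """Estimate e = |v_rebound| / |v_approach| from a displacement
    history, as could be extracted from DIC tracking of the ball."""
    velocity = np.gradient(position, time)
    i_impact = np.argmin(position)            # closest approach to target
    v_in = velocity[:i_impact].mean()         # mean pre-impact velocity
    v_out = velocity[i_impact + 1:].mean()    # mean post-impact velocity
    return abs(v_out / v_in)

# Synthetic track: ball approaches at 1.0 m/s, rebounds at 0.6 m/s.
t = np.linspace(0.0, 2.0, 2001)
x = np.where(t < 1.0, 1.0 - 1.0 * t, 0.6 * (t - 1.0))
print(round(coefficient_of_restitution(t, x), 3))  # 0.6
```

In practice one would fit the velocity only over windows just before and after contact, rather than averaging the whole record as done here for brevity.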
It is well known that the derivative-based classical approach to strain is problematic when the displacement field is irregular, noisy, or discontinuous. Difficulties arise wherever the displacements are not differentiable. We present an alternative, nonlocal approach to calculating strain from digital image correlation (DIC) data that is well-defined and robust, even for the pathological cases that undermine the classical strain measure. This integral formulation for strain has no spatial derivatives and when the displacement field is smooth, the nonlocal strain and the classical strain are identical. We submit that this approach to computing strains from displacements will greatly improve the fidelity and efficacy of DIC for new application spaces previously untenable in the classical framework.
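The idea of a derivative-free, integral strain measure can be sketched in one dimension; the `nonlocal_strain_1d` helper and its uniform neighborhood weighting are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def nonlocal_strain_1d(x, u, horizon):
    """Nonlocal strain in 1D: a weighted average of relative elongations
    (u(x') - u(x)) / (x' - x) over a neighborhood of radius `horizon`.
    No spatial derivative is taken, so the measure stays well-defined
    even where the displacement field is noisy or discontinuous."""
    strain = np.zeros_like(u)
    for i, xi in enumerate(x):
        mask = (np.abs(x - xi) <= horizon) & (x != xi)
        stretch = (u[mask] - u[i]) / (x[mask] - xi)
        strain[i] = stretch.mean()
    return strain

x = np.linspace(0.0, 1.0, 101)
u_smooth = 0.01 * x                      # uniform 1% strain
print(np.allclose(nonlocal_strain_1d(x, u_smooth, 0.05), 0.01))  # True

u_crack = np.where(x < 0.5, 0.0, 0.001)  # displacement jump (a crack)
eps = nonlocal_strain_1d(x, u_crack, 0.05)
print(np.isfinite(eps).all())            # finite everywhere, even at the jump
```

For the smooth field the nonlocal measure reproduces the classical strain, while the jump produces large but finite values instead of an undefined derivative.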
There has been considerable interest in the matching error for two-dimensional digital image correlation (2D-DIC), including the matching bias and variance; however, there are a number of other sources of error that must also be considered. These include temperature drift of the camera, out-of-plane sample motion, lack of perpendicularity, under-matched subset shape functions, and filtering of the results during the strain calculation. This talk will use experimental evidence to demonstrate some of these ignored error sources and compile a complete “notional” error budget for a typical 2D measurement.
Three-dimensional deformation of rupture discs subjected to gas-dynamic shock loading was measured using a stereomicroscope digital image correlation (DIC) system. One-dimensional blast waves generated with a small-diameter, explosively driven shock tube were used for studying the fluid-structure interactions that exist when incident onto relatively low-strength rupture discs. Prior experiments have shown that subjecting the 0.64-cm-diameter, stainless steel rupture discs to shock waves of varying strength results in a range of responses from no rupture to shear at the outer weld diameter. In this work, the outer surface of the rupture discs was prepared for DIC using 100–150 μm-sized speckles and illuminated with a xenon flashlamp. Two synchronized Shimadzu HPV-2 cameras coupled to an Olympus microscope captured stereo-image sequences of rupture disc behavior at speeds of 1 MHz. Image correlation performed on the stereo-images resulted in spatially resolved surface deformation. The experimental facility, specifics of the DIC diagnostic technique, and the temporal deformation and velocity of the surface of a rupturing disc are presented.
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
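The stated dependence on image noise and the summed subset intensity gradient resembles the familiar textbook estimate, sketched below; the sqrt(2) prefactor and the omission of the subpixel- and interpolation-dependent terms are simplifying assumptions of this sketch:

```python
import numpy as np

def displacement_noise_std(subset, sigma_noise, axis=0):
    """Simplified random-error estimate for subset displacement along
    one axis: sigma_u ~ sqrt(2) * sigma_noise / sqrt(sum of squared
    intensity gradients). Textbook form for illustration only."""
    grad = np.gradient(subset.astype(float), axis=axis)
    return np.sqrt(2.0) * sigma_noise / np.sqrt(np.sum(grad**2))

# Scaling the subset contrast by 10 scales the gradients by 10 and thus
# reduces the predicted random error by a factor of 10.
rng = np.random.default_rng(0)
base = rng.random((21, 21))
ratio = (displacement_noise_std(10.0 * base, 2.0)
         / displacement_noise_std(100.0 * base, 2.0))
print(round(ratio, 6))  # 10.0
```

This captures the qualitative message of the abstract: noisier images raise the random error, and higher-contrast subsets (larger summed gradients) lower it.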
DIC is a non-linear, low-pass spatial filtering operation; whether we consider the effect of the subset and shape function, the strain window used in the strain calculation, or other post-processing of the results, each decision will impact the spatial resolution of the measurement. More fundamentally, the speckle size limits the spatial resolution by dictating the smallest possible subset. After this decision, the processing settings are controlled by the allowable noise level, balanced against possible bias errors created by the data filtering. This article describes a process for determining optimum DIC software settings and for assessing whether the peak displacements or strains are being captured.
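The filtering trade-off described here can be illustrated with a boxcar filter standing in for subset/strain-window smoothing; this is purely illustrative and not any particular DIC code's algorithm:

```python
import numpy as np

def moving_average(signal, window):
    """Boxcar smoothing as a stand-in for the low-pass effect of the
    subset and strain-window sizes."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

x = np.linspace(-1.0, 1.0, 401)
true_strain = np.exp(-(x / 0.05) ** 2)   # sharp strain concentration
rng = np.random.default_rng(1)
noisy = true_strain + rng.normal(0.0, 0.02, x.size)

peak_small = moving_average(noisy, 5).max()
peak_large = moving_average(noisy, 81).max()
# A larger filter suppresses noise but also flattens the peak strain,
# so the reported peak can badly underestimate the true value.
print(peak_large < peak_small)  # True
```

The same tension drives the settings-selection process in the article: enough filtering to meet the noise requirement, but not so much that the peak quantities of interest are lost.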
The work presented in this report concerns the response and failure of thin 2024-T3 aluminum alloy circular plates to a blast load produced by the detonation of a nearby spherical charge. The plates were fully clamped around the circumference and the explosive charge was located centrally with respect to the plate. The principal objective was to conduct a numerical model validation study by comparing the results of predictions to experimental measurements of plate deformation and failure for charges with masses in the vicinity of the threshold between no tearing and tearing of the plates. Stereo digital image correlation data was acquired for all tests to measure the deflection and strains in the plates. The size of the virtual strain gage in the measurements, however, was relatively large, so the strain measurements have to be interpreted accordingly as lower bounds of the actual strains in the plate and of the severity of the strain gradients. A fully coupled interaction model between the blast and the deflection of the structure was considered. The results of the validation exercise indicated that the model predicted the deflection of the plates reasonably accurately, as well as the distribution of strain on the plate. The estimation of the threshold charge based on a critical value of equivalent plastic strain measured in a bulge test, however, was not accurate. This was despite efforts to determine the failure strain of the aluminum sheet under biaxial stress conditions. Further work is needed to be able to predict plate tearing with some degree of confidence. Given the current technology, at least one test under the actual blast conditions where the plate tears is needed to calibrate the value of equivalent plastic strain at failure in the numerical model. Once that has been determined, the question of the explosive mass value at the threshold could be addressed with more confidence.
This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities; these intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and to provide guidance toward the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Despite the method's increasing popularity, little research has addressed the influence of imaging-system resolution on DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial-resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties increase. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution then become a function of the lens resolution rather than, as is typically assumed, the pixel size. The paper demonstrates the tradeoffs associated with limited lens resolution.
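As a rough illustration of why the limiting component matters, component MTFs multiply to give the system MTF, so the system contrast is always below that of its weakest component. The sketch below uses a hypothetical pixel pitch and lens cutoff (not values from the paper), combining a pixel-aperture sinc MTF with an ideal diffraction-limited lens MTF.

```python
import numpy as np

# Illustrative sketch: component MTFs multiply to give the system MTF.
# Pixel pitch and lens cutoff below are hypothetical values.
f = np.linspace(0, 100, 501)                 # spatial frequency, cycles/mm

pixel_pitch = 0.005                          # 5 um pixel, in mm
mtf_camera = np.abs(np.sinc(f*pixel_pitch))  # pixel-aperture (sinc) MTF

f_cutoff = 150.0                             # diffraction cutoff of the lens, cycles/mm
u = np.clip(f/f_cutoff, 0.0, 1.0)
mtf_lens = (2/np.pi)*(np.arccos(u) - u*np.sqrt(1 - u**2))  # ideal diffraction MTF

mtf_system = mtf_camera*mtf_lens             # cascaded components multiply

# Contrast at a candidate speckle frequency (50 cycles/mm): the system is
# always worse than its weakest component.
print(mtf_camera[250], mtf_lens[250], mtf_system[250])
```

In this sketch the lens, not the pixel, dominates the contrast loss at higher frequencies, which is the situation the paper argues should drive speckle-size choices.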
Full-field axial deformation within molten-salt batteries was measured using x-ray imaging with a sampling moiré technique. This method worked for in situ testing of the batteries because of the inherent grid pattern formed by the battery layers when imaged with x-rays. High-speed x-ray imaging acquired movies of the layer deformation during battery activation. Numerical validation of the technique, as implemented in this paper, was done using synthetic and numerically shifted images. Typical results are shown for one battery test. Further validation work and additional test results are in progress.
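A simplified 1-D sketch of the sampling moiré idea (pure-cosine grating, sampling interval equal to the known pitch, hypothetical sub-pixel shift): thinning the grating at one start offset per phase step yields phase-shifted moiré signals, and a DFT over the phase steps recovers the displacement.

```python
import numpy as np

# Simplified 1-D sampling moiré sketch. The grating, pitch, and shift are
# hypothetical; the paper's 2-D x-ray implementation is more involved.
P = 10                        # grating pitch in pixels (assumed known)
x = np.arange(400)
shift = 0.23                  # true sub-pixel displacement, px

ref = np.cos(2*np.pi*x/P)             # reference grating
defm = np.cos(2*np.pi*(x - shift)/P)  # deformed (shifted) grating

def moire_phase(signal, N):
    L = len(signal)//N
    # One thinned copy per start offset k: N phase-shifted moiré signals.
    steps = np.array([signal[k:k + L*N:N] for k in range(N)])
    # DFT over the N phase steps extracts the moiré phase at each point.
    w = np.exp(-2j*np.pi*np.arange(N)/N)
    return np.angle(np.sum(steps*w[:, None], axis=0))

dphi = moire_phase(defm, P) - moire_phase(ref, P)
disp = -np.unwrap(dphi)*P/(2*np.pi)   # phase difference -> displacement, px
print(disp.mean())
```

The recovered displacement matches the imposed shift, which mirrors the synthetic-shift validation described in the abstract.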
After the meetings at SEM and ICEM this year, both of which were well attended, the participants decided that a first round of scoring the codes would be done using the Sample 14 and Sample 15 images. There was plenty of discussion on how we (the DIC Challenge Board) were going to score the results: what should the balance between noise and filtering be, and so forth. It was therefore decided to use a sub-group of the participants to help determine whether the submission guidelines were working and how we would score the results. An additional benefit of this is that we can fix any submission-guideline issues before collecting results more broadly, and begin writing automated analysis codes. I expect that there will be a discussion of both subjects after I create a draft document of the scoring. This document is a draft of that report.
There are numerous scenarios in which critical systems could be subject to penetration by projectiles or fixed objects (e.g., collision, natural disaster, act of terrorism). It is desired to use computational models to examine these scenarios and make risk-informed decisions; however, modeling of material failure is an active area of research, and new models must be validated with experimental data. The purpose of this report is to document the experimental work performed from FY07 through FY08 on the Campaign Six Plate Puncture project. The goal of this project was to acquire experimental data on the puncture and penetration of metal plates for use in model validation. Of particular interest is the PLH failure model, also known as the multilinear line segment model. A significant amount of data useful for the verification and validation of computational models of ductile failure was collected during this project and is documented herein; however, much work remains to collect additional experimental data that will further the task of model validation.
The accuracy of digital in-line holography to detect particle position and size within a 3D domain is evaluated with particular focus placed on detection of nonspherical particles. Dimensionless models are proposed for simulation of holograms from single particles, and these models are used to evaluate the uncertainty of existing particle detection methods. From the lessons learned, a new hybrid method is proposed. This method features automatic determination of optimum thresholds, and simulations indicate improved accuracy compared to alternative methods. To validate this, experiments are performed using quasi-stationary, 3D particle fields with imposed translations. For the spherical particles considered in experiments, the proposed hybrid method resolves mean particle concentration and size to within 4% of the actual value, while the standard deviation of particle depth is less than two particle diameters. Initial experimental results for nonspherical particles reveal similar performance.
Situations occasionally arise in field measurements where direct optical access to the area of interest is not possible. In these cases the borescope is the standard method of imaging. If shape, displacement, or strain measurements are desired in these hidden locations, it would be advantageous to perform digital image correlation (DIC) through the borescope. This paper presents the added complexities and errors associated with imaging through a borescope for DIC, and discusses non-radial distortions, their effects on the measurements, and a possible correction scheme.
Predicting the failure of thin-walled structures under explosive loading is a complex task. The problem can be divided into two parts: the detonation of the explosive to produce the loading on the structure, and the structural response. The factors that affect the explosive loading include the size, shape, stand-off, confinement, and chemistry of the explosive; the goal of the first part of the analysis is to predict the pressure on the structure based on these factors. The hydrodynamic code CTH is used to conduct these calculations. The response of the structure to the explosive loading is then predicted using a detailed finite element model within the explicit analysis code Presto. Material response, up to failure, must be established in the analysis to model the failure of this class of structures; validation of this behavior is also required for these analyses to be predictive for their intended use. The presentation will detail the validation tests used to support this program. Validation tests using explosively loaded thin flat aluminum plates were used to study all the aspects mentioned above. Experimental measurements of the pressures generated by the explosive and of the resulting plate deformations provided data for comparison against analytical predictions, including pressure-time histories and digital image correlation of the full-field plate deflections. The issues studied in the structural analysis were mesh sensitivity, strain-based failure metrics, and the coupling methodologies between the blast and structural models. These models have been successfully validated using these tests, thereby increasing confidence in the predicted failure thresholds of complex structures, including aircraft.
Digital image correlation (DIC) and the tremendous advances in optical imaging are beginning to revolutionize explosive and high-strain-rate measurements. This paper presents results obtained from metallic hemispheres expanded at detonation velocities. Important aspects of sample preparation and lighting, which are key considerations in obtaining images suitable for DIC at frame rates of 1 million frames per second, are presented. Quantitative measurements of the case strain rate, expansion velocity, and deformation are reported. Furthermore, preliminary estimates of the measurement uncertainty are discussed, with notes on how image noise and contrast affect the measurement of shape and displacement. The data are then compared with analytical representations of the experiment.
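For context, a hedged sketch of how expansion velocity and strain rate follow from a DIC radius history at 1 million frames/second; the radii below are invented for illustration, not data from the tests.

```python
import numpy as np

# Hedged sketch: expansion velocity and hoop strain rate of an expanding
# hemisphere from a DIC radius history. Radii are invented for illustration.
dt = 1e-6                                             # s per frame at 1 Mfps
r = np.array([25.0, 25.8, 26.7, 27.7, 28.8])*1e-3     # case radius per frame, m

v = np.gradient(r, dt)          # expansion velocity, m/s
hoop_strain = np.log(r/r[0])    # true (logarithmic) hoop strain
strain_rate = v/r               # d/dt ln(r) = v/r, 1/s

print(v[0], strain_rate[0])     # hundreds of m/s; strain rate of order 1e4/s
```

Even millimeter-scale radius changes per frame at these frame rates correspond to velocities of hundreds of meters per second and strain rates of order 10^4/s, which is why the imaging considerations above matter.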
Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images has a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true for high-speed applications, where actual test images are often less than ideal. Work on the mathematical underpinnings of DIC uncertainty quantification has previously been completed and published; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.
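One commonly published form of the DIC displacement-uncertainty equation relates displacement variance to image noise and the subset's intensity gradients. The sketch below (synthetic 1-D pattern and assumed noise level, not the paper's actual experiments) checks that form against a Monte Carlo simulation of linearized subset matching.

```python
import numpy as np

# Sketch of the one-parameter displacement-variance estimate from the DIC
# uncertainty literature: var(u) ~ 2*sigma^2 / sum((dI/dx)^2), where sigma is
# the camera gray-level noise. The pattern and noise level are hypothetical.
rng = np.random.default_rng(1)
x = np.arange(64, dtype=float)
I = 100 + 50*np.sin(2*np.pi*x/8)        # synthetic "speckle" intensity pattern
dIdx = np.gradient(I)                    # intensity gradient over the subset
sigma = 2.0                              # gray-level noise std (assumed)

var_pred = 2*sigma**2/np.sum(dIdx**2)    # predicted displacement variance, px^2

# Monte Carlo: linearized least-squares shift estimate between two noisy
# frames of the same pattern; (I + n2) - (I + n1) reduces to n2 - n1.
u_hat = [np.sum(dIdx*(rng.normal(0, sigma, 64) - rng.normal(0, sigma, 64)))
         / np.sum(dIdx**2) for _ in range(2000)]

print(np.sqrt(var_pred), np.std(u_hat))  # the two should agree closely
```

The agreement between the closed-form prediction and the simulated scatter is the kind of check the experimental work in the paper performs on real images.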
The Doppler electron velocimeter (DEV) has been shown to be theoretically possible. This report attempts to answer the next logical question: is it a practical instrument? The answer hinges on whether enough electrons are available to create a time-varying Doppler current that can be measured by a detector with sufficient sensitivity and bandwidth. The answer to both of these questions is a qualified yes. A target Doppler frequency of 1 MHz was set as the minimum rate of interest. At this target, a theoretical beam-current signal-to-noise ratio of 25-to-1 is shown for existing electron holography equipment. A detector with a bandwidth of 1 MHz at a current of 10 pA is also demonstrated. Additionally, a Linnik-type interferometer that would increase the available beam current is presented, offering a more flexible arrangement for Doppler electron measurements than the traditional biprism.
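The bandwidth-versus-current question can be sketched with a generic shot-noise-limited SNR estimate; the numbers below are order-of-magnitude context only, since the report's 25-to-1 figure depends on its specific holography parameters.

```python
import math

# Illustrative shot-noise-limited SNR for a small beam current. Values are
# for context only, not a reproduction of the report's analysis.
e = 1.602e-19        # electron charge, C
I = 10e-12           # detected beam current, A (the demonstrated detector level)
B = 1e6              # measurement bandwidth, Hz (the 1 MHz Doppler target)

i_shot = math.sqrt(2*e*I*B)   # RMS shot-noise current over bandwidth B
snr = I/i_shot
print(snr)
```

At picoampere currents and megahertz bandwidths the shot-noise-limited SNR is only of order a few, which illustrates why increasing the available beam current, as with the proposed Linnik-type interferometer, is attractive.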