We are studying PMDI polyurethane with a fast catalyst, such that filling and polymerization occur simultaneously. The foam is over-packed to twice or more of its free-rise density to reach the density of interest. Our approach is to combine model development closely with experiments to discover new physics, to parameterize models, and to validate the models once they have been developed. The model must be able to represent the expansion, filling, curing, and final foam properties. PMDI is a chemically blown foam, where carbon dioxide is produced via the reaction of water and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. A new kinetic model is developed and implemented, which follows a simplified mathematical formalism that decouples these two reactions. The model predicts the polymerization reaction via condensation chemistry, where vitrification and glass transition temperature evolution must be included to correctly predict this quantity. The foam gas generation kinetics are determined by tracking the molar concentrations of both water and carbon dioxide. Understanding the thermal history and loads on the foam due to exothermicity and oven heating is very important to the results, since the kinetics and material properties are all very sensitive to temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations, are solved via a stabilized finite element method. We assume a generalized-Newtonian rheology that is dependent on the cure, gas fraction, and temperature. The conservation equations are combined with a level set method to determine the location of the free surface over time. Results from the model are compared to experimental flow visualization data and post-test CT data for the density. Several geometries are investigated, including a mock encapsulation part, two configurations of a mock structural part, and a bar geometry to specifically test the density model. We have found that the model predicts both average density and filling profiles well. However, it underpredicts density gradients, especially in the gravity direction. Thoughts on model improvements are also discussed.
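The decoupled kinetics described above can be illustrated with a minimal sketch: one rate equation for the polymerization (cure) reaction and one for the water-isocyanate blowing reaction that generates carbon dioxide. The rate forms and constants below are illustrative assumptions, not the paper's fitted parameters, and the vitrification and glass-transition effects mentioned in the abstract are omitted.

```python
# Minimal sketch of decoupled foaming/curing kinetics: an nth-order
# cure reaction plus a first-order water-isocyanate blowing reaction
# producing CO2. All rate forms and constants are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

def rates(t, y, T):
    """y = [alpha, c_w, c_co2]: cure fraction, water and CO2 molar conc."""
    alpha, c_w, c_co2 = y
    k_p = 1.0e5 * np.exp(-60.0e3 / (R * T))   # polymerization rate constant
    k_b = 5.0e4 * np.exp(-50.0e3 / (R * T))   # blowing-reaction rate constant
    dalpha = k_p * (1.0 - alpha) ** 2         # 2nd-order cure kinetics
    dc_w = -k_b * c_w                         # water consumption
    dc_co2 = -dc_w                            # 1 mol CO2 per mol water
    return [dalpha, dc_w, dc_co2]

T = 330.0  # assumed isothermal temperature, K
sol = solve_ivp(rates, (0.0, 300.0), [0.0, 1.5e3, 0.0], args=(T,))
print(sol.y[:, -1])  # final cure fraction and concentrations
```

In the full model these rate equations are coupled to the energy balance, since the temperature that enters both Arrhenius factors evolves with the reaction exotherm and oven heating.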
We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, with isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas-generating reaction are reported based on measurements of volume, temperature, and pressure evolution with time. A foam rheology model is proposed, and its parameters are determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are measured directly by sectioning and mass sampling, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.
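As a concrete example of how conversion-versus-time data of the kind produced by infrared spectrophotometry can be turned into kinetic parameters, the sketch below fits an assumed nth-order rate law at a single temperature; the data arrays are synthetic placeholders, not measurements from this work.

```python
# Sketch: fitting an assumed rate law d(alpha)/dt = k (1 - alpha)^n
# to isothermal conversion-vs-time data. Data values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

t_data = np.array([0.0, 30.0, 60.0, 120.0, 240.0, 480.0])   # s
alpha_data = np.array([0.0, 0.12, 0.24, 0.45, 0.71, 0.90])  # conversion

def model(t, k, n):
    """Integrate the rate law and return conversion at the data times."""
    sol = solve_ivp(lambda _t, a: [k * (1.0 - a[0]) ** n],
                    (0.0, t[-1]), [0.0], t_eval=t)
    return sol.y[0]

(k_fit, n_fit), _ = curve_fit(model, t_data, alpha_data, p0=(1e-3, 2.0))
print(f"k = {k_fit:.3e} 1/s, n = {n_fit:.2f}")
# Repeating the fit at several temperatures yields Arrhenius
# parameters for the temperature dependence of k.
```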
Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile for the energy range of a given Bremsstrahlung profile. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.
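A toy version of the constrained-optimization idea follows: given a known (here synthetic) source spectrum and transmission measurements through several thicknesses of a homogeneous material, solve for a binned attenuation profile mu(E) by bounded least squares. The spectrum shape, attenuation curve, and bin layout are all assumptions for illustration; the actual problem, with a fully characterized imaging system, is far more ill-conditioned, which is the challenge the abstract describes.

```python
# Sketch: recovering an energy-binned attenuation profile mu(E) from
# polychromatic transmission data at several path lengths. Spectrum,
# attenuation curve, and "measurements" below are synthetic.
import numpy as np
from scipy.optimize import least_squares

E = np.linspace(20.0, 150.0, 14)                  # keV bins (assumed)
S = np.exp(-(E - 60.0) ** 2 / (2 * 25.0 ** 2))    # toy polychromatic spectrum
S /= S.sum()

L = np.linspace(0.1, 3.0, 20)                     # path lengths, cm
mu_true = 0.8 * (60.0 / E) ** 1.5                 # toy attenuation, 1/cm
I_meas = (S[None, :] * np.exp(-np.outer(L, mu_true))).sum(axis=1)

def residual(mu):
    """Polychromatic Beer-Lambert forward model minus the data."""
    I_model = (S[None, :] * np.exp(-np.outer(L, mu))).sum(axis=1)
    return I_model - I_meas

fit = least_squares(residual, x0=np.full_like(E, 0.5),
                    bounds=(0.0, np.inf))          # non-negativity constraint
print("fit: ", np.round(fit.x[:5], 3))
print("true:", np.round(mu_true[:5], 3))
# The fit typically tracks mu(E) only loosely: the inversion is
# ill-conditioned, which is the core difficulty noted in the abstract.
```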
This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, so this work investigates multiple metrics, including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
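For reference, the three metrics named above follow directly from a measured runtime and average power draw, as in the sketch below; the numeric values are placeholders, not measurements from this paper.

```python
# Sketch of the energy-efficiency metrics compared in this work,
# computed from runtime and average power. Values are placeholders.
def energy_metrics(runtime_s, avg_power_w, work_units):
    energy_j = avg_power_w * runtime_s          # energy consumption, J
    perf = work_units / runtime_s               # e.g. projections per second
    return {
        "energy_J": energy_j,
        "perf_per_watt": perf / avg_power_w,            # work / J
        "energy_delay_product": energy_j * runtime_s,   # EDP = E * t
    }

cpu = energy_metrics(runtime_s=1200.0, avg_power_w=180.0, work_units=360)
gpu = energy_metrics(runtime_s=75.0, avg_power_w=250.0, work_units=360)
print(cpu, gpu, sep="\n")
```

Note that a GPU can draw more instantaneous power than a CPU yet still win on every metric, because the much shorter runtime dominates both the energy and the energy-delay product.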
This work describes a high-performance approach to radiograph (i.e., X-ray image, for the purposes of this work) simulation for arbitrary objects. The generation of radiographs is more generally known as the forward projection imaging model. The formation of radiographs is very computationally expensive and is not typically attempted at large scale for applications such as industrial radiography. The approach described in this work revolves around a single-GPU implementation that performs the attenuation calculation in a massively parallel environment. Additionally, further performance gains are realized by exploiting GPU-specific hardware. Early results show that using a single GPU can increase computational performance by three orders of magnitude for volumes of 1000³ voxels and images of 1000² pixels.
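In its simplest parallel-beam, axis-aligned form, the forward projection model reduces to a Beer-Lambert line integral per detector pixel, as in the sketch below; the GPU implementation described in the abstract handles arbitrary objects and geometries, which this sketch does not.

```python
# Sketch of radiograph simulation (forward projection) for a
# parallel-beam geometry with rays along the x axis: integrate
# attenuation per pixel, then apply the Beer-Lambert law.
import numpy as np

def radiograph(mu, voxel_size, I0=1.0):
    """mu: (nz, ny, nx) attenuation volume, 1/cm; returns (nz, ny) image."""
    path_integral = mu.sum(axis=2) * voxel_size   # line integral per pixel
    return I0 * np.exp(-path_integral)            # Beer-Lambert attenuation

vol = np.zeros((64, 64, 64))
vol[16:48, 16:48, 16:48] = 0.5    # attenuating cube (assumed test object)
img = radiograph(vol, voxel_size=0.1)
print(img.shape, img.min(), img.max())
```

Each pixel's integral is independent of every other pixel's, which is what makes the calculation map so naturally onto a massively parallel GPU.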
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction. General-purpose computing on graphics processing units (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms are in general difficult to optimize, since performance bottlenecks arise, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory-transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and the varying memory hierarchy) that will allow for additional performance improvements.
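The slice-parallel strategy described above applies the same per-slice reconstruction kernel to many slices of the volume at once. The sketch below shows one such kernel, a parallel-beam filtered backprojection, in plain NumPy for clarity; it is an illustrative stand-in for the project's CUDA implementation and omits the asynchronous-transfer and texture-memory optimizations discussed.

```python
# Sketch of the per-slice filtered backprojection kernel that the
# project parallelizes across slices on the GPU. Parallel-beam
# geometry; nearest-neighbor interpolation keeps the sketch short.
import numpy as np

def fbp_slice(sinogram, angles_deg):
    """sinogram: (n_angles, n_det); returns an (n_det, n_det) slice."""
    n_angles, n_det = sinogram.shape
    # Ram-Lak (ramp) filter applied in Fourier space, view by view
    freqs = np.fft.fftfreq(n_det)
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) *
                           np.abs(freqs), axis=1).real
    # Backproject: accumulate each filtered view over the image grid
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for view, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        idx = np.clip(t.astype(int), 0, n_det - 1)   # nearest detector bin
        recon += view[idx]
    return recon * np.pi / n_angles

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = np.random.rand(180, 128)          # placeholder projection data
print(fbp_slice(sino, angles).shape)
```

Because each slice's reconstruction is independent, many slices can be assigned to the GPU simultaneously while the next batch of projection data is transferred asynchronously, which is how the memory-transfer bottleneck can be amortized.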