For reactive burn models in hydrocodes, an equilibrium closure assumption is typically made between the unreacted and product equations of state. In the CTH [1] (not an acronym) hydrocode, density and temperature equilibrium is assumed by default, while other codes assume pressure and temperature equilibrium. The main reason for this difference is that the density and temperature assumption is computationally cheaper than the pressure and temperature one. With fitting to data, both assumptions can accurately predict reactive flow response with the various models, but model parameters from one code cannot necessarily be used directly in a different code with a different closure assumption. A new framework is introduced in CTH to allow this assumption to be changed independently for each reactive material. Comparisons of the response and computational cost of the History Variable Reactive Burn (HVRB) reactive flow model under the different equilibrium assumptions are presented.
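As an illustration of the two closure assumptions, the sketch below contrasts a density-temperature solve with a pressure-temperature solve for a partially reacted cell. The phase equations of state are toy ideal-gas-like forms chosen only so the example runs; they are not CTH's unreacted or product models, and the mixture-pressure rule is one simple assumed choice.

```python
# Illustrative sketch (not CTH source): contrast the two closure assumptions
# for a partially reacted cell with reacted mass fraction lam.
import numpy as np
from scipy.optimize import fsolve

R_U, CV_U = 287.0, 720.0     # toy unreacted-phase constants (assumed)
R_P, CV_P = 320.0, 900.0     # toy product-phase constants (assumed)

def eos_unreacted(rho, T):   # returns pressure, specific internal energy
    return rho * R_U * T, CV_U * T

def eos_products(rho, T):
    return rho * R_P * T, CV_P * T

def rho_T_equilibrium(rho, e, lam):
    """Density-temperature closure: both phases see the cell density and a
    common temperature, so only a 1-D solve for T is needed (cheap)."""
    def energy_residual(T):
        _, e_u = eos_unreacted(rho, T)
        _, e_p = eos_products(rho, T)
        return (1.0 - lam) * e_u + lam * e_p - e
    T = fsolve(energy_residual, 300.0)[0]
    p_u, _ = eos_unreacted(rho, T)
    p_p, _ = eos_products(rho, T)
    return (1.0 - lam) * p_u + lam * p_p   # mixture pressure (one simple assumed rule)

def p_T_equilibrium(rho, e, lam):
    """Pressure-temperature closure: solve for how the phases split the cell
    volume so that P_u = P_p and T_u = T_p (a 2-D nonlinear solve, costlier)."""
    def residuals(x):
        phi_u, T = x                      # unreacted volume fraction, common T
        rho_u = (1.0 - lam) * rho / phi_u
        rho_p = lam * rho / (1.0 - phi_u)
        p_u, e_u = eos_unreacted(rho_u, T)
        p_p, e_p = eos_products(rho_p, T)
        return [p_u - p_p, (1.0 - lam) * e_u + lam * e_p - e]
    phi_u, T = fsolve(residuals, [1.0 - lam, 300.0])
    p_u, _ = eos_unreacted((1.0 - lam) * rho / phi_u, T)
    return p_u

rho, e, lam = 1800.0, 2.0e6, 0.4          # cell density, specific energy, reacted fraction
print(rho_T_equilibrium(rho, e, lam), p_T_equilibrium(rho, e, lam))
```

The density-temperature closure requires only a one-dimensional solve for the common temperature, while the pressure-temperature closure requires a coupled nonlinear solve per cell, which is the source of the cost difference noted above.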
The review was conducted on May 9-10, 2016 at the University of Utah. Overall the review team was impressed with the work presented and found that the CCMSC had met or exceeded the Year 2 milestones. Specific details, comments and recommendations are included in this document.
The review team convened at the University of Utah March 7-8, 2018, to review the Carbon-Capture Multidisciplinary Simulation Center (CCMSC) funded under the second Predictive Science Academic Alliance Program (PSAAP II). Center leadership and researchers made very clear and informative presentations, accurately portraying their work and successes while candidly discussing their concerns and known areas in need of improvement.
CTH is an Eulerian hydrocode developed by Sandia National Laboratories (SNL) to solve a wide range of shock wave propagation and material deformation problems. Adaptive mesh refinement is used to improve efficiency for problems with a wide range of spatial scales. The code has a history of running on a variety of computing platforms ranging from desktops to massively parallel distributed-data systems. For the Trinity Phase 2 Open Science campaign, CTH was used to perform mesoscale simulations of the hypervelocity penetration of granular SiC powders. The simulations were compared to experimental data. A scaling study of CTH up to 8192 KNL nodes was also performed, and several improvements were made to the code to improve its scalability.
Sandia has invested heavily in scientific/engineering application development and in the research, development, and deployment of large-scale HPC platforms to support the computational needs of these applications. As application developers continually expand the capabilities of their software and spend more time on performance tuning of applications for these platforms, HPC platform resources are at a premium as they are a heavily shared resource serving the varied needs of many users. To ensure that the HPC platform resources are being used efficiently and perform as designed, it is necessary to obtain reliable data on resource utilization that will allow us to investigate the occurrence, severity, and causes of performance-affecting contention between applications. The work presented in this paper was an initial step to determine whether resource contention can be understood and minimized through monitoring, modeling, planning, and infrastructure. This paper describes the set of metric definitions, identified in this research, that can be used as meaningful and potentially actionable indicators of performance-affecting contention between applications. These metrics were verified using the observed slowdown of IOR, IMB, and CTH in operating scenarios that forced contention. This paper also describes system/application monitoring activities that are critical to distilling vast amounts of data into quantities that hold the key to understanding an application's performance under production conditions and that will ultimately aid Sandia's efforts to succeed in extreme-scale computing.
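The sketch below illustrates the kind of slowdown indicator described above: a benchmark's contended runtime relative to its uncontended baseline. The runtimes, threshold, and function names are hypothetical placeholders standing in for the monitored IOR, IMB, and CTH data, not the paper's production tooling.

```python
# Minimal sketch (assumed structure): quantify contention as the relative
# slowdown of a benchmark's runtime against its uncontended baseline.
from statistics import median

def slowdown(contended_runtimes, baseline_runtimes):
    """Relative slowdown: values above zero mean the contended runs took longer."""
    return median(contended_runtimes) / median(baseline_runtimes) - 1.0

# Hypothetical runtimes in seconds for an I/O benchmark run with and without
# a competing job stressing the shared parallel file system.
baseline = [118.2, 120.5, 119.1]
contended = [161.7, 158.3, 170.0]

s = slowdown(contended, baseline)
if s > 0.10:   # example actionability threshold; a real cutoff would be site-specific
    print(f"performance-affecting contention suspected: {s:.0%} slowdown")
```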
Computational simulation of structures subjected to blast loadings requires integrating computational shock physics for the blast with structural response, including the potential for pervasive failure. Current methodologies for this problem space are problematic in terms of efficiency and solution quality. This report details the development of several coupling algorithms for thin shells, with an emphasis on rigorous verification where possible and comparisons to existing methodologies in use at Sandia.
The Method of Manufactured Solutions (MMS) is used to evaluate the Material Point Method (MPM) implemented in CTH, i.e., Markers. MMS is a verification approach in which a desired deformation field is prescribed and the forcing function required to achieve the prescribed deformation is determined analytically. The calculated forcing function is applied within CTH Markers to determine whether the correct displacement field is recovered. For the cases examined in this study, a ring is subjected to a finite, angle-independent, spatially varying body force, superposed with a rigid-body rotation. This test assesses the solid mechanics response of the MPM within CTH for large deformation problems.
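The sketch below walks through the MMS workflow in a deliberately simplified one-dimensional, small-strain linear-elastic setting (the study itself uses a two-dimensional ring with a radially varying body force plus rigid rotation): a displacement field is manufactured, the body force required to make it an exact solution is derived analytically, and that force would then be applied to the markers to check whether the prescribed field is recovered.

```python
# Minimal 1-D illustration of the MMS workflow; the field and material model
# are assumptions chosen to keep the example short, not those of the study.
import sympy as sp

x, t = sp.symbols("x t")
rho, E, A, k, w = sp.symbols("rho E A k omega", positive=True)

# 1. Prescribe (manufacture) the displacement field the code should recover.
u = A * sp.sin(k * x) * sp.sin(w * t)

# 2. Derive the body force that makes it an exact solution of the momentum
#    balance rho*u_tt = d(sigma)/dx + rho*b for small-strain linear elasticity.
sigma = E * sp.diff(u, x)
b = sp.simplify(sp.diff(u, t, 2) - sp.diff(sigma, x) / rho)
print(b)   # analytic forcing: A*(E*k**2 - rho*omega**2)*sin(k*x)*sin(omega*t)/rho

# 3. In the verification run, b(x, t) is applied as the body force on the
#    markers and the computed displacement is compared against u(x, t), e.g.
#    by reporting an error norm as the mesh/marker resolution is refined.
```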
11th World Congress on Computational Mechanics, WCCM 2014, 5th European Conference on Computational Mechanics, ECCM 2014 and 6th European Conference on Computational Fluid Dynamics, ECFD 2014
The modeling of failure in a finite volume shock physics computational code poses many challenges. We recently improved our implementation of the Material Point Method (MPM) in our finite volume shock physics code CTH by adding Convective Particle Domain Interpolation (CPDI). The CPDI technique improves the accuracy and efficiency of the MPM for problems involving large tensile deformations and rotations. CPDI keeps the particles in communication with each other by expanding the interpolation domain relative to the generalized MPM method. This in turn prevents numerical fracture, which occurs when particles lose communication with one another while undergoing large tensile deformation. This work focuses on a comparison of the abilities of CPDI and generalized MPM in predicting the penetration of steel into aluminium. Simulations of the experiments will be performed to quantitatively compare the two numerical techniques.
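The sketch below illustrates the CPDI construction in two dimensions: each particle carries a parallelogram domain, defined by half-vectors r1 and r2 that deform with the particle, and its weight at a grid node is the average of the standard bilinear grid shape function evaluated at the domain's four corners. This is an illustration of the published CPDI idea, not an excerpt from CTH, and it omits the corresponding gradient weights.

```python
# Sketch of CPDI particle-domain weights in 2-D on a uniform grid of spacing h.
import numpy as np

def hat(xi):
    """1-D linear (tent) grid shape function at normalized offset xi = (x - x_node)/h."""
    return max(0.0, 1.0 - abs(xi))

def grid_shape(node_x, corner_x, h):
    """Bilinear grid shape function N_i evaluated at a domain corner position."""
    return hat((corner_x[0] - node_x[0]) / h) * hat((corner_x[1] - node_x[1]) / h)

def cpdi_weight(node_x, particle_x, r1, r2, h):
    """Average N_i over the 4 corners of the particle's parallelogram domain."""
    corners = [particle_x + s1 * r1 + s2 * r2
               for s1 in (-1.0, 1.0) for s2 in (-1.0, 1.0)]
    return sum(grid_shape(node_x, c, h) for c in corners) / 4.0

# Example: as a particle's domain stretches (r1 grows with the deformation),
# its corners still overlap nearby nodes, avoiding the loss of communication
# that causes numerical fracture with point-based generalized MPM weights.
h = 1.0
node = np.array([2.0, 1.0])
xp = np.array([1.4, 1.0])
r1, r2 = np.array([0.9, 0.0]), np.array([0.0, 0.25])   # stretched domain half-vectors
print(cpdi_weight(node, xp, r1, r2, h))
```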
11th World Congress on Computational Mechanics, WCCM 2014, 5th European Conference on Computational Mechanics, ECCM 2014 and 6th European Conference on Computational Fluid Dynamics, ECFD 2014
Recently the Lagrangian Material Point Method (MPM) [1] has been integrated into the Eulerian finite volume shock physics code CTH [2] at Sandia National Laboratories. CTH has capabilities for adaptive mesh refinement (AMR), multiple materials, and numerous material models for equation of state, strength, and failure. In order to parallelize the MPM in CTH, two different approaches were tested. The first was a ghost particle concept, in which the MPM particles are mirrored onto neighboring processors in order to correctly assemble the mesh boundary values on the grid. The second approach exchanges the summed mesh values at processor boundaries without the use of ghost particles. Both methods have distinct advantages for parallelization. These parallelization approaches were tested for both strong and weak scaling. This paper compares the parallel scaling efficiency and memory requirements of both approaches.
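The sketch below illustrates the second approach, exchanging summed mesh values at processor boundaries rather than mirroring ghost particles, for a one-dimensional domain decomposition written with mpi4py. The decomposition, particle generation, and variable names are assumptions made for the example, not the CTH implementation.

```python
# Sketch of the "summed mesh values" boundary exchange for parallel MPM.
# Run with e.g.: mpirun -np 4 python mpm_halo_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nodes_per_rank = 8                       # local cells; last node is shared with the right neighbor
mass = np.zeros(nodes_per_rank + 1)      # nodal mass accumulated from local particles

# Particle-to-grid scatter of local particle masses (toy particles, linear weights).
particles = np.random.default_rng(rank).uniform(0.0, nodes_per_rank, 20)
for xp in particles:
    i = int(xp)
    w = xp - i
    mass[i] += (1.0 - w)
    mass[i + 1] += w

# Exchange and accumulate the partial sums on the shared boundary nodes so each
# rank ends up with the full nodal mass, without ever communicating particles.
left, right = rank - 1, rank + 1
if right < size:
    recv = np.zeros(1)
    comm.Sendrecv(mass[-1:], dest=right, recvbuf=recv, source=right)
    mass[-1] += recv[0]
if left >= 0:
    recv = np.zeros(1)
    comm.Sendrecv(mass[:1], dest=left, recvbuf=recv, source=left)
    mass[0] += recv[0]

print(f"rank {rank}: boundary nodal masses {mass[0]:.3f}, {mass[-1]:.3f}")
```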