The visualization community has invested decades of research and development into producing large-scale production visualization tools. Although in situ is a paradigm shift for large-scale visualization, many of the same algorithms and operations apply regardless of whether the visualization is run post hoc or in situ. Thus, there is great benefit in taking the large-scale code originally designed for post hoc use and leveraging it in situ. This chapter describes two in situ libraries, Libsim and Catalyst, that are based on the mature visualization tools VisIt and ParaView, respectively. Because they are based on fully featured visualization packages, they each provide a wealth of features. For each of these systems we outline how the simulation and visualization software are coupled, what the runtime behavior and communication between the components look like, and how the underlying implementation works. We also provide use cases demonstrating the systems in action. Both of these in situ libraries, as well as the underlying tools they are based on, are freely available as open-source software. The overviews in this chapter provide a toehold for the practical application of in situ visualization.
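The coupling just described follows a common pattern: the simulation initializes the in situ library once, offers its data at each time step, and finalizes at shutdown. As a concrete illustration, below is a condensed sketch in the style of the classic C++ adaptor used with ParaView Catalyst (the "Catalyst 1" API); the BuildVTKGrid() helper is hypothetical, standing in for simulation-specific code that wraps solver arrays in a VTK data set, and exact class names and signatures vary by ParaView version.

    // Sketch of a Catalyst adaptor: initialize / co-process / finalize.
    #include <vtkCPDataDescription.h>
    #include <vtkCPInputDataDescription.h>
    #include <vtkCPProcessor.h>
    #include <vtkCPPythonScriptPipeline.h>
    #include <vtkDataObject.h>

    vtkDataObject* BuildVTKGrid();  // hypothetical: wraps simulation arrays in a VTK data set

    static vtkCPProcessor* Processor = nullptr;

    void CatalystInitialize(const char* script)
    {
      Processor = vtkCPProcessor::New();
      Processor->Initialize();
      // A Python script exported from the ParaView GUI defines the analysis pipeline.
      vtkCPPythonScriptPipeline* pipeline = vtkCPPythonScriptPipeline::New();
      pipeline->Initialize(script);
      Processor->AddPipeline(pipeline);
      pipeline->Delete();
    }

    void CatalystCoProcess(double time, vtkIdType timeStep)
    {
      vtkCPDataDescription* description = vtkCPDataDescription::New();
      description->AddInput("input");  // named channel into the pipeline
      description->SetTimeData(time, timeStep);
      // Ask Catalyst whether any pipeline wants data at this step; if not, the
      // simulation skips building the VTK grid entirely.
      if (Processor->RequestDataDescription(description) != 0)
      {
        description->GetInputDescriptionByName("input")->SetGrid(BuildVTKGrid());
        Processor->CoProcess(description);
      }
      description->Delete();
    }

    void CatalystFinalize()
    {
      Processor->Finalize();
      Processor->Delete();
      Processor = nullptr;
    }

Libsim follows an analogous structure but is pull-based: the simulation registers data-access callbacks, and VisIt requests data when a user connects or when the simulation triggers an update.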
The field of visualization encompasses a wide range of techniques, from infographics to isosurfaces. An important subfield called "scientific visualization" is specifically dedicated to data sets with spatial components, i.e., (X, Y, Z) locations. Furthermore, this subfield's name is inspired by the fact that the data in question often come from the sciences, e.g., physics simulations or sensor networks.
International Journal of High Performance Computing Applications
Childs, Hank; Ahern, Sean D.; Ahrens, James; Bauer, Andrew C.; Bennett, Janine C.; Bethel, E.W.; Bremer, Peer-Timo; Brugger, Eric; Cottam, Joseph; Dorier, Matthieu; Dutta, Soumya; Favre, Jean M.; Fogal, Thomas; Frey, Steffen; Garth, Christoph; Geveci, Berk; Godoy, William F.; Hansen, Charles D.; Harrison, Cyrus; Insley, Joseph; Johnson, Chris R.; Klasky, Scott; Knoll, Aaron; Kress, James; Laros, James H.; Lofstead, Gerald F.; Ma, Kwan-Liu; Malakar, Preeti; Meredith, Jeremy; Moreland, Kenneth D.; Navratil, Paul; O'Leary, Patrick; Parashar, Manish; Pascucci, Valerio; Patchett, John; Peterka, Tom; Petruzza, Steve; Pugmire, David; Rasquin, Michel; Rizzi, Silvio; Rogers, David M.; Sane, Sudhanshu; Sauer, Franz; Sisneros, Johnny R.; Shen, Han-Wei; Usher, Will; Vickery, Rhonda; Vishwanath, Venkatram; Wald, Ingo; Wang, Ruonan; Weber, Gunther H.; Whitlock, Brad; Wolf, Matthew; Yu, Hongfeng; Ziegeler, Sean B.
The term “in situ processing” has evolved over the last decade to mean both a specific strategy for visualizing and analyzing data and an umbrella term for a processing paradigm. The resulting confusion makes it difficult for visualization and analysis scientists to communicate with each other and with their stakeholders. To address this problem, a group of over 50 experts convened with the goal of standardizing terminology. This paper summarizes their findings and proposes a new terminology for describing in situ systems. An important finding from this group was that in situ systems are best described via multiple, distinct axes: integration type, proximity, access, division of execution, operation controls, and output type. The paper discusses these axes, evaluates existing systems against them, and explores how currently used terms relate to them.
The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on Exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on accelerator processors such as GPUs. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, these projects depend on this ECP effort to be able to make effective use of ECP architectures. One of the biggest recent changes in high-performance computing is the increasing use of accelerators. Accelerators contain processing cores that are individually inferior to a core in a typical CPU, but these cores are replicated and grouped such that their aggregate execution provides a very high computation rate at much lower power. Current and future CPU processors also require much more explicit parallelism. Each successive version of the hardware packs more cores into each processor, and technologies like hyper-threading and vector operations require even more parallel processing to leverage each core's full potential. VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-m supports the fine-grained concurrency for data analysis and visualization algorithms required to drive extreme-scale computing by providing abstract models for data and execution that can be applied to a variety of algorithms across many different processor architectures. The ECP/VTK-m project is building up the VTK-m codebase with the necessary visualization algorithm implementations that run across the varied hardware platforms to be leveraged at the Exascale. We will be working with other ECP projects, such as ALPINE, to integrate the new VTK-m code into production software to enable visualization on our HPC systems.
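As a concrete illustration of this execution model, the following is a minimal sketch of a VTK-m "map" worklet, assuming a VTK-m 1.x-style API: the same functor body compiles for serial CPUs, multi-core CPUs, and GPUs, and the ControlSignature declares the data access pattern so the runtime can schedule the computation on whatever device is available.

    // Minimal VTK-m worklet: compute the magnitude of each vector in a field.
    #include <vtkm/VectorAnalysis.h>
    #include <vtkm/cont/ArrayHandle.h>
    #include <vtkm/cont/Invoker.h>
    #include <vtkm/worklet/WorkletMapField.h>

    struct Magnitude : vtkm::worklet::WorkletMapField
    {
      // One input field value in, one output field value out, per invocation.
      using ControlSignature = void(FieldIn vectors, FieldOut magnitudes);
      using ExecutionSignature = _2(_1);

      template <typename VecType>
      VTKM_EXEC typename VecType::ComponentType operator()(const VecType& v) const
      {
        return vtkm::Magnitude(v);
      }
    };

    void ComputeMagnitudes(const vtkm::cont::ArrayHandle<vtkm::Vec3f>& vectors,
                           vtkm::cont::ArrayHandle<vtkm::FloatDefault>& magnitudes)
    {
      // The Invoker picks an available device (serial, TBB, CUDA, ...) and maps
      // the worklet over the array in parallel.
      vtkm::cont::Invoker invoke;
      invoke(Magnitude{}, vectors, magnitudes);
    }

Because the worklet expresses only per-element work, the fine-grained concurrency is supplied by the framework rather than hand-written for each processor architecture.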
For each of these tasks, implementation started in a private topic branch. That branch was later submitted as a merge request, where the code was run through regression tests across multiple test platforms. The merge requests were also reviewed and approved by human reviewers. After any necessary modifications were made, the code was merged to VTK-m's master branch. Subsequently, documentation was written for the VTK-m User's Guide.
Scientific computing is no longer purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis. This report reviews the accomplishments of the XVis project to prepare scientific visualization for Exascale computing.