Welcome to my home page, where you will find a list of my current and past activities. I am currently a Principal Member of Technical Staff at Sandia National Laboratories. I received my Ph.D. in Computer Science from the University of New Mexico in 2004. Since 2000 I have worked in Sandia's Scalable Analysis and Visualization department, which is part of the Computation, Computers, Information, and Mathematics Center. During that time I have focused on research and development in large-scale visualization.
This page contains posts on most of my professional technical work. I also occasionally write blog posts that are less formal but usually technically relevant.
Dax: I am leading the Dax project for next-generation visualization tools. Dax provides a framework for designing visualization algorithms with massive amounts of concurrency. The first iteration of this project is focusing on GPU accelerators with a transition plan for supporting future architectures.
SDAV: I am a co-PI for the SciDAC Scalable Data Management, Analysis, and Visualization Institute (SDAV for short). SDAV's mission is to work actively with application teams to help them achieve breakthrough science and to provide technical solutions in data management, analysis, and visualization that are broadly applicable in the computational science community.
ParaView: I am an active participant in ParaView development. ParaView is a general-purpose scientific visualization tool designed to analyze extremely large data sets using distributed-memory computing resources. It can run on supercomputers to analyze petascale data sets as well as on laptops for smaller data. Our recent work involves running ParaView in situ with simulations.
IceT: I developed the IceT parallel rendering library and continue to maintain it. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays.
UseLATEX.cmake: I created a set of build scripts, called UseLATEX.cmake, for building LaTeX documents with CMake; I originally wrote them to build my dissertation and continue to maintain and use them.
I also maintain a complete list of publications and talks.
Designing the appropriate mapping from values to colors requires a mix of expertise in visualization, color, and perception. Most visualization users and many visualization experts lack the necessary background to design effective color maps. Consequently, many visualizations default to a color map infamous for its ineffectiveness: the rainbow color map. This work provides a method to easily generate a continuum of colors that yields effective and perceptually sound color mapping. In particular, the provided cool-to-warm color map makes a good general-purpose default.
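As a rough illustration of the idea (not the paper's actual method, which interpolates in a perceptual color space; the endpoint colors below are assumptions for the sketch), a diverging cool-to-warm map can be approximated by piecewise-linear interpolation through a light midpoint:

```python
# Hypothetical sketch of a diverging cool-to-warm color map using simple
# piecewise-linear RGB interpolation. The real technique interpolates in a
# perceptual color space; these endpoint colors are illustrative only.

COOL = (59, 76, 192)     # cool blue endpoint (assumed for the sketch)
WHITE = (221, 221, 221)  # light, unsaturated midpoint
WARM = (180, 4, 38)      # warm red endpoint (assumed for the sketch)

def lerp(a, b, t):
    """Linearly interpolate between two RGB triples, rounding to ints."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def cool_to_warm(value, vmin=0.0, vmax=1.0):
    """Map a scalar in [vmin, vmax] to an RGB triple along the diverging map."""
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range data to the endpoints
    if t < 0.5:
        return lerp(COOL, WHITE, t * 2)
    return lerp(WHITE, WARM, (t - 0.5) * 2)
```

The light midpoint is what distinguishes a diverging map from a straight blue-to-red ramp: it keeps the luminance varying smoothly instead of dipping through dark purple.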
As accelerator processors such as GPUs become more prevalent in HPC, the need to run visualization algorithms that process and generate mesh topologies using massive numbers of execution threads becomes ever more pressing. Breaking a mesh into constituent elements to feed these threads is straightforward. However, many algorithms require coordinating and combining results generated in disparate threads. This work provides a basic programming pattern that can be applied to numerous such algorithms. This critical technique is demonstrated on a variety of algorithms including marching cubes, subdivision, and dual grid generation.
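The coordination problem can be sketched as a count / exclusive-scan / write sequence, shown here in serial Python for clarity; on an accelerator each stage would run as a data-parallel kernel, and the function names are illustrative rather than taken from any particular library:

```python
from itertools import accumulate

def count_scan_write(cells, count, generate):
    """Sketch of the scan-based output-allocation pattern (illustrative names).

    count(cell)    -> number of output elements the cell will produce
    generate(cell) -> the output elements themselves

    The exclusive prefix sum is the key step: it lets every input element
    write its variable-sized output to a unique location with no contention.
    """
    # 1. Each input element independently counts its output size.
    counts = [count(c) for c in cells]
    # 2. An exclusive prefix sum of the counts gives each element its offset.
    offsets = [0] + list(accumulate(counts))[:-1]
    total = sum(counts)
    # 3. Each element writes its results starting at its own offset.
    output = [None] * total
    for cell, off in zip(cells, offsets):
        for i, item in enumerate(generate(cell)):
            output[off + i] = item
    return output
```

In a marching-cubes-like algorithm, for example, `count` would classify each cell to report how many triangles it will emit, and `generate` would produce them; cells emitting zero triangles simply occupy no space in the output.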
The most common abstraction used by visualization libraries and applications today is what is known as the visualization pipeline. The visualization pipeline provides a mechanism to encapsulate algorithms and then couple them together in a variety of ways. The visualization pipeline has been in existence for over twenty years, and over this time many variations and improvements have been proposed. This paper provides a literature review of the most prevalent features of visualization pipelines and some of the most recent research directions.
In this work we demonstrate running the scalable rendering library IceT at scale on a large supercomputer. Along the way we introduce several simple-to-implement but powerful sort-last parallel rendering modifications that greatly improve efficiency, including minimal-copy image interlacing for better load balancing and telescoping compositing for arbitrary job sizes. Visit the IceT project page for access to the software, documentation, and further papers and information on scalable rendering.
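As a hedged illustration of the telescoping idea (only the grouping step; the per-group compositing and the folding of smaller groups' results into larger ones are elided), an arbitrary process count can be partitioned into power-of-two groups, since algorithms like binary-swap compositing work best on power-of-two counts:

```python
def telescope_groups(num_procs):
    """Sketch: partition process ranks into power-of-two groups, largest first.

    This is only the decomposition step of a telescoping scheme: each group
    would composite internally (e.g. with binary swap), and each smaller
    group's result would then be folded into the next larger group's image.
    """
    groups, start, remaining = [], 0, num_procs
    while remaining:
        # Largest power of two that fits in the remaining process count.
        size = 1 << (remaining.bit_length() - 1)
        groups.append(list(range(start, start + size)))
        start += size
        remaining -= size
    return groups
```

For example, 11 processes would split into groups of 8, 2, and 1 rather than forcing the whole job into an awkward non-power-of-two composite.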
This is, to the best of my knowledge, the first publication describing an implementation of the fast Fourier transform algorithm on a modern graphics processor. The work predates general-purpose GPU languages like CUDA and OpenCL, so this implementation uses an older shader language called Cg. Since this paper was published there have been many improved implementations of the FFT algorithm, but the paper still provides some index manipulations and real-transform symmetry encodings that can be useful.
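The real-transform symmetry in question is the standard conjugate symmetry of a real signal's DFT, which means only about half the spectrum needs to be computed or stored. A minimal, naive demonstration (deliberately O(n^2); this is not the paper's GPU implementation):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(n^2); enough to show the symmetry."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

# For purely real input, X[n-k] == conjugate(X[k]), so the second half of
# the spectrum is redundant -- the property exploited when packing real
# transforms into complex ones.
signal = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]
spectrum = dft(signal)
n = len(signal)
for k in range(1, n):
    assert abs(spectrum[n - k] - spectrum[k].conjugate()) < 1e-9
```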
Partial Pre-Integration is a technique to accelerate the radiative transfer computation used in volume rendering. The technique simplifies the complicated equations by collecting integrands with like parameters and evaluating them with universally applicable tables. (Note that Partial Pre-Integration is not the same as Pre-Integration, which precomputes all possible integrals in a parameter-specific table.)
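To make the distinction concrete, here is a hypothetical sketch of a classic pre-integration table (the parameter-specific kind, not partial pre-integration), using a toy one-component transfer function:

```python
def build_preintegration_table(transfer_fn, resolution=64, steps=16):
    """Sketch of classic pre-integration (NOT partial pre-integration).

    transfer_fn(s) -> emitted intensity for scalar s. This is a toy
    one-component model; real tables store color and opacity. The table is
    indexed by the scalar values at the front and back of a ray segment, so
    it must be rebuilt whenever the transfer function changes -- exactly the
    dependence partial pre-integration avoids by tabulating universal
    integrands instead.
    """
    table = [[0.0] * resolution for _ in range(resolution)]
    for f in range(resolution):
        for b in range(resolution):
            sf = f / (resolution - 1)
            sb = b / (resolution - 1)
            # Midpoint-rule integration of transfer_fn along the segment.
            total = 0.0
            for i in range(steps):
                s = sf + (sb - sf) * (i + 0.5) / steps
                total += transfer_fn(s) / steps
            table[f][b] = total
    return table
```

At render time a ray segment would look up `table[f][b]` from its front and back scalar values instead of integrating on the fly; the cost is that this whole table is tied to one specific transfer function.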
"The trouble with quotes over the Internet is that you never know if they're genuine."