Research Interests

My current research interests include numerical analysis, multiscale modeling, scientific machine learning, nonlocal models, numerical linear algebra, and linear solvers. A description of some current and past projects appears below.

Physics-Informed Machine Learning

I was the Sandia Co-PI for the Physics-Informed Learning Machines (PhILMS) project, led by George Karniadakis (PNNL / Brown University). This project conducted research at the interface of mathematics, physics, data science, and deep learning, developing stochastic multiscale modeling frameworks that combine emerging deep learning techniques with physical laws, including thermodynamics, and multifidelity data for forward and inverse multiscale problems. The project synthesized physics-based and data-driven tools and approaches, including nonlocal operators, multifidelity data and information fusion, deep neural networks (DNNs), meshfree methods, uncertainty propagation, and stochasticity, to simulate complex multiscale systems. This large multi-institution project was funded by the DOE Office of Advanced Scientific Computing Research (ASCR) Applied Mathematics Program as one of the Mathematical Multifaceted Integrated Capabilities Centers (MMICCs).

Mesoscopic Material Modeling

I was the Sandia Co-PI for the Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4) project, led by George Karniadakis (PNNL / Brown University). This project focused on developing rigorous mathematical foundations for understanding and controlling fundamental mechanisms in mesoscale processes to enable scalable synthesis of complex materials, through the design of efficient modeling methods and corresponding scalable algorithms. This large multi-institution project was funded by the DOE Office of Advanced Scientific Computing Research (ASCR) Applied Mathematics Program as one of the Mathematical Multifaceted Integrated Capabilities Centers (MMICCs).


Peridynamics

The peridynamic theory of continuum mechanics is a nonlocal extension of classical mechanics that allows direct interactions between points separated by a finite distance. The maximum interaction distance between any two points defines a length scale, making peridynamics suitable for multiscale modeling. Peridynamics is based upon integral equations and was developed to permit discontinuous media (e.g., fracture and fragmentation). Peridynamics was first proposed by Stewart Silling.
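In the bond-based formulation originally proposed by Silling, the equation of motion at a point x is commonly written as:

```latex
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}}
      \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
                     \mathbf{x}'-\mathbf{x}\big)\, dV_{\mathbf{x}'}
    + \mathbf{b}(\mathbf{x},t)
```

where H_x is the neighborhood of x within the horizon, f is the pairwise force function, and b is a body force density. No spatial derivatives of the displacement appear, which is why discontinuities pose no formal difficulty.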

Computational Peridynamics

Computational peridynamics is a specialized branch of computational mechanics, and is an active area of research and development. Known optimal methods and algorithms for classical (local) computational mechanics frequently do not map directly onto the nonlocal setting. I am interested in the development of algorithms and computational methods for nonlocal models.

A particular discretization of the peridynamic model has the same computational structure as classical molecular dynamics. I am the principal author of the peridynamic model implemented within Sandia’s massively parallel molecular dynamics code, LAMMPS. This is the only open-source peridynamic code, and was developed jointly with Pablo Seleson and Steve Plimpton. Visit my software page for more information.

I am a developer of Sandia's peridynamics code, Peridigm. Peridigm is based upon an agile components methodology to enable massively parallel multiphysics peridynamic simulations. Peridigm provides for optimization, UQ, error estimation, and calibration through an interface to Sandia's DAKOTA project. This is joint work with Dave Littlewood, John Mitchell, and Stewart Silling.


Peridynamics as a Multiscale Model

Peridynamics is a nonlocal formulation of continuum mechanics. The maximum interaction distance between any two points defines a length scale, making peridynamics suitable for multiscale modeling. Much of my work has been in the development of peridynamics as a continualization of molecular dynamics.


Linear Solvers

My research in iterative methods focuses primarily on the development of robust solvers and preconditioners for ill-conditioned linear systems.

Scalable Solvers for Fluid-DFTs

Fluid density functional theories (Fluid-DFTs) enable modeling and simulation of a wide range of applications, including fluids at interfaces, colloidal fluids, wetting, porous media, and biological mechanisms at the cellular level. Fluid-DFT formulations yield highly nonlinear problems that usually require continuation algorithms wrapped around a fully coupled Newton solver. Because most of the computation time is spent in the linear solver, and because problem scalability is ultimately determined by the scalability of the linear solver, scalable preconditioned iterative solvers are a critical capability for Fluid-DFT problems and are key to enabling realistic solutions of important problems.

I am a developer of Sandia’s Tramonto Fluid-DFT code. My work is funded by ASCR, in collaboration with David Day, Amalie Frischknecht, Mike Heroux, and Laurie Frink.

Krylov Subspace Recycling

Many problems in engineering and physics require the solution of a long sequence of linear systems. We can reduce the cost of solving subsequent systems in the sequence by recycling information from previous systems. I have developed a family of solvers based upon a technique known as Krylov subspace recycling. The Belos package in Trilinos currently contains a recycling GMRES solver (GCRODR) and a recycling CG solver (RCG). For some problems, recycling cuts the iteration count required to solve a linear system by a factor of two.
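One flavor of the idea can be sketched in a few lines (this is an illustration, not the Belos/GCRODR implementation): after solving the first system, keep a few approximate eigenvectors of A as a recycle space and use a Galerkin projection onto it to build the initial guess for later right-hand sides, so that the starting residual is orthogonal to the recycled subspace.

```python
# Minimal sketch of Krylov subspace recycling via a projected initial
# guess. Here the recycle space W is taken to be exact low eigenvectors,
# standing in for the approximate invariant subspace a recycling solver
# would harvest from earlier Krylov iterations.
import numpy as np

def cg(A, b, x0, tol=1e-8, maxiter=10000):
    """Plain conjugate gradients; returns (solution, iteration count)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    for k in range(maxiter):
        if np.sqrt(rs) <= tol * nb:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxiter

n = 400
# 1-D Laplacian: SPD and moderately ill conditioned.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
_, V = np.linalg.eigh(A)
W = V[:, :8]                     # recycle space: lowest 8 eigenvectors

rng = np.random.default_rng(0)
for sys in range(3):             # a sequence of right-hand sides
    b = rng.standard_normal(n)
    _, it_plain = cg(A, b, np.zeros(n))
    # Galerkin projection onto span(W) gives the recycled initial guess;
    # the starting residual b - A x0 is then orthogonal to span(W).
    x0 = W @ np.linalg.solve(W.T @ A @ W, W.T @ b)
    x_r, it_recycled = cg(A, b, x0)
    print(f"system {sys}: {it_plain} iterations plain, {it_recycled} recycled")
```

Deflating the lowest modes shrinks the effective condition number seen by CG, which is where the iteration-count reduction comes from.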

For Matlab and Trilinos implementations of recycling solvers, please see my software page.


Multiscale Modeling

Multiscale modeling refers to the use of models capturing information at multiple spatial and temporal scales. Such models are particularly important when, for example, microscale phenomena dictate macroscale response.

Peridynamics as a Multiscale Model

The maximum interaction distance between any two points in a peridynamic model induces a length scale, making peridynamics suitable for multiscale modeling. See above for more.

Atomistic-to-Continuum Coupling

The deformation and failure of many engineering materials are inherently multiscale processes. Models for such processes frequently call for decomposition of the material domain into atomistic and continuum subdomains, where the continuum subdomain is modeled via a finite element analysis. This coupling enables a continuum calculation to be performed over the majority of a domain while limiting the more expensive atomistic simulation to some small subset of the domain. The treatment of the interface between these subdomains is what distinguishes one atomistic-to-continuum coupling method from another. Along with Santiago Badia, Pavel Bochev, Jacob Fish, Max Gunzburger, Rich Lehoucq, and Mark Shephard, I developed an atomistic-to-continuum coupling method called blending.
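Schematically (notation mine, not the exact formulation of that work), blending weighs the atomistic and continuum energies with a function β that transitions from 0 to 1 across a bridge region:

```latex
E(\mathbf{u})
  = \sum_{\alpha} \big(1-\beta(\mathbf{x}_\alpha)\big)\, E^{a}_{\alpha}(\mathbf{u})
  + \int_{\Omega} \beta(\mathbf{x})\, W\big(\nabla\mathbf{u}(\mathbf{x})\big)\, d\mathbf{x}
```

where the E^a_α are per-atom energies and W is the continuum stored-energy density, so each model contributes fully only in its own subdomain.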

Recognizing atomistic-to-continuum coupling as a form of heterogeneous domain decomposition, it is natural to apply conventional domain decomposition methods to this problem. I developed a method for atomistic-to-continuum coupling based upon alternating Schwarz.
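The alternating Schwarz idea itself is easy to illustrate on a toy problem (the setup is mine, not the atomistic-to-continuum formulation): two overlapping subdomains of a 1-D Poisson problem are solved in turn, each taking its interface value from the other's latest iterate.

```python
# Classical alternating Schwarz on -u'' = 1 over [0, 1], u(0) = u(1) = 0,
# with two overlapping subdomains solved alternately by direct solves.
import numpy as np

n = 101                          # grid points on [0, 1]
h = 1.0 / (n - 1)
f = np.ones(n)                   # right-hand side of -u'' = 1
u = np.zeros(n)                  # initial iterate

a, b = 40, 60                    # overlapping subdomains [0, b] and [a, n-1]

def solve_poisson(f_loc, left, right):
    """Direct solve of -u'' = f on a subgrid with Dirichlet data."""
    m = len(f_loc)
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f_loc.copy()
    rhs[0] += left / h**2        # fold boundary data into the RHS
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

for sweep in range(50):
    # Left subdomain: interface value at index b from the current iterate.
    u[1:b] = solve_poisson(f[1:b], 0.0, u[b])
    # Right subdomain: interface value at index a from the fresh update.
    u[a+1:n-1] = solve_poisson(f[a+1:n-1], u[a], 0.0)

x = np.linspace(0.0, 1.0, n)
exact = 0.5 * x * (1.0 - x)      # exact solution of -u'' = 1
```

The overlap is what drives convergence: the wider the overlap, the faster the interface errors contract from sweep to sweep.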

With Greg Wagner, Reese Jones, and Jeremy Templeton I also developed a methodology for atomistic-to-continuum thermal coupling. This was deployed in LAMMPS by Reese Jones, Jeremy Templeton, and Jon Zimmerman.


Domain Decomposition

Domain decomposition is the practice of splitting a mathematical and computational problem into coupled problems on smaller subdomains that partition the original domain. It is also a natural way to map a computational problem onto a parallel computer.


In the case where two domains sharing a common curved interface are meshed independently, the domains will generally have an inconsistent description of that boundary. A minimal requirement for any proposed mechanism to tie these two meshes together is that the resulting finite element formulation pass a first-order patch test, whether or not the two discretizations of the shared boundary coincide. Along with Pavel Bochev and Louis Romero, I developed a novel computationally efficient Lagrange-multiplier method for tying together independently meshed subdomains with non-coincident contact boundaries in two dimensions.
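In generic terms (this is the standard Lagrange-multiplier structure, not the specific formulation of that paper), enforcing a tying constraint B₁u₁ + B₂u₂ = g between the two subdomain discretizations leads to a saddle-point system:

```latex
\begin{pmatrix}
K_1 & 0   & B_1^T \\
0   & K_2 & B_2^T \\
B_1 & B_2 & 0
\end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f_1 \\ f_2 \\ g \end{pmatrix}
```

where the K_i are the subdomain stiffness matrices; the choice of the constraint operators B_i is what determines whether the coupled formulation passes the patch test.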



Chromatography

Chromatography is a family of analytical chemistry techniques for the separation of mixtures. In gas chromatography, a chemical sample separates into its constituent components as it travels along a long, thin column. In a traditional chromatograph, the column has a circular cross-section. With the advent of MEMS technology, columns can be miniaturized to fit on a single chip. Unfortunately, these columns cannot be manufactured to have a circular cross-section. With Louis Romero, Joshua Whiting, and Joe Simonson, I analyzed the effects of non-circular cross-sectional geometry on column performance.
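The classical baseline here is Taylor–Aris dispersion in a circular column (a textbook result, not specific to this work): a solute plug carried at mean velocity U spreads with effective diffusivity

```latex
D_{\mathrm{eff}} = D_m + \frac{a^2 U^2}{48\, D_m}
```

where a is the column radius and D_m the molecular diffusivity. Non-circular cross-sections change the numerical coefficient, which is one route by which geometry affects column performance.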


Undergraduate and Masters Research

As a masters student in the Department of Computer Science at Virginia Tech, I did interdisciplinary work spanning the physics and computer science departments.

As an undergraduate at Virginia Tech earning dual degrees in the departments of computer science and physics, I participated in undergraduate research in both departments.

  • Undergraduate Thesis: The Construction and Analysis of Factorial Experiments: Application to Tribochemical Vapor Deposition (1998)
  • Tribochemical Vapor Deposition – A New Deposition Technique: Poster at the 1997 Gordon Research Conference on Solid State Studies in Ceramics (with Jimmy Ritter) (1997)
  • Virginia Tech Physics Department: Tribochemical Vapor Deposition (TCVD) Experiment (1996-98) 
  • Virginia Tech Computer Science Department: Learning in Networked Communities Project on Collaborative Education (1996)