Publications

The tensor-train mimetic finite difference method for three-dimensional Maxwell’s wave propagation equations

Mathematics and Computers in Simulation

Vuchkov, Radov; Manzini, Gianmarco; Truong, Phan M.D.; Alexandrov, Boian

Coupling the mimetic finite difference method with the tensor-train format yields a very effective method for computing low-rank numerical approximations of the solutions of the time-dependent Maxwell wave propagation equations in three dimensions. To this end, we discretize the curl operators on the primal/dual tensor-product grid complex and couple the space discretization with a staggered-in-time, second-order accurate time-marching scheme. The resulting solver is second-order accurate in time and space and is still compatible, so that the approximation of the magnetic flux field has zero discrete divergence up to a discrepancy close to machine precision. Our approach is not limited to second-order accuracy: higher-order formulations in space can be devised through suitable extensions of the tensor-train stencils used to compute the derivatives of the mimetic differential operators. Employing the tensor-train format improves the solver performance by orders of magnitude in terms of CPU time and memory storage. A final set of numerical experiments confirms this expectation.
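
As a rough illustration of the staggered-in-time scheme described in this abstract (a minimal sketch in normalized, source-free form, with the hypothetical symbols CURL_p and CURL_d standing for the mimetic curl operators on the primal and dual grids; the paper's exact operators, constants, and signs may differ), a leapfrog-style update reads

    B^{n+1/2} = B^{n-1/2} - Δt CURL_p E^n
    E^{n+1}   = E^n       + Δt CURL_d B^{n+1/2}

If the mimetic operators satisfy the discrete identity DIV CURL_p = 0, the first update gives DIV B^{n+1/2} = DIV B^{n-1/2} at every step, which is why the magnetic flux field can stay divergence-free to machine precision, as the abstract states.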

Challenging the Curse of Dimensionality in Multidimensional Numerical Integration by Using a Low-Rank Tensor-Train Format

Mathematics

Vuchkov, Radov; Alexandrov, Boian; Manzini, Gianmarco; Skau, Erik W.; Truong, Phan M.D.

Numerical integration is a basic step in the implementation of more complex numerical algorithms suitable, for example, for solving ordinary and partial differential equations. The straightforward extension of a one-dimensional integration rule to a multidimensional grid by taking the tensor product of the spatial directions is deemed practically infeasible beyond a relatively small number of dimensions, e.g., three or four. In fact, the computational burden in terms of storage and floating-point operations scales exponentially with the number of dimensions. This phenomenon is known as the curse of dimensionality and motivated the development of alternative methods such as the Monte Carlo method. The tensor-product approach can nevertheless be very effective for high-dimensional numerical integration if we can resort to an accurate low-rank tensor-train representation of the integrand function. In this work, we discuss this approach and present numerical evidence showing that it is very competitive with the Monte Carlo method in terms of accuracy and computational cost up to several hundred dimensions, provided the integrand function is regular enough and a sufficiently accurate low-rank approximation is available.
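
A minimal Python sketch of the tensor-train quadrature idea described in this abstract (not the authors' code; the integrand here is a hypothetical rank-1 product of one-dimensional Gaussians, chosen so the TT cores can be written down directly, whereas in general they would come from a TT-SVD or TT-cross approximation):

    import numpy as np
    from math import erf, sqrt, pi

    d, n = 100, 20                                # dimensions, quadrature nodes per axis
    x, w = np.polynomial.legendre.leggauss(n)     # Gauss-Legendre rule on [-1, 1]
    x, w = 0.5 * (x + 1.0), 0.5 * w               # map nodes and weights to [0, 1]

    # TT cores of f(x) = prod_k exp(-x_k^2) sampled on the grid: each core has
    # shape (r_{k-1}, n, r_k), and all TT ranks equal 1 for this test integrand.
    cores = [np.exp(-x**2).reshape(1, n, 1) for _ in range(d)]

    # Integrate dimension by dimension: contract each core's grid index with the
    # 1-D weights and chain the resulting small r_{k-1} x r_k matrices.
    # Cost is O(d * n * r^2) instead of the O(n^d) of the full tensor-product rule.
    v = np.array([[1.0]])
    for G in cores:
        v = v @ np.tensordot(G, w, axes=([1], [0]))
    tt_integral = v[0, 0]

    exact = (sqrt(pi) / 2.0 * erf(1.0)) ** d      # closed form for this test integrand
    print(tt_integral, abs(tt_integral - exact) / exact)

The same contraction pattern applies to a genuinely low-rank TT approximation of a non-separable integrand; only the core shapes (the TT ranks) change.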
