The mathematically correct specification of a fractional differential equation on a bounded domain requires imposing appropriate boundary conditions, or their fractional analogue. This paper discusses the application of nonlocal diffusion theory to specify well-posed fractional diffusion equations on bounded domains.
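For concreteness, a minimal sketch of the type of problem in question (the operator and boundary conditions here are illustrative, not taken from the paper): a one-dimensional fractional diffusion equation of order $1 < \alpha \le 2$ on a bounded interval,

\[
\frac{\partial u}{\partial t}(x,t) = D\,\frac{\partial^{\alpha} u}{\partial x^{\alpha}}(x,t), \qquad x \in (L,R),
\]

where well-posedness hinges on the choice of conditions at $x = L$ and $x = R$, e.g., absorbing conditions $u(L,t) = u(R,t) = 0$, or reflecting conditions phrased in terms of a fractional flux consistent with the nonlocal formulation.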
In k-hypergraph matching, we are given a collection of sets of size at most k, each with an associated weight, and we seek a maximum-weight subcollection whose sets are pairwise disjoint. More generally, in k-hypergraph b-matching, instead of disjointness we require that every element appears in at most b sets of the subcollection. Our main result is a linear-programming-based (k - 1 + 1/k)-approximation algorithm for k-hypergraph b-matching. This settles the integrality gap when k is one more than a prime power, since it matches a previously known lower bound. When the hypergraph is bipartite, we improve the approximation ratio to k - 1, which is also best possible relative to the natural LP. These results are obtained through a more careful application of the iterated packing method. Using the bipartite algorithmic integrality-gap upper bound, we show that for the family of combinatorial auctions in which any bidder can win at most t items, there is a truthful-in-expectation polynomial-time auction that t-approximately maximizes social welfare. Our results also directly imply new approximations for a generalization of the recently introduced bounded-color matching problem. Finally, we consider the generalization of b-matching to demand matching, where edges have nonuniform demand values. The best known approximation algorithm for this problem has ratio 2k on k-hypergraphs; we give a new algorithm, based on the local ratio technique, that obtains the same ratio in a much simpler way.
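For reference, the natural LP mentioned above is the standard relaxation (notation ours): with $\mathcal{S}$ the given collection of sets, $w_S$ the weights, and $b_e$ the per-element capacities,

\[
\max \sum_{S \in \mathcal{S}} w_S\, x_S
\quad \text{s.t.} \quad
\sum_{S \in \mathcal{S}:\, e \in S} x_S \le b_e \ \ \text{for every element } e,
\qquad 0 \le x_S \le 1 \ \ \text{for every } S,
\]

whose integral solutions are exactly the b-matchings.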
This work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the peridynamic operators are self-adjoint with respect to the relevant inner product. The proposed method is illustrated through several numerical examples with constant and spatially varying material parameters, as well as in the presence of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli of a sample of nuclear graphite using digital photographs taken during the experiment. The measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. This type of analysis has many applications in characterizing subsurface stress-state conditions from fracture patterns in cores of geologic material.
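As a generic sketch of the adjoint machinery behind this kind of inverse problem (the notation is ours, not the paper's): to recover parameters $\theta$ from observations $u_{\mathrm{obs}}$, one minimizes a misfit subject to the forward model,

\[
\min_{\theta}\ J = \tfrac{1}{2}\,\|u - u_{\mathrm{obs}}\|^{2}
\quad \text{s.t.} \quad L_{\theta}\, u = f,
\]

then solves the adjoint equation $L_{\theta}^{*}\lambda = -(u - u_{\mathrm{obs}})$ and assembles the gradient $\nabla_{\theta} J = \lambda^{\top} (\partial L_{\theta}/\partial \theta)\, u$. Self-adjointness ($L_{\theta}^{*} = L_{\theta}$) is the practical payoff: the same forward operator can be reused for the adjoint solve.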
A notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Even with this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation built on a new quantity called the partial stress. Bodies with piecewise-constant horizon can be modeled without ghost forces by a simpler technique called a splice. By taking the limit of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.
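As a hedged sketch of the partial-stress idea (this is the commonly used form from the peridynamics literature, written in our notation rather than quoted from the paper): from the force state $\underline{T}$, one can define

\[
\nu(x) = \int_{\mathcal{H}_x} \underline{T}[x]\langle q - x\rangle \otimes (q - x)\, dV_q,
\]

and replace the usual peridynamic equation of motion with $\rho\,\ddot{u} = \nabla \cdot \nu + b$, which reduces the horizon-gradient ghost forces described above.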
In this paper, a series of algorithms is proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, while a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from the model parameters to the output quantities of interest.
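A minimal sketch of the nested (double-loop) sampling idea, with a made-up model and stand-in distributions rather than the Challenge problem's: the outer loop draws epistemic samples, the inner loop draws aleatory samples, and the spread of the inner-loop statistic over the outer loop expresses the epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta_e, theta_a):
    """Hypothetical QoI mixing one epistemic and one aleatory input."""
    return theta_e * theta_a + theta_a ** 2

n_epi, n_ale = 50, 2000
prob_exceed = np.empty(n_epi)

# Outer loop: epistemic samples (a stand-in uniform interval; in the
# Challenge these would come from the Bayesian calibration step).
for i, theta_e in enumerate(rng.uniform(0.5, 1.5, n_epi)):
    # Inner loop: aleatory samples from a fixed, known distribution.
    theta_a = rng.normal(0.0, 1.0, n_ale)
    prob_exceed[i] = np.mean(model(theta_e, theta_a) > 1.0)

# Epistemic uncertainty appears as an interval on the aleatory statistic.
print(f"P(q > 1) lies in [{prob_exceed.min():.3f}, {prob_exceed.max():.3f}]")
```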
We develop a computationally less expensive alternative to the direct solution of large sparse symmetric positive definite systems arising from the numerical solution of elliptic partial differential equation models. Our method, substituted factorization, replaces the computationally expensive factorization of certain dense submatrices that arise in the course of direct solution by sparse Cholesky with one or more solutions of triangular systems via substitution. These substitutions fit into the tree structure commonly used by parallel sparse Cholesky solvers, and they reduce the initial factorization cost at the expense of a slight increase in the cost of solving for a right-hand-side vector. Our analysis shows that substituted factorization reduces the number of floating-point operations for the model k × k 5-point finite-difference problem by 10%, and empirical tests show an average execution-time reduction of 24.4%. On a test suite of three-dimensional problems we observe execution-time reductions as high as 51.7%, and 43.1% on average.
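To fix ideas about where the dense submatrices come from, here is a generic block-elimination sketch (standard block Cholesky via a Schur complement; this is not the substituted factorization itself, which would trade part of the work of forming and factoring the dense block S for extra triangular solves at solve time).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)

# A small SPD system in 2x2 block form; in sparse direct solvers the
# dense Schur complement S plays the role of the "dense submatrices"
# discussed above.
n1, n2 = 60, 20
M = rng.standard_normal((n1 + n2, n1 + n2))
K = M @ M.T + (n1 + n2) * np.eye(n1 + n2)   # SPD by construction
A, B, C = K[:n1, :n1], K[:n1, n1:], K[n1:, n1:]
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

A_fac = cho_factor(A)                  # factor the leading (sparse) block
S = C - B.T @ cho_solve(A_fac, B)      # dense Schur complement
S_fac = cho_factor(S)                  # the expensive dense factorization

x2 = cho_solve(S_fac, b2 - B.T @ cho_solve(A_fac, b1))
x1 = cho_solve(A_fac, b1 - B @ x2)     # back-substitute for the first block

x = np.concatenate([x1, x2])
print("residual:", np.linalg.norm(K @ x - np.concatenate([b1, b2])))
```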
This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion into a multi-scale model of granuloma formation mediated by Mycobacterium tuberculosis (Mtb) infection. We implement this method over a floating-point field to model oxygen dynamics in host tissue during the chronic-phase response and Mtb persistence. The method avoids the Courant-Friedrichs-Lewy (CFL) condition, which must be satisfied when implementing the explicit finite-difference method but imposes an impractically small bound on the time step. Instead, diffusion is modeled by a matrix-based, steady-state approximate solution of the diffusion equation. Figure 1 presents the evolution of the diffusion profiles of a containment granuloma over time.
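A minimal sketch of the matrix-based steady-state approach, with an illustrative uptake term and source layout (the grid size, diffusivity D, and uptake rate lam below are placeholders, not model values): discretize $D\nabla^2 u - \lambda u + s = 0$ with the 5-point stencil and solve a single sparse linear system, with no CFL-limited time stepping.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Steady-state diffusion with uptake: D * Laplacian(u) - lam * u + s = 0
# on an n x n grid with zero-Dirichlet boundaries.
n, h = 64, 1.0
D, lam = 1.0, 0.05

lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
lap2d = sp.kron(lap1d, I) + sp.kron(I, lap1d)   # 5-point stencil

s = np.zeros((n, n))
s[: n // 8, :] = 1.0          # oxygen supplied along one edge (illustrative)

A = D * lap2d - lam * sp.identity(n * n)
u = spsolve(A.tocsc(), -s.ravel()).reshape(n, n)   # one sparse solve

print("max steady-state concentration:", u.max())
```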
Quantum tomography is used to characterize quantum operations implemented in quantum information processing (QIP) hardware. Traditionally, state tomography has been used to characterize the quantum state prepared by an initialization procedure, while quantum process tomography is used to characterize dynamical operations on a QIP system. As such, tomography is critical to the development of QIP hardware, since it is necessary both for debugging and validating as-built devices, and its results are used to influence the next generation of devices. But tomography suffers from several critical drawbacks. In this report, we present new research that resolves several of these flaws. We describe a new form of tomography called gate set tomography (GST), which unifies state and process tomography, avoids prior methods' critical reliance on precalibrated operations that are not generally available, and can achieve unprecedented accuracies. We report on the theoretical and experimental development of adaptive tomography protocols that achieve far higher fidelity in state reconstruction than non-adaptive methods. Finally, we present a new theoretical and experimental analysis of process tomography on multispin systems, and demonstrate how to more effectively detect and characterize quantum noise using carefully tailored ensembles of input states.
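For concreteness, the simplest instance of state tomography (single-qubit linear inversion; a textbook sketch, not GST itself): estimate the Pauli expectation values and invert $\rho = (I + \langle X\rangle X + \langle Y\rangle Y + \langle Z\rangle Z)/2$. GST addresses the deeper problem, glossed over here, that the measurements used to estimate those expectations are themselves imperfect and uncalibrated.

```python
import numpy as np

# Single-qubit state tomography by linear inversion (textbook sketch).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# "True" state to be reconstructed (illustrative); in practice the
# expectation values come from finite measurement statistics.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_true = np.outer(psi, psi.conj())

exp_vals = [np.real(np.trace(rho_true @ P)) for P in (X, Y, Z)]
rho_est = 0.5 * (I2 + sum(e * P for e, P in zip(exp_vals, (X, Y, Z))))

print("reconstruction error:", np.abs(rho_est - rho_true).max())
```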
Aleph models continuum electrostatic fields, as well as steady and transient thermal fields, using a finite-element method. Much work has gone into expanding the core solver capability to support enriched modeling consisting of multiple interacting fields, special boundary conditions, and two-way interfacial coupling with particles modeled using Aleph's complementary particle-in-cell capability. This report provides quantitative evidence for correct implementation of Aleph's field solver via order-of-convergence assessments on a collection of problems of increasing complexity. It is intended to provide Aleph with a pedigree and to establish a basis for confidence in results for the more challenging problems, important to Sandia's mission, that Aleph was specifically designed to address.
Aleph is an electrostatic particle-in-cell code that uses the finite element method to solve for the electric potential and field given external potentials and discrete charged particles. The field solver in Aleph was verified on two problems and matched the analytic theory for finite elements. The first problem demonstrated mesh-refinement convergence for a nonlinear field with no particles in the domain; the observed rates matched the theoretical second-order convergence for the potential and first-order convergence for the electric field. The solution for a single particle in an infinite domain was then compared to the analytic solution, and it likewise matched the theoretical first-order convergence in both the potential and electric fields. The expected rates held over a refinement factor of 16 in both problems. These results give confidence that the field solver and charge-weighting schemes are implemented correctly.
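A minimal sketch of how such order-of-convergence assessments are computed (the error values below are synthetic, not Aleph data): the observed order between successive refinements is $p = \log(e_{\mathrm{coarse}}/e_{\mathrm{fine}})/\log(h_{\mathrm{coarse}}/h_{\mathrm{fine}})$.

```python
import numpy as np

# Observed order of convergence from errors at successive mesh
# refinements. The mesh sequence spans a refinement factor of 16,
# matching the study described above; the errors are illustrative.
h = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
err_potential = 0.1 * h**2   # expect second order for the potential
err_field = 0.2 * h          # expect first order for the field

for name, e in [("potential", err_potential), ("field", err_field)]:
    p = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
    print(name, "observed order:", np.round(p, 3))
```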