An Introduction to Trilinos
Abstract not provided.
IEEE Spectrum
In each of our brains, 86 billion neurons work in parallel, processing inputs from senses and memories to produce the many feats of human cognition. The brains of other creatures are less broadly capable, but those animals often exhibit innate aptitudes for particular tasks, abilities honed by millions of years of evolution.
Journal of Physics A: Mathematical and Theoretical
This paper presents (Lagrangian) variational formulations for single and multicomponent semi-compressible fluids with both reversible (entropy-conserving) and irreversible (entropy-generating) processes. Semi-compressible fluids are useful in describing low-Mach dynamics, since they are soundproof. These models find wide use in many areas of fluid dynamics, including geophysical and astrophysical applications. Specifically, the Boussinesq, anelastic and pseudoincompressible equations are developed through a unified treatment valid for arbitrary Riemannian manifolds, thermodynamic potentials and geopotentials. By design, these formulations obey the first and second laws of thermodynamics, ensuring their thermodynamic consistency. This general approach extends and unifies existing work, and helps clarify the thermodynamics of semi-compressible fluids. To further this goal, evolution equations are presented for a wide range of thermodynamic variables: entropy density s, specific entropy η, buoyancy b, temperature T, potential temperature θ and a generic entropic variable χ; along with a general definition of buoyancy valid for all three semi-compressible models and arbitrary geopotentials. Finally, the elliptic equation for the pressure perturbation (the Lagrange multiplier that enforces semi-compressibility) is developed for all three equation sets in the case of reversible dynamics, and for the Boussinesq/anelastic equations in the case of irreversible dynamics; and some discussion is given of the difficulty in formulating it for the pseudoincompressible equations with irreversible dynamics.
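For orientation, the simplest member of this family in flat geometry, the reversible Boussinesq system with buoyancy b as the thermodynamic variable, can be written in the familiar textbook form below; this is a standard illustration, not reproduced from the paper, whose variational treatment generalizes it to arbitrary Riemannian manifolds, thermodynamic potentials and geopotentials:

\[
\begin{aligned}
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} &= -\nabla p' + b\,\hat{\mathbf{z}}, \\
\nabla\cdot\mathbf{u} &= 0, \\
\partial_t b + \mathbf{u}\cdot\nabla b &= 0, \\
\nabla^2 p' &= \nabla\cdot\bigl(b\,\hat{\mathbf{z}} - (\mathbf{u}\cdot\nabla)\mathbf{u}\bigr),
\end{aligned}
\]

where the last relation, obtained by taking the divergence of the momentum equation, is the elliptic equation for the pressure perturbation p', the Lagrange multiplier that enforces the soundproof constraint ∇·u = 0.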
Nonlocal models, including peridynamics, often use integral operators that embed lengthscales in their definition. However, the integrands in these operators are difficult to define from the data that are typically available for a given physical system, such as laboratory mechanical property tests. In contrast, molecular dynamics (MD) does not require these integrands, but it suffers from computational limitations in the length and time scales it can address. To combine the strengths of both methods and to obtain a coarse-grained, homogenized continuum model that efficiently and accurately captures materials’ behavior, we propose a learning framework to extract, from MD data, an optimal Linear Peridynamic Solid (LPS) model as a surrogate for MD displacements. To maximize the accuracy of the learnt model we allow the peridynamic influence function to be partially negative, while preserving the well-posedness of the resulting model. To achieve this, we provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions and develop a constrained optimization algorithm that minimizes the equation residual while enforcing such solvability conditions. This framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training. We illustrate the efficacy of the proposed approach with several numerical tests for single layer graphene. Our two-dimensional tests show the robustness of the proposed algorithm on validation data sets that include thermal noise, different domain shapes and external loadings, and discretizations substantially different from the ones used for training.
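Schematically, and in a simplified scalar form rather than the paper's full vector-valued LPS model, the learning problem pairs a nonlocal operator with a residual-minimization objective:

\[
\mathcal{L}_K u(\mathbf{x}) = \int_{B_\delta(\mathbf{x})} K(|\mathbf{y}-\mathbf{x}|)\,\bigl(u(\mathbf{y})-u(\mathbf{x})\bigr)\,d\mathbf{y},
\qquad
\min_{K}\ \sum_{k}\bigl\|\mathcal{L}_K u_k^{\mathrm{MD}} - f_k^{\mathrm{MD}}\bigr\|_{\ell^2}^2
\ \ \text{subject to solvability of the discretized operator,}
\]

where u_k^{MD} and f_k^{MD} denote MD displacement and forcing snapshots, δ is the horizon, and the constraint stands in for the well-posedness (coercivity) conditions described above, which permit the influence function K to change sign without losing solvability.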
ACS Photonics
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single-layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single-layer metasurfaces of size 100λ × 100λ with an aperture density of λ⁻² achieve ∼96% testing accuracy on the MNIST data set, for an optimized distance of ∼100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple-layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
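The essential computational structure, linear field propagation through a single structured layer followed by square-law photodetection, can be sketched as follows. The function and variable names are illustrative only, and the propagator G would in practice be a free-space diffraction kernel; training of the transmissions t is not shown.

#include <complex>
#include <cstddef>
#include <vector>

// Schematic single-layer diffractive classifier: an input field illuminates N apertures
// with complex transmissions t[j]; the field diffracts to M detector pixels through the
// propagator G[m][j], and the intensity recorded at each detector (the |.|^2 photodetection
// step) supplies the only nonlinearity in the system.
std::size_t classify(const std::vector<double>& input_field,                     // field at each aperture
                     const std::vector<std::complex<double>>& t,                 // aperture transmissions
                     const std::vector<std::vector<std::complex<double>>>& G,    // propagator G[m][j]
                     std::size_t num_classes) {
  const std::size_t M = G.size();
  std::vector<double> intensity(M, 0.0);
  for (std::size_t m = 0; m < M; ++m) {
    std::complex<double> field(0.0, 0.0);
    for (std::size_t j = 0; j < t.size(); ++j)
      field += G[m][j] * t[j] * input_field[j];   // linear propagation: coherent sum over apertures
    intensity[m] = std::norm(field);              // photodetection: quadratic in the field
  }
  // Each class owns a contiguous block of detector pixels; pick the block with the most energy.
  const std::size_t block = M / num_classes;
  std::size_t best = 0;
  double best_energy = -1.0;
  for (std::size_t c = 0; c < num_classes; ++c) {
    double e = 0.0;
    for (std::size_t m = c * block; m < (c + 1) * block; ++m) e += intensity[m];
    if (e > best_energy) { best_energy = e; best = c; }
  }
  return best;
}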
IEEE Transactions on Parallel and Distributed Systems
As the push towards exascale hardware has increased the diversity of system architectures, performance portability has become a critical aspect of scientific software. We describe the Kokkos Performance Portable Programming Model, which allows developers to write single-source applications for diverse high-performance computing architectures. Kokkos provides key abstractions for both the compute and memory hierarchy of modern hardware. Here, we describe the novel abstractions that have been added to Kokkos recently, such as hierarchical parallelism, containers, task graphs, and arbitrary-sized atomic operations. We demonstrate the performance of these new features with reproducible benchmarks on CPUs and GPUs.
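As a small illustration of the hierarchical-parallelism abstraction, a generic team-based matrix-vector product (not one of the paper's benchmarks) might be written as a single-source Kokkos kernel:

#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int nrows = 1000, row_len = 64;
    // Device-resident data; layout and memory space are chosen by Kokkos for the target backend.
    Kokkos::View<double**> A("A", nrows, row_len);
    Kokkos::View<double*>  x("x", row_len), y("y", nrows);
    Kokkos::deep_copy(A, 1.0);
    Kokkos::deep_copy(x, 1.0);

    // Hierarchical parallelism: one team per row, thread-level reduction across the row.
    using team_policy = Kokkos::TeamPolicy<>;
    using member_type = team_policy::member_type;
    Kokkos::parallel_for("matvec", team_policy(nrows, Kokkos::AUTO),
      KOKKOS_LAMBDA(const member_type& team) {
        const int i = team.league_rank();
        double sum = 0.0;
        Kokkos::parallel_reduce(Kokkos::TeamThreadRange(team, row_len),
          [&](const int j, double& partial) { partial += A(i, j) * x(j); }, sum);
        Kokkos::single(Kokkos::PerTeam(team), [&]() { y(i) = sum; });
      });
    Kokkos::fence();
  }
  Kokkos::finalize();
  return 0;
}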
We present an adaptive algorithm for constructing surrogate models for integrated systems composed of a set of coupled components. To this end, we introduce 'coupling' variables with a priori unknown distributions that allow approximations of each component to be built independently. Once built, the component surrogates are combined and used to predict system-level quantities of interest (QoI) at a fraction of the cost of interrogating the full system model. We use a greedy experimental design procedure, based upon a modification of Multi-Index Stochastic Collocation (MISC), to minimize the error of the combined surrogate. This is achieved by refining each component surrogate in accordance with its relative contribution to the error in the approximation of the system-level QoI. Our adaptation of MISC is a multi-fidelity procedure that can leverage ensembles of models of varying cost and accuracy, for one or more components, to produce estimates of system-level QoI. Several numerical examples demonstrate the efficacy of the proposed approach on systems involving feed-forward and feedback coupling. For a fixed computational budget, the proposed algorithm is able to produce approximations that are orders of magnitude more accurate than approximations that treat the integrated system as a black box.
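The adaptive allocation at the heart of the method can be caricatured as a greedy loop that repeatedly refines whichever component surrogate buys the largest estimated reduction in system-level QoI error per unit cost. The sketch below is purely illustrative; the names and the error/cost model are hypothetical and the actual MISC-based refinement indicators are more involved.

#include <algorithm>
#include <cstdio>
#include <vector>

// One candidate refinement of one component surrogate.
struct Candidate {
  int component;           // which component surrogate the refinement touches
  double error_reduction;  // estimated reduction in system-level QoI error
  double cost;             // work (e.g., model evaluations) required
};

int main() {
  double budget = 100.0;
  // Hypothetical pool of candidate refinements for three coupled components.
  std::vector<Candidate> pool = {
      {0, 0.50, 10.0}, {1, 0.30, 4.0}, {2, 0.20, 8.0}, {0, 0.10, 2.0}, {1, 0.05, 1.0}};
  while (budget > 0.0 && !pool.empty()) {
    // Pick the refinement with the best benefit-to-cost ratio that still fits the budget.
    auto best = std::max_element(pool.begin(), pool.end(),
        [](const Candidate& a, const Candidate& b) {
          return a.error_reduction / a.cost < b.error_reduction / b.cost;
        });
    if (best->cost > budget) break;
    std::printf("refine component %d (gain %.2f, cost %.1f)\n",
                best->component, best->error_reduction, best->cost);
    budget -= best->cost;
    pool.erase(best);
  }
  return 0;
}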
We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. We adopt a probabilistic description of risk that assigns probabilities to consequences associated with an event, and use risk measures, which combine objective evidence with the subjective values of decision makers, to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical estimates obtained from the training data. These surrogate models not only limit over-confidence in reliability and safety assessments, but also produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to the specific risk preferences of the stakeholder, and then present an approach, based upon stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. The surrogate models introduce a bias that allows them to conservatively estimate the target risk measures. We provide theoretical results showing that this bias decays at the same rate as the L2 error in the surrogate model. Our numerical examples confirm that risk-aware surrogate models do indeed over-estimate the target risk measures while converging at the expected rate.
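For concreteness, one widely used risk measure of the kind treated here is the conditional value-at-risk, and the defining property of a conservative surrogate f̃ relative to the empirical estimate built from the training samples can be written as follows (the notation is illustrative, not taken from the paper):

\[
\mathrm{CVaR}_\alpha(X) \;=\; \inf_{t\in\mathbb{R}}\Bigl\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\bigl[(X-t)_{+}\bigr]\Bigr\},
\qquad
\mathcal{R}\bigl(\tilde f(\xi)\bigr) \;\ge\; \widehat{\mathcal{R}}\bigl(f(\xi)\bigr),
\]

with the gap between the two sides, the deliberately introduced bias, decaying at the same rate as the L2 surrogate error.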
Chemical Science
Potassium channels modulate various cellular functions through efficient and selective conduction of K⁺ ions. The mechanism of ion conduction in potassium channels has recently emerged as a topic of debate. Crystal structures of potassium channels show four K⁺ ions bound to adjacent binding sites in the selectivity filter, while chemical intuition and molecular modeling suggest that the direct ion contacts are unstable. Molecular dynamics (MD) simulations have been instrumental in the study of conduction and gating mechanisms of ion channels. Based on MD simulations, two hypotheses have been proposed, in which the four-ion configuration is an artifact due to either averaged structures or low temperature in crystallographic experiments. The two hypotheses have been supported or challenged by different experiments. Here, MD simulations with polarizable force fields validated by ab initio calculations were used to investigate the ion binding thermodynamics. Contrary to previous beliefs, the four-ion configuration was predicted to be thermodynamically stable after accounting for the complex electrostatic interactions and dielectric screening. Polarization plays a critical role in the thermodynamic stabilities. As a result, the ion conduction likely operates through a simple single-vacancy and water-free mechanism. The simulations explained crystal structures, ion binding experiments and recent controversial mutagenesis experiments. This work provides a clear view of the mechanism underlying the efficient ion conduction and demonstrates the importance of polarization in ion channel simulations.
Annual ACM Symposium on Parallelism in Algorithms and Architectures
We present a theoretical framework for designing and assessing the performance of algorithms executing in networks consisting of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is challenging but paramount. We prove a polynomial-factor advantage even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.
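To illustrate the flavor of a spike-based shortest-path computation, consider mapping each vertex to a neuron that fires once, on its first incoming spike, and each edge weight to a synaptic delay; a neuron's firing time then equals its shortest-path distance from the stimulated source. The code below is an event-driven software caricature of this idea, not the authors' algorithm or a neuromorphic-hardware implementation.

#include <cstdio>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

int main() {
  const int n = 5;
  // Synapses as an adjacency list of (target neuron, delay) pairs; delays play the role of edge weights.
  std::vector<std::vector<std::pair<int, double>>> synapses(n);
  auto edge = [&](int u, int v, double w) {
    synapses[u].push_back({v, w});
    synapses[v].push_back({u, w});
  };
  edge(0, 1, 2.0); edge(1, 2, 1.5); edge(0, 3, 4.0); edge(3, 4, 1.0); edge(2, 4, 3.0);

  std::vector<double> fire_time(n, std::numeric_limits<double>::infinity());
  using Event = std::pair<double, int>;  // (spike arrival time, neuron)
  std::priority_queue<Event, std::vector<Event>, std::greater<Event>> events;
  events.push({0.0, 0});                 // stimulate the source neuron at t = 0
  while (!events.empty()) {
    auto [t, u] = events.top();
    events.pop();
    if (fire_time[u] <= t) continue;     // the neuron has already fired; later spikes are ignored
    fire_time[u] = t;                    // the first incoming spike makes the neuron fire
    for (auto [v, delay] : synapses[u])  // outgoing spikes arrive after their synaptic delays
      if (fire_time[v] == std::numeric_limits<double>::infinity())
        events.push({t + delay, v});
  }
  for (int v = 0; v < n; ++v)
    std::printf("neuron %d fires at t = %.1f\n", v, fire_time[v]);
  return 0;
}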
Applied Physics Letters
Magnetization of clusters is often simulated using atomistic spin dynamics for a fixed lattice. Coupled spin-lattice dynamics simulations of the magnetization of nanoparticles have, to date, neglected the change in the size of the atomic magnetic moments near surfaces. We show that introducing variable magnetic moments leads to a better description of experimental data for the magnetization of small Fe nanoparticles. To this end, we divide the atoms into a near-surface shell and a core with bulk properties. It is demonstrated that both the magnitude of the shell magnetic moment and the exchange interactions need to be modified to obtain a fair representation of the experimental data. This allows for a reasonable description of both the average magnetic moment versus cluster size and the cluster magnetization versus temperature.
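A minimal sketch of the kind of classical spin model underlying such simulations (illustrative only; the coupled spin-lattice dynamics and the precise parameterization are given in the paper) is a Heisenberg Hamiltonian with core- and shell-dependent moments and exchange constants:

\[
\mathcal{H} \;=\; -\sum_{i<j} J_{ij}\,\hat{\mathbf{s}}_i\cdot\hat{\mathbf{s}}_j,
\qquad
M \;=\; \frac{1}{N}\,\Bigl|\sum_{i=1}^{N}\mu_i\,\hat{\mathbf{s}}_i\Bigr|,
\]

where the unit vectors ŝ_i are the spin directions, μ_i takes the bulk value in the core and a modified value in the near-surface shell, J_ij depends on whether the pair is core-core, shell-shell or core-shell, and M is the magnetization per atom whose size and temperature dependence are compared with experiment.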