Publications

Results 601–700 of 9,998

Nonparametric, data-based kernel interpolation for particle-tracking simulations and kernel density estimation

Advances in Water Resources

Benson, David A.; Bolster, Diogo; Pankavich, Stephen; Schmidt, Michael J.

Traditional interpolation techniques for particle tracking include binning and convolutional formulas that use pre-determined (i.e., closed-form, parametric) kernels. In many instances, the particles are introduced as point sources in time and space, so the cloud of particles (either in space or time) is a discrete representation of the Green's function of an underlying PDE. As such, each particle is a sample from the Green's function; therefore, each particle should be distributed according to the Green's function. In short, the kernel of a convolutional interpolation of the particle sample “cloud” should be a replica of the cloud itself. This idea gives rise to an iterative method by which the form of the kernel may be discerned in the process of interpolating the Green's function. When the Green's function is a density, this method is broadly applicable to interpolating a kernel density estimate based on random data drawn from a single distribution. We formulate and construct the algorithm and demonstrate its ability to perform kernel density estimation of skewed and/or heavy-tailed data including breakthrough curves.
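
A minimal numerical sketch of the iterative idea described above, assuming a Gaussian starting kernel (Silverman bandwidth) and a simple re-centering of the current estimate to serve as the next kernel. This naive iteration broadens the estimate on every pass because convolution adds variance; the published algorithm treats bandwidth and rescaling more carefully, so treat this only as an illustration of the self-similar-kernel concept.

```python
import numpy as np

def data_based_kde(samples, grid, n_iter=3):
    """Sketch: reuse the current density estimate, re-centered at zero,
    as the convolution kernel for the next pass (hypothetical variant,
    not the authors' exact algorithm)."""
    n = samples.size
    # Pass 0: ordinary Gaussian KDE with Silverman's bandwidth (assumption).
    h = 1.06 * samples.std(ddof=1) * n ** (-0.2)
    density = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / h) ** 2).sum(axis=1)
    density /= np.trapz(density, grid)

    for _ in range(n_iter):
        # Re-center the current estimate so it can act as a zero-mean kernel.
        mean = np.trapz(grid * density, grid)
        kernel = lambda y: np.interp(y, grid - mean, density, left=0.0, right=0.0)
        # Place a copy of that kernel on every particle and renormalize.
        new = sum(kernel(grid - x) for x in samples)
        density = new / np.trapz(new, grid)
    return density

# Example: heavy-tailed data on a fixed grid.
rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=500)
x = np.linspace(-10, 10, 1001)
f_hat = data_based_kde(data, x)
```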

More Details

Digital quantum simulation of molecular dynamics and control

Physical Review Research

Magann, Alicia B.; Grace, Matthew G.; Rabitz, Herschel A.; Sarovar, Mohan S.

Optimally-shaped electromagnetic fields have the capacity to coherently control the dynamics of quantum systems and thus offer a promising means for controlling molecular transformations relevant to chemical, biological, and materials applications. Currently, advances in this area are hindered by the prohibitive cost of the quantum dynamics simulations needed to explore the principles and possibilities of molecular control. However, the emergence of nascent quantum-computing devices suggests that efficient simulations of quantum dynamics may be on the horizon. In this article, we study how quantum computers could be employed to design optimally-shaped fields to control molecular systems. We introduce a hybrid algorithm that utilizes a quantum computer for simulating the field-induced quantum dynamics of a molecular system in polynomial time, in combination with a classical optimization approach for updating the field. Qubit encoding methods relevant for molecular control problems are described, and procedures for simulating the quantum dynamics and obtaining the simulation results are discussed. Numerical illustrations are then presented that explicitly treat paradigmatic vibrational and rotational control problems, and also consider how optimally-shaped fields could be used to elucidate the mechanisms of energy transfer in light-harvesting complexes. Resource estimates, as well as a numerical assessment of the impact of hardware noise and the prospects of near-term hardware implementations, are provided for the latter task.
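
A toy version of the hybrid loop: a classical optimizer updates the field parameters while the field-driven dynamics are simulated, here by direct matrix exponentiation of a two-level system standing in for the quantum processor. The Hamiltonian, cosine field parameterization, and Nelder-Mead optimizer are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Two-level "molecule": drift H0 and dipole coupling mu (arbitrary units).
H0 = np.diag([0.0, 1.0]).astype(complex)
mu = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
psi0 = np.array([1.0, 0.0], dtype=complex)      # start in the ground state
target = np.array([0.0, 1.0], dtype=complex)    # aim for the excited state

T, n_steps = 10.0, 200
dt = T / n_steps
t = np.linspace(0.0, T, n_steps)

def propagate(params):
    """Piecewise-constant propagation under a field built from a few cosines.
    On a quantum device, this inner step would be done by the qubit register."""
    a = params.reshape(-1, 2)                    # (amplitude, frequency) pairs
    field = sum(ai * np.cos(wi * t) for ai, wi in a)
    psi = psi0.copy()
    for e in field:
        psi = expm(-1j * (H0 + e * mu) * dt) @ psi
    return psi

def infidelity(params):
    psi_T = propagate(params)
    return 1.0 - np.abs(np.vdot(target, psi_T)) ** 2

# Classical outer loop updating the field parameters.
x0 = np.array([0.1, 1.0, 0.05, 2.0])             # two cosine components
res = minimize(infidelity, x0, method="Nelder-Mead")
print("final infidelity:", res.fun)
```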

More Details

Quantum foundations of classical reversible computing

Entropy

Frank, Michael P.; Shukla, Karpur

The reversible computation paradigm aims to provide a new foundation for general classical digital computing that is capable of circumventing the thermodynamic limits to the energy efficiency of the conventional, non-reversible digital paradigm. However, to date, the essential rationale for, and analysis of, classical reversible computing (RC) has not yet been expressed in terms that leverage the modern formal methods of non-equilibrium quantum thermodynamics (NEQT). In this paper, we begin developing an NEQT-based foundation for the physics of reversible computing. We use the framework of Gorini-Kossakowski-Sudarshan-Lindblad dynamics (a.k.a. Lindbladians) with multiple asymptotic states, incorporating recent results from resource theory, full counting statistics and stochastic thermodynamics. Important conclusions include that, as expected: (1) Landauer’s Principle indeed sets a strict lower bound on entropy generation in traditional non-reversible architectures for deterministic computing machines when we account for the loss of correlations; and (2) implementations of the alternative reversible computation paradigm can potentially avoid such losses, and thereby circumvent the Landauer limit, potentially allowing the efficiency of future digital computing technologies to continue improving indefinitely. We also outline a research plan for identifying the fundamental minimum energy dissipation of reversible computing machines as a function of speed.
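
For context on the Landauer limit referenced above, a one-line numerical check of the minimum dissipation per irreversibly erased bit, kB·T·ln 2, at room temperature (standard physics, not a result of the paper):

```python
import numpy as np

k_B = 1.380649e-23            # J/K (exact, 2019 SI)
T = 300.0                     # K, room temperature
E_landauer = k_B * T * np.log(2)
print(f"{E_landauer:.3e} J  (~{E_landauer / 1.602176634e-19 * 1e3:.1f} meV) per erased bit")
# ~2.87e-21 J, i.e. roughly 18 meV per bit at 300 K
```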

More Details

Evaluating Energy Differences on a Quantum Computer with Robust Phase Estimation

Physical Review Letters

Russo, Antonio R.; Rudinger, Kenneth M.; Morrison, Benjamin M.; Baczewski, Andrew D.

We adapt the robust phase estimation algorithm to the evaluation of energy differences between two eigenstates using a quantum computer. This approach does not require controlled unitaries between auxiliary and system registers or even a single auxiliary qubit. As a proof of concept, we calculate the energies of the ground state and low-lying electronic excitations of a hydrogen molecule in a minimal basis on a cloud quantum computer. The denominative robustness of our approach is then quantified in terms of a high tolerance to coherent errors in the state preparation and measurement. Conceptually, we note that all quantum phase estimation algorithms ultimately evaluate eigenvalue differences.
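
A sketch of the classical post-processing behind robust phase estimation: measurements at exponentially growing evolution multiples pin down the phase roughly one binary digit at a time. The measurement model below is a simulated stand-in for the device statistics, not the circuits used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_phase = 2.1  # radians; the (scaled) energy difference we want

def measure(k, shots=200):
    """Simulated finite-shot estimates of cos(k*phi) and sin(k*phi)
    (a hypothetical stand-in for the device measurements)."""
    p_cos = 0.5 * (1 + np.cos(k * true_phase))
    p_sin = 0.5 * (1 + np.sin(k * true_phase))
    c = 2 * rng.binomial(shots, p_cos) / shots - 1
    s = 2 * rng.binomial(shots, p_sin) / shots - 1
    return c, s

def robust_phase_estimation(measure, n_generations=8):
    theta = 0.0
    for j in range(n_generations):
        k = 2 ** j
        c, s = measure(k)
        raw = np.arctan2(s, c) % (2 * np.pi)          # estimate of (k*theta) mod 2*pi
        # Unwrap: pick the candidate phase consistent with the previous estimate.
        candidates = (raw + 2 * np.pi * np.arange(k)) / k
        wrapped = (candidates - theta + np.pi) % (2 * np.pi) - np.pi
        theta = candidates[np.argmin(np.abs(wrapped))]
    return theta

print(robust_phase_estimation(measure), "vs true", true_phase)
```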

More Details

First-principles calculations of metal surfaces. II. Properties of low-index platinum surfaces toward understanding electron emission

Physical Review B

Schultz, Peter A.; Hjalmarson, Harold P.; Berg, Morgann B.; Bussmann, Ezra B.; Scrymgeour, David S.; Ohta, Taisuke O.; Moore, Christopher H.

The stability of low-index platinum surfaces and their electronic properties are investigated with density functional theory, toward the goal of understanding the surface structure and electron emission, and identifying precursors to electrical breakdown, on nonideal platinum surfaces. Propensity for electron emission can be related to a local work function, which, in turn, is intimately dependent on the local surface structure. The (1×N) missing row reconstruction of the Pt(110) surface is systematically examined. The (1×3) missing row reconstruction is found to be the lowest in energy, with the (1×2) and (1×4) slightly less stable. In the limit of large (1×N) with wider (111) nanoterraces, the energy accurately approaches the asymptotic limit of the infinite Pt(111) surface. This suggests a local energetic stability of narrow (111) nanoterraces on free Pt surfaces that could be a common structural feature in the complex surface morphologies, leading to work functions consistent with those on thermally grown Pt substrates.

More Details

First-principles calculations of metal surfaces. I. Slab-consistent bulk reference for convergent surface properties

Physical Review B

Schultz, Peter A.

The first-principles computation of the surfaces of metals is typically accomplished through slab calculations of finite thickness. The extraction of a convergent surface formation energy from slab calculations is dependent upon defining an appropriate bulk reference energy. I describe a method for an independently computed, slab-consistent bulk reference that leads to convergent surface formation energies from slab calculations and also provides realistic uncertainties for the magnitude of the unavoidable nonlinear divergence in the surface formation energy with slab thickness. The accuracy is demonstrated on relaxed, unreconstructed low-index aluminum surfaces with slabs of up to 35 layers.
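
For orientation, the quantity at issue: the surface formation energy per surface extracted from an N-layer slab is σ = (E_slab(N) − N·E_bulk)/(2A), and the choice of bulk reference E_bulk controls whether σ converges with N. The sketch below uses a simple linear-fit bulk reference, which is not the slab-consistent construction of the paper but illustrates how σ is extracted and how sensitive it is to E_bulk; the slab energies in the usage example are hypothetical.

```python
import numpy as np

def surface_energy(n_layers, e_slab, area):
    """Estimate the surface formation energy (per surface) from total slab
    energies E_slab(N), using a linear fit E_slab(N) ~ N*E_bulk + 2*sigma*A
    as the bulk reference. A common recipe, not the slab-consistent
    reference constructed in the paper."""
    n = np.asarray(n_layers, dtype=float)
    e = np.asarray(e_slab, dtype=float)
    e_bulk, _ = np.polyfit(n, e, 1)             # slope = bulk energy per layer
    sigma = (e - n * e_bulk) / (2.0 * area)     # per-slab estimates of sigma
    return e_bulk, sigma

# Hypothetical slab energies (eV) for a surface cell of area 6.5 A^2.
N = [7, 9, 11, 13, 15]
E = [-26.15, -33.72, -41.29, -48.86, -56.43]
print(surface_energy(N, E, area=6.5))
```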

More Details

Performance Characteristics of the BlueField-2 SmartNIC

Liu, Jianshen; Maltzahn, Carlos; Ulmer, Craig D.; Curry, Matthew L.

High-performance computing (HPC) researchers have long envisioned scenarios where application workflows could be improved through the use of programmable processing elements embedded in the network fabric. Recently, vendors have introduced programmable Smart Network Interface Cards (SmartNICs) that enable computations to be offloaded to the edge of the network. There is great interest in both the HPC and high-performance data analytics (HPDA) communities in understanding the roles these devices may play in the data paths of upcoming systems. This paper focuses on characterizing both the networking and computing aspects of NVIDIA’s new BlueField-2 SmartNIC when used in a 100Gb/s Ethernet environment. For the networking evaluation we conducted multiple transfer experiments between processors located at the host, the SmartNIC, and a remote host. These tests illuminate how much effort is required to saturate the network and help estimate the processing headroom available on the SmartNIC during transfers. For the computing evaluation we used the stress-ng benchmark to compare the BlueField-2 to other servers and place realistic bounds on the types of offload operations that are appropriate for the hardware. Our findings from this work indicate that while the BlueField-2 provides a flexible means of processing data at the network’s edge, great care must be taken to not overwhelm the hardware. While the host can easily saturate the network link, the SmartNIC’s embedded processors may not have enough computing resources to sustain more than half the expected bandwidth when using kernel-space packet processing. From a computational perspective, encryption operations, memory operations under contention, and on-card IPC operations on the SmartNIC perform significantly better than the general-purpose servers used for comparisons in our experiments. Therefore, applications that mainly focus on these operations may be good candidates for offloading to the SmartNIC.

More Details

Counterfactual Explanations for Multivariate Time Series

2021 International Conference on Applied Artificial Intelligence, ICAPAI 2021

Ates, Emre; Aksar, Burak; Leung, Vitus J.; Coskun, Ayse K.

Multivariate time series are used in many science and engineering domains, including health-care, astronomy, and high-performance computing. A recent trend is to use machine learning (ML) to process this complex data and these ML-based frameworks are starting to play a critical role for a variety of applications. However, barriers such as user distrust or difficulty of debugging need to be overcome to enable widespread adoption of such frameworks in production systems. To address this challenge, we propose a novel explainability technique, CoMTE, that provides counterfactual explanations for supervised machine learning frameworks on multivariate time series data. Using various machine learning frameworks and data sets, we compare CoMTE with several state-of-the-art explainability methods and show that we outperform existing methods in comprehensibility and robustness. We also show how CoMTE can be used to debug machine learning frameworks and gain a better understanding of the underlying multivariate time series data.
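
A greedy sketch of the substitution idea behind counterfactual explanation on multivariate time series: swap whole channels of the query instance with those of a "distractor" from the desired class until the model's prediction flips, and report the swapped channels as the explanation. The unordered single-pass search and the rule-based toy "model" below are simplifications; CoMTE's actual optimization is more principled.

```python
import numpy as np

def greedy_counterfactual(predict, x, distractor, target_class):
    """Swap channels of x (shape: channels x time) with the distractor's,
    one at a time, until `predict` returns the target class. Returns the
    swapped channel indices and the modified instance, or (None, None)."""
    x_cf = x.copy()
    swapped = []
    for ch in range(x.shape[0]):
        if predict(x_cf) == target_class:
            break
        x_cf[ch] = distractor[ch]
        swapped.append(ch)
    if predict(x_cf) == target_class:
        return swapped, x_cf
    return None, None

# Toy usage with a rule-based "model" (an assumption for illustration).
predict = lambda ts: int(ts[0].mean() > 0)
x = np.vstack([-np.ones(50), np.random.randn(50)])            # predicted class 0
distractor = np.vstack([np.ones(50), np.random.randn(50)])    # predicted class 1
print(greedy_counterfactual(predict, x, distractor, target_class=1)[0])   # -> [0]
```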

More Details

Theoretical study of intrinsic defects in cubic silicon carbide 3C -SiC

Physical Review B

Schultz, Peter A.; Van Ginhoven, Renee M.; Edwards, Arthur H.

Using the local moment counter charge (LMCC) method to accurately represent the asymptotic electrostatic boundary conditions within density functional theory supercell calculations, we present a comprehensive analysis of the atomic structure and energy levels of point defects in cubic silicon carbide (3C-SiC). Finding that the classical long-range dielectric screening outside the supercell induced by a charged defect is a significant contributor to the total energy, we describe and validate a modified Jost screening model to evaluate this polarization energy. This leads to bulk-converged defect levels in finite-size supercells. With the LMCC boundary conditions and a standard Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional, the computed defect level spectrum exhibits no band gap problem: the range of defect levels spans ∼2.4 eV, an effective defect band gap that agrees with the experimental band gap. Comparing with previous literature, our LMCC-PBE defect results are in consistent agreement with the hybrid-exchange functional results of Oda et al. [J. Chem. Phys. 139, 124707 (2013)] rather than their PBE results. The difference with their PBE results is attributed to their use of a conventional jellium approximation rather than the more rigorous LMCC approach for handling charged supercell boundary conditions. The difference between standard DFT and hybrid functional results for defect levels lies not in a band gap problem but rather in solving a boundary condition problem. The LMCC-PBE entirely mitigates the effect of the band gap problem on defect levels. The more computationally economical PBE enables a systematic exploration of 3C-SiC defects, where, most notably, we find that the silicon vacancy undergoes Jahn-Teller-induced distortions from the previously assumed Td symmetry, and that the divacancy, like the silicon vacancy, exhibits a site-shift bistability in p-type conditions.
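
For readers less familiar with charged-defect energetics, the quantity being converged is the standard formation energy of a defect in charge state q. The helper below encodes the textbook expression; the finite-size/electrostatic correction term stands in for whatever the LMCC plus Jost-screening treatment supplies, and every number in the usage example is hypothetical.

```python
def defect_formation_energy(e_defect, e_bulk, removed, added, mu, q,
                            e_vbm, e_fermi, e_corr=0.0):
    """Standard charged-defect formation energy (all inputs in eV):
        E_f = E_def - E_bulk + sum(mu[removed]) - sum(mu[added])
              + q*(E_VBM + E_F) + E_corr
    where E_corr is the electrostatic/finite-size correction, here taken
    from whatever boundary-condition scheme (e.g., LMCC) is in use."""
    chem = sum(mu[s] for s in removed) - sum(mu[s] for s in added)
    return e_defect - e_bulk + chem + q * (e_vbm + e_fermi) + e_corr

# Hypothetical numbers for a carbon vacancy in the +1 charge state.
mu = {"C": -9.10, "Si": -5.42}                  # illustrative chemical potentials (eV)
E_f = defect_formation_energy(e_defect=-1201.3, e_bulk=-1210.9,
                              removed=["C"], added=[], mu=mu,
                              q=+1, e_vbm=5.0, e_fermi=1.2, e_corr=0.15)
print(f"E_f = {E_f:.2f} eV")
```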

More Details

Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis (V.6.14) (User's Manual)

Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith R.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell W.; Hough, Patricia D.; Hu, Kenneth T.; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad A.; Seidl, Daniel T.; Stephens, John A.; Winokur, Justin G.

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
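
As a usage sketch, the analysis driver that Dakota invokes on each evaluation can be a short Python script. The one below uses the dakota.interfacing module distributed with Dakota; the exact calls are written from memory of the Dakota 6.x Python interface and should be checked against the manual, and the descriptor names 'x1', 'x2', 'f' are assumptions tied to a particular input file.

```python
#!/usr/bin/env python3
# Minimal Dakota analysis driver (sketch). Dakota writes a parameters file,
# runs this script, and reads back the results file it names.
import dakota.interfacing as di

params, results = di.read_parameters_file()      # file names taken from argv

# Evaluate the simulation/response; here, a stand-in Rosenbrock function.
x1, x2 = params["x1"], params["x2"]
f = 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

results["f"].function = f                        # assumes a response named 'f'
results.write()
```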

More Details

A Computational Information Criterion for Particle-Tracking with Sparse or Noisy Data

Advances in Water Resources

Tran, Nhat T.V.; Benson, David A.; Schmidt, Michael J.; Pankavich, Stephen D.

Traditional probabilistic methods for the simulation of advection-diffusion equations (ADEs) often overlook the entropic contribution of the discretization, e.g., the number of particles, within associated numerical methods. Many times, the gain in accuracy of a highly discretized numerical model is outweighed by its associated computational costs or the noise within the data. We address the question of how many particles are needed in a simulation to best approximate and estimate parameters in one-dimensional advective-diffusive transport. To do so, we use the well-known Akaike Information Criterion (AIC) and a recently-developed correction called the Computational Information Criterion (COMIC) to guide the model selection process. Random-walk and mass-transfer particle tracking methods are employed to solve the model equations at various levels of discretization. Numerical results demonstrate that the COMIC provides an optimal number of particles that can describe a more efficient model in terms of parameter estimation and model prediction compared to the model selected by the AIC even when the data is sparse or noisy, the sampling volume is not uniform throughout the physical domain, or the error distribution of the data is non-IID Gaussian.
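
For concreteness, a minimal random-walk particle-tracking scheme for the 1-D ADE with constant velocity v and dispersion D, the kind of discretization whose particle count the COMIC is meant to select; the parameter values are illustrative, and the AIC/COMIC bookkeeping itself is not reproduced here.

```python
import numpy as np

def random_walk_pt(n_particles, v, D, dt, n_steps, x0=0.0, seed=0):
    """1-D advection-diffusion by random-walk particle tracking:
    X_{k+1} = X_k + v*dt + sqrt(2*D*dt)*xi,  xi ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    return x

# Compare ensembles of different sizes against the analytical Gaussian plume.
for N in (100, 1_000, 10_000):
    x = random_walk_pt(N, v=1.0, D=0.1, dt=0.1, n_steps=100)
    print(N, x.mean(), x.var())   # exact values: mean = 10.0, variance = 2.0
```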

More Details

Accomplishments of the Sandia and Kitware CMake/CTest/CDash Contract (FY2017–2020)

Bartlett, Roscoe B.; Galbreath, Zack

We describe the accomplishments jointly achieved by Kitware and Sandia over the fiscal years 2016 through 2020 to benefit the Advanced Scientific Computing (ASC) Advanced Technology Development and Mitigation (ATDM) project. As a result of our collaboration, we have improved the Trilinos and ATDM application developer experience by decreasing the time to build, making it easier to identify and resolve build and test defects, and addressing other issues. We have also reduced the turnaround time for continuous integration (CI) results. For example, the combined improvements likely cut the wall-clock time to run automated builds of Trilinos posting to CDash by approximately 6x or more in many cases. We primarily achieved these benefits by contributing changes to the Kitware CMake/CTest/CDash suite of open source software development support tools. As a result, ASC developers can now spend more time improving code and less time chasing bugs. Without this work, one can argue that the stabilization of Trilinos for the ATDM platforms would not have been feasible, which would have had a large negative impact on an important internal FY20 L1 milestone.

More Details

Extending sparse tensor accelerators to support multiple compression formats

Proceedings - 2021 IEEE 35th International Parallel and Distributed Processing Symposium, IPDPS 2021

Qin, Eric; Jeong, Geonhwa; Won, William; Kao, Sheng C.; Kwon, Hyoukjun; Das, Dipankar; Moon, Gordon E.; Rajamanickam, Sivasankaran R.; Krishna, Tushar

Sparsity, which occurs in both scientific applications and Deep Learning (DL) models, has been a key target of optimization within recent ASIC accelerators due to the potential memory and compute savings. These applications use data stored in a variety of compression formats. We demonstrate that both the compactness of different compression formats and the compute efficiency of the algorithms enabled by them vary across tensor dimensions and amount of sparsity. Since DL and scientific workloads span all sparsity regions, there can be numerous format combinations for optimizing memory and compute efficiency. Unfortunately, many proposed accelerators operate on one or two fixed format combinations. This work proposes hardware extensions to accelerators for supporting numerous format combinations seamlessly and demonstrates a ∼4× speedup over performing format conversions in software.
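
To make the compression-format tradeoff concrete, a small sketch converting a COO representation to CSR and noting the storage difference; this is generic illustration, not the accelerator extension proposed in the paper.

```python
import numpy as np

def coo_to_csr(rows, cols, vals, n_rows):
    """Convert COO (row, col, val) triples to CSR (row_ptr, col_idx, values)."""
    order = np.lexsort((cols, rows))             # sort by row, then column
    rows, cols, vals = rows[order], cols[order], vals[order]
    row_ptr = np.zeros(n_rows + 1, dtype=np.int64)
    np.add.at(row_ptr, rows + 1, 1)              # count nonzeros per row...
    row_ptr = np.cumsum(row_ptr)                 # ...then prefix-sum into pointers
    return row_ptr, cols, vals

# 4x4 matrix with 5 nonzeros.
r = np.array([0, 2, 2, 3, 0])
c = np.array([1, 0, 3, 2, 3])
v = np.array([5.0, 1.0, 2.0, 7.0, 3.0])
row_ptr, col_idx, values = coo_to_csr(r, c, v, n_rows=4)
print(row_ptr)    # [0 2 2 4 5]: row 0 has 2 nonzeros, row 1 none, etc.
# Storage: COO keeps 2*nnz indices; CSR keeps nnz column indices + n_rows + 1 pointers.
```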

More Details

Simulation of powder bed metal additive manufacturing microstructures with coupled finite difference-Monte Carlo method

Additive Manufacturing

Rodgers, Theron R.; Abdeljawad, Fadi; Moser, Daniel M.; Laros, James H.; Carroll, Jay D.; Jared, Bradley H.; Bolintineanu, Dan S.; Mitchell, John A.; Madison, Jonathan D.

Grain-scale microstructure evolution during additive manufacturing is a complex physical process. As with traditional solidification methods of material processing (e.g. casting and welding), microstructural properties are highly dependent on the solidification conditions involved. Additive manufacturing processes, however, incorporate additional complexity such as remelting and solid-state evolution caused by subsequent heat-source passes and by holding the entire build at moderately high temperatures throughout the process. We present a three-dimensional model that simulates both solidification and solid-state evolution phenomena using stochastic Monte Carlo and Potts Monte Carlo methods. The model also incorporates a finite-difference-based thermal conduction solver to create a fully integrated microstructural prediction tool. The three modeling methods and their coupling are described and demonstrated for a model study of laser powder-bed fusion of 300-series stainless steel. The investigation demonstrates a novel correlation between the mean number of remelting cycles experienced during a build and the resulting columnar grain sizes.
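
A bare-bones 2-D Potts Monte Carlo flip kernel of the kind used for grain evolution in such models (isothermal, with no thermal solver or remelting coupling; the lattice size, grain-ID count, and temperature below are illustrative assumptions):

```python
import numpy as np

def potts_mc_sweep(spins, kT=0.2, rng=np.random.default_rng(0)):
    """One Monte Carlo sweep of a 2-D Potts grain-growth model with periodic
    boundaries: propose switching a site to a random neighbor's spin and
    accept by the Metropolis criterion on the unlike-neighbor count."""
    ny, nx = spins.shape
    for _ in range(spins.size):
        i, j = rng.integers(ny), rng.integers(nx)
        nbrs = [spins[(i - 1) % ny, j], spins[(i + 1) % ny, j],
                spins[i, (j - 1) % nx], spins[i, (j + 1) % nx]]
        new = nbrs[rng.integers(4)]               # propose a neighbor's spin
        e_old = sum(s != spins[i, j] for s in nbrs)
        e_new = sum(s != new for s in nbrs)
        if e_new <= e_old or rng.random() < np.exp(-(e_new - e_old) / kT):
            spins[i, j] = new
    return spins

# 64x64 lattice seeded with 50 candidate grain IDs; a few sweeps coarsen it.
rng = np.random.default_rng(1)
grains = rng.integers(0, 50, size=(64, 64))
for _ in range(5):
    potts_mc_sweep(grains, kT=0.2, rng=rng)
```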

More Details

Data-driven learning of nonautonomous systems

SIAM Journal on Scientific Computing

Qin, Tong; Chen, Zhen; Jakeman, John D.; Xiu, Dongbin

We present a numerical framework for recovering unknown nonautonomous dynamical systems with time-dependent inputs. To circumvent the difficulty presented by the nonautonomous nature of the system, our method transforms the solution state into piecewise integration of the system over a discrete set of time instances. The time-dependent inputs are then locally parameterized by using a proper model, for example, polynomial regression, in the pieces determined by the time instances. This transforms the original system into a piecewise parametric system that is locally time invariant. We then design a deep neural network structure to learn the local models. Once the network model is constructed, it can be iteratively used over time to conduct global system prediction. We provide theoretical analysis of our algorithm and present a number of numerical examples to demonstrate the effectiveness of the method.
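
A stripped-down version of the idea, with ordinary least squares standing in for the deep network: locally parameterize the input over each step (here simply by its two endpoint values) and learn a one-step map from (state, local input parameters) to the next state, then iterate the learned map for prediction under a new input. The forced linear ODE and the linear regression are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01

def step(x, u):                       # "truth": forced linear ODE x' = -x + u(t)
    return x + dt * (-x + u)

# Generate trajectories with random inputs and build training pairs
# (x_k, u_k, u_{k+1}) -> x_{k+1}; the endpoint values of u over the step
# are its local parameterization.
X, Y = [], []
for _ in range(200):
    x = rng.uniform(-1, 1)
    u = rng.uniform(-1, 1, size=51)
    for k in range(50):
        x_next = step(x, u[k])
        X.append([x, u[k], u[k + 1], 1.0])        # trailing 1.0 is a bias term
        Y.append(x_next)
        x = x_next

# Learn the local flow map (least squares standing in for the DNN).
coef, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

# Iterate the learned map to predict a trajectory under a new input signal.
u_test = np.sin(np.linspace(0, 5, 51))
x_pred, x_true = 0.5, 0.5
for k in range(50):
    x_pred = np.array([x_pred, u_test[k], u_test[k + 1], 1.0]) @ coef
    x_true = step(x_true, u_test[k])
print(x_pred, x_true)
```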

More Details

RVMA: Remote virtual memory access

Proceedings - 2021 IEEE 35th International Parallel and Distributed Processing Symposium, IPDPS 2021

Grant, Ryan E.; Levenhagen, Michael J.; Dosanjh, Matthew D.; Widener, Patrick W.

Remote Direct Memory Access (RDMA) capabilities have been provided by high-end networks for many years, but the network environments surrounding RDMA are evolving. RDMA performance has historically relied on using strict ordering guarantees to determine when data transfers complete, but modern adaptively-routed networks no longer provide those guarantees. RDMA also exposes low-level details about memory buffers: either all clients are required to coordinate access using a single shared buffer, or exclusive resources must be allocatable per-client for an unbounded amount of time. This makes RDMA unattractive for use in many-to-one communication models such as those found in public internet client-server situations. Remote Virtual Memory Access (RVMA) is a novel approach to data transfer which adapts and builds upon RDMA to provide better usability, resource management, and fault tolerance. RVMA provides a lightweight completion notification mechanism which addresses RDMA performance penalties imposed by adaptively-routed networks, enabling high-performance data transfer regardless of message ordering. RVMA also provides receiver-side resource management, abstracting away previously-exposed details from the sender-side and removing the RDMA requirement for exclusive/coordinated resources. RVMA requires only small hardware modifications from current designs, provides performance comparable or superior to traditional RDMA networks, and offers many new features. In this paper, we describe RVMA's receiver-managed resource approach and how it enables a variety of new data-transfer approaches on high-end networks. In particular, we demonstrate how an RVMA NIC could implement the first hardware-based fault tolerant RDMA-like solution. We present the design and validation of an RVMA simulation model in a popular simulation suite and use it to evaluate the advantages of RVMA at large scale. In addition to support for adaptive routing and easy programmability, RVMA can outperform RDMA on a 3D sweep application by 4.4X.

More Details

Multiscale System Modeling of Single-Event-Induced Faults in Advanced Node Processors

IEEE Transactions on Nuclear Science

Cannon, Matthew J.; Rodrigues, Arun; Black, Dolores A.; Black, Jeff; Bustamante, Luis G.; Feinberg, Benjamin F.; Quinn, Heather; Clark, Lawrence T.; Brunhaver, John S.; Barnaby, Hugh; McLain, Michael L.; Agarwal, Sapan A.; Marinella, Matthew J.

Integration-technology feature shrink increases computing-system susceptibility to single-event effects (SEE). While modeling SEE faults will be critical, an integrated processor's scope makes physically correct modeling computationally intractable. Without useful models, presilicon evaluation of fault-tolerance approaches becomes impossible. To incorporate accurate transistor-level effects at a system scope, we present a multiscale simulation framework. Charge collection at the 1) device level determines 2) circuit-level transient duration and state-upset likelihood. Circuit effects, in turn, impact 3) register-transfer-level architecture-state corruption visible at 4) the system level. Thus, the physically accurate effects of SEEs in large-scale systems, executed on a high-performance computing (HPC) simulator, could be used to drive cross-layer radiation hardening by design. We demonstrate the capabilities of this model with two case studies. First, we determine a D flip-flop's sensitivity at the transistor level on 14-nm FinFET technology, validating the model against published cross sections. Second, we track and estimate faults in a microprocessor without interlocked pipelined stages (MIPS) processor for the Adams 90% worst-case environment in an isotropic space environment.

More Details

An asymptotically compatible treatment of traction loading in linearly elastic peridynamic fracture

Computer Methods in Applied Mechanics and Engineering

Yu, Yue; You, Huaiqian; Trask, Nathaniel A.

Meshfree discretizations of state-based peridynamic models are attractive due to their ability to naturally describe fracture of general materials. However, two factors conspire to prevent meshfree discretizations of state-based peridynamics from converging to corresponding local solutions as resolution is increased: quadrature error prevents an accurate prediction of bulk mechanics, and the lack of an explicit boundary representation presents challenges when applying traction loads. In this paper, we develop a reformulation of the linear peridynamic solid (LPS) model to address these shortcomings, using improved meshfree quadrature, a reformulation of the nonlocal dilatation, and a consistent handling of the nonlocal traction condition to construct a model with rigorous accuracy guarantees. In particular, these improvements are designed to enforce discrete consistency in the presence of evolving fractures, whose a priori unknown locations render consistent treatment difficult. In the absence of fracture, when a corresponding classical continuum mechanics model exists, our improvements provide asymptotically compatible convergence to corresponding local solutions, eliminating surface effects and issues with traction loading which have historically plagued peridynamic discretizations. When fracture occurs, our formulation automatically provides a sharp representation of the fracture surface by breaking bonds, avoiding the loss of mass. We provide rigorous error analysis and demonstrate convergence for a number of benchmarks, including manufactured solutions, free-surface, nonhomogeneous traction loading, and composite material problems. Finally, we validate simulations of brittle fracture against a recent experiment of dynamic crack branching in soda-lime glass, providing evidence that the scheme yields accurate predictions for practical engineering problems.

More Details

AI-Enhanced Co-Design for Next-Generation Microelectronics: Innovating Innovation (Workshop Report)

Descour, Michael R.; Tsao, Jeffrey Y.; Stracuzzi, David J.; Wakeland, Anna K.; Schultz, David R.; Smith, William; Weeks, Jacquilyn A.

On April 6-8, 2021, Sandia National Laboratories hosted a virtual workshop to explore the potential for developing AI-Enhanced Co-Design for Next-Generation Microelectronics (AICoM). The workshop brought together two themes. The first theme was articulated in the 2018 Department of Energy Office of Science (DOE SC) “Basic Research Needs for Microelectronics” (BRN) report, which called for a “fundamental rethinking” of the traditional design approach to microelectronics, in which subject matter experts (SMEs) in each microelectronics discipline (materials, devices, circuits, algorithms, etc.) work near-independently. Instead, the BRN called for a non-hierarchical, egalitarian vision of co-design, wherein “each scientific discipline informs and engages the others” in “parallel but intimately networked efforts to create radically new capabilities.” The second theme was the recognition of the continuing breakthroughs in artificial intelligence (AI) that are currently enhancing and accelerating the solution of traditional design problems in materials science, circuit design, and electronic design automation (EDA).

More Details

Consistency testing for robust phase estimation

Physical Review A

Russo, Antonio R.; Kirby, William M.; Rudinger, Kenneth M.; Baczewski, Andrew D.; Kimmel, Shelby

We present an extension to the robust phase estimation protocol, which can identify incorrect results that would otherwise lie outside the expected statistical range. Robust phase estimation is increasingly a method of choice for applications such as estimating the effective process parameters of noisy hardware, but its robustness is dependent on the noise satisfying certain threshold assumptions. We provide consistency checks that can indicate when those thresholds have been violated, which can be difficult or impossible to test directly. We test these consistency checks for several common noise models, and identify two possible checks with high accuracy in locating the point in a robust phase estimation run at which further estimates should not be trusted. One of these checks may be chosen based on resource availability, or they can be used together in order to provide additional verification.

More Details