Publications

Results 301–350 of 9,998

Quantum Transport Simulations for Si:P δ-layer Tunnel Junctions

International Conference on Simulation of Semiconductor Processes and Devices, SISPAD

Mendez Granado, Juan P.; Gao, Xujiao G.; Mamaluy, Denis M.; Misra, Shashank M.

We present an efficient self-consistent implementation of the Non-Equilibrium Green Function formalism, based on the Contact Block Reduction method for numerical efficiency and on the predictor-corrector approach, together with the Anderson mixing scheme, for the self-consistent solution of the Poisson and Schrödinger equations. We then apply this quantum transport framework to investigate 2D horizontal Si:P δ-layer tunnel junctions. We find that the potential barrier height varies with the tunnel gap width and the applied bias, and that the sign of a single charge impurity in the tunnel gap plays an important role in the electrical current.
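
The self-consistent loop described above is, at its core, an accelerated fixed-point iteration: a potential is fed to the NEGF solver, the resulting charge density is fed to the Poisson equation, and the updated potential is mixed with previous iterates to speed up convergence. The sketch below is not the authors' implementation; it is a minimal, generic Anderson-mixed fixed-point iteration in Python, where the map `g` stands in for one hypothetical Poisson/NEGF update of the potential on the device grid.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, beta=0.3, tol=1e-8, max_iter=200):
    """Anderson-accelerated fixed-point iteration for x = g(x).

    In a self-consistent Poisson/NEGF setting, x would be the potential on the
    device grid and g(x) the potential returned after one NEGF charge-density
    evaluation followed by a Poisson solve (both hypothetical placeholders here).
    """
    x = np.asarray(x0, dtype=float)
    X_hist, F_hist = [], []                      # iterate and residual history
    for k in range(max_iter):
        f = g(x) - x                             # fixed-point residual
        if np.linalg.norm(f) < tol * max(1.0, np.linalg.norm(x)):
            return x, k
        X_hist.append(x.copy())
        F_hist.append(f.copy())
        if len(X_hist) > m + 1:                  # keep at most m difference pairs
            X_hist.pop(0)
            F_hist.pop(0)
        if len(X_hist) > 1:
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + beta * f - (dX + beta * dF) @ gamma   # Anderson mixing update
        else:
            x = x + beta * f                     # plain damped step to start
    return x, max_iter

# toy usage: the componentwise fixed point of cos(x)
x_star, iters = anderson_fixed_point(np.cos, np.zeros(4))
print(x_star, iters)
```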

LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales

Computer Physics Communications

Thompson, Aidan P.; Aktulga, H.M.; Berger, Richard; Bolintineanu, Dan S.; Brown, W.M.; Crozier, Paul C.; In 'T Veld, Pieter J.; Kohlmeyer, Axel; Moore, Stan G.; Nguyen, Trung D.; Shan, Ray; Stevens, Mark J.; Tranchida, Julien; Trott, Christian R.; Plimpton, Steven J.

Since the classical molecular dynamics simulator LAMMPS was released as an open source code in 2004, it has become a widely-used tool for particle-based modeling of materials at length scales ranging from atomic to mesoscale to continuum. Reasons for its popularity are that it provides a wide variety of particle interaction models for different materials, that it runs on any platform from a single CPU core to the largest supercomputers with accelerators, and that it gives users control over simulation details, either via the input script or by adding code for new interatomic potentials, constraints, diagnostics, or other features needed for their models. As a result, hundreds of people have contributed new capabilities to LAMMPS and it has grown from fifty thousand lines of code in 2004 to a million lines today. In this paper several of the fundamental algorithms used in LAMMPS are described along with the design strategies which have made it flexible for both users and developers. We also highlight some capabilities recently added to the code which were enabled by this flexibility, including dynamic load balancing, on-the-fly visualization, magnetic spin dynamics models, and quantum-accuracy machine learning interatomic potentials.
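
To make the abstract's point about input-script control concrete, here is a small example of driving LAMMPS through its Python module. The commands are adapted from the standard Lennard-Jones melt example shipped with LAMMPS (not taken from this paper) and assume a LAMMPS build with the shared library and Python package installed.

```python
# Assumes a LAMMPS build with the shared library and its Python package installed.
from lammps import lammps

lmp = lammps()                       # create a LAMMPS instance in this process
lmp.commands_string("""
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
fix             1 all nve
thermo          100
""")
lmp.command("run 1000")              # run 1000 timesteps
print("final potential energy:", lmp.get_thermo("pe"))
```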

Ultradoping Boron on Si(100) via Solvothermal Chemistry

Chemistry - A European Journal

Frederick, Esther F.; Campbell, Quinn C.; Kolesnichenko, Igor K.; Pena, Luis F.; Benavidez, Angelica; Anderson, Evan M.; Wheeler, David R.; Misra, Shashank M.

Ultradoping introduces unprecedented dopant levels into Si, which transforms its electronic behavior and enables its use as a next-generation electronic material. Commercialization of ultradoping is currently limited by gas-phase ultra-high vacuum requirements. Solvothermal chemistry is amenable to scale-up. However, an integral part of ultradoping is a direct chemical bond between dopants and Si, and solvothermal dopant-Si surface reactions are not well-developed. This work provides the first quantified demonstration of achieving ultradoping concentrations of boron (∼10¹⁴ cm⁻²) by using a solvothermal process. Surface characterizations indicate the catalyst cross-reacted, which led to multiple surface products and caused ambiguity in experimental confirmation of direct surface attachment. Density functional theory computations elucidate that the reaction results in direct B−Si surface bonds. This proof-of-principle work lays groundwork for emerging solvothermal ultradoping processes.

Mode-Selective Vibrational Energy Transfer Dynamics in 1,3,5-Trinitroperhydro-1,3,5-triazine (RDX) Thin Films

Journal of Physical Chemistry A

Cole-Filipiak, Neil C.; Knepper, Robert; Wood, Mitchell A.; Ramasesha, Krupa R.

The coupling of inter- and intramolecular vibrations plays a critical role in initiating chemistry during the shock-to-detonation transition in energetic materials. Herein, we report on the subpicosecond to subnanosecond vibrational energy transfer (VET) dynamics of the solid energetic material 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) by using broadband, ultrafast infrared transient absorption spectroscopy. Experiments reveal VET occurring on three distinct time scales: subpicosecond, 5 ps, and 200 ps. The ultrafast appearance of signal at all probed modes in the mid-infrared suggests strong anharmonic coupling of all vibrations in the solid, whereas the long-lived evolution demonstrates that VET is incomplete, and thus thermal equilibrium is not attained, even on the 100 ps time scale. Density functional theory and classical molecular dynamics simulations provide valuable insights into the experimental observations, revealing compression-insensitive time scales for the initial VET dynamics of high-frequency vibrations and drastically extended relaxation times for low-frequency phonon modes under lattice compression. Mode selectivity of the longest dynamics suggests coupling of the N-N and axial NO2 stretching modes with the long-lived, excited phonon bath.

A FETI approach to domain decomposition for meshfree discretizations of nonlocal problems

Computer Methods in Applied Mechanics and Engineering

Xu, Xiao; Glusa, Christian A.; D'Elia, Marta D.; Foster, John E.

We propose a domain decomposition method for the efficient simulation of nonlocal problems. Our approach is based on a multi-domain formulation of a nonlocal diffusion problem where the subdomains share “nonlocal” interfaces of the size of the nonlocal horizon. This system of nonlocal equations is first rewritten in terms of minimization of a nonlocal energy, then discretized with a meshfree approximation and finally solved via a Lagrange multiplier approach in a way that resembles the finite element tearing and interconnecting (FETI) method. Specifically, we propose a distributed projected gradient algorithm for the solution of the Lagrange multiplier system, whose unknowns determine the nonlocal interface conditions between subdomains. Several two-dimensional numerical tests on problems with as many as 191 million unknowns illustrate the strong and the weak scalability of our algorithm, which outperforms the standard approach to the distributed numerical solution of the problem. Finally, this work is the first rigorous numerical study in a two-dimensional multi-domain setting for nonlocal operators with finite horizon and, as such, it is a fundamental step towards increasing the use of nonlocal models in large scale simulations.
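
The Lagrange-multiplier (interface) system mentioned above is a constrained quadratic problem, and the distributed projected gradient algorithm is a projected-gradient solve of it. As a rough serial illustration only (not the paper's distributed algorithm; the operator, right-hand side, and projection below are user-supplied stand-ins), a projected gradient for such a quadratic looks like this:

```python
import numpy as np

def projected_gradient(apply_F, d, project, lam0, tol=1e-8, max_iter=1000):
    """Projected gradient for min_lam 0.5*lam^T F lam - lam^T d over lam in C.

    apply_F : callable returning F @ lam (in a FETI-like method each product is
              assembled from independent subdomain solves; here it is any stand-in)
    project : callable mapping a vector onto the constraint set C
    """
    lam = project(np.asarray(lam0, dtype=float))
    for k in range(max_iter):
        grad = apply_F(lam) - d                        # gradient of the quadratic
        Fg = apply_F(grad)
        denom = grad @ Fg
        alpha = (grad @ grad) / denom if denom > 0 else 1.0   # heuristic step length
        lam_new = project(lam - alpha * grad)
        if np.linalg.norm(lam_new - lam) < tol * max(1.0, np.linalg.norm(lam)):
            return lam_new, k
        lam = lam_new
    return lam, max_iter

# toy usage: small SPD system with a nonnegativity constraint on the multipliers
F = np.array([[4.0, 1.0], [1.0, 3.0]])
d = np.array([1.0, -2.0])
lam, iters = projected_gradient(lambda v: F @ v, d,
                                lambda v: np.maximum(v, 0.0), np.zeros(2))
print(lam, iters)      # expect roughly [0.25, 0.0]
```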

GDSA Framework Development and Process Model Integration FY2021

Mariner, Paul M.; Berg, Timothy M.; Debusschere, Bert D.; Eckert, Aubrey C.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Mills, Melissa M.; Nole, Michael A.; Park, Heeho D.; Perry, F.V.; Seidl, Daniel T.; Swiler, Laura P.; Chang, Kyung W.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (DOE 2012, Table 6; Sevougian et al. 2019). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media.

Comprehensive Material Characterization and Simultaneous Model Calibration for Improved Computational Simulation Credibility

Seidl, Daniel T.; Jones, Elizabeth M.; Lester, Brian T.

Computational simulation is increasingly relied upon for high-consequence engineering decisions, and a foundational element to solid mechanics simulations is a credible material model. Our ultimate vision is to interlace material characterization and model calibration in a real-time feedback loop, where the current model calibration results will drive the experiment to load regimes that add the most useful information to reduce parameter uncertainty. The current work investigated one key step to this Interlaced Characterization and Calibration (ICC) paradigm, using a finite load-path tree to incorporate history/path dependency of nonlinear material models into a network of surrogate models that replace computationally-expensive finite-element analyses. Our reference simulation was an elastoplastic material point subject to biaxial deformation with a Hill anisotropic yield criterion. Training data was generated using either a space-filling or adaptive sampling method, and surrogates were built using either Gaussian process or polynomial chaos expansion methods. Surrogate error was evaluated to be on the order of 10⁻⁵ and 10⁻³ percent for the space-filling and adaptive sampling training data, respectively. Direct Bayesian inference was performed with the surrogate network and with the reference material point simulator, and results agreed to within 3 significant figures for the mean parameter values, with a reduction in computational cost of over 5 orders of magnitude. These results bought down risk regarding the surrogate network and facilitated a successful FY22-24 full LDRD proposal to research and develop the complete ICC paradigm.
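
For readers unfamiliar with the surrogate-plus-Bayesian-calibration workflow, the sketch below shows the basic pattern on a deliberately trivial stand-in problem: sample a hypothetical material response on a space-filling design, fit a Gaussian process surrogate, and run a Metropolis sampler against the surrogate instead of the expensive simulator. The two-parameter "simulator", bounds, and noise level are invented for illustration and are far simpler than the history-dependent surrogate network used in the ICC work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
lo, hi = np.array([100.0, 500.0]), np.array([300.0, 2000.0])   # parameter bounds

def simulator(theta):
    """Hypothetical 'expensive' response: stress at two strain levels as a
    function of theta = (yield stress, hardening modulus)."""
    sy, h = theta
    return np.array([sy + h * 0.02, sy + h * 0.08])

# 1. Space-filling training data and a Gaussian process surrogate
thetas = rng.uniform(lo, hi, size=(40, 2))
ys = np.array([simulator(t) for t in thetas])
gp = GaussianProcessRegressor(ConstantKernel() * RBF([50.0, 500.0]),
                              normalize_y=True).fit(thetas, ys)

# 2. Synthetic measurement and Metropolis sampling against the surrogate
theta_true = np.array([220.0, 1200.0])
sigma = 1.0
y_obs = simulator(theta_true) + rng.normal(0.0, sigma, size=2)

def log_post(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                              # uniform prior on the box
    resid = y_obs - gp.predict(theta.reshape(1, -1))[0]
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([200.0, 1000.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [5.0, 50.0])     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print("posterior mean:", np.array(chain[5000:]).mean(axis=0))
```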

Revealing quantum effects in highly conductive δ-layer systems

Communications Physics

Mamaluy, Denis M.; Mendez Granado, Juan P.; Gao, Xujiao G.; Misra, Shashank M.

Thin, high-density layers of dopants in semiconductors, known as δ-layer systems, have recently attracted attention as a platform for exploring future quantum and classical computing when patterned in plane with atomic precision. However, there are many aspects of the conductive properties of these systems that are still unknown. Here we present an open-system quantum transport treatment to investigate the local density of electron states and the conductive properties of the δ-layer systems. A successful application of this treatment to a phosphorus δ-layer in silicon both explains the origin of recently-observed shallow sub-bands and reproduces the sheet resistance values measured by different experimental groups. Further analysis reveals two main quantum-mechanical effects: 1) the existence of spatially distinct layers of free electrons with different average energies; 2) significant dependence of sheet resistance on the δ-layer thickness for a fixed sheet charge density.

Predictive Data-driven Platform for Subsurface Energy Production

Yoon, Hongkyu Y.; Verzi, Stephen J.; Cauthen, Katherine R.; Musuvathy, Srideep M.; Melander, Darryl J.; Norland, Kyle; Morales, Adriana M.; Lee, Jonghyun; Sun, Alexander

Subsurface energy activities such as unconventional resource recovery, enhanced geothermal energy systems, and geologic carbon storage require fast and reliable methods to account for complex, multiphysical processes in heterogeneous fractured and porous media. Although reservoir simulation is considered the industry standard for simulating these subsurface systems with injection and/or extraction operations, reservoir simulation requires incorporating spatio-temporal “Big Data” into the simulation model, which is typically a major challenge during the model development and computational phases. In this work, we developed and applied various deep neural network-based approaches to (1) process multiscale image segmentation, (2) generate ensemble members of drainage networks, flow channels, and porous media using deep convolutional generative adversarial networks, (3) construct multiple hybrid neural networks such as convolutional LSTM and convolutional neural network-LSTM to develop fast and accurate reduced order models for shale gas extraction, and (4) apply physics-informed neural networks and deep Q-learning for flow and energy production. We hypothesized that physics-based machine learning/deep learning can overcome the shortcomings of traditional machine learning methods, where data-driven models have faltered beyond the data and physical conditions used for training and validation. We improved and developed novel approaches to demonstrate that physics-based ML can allow us to incorporate physical constraints (e.g., scientific domain knowledge) into the ML framework. Outcomes of this project will be readily applicable for many energy and national security problems that are particularly defined by multiscale features and network systems.

Propagation of a Stress Pulse in a Heterogeneous Elastic Bar

Journal of Peridynamics and Nonlocal Modeling

Silling, Stewart A.

The propagation of a wave pulse due to low-speed impact on a one-dimensional, heterogeneous bar is studied. Due to the dispersive character of the medium, the pulse attenuates as it propagates. This attenuation is studied over propagation distances that are much longer than the size of the microstructure. A homogenized peridynamic material model can be calibrated to reproduce the attenuation and spreading of the wave. The calibration consists of matching the dispersion curve for the heterogeneous material near the limit of long wavelengths. It is demonstrated that the peridynamic method reproduces the attenuation of wave pulses predicted by an exact microstructural model over large propagation distances.
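
For orientation, the calibration target mentioned above is the dispersion curve. A standard relation from linear peridynamics (stated here for context rather than quoted from the paper): for a one-dimensional linear peridynamic bar with mass density $\rho$, micromodulus $C(\xi)$, and horizon $\delta$, plane waves $e^{i(kx-\omega t)}$ satisfy

$$\rho\,\omega^2(k) \;=\; \int_{-\delta}^{\delta} C(\xi)\,\bigl(1-\cos(k\xi)\bigr)\,d\xi .$$

Expanding $1-\cos(k\xi)\approx (k\xi)^2/2$ for long wavelengths shows that matching the $k \to 0$ limit of this curve fixes the effective local modulus,

$$\rho\,c^2 \;=\; \lim_{k\to 0}\frac{\rho\,\omega^2(k)}{k^2} \;=\; \frac{1}{2}\int_{-\delta}^{\delta} C(\xi)\,\xi^2\,d\xi ,$$

while the curvature of the same dispersion curve near $k=0$ controls the spreading and attenuation of the pulse that the homogenized model is calibrated to reproduce.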

Mapping Stochastic Devices to Probabilistic Algorithms

Aimone, James B.; Safonov, Alexander M.

Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network and what the quality of random number generation should be. This research aimed to start addressing these two fundamental questions by examining how basic “off the shelf” neural networks can be sampled through a few different mechanisms (including synapse “dropout” and neuron “dropout”) and examining how these sampling approaches can be evaluated, both in terms of algorithm effectiveness and in terms of the required quality of random numbers.
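
As a minimal illustration of the sampling mechanisms mentioned above (and only that; the network size and keep-probabilities below are arbitrary, not those of the study), the following NumPy sketch repeatedly evaluates a small fixed network with Bernoulli masks applied to weights (synapse "dropout") and to hidden units (neuron "dropout"), then uses the spread of the outputs as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fixed (nominally "trained") 2-layer network; weights are random here
# only to keep the sketch self-contained.
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def forward(x, neuron_keep=1.0, synapse_keep=1.0):
    """One stochastic forward pass with synapse and/or neuron dropout."""
    W1_s = W1 * rng.binomial(1, synapse_keep, W1.shape)   # synapse "dropout"
    h = np.maximum(0.0, W1_s @ x + b1)
    h = h * rng.binomial(1, neuron_keep, h.shape)         # neuron "dropout"
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())                     # softmax
    return e / e.sum()

x = rng.normal(size=4)
samples = np.array([forward(x, neuron_keep=0.8, synapse_keep=0.9)
                    for _ in range(1000)])
print("mean class probabilities:", samples.mean(axis=0))
print("per-class std (uncertainty):", samples.std(axis=0))
```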

Incentivizing Adoption of Software Quality Practices

Raybourn, Elaine M.; Milewicz, Reed M.; Mundt, Miranda R.

Although many software teams across the laboratories comply with yearly software quality engineering (SQE) assessments, the practice of introducing quality into each phase of the software lifecycle, or the team processes, may vary substantially. Even with the support of a quality engineer, many teams struggle to adapt and right-size software engineering best practices in quality to fit their context, and these activities aren’t framed in a way that motivates teams to take action. In short, software quality is often a “check the box for compliance” activity instead of a cultural practice that both values software quality and knows how to achieve it. In this report, we present the results of our 6600 VISTA Innovation Tournament project, "Incentivizing and Motivating High Confidence and Research Software Teams to Adopt the Practice of Quality." We present our findings and roadmap for future work based on 1) a rapid review of relevant literature, 2) lessons learned from an internal design thinking workshop, and 3) an external Collegeville 2021 workshop. These activities provided an opportunity for team ideation and community engagement/feedback. Based on our findings, we believe a coordinated effort (e.g. strategic communication campaign) aimed at diffusing the innovation of the practice of quality across Sandia National Laboratories could over time effect meaningful organizational change. As such, our roadmap addresses strategies for motivating and incentivizing individuals ranging from early career to seasoned software developers/scientists.

Sphynx: A parallel multi-GPU graph partitioner for distributed-memory systems

Parallel Computing

Acer, Seher A.; Boman, Erik G.; Glusa, Christian A.; Rajamanickam, Sivasankaran R.

Graph partitioning has been an important tool to partition the work among several processors to minimize the communication cost and balance the workload. While accelerator-based supercomputers are emerging as the standard, the use of graph partitioning becomes even more important as applications are rapidly moving to these architectures. However, there is no distributed-memory-parallel, multi-GPU graph partitioner available for applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. In Sphynx, we allow using different preconditioners and exploit their unique advantages. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on the GPU performance. We perform those evaluations on two distinct classes of graphs: regular (such as meshes and matrices from finite element methods) and irregular (such as social networks and web graphs), and show that different settings and preconditioners are needed for these graph classes. The experimental results on the Summit supercomputer show that Sphynx is the fastest alternative on irregular graphs in an application-friendly setting and obtains a partitioning quality close to ParMETIS on regular graphs. When compared to nvGRAPH on a single GPU, Sphynx is faster and obtains better balance and better quality partitions. Sphynx provides a good and robust partitioning method across a wide range of graphs for applications looking for a GPU-based partitioner.
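
For readers new to spectral partitioning, the idea Sphynx builds on can be shown in a few lines: form the graph Laplacian, compute a few of its smallest eigenvectors, and split the vertices in that embedding. The toy sketch below does a serial two-way split with a dense eigensolver; Sphynx instead computes the eigenvectors with preconditioned iterative solvers from Trilinos on multiple GPUs and produces many parts.

```python
import numpy as np

def spectral_bisection(A):
    """Bisect a graph given its dense symmetric adjacency matrix A using the
    Fiedler vector of the combinatorial Laplacian L = D - A."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A
    vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    fiedler = vecs[:, 1]                      # second-smallest eigenpair
    n = len(fiedler)
    part = np.zeros(n, dtype=int)
    part[np.argsort(fiedler)[n // 2:]] = 1    # balanced split along the Fiedler vector
    return part

# toy graph: two 4-cliques connected by a single edge
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
A[3, 4] = A[4, 3] = 1.0
print(spectral_bisection(A))   # expect the two cliques to land in different parts
```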

Multimode Metastructures: Novel Hybrid 3D Lattice Topologies

Boyce, Brad B.; Garland, Anthony G.; White, Benjamin C.; Jared, Bradley H.; Conway, Kaitlynn; Adstedt, Katerina; Dingreville, Remi P.; Robbins, Joshua R.; Walsh, Timothy W.; Alvis, Timothy A.; Branch, Brittany A.; Kaehr, Bryan J.; Kunka, Cody; Leathe, Nicholas L.

With the rapid proliferation of additive manufacturing and 3D printing technologies, architected cellular solids including truss-like 3D lattice topologies offer the opportunity to program the effective material response through topological design at the mesoscale. The present report summarizes several of the key findings from a 3-year Laboratory Directed Research and Development Program. The program set out to explore novel lattice topologies that can be designed to control, redirect, or dissipate energy from one or multiple insult environments relevant to Sandia missions, including crush, shock/impact, vibration, thermal, etc. In the first 4 sections, we document four novel lattice topologies stemming from this study: coulombic lattices, multi-morphology lattices, interpenetrating lattices, and pore-modified gyroid cellular solids, each with unique properties that had not been achieved by existing cellular/lattice metamaterials. The fifth section explores how unintentional lattice imperfections stemming from the manufacturing process, primarily surface roughness in the case of laser powder bed fusion, cause stochastic response, but also how in some cases, such as elastic response, the stochastic behavior is homogenized through the adoption of lattices. In the sixth section we explore a novel neural network screening process that allows such stochastic variability to be predicted. In the last three sections, we explore considerations of computational design of lattices. Specifically, in section 7 we use a novel generative optimization scheme to design Pareto-optimal lattices for multi-objective environments. In section 8, we use computational design to optimize a metallic lattice structure to absorb impact energy for a 1000 ft/s impact. And in section 9, we develop a modified micromorphic continuum model to solve wave propagation problems in lattices efficiently.

Sensitivity Analysis Comparisons on Geologic Case Studies: An International Collaboration

Swiler, Laura P.; Becker, Dirk-Alexander; Brooks, Dusty M.; Govaerts, Joan; Koskinen, Lasse; Plischke, Elmar; Rohlig, Klaus-Jurgen; Saveleva, Elena; Spiessl, Sabine M.; Stein, Emily S.; Svitelman, Valentina

Over the past four years, an informal working group has developed to investigate existing sensitivity analysis methods, examine new methods, and identify best practices. The focus is on the use of sensitivity analysis in case studies involving geologic disposal of spent nuclear fuel or nuclear waste. To examine ideas and have applicable test cases for comparison purposes, we have developed multiple case studies. Four of these case studies are presented in this report: the GRS clay case, the SNL shale case, the Dessel case, and the IBRAE groundwater case. We present the different sensitivity analysis methods investigated by various groups, the results obtained by different groups and different implementations, and summarize our findings.

Thermal Infrared Detectors: expanding performance limits using ultrafast electron microscopy

Talin, A.A.; Ellis, Scott; Bartelt, Norman C.; Leonard, Francois L.; Perez, Christopher P.; Celio, Km; Fuller, Elliot J.; Hughart, David R.; Garland, Diana; Marinella, Matthew J.; Michael, Joseph R.; Chandler, D.W.; Young, Steve M.; Smith, Sean M.; Kumar, Suhas K.

This project aimed to identify the performance-limiting mechanisms in mid- to far infrared (IR) sensors by probing photogenerated free carrier dynamics in model detector materials using scanning ultrafast electron microscopy (SUEM). SUEM is a recently developed method based on using ultrafast electron pulses in combination with optical excitations in a pump-probe configuration to examine charge dynamics with high spatial and temporal resolution and without the need for microfabrication. Five material systems were examined using SUEM in this project: polycrystalline lead zirconium titanate (a pyroelectric), polycrystalline vanadium dioxide (a bolometric material), GaAs (near IR), InAs (mid IR), and the Si/SiO2 system as a prototypical system for interface charge dynamics. The report provides detailed results for the Si/SiO2 and the lead zirconium titanate systems.

ASCEND: Asymptotically compatible strong form foundations for nonlocal discretization

Trask, Nathaniel A.; D'Elia, Marta D.; Littlewood, David J.; Silling, Stewart A.; Trageser, Jeremy T.; Tupek, Michael R.

Nonlocal models naturally handle a range of physics of interest to SNL, but discretization of their underlying integral operators poses mathematical challenges to realize the accuracy and robustness commonplace in discretization of local counterparts. This project focuses on the concept of asymptotic compatibility, namely preservation of the limit of the discrete nonlocal model to a corresponding well-understood local solution. We address challenges that have traditionally troubled nonlocal mechanics models primarily related to consistency guarantees and boundary conditions. For simple problems such as diffusion and linear elasticity we have developed complete error analysis theory providing consistency guarantees. We then take these foundational tools to develop new state-of-the-art capabilities for: lithiation-induced failure in batteries, ductile failure of problems driven by contact, blast-on-structure induced failure, and brittle/ductile failure of thin structures. We also summarize ongoing efforts using these frameworks in data-driven modeling contexts. This report provides a high-level summary of all publications which followed from these efforts.
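
For concreteness, here is one common formulation of the setting (not necessarily the exact operators used in the project): a nonlocal diffusion model replaces the Laplacian with an integral operator over a ball of radius $\delta$ (the horizon),

$$\mathcal{L}_\delta u(x) \;=\; \int_{B_\delta(x)} \bigl(u(y)-u(x)\bigr)\,\gamma_\delta(x,y)\,dy ,$$

with the kernel $\gamma_\delta$ scaled so that $\mathcal{L}_\delta u \to c\,\Delta u$ as $\delta \to 0$, where the constant $c$ is set by the kernel's second moment. A discretization with grid parameter $h$ is called asymptotically compatible when the discrete nonlocal solution $u_{\delta,h}$ converges to the local solution $u_0$ as $(\delta, h) \to (0, 0)$ along any path, which is exactly the limit-preservation property this project targets.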

Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) (Final Report)

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'Ifa

This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.

Integrated System and Application Continuous Performance Monitoring and Analysis Capability

Aaziz, Omar R.; Allan, Benjamin A.; Brandt, James M.; Cook, Jeanine C.; Devine, Karen D.; Elliott, James E.; Gentile, Ann C.; Hammond, Simon D.; Kelley, Brian M.; Lopatina, Lena; Moore, Stan G.; Olivier, Stephen L.; Laros, James H.; Poliakoff, David Z.; Pawlowski, Roger P.; Regier, Phillip A.; Schmitz, Mark E.; Schwaller, Benjamin S.; Surjadidjaja, Vanessa S.; Swan, Matthew S.; Tucker, Nick; Tucker, Thomas; Vaughan, Courtenay T.; Walton, Sara P.

Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia's Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data during run time, as it is generated, with negligible overhead. This data is combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or post run. This new capability was evaluated using several week-long, 290-node runs of Sandia's ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool and resulted in 1TB of application data and 50TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase total scientific output of Sandia's HPC users.

Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators

Garg, Raveesh; Qin, Eric; Martinez, Francisco M.; Guirado, Robert; Jain, Akshay; Abadal, Sergi; Abellan, Jose L.; Acacio, Manuel E.; Alarcon, Eduard; Rajamanickam, Sivasankaran R.; Krishna, Tushar

Graph Neural Networks (GNNs) have garnered a lot of recent interest because of their success in learning representations from graph-structured data across several critical applications in cloud and HPC. Owing to their unique compute and memory characteristics that come from an interplay between dense and sparse phases of computations, the emergence of reconfigurable dataflow (aka spatial) accelerators offers promise for acceleration by mapping optimized dataflows (i.e., computation order and parallelism) for both phases. The goal of this work is to characterize and understand the design-space of dataflow choices for running GNNs on spatial accelerators so that compilers can optimize the dataflow based on the workload. Specifically, we propose a taxonomy to describe all possible choices for mapping the dense and sparse phases of GNNs spatially and temporally over a spatial accelerator, capturing both the intra-phase dataflow and the inter-phase (pipelined) dataflow. Using this taxonomy, we do deep dives into the costs and benefits of several dataflows and perform case studies on the implications of hardware parameters for dataflows and the value of flexibility to support pipelined execution.
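
To ground the dense/sparse terminology, a single GCN-style layer factors exactly into the two phases the taxonomy is about: a sparse aggregation over the graph (an SpMM) followed by a dense feature update (a GEMM). The NumPy/SciPy sketch below is only meant to show the two phases on a toy graph; dataflow choices on a spatial accelerator then correspond to how each product is ordered, tiled, and parallelized, and how the two phases are pipelined.

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj_norm: sp.csr_matrix, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    H = adj_norm @ X           # sparse phase: neighborhood aggregation (SpMM)
    Z = H @ W                  # dense phase: feature transformation (GEMM)
    return np.maximum(Z, 0.0)  # ReLU

# toy graph with 4 nodes, self-loops, and row-normalized adjacency
A = sp.csr_matrix(np.array([[1, 1, 0, 0],
                            [1, 1, 1, 0],
                            [0, 1, 1, 1],
                            [0, 0, 1, 1]], dtype=float))
A_norm = sp.diags(1.0 / np.asarray(A.sum(axis=1)).ravel()) @ A

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))    # node features
W = rng.normal(size=(8, 16))   # layer weights
print(gcn_layer(A_norm, X, W).shape)   # (4, 16)
```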

Beating random assignment for approximating quantum 2-local Hamiltonian problems

Leibniz International Proceedings in Informatics, LIPIcs

Parekh, Ojas D.; Thompson, Kevin T.

The quantum k-Local Hamiltonian problem is a natural generalization of classical constraint satisfaction problems (k-CSP) and is complete for QMA, a quantum analog of NP. Although the complexity of k-Local Hamiltonian problems has been well studied, only a handful of approximation results are known. For Max 2-Local Hamiltonian where each term is a rank 3 projector, a natural quantum generalization of classical Max 2-SAT, the best known approximation algorithm was the trivial random assignment, yielding a 0.75-approximation. We present the first approximation algorithm beating this bound, a classical polynomial-time 0.764-approximation. For strictly quadratic instances, which are maximally entangled instances, we provide a 0.801-approximation algorithm, and numerically demonstrate that our algorithm is likely a 0.821-approximation. We conjecture these are the hardest instances to approximate. We also give improved approximations for quantum generalizations of other related classical 2-CSPs. Finally, we exploit quantum connections to a generalization of the Grothendieck problem to obtain a classical constant-factor approximation for the physically relevant special case of strictly quadratic traceless 2-Local Hamiltonians on bipartite interaction graphs, where an inverse-logarithmic approximation was the best previously known (for general interaction graphs). Our work employs recently developed techniques for analyzing classical approximations of CSPs and is intended to be accessible to both quantum information scientists and classical computer scientists.
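
To see where the 0.75 baseline comes from (a standard argument, paraphrased here rather than quoted from the paper): each interacting pair of qubits $(i,j)$ contributes a rank-3 projector $H_{ij}$, and the "random assignment" corresponds to measuring the energy of the maximally mixed state $\rho = I/2^n$, for which each term contributes

$$\operatorname{Tr}\bigl(H_{ij}\,\rho\bigr) \;=\; \frac{\operatorname{rank}(H_{ij})}{4} \;=\; \frac{3}{4},$$

while no state can score more than 1 on any single term. The maximally mixed state therefore already achieves at least $3/4$ of the optimum, and the algorithms above construct product or entangled states whose per-term energy beats this baseline.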

Efficient flexible characterization of quantum processors with nested error models

New Journal of Physics

Nielsen, Erik N.; Rudinger, Kenneth M.; Proctor, Timothy J.; Young, Kevin C.; Blume-Kohout, Robin J.

We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and to each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
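
The iterate-over-nested-models idea can be illustrated with a toy, purely classical stand-in (this is not the paper's model classes, its fitting machinery, or its wildcard-error metric): fit a sequence of increasingly flexible models to the same data set, score each fit, keep the best, and quantify what remains unexplained, here crudely via the deviance against a saturated model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: counts of "1" outcomes for circuits of increasing depth d.
depths = np.arange(1, 21)
shots = 1000
p_true = 0.5 + 0.45 * 0.95**depths * np.cos(0.15 * depths)
counts = rng.binomial(shots, p_true)

def nll(p):                                  # binomial negative log-likelihood
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(counts * np.log(p) + (shots - counts) * np.log(1 - p))

# A loosely nested sequence: each model adds one parameter to the previous one.
models = [
    ("constant",  1, lambda th, d: np.full_like(d, th[0], dtype=float), [0.6]),
    ("decay",     2, lambda th, d: 0.5 + th[0] * th[1]**d, [0.4, 0.9]),
    ("decay+osc", 3, lambda th, d: 0.5 + th[0] * th[1]**d * np.cos(th[2] * d),
     [0.4, 0.9, 0.1]),
]

nll_saturated = nll(counts / shots)          # best possible fit to this data set
best = None
for name, k, predict, x0 in models:
    res = minimize(lambda th: nll(predict(np.asarray(th), depths)), x0,
                   method="Nelder-Mead")
    aic = 2 * k + 2 * res.fun
    leftover = 2 * (res.fun - nll_saturated)  # deviance: a rough "unmodeled error"
    print(f"{name:10s}  AIC={aic:9.1f}  unmodeled-error proxy={leftover:8.1f}")
    if best is None or aic < best[1]:
        best = (name, aic)
print("selected model:", best[0])
```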

Physics-Based Optical Neuromorphic Classification

Leonard, Francois L.; Teeter, Corinne M.; Vineyard, Craig M.

Typical approaches to classify scenes from light convert the light field to electrons to perform the computation in the digital electronic domain. This conversion and downstream computational analysis require significant power and time. Diffractive neural networks have recently emerged as unique systems to classify optical fields at lower energy and high speeds. Previous work has shown that a single layer of diffractive metamaterial can achieve high performance on classification tasks. In analogy with electronic neural networks, it is anticipated that multilayer diffractive systems would provide better performance, but the fundamental reasons for the potential improvement have not been established. In this work, we present extensive computational simulations of two-layer diffractive neural networks and show that they can achieve high performance with fewer diffractive features than single-layer systems.
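
As a rough sense of what a "two-layer diffractive network" means computationally (the phase masks below are random rather than trained, and the geometry is purely illustrative), a forward pass is just: modulate the input field by a phase mask, propagate through free space, modulate by a second mask, propagate again, and read out intensities on detector regions. The angular-spectrum propagator used here is a standard scalar-optics method, not necessarily the solver used in this work.

```python
import numpy as np

def angular_spectrum_propagate(U, dx, wavelength, z):
    """Propagate a complex field U (N x N, sample spacing dx) a distance z
    in free space with the angular-spectrum method."""
    N = U.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(U) * H)

rng = np.random.default_rng(0)
N, dx, lam, z = 128, 1e-6, 0.75e-6, 200e-6     # illustrative numbers only
phase1 = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))   # "layer 1" mask
phase2 = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))   # "layer 2" mask

U0 = np.zeros((N, N), dtype=complex)
U0[48:80, 48:80] = 1.0                         # a simple input aperture "image"
U1 = angular_spectrum_propagate(U0 * phase1, dx, lam, z)
U2 = angular_spectrum_propagate(U1 * phase2, dx, lam, z)
intensity = np.abs(U2) ** 2
quadrants = [intensity[:N//2, :N//2].sum(), intensity[:N//2, N//2:].sum(),
             intensity[N//2:, :N//2].sum(), intensity[N//2:, N//2:].sum()]
print("intensity on 4 quadrant 'detectors':", quadrants)
```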
