Publications

Results 2851–2900 of 9,998

Engineering Spin-Orbit Interaction in Silicon

Lu, Tzu-Ming L.; Maurer, Leon M.; Bussmann, Ezra B.; Harris, Charles T.; Tracy, Lisa A.; Sapkota, Keshab R.

There has been much interest in leveraging the topological order of materials for quantum information processing. Among the various solid-state systems, one-dimensional topological superconductors made out of strongly spin-orbit-coupled nanowires have been shown to be the most promising material platform. In this project, we investigated the feasibility of turning silicon, which is a non-topological semiconductor and has weak spin-orbit coupling, into a one-dimensional topological superconductor. Our theoretical analysis showed that it is indeed possible to create a sizable effective spin-orbit gap in the energy spectrum of a ballistic one-dimensional electron channel in silicon with the help of nano-magnet arrays. Experimentally, we developed magnetic materials needed for fabricating such nano-magnets, characterized the magnetic behavior at low temperatures, and successfully demonstrated the required magnetization configuration for opening the spin-orbit gap. Our results pave the way toward a practical topological quantum computing platform using silicon, one of the most technologically mature electronic materials.

More Details

Underlying one-step methods and nonautonomous stability of general linear methods

Discrete and Continuous Dynamical Systems - Series B

Steyer, Andrew S.; Van Vleck, Erik S.

We generalize the theory of underlying one-step methods to strictly stable general linear methods (GLMs) solving nonautonomous ordinary differential equations (ODEs) that satisfy a global Lipschitz condition. We combine this theory with the Lyapunov and Sacker-Sell spectral stability theory for one-step methods developed in [34, 35, 36] to analyze the stability of a strictly stable GLM solving a nonautonomous linear ODE. These results are applied to develop a stability diagnostic for the solution of nonautonomous linear ODEs by strictly stable GLMs.

More Details

A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

BIT Numerical Mathematics

Steyer, Andrew S.; Van Vleck, Erik S.

Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
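The QR technique the abstract relies on can be illustrated on a frozen (constant-coefficient) test problem; a minimal sketch, assuming a known per-step propagator, and not the paper's full treatment of nonautonomous spectra and integral separation:

```python
import numpy as np

def lyapunov_spectrum_qr(prop_matrices, dt):
    """Approximate the Lyapunov spectrum of x_{k+1} = M_k x_k by the
    discrete QR method: push an orthonormal frame through each transition
    matrix, re-orthonormalize, and time-average log(diag(R))."""
    n = prop_matrices[0].shape[0]
    Q = np.eye(n)
    log_r = np.zeros(n)
    for M in prop_matrices:
        Q, R = np.linalg.qr(M @ Q)
        s = np.sign(np.diag(R))        # fix QR sign ambiguity so diag(R) > 0
        Q, R = Q * s, s[:, None] * R
        log_r += np.log(np.diag(R))
    return log_r / (len(prop_matrices) * dt)
```

For a constant diagonal system the exponents recovered this way match the true decay rates of each mode.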

More Details

Quantifying Uncertainty to Improve Decision Making in Machine Learning

Stracuzzi, David J.; Darling, Michael C.; Peterson, Matthew G.; Chen, Maximillian G.

Data-driven modeling, including machine learning methods, continues to play an increasing role in society. Data-driven methods impact decision making for applications ranging from everyday determinations about which news people see and control of self-driving cars to high-consequence national security situations related to cyber security and analysis of nuclear weapons reliability. Although modern machine learning methods have made great strides in model induction and show excellent performance in a broad variety of complex domains, uncertainty remains an inherent aspect of any data-driven model. In this report, we provide an update to the preliminary results on uncertainty quantification for machine learning presented in SAND2017-6776. Specifically, we improve upon the general problem definition and expand upon the experiments conducted for the earlier report. Most importantly, we summarize key lessons learned about how and when uncertainty quantification can inform decision making and provide valuable insights into the quality of learned models and potential improvements to them.

More Details

Vanguard Astra and ATSE – an ARM-based Advanced Architecture Prototype System and Software Environment (FY18 L2 Milestone #8759 Report)

Laros, James H.; Hammond, Simon D.; Aguilar, Michael J.; Curry, Matthew L.; Grant, Ryan E.; Hoekstra, Robert J.; Klundt, Ruth A.; Monk, Stephen T.; Ogden, Jeffry B.; Olivier, Stephen L.; Scott, Randall D.; Ward, Harry L.; Younge, Andrew J.

The Vanguard program informally began in January 2017 with the submission of a white paper entitled "Sandia's Vision for a 2019 Arm Testbed" to NNSA headquarters. The program proceeded in earnest in May 2017 with an announcement by Doug Wade (Director, Office of Advanced Simulation and Computing and Institutional R&D at NNSA) that Sandia National Laboratories (Sandia) would host the first Advanced Architecture Prototype platform based on the Arm architecture. In August 2017, Sandia formed a Tri-lab team chartered to develop a robust HPC software stack for Astra to support the Vanguard program goal of demonstrating the viability of Arm in supporting ASC production computing workloads.

More Details

Validation metrics for deterministic and probabilistic data

Journal of Verification, Validation and Uncertainty Quantification

Maupin, Kathryn A.; Swiler, Laura P.; Porter, Nathan W.

Computational modeling and simulation are paramount to modern science. Computational models often replace physical experiments that are prohibitively expensive, dangerous, or occur at extreme scales. Thus, it is critical that these models accurately represent and can be used as replacements for reality. This paper provides an analysis of metrics that may be used to determine the validity of a computational model. While some metrics have a direct physical meaning and a long history of use, others, especially those that compare probabilistic data, are more difficult to interpret. Furthermore, the process of model validation is often application-specific, making the procedure itself challenging and the results difficult to defend. We therefore provide guidance and recommendations as to which validation metric to use, as well as how to use and decipher the results. An example is included that compares interpretations of various metrics and demonstrates the impact of model and experimental uncertainty on validation processes.
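One metric from this literature that compares probabilistic data directly is the area between the empirical CDFs of model output and observations (the area validation metric). A minimal sketch, assuming both inputs are 1-D sample arrays:

```python
import numpy as np

def area_metric(model_samples, data_samples, grid_n=2000):
    """Area between the empirical CDFs of model predictions and
    experimental observations; it carries the units of the quantity
    itself, which aids physical interpretation."""
    model_samples = np.sort(np.asarray(model_samples, dtype=float))
    data_samples = np.sort(np.asarray(data_samples, dtype=float))
    lo = min(model_samples[0], data_samples[0])
    hi = max(model_samples[-1], data_samples[-1])
    x = np.linspace(lo, hi, grid_n)
    # Empirical CDFs evaluated on a shared grid
    F_m = np.searchsorted(model_samples, x, side="right") / len(model_samples)
    F_d = np.searchsorted(data_samples, x, side="right") / len(data_samples)
    return np.sum(np.abs(F_m - F_d)) * (x[1] - x[0])   # Riemann sum
```

A pure shift of the model distribution by one unit yields a metric of approximately one, in the same units as the data.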

More Details

Neural Algorithms for Low Power Implementation of Partial Differential Equations

Aimone, James B.; Hill, Aaron J.; Lehoucq, Richard B.; Parekh, Ojas D.; Reeder, Leah E.; Severa, William M.

The rise of low-power neuromorphic hardware has the potential to change high-performance computing; however much of the focus on brain-inspired hardware has been on machine learning applications. A low-power solution for solving partial differential equations could radically change how we approach large-scale computing in the future. The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by grid cells in the brain. The second method tracks the densities of random walkers at each spatial location directly. We present the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions. Finally, we present implementations of these algorithms on neuromorphic hardware.
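The second, density-based strategy can be sketched in a few lines; this is only the underlying lattice update (with an illustrative periodic boundary), not the spiking implementation:

```python
import numpy as np

def step_density(density, p_left=0.5):
    """One step of a random walk on a periodic 1-D lattice, evolving the
    walker density at every site simultaneously instead of tracking
    individual walkers. Mass at site i moves to i-1 with probability
    p_left and to i+1 otherwise."""
    return p_left * np.roll(density, -1) + (1.0 - p_left) * np.roll(density, 1)
```

Starting from a point mass, repeated application spreads the density diffusively while conserving total probability.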

More Details

Building 725 Expansion

Lacy, Susan L.; Noe, John P.; Ogden, Jeffry B.; Hammond, Simon D.

In October 2017, Sandia broke ground for a new computing center dedicated to High Performance Computing. The east expansion of Building 725 was entirely conceived of, designed, and built in less than 18 months and is a certified LEED Gold design building, the first of its kind for a data center in the State of New Mexico. This 15,000 square-foot building, with novel energy and water-saving technologies, will house Astra, the first in a new generation of Advanced Architecture Prototype Systems to be deployed by the NNSA and the first of many HPC systems in Building 725 East.

More Details

Opal: A Centralized Memory Manager for Investigating Disaggregated Memory Systems

Kommareddy, Vamsee R.; Hughes, Clayton H.; Hammond, Simon D.; Awad, Amro

Many modern applications have memory footprints that are increasingly large, driving system memory capacities higher and higher. Moreover, these systems are often organized so that the bulk of the memory is collocated with the compute capability, which necessitates message-passing APIs to facilitate information sharing between compute nodes. Due to the diversity of applications that must run on High-Performance Computing (HPC) systems, memory utilization can fluctuate wildly from one application to another. And, because memory is located in the node, maintenance can become problematic because each node must be taken offline and upgraded individually. To address these issues, vendors are exploring disaggregated, memory-centric systems. In this type of organization, there are discrete nodes, reserved solely for memory, which are shared across many compute nodes. Due to their capacity, low power, and non-volatility, Non-Volatile Memories (NVMs) are ideal candidates for these memory nodes. This report discusses a new component for the Structural Simulation Toolkit (SST), Opal, that can be used to study the impact of using NVMs in a disaggregated system in terms of performance, security, and memory management.

More Details

Highly scalable discrete-particle simulations with novel coarse-graining: accessing the microscale

Molecular Physics

Mattox, Timothy I.; Larentzos, James P.; Moore, Stan G.; Stone, Christopher P.; Ibanez-Granados, Daniel A.; Thompson, Aidan P.; Lisal, Martin; Brennan, John K.; Plimpton, Steven J.

Simulating energetic materials with complex microstructure is a grand challenge, where until recently, an inherent gap in computational capabilities had existed in modelling grain-scale effects at the microscale. We have enabled a critical capability in modelling the multiscale nature of the energy release and propagation mechanisms in advanced energetic materials by implementing, in the widely used LAMMPS molecular dynamics (MD) package, several novel coarse-graining techniques that also treat chemical reactivity. Our innovative algorithmic developments rooted within the dissipative particle dynamics framework, along with performance optimisations and application of acceleration technologies, have enabled extensions in both the length and time scales far beyond those ever realised by atomistic reactive MD simulations. In this paper, we demonstrate these advances by modelling a shockwave propagating through a microstructured material and comparing performance with the state-of-the-art in atomistic reactive MD techniques. As a result of this work, unparalleled explorations in energetic materials research are now possible.

More Details

Investment optimization to improve power system resilience

2018 International Conference on Probabilistic Methods Applied to Power Systems, PMAPS 2018 - Proceedings

Pierre, Brian J.; Arguello, Bryan A.; Staid, Andrea S.; Guttromson, Ross G.

Power system utilities continue to strive for increased system resiliency. However, quantifying a baseline system resilience, and deciding the optimal investments to improve their resilience, is challenging. This paper discusses a method to create scenarios, based on historical data, that represent the threats of severe weather events, their probability of occurrence, and the system-wide consequences they generate. This paper also presents a mixed-integer stochastic nonlinear optimization model which uses the scenarios as an input to determine the optimal investments to reduce the system impacts from those scenarios. The optimization model utilizes a DC power flow to determine the loss of load during an event. Loss of load is the consequence that is minimized in this optimization model as the objective function. The results shown in this paper are from the IEEE RTS-96 three area reliability model. The scenario generation and optimization model have also been utilized on full utility models, but those results cannot be published.
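The DC power-flow subproblem inside the optimization is linear; a minimal sketch of solving it for bus angles (a toy with the function name and 3-bus data chosen for illustration, whereas the paper embeds this inside a stochastic mixed-integer model):

```python
import numpy as np

def dc_power_flow_angles(B, injections, slack=0):
    """Solve the DC power-flow equations B*theta = P for bus voltage
    angles, pinning the slack-bus angle to zero. Injections must balance
    (sum to zero); line flows then follow as f_ij = b_ij*(theta_i - theta_j)."""
    n = len(injections)
    keep = [i for i in range(n) if i != slack]
    theta = np.zeros(n)
    # B is singular (rows sum to zero), so solve the reduced system
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], injections[keep])
    return theta
```

Because the susceptance matrix rows sum to zero and injections balance, the slack-bus equation is satisfied automatically by the reduced solve.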

More Details

Stochastic unit commitment performance considering Monte Carlo wind power scenarios

2018 International Conference on Probabilistic Methods Applied to Power Systems, PMAPS 2018 - Proceedings

Rachunok, Benjamin A.; Staid, Andrea S.; Watson, Jean-Paul W.; Woodruff, David L.; Yang, Dominic

Stochastic versions of the unit commitment problem have been advocated for addressing the uncertainty presented by high levels of wind power penetration. However, little work has been done to study trade-offs between computational complexity and the quality of solutions obtained as the number of probabilistic scenarios is varied. Here, we describe extensive experiments using real publicly available wind power data from the Bonneville Power Administration. Solution quality is measured by re-enacting day-ahead reliability unit commitment (which selects the thermal units that will be used each hour of the next day) and real-time economic dispatch (which determines generation levels) for an enhanced WECC-240 test system in the context of a production cost model simulator; outputs from the simulation, including cost, reliability, and computational performance metrics, are then analyzed. Unsurprisingly, we find that both solution quality and computational difficulty increase with the number of probabilistic scenarios considered. However, we find unexpected transitions in computational difficulty at a specific threshold in the number of scenarios, and report on key trends in solution performance characteristics. Our findings are novel in that we examine these tradeoffs using real-world wind power data in the context of an out-of-sample production cost model simulation, and are relevant for both practitioners interested in deploying and researchers interested in developing scalable solvers for stochastic unit commitment.

More Details

Generation and application of multivariate polynomial quadrature rules

Computer Methods in Applied Mechanics and Engineering

Jakeman, John D.; Narayan, Akil

The search for multivariate quadrature rules of minimal size with a specified polynomial accuracy has been the topic of many years of research. Finding such a rule allows accurate integration of moments, which play a central role in many aspects of scientific computing with complex models. The contribution of this paper is twofold. First, we provide novel mathematical analysis of the polynomial quadrature problem that provides a lower bound for the minimal possible number of nodes in a polynomial rule with specified accuracy. We give concrete but simplistic multivariate examples where a minimal quadrature rule can be designed that achieves this lower bound, along with situations that showcase when it is not possible to achieve this lower bound. Our second contribution is the formulation of an algorithm that is able to efficiently generate multivariate quadrature rules with positive weights on non-tensorial domains. Our tests show success of this procedure in up to 20 dimensions. We test our method on applications to dimension reduction and chemical kinetics problems, including comparisons against popular alternatives such as sparse grids, Monte Carlo and quasi Monte Carlo sequences, and Stroud rules. The quadrature rules computed in this paper outperform these alternatives in almost all scenarios.
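The baseline the paper aims to beat is instructive to construct: a tensor-product Gauss rule whose size grows exponentially with dimension. A sketch, using NumPy's 1-D Gauss-Legendre nodes:

```python
import numpy as np

def tensor_gauss_rule(n1d, dim):
    """Tensor-product Gauss-Legendre quadrature on [-1, 1]^dim: exact for
    polynomials of degree <= 2*n1d - 1 in each coordinate, but of size
    n1d**dim -- the exponential cost that minimal multivariate rules seek
    to avoid."""
    x, w = np.polynomial.legendre.leggauss(n1d)
    grids = np.meshgrid(*([x] * dim), indexing="ij")
    nodes = np.stack([g.ravel() for g in grids], axis=1)   # (n1d**dim, dim)
    wgrids = np.meshgrid(*([w] * dim), indexing="ij")
    weights = np.prod(np.stack([g.ravel() for g in wgrids]), axis=0)
    return nodes, weights
```

Moments of polynomial integrands up to the stated degree are then integrated exactly, which is the accuracy criterion the paper's minimal rules are required to match with far fewer nodes.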

More Details

Gradient-based optimization for regression in the functional tensor-train format

Journal of Computational Physics

Gorodetsky, Alex A.; Jakeman, John D.

Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on using an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically, we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. Here, we use a number of synthetic functions to build insight into the behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally, we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
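The key structural trick — with all but one factor fixed, a low-rank separable model is linear in the remaining factor's coefficients — can be shown on a toy rank-1, two-variable analogue. This sketch (names and the alternating-least-squares solver are illustrative; the paper's FT handles many variables, higher ranks, and true gradient-based optimization) fits z ≈ g(x)·h(y):

```python
import numpy as np

def rank1_regression_als(x, y, z, degree=1, sweeps=50):
    """Fit z ~ g(x)*h(y) with polynomial factors g, h by alternating least
    squares: with h fixed the model is linear in g's coefficients, and
    vice versa, so each half-step is an ordinary least-squares solve."""
    Vx, Vy = np.vander(x, degree + 1), np.vander(y, degree + 1)
    b = np.ones(degree + 1)                 # initial guess for h
    for _ in range(sweeps):
        a = np.linalg.lstsq(Vx * (Vy @ b)[:, None], z, rcond=None)[0]
        b = np.linalg.lstsq(Vy * (Vx @ a)[:, None], z, rcond=None)[0]
    return a, b
```

On data that is exactly rank-1 in this sense, the fit recovers the product model to near machine precision.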

More Details

Wireless Temperature Sensing Using Permanent Magnets for Nonlinear Feedback Control of Exothermic Polymers

IEEE Sensors Journal

Mazumdar, Anirban; Mazumdar, Yi C.; van Bloemen Waanders, Bart G.; Brooks, Carlton F.; Kuehl, Michael K.; Nemer, Martin N.

Epoxies and resins can require careful temperature sensing and control in order to monitor and prevent degradation. To sense the temperature inside a mold, it is desirable to utilize a small, wireless sensing element. In this paper, we describe a new architecture for wireless temperature sensing and closed-loop temperature control of exothermic polymers. This architecture is the first to utilize magnetic field estimates of the temperature of permanent magnets within a temperature feedback control loop. We further improve performance and applicability by demonstrating sensing performance at relevant temperatures, incorporating a cure estimator, and implementing a nonlinear temperature controller. This novel architecture enables unique experimental results featuring closed-loop control of an exothermic resin without any physical connection to the inside of the mold. In this paper we describe each of the unique features of this approach including magnetic field-based temperature sensing, Extended Kalman Filtering (EKF) for cure state estimation, and nonlinear feedback control over time-varying temperature trajectories. We use experimental results to demonstrate how low-cost permanent magnets can provide wireless temperature sensing up to ~90°C. In addition, we use a polymer cure-control test-bed to illustrate how internal temperature sensing can provide improved temperature control over both short and long time-scales. In conclusion, this wireless temperature sensing and control architecture holds value for a range of manufacturing applications.
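The estimation idea can be shown with a scalar, linear Kalman step — a deliberately simplified stand-in (the paper uses a full EKF for the nonlinear cure-state dynamics, and the noise values below are arbitrary):

```python
def kalman_step(x_est, P, z, q=1e-3, r=1e-1):
    """One predict/update cycle of a scalar Kalman filter for a slowly
    varying temperature observed through a noisy (e.g. magnetic-field-
    derived) measurement z, under a random-walk process model."""
    P = P + q                      # predict: uncertainty grows by process noise
    K = P / (P + r)                # Kalman gain: trust in the new measurement
    x_est = x_est + K * (z - x_est)   # update: blend prediction and measurement
    P = (1.0 - K) * P              # posterior uncertainty shrinks
    return x_est, P
```

Fed a steady measurement, the estimate converges to it while the posterior variance settles at a small steady-state value set by q and r.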

More Details

Open science on Trinity's Knights Landing partition: An analysis of user job data

ACM International Conference Proceeding Series

Levy, Scott L.; Laros, James H.; Ferreira, Kurt B.

High-performance computing (HPC) systems are critically important to the objectives of universities, national laboratories, and commercial companies. Because of the cost of deploying and maintaining these systems, ensuring their efficient use is imperative. Job scheduling and resource management are critically important to the efficient use of HPC systems. As a result, significant research has been conducted on how to effectively schedule user jobs on HPC systems. Developing and evaluating job scheduling algorithms, however, requires a detailed understanding of how users request resources on HPC systems. In this paper, we examine a corpus of job data that was collected on Trinity, a leadership-class supercomputer. During the stabilization period of its Intel Xeon Phi (Knights Landing) partition, it was made available to users outside of a classified environment for the Trinity Open Science Phase 2 campaign. We collected information from the resource manager about each user job that was run during this Open Science period. In this paper, we examine the jobs contained in this dataset. Our analysis reveals several important characteristics of the jobs submitted during the Open Science period and provides critical insight into the use of one of the most powerful supercomputers in existence. Specifically, these data provide important guidance for the design, development, and evaluation of job scheduling and resource management algorithms.

More Details

Predictive Science ASC Alliance Program (PSAAP) II: 2018 Review of the Carbon Capture Multidisciplinary Science Center (CCMSC) at the University of Utah

Hoekstra, Robert J.; Hungerford, Aimee L.; Montoya, David R.; Ferencz, Robert M.; Kuhl, Alan L.; Ruggirello, Kevin P.

The review team convened at the University of Utah March 7-8, 2018, to review the Carbon Capture Multidisciplinary Science Center (CCMSC) funded by the 2nd Predictive Science ASC Alliance Program (PSAAP II). Center leadership and researchers made very clear and informative presentations, accurately portraying their work and successes while candidly discussing their concerns and known areas in need of improvement.

More Details

The case for semi-permanent cache occupancy

ACM International Conference Proceeding Series

Dosanjh, Matthew D.; Ghazimirsaeed, S.M.; Grant, Ryan E.; Schonbein, William W.; Levenhagen, Michael J.; Bridges, Patrick G.; Afsahi, Ahmad

The performance-critical path for MPI implementations relies on fast receive-side operation, which in turn requires fast list traversal. The performance of list traversal depends on data locality: whether the data is currently contained in a close-to-core cache due to its temporal locality, or whether its spatial locality allows for predictable pre-fetching. In this paper, we explore the effects of data locality on the MPI matching problem by examining both forms of locality. First, we explore spatial locality: by combining multiple entries into a single linked-list element, we can control and modify this form of locality. Second, we explore temporal locality by utilizing a new technique called “hot caching”, a process that creates a thread to periodically access certain data, increasing its temporal locality. We show that by increasing data locality, we can improve MPI performance on a variety of architectures by up to 4x for micro-benchmarks and up to 2x for an application.
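The spatial-locality idea — packing several match entries into each list element so a traversal scans contiguous memory — can be sketched as a chunked linked list. A toy in Python (the class and chunk size are illustrative; a real MPI matching engine would do this in C over match bits):

```python
class ChunkedList:
    """Linked list that stores several entries per node, so traversal
    scans a contiguous chunk before following a pointer -- the spatial-
    locality structure described above, in miniature."""

    class Node:
        __slots__ = ("items", "next")
        def __init__(self):
            self.items = []
            self.next = None

    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.head = self.tail = None
        self.num_nodes = 0

    def append(self, item):
        # Open a new chunk only when the current one is full
        if self.tail is None or len(self.tail.items) == self.chunk_size:
            node = self.Node()
            if self.tail is None:
                self.head = node
            else:
                self.tail.next = node
            self.tail = node
            self.num_nodes += 1
        self.tail.items.append(item)

    def find(self, pred):
        node = self.head
        while node is not None:
            for item in node.items:     # contiguous scan within a chunk
                if pred(item):
                    return item
            node = node.next
        return None
```

With chunking, a list of n entries needs only about n / chunk_size pointer dereferences per full traversal, which is where the cache benefit comes from.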

More Details

Tacho: Memory-scalable task parallel sparse cholesky factorization

Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018

Kim, Kyungjoo K.; Edwards, Harold C.; Rajamanickam, Sivasankaran R.

We present a memory-scalable, parallel, sparse multifrontal solver for solving symmetric positive-definite systems arising in scientific and engineering applications. Factorizing sparse matrices requires memory for both the computed factors and the temporary workspaces for computing each frontal matrix - a data structure commonly used within multifrontal methods. To factorize multiple frontal matrices in parallel, the conventional approach is to allocate a uniform workspace for each hardware thread. In the manycore era, this results in memory usage that grows proportionally to the number of hardware threads. We remedy this problem by using dynamic task parallelism with a scalable memory pool. Tasks are spawned while traversing an assembly tree and executed after their dependences are satisfied. Temporary workspace for the frontal matrices in each task is allocated from a custom memory pool. If the requested memory space is not available in the pool, the task is respawned, yielding the hardware thread to execute other tasks; the respawned task runs after higher-priority tasks complete. This approach provides robust parallel performance within a bounded memory space. Experimental results demonstrate the merits of our implementation on Intel multicore and manycore architectures.
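The respawn-on-allocation-failure idea can be modeled with a tiny wave scheduler — a toy, not Tacho's tasking runtime: tasks that cannot get workspace from a bounded pool are re-queued instead of blocking a thread, so peak memory stays capped:

```python
from collections import deque

def schedule_in_waves(tasks, capacity):
    """Greedy bounded-memory scheduler: each wave, tasks take workspace
    from a pool of size `capacity`; tasks that do not fit are respawned
    into the next wave. Returns the list of waves (task names); memory
    held within any single wave never exceeds `capacity`."""
    queue = deque(tasks)
    waves = []
    while queue:
        in_use, wave, deferred = 0, [], deque()
        while queue:
            name, size = queue.popleft()
            if in_use + size <= capacity:
                in_use += size
                wave.append(name)            # task acquires workspace and runs
            else:
                deferred.append((name, size))  # respawn: yield to later wave
        if not wave:
            raise MemoryError("a single task exceeds pool capacity")
        waves.append(wave)
        queue = deferred
    return waves
```

The trade-off mirrors the paper's: a respawned task waits, but the total workspace footprint is bounded regardless of thread count.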

More Details

Optimal cooperative checkpointing for shared high-performance computing platforms

Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018

Herault, Thomas; Robert, Yves; Bouteiller, Aurelien; Arnold, Dorian; Ferreira, Kurt B.; Bosilca, George; Dongarra, Jack

In high-performance computing environments, input/output (I/O) from various sources often contends for scarce available bandwidth. Adding to the I/O operations inherent to the failure-free execution of an application, I/O from checkpoint/restart (CR) operations (used to ensure progress in the presence of failures) places an additional burden as it increases I/O contention, leading to degraded performance. In this work, we consider a cooperative scheduling policy that optimizes the overall performance of concurrently executing CR-based applications which share valuable I/O resources. First, we provide a theoretical model and then derive a set of necessary constraints to minimize the global waste on the platform. Our results demonstrate that the optimal checkpoint interval, as defined by Young/Daly, despite providing a sensible metric for a single application, is not sufficient to optimally address resource contention at the platform scale. We therefore show that combining optimal checkpointing periods with I/O scheduling strategies can provide a significant improvement in overall application performance, thereby maximizing platform throughput. Overall, these results provide critical analysis and direct guidance on checkpointing large-scale workloads in the presence of competing I/O while minimizing the impact on application performance.
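The single-application baseline the paper starts from is the Young/Daly first-order optimum; a one-line sketch:

```python
import math

def young_daly_period(mtbf, checkpoint_cost):
    """First-order optimal checkpoint period W_opt = sqrt(2 * mu * C),
    where mu is the platform MTBF and C the cost of writing one
    checkpoint (both in the same time unit). This is the per-application
    optimum that, per the paper, is no longer sufficient once applications
    contend for shared I/O."""
    return math.sqrt(2.0 * mtbf * checkpoint_cost)
```

For example, a one-day MTBF with ten-minute checkpoints gives a period of roughly 2.8 hours.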

More Details

A comparison of power management mechanisms: P-States vs. node-level power cap control

Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018

Laros, James H.; Grant, Ryan E.; Levenhagen, Michael J.; Olivier, Stephen L.; Ward, Harry L.; Younge, Andrew J.

Large-scale HPC systems increasingly incorporate sophisticated power management control mechanisms. While these mechanisms are potentially useful for performing energy and/or power-aware job scheduling and resource management (EPA JSRM), greater understanding of their operation and performance impact on real-world applications is required before they can be applied effectively in practice. In this paper, we compare static p-state control to static node-level power cap control on a Cray XC system. Empirical experiments are performed to evaluate node-to-node performance and power usage variability for the two mechanisms. We find that static p-state control produces more predictable and higher performance characteristics than static node-level power cap control at a given power level. However, this performance benefit is at the cost of less predictable power usage. Static node-level power cap control produces predictable power usage but with more variable performance characteristics. Our results are not intended to show that one mechanism is better than the other. Rather, our results demonstrate that the mechanisms are complementary to one another and highlight their potential for combined use in achieving effective EPA JSRM solutions.

More Details

Level-spread: A new job allocation policy for dragonfly networks

Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium, IPDPS 2018

Zhang, Yijia; Tuncer, Ozan; Kaplan, Fulya; Olcoz, Katzalin; Leung, Vitus J.; Coskun, Ayse K.

The dragonfly network topology has attracted attention in recent years owing to its high radix and constant diameter. However, the influence of job allocation on communication time in dragonfly networks is not fully understood. Recent studies have shown that random allocation is better at balancing the network traffic, while compact allocation is better at harnessing the locality in dragonfly groups. Based on these observations, this paper introduces a novel allocation policy called Level-Spread for dragonfly networks. This policy spreads jobs within the smallest network level that a given job can fit in at the time of its allocation. In this way, it simultaneously harnesses node adjacency and balances link congestion. To evaluate the performance of Level-Spread, we run packet-level network simulations using a diverse set of application communication patterns, job sizes, and communication intensities. We also explore the impact of network properties such as the number of groups, number of routers per group, machine utilization level, and global link bandwidth. Level-Spread reduces the communication overhead by 16% on average (and up to 71%) compared to the state-of-the-art allocation policies.
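The sizing rule at the heart of the policy — pick the smallest dragonfly level that can contain the job — is easy to sketch. A simplification for illustration only (the actual Level-Spread policy also checks which levels currently have enough idle nodes before spreading the job within the chosen level):

```python
def smallest_fitting_level(job_nodes, nodes_per_router, routers_per_group):
    """Return the smallest dragonfly level (router < group < system) whose
    node count can contain the job. The policy then spreads the job
    within that level, balancing congestion while keeping locality."""
    if job_nodes <= nodes_per_router:
        return "router"
    if job_nodes <= nodes_per_router * routers_per_group:
        return "group"
    return "system"
```

Small jobs thus stay within one router's locality domain, mid-size jobs within one group, and only jobs too large for a group are spread system-wide.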

More Details

Hybrid Finite Element--Spectral Method for the Fractional Laplacian: Approximation Theory and Efficient Solver

SIAM Journal on Scientific Computing

Glusa, Christian A.; Ainsworth, Mark

Here, a numerical scheme is presented for approximating fractional-order Poisson problems in two and three dimensions. The scheme is based on reformulating the original problem posed over $\Omega$ on the extruded domain $\mathcal{C}=\Omega\times[0,\infty)$. The resulting degenerate elliptic integer-order PDE is then approximated using a hybrid FEM-spectral scheme. Finite elements are used in the direction parallel to the problem domain $\Omega$, and an appropriate spectral method is used in the extruded direction. The spectral part of the scheme requires that we approximate the true eigenvalues of the integer-order Laplacian over $\Omega$. We derive an a priori error estimate which takes account of the error arising from using an approximation in place of the true eigenvalues. We further present a strategy for choosing approximations of the eigenvalues based on Weyl's law and finite element discretizations of the eigenvalue problem. The system of linear algebraic equations arising from the hybrid FEM-spectral scheme is decomposed into blocks which can be solved effectively using standard iterative solvers such as multigrid and conjugate gradient. Numerical examples in two and three dimensions suggest that the approach is quasi-optimal in terms of complexity.
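The extrusion referred to here is, in its standard form, the Caffarelli-Silvestre extension; sketched below for orientation, with $c_{s}$ a normalization constant depending only on the fractional order $s\in(0,1)$:

```latex
\begin{aligned}
\nabla\cdot\left(y^{1-2s}\,\nabla U\right) &= 0 && \text{in } \mathcal{C}=\Omega\times(0,\infty),\\
U(x,0) &= u(x) && \text{on } \Omega,\\
(-\Delta)^{s}u(x) &= -\,c_{s}\lim_{y\to 0^{+}} y^{1-2s}\,\partial_{y}U(x,y).
\end{aligned}
```

The degenerate weight $y^{1-2s}$ is what makes the extruded PDE integer-order but non-uniformly elliptic, motivating the tailored spectral treatment in the extruded direction.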

More Details