Publications

Results 1–100 of 188

Integrated System and Application Continuous Performance Monitoring and Analysis Capability

Aaziz, Omar R.; Allan, Benjamin A.; Brandt, James M.; Cook, Jeanine C.; Devine, Karen D.; Elliott, James E.; Gentile, Ann C.; Hammond, Simon D.; Kelley, Brian M.; Lopatina, Lena L.; Moore, Stan G.; Olivier, Stephen L.; Pedretti, Kevin P.; Poliakoff, David Z.; Pawlowski, Roger P.; Regier, Phillip A.; Schmitz, Mark E.; Schwaller, Benjamin S.; Surjadidjaja, Vanessa S.; Swan, Matthew S.; Tucker, Nick T.; Tucker, Tom T.; Vaughan, Courtenay T.; Walton, Sara P.

Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia's Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data at run time, as it is generated, with negligible overhead. This data is combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or after the run. This new capability was evaluated using several week-long, 290-node runs of Sandia's ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool and resulted in 1 TB of application data and 50 TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase the total scientific output of Sandia's HPC users.


Final report of activities for the LDRD-express project #223796 titled: “Fluid models of charged species transport: numerical methods with mathematically guaranteed properties”, PI: Ignacio Tomas, Co-PI: John Shadid

Tomas, Ignacio T.; Shadid, John N.; Crockatt, Michael M.; Pawlowski, Roger P.; Maier, Matthias M.; Guermond, Jean-Luc G.

This report summarizes the findings and outcomes of the LDRD-express project titled “Fluid models of charged species transport: numerical methods with mathematically guaranteed properties”. The primary motivation of this project was the computational and mathematical exploration of ideas advanced with the aim of improving the state of the art in numerical methods for one-fluid Euler-Poisson models, and of gaining some understanding of the Euler-Maxwell model. Euler-Poisson and Euler-Maxwell, by themselves, are not the most technically relevant PDE plasma models. However, both of them are elementary building blocks of PDE models used in actual technical applications and include most (if not all) of their mathematical difficulties. Outside the classical ideal MHD models, rigorous mathematical and numerical understanding of one-fluid models is still a largely undeveloped research area, and the treatment and understanding of boundary conditions is minimal (borderline non-existent) at this point in time. This report focuses primarily on the bulk behaviour of the Euler-Poisson model, touching on boundary conditions only tangentially.


Integrated System and Application Continuous Performance Monitoring and Analysis Capability

Brandt, James M.; Cook, Jeanine C.; Aaziz, Omar R.; Allan, Benjamin A.; Devine, Karen D.; Elliott, James J.; Gentile, Ann C.; Hammond, Simon D.; Kelley, Brian M.; Lopatina, Lena L.; Moore, Stan G.; Olivier, Stephen L.; Pedretti, Kevin P.; Poliakoff, David Z.; Pawlowski, Roger P.; Regier, Phillip A.; Schmitz, Mark E.; Schwaller, Benjamin S.; Surjadidjaja, Vanessa S.; Swan, Matthew S.; Tucker, Tom T.; Tucker, Nick T.; Vaughan, Courtenay T.; Walton, Sara P.

Abstract not provided.

EMPIRE-PIC: A performance portable unstructured particle-in-cell code

Communications in Computational Physics

Bettencourt, Matthew T.; Brown, Dominic A.S.; Cartwright, Keith L.; Cyr, Eric C.; Glusa, Christian A.; Lin, Paul T.; Moore, Stan G.; McGregor, Duncan A.O.; Pawlowski, Roger P.; Phillips, Edward G.; Roberts, Nathan V.; Wright, Steven A.; Maheswaran, Satheesh; Jones, John P.; Jarvis, Stephen A.

In this paper we introduce EMPIRE-PIC, a finite element method particle-in-cell (FEM-PIC) application developed at Sandia National Laboratories. The code has been developed in C++ using the Trilinos library and the Kokkos Performance Portability Framework to enable running on multiple modern compute architectures while only requiring maintenance of a single codebase. EMPIRE-PIC is capable of solving both electrostatic and electromagnetic problems in two and three dimensions to second-order accuracy in space and time. In this paper we validate the code against three benchmark problems: a simple electron orbit, an electrostatic Langmuir wave, and a transverse electromagnetic wave propagating through a plasma. We demonstrate the performance of EMPIRE-PIC on four different architectures: Intel Haswell CPUs, Intel Xeon Phi Knights Landing processors, Arm ThunderX2 CPUs, and NVIDIA Tesla V100 GPUs attached to IBM POWER9 processors. This analysis demonstrates scalability of the code up to more than two thousand GPUs, and greater than one hundred thousand CPUs.
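
The particle push at the heart of an explicit PIC time step can be sketched as a leapfrog (kick-drift) update. This is a generic, plain-Python illustration of the staggered second-order push, not EMPIRE-PIC's actual Kokkos/C++ implementation; the names `E_at` and `q_m` are illustrative only.

```python
def leapfrog_push(x, v, E_at, q_m, dt, steps):
    """Leapfrog particle push used in explicit PIC codes: velocities are
    staggered half a step from positions, giving second-order accuracy."""
    v += 0.5 * dt * q_m * E_at(x)      # desynchronize v to t + dt/2
    for _ in range(steps):
        x += dt * v                     # drift: advance position
        v += dt * q_m * E_at(x)         # kick: field gathered at the new x
    return x, v

# Uniform unit field: motion is constant acceleration, so x(t) = t**2 / 2
x, v = leapfrog_push(0.0, 0.0, lambda x: 1.0, q_m=1.0, dt=0.01, steps=100)
```

For a constant field the leapfrog position update is exact, which makes it a convenient smoke test before moving to self-consistent fields.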


A linearity preserving nodal variation limiting algorithm for continuous Galerkin discretization of ideal MHD equations

Journal of Computational Physics

Mabuza, Sibusiso M.; Shadid, John N.; Cyr, Eric C.; Pawlowski, Roger P.; Kuzmin, Dmitri

In this work, a stabilized continuous Galerkin (CG) method for magnetohydrodynamics (MHD) is presented. Ideal, compressible inviscid MHD equations are discretized in space on unstructured meshes using piecewise linear or bilinear finite element bases to get a semi-discrete scheme. Stabilization is then introduced to the semi-discrete method in a strategy that follows the algebraic flux correction paradigm. This involves adding some artificial diffusion to the high order, semi-discrete method and mass lumping in the time derivative term. The result is a low order method that provides local extremum diminishing properties for hyperbolic systems. The difference between the low order method and the high order method is scaled element-wise using a limiter and added to the low order scheme. The limiter is solution dependent and computed via an iterative linearity preserving nodal variation limiting strategy. The stabilization also involves an optional consistent background high order dissipation that reduces phase errors. The resulting stabilized scheme is a semi-discrete method that can be applied to inviscid shock MHD problems and may even be extended to resistive and viscous MHD problems. To satisfy the divergence free constraint of the MHD equations, we add parabolic divergence cleaning to the system. Various time integration methods can be used to discretize the scheme in time. We demonstrate the robustness of the scheme by solving several shock MHD problems.
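
The low-order/high-order blending idea can be caricatured on a scalar, node-based level. The sketch below is a deliberately simplified stand-in for the paper's iterative nodal variation limiter: it blends a bound-preserving low-order update with a high-order one, clipping the blend factor so nodal values stay within local bounds. All array names and the periodic-neighbor bound choice are illustrative assumptions, not the paper's MHD formulation.

```python
import numpy as np

def limited_blend(u_old, u_low, u_high):
    """Scalar caricature of algebraic-flux-correction limiting: blend a
    bound-preserving low-order update u_low with a high-order update u_high,
        u_new = u_low + alpha * (u_high - u_low),
    choosing a nodal limiter alpha in [0, 1] so each value stays within the
    local min/max of the previous solution (periodic neighbors)."""
    u_max = np.maximum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)])
    u_min = np.minimum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)])
    d = u_high - u_low                     # antidiffusive correction
    alpha = np.ones_like(d)
    pos, neg = d > 0, d < 0
    alpha[pos] = (u_max[pos] - u_low[pos]) / d[pos]
    alpha[neg] = (u_min[neg] - u_low[neg]) / d[neg]
    return u_low + np.clip(alpha, 0.0, 1.0) * d

u_old = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
u_low = np.array([0.0, 0.2, 0.6, 0.2, 0.0])      # diffusive but bounded
u_high = np.array([0.0, -0.1, 1.2, -0.1, 0.0])   # accurate but over/undershoots
u_new = limited_blend(u_old, u_low, u_high)
```

The limited result recovers the sharp profile where the high-order update is safe and falls back toward the low-order update where it would create new extrema.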


IMEX and exact sequence discretization of the multi-fluid plasma model

Journal of Computational Physics

Miller, Sean M.; Cyr, E.C.; Shadid, John N.; Kramer, R.M.J.; Phillips, Edward G.; Conde, Sidafa C.; Pawlowski, Roger P.

Multi-fluid plasma models, where an electron fluid is modeled in addition to multiple ion and neutral species as well as the full set of Maxwell's equations, are useful for representing physics beyond the scope of classic MHD. This added fidelity presents challenges in appropriately dealing with electron dynamics and electromagnetic behavior characterized by the plasma and cyclotron frequencies and the speed of light. For physical systems, such as those near the MHD asymptotic regime, this requirement drastically increases runtimes for explicit time integration even though resolving fast dynamics may not be critical for accuracy. Implicit time integration methods, with efficient solvers, can help to step over fast time-scales that constrain stability but do not strongly influence accuracy. As an extension, implicit-explicit (IMEX) schemes provide an additional mechanism to choose which dynamics are evolved with an expensive implicit solve and which are resolved with a fast explicit solve. In this study, in addition to IMEX methods, we also consider a physics-compatible exact sequence spatial discretization. This combines nodal bases (H-Grad) for fluid dynamics with a set of vector bases (H-Curl and H-Div) for Maxwell's equations. This discretization allows for multi-fluid plasma modeling without violating Gauss' laws for the electric and magnetic fields. This initial study presents a discussion of the major elements of this formulation and focuses on demonstrating accuracy in the linear wave regime and in the MHD limit for both a visco-resistive and a dispersive ideal MHD problem.
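
The IMEX idea of stepping implicitly over the stiff fast dynamics while treating the slow terms explicitly can be shown on a scalar model problem. This is a first-order sketch under assumed splitting on a toy equation, not the paper's multi-fluid scheme or its higher-order IMEX integrators.

```python
from math import cos, sin

def imex_euler(y0, t0, t1, n, lam=1000.0):
    """First-order IMEX Euler on the stiff model problem
        y' = -lam*(y - cos(t))   (fast relaxation, treated implicitly)
             - sin(t)            (slow forcing, treated explicitly),
    whose exact solution with y(0) = 1 is y = cos(t). The implicit part is
    linear in y, so the per-step 'solve' is just a scalar division."""
    dt = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        t_new = t + dt
        # y_new = y + dt*(-sin t) + dt*(-lam)*(y_new - cos t_new), solved for y_new
        y = (y - dt * sin(t) + lam * dt * cos(t_new)) / (1.0 + lam * dt)
        t = t_new
    return y

# dt = 0.01 is five times the explicit stability limit 2/lam, yet the step is stable
y_end = imex_euler(1.0, 0.0, 1.0, n=100)
```

The scheme tracks the slow solution y = cos(t) accurately even though an explicit method at this step size would blow up, which is exactly the trade the abstract describes.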


Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

Journal of Computational and Applied Mathematics

Lin, Paul L.; Shadid, John N.; Hu, J.J.; Pawlowski, Roger P.; Cyr, E.C.

This work explores the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. This study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.


ASC ATDM Level 2 Milestone #6358: Assess Status of Next Generation Components and Physics Models in EMPIRE

Bettencourt, Matthew T.; Kramer, Richard M.; Cartwright, Keith C.; Phillips, Edward G.; Ober, Curtis C.; Pawlowski, Roger P.; Swan, Matthew S.; Kalashnikova, Irina; Phipps, Eric T.; Conde, Sidafa C.; Cyr, Eric C.; Ulmer, Craig D.; Kordenbrock, Todd H.; Levy, Scott L.; Templet, Gary J.; Hu, Jonathan J.; Lin, Paul L.; Glusa, Christian A.; Siefert, Christopher S.; Glass, Micheal W.

This report documents the outcome from the ASC ATDM Level 2 Milestone 6358: Assess Status of Next Generation Components and Physics Models in EMPIRE. This Milestone is an assessment of the EMPIRE (ElectroMagnetic Plasma In Realistic Environments) application and three software components. The assessment focuses on the electromagnetic and electrostatic particle-in-cell solutions for EMPIRE and its associated solver, time integration, and checkpoint-restart components. This information provides a clear understanding of the current status of the EMPIRE application and will help to guide future work in FY19 in order to ready the application for the ASC ATDM Level 1 Milestone in FY20. It is clear from this assessment that performance of the linear solver will have to be a focus in FY19.


Rythmos: Solution and Analysis Package for Differential-Algebraic and Ordinary-Differential Equations

Ober, Curtis C.; Bartlett, Roscoe B.; Coffey, Todd S.; Pawlowski, Roger P.

Time integration is a central component for most transient simulations. It coordinates many of the major parts of a simulation together, e.g., a residual calculation with a transient solver, solution with the output, various operator-split physics, and forward and adjoint solutions for inversion. Even though there is this variety in these transient simulations, there is still a common set of algorithms and procedures to progress transient solutions for ordinary-differential equations (ODEs) and differential-algebraic equations (DAEs). Rythmos is a collection of these algorithms that can be used for the solution of transient simulations. It provides common time-integration methods, such as Backward and Forward Euler, Explicit and Implicit Runge-Kutta, and Backward-Difference Formulas. It can also provide sensitivities and adjoint components for transient simulations. Rythmos is a package within Trilinos, and requires some other packages (e.g., Teuchos and Thyra) to provide basic time-integration capabilities. It can also be coupled with several other Trilinos packages to provide additional capabilities (e.g., AztecOO and Belos for linear solutions, and NOX for nonlinear solutions). The documentation is broken down into three parts: Theory Manual, User's Manual, and Developer's Guide. The Theory Manual contains the basic theory of the time integrators, the nomenclature and mathematical structure utilized within Rythmos, and verification results demonstrating that the designed order of accuracy is achieved. The User's Manual provides information on how to use Rythmos, descriptions of input parameters through Teuchos Parameter Lists, and descriptions of convergence test examples. The Developer's Guide is a high-level discussion of the design and structure of Rythmos to provide information to developers for the continued development of capabilities. Details of individual components can be found in the Doxygen webpages.
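
The residual/solver coordination that a time-integration package manages can be sketched in miniature: an implicit (Backward Euler) step hands a residual and its Jacobian to a nonlinear solver. This scalar Python sketch illustrates the structure only; it is not Rythmos code, and the function names are illustrative.

```python
def backward_euler(f, dfdy, y0, dt, steps, tol=1e-12):
    """Backward Euler with a Newton solve per step: each step finds y_new
    with residual r(y_new) = y_new - y_old - dt*f(y_new) = 0, mirroring
    the residual/solver split a time-integration package coordinates."""
    y = y0
    for _ in range(steps):
        y_new = y                              # predictor: previous value
        for _ in range(50):                    # Newton iteration
            r = y_new - y - dt * f(y_new)
            if abs(r) < tol:
                break
            y_new -= r / (1.0 - dt * dfdy(y_new))  # Jacobian of the residual
        y = y_new
    return y

# Stiff linear decay y' = -50*y: unconditionally stable, y -> y/(1 + 50*dt) per step
y = backward_euler(lambda y: -50.0 * y, lambda y: -50.0, 1.0, dt=0.1, steps=10)
```

For this linear problem Newton converges in one iteration per step, and the update reduces to the familiar y/(1 + 50*dt) recursion.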
Notice: This document is a collection of notes gathered over many years; however, its development has been dormant for the past several years due to a lack of funding. Pre-release copies of this document have circulated to internal Sandia developers, who have found it useful in their application development. Developers outside Sandia have also made inquiries for additional Rythmos documentation. To make this documentation publicly available, we are releasing it in an "as-is" condition. We apologize for its incomplete state and obvious lack of readability; however, we hope that the information contained in this document will still be helpful to users in their application development.


Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

Journal of Computational Physics

Shadid, John N.; Smith, Thomas M.; Cyr, E.C.; Wildey, T.M.; Pawlowski, Roger P.

A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, an understanding of numerical error and of the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
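
The economy of the adjoint approach can be shown on the smallest possible example: for a linear model A(p) u = b and output J = g·u, one adjoint solve yields the sensitivity dJ/dp for any number of parameters. This is a generic linear-algebra sketch of the idea, not the paper's stabilized FE implementation; the 2x2 matrices are illustrative.

```python
import numpy as np

def adjoint_sensitivity(A, dA_dp, b, g):
    """Adjoint-based sensitivity of J(u) = g.u subject to A(p) u = b:
    one forward solve A u = b plus one adjoint solve A^T lam = g gives
        dJ/dp = -lam . (dA/dp u),
    i.e., a single extra solve independent of the number of parameters --
    the property adjoint sensitivity/error-estimation methods exploit."""
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, g)
    return u, -lam @ (dA_dp @ u)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # parameter enters as A[0,0] = 4 + p
b = np.array([1.0, 2.0])
g = np.array([1.0, 1.0])
u, dJ = adjoint_sensitivity(A, dA_dp, b, g)
```

A finite-difference check on J(p) = g·A(p)⁻¹b confirms the adjoint value, here dJ/dp = -2/121.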


Scalable implicit incompressible resistive MHD with stabilized FE and fully-coupled Newton-Krylov-AMG

Computer Methods in Applied Mechanics and Engineering

Shadid, John N.; Pawlowski, Roger P.; Cyr, E.C.; Tuminaro, Raymond S.; Chacón, L.; Weber, Paula D.

The computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, as well as the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton-Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order-of-accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems that include MHD duct flows, an unstable hydromagnetic Kelvin-Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
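
The Newton-Krylov structure can be sketched in its Jacobian-free form, where the Krylov solver only ever sees Jacobian-vector products approximated by finite differences. This generic sketch uses unpreconditioned CG on a small SPD toy problem purely to keep the code short; the paper's solver is preconditioned GMRES with algebraic multilevel preconditioners, and all names here are illustrative.

```python
import numpy as np

def jfnk(F, u0, newton_tol=1e-8, forcing=1e-6, eps=1e-7, max_newton=30):
    """Jacobian-free Newton-Krylov sketch: solve F(u) = 0 with Newton steps
    J s = -F(u), where the Krylov solver (CG here, valid for SPD Jacobians)
    needs only Jacobian-vector products, approximated matrix-free by
        J v ~ (F(u + eps*v) - F(u)) / eps,
    so the Jacobian is never formed or stored."""
    u = u0.astype(float)
    n = len(u)
    for _ in range(max_newton):
        Fu = F(u)
        norm_F = np.linalg.norm(Fu)
        if norm_F < newton_tol:
            break
        Jv = lambda v: (F(u + eps * v) - Fu) / eps
        s = np.zeros(n)                    # matrix-free CG on J s = -Fu
        r = -Fu
        p = r.copy()
        rs = r @ r
        for _ in range(n):                 # exact CG needs at most n steps
            Ap = Jv(p)
            pAp = p @ Ap
            if pAp <= 0.0:                 # guard against FD noise
                break
            alpha = rs / pAp
            s += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < forcing * norm_F:
                break                      # inexact-Newton forcing term met
            p = r + (rs_new / rs) * p
            rs = rs_new
        u += s
    return u

# Mildly nonlinear test problem with an SPD Jacobian: F(u) = A u + u**3 - b
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
u = jfnk(lambda u: A @ u + u**3 - b, np.zeros(2))
```

By symmetry the solution satisfies c + c³ = 1 in each component, which the Newton iteration reaches in a handful of steps.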


An assessment of coupling algorithms for nuclear reactor core physics simulations

Journal of Computational Physics

Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger P.; Toth, Alex; Kelley, C.T.; Evans, Thomas; Philip, Bobby

This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
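
The Picard-versus-Anderson comparison can be reproduced in miniature on a scalar fixed point. The sketch below implements plain Picard iteration and a type-II Anderson acceleration (a small least-squares extrapolation over recent residual differences); it is a generic illustration on x = cos(x), not the reactor coupling code, and the depth m and tolerances are illustrative choices.

```python
import numpy as np

def picard(g, x0, tol=1e-10, maxit=500):
    """Plain Picard (fixed-point) iteration x <- g(x)."""
    x = x0
    for k in range(1, maxit + 1):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, maxit

def anderson(g, x0, m=3, tol=1e-10, maxit=500):
    """Anderson acceleration of the same map: extrapolate over the last m
    residual differences via a small least-squares problem (type-II form)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, F = [], []                       # histories of iterates and residuals
    for k in range(1, maxit + 1):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(x.copy())
        F.append(f.copy())
        if len(F) > m + 1:
            X.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = x + f                   # first step is plain Picard
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = x + f - (dX + dF) @ gamma
    return x, maxit

# Scalar fixed point x = cos(x): Anderson needs far fewer iterations than Picard
x_p, it_p = picard(np.cos, 0.0)
x_a, it_a = anderson(np.cos, 0.0)
```

Picard converges linearly at rate |sin(x*)| ≈ 0.67 (roughly 60 iterations to 1e-10), while Anderson reaches the same tolerance in about an order of magnitude fewer, mirroring the paper's qualitative finding.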


A new class of finite element variational multiscale turbulence models for incompressible magnetohydrodynamics

Journal of Computational Physics

Sondak, D.; Shadid, John N.; Oberai, A.A.; Pawlowski, Roger P.; Cyr, E.C.; Smith, Thomas M.

New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. A numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.


Design of a high fidelity core simulator for analysis of pellet CLAD interaction

Mathematics and Computations, Supercomputing in Nuclear Applications and Monte Carlo International Conference, M and C+SNA+MC 2015

Pawlowski, Roger P.; Clarno, K.; Montgomery, R.; Salko, R.; Evans, T.; Turner, J.; Gaston, D.

The Tiamat code is being developed by CASL (Consortium for Advanced Simulation of LWRs) as an integrated tool for predicting pellet-clad interaction and improving the high-fidelity core simulator. Tiamat is a large-scale parallel code that couples the multi-dimensional Bison-CASL fuel performance code on every fuel rod with the COBRA-TF (CTF) sub-channel thermal-hydraulics code and either the Insilico or MPACT neutronics codes. Tiamat solves a transient problem where each time step is subcycled using Picard iteration to converge the fully-coupled nonlinear system. This report discusses the solution algorithms and software design of the simulator. Results are shown for a five-assembly cross and compared against a separate core simulator developed by CASL. Tiamat has demonstrated that it can compute quantities of interest for analyzing pellet-clad interaction.


Xyce parallel electronic simulator users' guide, Version 6.0.1

Keiter, Eric R.; Warrender, Christina E.; Mei, Ting M.; Russo, Thomas V.; Schiek, Richard S.; Thornquist, Heidi K.; Verley, Jason V.; Coffey, Todd S.; Pawlowski, Roger P.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.


Xyce parallel electronic simulator reference guide, Version 6.0.1

Keiter, Eric R.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Coffey, Todd S.; Thornquist, Heidi K.; Verley, Jason V.; Warrender, Christina E.

This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide [1]. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide [1].


A block preconditioner for an exact penalty formulation for stationary MHD

SIAM Journal on Scientific Computing

Phillips, Edward G.; Elman, Howard C.; Cyr, Eric C.; Shadid, John N.; Pawlowski, Roger P.

The magnetohydrodynamics (MHD) equations are used to model the flow of electrically conducting fluids in such applications as liquid metals and plasmas. This system of non-self-adjoint, nonlinear PDEs couples the Navier-Stokes equations for fluids and Maxwell's equations for electromagnetics. There has been recent interest in fully coupled solvers for the MHD system because they allow for fast steady-state solutions that do not require pseudo-time-stepping. When the fully coupled system is discretized, the strong coupling can make the resulting algebraic systems difficult to solve, requiring effective preconditioning of iterative methods for efficiency. In this work, we consider a finite element discretization of an exact penalty formulation for the stationary MHD equations posed in two-dimensional domains. This formulation has the benefit of implicitly enforcing the divergence-free condition on the magnetic field without requiring a Lagrange multiplier. We consider extending block preconditioning techniques developed for the Navier-Stokes equations to the full MHD system. We analyze operators arising in block decompositions from a continuous perspective and apply arguments based on the existence of approximate commutators to develop new preconditioners that account for the physical coupling. This results in a family of parameterized block preconditioners for both Picard and Newton linearizations. We develop an automated method for choosing the relevant parameters and demonstrate the robustness of these preconditioners for a range of the physical nondimensional parameters and with respect to mesh refinement.
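
The classical result that block preconditioning builds on can be verified on a tiny dense saddle-point system: with the exact Schur complement, a block-triangular preconditioner clusters every eigenvalue at 1. This is a generic Stokes-like sketch of that background fact, not the paper's MHD preconditioners, which replace the exact Schur complement with cheap approximations derived from approximate commutators.

```python
import numpy as np

def block_triangular_prec(A, B, S):
    """Block upper-triangular preconditioner for the saddle system
        K = [[A, B^T], [B, 0]]:   P = [[A, B^T], [0, -S]].
    With the exact Schur complement S = B A^{-1} B^T, K P^{-1} is block
    lower-triangular with identity diagonal blocks, so every eigenvalue
    is 1 and GMRES converges in at most two iterations."""
    n, m = A.shape[0], B.shape[0]
    P = np.zeros((n + m, n + m))
    P[:n, :n] = A
    P[:n, n:] = B.T
    P[n:, n:] = -S
    return P

rng = np.random.default_rng(0)
A = np.diag(rng.uniform(1.0, 2.0, 4))        # SPD "fluid" block
B = rng.standard_normal((2, 4))              # full-rank constraint block
S = B @ np.linalg.inv(A) @ B.T               # exact Schur complement
K = np.block([[A, B.T], [B, np.zeros((2, 2))]])
M = K @ np.linalg.inv(block_triangular_prec(A, B, S))
```

Since (M - I)² = 0, the minimal polynomial has degree two, which is the algebraic statement behind the two-iteration Krylov convergence; practical preconditioners trade this exactness for a Schur approximation that is cheap to apply.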


Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy M.

This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.


Xyce parallel electronic simulator users guide, version 6.0

Russo, Thomas V.; Mei, Ting M.; Keiter, Eric R.; Schiek, Richard S.; Thornquist, Heidi K.; Verley, Jason V.; Coffey, Todd S.; Pawlowski, Roger P.; Warrender, Christina E.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.


Xyce parallel electronic simulator reference guide, version 6.0

Keiter, Eric R.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Coffey, Todd S.; Thornquist, Heidi K.; Verley, Jason V.; Warrender, Christina E.

This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide [1]. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide [1].


A comparison of adjoint and data-centric verification techniques

Cyr, Eric C.; Shadid, John N.; Pawlowski, Roger P.

This document summarizes the results from a Level 3 milestone study within the CASL VUQ effort. We compare the adjoint-based a posteriori error estimation approach with a recent variant of a data-centric verification technique. We provide a brief overview of each technique and then discuss their relative advantages and disadvantages. We use Drekar::CFD to produce numerical results for steady-state Navier-Stokes and SARANS approximations.
