CCR Focus Areas

The CCR collaborates on innovations in tool development, component development, and scalable algorithm research with partners and customers around the world through open source projects. Current software projects focus on enabling technologies for scientific computing in areas such as machine learning, graph algorithms, cognitive modeling, visualization, optimization, large-scale multi-physics simulation, HPC miniapplications, HPC system simulation and HPC system software.

Advanced Device Technologies

Despite the vast computational power available in today’s extreme-scale computing systems, there remain certain types of problems for which that power is inadequate and that silicon-based computing devices will likely never be able to solve. Sandia is exploring technologies necessary to enable a new paradigm of computing that goes beyond the limits of Moore’s Law. Core areas of competency are post-CMOS processors, quantum information processing, simulation of solid state and quantum devices, and development of computing methods to support materials and device simulations.

Climate Science

Our emphasis is on developing high-order accurate numerical methods for climate modeling, such as the spectral-element atmospheric dynamical core, and on leveraging our capabilities in scientific software engineering and uncertainty quantification to improve the rigor and predictivity of global climate models.

Sandia National Laboratories Climate Science Software – Sandia’s Energy and Climate mission provides software covering a variety of applications, including simulation, modeling, and computation.

Cognitive Science

In addressing critical problems in national security, the Cognitive Science and Applications groups provide solutions that address both technology and human cognition. The main focus of these efforts includes understanding human decision making, improving human performance, human-centric data collection and analysis, advanced software development, surety-based verification and validation, and ethical, legal, and social issues.

Complex Systems

Our complex systems groups develop underlying computer science and mathematical techniques needed to solve national problems related to large, complex systems. Our core research focuses on advanced methods in computer science, operations research, system dynamics, and discrete mathematics.

Computational Geoscience

We apply our expertise in large-scale optimization and uncertainty quantification, as well as scientific software design, to geoscience problems including porous media flow, seismic imaging, and hydraulic fracturing.

Computational Materials

Modern materials science relies upon computational tools involving theory, modeling, and simulation to work in tandem with experimental measurements. We develop and apply computational methods and tools for materials science, including molecular dynamics, peridynamics, and density functional theory. Computational materials science plays a key role in enabling Sandia’s many mission areas that rely on fundamental understanding of materials behavior.

Computational Shock & Multiphysics

Sandia’s Computational Shock & Multiphysics Department 01443 provides unique, state-of-the-art modeling and simulation capabilities using a variety of multiphysics discretization technologies to simulate high strain rate, magnetohydrodynamic, electromechanical and high energy density physics phenomena for the U.S. defense and energy programs. To accomplish this objective, we conduct an active program of research and development in computational shock physics and methods, produce high-performance application codes that address relevant issues important to the DOE, DoD, DHS, and other U.S. Government agencies, and support production use of our codes by our principal customers.

Computer Architecture

Our efforts in scalable computer architecture seek to explore advancements in the design and integration of processors, memory, and networks necessary to effectively deploy and use the largest parallel computing systems in the world. Core areas of competency are hardware simulation, microarchitectures, network interface design, system reliability, and energy/power analysis.

Continuous Optimization

We perform research and development of advanced algorithms for engineering-based optimization. This includes engineering design optimization, model calibration (i.e., parameter estimation), and material identification (or inversion). Often, the problems we address have equality constraints given by solutions to partial differential equations. We implement these algorithms in our flagship software tools DAKOTA (http://dakota.sandia.gov) and Trilinos (https://trilinos.github.io/). Our optimization capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty.
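
As a hedged illustration of the model-calibration (parameter estimation) workflow described above, the sketch below fits two hypothetical model parameters to synthetic data with SciPy's nonlinear least-squares solver; DAKOTA automates this kind of loop around real simulation codes, with far richer methods.

```python
# Minimal model-calibration sketch (hypothetical data and model): fit an
# assumed exponential-decay response to noisy observations by nonlinear
# least squares, the parameter-estimation pattern described above.
import numpy as np
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 5.0, 20)
true_params = np.array([2.0, 0.7])                       # amplitude, decay rate
y_obs = true_params[0] * np.exp(-true_params[1] * t_obs)
y_obs += 0.02 * np.random.default_rng(0).normal(size=t_obs.size)

def residuals(p):
    """Misfit between the model prediction and the observations."""
    amplitude, rate = p
    return amplitude * np.exp(-rate * t_obs) - y_obs

fit = least_squares(residuals, x0=[1.0, 1.0])            # calibrate from a guess
print("calibrated parameters:", fit.x)
```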

Cyber Security

Our cybersecurity research is focused on developing cross-cutting enabling capabilities that can impact a wide range of cybersecurity challenges. This includes research in streaming algorithms to quickly process large cyber data streams, algorithms to find patterns in large graphs, and machine learning techniques to detect adversarial behavior (e.g., phishing emails). Our researchers are part of the Cybersecurity Engineering Research Institute (CERI), which provides a conduit for collaboration with industry and academia.
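
As one concrete, hedged example of the streaming-algorithm work mentioned above, the sketch below implements the classic Misra-Gries frequent-items summary, which finds candidate heavy hitters in a single pass over a stream using bounded memory; the event labels are hypothetical and the code is not drawn from any Sandia tool.

```python
# Misra-Gries frequent-items summary: a streaming algorithm that processes
# a large event stream in one pass with bounded memory. Stream contents
# here are hypothetical placeholders for cyber event labels.
def misra_gries(stream, k):
    """Track up to k-1 candidate heavy hitters over one pass of `stream`."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

events = ["login", "dns", "login", "scan", "login", "dns", "login"]
print(misra_gries(events, k=3))   # candidates for items occurring > n/k times
```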

Data Analysis and Visualization

The data analysis and visualization groups focus on scalable techniques for providing decision support in national security and national-level challenges. By incorporating advanced algorithms, innovative hardware architectures, and scalable analysis components, we develop solutions that promote insight, regardless of the complexity, size, or uncertainty of the data. With a focus on machine learning, graph algorithms, text analysis, Bayesian modeling, and tensor factorizations, our groups provide state-of-the-art capabilities for critical data modeling and analysis applications.
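
To make the tensor-factorization capability concrete, here is a minimal NumPy sketch of a CP (CANDECOMP/PARAFAC) decomposition computed by alternating least squares; it illustrates the underlying idea only and is not Sandia's production implementation.

```python
# Minimal CP (CANDECOMP/PARAFAC) factorization of a 3-way tensor via
# alternating least squares, as an illustration of the tensor-factorization
# idea mentioned above; a NumPy sketch, not a production code.
import numpy as np

def unfold(X, mode):
    """Matricize tensor X along the given mode."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, row order matching unfold()."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def cp_als(X, rank, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in X.shape)
    for _ in range(iters):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

X = np.random.default_rng(1).random((4, 5, 6))
A, B, C = cp_als(X, rank=2)
approx = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(X - approx) / np.linalg.norm(X))
```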

Defense Applications

We develop and apply multiphysics analysis tools that address various issues important to the DOE, DoD, DHS, and other U.S. Government agencies, and support production use of our codes by our principal customers. Our ALEGRA multiphysics code employs an Arbitrary Lagrangian-Eulerian (ALE) methodology targeted at the simulation of high strain rate, magnetohydrodynamic, electromechanical and high energy density physics phenomena. It is a C++ multi-material finite element code based on a large displacement formulation designed to accurately model strong shock behavior.

Density Functional Theory

Our research in electronic structure methods, particularly density functional theory (DFT), spans investigations of improved physical approximations (new functionals, time-dependent DFT), development of high-performance quantum simulation codes (SeqQuest), and applications focused on probing the microscopic materials chemistry underlying challenging materials problems. The research encompasses the pursuit of predictive simulations of materials behavior to inform engineering-scale assessments: the search for new methods to quantify the role of electronic excitations and charge transport in energy storage, to characterize states of matter far from equilibrium (energetic materials, shock wave physics, and equation of state), and to understand aging and radiation response; the development of high-performance implementations of these methods for practical applications; and the deployment of these advanced capabilities for problems of mission relevance. DFT is also used as the quantitative foundation for adding high fidelity to dynamical simulations, for example by training new molecular dynamics interatomic potentials and related coarser-scale modeling methods.

Discrete Optimization

We develop advanced algorithms and modeling tools for a variety of optimization research areas, including engineering design, parameter estimation, inverse modeling, logistics and planning, and design of complex systems. Our research considers a wide range of optimization problems, including integer and linear programming, disjunctive programming, stochastic programming, surrogate-based optimization and PDE-constrained optimization. We implement these algorithms in our software tools, including DAKOTA (http://dakota.sandia.gov), Trilinos (https://trilinos.github.io/) and Coopr (https://software.sandia.gov/coopr), which are distributed as open source software.
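
As a small, hedged illustration of this style of modeling, the following knapsack-type integer program is written in Pyomo, the algebraic modeling library distributed as part of Coopr; the data are hypothetical and a MIP solver such as glpk is assumed to be installed.

```python
# A tiny knapsack-style integer program in Pyomo, illustrating the kind of
# discrete-optimization modeling described above. Assumes a MIP solver
# (e.g., glpk) is available; item values and weights are hypothetical.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, maximize, SolverFactory)

value  = {"a": 8, "b": 11, "c": 6, "d": 4}
weight = {"a": 5, "b": 7,  "c": 4, "d": 3}
capacity = 14
items = list(value)

m = ConcreteModel()
m.x = Var(items, domain=Binary)                              # pick item or not
m.obj = Objective(expr=sum(value[i] * m.x[i] for i in items), sense=maximize)
m.cap = Constraint(expr=sum(weight[i] * m.x[i] for i in items) <= capacity)

SolverFactory("glpk").solve(m)
print({i: int(m.x[i].value) for i in items})
```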

Magnetohydrodynamics

The magnetohydrodynamics (MHD) model describes the dynamics of charged fluids in the presence of electromagnetic fields. MHD models are used to describe important phenomena in the natural world (e.g., solar flares, astrophysical magnetic field generation, Earth’s magnetosphere interaction with the solar wind) and in technological applications (e.g., spacecraft propulsion, magnetically confined plasma for fusion energy devices such as tokamak reactors (e.g. ITER), and plasma dynamics in pulsed reactors such as Sandia’s Z-pinch device). We have an active research program to develop advanced computational formulations and solution methods for multiphysics MHD, and we deploy the results of this research in our large-scale massively parallel MHD simulation codes.

The mathematical basis for the continuum modeling of MHD systems is the solution of the governing partial differential equations (PDEs) describing conservation of mass, momentum, and energy, augmented by Maxwell’s equations for the electric and magnetic field. This system of PDEs is non-self-adjoint, strongly coupled, highly nonlinear, and characterized by multiple physical phenomena that span a very large range of length- and time-scales. These interacting, nonlinear, multiple-time-scale physical mechanisms can balance to produce steady-state behavior, nearly balance to evolve a solution on a dynamical time scale that is long relative to the component time-scales, or can be dominated by just a few fast modes. These characteristics make the scalable, robust, accurate, and efficient computational solution of these systems over relevant dynamical time scales of interest extremely challenging.
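
For reference, one standard statement of the resistive MHD system sketched above (compressible form, with resistivity and viscous stress terms, and an accompanying energy equation closed by an equation of state) is:

```latex
% Resistive MHD: mass, momentum, and magnetic induction, with the solenoidal
% constraint and Ampere's law defining the current density.
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) &= 0,\\[2pt]
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  &= -\nabla p + \mathbf{J}\times\mathbf{B} + \nabla\cdot\boldsymbol{\tau},\\[2pt]
\frac{\partial \mathbf{B}}{\partial t}
  &= \nabla\times\bigl(\mathbf{u}\times\mathbf{B} - \eta\,\mathbf{J}\bigr),
\qquad \nabla\cdot\mathbf{B} = 0,
\qquad \mathbf{J} = \tfrac{1}{\mu_0}\,\nabla\times\mathbf{B}.
\end{aligned}
```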

Our production MHD capabilities are contained within a family of multiphysics codes known as ALEGRA. The codes, including ALEGRA, ALEGRA-MHD, ALEGRA-HEDP and ALEGRA-EMMA, constitute an extensive set of physics modeling capabilities built on software in the Nevada framework and third-party libraries. They simulate large deformations and strong shock physics, including solid dynamics in an Arbitrary Lagrangian-Eulerian methodology, as well as magnetics, magnetohydrodynamics, electromechanics and a wide range of phenomena for high-energy physics applications. Our principal customers have applied these codes in a variety of Z-pinch physics experiment designs and applications, in the development of advanced armor concepts, and in numerous National Security programs. Research and development in advanced methods, including code frameworks, large-scale inline meshing, multiscale Lagrangian hydrodynamics, resistive magnetohydrodynamic methods, material interface reconstruction, and code verification and validation, keeps the software on the cutting edge of high performance computing.

We also conduct research and development of advanced computational formulations and solution methods for challenging multiple-time-scale multiphysics MHD systems. For multiple-time-scale systems, fully-implicit methods can be an attractive choice that can often provide unconditionally-stable time integration techniques. The stability of these methods, however, comes at a cost, as these techniques generate large and highly nonlinear sparse systems of equations that must be solved at each time step. In the context of MHD, the dominant computational solution strategy has been the use of explicit, semi-implicit, and operator-splitting time integration methods. With the exception of fully-explicit strategies, which are limited by severe stability restrictions to follow the fastest component time scale, all these temporal integration methods include some implicitness to enable a more efficient solution of MHD systems. Such implicitness is aimed at removing one or more sources of numerical stiffness in the problem, either from parabolic diffusion or from fast wave phenomena. While these types of techniques currently form the basis for most production-level resistive MHD simulation tools, a number of outstanding numerical and computational issues remain. These include conditional stability limits, operator-splitting-type errors, heuristic time-step controls, and limited temporal orders of accuracy.
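
The stability trade-off described above can be seen on even a scalar stiff ODE: the hedged sketch below steps y' = -50y with forward (explicit) and backward (implicit) Euler at a step size well beyond the explicit stability limit, and shows that each implicit step requires solving an equation (trivially, here, since the model problem is linear).

```python
# Toy illustration of the stability trade-off discussed above: forward
# (explicit) vs. backward (implicit) Euler on the stiff ODE y' = -50 y.
# Each implicit step requires solving an equation; for this linear model
# problem the "solve" is just a scalar division.
lam, dt, steps = -50.0, 0.1, 20     # explicit stability limit is dt < 2/|lam| = 0.04
y_exp, y_imp = 1.0, 1.0
for _ in range(steps):
    y_exp = y_exp + dt * lam * y_exp          # explicit: blows up at this dt
    y_imp = y_imp / (1.0 - dt * lam)          # implicit: y_new - dt*lam*y_new = y_old
print(f"forward Euler: {y_exp:.3e}, backward Euler: {y_imp:.3e}")
```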

In our research funded by the DOE Advanced Scientific Computing Research Applied Math Program, we are pursuing the development and evaluation of:

  1. higher-order-accurate, scalable, and efficient fully-implicit formulations for resistive and extended MHD with coupled multiphysics effects (e.g. anisotropic transport, multiple temperatures, coupled radiation-diffusion models, etc.),
  2. stable and accurate spatial discretizations based on unstructured mesh FE approximations that allow efficient enforcement of physical constraints (e.g. conservation, positivity preservation, div B = 0, etc),
  3. strongly coupled Newton-Krylov nonlinear solvers with new physics-based and approximate block factorization preconditioners that enable scalable multilevel sub-block solvers (a minimal sketch follows this list),
  4. new mathematical algorithms and computer science techniques to effectively utilize extreme-scale resources with very high-core counts and high-concurrency node architectures.
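
As a minimal sketch of the Jacobian-free Newton-Krylov solves referenced in item 3, the example below uses SciPy's newton_krylov on a small 1D nonlinear reaction-diffusion problem; the physics-based and block-factorization preconditioners used in the production MHD solvers are omitted.

```python
# Jacobian-free Newton-Krylov solve of a small 1D nonlinear reaction-diffusion
# problem, u'' = exp(u) with u(0) = u(1) = 0, using SciPy. This is only a
# sketch of the fully coupled nonlinear solve referenced in item 3; no
# physics-based or block preconditioning is applied here.
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    # Second-difference Laplacian with homogeneous Dirichlet boundaries.
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return lap - np.exp(u)

u0 = np.zeros(n)
u = newton_krylov(residual, u0, f_tol=1e-8)
print("max |u| =", np.abs(u).max())
```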

To enable the research described above, SNL has developed a very flexible multiphysics MHD simulation code, Drekar::XMHD, which enables the development of new multiphysics MHD models and allows the rapid prototyping of new computational formulations and solution methods on large-scale parallel machines. This code has been demonstrated to weak-scale on a Cray XK7 and an IBM BG/Q on up to 128K and 256K cores, respectively. Recently we have also carried out strong scaling studies on up to 500,000 cores of an IBM BG/Q for a fully-coupled Krylov/AMG V-cycle linear solver that is a critical kernel for scalable solution of MHD systems.

Mini-Applications

Application performance is determined by a combination of many choices: hardware, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, we find that the use of mini-applications (small, self-contained proxies for real applications) is an excellent approach for rapidly exploring the parameter space of all these choices. Furthermore, use of mini-applications enriches the interaction between application, library, and computer system developers by providing explicit functioning software and concrete performance results that lead to detailed, focused discussions of design trade-offs, algorithm choices, and runtime performance issues.

Molecular Dynamics

We develop and use molecular dynamics and related simulation methodologies, especially those encompassed in our LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) molecular simulation package (http://lammps.sandia.gov). LAMMPS is a freely available, widely used software package that runs in serial or on high performance computing platforms, and includes potentials for solid-state materials, soft matter, and coarse-grained or mesoscopic systems.
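
For readers unfamiliar with molecular dynamics, the hedged sketch below shows the core loop that LAMMPS executes at scale: velocity-Verlet time integration of Lennard-Jones pair forces, here for a handful of atoms in reduced units with no neighbor lists, periodic boundaries, or thermostats.

```python
# Bare-bones molecular dynamics loop (velocity Verlet, Lennard-Jones pair
# forces, reduced units, unit masses), illustrating what LAMMPS does at
# scale. This NumPy sketch handles only a few atoms and omits neighbor
# lists, boundary conditions, and thermostats.
import numpy as np

def lj_forces(x):
    """Lennard-Jones forces and potential energy for positions x (n, 3)."""
    n = len(x)
    f = np.zeros_like(x)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = x[i] - x[j]
            r2 = rij @ rij
            inv6 = 1.0 / r2**3
            energy += 4.0 * (inv6**2 - inv6)
            fmag = 24.0 * (2.0 * inv6**2 - inv6) / r2     # (-dU/dr) / r
            f[i] += fmag * rij
            f[j] -= fmag * rij
    return f, energy

# Eight atoms on a small cubic lattice near the LJ equilibrium spacing.
x = 1.2 * np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)],
                   dtype=float)
v = np.zeros_like(x)
dt = 0.001
f, energy = lj_forces(x)
for _ in range(1000):                 # velocity-Verlet time stepping
    v += 0.5 * dt * f
    x += dt * v
    f, energy = lj_forces(x)
    v += 0.5 * dt * f
print("final potential energy:", energy)
```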

Multiphysics Simulation Technologies

Multiphysics Simulation Technologies develops advanced or novel simulation methods and application software designed for use on high-performance computing platforms to solve currently intractable problems of national importance. Current strategic thrusts include development and application of peridynamics and gradient-free methods for material failure, fracture, and fragmentation and development of new integrated capabilities in support of nuclear energy, including reactor performance and safety and used nuclear fuel storage and disposal. Core areas of competency include PDE solution methods, multiscale techniques, multiphysics coupling, embedded uncertainty quantification, constitutive modeling, and code development.

Nuclear Energy

We develop and apply new coupled and integrated capabilities in support of nuclear energy, including reactor performance and safety and used nuclear fuel storage and disposal. We are a core partner in the Consortium for Advanced Simulation of Light Water Reactors (CASL) and are developing and deploying center capabilities that comprise the foundation of the Virtual Environment for Reactor Applications (VERA), recently released by CASL for use by its nuclear industry partners to address many of their most challenging operational and safety problems. We also support the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program through our Verification & Validation and Uncertainty Quantification capabilities and expertise.

Nuclear Nonproliferation

Description forthcoming

Numerical Analysis and Applications

Applied mathematics and numerical analysis provide a key foundation for Sandia’s success in applying advanced computing to solve national-scale problems. Such work includes the development of new numerical discretizations that are provably stable and convergent for multi-physics problems, as well as algorithms and tools for mathematical optimization in the presence of uncertainty that enable critical decisions to be made based on the results of computer simulation. In each case, these methods are developed with a keen awareness of the underlying mathematics and are applied to a wide range of energy, geoscience, climate and national security applications.

Peridynamics

The peridynamic theory of solid mechanics, developed at Sandia, is a nonlocal extension of classical continuum mechanics for discontinuities and long-range forces. It is a mathematical theory that unifies the mechanics of continuous media, cracks, and discrete particles. Our research in peridynamics spans a number of interrelated areas, including mathematics, mechanics, constitutive-model development, scientific computing, and engineering. We apply peridynamics with a focus on pervasive material failure and fracture to meet the challenges of Sandia’s national security missions. Our work is enriched through ongoing collaborations with academia and industry.
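
For reference, the bond-based peridynamic equation of motion replaces the spatial derivatives of classical continuum mechanics with an integral of pairwise force densities over a neighborhood (horizon) of each material point:

```latex
% Bond-based peridynamic equation of motion: the divergence of stress in the
% classical momentum balance is replaced by an integral of pairwise force
% densities f over the neighborhood (horizon) H_x of each material point x,
% plus a body force density b.
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
   = \int_{\mathcal{H}_{\mathbf{x}}}
       \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t) - \mathbf{u}(\mathbf{x},t),\;
                       \mathbf{x}' - \mathbf{x}\bigr)\, dV_{\mathbf{x}'}
     \;+\; \mathbf{b}(\mathbf{x},t)
```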

Scalable Algorithms

Effective use of extreme-scale computing systems depends on the availability of scalable parallel algorithms. Sandia has a long history of activities in this area, with a focus on algorithms to enable parallel science and engineering simulations. Core areas of competency include dynamic load balancing for adaptive applications, iterative linear solvers, eigensolvers, and preconditioning methods.

Smart Grid

Electricity grid operators routinely solve optimization problems to address core decision processes at various time-scales, ranging from 5 minutes to multiple decades. Historically, these problems are addressed in terms of deterministic optimization, with resources kept in reserve to address any potential uncertainty regarding the future. In the context of daily operations, this approach is becoming increasingly costly and unreliable with the introduction of significant quantities of renewable generation units, e.g., wind and solar farms, for which the electricity generation levels are both variable and uncertain. For planning, increasingly volatile weather leads to disruptions caused by events that were not anticipated, e.g., "100 year" floods occurring multiple times in a decade. Thus, stochastic optimization, the ability to perform optimization while directly addressing system and environmental uncertainties, is becoming a significant algorithm driver for utilities and national planning agencies. The development of efficient algorithms for stochastic optimization remains a significant challenge, however, due to the complexity of the associated decision problems. This research is being conducted in the context of Sandia’s Coopr optimization package, through the modeling and solver functionality provided by the Pyomo and PySP libraries.
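
As a deliberately tiny, hedged illustration of the two-stage stochastic programs described above, the sketch below writes an extensive-form reserve/dispatch model directly in Pyomo: the first-stage reserve decision is made before the wind outcome is known, and scenario-dependent recourse generation covers any shortfall. All numbers are hypothetical, an LP solver such as glpk is assumed, and a real PySP model would use its scenario-tree machinery instead.

```python
# Tiny two-stage stochastic dispatch sketch in extensive form, written
# directly in Pyomo. The first-stage variable `reserve` is chosen before
# the wind outcome is known; recourse generation gen[s] fills any shortfall
# in each scenario. Costs, probabilities, and wind levels are hypothetical.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize, SolverFactory)

demand = 100.0
wind = {"low": 20.0, "mid": 50.0, "high": 80.0}      # scenario wind output
prob = {"low": 0.3, "mid": 0.5, "high": 0.2}
reserve_cost, gen_cost = 5.0, 12.0                   # recourse power costs more
scenarios = list(wind)

m = ConcreteModel()
m.reserve = Var(domain=NonNegativeReals)                       # first stage
m.gen = Var(scenarios, domain=NonNegativeReals)                # recourse
m.balance = Constraint(scenarios,
    rule=lambda m, s: m.reserve + m.gen[s] + wind[s] >= demand)
m.cost = Objective(
    expr=reserve_cost * m.reserve
         + sum(prob[s] * gen_cost * m.gen[s] for s in scenarios),
    sense=minimize)

SolverFactory("glpk").solve(m)          # assumes an LP solver is installed
print("reserve:", m.reserve.value, {s: m.gen[s].value for s in scenarios})
```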

Beyond optimization, predictive simulation will likely play a significant role in the future electricity grid. Specifically, the ability to anticipate the consequences of and risk associated with specific control actions is critical in the operation of a resilient electricity grid. Examples of predictive simulation tools include advanced circuit simulators such as Xyce, which are capable of faster-than-real-time simulations of large-scale electricity networks. Similarly, network analysis tools can be leveraged to identify critical nodes in an electricity grid, which can inform both longer-term planning processes and shorter-term security concerns.

Software Development

Through open source projects, we collaborate on innovations in tool development, component development, and scalable algorithm research with partners and customers around the world. Our current software projects focus on machine learning, graph algorithms, cognitive modeling, text analysis, visualization, systems dynamics, and operations research.

System Software

System software research and development activities provide the software foundation that enables the scaling and performance of applications to unprecedented levels. Sandia has performed pioneering work in lightweight operating system and scalable runtime systems for some of the world’s largest computing platforms. Core areas of competency are lightweight operating systems, multi-threaded runtime systems, high-performance interconnect APIs, parallel I/O and file systems, and scalable system management infrastructure software.

Uncertainty Quantification

Uncertainty Quantification (UQ) is a growing area of importance for quantifying confidence in computational engineering simulations. Our activities in this area include fundamental algorithm research, implementation of those algorithms into our flagship software tools DAKOTA (http://dakota.sandia.gov) and Trilinos (http://trilinos.sandia.gov), and deployment of those tools to mission-critical application teams within Sandia as well as other partners throughout the country. A major focus of the technical R&D is on improving robustness, computational efficiency, and scalability for problems with high random dimensions, multi-scale and coupled multi-physics applications, and problems with difficult-to-model physics such as nonlinearities and discontinuities.
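
In its simplest sampling-based form, forward uncertainty propagation looks like the hedged sketch below: draw samples of uncertain inputs, push them through a model (here a hypothetical analytic response standing in for a simulation), and summarize the output distribution. DAKOTA automates this workflow, along with far more efficient methods, around real simulation codes.

```python
# Simplest form of forward uncertainty propagation: Monte Carlo sampling of
# uncertain inputs through a hypothetical, analytic model response. Real UQ
# studies wrap methods like this (and far more efficient ones) around
# expensive simulation codes.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Uncertain inputs: a lognormal stiffness and a normal load (hypothetical).
stiffness = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)
load = rng.normal(loc=1.0, scale=0.2, size=n_samples)

# Model response (stand-in for a simulation): displacement = load / stiffness.
displacement = load / stiffness

print("mean:", displacement.mean())
print("std :", displacement.std())
print("95th percentile:", np.percentile(displacement, 95))
```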

Verification and Validation

Verification and Validation (V&V) are important technologies for establishing a science basis for computational engineering predictions. Our activities include the development of new methodologies and workflows for code verification, solution verification, and validation in support of engineering analysis. This work requires close partnership with other Sandia organizations. We also work across organizations to develop and apply predictive maturity assessment methodologies.