Publications

Results 826–850 of 9,998


Second-order invariant domain preserving approximation of the compressible Navier–Stokes equations

Computer Methods in Applied Mechanics and Engineering

Guermond, Jean L.; Maier, Matthias; Popov, Bojan; Tomas, Ignacio T.

We present a fully discrete approximation technique for the compressible Navier–Stokes equations that is second-order accurate in time and space, semi-implicit, and guaranteed to be invariant domain preserving. The restriction on the time step is the standard hyperbolic CFL condition, i.e., τ ≲ O(h)/V, where V is some reference velocity scale and h is the typical mesh size.
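The hyperbolic CFL restriction quoted above can be made concrete with a small sketch. This is a minimal illustration, not code from the paper: the safety constant C, the reference velocity V, and the mesh sizes below are illustrative assumptions.

```python
# Hedged sketch: a hyperbolic CFL time-step bound tau <~ C * h / V.
# C, V, and the mesh sizes are illustrative choices, not the paper's values.

def cfl_time_step(h, V, C=0.5):
    """Largest admissible time step under a hyperbolic CFL condition."""
    if V <= 0.0:
        raise ValueError("reference velocity must be positive")
    return C * h / V

# Halving the mesh size halves the admissible time step (first order in h),
# in contrast to a parabolic restriction, which would scale like h**2.
tau_coarse = cfl_time_step(h=0.02, V=1.0)
tau_fine = cfl_time_step(h=0.01, V=1.0)
```

The point of the abstract's claim is exactly this linear scaling: a fully explicit treatment of the viscous terms would instead force a τ ≲ O(h²) restriction.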

Milestone M6 Report: Reducing Excess Data Movement Part 1

Peng, Ivy; Voskuilen, Gwendolyn R.; Sarkar, Abhik; Boehme, David; Long, Rogelio; Moore, Shirley; Gokhale, Maya

This is the second in a sequence of three Hardware Evaluation milestones that provide insight into the following questions: What are the sources of excess data movement across all levels of the memory hierarchy, going out to the network fabric? What can be done at various levels of the hardware/software hierarchy to reduce excess data movement? How does application performance track reduced data movement? The results of this study can be used to suggest where the DOE supercomputing facilities, working with their hardware vendors, can optimize aspects of the system to reduce excess data movement. Quantitative analysis will also help systems software and applications optimize caching and data layout strategies. Another potential avenue is to answer cost-benefit questions, such as those involving memory capacity versus latency and bandwidth. This milestone focuses on techniques to reduce data movement, quantitatively evaluates the efficacy of the techniques in accomplishing that goal, and measures how performance tracks data movement reduction. We study a small collection of benchmarks and proxy mini-apps that run on pre-exascale GPUs and on the Accelsim GPU simulator. Our approach has two thrusts: to measure advanced data movement reduction directives and techniques on the newest available GPUs, and to evaluate our benchmark set on simulated GPUs configured with architectural refinements to reduce data movement.

Validation Metrics for Fixed Effects and Mixed-Effects Calibration

Journal of Verification, Validation and Uncertainty Quantification

Porter, N.W.; Maupin, Kathryn A.; Swiler, Laura P.; Mousseau, Vincent A.

The modern scientific process often involves the development of a predictive computational model. To improve its accuracy, a computational model can be calibrated to a set of experimental data. A variety of validation metrics can be used to quantify this process. Some of these metrics have direct physical interpretations and a history of use, while others, especially those for probabilistic data, are more difficult to interpret. In this work, a variety of validation metrics are used to quantify the accuracy of different calibration methods. Frequentist and Bayesian perspectives are used with both fixed effects and mixed-effects statistical models. Through a quantitative comparison of the resulting distributions, the most accurate calibration method can be selected. Two examples are included which compare the results of various validation metrics for different calibration methods. It is quantitatively shown that, in the presence of significant laboratory biases, a fixed effects calibration is significantly less accurate than a mixed-effects calibration. This is because the mixed-effects statistical model better characterizes the underlying parameter distributions than the fixed effects model. The results suggest that validation metrics can be used to select the most accurate calibration model for a particular empirical model with corresponding experimental data.
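The selection step the abstract describes — scoring competing calibrations with a validation metric and keeping the most accurate — can be sketched in a few lines. This is an illustrative toy, not the paper's method: the observations, the two candidate prediction sets, and the use of root-mean-square error as the validation metric are all assumptions for demonstration.

```python
# Hedged sketch: compare two candidate calibrations against experimental
# data using a simple validation metric (RMSE here; the paper considers a
# broader set of metrics, including probabilistic ones).
import math

def rmse(preds, obs):
    """Root-mean-square error between model predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(preds, obs)) / len(obs))

# Illustrative data: observations plus predictions from two hypothetical
# calibration methods A and B (not from the paper).
observations = [1.1, 1.9, 3.2, 3.8, 5.1]
calibration_a = [1.0, 2.0, 3.0, 4.0, 5.0]
calibration_b = [1.3, 2.4, 3.6, 4.5, 5.6]

scores = {"A": rmse(calibration_a, observations),
          "B": rmse(calibration_b, observations)}
best = min(scores, key=scores.get)  # keep the calibration with lower error
```

In the paper's setting the same comparison is made between fixed effects and mixed-effects calibrations, where the metric exposes the bias that a fixed effects model absorbs from systematic laboratory offsets.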

Classification of orthostatic intolerance through data analytics

Medical and Biological Engineering and Computing

Hart, Joseph L.; Gilmore, Steven; Gremaud, Pierre; Olsen, Christian H.; Mehlsen, Jesper; Olufsen, Mette S.

Imbalance in the autonomic nervous system can lead to orthostatic intolerance manifested by dizziness, lightheadedness, and a sudden loss of consciousness (syncope); these are common conditions, but they are challenging to diagnose correctly. Uncertainties about the triggering mechanisms and the underlying pathophysiology have led to variations in their classification. This study uses machine learning to categorize patients with orthostatic intolerance. We use random forest classification trees to identify a small number of markers in blood pressure and heart rate time-series data measured during head-up tilt to (a) distinguish patients with a single pathology and (b) examine data from patients with a mixed pathophysiology. Next, we use K-means to cluster the markers representing the time-series data. We apply the proposed method to analyze clinical data from 186 subjects identified as control or suffering from one of four conditions: postural orthostatic tachycardia (POTS), cardioinhibition, vasodepression, and mixed cardioinhibition and vasodepression. Classification results support the use of supervised machine learning: we were able to categorize more than 95% of patients with a single condition and were able to subgroup all patients with mixed cardioinhibitory and vasodepressor syncope. Clustering results confirm the disease groups and identify two distinct subgroups within the control and mixed groups. The proposed study demonstrates how to use machine learning to discover structure in blood pressure and heart rate time-series data, and the methodology is applied to the classification of patients with orthostatic intolerance. Diagnosing orthostatic intolerance is challenging, and full characterization of the pathophysiological mechanisms remains a topic of ongoing research. This study provides a step toward leveraging machine learning to assist clinicians and researchers in addressing these challenges.
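The second stage of the pipeline — reduce each time series to a few scalar markers, then cluster the markers — can be illustrated with a self-contained toy. The synthetic heart-rate traces, the particular markers (mean and range), and this minimal K-means are illustrative assumptions; the study uses clinical head-up-tilt recordings and random forest classification alongside the clustering.

```python
# Hedged sketch: time series -> scalar markers -> K-means clustering.
# All data below are synthetic; nothing here is from the clinical study.
import random

random.seed(1)

def markers(series):
    """Summarize a time series by two scalar markers: mean and range."""
    return (sum(series) / len(series), max(series) - min(series))

def kmeans(points, k=2, iters=10):
    """Minimal K-means; initializes centers with the first and last point."""
    centers = [points[0], points[-1]]

    def nearest(p):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    for _ in range(iters):
        labels = [nearest(p) for p in points]
        for c in range(k):
            group = [p for p, lab in zip(points, labels) if lab == c]
            if group:  # keep old center if a cluster empties out
                centers[c] = tuple(sum(xs) / len(xs) for xs in zip(*group))
    return centers, [nearest(p) for p in points]

# Two synthetic groups: control-like traces (~70 bpm) and tachycardia-like
# traces (~110 bpm), each reduced to markers before clustering.
control = [[70 + random.gauss(0, 2) for _ in range(50)] for _ in range(5)]
tachy = [[110 + random.gauss(0, 2) for _ in range(50)] for _ in range(5)]
points = [markers(s) for s in control + tachy]
centers, labels = kmeans(points)
```

Because the marker space separates the two synthetic groups cleanly, the clustering recovers them exactly; the clinical interest in the paper lies in the subgroups that emerge within the control and mixed-pathology groups.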

Performant implementation of the atomic cluster expansion

Lysogorskiy, Yury; Rinaldi, Matteo; Menon, Sarath; van der Oord, Cas; Hammerschmidt, Thomas; Mrovec, Matous; Thompson, Aidan P.; Csanyi, Gabor; Ortner, Christoph; Drautz, Ralf

The atomic cluster expansion is a general polynomial expansion of the atomic energy in multi-atom basis functions. Here we implement the atomic cluster expansion in the performant C++ code PACE that is suitable for use in large scale atomistic simulations. We briefly review the atomic cluster expansion and give detailed expressions for energies and forces as well as efficient algorithms for their evaluation. We demonstrate that the atomic cluster expansion as implemented in PACE shifts a previously established Pareto front for machine learning interatomic potentials towards faster and more accurate calculations. Moreover, general purpose parameterizations are presented for copper and silicon and evaluated in detail. We show that the new Cu and Si potentials significantly improve on the best available potentials for highly accurate large-scale atomistic simulations.
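The core idea — a per-atom energy expanded in polynomial basis functions of the local environment — can be shown in a deliberately simplified form. This is a two-body toy with a Chebyshev basis, illustrative only: real ACE (and the PACE code) uses many-body basis functions with angular content, and the coefficients and cutoff below are assumptions, not fitted values.

```python
# Hedged toy of a polynomial atomic-energy expansion (NOT the PACE
# implementation): per-atom energy as a sum over neighbors of a Chebyshev
# polynomial in the scaled distance. Coefficients and cutoff are invented.

def chebyshev_basis(x, nmax):
    """Chebyshev polynomials T_0..T_nmax at x via the three-term recurrence."""
    T = [1.0, x]
    for _ in range(2, nmax + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return T[:nmax + 1]

def atom_energy(distances, coeffs, rcut=5.0):
    """Per-atom energy: polynomial in scaled distance, summed over neighbors."""
    e = 0.0
    for r in distances:
        if r >= rcut:          # neighbors beyond the cutoff contribute nothing
            continue
        x = 2.0 * r / rcut - 1.0  # map [0, rcut] onto [-1, 1]
        basis = chebyshev_basis(x, len(coeffs) - 1)
        e += sum(c * t for c, t in zip(coeffs, basis))
    return e
```

A fitted model would determine `coeffs` by regression against reference energies and forces; the paper's contribution is an efficient evaluation of the full many-body expansion, which shifts the accuracy/cost Pareto front.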

Concentric Spherical GNN for 3D Representation Learning

Fox, James S.; Zhao, Bo; Rajamanickam, Sivasankaran R.; Ramprasad, Rampi; Song, Le

Learning 3D representations that generalize well to arbitrarily oriented inputs is a challenge of practical importance in applications varying from computer vision to physics and chemistry. We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps, of which the single-sphere representation is a special case. Our hierarchical architecture is based on alternately learning to incorporate both intra-sphere and inter-sphere information. We show the applicability of our method for two different types of 3D inputs: mesh objects, which can be regularly sampled, and point clouds, which are irregularly distributed. We also propose an efficient mapping of point clouds to concentric spherical images, thereby bridging spherical convolutions on grids with general point clouds. We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
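One ingredient mentioned in the abstract — mapping a point cloud to concentric spherical images — can be sketched as binning points into radial shells and angular cells. The grid resolution and the occupancy-count feature below are illustrative assumptions, not the paper's exact mapping.

```python
# Hedged sketch: point cloud -> concentric spherical occupancy images,
# indexed as img[shell][theta_bin][phi_bin]. Resolutions are illustrative.
import math

def to_spherical_images(points, n_shells=2, n_theta=4, n_phi=8, rmax=1.0):
    """Bin 3D points into radial shells and (theta, phi) angular cells."""
    img = [[[0 for _ in range(n_phi)] for _ in range(n_theta)]
           for _ in range(n_shells)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0 or r > rmax:
            continue
        shell = min(int(r / rmax * n_shells), n_shells - 1)
        theta = math.acos(max(-1.0, min(1.0, z / r)))  # polar angle in [0, pi]
        phi = math.atan2(y, x) % (2.0 * math.pi)       # azimuth in [0, 2*pi)
        ti = min(int(theta / math.pi * n_theta), n_theta - 1)
        pj = min(int(phi / (2.0 * math.pi) * n_phi), n_phi - 1)
        img[shell][ti][pj] += 1
    return img

# One point near the equator of the inner shell, one near the pole of the
# outer shell.
img = to_spherical_images([(0.2, 0.0, 0.0), (0.0, 0.0, 0.9)])
```

Once the cloud is on this regular grid, spherical convolutions designed for gridded inputs can be applied directly, which is the bridge the abstract describes.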

High Rayleigh number variational multiscale large eddy simulations of Rayleigh-Bénard convection

Mechanics Research Communications

Sondak, David; Smith, Thomas M.; Pawlowski, Roger P.; Conde, Sidafa C.; Shadid, John N.

The variational multiscale (VMS) formulation is used to develop residual-based VMS large eddy simulation (LES) models for Rayleigh-Bénard convection. The resulting model is a mixed model that incorporates the VMS model and an eddy viscosity model. The Wall-Adapting Local Eddy-viscosity (WALE) model is used as the eddy viscosity model in this work. The new LES models were implemented in the finite element code Drekar. Simulations are performed using continuous, piecewise linear finite elements. The simulations ranged from Ra=10⁶ to Ra=10¹⁴ and were conducted at Pr=1 and Pr=7. Two domains were considered: a two-dimensional domain of aspect ratio 2 with a fluid confined between two parallel plates and a three-dimensional cylinder of aspect ratio 1/4. The Nusselt number from the VMS results is compared against three-dimensional direct numerical simulations and experiments. In all cases, the VMS results are in good agreement with existing literature.
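The nondimensional groups the abstract reports can be computed from fluid properties with the standard definitions. This is a reference sketch only; the property values below are illustrative assumptions chosen to hit Ra=10⁶ and Pr=1, not the simulation's parameters.

```python
# Hedged sketch: standard definitions of the Rayleigh and Prandtl numbers
# for a fluid layer of depth H. Sample values are illustrative only.

def rayleigh(g, beta, dT, H, nu, kappa):
    """Ra = g * beta * dT * H**3 / (nu * kappa)."""
    return g * beta * dT * H ** 3 / (nu * kappa)

def prandtl(nu, kappa):
    """Pr = nu / kappa (momentum vs thermal diffusivity)."""
    return nu / kappa

# Toy values giving Ra = 1e6 (the low end of the abstract's range) and Pr = 1:
Ra = rayleigh(g=10.0, beta=0.1, dT=1.0, H=1.0, nu=1e-3, kappa=1e-3)
Pr = prandtl(1e-3, 1e-3)
```

Reaching the upper end of the abstract's range, Ra=10¹⁴, requires resolving an enormous scale separation, which is what motivates LES modeling in place of direct numerical simulation.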
