Publications

Results 801–850 of 9,998

The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

Experimental Mechanics

Turner, Daniel Z.; Lehoucq, Richard B.; Reu, Phillip L.

This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. [2009]). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, computed in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution that was previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.
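As an illustrative note, the geometric quantity the abstract describes can be sketched numerically: an average of the cosine of the angle between an assumed motion direction and the local image gradients over a subregion. This is a minimal sketch under that reading, not the paper's exact correction factor; the function name and the use of the absolute cosine are assumptions.

```python
import numpy as np

def gradient_alignment_factor(grad_x, grad_y, motion):
    """Mean |cos(angle)| between an assumed motion direction and the
    image gradients over a subregion (hypothetical illustration of the
    alignment quantity; the paper's exact form may differ).

    grad_x, grad_y : 2D arrays of image intensity gradients
    motion         : (u, v) assumed true motion direction
    """
    g = np.stack([grad_x.ravel(), grad_y.ravel()], axis=1)
    m = np.asarray(motion, dtype=float)
    m = m / np.linalg.norm(m)
    g_norm = np.linalg.norm(g, axis=1)
    mask = g_norm > 0                      # ignore zero-gradient pixels
    cos_theta = (g[mask] @ m) / g_norm[mask]
    return float(np.mean(np.abs(cos_theta)))
```

When the motion is parallel to all gradients the factor is 1 (well-posed direction); when it is perpendicular the factor is 0, the ill-posed case the abstract flags.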

More Details

TAFI/Kebab End of Project Report

Laros, James H.; Wisniewski, Kyra L.; Ward, Katrina J.; Khanna, Kanad K.

This report focuses on the two primary goals set forth in Sandia's TAFI effort, referred to here under the name Kebab. The first goal is to overlay a trajectory onto a large database of historical trajectories, all with very different sampling rates from the original track. We demonstrate a fast method to accomplish this, even for databases holding over a million tracks. The second goal is to demonstrate that these matched historical trajectories can be used to make predictions about unknown qualities associated with the original trajectory. As part of this work, we also examine the problem of defining the qualities of a trajectory in a reproducible way.
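The sampling-rate mismatch the abstract mentions is commonly handled by resampling each track onto a common parameterization before comparison. A minimal sketch of that idea, assuming 1D tracks of (time, value) pairs and brute-force nearest-neighbor search (the report's actual matching method is not specified here):

```python
import numpy as np

def resample(track, n=32):
    """Linearly resample a (t, x) trajectory onto n uniform time points,
    removing dependence on the original sampling rate."""
    t, x = track
    tt = np.linspace(t[0], t[-1], n)
    return np.interp(tt, t, x)

def best_matches(query, database, k=3):
    """Indices of the k historical tracks closest to the query after
    resampling. Brute force for clarity; an index structure such as a
    KD-tree over the resampled vectors would scale this toward the
    million-track databases the report describes."""
    q = resample(query)
    d = [np.linalg.norm(q - resample(tr)) for tr in database]
    return [int(i) for i in np.argsort(d)[:k]]
```

Matched neighbors can then vote on, or average, the unknown qualities of the query track, as in the report's second goal.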

More Details

Efficacy of the radial pair potential approximation for molecular dynamics simulations of dense plasmas

Physics of Plasmas

Stanek, Lucas J.; Clay III, Raymond C.; Dharma-Wardana, M.W.C.; Wood, Mitchell A.; Beckwith, Kristian B.; Murillo, Michael S.

Macroscopic simulations of dense plasmas rely on detailed microscopic information that can be computationally expensive and is difficult to verify experimentally. In this work, we delineate the accuracy boundary between microscale simulation methods by comparing Kohn-Sham density functional theory molecular dynamics (KS-MD) and radial pair potential molecular dynamics (RPP-MD) for a range of elements, temperatures, and densities. By extracting the optimal RPP from KS-MD data using force matching, we constrain its functional form and dismiss classes of potentials that assume a constant power law for small interparticle distances. Our results show excellent agreement between RPP-MD and KS-MD for multiple metrics of accuracy at temperatures of only a few electron volts. The use of RPPs offers orders of magnitude decrease in computational cost and indicates that three-body potentials are not required beyond temperatures of a few eV. Due to its efficiency, the validated RPP-MD provides an avenue for reducing errors due to finite-size effects that can be on the order of ~20%.
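The force-matching step the abstract refers to is, at its core, a linear least-squares problem when the pair force is expanded in basis functions: the coefficients that best reproduce the reference (KS-MD) forces are fit directly. A 1D toy sketch of that linear-algebra core (the function names, basis, and geometry are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def design_matrix(positions, basis):
    """Matrix mapping pair-force coefficients c_k to net particle forces
    for a 1D toy chain: F_i = sum_k c_k * sum_{j!=i} sign(x_i - x_j) * phi_k(|x_i - x_j|)."""
    n, m = len(positions), len(basis)
    A = np.zeros((n, m))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[i] - positions[j]
            for k, phi in enumerate(basis):
                A[i, k] += np.sign(r) * phi(abs(r))
    return A

def force_match(positions, ref_forces, basis):
    """Least-squares fit of pair-force coefficients to reference forces,
    e.g. forces taken from ab initio (KS-MD) snapshots."""
    A = design_matrix(positions, basis)
    c, *_ = np.linalg.lstsq(A, ref_forces, rcond=None)
    return c
```

In practice the fit pools many 3D snapshots and the basis is chosen flexibly, which is what lets the authors constrain the functional form of the RPP at small separations rather than assuming a power law.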

More Details

Parallel Solver Framework for Mixed-Integer PDE-Constrained Optimization

Phillips, Cynthia A.; Chatter, Michelle A.; Eckstein, Jonathan; Erturk, Alper; El-Kady, I.; Gerbe, Romain; Kouri, Drew P.; Loughlin, William; Reinke, Charles M.; Rokkam, Rohith; Ruzzene, Massimo; Sugino, Christopher; Swanson, Calvin; van Bloemen Waanders, Bart G.

ROL-PEBBL is a C++, MPI-based parallel code for mixed-integer PDE-constrained optimization (MIPDECO). In these problems we wish to optimize (control, design, etc.) physical systems, which must obey the laws of physics, when some of the decision variables must take integer values. ROL-PEBBL combines a code to efficiently search over integer choices (PEBBL = Parallel Enumeration Branch-and-Bound Library) and a code for efficient nonlinear optimization, including PDE-constrained optimization (ROL = Rapid Optimization Library). In this report, we summarize the design of ROL-PEBBL and initial applications/results. For an artificial source-inversion problem, finding sources of pollution on a grid from sparse samples, ROL-PEBBL's solution for the finest grid gave the best optimization guarantee of any general solver that provides both a solution and a quality guarantee.

More Details

Second-order invariant domain preserving approximation of the compressible Navier–Stokes equations

Computer Methods in Applied Mechanics and Engineering

Guermond, Jean L.; Maier, Matthias; Popov, Bojan; Tomas, Ignacio T.

We present a fully discrete approximation technique for the compressible Navier–Stokes equations that is second-order accurate in time and space, semi-implicit, and guaranteed to be invariant domain preserving. The restriction on the time step is the standard hyperbolic CFL condition, i.e., τ ≲ O(h)/V, where V is a reference velocity scale and h the typical mesh size.
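For concreteness, the stated time-step restriction amounts to a simple arithmetic bound; a minimal sketch with an assumed safety constant (the constant and function name are illustrative, not from the paper):

```python
def hyperbolic_cfl_dt(h, V, C=0.5):
    """Largest admissible time step under the standard hyperbolic CFL
    condition tau <~ C * h / V, for mesh size h, reference velocity V,
    and an assumed safety factor C < 1."""
    return C * h / V
```

For example, h = 0.01 and V = 2 with C = 0.5 gives τ = 0.0025; note the bound scales only linearly in h, rather than the τ ≲ O(h²) restriction a fully explicit treatment of the viscous terms would impose.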

More Details

Milestone M6 Report: Reducing Excess Data Movement Part 1

Peng, Ivy; Voskuilen, Gwendolyn R.; Sarkar, Abhik; Boehme, David; Long, Rogelio; Moore, Shirley; Gokhale, Maya

This is the second in a sequence of three Hardware Evaluation milestones that provide insight into the following questions: What are the sources of excess data movement across all levels of the memory hierarchy, going out to the network fabric? What can be done at various levels of the hardware/software hierarchy to reduce excess data movement? How does reduced data movement track application performance? The results of this study can be used to suggest where the DOE supercomputing facilities, working with their hardware vendors, can optimize aspects of the system to reduce excess data movement. Quantitative analysis will also benefit systems software and applications to optimize caching and data layout strategies. Another potential avenue is to answer cost-benefit questions, such as those involving memory capacity versus latency and bandwidth. This milestone focuses on techniques to reduce data movement, quantitatively evaluates the efficacy of the techniques in accomplishing that goal, and measures how performance tracks data movement reduction. We study a small collection of benchmarks and proxy mini-apps that run on pre-exascale GPUs and on the Accelsim GPU simulator. Our approach has two thrusts: to measure advanced data movement reduction directives and techniques on the newest available GPUs, and to evaluate our benchmark set on simulated GPUs configured with architectural refinements to reduce data movement.

More Details

Validation Metrics for Fixed Effects and Mixed-Effects Calibration

Journal of Verification, Validation and Uncertainty Quantification

Porter, N.W.; Maupin, Kathryn A.; Swiler, Laura P.; Mousseau, Vincent A.

The modern scientific process often involves the development of a predictive computational model. To improve its accuracy, a computational model can be calibrated to a set of experimental data. A variety of validation metrics can be used to quantify this process. Some of these metrics have direct physical interpretations and a history of use, while others, especially those for probabilistic data, are more difficult to interpret. In this work, a variety of validation metrics are used to quantify the accuracy of different calibration methods. Frequentist and Bayesian perspectives are used with both fixed effects and mixed-effects statistical models. Through a quantitative comparison of the resulting distributions, the most accurate calibration method can be selected. Two examples are included which compare the results of various validation metrics for different calibration methods. It is quantitatively shown that, in the presence of significant laboratory biases, a fixed effects calibration is significantly less accurate than a mixed-effects calibration. This is because the mixed-effects statistical model better characterizes the underlying parameter distributions than the fixed effects model. The results suggest that validation metrics can be used to select the most accurate calibration model for a particular empirical model with corresponding experimental data.
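One widely used validation metric for probabilistic data of the kind the abstract discusses is the area between the empirical CDFs of model predictions and experimental observations (often called the area validation metric). A minimal sketch, offered as an illustration of such a metric rather than the specific metrics used in the paper:

```python
import numpy as np

def area_metric(model_samples, data_samples):
    """Area between the empirical CDFs of model-prediction samples and
    experimental-data samples; smaller values indicate a better-calibrated
    model. Exact for step ECDFs (rectangle rule on the merged grid)."""
    xs = np.union1d(model_samples, data_samples)   # sorted merged support

    def ecdf(samples):
        s = np.sort(samples)
        return np.searchsorted(s, xs, side="right") / len(s)

    f, g = ecdf(model_samples), ecdf(data_samples)
    return float(np.sum(np.abs(f - g)[:-1] * np.diff(xs)))
```

Computing such a metric for each candidate calibration (fixed effects vs. mixed effects) gives the kind of quantitative comparison of resulting distributions that the abstract uses to select the most accurate method.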

More Details

Classification of orthostatic intolerance through data analytics

Medical and Biological Engineering and Computing

Hart, Joseph L.; Gilmore, Steven; Gremaud, Pierre; Olsen, Christian H.; Mehlsen, Jesper; Olufsen, Mette S.

Imbalance in the autonomic nervous system can lead to orthostatic intolerance manifested by dizziness, lightheadedness, and a sudden loss of consciousness (syncope); these are common conditions, but they are challenging to diagnose correctly. Uncertainties about the triggering mechanisms and the underlying pathophysiology have led to variations in their classification. This study uses machine learning to categorize patients with orthostatic intolerance. We use random forest classification trees to identify a small number of markers in blood pressure and heart rate time-series data measured during head-up tilt to (a) distinguish patients with a single pathology and (b) examine data from patients with a mixed pathophysiology. Next, we use K-means to cluster the markers representing the time-series data. We apply the proposed method to clinical data from 186 subjects identified as control or suffering from one of four conditions: postural orthostatic tachycardia syndrome (POTS), cardioinhibition, vasodepression, and mixed cardioinhibition and vasodepression. Classification results confirm the utility of supervised machine learning: we were able to categorize more than 95% of patients with a single condition and were able to subgroup all patients with mixed cardioinhibitory and vasodepressor syncope. Clustering results confirm the disease groups and identify two distinct subgroups within the control and mixed groups. The proposed study demonstrates how to use machine learning to discover structure in blood pressure and heart rate time-series data. The methodology is used in classification of patients with orthostatic intolerance. Diagnosing orthostatic intolerance is challenging, and full characterization of the pathophysiological mechanisms remains a topic of ongoing research. This study provides a step toward leveraging machine learning to assist clinicians and researchers in addressing these challenges.
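The clustering stage the abstract describes groups per-patient marker vectors. A minimal K-means sketch illustrating that step, using a deterministic initialization for reproducibility (the study itself would use a full library implementation and clinically derived markers):

```python
import numpy as np

def kmeans(markers, k=2, iters=50):
    """Minimal Lloyd's K-means over per-patient marker vectors, e.g.
    features extracted from blood pressure / heart rate time series
    during head-up tilt. Deterministic init: first k rows as centers."""
    centers = markers[:k].astype(float)
    labels = np.zeros(len(markers), dtype=int)
    for _ in range(iters):
        # assign each marker vector to its nearest center
        d = np.linalg.norm(markers[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centers[j] = markers[labels == j].mean(axis=0)
    return labels, centers
```

Comparing the resulting clusters against the supervised labels is what lets the study both confirm the disease groups and surface subgroups within the control and mixed populations.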

More Details

Performant implementation of the atomic cluster expansion

Lysogorskiy, Yury; Rinaldi, Matteo; Menon, Sarath; van der Oord, Cas; Hammerschmidt, Thomas; Mrovec, Matous; Thompson, Aidan P.; Csanyi, Gabor; Ortner, Christoph; Drautz, Ralf

The atomic cluster expansion is a general polynomial expansion of the atomic energy in multi-atom basis functions. Here we implement the atomic cluster expansion in the performant C++ code PACE that is suitable for use in large scale atomistic simulations. We briefly review the atomic cluster expansion and give detailed expressions for energies and forces as well as efficient algorithms for their evaluation. We demonstrate that the atomic cluster expansion as implemented in PACE shifts a previously established Pareto front for machine learning interatomic potentials towards faster and more accurate calculations. Moreover, general purpose parameterizations are presented for copper and silicon and evaluated in detail. We show that the new Cu and Si potentials significantly improve on the best available potentials for highly accurate large-scale atomistic simulations.

More Details

Concentric Spherical GNN for 3D Representation Learning

Fox, James S.; Zhao, Bo; Rajamanickam, Sivasankaran R.; Ramprasad, Rampi; Song, Le

Learning 3D representations that generalize well to arbitrarily oriented inputs is a challenge of practical importance in applications varying from computer vision to physics and chemistry. We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps, of which the single-sphere representation is a special case. Our hierarchical architecture is based on alternately learning to incorporate both intra-sphere and inter-sphere information. We show the applicability of our method for two different types of 3D inputs: mesh objects, which can be regularly sampled, and point clouds, which are irregularly distributed. We also propose an efficient mapping of point clouds to concentric spherical images, thereby bridging spherical convolutions on grids with general point clouds. We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
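The point-cloud-to-concentric-spherical-image mapping the abstract mentions can be sketched as binning each point by radial shell and angular cell to form a stack of spherical occupancy grids. This is a minimal illustration of the general idea, not the paper's specific (efficient) mapping; the grid resolutions and occupancy-count encoding are assumptions.

```python
import numpy as np

def to_concentric_spherical(points, n_shells=4, n_theta=8, n_phi=16):
    """Bin an (N, 3) point cloud into concentric spherical occupancy
    grids: index by radial shell, polar angle, and azimuthal angle."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))  # polar in [0, pi]
    phi = np.arctan2(y, x) + np.pi                                   # azimuth in [0, 2*pi]
    s = np.minimum((r / (r.max() + 1e-12) * n_shells).astype(int), n_shells - 1)
    t = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    p = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    grid = np.zeros((n_shells, n_theta, n_phi))
    np.add.at(grid, (s, t, p), 1.0)   # accumulate point counts per cell
    return grid
```

Each shell of the resulting grid is a spherical image, so spherical convolutions designed for gridded data can then be applied to irregular point clouds.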

More Details

High Rayleigh number variational multiscale large eddy simulations of Rayleigh-Bénard convection

Mechanics Research Communications

Sondak, David; Smith, Thomas M.; Pawlowski, Roger P.; Conde, Sidafa C.; Shadid, John N.

The variational multiscale (VMS) formulation is used to develop residual-based VMS large eddy simulation (LES) models for Rayleigh-Bénard convection. The resulting model is a mixed model that incorporates the VMS model and an eddy viscosity model. The Wall-Adapting Local Eddy-viscosity (WALE) model is used as the eddy viscosity model in this work. The new LES models were implemented in the finite element code Drekar. Simulations are performed using continuous, piecewise linear finite elements. The simulations ranged from Ra=10⁶ to Ra=10¹⁴ and were conducted at Pr=1 and Pr=7. Two domains were considered: a two-dimensional domain of aspect ratio 2 with a fluid confined between two parallel plates and a three-dimensional cylinder of aspect ratio 1/4. The Nusselt number from the VMS results is compared against three-dimensional direct numerical simulations and experiments. In all cases, the VMS results are in good agreement with existing literature.

More Details