Publications

Results 751–800 of 9,998

Spin-lattice model for cubic crystals

Physical Review B

Nieves, P.; Tranchida, Julien G.; Arapan, S.; Legut, D.

We present a methodology based on the Néel model to build a classical spin-lattice Hamiltonian for cubic crystals capable of describing magnetic properties induced by spin-orbit coupling, such as magnetocrystalline anisotropy and anisotropic magnetostriction, as well as exchange magnetostriction. Taking advantage of the analytical solutions of the Néel model, we derive theoretical expressions for the parametrization of the exchange integrals and the Néel dipole and quadrupole terms that link them to the magnetic properties of the material. This approach allows us to build accurate spin-lattice models with the desired magnetoelastic properties. We also explore a possible way to model the volume dependence of the magnetic moment based on the Landau energy, which allows us to consider the effects of hydrostatic pressure on the saturation magnetization. We apply this method to develop spin-lattice models for BCC Fe and FCC Ni, and we show that they accurately reproduce the experimental elastic tensor, magnetocrystalline anisotropy under pressure, anisotropic magnetostrictive coefficients, volume magnetostriction, and saturation magnetization under pressure at zero temperature. This work could constitute a step towards large-scale modeling of magnetoelastic phenomena.
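
The Néel pair interaction at the heart of such a Hamiltonian can be made concrete. The sketch below evaluates the standard Néel dipole term for a single bond; the coefficient value is an arbitrary placeholder, since the paper's contribution is precisely the parametrization linking it to measured magnetoelastic constants:

```python
def neel_dipole_energy(s_i, s_j, e_ij, l_r):
    """Néel dipole pair term: l(r) * [ (s_i.e)(s_j.e) - (1/3) s_i.s_j ].
    s_i, s_j: unit spin vectors; e_ij: unit bond vector; l_r: dipole coefficient."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return l_r * (dot(s_i, e_ij) * dot(s_j, e_ij) - dot(s_i, s_j) / 3.0)

# Spins along the bond vs. perpendicular to it give different energies;
# this directional dependence is what encodes anisotropy, and letting
# l depend on bond length encodes magnetostriction.
e_bond = (1.0, 0.0, 0.0)
parallel = neel_dipole_energy((1, 0, 0), (1, 0, 0), e_bond, l_r=1.0)
perpendicular = neel_dipole_energy((0, 1, 0), (0, 1, 0), e_bond, l_r=1.0)
print(parallel, perpendicular)  # 2/3 and -1/3 of l(r)
```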

Beryllium-driven structural evolution at the divertor surface

Nuclear Fusion

Cusentino, Mary A.; Wood, Mitchell A.; Thompson, Aidan P.

Erosion of the beryllium first wall material in tokamak reactors has been shown to result in transport and deposition on the tungsten divertor. Experimental studies of beryllium implantation in tungsten indicate that mixed W–Be intermetallic deposits can form, which have lower melting temperatures than tungsten and can trap tritium at higher rates. To better understand the formation and growth rate of these intermetallics, we performed cumulative molecular dynamics (MD) simulations of both high and low energy beryllium deposition in tungsten. In both cases, a W–Be mixed material layer (MML) emerged at the surface within several nanoseconds, through energetic implantation and a thermally activated exchange mechanism, respectively. While some ordering of the material into intermetallics occurred, fully ordered structures did not emerge from the deposition simulations. Targeted MD simulations of the MML, performed to further study Be diffusion and intermetallic growth rates, indicate that in both cases the gradual restructuring of the material into an ordered intermetallic layer is beyond accessible MD time scales (≤1 μs). However, the rapid formation of the MML within nanoseconds indicates that beryllium deposition can influence other plasma species interactions at the surface and begin to alter the tungsten material properties. Therefore, beryllium deposition on the divertor surface, even in small amounts, is likely to cause significant changes in plasma-surface interactions and will need to be considered in future studies.
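
Why ordering can lie beyond MD time scales is visible in a back-of-the-envelope Arrhenius estimate. All numbers below are illustrative placeholders, not fitted W–Be parameters:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_time(attempt_freq_hz, barrier_ev, temp_k):
    """Mean waiting time for one thermally activated event: 1 / (nu * exp(-Ea/kT))."""
    rate = attempt_freq_hz * math.exp(-barrier_ev / (K_B_EV * temp_k))
    return 1.0 / rate

# Illustrative numbers: a ~1.5 eV barrier at 1000 K with a typical 1e13 Hz
# attempt frequency already gives microseconds per event, so the many
# correlated events needed for an ordered layer exceed the <= 1 us horizon.
t = arrhenius_time(attempt_freq_hz=1e13, barrier_ev=1.5, temp_k=1000.0)
print(f"{t * 1e6:.1f} us per event")
```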

Higher-order particle representation for particle-in-cell simulations

Journal of Computational Physics

Bettencourt, Matthew T.

In this paper we present an alternative approach to the representation of simulation particles for unstructured electrostatic and electromagnetic PIC simulations. In our modified PIC algorithm we represent particles as having a smooth shape function limited by some specified finite radius, r0. A unique feature of our approach is the representation of this shape by surrounding simulation particles with a set of virtual particles with delta shape, with fixed offsets and weights derived from Gaussian quadrature rules and the value of r0. As the virtual particles are purely computational, they provide the additional benefit of increasing the arithmetic intensity of traditionally memory bound particle kernels. The modified algorithm is implemented within Sandia National Laboratories' unstructured EMPIRE-PIC code, for electrostatic and electromagnetic simulations, using periodic boundary conditions. We show results for a representative set of benchmark problems, including electron orbit, a transverse electromagnetic wave propagating through a plasma, numerical heating, and a plasma slab expansion. In this work, good error reduction across all of the chosen problems is achieved as the particles are made progressively smoother, with the optimal particle radius appearing to be problem-dependent.
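
The virtual-particle construction can be sketched in one dimension: a shaped particle of support radius r0 is replaced by delta particles at Gauss–Legendre nodes scaled by r0, with weights that preserve the total charge. This is a simplified illustration under those assumptions, not the EMPIRE-PIC implementation:

```python
import math

# 3-point Gauss-Legendre nodes/weights on [-1, 1] (weights sum to 2).
GL3 = [(-math.sqrt(3.0 / 5.0), 5.0 / 9.0),
       (0.0, 8.0 / 9.0),
       (math.sqrt(3.0 / 5.0), 5.0 / 9.0)]

def virtual_particles(x, weight, r0):
    """Replace one shaped particle at x with delta-shaped virtual particles
    whose fixed offsets span the support radius r0 and whose weights
    sum to the original particle weight."""
    return [(x + xi * r0, weight * wi / 2.0) for xi, wi in GL3]

vps = virtual_particles(x=0.5, weight=1.0, r0=0.1)
total = sum(w for _, w in vps)
print(vps, total)  # total charge is conserved
```

Because the offsets and weights are fixed at setup time, the virtual particles add floating-point work per real particle without extra memory traffic, which is the arithmetic-intensity benefit the abstract mentions.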

First-principles modeling of plasmons in aluminum under ambient and extreme conditions

Physical Review B

Ramakrishna, Kushal; Cangi, Attila; Dornheim, Tobias; Baczewski, Andrew D.; Vorberger, Jan

The theoretical understanding of plasmon behavior is crucial for an accurate interpretation of inelastic scattering diagnostics in many experiments. We highlight the utility of linear response time-dependent density functional theory (LR-TDDFT) as a first-principles framework for consistently modeling plasmon properties. We provide a comprehensive analysis of plasmons in aluminum from ambient to warm dense matter conditions and assess typical properties such as the dynamical structure factor, the plasmon dispersion, and the plasmon lifetime. We compare our results with scattering measurements and with other TDDFT results as well as models such as the random phase approximation, the Mermin approach, and the dielectric function obtained using static local field corrections of the uniform electron gas parametrized from path-integral Monte Carlo simulations. We conclude that results for the plasmon dispersion and lifetime are inconsistent between experiment and theories and that the common practice of extracting and studying plasmon dispersion relations is an insufficient procedure to capture the complicated physics contained in the dynamic structure factor in its full breadth.
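
For orientation, the leading-order plasmon energy these diagnostics probe follows from the classical plasma frequency; for aluminum's three valence electrons per atom it lands near the well-known ~15 eV feature. A minimal sketch of this long-wavelength estimate only; the dispersion and lifetime discrepancies discussed in the abstract live beyond it:

```python
import math

E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
M_E = 9.1093837015e-31       # kg
HBAR = 1.054571817e-34       # J s

def plasmon_energy_ev(n_e_per_m3):
    """Classical plasma frequency omega_p = sqrt(n e^2 / (eps0 m_e)),
    returned as the plasmon energy hbar*omega_p in eV."""
    omega_p = math.sqrt(n_e_per_m3 * E_CHARGE**2 / (EPS0 * M_E))
    return HBAR * omega_p / E_CHARGE

# Aluminum: three valence electrons per atom give n_e ~ 1.81e29 m^-3.
print(f"{plasmon_energy_ev(1.81e29):.1f} eV")
```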

An Analog Preconditioner for Solving Linear Systems [Slides]

Feinberg, Benjamin F.; Wong, Ryan; Xiao, Tianyao X.; Rohan, Jacob N.; Boman, Erik G.; Marinella, Matthew J.; Agarwal, Sapan A.; Ipek, Engin

This presentation concludes that in situ computation enables new approaches to linear algebra problems that can be both more effective and more efficient than conventional digital systems. Preconditioning is well suited to analog computation due to its tolerance for approximate solutions. When combined with prior work on in situ MVM for scientific computing, analog preconditioning can enable significant speedups for important linear algebra applications.

A Taxonomy for Classification and Comparison of Dataflows for GNN Accelerators

Garg, Raveesh; Qin, Eric; Martinez, Francisco M.; Guirado, Robert; Jain, Akshay; Abadal, Sergi; Abellan, Jose L.; Acacio, Manuel E.; Alarcon, Eduard; Rajamanickam, Sivasankaran R.; Krishna, Tushar

Recently, Graph Neural Networks (GNNs) have received a lot of interest because of their success in learning representations from graph structured data. However, GNNs exhibit different compute and memory characteristics compared to traditional Deep Neural Networks (DNNs). Graph convolutions require feature aggregations from neighboring nodes (known as the aggregation phase), which leads to highly irregular data accesses. GNNs also have a very regular compute phase that can be broken down to matrix multiplications (known as the combination phase). All recently proposed GNN accelerators utilize different dataflows and microarchitecture optimizations for these two phases, and different communication strategies between the two phases have also been used. However, as more custom GNN accelerators are proposed, it becomes harder to qualitatively classify them and quantitatively contrast them. In this work, we present a taxonomy to describe several diverse dataflows for running GNN inference on accelerators, providing a structured way to describe and compare the design space of GNN accelerators.

Classification of orthostatic intolerance through data analytics

Medical and Biological Engineering and Computing

Hart, Joseph L.; Gilmore, Steven; Gremaud, Pierre; Olsen, Christian H.; Mehlsen, Jesper; Olufsen, Mette S.

Imbalance in the autonomic nervous system can lead to orthostatic intolerance manifested by dizziness, lightheadedness, and a sudden loss of consciousness (syncope); these are common conditions, but they are challenging to diagnose correctly. Uncertainties about the triggering mechanisms and the underlying pathophysiology have led to variations in their classification. This study uses machine learning to categorize patients with orthostatic intolerance. We use random forest classification trees to identify a small number of markers in blood pressure and heart rate time-series data measured during head-up tilt to (a) distinguish patients with a single pathology and (b) examine data from patients with a mixed pathophysiology. Next, we use K-means to cluster the markers representing the time-series data. We apply the proposed method to analyze clinical data from 186 subjects identified as control or suffering from one of four conditions: postural orthostatic tachycardia syndrome (POTS), cardioinhibition, vasodepression, and mixed cardioinhibition and vasodepression. Classification results confirm the suitability of supervised machine learning: we were able to categorize more than 95% of patients with a single condition and to subgroup all patients with mixed cardioinhibitory and vasodepressor syncope. Clustering results confirm the disease groups and identify two distinct subgroups within the control and mixed groups. The proposed study demonstrates how to use machine learning to discover structure in blood pressure and heart rate time-series data, applied here to the classification of patients with orthostatic intolerance. Diagnosing orthostatic intolerance is challenging, and full characterization of the pathophysiological mechanisms remains a topic of ongoing research. This study provides a step toward leveraging machine learning to assist clinicians and researchers in addressing these challenges.
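
The clustering step can be sketched with a minimal deterministic K-means on synthetic two-dimensional "marker" vectors; the data below are hypothetical and far simpler than the study's blood-pressure and heart-rate markers:

```python
def kmeans(points, k=2, iters=20):
    """Minimal Lloyd's K-means on 2-D points with deterministic
    farthest-point seeding; returns one cluster label per point."""
    dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centers = [points[0]]
    while len(centers) < k:
        # Seed each new center at the point farthest from existing centers.
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels

# Two well-separated synthetic "marker" groups (hypothetical values).
group_a = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]
group_b = [(2.0, 2.1), (2.1, 1.9), (1.95, 2.05)]
labels = kmeans(group_a + group_b, k=2)
print(labels)  # [0, 0, 0, 1, 1, 1]
```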

The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

Experimental Mechanics

Turner, Daniel Z.; Lehoucq, Richard B.; Reu, Phillip L.

This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. [2009]). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
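
The geometric idea behind such a correction factor can be illustrated: motion components parallel to the image gradients are well constrained, while components orthogonal to them are the ill-posed directions (the aperture problem). The sketch below computes a mean squared cosine over a subset's gradients; it is an illustrative diagnostic in the spirit of the paper's integral correction factor, not its exact expression:

```python
import math

def gradient_alignment(motion, gradients):
    """Mean squared cosine of the angle between the motion direction and
    each image gradient in a subset: ~1 when gradients align with the motion
    (displacement well constrained), ~0 when they are orthogonal (ill-posed)."""
    norm = lambda v: math.hypot(v[0], v[1])
    cos2 = []
    for g in gradients:
        if norm(g) == 0 or norm(motion) == 0:
            continue  # zero vectors carry no directional information
        c = (motion[0] * g[0] + motion[1] * g[1]) / (norm(motion) * norm(g))
        cos2.append(c * c)
    return sum(cos2) / len(cos2)

# Gradients parallel to a horizontal motion constrain it fully;
# vertical gradients carry no information about it.
aligned = gradient_alignment((1.0, 0.0), [(2.0, 0.0), (1.0, 0.0)])
orthogonal = gradient_alignment((1.0, 0.0), [(0.0, 2.0), (0.0, 1.0)])
print(aligned, orthogonal)  # 1.0 and 0.0
```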

Validation Metrics for Fixed Effects and Mixed-Effects Calibration

Journal of Verification, Validation and Uncertainty Quantification

Porter, N.W.; Maupin, Kathryn A.; Swiler, Laura P.; Mousseau, Vincent A.

The modern scientific process often involves the development of a predictive computational model. To improve its accuracy, a computational model can be calibrated to a set of experimental data. A variety of validation metrics can be used to quantify this process. Some of these metrics have direct physical interpretations and a history of use, while others, especially those for probabilistic data, are more difficult to interpret. In this work, a variety of validation metrics are used to quantify the accuracy of different calibration methods. Frequentist and Bayesian perspectives are used with both fixed effects and mixed-effects statistical models. Through a quantitative comparison of the resulting distributions, the most accurate calibration method can be selected. Two examples are included which compare the results of various validation metrics for different calibration methods. It is quantitatively shown that, in the presence of significant laboratory biases, a fixed effects calibration is significantly less accurate than a mixed-effects calibration. This is because the mixed-effects statistical model better characterizes the underlying parameter distributions than the fixed effects model. The results suggest that validation metrics can be used to select the most accurate calibration model for a particular empirical model with corresponding experimental data.
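
The advantage of modeling per-laboratory effects can be seen in a toy example with hypothetical numbers: when labs contribute unequal numbers of biased measurements, a single pooled (fixed effects) mean inherits the dominant lab's bias, while averaging per-lab means, a deliberate caricature of the mixed-effects treatment, does not:

```python
# Hypothetical data: three labs measure the same quantity, true value 10.0,
# each with its own systematic bias and a different number of replicates.
labs = {
    "A": [11.0, 11.1, 10.9, 11.0, 11.2, 10.8],  # bias +1, many replicates
    "B": [9.0, 9.0],                            # bias -1, few replicates
    "C": [10.0],                                # unbiased, one replicate
}

# Fixed effects view: pool everything into one mean (dominated by lab A).
pooled = [x for vals in labs.values() for x in vals]
fixed_effects_mean = sum(pooled) / len(pooled)

# Mixed-effects-style view (caricature): estimate each lab's mean first,
# then average, so no single biased lab dominates.
lab_means = {lab: sum(v) / len(v) for lab, v in labs.items()}
mixed_effects_mean = sum(lab_means.values()) / len(lab_means)
print(fixed_effects_mean, mixed_effects_mean)
```

A real mixed-effects calibration would estimate the lab offsets as random effects with shrinkage rather than plain per-lab averages, but the failure mode of pooling in the presence of laboratory biases is the same.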

Concentric Spherical GNN for 3D Representation Learning

Fox, James S.; Zhao, Bo; Rajamanickam, Sivasankaran R.; Ramprasad, Rampi; Song, Le

Learning 3D representations that generalize well to arbitrarily oriented inputs is a challenge of practical importance in applications varying from computer vision to physics and chemistry. We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps, of which the single sphere representation is a special case. Our hierarchical architecture is based on alternately learning to incorporate both intra-sphere and inter-sphere information. We show the applicability of our method for two different types of 3D inputs: mesh objects, which can be regularly sampled, and point clouds, which are irregularly distributed. We also propose an efficient mapping of point clouds to concentric spherical images, thereby bridging spherical convolutions on grids with general point clouds. We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
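
The mapping of point clouds to concentric spherical images can be sketched as a simple occupancy binning in spherical coordinates. This shows only the coordinate transform and indexing; the paper's mapping is more refined:

```python
import math

def to_spherical_image(points, n_shells, n_theta, n_phi, r_max):
    """Bin 3-D points into a concentric-spherical occupancy grid indexed by
    (shell, polar bin, azimuthal bin)."""
    grid = {}
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0 or r > r_max:
            continue  # skip the origin and points outside the outer shell
        theta = math.acos(z / r)                 # polar angle in [0, pi]
        phi = math.atan2(y, x) % (2 * math.pi)   # azimuth in [0, 2*pi)
        shell = min(int(r / r_max * n_shells), n_shells - 1)
        ti = min(int(theta / math.pi * n_theta), n_theta - 1)
        pj = min(int(phi / (2 * math.pi) * n_phi), n_phi - 1)
        grid[(shell, ti, pj)] = grid.get((shell, ti, pj), 0) + 1
    return grid

# A point near the origin and one near r_max land on different shells.
g = to_spherical_image([(0.1, 0.0, 0.0), (0.0, 0.0, 0.9)],
                       n_shells=4, n_theta=2, n_phi=4, r_max=1.0)
print(g)  # {(0, 1, 0): 1, (3, 0, 0): 1}
```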

Milestone M6 Report: Reducing Excess Data Movement Part 1

Peng, Ivy; Voskuilen, Gwendolyn R.; Sarkar, Abhik; Boehme, David; Long, Rogelio; Moore, Shirley; Gokhale, Maya

This is the second in a sequence of three Hardware Evaluation milestones that provide insight into the following questions: What are the sources of excess data movement across all levels of the memory hierarchy, going out to the network fabric? What can be done at various levels of the hardware/software hierarchy to reduce excess data movement? How does reduced data movement track application performance? The results of this study can be used to suggest where the DOE supercomputing facilities, working with their hardware vendors, can optimize aspects of the system to reduce excess data movement. Quantitative analysis will also benefit systems software and applications to optimize caching and data layout strategies. Another potential avenue is to answer cost-benefit questions, such as those involving memory capacity versus latency and bandwidth. This milestone focuses on techniques to reduce data movement, quantitatively evaluates the efficacy of the techniques in accomplishing that goal, and measures how performance tracks data movement reduction. We study a small collection of benchmarks and proxy mini-apps that run on pre-exascale GPUs and on the Accelsim GPU simulator. Our approach has two thrusts: to measure advanced data movement reduction directives and techniques on the newest available GPUs, and to evaluate our benchmark set on simulated GPUs configured with architectural refinements to reduce data movement.

Second-order invariant domain preserving approximation of the compressible Navier–Stokes equations

Computer Methods in Applied Mechanics and Engineering

Guermond, Jean L.; Maier, Matthias; Popov, Bojan; Tomas, Ignacio T.

We present a fully discrete approximation technique for the compressible Navier–Stokes equations that is second-order accurate in time and space, semi-implicit, and guaranteed to be invariant domain preserving. The restriction on the time step is the standard hyperbolic CFL condition, i.e., τ ≲ O(h)/V, where V is a reference velocity scale and h is the typical mesh size.
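
The practical content of the hyperbolic CFL restriction is that the admissible time step scales linearly with the mesh size, in contrast to the τ ≲ O(h²) restriction a fully explicit treatment of the viscous terms would impose. A minimal sketch, with an illustrative Courant number and velocity scale:

```python
def hyperbolic_cfl_dt(h, v_ref, courant=0.5):
    """Hyperbolic CFL bound tau <~ C * h / V: the admissible time step
    shrinks linearly with the mesh size h for a reference velocity V."""
    return courant * h / v_ref

# Refining the mesh by 10x shrinks the admissible step by 10x (not 100x,
# as a parabolic tau <~ h^2 restriction would).
dt_coarse = hyperbolic_cfl_dt(h=1e-2, v_ref=340.0)
dt_fine = hyperbolic_cfl_dt(h=1e-3, v_ref=340.0)
print(dt_coarse, dt_fine)
```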