Publications

Results 401–500 of 9,998

Memo regarding the Final Review of FY21 ASC L2 Milestone 7840: Neural Mini-Apps for Future Heterogeneous HPC Systems

Oldfield, Ron A.; Plimpton, Steven J.; Laros, James H.; Poliakoff, David Z.; Sornborger, Andrew

The final review for the FY21 Advanced Simulation and Computing (ASC) Computational Systems and Software Environments (CSSE) L2 Milestone #7840 was conducted on August 25, 2021 at Sandia National Laboratories in Albuquerque, New Mexico. The review panel unanimously agreed that the milestone was successfully completed, exceeding expectations on several of the key deliverables.

Advances in Mixed Precision Algorithms: 2021 Edition

Abdelfattah, Ahmad; Anzt, Hartwig; Ayala, Alan; Boman, Erik G.; Carson, Erin C.; Cayrols, Sebastien; Cojean, Terry; Dongarra, Jack J.; Falgout, Rob; Gates, Mark; Grützmacher, Thomas; Higham, Nicholas J.; Kruger, Scott E.; Li, Sherry; Lindquist, Neil; Liu, Yang; Loe, Jennifer A.; Nayak, Pratik; Osei-Kuffuor, Daniel; Pranesh, Sri; Rajamanickam, Sivasankaran R.; Ribizel, Tobias; Smith, Bryce B.; Swirydowicz, Kasia; Thomas, Stephen J.; Tomov, Stanimire; Tsai, Yaohung M.; Yamazaki, Ichitaro Y.; Yang, Ulrike M.

Over the last year, the ECP xSDK-multiprecision effort has made tremendous progress in developing and deploying new mixed precision technology and customizing the algorithms for the hardware deployed in the ECP flagship supercomputers. The effort has also succeeded in creating a cross-laboratory community of scientists interested in mixed precision technology who are now working together to deploy this technology for ECP applications. In this report, we highlight some of the most promising and impactful achievements of the last year. Among the highlights we present are: mixed precision iterative refinement (IR) using a dense LU factorization, achieving a 1.8× speedup on Spock; results and strategies for mixed precision IR using a sparse LU factorization; a mixed precision eigenvalue solver; mixed precision GMRES-IR deployed in Trilinos, achieving a 1.4× speedup over standard GMRES; compressed basis (CB) GMRES deployed in Ginkgo, achieving an average 1.4× speedup over standard GMRES; preparation of hypre for mixed precision execution; mixed precision sparse approximate inverse preconditioners, achieving an average 1.2× speedup; and a detailed description of the memory accessor, which separates the arithmetic precision from the memory precision and enables memory-bound low precision BLAS 1/2 operations to increase accuracy by computing in high precision without degrading performance. We emphasize that many of the highlights presented here have also been submitted to peer-reviewed journals or established conferences, and are under peer review or have already been published.
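The dense mixed precision IR idea highlighted above — solve once in low precision, then refine with high-precision residuals — can be sketched in a few lines of NumPy. This is a hypothetical illustration of the generic technique, not the xSDK, Trilinos, or Ginkgo implementation; the test matrix and iteration count are arbitrary, and a real implementation would reuse a single low-precision LU factorization rather than calling solve repeatedly.

```python
import numpy as np

def mixed_precision_ir(A, b, iters=5):
    # Solve once in float32 (cheap), then refine with float64 residuals.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # high-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # low-precision correction
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well-conditioned test matrix
b = rng.standard_normal(50)
x = mixed_precision_ir(A, b)
```

For well-conditioned systems a handful of refinement sweeps recovers a residual at double-precision roundoff, while the expensive factorization work stays in single precision.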

MFNets: data efficient all-at-once learning of multifidelity surrogates as directed networks of information sources

Computational Mechanics

Gorodetsky, Alex A.; Jakeman, John D.; Geraci, Gianluca G.

We present an approach for constructing a surrogate from ensembles of information sources of varying cost and accuracy. The multifidelity surrogate encodes connections between information sources as a directed acyclic graph and is trained via gradient-based minimization of a nonlinear least squares objective. While the vast majority of state-of-the-art approaches assume hierarchical connections between information sources, our approach works with flexibly structured information sources that may not admit a strict hierarchy. The formulation has two advantages: (1) increased data efficiency due to parsimonious multifidelity networks that can be tailored to the application; and (2) no constraints on the training data—we can combine noisy, non-nested evaluations of the information sources. Finally, numerical examples ranging from synthetic cases to physics-based computational mechanics simulations indicate that the error in our approach can be orders of magnitude smaller, particularly in the low-data regime, than that of single-fidelity and hierarchical multifidelity approaches.

Uncertainty and Sensitivity Analysis Methods and Applications in the GDSA Framework (FY2021)

Swiler, Laura P.; Basurto, Eduardo B.; Brooks, Dusty M.; Eckert, Aubrey C.; Leone, Rosemary C.; Mariner, Paul M.; Portone, Teresa P.; Laros, James H.; Stein, Emily S.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by the SFWST Campaign and its predecessor, the Used Fuel Disposition (UFD) Campaign. This report fulfills the GDSA Uncertainty and Sensitivity Analysis Methods work package (SF-21SN01030404) level 3 milestone, Uncertainty and Sensitivity Analysis Methods and Applications in GDSA Framework (FY2021) (M3SF-21SN010304042). It presents high-level objectives and a strategy for development of uncertainty and sensitivity analysis tools, demonstrates uncertainty quantification (UQ) and sensitivity analysis (SA) tools in GDSA Framework in FY21, and describes additional UQ/SA tools whose future implementation would enhance the UQ/SA capability of GDSA Framework. This work was closely coordinated with the other Sandia National Laboratories GDSA work packages: the GDSA Framework Development work package (SF-21SN01030405), the GDSA Repository Systems Analysis work package (SF-21SN01030406), and the GDSA PFLOTRAN Development work package (SF-21SN01030407). This report builds on developments reported in previous GDSA Framework milestones, particularly M3SF 20SN010304032.

Statistical Distributions for Mesh Independent Solutions in ALEGRA

Merrell, David P.; Robinson, Allen C.; Sanchez, Jason J.

The representation of material heterogeneity (also referred to as "spatial variation") plays a key role in the material failure simulation method used in ALEGRA. ALEGRA is an arbitrary Lagrangian-Eulerian shock and multiphysics code developed at Sandia National Laboratories that contains several methods for incorporating spatial variation into simulations. A desirable property of a spatial variation method is that it produce consistent stochastic behavior regardless of the mesh used (a property referred to as "mesh independence"). However, mesh dependence has been reported when using the Weibull distribution with ALEGRA's spatial variation method. This report describes theoretical analysis and numerical experiments that provide additional insight into this mesh dependence. In particular, we have implemented a discrete minimum order statistic model whose properties are theoretically mesh independent.
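The weakest-link intuition behind mesh independence can be illustrated directly: if each element's Weibull scale parameter is adjusted for element volume, the strength of the assembled body (the minimum over its elements) is statistically invariant to mesh refinement. The sketch below is a minimal illustration with made-up shape, scale, and volume values; it is not ALEGRA's spatial variation method or the report's order statistic model.

```python
import numpy as np

def body_strengths(n_elems, n_samples, total_volume, k, lam, rng):
    # Weakest-link scaling: each element's Weibull scale grows as
    # v_elem**(-1/k), so the minimum over elements (the body strength)
    # has the same distribution no matter how finely the body is meshed.
    v_elem = total_volume / n_elems
    scale = lam * v_elem ** (-1.0 / k)
    return (scale * rng.weibull(k, size=(n_samples, n_elems))).min(axis=1)

rng = np.random.default_rng(1)
k, lam, V = 2.0, 1.0, 1.0                            # hypothetical shape/scale/volume
coarse = body_strengths(10, 10000, V, k, lam, rng)   # 10-element mesh
fine = body_strengths(1000, 10000, V, k, lam, rng)   # 100x finer mesh
```

With this scaling the coarse and fine meshes produce statistically indistinguishable body strengths; sampling element strengths without the volume correction is exactly the kind of mesh dependence discussed above.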

Comparing reproduced cyber experimentation studies across different emulation testbeds

ACM International Conference Proceeding Series

Tarman, Thomas D.; Swiler, Laura P.; Vugrin, Eric D.; Rollins, Trevor; Cruz, Gerardo C.; Huang, Hao; Sahu, Abhijeet; Wlazlo, Patrick; Goulart, Ana; Davis, Kate

Cyber testbeds provide an important mechanism for experimentally evaluating cyber security performance. However, as in any experimental discipline, reproducible cyber experimentation is essential to assure valid, unbiased results. Even minor differences in setup, configuration, and testbed components can have an impact on the experiments and, thus, on the reproducibility of results. This paper documents a case study in reproducing an earlier emulation study, with the reproduced emulation experiment conducted by a different research group on a different testbed. We describe lessons learned as a result of this process, both in terms of the reproducibility of the original study and in terms of the different testbed technologies used by the two groups. This paper also addresses the question of how to compare results between the two groups' experiments, identifying candidate metrics for comparison and quantifying the results in this reproduction study.

Increased preheat energy to MagLIF targets with cryogenic cooling

Harvey-Thompson, Adam J.; Geissel, Matthias G.; Crabtree, Jerry A.; Weis, Matthew R.; Gomez, Matthew R.; Fein, Jeffrey R.; Ampleford, David A.; Awe, Thomas J.; Chandler, Gordon A.; Galloway, B.R.; Hansen, Stephanie B.; Hanson, Jeffrey J.; Harding, Eric H.; Jennings, Christopher A.; Kimmel, Mark W.; Knapp, Patrick K.; Lamppa, Derek C.; Laros, James H.; Mangan, Michael M.; Maurer, A.; Perea, L.; Peterson, Kara J.; Porter, John L.; Rambo, Patrick K.; Robertson, Grafton K.; Rochau, G.A.; Ruiz, Daniel E.; Shores, Jonathon S.; Slutz, Stephen A.; Smith, Ian C.; Speas, Christopher S.; Yager-Elorriaga, David A.; York, Adam Y.; Paguio, R.R.; Smith, G.E.

Abstract not provided.

Thermodynamically consistent semi-compressible fluids: A variational perspective

Journal of Physics A: Mathematical and Theoretical

Eldred, Christopher; Gay-Balmaz, Francois

This paper presents (Lagrangian) variational formulations for single and multicomponent semi-compressible fluids with both reversible (entropy-conserving) and irreversible (entropy-generating) processes. Semi-compressible fluids are useful in describing low-Mach dynamics, since they are soundproof. These models find wide use in many areas of fluid dynamics, including both geophysical and astrophysical fluid dynamics. Specifically, the Boussinesq, anelastic and pseudoincompressible equations are developed through a unified treatment valid for arbitrary Riemannian manifolds, thermodynamic potentials and geopotentials. By design, these formulations obey the 1st and 2nd laws of thermodynamics, ensuring their thermodynamic consistency. This general approach extends and unifies existing work, and helps clarify the thermodynamics of semi-compressible fluids. To further this goal, evolution equations are presented for a wide range of thermodynamic variables: entropy density s, specific entropy η, buoyancy b, temperature T, potential temperature θ and a generic entropic variable χ; along with a general definition of buoyancy valid for all three semi-compressible models and arbitrary geopotentials. Finally, the elliptic equation for the pressure perturbation (the Lagrange multiplier that enforces semi-compressibility) is developed for all three equation sets in the case of reversible dynamics, and for the Boussinesq/anelastic equations in the case of irreversible dynamics; and some discussion is given of the difficulty in formulating it for the pseudoincompressible equations with irreversible dynamics.

Threat data generation for space systems

Proceedings - 2021 IEEE Space Computing Conference, SCC 2021

Sahakian, Meghan A.; Musuvathy, Srideep M.; Thorpe, Jamie T.; Verzi, Stephen J.; Vugrin, Eric D.; Dykstra, Matthew D.

Concerns about cyber threats to space systems are increasing. Researchers are developing intrusion detection and protection systems to mitigate these threats, but the sparsity of cyber threat data poses a significant challenge to these efforts. Development of credible threat data sets is needed to overcome this challenge. This paper describes the extension and development of three data generation algorithms (generative adversarial networks, variational auto-encoders, and a generative algorithm for multivariate time series) to generate cyber threat data for space systems. The algorithms are applied to a use case that leverages the NASA Operational Simulation for Small Satellites (NOS3) platform. Qualitative and quantitative measures are applied to evaluate the generated data. Strengths and weaknesses of each algorithm are presented, and suggested improvements are provided. For this use case, the generative algorithm for multivariate time series performed best according to both qualitative and quantitative measures.

Randomized algorithms for generalized singular value decomposition with application to sensitivity analysis

Numerical Linear Algebra with Applications

Hart, Joseph L.; van Bloemen Waanders, Bart G.; Saibaba, Arvind K.

The generalized singular value decomposition (GSVD) is a valuable tool that has many applications in computational science. However, computing the GSVD for large-scale problems is challenging. Motivated by applications in hyper-differential sensitivity analysis (HDSA), we propose new randomized algorithms for computing the GSVD which use randomized subspace iteration and weighted QR factorization. Detailed error analysis is given which provides insight into the accuracy of the algorithms and the choice of the algorithmic parameters. We demonstrate the performance of our algorithms on test matrices and a large-scale model problem where HDSA is used to study subsurface flow.
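The core building block here, randomized subspace iteration, is easy to sketch for the ordinary truncated SVD; the paper's contribution extends it to the generalized SVD using weighted QR factorizations, which this simplified NumPy sketch omits. The matrix sizes, oversampling amount, and number of power iterations below are illustrative choices.

```python
import numpy as np

def randomized_svd(A, rank, n_power=2, oversample=10, rng=None):
    # Truncated SVD via randomized subspace iteration (standard case only).
    if rng is None:
        rng = np.random.default_rng(0)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))  # random test matrix
    Y = A @ Omega                                        # sample the range of A
    for _ in range(n_power):                             # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                               # orthonormal basis for the sampled range
    B = Q.T @ A                                          # small projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 100))  # exactly rank 30
U, s, Vt = randomized_svd(A, rank=30, rng=rng)
```

Because the test matrix has exact rank 30, the rank-30 randomized approximation reconstructs it to near machine precision; the oversampling and power iterations control accuracy when the spectrum decays more gradually.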

A data-driven peridynamic continuum model for upscaling molecular dynamics

D'Elia, Marta D.; Silling, Stewart A.; Yu, Yue; You, Huaiqian

Nonlocal models, including peridynamics, often use integral operators that embed lengthscales in their definition. However, the integrands in these operators are difficult to define from the data that are typically available for a given physical system, such as laboratory mechanical property tests. In contrast, molecular dynamics (MD) does not require these integrands, but it suffers from computational limitations in the length and time scales it can address. To combine the strengths of both methods and to obtain a coarse-grained, homogenized continuum model that efficiently and accurately captures materials’ behavior, we propose a learning framework to extract, from MD data, an optimal Linear Peridynamic Solid (LPS) model as a surrogate for MD displacements. To maximize the accuracy of the learnt model we allow the peridynamic influence function to be partially negative, while preserving the well-posedness of the resulting model. To achieve this, we provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions and develop a constrained optimization algorithm that minimizes the equation residual while enforcing such solvability conditions. This framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training. We illustrate the efficacy of the proposed approach with several numerical tests for single layer graphene. Our two-dimensional tests show the robustness of the proposed algorithm on validation data sets that include thermal noise, different domain shapes and external loadings, and discretizations substantially different from the ones used for training.

True Load Balancing for Matricized Tensor Times Khatri-Rao Product

IEEE Transactions on Parallel and Distributed Systems

Abubaker, Nabil; Acer, Seher A.; Aykanat, Cevdet

MTTKRP is the bottleneck operation in algorithms used to compute the CP tensor decomposition. For sparse tensors, utilizing the compressed sparse fibers (CSF) storage format and CSF-oriented MTTKRP algorithms is important for both memory and computational efficiency on distributed-memory architectures. Existing intelligent tensor partitioning models assume the computational cost of MTTKRP to be proportional to the total number of nonzeros in the tensor. However, this is not the case for CSF-oriented MTTKRP on distributed-memory architectures. We outline two deficiencies of nonzero-based intelligent partitioning models when CSF-oriented MTTKRP operations are performed locally: failure to encode processors' computational loads, and an increase in total computation due to fiber fragmentation. We focus on the existing fine-grain hypergraph model and propose a novel vertex weighting scheme that enables this model to encode the correct computational loads of processors. We also propose to augment the fine-grain model with fiber nets to reduce the increase in total computational load by minimizing fiber fragmentation. In this way, the proposed model encodes minimizing the load of the bottleneck processor. Parallel experiments with real-world sparse tensors on up to 1024 processors confirm the outlined deficiencies and demonstrate the merit of our proposed improvements in terms of parallel runtimes.
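For reference, the MTTKRP itself — here for mode 0 of a third-order tensor in COO form — accumulates, for each nonzero, the entrywise product of the corresponding factor-matrix rows. A nonzero-based cost model charges exactly one such update per nonzero, whereas the CSF-oriented kernels discussed above amortize partial products over fibers. This is a toy sketch with made-up data, unrelated to the paper's hypergraph models.

```python
import numpy as np

def mttkrp_coo(inds, vals, B, C, mode0_dim):
    # For each nonzero X[i,j,k], accumulate v * (B[j,:] * C[k,:]) into row i of M.
    M = np.zeros((mode0_dim, B.shape[1]))
    for (i, j, k), v in zip(inds, vals):
        M[i] += v * B[j] * C[k]
    return M

# Tiny 2x2x2 tensor with three nonzeros (made-up data)
inds = [(0, 0, 0), (0, 1, 1), (1, 0, 1)]
vals = [1.0, 2.0, 3.0]
B = np.arange(4.0).reshape(2, 2)   # mode-1 factor matrix
C = np.ones((2, 2))                # mode-2 factor matrix
M = mttkrp_coo(inds, vals, B, C, mode0_dim=2)
```

Note that the two nonzeros sharing the slice i=0 each trigger a full row update here; a CSF traversal would group nonzeros by fiber to reuse partial products, which is why fiber fragmentation across processors inflates total work.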

Energy Efficient Computing R&D Roadmap Outline for Automated Vehicles

Aitken, Rob; Nakahira, Yorie; Strachan, John P.; Bresniker, Kirk; Young, Ian; Li, Zhiyong L.; Klebanoff, Leonard E.; Burchard, Carrie L.; Kumar, Suhas K.; Marinella, Matthew J.; Severa, William M.; Talin, A.A.; Vineyard, Craig M.; Mailhiot, Christian M.; Dick, Robert; Lu, Wei; Mogill, Jace

Automated vehicles (AV) hold great promise for improving safety, as well as reducing congestion and emissions. In order to make automated vehicles commercially viable, a reliable and high-performance vehicle-based computing platform that meets ever-increasing computational demands will be key. Given the state of existing digital computing technology, designers will face significant challenges in meeting the needs of highly automated vehicles without exceeding thermal constraints or consuming a large portion of the energy available on vehicles, thus reducing range between charges or refills. The accompanying increases in energy for AV use will place increased demand on energy production and distribution infrastructure, which also motivates increasing computational energy efficiency.

Lessons from a Dragonfly's Brain: Evolution Built a Small, Fast, Efficient Neural Network in a Dragonfly. Why Not Copy It for Missile Defense?

IEEE Spectrum

Chance, Frances S.

In each of our brains, 86 billion neurons work in parallel, processing inputs from senses and memories to produce the many feats of human cognition. The brains of other creatures are less broadly capable, but those animals often exhibit innate aptitudes for particular tasks, abilities honed by millions of years of evolution.

Co-Design of Free-Space Metasurface Optical Neuromorphic Classifiers for High Performance

ACS Photonics

Leonard, Francois L.; Backer, Adam S.; Fuller, Elliot J.; Teeter, Corinne M.; Vineyard, Craig M.

Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single layer metasurfaces of size 100λ × 100λ with an aperture density λ⁻² achieve ∼96% testing accuracy on the MNIST data set, for an optimized distance ∼100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.

Kokkos 3: Programming Model Extensions for the Exascale Era

IEEE Transactions on Parallel and Distributed Systems

Trott, Christian R.; Lebrun-Grandie, Damien; Arndt, Daniel; Ciesko, Jan; Dang, Vinh Q.; Ellingwood, Nathan D.; Gayatri, Rahulkumar; Harvey, Evan C.; Hollman, Daisy S.; Ibanez-Granados, Daniel A.; Liber, Nevin; Madsen, Jonathan; Miles, Jeff S.; Poliakoff, David Z.; Powell, Amy J.; Rajamanickam, Sivasankaran R.; Simberg, Mikael; Sunderland, Dan; Turcksin, Bruno; Wilke, Jeremiah

As the push towards exascale hardware has increased the diversity of system architectures, performance portability has become a critical aspect for scientific software. We describe the Kokkos Performance Portable Programming Model that allows developers to write single source applications for diverse high performance computing architectures. Kokkos provides key abstractions for both the compute and memory hierarchy of modern hardware. Here, we describe the novel abstractions that have been added to Kokkos recently such as hierarchical parallelism, containers, task graphs, and arbitrary-sized atomic operations. We demonstrate the performance of these new features with reproducible benchmarks on CPUs and GPUs.

Surrogate Modeling for Efficiently, Accurately, and Conservatively Estimating Measures of Risk

Jakeman, John D.; Kouri, Drew P.; Huerta, Jose G.

We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. We adopt a probabilistic description of risk that assigns probabilities to consequences associated with an event and use risk measures, which combine objective evidence with the subjective values of decision makers, to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical estimates obtained from the training data. These surrogate models not only limit over-confidence in reliability and safety assessments, but produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to the specific risk preferences of the stakeholder and then present an approach, based upon stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. The surrogate models introduce a bias that allows them to conservatively estimate the target risk measures. We provide theoretical results that show that this bias decays at the same rate as the L2 error in the surrogate model. Our numerical examples confirm that risk-aware surrogate models do indeed over-estimate the target risk measures while converging at the expected rate.
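A concrete example of a risk measure in this family is the conditional value-at-risk (CVaR): the mean of the worst (1 − α) fraction of outcomes. The sketch below shows only the plain empirical estimator that the conservative surrogates above are designed to dominate; it is not the paper's surrogate construction, and the sample values are made up.

```python
import numpy as np

def empirical_cvar(samples, alpha=0.9):
    # CVaR_alpha: the mean of the worst (1 - alpha) fraction of outcomes.
    x = np.sort(samples)
    tail = x[int(np.ceil(alpha * len(x))):]   # the tail of worst outcomes
    return tail.mean()

samples = np.arange(1.0, 9.0)                 # eight hypothetical outcomes, 1..8
cvar = empirical_cvar(samples, alpha=0.75)    # mean of the worst 25%: (7 + 8) / 2
```

Because this estimator averages only a handful of tail samples, it converges slowly and can understate risk from limited data, which is precisely the over-confidence a conservative surrogate is meant to guard against.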

Adaptive resource allocation for surrogate modeling of systems comprised of multiple disciplines with varying fidelity

Friedman, Sam; Jakeman, John D.; Eldred, Michael S.; Tamellini, Lorenzo; Gorodetsky, Alex A.; Allaire, Doug

We present an adaptive algorithm for constructing surrogate models for integrated systems composed of a set of coupled components. With this goal we introduce ‘coupling’ variables with a priori unknown distributions that allow approximations of each component to be built independently. Once built, the surrogates of the components are combined and used to predict system-level quantities of interest (QoI) at a fraction of the cost of interrogating the full system model. We use a greedy experimental design procedure, based upon a modification of Multi-Index Stochastic Collocation (MISC), to minimize the error of the combined surrogate. This is achieved by refining each component surrogate in accordance with its relative contribution to error in the approximation of the system-level QoI. Our adaptation of MISC is a multi-fidelity procedure that can leverage ensembles of models of varying cost and accuracy, for one or more components, to produce estimates of system-level QoI. Several numerical examples demonstrate the efficacy of the proposed approach on systems involving feed-forward and feedback coupling. For a fixed computational budget, the proposed algorithm is able to produce approximations that are orders of magnitude more accurate than approximations that treat the integrated system as a black-box.

Thermodynamics of ion binding and occupancy in potassium channels

Chemical Science

Rempe, Susan R.; Jing, Zhifeng; Rackers, Joshua R.; Pratt, Lawrence R.; Liu, Chengwen; Ren, Pengyu

Potassium channels modulate various cellular functions through efficient and selective conduction of K⁺ ions. The mechanism of ion conduction in potassium channels has recently emerged as a topic of debate. Crystal structures of potassium channels show four K⁺ ions bound to adjacent binding sites in the selectivity filter, while chemical intuition and molecular modeling suggest that the direct ion contacts are unstable. Molecular dynamics (MD) simulations have been instrumental in the study of conduction and gating mechanisms of ion channels. Based on MD simulations, two hypotheses have been proposed, in which the four-ion configuration is an artifact due to either averaged structures or low temperature in crystallographic experiments. The two hypotheses have been supported or challenged by different experiments. Here, MD simulations with polarizable force fields validated by ab initio calculations were used to investigate the ion binding thermodynamics. Contrary to previous beliefs, the four-ion configuration was predicted to be thermodynamically stable after accounting for the complex electrostatic interactions and dielectric screening. Polarization plays a critical role in the thermodynamic stabilities. As a result, the ion conduction likely operates through a simple single-vacancy and water-free mechanism. The simulations explained crystal structures, ion binding experiments and recent controversial mutagenesis experiments. This work provides a clear view of the mechanism underlying the efficient ion conduction and demonstrates the importance of polarization in ion channel simulations.

Provable advantages for graph algorithms in spiking neural networks

Annual ACM Symposium on Parallelism in Algorithms and Architectures

Aimone, James B.; Ho, Yang H.; Parekh, Ojas D.; Phillips, Cynthia A.; Pinar, Ali P.; Severa, William M.; Wang, Yipu W.

We present a theoretical framework for designing and assessing the performance of algorithms executing in networks consisting of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is challenging but paramount. We prove a polynomial-factor advantage even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.
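The message-passing flavor of these neuromorphic algorithms can be illustrated with a unit-weight special case: a spike wavefront in which each vertex "neuron" fires exactly once, at a time equal to its hop distance from the source. This is a minimal sketch on a hypothetical five-vertex graph, not the paper's SNN framework or its data-movement analysis.

```python
from collections import deque

def spike_wavefront_sssp(adj, source):
    # Each neuron fires once, at the timestep the spike wavefront first
    # reaches it, so its firing time equals its hop distance from the source.
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:              # spike propagates along outgoing edges
            if v not in dist:         # a neuron fires only on its first input spike
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

# Hypothetical five-vertex directed graph
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
dist = spike_wavefront_sssp(adj, 0)
```

On neuromorphic hardware this wavefront propagates in place across the network of neurons, which is why data movement, rather than arithmetic, dominates the cost comparison with conventional architectures.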

Spin-lattice dynamics of surface vs core magnetization in Fe nanoparticles

Applied Physics Letters

Dos Santos, Gonzalo; Meyer, Robert; Aparicio, Romina; Tranchida, Julien G.; Bringa, Eduardo M.; Urbassek, Herbert M.

Magnetization of clusters is often simulated using atomistic spin dynamics for a fixed lattice. Coupled spin-lattice dynamics simulations of the magnetization of nanoparticles have, to date, neglected the change in the size of the atomic magnetic moments near surfaces. We show that the introduction of variable magnetic moments leads to a better description of experimental data for the magnetization of small Fe nanoparticles. To this end, we divide atoms into a surface-near shell and a core with bulk properties. It is demonstrated that both the magnitude of the shell magnetic moment and the exchange interactions need to be modified to obtain a fair representation of the experimental data. This allows for a reasonable description of the average magnetic moment vs cluster size, and also the cluster magnetization vs temperature.
