Unifying the mechanics of continua, cracks, and particles
Journal of Computational Physics
Journal of Chemical Physics
Several extensions and improvements have been made to the multilevel summation method (MSM) for computing long-range electrostatic interactions. These include pressure calculation, an improved error estimator, a faster direct-part calculation, extension to non-orthogonal (triclinic) systems, and parallelization using the domain decomposition method. MSM also allows fully non-periodic long-range electrostatics calculations, which are not possible with traditional Ewald-based methods. In spite of these significant improvements to the MSM algorithm, the particle-particle particle-mesh (PPPM) method was still found to be faster for the periodic systems we tested on a single processor. However, the fast Fourier transforms (FFTs) that PPPM relies on represent a major scaling bottleneck when running on many cores, because the many-to-many communication pattern of the FFT becomes expensive, and MSM scales better than PPPM at large core counts for two test problems on Sandia's Redsky machine. This FFT bottleneck can be reduced by running PPPM on only a subset of the total processors. MSM is most competitive for relatively low-accuracy calculations. On Sandia's Chama machine, however, PPPM is found to scale better than MSM for all core counts that we tested. These results suggest that PPPM is usually more efficient than MSM for typical problems running on current high-performance computers. However, further improvements to the MSM algorithm could increase its competitiveness for the calculation of long-range electrostatic interactions. © 2014 AIP Publishing LLC.
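A minimal conceptual sketch of the kernel splitting that underlies MSM (not the paper's implementation; the function names and the quadratic smoothing polynomial are illustrative assumptions): the 1/r Coulomb kernel is split into a short-range part with compact support, summed directly within a cutoff, and a smooth long-range remainder that MSM interpolates on a hierarchy of nested grids instead of using FFTs.

```python
import numpy as np

def gamma(s):
    """Quadratic even-powered smoothing of 1/sqrt(s), matched in value and
    slope at s = 1 (one common MSM choice; higher orders are used in practice)."""
    return np.where(s < 1.0, 15.0/8.0 - 5.0/4.0*s + 3.0/8.0*s**2, 1.0/np.sqrt(s))

def split_kernel(r, a):
    """Split 1/r into (short-range, long-range) parts for cutoff a.
    The short-range part vanishes smoothly at r = a and is summed directly;
    the smooth long-range part is what MSM interpolates on nested grids."""
    g_long = gamma((r / a)**2) / a                       # smooth, bounded as r -> 0
    g_short = np.where(r < a, 1.0/r - g_long, 0.0)       # compact support within cutoff
    return g_short, g_long

# The two parts reassemble 1/r exactly; beyond the cutoff only the smooth part remains.
r = np.linspace(0.5, 3.0, 6)
g_short, g_long = split_kernel(r, a=2.0)
print(np.allclose(g_short + g_long, 1.0/r))              # True
```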
Recovery from process loss during the execution of a distributed-memory parallel application is presently achieved by restarting the program, typically from a checkpoint file. Future computer system trends indicate that the size of data to checkpoint, the lack of improvement in parallel file system performance, and the increase in process failure rates will lead to situations where checkpoint restart becomes infeasible. In this report we describe and prototype the use of a new application-level resilient computing model, Local Failure Local Recovery (LFLR), that manages persistent storage of local state for each process such that, if a process fails, recovery can be performed locally without requiring access to a global checkpoint file. LFLR provides application developers with the ability to recover locally and continue application execution when a process is lost. This report discusses what features are required from the hardware, OS, and runtime layers, and what approaches application developers might use in the design of future codes, including a demonstration of LFLR-enabled MiniFE code from the Mantevo mini-application suite.
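A minimal sketch of the application-level pattern the report describes, under assumed names (save_local_state, recover_rank, advance, and the storage path are all hypothetical): each process periodically persists its own state to process-local durable storage, and a replacement process for a lost rank restores that rank's last saved state and resumes, rather than forcing a global restart from a shared checkpoint file.

```python
import os
import pickle

STORE_DIR = "/persistent/local"   # hypothetical node-local durable storage path

def save_local_state(rank, step, state):
    """Persist this rank's state; the atomic rename ensures a crash mid-write
    leaves the previous saved version intact."""
    tmp = os.path.join(STORE_DIR, f"rank{rank}.tmp")
    dst = os.path.join(STORE_DIR, f"rank{rank}.ckpt")
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, dst)

def recover_rank(rank):
    """Called on a replacement process: reload the lost rank's last persisted
    state and continue locally, without touching a global checkpoint."""
    with open(os.path.join(STORE_DIR, f"rank{rank}.ckpt"), "rb") as f:
        saved = pickle.load(f)
    return saved["step"], saved["state"]

def advance(state):
    """Stand-in for the application's real compute kernel (e.g. a MiniFE solve step)."""
    return state + 1

def time_loop(rank, state, n_steps, ckpt_interval=10):
    """Main loop: compute, and periodically persist local state for recovery."""
    for step in range(n_steps):
        state = advance(state)
        if step % ckpt_interval == 0:
            save_local_state(rank, step, state)
    return state
```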
IEEE International Ultrasonics Symposium Proceedings
Exascale data environments are fast approaching, driven by diverse structured and unstructured data such as system and application telemetry streams, open-source information capture, and on-demand simulation output. With storage costs having plummeted, the question is now one of converting vast stores of data into actionable information. Complicating this problem are the low degrees of awareness across domain boundaries about what potentially useful data may exist, and write-once-read-never issues (data generation and collection rates outpacing data analysis and integration rates). Increasingly, technologists and researchers need to correlate previously unrelated data sources and artifacts to produce fused data views for domain-specific purposes. New tools and approaches for creating such views from vast amounts of data are vitally important to maintaining research and operational momentum. We propose to research and develop tools and services that assist in the creation, refinement, discovery, and reuse of fused data views over large, diverse collections of heterogeneously structured data. We innovate in the following ways. First, we enable and encourage end-users to introduce customized index methods selected for local benefit rather than for global interaction (flexible multi-indexing). We envision rich combinations of such views on application data: views that span backing stores with different semantics, that introduce analytic methods of indexing, and that define multiple views on individual data items. We specifically decline to build a "big fused database of everything" providing a centralized index over all data, or to export a rigid schema to all comers as in federated query approaches. Second, we proactively advertise these application-specific views so that they may be programmatically reused and extended (data proactivity). Through this mechanism, both changes in state (new data collected into an existing view) and changes in structure (a new or derived view exists) are made known. Lastly, we embrace found-data heterogeneity by coupling multi-indexing to backing stores with appropriate semantics (as opposed to a single store or schema).
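A minimal sketch, with hypothetical class and method names, of the two ideas in the proposal: flexible multi-indexing (end-users register their own index methods over existing backing stores to define a view) and data proactivity (newly registered or changed views are advertised to subscribers so they can be reused and extended); views are evaluated lazily against the backing stores rather than materialized into a central database.

```python
from collections import defaultdict

class ViewRegistry:
    """Hypothetical registry of fused data views over heterogeneous backing stores."""

    def __init__(self):
        self.views = {}                        # view name -> (source names, index function)
        self.subscribers = defaultdict(list)   # view name -> notification callbacks

    def register_view(self, name, sources, index_fn):
        """Flexible multi-indexing: a user-supplied index_fn defines how items
        from the named backing stores are keyed for this view."""
        self.views[name] = (sources, index_fn)
        self.advertise(name, event="view_created")

    def subscribe(self, name, callback):
        self.subscribers[name].append(callback)

    def advertise(self, name, event):
        """Data proactivity: notify interested parties of new or changed views."""
        for callback in self.subscribers[name]:
            callback(name, event)

    def query(self, name, stores, key):
        """Evaluate the view lazily against the backing stores; no centralized
        'database of everything' is built."""
        sources, index_fn = self.views[name]
        return [item for src in sources
                for item in stores[src]
                if index_fn(item) == key]

# Example: correlate telemetry and simulation output by a shared run identifier.
stores = {
    "telemetry":  [{"run": 7, "cpu": 0.91}, {"run": 8, "cpu": 0.42}],
    "sim_output": [{"run": 7, "residual": 1e-6}],
}
registry = ViewRegistry()
registry.subscribe("by_run", lambda name, event: print(f"view {name}: {event}"))
registry.register_view("by_run", ["telemetry", "sim_output"], lambda item: item["run"])
print(registry.query("by_run", stores, key=7))
```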