Semi-Lagrangian transport in the atmospheric dycore of E3SM
Abstract not provided.
This final report summarizes the results of the Laboratory Directed Research and Development (LDRD) Project Number 212587, entitled "Modeling Charged Defects in Non-Cubic Semiconductors for Radiation Effects Studies in Next Generation Electronic Materials." The goal of this project was to extend a predictive capability for modeling defect level energies using first-principles density functional theory methods (e.g., for radiation effects assessments) to semiconductors with non-cubic crystal structures. Computational methods that proved accurate for predicting defect levels in standard cubic semiconductors were found to have shortcomings when applied to the lower-symmetry structures prevalent in next generation electronic materials such as SiC, GaN, and Ga2O3, stemming from an error in the treatment of the electrostatic boundary conditions. I describe methods to generalize the local moment countercharge (LMCC) scheme to position a charge in bulk supercell calculations of charged defects, circumventing the problem of measuring a dipole in a periodically replicated bulk calculation.
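As standard background (and not the LMCC scheme itself), the leading-order electrostatic finite-size correction for a defect of charge q in a periodic supercell is the Makov-Payne monopole term,

\[
E_{\mathrm{corr}} \approx \frac{q^{2}\,\alpha_{M}}{2\,\varepsilon L},
\]

where \alpha_{M} is the Madelung constant of the supercell lattice, \varepsilon the static dielectric constant, and L the linear dimension of the cell. Higher-order terms involve the dipole and quadrupole moments of the defect charge density, which is where the difficulty of defining a dipole in a periodically replicated bulk calculation, noted above, enters.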
This SAND report fulfills the final report requirement for the Born Qualified Grand Challenge LDRD. Born Qualified was funded from FY16-FY18 with a total budget of ~$13M over the 3 years of funding. Overall, 70+ staff, postdocs, and students supported this project over its lifetime. The driver for Born Qualified was using Additive Manufacturing (AM) to change the qualification paradigm for low-volume, high-value, high-consequence, complex parts that are common in high-risk industries such as ND, defense, energy, aerospace, and medical. AM offers the opportunity to transform design, manufacturing, and qualification with its unique capabilities. AM is a disruptive technology, providing the capability to simultaneously create the part and the material while tightly controlling and monitoring the manufacturing process at the voxel level, with the inherent flexibility and agility of printing layer by layer. AM enables the possibility of measuring critical material and part parameters during manufacturing, thus changing the way we collect data, assess performance, and accept or qualify parts. It provides an opportunity to shift from the current iterative design-build-test qualification paradigm using traditional manufacturing processes to design-by-predictivity, where requirements are addressed concurrently and rapidly. The new qualification paradigm driven by AM provides the opportunity to predict performance probabilistically, to optimally control the manufacturing process, and to implement accelerated cycles of learning. Exploiting these capabilities to realize a new uncertainty quantification-driven qualification that is rapid, flexible, and practical is the focus of this effort.
2018 IEEE 8th Symposium on Large Data Analysis and Visualization, LDAV 2018
A key component of most large-scale rendering systems is a parallel image compositing algorithm, and the most commonly used compositing algorithms are binary swap and its variants. Although shown to be very efficient, one of the classic limitations of binary swap is that it only works on a number of processes that is a power of 2. Multiple variations of binary swap have been independently introduced to overcome this limitation and handle process counts with factors other than 2. To date, few of these approaches have been directly compared against each other, making it unclear which approach is best. This paper presents a fresh implementation of each of these methods using a common software framework to make them directly comparable, and uses that framework to directly compare the methods for running binary swap with non-power-of-2 process counts. The results show that some simple compositing approaches work as well as or better than more complex algorithms that are more difficult to implement.
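As background on the algorithm itself, the core binary swap pattern for a power-of-2 process count can be sketched as a single-process Python simulation; the function below and its simple "over" operator are illustrative assumptions, not the paper's framework or any of the compared variants.

import numpy as np

def over(front, back):
    # Porter-Duff "over" operator for premultiplied-alpha RGBA arrays.
    alpha = front[..., 3:4]
    return front + (1.0 - alpha) * back

def binary_swap(images):
    # Single-process simulation of binary swap.  `images` holds one full-size
    # RGBA float array per simulated rank, ordered front to back (rank 0 is
    # closest to the viewer).  The rank count must be a power of 2.  Each rank
    # ends up owning a fully composited horizontal strip plus its start row.
    n = len(images)
    assert n and n & (n - 1) == 0, "binary swap needs a power-of-2 rank count"
    pieces = [img.astype(float) for img in images]
    offsets = [0] * n
    stride = 1
    while stride < n:
        new_pieces, new_offsets = [None] * n, [0] * n
        for rank in range(n):
            partner = rank ^ stride          # pairwise exchange partner this round
            mine, theirs = pieces[rank], pieces[partner]
            rows = mine.shape[0] // 2
            if rank < partner:               # keep the top half of the current strip
                kept, recv = mine[:rows], theirs[:rows]
                new_offsets[rank] = offsets[rank]
            else:                            # keep the bottom half
                kept, recv = mine[rows:], theirs[rows:]
                new_offsets[rank] = offsets[rank] + rows
            # Depth order: the contribution from the lower-ranked side goes in front.
            front, back = (kept, recv) if rank < partner else (recv, kept)
            new_pieces[rank] = over(front, back)
        pieces, offsets = new_pieces, new_offsets
        stride *= 2
    return pieces, offsets

# Example with four simulated ranks and arbitrary 8x4 RGBA data.
imgs = [np.random.default_rng(r).random((8, 4, 4)) for r in range(4)]
strips, starts = binary_swap(imgs)

Concatenating the returned strips in order of their start rows reconstructs the composited image; a distributed version replaces the in-process partner lookup with a pairwise exchange (for example, an MPI send/receive) of the two image halves.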
IEEE Transactions on Information Forensics and Security
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm that is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, contiguous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines, over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
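A minimal sketch of this general pipeline, assuming scikit-learn's MiniBatchDictionaryLearning and a linear SVM on synthetic byte data; the n-gram size, dictionary size, feature pooling, and fake file types below are illustrative assumptions rather than the paper's settings.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

def ngram_matrix(fragment: bytes, n: int = 8) -> np.ndarray:
    # Stack the overlapping n-byte windows of a fragment as rows scaled to [0, 1].
    buf = np.frombuffer(fragment, dtype=np.uint8).astype(np.float64) / 255.0
    return np.lib.stride_tricks.sliding_window_view(buf, n)

# Synthetic "fragments": two fake file types with different byte statistics.
rng = np.random.default_rng(0)
frags = [rng.integers(0, 64, 512, dtype=np.uint8).tobytes() for _ in range(20)] + \
        [rng.integers(192, 256, 512, dtype=np.uint8).tobytes() for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

# Learn one sparse dictionary over n-grams pooled from all training fragments.
grams = np.vstack([ngram_matrix(f) for f in frags])
dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5, random_state=0).fit(grams)

def features(fragment: bytes) -> np.ndarray:
    # Mean absolute activation per dictionary atom, i.e. a rough estimate of
    # how often each learned n-gram pattern occurs in the fragment.
    return np.abs(dico.transform(ngram_matrix(fragment))).mean(axis=0)

X = np.vstack([features(f) for f in frags])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))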
Physical Review E
We propose a functional integral framework for the derivation of hierarchies of Landau-Lifshitz-Bloch (LLB) equations that describe the flow toward equilibrium of the first and second moments of the magnetization. The short-scale description is defined by the stochastic Landau-Lifshitz-Gilbert equation, under either Markovian or non-Markovian noise, and takes into account interaction terms that are of practical relevance. Depending on the interactions, different hierarchies of moment equations are obtained in the corresponding LLB equations. Two closure Ansätze are discussed and tested by numerical methods that are adapted to the symmetries of the problem. Our formalism provides a rigorous bridge between atomistic spin dynamics simulations at short scales and micromagnetic descriptions at larger scales.
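For reference, a standard Gilbert form of the stochastic Landau-Lifshitz-Gilbert equation in the Markovian case reads (the conventions and the interaction terms treated in the paper may differ):

\[
\frac{d\mathbf{m}}{dt}
  = -\gamma\,\mathbf{m}\times\bigl(\mathbf{H}_{\mathrm{eff}}+\boldsymbol{\xi}(t)\bigr)
    + \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt},
\qquad
\langle \xi_i(t)\,\xi_j(t')\rangle = 2D\,\delta_{ij}\,\delta(t-t'),
\]

where \mathbf{m} is the unit magnetization, \mathbf{H}_{\mathrm{eff}} the effective field, \gamma the gyromagnetic ratio, \alpha the Gilbert damping, and D the thermal noise strength. The LLB-type equations derived from it evolve \langle\mathbf{m}\rangle together with the second moments \langle m_i m_j\rangle, with a closure Ansatz truncating the hierarchy.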
Trusting simulation output is crucial for Sandia's mission objectives. We rely on these simulations to perform our high-consequence mission tasks given our treaty obligations. Other science and modeling needs, while they may not be high-consequence, still require the strongest levels of trust to enable using the results as the foundation for both practical applications and future research. To this end, the computing community has developed workflow and provenance systems that aid both in automating simulation and modeling execution and in determining exactly how some output was created, so that conclusions can be drawn from the data. Current approaches for workflows and provenance systems operate entirely at the user level and have little to no system-level support, making them fragile, difficult to use, and incomplete solutions. The introduction of container technology is a first step toward encapsulating and tracking the artifacts used in creating data and the resulting insights, but current implementations focus solely on making it easy to deploy an application in an isolated "sandbox" and maintain a strictly read-only mode to avoid any potential changes to the application; all storage activities still use the system-level shared storage. This project was an initial exploration into extending the container concept to also include storage and to use writable containers, auto-generated by the system, as a way to link the contained data back to the simulation and input deck used to create it.
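As a purely hypothetical illustration (not the project's implementation), a system-generated writable container could pair the output data with a small provenance manifest that ties it back to the simulation command, the input deck, and the read-only application image; the function and file names below are invented for the sketch.

import hashlib, json, pathlib, subprocess, time

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def run_with_provenance(sim_cmd, input_deck, image_id, out_dir="run_container"):
    # Run the simulation inside a writable output directory and record enough
    # metadata to link every output file back to how it was produced.
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    deck = pathlib.Path(input_deck).resolve()
    start = time.time()
    result = subprocess.run(sim_cmd + [str(deck)], cwd=out,
                            capture_output=True, text=True)
    manifest = {
        "command": sim_cmd,
        "input_deck": {"path": str(deck), "sha256": sha256(deck)},
        "container_image": image_id,   # e.g. the digest of the read-only app image
        "return_code": result.returncode,
        "wall_time_s": round(time.time() - start, 3),
        "outputs": {p.name: sha256(p) for p in out.iterdir() if p.is_file()},
    }
    (out / "provenance.json").write_text(json.dumps(manifest, indent=2))
    return manifest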
Reproducibility is an essential ingredient of the scientific enterprise. The ability to reproduce results builds trust that we can rely on those results as foundations for future scientific exploration. Presently, the fields of computational and computing sciences provide two opposing definitions of "reproducible" and "replicable." In the computational sciences, reproducible research means that authors provide all necessary data and computer codes to run analyses again, so others can re-obtain the results (J. Claerbout et al., 1992). The concept was adopted and extended by several communities, where it was distinguished from replication: collecting new data to address the same question, and arriving at consistent findings (Peng et al., 2006). The Association for Computing Machinery (ACM), representing computer science and industry professionals, recently established a reproducibility initiative, adopting essentially the opposite definitions. The purpose of this report is to raise awareness of these opposing definitions and propose a path to a compatible taxonomy.
This report summarizes the data analysis activities that were performed under the Born Qualified Grand Challenge Project from 2016 to 2018. It is meant to document the characterization of additively manufactured parts and processes for this project, as well as to demonstrate and identify further analyses and data science that could be done relating material processes to microstructure to properties to performance.
This report summarizes the work performed under the Sandia LDRD project "Adverse Event Prediction Using Graph-Augmented Temporal Analysis." The goal of the project was to develop a method for analyzing multiple time-series data streams to identify precursors that provide advance warning of the potential occurrence of events of interest. The proposed approach combined temporal analysis of each data stream with reasoning about relationships between data streams using a geospatial-temporal semantic graph. This class of problems is relevant to several important topics of national interest. In the course of this work we developed new temporal analysis techniques, including temporal analysis using Markov chain Monte Carlo techniques, temporal shift algorithms to refine forecasts, and a version of Ripley's K-function extended to support temporal precursor identification. This report summarizes the project's major accomplishments and gathers the abstracts and references for the publication submissions and reports that were prepared as part of this work. We then describe work in progress that is not yet ready for publication.
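For background, the classical one-dimensional (purely temporal) form of Ripley's K-function can be estimated as sketched below; the precursor-oriented extension developed in this project is not reproduced here, and the usual edge correction is omitted for brevity.

import numpy as np

def ripley_k_1d(times, window_length, lags):
    # K(h) = (T / n^2) * number of ordered pairs i != j with |t_i - t_j| <= h.
    # For a homogeneous Poisson process K(h) is approximately 2*h, so values
    # well above 2*h indicate temporal clustering at scale h.
    t = np.asarray(times, dtype=float)
    n = t.size
    diffs = np.abs(t[:, None] - t[None, :])
    np.fill_diagonal(diffs, np.inf)            # exclude self-pairs
    return np.array([(window_length / n**2) * np.sum(diffs <= h) for h in lags])

# Example: a burst of events stands out against a uniform background.
rng = np.random.default_rng(1)
events = np.concatenate([rng.uniform(0, 100, 80), 40 + rng.uniform(0, 1, 20)])
lags = np.array([0.5, 1.0, 5.0])
print(ripley_k_1d(events, 100.0, lags))        # compare against the Poisson reference 2*lags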
Running visualization and analysis algorithms on ATS-1 platforms is a critical step for supporting ATDM apps at the exascale. We are leveraging VTK-m to port our algorithms to the ATS-specific hardware and to ensure that they run well there.
Neural Computation
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum, minimum, or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
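A toy sketch of the time-as-information idea behind such spiking primitives, assuming values are encoded as spike times in a discrete-time simulation; this illustrates the general concept only, not the paper's algorithms or a neuromorphic implementation.

def spike_sort(values):
    # Sort non-negative integers by letting neuron i fire at time values[i]
    # and reading the values off in spike order.
    spike_time = {i: v for i, v in enumerate(values)}
    order = []
    for t in range(max(values) + 1):            # advance the global clock
        for neuron, v in spike_time.items():
            if v == t:                          # neuron fires at this tick
                order.append(v)
    return order

values = [7, 2, 9, 2, 5]
sorted_vals = spike_sort(values)
minimum, maximum = sorted_vals[0], sorted_vals[-1]   # first and last spikes
print(sorted_vals, minimum, maximum)

The same sweep yields the minimum from the first spike, the maximum from the last, and the median from the middle spike, which is the flavor of primitive the paper builds on.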
An analysis of microgrids to increase resilience was conducted for the island of Puerto Rico. Critical infrastructure throughout the island was mapped to the key services provided by those sectors to help inform primary and secondary service sources during a major disruption to the electrical grid. Additionally, a resilience metric of burden was developed to quantify community resilience, and a related baseline resilience figure was calculated for the area. To improve resilience, Sandia performed an analysis of where clusters of critical infrastructure are located and used these suggested resilience node locations to create a portfolio of 159 microgrid options throughout Puerto Rico. The team then calculated the impact of these microgrids on the region's ability to provide critical services during an outage, and compared this impact to high-level estimates of cost for each microgrid to generate a set of efficient microgrid portfolios costing in the range of $218M to $917M. This analysis is a refinement of the analysis delivered on June 01, 2018.
Concurrency and Computation: Practice and Experience
The Exascale Computing Project (ECP) is currently the primary effort in the United States focused on developing "exascale" levels of computing capabilities, including hardware, software, and applications. In order to obtain a more thorough understanding of how the software projects under the ECP are using, and planning to use, the Message Passing Interface (MPI), and to help guide the work of our own project within the ECP, we created a survey. Of the 97 ECP projects active at the time the survey was distributed, we received 77 responses, 56 of which reported that their projects were using MPI. This paper reports the results of that survey for the benefit of the broader community of MPI developers.