Publications

Results 2776–2800 of 9,998

Posters for AA/CE Reception

Kuether, Robert J.; Allensworth, Brooke M.; Backer, Adam; Chen, Elton Y.; Dingreville, Remi; Forrest, Eric C.; Knepper, Robert A.; Tappan, Alexander S.; Marquez, Michael P.; Vasiliauskas, Jonathan G.; Rupper, Stephen; Grant, Michael J.; Atencio, Lauren C.; Hipple, Tyler; Maes, Danae; Timlin, Jerilyn A.; Ma, Tian J.; Garcia, Rudy J.; Danford, Forest L.; Patrizi, Laura P.; Galasso, Jennifer; Draelos, Timothy J.; Gunda, Thushara; Venezuela, Otoniel; Brooks, Wesley A.; Anthony, Stephen M.; Carson, Bryan; Reeves, Michael; Roach, Matthew; Maines, Erin; Lavin, Judith M.; Whetten, Shaun R.; Swiler, Laura P.

Abstract not provided.

Sparse coding for N-gram feature extraction and training for file fragment classification

IEEE Transactions on Information Forensics and Security

Wang, Felix W.; Quach, Tu T.; Wheeler, Jason; Aimone, James B.; James, Conrad D.

File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, contiguous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines, over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
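
As a rough illustration of the pipeline the abstract describes (a minimal sketch, not the authors' code), one can extract byte n-grams from each fragment, learn a sparse dictionary over them, and feed per-fragment sparse-code statistics to a linear SVM. scikit-learn and NumPy are assumed; the n-gram length, dictionary size, and regularization below are placeholder values, and all function names are illustrative.

# Sketch only: sparse dictionary learning over byte n-grams, then a linear SVM
# over per-fragment code statistics. Fragments are assumed to be >= n bytes.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

def byte_ngrams(fragment: bytes, n: int = 8) -> np.ndarray:
    """Slide an n-byte window over a fragment; rows are n-grams scaled to [0, 1]."""
    arr = np.frombuffer(fragment, dtype=np.uint8).astype(np.float64) / 255.0
    return np.lib.stride_tricks.sliding_window_view(arr, n)

def fragment_features(fragments, dico, n=8):
    """Encode each fragment as the mean absolute sparse code of its n-grams
    (a rough stand-in for an n-gram frequency estimate)."""
    feats = []
    for frag in fragments:
        codes = dico.transform(byte_ngrams(frag, n))
        feats.append(np.abs(codes).mean(axis=0))
    return np.vstack(feats)

def train(fragments, labels, n=8, n_atoms=256):
    """Learn the dictionary on all windows, then fit an SVM on fragment features."""
    windows = np.vstack([byte_ngrams(f, n) for f in fragments])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0).fit(windows)
    clf = LinearSVC().fit(fragment_features(fragments, dico, n), labels)
    return dico, clf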

Hierarchies of Landau-Lifshitz-Bloch equations for nanomagnets: A functional integral framework

Physical Review E

Tranchida, Julien; Cea, Pascal T.; Nicolis, Stam

We propose a functional integral framework for the derivation of hierarchies of Landau-Lifshitz-Bloch (LLB) equations that describe the flow toward equilibrium of the first and second moments of the magnetization. The short-scale description is defined by the stochastic Landau-Lifshitz-Gilbert equation, under both Markovian and non-Markovian noise, and takes into account interaction terms that are of practical relevance. Depending on the interactions, different hierarchies of moments are obtained in the corresponding LLB equations. Two closure Ansätze are discussed and tested by numerical methods that are adapted to the symmetries of the problem. Our formalism provides a rigorous bridge between atomistic spin dynamics simulations at short scales and micromagnetic descriptions at larger scales.
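
As a schematic illustration of why closure Ansätze are needed (notation, signs, and prefactors below are not taken from the paper and vary between references): the short-scale model is the stochastic Landau-Lifshitz-Gilbert equation for a unit spin s with precession vector \omega, damping \lambda, and noise \eta; averaging over the noise yields a first-moment equation whose right-hand side involves second moments, and so on up the hierarchy.

% Schematic only; conventions, signs, and prefactors differ between references.
\begin{align}
  \frac{d\mathbf{s}}{dt}
    &= \frac{1}{1+\lambda^{2}}\,
       \mathbf{s}\times\bigl[(\boldsymbol{\omega}+\boldsymbol{\eta})
       + \lambda\,\mathbf{s}\times(\boldsymbol{\omega}+\boldsymbol{\eta})\bigr],\\
  \frac{d\langle s_i\rangle}{dt}
    &\;\sim\; \epsilon_{ijk}\,\omega_{j}\,\langle s_{k}\rangle
       \;+\;\lambda\bigl(\omega_{j}\langle s_i s_j\rangle
       - \omega_{i}\langle s_j s_j\rangle\bigr)\;+\;\cdots
\end{align}
% The first-moment equation is not closed: it involves the second moments
% \langle s_i s_j \rangle, whose own equations involve third moments, and so on;
% a closure Ansatz relating higher moments to lower ones truncates the hierarchy.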

Toward a Compatible Reproducibility Taxonomy for Computational and Computing Sciences

Heroux, Michael A.; Barba, Lorena A.; Parashar, Manish; Stodden, Victoria; Taufer, Michela

Reproducibility is an essential ingredient of the scientific enterprise. The ability to reproduce results builds trust that we can rely on the results as foundations for future scientific exploration. Presently, the fields of computational and computing sciences provide two opposing definitions of reproducible and replicable. In computational sciences, reproducible research means authors provide all necessary data and computer codes to run analyses again, so others can re-obtain the results (J. Claerbout et al., 1992). The concept was adopted and extended by several communities, where it was distinguished from replication: collecting new data to address the same question, and arriving at consistent findings (Peng et al. 2006). The Association for Computing Machinery (ACM), representing computer science and industry professionals, recently established a reproducibility initiative, adopting essentially opposite definitions. The purpose of this report is to raise awareness of these opposing definitions and propose a path toward a compatible taxonomy.

Data Analysis for the Born Qualified Grand LDRD Project

Swiler, Laura P.; Van Bloemen Waanders, Bart; Jared, Bradley H.; Koepke, Joshua R.; Whetten, Shaun R.; Madison, Jonathan D.; Ivanoff, Thomas; Foulk, James W.; Cook, Adam; Brown-Shaklee, Harlan J.; Kammler, Daniel; Johnson, Kyle L.; Ford, Kurtis; Bishop, Joseph E.; Roach, Robert A.

This report summarizes the data analysis activities that were performed under the Born Qualified Grand Challenge Project from 2016 to 2018. It is meant to document the characterization of additively manufactured parts and processes for this project, as well as to demonstrate and identify further analyses and data science that could relate material processes to microstructure, microstructure to properties, and properties to performance.

End-to-end Provenance Traceability and Reproducibility Through "Palletized" Simulation Data

Lofstead, Gerald F.; Younge, Andrew J.; Baker, Joshua

Trusting simulation output is crucial for Sandia's mission objectives. We rely on these simulations to perform our high-consequence mission tasks given our treaty obligations. Other science and modelling needs, while they may not be high-consequence, still require the strongest levels of trust to enable using the results as the foundation for both practical applications and future research. To this end, the computing community has developed workflow and provenance systems to aid both in automating simulation and modelling execution and in determining exactly how a given output was created, so that conclusions can be drawn from the data. Current approaches for workflows and provenance systems all operate at the user level and have little to no system-level support, making them fragile, difficult to use, and incomplete solutions. The introduction of container technology is a first step toward encapsulating and tracking the artifacts used in creating data and the resulting insights, but its current implementation is focused solely on making it easy to deploy an application in an isolated "sandbox" and maintaining a strictly read-only mode to avoid any potential changes to the application. All storage activities still use system-level shared storage. This project was an initial exploration into extending the container concept to also include storage, and into using writable containers, auto-generated by the system, as a way to link the contained data back to the simulation and input deck used to create it.
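
As an illustration of the kind of linkage explored here (a sketch under assumed file layouts, not the project's implementation), a system-generated writable container could embed a small provenance manifest tying output files back to the simulation executable and input deck by content hash. All names and fields below are hypothetical.

# Hypothetical provenance manifest a writable data container could carry,
# linking outputs to the simulation executable and input deck that produced them.
import hashlib, json, platform, time

def sha256(path):
    """Content hash so the referenced artifact can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(simulation_exe, input_deck, output_files, manifest_path):
    """Record when and where the run happened and the hashes of its artifacts."""
    manifest = {
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "host": platform.node(),
        "simulation": {"path": simulation_exe, "sha256": sha256(simulation_exe)},
        "input_deck": {"path": input_deck, "sha256": sha256(input_deck)},
        "outputs": [{"path": p, "sha256": sha256(p)} for p in output_files],
    }
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)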

Adverse Event Prediction Using Graph-Augmented Temporal Analysis (Final Report)

Brost, Randolph; Carrier, Erin E.; Carroll, Michelle J.; Groth, Katrina M.; Kegelmeyer, William P.; Leung, Vitus J.; Link, Hamilton E.; Patterson, Andrew J.; Phillips, Cynthia A.; Richter, Samuel; Robinson, David G.; Staid, Andrea; Woodbridge, Diane M.K.

This report summarizes the work performed under the Sandia LDRD project "Adverse Event Prediction Using Graph-Augmented Temporal Analysis." The goal of the project was to develop a method for analyzing multiple time-series data streams to identify precursors providing advance warning of the potential occurrence of events of interest. The proposed approach combined temporal analysis of each data stream with reasoning about relationships between data streams using a geospatial-temporal semantic graph. This class of problems is relevant to several important topics of national interest. In the course of this work we developed new temporal analysis techniques, including Markov Chain Monte Carlo approaches to temporal analysis, temporal shift algorithms to refine forecasts, and a version of Ripley's K-function extended to support temporal precursor identification. This report summarizes the project's major accomplishments and gathers the abstracts and references for the publication submissions and reports that were prepared as part of this work. We then describe work in progress that is not yet ready for publication.
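
For context on the precursor work (the project's extended, precursor-aware version is not reproduced here), the following is a minimal sketch of the standard one-dimensional temporal Ripley's K-function that such an extension builds on, assuming NumPy.

# Baseline temporal Ripley's K for event times on [0, T]; under a homogeneous
# Poisson process (ignoring edge effects) K(t) is approximately 2*t, so larger
# values suggest temporal clustering. Not the project's extended version.
import numpy as np

def ripley_k_temporal(event_times, window, T):
    """K(t) = (T / n^2) * sum over ordered pairs i != j of 1(|t_i - t_j| <= t)."""
    times = np.sort(np.asarray(event_times, dtype=float))
    n = times.size
    if n < 2:
        return np.zeros_like(np.asarray(window, dtype=float))
    diffs = np.abs(times[:, None] - times[None, :])
    np.fill_diagonal(diffs, np.inf)  # exclude self-pairs
    return np.array([(diffs <= t).sum() * T / (n * n) for t in window])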

ATDM/ECP Milestone Memo WBS 2.3.4.04 / SNL ATDM Data and Visualization Projects STDV04-21 - [MS1/YR2] Q3: Prototype Catalyst/ParaView in-situ viz for unsteady RV flow on ATS-1

Moreland, Kenneth D.

ParaView Catalyst is an API for accessing the scalable visualization infrastructure of ParaView in an in-situ context. In-situ visualization gives simulation codes access to data post-processing operations while the simulation is running. In-situ techniques can reduce data post-processing time, allow computational steering, and increase the resolution and frequency of data output. For a simulation code to use ParaView Catalyst, adapter code must be created that interfaces the simulation's data structures to ParaView/VTK data structures. Under ATDM, Catalyst is to be integrated with SPARC, a code used for simulation of unsteady reentry vehicle flow.
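
A minimal sketch of the adapter idea, assuming a regular-grid NumPy field and the VTK Python bindings: it shows only the wrapping of simulation data into a VTK data structure and omits the Catalyst API calls a real adapter would make each time step. The function name wrap_field and the field name "pressure" are illustrative.

# Illustrative adapter step only: place a simulation's in-memory field into a
# VTK data structure that ParaView/VTK pipelines can consume.
import numpy as np
import vtk
from vtk.util import numpy_support

def wrap_field(field: np.ndarray, name: str = "pressure") -> vtk.vtkImageData:
    """Wrap a 3-D NumPy array defined on a regular grid as vtkImageData."""
    nx, ny, nz = field.shape
    grid = vtk.vtkImageData()
    grid.SetDimensions(nx, ny, nz)
    # VTK expects x to vary fastest, hence Fortran-order flattening.
    vtk_array = numpy_support.numpy_to_vtk(field.ravel(order="F"), deep=True)
    vtk_array.SetName(name)
    grid.GetPointData().AddArray(vtk_array)
    grid.GetPointData().SetActiveScalars(name)
    return grid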
