Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research, and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections become essential. In this paper, we present the ParaText text analysis engine, a distributed-memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.
The aim of this project is to develop low-dimensional parametric (deterministic) models of complex networks, using compressive sensing (CS) and multiscale analysis and exploiting the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on a multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of the graph; it is very general and therefore not especially efficient or customized. Other model-based CS techniques exist, but they have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.
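As a hedged illustration of the generic CS recovery step described above (the project's own reconstruction algorithm is not detailed here), the following sketch applies iterative soft thresholding (ISTA) to recover a sparse coefficient vector, such as a vectorized, multiresolution-transformed adjacency block, from compressive samples y = Phi x. The sampling operator Phi, the sparsity level, and all parameter values are illustrative assumptions.

    import numpy as np

    def ista(Phi, y, lam=0.01, n_iter=500):
        # Generic iterative soft thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
        # Phi: (m, n) measurement matrix; y: (m,) compressive samples. Illustrative only.
        L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            grad = Phi.T @ (Phi @ x - y)             # gradient of the data-fit term
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    # Toy demo: recover an 8-sparse vector of length 256 from 80 random Gaussian samples.
    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = ista(Phi, Phi @ x_true)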
The present paper is the second in a series published at I/ITSEC that seeks to explain the efficacy of multi-role experiential learning employed to create engaging game-based training methods transitioned to the U.S. Army, U.S. Army Special Forces, Civil Affairs, and Psychological Operations teams. The first publication (I/ITSEC 2009) summarized findings from a quantitative study that investigated experiential learning in the multi-player, PC-based game module transitioned to PEO-STRI, DARWARS Ambush! NK (non-kinetic). The 2009 publication reported that participants in multi-role (Player and Reflective Observer/Evaluator) game-based training reported statistically significant learning and engagement. Additionally, when the means of the two groups (Player and Reflective Observer/Evaluator) were compared, they were not statistically significantly different from each other; that is, both playing and observing/evaluating were engaging learning modalities. The Observer/Evaluator role was designed to provide an opportunity for real-time reflection and meta-cognitive learning during game play. Results indicated that this role was an engaging way to learn about communication, that participants learned something about cultural awareness, and that the skills they learned were helpful in problem solving and decision-making.
The present paper seeks to continue to understand what and how users of non-kinetic game-based missions learn by revisiting the 2009 quantitative study with further investigation, including stochastic player performance analysis using latent semantic analysis and graph visualizations. The results are applicable to first-person game-based learning systems designed to enhance trainee intercultural communication, interpersonal skills, and adaptive thinking. In the full paper, we discuss results obtained from data collected from 78 research participants of diverse backgrounds who trained by engaging in tasks directly, as well as by observing and evaluating peer performance in real time. The goal is two-fold. The first is to quantify and visualize detailed player performance data derived from game-play transcription in order to deepen understanding of the results in the 2009 I/ITSEC paper. The second is to develop the technologies behind this quantification and visualization approach into a generalized application tool that can aid future games’ development of player/learner models and game adaptation algorithms.
Specifically, this paper addresses questions such as, “Are there significant differences in one's experience when an experiential learning task is observed first, and then performed by the same individual?” “Are there significant differences among groups participating in different roles in non-kinetic engagement training, especially when one role requires more active participation than the other?” “What is the impact of behavior modeling on learning in games?” In answering these questions, the present paper reinforces the 2009 empirical study's conclusion that, contrary to current trends in military game development, experiential learning is enhanced by innovative training approaches designed to facilitate trainee mastery of reflective observation and abstract conceptualization as much as performance-based skills.
This paper compares three approaches to model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not model selection methods per se, although one can select the model that yields the smallest sum-of-squared error. Information theoretic approaches balance goodness of fit against overfitting by combining a log-likelihood term with a penalty on the number of model parameters. Bayesian model selection involves calculating the posterior probability that each model is correct, given experimental data and prior probabilities that each model is correct. As part of this calculation, one often calibrates the parameters of each model, and this calibration is included in the Bayesian calculations. Our approach is demonstrated on a structural dynamics example with models for energy dissipation and peak force across a bolted joint. The three approaches are compared, and the influence of the log-likelihood term in all approaches is discussed.
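For reference, the two most widely used information theoretic criteria take the following standard forms (a hedged illustration; the specific criteria used in the paper are not restated in this abstract), where k is the number of model parameters, n the number of data points, and \hat{L} the maximized likelihood:

$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}.$$

In both cases the log-likelihood term rewards goodness of fit while the leading term penalizes additional parameters, which is the balance described above; the model with the smallest criterion value is preferred.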
The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, and other fields. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application, where missing data are frequently encountered due to disconnections of electrodes, and the problem of modeling computer network traffic, where data may be absent due to the expense of the data collection process.
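For concreteness, the weighted least squares formulation can be sketched as follows (notation is illustrative): with an indicator tensor \mathcal{W} whose entries are 1 where the data tensor \mathcal{X} is known and 0 otherwise, CP-WOPT minimizes

$$f(\mathbf{A},\mathbf{B},\mathbf{C}) = \tfrac{1}{2}\,\bigl\|\, \mathcal{W} \ast \bigl( \mathcal{X} - [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!] \bigr) \bigr\|_F^2,$$

where \ast is the elementwise (Hadamard) product and [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!] is the CP model built from the factor matrices. Because the objective and its gradient involve only the known entries, a first-order (gradient-based) method can exploit sparsity in both the data and the weights.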
Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.
The analysis of networked activities is dramatically more challenging than many traditional kinds of analysis. A network is defined by a set of entities (people, organizations, banks, computers, etc.) linked by various types of relationships. These entities and relationships are often uninteresting alone and only become significant in aggregate. The analysis and visualization of these networks is one of the driving factors behind the creation of the Titan Toolkit. Given the broad set of problem domains and the wide-ranging databases in use by the information analysis community, the Titan Toolkit's flexible, component-based pipeline provides an excellent platform for constructing specific combinations of network algorithms and visualizations.
The observation and characterization of a single-atom system in silicon is a significant landmark in half a century of device miniaturization, and presents an important new laboratory for fundamental quantum and atomic physics. We compare measurements of the spectrum of a single two-electron (2e) atom system in silicon - a negatively charged (D-), gated arsenic donor in a FinFET - with multi-million-atom tight-binding (TB) calculations. The TB method captures accurate single-electron eigenstates of the device, taking into account device geometry, donor potentials, applied fields, interfaces, and the full host bandstructure. In a previous work, the depths and fields of As donors in six device samples were established through excited-state spectroscopy of the D0 electron and comparison with TB calculations. Using self-consistent field (SCF) TB, we computed the charging energies of the D- electron for the same six device samples and found good agreement with the measurements. Although a bulk donor has only a bound singlet ground state and a charging energy of about 40 meV, calculations show that a gated donor near an interface can have a reduced charging energy and bound excited states in the D- spectrum. Measurements indeed reveal reduced charging energies and bound 2e excited states, at least one of which is a triplet. The calculations also show the influence of the host valley physics in the two-electron spectrum of the donor.
Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA provides conceptual organization and analysis of document collections by modeling high-dimensional feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed-memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols, from standard web browsers to custom clients written in any language.
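ParaText's distributed pipeline is not reproduced here, but the core LSA step it parallelizes can be sketched serially as follows: a hypothetical term-by-document matrix is reduced to a low-dimensional concept space via a truncated SVD, and similarities are computed in that reduced space. The sketch only illustrates the underlying linear algebra; ParaText distributes these steps across processes.

    import numpy as np

    # Hypothetical term-by-document count matrix (rows = terms, columns = documents).
    A = np.array([[2, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 3, 1, 1],
                  [0, 0, 2, 2]], dtype=float)

    k = 2                                        # number of latent (concept) dimensions
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]     # rank-k truncation

    doc_coords = (np.diag(sk) @ Vtk).T           # each document in the k-dim concept space
    term_coords = Uk @ np.diag(sk)               # each term in the same space

    # Cosine similarity between documents 0 and 2 in the reduced (semantic) space.
    d0, d2 = doc_coords[0], doc_coords[2]
    similarity = d0 @ d2 / (np.linalg.norm(d0) * np.linalg.norm(d2))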
While RAID is the prevailing method of creating reliable secondary storage infrastructure, many users desire more flexibility than current implementations offer. To attain the needed performance, customers have often turned to hardware-based RAID solutions. This talk describes a RAID system that offloads erasure correction coding calculations to GPUs, allowing increased reliability by supporting new RAID levels while maintaining high performance.
A multi-laboratory ontology construction effort during the summer and fall of 2009 prototyped an ontology for counterfeit semiconductor manufacturing. This effort included an ontology development team and an ontology validation methods team. Here, the third team of the Ontology Project, the Data Analysis (DA) team, reports on its approaches, the tools it used, and its results for mining literature for terminology pertinent to counterfeit semiconductor manufacturing. A discussion of the value of ontology-based analysis is presented, with insights drawn from other ontology-based methods regularly used in the analysis of genomic experiments. Finally, suggestions for future work are offered.
After more than 50 years of molecular simulations, accurate empirical models are still the bottleneck in the wide adoption of simulation techniques. Addressing this issue calls for a fresh paradigm. In this study, we outline a new genetic-programming-based method to develop empirical models for a system purely from its energies and/or forces. While the approach was initially developed to derive classical force fields from ab initio calculations, we also discuss its application to the molecular coarse-graining of methanol. Two models, one representing methanol by a single site and the other by two sites, will be developed using this method. They will be validated against existing coarse-grained potentials for methanol by comparing thermophysical properties.
This brief paper explores the development of scalable, nonlinear, fully-implicit solution methods for a stabilized unstructured finite element (FE) discretization of the 2D incompressible (reduced) resistive MHD system. The discussion considers the stabilized FE formulation in the context of a fully-implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton-Krylov methods, which are preconditioned using fully-coupled algebraic multilevel (AMG) techniques and a new approximate block factorization (ABF) preconditioner. The intent of these preconditioners is to enable robust, scalable, and efficient solution approaches for the large-scale sparse linear systems generated by the Newton linearization. We present results for the fully-coupled AMG preconditioner for two prototype problems: a low Lundquist number MHD Faraday conduction pump and a moderately high Lundquist number incompressible magnetic island coalescence problem. For the MHD pump results, we explore the scaling of the fully-coupled AMG preconditioner on up to 4096 processors for problems with up to 64M unknowns on a Cray XT3/4. Using the island coalescence problem, we explore the weak scaling of the AMG preconditioner and the influence of the Lundquist number on the iteration count. Finally, we present some very recent results for the algorithmic scaling of the ABF preconditioner.
Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However, applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML), direct solvers (Amesos), and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.
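As a hedged illustration of the augmented systems mentioned above (the exact bordering used by LOCA is not spelled out in this abstract), a pseudo-arclength continuation step for f(x, \lambda) = 0 couples the Jacobian J with parameter and tangent information:

$$\begin{bmatrix} J & f_\lambda \\ \dot{x}_0^{\mathsf{T}} & \dot{\lambda}_0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta\lambda \end{bmatrix} = -\begin{bmatrix} f(x,\lambda) \\ s(x,\lambda) \end{bmatrix},$$

where s is the arclength constraint. Block elimination reduces such a bordered solve to a small number of solves with J itself, which is how the parallel iterative solvers and preconditioners listed above can be reused for continuation and bifurcation analysis.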
In many disciplines, such as social network analysis and web analysis, the data are link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: given link data for time periods 1 through T, can we predict the links in time period T + 1? Specifically, we look at bipartite graphs changing over time and consider matrix- and tensor-based methods for predicting links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem.
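To make the truncated-SVD idea concrete (a hedged sketch; the paper's exact scoring function is not restated in this abstract), note that for a bipartite graph with adjacency block M, only odd powers of the full adjacency matrix contribute cross-set Katz scores, so a rank-k SVD M ≈ U Σ V^T collapses the Katz series into a simple function of the singular values, assuming β σ_max < 1:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    def katz_scores_bipartite(M, k=10, beta=0.001):
        # Rank-k approximation of the cross-set Katz scores
        #   sum_{j>=0} beta^(2j+1) * M (M^T M)^j,
        # which equals U diag(beta*s / (1 - (beta*s)^2)) V^T for M ~ U diag(s) V^T.
        U, s, Vt = svds(M, k=k)
        f = beta * s / (1.0 - (beta * s) ** 2)
        return U @ np.diag(f) @ Vt               # dense (m x n) matrix of link scores

    # Toy example on a random sparse bipartite adjacency block (illustrative only).
    M = sparse_random(500, 300, density=0.01, format='csr', random_state=0)
    scores = katz_scores_bipartite(M, k=10, beta=0.001)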
Modeling the interaction of dislocations with internal boundaries and free surfaces is essential to understanding the effect of material microstructure on dislocation motion. However, discrete dislocation dynamics methods rely on infinite-domain solutions of dislocation fields, which makes modeling of heterogeneous materials difficult. A finite-domain dislocation dynamics capability is under development that resolves both the dislocation array and the polycrystalline structure in a compatible manner so that free surfaces and material interfaces are easily treated. In this approach, the polycrystalline structure is accommodated using the generalized finite element method (GFEM), and the displacement due to the dislocation array is added to the displacement approximation. Shown in figure 1 are representative results from simulations of randomly placed and oriented dislocation sources in a cubic nickel polycrystal. Each grain has a randomly assigned (unique) material basis, and available glide planes are chosen accordingly. The change in basis between neighboring grains has an important effect on the motion of dislocations since the resolved shear on available glide planes can change dramatically. Dislocation transmission through high-angle grain boundaries is assumed to occur by absorption into the boundary and subsequent nucleation in the neighboring grain. Such behavior is illustrated in figure 1d. Nucleation from the vertically oriented source in the bottom right grain is due to local stresses from dislocation pile-up in the neighboring grain. In this talk, the method and implementation are presented, as well as some representative results from large-scale (i.e., massively parallel) simulations of dislocation motion in a cubic nano-domain nickel alloy. Particular attention will be paid to the effect of grain size on polycrystalline strength.
As computational science applications grow more parallel, with multi-core supercomputers having hundreds of thousands of computational cores, it will become increasingly difficult for solvers to scale. Our approach is to use hybrid MPI/threaded numerical algorithms to solve the resulting linear systems in order to reduce the number of MPI tasks and increase the parallel efficiency of the algorithm. However, we need efficient threaded numerical kernels to run on the multi-core nodes in order to achieve good parallel efficiency. In this paper, we focus on improving the performance of a multithreaded triangular solver, an important kernel for preconditioning. We analyze three factors that affect the parallel performance of this threaded kernel and obtain good scalability on multi-core nodes for a range of matrix sizes.
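The three performance factors analyzed in the paper are not reproduced here; as a hedged illustration of how a sparse triangular solve can be threaded at all, the sketch below uses level scheduling, one standard strategy: rows whose dependencies are already satisfied are grouped into levels, and all rows within a level can be solved concurrently (the inner loop below is the one that would be distributed across threads).

    import numpy as np
    from scipy.sparse import csr_matrix

    def level_schedule(L):
        # Group the rows of a sparse lower-triangular CSR matrix into dependency levels.
        n = L.shape[0]
        level = np.zeros(n, dtype=int)
        for i in range(n):
            cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
            deps = cols[cols < i]                            # strictly lower entries
            level[i] = (1 + level[deps].max()) if deps.size else 0
        return [np.where(level == lv)[0] for lv in range(level.max() + 1)]

    def tri_solve_by_levels(L, b):
        # Solve L x = b; rows within a level are independent, so the inner loop
        # could run on separate threads (executed serially in this sketch).
        x = b.astype(float).copy()
        for rows in level_schedule(L):
            for i in rows:                                   # parallelizable loop
                cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
                vals = L.data[L.indptr[i]:L.indptr[i + 1]]
                lower = cols < i
                x[i] = (b[i] - vals[lower] @ x[cols[lower]]) / vals[~lower][0]
        return x

    L = csr_matrix(np.array([[2., 0., 0.], [1., 3., 0.], [0., 1., 4.]]))
    x = tri_solve_by_levels(L, np.array([2., 5., 9.]))       # expect [1, 4/3, 23/12]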
In this presentation, we examine the accuracy and performance of a suite of discrete-element-modeling approaches to predicting equilibrium and dynamic rheological properties of polystyrene suspensions. What distinguishes each approach presented is the methodology for handling the solvent hydrodynamics. Specifically, we compare stochastic rotation dynamics (SRD), fast lubrication dynamics (FLD), and dissipative particle dynamics (DPD). Method-to-method comparisons are made, as well as comparisons with experimental data. Quantities examined are equilibrium structural properties (e.g., the pair-distribution function), equilibrium dynamic properties (e.g., short- and long-time diffusivities), and dynamic response (e.g., steady shear viscosity). In all approaches, we employ the DLVO potential for colloid-colloid interactions. Comparisons are made over a range of volume fractions and salt concentrations. Our results reveal that the utility of such methods for long-time diffusivity prediction can be dubious in certain ranges of volume fraction, and we report other findings regarding the best formulation to use in predicting rheological response.
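For reference (a hedged, schematic form; the exact parameterization used in these simulations is not restated in this abstract), the DLVO pair potential for two equal colloidal spheres of radius a at center-to-center separation r combines a screened electrostatic repulsion with a Hamaker van der Waals attraction:

$$U_{\mathrm{DLVO}}(r) \approx \frac{A_0}{\kappa}\, e^{-\kappa (r - 2a)} \;-\; \frac{A_H}{6}\left[\frac{2a^2}{r^2 - 4a^2} + \frac{2a^2}{r^2} + \ln\!\left(\frac{r^2 - 4a^2}{r^2}\right)\right],$$

where κ is the inverse Debye screening length (set by the salt concentration), A_H is the Hamaker constant, and A_0 sets the strength of the electrostatic repulsion. The salt-concentration sweep mentioned above effectively varies κ, while the volume-fraction sweep varies how often particles sample the short-range part of this potential.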