We introduce Recursive Spoke Darts (RSD): a recursive hyperplane sampling algorithm that exploits the full duality between Voronoi and Delaunay entities of various dimensions. Our algorithm abandons the dependence on the empty-sphere principle in the generation of Delaunay simplices, providing the foundation needed for scalable consistent meshing. The algorithm relies on two simple operations: line-hyperplane trimming and spherical range search. Consequently, this approach improves scalability, as multiple processors can operate on different seeds at the same time. Moreover, generating consistent meshes across processors eliminates the communication needed between them, improving scalability even more. We introduce a simple tweak to the algorithm that makes it possible not to visit all vertices of a Voronoi cell, generating almost-exact Delaunay graphs while avoiding the natural curse of dimensionality in high dimensions.
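To make the first of these primitives concrete, the following is a minimal C++ sketch (hypothetical names, not the authors' implementation) of line-hyperplane trimming: a spoke anchored at a seed is shortened where it crosses a hyperplane, such as the bisector between the seed and a nearby sample.

    #include <array>
    #include <cstddef>

    constexpr std::size_t D = 4;  // ambient dimension (illustrative)
    using Vec = std::array<double, D>;

    double dot(const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < D; ++i) s += a[i] * b[i];
        return s;
    }

    // Trim the spoke {anchor + t*dir : 0 <= t <= t_max} against the hyperplane
    // {x : dot(n, x) = b}, keeping the side that contains the anchor (for a
    // Voronoi bisector, the seed lies on its own side by construction).
    double trim_spoke(const Vec& anchor, const Vec& dir, double t_max,
                      const Vec& n, double b) {
        double denom = dot(n, dir);
        if (denom == 0.0) return t_max;                  // spoke parallel to hyperplane
        double t_hit = (b - dot(n, anchor)) / denom;
        return (t_hit > 0.0 && t_hit < t_max) ? t_hit    // hyperplane cuts the spoke
                                              : t_max;   // no trimming needed
    }

Repeating this trim against the hyperplanes induced by nearby seeds, found via spherical range search, is what drives the recursion.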
We know the rainbow color map is terrible, and it is emphatically reviled by the visualization community, yet its use continues to persist. Why do we continue to use this perceptual encoding with so many known flaws? Instead of focusing on why we should not use rainbow colors, this position statement explores the rationale for why we do pick these colors despite their flaws. Often the decision is influenced by a lack of knowledge, but even experts who know better sometimes choose poorly. A larger issue is the expedient default that we have inadvertently allowed the rainbow color map to become. Knowing why the rainbow color map is used will help us move away from it. Education is good, but clearly not sufficient. We gain traction by making sensible color alternatives more convenient. It is not feasible to force a color map on users. Our goal is to supplant the rainbow color map as a common standard, and we will find that even those wedded to it will migrate away.
System-of-systems modeling has traditionally focused on physical systems rather than humans, but recent events have proved the necessity of considering the human in the loop. As technology becomes more complex and layered security continues to increase in importance, capturing humans and their interactions with technologies within the system-of-systems will be increasingly necessary. After an extensive job-task analysis, a novel type of system-of-systems simulation model has been created to capture the human-technology interactions on an extra-small forward operating base to better understand performance, key security drivers, and the robustness of the base. In addition to the model, an innovative framework for using detection theory to calculate d' for individual elements of the layered security system, and for the entire security system as a whole, is under development.
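For context, the standard signal detection theory definition of d' expresses the sensitivity of a detection element in terms of its hit rate H and false-alarm rate F through the inverse normal CDF z(·):

    \[ d' = z(H) - z(F) \]

How these per-element values are combined into a d' for the layered security system as a whole is the part of the framework still under development, and no particular combination rule is assumed here.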
We present here an example of how a large, multi-dimensional unstructured data set, namely aircraft trajectories over the United States, can be analyzed using relatively straightforward unsupervised learning techniques. We begin by adding a rough structure to the trajectory data using the notion of distance geometry. This provides a very generic structure to the data that allows it to be indexed as an n-dimensional vector. We then perform clustering with the HDBSCAN algorithm to both group flights with similar shapes and find outliers that have a relatively unique shape. Next, we expand the notion of geometric features to more specialized features and demonstrate the power of these features to solve specific problems. Finally, we highlight not just the power of the technique but also the speed and simplicity of the implementation by demonstrating them on very large data sets.
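As an illustration of the distance-geometry indexing step (a sketch with hypothetical names, not the authors' code), each trajectory can be resampled at k landmark points, and the pairwise distances between landmarks form a fixed-length vector that a clustering algorithm such as HDBSCAN can consume directly:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };  // e.g., projected trajectory coordinates

    // Build a fixed-length feature vector from k landmark points sampled
    // uniformly along a trajectory: the k*(k-1)/2 pairwise distances.
    std::vector<double> distance_geometry_features(const std::vector<Point>& landmarks) {
        std::vector<double> features;
        for (std::size_t i = 0; i < landmarks.size(); ++i)
            for (std::size_t j = i + 1; j < landmarks.size(); ++j)
                features.push_back(std::hypot(landmarks[i].x - landmarks[j].x,
                                              landmarks[i].y - landmarks[j].y));
        return features;  // index each flight by this vector, then cluster
    }

Because the vector depends only on inter-point distances, it is invariant to translation and rotation of the flight path, which is what makes it a useful shape descriptor.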
A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously and driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for assessment of new data visualization saliency models.
As CMOS technology approaches the end of its scaling, oxide-based memristors have become one of the leading candidates for post-CMOS memory and logic devices. To facilitate the understanding of physical switching mechanisms and accelerate experimental development of memristors, we have developed a three-dimensional, fully-coupled electrical and thermal transport model, which captures all the important processes that drive memristive switching and is applicable for simulating a wide range of memristors. The model is applied to simulate the RESET and SET switching in a 3D filamentary TaOx memristor. Extensive simulations show that the switching dynamics of the bipolar device is governed by thermally-activated, field-dominated processes: Joule heating raises the temperature, which enables the movement of oxygen vacancies, while field-driven drift dominates their overall motion. Simulated current-voltage hysteresis and device resistance profiles as a function of time and voltage during RESET and SET switching show good agreement with experimental measurements.
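For orientation, a commonly used expression for this kind of thermally activated, field-driven vacancy motion (a generic Mott-Gurney-type hopping law, not necessarily the exact form used in this model) is

    \[ v = a f \exp\left(-\frac{E_a}{k_B T}\right) \sinh\left(\frac{q a E}{2 k_B T}\right), \]

where a is the hopping distance, f the attempt frequency, E_a the activation energy, E the local electric field, and T the local temperature. Joule heating raises T, which exponentially increases the mobility, while the sinh term makes the field direction dominate the net motion of vacancies.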
This contribution is the second part of three papers on Adaptive Multigrid Methods for the eXtended Fluid-Structure Interaction (eXFSI) Problem, where we introduce a monolithic variational formulation and solution techniques. To the best of our knowledge, such a model is new in the literature. This model is used to design an on-line structural health monitoring (SHM) system in order to determine the coupled acoustic and elastic wave propagation in moving domains and optimal locations for SHM sensors. In a monolithic nonlinear fluid-structure interaction (FSI), the fluid and structure models are formulated in different coordinate systems. This makes a common variational description of the FSI setup difficult and challenging. This article presents the state of the art in the finite element approximation of the FSI problem based on a monolithic variational formulation in the well-established arbitrary Lagrangian-Eulerian (ALE) framework. This research focuses on the newly developed mathematical model of a new FSI problem, referred to as the extended Fluid-Structure Interaction (eXFSI) problem, in the ALE framework. The eXFSI is a strongly coupled problem of typical FSI with a coupled wave propagation problem on the fluid-solid interface (WpFSI). The WpFSI is a strongly coupled problem of acoustic and elastic wave equations, where the wave propagation problems automatically adopt the boundary conditions from the FSI problem at each time step. The ALE approach provides a simple but powerful procedure to couple solid deformations with fluid flows by a monolithic solution algorithm. In such a setting, the fluid problems are transformed to a fixed reference configuration by the ALE mapping. The goal of this work is the development of concepts for the efficient numerical solution of the eXFSI problem, the analysis of various fluid-solid mesh motion techniques, and the comparison of different second-order time-stepping schemes. This work consists of the investigation of different time-stepping scheme formulations for a nonlinear FSI problem coupling the acoustic/elastic wave propagation on the fluid-structure interface. Temporal discretization is based on finite differences and is formulated as a one-step-θ scheme, from which we can recover the following particular cases: the implicit Euler, Crank-Nicolson, shifted Crank-Nicolson, and Fractional-Step-θ schemes. The nonlinear problem is solved with a Newton-like method, and the discretization is done with a Galerkin finite element scheme. The implementation is accomplished via the software library package DOPELIB, based on the deal.II finite element library, for the computation of different eXFSI configurations.
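For reference, the one-step-θ scheme mentioned above advances a semi-discrete system u' = f(t, u) with time-step size k via

    \[ \frac{u^{n+1} - u^n}{k} = \theta\, f(t^{n+1}, u^{n+1}) + (1 - \theta)\, f(t^n, u^n), \]

so that θ = 1 recovers the implicit Euler scheme, θ = 1/2 the Crank-Nicolson scheme, and θ = 1/2 + O(k) the shifted Crank-Nicolson variant; the Fractional-Step-θ scheme chains three such substeps per time step.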
Best-estimate fuel performance codes such as BISON, currently under development at the Idaho National Laboratory, utilize empirical and mechanistic, lower-length-scale-informed correlations to predict fuel behavior under normal operating and accident reactor conditions. Traditionally, best-estimate results are presented using the correlations with no quantification of the uncertainty in the output metrics of interest. However, there are associated uncertainties in the input parameters and correlations used to determine the behavior of the fuel and cladding under irradiation. Therefore, it is important to perform uncertainty quantification and include confidence bounds on the output metrics that take into account the uncertainties in the inputs. In addition, sensitivity analyses can be performed to determine which input parameters have the greatest influence on the outputs. In this paper we couple the BISON fuel performance code to the DAKOTA uncertainty analysis software to analyze a representative fuel performance problem. The case studied in this paper is based upon rod 1 from the IFA-432 integral experiment performed at the Halden Reactor in Norway. The rodlet is representative of a BWR fuel rod. The input parameter uncertainties are broken into three separate categories: boundary condition uncertainties (e.g., power, coolant flow rate), manufacturing uncertainties (e.g., pellet diameter, cladding thickness), and model uncertainties (e.g., fuel thermal conductivity, fuel swelling). Utilizing DAKOTA, a variety of statistical analysis techniques are applied to quantify the uncertainty and sensitivity of the output metrics of interest. Specifically, we demonstrate the use of sampling methods, polynomial chaos expansions, surrogate models, and variance-based decomposition. The output metrics investigated in this study are the fuel centerline temperature, cladding surface temperature, fission gas released, and fuel rod diameter. The results highlight the importance of quantifying the uncertainty and sensitivity in fuel performance modeling predictions and the need for additional research into improving the material models that are currently available.
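For reference, the variance-based decomposition mentioned above attributes output variance to individual inputs through Sobol indices; for an output metric Y and input parameter X_i, the first-order index is

    \[ S_i = \frac{\operatorname{Var}(\mathbb{E}[Y \mid X_i])}{\operatorname{Var}(Y)}, \]

so inputs with large S_i contribute most of the uncertainty in that output metric and are the natural targets for improved material models.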
Ebeida, Mohamed; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong M.; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes by eliminating obtuse angles, and to more accurately model fiber-reinforced polymers for elastic and failure simulations.
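As a sketch of the local test underlying these operations (hypothetical names; the actual framework also maintains coverage maximality via additional queries), a disk injected or relocated to a candidate site is acceptable only if it violates no separation constraint against the existing packing:

    #include <cmath>
    #include <vector>

    struct Disk { double x, y, r; };  // center and radius from the sizing function

    // A candidate disk is conflict-free if its center lies inside no existing
    // disk and no existing disk's center lies inside it.
    bool conflict_free(const Disk& c, const std::vector<Disk>& packing) {
        for (const Disk& d : packing) {
            double dist = std::hypot(c.x - d.x, c.y - d.y);
            if (dist < c.r || dist < d.r) return false;  // separation violated
        }
        return true;  // accepting this site keeps the packing conflict-free
    }

In practice the loop would be restricted to disks returned by a neighborhood query, which is what keeps each relocate/inject/eject operation local.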
Temperature monitoring is essential in automation, mechatronics, robotics, and other dynamic systems. Wireless methods that can sense multiple temperatures at the same time without the use of cables or slip-rings can enable many new applications. A novel method utilizing small permanent magnets is presented for wirelessly measuring the temperature of multiple points moving in repeatable motions. The technique utilizes linear least squares inversion to separate the magnetic field contributions of each magnet as it changes temperature. The experimental setup and calibration methods are discussed. Initial experiments show that temperatures from 5 to 50 °C can be accurately tracked for three neodymium iron boron magnets in a stationary configuration and while traversing arbitrary, repeatable trajectories. This work presents a new sensing capability that can be extended to tracking multiple temperatures inside opaque vessels, on rotating bearings, within batteries, or at the tip of complex end-effectors.
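The inversion step can be summarized, in a simplified form with illustrative symbols, as a linear model B = A m: B stacks the field measurements over one repeatable cycle, each column of A is the unit-moment field pattern of one magnet along its known trajectory, and m holds the magnets' moments. The ordinary least squares estimate

    \[ \hat{m} = (A^{\top} A)^{-1} A^{\top} B \]

separates the per-magnet contributions; each temperature is then read off from the calibrated, reversible dependence of that magnet's moment on temperature.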
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Terboven, Christian; Hahnfeld, Jonas; Teruel, Xavier; Mateo, Sergi; Duran, Alejandro; Klemm, Michael; Olivier, Stephen L.; De Supinski, Bronis R.
OpenMP tasking supports parallelization of irregular algorithms. Recent OpenMP specifications extended tasking to increase functionality and to support optimizations, for instance with the taskloop construct. However, task scheduling remains opaque, which leads to inconsistent performance on NUMA architectures. We assess design issues for task affinity and explore several approaches to enable it. We evaluate these proposals with implementations in the Nanos++ and LLVM OpenMP runtimes that improve performance up to 40% and significantly reduce execution time variation.
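As a minimal illustration of the taskloop construct discussed here (the affinity extensions evaluated in the paper are runtime proposals and are not shown), a loop can be chunked into tasks as follows:

    #include <cstddef>

    void scale(double* a, const double* b, double s, std::size_t n) {
        #pragma omp parallel
        #pragma omp single
        #pragma omp taskloop grainsize(4096)   // one task per chunk of iterations
        for (std::size_t i = 0; i < n; ++i)
            a[i] = s * b[i];
    }

Which thread, and hence which NUMA node, executes each task is exactly the scheduling decision the paper's affinity proposals aim to make controllable.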
ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)
Chattopadhyay, Ashesh; Kotteda, V.M.K.; Kumar, Vinod; Spotz, William S.
A framework is developed to integrate the existing MFiX (Multiphase Flow with Interphase eXchanges) flow solver with state-of-the-art linear equation solver packages in Trilinos. The integrated solver is tested on various flow problems. Its performance is evaluated on fluidized bed problems, and we observe that the integrated flow solver performs better than the native solver.
Mycobacterium tuberculosis (Mtb)-associated granuloma formation can be viewed as a structural immune response that can contain and halt the spread of the pathogen. In several mammalian hosts, including non-human primates, Mtb granulomas are often hypoxic, although this has not been observed in wild-type murine infection models. While a presumed consequence, the structural contribution of the granuloma to oxygen limitation and the concomitant impact on Mtb metabolic viability and persistence remain to be fully explored. We develop a multiscale computational model to test to what extent in vivo Mtb granulomas become hypoxic, and investigate the effects of hypoxia on host immune response efficacy and mycobacterial persistence. Our study integrates a physiological model of oxygen dynamics in the extracellular space of alveolar tissue, an agent-based model of cellular immune response, and a systems-biology-based model of Mtb metabolic dynamics. Our theoretical studies suggest that the dynamics of granuloma organization mediates oxygen availability and illustrate the immunological contribution of this structural host response to infection outcome. Furthermore, our integrated model demonstrates the link between the structural immune response and the mechanistic drivers influencing Mtb's adaptation to its changing microenvironment, and the qualitative infection outcome scenarios of clearance, containment, dissemination, and a newly observed theoretical outcome of transient containment. We observed hypoxic regions in the containment granuloma similar in size to granulomas found in mammalian in vivo models of Mtb infection. In the case of the containment outcome, our model uniquely demonstrates that immune-response-mediated hypoxic conditions help foster the shiftdown of bacteria through two stages of adaptation, similar to the in vitro non-replicating persistence (NRP) observed in the Wayne model of Mtb dormancy. The adaptation in part contributes to the ability of Mtb to remain dormant for years after initial infection.
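A generic form that the oxygen component of such a physiological model can take (shown for orientation only; the paper's exact formulation may differ) is a diffusion equation with Michaelis-Menten cellular consumption,

    \[ \frac{\partial c}{\partial t} = D \nabla^2 c - \frac{V_{\max}\, c}{K_m + c}, \]

where c is the extracellular oxygen concentration, D its effective diffusivity in alveolar tissue, and V_max and K_m the consumption parameters; densely packed cells in an organized granuloma then depress c toward hypoxic levels in its interior.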
The peridynamic theory of solid mechanics provides a natural framework for modeling constitutive response and simulating dynamic crack propagation, pervasive damage, and fragmentation. In the case of a fragmenting body, the principal quantities of interest include the number of fragments, and the masses and velocities of the fragments. We present a method for identifying individual fragments in a peridynamic simulation. We restrict ourselves to the meshfree approach of Silling and Askari, in which nodal volumes are used to discretize the computational domain. Nodal volumes, which are connected by peridynamic bonds, may separate as a result of material damage and form groups that represent fragments. Nodes within each fragment have similar velocities and their collective motion resembles that of a rigid body. The identification of fragments is achieved through inspection of the peridynamic bonds, established at the onset of the simulation, and the evolving damage value associated with each bond. An iterative approach allows for the identification of isolated groups of nodal volumes by traversing the network of bonds present in a body. The process of identifying fragments may be carried out at specified times during the simulation, revealing the progression of damage and the creation of fragments. Incorporating the fragment identification algorithm directly within the simulation code avoids the need to write bond data to disk, which is often prohibitively expensive. Results are recorded using fragment identification numbers. The identification number for each fragment is stored at each node within the fragment and written to disk, allowing for any number of post-processing operations, for example, the construction of cumulative distribution functions for quantities of interest. Care is taken with regard to very small clusters of isolated nodes, including individual nodes for which all bonds have failed. Small clusters of nodes may be treated as tiny fragments, or may be omitted from the fragment identification process. The fragment identification algorithm is demonstrated using the Sierra/SolidMechanics analysis code. It is applied to a simulation of pervasive damage resulting from a spherical projectile impacting a brittle disk, and to a simulation of fragmentation of an expanding ductile ring.
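The traversal described above amounts to labeling the connected components of the bond network after failed bonds are discarded; a compact sketch (hypothetical data layout, not the Sierra/SolidMechanics implementation) using breadth-first search:

    #include <cstddef>
    #include <queue>
    #include <vector>

    // bonds[i] lists the surviving neighbors of node i, i.e., bonds whose
    // damage value remains below the failure threshold.
    // Returns a fragment identification number for every node.
    std::vector<int> identify_fragments(const std::vector<std::vector<int>>& bonds) {
        std::vector<int> fragment(bonds.size(), -1);
        int next_id = 0;
        for (std::size_t seed = 0; seed < bonds.size(); ++seed) {
            if (fragment[seed] != -1) continue;  // node already labeled
            std::queue<std::size_t> frontier;
            frontier.push(seed);
            fragment[seed] = next_id;
            while (!frontier.empty()) {          // flood-fill one fragment
                std::size_t i = frontier.front();
                frontier.pop();
                for (int j : bonds[i])
                    if (fragment[j] == -1) { fragment[j] = next_id; frontier.push(j); }
            }
            ++next_id;  // fully detached nodes become size-one fragments
        }
        return fragment;
    }

Summing nodal volumes (and momentum) per identification number then yields the fragment masses and velocities of interest.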
Computational Science and Engineering (CSE) software can benefit substantially from an explicit focus on quality improvement. This is especially true as we face increased demands in both modeling and software complexities. At the same time, just desiring improved quality is not sufficient. We must work with the entities that provide CSE research teams with publication venues, funding, and professional recognition in order to increase incentives for improved software quality. In fact, software quality is precisely calibrated to the expectations, explicit and implicit, set by these entities. We will see broad improvements in sustainability and productivity only when publishers, funding agencies and employers raise their expectations for software quality. CSE software community leaders, those who are in a position to inform and influence these entities, have a unique opportunity to broadly and positively impact software quality by working to establish incentives that will spur creative and novel approaches to improve developer productivity and software sustainability.
We consider the problem of sampling points from a collection of smooth curves in the plane, such that the CRUST family of proximity-based reconstruction algorithms can rebuild the curves. Reconstruction requires a dense sampling of local features, i.e., parts of the curve that are close in Euclidean distance but far apart geodesically. We show that ε < 0.47-sampling is sufficient for our proposed HNN-CRUST variant, improving upon the state-of-the-art requirement of ε < 1/3-sampling. Thus we may reconstruct curves with many fewer samples. We also present a new sampling scheme that reduces the required density even further than ε < 0.47-sampling. We achieve this by better controlling the spacing between geodesically consecutive points. Our novel sampling condition is based on the reach, the minimum local feature size along intervals between samples. This is mathematically closer to the reconstruction density requirements, particularly near sharp-angled features. We prove lower and upper bounds on reach ρ-sampling density in terms of lfs ε-sampling and demonstrate that we typically reduce the required number of samples for reconstruction by more than half.
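For reference, the lfs-based condition used as the baseline here is the standard one: a point set S is an ε-sample of a curve Σ if every curve point has a sample within ε times its local feature size,

    \[ \forall p \in \Sigma:\ \min_{s \in S} \|p - s\| \le \varepsilon \cdot \mathrm{lfs}(p), \]

where lfs(p) is the distance from p to the medial axis of Σ; the reach-based condition instead bounds the spacing of geodesically consecutive samples by the minimum lfs over the interval between them, which is what permits the sparser sampling.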
Therefore, to design a building or a bridge that stands up and is safe, one might assume that engineers need to know a lot about these tensor fields and stress potentials, in all their mathematical glory. If not, then surely they must depend on specialists in continuum mechanics for guidance. Right?