Publications

Comparison of Designs of Hydrogen Isotope Separation Columns by Numerical Modeling

Industrial and Engineering Chemistry Research

Robinson, David R.; Salloum, Maher S.

Mixtures of gas-phase hydrogen isotopologues (diatomic combinations of protium, deuterium, and tritium) can be separated using columns containing a solid such as palladium that reversibly absorbs hydrogen. A temperature-swing process can transport hydrogen into or out of a column by inducing temperature-dependent absorption or desorption reactions. We consider two designs: a thermal cycling absorption process, which moves hydrogen back and forth between two columns, and a simulated moving bed (SMB), where columns are in a circular arrangement. We present a numerical mass and heat transport model of absorption columns for hydrogen isotope separation. It includes a detailed treatment of the absorption-desorption reaction for palladium. By comparing the isotope concentrations within the columns as a function of position and time, we observe that SMB can lead to sharper separations for a given number of thermal cycles by avoiding the remixing of isotopes.
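
The driving physics behind any temperature-swing design is the strong temperature dependence of the hydride's equilibrium pressure. A minimal Python sketch of that dependence (a van 't Hoff relation with illustrative enthalpy and entropy values, not parameters from the paper):

    import numpy as np

    R = 8.314     # gas constant, J/(mol K)
    dH = -35e3    # absorption enthalpy, J/mol H2 (illustrative value)
    dS = -90.0    # absorption entropy, J/(mol K) (illustrative value)

    def p_eq(T):
        """Equilibrium plateau pressure (bar) from the van 't Hoff relation."""
        return np.exp(dH / (R * T) - dS / R)

    for T in (300.0, 400.0):
        print(f"T = {T:.0f} K  ->  p_eq = {p_eq(T):.2f} bar")
    # The hot column's higher plateau pressure drives desorption there and
    # absorption in the cold column, moving hydrogen each half-cycle.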

Alpert multi-wavelets for functional inverse problems: direct optimization and deep learning

International Journal for Computational Methods in Engineering Science and Mechanics

Salloum, Maher S.; Bon, Bradley L.

Computational engineering models often contain unknown entities (e.g., parameters, initial and boundary conditions) that must be estimated from other measured, observable data. Estimating such unknown entities is challenging when they involve spatio-temporal fields, because such functional variables often require an infinite-dimensional representation. We address this problem by transforming an unknown functional field using Alpert wavelet bases and truncating the resulting spectrum. The problem thus reduces to the estimation of a few coefficients, which can be performed using common optimization methods. We apply this method to a one-dimensional heat transfer problem in which we estimate a heat source field varying in both time and space. The observable data comprise temperatures measured at several thermocouples in the domain, which is composed of either copper or stainless steel. The wavelet-based optimization estimates the heat source with an error between 5% and 7%. We analyze the effect of the domain material and the number of thermocouples, as well as the sensitivity to the initial guess of the heat source. Finally, we estimate the unknown heat source using a different approach based on deep learning, in which the input and output of a multi-layer perceptron are expressed in wavelet form. We find that this deep learning approach is more accurate than the optimization approach, with errors below 4%.
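
The core reduction, estimating a field through a handful of spectral coefficients, can be sketched in a few lines of Python. Here a cosine basis and a Gaussian smoothing kernel stand in for the Alpert multi-wavelet basis and the heat transfer forward model of the paper; all names and values are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(0.0, 1.0, 200)
    K = 6                                    # truncated spectrum: K coefficients
    basis = np.array([np.cos(k * np.pi * x) for k in range(K)])   # (K, 200)

    def forward(field):
        """Toy linear forward model: a smoothing kernel mimics heat diffusion."""
        kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)
        return (kernel @ field) / kernel.sum(axis=1)

    truth = np.exp(-((x - 0.3) ** 2) / 0.01) + 0.5 * np.sin(3 * x)
    sensors = np.arange(10, 200, 20)         # ~10 "thermocouple" locations
    data = forward(truth)[sensors]

    def misfit(c):
        return np.sum((forward(c @ basis)[sensors] - data) ** 2)

    c_opt = minimize(misfit, np.zeros(K), method="BFGS").x
    err = np.linalg.norm(c_opt @ basis - truth) / np.linalg.norm(truth)
    print(f"relative field error with {K} coefficients: {err:.2%}")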

Optimization of flow in additively manufactured porous columns with graded permeability

AIChE Journal

Salloum, Maher S.; Robinson, David R.

Chemical engineering systems often involve a functional porous medium, such as in catalyzed reactive flows, fluid purifiers, and chromatographic separations. Ideally, the flow rates throughout the porous medium are uniform, and all portions of the medium contribute efficiently to its function. The permeability is a property of a porous medium that depends on pore geometry and relates flow rate to pressure drop. Additive manufacturing techniques raise the possibilities that permeability can be arbitrarily specified in three dimensions, and that a broader range of permeabilities can be achieved than by traditional manufacturing methods. Using numerical optimization methods, we show that designs with spatially varying permeability can achieve greater flow uniformity than designs with uniform permeability. We consider geometries involving hemispherical regions that distribute flow, as in many glass chromatography columns. By several measures, significant improvements in flow uniformity can be obtained by modifying permeability only near the inlet and outlet.
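
The benefit of grading can be seen even in a toy Python model of parallel Darcy paths of unequal length (as arise under a hemispherical flow distributor) sharing a single pressure drop; the geometry and property values below are made up for illustration:

    import numpy as np

    mu, dP = 1e-3, 1.0e4                 # viscosity (Pa s), pressure drop (Pa)
    L = np.linspace(0.10, 0.16, 8)       # unequal path lengths (m), illustrative

    def fluxes(k):
        return k * dP / (mu * L)         # Darcy flux along each path

    k_uniform = np.full_like(L, 1e-12)
    k_graded = 1e-12 * L / L.mean()      # permeability graded with path length

    for name, k in [("uniform", k_uniform), ("graded", k_graded)]:
        q = fluxes(k)
        print(f"{name:8s}  flow nonuniformity = {q.std() / q.mean():.3f}")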

Comparing field data using Alpert multi-wavelets

Computational Mechanics

Salloum, Maher S.; Jin, Huiqing J.; Brown, Judith A.; Bolintineanu, Dan S.; Long, Kevin N.; Karlson, Kyle N.

In this paper we introduce a method to compare sets of full-field data using Alpert tree-wavelet transforms. The Alpert tree-wavelet methods transform the data into a spectral space allowing the comparison of all points in the fields by comparing spectral amplitudes. The methods are insensitive to translation, scale and discretization and can be applied to arbitrary geometries. This makes them especially well suited for comparison of field data sets coming from two different sources such as when comparing simulation field data to experimental field data. We have developed both global and local error metrics to quantify the error between two fields. We verify the methods on two-dimensional and three-dimensional discretizations of analytical functions. We then deploy the methods to compare full-field strain data from a simulation of elastomeric syntactic foam.
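
A minimal Python sketch of the comparison idea, with PyWavelets' standard 2-D transform standing in for the Alpert tree-wavelets that the paper uses to handle unstructured meshes and arbitrary geometries (the fields and noise level are invented):

    import numpy as np
    import pywt

    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    field_a = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)   # "simulation"
    field_b = field_a + 0.05 * np.random.default_rng(0).standard_normal(x.shape)  # "experiment"

    def spectrum(f):
        """Flatten all wavelet coefficient amplitudes into one vector."""
        arr, _ = pywt.coeffs_to_array(pywt.wavedec2(f, "db2", level=3))
        return arr.ravel()

    sa, sb = spectrum(field_a), spectrum(field_b)
    print(f"global spectral error: {np.linalg.norm(sa - sb) / np.linalg.norm(sa):.3f}")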

Physics-Based Checksums for Silent-Error Detection in PDE Solvers

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Salloum, Maher S.; Mayo, Jackson M.; Armstrong, Robert C.

We discuss techniques for efficient local detection of silent data corruption in parallel scientific computations, leveraging physical quantities such as momentum and energy that may be conserved by discretized PDEs. The conserved quantities are analogous to “algorithm-based fault tolerance” checksums for linear algebra but, due to their physical foundation, are applicable to both linear and nonlinear equations and have efficient local updates based on fluxes between subdomains. These physics-based checksums enable precise intermittent detection of errors and recovery by rollback to a checkpoint, with very low overhead when errors are rare. We present applications to both explicit hyperbolic and iterative elliptic (unstructured finite-element) solvers with injected memory bit flips.
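
A toy Python version of the idea: a conservative periodic advection update preserves total "mass" exactly, so a running sum serves as a checksum that exposes a silently corrupted array entry (the solver, tolerance, and injected error are all illustrative):

    import numpy as np

    n, c = 256, 0.5                          # cells, CFL number
    u = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.01)
    checksum = u.sum()                       # conserved "mass"

    for step in range(200):
        u -= c * (u - np.roll(u, 1))         # first-order upwind, periodic
        if step == 120:
            u[37] += 1e-3                    # injected silent corruption
        if abs(u.sum() - checksum) > 1e-9 * abs(checksum):
            print(f"silent error detected at step {step}; roll back to checkpoint")
            break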

Adaptive wavelet compression of large additive manufacturing experimental and simulation datasets

Computational Mechanics

Salloum, Maher S.; Johnson, Kyle J.; Bishop, Joseph E.; Aytac, Jon M.; Dagel, Daryl D.; van Bloemen Waanders, Bart G.

New manufacturing technologies such as additive manufacturing require research and development to minimize the uncertainties in the produced parts. The research involves experimental measurements and large simulations, which result in huge quantities of data to store and analyze. We address this challenge by alleviating the data storage requirements using lossy data compression. We select wavelet bases as the mathematical tool for compression. Unlike images, additive manufacturing data is often represented on irregular geometries and unstructured meshes. Thus, we use Alpert tree-wavelets as bases for our data compression method. We first analyze different basis functions for the wavelets and find the one that results in maximal compression and minimal error in the reconstructed data. We then devise a new adaptive thresholding method that is data-agnostic and allows a priori estimation of the reconstruction error. Finally, we propose metrics to quantify the global and local errors in the reconstructed data. One of the error metrics addresses the preservation of physical constraints in reconstructed data fields, such as a divergence-free stress field in structural simulations. While our compression and decompression method is general, we apply it to both experimental and computational data obtained from measurements and thermal/structural modeling of the sintering of a hollow cylinder from metal powders using a Laser Engineered Net Shape process. The results show that monomials achieve optimal compression performance when used as wavelet bases. The new thresholding method results in compression ratios that are two to seven times larger than those obtained with commonly used thresholds. Overall, adaptive Alpert tree-wavelets can achieve compression ratios between one and three orders of magnitude, depending on which features in the data must be preserved. These results show that Alpert tree-wavelet compression is a viable and promising technique for reducing the size of the large data structures found in both experiments and simulations.
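
The thresholding step can be sketched in Python; a standard 1-D PyWavelets transform and a simple magnitude threshold stand in for the Alpert tree-wavelets and the adaptive, data-agnostic threshold developed in the paper:

    import numpy as np
    import pywt

    signal = (np.sin(np.linspace(0, 8 * np.pi, 4096))
              + 0.01 * np.random.default_rng(1).standard_normal(4096))
    coeffs, slices = pywt.coeffs_to_array(pywt.wavedec(signal, "db4", level=6))

    threshold = 0.05 * np.abs(coeffs).max()          # simple magnitude threshold
    kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    recon = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format="wavedec"), "db4")

    ratio = coeffs.size / np.count_nonzero(kept)
    err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
    print(f"compression ratio ~{ratio:.0f}x, relative error {err:.2e}")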

Robust digital computation in the physical world

Cyber-Physical Systems Security

Mayo, Jackson M.; Armstrong, Robert C.; Hulette, Geoffrey C.; Salloum, Maher S.; Smith, Andrew M.

Modern digital hardware and software designs are increasingly complex but are themselves only idealizations of a real system that is instantiated in, and interacts with, an analog physical environment. Insights from physics, formal methods, and complex systems theory can aid in extending reliability and security measures from pure digital computation (itself a challenging problem) to the broader cyber-physical and out-of-nominal arena. Example applications to design and analysis of high-consequence controllers and extreme-scale scientific computing illustrate the interplay of physics and computation. In particular, we discuss the limitations of digital models in an analog world, the modeling and verification of out-of-nominal logic, and the resilience of computational physics simulation. A common theme is that robustness to failures and attacks is fostered by cyber-physical system designs that are constrained to possess inherent stability or smoothness. This chapter contains excerpts from previous publications by the authors.

A numerical model of exchange chromatography through 3-D lattice structures

AIChE Journal

Salloum, Maher S.; Robinson, David R.

Rapid progress in the development of additive manufacturing technologies is opening new opportunities to fabricate structures that control mass transport in three dimensions across a broad range of length scales. We describe a structure that can be fabricated by newly available commercial 3-D printers. It contains an array of regular three-dimensional flow paths that are in intimate contact with a solid phase, and thoroughly shuffle material among the paths. We implement a chemically reacting flow model to study its behavior as an exchange chromatography column, and compare it to an array of 1-D flow paths that resemble more traditional honeycomb monoliths. A reaction front moves through the columns and then elutes. The front is sharper at all flow rates for the structure with three-dimensional flow paths, and this structure is more robust to channel width defects than the 1-D array.
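
A 1-D caricature in Python of the front behavior (the paper's model is a 3-D chemically reacting flow through printed lattices): upwind advection plus linear-driving-force exchange with a solid phase produces a retarded front that eventually elutes. All numbers are illustrative:

    import numpy as np

    n, steps = 400, 1200
    dx, dt, v, k = 1.0 / 400, 1e-3, 0.5, 20.0
    c = np.zeros(n)          # mobile-phase concentration
    q = np.zeros(n)          # solid-phase loading

    for _ in range(steps):
        c[0] = 1.0                                   # feed at the inlet
        adv = -v * dt / dx * (c - np.roll(c, 1))     # first-order upwind
        adv[0] = 0.0
        xfer = k * dt * (c - q)                      # exchange with the solid
        c += adv - xfer
        q += xfer

    front = np.argmax(c < 0.5)                       # position of the moving front
    print(f"front position after {steps} steps: x = {front * dx:.2f}")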

Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

Data Science and Engineering

Salloum, Maher S.; Fabian, Nathan D.; Hensinger, David M.; Lee, Jina L.; Allendorf, Elizabeth M.; Bhagatwala, Ankit; Blaylock, Myra L.; Chen, Jacqueline H.; Templeton, Jeremy A.; Kalashnikova, Irina

Exascale computing promises quantities of data too large to store efficiently or to transfer across networks for analysis and visualization. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases, and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as the unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of the reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios of up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
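
The sampling/reconstruction pipeline can be sketched with off-the-shelf pieces in Python; scikit-learn's plain orthogonal matching pursuit stands in for the improved stagewise OMP, and a DCT basis for the tree wavelets (sizes and sparsity are arbitrary):

    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(2)
    n, m, s = 512, 128, 10                 # signal size, measurements, sparsity
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    psi = idct(np.eye(n), axis=0, norm="ortho")     # sparsifying basis (DCT)
    signal = psi @ coeffs

    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random (incoherent) sampling
    y = phi @ signal                                # compressed in situ samples

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s, fit_intercept=False)
    recon = psi @ omp.fit(phi @ psi, y).coef_
    err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
    print(f"{n // m}x compression, relative reconstruction error: {err:.1e}")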

Trigger Detection for Adaptive Scientific Workflows Using Percentile Sampling

SIAM Journal on Scientific Computing (Online)

Pinar, Ali P.; Bennett, Janine C.; Salloum, Maher S.; Bhagatwala, Ankit B.; Chen, Jacqueline H.; Seshadhri, C.

The increasing complexity of both scientific simulations and high-performance computing system architectures is driving the need for adaptive workflows, in which the composition and execution of computational and data manipulation steps dynamically depend on the evolutionary state of the simulation itself. Consider, for example, the frequency of data storage. Critical phases of the simulation should be captured with high frequency and high fidelity for post-analysis; however, we cannot afford to retain the same frequency for the full simulation due to the high cost of data movement. We can instead look for triggers, indicators that the simulation will be entering a critical phase, and adapt the workflow accordingly. In this paper, we present a methodology for detecting triggers and demonstrate its use in the context of direct numerical simulations of turbulent combustion using S3D. We show that chemical explosive mode analysis (CEMA) can be used to devise a noise-tolerant indicator for rapid increases in heat release. However, exhaustive computation of CEMA values dominates the total simulation and is thus prohibitively expensive. To overcome this computational bottleneck, we propose a quantile sampling approach. Our sampling-based algorithm comes with provable error/confidence bounds as a function of the number of samples. Most importantly, the number of samples is independent of the problem size, and thus our proposed sampling algorithm offers perfect scalability. Our experiments on homogeneous charge compression ignition and reactivity controlled compression ignition simulations show that the proposed method can detect rapid increases in heat release, and that its computational overhead is negligible. Our results will be used to make dynamic workflow decisions regarding data storage and mesh resolution in future combustion simulations.
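
The key property, a sample size that does not grow with the problem size, is easy to demonstrate in Python (the "CEMA-like" field, sample size, and threshold below are all invented):

    import numpy as np

    rng = np.random.default_rng(3)
    n_grid = 1_000_000                       # full field: too costly to scan exhaustively
    field = rng.gamma(2.0, 1.0, n_grid)      # stand-in for per-point CEMA values
    field[rng.choice(n_grid, 2_000, replace=False)] += 50.0   # a few "igniting" points

    sample = rng.choice(field, 10_000, replace=False)   # size independent of n_grid
    q = np.quantile(sample, 0.999)
    print(f"sampled 99.9th percentile = {q:.1f}, trigger fired = {q > 25.0}")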

In-situ mitigation of silent data corruption in PDE solvers

FTXS 2016 - Proceedings of the ACM Workshop on Fault-Tolerance for HPC at Extreme Scale

Salloum, Maher S.; Mayo, Jackson M.; Armstrong, Robert C.

We present algorithmic techniques for parallel PDE solvers that leverage numerical smoothness properties of physics simulation to detect and correct silent data corruption within local computations. We initially model such silent hardware errors (which are of concern for extreme scale) via injected DRAM bit flips. Our mitigation approach generalizes previously developed "robust stencils" and uses modified linear algebra operations that spatially interpolate to replace large outlier values. Prototype implementations for 1D hyperbolic and 3D elliptic solvers, tested on up to 2048 cores, show that this error mitigation enables tolerating orders of magnitude higher bit-flip rates. The runtime overhead of the approach generally decreases with greater solver scale and complexity, becoming no more than a few percent in some cases. A key advantage is that silent data corruption can be handled transparently with data in cache, reducing the cost of false-positive detections compared to rollback approaches.
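
A one-dimensional Python caricature of the mitigation: a value that violates the field's smoothness by orders of magnitude is detected against its stencil neighbors and replaced by interpolation, with no rollback (the detection threshold is an arbitrary choice here):

    import numpy as np

    true = np.sin(np.linspace(0, 2 * np.pi, 100))
    u = true.copy()
    u[40] = 1e8                              # injected bit-flip-like outlier

    neighbors = 0.5 * (np.roll(u, 1) + np.roll(u, -1))
    outliers = np.abs(u - neighbors) > 10.0 * (np.abs(neighbors) + 1.0)
    u[outliers] = neighbors[outliers]        # repair in place by interpolation
    print(f"repaired {outliers.sum()} point(s); "
          f"max error after repair: {np.abs(u - true).max():.1e}")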

Enabling adaptive scientific workflows via trigger detection

Proceedings of ISAV 2015: 1st International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2015: The International Conference for High Performance Computing, Networking, Storage and Analysis

Salloum, Maher S.; Bennett, Janine C.; Pinar, Ali P.; Bhagatwala, Ankit; Chen, Jacqueline H.

Next-generation architectures necessitate a shift away from traditional workflows in which the simulation state is saved at prescribed frequencies for post-processing analysis. While the need to shift to in situ workflows has been acknowledged for some time, much of the current research is focused on static workflows, where the analysis that would have been done as a post-process is performed concurrently with the simulation at user-prescribed frequencies. More recently, research efforts have been striving to enable adaptive workflows, in which the frequency, composition, and execution of computational and data manipulation steps dynamically depend on the state of the simulation. Adapting the workflow to the state of the simulation in such a data-driven fashion puts extremely strict efficiency requirements on the analysis capabilities that are used to identify transitions in the workflow. In this paper we build upon earlier work on trigger detection using sublinear techniques to drive adaptive workflows. Here we propose a methodology to detect the time when sudden heat release occurs in simulations of turbulent combustion. Our proposed method provides an alternative metric that can be used along with our former metric to increase the robustness of trigger detection. We show the effectiveness of our metric empirically for predicting heat release in two use cases.

Final Report: Sublinear Algorithms for In-situ and In-transit Data Analysis at Exascale

Bennett, Janine C.; Pinar, Ali P.; Seshadhri, C.; Thompson, David; Salloum, Maher S.; Bhagatwala, Ankit; Chen, Jacqueline H.

Post-Moore's law scaling is creating a disruptive shift in simulation workflows, as saving the entirety of raw data to persistent storage becomes expensive. We are moving away from a post-process centric data analysis paradigm towards a concurrent analysis framework, in which raw simulation data is processed as it is computed. Algorithms must adapt to machines with extreme concurrency, low communication bandwidth, and high memory latency, while operating within the time constraints prescribed by the simulation. Furthermore, input parameters are often data dependent and cannot always be prescribed. The study of sublinear algorithms is a recent development in theoretical computer science and discrete mathematics that has significant potential to provide solutions for these challenges. The approaches of sublinear algorithms address the fundamental mathematical problem of understanding global features of a data set using limited resources. These theoretical ideas align with practical challenges of in-situ and in-transit computation where vast amounts of data must be processed under severe communication and memory constraints. This report details key advancements made in applying sublinear algorithms in-situ to identify features of interest and to enable adaptive workflows over the course of a three-year LDRD. Prior to this LDRD, there was no precedent in applying sublinear techniques to large-scale, physics-based simulations. This project has definitively demonstrated their efficacy at mitigating high performance computing challenges and highlighted the rich potential for follow-on research opportunities in this space.

Quantifying sampling noise and parametric uncertainty in atomistic-to-continuum simulations using surrogate models

Multiscale Modeling and Simulation

Salloum, Maher S.; Sargsyan, Khachik S.; Jones, Reese E.; Najm, Habib N.; Debusschere, Bert D.

We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. The uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
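
The surrogate step can be caricatured in Python with a least-squares polynomial chaos fit (the paper constructs its PCEs via Bayesian inference; the "MD" output below is a made-up noisy function of a standard-normal input):

    import numpy as np
    from numpy.polynomial.hermite_e import hermevander, hermeval

    rng = np.random.default_rng(4)
    xi = rng.standard_normal(400)                      # standard-normal germ
    y = np.tanh(xi) + 0.05 * rng.standard_normal(400)  # noisy "MD" output

    deg = 5                                            # truncated PCE order
    coeffs, *_ = np.linalg.lstsq(hermevander(xi, deg), y, rcond=None)

    xi_test = np.linspace(-2.0, 2.0, 5)
    print("PCE surrogate:", np.round(hermeval(xi_test, coeffs), 3))
    print("truth (tanh): ", np.round(np.tanh(xi_test), 3))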

Inference and uncertainty propagation of atomistically informed continuum constitutive laws, part 2: Generalized continuum models based on Gaussian processes

International Journal for Uncertainty Quantification

Salloum, Maher S.; Templeton, Jeremy A.

Constitutive models in nanoscience and engineering often poorly represent the physics due to significant deviations in model form from their macroscale counterparts. In Part 1 of this study, this problem was explored by considering a continuum scale heat conduction constitutive law inferred directly from molecular dynamics (MD) simulations. In contrast, this work uses Bayesian inference based on the MD data to construct a Gaussian process emulator of the heat flux as a function of temperature and temperature gradient. No assumption of Fourier-like behavior is made, requiring alternative approaches to assess the well-posedness and accuracy of the emulator. Validation is provided by comparing continuum scale predictions using the emulator model against a larger all-MD simulation representing the true solution. The results show that a Gaussian process emulator of the heat conduction constitutive law produces an empirically unbiased prediction of the continuum scale temperature field for a variety of time scales, which was not observed when Fourier’s law is assumed to hold. Finally, uncertainty is propagated in the continuum model and quantified in the temperature field so the impact of errors in the model on continuum quantities can be determined.
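
A minimal Python sketch of such an emulator using scikit-learn's Gaussian process regressor: heat flux is learned directly as a function of temperature and temperature gradient, with no Fourier-law form imposed (the training data here are synthetic, not MD):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(5)
    T = 300.0 + 100.0 * rng.random(80)           # temperature samples (K)
    gradT = 2e7 * (rng.random(80) - 0.5)         # temperature gradient (K/m)
    # Synthetic "MD" flux: weakly nonlinear in T, noisy from finite sampling.
    flux = -1.5 * gradT * (1.0 + 1e-3 * (T - 300.0)) + 1e5 * rng.standard_normal(80)

    kernel = RBF([50.0, 5e6], length_scale_bounds=(1e-2, 1e9)) + WhiteKernel(1e-4)
    gp = GaussianProcessRegressor(kernel, normalize_y=True)
    gp.fit(np.column_stack([T, gradT]), flux)

    mean, std = gp.predict(np.array([[350.0, 1e7]]), return_std=True)
    print(f"emulated flux: {mean[0]:.3e} +/- {std[0]:.1e} W/m^2")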

Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties

Salloum, Maher S.; Gharagozloo, Patricia E.

Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Second, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede hydrogen extraction.
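
The empirical-modeling step, fitting Arrhenius kinetics to scattered rate data and reading uncertainty off the fit covariance, can be sketched in Python (the rate data below are synthetic stand-ins for the compiled literature values):

    import numpy as np
    from scipy.optimize import curve_fit

    R = 8.314                                # gas constant, J/(mol K)

    def arrhenius(T, lnA, Ea):
        return lnA - Ea / (R * T)            # ln k = ln A - Ea / (R T)

    rng = np.random.default_rng(6)
    T = np.linspace(500.0, 900.0, 25)
    ln_k = arrhenius(T, 20.0, 1.2e5) + 0.4 * rng.standard_normal(T.size)  # scatter

    (lnA, Ea), cov = curve_fit(arrhenius, T, ln_k, p0=(15.0, 1e5))
    print(f"ln A = {lnA:.1f} +/- {np.sqrt(cov[0, 0]):.1f}")
    print(f"Ea   = {Ea / 1e3:.0f} +/- {np.sqrt(cov[1, 1]) / 1e3:.0f} kJ/mol")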
