Here, we develop a stochastic optimization model for scheduling a hybrid solar-battery storage system. Solar power in excess of the promised output can be used to charge the battery, while shortfalls are met by discharging the battery. We ensure reliable operations by using a joint chance constraint. Models with a few hundred scenarios are relatively tractable; for larger models, we demonstrate how a Lagrangian relaxation scheme provides improved results. To further accelerate the Lagrangian scheme, we embed the progressive hedging algorithm within the subgradient iterations of the Lagrangian relaxation. Lastly, we investigate several enhancements of the progressive hedging algorithm and find that bundling scenarios yields the best bounds.
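For orientation, a joint chance constraint of this kind requires the promised output to be met in every period simultaneously with probability at least 1 − ε. The formulation below is a generic sketch with illustrative notation, not the model from the paper:

\[
\Pr\!\left[\; P^{\mathrm{solar}}_{t}(\xi) + P^{\mathrm{dis}}_{t} - P^{\mathrm{chg}}_{t} \;\ge\; P^{\mathrm{promise}}_{t}, \quad \forall\, t = 1,\dots,T \;\right] \;\ge\; 1 - \epsilon,
\]

where ξ denotes the random solar resource and the charge/discharge decisions are further constrained by the battery's state-of-charge dynamics.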
The development of scientific software is, more than ever, critical to the practice of science, and this is accompanied by a trend towards more open and collaborative efforts. Unfortunately, there has been little investigation into who is driving the evolution of such scientific software or how the collaboration happens. In this paper, we address this problem. We present an extensive analysis of seven open-source scientific software projects in order to develop an empirically informed model of the development process. This analysis was complemented by a survey of 72 scientific software developers. In the majority of the projects, we found senior research staff (e.g., professors) to be responsible for half or more of the commits (an average commit share of 72%) and heavily involved in architectural concerns (seniors were more likely to interact with files related to the build system, project metadata, and developer documentation). Juniors (e.g., graduate students) also contribute substantially; in one studied project, juniors made almost 100% of the commits. Among juniors, graduate students had the longest contribution periods (1.72 years of commit activity, compared to 0.98 years for postdocs and 4 months for undergraduates). Moreover, we found that third-party contributors are scarce, contributing for just one day to the project. The results from this study aim to help scientists better understand their own projects, communities, and contributors' behavior, while paving the way for future software engineering research.
The modern HPC scientific software ecosystem is instrumental to the practice of science. However, software can only fulfill that role if it is readily usable. In this position paper, we discuss usability in the context of scientific software development, how usability engineering can be incorporated into current practice, and how software engineering research can help satisfy that objective.
We describe details of a general Mie-Gruneisen equation of state and its numerical implementation. The equation of state contains a polynomial Hugoniot reference curve, an isentropic expansion and a tension cutoff.
The flexibility of network communication within Internet protocols is fundamental to network function, yet this same flexibility permits the possibility of malicious use. In particular, malicious behavior can masquerade as benign traffic, thus evading systems designed to catch misuse of network resources. However, perfect imitation of benign traffic is difficult, meaning that small unintentional deviations from normal can occur. Identifying these deviations requires that the defenders know what features reveal malicious behavior. Herein, we present an application of compression-based analytics to network communication that reduces the need for defenders to know a priori what features they need to examine. Motivating the approach is the idea that compression relies on the ability to discover and make use of predictable elements in information, thereby highlighting any deviations between expected and received content. We introduce a so-called 'slice compression' score to identify malicious or anomalous communication in two ways. First, we apply normalized compression distances to classification problems and discuss methods for reducing noise by excising application content (as opposed to protocol features) using slice compression. Second, we present a new technique for anomaly detection, referred to as slice compression for anomaly detection. A diverse collection of datasets is analyzed to illustrate the efficacy of the proposed approaches. While our focus is network communication, other types of data are also considered to illustrate the generality of the method.
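The classification experiments build on the standard normalized compression distance (NCD). The sketch below shows a minimal NCD computation using zlib; the slice-compression score itself, which excises application content before compression, is defined in the paper and is not reproduced here.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.

    Uses zlib as the compressor; any off-the-shelf compressor works.
    Values near 0 indicate highly similar content; values near 1
    indicate dissimilar content.
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy usage: compare a "baseline" protocol exchange to two candidates.
baseline = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
benign   = b"GET /about.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
odd      = bytes(range(256)) * 4  # high-entropy payload

print(ncd(baseline, benign))  # relatively small distance
print(ncd(baseline, odd))     # larger distance
```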
Creating images of high-speed projectiles has been a topic of interest for almost a century. Historically, ballistics ranges have used air-gap flash photography or high-speed video cameras to capture this type of data. Air-gap flash photography provides a single image at each camera station. Using modern high-speed imagers provides accurate data but is cost prohibitive for a long-distance range. This paper presents a camera system capable of capturing three-dimensional data of high-speed projectiles over a long distance. The system uses relatively low-cost cameras set up in a stereo vision configuration and high-speed strobe lights to create multi-exposure images. Each pulse of light captures the position of the projectile as it passes the camera. For each position captured in the image, the three-dimensional position of the projectile is found using triangulation geometry. The linear velocity of the projectile is calculated by combining the position of the projectile with timing data. Two test series were conducted. The first test series compared different cameras and backdrops for the camera system. The second test series captured position data for two different shapes of high-speed tumbling projectiles.
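As a point of reference, stereo triangulation of each strobe exposure can be done with a standard linear (DLT) solve using the two calibrated camera projection matrices. The sketch below uses synthetic cameras for illustration; it is not the calibration or geometry used in the paper.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of a single 3-D point.

    P1, P2 : (3, 4) camera projection matrices from stereo calibration.
    uv1, uv2 : (u, v) pixel coordinates of the same strobe exposure
               of the projectile seen by each camera.
    Returns the 3-D point in the calibration frame.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Minimal check with two synthetic cameras looking down the z-axis,
# the second offset 0.5 m along x (a hypothetical stereo baseline).
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))  # approximately [0.2, -0.1, 10.0]

# Linear velocity follows from two successive exposures separated by dt:
# v = np.linalg.norm(X_later - X_earlier) / dt
```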
Sandia National Laboratories flew its Facility for Advanced RF and Algorithm Development X-Band (9.6-GHz center frequency), fully polarimetric synthetic aperture radar (PolSAR) in VideoSAR mode to collect complex-valued SAR imagery before, during, and after the sixth Source Physics Experiment's (SPE-6) underground explosion. The VideoSAR products generated from the data sets include 'movies' of single- and quad-polarization coherence maps, magnitude imagery, and polarimetric decompositions. Residual defocus, due to platform motion during data acquisition, was corrected with a digital elevation model-based autofocus algorithm. We generated and exploited the VideoSAR image products to characterize the surface movement effects caused by the underground explosion. Unlike seismic sensors, which measure local area seismic waves using sparse spacing and subterranean positioning, these VideoSAR products captured high-spatial resolution, 2-D, time-varying surface movement. The results from the fifth SPE (SPE-5) used single-polarimetric VideoSAR data. In this paper, we present single-polarimetric and fully polarimetric VideoSAR results while monitoring the SPE-6 underground chemical explosion. We show that fully polarimetric VideoSAR imaging provides a unique, coherent, time-varying measure of the surface expression of the SPE-6 underground chemical explosion. We include new surface characterization results from the measured PolSAR SPE-6 data via H/A/α polarimetric decomposition.
We extend a phase-field/gradient damage formulation for cohesive fracture to the dynamic case. The model is characterized by a regularized fracture energy that is linear in the damage field, as well as non-polynomial degradation functions. Two categories of degradation functions are examined, and a process to derive a given degradation function based on a local stress–strain response in the cohesive zone is presented. The resulting model is characterized by a linear elastic regime prior to the onset of damage, and controlled strain-softening thereafter. The governing equations are derived according to macro- and microforce balance theories, naturally accounting for the irreversible nature of the fracture process by introducing suitable constraints for the kinetics of the underlying microstructural changes. The model is complemented by an efficient staggered solution scheme based on an augmented Lagrangian method. Numerical examples demonstrate that the proposed model is a robust and effective method for simulating cohesive crack propagation, with particular emphasis on dynamic fracture.
Alternative architectures for imaging devices which fuse the optical design with an algorithmic component enable inexpensive sensing systems optimized for specific classification tasks. Leveraging past work in task-specific compressive devices, this work seeks to improve upon previous designs of optical and algorithmic elements. We achieve this through use of genetic algorithms to enforce conditions upon the optimization phase of a computational imaging system. Through enforcement of binary sampling or discrete-valued outputs of a system measurement matrix, it is possible to simplify optical hardware design while achieving high task-specific performance.
Zirconium tetrachloride was synthesized from the reaction between zirconium metal and chlorine gas at 300 °C and was analyzed by electron impact mass spectrometry (EI-MS). Substantial fragmentation products of ZrCl4 were observed in the mass spectra, with ZrCl3 being the most abundant species, followed by ZrCl2, ZrCl, and Zr. The predicted geometry and kinetic stability of these fragments were investigated by density functional theory (DFT) calculations. Energetics of the dissociation processes support ZrCl3 as the most stable fragment, while ZrCl and ZrCl2 are the least stable.
This work proposes a machine-learning framework for constructing statistical models of errors incurred by approximate solutions to parameterized systems of nonlinear equations. These approximate solutions may arise from early termination of an iterative method, a lower-fidelity model, or a projection-based reduced-order model, for example. The proposed statistical model comprises the sum of a deterministic regression-function model and a stochastic noise model. The method constructs the regression-function model by applying regression techniques from machine learning (e.g., support vector regression, artificial neural networks) to map features (i.e., error indicators such as sampled elements of the residual) to a prediction of the approximate-solution error. The method constructs the noise model as a mean-zero Gaussian random variable whose variance is computed as the sample variance of the approximate-solution error on a test set; this variance can be interpreted as the epistemic uncertainty introduced by the approximate solution. This work considers a wide range of feature-engineering methods, data-set-construction techniques, and regression techniques that aim to ensure that (1) the features are cheaply computable, (2) the noise model exhibits low variance (i.e., low epistemic uncertainty introduced), and (3) the regression model generalizes to independent test data. Numerical experiments performed on several computational-mechanics problems and types of approximate solutions demonstrate the ability of the method to generate statistical models of the error that satisfy these criteria and significantly outperform more commonly adopted approaches for error modeling.
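As a schematic illustration of the two-part statistical model (a deterministic regression function plus a mean-zero Gaussian noise term), the sketch below uses synthetic data and scikit-learn's support vector regression; the feature definitions, data sizes, and hyperparameters are illustrative and not those of the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical data: each row holds cheap error indicators (e.g., sampled
# residual entries) for one parameter instance; y is the measured
# approximate-solution error for that instance.
X_train = rng.normal(size=(200, 8))
y_train = np.abs(X_train @ np.linspace(0.1, 1.0, 8)) + 0.05 * rng.normal(size=200)
X_test = rng.normal(size=(100, 8))
y_test = np.abs(X_test @ np.linspace(0.1, 1.0, 8)) + 0.05 * rng.normal(size=100)

# Deterministic regression-function model: indicators -> predicted error.
reg = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

# Stochastic noise model: mean-zero Gaussian whose variance is the sample
# variance of the prediction residual on held-out data, interpreted as the
# epistemic uncertainty introduced by the approximate solution.
residual = y_test - reg.predict(X_test)
noise_var = residual.var(ddof=1)

def error_model(x):
    """Statistical error estimate: deterministic prediction plus Gaussian noise variance."""
    return reg.predict(x.reshape(1, -1))[0], noise_var
```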
Many terrestrial and astrophysical plasmas encompass very large dynamical ranges in space and time, which are not accessible by direct numerical simulations. Thus, idealized subvolumes are often used to study small-scale effects including the dynamics of magnetized turbulence. A significant aspect of magnetized turbulence is the transfer of energy from large to small scales, in part through the operation of a turbulent cascade. In this paper, we present a new shell-to-shell energy transfer analysis framework for understanding energy transfer within magnetized turbulence and, in particular, through the cascade. We demonstrate the viability of this framework through application to a series of isothermal subsonic and supersonic simulations of compressible magnetized turbulence and utilize results from this analysis to establish a nonlinear benchmark for compressible magnetized turbulence in the subsonic regime. We further study how the autocorrelation time of the driving and its normalization systematically change properties of compressible magnetized turbulence. For example, we find that δ-in-time forcing with a constant energy injection leads to a steeper slope in the kinetic energy spectrum and less efficient small-scale dynamo action. We examine how these results can impact diagnostics relevant to a range of terrestrial and astrophysical applications.
Due to challenges in generating high-quality 3D speckle patterns for Digital Volume Correlation (DVC) strain measurements, DVC experiments often utilize the intrinsic texture and contrast of composite microstructures. One common deficiency of these natural speckle patterns is their poor durability under large deformations, which can lead to decorrelation and inaccurate strain measurements. Using syntactic foams as a model material, the effects of speckle pattern degradation on the accuracy of DVC displacement and strain measurements are assessed with both experimentally-acquired and numerically-generated images. It is shown that measurement error can be classified into two regimes as a function of the percentage of markers that have disappeared from the speckle pattern. For damage below a critical level, displacement and strain error remained near the noise floors of less than 0.05 voxels and 100 με, respectively; above this level, error rapidly increased to unacceptable levels above 0.2 voxels and 10,000 με. This transition occurred after 30%–40% of the speckles disappeared, depending on characteristics of the speckle pattern and its degradation mechanisms. Furthermore, these results suggest that accurate DVC measurements can be obtained in many types of fragile materials despite severe damage to the speckle pattern.
Raudales, David; Bliss, Donald B.; Michalis, Krista A.; Rouse, Jerry W.; Franzoni, Linda P.
Analytical solutions are presented for broadband sound fields in three rectangular enclosures with absorption applied on the floor and ceiling, rigid sidewalls, and a vertically oriented dipole source. The solutions are intended to serve as benchmarks that can be used to assess the performance of broadband techniques, particularly energy-based methods, in a relatively straightforward configuration with precisely specified boundary conditions. A broadband Helmholtz solution is developed using a frequency-by-frequency modal approach to determine the exact band averaged mean-square pressures along spatial trajectories within each enclosure. Due to the specific choice of enclosure configuration and absorption distribution, an approximate specular solution can be obtained through a summation of uncorrelated image sources. Comparisons between the band averaged Helmholtz solution and the uncorrelated image solution reveal excellent agreement for a wide range of absorption levels and improve the understanding of correlation effects in broadband sound fields. Finally, a boundary element solution with diffuse boundaries is also presented, which produces consistently higher mean-square pressures in comparison with the specular solution, emphasizing the careful attention that must be placed on correctly modeling reflecting boundary conditions and demonstrating the errors that can result from assuming a Lambertian surface.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy Office of Nuclear Energy, Office of Spent Fuel and Waste Disposition (SFWD), has been conducting research and development on generic deep geologic disposal systems (i.e., geologic repositories). This report describes specific activities in fiscal year (FY) 2019 associated with the FY19 Geologic Disposal Safety Assessment (GDSA) Repository Systems Analysis (RSA) work package within the SFWST Campaign. The overall objective of the GDSA RSA work package is to develop generic deep geologic repository concepts and system performance assessment (PA) models in several host-rock environments, and to simulate and analyze these generic repository concepts and models using the GDSA Framework toolkit and other tools as needed.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel and Waste Disposition (SFWD) is conducting research and development (R&D) on deep geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). R&D addressing the disposal of SNF/HLW in the U.S. is currently generic (i.e., "non-site-specific") in scope, following the suspension of the Yucca Mountain Repository Project in 2010. However, to prepare for the eventuality of a repository siting process, the former Used Fuel Disposition Campaign (UFDC) of DOE-NE, which was succeeded by the SFWST Campaign, formulated an R&D Roadmap in 2012 outlining generic R&D activities and their priorities appropriate for developing safety cases and associated performance assessment (PA) models for generic deep geologic repositories in several potential host-rock environments in the contiguous United States. This 2012 UFDC Roadmap also identified the importance of re-evaluating priorities in future years as knowledge is gained from the DOE's ongoing R&D activities.
This report discusses the fiscal year 2019 (FY19) design, implementation, and preliminary data interpretation plan for a set of borehole heater tests called the brine availability tests in salt (BATS), which is funded by the DOE Office of Nuclear Energy (DOE-NE) at the Waste Isolation Pilot Plant (WIPP). The organization of BATS is outlined in Project Plan: Salt In-Situ Heater Test. An early design of the field test is laid out in Kuhlman et al., with extensive references to previous field tests that illustrate aspects of the present test. The previous test plan by Stauffer et al. places BATS in the context of a multi-year testing strategy involving tests of multiple scales and processes, possibly culminating in a drift-scale disposal demonstration.
Smart grid technologies and wide-spread installation of advanced metering infrastructure (AMI) equipment present new opportunities for the use of machine learning algorithms paired with big data to improve distribution system models. Accurate models are critical to the continuing integration of distributed energy resources (DER) into the power grid; however, the low-voltage models often contain significant errors. This paper proposes a novel spectral clustering approach for validating and correcting customer electrical phase labels in existing utility models using the voltage timeseries produced by AMI equipment. Spectral clustering is used in conjunction with a sliding window ensemble to improve the accuracy and scalability of the algorithm for large datasets. The proposed algorithm is tested using real data to validate or correct over 99% of customer phase labels within the primary feeder under consideration. This represents over a 94% reduction in error, given that 9% of customers were predicted to have incorrect phase labels.
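A minimal sketch of the core idea follows: cluster customers by the similarity of their AMI voltage time series in each sliding window. The window length is hypothetical and the paper's ensemble step that reconciles labels across windows is omitted.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def window_phase_votes(voltages, n_phases=3, window=96, step=96):
    """Cluster customers into phase groups window-by-window.

    voltages : (n_customers, n_timesteps) array of AMI voltage magnitudes.
    Returns a list of label arrays, one per window, that an ensemble step
    would subsequently reconcile (label alignment is omitted here).
    """
    votes = []
    for start in range(0, voltages.shape[1] - window + 1, step):
        chunk = voltages[:, start:start + window]
        # Voltage time series of customers on the same phase are strongly
        # correlated; shift correlation to [0, 1] for use as an affinity.
        affinity = (np.corrcoef(chunk) + 1.0) / 2.0
        labels = SpectralClustering(
            n_clusters=n_phases, affinity="precomputed", random_state=0
        ).fit_predict(affinity)
        votes.append(labels)
    return votes
```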
As the power grid incorporates increasing amounts of distributed energy resources (DER) that provide new generation sources, new opportunities are created for improving operation of the grid while large challenges also arise for preserving grid reliability and security. To improve grid performance, DERs can be utilized to provide important support functionality, such as supporting frequency and voltage levels, especially if they are assisted by communication schemes as part of an advanced distribution management system (ADMS). Unfortunately, such connectivity and grid support functionality also creates additional cyber security risk with the potential for degradation of grid services, especially under conditions with high amounts of distributed generation. This paper will first discuss the communications needed by DERs to support system and interoperability objectives, as well as the security requirements and impact of securing these communications. Some common security mechanisms are discussed in relation to DERs, and a simulated 15-bus model of a distribution feeder is used to demonstrate aspects of the DER communications and impact to grid performance. These results help to advance understanding of the benefits, requirements, and mechanisms for securely implementing DER communications while ensuring that grid reliability is maintained.
Since the last update, Sandia has performed additional testing on COTS baseline cells, as well as initial testing of University of Michigan baseline graphite anode cells. The COTS cell testing included an extreme stress test using 12C charging, as well as a non-cooled 6C charging test to determine the effectiveness of cooling plates. The Michigan cells were cycled with increasing charge rates to determine rate capability and possible propensity for lithium plating. The COTS cells are 5 Ah NMC/Graphite pouches from Kokam with a modest 112 Wh/kg. The University of Michigan cells are ~2.6 Ah NMC/Graphite pouches made at the UM Battery Lab and were designed with a more aggressive energy density. On the horizon, we will be testing additional cells from University of Michigan, including a baseline hard carbon anode cell. We will also disassemble the graphite anode cell and continue to analyze the dQ/dV signals obtained during the graphite cell experiment to further understand evidence of possible plating. We also expect to receive cells with improved anodes for eventual testing against baselines.
Nuclear magnetic resonance (NMR) spin diffusion measurements have been widely used to estimate domain sizes in a variety of polymer materials. In cases where the domains are well-described as regular, repeating structures (e.g., lamellar, cylindrical channels, monodispersed spherical domains), the domain sizes estimated from NMR spin diffusion experiments agree with the characteristic length scales obtained from small-angle x-ray scattering and microscopy. In our laboratory, recent NMR spin diffusion experiments for hydrated sulfonated Diels Alder poly(phenylene) (SDAPP) polymer membranes have revealed that assuming a simple structural model can often misrepresent or overestimate the domain size in situations where more complex and disordered morphologies exist. Molecular dynamics simulations of the SDAPP membranes predict a complex heterogeneous hydrophilic domain structure that varies with the degree of sulfonation and hydration and is not readily represented by a simple repeating domain structure. This heterogeneous morphology results in NMR-measured domain sizes that disagree with length scales estimated from the ionomer peak in scattering experiments. Here we present numerical NMR spin diffusion simulations that show how structural disorder in the form of domain size distributions or domain clustering can significantly impact the spin diffusion analysis and estimated domain sizes. Simulations of NMR spin diffusion with differing domain size distributions and domain clustering are used to identify the impact of the heterogeneous domain structure and highlight the limitations of using NMR spin diffusion techniques for irregular structures.
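For orientation, numerical spin diffusion simulations of this kind are typically based on the diffusion equation for the nuclear magnetization with a spatially varying spin-diffusion coefficient; the form below is the standard one and is shown only for context, not as the specific model used here:

\[
\frac{\partial M(\mathbf{r},t)}{\partial t} \;=\; \nabla \cdot \big( D(\mathbf{r})\, \nabla M(\mathbf{r},t) \big),
\]

where M is the local magnetization and D takes different values in the hydrophilic and hydrophobic domains.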
Photoluminescent spectral peak positions are known to shift as a function of mechanical stress state. This has been demonstrated at macroscales to determine mean stress and at mesoscales to determine mean stress and a quantity related to shear stress. Here, we propose a method to utilize traction-free surface conditions and knowledge of material orientation to solve for two in-plane displacement fields given two spectral peak positions measured at a grid of points. It is then possible to calculate the full stress tensor at each measurement point. This is a significant advancement over the previous ability to measure one or two stress quantities. We validate the proposed method using a simple, two-grain geometry and show that it produces the same mean stress and shear stress measure as the existing direct method. Furthermore, we demonstrate determination of the full stress field in a polycrystalline alumina specimen.
We propose here new projection methods for treating near-incompressibility in small and large deformation elasticity and plasticity within the framework of particle and meshfree methods. Using the $\bar{B}$ and $\bar{F}$ techniques as our point of departure, we develop projection methods for the conforming reproducing kernel method and the immersed-particle or material point-like methods. The methods are based on the projection of the dilatational part of the appropriate measure of deformation onto lower-dimensional approximation spaces, according to the traditional $\bar{B}$ and $\bar{F}$ approaches, but tailored to meshfree and particle methods. The presented numerical examples exhibit reduced stress oscillations and are free of volumetric locking and hourglassing phenomena.
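For orientation, the classical $\bar{F}$ construction that serves as the point of departure replaces the volumetric part of the deformation gradient with a projected counterpart (the notation here is generic):

\[
\bar{\mathbf{F}} = \left(\frac{\bar{J}}{J}\right)^{1/3} \mathbf{F}, \qquad J = \det \mathbf{F}, \qquad \bar{J} = \pi(J),
\]

where $\pi(\cdot)$ denotes projection onto a lower-dimensional approximation space; the present work tailors this projection (and the analogous $\bar{B}$ strain projection in the small-deformation case) to meshfree and particle approximations.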
Long-term seismic monitoring networks are well positioned to leverage advances in machine learning because of the abundance of labeled training data that curated event catalogs provide. We explore the use of convolutional and recurrent neural networks to accomplish discrimination of explosive and tectonic sources at local distances. Using a 5-year event catalog generated by the University of Utah Seismograph Stations, we train models to produce automated event labels using 90-s event spectrograms from three-component and single-channel sensors. Both network architectures replicate analyst labels with accuracy above 98%. Most commonly, model error is the result of label error (70% of cases). Accounting for mislabeled events (~1% of the catalog), accuracy for both models increases to above 99%. Classification accuracy remains above 98% for shallow tectonic events, indicating that spectral characteristics controlled by event depth do not play a dominant role in event discrimination.
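A minimal sketch of a convolutional spectrogram classifier of the kind described, written with PyTorch; the layer sizes, input dimensions, and two-class output are illustrative rather than the architecture used in the study.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Minimal CNN for explosion-vs-tectonic discrimination.

    Input: (batch, channels, freq_bins, time_bins) spectrograms, where
    channels is 3 for three-component sensors or 1 for single-channel data.
    """
    def __init__(self, channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: logits for a batch of eight 128x90 three-component spectrograms.
logits = SpectrogramCNN()(torch.randn(8, 3, 128, 90))
```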
Survey data from the Energy Information Administration (EIA) was combined with data from the Environmental Protection Agency (EPA) to explore ways in which operations might impact water use intensity (both withdrawals and consumption) at thermoelectric power plants. Two disparities in cooling and power systems operations were identified that could impact water use intensity: (1) Idling Gap - where cooling systems continue to operate when their boilers and generators are completely idled; and (2) Cycling Gap - where cooling systems operate at full capacity, while their associated boiler and generator systems cycle over a range of loads. Analysis of the EIA and EPA data indicated that cooling systems operated on average 13% more than their corresponding power system (Idling Gap), while power systems operated on average 30% below full load when the boiler was reported as operating (Cycling Gap). Regression analysis was then performed to explore whether the degree of power plant idling/cycling could be related to the physical characteristics of the plant, its environment or time of year. While results suggested that individual power plants' operations were unique, weak trends consistently pointed to a plant's place on the dispatch curve as influencing patterns of cooling system, boiler, and generator operation. This insight better positions us to interpret reported power plant water use data as well as improve future water use projections.
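As a rough illustration of how the two gaps can be computed from joined monthly records, the sketch below uses pandas with hypothetical column names and simplified definitions (cooling hours in excess of generator hours; one minus the capacity factor while the generator is reported operating).

```python
import pandas as pd

# Hypothetical monthly records joined from EIA and EPA reporting;
# column names and values are illustrative only.
df = pd.DataFrame({
    "cooling_hours":   [700, 650, 720],
    "generator_hours": [610, 590, 640],
    "gross_load_mwh":  [250_000, 240_000, 260_000],
    "nameplate_mw":    [500, 500, 500],
})

# Idling gap: cooling-system operating hours in excess of generator hours.
df["idling_gap"] = (df["cooling_hours"] - df["generator_hours"]) / df["generator_hours"]

# Cycling gap: how far below full load the power system ran while operating.
capacity_factor_when_on = df["gross_load_mwh"] / (df["nameplate_mw"] * df["generator_hours"])
df["cycling_gap"] = 1.0 - capacity_factor_when_on
print(df[["idling_gap", "cycling_gap"]])
```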
Keith, Jordan R.; Rebello, Nathan J.; Cowen, Benjamin J.; Ganesan, Venkat
We performed long-time all-atom molecular dynamics simulations of cationic polymerized ionic liquids with eight mobile counterions, systematically varying size and shape to probe their influence on the decoupling of conductivity from polymer segmental dynamics. We demonstrated rigorous identification of the dilatometric glass-transition temperature (Tg) for polymerized ionic liquids using an all-atom force field. Polymer segmental relaxation rates are presumed to be consistent for different materials at the same glass-transition-normalized temperature (Tg/T), allowing us to extract a relative order of decoupling by examining conductivity at the same Tg/T. Size, or ionic volume, cannot fully explain decoupling trends, but within certain geometric and chemical-specific classes, small ions generally show a higher degree of decoupling. This size effect is not universal and appears to be overcome when structural results reveal substantial coordination delocalization. We also reveal a universal inverse correlation between ion-association structural relaxation time and absolute conductivity for these polymerized ionic liquids, supporting the ion-hopping interpretation of ion mobility in polymerized ionic liquids.
Nonlocal modeling has come a long way. Researchers in the continuum mechanics and computational mechanics communities increasingly recognize that nonlocality is critical in realistic mathematical models of many aspects of the physical world. Physical interaction over a finite distance is fundamental at the atomic and nanoscale level, in which atoms and molecules interact through multibody potentials. Long-range forces partially determine the mechanics of surfaces and the behavior of dissolved molecules and suspended particles in a fluid. Nonlocality is therefore a vital feature of any continuum model that represents these physical systems at small length scales.
Glassy silicates are substantially weaker when in contact with aqueous electrolyte solutions than in vacuum due to chemical interactions with preexisting cracks. To investigate this silicate weakening phenomenon, classical molecular dynamics (MD) simulations of silica fracture were performed using the bond-order based, reactive force field ReaxFF. Four different environmental conditions were investigated: vacuum, water, and two salt solutions (1M NaCl, 1M NaOH) that form relatively acidic and basic solutions, respectively. Any aqueous environment weakens the silica, with NaOH additions resulting in the largest decreases in the effective fracture toughness (eKIC) of silica or the loading rate at which the fracture begins to propagate. The basic solution leads to higher surface deprotonation, narrower radius of curvature of the crack tip, and greater weakening of the silica, compared with the more acidic environment. The results from the two different electrolyte solutions correspond to phenomena observed in experiments and provide a unique atomistic insight into how anions alter the chemical-mechanical fracture response of silica.
Nanocrystalline metals typically have high fatigue strengths but low resistance to crack propagation. Amorphous intergranular films are disordered grain boundary complexions that have been shown to delay crack nucleation and slow crack propagation during monotonic loading by diffusing grain boundary strain concentrations, which suggests they may also be beneficial for fatigue properties. To probe this hypothesis, in situ transmission electron microscopy fatigue cycling is performed on Cu-1 at.% Zr thin films thermally treated to have either only ordered grain boundaries or amorphous intergranular films. The sample with only ordered grain boundaries experienced grain coarsening at crack initiation followed by unsteady crack propagation and extensive nanocracking, whereas the sample containing amorphous intergranular films had no grain coarsening at crack initiation followed by steady crack propagation and distributed plastic activity. Microstructural design for control of these behaviors through simple thermal treatments can allow for the improvement of nanocrystalline metal fatigue toughness.
Window functions provide a base for the construction of approximation functions in many meshfree methods. They control the smoothness and extent of the approximation functions and are commonly defined using Euclidean distances which helps eliminate the need for a meshed discretization, simplifying model development for some classes of problems. However, for problems with complicated geometries such as nonconvex or multi-body domains, poor solution accuracy and convergence can occur unless the extents of the window functions, and thus approximation functions, are carefully controlled, often a time consuming or intractable task. In this paper, we present a method to provide more control in window function design, allowing efficient and systematic handling of complex geometries. “Conforming” window functions are constructed using Bernstein–Bézier splines defined on local triangulations with constraints imposed to control smoothness. Graph distances are used in conjunction with Euclidean metrics to provide adequate information for shaping the window functions. The conforming window functions are demonstrated using the Reproducing Kernel Particle Method showing improved accuracy and convergence rates for problems with challenging geometries. Conforming window functions are also demonstrated as a means to simplify the imposition of essential boundary conditions.
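For illustration, graph (geodesic) distances of the kind used alongside Euclidean metrics to shape conforming window functions can be computed by accumulating Euclidean edge lengths along a nodal connectivity graph; the sketch below uses SciPy and is generic, not the paper's implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def graph_distances(nodes, edges):
    """Geodesic (graph) distances between meshfree nodes.

    nodes : (n, dim) array of nodal coordinates.
    edges : list of (i, j) pairs from a local triangulation/connectivity.
    Euclidean edge lengths are accumulated along shortest paths, so two
    nodes separated by a notch in a nonconvex domain receive a large
    distance even if they are close in the Euclidean metric.
    """
    n = len(nodes)
    i, j = np.array(edges).T
    w = np.linalg.norm(nodes[i] - nodes[j], axis=1)
    adj = csr_matrix((np.concatenate([w, w]),
                      (np.concatenate([i, j]), np.concatenate([j, i]))),
                     shape=(n, n))
    return shortest_path(adj, method="D", directed=False)

# Toy usage: four nodes along an "L"; nodes 0 and 3 are Euclidean-close
# but far apart along the connectivity graph.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(graph_distances(nodes, [(0, 1), (1, 2), (2, 3)])[0, 3])  # 3.0, not 1.0
```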
We present a formulation to simultaneously invert for a heterogeneous shear modulus field and traction boundary conditions in an incompressible linear elastic plane stress model. Our approach utilizes scalable deterministic methods, including adjoint-based sensitivities and quasi-Newton optimization, to reduce the computational requirements for large-scale inversion with partial differential equation (PDE) constraints. Here, we address the use of regularization for such formulations and explore the use of different types of regularization for the shear modulus and boundary traction. We apply this PDE-constrained optimization algorithm to a synthetic dataset to verify the accuracy in the reconstructed parameters, and to experimental data from a tissue-mimicking ultrasound phantom. In all of these examples, we compare inversion results from full-field and sparse data measurements.
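The structure of the optimization problem (data misfit plus regularization, minimized with a quasi-Newton method using adjoint-style gradients) can be illustrated with a small linear toy problem; the forward operator, regularization, and weights below are hypothetical stand-ins for the PDE model and are not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy stand-in for the PDE solve: a linear forward operator G mapping a
# discretized shear-modulus field m to measured displacements d.
n = 40
G = rng.normal(size=(60, n))
m_true = 1.0 + 0.5 * np.sin(np.linspace(0, 3 * np.pi, n))
d = G @ m_true + 0.01 * rng.normal(size=60)

D = np.diff(np.eye(n), axis=0)   # first-difference regularization operator
alpha = 1e-2                     # regularization weight

def objective(m):
    r = G @ m - d
    return 0.5 * r @ r + 0.5 * alpha * np.sum((D @ m) ** 2)

def gradient(m):
    # For this linear toy the adjoint-based gradient reduces to G^T r;
    # in the PDE setting this product is evaluated via an adjoint solve.
    return G.T @ (G @ m - d) + alpha * D.T @ (D @ m)

result = minimize(objective, np.ones(n), jac=gradient, method="L-BFGS-B")
m_est = result.x  # reconstructed field
```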
The intent of this document is to assist the programmer in understanding details of contemporary and Exascale hardware system design and how these designs provide opportunities and place constraints on next-generation simulation software design. We attempt to clarify hardware organization and component details for our most current and Exascale systems to help program developers understand how software needs to change in order to take best advantage of the performance available. Exascale success is specifically defined for ECP as a 50x improvement over baseline in the aggregate "capability volume" on several KPP axes, of which raw floating point performance is only one, but also includes characteristics such as problem size, system memory size, node memory size, power, and efficiency. This multi-axis approach is particularly important to understand in the context of delivered improvements in real applications, since, for instance, the floating point computation may comprise less than 10% of the actual computational work required. Given the Exascale requirements and the constraints these requirements put on the performance expectations of fundamental system components, the programmer will be forced to re-think several application implementation details in order to achieve exaflop performance on these systems. The remainder of this document aims to present more detail on Exascale era system hardware and the specific areas that the programmer should address to extract performance from these systems. We attempt to give the programmer guidance at both a high- and low-level, providing some abstract suggestions on how to refactor codes given the expected system architectures and some low-level recommendations on how to implement these modifications. We also include a section on training resources that are helpful for both programmers that are just beginning to understand code modifications for contemporary and Exascale systems and for those that have done some refactoring and are now trying to extract maximal application performance from these systems.
The primary objective of this report is to determine a viable pipe preheating system for a chloride salt blend (40% MgCl2/20% NaCl/40% KCl) that can preheat the pipe to 450 °C and withstand a maximum exposure temperature of 740 °C. Preheating involves heating the pipe to a specific desired temperature, called the preheat temperature; the temperature is then maintained by heated molten salt flowing through the piping system. This report reviews five types of pipe preheating systems, of which three (MI cable, heat tape, and ceramic fiber heaters) were found to be viable for the Gen 3 Liquid Pathway application. The report also reviews the pipe preheating efficiency of conduction versus radiant heat transfer. For each of the five types of pipe preheating systems, the report describes the system and addresses installation requirements, temperature control, a reliability survey, and pre-construction verification testing for the most applicable preheating system. In Appendix A, images from design drawings demonstrate pipe routing with the preheating system and insulation attached to the pipe, along with pipe guides and pipe supports, as designed using Caesar II finite element analysis within the SNL NSTTF Solar Power Tower.
We present an architecture-portable and performant implementation of the atmospheric dynamical core (High-Order Methods Modeling Environment, HOMME) of the Energy Exascale Earth System Model (E3SM). The original Fortran implementation is highly performant and scalable on conventional architectures using the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) programming models. We rewrite the model in C++ and use the Kokkos library to express on-node parallelism in a largely architecture-independent implementation. Kokkos provides an abstraction of a compute node or device, layout-polymorphic multidimensional arrays, and parallel execution constructs. The new implementation achieves the same or better performance on conventional multicore computers and is portable to GPUs. We present performance data for the original and new implementations on multiple platforms, on up to 5400 compute nodes, and study several aspects of the single- and multi-node performance characteristics of the new implementation on conventional CPUs (e.g., Intel Xeon), many-core CPUs (e.g., Intel Xeon Phi Knights Landing), and the Nvidia V100 GPU.
Network designers, planners, and security professionals increasingly rely on large-scale testbeds based on virtualization to emulate networks and make decisions about real-world deployments. However, there has been limited research on how well these virtual testbeds match their physical counterparts. Specifically, does the virtualization that these testbeds depend on actually capture real-world behaviors sufficiently well to support decisions? As a first step, we perform simple experiments on both physical and virtual testbeds to begin to understand where and how the testbeds differ. We set up a web service on one host and run ApacheBench against this service from a different host, instrumenting each system during these tests. We define an initial repeatable methodology (algorithm) to quantitatively compare physical and virtual testbeds. Specifically, we compare the testbeds at three levels of abstraction: application, operating system (OS), and network. For the application level, we use the ApacheBench results. For OS behavior, we compare patterns of system call orderings using Markov chains. This provides a unique visual representation of the workload and OS behavior in our testbeds. We also drill down into read-system-call behaviors and show how at one level both systems are deterministic and identical, but as we move up in abstractions that consistency declines. Finally, we use packet captures to compare network behaviors and performance. We reconstruct flows and compare per-flow and per-experiment statistics. From these comparisons, we find that the behavior of the workload in the testbeds is similar but that the underlying processes to support it do vary. The low-level network behavior can vary quite widely in packetization depending on the virtual network driver. While these differences can be important, and knowing about them will help experiment designers, the core application and OS behaviors still represent similar processes.
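As an illustration of the system-call comparison, a first-order Markov chain of call orderings can be estimated directly from a trace; the sketch below uses a toy trace and is not the instrumentation pipeline used in the study.

```python
from collections import defaultdict

def syscall_markov_chain(trace):
    """Estimate first-order transition probabilities from a system-call trace.

    trace : ordered list of system-call names (e.g., from strace output).
    Returns {call: {next_call: probability}} describing the observed
    ordering patterns, which can then be compared across testbeds.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    chain = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        chain[cur] = {nxt: c / total for nxt, c in nxts.items()}
    return chain

# Toy trace (hypothetical): a read-heavy request-handling loop.
trace = ["accept", "read", "read", "write", "close"] * 100 + ["accept"]
print(syscall_markov_chain(trace)["read"])  # {'read': 0.5, 'write': 0.5}
```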
The following table provides evidence for each implemented feature, with links to the completed merge requests (evidence that the implementation is merged into the master branch) and a link to the excerpt from the VTK-m User's Guide documenting the feature.
Laboratory measurements were made of the concentration and temperature fields of cryogenic hydrogen jets. Images of spontaneous Raman scattering from a pulsed planar laser sheet were used to measure the concentration and temperature fields from varied releases. Jets with up to 5 bar pressure, with near-liquid temperatures at the release point, were characterized in this work. This data is relevant for characterizing unintended leaks from piping connected to cryogenic hydrogen storage tanks, such as might be encountered at a hydrogen fuel cell vehicle fueling station. The average centerline mass fraction was observed to decay at a rate similar to room temperature hydrogen jets, while the half-width of the Gaussian profiles of mass fraction was observed to spread more slowly than for room temperature hydrogen. This suggests that the mixing behavior, and therefore the models, for cryogenic hydrogen may differ from those for room temperature hydrogen. Results from this work were also compared to a one-dimensional (streamwise) model. Good agreement was seen in terms of temperature and mass fraction. In subsequent work, a validated version of this model will be exercised to quantitatively assess the risk at hydrogen fueling stations with cryogenic hydrogen on-site.
Exhaust gas recirculation (EGR) can be used to mitigate knock in SI engines. However, experiments have shown that the effectiveness of various EGR constituents to suppress knock varies with fuel type and compression ratio (CR). To understand some of the underlying mechanisms by which fuel composition, octane sensitivity (S), and CR affect the knock-mitigation effectiveness of EGR constituents, the current paper presents results from a chemical-kinetics modeling study. The numerical study was conducted with CHEMKIN, imposing experimentally acquired pressure traces on a closed reactor model. Simulated conditions include combinations of three RON-98 (Research Octane Number) fuels with two octane sensitivities and distinctive compositions, three EGR diluents, and two CRs (12:1 and 10:1). The experimental results point to the important role of thermal stratification in the end-gas in smoothing the peak heat-release rate (HRR) and preventing acoustic noise. To model the effects of thermal stratification due to heat-transfer losses to the combustion-chamber walls, the initial temperature at the start of the CHEMKIN simulation was successively reduced below the adiabatic core temperature while observing changes in end-gas heat release and its effect on the reactant temperature. The results reveal that knock-prone conditions generally exhibit an increased amount of heat release in the colder temperature zones, thus counteracting the HRR-smoothing effect of the naturally occurring thermal stratification. This detrimental effect becomes more pronounced for the low-S fuel due to its significant Negative Temperature Coefficient (NTC) autoignition characteristics. This explains the generally reduced effectiveness of dilution for the low-S fuel and the higher knock intensity for the cycles with autoignition.
This document will detail a test procedure, involving bench and emulation testing, for the Module OT device developed for the joint NREL-SNL DOE CEDS project titled "Modular Security Apparatus for Managing Distributed Cryptography for Command & Control Messages on Operational Technology (OT) Networks." The aim of this document is to create the testing and evaluation protocol for the module for lab-level testing; this includes checklists and experiments for information gathering, functional testing, cryptographic implementation, public key infrastructure, key exchange/authentication, encryption, and implementation testing in the emulation environment.
Analog crossbars have the potential to reduce the energy and latency required to train a neural network by three orders of magnitude when compared to an optimized digital ASIC. The crossbar simulator, CrossSim, can be used to model device nonidealities and determine what device properties are needed to create an accurate neural network accelerator. Experimentally measured device statistics are used to simulate neural network training accuracy and compare different classes of devices, including TaOx ReRAM, LixCoO2 devices, and conventional floating-gate SONOS memories. A technique called 'Periodic Carry' can overcome device nonidealities by using a positional number system while maintaining the benefit of parallel analog matrix operations.
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Hammond, Glenn E.; Hu, Bill; Zachara, John M.
Sensitivity analysis is a vital tool in numerical modeling to identify important parameters and processes that contribute to the overall uncertainty in model outputs. We developed a new sensitivity analysis method to quantify the relative importance of uncertain model processes that contain multiple uncertain parameters. The method is based on the concepts of Bayesian networks (BNs) to account for complex hierarchical uncertainty structure of a model system. We derived a new set of sensitivity indices using the methodology of variance-based global sensitivity analysis with the Bayesian inference. The framework is capable of representing the detailed uncertainty information of a complex model system using BNs and affords flexible grouping of different uncertain inputs given their characteristics and dependency structures. We have implemented the method on a real-world biogeochemical model at the groundwater-surface water interface within the Hanford Site's 300 Area. The uncertainty sources of the model were first grouped into forcing scenario and three different processes based on our understanding of the complex system. The sensitivity analysis results indicate that both the reactive transport and groundwater flow processes are important sources of uncertainty for carbon-consumption predictions. Within the groundwater flow process, the structure of geological formations is more important than the permeability heterogeneity within a given geological formation. Our new sensitivity analysis framework based on BNs offers substantial flexibility for investigating the importance of combinations of interacting uncertainty sources in a hierarchical order, and it is expected to be applicable to a wide range of multiphysics models for complex systems.
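For context, the grouped, process-level indices derived here build on the standard first-order variance-based (Sobol') sensitivity index for a single input $X_i$ of a model output $Y$:

\[
S_i = \frac{\operatorname{Var}_{X_i}\!\big(\operatorname{E}_{\mathbf{X}_{\sim i}}[\,Y \mid X_i\,]\big)}{\operatorname{Var}(Y)},
\]

which measures the fraction of output variance attributable to $X_i$ alone; the Bayesian-network framework extends this idea to sets of inputs organized hierarchically by process and forcing scenario.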
The application, continued performance, and degradation behavior of polymers often depends on their interaction with small organic or gaseous volatiles. Understanding the underlying permeation and diffusion properties of materials is crucial for predicting their barrier properties (permeant flux), drying behavior, solvent loss or tendency to trap small molecules, as well as their interaction with materials in the vicinity due to off-gassing phenomena, perhaps leading to compatibility concerns. Further, the diffusion of low-Mw organics is also important for mechanistic aspects of degradation processes. Based on our need for improved characterization methods, a FTIR-based spectroscopic gas/volatile quantification setup was designed and evaluated for determination of the diffusion, desorption and transport behavior of small IR-active molecules in polymers. At the core of the method, a modified, commercially available IR transmission gas cell monitors time-dependent gas concentration. Appropriate experimental conditions, e.g. desorption or permeation under continuous flow or static gas conditions, are achieved using easily adaptable external components such as flow controllers and sample ampoules. This study presents overview approaches using the same IR detection methodology to determine diffusivity (desorption into a static gas environment, continuous gas flow, or intermittent desorption) and permeability (static and dynamic flow detection). Further, the challenges encountered for design and setup of IR gas quantification experiments, related to calibration and gas interaction, are presented. These methods establish desorption and permeation behavior of solvents (water and methanol), CO2 off-gassing from foam, and offer simultaneous measurements of the permeation of several gases in a gas mixture (CO2, CO and CH4) through polymer films such as epoxy and Kapton. They offer complementary guidance for material diagnostics and understanding of basic properties in sorption and transport behavior often of relevance to polymer degradation or materials reliability phenomena.