The team worked on supporting compile and build of SPARC on ATS2 initial deployment systems, conducted performance runs of EMPIRE on Trinity, and completed an initial compile and run of SPARC on Intel Skylake processors.
The retina plays an important role in animal vision - namely, preprocessing visual information before sending it to the brain through the optic nerve. Understanding how the retina does this is of particular relevance for the development and design of neuromorphic sensors, especially those aimed at image processing. Our research focuses on examining mechanisms of motion processing in the retina. We are specifically interested in the detection of moving targets under challenging conditions: small or low-contrast (dim) targets amidst high quantities of clutter or distractor signals. In this paper we compare a classic motion-sensitive cell model, the Hassenstein-Reichardt model, to a model of the OMS (object motion-sensitive) cell, which relies primarily on change detection, and describe the scenarios for which each model is better suited. We also examine mechanisms, inspired by features of retinal circuitry, by which performance may be enhanced. For example, lateral inhibition (mediated by amacrine cells) conveys selectivity for small targets to the W3 ganglion cell; we demonstrate that a similar mechanism can be combined with the previously mentioned motion-processing cell models to select small moving targets for further processing.
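The functional difference between the two detector classes can be illustrated with a toy correlator. Below is a minimal sketch of a Hassenstein-Reichardt-style detector in Python; the signal shapes, delay, and all numbers are illustrative assumptions rather than the implementation used in this work. The delayed output of one photoreceptor is multiplied with the undelayed output of its neighbor, and the two mirror-symmetric subunits are subtracted so the sign of the output encodes motion direction.

```python
import numpy as np

def hassenstein_reichardt(s1, s2, delay):
    # Two mirror-symmetric delay-and-correlate subunits, subtracted:
    # positive output indicates motion from receptor 1 toward receptor 2.
    d1 = np.roll(s1, delay)   # delayed copy of receptor 1
    d2 = np.roll(s2, delay)   # delayed copy of receptor 2
    return d1 * s2 - s1 * d2

# A small bright target moving from receptor 1 toward receptor 2:
# receptor 2 sees the same Gaussian bump 5 samples later.
t = np.arange(200)
s1 = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
s2 = np.exp(-0.5 * ((t - 85) / 5.0) ** 2)
print("mean HR output (positive = preferred direction):",
      hassenstein_reichardt(s1, s2, delay=5).mean())
```

Reversing the stimulus direction flips the sign of the output; an OMS-style change detector would instead respond to temporal change regardless of direction, which is why the two models suit different target/clutter scenarios.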
Modelling and Simulation in Materials Science and Engineering
Akhondzadeh, Sh; Sills, Ryan; Papanikolaou, S.; Van Der Giessen, E.; Cai, W.
Three-dimensional discrete dislocation dynamics methods (3D-DDD) have been developed to explicitly track the motion of individual dislocations under applied stress. At present, these methods are limited to plastic strains of about one percent or less due to high computational cost associated with the interactions between large numbers of dislocations. This limitation motivates the construction of minimalistic approaches to efficiently simulate the motion of dislocations for higher strains and longer time scales. In the present study, we propose geometrically projected discrete dislocation dynamics (GP-DDD), a method in which dislocation loops are modeled as geometrical objects that maintain their shape with a constant number of degrees of freedom as they expand. We present an example where rectangles composed of two screw and two edge dislocation segments are used for modeling gliding dislocation loops. We use this model to simulate single slip loading of copper and compare the results with detailed 3D-DDD simulations. We discuss the regimes in which GP-DDD is able to adequately capture the variation of the flow stress with strain rate in the single slip loading condition. A simulation using GP-DDD requires ∼40 times fewer degrees of freedom for a copper single slip loading case, thus reducing computational time and complexity.
Broadband terahertz radiation potentially has extensive applications, ranging from personal health care to industrial quality control and security screening. While traditional methods for broadband terahertz generation rely on bulky and expensive mode-locked lasers, frequency combs based on quantum cascade lasers (QCLs) can provide an alternative compact, high-power, wideband terahertz source. QCL frequency combs incorporating a heterogeneous gain medium design can achieve an even greater spectral range by having multiple lasing transitions at different frequencies. However, despite their greater spectral coverage, the comparatively low gain from such gain media lowers the maximum operating temperature and power. Lateral heterogeneous integration offers the ability to cover an extensive spectral range while maintaining the competitive performance offered by each homogeneous gain medium. Here, we present the first lateral heterogeneous design for broadband terahertz generation: by combining two different homogeneous gain media, we have achieved a two-color frequency comb spaced by 1.5 THz.
Because of their extraordinary surface areas and tailorable porosity, metal-organic frameworks (MOFs) have the potential to be excellent sensors of gas-phase analytes. MOFs with open metal sites are particularly attractive for detecting Lewis basic atmospheric analytes, such as water. Here, we demonstrate that thin films of the MOF HKUST-1 can be used to quantitatively determine the relative humidity (RH) of air using a colorimetric approach. HKUST-1 thin films are spin-coated onto rigid or flexible substrates and are shown to quantitatively determine the RH within the range of 0.1-5% RH by either visual observation or a straightforward optical reflectivity measurement. At high humidity (>10% RH), a polymer/MOF bilayer is used to slow the transport of H2O to the MOF film, enabling quantitative determination of RH using time as the distinguishing metric. Finally, the sensor is combined with an inexpensive light-emitting diode light source and Si photodiode detector to demonstrate a quantitative humidity detector for low humidity environments.
Twenty-five high-burnup fuel rods were extracted from seven different fuel assemblies used for power production at the North Anna nuclear power plant and shipped to Oak Ridge National Laboratory (ORNL) in 2016 for detailed non-destructive examination (NDE) and destructive examination (DE). The spent fuel rods were from 17×17 lattices and consist of four cladding types—Zirlo®, M5®, Zircaloy-4, and low tin Zircaloy-4 (Zirc-4). These spent fuel rods are being tested to provide: (a) baseline characterization and mechanical property data that can be used as a comparison to fuel that was loaded into a modified TN-32B cask in November 2017, as part of the high-burnup confirmatory data project and (b) data applicable to high-burnup fuel rods (>45 GWd/MTU) currently stored and to be stored in the dry-cask fleet. The TN-32B cask is referred to as the “Demo” cask and is currently expected to be transported to a separate location and the internal contents inspected in approximately ten years. ORNL has completed the NDE of the twenty-five fuel rods. The purpose of this technical memorandum is to present a simplified summary of the first phase of destructive examinations and test conditions that will be used for communicating with various stakeholders. The destructive examinations will leverage the expertise and capabilities from multiple national laboratories for performing independent measurements of relevant data. Close coordination is required to ensure that all examinations follow well documented procedures and are performed so that measured data and characteristics can be readily compared. Pacific Northwest National Laboratory (PNNL) has published a detailed overview of the test program. ORNL and PNNL developed detailed draft test plans for testing to be performed at their facilities. ORNL and PNNL are in the process of refining these test plans to apply specifically to the testing described in this memorandum. Argonne National Laboratory (ANL) contributed to the ORNL test plan by describing tests to be conducted at ANL. Testing will be based on continuous learning. If a test produces results that are inconsistent with expectations or current trends, further testing will be paused until a path forward is established to understand the results and to identify follow-on testing.
Curtis, Jeremy A.; Burch, Ashlyn D.; Barman, Biplob; Linn, A.G.; McClintock, Luke M.; O'Beirne, A.L.; Stiles, M.J.; Reno, John L.; McGill, S.A.; Karaiskaj, D.; Hilton, D.J.
In this paper, we describe the development of a broadband (0.3–10 THz) optical pump-terahertz probe spectrometer with an unprecedented combination of temporal resolution (≤200 fs) and external magnetic fields as high as 25 T, using the new Split Florida-Helix magnet system. Using this new instrument, we measure the transient dynamics in a gallium arsenide four-quantum-well sample after photoexcitation at 800 nm.
The ordered monoclinic phase of the alkali-metal decahydro-closo-decaborate salt Rb2B10H10 was found to be stable from about 250 K all the way up to an order-disorder phase transition temperature of ≈762 K. The broad temperature range for this phase allowed for a detailed quasielastic neutron scattering (QENS) and nuclear magnetic resonance (NMR) study of the prototypical B10H10^2- anion reorientational dynamics. The combined QENS and NMR results are consistent with an anion reorientational mechanism comprised of two types of rotational jumps expected from the anion geometry and lattice structure, namely, more rapid 90° jumps around the anion C4 symmetry axis (e.g., with correlation frequencies of ≈2.6 × 10^10 s^-1 at 530 K) combined with order-of-magnitude-slower orthogonal 180° reorientational flips (e.g., ≈3.1 × 10^9 s^-1 at 530 K) resulting in an exchange of the apical H (and apical B) positions. Each such flip requires a concomitant 45° twist around the C4 symmetry axis to preserve the ordered Rb2B10H10 monoclinic structural symmetry. This result is consistent with previous NMR data for ordered monoclinic Na2B10H10, which also pointed to two types of anion reorientational motions. The QENS-derived reorientational activation energies are 197(2) and 288(3) meV for the C4 fourfold jumps and apical exchanges, respectively, between 400 and 680 K. Below this temperature range, NMR (and QENS) both indicate a shift to significantly larger reorientational barriers, for example, 485(8) meV for the apical exchanges. Finally, subambient diffraction measurements identify a subtle change in the Rb2B10H10 structure from monoclinic to triclinic symmetry as the temperature is decreased from around 250 to 210 K.
Anisotropic nanoparticles, such as nanorods and nanoprisms, enable packing of complex nanoparticle structures with different symmetries and assembly orientations, which result in unique functions. Despite extensive previous efforts, formation of large areas of oriented or aligned nanoparticle structures remains a great challenge. Here, we report the fabrication of large-area arrays of vertically aligned gold nanorods (GNR) through a controlled evaporation deposition process. We began with a homogeneous suspension of GNR and surfactants prepared in water. During drop casting on silicon substrates, evaporation of water progressively enriched the GNR suspension, inducing a balance between electrostatic interactions and entropically driven depletion attraction in the evaporating solution that produces large-area arrays of self-assembled GNR on the substrates. Electron microscopy characterization revealed layers of vertically aligned GNR arrays consisting of hexagonally close-packed GNR in each layer. Benefiting from the close-packed arrangement and its smooth topography, the GNR arrays exhibited a surface-enhanced Raman scattering (SERS) signal for molecular detection at concentrations as low as 10^-15 M. Because of their uniformity over large areas, the GNR arrays exhibited exceptional detection reproducibility and operability. This method is scalable and cost-effective and could lead to diverse packing structures and functions by variation of the guest nanoparticles in the suspensions.
Microtubule dynamics play a critical role in the normal physiology of eukaryotic cells as well as a number of cancers and neurodegenerative disorders. The polymerization/depolymerization of microtubules is regulated by a variety of stabilizing and destabilizing factors, including microtubule-associated proteins and therapeutic agents (e.g., paclitaxel, nocodazole). Here we describe the ability of the osmolytes polyethylene glycol (PEG) and trimethylamine-N-oxide (TMAO) to inhibit the depolymerization of individual microtubule filaments for extended periods of time (up to 30 days). We further show that PEG stabilizes microtubules against both temperature- and calcium-induced depolymerization. Our results collectively suggest that the observed inhibition may be related to a combination of the kosmotropic behavior and excluded volume/osmotic pressure effects associated with PEG and TMAO. Taken together with prior studies, our data suggest that the physicochemical properties of the local environment can regulate microtubule depolymerization and may potentially play an important role in in vivo microtubule dynamics.
Measurement uncertainties in the techniques used to characterize loss in photonic waveguides become a significant issue as waveguide loss is reduced through improved fabrication technology. Typical loss measurement techniques involve unknown parameters such as facet reflectivity or varying coupling efficiencies, which directly contribute to the uncertainty of the measurement. We present a loss measurement technique that takes advantage of the differential loss between multiple paths in an arrayed waveguide structure, allowing us to gather statistics on propagation loss from several waveguides in a single measurement. This arrayed waveguide structure is characterized using a swept-wavelength interferometer, enabling analysis of the arrayed waveguide transmission as a function of group delay between waveguides. Loss extraction depends only on the differential path length between arrayed waveguides and is therefore independent of on- and off-chip coupling efficiencies, which proves to be an accurate and reliable method of loss characterization. This method is applied to characterize the loss of the silicon photonic platform at Sandia Labs with an uncertainty of less than 0.06 dB/cm.
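To make the differential-loss idea concrete, consider a sketch in generic notation (not necessarily the paper's exact estimator). If path $i$ of the array has length $L_i$, the transmitted power is

$$P_i = \eta_{\mathrm{in}}\,\eta_{\mathrm{out}}\,P_0\,e^{-\alpha L_i},$$

so the ratio of any two paths cancels the coupling efficiencies $\eta_{\mathrm{in}}$ and $\eta_{\mathrm{out}}$:

$$\alpha = \frac{\ln(P_i/P_j)}{L_j - L_i}.$$

Fitting $\ln P_i$ against $L_i$ across all array paths then yields the propagation loss $\alpha$, with statistics, independent of facet and coupling terms.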
Two basic challenges limiting the simulation capabilities of the streamer discharge community are the efficient resolution of Poisson's equation and the proper treatment of photoionization. This paper addresses both of these challenges, beginning with a multigrid (MG) algorithm executed on a graphics processing unit to efficiently solve Poisson's equation on a massively parallel platform. When utilized in a 3D particle-in-cell (PIC) model with radiation transport, the MG solver is demonstrated to reduce the required simulation time by approximately a factor of three over a conventional Jacobi scheme. Next, a fully theoretical photoionization model, based on the basic properties of N2 and O2 molecules, is developed as an alternative to widely utilized semi-empirical models. Following a review of N2 emission properties, a total of eight transitions from only three excited states are reported as a base set of transitions for a practical physics-based photoionization model. A 3D PIC simulation of streamer formation is demonstrated with two dominant transitions included in the radiation transport model.
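As a reference point for the Poisson step, the sketch below is a minimal one-dimensional geometric-multigrid V-cycle with damped-Jacobi smoothing, written in NumPy purely for illustration; the solver described above is three-dimensional and GPU-resident, so none of the code below is from the paper.

```python
import numpy as np

def smooth(u, f, h, iters=3, w=2.0 / 3.0):
    # Damped-Jacobi sweeps for -u'' = f with homogeneous Dirichlet BCs.
    for _ in range(iters):
        up = np.pad(u, 1)  # zero boundary values
        u = (1 - w) * u + w * 0.5 * (up[:-2] + up[2:] + h * h * f)
    return u

def residual(u, f, h):
    up = np.pad(u, 1)
    return f - (2 * u - up[:-2] - up[2:]) / (h * h)

def v_cycle(u, f, h):
    if u.size < 3:
        return smooth(u, f, h, iters=50)          # coarsest grid: relax to convergence
    u = smooth(u, f, h)                           # pre-smooth
    r = residual(u, f, h)
    rc = 0.25 * (r[:-2:2] + 2 * r[1:-1:2] + r[2::2])  # full-weighting restriction
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)        # coarse-grid correction
    e = np.zeros_like(u)
    e[1::2] = ec                                  # inject at coarse nodes...
    ep = np.pad(ec, 1)
    e[0::2] = 0.5 * (ep[:-1] + ep[1:])            # ...and interpolate in between
    return smooth(u + e, f, h)                    # post-smooth

n = 2**7 - 1                                      # interior points; h = 1/(n+1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution is sin(pi x)
u = np.zeros(n)
for cycle in range(8):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: residual norm = {np.linalg.norm(residual(u, f, h)):.2e}")
```

Each V-cycle reduces the residual by a roughly constant factor independent of grid size, which is the property that makes MG competitive with (and here roughly 3x faster than) plain Jacobi iteration.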
Metals across all industries demand anticorrosion surface treatments and drive a continual need for high-performing and low-cost coatings. Here we demonstrate polymer-clay nanocomposite thin films as a new class of transparent conformal barrier coatings for protection in corrosive atmospheres. Films assembled via layer-by-layer deposition, as thin as 90 nm, are shown to reduce copper corrosion rates by >1000× in an aggressive H2S atmosphere. These multilayer nanobrick wall coatings hold promise as high-performing anticorrosion treatment alternatives to costlier, more toxic, and less scalable thin films, such as graphene, hexavalent chromium, or atomic-layer-deposited metal oxides.
This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics by approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on Arrhenius parameters for the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature in estimating relevant integrals needed for data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realisations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.
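The inference machinery can be illustrated at its very simplest with a Metropolis-Hastings chain over the Arrhenius parameters (ln A, Ea) given synthetic rate-coefficient observations. Everything below, the units, the Gaussian likelihood on ln k, and the numbers, is an assumption for illustration; the maximum-entropy/ABC treatment with reconstruction of missing OH histories described above is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 8.314e-3  # gas constant in kJ/(mol K), assumed units for this sketch

# Hypothetical "reported" rate coefficients at several shock-tube temperatures.
T = np.array([1000.0, 1200.0, 1500.0, 2000.0])
lnk_obs = 25.0 - 70.0 / (R * T)            # synthetic truth: ln A = 25, Ea = 70 kJ/mol
lnk_obs += rng.normal(0, 0.1, T.size)      # noise mimicking reported uncertainty factors
sigma = 0.1

def log_post(theta):
    # Flat prior box plus a Gaussian likelihood on ln k(T).
    lnA, Ea = theta
    if not (0 < lnA < 50 and 0 < Ea < 200):
        return -np.inf
    resid = lnk_obs - (lnA - Ea / (R * T))
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([20.0, 50.0])
lp = log_post(theta)
chain = []
for _ in range(20000):                     # Metropolis-Hastings random walk
    prop = theta + rng.normal(0, [0.2, 1.0])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]             # discard burn-in
print("posterior mean (lnA, Ea):", chain.mean(axis=0).round(2))
```

The posterior correlation between ln A and Ea that such a chain exposes is exactly the kind of parameter correlation whose impact on ignition predictions the study quantifies.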
The use of S2 glass/SC15 epoxy woven fabric composite materials for blast and ballistic protection has been an area of ongoing research over the past decade. In order to accurately model this material system within potential applications under extreme loading conditions, a well-characterized and understood anisotropic equation of state (EOS) is needed. This work details both an experimental program and associated analytical modelling efforts which aim to provide better physical understanding of the anisotropic EOS behavior of this material. Experimental testing focused on planar shock impact tests loading the composite to peak pressures of 15 GPa in both the transverse and longitudinal orientations. Test results highlighted the anisotropic response of the material and provided a basis against which the associated numerical micromechanical investigation was compared. Results of the combined experimental and numerical modeling investigation provided insights into not only the constituent material influence on the composite response but also the importance of the plain-weave microstructure geometry and the significance of the microstructural configuration.
The shock response of porous amorphous silica was investigated using classical molecular dynamics over a range of initial densities, from fully dense (2.21 g/cc) down to 0.14 g/cc. We observed enhanced densification in the Hugoniot response at initial porosities above 50%, and the effect increased with increasing porosity. At the lowest initial densities, after an initial compression response, the systems expanded with increased pressure. These results show good agreement with experiments. We explored mechanisms leading to the enhanced densification, which appear to differ from the mechanisms observed in similar studies of silicon.
Detonation corner turning describes the ability of a detonation wave to propagate into unreacted explosive that is not immediately in the path normal to the wave. The classic example of a corner turning test has a cylindrical geometry and involves a small diameter explosive propagating into a larger diameter explosive as described by Los Alamos' Mushroom test, where corner turning is inferred from optical breakout of the detonation wave. We present a complementary method to study corner turning in millimeter-scale explosives through the use of vapor deposition to prepare the slab (quasi-2D) analog of the axisymmetric mushroom test. Because the samples are in a slab configuration, optical access to the explosive is excellent and direct imaging of the detonation wave and "dead zone" that results during corner turning is possible. Micromushroom test results are compared for two explosives that demonstrate different behaviors: pentaerythritol tetranitrate (PETN), which has corner turning properties that are nearly ideal; and hexanitroazobenzene (HNAB), which has corner turning properties that reveal a substantial dead zone.
Explosive shock desensitization phenomena have been recognized for some time. It has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of the explosive behavior. Historically, replacing the local pressure with a shock-captured pressure has dramatically improved the numerical modeling approaches. A pseudo-entropy-based formulation using the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented in the Eulerian shock physics code CTH. Improvements to the shock-capturing algorithm in the model were made that allow reproduction of single-shock behavior consistent with published Pop-plot data. The model is also demonstrated to capture a desensitization effect based on available literature data and to qualitatively capture multi-dimensional desensitization behavior. This model shows promise for use in modeling and simulation problems that are relevant to the desensitization phenomena. Issues are identified with the current implementation and future work is proposed for improving and expanding model capabilities.
Tin has been shock compressed to ∼69 GPa on the Hugoniot using Sandia's Z Accelerator. A shockless compression wave closely followed the shock wave to ramp compress the shocked tin and probe a high temperature quasi-isentrope near the melt line. A new hybrid backward-integration/Lagrangian-analysis routine was applied to the velocity waveforms to obtain the Lagrangian sound velocity of the tin as a function of particle velocity. Surprisingly, an elastic wave was observed on initial compression from the shock state. The presence of the elastic wave indicates tin possesses a small but finite strength at this shock pressure, strongly indicating a (mostly) solid state. High-fidelity shock Hugoniot measurements of tin sound velocities in this stress range may be required to refine the shock melting stress for pure tin.
The microstructure of pentaerythritol tetranitrate (PETN) films fabricated by physical vapor deposition can be altered substantially by changing the surface energy of the substrate on which they are deposited. High substrate surface energies lead to higher density, strongly textured films, while low substrate surface energies lead to lower density, more randomly oriented films. We take advantage of this behavior to create aluminum-confined PETN films with different microstructures depending on whether a vapor-deposited aluminum layer is exposed to atmosphere prior to PETN deposition. Detonation velocities are measured as a function of both PETN and aluminum thickness at near-failure conditions to elucidate the effects of microstructure on detonation behavior. The differences in microstructure produce distinct changes in detonation velocity but do not have a significant effect on failure geometry when confinement thicknesses are above the minimum effectively infinite condition.
High-resolution, quasi-static time series (QSTS) simulations are essential for modeling modern distribution systems with high penetration of distributed energy resources (DER) in order to accurately simulate the time-dependent aspects of the system. Presently, QSTS simulations are too computationally intensive for widespread industry adoption. This paper proposes to simulate a portion of the year with QSTS and to use decision tree machine learning methods, random forests and boosting ensembles, to predict the voltage regulator tap changes for the remainder of the year, accurately reproducing the results of the time-consuming, brute-force, yearlong QSTS simulation. This research uses decision tree ensemble machine learning, applied for the first time to QSTS simulations, to produce high-accuracy QSTS results up to four times faster than traditional methods.
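A minimal sketch of the idea using scikit-learn's random forest on synthetic data follows; the features, time resolution, and tap-position rule are invented placeholders, not the study's feeders or feature set. A slice of the year stands in for the QSTS-simulated training portion, and the model predicts tap positions for the remainder.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical per-timestep features: load, PV output, normalized hour of day.
X = rng.random((35040, 3))                     # one year at 15-minute resolution
tap = np.clip(np.round(8 * (X[:, 0] - X[:, 1])), -16, 16).astype(int)  # toy rule

train = slice(0, 8760)                         # ~3 months "simulated" with QSTS
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[train], tap[train])
pred = model.predict(X[8760:])                 # predict the rest of the year
print("held-out tap-position accuracy:", (pred == tap[8760:]).mean().round(3))
```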
It has been an ongoing scientific debate whether biological parameters are conserved across experimental setups with different media, pH values, and other experimental conditions. Our work explores this question using Bayesian probability as a rigorous framework to assess the biological context of parameters in a model of the cell growth controller in You et al. When this growth controller is uninduced, the E. coli cell population grows to carrying capacity; however, when the circuit is induced, the cell population growth is regulated to remain well below carrying capacity. This growth controller regulates the E. coli cell population through cell-to-cell communication using the signaling molecule AHL and through cell death using the bacterial toxin CcdB. To evaluate the context dependence of parameters such as the cell growth rate, the carrying capacity, the AHL degradation rate, the leakiness of AHL, the leakiness of toxin CcdB, and the IPTG induction factor, we collect experimental data from the growth control circuit in two different media, at two different pH values, and with several induction levels. We define a set of possible context dependencies that describe how these parameters may differ with the experimental conditions and we develop mathematical models of the growth controller across the different experimental contexts. We then determine whether these parameters are shared across experimental contexts or whether they are context dependent. For each of these possible context dependencies, we use Bayesian inference to assess its plausibility and to estimate the parameters of the growth controller. Ultimately, we find that there is significant experimental context dependence in this circuit. Furthermore, we also find that the estimated parameter values are sensitive to our assumption of a context relationship.
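For concreteness, one minimal form of the population-control dynamics in the spirit of You et al. is sketched below; the symbols and functional forms are our illustrative assumptions, not the exact model fit in this work:

$$\frac{dN}{dt} = kN\Big(1-\frac{N}{N_m}\Big) - d_c\,E\,N,\qquad \frac{dE}{dt} = k_E A - d_E E,\qquad \frac{dA}{dt} = v_A N - d_A A,$$

where $N$ is the cell density, $A$ the AHL concentration, $E$ the CcdB toxin level, and $N_m$ the carrying capacity. The context-dependence question is then which of the parameters ($k$, $N_m$, $d_A$, the leakiness terms, the induction factor) may be shared across media and pH values, and which must vary.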
The goal of this project is to create a modular optical section that can be inserted into an exhaust runner to measure soot mass being produced by combustion.
My oral presentation will focus on my progress on the mechanical design of the Exhaust Runner Soot Diagnostic (ERSD) for use on optical research diesel engines.
Proceedings of PMBS 2018: Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Held in conjunction with SC 2018: The International Conference for High Performance Computing, Networking, Storage and Analysis
Proxy applications, or proxies, are simple applications meant to exercise systems in a way that mimics real applications (their parents). However, characterizing the relationship between the behavior of parent and proxy applications is not an easy task. In prior work [1], we presented a data-driven methodology to characterize the relationship between parent and proxy applications based on collecting runtime data from both and then using data analytics to find their correspondence or divergence. We showed that it worked well for hardware counter data, but our initial attempt using MPI function data was less satisfactory. In this paper, we present an exploratory effort at making an improved quantification of the correspondence of communication behavior for proxies and their respective parent applications. We present experimental evidence of positive results using four proxy applications from the current ECP Proxy Application Suite and their corresponding parent applications (in the ECP application portfolio). Results show that each proxy analyzed is representative of its parent with respect to communication data. In conjunction with our method presented in [1] (correspondence between computation and memory behavior), we get a strong understanding of how well a proxy predicts the comprehensive performance of its parent.
In this paper we apply Convolutional Neural Networks (CNNs) to the task of automatic threat detection, specifically conventional explosives, in security X-ray scans of passenger baggage. We present the first results of utilizing CNNs for explosives detection, and introduce a dataset, the Passenger Baggage Object Database (PBOD), which can be used by researchers to develop new threat detection algorithms. Using state-of-the-art CNN models and taking advantage of the properties of the X-ray scanner, we achieve reliable detection of threats, with the best model achieving an area under the ROC curve (AUC) of 0.95. We also explore heatmaps as a visualization of the location of the threat.
This paper discusses the optimal output feedback control problem of linear time-invariant systems with additional restrictions on the structure of the optimal feedback control gain. These restrictions include setting individual elements of the optimal gain matrix to zero and making the sum of certain rows of the gain matrix equal to desired values. The paper proposes a method that modifies the standard quadratic cost function to include soft constraints ensuring the satisfaction of these restrictions on the structure of the optimal gain. Necessary conditions for optimality with these soft constraints are derived, and an algorithm to solve the resulting optimal output feedback control problem is given. Finally, a power systems example is presented to illustrate the usefulness of the proposed approach.
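In generic notation (a sketch of the construction, not necessarily the paper's exact formulation), with static output feedback $u = -Ky$ the modified objective adds quadratic penalties for both kinds of structural restrictions:

$$J_{\mathrm{soft}}(K) = J_{\mathrm{LQ}}(K) + \sum_{(i,j)\in\mathcal{Z}} w_{ij}\,K_{ij}^{2} + \sum_{r\in\mathcal{R}} \rho_r\Big(\sum_{j} K_{rj} - c_r\Big)^{2},$$

where $\mathcal{Z}$ indexes the gain entries to be driven to zero, each row $r\in\mathcal{R}$ has a desired element sum $c_r$, and the weights $w_{ij}$ and $\rho_r$ control how strongly the soft constraints are enforced when the necessary conditions are solved.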
This paper formulates general computation as a feedback-control problem, which allows the agent to autonomously overcome some limitations of standard procedural language programming: resilience to errors and early program termination. Our formulation considers computation to be trajectory generation in the program's variable space. The computing then becomes a sequential decision making problem, solved with reinforcement learning (RL), and analyzed with Lyapunov stability theory to assess the agent's resilience and progression to the goal. We do this through a case study on a quintessential computer science problem, array sorting. Evaluations show that our RL sorting agent makes steady progress to an asymptotically stable goal, is resilient to faulty components, and performs fewer array manipulations than traditional Quicksort and Bubble sort.
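A toy version of this viewpoint treats sorting as sequential decision making over adjacent-swap actions and uses the inversion count as the Lyapunov candidate that certifies progress. The greedy, occasionally exploring agent below is our illustration of the stability argument, not the trained RL agent from the paper:

```python
import random

def inversions(a):
    # Lyapunov candidate: number of out-of-order pairs; zero iff sorted.
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a)) if a[i] > a[j])

def lyapunov_sort(a, explore=0.1, seed=0):
    rng = random.Random(seed)
    a = list(a)
    while inversions(a) > 0:
        # Candidate actions: swap adjacent positions (i, i+1); pick the swap
        # that most reduces the Lyapunov candidate, or explore at random.
        best = min(range(len(a) - 1),
                   key=lambda i: inversions(a[:i] + [a[i + 1], a[i]] + a[i + 2:]))
        i = rng.randrange(len(a) - 1) if rng.random() < explore else best
        if a[i] > a[i + 1]:   # accept only actions that strictly decrease inversions
            a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(lyapunov_sort([5, 2, 8, 1, 9, 3]))
```

Because every accepted action strictly decreases the inversion count, progress to the sorted goal state is guaranteed even when some proposed actions (e.g., from a faulty component) are rejected along the way.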
Saturn is a short-pulse (~40 ns FWHM) x-ray generator capable of delivering up to 10 MA into a bremsstrahlung diode to yield up to 5 × 10^12 rad/s (Si) per shot at an energy of 1 to 2 MeV. With the machine now over 30 years old, it is necessary to rebuild and replace many components, upgrade controls and diagnostics, design for more reliability and reproducibility, and, where possible, upgrade the accelerator to produce more current at a lower voltage (~1 MV or lower). Thus it has been necessary to reevaluate machine design parameters. The machine is modeled as a simple LR circuit driven with an equivalent sine-squared drive waveform as peak voltage, drive impedance, and vacuum inductance are varied. Each variation has implications for vacuum insulator voltage, diode voltage, diode impedance, and radiation output. For purposes of this study, radiation is scaled as the diode current times the diode voltage raised to the 2.7 power. Results of parameter scans are presented and used to develop a design that optimizes radiation output. Results indicate that to maintain the existing short pulse length of the machine while increasing output, it is most beneficial to operate at an even higher impedance than originally designed. Also discussed are critical improvements that need to be made.
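The impedance-scan argument can be reproduced in a few lines. The sketch below treats the machine as a purely resistive divider (inductance ignored) and scores radiation as the diode current times the diode voltage to the 2.7 power; the drive voltage and impedance values are hypothetical, not Saturn design numbers:

```python
import numpy as np

V_drive = 3.0e6   # hypothetical equivalent peak drive voltage [V]
Z_drive = 0.5     # hypothetical drive impedance [ohm]
for Z_diode in np.linspace(0.5, 4.0, 8):
    I = V_drive / (Z_drive + Z_diode)    # resistive divider, inductance ignored
    V_diode = I * Z_diode
    dose = I * V_diode**2.7              # radiation figure of merit (relative units)
    print(f"Z_diode = {Z_diode:4.1f} ohm   I = {I/1e6:5.2f} MA   dose ~ {dose:.3e}")
```

In this toy model the figure of merit grows with diode impedance until Z_diode ≈ 2.7 × Z_drive, consistent with the conclusion that higher-impedance operation benefits output.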
Nominal behavior selection of an electronic device from a measured dataset is often difficult. Device characteristics are rarely monotonic and choosing the single device measurement which best represents the center of a distribution across all regions of operation is neither obvious nor easy to interpret. Often, a device modeler uses a degree of subjectivity when selecting nominal device behavior from a dataset of measurements on a group of devices. This paper proposes applying a functional data approach to estimate the mean and nominal device of an experimental dataset. This approach was applied to a dataset of electrical measurements on a set of commercially available Zener diodes and proved to more accurately represent the average device characteristics than a point-wise calculation of the mean. It also enabled an objective method for selecting a nominal device from a dataset of device measurements taken across the full operating region of the Zener diode.
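A compact illustration of the functional approach on synthetic curves follows; polynomial smoothing stands in for whatever basis the authors used, and all device data are fabricated for the sketch. Each device curve is smoothed, a functional mean is formed, and the nominal device is the measured unit closest to that mean in integrated squared distance rather than a point-wise average:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic I-V-style curves for 30 hypothetical devices on a shared voltage grid.
v = np.linspace(0, 1, 100)
curves = np.array([(1 + 0.05 * rng.normal()) * v**2
                   + 0.02 * rng.normal(size=v.size) for _ in range(30)])

# Smooth each curve with a low-order polynomial basis (a stand-in for splines).
coef = np.polynomial.polynomial.polyfit(v, curves.T, deg=5)
smoothed = np.polynomial.polynomial.polyval(v, coef)    # shape (30, 100)

mean_curve = smoothed.mean(axis=0)                      # functional mean
# Nominal device: the unit nearest the mean curve in (discretized) L2 distance.
d2 = ((smoothed - mean_curve) ** 2).mean(axis=1)
print("nominal device index:", int(np.argmin(d2)))
```

Selecting an actual measured device, rather than the point-wise mean (which may correspond to no physically realizable device), keeps the nominal model self-consistent across the full operating region.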
Malware detection and remediation is an ongoing task for computer security and IT professionals. Here, we examine the use of neural algorithms to detect malware using the system calls generated by executables, mitigating attempts at obfuscation since the behavior itself is monitored. We examine several deep learning techniques and liquid state machines, baselined against a random forest. The experiments examine the effects of concept drift to understand how well the algorithms generalize to novel malware samples by testing them on data that was collected after the training data. The results suggest that each of the examined machine learning algorithms is a viable solution to detect malware, achieving between 90% and 95% class-averaged accuracy (CAA). In real-world scenarios, the performance evaluation on an operational network may not match the performance achieved in training. Namely, the CAA may be about the same, but the values for precision and recall over the malware can change significantly. We structure experiments to highlight these caveats and offer insights into expected performance in operational environments. In addition, we use the induced models to better understand what differentiates malware samples from goodware, which can further be used as a forensics tool to provide directions for investigation and remediation.
Peacekeeping and humanitarian aid interventions in Somalia have attempted to bring peace and stability to the country and region for more than twenty-five years. Different dynamics characterize four distinct phases of these interventions, determining the likelihood of conflict transformation. These dynamics display archetypal system behaviors representative of other persistent conflicts in Africa during the same time period. Field interviews combined with comparative statistics informed system models of conflict dynamics in Africa and Somalia. The models explored the relative impact of intervention feedback loops and key levers on potential for conflict transformation. It is shown that sustainable peace depends less on the appropriate sequencing of aid than on transparency, trust, and cooperation between various intervention actors and stakeholders to enable accountability at the local level. Technical innovations are needed to build transparency and trust between intervention stakeholders without increasing security risks. A potential solution is proposed that incorporates predictive analytics into peer-to-peer networks for monitoring interventions.
To counter manufacturing irregularities and ensure ASIC design integrity, it is essential that robust design verification methods are employed. It is possible to ensure such integrity using ASIC static timing analysis (STA) and machine learning. In this research, uniquely devised machine and statistical learning methods which quantify anomalous variations in Register Transfer Level (RTL) or Graphic Design System II (GDSII) formats are discussed. To measure the variations in ASIC analysis data, the timing delays in relation to path electrical characteristics are explored. It is shown that semi-supervised learning techniques are powerful tools for characterizing variations within STA path data and have much potential for identifying anomalies in ASIC RTL and GDSII design data.
Curwen, Christopher A.; Reno, John L.; Williams, Benjamin S.
We report a terahertz quantum-cascade vertical-external-cavity surface-emitting laser (QC-VECSEL) whose output power is scaled up to watt-level by using an amplifying metasurface designed for increased power density. The metasurface is composed of a subwavelength array of metal-metal waveguide antenna-coupled sub-cavities loaded with a terahertz quantum-cascade gain material. Unlike previously demonstrated THz QC-VECSELs, the sub-cavities operate on their third-order lateral modal resonance (TM03), instead of their first-order (TM01) resonance. This results in a metasurface with a higher spatial density of the gain material, leading to an increased output power per metasurface area. In pulsed mode operation, peak THz output powers up to 830 mW at 77 K and 1.35 W at 6 K are observed, while a single-mode spectrum and a low divergence beam pattern are maintained. In addition, piezoelectric control of the cavity length allows approximately 50 GHz of continuous, single-mode tuning without a significant effect on output power or beam quality.
Bowman, Daniel; Young, Eliot F.; Krishnamoorthy, Siddharth; Lees, Jonathan M.; Albert, Sarah; Komjathy, Attila; Cutts, James
Balloon-borne infrasound research began again in 2014 with a small payload launched as part of the High Altitude Student Platform (HASP; Bowman and Lees, 2015). A larger payload was deployed through the same program in 2015. These proof-of-concept experiments demonstrated that balloon-borne microbarometers can capture the ocean microbarom (a pervasive infrasound signal generated by ocean waves) even when nearby ground sensors are not able to resolve it (Bowman and Lees, 2017). The following year saw infrasound sensors as secondary payloads on the 2016 Ultra Long Duration Balloon flight from Wanaka, New Zealand (Bowman and Lees, 2018; Lamb et al., 2018) and the WASP 2016 balloon flight from Ft. Sumner, New Mexico (Young et al., 2018). Another payload was included on the HASP 2016 flight as well. In 2017, the Heliotrope project included a four-element microbarometer network drifting at altitudes of 20-24 km on solar hot air balloons (Bowman and Albert, 2018). At the time of this writing the Trans-Atlantic Infrasound Payload (TAIP, operated by Sandia National Laboratories) and the Payload for Infrasound Measurement in the Arctic (PIMA, operated by Jet Propulsion Laboratory) are preparing to fly from Sweden to Canada aboard the PMC-Turbo balloon. The purpose of this experiment is to cross-calibrate several different infrasound sensing systems and test whether wind noise events occur in the stratosphere.
As the scale of high performance computing facilities approaches the exascale era, gaining a detailed understanding of hardware failures becomes important. In particular, the extreme memory capacity of modern supercomputers means that data corruption errors which were statistically negligible at smaller scales will become more prevalent. In order to understand hardware faults and mitigate their adverse effects on exascale workloads, we must learn from the behavior of current hardware. In this work, we investigate the predictability of DRAM errors using field data from two recently decommissioned supercomputers: Cielo, at Los Alamos National Laboratory, and Hopper, at Lawrence Berkeley National Laboratory. Due to the volume and complexity of the field data, we apply statistical machine learning to predict the probability of DRAM errors at previously unaccessed locations. We compare the predictive performance of six machine learning algorithms, and find that a model incorporating physical knowledge of DRAM spatial structure outperforms purely statistical methods. Our findings both support the expected physical behavior of DRAM hardware and provide a mechanism for real-time error prediction. We demonstrate real-world feasibility by training an error model on one supercomputer and effectively predicting errors on another. Our methods demonstrate the importance of spatial locality over temporal locality in DRAM errors, and show that relatively simple statistical models are effective at predicting future errors based on historical data, allowing proactive error mitigation.
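The flavor of such a model can be sketched with logistic regression on spatial-neighborhood features; the features, coefficients, and data below are synthetic stand-ins constructed so that spatial terms dominate, mirroring the qualitative finding, and are not the study's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20000
# Hypothetical per-location features: historical error counts sharing the same
# DRAM row, column, and bank (spatial locality), plus a recent-error indicator.
X = np.column_stack([
    rng.poisson(0.3, n),     # prior errors in the same row
    rng.poisson(0.3, n),     # prior errors in the same column
    rng.poisson(1.0, n),     # prior errors in the same bank
    rng.integers(0, 2, n),   # error at this address in the last window (temporal)
])
logit = -4 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.3 * X[:, 2] + 0.2 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic error labels

model = LogisticRegression().fit(X[:n // 2], y[:n // 2])
print("held-out accuracy:", round(model.score(X[n // 2:], y[n // 2:]), 3))
print("fitted coefficients:", model.coef_.round(2))
```

Fitting on one half of the records and scoring the other mimics, in miniature, the cross-system transfer the study demonstrates between Cielo and Hopper.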
Logic-memory integration helps mitigate the von Neumann bottleneck, and this has enabled a new class of architectures that helps accelerate graph analytics and operations on sparse data streams. These utilize merge networks as a key unit of computation. Such networks are highly parallel and their performance increases with tighter coupling between logic and memory when a bitonic algorithm is used. This paper presents energy-efficient on-chip network architectures for merging key-value pairs using both word-parallel and bit-serial paradigms. The proposed architectures are capable of merging two rows of high bandwidth memory (HBM) worth of data in a manner that is completely overlapped with the reading from and writing back to such a row. Furthermore, their energy consumption is about an order of magnitude lower when compared to a naive crossbar-based design.
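The computational kernel of such designs is the bitonic merge: reversing one sorted run and concatenating it with the other yields a bitonic sequence, which a fixed, data-independent network of compare-exchange stages then sorts in logarithmically many steps; that regularity is what maps well onto parallel hardware. A minimal software model of the network follows (an illustration of the algorithm, not the word-parallel or bit-serial architectures themselves):

```python
def bitonic_merge(a, b):
    # Merge two ascending runs of equal power-of-two length with the classic
    # bitonic compare-exchange network; the comparison pattern is fixed in
    # advance, independent of the data, as in a hardware merge network.
    seq = a + b[::-1]            # ascending + descending = bitonic sequence
    n = len(seq)
    assert n > 0 and n & (n - 1) == 0, "power-of-two total length assumed"
    k = n // 2
    while k >= 1:                # stages with compare distance n/2, n/4, ..., 1
        for i in range(n):
            if i % (2 * k) < k and seq[i] > seq[i + k]:
                seq[i], seq[i + k] = seq[i + k], seq[i]
        k //= 2
    return seq

print(bitonic_merge([1, 4, 6, 9], [2, 3, 7, 8]))   # -> [1, 2, 3, 4, 6, 7, 8, 9]
```

Every stage touches each element exactly once with a hard-wired partner, which is why the merge can be overlapped with streaming a memory row in and out.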
Proceedings of Correctness 2018: 2nd International Workshop on Software Correctness for HPC Applications, Held in conjunction with SC 2018: The International Conference for High Performance Computing, Networking, Storage and Analysis
As scale grows and relaxed memory models become common, it is becoming more difficult to establish the correctness of HPC runtimes through simple testing, making formal verification an attractive alternative. This paper describes a formal specification and verification of an HPC user-level tasking runtime through the design, implementation, and evaluation of a model-checked implementation of the Qthreads user-level tasking runtime. We implement our model in the SPIN model checker by doing a function-to-function translation of Qthreads' C implementation to Promela code. This translation bridges the differences in the modeling and implementation languages by translating C's rich pointer semantics, functions, and non-local gotos to Promela's comparatively simple semantics. We then evaluate our implementation to show that it is both tractable and useful, exhaustively searching the state-space for counterexamples in reasonable time on modern architectures, and use it to find a lingering concurrency error in the Qthreads runtime.
When multiple blasts occur at different times, the situation arises in which a blast wave is propagating into a medium that has already been shocked. Determining the evolution in the shape of the second shock is not trivial, as it is propagating into air that is not only non-uniform, but also non-stationary. To accomplish this task, we employ the method of Kompaneets to determine the shape of a shock in a non-uniform medium. We also draw from the work of Korycansky (Astrophys J 398:184–189. https://doi.org/10.1086/171847, 1992) on an off-center explosion in a medium with radially varying density. Extending this to treat non-stationary flow, and making use of approximations to the Sedov solution for the point blast problem, we are able to determine an analytic expression for the evolving shape of the second shock. In particular, we consider the case of a shock in air at standard ambient temperature and pressure, with the second shock occurring shortly after the original blast wave reaches it, as in a sympathetic detonation.
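For reference, the point-blast scaling underlying those approximations gives the primary shock radius as

$$R(t) \approx \xi_0 \left(\frac{E\,t^{2}}{\rho_0}\right)^{1/5},$$

where $E$ is the blast energy, $\rho_0$ the ambient density, and $\xi_0$ a dimensionless constant of order unity that depends on the ratio of specific heats; the corresponding post-shock density, velocity, and pressure fields supply the non-uniform, non-stationary medium into which the second shock propagates.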
As systems become more complex, systems engineers rely on experts to inform decisions. There are few experts and limited data in many complex new technologies. This challenges systems engineers as they strive to plan activities such as qualification in an environment where technical constraints are coupled with the traditional cost, risk, and schedule constraints. Bayesian network (BN) models provide a framework to aid systems engineers in planning qualification efforts with complex constraints by harnessing expert knowledge and incorporating technical factors. By quantifying causal factors, a BN model can provide data about the risk of implementing a decision supplemented with information on driving factors. This allows a systems engineer to make informed decisions and examine “what-if” scenarios. This paper discusses a novel process developed to define a BN model structure based primarily on expert knowledge supplemented with extremely limited data (25 data sets or fewer). The model was developed to aid qualification decisions, specifically to predict the suitability of six-degrees-of-freedom (6DOF) vibration testing for qualification. The process defined the model structure with expert knowledge in an unbiased manner. Validation during the process execution and of the resulting model provided evidence that the process may be an effective tool for harnessing expert knowledge in a BN model.
Direct numerical simulation (DNS) of a high Karlovitz number (Ka) CH4/air stratified premixed jet flame was performed and used to provide insights into fundamentals of turbulent stratified premixed flames and their modelling implications. The flame exhibits significant stratification where the central jet has an equivalence ratio of 0.4, which is surrounded by a pilot flame with an equivalence ratio of 0.9. A reduced chemical mechanism for CH4/air combustion based on GRI-Mech3.0 was used, including 268 elementary reactions and 28 transported species. Over five billion grid points were employed to adequately resolve the turbulence and flame scales. The maximum Ka of the flame in the domain approaches 1400, while the jet Damköhler number (Dajet) is as low as 0.0027. The flame shows early stages of CH4/air combustion in the near field and later stages in the far field; the separation of combustion stages can be largely attributed to the small jet flow timescale and the low Dajet. The gradient of equivalence ratio in the flame normal direction, dϕ/dn, is predominantly negative, and small-scale stratification was found to play an important role in determining the local flame structure. Notably, the flame is thinner, the burning is more intense, and the levels of the radical pools, including OH, O and H, are higher in regions with stronger mixture stratification. The local flame structure is more strained and less curved in these regions. The mean flame structure is considerably influenced by the strong shear, which can be reasonably predicted by unity Lewis number stratified premixed flamelets when the thermochemical conditions of the reactant and product are taken locally from the DNS and the strain rates close to those induced by the mean flow are used in the flamelet calculation. An enhanced secondary reaction zone behind the primary reaction zone was observed in the downstream region, where the temperature is high and the fuel concentration is negligible, consistent with the observed separation of combustion stages. The main reactions involved in the secondary reaction zone, including CO + OH⇔CO2 + H (R94), H + O2 + M⇔HO2 + M (R31), HO2 + OH⇔H2O + O2 (R82) and H2 + OH⇔H2O + H (R79), are related to accumulated intermediate species including CO, H2, H, and OH. The detailed mechanism of intermediate species accumulation was explored and its effect on chemical pathways and heat release rate was highlighted.
It is the purpose of this paper to provide a comprehensive documentation of the new NCAR (National Center for Atmospheric Research) version of the spectral element (SE) dynamical core as part of the Community Earth System Model (CESM2.0) release. This version differs from previous releases of the SE dynamical core in several ways. Most notably, the hybrid sigma vertical coordinate is based on dry air mass, the condensates are dynamically active in the thermodynamic and momentum equations (also referred to as condensate loading), and the continuous equations of motion conserve a more comprehensive total energy that includes condensates. Independent of the vertical coordinate change, the hyperviscosity operators and the vertical remapping algorithms have been modified. The code base has been significantly reduced, sped up, and cleaned up as part of integrating SE as a dynamical core in the CAM (Community Atmosphere Model) repository rather than importing the SE dynamical core from the High-Order Method Modeling Environment (HOMME) as an external code.
The following document provides guidance for developing a microgrid preliminary design specification. Development of a microgrid preliminary design specification builds on previous analysis, done in coordination with key stakeholders, that scoped out and analyzed a microgrid conceptual design to be further developed into a preliminary design. The microgrid preliminary design specification outlines the functional requirements and recommendations for the preliminary design, which can be put into a request for information (RFI) or request for quote (RFQ) process in order to select a microgrid integrator to oversee the final design and construction of the microgrid to be implemented. In addition to requirements, the RFI/RFQ needs to specify the responsibilities of the microgrid integrator as well as those of the owner/operator of the completed microgrid, both as part of the microgrid preliminary design specification and as part of the procurement process used to evaluate the bidders applying to be the microgrid integrator.
Rapid and accurate quasi-static time series (QSTS) analysis is becoming increasingly important for distribution system analysis as the complexity of the distribution system intensifies with the addition of new types, and quantities, of distributed energy resources (DER). The expanding need for hosting capacity analysis, control systems analysis, photovoltaic (PV) and DER impact analysis, and maintenance cost estimations are just a few reasons that QSTS is necessary. Historically, QSTS analysis has been prohibitively slow due to the number of computations required for a full-year analysis. Therefore, new techniques are required that allow QSTS analysis to rapidly be performed for many different use cases. This research demonstrates a novel approach to doing rapid QSTS analysis for analyzing the number of voltage regulator tap changes in a distribution system with PV components. A representative portion of a yearlong dataset is selected and QSTS analysis is performed to determine the number of tap changes, and this is used as training data for a machine learning algorithm. The machine learning algorithm is then used to predict the number of tap changes in the remaining portion of the year not analyzed directly with QSTS. The predictions from the machine learning algorithms are combined with the results of the partial year simulation for a final prediction for the entire year, with the goal of maintaining an error <10% on the full-year prediction. Five different machine learning techniques were evaluated and compared with each other: a neural network ensemble, a random forest decision tree ensemble, a boosted decision tree ensemble, support vector machines, and a convolutional neural network deep learning technique. A combination of the neural network ensemble together with the random forest produced the best results. Using 20% of the year as training data, analyzed with QSTS, the average performance of the technique resulted in ~2.5% error in the yearly tap changes, while maintaining a <10% 99.9th percentile error bound on the results. This is a 5x speedup compared to a standard, full-length QSTS simulation. These results demonstrate the potential for applying machine learning techniques to facilitate modern distribution system analysis and further integration of distributed energy resources into the power grid.
Sandia National Laboratories, California (SNL/CA) is a Department of Energy (DOE) facility. The management and operations of the facility are under a contract with the DOE's National Nuclear Security Administration (NNSA). On May 1, 2017, the name of the management and operating contractor changed from Sandia Corporation to National Technology and Engineering Solutions of Sandia, LLC (NTESS). The DOE, NNSA, Sandia Field Office administers the contract and oversees contractor operations at the site. This Site Environmental Report for 2017 was prepared in accordance with DOE Order 231.1B, Environment, Safety and Health Reporting (DOE 2012). The report provides a summary of environmental monitoring information and compliance activities that occurred at SNL/CA during calendar year 2017, unless noted otherwise. General site and environmental program information is also included.
I am working for Sandia National Laboratories in Albuquerque, New Mexico, in the Summer Product Realization Institute for Nuclear Weapons (NW SPRINT). NW SPRINT focuses on increasing agility and facilitating the development of novel concepts and ideas on a compressed schedule. The program focuses on using advanced manufacturing technologies to innovate and revolutionize the products that Sandia National Laboratories delivers. The program is a design challenge incorporating knowledge from various engineering fields to design and implement a working product. Multiple teams from different departments compete to develop and iterate the best design. I am working on a team of five whose disciplines include Mechanical, Aerospace, and Electrical Engineering.
This monthly report is intended to communicate the status of North Slope ARM facilities managed by Sandia National Labs. The report includes: budget, summary of current management issues, safety, tethered balloon operations, North Slope facilities, and instrument status reports.
In order to consider and understand emerging energy storage technologies, data analysis can be performed to ascertain proper operation and performance. The technical benefits of rigorous testing and data analysis are important for the customer, the planner, the developer, and the system operator: the end-user receives a safe, reliable system that performs predictably at a macro level. The test-and-analyze approach to verifying the performance of energy storage devices, equipment, and systems integrated into the grid improves understanding of the value of energy storage over time from an economic vantage point. Demonstrating the lifecycle value of energy storage begins with the data the provider supplies for analysis. After review of energy storage data received from several providers, it has become clear that some energy storage system (ESS) data is inconsistent and incomplete, calling into question the usefulness of the data when it comes time to analyze it. This paper reviews and proposes general guidelines, such as sampling rates and data points, that providers must supply in order for robust data analysis to take place. Consistent guidelines are the basis of a proper protocol to (a) reduce the time it takes data to reach those who perform the analyses; (b) allow analysts to better understand the energy storage installations; and (c) enable high-quality analysis of each installation. This paper is intended to serve as a starting point for what data points should be provided when monitoring. As battery technologies continue to advance and the industry expands, this paper will be updated to remain current.
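To make the guideline concept concrete, the following sketch checks a provider's dataset for a uniform sampling rate and a complete set of required data points. The one-second interval and the column names are hypothetical placeholders for whatever the final guidelines specify.

```python
# Illustrative check of ESS monitoring data against the kinds of guidelines
# proposed here: a uniform sampling rate and no gaps in required data points.
# The one-second requirement and column names are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = ["timestamp", "soc_pct", "power_kw", "voltage_v", "temp_c"]
MAX_SAMPLE_INTERVAL = pd.Timedelta(seconds=1)   # assumed guideline

def validate_ess_data(df: pd.DataFrame) -> list:
    """Return a list of guideline violations found in a provider's dataset."""
    problems = []
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        problems.append(f"missing required data points: {missing}")
        return problems
    ts = pd.to_datetime(df["timestamp"]).sort_values()
    gaps = ts.diff().dropna()
    n_gaps = int((gaps > MAX_SAMPLE_INTERVAL).sum())
    if n_gaps:
        problems.append(f"{n_gaps} sampling gaps exceed {MAX_SAMPLE_INTERVAL}")
    for col in REQUIRED_COLUMNS[1:]:
        if df[col].isna().any():
            problems.append(f"{int(df[col].isna().sum())} null values in '{col}'")
    return problems
```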
A set of coupled electron/photon radiation transport calculations was performed on optimized Ta/C converters due to questions about previous work at Sandia National Laboratories by Halbleib and Sanford. Generally, the results of the previous calculations were confirmed. However, new relationships between the incident electron beam energy and the average energy of the bremsstrahlung x-ray spectrum for the converters have been defined for the incident electron energy range of 50 keV to 15 MeV, as well as for the narrower range of 50 keV to 1 MeV. The relationships were developed by bracketing the results of the radiation transport calculations rather than by a rigorous mathematical fit to the data. Additional data, such as the total x-ray fluence or the energy spectra of the x-ray fluence exiting the Ta/C converters, are available upon request.
The Exhaust Runner Soot Diagnostic (ERSD) system is an in-line, time-resolved soot mass measurement system designed to allow rapid measurement of soot mass flux in order to detect and measure cyclic variability. The ERSD system design was split into two parts: conceptual mechanical design and measurement design, meaning the calculations needed to demonstrate the feasibility of our planned measurement approach. For the measurement design, the Beer-Lambert law was the central basis for design justification: with measured values of the Filter Smoke Number (FSN) and a conversion to soot mass concentration, the required effective optical path length can be calculated for a desired light attenuation percentage. For the mechanical design, the key constraints were space and modularity: the design must be placed into an existing mechanical setup with relative ease, while being modular enough to be implemented on other engines. The crux of the mechanical design was proper sealing and optical access, as both are crucial to the system's effectiveness. For proper sealing, extensive thermal expansion calculations were performed alongside O-ring design guides to produce the desired sealing and custom gland dimensions. To maximize optical access, many iterations were modeled to provide full optical access while maintaining effective gas sealing.
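The path-length calculation can be sketched as follows. The FSN-to-concentration correlation and the mass-specific extinction coefficient below are commonly cited literature values used only for illustration; they are not necessarily the values adopted in the ERSD design.

```python
# Sketch of the Beer-Lambert sizing calculation: given a target light
# attenuation and a soot concentration inferred from FSN, solve for the
# required effective optical path length L, where I/I0 = exp(-k * C * L).
import math

def soot_concentration_mg_m3(fsn: float) -> float:
    """AVL-style FSN-to-soot-mass correlation (assumed form)."""
    return 4.95 * fsn * math.exp(0.38 * fsn) / 0.405

def required_path_length_m(fsn: float, attenuation: float,
                           k_ext_m2_per_g: float = 8.7) -> float:
    """Path length such that 1 - I/I0 equals the desired attenuation."""
    c_g_m3 = soot_concentration_mg_m3(fsn) / 1000.0   # mg/m^3 -> g/m^3
    return -math.log(1.0 - attenuation) / (k_ext_m2_per_g * c_g_m3)

# Example: FSN of 1.0 and a desired 10% attenuation.
print(f"{required_path_length_m(1.0, 0.10):.3f} m")
```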
Due to its balance of accuracy and computational cost, density functional theory has become the method of choice for computing the electronic structure and related properties of materials. However, present-day semilocal approximations to the exchange-correlation energy of density functional theory break down for materials containing d and f electrons. In this report we summarize our progress in addressing this issue. We describe the construction of the BSC exchange-correlation functional within the subsystem functional formalism which enables us to capture bulk, surface, and confinement physics with a single exchange-correlation functional. We report on the initial assessment of this functional within the jellium surface system and demonstrate that the BSC functional captures the confinement physics more accurately than standard semilocal exchange-correlation functionals. We conclude by outlining our future research objectives which focus on refining the functional form of the BSC functional and achieving significantly more accurate energetics of materials containing f and d electrons than existing semilocal functionals.
Professional judgement is a key element of an exposure assessment program. Industrial hygienists frequently rely on it to make judgements about the acceptability of occupational exposures to chemical, biological, and physical agents, often with little or no quantitative sampling data. This is especially true in research and development, where activities may be short in duration and non-routine; in this situation there are very limited opportunities for sampling, and professional judgement becomes a major tool that industrial hygienists rely upon. One of the limitations of professional judgement is that exposures may be misclassified. That is, an exposure deemed unacceptable may in fact be acceptable, and unnecessary effort and expense may be incurred as a result; on the other hand, an exposure may be judged acceptable when it is in fact unacceptable, resulting in an overexposure and potential injury or illness. It is therefore crucial that professional judgement be validated. In this study we evaluated 106 acceptable-exposure determinations against retrospective quantitative exposure monitoring to validate the performance of Sandia National Laboratories industrial hygienists in making exposure determinations with professional judgement.
This is the application for the Quality New Mexico Roadrunner Award submitted for Mission Computing, IT Financial Management, and Business Operations at Sandia National Laboratories.
This Environmental Restoration Operations (ER) Consolidated Quarterly Report provides the status of ongoing corrective action activities being implemented at Sandia National Laboratories, New Mexico (SNL/NM) during the January, February, and March 2018 quarterly reporting period.
Casper, Katya M.; Duan, Lian; Choudhari, Meelan M.; Chou, Amanda; Munoz, Federic; Radespiel, Rolf; Schilden, Thomas; Schroder, Wolfgang; Marineau, Eric C.; Chaudhry, Ross S.; Candler, Graham V.; Gray, Kathryn A.; Schneider, Steven P.
Prediction of boundary-layer transition is a critical part of the design of hypersonic vehicles because of the large increase in skin-friction drag and surface heating associated with the onset of transition. Testing in conventional (noisy) wind tunnels has been an important means of characterizing and understanding the boundary-layer transition (BLT) behavior of hypersonic vehicles. Because the existing low-disturbance, i.e., quiet, facilities operate only at Mach 6, moderate Reynolds numbers, fairly small sizes, and low freestream enthalpy, conventional facilities will continue to be employed for testing and evaluation of hypersonic vehicles, especially for ground testing involving other Mach numbers, higher freestream enthalpies, and larger models. To enable better use of transition data from conventional facilities and more accurate extrapolation of wind-tunnel results to flight, one needs an in-depth knowledge of the broadband disturbance environment in those facilities as well as of the interaction of freestream disturbances with laminar boundary layers.
As the technological world expands, vulnerabilities in our critical infrastructure are becoming clear. Fortunately, emerging services provide an opportunity to improve the efficiency and security of current practices; in particular, serverless computing platforms (such as Amazon Web Services and REDFISH's Acequia) offer such opportunities. However, the critical infrastructure needs to evolve, and that evolution will require due diligence to ensure that transferring aspects of its practices onto the internet is done in a secure manner.
Insertion is a widely utilized process for reversibly changing the stoichiometry of a solid through a chemical or electrochemical stimulus. Insertion is instrumental to many energy technologies, including batteries, fuel cells, and hydrogen storage, and has been the subject of extensive investigations. More recently, solid-state switching devices utilizing insertion have drawn significant interest; such devices dynamically switch a material's chemical stoichiometry, changing it from one state to another. This review illustrates the fundamental properties and mechanisms of insertion, including reaction, diffusion, and phase transformation, and discusses recent developments in characterization in these fields. We also review new classes of recently demonstrated insertion devices, which reversibly switch mechanical and electronic properties, and show how the fundamental mechanisms of insertion can be used to design improved switching devices.
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
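The structure of such an EM iteration can be sketched as follows, assuming right-censoring (saturation) at a known count level c and a model y_i ~ Poisson(A*s_i + b_i) with known shape s_i and background b_i. This is an illustrative reconstruction of the approach, not the paper's exact algorithm.

```python
# E-step: for censored samples, E[Y | Y >= c] for Y ~ Poisson(lam) equals
# lam * P(Y >= c-1) / P(Y >= c).  M-step: solve the 1-D score equation for A.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

def em_censored_amplitude(z, censored, s, b, c, a0=1.0, n_iter=100):
    """z: observed counts (clipped at c); censored: boolean mask z_i == c."""
    a = a0
    for _ in range(n_iter):
        lam = a * s + b
        y_hat = z.astype(float).copy()
        lc = lam[censored]
        # scipy's sf(k) = P(Y > k), so P(Y >= c) = sf(c - 1).
        y_hat[censored] = lc * poisson.sf(c - 2, lc) / poisson.sf(c - 1, lc)
        score = lambda a_: np.sum(s * (y_hat / (a_ * s + b) - 1.0))
        if score(0.0) <= 0:     # amplitude estimate pinned at zero
            a = 0.0
            break
        hi = 1.0
        while score(hi) > 0:    # bracket the root, then solve
            hi *= 2.0
        a = brentq(score, 0.0, hi)
    return a

# Example: decaying transient shape, uniform background, saturation at 12.
rng = np.random.default_rng(1)
t = np.arange(50)
s = np.exp(-t / 10.0)
b = np.full(50, 2.0)
y = rng.poisson(8.0 * s + b)
c = 12
z = np.minimum(y, c)
print(em_censored_amplitude(z, z == c, s, b, c))   # estimate of A (true 8.0)
```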
We present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down time, and cost curves) into a single unit reduces redundancy in the search space induced both by exact symmetry (permutations of generator schedules) and by certain classes of mutually nondominated solutions. We study the impact of aggregation on two large-scale UC instances: one from the academic literature and the other based on real-world operator data. Our computational tests demonstrate that, when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Furthermore, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.
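The aggregation step itself is straightforward to sketch: generators are grouped by their UC-relevant parameter tuple, and each group's binary commitment variables can then be replaced by a single integer variable counting committed units. The parameter fields below are illustrative, not the full set of conditions established in the paper.

```python
# Group generators whose UC-relevant parameters are identical; in the MILP,
# an integer variable n_t in [0, group size] then replaces the group's
# binary commitment variables.
from collections import defaultdict
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class ThermalGen:
    name: str
    p_min: float          # MW
    p_max: float          # MW
    min_up: int           # hours
    min_down: int         # hours
    cost_quad: float      # $/MW^2h
    cost_lin: float       # $/MWh
    cost_noload: float    # $/h

def aggregate(gens):
    """Group generators by all characteristics except the name."""
    groups = defaultdict(list)
    for g in gens:
        key = astuple(g)[1:]            # drop the name field
        groups[key].append(g.name)
    return groups

fleet = [
    ThermalGen("CT1", 20, 80, 2, 2, 0.01, 25.0, 300.0),
    ThermalGen("CT2", 20, 80, 2, 2, 0.01, 25.0, 300.0),
    ThermalGen("ST1", 150, 400, 8, 8, 0.002, 18.0, 900.0),
]
for key, names in aggregate(fleet).items():
    # Aggregate limits become n_t * p_min <= P_t <= n_t * p_max.
    print(f"aggregate unit of {len(names)} generator(s): {names}")
```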
Materials subjected to high-dose irradiation by energetic particles often experience severe damage in the form of a drastic increase in defect density and significant degradation of their mechanical and physical properties. Extensive studies of radiation effects in materials over the past few decades show that, although nearly no material is immune to radiation damage, deliberately introducing certain types of defects into materials before irradiation is effective in mitigating radiation damage. Nanostructured materials with abundant internal defects have been extensively investigated for various applications. The field of radiation damage in nanostructured materials is an exciting and rapidly evolving arena, enriched with challenges and opportunities. In this review article, we summarize and analyze the current understanding of the influence of various types of internal defect sinks on the reduction of radiation damage, primarily in nanostructured metallic materials and partially in nanoceramic materials. We also point out open questions and future directions that may significantly improve our fundamental understanding of radiation damage in nanomaterials. The integration of extensive research efforts, resources, and expertise in various fields may eventually lead to the design of advanced nanomaterials with unprecedented radiation tolerance.
This paper presents a solution to the optimal control problem of a three degrees-of-freedom (3DOF) wave energy converter (WEC). The three modes are the heave, pitch, and surge. The dynamic model is characterized by a coupling between the pitch and surge modes, while the heave is decoupled. The heave, however, excites the pitch motion through nonlinear parametric excitation in the pitch mode. This paper uses Fourier series (FS) as basis functions to approximate the states and the control. A simplified model is first used where the parametric excitation term is neglected and a closed-form solution for the optimal control is developed. For the parametrically excited case, a sequential quadratic programming approach is implemented to solve for the optimal control numerically. Numerical results show that the harvested energy from three modes is greater than three times the harvested energy from the heave mode alone. Moreover, the harvested energy using a control that accounts for the parametric excitation is significantly higher than the energy harvested when neglecting this nonlinear parametric excitation term.
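A heave-only toy version of the approach can be sketched as follows: the control force is expanded in a truncated Fourier series and the average harvested power is maximized with an SQP solver (scipy's SLSQP). The oscillator parameters, excitation, and per-harmonic force limit are assumptions made for illustration; the coupled 3DOF dynamics and the parametric excitation term are not reproduced here.

```python
# Fourier-series parameterization of the control for a linear heave
# oscillator m*x'' + c*x' + k*x = f_exc(t) + u(t), maximizing average
# PTO power with sequential quadratic programming.
import numpy as np
from scipy.optimize import minimize

m, c, k = 1.0e5, 5.0e4, 1.0e6             # mass, damping, stiffness (assumed)
omegas = np.array([0.5, 0.7, 0.9])         # excitation harmonics [rad/s]
F = np.array([2.0e5, 1.5e5, 1.0e5])        # excitation amplitudes (assumed)
U_MAX = 3.0e5                              # per-harmonic control force limit

def velocity(U):
    """Steady-state velocity per harmonic for the linear heave dynamics."""
    H = 1.0 / (k - m * omegas**2 + 1j * c * omegas)
    return 1j * omegas * H * (F + U)

def neg_power(x):
    U = x[:3] + 1j * x[3:]                 # Fourier coefficients of control
    # average PTO power = -(1/2) sum Re{U conj(V)}; minimize its negative
    return 0.5 * np.sum(np.real(U * np.conj(velocity(U))))

cons = [{"type": "ineq",
         "fun": lambda x, i=i: U_MAX**2 - (x[i]**2 + x[i + 3]**2)}
        for i in range(3)]
res = minimize(neg_power, np.zeros(6), method="SLSQP", constraints=cons)
print("average harvested power [W]:", -res.fun)
```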
Kustas, Andrew B.; Johnson, David R.; Trumble, Kevin P.; Chandrasekar, Srinivasan
Enhanced workability, as characterized by the magnitude and heterogeneity of plastic strains accommodated during sheet processing, is demonstrated in high-Si-content Fe-Si alloys containing 4 and 6.5 wt% Si using two single-step, simple-shear deformation techniques: peeling and large strain extrusion machining (LSEM). The model Fe-Si material system was selected for its intrinsically poor workability and its well-known applications potential in next-generation electric machines. In a comparative study of the deformation characteristics of the shear processes and conventional rolling, two distinct manifestations of workability are observed. For rolling, the relatively diffuse and unconfined deformation zone geometry leads to cracking at low strains, with sheet structures characterized by extensive deformation twinning and banding; workpiece pre-heating is required to improve the workability in rolling. In contrast, peeling and LSEM produce continuous sheet at large plastic strains without cracking, the result of more confined deformation geometries that enhance workability. Peeling, however, results in heterogeneous, shear-banded microstructures, pointing to a second type of workability issue, flow localization, that limits sheet processing. This shear banding is to a large extent facilitated by unrestricted flow at the sheet surface, which is unavoidable in peeling. With additional confinement of this free-surface deformation and an appropriately designed deformation zone geometry, LSEM suppresses shear banding, producing continuous sheet with a homogeneous microstructure. LSEM thus provides the greatest enhancement in process workability for producing sheet. These workability findings are explained and discussed based on differences in process mechanics and deformation zone geometry.
This report is a follow-up to the previous report on the difference between high-fluence, high- and low-flux irradiations. There was a discrepancy in the data for the LBNL-irradiated S5821 PIN diodes. Diodes were irradiated in the two batches (high and low flux) with the same flux and fluence for reference (1e11 ions/cm2/shot, with total fluences of 5, 10, and 20 x 1e11 ions/cm2). Although these diodes should have the same electrical characteristics, their leakage currents differed by a factor of 5-6 (batch 2 was larger), and the C-V measurements showed drastically different results. It was speculated that these discrepancies were due to one of the following two reasons: 1. Different times elapsed between irradiation and characterization. 2. Different areas were irradiated (roughly half of the diodes were covered during irradiation). To address the first concern, we annealed the devices according to the ASTM standard [1]; the differences remained the same. To determine the irradiated area, we performed large-area IBIC scans on several devices. The figure below shows the IBIC maps of two devices, one from each batch. The irradiated areas are approximately the same.
The first solar hot air balloon was constructed in the early 1970s (Besset, 2016). Over the following decades the Centre National d'Etudes Spatiales (CNES) developed the Montgolfiere Infrarouge (MIR) balloon, which flew on solar power during the day and on infrared radiation from the Earth's surface at night (Fommerau and Rougeron, 2011). These balloons were capable of flying for over 60 days and apparently reached altitudes of 30 km at least once (Malaterre, 1993). Solar balloons were also the subject of a Jet Propulsion Laboratory study that performed test flights on Earth (Jones and Wu, 1999) and discussed their mission potential for Mars, Jupiter, and Venus (Jones and Heun, 1997). Those solar balloons were deployed from the ground and dropped from hot air balloons; some were altitude-controlled by means of a remotely commanded air valve at the top of the envelope. More recently, solar balloons have been employed for infrasound studies in the lower stratosphere (see Table 1). The program began in 2015, when a prototype balloon reached an altitude of 22 kilometers before terminating just prior to float (Bowman et al., 2015). An infrasound sensor was successfully deployed on a solar balloon during the 2016 SISE/USIE experiment, in which an acoustic signal from a ground explosion was captured at a range of 330 km (Anderson et al., 2018; Young et al., 2018). This led to the launch of a 5-balloon infrasound network during the Heliotrope experiment (Bowman and Albert, 2018). The balloons were constructed by the researchers themselves at a materials cost of less than $50 per envelope.
In July 2017, the Organization 630 senior manager requested that an assessment of selected causal analyses be performed for the period from July 2014 to July 2017. As a result, this assessment reviewed causal analyses performed by or for Environment, Safety and Health (ES&H) Center department personnel during the specified period. The purpose was to determine the degree to which ES&H Center personnel learn from use of the causal analysis process.
Prokopenko, Andrey; Thomas, Stephen; Swirydowicz, Kasia; Ananthan, Shreyas; Hu, Jonathan J.; Williams, Alan B.; Sprague, Michael
The goal of the ExaWind project is to enable predictive simulations of wind farms composed of many MW-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations in which the mesh resolves the geometry of the turbines and captures the rotation and large deflections of the blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources. The primary code in the ExaWind project is Nalu, an unstructured-grid solver for the acoustically incompressible Navier-Stokes equations in which mass continuity is maintained through pressure projection. The model consists of a mass-continuity Poisson-type equation for the pressure and a momentum equation for the velocity. For such modeling approaches, simulation times are dominated by linear-system setup and solution for the continuity and momentum systems. For the ExaWind challenge problem, the moving meshes greatly affect overall solver costs, as re-initialization of matrices and re-computation of preconditioners are required at every time step.
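For illustration, the projection structure (a Poisson solve for the pressure followed by a velocity correction) can be sketched on a simple 2-D periodic grid with a spectral Poisson solve. This is only a sketch of the projection idea; Nalu's unstructured-grid discretization and iterative solvers are far more general.

```python
# One pressure-projection step on a periodic [0, L)^2 grid:
#   solve  div(grad p) = div(u*) / dt,  then  u = u* - dt * grad p,
# which renders the corrected velocity field divergence-free.
import numpy as np

def project(u, v, L, dt):
    n = u.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # avoid divide-by-zero on mean mode
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * KX * u_hat + 1j * KY * v_hat
    p_hat = -div_hat / (dt * k2)         # mass-continuity Poisson solve
    p_hat[0, 0] = 0.0                    # pressure defined up to a constant
    u_new = np.real(np.fft.ifft2(u_hat - dt * 1j * KX * p_hat))
    v_new = np.real(np.fft.ifft2(v_hat - dt * 1j * KY * p_hat))
    return u_new, v_new, np.real(np.fft.ifft2(p_hat))
```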
The following is intended as a possible approach to support Phase 1 Task 1 of the Joint Fuel Cycle Studies collaboration between the Republic of Korea (ROK) and the US DOE Nuclear Energy Used Fuel Disposition Campaign (UFDC). In this approach, UFDC provides ROK with a brief description of our simplified granite generic disposal system (GDS) model as it has been implemented in our Generic Performance Assessment Model (GPAM). A more detailed description of the original, stand-alone granite GDS model and GPAM appears in the UFD FY11 report (Clayton et al., 2011), which was provided as an attachment to an earlier email to Task 1 ROK counterparts. Additionally, UFDC is providing, as a starting point, the input data sets (parameter names, values, descriptions, and uncertainties) used to implement the stand-alone granite GDS model in GPAM.