Schedule Management Optimization (SMO) is a tool for automatically generating a schedule of project tasks. Project scheduling is traditionally achieved with the use of commercial project management software or case-specific optimization formulations. Commercial software packages are useful tools for managing and visualizing copious amounts of project task data. However, their ability to automatically generate optimized schedules is limited. Furthermore, there are many real-world constraints and decision variables that commercial packages ignore. Case-specific optimization formulations effectively identify schedules that optimize one or more objectives for a specific problem, but they are unable to handle a diverse selection of scheduling problems. SMO enables practitioners to generate optimal project schedules automatically while considering a broad range of real-world problem characteristics. SMO has been designed to handle some of the most difficult scheduling problems – those with resource constraints, multiple objectives, multiple inventories, and diverse ways of performing tasks. This report contains descriptions of the SMO modeling concepts and explains how they map to real-world scheduling considerations.
The atmospheric dispersion of contaminants in the wake of a large urban structure is a challenging fluid mechanics problem of interest to the scientific and engineering communities. Magnetic Resonance Velocimetry (MRV) is a relatively new technique that leverages diagnostic equipment used primarily by the medical field to make 3D engineering measurements of flow and contaminant dispersal. SIERRA/Fuego, a computational fluid dynamics (CFD) code at Sandia National Labs, is employed to make detailed comparisons to the dataset to evaluate the quantitative and qualitative accuracy of the model. The comparison exercise shows good agreement between model and experimental results, with the wake region downstream of the tall building presenting the most significant challenge to the quantitative accuracy of the model. Model uncertainties are assessed through parametric variations. Some observations are made in relation to the future utility of MRV and CFD, and some productive follow-on activities are suggested that can help mature the science of flow modeling and experimental testing.
Scott, Ethan A.; Hattar, Khalid M.; Laros, James H.; Gaskins, John T.; Bai, Tingyu; Wang, Steven Y.; Gansky, Claire; Goorsky, Mark; Hopkins, Patrick E.
Full waveform inversion allows the seismologist to utilize an entire waveform and all the information it contains to help image the 3-D structure of the interior of the earth. This report summarizes the basic theory that has been developed in full waveform seismic inversion, primarily related to computation of sensitivity kernels. It then describes the implementation of this theory using Sandia Geophysics Department's Parelasti code, a 3-D full waveform elastic simulation algorithm. Finally, the code is validated using synthetics from simple homogeneous elastic earth models.
Many earth materials and minerals are seismically anisotropic; however, due to the weakness of anisotropy and for simplicity, the earth is often approximated as an isotropic medium. Specific circumstances, such as in shales, tectonic fabrics, or oriented fractures, for example, require the use of anisotropic simulations in order to accurately model the earth. This report details the development of a new massively parallel 3-D full seismic waveform simulation algorithm within the principal coordinate system of an orthorhombic material, which is a specific form of anisotropy common in layered, fractured media. The theory and implementation of Pararhombi is described along with verification of the code against other solutions.
We invert far-field infrasound data for the equivalent seismo-acoustic time domain moment tensor to assess the effects of variable atmospheric models as well as to quantify the relative contributions of two presumed source phenomena. The infrasound data were produced by a series of underground chemical explosions conducted during the Source Physics Experiment (SPE), which was originally designed to study explosion-generated seismo-acoustic signal phenomena. The goal of the work presented herein is twofold: first, to investigate the sensitivity of the estimated time domain moment tensors to variability of the estimated atmospheric model; second, to determine the relative contribution of two possible source mechanisms to the observed infrasonic wave field. Rather than using direct atmospheric observations to estimate the necessary atmospheric Green's functions, we build a series of atmospheric models that rely on publicly available, regional atmospheric observations, and we assume that the acoustic energy results from a linear combination of an underground isotropic explosion and surface spall. The atmospheric observations are summarized and interpolated onto a 3D grid to produce a model of sound speed at the time of the experiment. For each of four SPE acoustic datasets that we invert, we produce a suite of three atmospheric models, based on ten years of regional meteorological observations: an average model, which averages the atmospheric conditions for ten years prior to each SPE event, as well as two extrema models. We find that the inversion yields relatively repeatable results for the estimated spall source. Conversely, the estimated isotropic explosion source is highly variable. This suggests that the majority of the observed acoustic energy is produced by the spall source and/or that our modeling of the elastic energy propagation, and its subsequent conversion to acoustic energy via linear elastic-to-acoustic coupling at the free surface, is too simplistic.
Patel, Sonal P.; Johnston, Mark D.; Webb, Timothy J.; Bliss, David E.; Bennett, Nichelle L.; Welch, D.; Kiefer, Mark L.; Savage, Mark E.; Cuneo, Michael E.; Maron, Yitzhak; Gilgenbach, Ronald M.
Due to the weight of overburden and tectonic forces, the solid earth is subject to an ambient stress state. This stress state is quasi-static in that it is generally in a state of equilibrium. Typically, seismology assumes this ambient stress field has a negligible effect on wave propagation. However, two basic theories have been put forward to describe the effects of ambient stress on wave propagation. Dahlen and Tromp (2002) expound a theory based on perturbation analysis that largely supports the traditional seismological view that ambient stress is negligible for wave propagation. The second theory, espoused by Korneev and Glubokovskikh (2013) and supported by some experimental work, states that perturbation analysis is inappropriate since the elastic modulus is very sensitive to the ambient stress states. This brief report reformulates the equations given by Korneev and Glubokovskikh (2013) into a more compact form that makes it amenable to statement in terms of a pre-stress form of Hooke's Law. Furthermore, this report demonstrates the symmetries of the pre-stress modulus tensor and discusses the reciprocity relationship implied by the symmetry conditions.
Waves propagating through natural materials such as ocean water encounter spatial variations in material properties that cannot easily be predicted or known in advance. Deterministic wave simulation algorithms must assume that all properties throughout the model space are precisely known. However, a stochastic wave simulation tool can parameterize the material as a stochastic medium with a certain probability distribution and correlation length. This report documents the addition of spatial stochastic variability into Paracousti-UQ, the Sandia Geophysics Department's 3-D full waveform acoustic simulation algorithm for stochastic media. The ability of the code to replicate Monte Carlo solutions in 1-D spatially variable media is also evaluated.
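As a point of reference for the stochastic-medium parameterization just described, the following is a minimal 1-D sketch (a serial Python toy, not Paracousti-UQ itself; all numerical values are illustrative): sound-speed realizations are drawn from a Gaussian random field with a prescribed standard deviation and correlation length, and a simple quantity of interest is sampled by Monte Carlo.

```python
# Hedged 1-D illustration of a stochastic medium: draw sound-speed realizations with a
# prescribed mean, standard deviation, and correlation length, then Monte Carlo a simple
# quantity of interest (here, one-way travel time through the profile).
import numpy as np

def sound_speed_realizations(n_real, x, c_mean=1500.0, c_std=30.0, corr_len=50.0, seed=0):
    """Gaussian random field with an exponential correlation function, via Cholesky."""
    rng = np.random.default_rng(seed)
    cov = c_std ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    chol = np.linalg.cholesky(cov + 1e-8 * np.eye(x.size))   # jitter for numerical stability
    return c_mean + rng.standard_normal((n_real, x.size)) @ chol.T

x = np.linspace(0.0, 1000.0, 201)                             # 1-D profile, metres
c = sound_speed_realizations(2000, x)
dx = x[1] - x[0]
travel_time = np.sum(dx / c, axis=1)                          # Monte Carlo travel times
print(f"mean travel time  : {travel_time.mean():.4f} s")
print(f"std of travel time: {travel_time.std():.2e} s")
```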
This report is an outcome of the ASC CSSE Level 2 Milestone 6362: Analysis of Resilient Asynchronous Many-Task (AMT) Programming Model. It comprises a summary and in-depth analysis of resilience schemes adapted to the AMT programming model. Herein, performance trade-offs of a resilient-AMT programming model are assessed through two approaches: (1) an analytical model realized by discrete event simulations and (2) empirical evaluation of benchmark programs representing regular and irregular workloads of explicit partial differential equation solvers. As part of this effort, an AMT execution simulator and a prototype resilient-AMT programming framework have been developed. The former permits us to hypothesize the performance behavior of a resilient-AMT model, and has undergone a verification and validation (V&V) process. The latter allows empirical evaluation of the performance of resilience schemes under emulated program failures and enabled the aforementioned V&V process. The outcome indicates that (1) resilience techniques implemented within an AMT framework allow efficient and scalable recovery under frequent failures, that (2) the abstraction of task and data instances in the AMT programming model enables readily usable Application Program Interfaces (APIs) for resilience, and that (3) this abstraction enables predicting the performance of resilient-AMT applications with a simple simulation infrastructure. This outcome will provide guidance for the design of the AMT programming model and runtime systems, user-level resilience support, and application development for ASC's next generation platforms (NGPs).
This document consolidates the work performed by Sandia National Laboratories and the US Nuclear Regulatory Commission as participants in Program IRIS: “Improving the Robustness of the Assessment Methodologies for Structures Impacted by Missiles”. Three round-robin benchmark exercises on improving the robustness of the assessment of structures impacted by large missiles at medium to high velocities were organized by either the IAGE Subgroup on Ageing of Concrete Structures of the Organization for Economic Co-operation and Development Nuclear Energy Agency (NEA) or Électricité de France (EDF). The objectives of the exercises were to develop guidance for conducting impact analyses, including issues related to computer codes, modeling approaches, and analysis techniques. The full project comprised three phases: Phase I, impact of walls; Phase II, impact of larger structures; and Phase III, transmission of shock and vibration to internal components.
This report provides details of the algorithms in the Bloodhound package for infrasound data analysis. The report provides a detailed description of the algorithms, general instructions on tuning Bloodhound for different signal types, and a complete listing of all input parameters and the complete output schema. Several Jupyter notebooks are provided with the distribution for illustrating how to use Bloodhound for different workflows.
This report presents computational analyses that simulate the structural response of caverns at the Strategic Petroleum Reserve Bryan Mound site. The cavern field comprises 20 caverns. Five caverns (1, 2, 4, and 5; 3 was later plugged and abandoned) were acquired from industry and have unusual shapes and a history dating back to 1946. The other 16 caverns (101-116) were leached according to SPR standards in the mid-1980s and have tall cylindrical shapes. The history of the caverns and their shapes are simulated in a 3-D geomechanics model of the site that predicts deformations, strains, and stresses. Historical wellhead pressures are used to calculate cavern pressures up through July 2016. Because of the extent of heterogeneous creep behavior observed throughout the Bryan Mound site, a set of cavern-specific creep coefficients was developed to produce better matches with measured cavern closure and surface subsidence. For this new implementation of the model, there are two significant advances: the use of the multimechanism deformation (M-D) salt creep model to evaluate both steady-state and transient salt creep; and the creation of finite element mesh geometries for the caverns that nearly exactly match the geometries obtained through sonar measurements. The results of the finite element model are interpreted to provide information on the current and future status of subsidence, well integrity, cavern stability, and drawdown availability.
Analysis of quartz sandstones shows that grain-scale crushing (fracture and rearrangement) and associated sealing of fractures contribute significantly to consolidation. The crushing strength (P*) for granular material is defined by laboratory experiments conducted at strain rates of 10^−4 to 10^−5 s^−1 and room temperature. Based on experiments, many sandstones would require burial depths in excess of the actual maximum burial depth to create observed microstructure and density. We use experiments and soil mechanics principles to determine rate laws for brittle consolidation of fine-grained quartz sand to better estimate in situ failure conditions of porous geomaterials. Experiments were conducted on St. Peter sand utilizing different isostatic consolidation and creep load paths at temperatures to 200 °C and at strain rates of 10^−4 to 10^−10 s^−1. Experiment results are consistent with observed rate dependence of consolidation in soils, and P* for sand can be identified by the change in the dependence of consolidation rate with stress, allowing the extrapolation of P* determined in the laboratory to geologic rates and temperatures. Additionally, normalized P* values can be described by a polynomial function to quantify temperature, stress, and strain-rate relationships for the consolidation of porous geomaterials by subcritical cracking. At geologic loading rates, P* for fine-grained quartz sand is achieved within ~3-km burial depth, and thus, shear-enhanced compaction under nonisostatic stress can occur at even shallower depths. These results demonstrate that time and temperature effects must be considered for predicting the brittle consolidation of sediments in depositional basins, petroleum reservoirs, and engineering applications.
Will quantum computation become an important milestone in human progress? Passionate advocates and equally passionate skeptics abound. IEEE already provides useful, neutral forums for state-of-the-art science and engineering knowledge as well as practical benchmarks for quantum computation evaluation. But could the organization do more?
Stevens, Mark J.; Trigg, Edward B.; Gaines, Taylor W.; Marechal, Manuel; Moed, Demi E.; Rannou, Patrice; Wagener, Kenneth B.; Winey, Karen I.
Recent advances in polymer synthesis have allowed remarkable control over chain microstructure and conformation. Capitalizing on such developments, here we create well-controlled chain folding in sulfonated polyethylene, leading to highly uniform hydrated acid layers of subnanometre thickness with high proton conductivity. The linear polyethylene contains sulfonic acid groups pendant to precisely every twenty-first carbon atom that induce tight chain folds to form the hydrated layers, while the methylene segments crystallize. The proton conductivity is on par with Nafion 117, the benchmark for fuel cell membranes. We demonstrate that well-controlled hairpin chain folding can be utilized for proton conductivity within a crystalline polymer structure, and we project that this structure could be adapted for ion transport. This layered polyethylene-based structure is an innovative and versatile design paradigm for functional polymer membranes, opening doors to efficient and selective transport of other ions and small molecules on appropriate selection of functional groups.
Measurements of energy balance components (energy intake, energy expenditure, changes in energy stores) are often plagued with measurement error. Doubly-labeled water can measure energy intake (EI) with negligible error, but is expensive and cumbersome. An alternative approach that is gaining popularity is to use the energy balance principle, by measuring energy expenditure (EE) and change in energy stores (ES) and then back-calculate EI. Gold standard methods for EE and ES exist and are known to give accurate measurements, albeit at a high cost. We propose a joint statistical model to assess the measurement error in cheaper, non-intrusive measures of EE and ES. We let the unknown true EE and ES for individuals be latent variables, and model them using a bivariate distribution. We try both a bivariate Normal as well as a Dirichlet Process Mixture Model, and compare the results via simulation. Our approach is the first to account for the dependencies that exist in individuals’ daily EE and ES. We employ semiparametric regression with free knot splines for measurements with error, and linear components for error free covariates. We adopt a Bayesian approach to estimation and inference and use Reversible Jump Markov Chain Monte Carlo to generate draws from the posterior distribution. Based on the semiparametric regression, we develop a calibration equation that adjusts a cheaper, less reliable estimate, closer to the true value. Along with this calibrated value, our method also gives credible intervals to assess uncertainty. A simulation study shows our calibration helps produce a more accurate estimate. Our approach compares favorably in terms of prediction to other commonly used models.
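For readers unfamiliar with the back-calculation idea, the following is a hedged notation sketch of the structure described above; the symbols and the form of the measurement-error terms are illustrative, not the paper's exact notation.

```latex
% Illustrative notation sketch of the energy-balance back-calculation and
% measurement-error structure (symbols are assumptions, not the paper's exact model).
\begin{align*}
\text{EI}_i &= \text{EE}_i + \Delta\text{ES}_i
  && \text{(energy balance: back-calculate intake)} \\
W^{\mathrm{EE}}_i &= f_{\mathrm{EE}}(\text{EE}_i) + \varepsilon^{\mathrm{EE}}_i, \quad
W^{\mathrm{ES}}_i = f_{\mathrm{ES}}(\Delta\text{ES}_i) + \varepsilon^{\mathrm{ES}}_i
  && \text{(cheap measures with error; } f \text{ spline-based)} \\
(\text{EE}_i,\ \Delta\text{ES}_i) &\sim G
  && \text{(latent truths: bivariate normal or DP mixture)} \\
\widehat{\text{EE}}_i &= \mathbb{E}\!\left[\text{EE}_i \mid W^{\mathrm{EE}}_i, W^{\mathrm{ES}}_i\right]
  && \text{(calibration: adjust the cheap measure toward truth)}
\end{align*}
```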
We have demonstrated a laboratory based, high bandwidth atom interferometer instrument and have performed an incipient gravity measurement with a fractional statistical uncertainty of Δg/g = 4.4 × 10^−6, where g is the acceleration due to gravity. We have designed, constructed, and optimised numerous laser systems for this purpose, most notably a powerful Raman laser (12 W at 780 nm) which will allow large momentum transfer and large detuning interferometry to be carried out at high bandwidth for the first time. This experiment is a general purpose test bed for exploring the fundamental limitations of atom interferometer techniques.
There are currently 2,462 dual-purpose canisters (DPCs) containing spent nuclear fuel (SNF) across the United States. Repackaging DPCs into specialized disposal canisters could be financially and operationally costly with additional radiological, operational safety, and management risks. There are several approaches to facilitate direct disposal of DPCs and demonstrate acceptable repository performance. A promising approach is to fill the void space within the DPCs with a material that would significantly limit the potential for criticality through limiting moderation and/or the addition of neutron absorbers in the interstitial spaces within the fuel assemblies and baskets. An acceptable filler would demonstrably show the probability of criticality in DPCs during the disposal period of interest to be below the probability threshold for inclusion in repository performance assessment. Based on previous work conducted by domestic and international organizations, two approaches were identified as potentially viable for introduction of fillers into DPCs as liquids that would eventually solidify: (1) molten metal fillers introduced at higher temperatures, and (2) resins or cement slurries that solidify at lower temperatures.
This report documents the completion of milestone STPM12-4, Kokkos Training Bootcamp. The goal of this milestone was to hold a combined tutorial and hackathon bootcamp event for the Kokkos community and prospective users. The Kokkos Bootcamp event was held on-site at Oak Ridge National Lab from July 24 to July 27, 2018. There were over 40 registered participants from 12 institutions, including 7 Kokkos project staff from SNL, LANL, and ORNL. The event consisted of roughly a two-day tutorial session including hands-on exercises, followed by 1.5 days of intensive porting work, in which participants explored, ported, and optimized the use of Kokkos in codes they brought, with the help of Kokkos project experts.
Acoustic waves with a wide range of frequencies are generated by lightning strokes during thunderstorms, including infrasonic waves (0.1 to 20 Hz). The source mechanism for these low-frequency acoustic waves is still debated, and studies have so far been limited to ground-based instruments. Here we report the first confirmed detection of lightning-generated infrasound with acoustic instruments suspended at stratospheric altitudes using a free-flying balloon. We observe high-amplitude signals generated by lightning strokes located within 100 km of the balloon as it flew over the Tasman Sea on 17 May 2016. The signals share many characteristics with waveforms recorded previously by ground-based instruments near thunderstorms. The ability to measure lightning activity with high-altitude infrasound instruments has demonstrated the potential for using these platforms to image the full acoustic wavefield in the atmosphere. Furthermore, it validates the use of these platforms for recording and characterizing infrasonic sources located beyond the detection range of ground-based instruments.
We demonstrate the ultrahigh extinction operation of a silicon photonic (SiP) amplitude modulator (AM) employing a cascaded Mach-Zehnder interferometer. Optimization sweeps can be carried out without significantly degrading the extinction, and the SiP AM is robust to environmental changes, maintaining >52 dB extinction for >6 hrs.
The purpose of this study was to first assess the sensitivity of the parameters of elastic-plastic material models with anisotropic yield to choices in the calibration procedure. Two models were considered: Hill's 1948 and Barlat's Yld2004-18p. Subsequently, it was shown that calibration choices can have an effect on the values of the stress and strain at the ultimate point in uniaxial specimen responses. Finally, the calibrated Barlat model was able to reasonably reproduce the load-deflection and strain fields of a validation specimen that experienced multiaxial states of stress. Overall, it was found that the Barlat model resulted in a closer fit to the measurements and that the parameters of the calibration procedure should be varied to assess the sensitivity of the results.
Here, we provide a demonstration that gas-kinetic methods incorporating molecular chaos can simulate the sustained turbulence that occurs in wall-bounded turbulent shear flows. The direct simulation Monte Carlo method, a gas-kinetic molecular method that enforces molecular chaos for gas-molecule collisions, is used to simulate the minimal Couette flow at Re = 500. The resulting law of the wall, the average wall shear stress, the average kinetic energy, and the continually regenerating coherent structures all agree closely with corresponding results from direct numerical simulation of the Navier-Stokes equations. Finally, these results indicate that molecular chaos for collisions in gas-kinetic methods does not prevent development of molecular-scale long-range correlations required to form hydrodynamic-scale turbulent coherent structures.
The team worked on supporting compile and build of SPARC on the initial ATS2 deployment systems, conducted performance runs of EMPIRE on Trinity, and completed an initial compile and run of SPARC on Intel Skylake processors.
The retina plays an important role in animal vision - namely preprocessing visual information before sending it to the brain through the optic nerve. Understanding how the retina does this is of particular relevance for development and design of neuromorphic sensors, especially those focused towards image processing. Our research focuses on examining mechanisms of motion processing in the retina. We are specifically interested in detection of moving targets under challenging conditions, namely small or low-contrast (dim) targets amidst high quantities of clutter or distractor signals. In this paper we compare a classic motion-sensitive cell model, the Hassenstein-Reichardt model, to a model of the OMS (object motion-sensitive) cell that relies primarily on change-detection, and describe scenarios for which each model is better suited. We also examine mechanisms, inspired by features of retinal circuitry, by which performance may be enhanced. For example, lateral inhibition (mediated by amacrine cells) conveys selectivity for small targets to the W3 ganglion cell - we demonstrate that a similar mechanism can be combined with the previously mentioned motion-processing cell models to select small moving targets for further processing.
Modelling and Simulation in Materials Science and Engineering
Akhondzadeh, Sh; Sills, Ryan B.; Papanikolaou, S.; Van Der Giessen, E.; Cai, W.
Three-dimensional discrete dislocation dynamics methods (3D-DDD) have been developed to explicitly track the motion of individual dislocations under applied stress. At present, these methods are limited to plastic strains of about one percent or less due to high computational cost associated with the interactions between large numbers of dislocations. This limitation motivates the construction of minimalistic approaches to efficiently simulate the motion of dislocations for higher strains and longer time scales. In the present study, we propose geometrically projected discrete dislocation dynamics (GP-DDD), a method in which dislocation loops are modeled as geometrical objects that maintain their shape with a constant number of degrees of freedom as they expand. We present an example where rectangles composed of two screw and two edge dislocation segments are used for modeling gliding dislocation loops. We use this model to simulate single slip loading of copper and compare the results with detailed 3D-DDD simulations. We discuss the regimes in which GP-DDD is able to adequately capture the variation of the flow stress with strain rate in the single slip loading condition. A simulation using GP-DDD requires ∼40 times fewer degrees of freedom for a copper single slip loading case, thus reducing computational time and complexity.
Broadband terahertz radiation potentially has extensive applications, ranging from personal health care to industrial quality control and security screening. While traditional methods for broadband terahertz generation rely on bulky and expensive mode-locked lasers, frequency combs based on quantum cascade lasers (QCLs) can provide an alternative compact, high power, wideband terahertz source. QCL frequency combs incorporating a heterogeneous gain medium design can obtain even greater spectral range by having multiple lasing transitions at different frequencies. However, despite their greater spectral coverage, the comparatively low gain from such gain media lowers the maximum operating temperature and power. Lateral heterogeneous integration offers the ability to cover an extensive spectral range while maintaining the competitive performance offered by each homogeneous gain medium. Here, we present the first lateral heterogeneous design for broadband terahertz generation: by combining two different homogeneous gain media, we have achieved a two-color frequency comb spaced by 1.5 THz.
Because of their extraordinary surface areas and tailorable porosity, metal-organic frameworks (MOFs) have the potential to be excellent sensors of gas-phase analytes. MOFs with open metal sites are particularly attractive for detecting Lewis basic atmospheric analytes, such as water. Here, we demonstrate that thin films of the MOF HKUST-1 can be used to quantitatively determine the relative humidity (RH) of air using a colorimetric approach. HKUST-1 thin films are spin-coated onto rigid or flexible substrates and are shown to quantitatively determine the RH within the range of 0.1-5% RH by either visual observation or a straightforward optical reflectivity measurement. At high humidity (>10% RH), a polymer/MOF bilayer is used to slow the transport of H2O to the MOF film, enabling quantitative determination of RH using time as the distinguishing metric. Finally, the sensor is combined with an inexpensive light-emitting diode light source and Si photodiode detector to demonstrate a quantitative humidity detector for low humidity environments.
Twenty-five high-burnup fuel rods were extracted from seven different fuel assemblies used for power production at the North Anna nuclear power plant and shipped to Oak Ridge National Laboratory (ORNL) in 2016 for detailed non-destructive examination (NDE) and destructive examination (DE). The spent fuel rods were from 17×17 lattices and consist of four cladding types—Zirlo®, M5®, Zircaloy-4, and low tin Zircaloy-4 (Zirc-4). These spent fuel rods are being tested to provide: (a) baseline characterization and mechanical property data that can be used as a comparison to fuel that was loaded into a modified TN-32B cask in November 2017, as part of the high-burnup confirmatory data project and (b) data applicable to high-burnup fuel rods (>45 GWd/MTU) currently stored and to be stored in the dry-cask fleet. The TN-32B cask is referred to as the “Demo” cask and is currently expected to be transported to a separate location and the internal contents inspected in approximately ten years. ORNL has completed the NDE of the twenty-five fuel rods. The purpose of this technical memorandum is to present a simplified summary of the first phase of destructive examinations and test conditions that will be used for communicating with various stakeholders. The destructive examinations will leverage the expertise and capabilities from multiple national laboratories for performing independent measurements of relevant data. Close coordination is required to ensure that all examinations follow well documented procedures and are performed so that measured data and characteristics can be readily compared. Pacific Northwest National Laboratory (PNNL) has published a detailed overview of the test program. ORNL and PNNL developed detailed draft test plans for testing to be performed at their facilities. ORNL and PNNL are in the process of refining these test plans to apply specifically to the testing described in this memorandum. Argonne National Laboratory (ANL) contributed to the ORNL test plan by describing tests to be conducted at ANL. Testing will be based on continuous learning. If a test produces results that are inconsistent with expectations or current trends, further testing will be paused until a path forward is established to understand the results and to identify follow-on testing.
Curtis, Jeremy A.; Burch, Ashlyn D.; Barman, Biplob; Linn, A.G.; Mcclintock, Luke M.; O'Beirne, A.L.; Stiles, M.J.; Reno, J.L.; Mcgill, S.A.; Karaiskal, D.; Hilton, D.J.
In this paper, we describe the development of a broadband (0.3–10 THz) optical pump-terahertz probe spectrometer with an unprecedented combination of temporal resolution (≤200 fs) and applied magnetic field, operating in external fields as high as 25 T using the new Split Florida-Helix magnet system. Finally, using this new instrument, we measure the transient dynamics in a gallium arsenide four-quantum-well sample after photoexcitation at 800 nm.
The ordered monoclinic phase of the alkali-metal decahydro-closo-decaborate salt Rb2B10H10 was found to be stable from about 250 K all the way up to an order-disorder phase transition temperature of ≈762 K. The broad temperature range for this phase allowed for a detailed quasielastic neutron scattering (QENS) and nuclear magnetic resonance (NMR) study of the prototypical B10H10^2− anion reorientational dynamics. The QENS and NMR combined results are consistent with an anion reorientational mechanism comprised of two types of rotational jumps expected from the anion geometry and lattice structure, namely, more rapid 90° jumps around the anion C4 symmetry axis (e.g., with correlation frequencies of ≈2.6 × 10^10 s^−1 at 530 K) combined with order of magnitude slower orthogonal 180° reorientational flips (e.g., ≈3.1 × 10^9 s^−1 at 530 K) resulting in an exchange of the apical H (and apical B) positions. Each latter flip requires a concomitant 45° twist around the C4 symmetry axis to preserve the ordered Rb2B10H10 monoclinic structural symmetry. This result is consistent with previous NMR data for ordered monoclinic Na2B10H10, which also pointed to two types of anion reorientational motions. The QENS-derived reorientational activation energies are 197(2) and 288(3) meV for the C4 fourfold jumps and apical exchanges, respectively, between 400 and 680 K. Below this temperature range, NMR (and QENS) both indicate a shift to significantly larger reorientational barriers, for example, 485(8) meV for the apical exchanges. Finally, subambient diffraction measurements identify a subtle change in the Rb2B10H10 structure from monoclinic to triclinic symmetry as the temperature is decreased from around 250 to 210 K.
Anisotropic nanoparticles, such as nanorods and nanoprisms, enable packing of complex nanoparticle structures with different symmetry and assembly orientation, which result in unique functions. Despite previous extensive efforts, formation of large areas of oriented or aligned nanoparticle structures still remains a great challenge. Here, we report fabrication of large-area arrays of vertically aligned gold nanorods (GNR) through a controlled evaporation deposition process. We began with a homogeneous suspension of GNR and surfactants prepared in water. During drop casting on silicon substrates, evaporation of water progressively enriched the concentrations of the GNR suspension, which induces the balance between electrostatic interactions and entropically driven depletion attraction in the evaporating solution to produce large-area arrays of self-assembled GNR on the substrates. Electron microscopy characterizations revealed the formation of layers of vertically aligned GNR arrays that consisted of hexagonally close-packed GNR in each layer. Benefiting from the close-packed GNR arrays and their smooth topography, the GNR arrays exhibited a surface-enhanced Raman scattering (SERS) signal for molecular detection at a concentration as low as 10^−15 M. Because of the uniformity in large area, the GNR arrays exhibited exceptional detecting reproducibility and operability. This method is scalable and cost-effective and could lead to diverse packing structures and functions by variation of guest nanoparticles in the suspensions.
Microtubule dynamics play a critical role in the normal physiology of eukaryotic cells as well as a number of cancers and neurodegenerative disorders. The polymerization/depolymerization of microtubules is regulated by a variety of stabilizing and destabilizing factors, including microtubule-associated proteins and therapeutic agents (e.g., paclitaxel, nocodazole). Here we describe the ability of the osmolytes polyethylene glycol (PEG) and trimethylamine-N-oxide (TMAO) to inhibit the depolymerization of individual microtubule filaments for extended periods of time (up to 30 days). We further show that PEG stabilizes microtubules against both temperature- and calcium-induced depolymerization. Our results collectively suggest that the observed inhibition may be related to combination of the kosmotropic behavior and excluded volume/osmotic pressure effects associated with PEG and TMAO. Taken together with prior studies, our data suggest that the physiochemical properties of the local environment can regulate microtubule depolymerization and may potentially play an important role in in vivo microtubule dynamics.
Measurement uncertainties in the techniques used to characterize loss in photonic waveguides become a significant issue as waveguide loss is reduced through improved fabrication technology. Typical loss measurement techniques involve environmentally unknown parameters such as facet reflectivity or varying coupling efficiencies, which directly contribute to the uncertainty of the measurement. We present a loss measurement technique, which takes advantage of the differential loss between multiple paths in an arrayed waveguide structure, in which we are able to gather statistics on propagation loss from several waveguides in a single measurement. This arrayed waveguide structure is characterized using a swept-wavelength interferometer, enabling the analysis of the arrayed waveguide transmission as a function of group delay between waveguides. Loss extraction is only dependent on the differential path length between arrayed waveguides and is therefore extracted independently from on- and off-chip coupling efficiencies, which proves to be an accurate and reliable method of loss characterization. This method is applied to characterize the loss of the silicon photonic platform at Sandia Labs with an uncertainty of less than 0.06 dB/cm.
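To make the coupling-independence argument concrete, a simple sketch of the differential-loss relation follows, under the assumption that two arrayed paths share the same input/output couplers so the coupling efficiency cancels; the symbols are illustrative, not the paper's notation.

```latex
% Sketch of differential loss extraction between two arrayed paths of lengths L_1 > L_2
% sharing a common coupling efficiency \eta (assumed notation).
\begin{align*}
P_k &= \eta \, P_{\mathrm{in}} \, e^{-\alpha L_k}, \qquad k = 1, 2, \\
\frac{P_1}{P_2} &= e^{-\alpha (L_1 - L_2)}
\;\;\Longrightarrow\;\;
\alpha \;[\mathrm{dB/cm}] = \frac{10\log_{10}(P_2/P_1)}{L_1 - L_2},
\end{align*}
```

so the extracted loss depends only on the power ratio and the path-length difference, not on how much light was coupled on or off the chip.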
Two basic challenges limiting the simulation capabilities of the streamer discharge community are the efficient resolution of Poisson's equation and the proper treatment of photoionization. This paper addresses both of these challenges, beginning with a multigrid (MG) algorithm executed on a graphics processing unit to efficiently solve Poisson's equation on a massively parallel platform. When utilized in a 3D particle-in-cell (PIC) model with radiation transport, the MG solver is demonstrated to reduce the required simulation time by approximately a factor of three over a conventional Jacobi scheme. Next, a fully theoretical photoionization model, based on the basic properties of N2 and O2 molecules, is developed as an alternative to widely utilized semi-empirical models. Following a review of N2 emission properties, a total of eight transitions from only three excited states are reported as a base set of transitions for a practical physics-based photoionization model. A 3D PIC simulation of streamer formation is demonstrated with two dominant transitions included in the radiation transport model.
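To illustrate why a multigrid solver outpaces plain Jacobi iteration for Poisson's equation, the following is a minimal 1-D serial sketch (a Python toy, not the GPU implementation described above); the grid size, smoothing counts, and forcing are illustrative.

```python
# Minimal 1-D sketch contrasting weighted-Jacobi relaxation with a recursive multigrid
# V-cycle for -u'' = f on (0,1), u(0)=u(1)=0, second-order finite differences.
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi relaxation on A u = f with A = tridiag(-1, 2, -1)/h^2."""
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2
    return r

def v_cycle(u, f, h, pre=3, post=3):
    """One recursive V-cycle with full-weighting restriction and linear prolongation."""
    n = u.size - 1
    if n <= 2:
        return jacobi(u, f, h, 50)            # coarsest grid: relax to (near) convergence
    u = jacobi(u, f, h, pre)
    r = residual(u, f, h)
    r_c = np.zeros(n // 2 + 1)                # full-weighting restriction of the residual
    r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)
    e = np.zeros_like(u)                      # linear prolongation of the coarse-grid correction
    e[::2] = e_c
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
    return jacobi(u + e, f, h, post)

n = 128                                        # interior resolution (power of two)
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)               # exact solution is sin(pi x)
u_mg = np.zeros(n + 1)
for _ in range(10):
    u_mg = v_cycle(u_mg, f, h)
u_jac = jacobi(np.zeros(n + 1), f, h, 60)      # comparable fine-grid smoothing work, no coarse grids
print("V-cycle error :", np.max(np.abs(u_mg - np.sin(np.pi * x))))
print("Jacobi error  :", np.max(np.abs(u_jac - np.sin(np.pi * x))))
```

The V-cycle reaches discretization-level error in a handful of cycles, while the same number of fine-grid Jacobi sweeps alone leaves the smooth error components nearly untouched, which is the gap the GPU-resident MG solver exploits at scale.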
Metals across all industries demand anticorrosion surface treatments and drive a continual need for high-performing and low-cost coatings. Here we demonstrate polymer-clay nanocomposite thin films as a new class of transparent conformal barrier coatings for protection in corrosive atmospheres. Films assembled via layer-by-layer deposition, as thin as 90 nm, are shown to reduce copper corrosion rates by >1000× in an aggressive H2S atmosphere. These multilayer nanobrick wall coatings hold promise as high-performing anticorrosion treatment alternatives to costlier, more toxic, and less scalable thin films, such as graphene, hexavalent chromium, or atomic-layer-deposited metal oxides.
This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics by approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on Arrhenius parameters for the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature in estimating relevant integrals needed for data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realisations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.
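The accept/reject sketch below illustrates, in highly simplified form, the idea of constraining Arrhenius parameters with summary statistics (a nominal rate coefficient and an uncertainty factor at a few temperatures); the prior widths, temperatures, and nominal values are placeholders rather than the paper's data, and the paper's actual method uses maximum-entropy/ABC machinery, data reconstruction, and MCMC rather than this crude rejection step.

```python
# Hedged illustration of an ABC-style accept/reject step: sample two-parameter Arrhenius
# values (ln A, Ea), keep draws whose rate coefficient stays within an uncertainty factor
# of a nominal value at a few shock-tube temperatures, and inspect the induced correlation.
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius(T, lnA, Ea):
    """Two-parameter Arrhenius rate coefficient k = A * exp(-Ea / (R T))."""
    return np.exp(lnA - Ea / (R * T))

T_obs = np.array([1000.0, 1500.0, 2000.0])        # hypothetical shock-tube temperatures, K
lnA0, Ea0 = np.log(1.0e14), 70.0e3                # hypothetical nominal parameters
k_nom = arrhenius(T_obs, lnA0, Ea0)
unc_factor = 1.3                                  # hypothetical uncertainty factor on k

rng = np.random.default_rng(0)
accepted = []
for _ in range(200_000):
    lnA = rng.normal(lnA0, 1.0)                   # broad priors around the nominal values
    Ea = rng.normal(Ea0, 10.0e3)
    k = arrhenius(T_obs, lnA, Ea)
    if np.all(np.abs(np.log(k / k_nom)) <= np.log(unc_factor)):
        accepted.append((lnA, Ea))

post = np.array(accepted)
print(f"accepted {len(post)} draws")
print("posterior mean lnA, Ea:", post.mean(axis=0))
print("lnA-Ea correlation    :", np.corrcoef(post.T)[0, 1])
```

Even this toy version exhibits the strong ln A - Ea correlation that the paper's pooled posterior captures and propagates into ignition predictions.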
The use of S2 glass/SC15 epoxy woven fabric composite materials for blast and ballistic protection has been an area of on-going research over the past decade. In order to accurately model this material system within potential applications under extreme loading conditions, a well characterized and understood anisotropic equation of state (EOS) is needed. This work details both an experimental program and associated analytical modelling efforts which aim to provide better physical understanding of the anisotropic EOS behavior of this material. Experimental testing focused on planar shock impact tests loading the composite to peak pressures of 15 GPa in both the transverse and longitudinal orientations. Test results highlighted the anisotropic response of the material and provided a basis by which the associated numeric micromechanical investigation was compared. Results of the combined experimental and numerical modeling investigation provided insights into not only the constituent material influence on the composite response but also the importance of the plain weave microstructure geometry and the significance of the microstructural configuration.
The shock response of porous amorphous silica was investigated using classical molecular dynamics over a range of porosities, from fully dense (2.21 g/cc) down to 0.14 g/cc. We observed an enhanced densification in the Hugoniot response at initial porosities above 50%, and the effect increased with increasing porosity. At the lowest initial densities, after an initial compression response, the systems expanded with increased pressure. These results show good agreement with experiments. We explored mechanisms leading to enhanced densification, which appear to differ from mechanisms observed in similar studies in silicon.
Detonation corner turning describes the ability of a detonation wave to propagate into unreacted explosive that is not immediately in the path normal to the wave. The classic example of a corner turning test has a cylindrical geometry and involves a small diameter explosive propagating into a larger diameter explosive as described by Los Alamos' Mushroom test, where corner turning is inferred from optical breakout of the detonation wave. We present a complementary method to study corner turning in millimeter-scale explosives through the use of vapor deposition to prepare the slab (quasi-2D) analog of the axisymmetric mushroom test. Because the samples are in a slab configuration, optical access to the explosive is excellent and direct imaging of the detonation wave and "dead zone" that results during corner turning is possible. Micromushroom test results are compared for two explosives that demonstrate different behaviors: pentaerythritol tetranitrate (PETN), which has corner turning properties that are nearly ideal; and hexanitroazobenzene (HNAB), which has corner turning properties that reveal a substantial dead zone.
Explosive shock desensitization phenomena have been recognized for some time. It has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of the explosive behavior. Historically, replacing the local pressure with a shock captured pressure has dramatically improved the numerical modeling approaches. A pseudo-entropy based formulation using the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented into the Eulerian shock physics code CTH. Improvements in the shock capturing algorithm in the model were made that allow reproduction of single shock behavior consistent with published Pop-plot data. It is also demonstrated to capture a desensitization effect based on available literature data, and to qualitatively capture multi-dimensional desensitization behavior. This model shows promise for use in modeling and simulation problems that are relevant to the desensitization phenomena. Issues are identified with the current implementation and future work is proposed for improving and expanding model capabilities.
With the increasing use of hydrocodes in modeling and system design, experimental benchmarking of software has never been more important. While this has been a large area of focus since the inception of computational design, comparisons with temperature data are sparse due to experimental limitations. A novel temperature measurement technique, magnetic diffusion analysis, has enabled the acquisition of in-flight temperature measurements of hypervelocity projectiles. Using this, an AC-14 bare shaped charge and an LX-14 EFP, both with copper linings, were simulated using CTH to benchmark temperature against experimental results. Particular attention was given to the slug temperature profiles after separation and to the effect of varying equation-of-state and strength models. Simulation fidelity to experiment was shown to depend strongly on the strength model, ranging from better than 2% error to a worst case of 22%. Similar observations were made simulating the EFP case, with a minimum 4% deviation. Jet structures compare well with radiographic images and are consistent with ALEGRA simulations previously conducted.
Tin has been shock compressed to ∼69 GPa on the Hugoniot using Sandia's Z Accelerator. A shockless compression wave closely followed the shock wave to ramp compress the shocked tin and probe a high temperature quasi-isentrope near the melt line. A new hybrid backwards-integration/Lagrangian-analysis routine was applied to the velocity waveforms to obtain the Lagrangian sound velocity of the tin as a function of particle velocity. Surprisingly, an elastic wave was observed on initial compression from the shock state. The presence of the elastic wave indicates tin possesses a small but finite strength at this shock pressure, strongly indicating a (mostly) solid state. High fidelity shock Hugoniot measurements on tin sound velocities in this stress range may be required to refine the shock melting stress for pure tin.
The microstructure of pentaerythritol tetranitrate (PETN) films fabricated by physical vapor deposition can be altered substantially by changing the surface energy of the substrate on which they are deposited. High substrate surface energies lead to higher density, strongly textured films, while low substrate surface energies lead to lower density, more randomly oriented films. We take advantage of this behavior to create aluminum-confined PETN films with different microstructures depending on whether a vapor-deposited aluminum layer is exposed to atmosphere prior to PETN deposition. Detonation velocities are measured as a function of both PETN and aluminum thickness at near-failure conditions to elucidate the effects of microstructure on detonation behavior. The differences in microstructure produce distinct changes in detonation velocity but do not have a significant effect on failure geometry when confinement thicknesses are above the minimum effectively infinite condition.
High-resolution, quasi-static time series (QSTS) simulations are essential for modeling modern distribution systems with high penetration of distributed energy resources (DER) in order to accurately simulate the time-dependent aspects of the system. Presently, QSTS simulations are too computationally intensive for widespread industry adoption. This paper proposes to simulate a portion of the year with QSTS and to use decision tree machine learning methods, random forests and boosting ensembles, to predict the voltage regulator tap changes for the remainder of the year, accurately reproducing the results of the time-consuming, brute-force, yearlong QSTS simulation. This research uses decision tree ensemble machine learning, applied for the first time to QSTS simulations, to produce high-accuracy QSTS results up to 4 times faster than traditional methods.
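A minimal sketch of the ensemble idea follows; the file name, feature columns, and train/predict split are hypothetical stand-ins for the paper's actual feature engineering and are only meant to show the train-on-a-partial-year, predict-the-rest workflow.

```python
# Hedged sketch: train a random forest on features from the QSTS-simulated portion of the
# year and predict voltage regulator tap positions for the remaining timesteps.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("qsts_year.csv")                  # hypothetical yearlong QSTS results
X = data[["feeder_load_kw", "pv_output_kw", "hour_of_day"]]
y = data["tap_position"]

train = data["day_of_year"] <= 90                    # simulate only part of the year with QSTS
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[train], y[train])

y_pred = model.predict(X[~train])                    # predict tap positions for the rest of the year
print("tap-position accuracy:", accuracy_score(y[~train], y_pred))
```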
It has been an ongoing scientific debate whether biological parameters are conserved across experimental setups with different media, pH values, and other experimental conditions. Our work explores this question using Bayesian probability as a rigorous framework to assess the biological context of parameters in a model of the cell growth controller in You et al. When this growth controller is uninduced, the E. coli cell population grows to carrying capacity; however, when the circuit is induced, the cell population growth is regulated to remain well below carrying capacity. This growth controller regulates the E. coli cell population by cell-to-cell communication using the signaling molecule AHL and by cell death using the bacterial toxin CcdB. To evaluate the context dependence of parameters such as the cell growth rate, the carrying capacity, the AHL degradation rate, the leakiness of AHL, the leakiness of toxin CcdB, and the IPTG induction factor, we collect experimental data from the growth control circuit in two different media, at two different pH values, and with several induction levels. We define a set of possible context dependencies that describe how these parameters may differ with the experimental conditions and we develop mathematical models of the growth controller across the different experimental contexts. We then determine whether these parameters are shared across experimental contexts or whether they are context dependent. For each of these possible context dependencies, we use Bayesian inference to assess its plausibility and to estimate the parameters of the growth controller. Ultimately, we find that there is significant experimental context dependence in this circuit. Furthermore, we also find that the estimated parameter values are sensitive to our assumption of a context relationship.
The goal of this project is to create a modular optical section that can be inserted into an exhaust runner to measure soot mass being produced by combustion.
My oral presentation will focus on my progress on the mechanical design for the Exhaust Runner Soot Diagnostic (ERSD) for use on optical research diesel engines.
Proceedings of Correctness 2018: 2nd International Workshop on Software Correctness for HPC Applications, Held in conjunction with SC 2018: The International Conference for High Performance Computing, Networking, Storage and Analysis
As scale grows and relaxed memory models become common, it is becoming more difficult to establish the correctness of HPC runtimes through simple testing, making formal verification an attractive alternative. This paper describes a formal specification and verification of an HPC user-level tasking runtime through the design, implementation, and evaluation of a model-checked implementation of the Qthreads user-level tasking runtime. We implement our model in the SPIN model checker by doing a function-to-function translation of Qthreads' C implementation to Promela code. This translation bridges the differences in the modeling and implementation languages by translating C's rich pointer semantics, functions, and non-local gotos to Promela's comparatively simple semantics. We then evaluate our implementation to show that it is both tractable and useful, exhaustively searching the state-space for counterexamples in reasonable time on modern architectures, and use it to find a lingering concurrency error in the Qthreads runtime.
In this paper we apply Convolutional Neural Networks (CNNs) to the task of automatic threat detection, specifically conventional explosives, in security X-ray scans of passenger baggage. We present the first results of utilizing CNNs for explosives detection, and introduce a dataset, the Passenger Baggage Object Database (PBOD), which can be used by researchers to develop new threat detection algorithms. Using state-of-the-art CNN models and taking advantage of the properties of the X-ray scanner, we achieve reliable detection of threats, with the best model achieving an area under the ROC curve (AUC) of 0.95. We also explore heatmaps as a visualization of the location of the threat.
This paper discusses the optimal output feedback control problem of linear time-invariant systems with additional restrictions on the structure of the optimal feedback control gain. These restrictions include setting individual elements of the optimal gain matrix to zero and making the sum of certain rows of the gain matrix equal to desired values. The paper proposes a method that modifies the standard quadratic cost function to include soft constraints ensuring the satisfaction of these restrictions on the structure of the optimal gain. Necessary conditions for optimality with these soft constraints are derived, and an algorithm to solve the resulting optimal output feedback control problem is given. Finally, a power systems example is presented to illustrate the usefulness of the proposed approach.
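One possible form of such a penalized cost is sketched below; the penalty weights, index sets, and the reading of the row-sum restriction (each constrained row's entries summing to a desired value) are assumptions for illustration, not the paper's exact formulation.

```latex
% Illustrative soft-constrained output-feedback cost (notation assumed, not the paper's):
% Z indexes gain entries forced toward zero, R indexes rows whose entries should sum to c_i.
\begin{equation*}
\min_{K} \;\; \tilde{J}(K)
  = \mathbb{E}\!\left[\int_{0}^{\infty}\!\big(x^{\top} Q\, x + u^{\top} R\, u\big)\,dt\right]
  + \rho_{1} \sum_{(i,j)\in\mathcal{Z}} K_{ij}^{2}
  + \rho_{2} \sum_{i\in\mathcal{R}} \Big(\sum_{j} K_{ij} - c_{i}\Big)^{2},
\qquad u = -K y .
\end{equation*}
```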
This paper formulates general computation as a feedback-control problem, which allows the agent to autonomously overcome some limitations of standard procedural-language programming: resilience to errors and early program termination. Our formulation considers computation to be trajectory generation in the program's variable space. Computing then becomes a sequential decision-making problem, solved with reinforcement learning (RL) and analyzed with Lyapunov stability theory to assess the agent's resilience and progression toward the goal. We do this through a case study on a quintessential computer science problem, array sorting. Evaluations show that our RL sorting agent makes steady progress toward an asymptotically stable goal, is resilient to faulty components, and performs fewer array manipulations than traditional Quicksort and Bubble sort.
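The following minimal Python sketch illustrates only the control-theoretic framing: the array is the state, adjacent swaps are the actions, and the inversion count serves as a candidate Lyapunov function that a simple greedy policy drives monotonically to zero; it is not the paper's learned RL agent.

    import numpy as np

    def inversions(a):
        # Candidate Lyapunov function: zero exactly when the array is sorted.
        return sum(a[i] > a[j] for i in range(len(a)) for j in range(i + 1, len(a)))

    def step(a, i):
        # Action: swap the adjacent pair at positions (i, i + 1).
        b = a.copy()
        b[i], b[i + 1] = b[i + 1], b[i]
        return b

    rng = np.random.default_rng(2)
    state = list(rng.permutation(8))
    while inversions(state) > 0:
        # Greedy policy: pick the adjacent swap that most decreases the candidate.
        best = min(range(len(state) - 1), key=lambda i: inversions(step(state, i)))
        state = step(state, best)
        print(inversions(state), state)   # the value decreases monotonically to 0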
Saturn is a short-pulse (approximately 40 ns FWHM) x-ray generator capable of delivering up to 10 MA into a bremsstrahlung diode to yield up to 5 × 10^12 rad/s (Si) per shot at an energy of 1 to 2 MeV. With the machine now over 30 years old, it is necessary to rebuild and replace many components, upgrade controls and diagnostics, design for greater reliability and reproducibility, and, where possible, upgrade the accelerator to produce more current at a lower voltage (approximately 1 MV or lower). Thus it has been necessary to reevaluate machine design parameters. The machine is modeled as a simple LR circuit driven with an equivalent sine-squared drive waveform as peak voltage, drive impedance, and vacuum inductance are varied. Each variation has implications for vacuum insulator voltage, diode voltage, diode impedance, and radiation output. For purposes of this study, radiation is scaled as the diode current times the diode voltage raised to the 2.7 power. Results of the parameter scans are presented and used to develop a design that optimizes radiation output. The results indicate that, to maintain the existing short pulse length of the machine while increasing output, it is most beneficial to operate at an even higher impedance than originally designed. Critical improvements that need to be made are also discussed.
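A minimal Python sketch of the kind of parameter scan described, under assumed placeholder values (the actual Saturn circuit parameters, drive waveform, and dose-rate calibration are not reproduced here): a series L-R circuit driven by a sine-squared pulse, with the radiation figure of merit taken as diode current times diode voltage to the 2.7 power.

    import numpy as np
    from scipy.integrate import solve_ivp

    def run(Vp=6e6, Zdrive=0.5, L=30e-9, Rdiode=2.0, tau=80e-9):
        def drive(t):
            # Equivalent sine-squared voltage pulse of width tau (placeholder shape).
            return Vp * np.sin(np.pi * t / tau) ** 2 if t < tau else 0.0

        def didt(t, i):
            # Series L-R loop: L di/dt = V_drive - (Z_drive + R_diode) * i
            return [(drive(t) - (Zdrive + Rdiode) * i[0]) / L]

        sol = solve_ivp(didt, (0.0, 2.0 * tau), [0.0], max_step=tau / 400.0)
        current = sol.y[0]
        v_diode = Rdiode * current
        return np.max(current * np.abs(v_diode) ** 2.7)   # peak of the dose-rate proxy

    # Scan diode impedance to see how the figure of merit responds.
    for Rd in (1.0, 2.0, 4.0, 8.0):
        print(Rd, f"{run(Rdiode=Rd):.3e}")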
Nominal behavior selection of an electronic device from a measured dataset is often difficult. Device characteristics are rarely monotonic, and choosing the single device measurement which best represents the center of a distribution across all regions of operation is neither obvious nor easy to interpret. Often, a device modeler uses a degree of subjectivity when selecting nominal device behavior from a dataset of measurements on a group of devices. This paper proposes applying a functional data approach to estimate the mean behavior of an experimental dataset and to select a nominal device from it. This approach was applied to a dataset of electrical measurements on a set of commercially available Zener diodes and proved to represent the average device characteristics more accurately than a point-wise calculation of the mean. It also enabled an objective method for selecting a nominal device from a dataset of device measurements taken across the full operating region of the Zener diode.
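A minimal Python sketch of a functional-data style selection (illustrative only, not the paper's exact procedure): smooth each device's measured curve onto a common grid, average the smoothed curves to form a functional mean, and pick the device whose curve is closest to that mean in the L2 sense. The curves below are synthetic stand-ins for measured diode characteristics.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(3)
    v = np.linspace(0.0, 1.0, 60)                     # shared sweep variable
    devices = [np.tanh(5 * v * (1 + 0.05 * rng.standard_normal()))
               + 0.01 * rng.standard_normal(v.size) for _ in range(12)]

    grid = np.linspace(0.0, 1.0, 200)
    smoothed = np.array([UnivariateSpline(v, d, s=0.01)(grid) for d in devices])

    functional_mean = smoothed.mean(axis=0)           # mean curve, not point-wise on raw data
    distances = np.trapz((smoothed - functional_mean) ** 2, grid, axis=1)
    print("nominal device index:", int(np.argmin(distances)))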
Malware detection and remediation is an ongoing task for computer security and IT professionals. Here, we examine the use of neural algorithms to detect malware using the system calls generated by executables, which alleviates attempts at obfuscation since the program's behavior itself is monitored. We examine several deep learning techniques and liquid state machines, baselined against a random forest. The experiments examine the effects of concept drift to understand how well the algorithms generalize to novel malware samples, by testing them on data collected after the training data. The results suggest that each of the examined machine learning algorithms is a viable solution for detecting malware, achieving between 90% and 95% class-averaged accuracy (CAA). In real-world scenarios, the performance evaluation on an operational network may not match the performance achieved in training. Namely, the CAA may be about the same, but the values for precision and recall over the malware can change significantly. We structure experiments to highlight these caveats and offer insights into expected performance in operational environments. In addition, we use the induced models to better understand what differentiates malware samples from goodware, which can further be used as a forensics tool to provide directions for investigation and remediation.
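The sketch below illustrates the metric caveat raised in the abstract with synthetic numbers: class-averaged accuracy (the mean of per-class recall) stays roughly constant while precision over the malware class degrades as the benign-to-malware ratio of the operational data grows. The true/false positive rates are placeholders, not the paper's results.

    import numpy as np

    def metrics(n_mal, n_good, tpr=0.93, tnr=0.94):
        tp, fn = tpr * n_mal, (1 - tpr) * n_mal
        tn, fp = tnr * n_good, (1 - tnr) * n_good
        caa = 0.5 * (tp / n_mal + tn / n_good)        # class-averaged accuracy
        precision = tp / (tp + fp)                    # over the malware class
        recall = tp / (tp + fn)
        return caa, precision, recall

    for ratio in (1, 10, 100):                        # goodware samples per malware sample
        caa, prec, rec = metrics(1000, 1000 * ratio)
        print(f"1:{ratio:<3d}  CAA={caa:.3f}  precision={prec:.3f}  recall={rec:.3f}")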
Proceedings of ScalA 2018: 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems, Held in conjunction with SC 2018: The International Conference for High Performance Computing, Networking, Storage and Analysis
Sparse matrix-matrix multiplication is a critical kernel for several scientific computing applications, especially the setup phase of algebraic multigrid. The MPI+X programming model, which is growing in popularity, requires that such kernels be implemented in a way that exploits on-node parallelism. We present a single-pass OpenMP variant of Gustavson's sparse matrix-matrix multiplication algorithm designed for architectures (e.g., CPU or Intel Xeon Phi) with reasonably large memory and modest thread counts (tens of threads, not thousands). These assumptions allow us to exploit perfect hashing and dynamic memory allocation to achieve performance improvements of up to 2x over third-party kernels for matrices derived from algebraic multigrid setup.
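For reference, a sequential Python sketch of Gustavson's row-wise algorithm with a hash-map accumulator is shown below; the paper's kernel is an OpenMP parallelization of this idea with perfect hashing, so this is only an illustration of the underlying algorithm, not the presented implementation.

    import numpy as np
    import scipy.sparse as sp

    def spgemm_gustavson(A, B):
        A, B = A.tocsr(), B.tocsr()
        indptr, indices, data = [0], [], []
        for i in range(A.shape[0]):
            acc = {}                                   # hash accumulator for row i of C
            for jj in range(A.indptr[i], A.indptr[i + 1]):
                k, a_ik = A.indices[jj], A.data[jj]
                for kk in range(B.indptr[k], B.indptr[k + 1]):
                    j = B.indices[kk]
                    acc[j] = acc.get(j, 0.0) + a_ik * B.data[kk]
            indices.extend(acc.keys())
            data.extend(acc.values())
            indptr.append(len(indices))
        return sp.csr_matrix((data, indices, indptr), shape=(A.shape[0], B.shape[1]))

    A = sp.random(200, 200, density=0.02, format="csr", random_state=0)
    C = spgemm_gustavson(A, A)
    print("max abs error vs SciPy:", abs(C - A @ A).max())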
Proceedings of PMBS 2018: Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Held in conjunction with SC 2018: The International Conference for High Performance Computing, Networking, Storage and Analysis
Proxy applications, or proxies, are simple applications meant to exercise systems in a way that mimics real applications (their parents). However, characterizing the relationship between the behavior of parent and proxy applications is not an easy task. In prior work [1], we presented a data-driven methodology to characterize the relationship between parent and proxy applications based on collecting runtime data from both and then using data analytics to find their correspondence or divergence. We showed that it worked well for hardware counter data, but our initial attempt using MPI function data was less satisfactory. In this paper, we present an exploratory effort to better quantify the correspondence of communication behavior between proxies and their respective parent applications. We present experimental evidence of positive results using four proxy applications from the current ECP Proxy Application Suite and their corresponding parent applications (in the ECP application portfolio). Results show that each proxy analyzed is representative of its parent with respect to communication data. In conjunction with our method presented in [1] (correspondence between computation and memory behavior), we gain a strong understanding of how well a proxy predicts the comprehensive performance of its parent.
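As a simple illustration of one way communication profiles might be compared (this is not the methodology of [1] or of this paper), the sketch below scores the similarity of per-MPI-function call counts between a proxy and its parent using cosine similarity; the function list and counts are invented placeholders.

    import numpy as np

    mpi_funcs = ["MPI_Send", "MPI_Recv", "MPI_Isend", "MPI_Irecv",
                 "MPI_Wait", "MPI_Allreduce", "MPI_Barrier"]
    parent = np.array([1.2e6, 1.2e6, 4.0e5, 4.0e5, 8.0e5, 5.0e4, 1.0e3])
    proxy  = np.array([9.5e5, 9.5e5, 3.1e5, 3.1e5, 6.2e5, 4.1e4, 9.0e2])

    cosine = parent @ proxy / (np.linalg.norm(parent) * np.linalg.norm(proxy))
    print(f"communication-profile similarity: {cosine:.4f}")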
Peacekeeping and humanitarian aid interventions in Somalia have attempted to bring peace and stability to the country and region for more than twenty-five years. Different dynamics characterize four distinct phases of these interventions, determining the likelihood of conflict transformation. These dynamics display archetypal system behaviors representative of other persistent conflicts in Africa during the same time period. Field interviews combined with comparative statistics informed system models of conflict dynamics in Africa and Somalia. The models explored the relative impact of intervention feedback loops and key levers on potential for conflict transformation. It is shown that sustainable peace depends less on the appropriate sequencing of aid than on transparency, trust, and cooperation between various intervention actors and stakeholders to enable accountability at the local level. Technical innovations are needed to build transparency and trust between intervention stakeholders without increasing security risks. A potential solution is proposed that incorporates predictive analytics into peer-to-peer networks for monitoring interventions.
To counter manufacturing irregularities and ensure ASIC design integrity, it is essential that robust design verification methods are employed. It is possible to ensure such integrity using ASIC static timing analysis (STA) and machine learning. In this research, uniquely devised machine and statistical learning methods that quantify anomalous variations in Register Transfer Level (RTL) or Graphic Design System II (GDSII) formats are discussed. To measure the variations in ASIC analysis data, the timing delays in relation to path electrical characteristics are explored. It is shown that semi-supervised learning techniques are powerful tools for characterizing variations within STA path data and have much potential for identifying anomalies in ASIC RTL and GDSII design data.
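A minimal Python sketch of a semi-supervised anomaly-flagging step (illustrative only; the paper's features, labels, and models are not reproduced here): a label-spreading classifier trained on sparsely labeled timing-path features flags paths whose delay characteristics deviate from the labeled clean population.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(4)
    clean = rng.normal([1.0, 0.2], 0.05, size=(300, 2))     # (delay, slew)-like features, placeholders
    tampered = rng.normal([1.4, 0.35], 0.05, size=(15, 2))  # synthetic anomalous paths
    X = np.vstack([clean, tampered])

    y = np.full(len(X), -1)            # -1 marks unlabeled paths
    y[:20] = 0                         # a few paths known to be clean
    y[-3:] = 1                         # a few paths known to be anomalous

    model = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
    print("paths flagged anomalous:", int((model.transduction_ == 1).sum()))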